Pre-release QA checklist for teams without a QA engineer

By Sergei Pustovalov · 9 May 2026 · 4 min read

This is a working checklist for B2B SaaS teams that don't have a QA engineer and need to ship reliably anyway. It takes about 15 minutes if done manually, less if you've automated the items via a regression suite.

The goal isn't comprehensive coverage. It's catching the embarrassing classes of bug (login broken, dashboard blank, signup form rejecting valid input) before customers do.

How to use this

Three modes, depending on where you are:

  • Manual. One person clicks through the checklist on the staging URL before each promotion to production. ~15 minutes. Good for the first month or two.
  • Semi-automated. Half the checklist is automated (auth, critical paths), half is manual (browser sanity, perf smoke). Run automated bits on every deploy, manual bits weekly.
  • Fully automated. Every item except subjective ones is in your regression suite. The suite runs on every staging deploy and gates promotion.

Don't aim for fully automated on day one. Aim for manual coverage of the right items, then automate over time as you find which ones actually catch things.
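The semi-automated mode above can be as simple as a list of named checks and a gate function. Here's a minimal sketch in Python; every check name and function body is a hypothetical stand-in for your own automated checks, and the structure itself is just one way to do it:

```python
# Minimal sketch of a semi-automated checklist runner.
# All check names and bodies are placeholders for your real checks.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Check:
    name: str
    run: Callable[[], bool]   # returns True on pass
    automated: bool = True    # manual items are listed but not executed


def gate_promotion(checks: list[Check]) -> tuple[bool, list[str]]:
    """Run every automated check; return (ok_to_promote, failed_names)."""
    failures = [c.name for c in checks if c.automated and not c.run()]
    return (len(failures) == 0, failures)


# Stub checks standing in for real ones:
checks = [
    Check("login lands on dashboard", lambda: True),
    Check("dashboard renders", lambda: True),
    Check("mobile nav at 375px", lambda: True, automated=False),  # stays manual
]
ok, failed = gate_promotion(checks)
```

Run it in CI on every staging deploy and block promotion when `ok` is false; the manual items stay on the list as documentation of what still needs a human.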

The checklist

Critical user paths

  • Login with email/password completes and lands on the dashboard
  • Login with OAuth (Google/GitHub if you support it) completes and lands on the dashboard
  • Signup creates an account, sends verification email, and reaches the activation step
  • The core revenue action (the thing customers pay for) executes end-to-end without errors
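The login item is usually the first one teams automate. A sketch of what that check might look like, assuming a requests-style HTTP session; the `/login` and `/dashboard` endpoints, field names, and the "Dashboard" marker text are all hypothetical and need to match your app:

```python
# Sketch of an automated login-path check. The client is any requests-like
# session object (e.g. requests.Session). Endpoints and field names below
# are assumptions; substitute your app's actual routes and markers.
def check_login(client, base_url: str, email: str, password: str) -> bool:
    """Log in with email/password and confirm we land on the dashboard."""
    resp = client.post(f"{base_url}/login",
                       data={"email": email, "password": password})
    if resp.status_code != 200:
        return False
    dash = client.get(f"{base_url}/dashboard")
    return dash.status_code == 200 and "Dashboard" in dash.text
```

Because the client is injected rather than created inside the function, the same check runs against staging in CI and against a fake client in the suite's own tests.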

Auth and account

  • Logout clears the session and redirects to the public landing
  • Password reset email arrives within 60 seconds and the link works
  • Settings: email change, password change, and plan view all render with current data
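The "reset email within 60 seconds" item needs a polling helper when automated. A sketch, assuming you have some way to fetch a test inbox (a hypothetical `fetch_inbox` callable, e.g. backed by a Mailosaur-style test mailbox); clock and sleep are injected so the helper is testable without real waiting:

```python
# Sketch: poll a test inbox until a password-reset email arrives or
# timeout_s elapses. fetch_inbox is a hypothetical callable returning a
# list of message dicts; now/sleep are injectable for testing.
import time


def wait_for_reset_email(fetch_inbox, timeout_s: float = 60.0,
                         poll_s: float = 2.0,
                         now=time.monotonic, sleep=time.sleep):
    """Return the first message whose subject mentions 'reset', else None."""
    deadline = now() + timeout_s
    while now() < deadline:
        for msg in fetch_inbox():
            if "reset" in msg.get("subject", "").lower():
                return msg
        sleep(poll_s)
    return None
```

The returned message is where you'd then extract and follow the reset link to complete the second half of the check.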

Data integrity

  • Creating a new entity (project, doc, record) shows up in the list view immediately
  • Editing an entity persists after a hard refresh (not just optimistic UI)
  • Deleting an entity removes it everywhere it was referenced (no orphan rows in lists)
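All three data-integrity items can be folded into one create/edit/delete round trip. A sketch against a hypothetical `api` object with `create`/`list`/`get`/`update`/`delete` methods; your real client's method names and payloads will differ:

```python
# Sketch: one round trip covering the three data-integrity items.
# `api` is a hypothetical CRUD client; adapt method names to your API.
def check_entity_lifecycle(api) -> bool:
    """Create -> in list; edit -> persists on refetch; delete -> gone."""
    eid = api.create({"name": "qa-probe"})
    if eid not in [e["id"] for e in api.list()]:
        return False                      # create didn't reach the list view
    api.update(eid, {"name": "qa-probe-edited"})
    if api.get(eid)["name"] != "qa-probe-edited":
        return False                      # refetch, not optimistic UI state
    api.delete(eid)
    return eid not in [e["id"] for e in api.list()]  # no orphan in lists
```

Going through the API (a fresh `get` after the update) rather than the UI's in-memory state is what distinguishes "persists after a hard refresh" from optimistic rendering.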

Browser and device sanity

  • Critical pages render on Chromium without console errors
  • Mobile viewport (~375px wide) doesn't break the main navigation

Performance smoke

  • Dashboard initial load completes in under 5 seconds on staging
  • No new ~500 ms+ blocking requests appeared since the last release
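The "no new ~500 ms+ requests" item is just a diff between two timing snapshots (say, URL-to-milliseconds maps captured from your APM or browser devtools export). A small, self-contained sketch:

```python
# Sketch: flag requests that cross the slow threshold in this release
# but were under it (or absent) in the previous one.
def new_slow_requests(prev_ms: dict[str, float],
                      curr_ms: dict[str, float],
                      threshold_ms: float = 500.0) -> list[str]:
    """URLs that are >= threshold now but weren't in the last release."""
    return sorted(
        url for url, ms in curr_ms.items()
        if ms >= threshold_ms and prev_ms.get(url, 0.0) < threshold_ms
    )
```

Comparing against the previous release rather than an absolute budget keeps the check quiet about endpoints that were already slow and known, and loud only about regressions.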

What's intentionally not on this list

  • Cross-browser (Firefox, Safari). If you're a 5-50 engineer B2B team, your traffic is likely 90%+ Chromium. Add this when the data shows it matters.
  • Visual regression / pixel diffs. Too noisy. Use only after exhausting other signals.
  • Accessibility audit. Important, but a separate concern with its own tooling cycle. Run quarterly with axe or Lighthouse, not pre-release.
  • Load / stress testing. Different category. Run before scale events (launch, big customer onboarding), not before every release.
  • Penetration testing. Annual third-party audit, not pre-release.

When to revise this checklist

Revise after every customer-reported regression. The pattern: regression slips through, customer reports it, postmortem identifies that the affected flow wasn't on the checklist. Add it. Repeat.

Within 6 months of disciplined use, this checklist tends to grow from 14 items to 20-25, with the additions being the exact paths your specific product is most fragile on. Don't fight that growth. The list is supposed to evolve.

What you should fight: items that have been on the list for three months without ever catching anything. Those are dead weight. Delete them.

Want this checklist on autopilot?

Most items above can be automated as a regression suite that runs on every staging deploy. Regresco does this without writing test code. Free plan is 5 runs/month, no card.