
Why I'm building Regresco: the QA gap small SaaS teams keep falling into

By Sergei Pustovalov · 9 May 2026 · 6 min read

I've spent the last few years watching the same pattern play out at small B2B SaaS teams. Five to fifty engineers, weekly or biweekly releases, product growing fast. They know they should have regression coverage. They don't.

They tried. They adopted Cypress, or Playwright, or Ghost Inspector. They wrote 30 tests in a hackathon week. By month six the suite was amber. By month nine it was a placebo. The engineering team stopped looking at the dashboard, but the team's mental model still said "we have tests," so they kept shipping like they had a safety net. They didn't.

I built Regresco because none of the existing tools were the right answer for this team profile, and I'd watched enough teams hit the same wall to be sure it was a pattern, not a series of one-off failures.

What I kept seeing

The companies were different in domain, but the QA story was the same. A founder or tech lead would tell me they "do a lot of manual testing before release." When I asked what that meant, the answer was always one of three things:

One person on the team clicks through five flows in their browser before promoting to production. Sometimes they remember the obscure flow that broke last quarter, sometimes they don't.

Or: they have a Cypress suite that "kind of works." Half the tests are skipped, no one has time to fix them, and when something fails the team's first reaction is "probably flaky," not investigation.

Or: they have nothing, ship on Friday afternoon, and find out about regressions from customer support tickets on Monday morning.

All three are the same problem in different costumes. The team needs regression coverage but doesn't have engineering capacity to maintain test code as a real workstream.

Why existing tools don't solve it

There are good tools for browser testing: Cypress, Playwright, Ghost Inspector, Checkly. I use Playwright myself; it's what Regresco's runner is built on under the hood.

The gap isn't the runner. The gap is structural: code-first frameworks put tests in your repo, where they compete with feature work for engineering attention and lose. Record-and-play tools put tests in a UI, but the recordings are fragile and there's no signal for which failures are real. Monitoring tools cover production but cost a fortune for what amounts to a regression check.

What was missing was a tool that:

  • Doesn't live in your repo (so it doesn't compete with feature work)
  • Doesn't require code (so a PM or support engineer can author flows)
  • Generates the first version of flows from your real site (so you don't start from a blank recorder)
  • Classifies failures (so a red dashboard tells you something more useful than red)
  • Self-heals brittle selectors (so the maintenance burden is close to zero; there's a sketch of the idea right after this list)
  • Costs a price small SaaS teams can absorb on a credit card without budget approval
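
To make the self-healing bullet concrete, here's a minimal sketch of the general idea on top of Playwright. It's illustrative only: the function name, the candidate ordering, and the idea of storing several selectors per step are my shorthand for the approach, not Regresco's actual API.

```ts
import { Page, Locator } from '@playwright/test';

// Each recorded step keeps several ways of finding the same element,
// ordered from most to least stable. When the preferred selector stops
// matching, the runner falls back down the list instead of failing.
async function resolveLocator(page: Page, candidates: string[]): Promise<Locator> {
  for (const selector of candidates) {
    const locator = page.locator(selector);
    // Require exactly one match so a fallback can't silently hit the wrong element.
    if (await locator.count() === 1) return locator;
  }
  throw new Error(`No candidate matched: ${candidates.join(', ')}`);
}

// Usage: a changed CSS class no longer breaks the flow on its own.
// A real implementation would also record which fallback worked and
// promote it, so the healed selector is tried first on the next run.
//
// const submit = await resolveLocator(page, [
//   '[data-testid="submit-order"]',    // most stable, if the app exposes it
//   'button:has-text("Place order")',  // text-based fallback
//   '#checkout > button.primary',      // brittle structural fallback
// ]);
// await submit.click();
```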

That's Regresco. None of the individual ideas are novel. The combination, scoped specifically for the small B2B SaaS team that doesn't have a dedicated QA engineer and isn't going to hire one, is.

What's hard about this

Three things have been harder than I expected.

The first is that regression testing is a category most engineering managers consider solved, at least tooling-wise. They tried Cypress, it didn't work, and they concluded the problem is just hard, not that the tool was the wrong shape. Convincing them to try a different shape of solution is harder than convincing them the problem exists.

The second is the tradeoff between "no-code" and "powerful." If the tool is too no-code, the engineers don't trust it. If it's too code-heavy, the non-engineers can't use it. We're trying to thread a needle: enough structure that flows are reliable, enough simplicity that a non-engineer can author one, plus a Playwright import for teams that already have suites.
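
For a feel of what that needle-threading can look like, here's a sketch under my own assumptions, not Regresco's actual schema: a flow as typed data rather than test code. Simple enough that a non-engineer can edit a step, structured enough that an engineer can diff and review it.

```ts
// Hypothetical flow-as-data schema; field names are illustrative.
type Step =
  | { action: 'goto'; url: string }
  | { action: 'fill'; label: string; value: string } // target by visible label
  | { action: 'click'; label: string }
  | { action: 'expect_visible'; label: string };

const loginFlow: Step[] = [
  { action: 'goto', url: 'https://staging.example.com/login' },
  { action: 'fill', label: 'Email', value: '[email protected]' },
  { action: 'fill', label: 'Password', value: 'example-password' }, // never a real secret
  { action: 'click', label: 'Sign in' },
  { action: 'expect_visible', label: 'Dashboard' },
];
```

Steps like these compile down to ordinary Playwright calls at run time, which is also what makes an import path in the other direction plausible: a recorded Playwright script can, in principle, be lowered into steps.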

The third is honesty in marketing. The temptation to lead with "AI-powered regression testing" is real. The reality is that AI helps in three specific places (drafting flows from a crawl, healing broken selectors, classifying failures) and most of the product is just well-engineered Playwright execution underneath. We've tried to keep the marketing on the practical end. Whether that's working is a question I'll know in another quarter.

Where I'm at

The product is live at regresco.com. Cloud execution, Playwright import, AI flow generation, failure classification, auto-heal, scheduled runs, three pricing tiers, Stripe billing, Jira integration. Everything a small SaaS team needs to put structured regression coverage in place this week.

What's not yet there: pilots. We're early. The first paying customer hasn't landed yet at the time of writing. I'm doing cold-DM outreach to YC W25/S25 founders this month, hand-walking each prospective pilot through setup, collecting feedback aggressively. If you're at a 5-50 person SaaS company shipping weekly without a dedicated QA hire, and you've felt the pattern I described above, I'd love to talk.

Free plan is 5 runs a month, no credit card. The fastest path is to try it on your staging URL and tell me what's broken about it. The second fastest is to email me at [email protected].

What I want from people reading this

Three things:

  • If the pattern resonates, point Regresco at your staging URL on the free plan. The whole product loop runs in 10 minutes. If something's wrong with it, tell me. I read every reply.
  • If you've solved this differently, I want to hear how. Genuinely. Discipline-based in-repo suites work for some teams. I want to learn from the ones where it stuck.
  • If you know a team that fits the profile, a short forwarded email or LinkedIn intro from someone they trust is worth more than any of my cold DMs.

Thanks for reading. More posts to come as the product matures and pilots land.

Try it on your staging URL

Free plan, 5 runs a month, no card. The whole loop (project + flows + first regression run + results) takes about 10 minutes.