Security

The accidental continuous pentest (week 3)

May 4, 2026

Two weeks ago I posted week 1 of this build log and asked anyone running a small dev team to tell me how they were handling pentest pressure from their enterprise customers. Around 5 people DM'd. One of them turned into something interesting.

The customer

They run a FastAPI backend for a SaaS product, around 10 people on the team. Exactly the profile I'd been describing.

The first scan surfaced a handful of issues, clustered in two categories: BOLA (broken object level authorization) and unrestricted resource consumption (e.g., missing rate limiting). Both are common in fast-moving FastAPI codebases. Both are the kind of thing that gets flagged in enterprise security questionnaires, where reviewers are increasingly looking for OWASP API Top 10 coverage specifically.
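For readers unfamiliar with the two categories, here's a framework-agnostic sketch of both patterns. All names (`INVOICES`, `get_invoice_fixed`, `TokenBucket`) are hypothetical, not from the customer's codebase; in a real FastAPI app the ownership check would typically live in a dependency that compares the resource's owner against the authenticated user.

```python
import time

# Hypothetical data store: invoice ID -> record with an owner.
INVOICES = {
    1: {"owner_id": "alice", "amount": 120},
    2: {"owner_id": "bob", "amount": 75},
}

def get_invoice_vulnerable(invoice_id: int, current_user: str) -> dict:
    # BOLA: any authenticated user can read any invoice by guessing IDs,
    # because the handler never checks who owns the object.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, current_user: str) -> dict:
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner_id"] != current_user:
        # Treat "not yours" the same as "doesn't exist" so the response
        # doesn't confirm that the ID is valid.
        raise PermissionError("not found")
    return invoice

class TokenBucket:
    """Minimal token-bucket limiter for the resource-consumption side.

    Allows bursts up to `capacity`, refilling at `rate` tokens/second.
    """

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With `get_invoice_fixed`, `alice` reading invoice 2 raises instead of leaking `bob`'s record; and a `TokenBucket(rate=1, capacity=2)` rejects a third back-to-back request. In production you'd more likely reach for a dedicated middleware or reverse-proxy limit rather than an in-process bucket, but the shape of the check is the same.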

The thing I didn't expect

As they started patching the issues one by one, we rescanned. Meanwhile I was shipping improvements to the tool, week by week. Each rescan surfaced things the previous one couldn't see. They fixed those too. We rescanned again.

It's almost a continuous loop now. Find, fix, rescan, find, fix.

This wasn't planned. They didn't ask for a continuous engagement and I didn't pitch one. It emerged because the compute and labor cost of running another scan was low enough that nobody had to schedule a meeting to approve it. I help them find gaps, they help me improve the tool.

What I'm taking from this

A few weeks ago I'd have described what I was building as "an affordable pentest." Similar product, lower price. The behavior I'm seeing is making me realize that's not quite right.

When the per-scan cost drops by an order of magnitude, you don't get the same product cheaper. You get a different product. A $15,000 pentest is an event: planned, scoped, scheduled, reviewed. A scan that runs weekly is a tool. It joins the toolbox alongside Sentry, Datadog, the linter. The findings join the team's normal triage process. The fixes happen as part of regular sprint work.

That's a real difference, and it's bigger than the price tag suggests. Most of the value isn't in the lower number on the invoice. It's in the cadence change that the lower number enables.

What's next

I want to be careful not to over-extrapolate from one engagement. Some customers will run a single scan for a customer questionnaire response and never come back. That's a fine use case too. But the continuous loop emerging here is opening up a possibility I wasn't initially building for. The tool was designed to deliver a single audit-ready report. It turns out it can also sit in a workflow that looks nothing like that.

I'll know more after the next two or three engagements. Until then, this is one customer's behavior, surprising me in a way I'm trying to learn from rather than generalize from.

Week 4 next Tuesday.

About the author

I'm an AWS-certified cloud architect from New York who loves writing about DevSecOps, Infrastructure as Code, and Serverless. Having run a tech company myself for years, I love helping other start-ups scale using the latest cloud services.

