Insights & Updates on Application Security

We Fixed 41 Healthcare App Vulnerabilities Before the Demo Coffee Got Cold

Written by Bruce Fram | Mar 20, 2026 5:00:00 PM

Healthcare apps are supposed to be hard targets. They're complex, handle data you definitely don't want leaked, and security researchers have been poking at them since the Obama administration.

So naturally, we pointed AppSecAI at one for a live demo.

OpenMRS is a widely used open-source electronic medical records application. Translation: it's been battle-tested, publicly scrutinized, and is nobody's idea of low-hanging fruit. We ran it live for a healthcare company's AppSec team, who watched in real time.

No WebGoat. No toy apps. No "we prepared this earlier" nonsense.

Here's what happened in 26 minutes.

The Setup (AKA We Used Their Own Tools)

The prospect's team was already running Checkmarx for SAST scanning. Perfect. We plugged AppSecAI into their existing Checkmarx results.

This matters: AppSecAI doesn't replace your SAST or DAST tools. We're not here to sell you another scanner that generates more findings you'll never fix. We accept results from virtually any static or dynamic analysis tool (Checkmarx, Semgrep, Veracode, Snyk, you name it) and actually do something useful with them.

The scan surfaced 54 findings.

Cue the AppSec team's collective sigh. 54 more tickets. 54 more Jira comments. 54 more "I'll get to it next sprint" conversations.

Not today!


Triage: Where AI Earns Its Keep

First job: separate signal from noise. Our Expert Triage Automation (ETA) analyzed all 54 findings.

Verdict: 13 were false positives.

Gone. Removed. Deleted from existence. Zero human investigation required.

That's 24% of findings eliminated before a developer or security analyst wastes 10 seconds looking at them. For a team drowning in 50,000 to 100,000 annual findings (yeah, that's real), this isn't a nice-to-have. It's the difference between a backlog that shrinks and one that just mocks you from Jira.

ETA hits 97% triage accuracy across our customer base. We built the Python OWASP Benchmark with David Wichers specifically to prove that number isn't marketing fluff.

Remediation: 41 Pull Requests While You Were in Standup

The remaining 41 findings? AppSecAI generated 41 pull requests in 26 minutes.

Each PR included:

  • The original SAST finding

  • Why it actually matters (not just "SQL injection is bad, m'kay")

  • How we fixed it

  • The actual code change

Two favorites from the demo:

SQL Injection Fix

The vulnerable code was doing the classic amateur move: concatenating user input directly into SQL statements. You know, the thing every security training says not to do, but developers do anyway because it's fast.

The fix: parameterized prepared statements. But here's the cool part - we didn't just patch the one vulnerable line. We updated the underlying getInt() function so every call to it inherits the fix. One change, systemic improvement. Developer writes it once, security gets it everywhere!
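Here's a minimal sketch of that pattern, in Python with SQLite for portability (OpenMRS itself is Java, and `get_int` here is a hypothetical stand-in for the `getInt()` helper described above, not its actual code):

```python
import sqlite3

def get_int(conn, sql, params=()):
    # Hypothetical stand-in for the getInt() helper: every caller inherits
    # parameterized queries, so the fix is systemic rather than line-by-line.
    row = conn.execute(sql, params).fetchone()  # driver binds the placeholders
    return row[0] if row else 0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Vulnerable style would be: f"... WHERE name = '{user_input}'" (string-built SQL).
# Parameterized style: the input is treated as data, never as executable SQL.
payload = "alice' OR '1'='1"
count = get_int(conn, "SELECT count(*) FROM users WHERE name = ?", (payload,))
print(count)  # the injection payload matches no rows
```

The point of fixing the shared helper instead of the one call site is exactly what the demo showed: callers don't have to remember anything; the safe path is the only path.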

CRLF Injection (Log Injection)

User input was being written directly into logs without sanitization. This lets attackers inject fake log entries, which is hilarious for them and a compliance nightmare for you.

The fix: added a null check and stripped carriage return and line feed characters before logging. Simple. Clean. Done. The kind of fix a senior developer would write on a good day, except AI wrote it in seconds on every day.
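The shape of that fix is simple enough to show in a few lines (a sketch of the technique, not the PR itself; the function name is ours):

```python
def sanitize_for_log(value):
    # Null check plus CR/LF stripping: attacker-controlled input can no
    # longer start a new line and forge a separate log entry.
    if value is None:
        return ""
    return str(value).replace("\r", "").replace("\n", "")

# Attacker tries to smuggle a fake "admin login" entry via a newline:
tainted = "guest\r\n2026-03-20 INFO login user=admin"
print("login user=" + sanitize_for_log(tainted))  # stays on one log line
```

One line of defense at the logging boundary, and every forged-entry trick that depends on a raw newline dies there.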

The Secret Sauce: Code Fingerprinting

Here's where most AI fix tools face-plant: they treat your codebase like it's generic.

Your app isn't generic. You have existing security libraries. Authentication patterns your team spent years building. Internal controls that actually work. When an AI tool drops in some random third-party library to "fix" a vulnerability - ignoring the fact that you already have an internal method handling the exact same thing - it creates new problems while solving old ones.

Congrats, now you have two ways to do authentication and neither team knows which one to use.

AppSecAI fingerprints your code during onboarding. We map how your application already handles CSRF protection, output encoding, input validation, authentication, the works. When we generate fixes, we use your patterns, not some generic Stack Overflow answer from 2015.
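To make the idea concrete, here's a toy illustration of fingerprinting (emphatically not AppSecAI's implementation; the helper names and regexes are invented): scan the codebase once for in-house security helpers, so fix generation can prefer them over a generic import.

```python
import re

# Hypothetical in-house helpers this codebase might already have.
PATTERNS = {
    "output_encoding": re.compile(r"\bHtmlUtils\.escape\w*\("),
    "input_validation": re.compile(r"\bValidators\.check\w*\("),
}

def fingerprint(source_files):
    # Map each security control to the files that already implement it,
    # so a generated fix can reuse the existing pattern.
    found = {}
    for name, contents in source_files.items():
        for control, pattern in PATTERNS.items():
            if pattern.search(contents):
                found.setdefault(control, []).append(name)
    return found

repo = {"ProfileView.java": "out.write(HtmlUtils.escapeHtml(name));"}
print(fingerprint(repo))
```

A real system goes far deeper than regexes, but the principle is the same: know what the codebase already does before proposing what it should do.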

This is the difference between "a tool that generates pull requests" and "a tool that generates pull requests your team will actually merge without a 47-comment code review argument."

Humans Still Run the Show (As They Should)

Let's be clear: AppSec doesn't get automated away. It gets faster.

Our workflow: your security team reviews fix candidates in the AppSecAI platform before any PR hits your developers' queue. They validate the triage. They approve the fixes. Then - and only then - the pull requests go out.

This prevents the most common failure mode of AI-assisted remediation: flooding developers with garbage PRs that erode trust faster than you can say "AI-generated slop."

In real-world testing, an AppSec professional who had never used our product and had never seen the code worked through findings (triage, fix review, complete documentation) in 8.2 minutes per vulnerability.

For a team with 50,000 annual medium and high-severity findings, that math doesn't just help. It changes everything.

The CFO Math (The Part They Actually Care About)

Industry data puts manual vulnerability remediation at $5,000 to $20,000 per fix when you factor in developer time, security review cycles, retesting, and documentation. According to Veracode, the median time to fix a medium or high-severity vulnerability is 252 days.

252 days. Your vulnerability sits there for 8+ months while developers work on features that actually ship.

AppSecAI's Expert Fix Automation (EFA) cuts that cost to a tenth.
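Running the back-of-the-envelope numbers on just this demo's 41 fixes, using the figures quoted above (illustrative only; your per-fix cost will vary):

```python
# Industry range for manual remediation, per fix (from the figures above).
manual_low_per_fix, manual_high_per_fix = 5_000, 20_000
fixes = 41          # PRs generated in the demo
reduction = 0.10    # "a tenth of the cost"

manual_low = fixes * manual_low_per_fix
manual_high = fixes * manual_high_per_fix
auto_low = manual_low * reduction
auto_high = manual_high * reduction

print(f"manual:    ${manual_low:,} - ${manual_high:,}")
print(f"automated: ${auto_low:,.0f} - ${auto_high:,.0f}")
```

Even at the low end of the range, that's six figures of remediation cost compressed into a 26-minute demo.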

The pricing model is outcome-based: you only pay for pull requests your team merges. If the fix isn't good enough to use, you don't pay for it. Zero risk. Zero "we spent $200k on a tool that generates alerts we ignore."

That's not a pitch. That's just math.


The Bottom Line

We ran AppSecAI against a real healthcare application in front of a live audience. In 26 minutes, we triaged 54 findings, eliminated 13 false positives, and generated 41 production-ready pull requests.

No toy apps. No hand-waving. No "trust us, it works in production."

Just 41 vulnerabilities fixed before lunch.

Want to see what AppSecAI finds in your codebase? Request a demo