Remember when landing on the moon was the craziest thing humanity ever attempted? Turns out, we're now spending that much on AI annually—and most organizations still can't fix their vulnerability backlogs. Let's talk about what happens when moonshot-level investment meets application security.
The Math That Made Me Stop Mid-Conversation
I was talking with my 96-year-old dad about AI investments recently, and something clicked. I pulled up ChatGPT and started running numbers. The Apollo program—the entire effort that put humans on the moon—cost roughly $200 billion in 2025 dollars.
We're investing that much in AI. Every. Single. Year.
Not over a decade. Not across multiple administrations. Annually. The United States is pumping Apollo-level resources into artificial intelligence at a pace that makes the Space Race look quaint.
Breaking Down the Numbers
Let's put this in perspective:
Apollo Program (1961-1972):
- Total cost: ~$200 billion (2025 dollars)
- Peak spending: 2-3% of US federal budget
- Share of GDP at peak: 0.4-0.5%
AI Investment (2025):
- Annual investment: ~$200 billion
- Share of GDP: 0.37% (private sector alone)
- NVIDIA AI chip revenue: ~$130 billion
- Add servers, data centers, power, infrastructure: another $70+ billion
NVIDIA represents roughly half the AI investment pie just in chips. Around that sits the entire ecosystem—data centers, cooling systems, power infrastructure, the developers building on top of it all. We've essentially industrialized a moonshot.
The Apollo 8 Connection
Here's what made this comparison click for me. I'm a space buff—I have a massive metal print of the Earthrise photo in my living room. You know the one: Earth rising over the lunar horizon, taken during Apollo 8's orbit around the moon on Christmas Eve, 1968.
That mission was audacious even by Apollo standards. They went to the moon on just the second crewed Apollo flight. Second. They orbited ten times, then came home. The Earthrise photo became one of the most iconic images in human history—a reminder of what's possible when you commit resources and talent to a singular goal.
That photo was paid for with tax dollars. So was the entire Apollo program. Public investment in a public goal: demonstrate American technological superiority, advance science, and do something no human had ever done.
Now? Private companies are making Apollo-scale investments annually, and the race isn't to the moon—it's to artificial general intelligence, quantum supremacy, and every adjacent technology that gets funded along the way.
What This Means for Application Security
Here's where it gets relevant to anyone reading this: we're in a moonshot era for AI, but most security teams are still using manual processes from the pre-Apollo era.
Think about the contradiction:
- AI coding assistants like Cursor generate 1 billion lines of committed code per day
- At 70% of organizations, AI models write 41-80% of the code
- But most AppSec teams still triage SAST scanner results and remediate vulnerabilities by hand
- Average time to fix: 200+ days
- Average cost: $5,000-$20,000 per vulnerability
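The per-vulnerability figures above compound quickly. A minimal back-of-the-envelope sketch, assuming an illustrative backlog of 10,000 vulnerabilities (the backlog size is my assumption, not a statistic from the list):

```python
# Ballpark cost of clearing a backlog manually, using the
# $5,000-$20,000 per-fix range cited above.
backlog = 10_000                 # illustrative backlog size (assumption)
low, high = 5_000, 20_000        # dollars per manual fix

print(f"Manual remediation: ${backlog * low:,} to ${backlog * high:,}")
# → Manual remediation: $50,000,000 to $200,000,000
```

Fifty to two hundred million dollars—for one backlog, before any new code ships.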
We're investing Apollo money in AI while securing it with processes that predate the Space Race.
The Capacity Question
The Apollo program succeeded because NASA coordinated resources at an unprecedented scale. They didn't try to manually calculate every trajectory or hand-craft every component. They automated what could be automated and focused human effort on decisions that required human judgment.
Application security needs the same transformation. When your SAST and DAST tools flag thousands of vulnerabilities, manually reviewing each one isn't diligent—it's mathematically impossible. The backlog grows faster than humans can clear it.
At current fix rates (5% annually) with current vulnerability discovery rates (10% growth), you're not managing technical debt—you're documenting perpetual failure. The numbers don't work, and pretending they will if you just hire two more AppSec engineers is like thinking Apollo 11 could have reached the moon with bigger slide rules.
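You can see why the numbers don't work with a ten-line simulation of the rates cited above. The starting backlog (10,000) and the initial discovery volume (2,000 per year) are illustrative assumptions; the 5% fix rate and 10% discovery growth come from the paragraph above:

```python
# Hedged sketch: backlog trajectory when 5% of the backlog is
# fixed each year while new discoveries grow 10% annually.
backlog = 10_000        # starting backlog (illustrative assumption)
discoveries = 2_000     # new vulnerabilities found in year 1 (assumption)

for year in range(1, 11):
    backlog = backlog * 0.95 + discoveries   # fix 5%, add new findings
    discoveries *= 1.10                      # discovery rate grows 10%/yr
    print(f"Year {year:2d}: backlog ≈ {backlog:,.0f}")
```

Under these assumptions the backlog never shrinks—each year's new discoveries outpace the 5% cleared, so the curve bends up, not down.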
AI Built With AI, Secured With... What, Exactly?
Here's the uncomfortable question: if we're investing $200 billion annually in AI, and that AI is writing most of our code, and that code contains vulnerabilities at scale, what secures the securers?
Security vendors use AI to build products. Those products have codebases. Those codebases have backlogs. We're running the same race as everyone else—development speed accelerating while remediation capacity stays flat.
The vendors that win won't be the ones with the best marketing. They'll be the ones who actually automated their own security workflows before selling automation to others. We eat our own dog food at AppSecAI. We use AI to build AI-powered security tools because manual processes don't scale to Apollo-level velocity.
The Moonshot Mentality Applied to Security
Apollo succeeded because of a simple premise: set an impossible goal, commit the resources, and don't accept "we can't" as an answer.
Application security needs that mentality. Not the incrementalism of "let's prioritize our backlog better" or "let's add more headcount." The moonshot version: "Let's reduce our 10,000-vulnerability backlog to zero in 12 months using AI-powered automation."
Impossible? Apollo 8 orbiting the moon on the program's second crewed flight seemed impossible too. Then they did it. On Christmas Eve. And brought back a photo that changed how humanity sees itself.
The Earthrise Moment for AppSec
We're at an inflection point. AI investment has reached Apollo scale. The tools exist to automate vulnerability remediation at the same speed we're creating vulnerabilities. The question isn't capability—it's will.
Organizations still treating AppSec like a manual spreadsheet exercise will fall behind the same way pre-computer societies fell behind industrialized ones. Not slowly. Not gradually. Suddenly, when they realize the math never worked and they've been pretending otherwise.
The ones that embrace AI-powered automation—using SAST and DAST findings as inputs to automated fix systems rather than tickets for overloaded engineers—will clear their backlogs while everyone else is still arguing about prioritization frameworks.
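The shift described above—findings as pipeline inputs rather than tickets—can be sketched in a few lines. Everything here (`Finding`, `generate_patch`, `triage`) is a hypothetical placeholder to show the shape of the flow, not a real scanner or vendor API:

```python
# Hedged sketch: scanner findings feed an automated fix pipeline
# instead of landing in a manual ticket queue. All names are
# hypothetical placeholders, not a real product's API.
from dataclasses import dataclass

@dataclass
class Finding:
    tool: str        # e.g. "sast" or "dast"
    rule: str        # scanner rule that fired
    location: str    # file:line or endpoint

def generate_patch(finding: Finding) -> str:
    # Placeholder: a real system would invoke an AI fix model here.
    return f"patch for {finding.rule} at {finding.location}"

def triage(findings: list[Finding]) -> list[str]:
    # Every finding becomes a patch proposal for human review,
    # not a ticket for an overloaded engineer.
    return [generate_patch(f) for f in findings]

patches = triage([
    Finding("sast", "sql-injection", "app/db.py:42"),
    Finding("dast", "missing-csp-header", "GET /login"),
])
print(f"{len(patches)} patch proposals queued for review")
```

The design point is the inversion: humans review proposed fixes at the end of the pipeline instead of triaging raw findings at the start.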
We're investing Apollo money in creating code. It's time to invest in securing it at Apollo scale too.
Ready to automate remediation at the speed of development? Explore Expert Fix Automation or download The AI Security Advantage to see why manual processes can't keep pace with AI-generated code.