Your SaaS Backend is Too Heavy
In August 2023, Elon Musk livestreamed a drive in a Tesla Model S running FSD v12. The car navigated construction zones, roundabouts, and pedestrians with eerie smoothness.
https://www.youtube.com/watch?v=704gOMfiH1c
The livestream was on X, but here’s a recording on YouTube if anyone is interested. The most significant part of that demo, however, wasn’t what the car did. It was what the engineers had removed.
For years, Tesla’s Autopilot (FSD v11 and prior) ran on Software 1.0: a massive collection of heuristic rules written in C++, along these lines:
- `if (red_light) stop();`
- `if (roundabout) yield();`
- `if (object_is_cone) change_lane();`
The problem is that the real world is High Entropy. A police officer waving you through a red light breaks the `if (red_light)` rule. A plastic bag blowing in the wind looks like a rock. To handle reality, Tesla engineers had to write 300,000 lines of C++ code, creating a “Hydra” of logic that was becoming impossible to maintain.
With FSD v12, they deleted almost all of it. They replaced the C++ heuristics with a single, end-to-end Neural Network. They stopped telling the car how to drive (rules) and started showing the car what good driving looks like (data).
This is the shift from Software 1.0 to Software 2.0. If you haven’t read my previous post, Stop Coding Like It’s 2015, feel free to check it out. And while most of us aren’t building self-driving cars, this shift is about to fundamentally change how we build SaaS applications.
The SaaS “Heuristic Trap”
Consider the typical architecture of a B2B SaaS application, say an expense management tool. We like to think our backend is elegant architecture. In reality, 80% of our backend code is just input plumbing. Seriously, think about it. When a user uploads a messy invoice PDF, our code has to:
- OCR the text (that is, convert the image to text).
- Run a Regex to find the date (e.g., is it DD/MM or MM/DD?).
- Write logic to distinguish “Total” from “Subtotal”.
- Write if/else statements to categorize “Starbucks” as “Meals”.
Just like Tesla’s v11, we are writing brittle rules to try and tame a chaotic, unstructured reality. We are building our own little 300,000-line monster.
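To make it concrete, here is roughly what that plumbing looks like in Python. The regex, the keyword table, and the parsing rules are made up for illustration, but you have probably written something very close to this:

```python
import re

# Software 1.0 plumbing: every rule below is a guess about what invoices look like.
DATE_PATTERN = re.compile(r"(\d{2})[/-](\d{2})[/-](\d{4})")  # Is 03/04 March 4th or April 3rd?

CATEGORY_RULES = {  # illustrative; grows forever in real codebases
    "starbucks": "Meals",
    "uber": "Travel",
    "aws": "Infrastructure",
}

def parse_invoice(ocr_text: str) -> dict:
    # Date: breaks on "4 March 2025", "2025-03-04", handwritten dates...
    match = DATE_PATTERN.search(ocr_text)
    date = match.group(0) if match else None

    # Total: breaks when "Subtotal" appears before "Total", or the label is "Amount Due".
    total = None
    for line in ocr_text.splitlines():
        if "total" in line.lower() and "subtotal" not in line.lower():
            amounts = re.findall(r"\d+\.\d{2}", line)
            if amounts:
                total = float(amounts[-1])

    # Category: breaks on every merchant you haven't hard-coded yet.
    category = "Uncategorized"
    for keyword, cat in CATEGORY_RULES.items():
        if keyword in ocr_text.lower():
            category = cat
            break

    return {"date": date, "total": total, "category": category}
```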
This is a trap. In 2026 (2025 has almost ended, after all), writing a parser from scratch is not “Engineering”; it is undifferentiated heavy lifting.
The “Universal Adapter” Thesis
The core insight of AI Engineering is not that “AI writes code for you” (Copilot).
The insight is that AI replaces code for you. LLMs act as a Universal Adapter. They allow us to treat Unstructured Data (Natural Language, Images) as if it were Structured Data (JSON).
- Software 1.0 Approach: You spend 2 weeks writing a fragile parser that breaks if the invoice format changes.
- Software 2.0 Approach: You write a prompt: “Extract date, total, and merchant from this image. Return JSON.” (there’s a better way, read my Pydantic post)
The LLM becomes a probabilistic computing layer that absorbs the complexity of the real world, so your deterministic code doesn’t have to.
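Here is a minimal sketch of that Software 2.0 version, using the OpenAI SDK’s JSON mode as one concrete option. The model name, prompt wording, and field names are illustrative; any client that can return structured output works:

```python
import json
from openai import OpenAI  # any LLM client works; this is just one concrete example

client = OpenAI()

EXTRACTION_PROMPT = """Extract the invoice date (ISO 8601), total amount, currency,
merchant name, and an expense category from the text below.
Return only a JSON object with keys: date, total, currency, merchant, category.

Invoice text:
{ocr_text}"""

def extract_invoice(ocr_text: str) -> dict:
    # The model absorbs the messy formats; your code only ever sees JSON.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; pick whichever model you trust
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(ocr_text=ocr_text)}],
        response_format={"type": "json_object"},  # forces syntactically valid JSON
    )
    return json.loads(response.choices[0].message.content)
```

Notice what is missing: no regex, no format-specific branches. When a vendor redesigns their invoice layout, this function does not change.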
The Sandwich Architecture
However, the “Full AI” approach has a fatal flaw: Hallucination. Tesla can afford to use an end-to-end neural net because if the model outputs a 99% correct steering angle, the car still turns. But in SaaS, if your model outputs a 99% correct bank balance, you get sued. We cannot simply “delete all code.” We need a new architectural pattern that balances the Creativity of Probabilistic Models with the Reliability of Deterministic Code. I call this The Sandwich Architecture.
1. The Top Bun (Deterministic Guardrails)
This is traditional code.
- Auth: Who is this user?
- Rate Limiting: Do they have budget?
- Schema Validation: Is the input safe?
You never let an LLM handle security. Prompt Injection is real. `if (user.isAdmin)` must remain code.
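A minimal sketch of what the Top Bun can look like in Python with Pydantic. The `User` object, the budget field, and the 10 MB cap are hypothetical stand-ins for your own auth, billing, and upload rules:

```python
from dataclasses import dataclass
from pydantic import BaseModel, Field, ValidationError

@dataclass
class User:
    # Hypothetical stand-in for whatever your auth layer hands you.
    is_authenticated: bool
    monthly_llm_budget_cents: int

class UploadRequest(BaseModel):
    # Schema validation: reject garbage before it gets anywhere near a model.
    user_id: str
    content_type: str
    file_size_bytes: int = Field(gt=0, le=10_000_000)  # illustrative 10 MB cap

def top_bun(request_data: dict, user: User) -> UploadRequest:
    # Auth and rate limiting stay in boring, deterministic code. No LLM sees this.
    if not user.is_authenticated:
        raise PermissionError("Not logged in")
    if user.monthly_llm_budget_cents <= 0:
        raise PermissionError("LLM budget exhausted for this month")

    # Pydantic enforces the input schema; anything malformed never reaches the Meat.
    try:
        return UploadRequest(**request_data)
    except ValidationError as exc:
        raise ValueError(f"Rejected upload: {exc}") from exc
```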
2. The Meat (Probabilistic Intelligence)
This replaces your complex business logic.
- Parsing: Turning emails into tickets.
- Routing: Deciding which department needs to see this.
- Extraction: Pulling data points from unstructured text.
This is where you save months of development time. You replace thousands of lines of logic with a few API calls.
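For example, ticket routing can shrink to something like this (the department list, model, and fallback are illustrative):

```python
import json
from openai import OpenAI  # any LLM client with a JSON mode works

client = OpenAI()
DEPARTMENTS = ["billing", "technical_support", "sales", "account_management"]  # illustrative

def route_ticket(email_body: str) -> str:
    # One call replaces the keyword-matching switch statement you would
    # otherwise grow and patch forever.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model you trust
        messages=[{
            "role": "user",
            "content": (
                f"Classify this customer email into exactly one of {DEPARTMENTS}. "
                'Return a JSON object like {"department": "..."}.\n\n' + email_body
            ),
        }],
        response_format={"type": "json_object"},
    )
    department = json.loads(response.choices[0].message.content).get("department")
    # Deterministic fallback: never trust the Meat blindly.
    return department if department in DEPARTMENTS else "technical_support"
```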
3. The Bottom Bun (Deterministic Execution)
Once the LLM has extracted the data (e.g., `{ "amount": 50, "currency": "USD" }`), you hand control back to code, as sketched below the list.
- Database Writes: SQL is deterministic.
- Math: `amount * tax_rate`.
- API Calls: Stripe charges.
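A sketch of that hand-off, assuming the Meat returned the JSON above: validate it against a strict schema, then do the money math and persistence in plain code (the currency whitelist and tax rate are illustrative):

```python
from decimal import Decimal
from pydantic import BaseModel, field_validator

class ExtractedExpense(BaseModel):
    # The contract between the Meat and the Bottom Bun.
    amount: Decimal
    currency: str
    merchant: str
    category: str

    @field_validator("currency")
    @classmethod
    def known_currency(cls, v: str) -> str:
        # Illustrative whitelist; a hallucinated currency fails loudly here.
        if v.upper() not in {"USD", "EUR", "SGD"}:
            raise ValueError(f"Unknown currency: {v}")
        return v.upper()

TAX_RATE = Decimal("0.09")  # illustrative flat rate

def commit_expense(llm_output: dict) -> dict:
    # If the model invented a field or mangled a number, this raises
    # instead of silently corrupting your ledger.
    expense = ExtractedExpense(**llm_output)

    # Math stays deterministic: the model never computes money.
    tax = (expense.amount * TAX_RATE).quantize(Decimal("0.01"))

    # In real code, this is where your SQL insert or Stripe charge happens.
    return {
        "merchant": expense.merchant,
        "category": expense.category,
        "amount": str(expense.amount),
        "tax": str(tax),
        "currency": expense.currency,
    }
```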
The Shift in Value
The implication of this shift is profound for us as engineers. In the Software 1.0 era, your value was defined by your ability to write the logic that solved the problem. You were the Translator (Human → Machine Code). In the Software 2.0 era, the “Translation” is commoditized. The LLM does the translation better than you. Your value shifts to Orchestration (you might want to highlight this):
- Designing the Sandwich.
- Defining the Schemas (the interface between the Buns and the Meat).
- Evaluating the reliability of the Model.
Tesla deleted their code to make their car a better driver. You, too, should delete your code to make your SaaS a better product. This is the trend. The best backend for your next app is no longer a 50,000-line Express.js (or whatever framework) server. It’s a 500-line orchestration layer that knows exactly when to call upon the ghost in the machine.
💡 The Takeaway
Next time you are about to write a Regular Expression or a complex switch statement to handle user input, stop. Ask yourself: “Am I trying to code a rule for a falling leaf?” If the answer is yes, delete the code and use a prompt to an LLM instead.
Engineering insights for the AI age. Human strategies for a clearer mind. The newsletter for the builder who thinks. Join now for free to receive weekly visual mental models and decision frameworks.
Till We Code Again | Dylan Oh | Substack: Engineering insights at the intersection of Legacy Systems and Future Intelligence. dylanoh.substack.com
Cheers.
Read the full article here: https://levelup.gitconnected.com/your-saas-backend-is-too-heavy-efc8ddcffa60