Writing Software: How We Do It

Luis Galeas

Jul 31, 2025

When I tell people how we build software, I often get three reactions in a row. First, they get excited; who doesn't want to create software faster? Then they get suspicious; wait, what's the catch? And finally, they get it: "Oh, now that makes sense."

The confusion stems from the hype surrounding AI. Everyone promises miracles, but when engineering teams attempt to utilize today's tools to deliver enterprise software, they hit a wall. Let me explain what we do instead and why it works.

Why "Instant App Generators" Don't Work

You've probably seen those tools that promise to turn a paragraph or even an entire specification into a working app. Cool demo, right? But try building something complex, like a neobank, and you'll hit a wall. The code is difficult to modify, the logic is convoluted, or it simply doesn't do what you need.

Here's why: building software isn't one big problem. It's a bunch of different problems. You need to talk to users and figure out what they want. You need to design how everything fits together. You need a good tech architecture. You need to write the code. You need to prioritize what should be built now vs later. You need to think about tradeoffs. You need to test your code. Document it. Each of these is entirely different work.

Asking AI to do all of this at once is like asking someone to study for a calculus test while juggling, writing a poem, and changing a baby's clothes. Even if they're great at each task separately, combining them means they'll likely botch all four.

What We Do Instead

We realized something long ago: AI excels at specific, well-defined tasks. Give it a vague, open-ended problem, and it struggles. Give it a straightforward task with good context, and it often does better than humans.

So we broke everything down.

First, we talk to people and write down what they need: We interview stakeholders and use AI to help organize our notes and build a knowledge base. But humans decide what's essential—AI just helps us keep track of everything.

Then we design the system: Our engineers sketch out how everything should work. We've developed specialized visualization tools that make this process easier, where AI helps identify potential problems. But humans make the big decisions about how to structure things. (For my fellow nerds out there: our designs model dependencies between components as "sparse graphs" to make the solution more tractable for both humans and AI.)
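
To make the sparse-graph idea concrete, here's a minimal sketch of what modeling component dependencies as a graph buys you. This is purely illustrative — the component names and the specific tooling are hypothetical, not our actual system — but it shows why a sparse dependency graph is tractable: you can mechanically derive a safe build-and-review order and catch cycles early.

```python
from collections import defaultdict, deque

def topo_order(edges):
    """Return components in dependency order, or raise if there's a cycle."""
    graph = defaultdict(list)      # sparse adjacency lists
    indegree = defaultdict(int)
    nodes = set()
    for dep, comp in edges:        # dep must exist before comp
        graph[dep].append(comp)
        indegree[comp] += 1
        nodes.update((dep, comp))
    # Start from components with no unmet dependencies (Kahn's algorithm).
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: dependencies are not a DAG")
    return order

# Hypothetical components for a banking-style system:
edges = [("auth", "accounts"), ("accounts", "payments"), ("auth", "payments")]
print(topo_order(edges))  # ['auth', 'accounts', 'payments']
```

Because the graph is sparse, both a human reviewer and an AI tool can reason about one component at a time, with only its direct neighbors as context.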

Next, we write super-precise specifications: This is the secret sauce. We describe precisely what the software should do in a special format—like a contract that says "when X happens, do Y." No ambiguity. (For my fellow nerds out there: we use our own Domain Specific Language / DSL.)
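
Our actual DSL is proprietary, but a toy version conveys the shape of a "when X happens, do Y" contract. In this hypothetical sketch, each rule has exactly one trigger and one action, and anything that doesn't match the grammar is rejected outright — that strictness is what removes ambiguity:

```python
import re

# One rule per line: "WHEN <event> THEN <action>". Event and action names
# here are illustrative, not real system identifiers.
RULE = re.compile(r"WHEN (?P<event>[\w.]+) THEN (?P<action>[\w.]+)")

def parse_spec(text):
    """Parse a spec into (event, action) pairs; reject anything ambiguous."""
    rules = []
    for line in text.strip().splitlines():
        m = RULE.fullmatch(line.strip())
        if not m:
            raise ValueError(f"ambiguous or malformed rule: {line!r}")
        rules.append((m["event"], m["action"]))
    return rules

spec = """
WHEN deposit.received THEN balance.credit
WHEN balance.negative THEN account.freeze
"""
print(parse_spec(spec))
```

A spec in this form leaves the code generator no decisions to make, which is exactly the point of the next step.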

Then AI writes the code: This is where AI shines. We're not asking it to be creative or make decisions. We're saying, "Here's exactly what we want; now write the code." It's like having a speedy typist who never gets tired.

Finally, humans review everything: Every single line is checked by a person. Not because AI is bad, but because this is how you catch the weird edge cases and make sure everything makes sense.

The key is that humans check the work at each step. When you catch problems early, they don't snowball into disasters later.

Where Companies Are At

We see companies at different stages of figuring this out:

Stage 0: "AI is overhyped nonsense": Still doing everything the old way. Often burned by trying a tool that didn't work.

Stage 1: "Let's try LLMs": Using basic autocomplete tools. Getting marginally faster at some code writing, but nothing revolutionary.

Stage 2: "AI for specific tasks": Using AI to write tests or documentation. Helpful but not game-changing. For example, asking LLM+CLI tools for a first draft of a feature.

Stage 3: "AI for many tasks": Using AI for many tasks, but not in a holistic way with a streamlined end-to-end process.

Stage 4: "Complete transformation": This is where we work. AI helps at every step, but humans stay in control. Speed improves by orders of magnitude, not just in code writing but across the entire software development lifecycle.

The Technical Bit (Stay With Me)

Here's why our approach works, technically speaking. Modern AI (the transformer architecture, if you care) has a specific limitation: it looks at pairs of things and finds patterns. When you need to figure out complex chains of reasoning—like "A affects B, B affects C, so A must affect C in this specific way"—AI can recognize patterns it's seen before but struggles to figure out genuinely new connections.
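
The "pairs of things" point can be made concrete with a toy version of self-attention. Each attention layer computes a score for every pair of tokens and mixes information by those scores; relating A to C *through* B has to be composed across layers, which is where chained reasoning gets hard. This is a stripped-down illustration, not a real model:

```python
import math

def attention(queries, keys, values):
    """Single-head dot-product attention over toy 2-d vectors."""
    out = []
    for q in queries:
        # Pairwise scores: one number per (query, key) pair.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
                  for k in keys]
        # Softmax the scores into mixing weights.
        exps = [math.exp(s - max(scores)) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # Each output is a weighted blend of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

toks = [[1.0, 0.0], [0.0, 1.0]]
print(attention(toks, toks, toks))  # each token attends mostly to itself
```

Everything the layer computes is a function of pairwise similarities — a good fit for pattern-matching against a clear spec, and a poor fit for open-ended multi-step inference.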

That's why we use humans for the complex reasoning tasks (understanding business needs and making architectural decisions) and AI for the pattern-matching tasks (writing code that follows a clear specification and identifying bugs in existing patterns).

The result? We regularly deliver in 4-6 weeks what used to take 12-18 months, not by cutting corners, but by being way more efficient at each step.

What's Next?

The companies succeeding with AI aren't the ones looking for magic "do everything" buttons. They're the ones thoughtfully using AI where it helps while keeping humans in charge of the critical decisions.

Want to get started? Look at where you're currently stuck. Is it figuring out requirements? Making architecture decisions? Writing code? Testing? Then think about how AI could help with that specific problem while keeping human judgment where it counts.

The future isn't replacing developers with AI. It's developers and AI working together, each doing what they do best.
