
Why your AI coding agent needs more than a plan: Lessons from the trenches

Moving into AI-first development is a journey, and we’re all learning together. I want to share some bittersweet lessons from my recent experience that might save you from hitting the same walls I did.

The “Secret” Everyone Knows

Let’s address the elephant in the room. By now, there are probably a million YouTube videos titled “A Super Secret Trick To Make Your Coding Agent 20x Better.” You know the trick, I know the trick: create a detailed plan in a markdown file and direct the agent to execute it step by step.

Armed with this knowledge, my trusted army of agents and I were happy campers for several days of non-stop AI coding. In AI terms, that’s significant—countless tokens, kilowatts of electricity, and increasingly capable agents working in harmony. It was an idyll with me being the conductor of the agentic orchestra, or if you want a warmer metaphor, my agents being trusty golden retrievers happily bringing the ball back over and over again.

The project grew to 158 source code files (not counting tests, documentation, or build scripts). While some were adapted from a permissively licensed open source SDK, most were new or substantial rewrites. For a prototype, it was a considerable codebase.

When Things Go South

Everything was smooth sailing while the codebase remained small. I wasn’t meticulously reviewing every line (“I’m a trained professional – don’t do that at home”, or more appropriately, “don’t do that at work”), but the plan was solid, and the app did what it needed to do.

But as the codebase grew, my agent hit a wall like a test car in a crash test. Well, at least that’s how it felt when, despite numerous attempts to re-prompt around or through that wall, the agent was getting nowhere. Sure, I could have dug through the code myself, but I was too lazy to read and debug a bunch of “not mine” code built on frameworks I’d never worked in, especially after the agent had made multiple “off-plan” modifications trying to solve the problem.

The Hard-Won Lessons

From this failure (and my past successes), I’ve extracted valuable insights that will fundamentally change how I approach AI-driven development. “In it to win it.”

1. Architecture-First Approach

Old way: Plan → Execute

New way: High-level plan → For each module:

  • Develop module_architecture.md (defining key data structures, interfaces, control flow, and design patterns)
  • Create module_execution_plan.md
  • Execute the module plan step-by-step
  • Move to the next module
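
To make this concrete, here is a sketch of what a module_architecture.md could contain. The module, names, and sections are purely illustrative (my own convention, not a standard):

```markdown
# Architecture: payments module (illustrative)

## Key data structures
- PaymentRequest: amount, currency, idempotency_key
- PaymentResult: status, provider_ref, error

## Interfaces
- PaymentGateway.charge(request) -> PaymentResult
- Callers never retry; retries and timeouts live behind the gateway

## Control flow
validate -> authorize -> capture -> persist -> notify

## Design patterns
- Adapter around each external payment provider
- Outbox pattern for notifications, so writes and messages stay consistent
```

The module_execution_plan.md then breaks this architecture into ordered, testable steps the agent can follow.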

The key insight? I never truly “discussed” the architecture with my agent. Without that shared understanding, I couldn’t fully trust the foundation—a much bigger problem than doubting a single function. Next time, I’ll co-own both the plan and the architecture doc, so the app feels like mine even if much of the code isn’t.

2. Testing Standards from Day One

I would define my testing standards up front and force the agent to follow them. EVERY STEP would require writing new regression tests and running the full regression suite. Without that discipline, the agent created ad-hoc tests to debug whatever problem was in front of it, then either auto-deleted them or left them scattered in inconsistent places.
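
As an illustration of the kind of standard I mean, here is a hypothetical convention using Python’s unittest: every step adds named, permanent regression tests, and the agent then runs the full suite, not just the newest test. The function under test is a toy stand-in, not code from my project:

```python
# Hypothetical testing convention (illustrative, not from the project):
# every agent step adds permanent, named regression tests and then runs
# the FULL suite so earlier fixes are re-verified.
import unittest

def normalize_path(p: str) -> str:
    """Toy function standing in for real module code."""
    return p.strip().rstrip("/") or "/"

class TestNormalizePathRegression(unittest.TestCase):
    # Each fixed bug gets its own permanent test case, never auto-deleted.
    def test_trailing_slash_removed(self):
        self.assertEqual(normalize_path("/srv/data/"), "/srv/data")

    def test_root_is_preserved(self):
        self.assertEqual(normalize_path("/"), "/")

if __name__ == "__main__":
    # The agent is instructed to run the whole suite after every step.
    unittest.main(argv=["regression"], exit=False, verbosity=2)
```

The point of the standard is the ratchet: tests only accumulate, so any “off-plan” modification that breaks earlier behavior surfaces immediately.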

3. Comprehensive Logging Strategy

I would define my logging standards up front, including verbosity levels and decorators that auto-log function entry, exit, and key state without bloating the code with debug messages. That keeps the code readable and the logs detailed.
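
A minimal sketch of such a decorator, using Python’s standard logging module (the `traced` name and log format are my own choices, not an established API):

```python
# Illustrative auto-logging decorator: call details go to the log at a
# chosen verbosity level, while function bodies stay free of debug noise.
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("app")

def traced(level=logging.DEBUG):
    """Log entry, arguments, and return value of the decorated function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.log(level, "-> %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            result = fn(*args, **kwargs)
            log.log(level, "<- %s returned %r", fn.__name__, result)
            return result
        return wrapper
    return decorator

@traced()
def add(a: int, b: int) -> int:
    return a + b

add(2, 3)  # the call itself is unchanged; logging happens around it
```

Raising the logger’s level to INFO silences these traces without touching any code, which is the point: verbosity becomes a configuration decision, not an editing pass.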

The Payoff

With this approach, I’m confident several good things will happen:

  • Higher capability ceiling: My agent would be able to crack the gnarly issue that had it running in circles. With well-organized tests and logs, it’s much easier to identify and solve complex issues.
  • Better human intervention points: When I need to step in, I’ll know exactly where to look.
  • Fewer architectural problems: Having good architecture would help avoid the most significant problems. Small stuff is small by definition.

And of course, when it comes to production, there’s going to be a security review, code review, and more thorough testing.

The Investment

This isn’t a light lift; it takes real effort. In traditional development, proper architecture for critical components can easily take ⅓ of the project timeline. It’s high-skill, high-value work – your principal architect likely earns (and is worth) at least five times what your juniors do (and that’s before you start counting the equity…). So this is not free cheese.

But here’s the key: this approach front-loads the strategic work, done collaboratively between you and AI, leaving the more mundane backlog to AI alone.

Redefining Collaboration

When I say “co-own architecture,” I don’t mean you need a decade of “architecturing” experience. I’m an engineer by training, a product guy by heart, and a business guy by trade. I am pretty rusty when it comes to coding, but I have a keen mind and endless curiosity.

When working on architecture, I’m not alone. Whenever I have a question, whether it’s about some options to solve the problem, or our codebase, or open-source comparables, my trusted agents are there to run some background research and queries for me. This is one of the easiest things to parallelise and multitask, which means you are getting the biggest leverage from AI.

We’re essentially redefining the division of labor: humans focus on architecture, standards, and strategic decisions while AI handles the implementation details within those well-defined boundaries. This is where we envision AI and humans in the future – we want AI to create jobs and help multiply human capabilities/velocity/productivity.

What’s Next

In Part 2 (when my busy work allows for another deep dive session), I’ll share specific examples of how this architecture-first approach solved real problems, including the exact templates and prompts that made the difference. Stay tuned.


The post Why your AI coding agent needs more than a plan: Lessons from the trenches appeared first on SD Times.



