AI, Agile, and the Return of Detailed Planning: A New Balance for Leaders
Introduction: When AI Meets Agile Development
Imagine a software team that, after years of "move fast" Agile practices, suddenly writes a 50-page specification before coding a single feature. This scenario isn't fantasy — it's happening in response to new AI coding assistants. One startup reportedly tripled its development productivity by effectively going "full waterfall," creating exhaustive specs and designs up front before turning things over to an AI agent to execute.
After decades of Agile evangelism, many teams are quietly returning to detailed specs, comprehensive planning, and upfront design. The reason? AI is extremely powerful at following exact instructions, but terrible at reading minds or filling in blanks. In the words of one AI expert, "the hottest new programming language is English" — meaning the skill of precisely describing what the software should do is more important than ever.
This shift raises a critical question for tech leaders: Can we harness AI's strengths without losing the adaptability and customer focus that Agile brought us?
Why AI Is Changing the Rules
Recent advances in generative AI (large language models) have given us coding assistants that can build entire features or applications from natural language descriptions. These AI agents excel at deterministic execution — if you give them a clear, unambiguous spec, they will tirelessly generate code to match. Early reports suggest AI can triple coding-task efficiency under ideal conditions.
However, today's AI cannot truly understand your business intent or ask clarifying questions the way a human developer might in daily stand-ups. It does literally what you ask, and nothing more. A vague prompt like "build a user dashboard" might yield a technically functional but underwhelming result. But a detailed page-by-page requirement document with explicit criteria will be implemented faithfully by the AI, down to edge cases.
In short, AI amplifies the consequences of ambiguity. Teams are learning that if you "wing it" with minimal instructions, the AI won't magically fill gaps — it will produce something broken or off-target. To unlock AI's productivity benefits, you must spell out requirements with a rigor that many Agile teams haven't practiced in years.
The Lure of Detailed Specs: Waterfall Planning Makes a Comeback
Tech leaders are observing a pattern: teams using AI are writing much more detailed design docs and specs than before — in some cases 3× more time is spent on up-front design. Product requirement documents start to read like legal contracts, and user stories now include exhaustive acceptance criteria covering every edge case. Architecture decisions are being finalized before a single line of code is written.
One engineering leader dubbed this trend "Agile planning, waterfall execution." The idea is to retain short development cycles (sprints or iterative releases), but pack each cycle with far more upfront planning so that an AI can execute the plan without constant course-correction.
An example of this new approach comes from Amazon's experimental AI coding environment called Kiro. Kiro forces developers to choose between two modes: "Vibe" (exploratory chat-driven coding) vs. "Spec" (plan-first coding). In Spec mode, the tool won't generate any code until you've written a requirements document, a technical design, and a detailed implementation plan — a workflow uncannily reminiscent of old-school waterfall processes.
The reasoning is simple: Kiro's creators found that users weren't giving AI enough detail to get high-quality results, so the tool itself acts like a project manager, guiding the team to plan more before coding. The outcome? In a trial project, following a strict spec→design→code sequence produced a "good quality" application that worked as intended on the first try.
Experienced AI developers echo this: they're writing longer prompts and feeding models very detailed plans, resulting in code with far fewer errors than "just winging it" with minimal instructions. As one AI engineer put it, "the agents we have right now need what waterfall provides even more than people do." In other words, AI thrives on the kind of exhaustive, upfront detail that waterfall methodologies championed.
Why Clarity Was Always King (and Agile Didn't Actually Ban It)
Before we declare Agile dead, it's important to remember: "Agile" never meant no planning or documentation. Agile practices always valued clear requirements and good design — they simply approached them iteratively and collaboratively. The Agile Manifesto, after all, favors "responding to change over following a plan," not ignoring the plan entirely.
In healthy Agile teams, user stories are discussed and refined, acceptance criteria are defined, and architecture is considered continuously. If some teams took "individuals and interactions over processes and tools" to an extreme of zero documentation, that was a misapplication of Agile principles.
As one commentator noted, "AI doesn't kill Agile. It punishes vague thinking." In effect, AI is shining a spotlight on any lazy or incomplete thinking in our software process. Teams that survived on unwritten tribal knowledge and on-the-fly decisions now must write things down for the AI to understand. This isn't a betrayal of Agile — it's a corrective for bad habits.
It's also worth noting that Waterfall methodology wasn't wholly bad, either. Its emphasis on analysis and design came from hard lessons about software quality. The problem was the rigidity: assuming all requirements could be known upfront and never change. Agile arose to embrace change and uncertainty, favoring working software over comprehensive documentation and continuous customer feedback. Those values remain as crucial as ever.
Yet, what we're seeing now is a recalibration: to work effectively with AI, teams are rediscovering the value of thoroughness. Think of it this way — Agile encouraged minimal documentation only because humans could clarify details through conversation, collaboration, and adaptation. But an AI agent lacks the context of hallway conversations or the intuition of an experienced engineer; the only way to give it that context is through explicit instructions and documentation.
In essence, when your team includes non-human "developers," you need to externalize knowledge more formally. The challenge for leaders is to do this without losing the agility to adapt when new insights emerge.
The Risks of Overcorrecting: Don't Throw Out Agile's Benefits
While detailed planning can dramatically improve AI's output, leaders must guard against swinging to the opposite extreme — a brittle, feedback-starved process. A key lesson from the past: a perfect spec on paper can still produce the wrong product. Markets change, users discover new needs, or simply the spec might have flaws that only become evident once working software is tested.
Traditional waterfall projects often failed because they delivered something that met the original requirements but no longer met the real need, due to lack of intermediate checkpoints and adjustments. That risk still exists, AI or not. In fact, an AI will blissfully build exactly what you ask, even if your ask is misguided.
As one AI practitioner observed in his experiments, the AI could get "hung up" implementing a specification to the letter even when the spec didn't make sense in the real world — leading him to scrap the result and start over. In a human team, a developer might raise a question or flag an issue midway; an AI won't — it's a very fast, very literal worker. So if your plan is flawed, an AI will simply get you to a flawed result faster! Or as another expert quipped, "Waterfall will be a faster death march with AI" if you aren't careful.
To avoid this, retain short feedback loops. You might write detailed specs, but do it for smaller increments of work. For example, instead of a monolithic 50-page spec for a 6-month project, you produce a 5-page spec for this sprint's feature set, have the AI build it, then review and validate the output with real users or stakeholders.
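One lightweight way to keep each increment's spec explicit and reviewable is to capture it as structured data that both humans and tooling can inspect before any code is generated. A minimal sketch — the feature and field names here are hypothetical, not a standard format:

```python
# A hypothetical per-increment mini-spec captured as structured data.
# Field names and the feature itself are illustrative only.
mini_spec = {
    "feature": "CSV export for the reports page",
    "in_scope": ["export rows matching the current filter", "UTF-8 output"],
    "out_of_scope": ["scheduled exports", "XLSX format"],
    "acceptance": [
        "header row matches the visible columns",
        "an empty result set yields a header-only file",
    ],
}

# A reviewer (human or AI) can at least verify the spec is complete
# enough to hand off before generation starts.
required = {"feature", "in_scope", "out_of_scope", "acceptance"}
assert required <= mini_spec.keys(), "spec is missing required sections"
```

The point is not the format but the habit: each sprint-sized increment gets its own small, checkable spec rather than one monolithic document.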
This aligns with what some teams are calling mini-waterfalls or micro-iterations. Ed Lyons, an architect who experimented with AI-driven development, found success in running a series of "small waterfall" cycles inside an Agile project — essentially treating each feature or two as its own requirements/design/code cycle. He noted it felt like "a little too much documentation" (a reminder of why we moved to Agile), but it ensured the AI had all the info needed, and it still allowed adjustments between cycles.
In practice, this looks like Agile in macro (frequent releases and reprioritization) but Waterfall in micro (each piece built with thorough forethought).
Another crucial practice is continuous validation. When an AI writes code, incorporate automated tests and verification against the spec at every step. Think of a "spec-to-validation" workflow: you feed the AI detailed requirements, it generates code, and then you immediately run test suites or even AI-driven checks to see if the code meets the spec.
If something fails, you've caught the mismatch early and can correct course. This is analogous to traditional Agile's emphasis on getting working software in front of users fast — except now the "user" might first be a test harness or an AI validator. The goal is the same: ensure we're building the right thing before we've invested too much.
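A minimal sketch of that spec-to-validation loop, assuming acceptance criteria are written as executable checks. The `slugify` function stands in for AI-generated code under review; all names here are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str           # the requirement, in plain language
    check: Callable[[], bool]  # an executable test for that requirement

def validate(criteria: list[Criterion]) -> list[str]:
    """Run every acceptance check; return descriptions of those that failed."""
    return [c.description for c in criteria if not c.check()]

# Stand-in for AI-generated code under review (hypothetical example).
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

criteria = [
    Criterion("lowercases the input",
              lambda: slugify("Hello World") == "hello-world"),
    Criterion("collapses extra whitespace",
              lambda: slugify("  a   b ") == "a-b"),
]

failures = validate(criteria)  # an empty list means the code meets the spec
```

In a real pipeline the criteria would come from the spec document and run in CI immediately after each AI generation step, so a mismatch surfaces before the next increment begins.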
Human oversight remains paramount; AI can and will produce errors or unintended solutions that pass superficial tests. A senior engineer or architect should review critical outputs. By front-loading clarity but still testing assumptions through quick iterations, you get the best of both worlds: the AI's speed and precision plus the human ability to steer the product toward real needs.
"Waterfall 2.0" — An Emerging Hybrid Approach
What might the future software process look like as we integrate AI? Many believe it's not a return to 1970s-style development, but rather a new hybrid paradigm sometimes dubbed Waterfall 2.0. In this vision, development is more structured and documentation-driven at each step (much like waterfall), but the overall process is still iterative and responsive (like Agile).
Large Language Models become like specialized team members in each phase, consuming and producing project artifacts. For example, comprehensive Architecture Decision Records, API interface specs, design guidelines, test plans, etc., are not only written for human clarity but also to feed into AI tools that generate and validate code. All these artifacts are kept version-controlled (e.g. in a repository) as the single source of truth that both humans and AIs refer to.
The AI might help draft these documents (speeding up the tedious parts) and then follow them when coding. In essence, documents and specifications become first-class deliverables and inputs in the development lifecycle, rather than afterthoughts.
This approach also gives rise to new roles and practices. We're already hearing about roles like "Specification Engineers" who excel at writing precise software specs, "AI Prompt Engineers" who craft effective prompts and constraints for LLMs, and AI validation specialists who design tests ensuring the AI's output aligns with requirements.
Teams might adopt AI orchestration, where multiple AI agents handle different stages — e.g. one AI generates code while another reviews security or style compliance. As a leader, you should recognize that writing and managing requirements is now a critical competency. Your team's productivity with AI will directly correlate to how well they can communicate objectives and constraints to the machine.
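At its core, that kind of orchestration is just composing stages, each of which may pass the artifact along or reject it. A toy sketch with stub agents — a real generator or reviewer would call an LLM; everything here is illustrative:

```python
def generator(spec: str) -> str:
    # Stand-in for a code-generating agent; a real one would call an LLM.
    return f"def feature():\n    # implements: {spec}\n    pass\n"

def style_reviewer(code: str) -> str:
    # Stand-in for a review agent: enforce one simple style rule.
    for lineno, line in enumerate(code.splitlines(), start=1):
        if len(line) > 79:
            raise ValueError(f"style violation: line {lineno} exceeds 79 chars")
    return code  # artifact passes unchanged to the next stage

def orchestrate(spec: str) -> str:
    # Each stage consumes the previous stage's artifact.
    return style_reviewer(generator(spec))

artifact = orchestrate("export the current report as CSV")
```

The design choice worth noting: each agent has one narrow responsibility and a rejected artifact fails loudly, so a flawed generation never silently reaches the next stage.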
Investing in training staff on clearer technical writing, incorporating domain experts to define requirements, and adopting tools for managing specifications will pay off. This doesn't mean every engineer must become a novelist of technical specs, but it does mean the balance of work shifts: more time spent on what to build, so that less time is spent wrestling with how the AI built it.
Leadership Perspective: Embrace Clarity, Preserve Agility
For technology leaders, the rise of AI development tools is both an opportunity and a test. The opportunity is huge — when guided well, AI can massively accelerate development and even reduce the grunt work (one can imagine a future where your team focuses almost entirely on high-level design and validation, while AI writes and refactors the code).
Early adopters are seeing faster delivery times and the ability to tackle more ambitious projects with leaner teams. However, realizing these gains requires a shift in team culture and process. Leaders should encourage and reward the production of clear specifications, documentation, and test plans. Make it standard that any task handed to an AI (whether it's coding, test generation, or data migration) comes with a well-defined spec or acceptance criteria.
In practical terms, this might mean reviving practices from the past (brushing up those PRD and design document templates) and integrating them into your Agile ceremonies. For instance, sprint planning might include a step to write a mini-spec for each significant user story, which the AI pair-programmer will then implement. Daily stand-ups might include discussing not just code tasks but spec updates ("What did we learn yesterday that changes our requirements?").
At the same time, preserve the essence of agility. Encourage teams to plan in detail but also plan to change the plan. If new information comes in — a user test shows a flaw, or a stakeholder tweaks the feature — don't let the hefty spec become a barrier to change. It's critical that everyone understands a spec is a living document, not a contract etched in stone.
Culturally, reinforce that feedback and iterative improvement are still expected. One way to do this is by shortening the planning horizon: allow detailed planning for the next few weeks of work, but hold off on writing specs for features further out until you periodically reassess. This keeps the organization adaptable.
Also, keep customers in the loop. Use Agile's practice of frequent demos or releases to gather user feedback on each AI-delivered increment, and be ready to rewrite specs for subsequent iterations based on that feedback. Remember, the ultimate measure of success is not how perfectly the team followed a plan, but whether the product succeeds for users and the business.
Finally, be vigilant about the quality of AI outputs. AI may write syntactically correct code that passes basic tests, but it could introduce subtle bugs or inefficiencies. It might also strictly follow an imperfect spec without question. Hence, quality assurance and code review are still necessary — though their focus might shift more toward verifying alignment with requirements and overall system soundness.
You might find that your QA engineers and developers spend more time reading and refining specs (before coding) and reviewing AI-generated code (after coding) than writing code themselves. That's okay — it's part of the new allocation of effort. Your job as a leader is to realign roles and expectations so that everyone knows where to add value in this AI-augmented process.
Developers, for instance, may need to improve skills in prompt engineering and in interpreting AI output, debugging not just software issues but communication issues between humans and AI. Product managers might take on a stronger role in detailing requirements. Architects might produce more explicit guidelines (coding standards, architecture diagrams, interface contracts) that AI must adhere to.
These shifts all aim at one thing: making sure the AI builds the right product, and builds it right.
Conclusion: A New Balance, Not a Binary Choice
The advent of AI in software development is driving a return to some "old" ways (like thorough upfront planning), but this doesn't mean we simply rewind to the 1990s and forget everything we learned with Agile. Think of it instead as an evolution toward greater discipline. We are learning that humans and AI have different strengths: humans excel at creative thinking, understanding ambiguous needs, and adapting to change, whereas AI excels at following explicit instructions and churning out code or analyses at superhuman speed.
To leverage both, we must give the AI what it needs — clarity — and still retain our human-driven flexibility. In practical terms, some Agile principles may feel less emphasized (we are writing more documentation than "minimal," for example), but the core Agile mindset of frequent adaptation and stakeholder collaboration should remain intact.
It's a trade-off: "responding to change over following a plan" is still valid, but when the plan can be executed at lightning speed by an unthinking agent, you had better make sure the plan is correct — or be ready to change that plan often.
So, which Agile principle dies first if we go down this path? Perhaps it's the misconception that Agile means no documentation or design. That was never a true principle, and it's certainly not sustainable with AI in the mix. The real Agile principles — like embracing change, delivering value quickly, and close collaboration — should not die. In fact, they become even more important to correct and guide the course of our AI-accelerated execution.
The teams thriving with AI are not necessarily those with the fanciest models or tools; they are the ones who "finally learned to write down what they actually want," and to do so in manageable increments. As a technology leader, you have the responsibility to set this balanced course. Embrace the rigorous thinking and planning that AI development demands, but don't lose the nimbleness that lets your organization pivot when needed.
If we strike that balance, we can have the best of both worlds: the foresight of careful planning plus the agility to continuously align with reality. In the end, AI won't kill Agile — it may just kill our bad habits and force us all to up our game. And that could be a very good thing for building software that's both high-quality and highly relevant in a fast-changing world.
Sources and References
Core Industry Discussions & Thought Leadership
- LinkedIn Post by Jonathan Vanderford: the original inspiration ("I just watched a startup go full waterfall. Their AI productivity tripled..."). Observes the shift from agile "figure it out" to highly detailed, waterfall-style specs driven by AI's need for clear, deterministic instructions.
- Ed Lyons on AI + Waterfall ("The AI Spec Trap"): describes how Amazon's Kiro AI coding tool nudges developers toward detailed requirements and design docs, resulting in "mini-waterfall" cycles that deliver higher-quality AI output.
- Anup Jadhav, "AI and the Rise of Detailed Software Specifications": industry commentary on how LLMs and generative AI require explicit, detailed specifications for optimal results, shifting workflows from coding to specification authoring.
- "The Hottest New Programming Language Is English" (Andrej Karpathy): Karpathy, formerly of OpenAI and Tesla, on how the "skill of the future" is the ability to write unambiguous instructions for LLMs.
Agile, Waterfall, and Hybrid Methodology Analysis
- Agile Manifesto (official site): the foundational document for Agile development; stresses "individuals and interactions," "working software," "customer collaboration," and "responding to change."
- BMC Software, "Waterfall vs. Agile: Key Differences": comprehensive breakdown of waterfall and agile, with pros/cons and historical context.
- Winston Royce (1970), "Managing the Development of Large Software Systems": the seminal paper outlining "waterfall" and its pitfalls, especially rigidity and lack of feedback loops.
AI in Development: Tools, Challenges, & Best Practices
- Amazon's Kiro (AI code tool): community observations on Kiro enforcing detailed requirements before generating code, steering teams toward "waterfall in micro, agile in macro."
- GitHub Copilot overview: the most widely used LLM-based coding assistant, whose output quality depends heavily on prompt quality.
- Microsoft Research, LLMs in Software Engineering: research on how LLMs transform the software development lifecycle, including best practices for requirements and prompt engineering.