Two People, 560K Lines, 5 Months
April 15, 2026 | Dubravko Ban

The pipeline


How we work, from idea to production, follows six stages (aka The pipeline). LLMs are part of every step and improve the throughput of work. What would take us days and weeks before, now takes hours and some back-and-forth. In terms of tooling, we reused from open source what we safely could. What we couldn't, we built via LLMs. Everything is aimed at solving real business problems and providing value to end users.

Stage 1: Market & requirements analysis

We decide the direction: combining domain expertise, user conversations, competitive analysis, and common sense (insert shocked Picard meme), we choose what to build and, more importantly, what not to build.

What AI contributes: Research synthesis, exploring adjacent requirements ("what do similar systems typically include for inventory management?"), stress-testing assumptions by asking "what about X?" questions that broaden coverage. Being generally critical of our ideas and acting as a conversation partner that can be helpful or adversarial (both are important). It speeds up data gathering by orders of magnitude and happily does the boring stuff.

The boundary: AI is useful for breadth (exploring the space). Humans are essential for depth and direction (deciding what matters to these users in this market).

Stage 2: Business analysis & relational database modeling

This is the heavy-lifting part: entity relationship decisions, normalization choices, understanding which real-world concepts become tables vs. columns vs. enums. The dozens of entities in the data model make up the product's backbone, and this is where the critical decisions get made. Getting this wrong will cost us more in the long run than any other choice we can make. Contrary to popular belief, changing software isn't all that easy, especially once it's in production: you really don't want to go around dropping or renaming table columns that are full of data.

What AI contributes: Generating initial entity sketches from requirements, spotting missing relationships ("you have PurchaseOrderLines but no link to ReceiptItems - how do you track partial receipts?"), suggesting index strategies for common query patterns. Designing initial redundancy, performance and stability strategies. Again, a conversational partner; helps speed things up considerably.
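To make the "partial receipts" gap concrete, here is a minimal sketch of what closing it might look like. The entity and field names are hypothetical illustrations, not the actual schema: a link from receipt lines back to order lines is what makes partial fulfillment trackable at all.

```python
from dataclasses import dataclass, field

# Hypothetical entities; names and types are illustrative, not the real schema.
@dataclass
class ReceiptItem:
    purchase_order_line_id: int   # the link that closes the modeling gap
    quantity_received: float

@dataclass
class PurchaseOrderLine:
    id: int
    quantity_ordered: float
    receipt_items: list = field(default_factory=list)

    def quantity_received(self) -> float:
        # Partial receipts accumulate across many ReceiptItems.
        return sum(r.quantity_received for r in self.receipt_items)

    def is_fully_received(self) -> bool:
        return self.quantity_received() >= self.quantity_ordered

line = PurchaseOrderLine(id=1, quantity_ordered=10)
line.receipt_items.append(ReceiptItem(purchase_order_line_id=1, quantity_received=4))
print(line.quantity_received())   # 4.0 received so far
print(line.is_fully_received())   # False: 6 units still open
```

Without that foreign key, "how much of this order has arrived?" becomes an unanswerable query, which is exactly the kind of gap AI is good at flagging early.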

The boundary: AI accelerates drafting but a human must own the final model. AI doesn't understand that the OrganizationalUnit entity will eventually become the backbone of the entire permission scoping system. That's a judgment call based on vision.

Stage 3: Written spec with business logic, guardrails, validation

You guessed it, us again: Writing the spec - status transitions, validation rules, what blocks what, what cascades. Our docs/business-logic/ folder contains detailed specs for each major domain.

Each spec includes:

  • Exact status transitions (with diagrams)
  • Terminal states (no escape - by design)
  • UI button matrix by status
  • Cancellation guards (what blocks, what warns, what cascades)
  • Delete rules (block if child documents exist)
  • Validation rules (field-level and cross-entity)
  • Edge cases (explicitly documented)

These are crucial system behavior decisions, kept in version control, that we continue to develop and test against over time. They are living documents, treated just like regular code files. The guiding principle is to keep the description of how the system behaves close to where the LLM operates, make it easy to access, and course-correct on the go.

What AI contributes: Identifying gaps ("your spec doesn't cover what happens when a work order is cancelled but has open material requests"), generating validation rule matrices, checking consistency across related specs. Incredibly useful and great at assisting with tasks that carry high cognitive load but require little creativity (lots of rote work that we humans love to get wrong or skip over).

The boundary: Specs are governance. The work order cancellation rules exist because of past incidents in similar systems. AI can help write specs but can't provide the judgment of "this will cause a support ticket in 3 months."

Stage 4: Handoff to Claude for analysis and planning

AI-led but human reviews: Claude reads the spec, examines the existing codebase, and proposes an implementation plan: which files to create or modify, what patterns to follow, what tests to write.

This is where the .claude/rules/ system pays off. We have plenty of rules files that encode hard-won conventions: auth, database, API, web UI, Flutter UI, mobile, E2E tests, notifications, etc. Claude doesn't just plan in the abstract; it plans against our codebase's actual constraints. Each rule file was born from a bug, a production incident, a painful review cycle, or just good old experience. They're executable documentation. And the rules will change and/or expand over time as well.

The boundary: The plan is a starting point. It gets us going. It also changes, a lot.

Stage 5: Hashing out details

Collaborative. We challenge the plan, add domain nuance, remove over-engineering, adjust scope (and only sometimes introduce new issues...):

  • "Do we really need a separate service for this, or can it live in the existing one?"
  • "This validation is too strict - users need to save drafts without all required fields."

What AI contributes: Rapid iteration on alternatives, "what if we did X instead?" explorations, detailed impact analysis ("changing this entity means updating 3 controllers, 2 views, and 4 tests"). Another really interesting capability LLMs enable is creating quick-and-dirty throwaway UIs or code samples that can be iterated on, discussed, and then either embraced or discarded immediately, all without investing too much time. This has helped us make better decisions on numerous occasions.

The boundary: The best results come from 3-5 rounds of refinement, not accepting the first plan. AI is eager to agree. Push back. Don't be afraid to do over or re-analyze the plan from scratch.

Stage 6: Implementation

Split by nature of the work:

| Work Type | Who | Examples |
|---|---|---|
| Architecture decisions | Human | Multi-tenancy model, auth proxy design, RBAC evaluation order |
| Novel debugging | Human | HttpClient session leak, CSRF double-cookie, rate limiter tuning |
| Security-sensitive code | Human (reviews all) | XSS fixes, CVE patches, token handling |
| High-volume pattern code | AI | Tens of MVC controllers, hundreds of views, thousands of E2E tests |
| UI standardization | AI | "Audit all 20 view sets against web-ui.md" |
| Localization | AI | EN/HR resource files across all views |
| Refactoring | AI (human directs) | Sidebar+content layout migration, identity bar standardization |
| Mobile features | Collaborative | Flutter pages follow human architecture, AI implements |

AI doesn't just help with stage 6. It participates in every stage. But its role shifts: from research assistant (stages 1-2) to stress-tester (stage 3) to planner (stage 4) to sparring partner (stage 5) to implementation engine (stage 6).
