
How we work

The AI
Process.
End-to-end.

Most teams apply AI as a finishing touch — a chatbot here, an autocomplete there. We do it differently. AI is the operating system of every engagement we run, from the first discovery call to the final deployment pipeline. Four phases. One continuous intelligence loop.

Faster planning
60% less rework
10+ AI tools per project
100% AI-native delivery
Scroll to explore the process
PHASE 01
Planning
AI-Driven Discovery & Roadmapping
PHASE 02
Brainstorming
Generative Ideation & Architecture
PHASE 03
Design
Intelligent Design Systems
PHASE 04
Delivery
Accelerated AI Engineering
Phase 01 — Planning

AI-Driven
Discovery

Planning is where most projects fail — vague requirements, misaligned assumptions, and weeks spent in workshops that produce nothing actionable. We replace that with structured AI-augmented discovery that synthesises requirements, surfaces hidden gaps, and generates delivery roadmaps in a fraction of the time.

AI contribution 70% · Human oversight 30%
What AI does
Rapid synthesis at scale

LLMs ingest raw inputs — stakeholder interviews, existing documentation, legacy code, market context — and produce structured summaries, dependency maps, and risk registers that would take a human analyst days.

Transcribe and analyse stakeholder interviews automatically
Extract functional and non-functional requirements from raw documents
Identify contradictions and ambiguities across source materials
Generate first-draft user story maps and acceptance criteria
Surface delivery risks and produce mitigation frameworks
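The contradiction-detection step above can be sketched in miniature. Assuming per-document requirement extraction has already happened (stubbed here as plain dictionaries rather than real LLM calls, with illustrative document names and requirement keys), flagging conflicts is a merge across sources:

```python
# Sketch: flag contradictions across requirements extracted per source.
# In practice each per-document dict would come from an LLM extraction
# pass; the documents and requirement keys below are illustrative.

def merge_requirements(per_doc: dict[str, dict[str, str]]):
    """Merge per-document requirements, recording conflicting values."""
    merged: dict[str, tuple[str, str]] = {}     # req_id -> (value, source)
    conflicts: list[tuple[str, str, str]] = []  # (req_id, source_a, source_b)
    for doc, reqs in per_doc.items():
        for req_id, value in reqs.items():
            if req_id in merged and merged[req_id][0] != value:
                conflicts.append((req_id, merged[req_id][1], doc))
            else:
                merged.setdefault(req_id, (value, doc))
    return merged, conflicts

extracted = {
    "stakeholder_interview.txt": {"uptime_target": "99.9%", "auth": "SSO"},
    "legacy_spec.pdf": {"uptime_target": "99.5%", "data_residency": "EU"},
}
merged, conflicts = merge_requirements(extracted)
print(conflicts)
# → [('uptime_target', 'stakeholder_interview.txt', 'legacy_spec.pdf')]
```

Real extracted requirements carry free text rather than exact values, so in practice the equality check is itself a model call ("do these two statements conflict?"); the merge structure stays the same.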
What humans do
Judgment, context & accountability

Our strategists and delivery leads bring domain knowledge, stakeholder relationships, and business intuition that AI cannot replicate. They validate AI outputs, make priority calls, and translate requirements into a delivery contract.

Conduct stakeholder workshops and relationship management
Apply domain expertise to validate AI-generated requirements
Make prioritisation and trade-off decisions
Build consensus and alignment across business units
Sign off and own the delivery roadmap
AI tools deployed in this phase
Anthropic
Claude 3.5 Sonnet
Long-document synthesis, requirements extraction, stakeholder transcript analysis, and gap identification across large context windows.
Primary
OpenAI
GPT-4o
Multimodal input processing — analysing wireframes, diagrams, and spreadsheets alongside text documents for holistic discovery.
Notion
Notion AI
Living project wiki with AI-generated summaries, smart meeting notes, and automatic linking of decisions to requirements.
Perplexity AI
Perplexity Pro
Real-time market and competitive intelligence gathering to contextualise requirements against industry landscape and best practices.
Microsoft
Copilot for M365
Automatically generates meeting recaps, action items, and project briefs from Teams calls and Outlook threads.
Linear + AI
Linear AI
AI-powered project scoping that converts requirement documents into structured epics, stories, and sprint plans automatically.
Deliverable
AI-Generated Requirements Brief
Structured document capturing functional, non-functional, and compliance requirements with confidence scores and gap flags.
Deliverable
Dependency & Risk Register
Automatically generated dependency map and risk register with LLM-suggested mitigations, reviewed and validated by our leads.
Deliverable
Delivery Roadmap
Phased, story-pointed delivery roadmap with milestone gates, resource assumptions, and AI-estimated confidence intervals.
Planning → Brainstorming: requirements feed ideation
Phase 02 — Brainstorming

Generative
Ideation

Solution design is constrained by what the room has seen before. AI expands the solution space by generating multiple competing architectures simultaneously, stress-testing each against requirements, and pressure-testing assumptions before a line of code is written. Then our architects make the call.

AI contribution 60% · Human oversight 40%
What AI does
Multi-model ideation & pressure testing

We run structured ideation sessions across multiple frontier models simultaneously. Each model produces competing architectures. We then use AI to critique each proposal — identifying scalability, security, and integration risks before human review.

Generate 3–5 distinct architectural patterns for every problem
AI-vs-AI critique: one model attacks another's proposal
Stress-test assumptions against edge cases and failure modes
Auto-generate technical decision records (ADRs) for each option
Model cost, timeline, and complexity trade-off matrices
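As a structural sketch, the AI-vs-AI critique round looks like the loop below. The model names and helper functions are placeholders for real API clients, not our actual tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    author: str                 # model that drafted the architecture
    summary: str
    critiques: list[str] = field(default_factory=list)

def draft_proposal(model: str, problem: str) -> Proposal:
    # Placeholder for a real model call returning an architecture draft.
    return Proposal(author=model, summary=f"{model} proposal for {problem}")

def critique(reviewer: str, proposal: Proposal) -> str:
    # Placeholder: the reviewer model attacks scalability, security,
    # and integration risks in a rival model's proposal.
    return f"{reviewer} critique of {proposal.author}"

def ideation_round(models: list[str], problem: str) -> list[Proposal]:
    proposals = [draft_proposal(m, problem) for m in models]
    for p in proposals:
        for reviewer in models:
            if reviewer != p.author:      # no model reviews its own work
                p.critiques.append(critique(reviewer, p))
    return proposals

round1 = ideation_round(["claude", "gpt-4o", "gemini"], "event ingestion")
# Three proposals, each carrying two rival critiques for human review.
```

The key design choice is that critiques attach to proposals rather than replacing them, so human architects see every option alongside its strongest objections.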
What humans do
Architecture selection & strategic framing

Our senior architects evaluate the AI-generated options with a lens AI doesn't have: political feasibility, team capability, organisational risk appetite, and long-term technical debt implications. They select, modify, and own the chosen approach.

Review and evaluate AI-generated architectural proposals
Apply organisational and political context to option selection
Deep-dive on security and compliance implications
Facilitate team alignment workshops on chosen direction
Author and own the final Solution Design Document
AI tools deployed in this phase
xAI
Grok 3
Real-time research-augmented ideation — pulls live technical documentation, papers, and community knowledge to ground architectural proposals.
Primary
Google DeepMind
Gemini 2.0 Pro
Multi-modal architecture diagramming — generates and critiques system diagrams, data flow charts, and infrastructure schematics.
Anthropic
Claude — Extended Thinking
Deep reasoning mode for complex architectural trade-off analysis, with step-by-step reasoning chains exposed for human review.
Deliverable
Architecture Options Paper
Three competing architectural options with AI-generated trade-off matrices covering cost, complexity, scalability, and risk.
Deliverable
Technical Decision Records
Auto-generated ADRs capturing every significant decision, rationale, and the alternatives considered — permanently linked to requirements.
Deliverable
Solution Design Document
Human-authored and AI-drafted SDD combining the chosen architecture with implementation guidelines, non-functional requirements, and API contracts.
Brainstorming → Design: architecture shapes the experience
Phase 03 — Design

Intelligent
Design Systems

Design iteration is where weeks disappear. AI compresses that cycle from days to hours — generating interface variants, enforcing accessibility from the first frame, and keeping experience design continuous with the underlying architecture rather than siloed from it.

AI contribution 55% · Human oversight 45%
What AI does
Generate, critique & iterate at speed

AI generates multiple UI directions simultaneously, runs automated WCAG accessibility audits, proposes design tokens, and creates component variants — compressing what used to take weeks of back-and-forth into structured, reviewable outputs within hours.

Generate multiple UI directions from design briefs and brand guidelines
Automated WCAG 2.2 AA accessibility review on every iteration
Propose and enforce design token systems for consistency at scale
Generate component documentation and usage guidelines automatically
Translate Figma designs into production-ready React/HTML code
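One concrete check inside such an accessibility audit is the WCAG contrast-ratio test. A minimal, self-contained version of that computation, following the WCAG 2.x relative-luminance definition:

```python
# WCAG 2.x contrast check: relative luminance of sRGB colours,
# then the (L1 + 0.05) / (L2 + 0.05) contrast ratio.

def _linear(channel: int) -> float:
    # sRGB channel (0-255) to linear light, per the WCAG definition.
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text: bool = False) -> bool:
    # AA thresholds: 4.5:1 for normal text, 3:1 for large text.
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A full audit layers many such checks (contrast, focus order, target size, labels); the point is that each one is deterministic and runs on every iteration, with the LLM reserved for the judgment calls a formula cannot make.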
What humans do
Creative direction & user empathy

Our designers bring brand intuition, user empathy, and creative judgment that AI-generated design lacks. They direct the AI, curate outputs, conduct user research, and ensure that every design decision serves real human needs — not just pattern matching on existing UI libraries.

Set creative direction and curate AI-generated design options
Conduct user research, testing sessions, and usability reviews
Apply brand and emotional design judgment to refine AI outputs
Design complex interactive states, micro-animations, and edge cases
Handoff review, design QA, and developer collaboration
AI tools deployed in this phase
Vercel
v0.dev
AI-powered component generation — describe a UI in plain language, get production-ready shadcn/Tailwind components ready for integration.
Primary
Figma
Figma AI
Auto-layout suggestions, content generation, asset search, and first-pass responsive design generation embedded in the design workflow.
Anthropic
Claude — Design Critic
Custom prompting pipeline that reviews design screenshots against WCAG, brand guidelines, and UX heuristics — returning structured critique reports.
Midjourney
Midjourney v7
Visual direction and mood board generation — exploring art direction options and hero visual concepts before entering the design system.
Builder.io
Builder AI
Figma-to-code pipeline that converts design files to clean React/Vue components, reducing design-dev handoff friction significantly.
ElevenLabs
ElevenLabs + Lottie
AI-generated audio and animation assets for products requiring voice interfaces, onboarding animations, and interactive micro-moments.
Deliverable
Design System & Token Library
A complete, AI-documented design system with tokens, component specs, usage guidelines, and Figma + code-side representations.
Deliverable
Interactive Prototype
High-fidelity Figma prototype covering all primary user journeys, edge cases, and responsive breakpoints — validated against user testing findings.
Deliverable
Accessibility Report
AI-generated WCAG 2.2 AA audit of every key screen, with annotated issues, severity ratings, and recommended remediations.
Design → Delivery: specifications become systems
Phase 04 — Delivery

Accelerated
Engineering

The final phase is where AI earns its place most visibly. AI co-pilots write boilerplate, catch bugs, generate test suites, and review pull requests in parallel with engineers. The result: faster delivery, fewer defects, and continuous integration that learns your codebase with every commit.

AI contribution 65% · Human oversight 35%
What AI does
Code, test, monitor & learn

Every engineer on our team ships with AI co-pilots embedded in their IDE, PR review process, and CI/CD pipelines. AI writes boilerplate, generates test cases, reviews code for security and performance issues, and flags anomalies in production — in real time.

AI co-pilots generate implementation from specs and acceptance criteria
Automated test generation: unit, integration, and E2E test suites
AI PR review: security vulnerabilities, performance issues, code quality
Intelligent CI/CD: predictive test selection, build optimisation
Production monitoring with LLM-powered anomaly detection and root-cause analysis (RCA)
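Of these, predictive test selection is the most mechanical to illustrate: map each test to the source files it exercises, then run only the tests whose dependency set intersects a commit's changed files. A toy version, with illustrative file names (in production the dependency map comes from coverage tracing or build-graph analysis, not a hand-maintained dict):

```python
# Predictive test selection: skip tests whose dependencies are
# untouched by the current change set. File names are illustrative.
DEPENDENCIES = {
    "tests/test_billing.py": {"src/billing.py", "src/tax.py"},
    "tests/test_auth.py": {"src/auth.py"},
    "tests/test_api.py": {"src/api.py", "src/auth.py", "src/billing.py"},
}

def select_tests(changed_files: set[str]) -> list[str]:
    """Return the tests whose dependency set overlaps the changed files."""
    return sorted(
        test for test, deps in DEPENDENCIES.items()
        if deps & changed_files   # non-empty intersection means "affected"
    )

print(select_tests({"src/auth.py"}))
# → ['tests/test_api.py', 'tests/test_auth.py']
```

A change to `src/auth.py` triggers only the two affected suites; a documentation-only change triggers none, which is where most of the CI time savings come from.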
What humans do
Engineering craft & system ownership

Our engineers don't just review AI output — they architect it, challenge it, and own it. They handle complex distributed systems problems, make performance optimisation decisions, and build the observability infrastructure that keeps AI-generated systems trustworthy in production.

Architect complex system components and distributed patterns
Review, refine, and approve all AI-generated code before merge
Deep performance engineering and database optimisation
Security architecture, threat modelling, and penetration testing
Production incident response, RCA, and system reliability ownership
AI tools deployed in this phase
Anthropic
Claude Code
Agentic coding that understands your entire repository — refactors across files, generates multi-file features, and reasons about system-level implications.
Primary
Anysphere
Cursor IDE
AI-native IDE with context-aware code completion, codebase-wide chat, and automated refactoring — the daily driver for every engineer on the team.
GitHub
GitHub Copilot Enterprise
Organisation-aware code suggestions trained on internal codebase patterns, plus AI-powered PR summaries and code review commentary.
Deliverable
Production-Ready Codebase
Fully tested, documented, and reviewed code — with AI-generated inline documentation, test coverage reports, and security scan results.
Deliverable
Observability Stack
End-to-end monitoring, logging, and alerting configured from day one — with AI-baseline anomaly detection calibrated to your production traffic patterns.
Deliverable
Runbook & Knowledge Transfer
AI-generated operational runbooks, architecture decision logs, and onboarding documentation — so your team owns the system from day one of production.

How we think about
AI in delivery

01
AI amplifies, humans decide

AI accelerates every phase but never makes the final call. Every significant decision — architectural, strategic, or creative — is owned by a human who can account for it. AI is the engine; humans are the driver.

02
Multi-model, not mono-model

No single model is best at everything. We run Claude, GPT-4o, Grok, and Gemini in parallel, selecting the right tool for each task and using cross-model critique to pressure-test outputs. Diversity of models creates robustness of outputs.

03
Transparency at every layer

Clients see what AI generated and what humans modified. Every AI output is traceable — from prompt to final deliverable. We never pass off AI outputs as purely human work, and we never obscure AI contribution from the client.

04
Enterprise-safe by default

Sensitive client data never enters public model APIs without explicit consent and data processing agreements. We maintain self-hosted model options (Llama, Mistral) for regulated industries and privacy-critical use cases.

05
Continuous model evaluation

We run structured evaluations on frontier models as they release. If a new model produces materially better output for a given task, we adopt it. Our process is model-agnostic by design — we are not locked to any vendor.

06
Build for transfer, not dependency

At every phase, we document how AI tools were used so clients can replicate the process internally. The goal is capability transfer — your team leaves each engagement more AI-capable, not more dependent on us.

Want to see this
process in action?

We run a free 90-minute AI discovery session for qualified organisations. You'll leave with a clear view of where AI can accelerate your specific challenges — and what a realistic implementation looks like.

Start a project