The Prompt is Dead.
The Specification Lives.
AI prompt engineering has split into four distinct disciplines in 2026. If you only master Prompt Craft, you are leaving 90% of the value on the table. This is the complete framework for directing autonomous AI agents: from context engineering and intent engineering to specification engineering.
Opus 4.6, Gemini 3.1 Pro and GPT 5.3 Codex run autonomously for hours, days, weeks. The chat-based prompt-and-iterate cycle is no longer sufficient. This is what replaces it.
Key Facts
- 0.02%: your prompt's share of what the model sees
- 99.98%: determined by context engineering
- 10x: the gap between 2025 and 2026 skills
- 4: disciplines to master
Four Disciplines.
One Complete System.
Each layer makes the layers above it possible. Skip one, and you create structural vulnerabilities higher in the stack that cannot be repaired later.
Specification Engineering
The entire organisation as an agent-readable spec
Intent Engineering
Goals, values, trade-off hierarchy
Context Engineering
Curating and managing relevant tokens
Prompt Craft
Formulating clear instructions
- Prompt Craft error: costs a morning
- Context error: costs a project
- Intent error: costs a client
- Spec error: costs the organisation
Prompt Craft
The synchronous, session-based skill of formulating instructions in a chat window. This is the foundation everyone needs to master — but in 2026 it is no longer a differentiator. It is table stakes, like touch typing has been since 1998.
Clear Instructions
From "do something" to "do exactly this, in exactly this way, with exactly this input." Every sentence communicates exactly one thing.
Examples & Counter-examples
Few-shot learning that shows what "good" and what "bad" look like. Counter-examples are more powerful than examples alone.
Explicit Guardrails
What should the model NOT do? Most prompts fail here — they state what they want but not what they exclude.
Explicit Output Format
Never let the model decide what the output looks like. Specify type, columns, sorting, language and tone.
Ambiguity Resolution
Tell the model how to resolve conflicts BEFORE they occur. No room for guesswork when data or priority conflicts arise.
When Prompt Craft Fails
Prompt Craft fails when a task outlasts a single session, when the agent must work autonomously, or when you keep making the same corrections. Then you need Context Engineering.
Context Engineering
Your prompt is 200 tokens. The context window is 1 million. Your prompt is 0.02% of what the model sees. The other 99.98% — that is context engineering.
"People who are 10x more effective with AI don't write 10x better prompts. They build 10x better context infrastructure."
System Prompts
The invisible foundation. Role, behavioural rules, output preferences and domain knowledge that is always relevant — loaded BEFORE your prompt.
Rate: €225/hour.
Communicate in English.
All code: production-ready.
Tool Definitions (MCP)
What tools does the agent have? The quality of tool descriptions determines how well the agent selects tools. A poorly described tool is a poorly used tool.
EXTENDED: github, postgres, slack
Tool description = context
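Because the tool description is what the agent actually reads when choosing tools, it pays to write it like context. A minimal sketch (plain Python, not the MCP SDK; the tool name `query_invoices` is a hypothetical example):

```python
# Sketch of a tool definition whose descriptions do double duty as context:
# they say when to use the tool AND when not to.
def make_tool_definition() -> dict:
    return {
        "name": "query_invoices",  # hypothetical tool name
        "description": (
            "Read-only SQL queries against the invoices table. "
            "Use for revenue questions; do NOT use for customer PII lookups."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "sql": {
                    "type": "string",
                    "description": "A single SELECT statement. No INSERT, UPDATE or DELETE.",
                },
            },
            "required": ["sql"],
        },
    }

tool = make_tool_definition()
```

Note how the guardrails from Prompt Craft (what NOT to do) reappear here, one layer up.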
RAG Documents
Dynamically loaded documents. LLMs degrade with more information — the point is loading RELEVANT tokens, not all tokens.
Metadata enrichment
Re-ranking after retrieval
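The "retrieve broadly, then re-rank" idea can be sketched in a few lines. The scoring here is a toy keyword overlap standing in for a real cross-encoder re-ranker; the documents are illustrative:

```python
# Toy sketch of re-ranking after retrieval: keep only the most RELEVANT
# tokens by scoring each candidate against the query and truncating.
def rerank(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())

    def overlap(doc: str) -> int:
        # Stand-in relevance score: shared terms with the query.
        return len(q_terms & set(doc.lower().split()))

    return sorted(docs, key=overlap, reverse=True)[:top_k]

docs = ["pricing policy 2025", "holiday schedule", "invoice pricing rules"]
print(rerank("pricing rules", docs))  # → ['invoice pricing rules', 'pricing policy 2025']
```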
Conversation History
During long sessions, earlier instructions fade. Solution: periodically reaffirm, inject summaries, build state checkpoints.
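A state checkpoint can be as simple as collapsing older turns into an injected summary. This sketch uses a naive string join where a real system would call a summarisation model:

```python
# Sketch: every `every` turns, fold older history into one summary line and
# keep only the most recent turns verbatim, so early instructions don't fade.
def checkpoint(history: list[str], every: int = 4) -> list[str]:
    if len(history) < every:
        return history
    summary = "SUMMARY: " + " | ".join(history[:-2])  # naive stand-in summary
    return [summary] + history[-2:]
```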
Memory Systems
Persistent memory across sessions. From session memory to organisational memory. Memory must be earned — only store confirmed patterns.
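"Memory must be earned" can be made concrete with a promotion gate: an observation only becomes persistent after it has been confirmed several times. A minimal sketch (the threshold of 3 is an illustrative choice):

```python
from collections import Counter

class EarnedMemory:
    """Only store confirmed patterns: promote after `threshold` observations."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.candidates: Counter = Counter()
        self.confirmed: set = set()

    def observe(self, pattern: str) -> bool:
        self.candidates[pattern] += 1
        if self.candidates[pattern] >= self.threshold:
            self.confirmed.add(pattern)  # earned: persists across sessions
        return pattern in self.confirmed

mem = EarnedMemory()
mem.observe("client prefers bullet summaries")
mem.observe("client prefers bullet summaries")
print(mem.observe("client prefers bullet summaries"))  # → True
```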
MCP as Context Bridge
MCP servers are not just tools — they are live context sources. Filesystem = project understanding. Git = development history.
Intent Engineering
Context engineering tells agents what they need to know. Intent engineering tells agents what they should want. You can have perfect context and terrible intent alignment — and the agent will optimise for the wrong goal.
Warning: the Klarity effect
Klarity's AI agent resolved 2.3 million customer queries in its first month. It optimised for speed instead of customer satisfaction. Result: massive reputational damage and the forced rehiring of human agents. An agent that quickly optimises for the wrong thing is worse than a slow agent that optimises for the right thing.
Goal Hierarchy
Not all goals are equal. The agent must know what takes priority when goals conflict.
1. Never violate compliance
2. Quality ≥ 85/100
3. Meet timeline
4. Maximise efficiency
ON CONFLICT: higher always wins.
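The hierarchy above can be encoded as an ordered list, with conflict resolution reduced to a lookup. A sketch, assuming compliance sits at the top (the goal strings are illustrative):

```python
# Sketch: the goal hierarchy as an ordered list. On conflict, the goal
# earliest in the list (highest priority) always wins.
GOALS = [
    "compliance",            # assumed top priority
    "quality >= 85",
    "meet timeline",
    "maximise efficiency",
]

def resolve(conflicting: list[str]) -> str:
    # Per "ON CONFLICT: higher always wins", pick the earliest goal.
    return min(conflicting, key=GOALS.index)

print(resolve(["meet timeline", "quality >= 85"]))  # → 'quality >= 85'
```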
Values Encoding
Non-negotiable principles that go beyond goals to identity.
- Always back claims with data
- Admit when you don't know
- Customer data = own data
- Quality over quantity
Trade-off Frameworks
What happens when there is no clearly right answer? The real test of intent engineering.
- Internal: 80% quality, speed priority
- Client: 95% quality minimum
- Compliance: 100%, no concessions
Escalation Triggers
The boundary of autonomy. What should the agent NOT decide on its own?
- Cost > €1,000
- Client communication outside template
- Production data changes
- Ethical concerns
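The triggers above translate directly into a pre-action check. A sketch; the field names and the shape of `action` are hypothetical, while the €1,000 threshold follows the list:

```python
# Sketch: the boundary of autonomy as an explicit predicate the agent runs
# before acting. Any True means: stop and ask a human.
def must_escalate(action: dict) -> bool:
    return (
        action.get("cost_eur", 0) > 1000
        or action.get("client_message_off_template", False)
        or action.get("touches_production_data", False)
        or action.get("ethical_concern", False)
    )

print(must_escalate({"cost_eur": 1500}))  # → True
print(must_escalate({"cost_eur": 200}))   # → False
```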
Specification Engineering
The practice of writing documents that autonomous agents can execute over extended periods without human intervention. This is not simply "writing a good briefing" — it is a fundamental shift in how you think about all information in your organisation.
Your business strategy is a specification. Your product roadmap is a specification. Your OKRs are specifications. Everything you write is potentially input for an agent.
Self-Contained Problem Statements
Can the agent solve the task without retrieving additional information? The discipline of self-containment forces you to surface hidden assumptions and articulate constraints that normally remain implicit.
Bad: "Add the Q3 numbers to the revenue chart."
Good:
- DB: Supabase invoices table
- Filter: Jul-Sep 2025
- Output: new bar, Teal #00C9A7
- DO NOT modify: existing Q1/Q2 data
Acceptance Criteria
If you cannot describe what "done" looks like, the agent cannot know when to stop. For every task, write three sentences that an independent observer can verify without asking you anything.
Visual: mobile-first + English error messages
Technical: CSRF + XSS sanitization + server actions only
Verification: independently verifiable
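"Independently verifiable" means the criteria can be expressed as checks that run without you in the room. A sketch; the criteria names and the shape of `deliverable` are hypothetical examples:

```python
# Sketch: acceptance criteria as executable checks. "Done" is not an opinion
# but the conjunction of verifiable predicates.
CRITERIA = {
    "mobile_first": lambda d: d["min_viewport_px"] <= 390,
    "english_errors": lambda d: d["error_language"] == "en",
    "server_actions_only": lambda d: d["client_mutations"] == 0,
}

def is_done(deliverable: dict) -> bool:
    return all(check(deliverable) for check in CRITERIA.values())

deliverable = {"min_viewport_px": 375, "error_language": "en", "client_mutations": 0}
print(is_done(deliverable))  # → True
```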
Constraint Architecture
Four categories that transform a loose spec into a reliable spec: what the agent must do, must not do, should prefer, and must escalate.
MUSTS
Required
MUST-NOTS
Forbidden
PREFERS
Preferred
ESCALATE
Ask human
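The four categories fit naturally into one structure that an agent, or a human reviewer, can check a plan against. A sketch with illustrative entries:

```python
# Sketch: constraint architecture as a single explicit structure.
from dataclasses import dataclass, field

@dataclass
class ConstraintSet:
    musts: list = field(default_factory=list)      # required
    must_nots: list = field(default_factory=list)  # forbidden
    prefers: list = field(default_factory=list)    # preferred, not binding
    escalate: list = field(default_factory=list)   # ask a human first

spec = ConstraintSet(
    musts=["write tests for every endpoint"],
    must_nots=["modify existing Q1/Q2 data"],
    prefers=["server actions over client fetches"],
    escalate=["any schema migration"],
)
```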
Decomposition
Break large tasks into components that can be independently executed, tested and integrated. Each component takes less than 2 hours with clear input/output boundaries. You don't need to write all subtasks yourself — you describe the decomposition patterns so a planner agent can break it down reliably.
Evaluation Design
How do you know the output is good? Not "looks reasonable" but measurably, consistently, provably good. Build 3-5 test cases with known good outputs. Run them after every model update. This is the only boundary between AI output you can use and AI output that is guesswork.
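The 3-5 test cases with known good outputs amount to a tiny eval harness. A sketch in which `agent` is a trivial stand-in for a real agent call and the cases are illustrative:

```python
# Sketch of an eval harness: fixed cases with known good outputs,
# re-run after every model update.
def agent(task: str) -> str:
    return task.strip().lower()  # placeholder "model"

EVAL_CASES = [
    ("  Summarise Q3  ", "summarise q3"),
    ("List risks", "list risks"),
    ("Draft email", "draft email"),
]

def run_evals() -> float:
    passed = sum(agent(inp) == expected for inp, expected in EVAL_CASES)
    return passed / len(EVAL_CASES)

print(run_evals())  # → 1.0
```

Anything below 1.0 after a model update is your signal to fix the spec before trusting new output.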
Same Model. Same Tuesday.
10x Difference.
2025 Prompting Skills
- 01 Types a request for a PowerPoint
- 02 Gets 80% correct output back
- 03 Spends 40 minutes on cleanup
- 04 Happy: saved 2 hours on 1 deck
Output: 1 deck per morning
2026 Prompting Skills
- 01 Writes a specification in 11 minutes
- 02 Hands it to an agent as an autonomous task
- 03 Grabs coffee, returns to a 100% result
- 04 Completes 5 decks before lunch
Output: 1 week's work per morning
7 Core Principles
The prompt is dead. The specification lives. Chat prompting is table stakes. The value lies in specifications that agents execute autonomously.
Each layer enables the next. No good specs without good intent. No good intent without good context. No good context without good prompts.
Errors become more expensive higher in the stack. A bad prompt costs a morning. Bad intent engineering costs a client relationship.
The entire business is a specification. Every document, every procedure, every decision framework is input for agents. Make it agent-readable.
Speed is the multiplier. Agents become more capable every month. The ROI of these skills rises exponentially.
Self-containment is the master test. If you cannot describe a task without someone needing to ask you a question, you do not understand the task well enough.
Evaluation is the only boundary between usable and unusable. Without eval, every agent output is a gamble.
Ready to Master the
4 Disciplines?
AetherLink offers hands-on workshops, consulting programmes and the complete 16-week mastery programme. From prompt craft to specification engineering — with concrete cases from your own organisation.