The Wrong Question
Why your prompts keep getting longer and your output keeps getting worse.
You write a prompt. The output is close but flat. So you add detail. You specify tone, structure, audience, format. You add examples. The prompt grows to three paragraphs, then five. The output gets different, but it does not get better. It gets blurry. You add more constraints. The model starts contradicting itself within the same response. You conclude that the tool is not capable enough, and you wait for the next version.
This is the most common experience practitioners have with AI, and almost everyone draws the wrong conclusion from it.
The problem is not the model. The problem is the question you are asking about the model. The entire AI governance conversation, from prompt engineering courses to enterprise deployment frameworks, starts from the same premise: how do we control what the model does? How do we constrain the output to match what we need?
There is a better question. It comes from a discipline that has been solving this exact structural problem for decades, in rooms full of people with wildly different processing profiles, under federal law.
A compound instruction is not one instruction
In a special education classroom, a compound instruction looks like this: "Take out your notebook, turn to page 42, and answer questions three through seven." For a neurotypical student, that is one task. For a student with processing delays, that is three tasks delivered simultaneously, and if you do not break them apart, the student fails before they start.
The failure is not about ability. The student can do all three things. The failure is about input format. The instruction arrived in a shape the receiving system could not process whole.
The fix, codified in federal special education law, is called task decomposition. You break compound instructions into sequential, single-objective steps. You provide each step with clear success criteria. You let the student complete one before introducing the next. The capability was always there. The structure of the input was the barrier.
• • •

Now look at what happens when you give an AI a compound evaluation prompt: "Evaluate this work for voice quality, structural integrity, narrative coherence, and brand alignment." The model processes the first criterion with full attention. Each subsequent criterion gets less. By the fourth, the model is blending dimensions together, producing a vague average that is none of them. Contradictions appear within a single response. The evaluation is mush.
The structural problem is identical: a compound instruction delivered to a system that cannot process it whole. The response most practitioners reach for is the same one that fails in a classroom: repeat the instruction louder. Add more detail. Write a longer prompt. Specify harder.
The fix is identical too. Decompose. One dimension per prompt. One clear objective. One defined output. Run them independently. A coordinator collects the results and maps where they converge (act on it) and where they diverge (the practitioner decides). The model was never incapable. The instruction was shaped wrong for the system receiving it.
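The workflow above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed implementation: `call_model` is a hypothetical stand-in for whatever model API you use, stubbed here with canned verdicts so the sketch runs offline, and the PASS/FAIL output format is one possible choice of "one defined output."

```python
CRITERIA = ["voice quality", "structural integrity",
            "narrative coherence", "brand alignment"]

def single_objective_prompt(work, criterion):
    # One dimension per prompt, one objective, one defined output.
    return (f"Evaluate the following work for {criterion} ONLY.\n"
            f"Answer PASS or FAIL, then give one sentence of reasoning.\n\n"
            f"{work}")

def call_model(prompt, run):
    # Hypothetical stub standing in for a real model call.
    # Canned verdicts simulate two independent runs per criterion.
    canned = {
        "voice quality":        ["PASS", "PASS"],
        "structural integrity": ["PASS", "PASS"],
        "narrative coherence":  ["FAIL", "FAIL"],
        "brand alignment":      ["PASS", "FAIL"],  # runs disagree
    }
    for criterion, verdicts in canned.items():
        if criterion in prompt:
            return verdicts[run]

def evaluate(work, criteria, runs=2):
    # Coordinator: run each single-objective prompt independently,
    # then map where the runs converge (act on it) and where they
    # diverge (escalate to the practitioner).
    report = {"converged": {}, "diverged": {}}
    for criterion in criteria:
        prompt = single_objective_prompt(work, criterion)
        verdicts = [call_model(prompt, r) for r in range(runs)]
        bucket = "converged" if len(set(verdicts)) == 1 else "diverged"
        report[bucket][criterion] = verdicts
    return report

report = evaluate("draft paragraph goes here", CRITERIA)
# "brand alignment" lands in report["diverged"]: the practitioner decides.
```

The point of the sketch is the shape, not the stub: each criterion gets the model's full attention in its own call, and the coordinator's only job is sorting agreement from disagreement.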
Accommodation, not constraint
Special education calls this accommodation. You design the input to match the processing reality of the receiving system. You do not ask how to control what the student does. You ask: what does this student actually need to do this work well? The question changes what you see. It changes what you build. And it changes the results.
The same question, applied to AI, changes everything about how you work with it. Instead of asking how to constrain the model's output, you ask what the model actually needs: decomposed tasks, single objectives, clear criteria, independent evaluation. You design the input to match the processing architecture.
This reframe is the foundation of Accommodation Design. It is not a technique. It is a question you learn to ask first, before any prompt, before any workflow, before any system design. What does the system receiving this work actually need to do the job well?
Both systems in the room
The reframe goes further than the model. In any AI-mediated workflow, there are two systems in the room. The model has processing realities: token limits, context degradation, attention distribution. The human has processing realities too: limited working memory, cognitive fatigue, the gap between having an idea and articulating it clearly.
Most AI workflows accommodate neither. The model gets compound instructions it cannot process whole. The human gets asked to produce carefully structured input at the exact moment their thinking is least structured, when the idea is raw and alive and has not been organized yet.
Accommodation Design works both directions. Structure the model's input so it matches how the model processes. Remove friction from the human's input so raw thinking enters the system without losing fidelity. The tools handle the translation. The model gets what it needs. The practitioner gets to think like a practitioner, not like a prompt engineer.
Try this today
Take one prompt you use regularly. One that produces output that is close but never quite right. Ask yourself: is this actually one task, or is it several tasks wearing a single prompt?
Break it apart. One objective per prompt. Run each one independently. Compare what you get to what you were getting before.
The difference will not be subtle.
Accommodation Design is a six-domain methodology.
Task decomposition is the starting point. The full framework covers input architecture, cognitive extension, semantic preservation, voice governance, and evaluation design. Courses and workshops are in development.