You finished the work. Is it good? You ask three people and get three different answers. The question was wrong. It's not "is it good." It's "good according to whom, measured by what."
Every practitioner carries a set of references, whether they've named them or not. The designer who studied under Vignelli evaluates differently than the one who grew up on grunge typography. The filmmaker who learned from Kurosawa sees composition differently than the one who learned from Wong Kar-wai. These aren't preferences. They're evaluative frameworks. And until you extract and name them, you can't use them deliberately.
When you ask AI to evaluate your work, it evaluates against its training distribution: the average of everything it has read. That average is nobody's standard. It belongs to no practitioner, no tradition, no compositional value system.
Lens Extraction fixes this. You identify the practitioners whose judgment you trust. You extract what each one would look for. You build a stack of named lenses and run your work through each one independently. The AI stops giving you generic feedback and starts evaluating against the specific standards you declared. The evaluation is yours because the lenses are yours.
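If you want the mechanics, here is a minimal sketch in Python. The `ask_model` wrapper, the lens names, and the criteria are all illustrative assumptions, not the course's extractions; what the sketch shows is only the shape of the loop: each lens runs as its own independent evaluation, never merged into one rubric.

```python
# Minimal sketch of the core loop. `ask_model` is a hypothetical
# wrapper around whatever model you use; the lenses and criteria
# below are illustrative stand-ins, not the course's extractions.
LENSES = {
    "Vignelli": [
        "Could any element be removed without losing meaning?",
        "Does every decision follow a stated system?",
    ],
    "Victore": [
        "Does the work have a pulse, a point of view?",
        "Would you recognize the maker from the work alone?",
    ],
}

def evaluate(work: str, ask_model) -> dict[str, str]:
    verdicts = {}
    for name, criteria in LENSES.items():
        # One call per lens: no lens ever sees another's criteria,
        # so nothing averages back into a generic rubric.
        prompt = (
            f"Evaluate the work below only as {name} would, "
            "against only these criteria:\n"
            + "\n".join(f"- {c}" for c in criteria)
            + f"\n\nWork:\n{work}\n\n"
            "Give a verdict per criterion, with one line of reasoning."
        )
        verdicts[name] = ask_model(prompt)
    return verdicts
```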
A working lens array for your own practice. You'll identify three to five practitioners whose work shaped your standards, extract what each one would evaluate, and build a reusable evaluation stack. By the end, you'll have a system that answers "is this good?" with "good according to these specific people, measured by these specific things." You'll use it on every project after.
Deliverable: A lens array of 3-5 named evaluative lenses, tested against one real project, with documented evaluation results.
Case study
You have practitioners whose judgment you trust. You recognize when something meets their standard, even if you've never written down what that standard is. This week you study three lenses extracted from real practitioners: Vignelli (restraint, economy, systematic limitation), Victore (fierce presence, personality, the courage of your own identity), and Millman (specificity, a real person with real stakes). You see how each one evaluates the same piece of work and arrives at a different verdict. The same site that scores high on structural discipline scores low on personality. Same work, different lenses, different readings.
You learn the extraction protocol: study the practitioner's output, identify what they consistently demand, codify it as testable criteria, validate it against their own work. By the end of the week you can read an evaluative framework underneath a body of work you admire, not just recognize the surface decisions.
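One way to hold an extraction once you have it is to codify the lens as data: the patterns in plain language, each one rewritten as a testable question, plus the body of work you validated the questions against. A hedged sketch, with illustrative criteria rather than the course's actual Vignelli extraction:

```python
from dataclasses import dataclass, field

@dataclass
class Lens:
    practitioner: str
    demands: list[str]       # patterns they never break, in plain language
    criteria: list[str]      # each demand rewritten as a testable question
    validated_against: list[str] = field(default_factory=list)  # their own work

# Illustrative extraction; real criteria come from studying the output.
vignelli = Lens(
    practitioner="Massimo Vignelli",
    demands=["restraint", "economy", "systematic limitation"],
    criteria=[
        "Could any element be removed without losing meaning?",
        "Does the piece use the fewest typefaces, sizes, and colors it can?",
        "Does every decision follow from a stated system, not taste in the moment?",
    ],
    validated_against=["NYC subway map", "Knoll identity"],
)
```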
Deliverable: Three case study analyses showing the same work evaluated under three different extracted lenses, with documented convergence and divergence.
Studio / Workshop
Now you build your own. You identify three to five practitioners whose judgment shaped how you evaluate work. Not who you admire most. Who you actually measure against when you're deciding whether something is done. You extract each one's evaluative framework using the protocol from Week 1: study their output, find the patterns they never break, codify those patterns as criteria specific enough to test.
The lens array is not an analysis exercise. It is a declaration. When you select Vignelli and Victore, you are saying: this work must hold both restraint and fierce presence simultaneously. When you add Millman, you are saying: and a stranger must believe a real person made this. The combination of lenses you choose defines what your work is designed to be. "Designed to be," not "designed for." This is the first act of authorship on the raw material that input inversion produced.
Deliverable: A lens array of 3-5 extracted evaluative frameworks, each with named practitioner, extracted criteria, and validation against the practitioner's own work.
Studio / Workshop
You run your lens array against your own real project. Each lens evaluates independently, one dimension at a time. Where the lenses agree, strong signal: the work is holding to your declared values. Where they disagree, a decision point that only you can resolve. Vignelli says remove it. Victore says it's the only part with personality. That disagreement is not a problem. It's where the real work happens.
You map convergence and divergence across your lenses. You resolve tensions. Over time, how you resolve those tensions (consistently choosing personality over restraint in headlines, restraint over personality in navigation) is what makes the work yours. A persistent disagreement that won't resolve across multiple passes signals a values conflict in the array itself: either a lens was chosen carelessly, or the project has a contradiction that needs to surface before the work continues.
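The map itself can be simple: per-lens verdicts on the same dimensions, reduced to agree/disagree flags. A sketch with invented scores (the PASS/WEAK labels and dimension names are placeholders, not results from the case study):

```python
# Per-lens verdicts on the same dimensions; values are invented
# for illustration.
verdicts = {
    "Vignelli": {"headline": "WEAK", "grid": "PASS", "navigation": "PASS"},
    "Victore":  {"headline": "PASS", "grid": "PASS", "navigation": "WEAK"},
    "Millman":  {"headline": "PASS", "grid": "PASS", "navigation": "PASS"},
}

dimensions = sorted({d for scores in verdicts.values() for d in scores})
for dim in dimensions:
    votes = {lens: scores[dim] for lens, scores in verdicts.items()}
    status = "CONVERGE" if len(set(votes.values())) == 1 else "DIVERGE"
    # Divergence rows are the decision points: you resolve them, not the model.
    print(f"{dim:<12} {status:<10} {votes}")
```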
Deliverable: A working lens array applied to one real project, with extraction documentation, convergence/divergence map, and resolution notes showing where your judgment shaped the outcome.
Same site. Two evaluation approaches.
A portfolio site evaluated once, as a whole. Then the same site evaluated through independent lenses, each one measuring a different dimension. The first approach produced a grade. The second produced a decision.
"The site is rich, timely, and ambitious, with significant potential for originality through careful execution." Strengths, Challenges, Opportunities, Recommendations. Four sections, fourteen bullet points. A single perspective applying a generic rubric across every dimension simultaneously.
Structural lenses scored seven out of nine criteria at the top tier. The grid was sound. The typography was clean. The spacing held. The narrative lens ("does this feel like a world? would a stranger know who this person is?") scored WEAK. Same site. Two layers. Opposite readings. Both correct.
This course uses studio/workshop and case study methods. Lens extraction is a compositional act: you're declaring what your work is designed to be. That declaration can't be lectured. It has to be workshopped. You study real lens arrays built by real practitioners, then build your own in a studio environment where you can test, revise, and compare. The case study component shows how different lens stacks produce different evaluations of the same work, making the relationship between values and judgment concrete.
This course applies the research published in the Lens Extraction whitepaper: decomposing "is this good?" into independent evaluations drawn from real practitioners.
This course is in development. $499 per course. Payment plans available on all courses. Foundations (Course 01) is the prerequisite. See the full curriculum.