Course 04 · Prerequisite: Foundations

Voice Governance

Self-paced · 3 weeks · $499 · Pedagogy: Apprenticeship to Independent Practice
Waitlist Open

The situation you recognize

Three drafts in and you can't hear yourself anymore. The AI smoothed your rhythm into something competent and generic. Each draft was only slightly off. The cumulative drift is real and quiet.

You read the output and it sounds fine. It sounds like anyone. The sentence lengths are even. The word choices are safe. The rhythm is metronomic. Your voice had short bursts, long setups, specific hesitations. That texture is gone, and you didn't notice it leaving because the AI replaced it with something competent enough to pass.

What this means for AI work

All AI copy sounds the same because voice constraints are applied after generation, not during. The usual workflow: generate, then edit to sound like you. But the AI's defaults already occupy the space where your voice should have been. Editing after the fact is cosmetic. The structure of the sentences, the rhythm, the priorities are already set.

Voice Governance reverses this. You extract how you actually talk (from conversation, not published writing), codify it as generation rules, and apply those rules before the first word is produced. The output sounds like you because the constraints occupied the space where the AI's defaults would have gone. The difference is structural, not editorial.

What you'll build

A voice protocol extracted from your own speech patterns. You'll record yourself talking about your work, analyze the patterns (sentence length, word choice, rhythm, what you avoid, what you repeat), codify those patterns as generation rules, and test the protocol against real output. The deliverable is a reusable document that governs AI output to sound like you, not a style guide you refer to after the fact.

Deliverable: A voice protocol extracted from your own speech patterns, tested against real output, with before/after comparison.

Week 1

Voice Sampling

Apprenticeship: watch the process

Published writing is performance. You've already edited it, shaped it, filtered it for an audience. Training an AI on your published voice produces performed output. The real voice lives in conversation: how you explain something to a colleague, how you argue with yourself in a voice note, how you talk when you're not performing.

This week demonstrates the full extraction pipeline. You watch how conversation transcripts become source material: where to look, what counts, why a Slack message to a trusted peer contains more of your actual voice than a polished case study. Then you collect your own source material. Conversation exports, voice memos, raw thinking sessions, anything where you were working out an idea instead of presenting one.

This week is apprenticeship because the process is technical and sequential. Seeing it done first prevents the most common mistake: sampling the wrong source. If you start by analyzing your blog posts, you'll extract the voice you perform, not the voice you have.

Deliverable: A curated corpus of your own conversational source material, with notes on which samples carry the strongest signal.

Week 2

Rule Extraction and Protocol Building

Guided practice

You analyze your conversation samples for patterns. Sentence rhythm. Opening moves. Vocabulary you reach for without thinking. What you never say. How you build from specific to general, or general to specific, or whether you stay concrete the entire time. These are observations, not opinions about your voice.
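To make the analysis concrete, here is a minimal sketch of what that pattern pass can look like in code. It assumes your Week 1 samples are plain-text transcript files in a folder; the path, the crude sentence splitter, and the statistics chosen are illustrative, not the course's tooling.

```python
# A minimal sketch of the pattern-analysis step. Assumes conversational
# samples live as plain-text files; paths and metrics are illustrative.
import re
import statistics
from collections import Counter
from pathlib import Path

def sentences(text: str) -> list[str]:
    # Crude split on terminal punctuation; good enough to surface rhythm.
    return [s.strip() for s in re.split(r"[.!?]+\s+", text) if s.strip()]

def analyze(corpus_dir: str) -> None:
    sents: list[str] = []
    for path in Path(corpus_dir).glob("*.txt"):
        sents.extend(sentences(path.read_text(encoding="utf-8")))

    lengths = [len(s.split()) for s in sents]
    openers = Counter(s.split()[0].lower() for s in sents if s.split())

    # Rhythm: mean length plus spread. A low stdev is the metronomic
    # evenness the course warns about; real speech varies more.
    print(f"sentences: {len(sents)}")
    print(f"mean length: {statistics.mean(lengths):.1f} words")
    print(f"length stdev: {statistics.stdev(lengths):.1f}")
    print(f"shortest burst: {min(lengths)} words, longest setup: {max(lengths)}")
    print("top opening moves:", openers.most_common(5))

if __name__ == "__main__":
    analyze("corpus/")  # directory of your Week 1 samples (hypothetical path)
```

Numbers like these are the observations the next step turns into rules: a high length spread is something to preserve, a repeated opener is something to either keep or prohibit.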

Then you codify those patterns as generation constraints. Not style descriptions ("write with warmth and directness") but testable rules ("zero em dashes," "lead with the specific moment, not the concept," "no fortune-cookie closers"). The rules tell the model what it cannot do. That is what forces it to find another way to build the paragraph. A style description gives the model a target to interpret. A constraint removes the path the model would have taken by default.

You build a voice protocol: hard rules (zero tolerance, applied during generation) and register definition (testable criteria for whether a sentence sounds like you). The protocol is a document. It loads before the AI writes anything.
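As a sketch, here is what that document can look like in machine-readable form, assuming a JSON file and a generic chat-style API. The rule text is drawn from the examples above; the field names, the file path, and the loading step are illustrative, not a prescribed format.

```python
# A sketch of a voice protocol as a document: hard rules plus register
# criteria. Field names and example rules are illustrative.
import json
from pathlib import Path

protocol = {
    "hard_rules": [  # zero tolerance, applied during generation
        "Zero em dashes.",
        "Lead with the specific moment, not the concept.",
        "No fortune-cookie closers.",
        "No negation-affirmation patterns ('Not X. Y.').",
    ],
    "register": [  # testable criteria for whether a sentence sounds like you
        "First person throughout.",
        "Practitioner vocabulary, no corporate abstractions.",
        "Sentence lengths vary: short bursts next to long setups.",
    ],
}

# The protocol is a document: it lives in a file, not in your head.
Path("voice_protocol.json").write_text(json.dumps(protocol, indent=2))

def load_protocol(doc: dict) -> str:
    # Renders the protocol into the system message, so the constraints
    # are in place before the first token rather than applied as edits.
    hard = "\n".join(f"- {r}" for r in doc["hard_rules"])
    register = "\n".join(f"- {r}" for r in doc["register"])
    return (
        "Voice protocol. These constraints govern generation.\n"
        f"Hard rules (zero tolerance):\n{hard}\n"
        f"Register (every sentence must pass):\n{register}"
    )

system_prompt = load_protocol(protocol)
# messages = [{"role": "system", "content": system_prompt},
#             {"role": "user", "content": "Draft the opening section."}]
```

The point of the shape is the ordering: the protocol renders into the system message, so the constraints occupy the space before the request does.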

This week is guided practice because converting observations into operational rules is the hardest step. You know your voice when you hear it, but you have never had to describe it as a set of prohibitions. The guidance gives you the vocabulary and the structure. You do the extraction.

Deliverable: A voice protocol with hard rules and register definition, extracted from your own speech patterns.

Week 3

Constrained Generation and Verification

Independent practice

You apply your voice protocol during generation. Before the first word, not after. The protocol loads, the constraints are active, and the model builds the paragraph around your rules from the first token. Then you compare: the same content, the same model, one version unconstrained and one version governed by your protocol. The structural difference is visible immediately. The unconstrained version opens with abstraction, falls into mechanical parallelism, closes with a fortune-cookie sentence. The constrained version leads with the specific, varies its rhythm, and ends where the thought actually ends.

You iterate. The first pass catches the obvious failures. The second pass catches the patterns you didn't know to prohibit: a phrase the AI keeps reaching for, a sentence structure that recurs at paragraph boundaries, a rhythm that flattens in the middle of longer sections. Each failure mode becomes a new rule. Each rule makes the next generation closer to your actual voice.
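The first pass can be partly automated. Here is a minimal sketch, assuming each hard rule can be expressed as a pattern you search the draft for; the three patterns shown are examples, not a complete rule set, and a new failure mode becomes one more entry in the table.

```python
# A sketch of the verification pass. Each hard rule is a regex, so each
# failure mode you notice can be appended as one more pattern.
import re

HARD_RULE_PATTERNS = {
    "em dash": r"\u2014",
    "negation-affirmation": r"\bNot\b[^.]*\.\s+[A-Z]",
    "fortune-cookie closer": r"(?i)\b(at the end of the day|the future is)\b",
}

def check(draft: str) -> list[str]:
    # Returns the hard rules the draft violates. An empty list means the
    # draft passed this round; it does not mean the voice is right yet.
    return [name for name, pat in HARD_RULE_PATTERNS.items()
            if re.search(pat, draft)]

unconstrained = "Not a tool. A partner\u2014and at the end of the day, it learns."
print(check(unconstrained))
# ['em dash', 'negation-affirmation', 'fortune-cookie closer']
```

Automation only catches the prohibitions. The register criteria, whether a sentence actually sounds like you, still needs your ear.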

This week is independent practice because the protocol is yours now. You know how to identify failure modes, convert them to constraints, and retest. The skill transfers to any writing context where you need AI output that sounds like a specific person wrote it.

Deliverable: A working voice protocol extracted from your own speech patterns, with before/after evidence showing constrained vs. unconstrained output on the same content.

What the difference looks like

Same content. Same model. One version unconstrained. One version with voice protocol applied before the first word.

A whitepaper opening. The first draft was produced through conventional AI-assisted writing. The second was produced with voice constraints active during generation: zero em dashes, zero banned words, no fortune-cookie closers, no negation-affirmation, practitioner register throughout.

No Voice Protocol

Whitepaper Opening

"The AI governance conversation is organized around a single question." Abstract-concept opening. 27 em dashes. 15+ negation-affirmation patterns ("Not X. Y."). Six subsections with identical four-beat skeletons. Strip the author's name and it could belong to anyone.

Voice Protocol Applied

Whitepaper Opening

Zero em dashes. First person throughout. Each subsection structured differently: one opens with the loss, another with the compound question, another with the degradation pattern. Peter's texture present: the Brooklyn classroom, the version that reproduced the exact loss, "arguing with myself, changing direction mid-sentence."

The unconstrained version describes. The constrained version renders. The difference is structural: the paragraphs were built around different bones. Post-hoc filtering changes the words. Generation constraints change the shape of the sentence from the first token. The author's reaction after reading the rewrite: "this tone feels perfect."

Why this teaching method

This course uses an apprenticeship-to-independent-practice model. Voice extraction is deeply personal work. You can't extract your voice from a template. The course begins with close guidance: you study an existing voice protocol, observe how it was built, and work alongside a demonstrated process. Then you build your own, with decreasing support at each stage. By the end, you're operating independently. The apprenticeship model works here because the skill is tacit: you know your voice when you hear it, but you've never had to describe it as rules. The guided phase gives you the vocabulary and the method. The independent phase ensures the result is actually yours.

Research

This course applies the research published in the Voice Governance whitepaper: why constraining AI output during generation produces structurally different work than editing it afterward.

This course is in development. $499 per course. Payment plans available on all courses. Foundations (Course 01) is the prerequisite. See the full curriculum.

Join the waitlist for this course.

$499. Requires Foundations. Payment plans available.