The 15-year-old prodigy: Managing AI so it actually delivers

When my former supervisor first tried Tesla’s Full Self-Driving (FSD), he said it reminded him of teaching his daughter to drive at fifteen — careful, deliberate, and prone to unexpected errors. Years later, I’ve found myself feeling the same with Generative AI. It’s powerful, even breathtaking, in the right conditions. But left unguided, it will wander. The lesson is the same in both cases: capability without context produces unpredictable outcomes. And context must come from you.

The causal mechanism: Capability without judgment

AI can execute at extraordinary speed, but it lacks the accumulated judgment we take for granted in experienced people. It doesn’t “know” which trade-offs you prefer unless you specify them. It doesn’t “notice” that the audience is executive or that the tone should be restrained. When context is missing, the system fills the gaps with reasonable-sounding but sometimes wrong assumptions.

The predictable result: if your intent is fuzzy, the output drifts. If your criteria are crisp, the output tightens. That’s not temperament; it’s causality.

Naming the pattern: The Novice Prodigy Effect

Think of modern AI as a novice prodigy: immense raw capability, little lived experience. It will ace the path you mark and hesitate or blunder at unmarked intersections. It thrives when the game is well-defined, and it struggles when the rules sit silently in your head. Your job is not to cajole it into brilliance; your job is to make the rules legible.

This is why “one idea at a time” works. It reduces ambiguity, forces sequence, and creates testable checkpoints. The prodigy isn’t distracted; it’s under-specified.

From Learner’s Permit to License: DRIVE for AI Mastery

  • Define the outcome: Who is this for? What job must it do for them? What will make it “good enough to ship”?
  • Reduce the scope: One deliverable, one audience, one style. Constrain length, format, palette, and channel.
  • Itemize the steps: If multiple actions are needed, number them and require approvals between steps.
  • Verify in loops: Ask for small samples early. Reject quickly and specifically when off target. Pressure‑test assumptions.
  • Edit to spec: Apply acceptance criteria before calling it done. If it doesn’t meet the spec, it isn’t done.
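The DRIVE loop above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not a real library: the `DriveBrief` container and `build_prompt` function are hypothetical names, and the one-step-at-a-time rendering mirrors the “Itemize” and “Verify” stages.

```python
from dataclasses import dataclass, field

@dataclass
class DriveBrief:
    """Hypothetical container for a DRIVE checklist (illustrative only)."""
    outcome: str                                          # Define: audience and job-to-be-done
    constraints: list[str] = field(default_factory=list)  # Reduce: scope limits
    steps: list[str] = field(default_factory=list)        # Itemize: ordered actions
    acceptance: list[str] = field(default_factory=list)   # Edit: ship criteria

def build_prompt(brief: DriveBrief, step_index: int) -> str:
    """Render exactly one step -- Verify happens between calls, not inside them."""
    lines = [f"Outcome: {brief.outcome}"]
    lines += [f"Constraint: {c}" for c in brief.constraints]
    lines.append(f"Do ONLY step {step_index + 1}: {brief.steps[step_index]}")
    lines.append("Stop and wait for approval before the next step.")
    return "\n".join(lines)

brief = DriveBrief(
    outcome="One-page microsite copy for executives",
    constraints=["restrained tone", "under 300 words"],
    steps=["draft the headline", "draft the body", "draft the call to action"],
    acceptance=["matches tone", "within length"],
)
print(build_prompt(brief, 0))
```

The point of the structure is that the model only ever sees one numbered action at a time, with the outcome and constraints restated on every call.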

A concrete case: Recurring video character (Part 1)

In producing a series of sequences with a recurring character, my intent was consistency—a single persona moving fluidly across scenes. Initially, I gave broad but seemingly clear cues: “wear a light jacket.” The results matched the letter of the prompt but not the spirit of consistency—the jacket was always light-colored, but it shifted in hue and style from sequence to sequence.

The same drift appeared in other attributes: subtle changes in facial features; accessories that migrated or vanished. The character was recognizably similar, but not reliably the same.

Only after I tightened the definition to a strict, exhaustive profile—specifying attributes like age, skin tone, garment color, and the position of accessories (e.g., “wears a smartwatch on the left wrist”)—did the system begin producing sequences that truly carried the same character forward. Small, acceptable variations remained, but the identity was coherent.

Lesson learned: In DRIVE, definition governs continuity. When a creative element must persist, general descriptors invite drift. Only by nailing the non‑negotiables — down to position, proportion, and palette — can you carry a constant through change.
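One way to pin the non-negotiables is to keep the character as a single spec sheet and fold it verbatim into every scene prompt. A minimal sketch, assuming a text-prompted image or video model; the attribute names and values below are examples, not the actual profile used.

```python
# Illustrative character "spec sheet": every persistent attribute is pinned
# explicitly so per-scene prompts cannot drift (attribute names are examples).
CHARACTER = {
    "age": "mid-30s",
    "skin_tone": "medium olive",
    "jacket": "light-grey denim jacket, hip length",
    "accessories": "smartwatch on the LEFT wrist, no other jewelry",
}

def character_clause(spec: dict) -> str:
    """Fold the full spec into one clause appended to every scene prompt."""
    return "Same character in every scene: " + "; ".join(
        f"{key.replace('_', ' ')}: {value}" for key, value in spec.items()
    )

scene = "walking through a rainy market at dusk"
prompt = f"{scene}. {character_clause(CHARACTER)}"
print(prompt)
```

Because the clause is generated from one source of truth, a change to the jacket color happens in exactly one place and propagates to every sequence.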

A concrete case: The making of a micro site app (Part 2)

When a coding assistant was given the entire, finely detailed brief for a microsite in a single request, the output was solid in its skeleton—the architecture stood—but the marrow was missing. Details were diluted; the language, generic.

This is not a failure of capability; it’s a signal that the shape of the request governs the shape of the answer. A broad aperture forces the system to spread its “attention” across competing demands, producing an average of many possible solutions rather than the essence of the best one.

When the same outcome was pursued through DRIVE’s discipline of constraint—breaking the objective into discrete, explicit steps—each step became a focal point. The system could pour its capacity into a single action, and the results, piece by piece, accumulated into a richer, more precise whole.

Lesson learned: In DRIVE, definition isn’t only about clarity of language — it’s also about clarity of scope. Tighten the frame, and you sharpen the picture.

Correction is a feature, not a failure

The fastest learning happens with immediate, specific feedback. Treat AI outputs like driver-instruction: point to the miss, restate the rule, and try again. Vague disappointment teaches nothing; precise rejection teaches everything.
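“Precise rejection” can itself be mechanized: check a draft against named rules and feed back the name of each rule it violates, rather than a vague thumbs-down. A toy sketch; the `review` function and the two rules are invented for illustration.

```python
from typing import Callable

def review(draft: str, rules: list[tuple[str, Callable[[str], bool]]]) -> list[str]:
    """Return the names of the rules the draft fails -- precise rejection."""
    return [name for name, check in rules if not check(draft)]

# Named, testable acceptance rules (examples only).
rules = [
    ("under 50 words", lambda d: len(d.split()) <= 50),
    ("mentions pricing", lambda d: "pricing" in d.lower()),
]

draft = "Our platform scales effortlessly across teams."
misses = review(draft, rules)
feedback = "; ".join(f"Violated rule: {m}" for m in misses) or "Accepted"
print(feedback)
```

The feedback string names the exact rule that was missed, which is the restate-the-rule-and-retry loop in miniature.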

The takeaway

Treat AI like a novice prodigy. Don’t expect instinct; encode it. Don’t hope for judgment; specify it. When you do, the system’s raw capability translates into dependable outcomes. When you don’t, it guesses — and guesses don’t ship. Mastery isn’t about making AI perfect—it’s about making it legible. And that begins with how you lead.

Technologist | Senior Product Manager | Product Strategy | Cyber-Security | Mobile

xAkamai, xArm, xBlackberry, xMotorola | Lead Product Manager