Tag: ai

  • Abby Normal: Is Your AI Watching Its Own Finger Burn?

    If I were forced to pick a Mount Rushmore of directors, Mel Brooks is getting a spot. No question. For many, myself included, Young Frankenstein is his crowning achievement.

    It’s one of those movies that has just burrowed into our collective cultural attic. It’s hard to look at a rotating bookcase without thinking, “Put the candle back!” Or hear a horse whinny without whispering “Frau Blücher.” And you certainly can’t talk about brains without bringing up Marty Feldman’s “Abby Normal” scene.

    But there’s one specific scene that popped out of the fragged bits of my memory recently while I was reading the coverage from NVIDIA’s and Akamai’s latest grid announcements. Admittedly, my brain goes places and makes connections others, luckily, don’t.

    It’s the scene where Peter Boyle — the movie’s Monster — wanders into the cottage of a lonely, blind hermit, played by a nearly unrecognizable Gene Hackman. The hermit, desperate for a friend, offers to light the Monster’s cigar. But because he can’t see, he misses the cigar entirely and holds the flame directly to the Monster’s index finger.

    What follows is pure comedic magic: The Monster stares at his flaming finger with a dull, quiet curiosity. He doesn’t scream. He doesn’t pull away. There is this agonizing, three-second delay while the signal travels from the hand to the head. Only then does he let out a harrowing roar and crash through the wall.

    “Wait!” Hackman shouts after him. “Where are you going? I was going to make espresso!”

    It’s a hilarious bit. But in 2026, it’s also the perfect metaphor for the biggest bottleneck in AI.

    The Genius in the Jar

    Today’s AI is a lot like that Monster. We’ve built these distinguished brains — massive, multi-trillion parameter models that can pass the Bar exam and write poetry — but we’ve kept them in jars. Specifically, jars located in massive, centralized data centers.

    This is what the industry calls centralized architecture. It’s where training happens: the massive, heavy-lift process of teaching the monster how to “put on the Ritz.”

    But there’s a big difference between a brain being smart and a brain being present. In the lab, a two-second delay is a rounding error. In the real world — the world of self-driving cars, robotic surgery, or real-time fraud detection — that delay is a burning finger.

    A Quick ELI5 Science Lesson

    To understand why this matters, we have to look at our own biology. You have a central nervous system (the brain) and a peripheral nervous system (the nerves).

    • The Brain handles the big stuff: Reasoning, planning, and debating which Mel Brooks movie is actually the best (good luck with that)
    • The Nerves handle the reflexes.

    If you touch a hot stove (or a blind man’s lighter), your spinal cord makes the decision to pull your hand away before your brain even registers the heat. That’s a reflex arc, and it works alongside proprioception, the sense of where your body is in space and how it’s reacting to the world.

    Modern AI has a massive IQ, but it has zero proprioception. It’s a genius behind a glass wall. It doesn’t feel the fire until the data is packaged, shipped 2,000 miles to the Jar, processed, and mailed back. By then, the hermit has already lit your finger and moved on to the espresso.

    Closing the Circuit

    The future of AI, and what we’re seeing in these NVIDIA and Akamai grid announcements, isn’t about making the brain bigger. We need the reasoning to stay in the cloud (the brain), but we need the reflex to live at the edge (the nerves). The goal is a continuum where the AI can feel the environment in the neighborhood where it’s happening, reacting in milliseconds rather than seconds.

    The missing piece of this puzzle hasn’t been the hardware; NVIDIA’s GPUs have that more than covered. It’s the orchestration. How do you decide what needs a reflex and what needs reasoning? How do you make the brain and the nerves act as one integrated being?
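    To make the reflex-versus-reasoning question concrete, here is a toy sketch of one way an orchestrator might route events. Everything in it is hypothetical and invented for illustration: the names, the latency threshold, and the two-tier split are not from any NVIDIA or Akamai announcement.

    ```python
    from dataclasses import dataclass

    # Hypothetical illustration only: a toy dispatcher that routes each
    # event to an edge "reflex" model or a cloud "reasoning" model based
    # on how quickly a response is needed. Names and thresholds invented.

    EDGE_LATENCY_BUDGET_MS = 50  # reflexes must fire in milliseconds

    @dataclass
    class Event:
        name: str
        deadline_ms: int  # how soon a response is needed

    def route(event: Event) -> str:
        """Send tight-deadline events to the edge, everything else to the cloud."""
        if event.deadline_ms <= EDGE_LATENCY_BUDGET_MS:
            return "edge-reflex"
        return "cloud-reasoning"

    print(route(Event("obstacle-detected", deadline_ms=10)))   # edge-reflex
    print(route(Event("route-replanning", deadline_ms=2000)))  # cloud-reasoning
    ```

    Real orchestration is far richer than a single threshold, of course; the point is only that someone, somewhere, has to make this brain-or-nerves decision for every signal.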

    Connecting the Dots

    My neighborhood poker buddies once asked how I seem to know so much about so many random things. I told them it’s my job to make connections — to see how a 50-year-old comedy sketch might actually be the blueprint for a global AI grid.

    But the truth is, the most important connections are the ones the brain itself misses when it’s disconnected from the rest of its senses.

    That’s exactly where we are with AI today. We’ve built the brain in the cloud, and we’ve built the nerves at the edge. But for the last few years, they’ve been operating like that scene in the cottage: the brain is doing the reasoning, but it’s completely numb to the fire on its own hand.

    We’re finally connecting the two. Orchestration of the kind Akamai announced is letting the intelligence finally feel the environment it’s supposed to be operating in.

    And who knows? If we get the orchestration right, maybe we’ll actually stick around long enough to enjoy the espresso.

  • The Inch Leonardo Never Had

    Not every idea deserves to live. But plenty of good ones die before they get a chance. They vanish under the weight of calendars, inboxes, and interruptions — the thousand small frictions that erase a thought before it has time to become something real.

    Leonardo da Vinci lived this problem as fully as anyone we remember. His notebooks are filled with flashes of brilliance that never moved an inch toward becoming real. They reveal a mind where ideas arrived faster than execution, and a compulsion to record them, even when they might never be completed. They stayed ink on paper. Imagine if he’d had something to carry those sparks just a little further.

    Today, we do.

    AI gives us the inch Leonardo never had: not just a way to keep an idea alive, but a way to work it before it’s fully formed. A sentence can be pushed, expanded, challenged. A paragraph can be reshaped or broken apart. A rough draft becomes something you can interrogate. All of it quickly enough to learn whether there’s anything there worth shaping at all.

    But that inch isn’t enough.

    Ideas still need something only humans provide: judgment.

    I learned this early in my career working with Mike Zisman and Larry Prusak on IBM’s knowledge management business (well, they worked on it; I helped them communicate it). Much of that work, as I remember it, centered on the difference and interplay between explicit and tacit knowledge. What you can write down versus what you simply know. Facts versus instinct.

    AI is extraordinary at the explicit. It can generate variations, surface patterns, and produce options at scale. But it can’t do the tacit work. It can’t feel the off-note in a promising idea or sense when something ordinary is pointing to something deeper. It can generate possibilities, but it can’t tell the signal from the static or decide which ones matter.

    AI raises the premium on expertise. When ideas become cheap and abundant, discernment becomes scarce. The advantage shifts to people who can interpret what AI produces with context. They implicitly know when to push an idea further, when to reshape it, and when to let it go.

    That shift changes what expertise actually looks like. It’s no longer defined by how many ideas you can generate, but by how well you can tell which ones hold up under pressure. When beginnings are cheap, judgment is knowing which ones are worth the effort.

    This is the consequence of getting the inch Leonardo never had. AI widens the funnel of possibility, but it doesn’t make sense of what flows through it. It accelerates ideas without considering what happens when they meet reality.

    That responsibility now belongs to us.

    AI can extend a thought, multiply it, and push it forward faster than ever before. But it can’t decide what matters. That decision is what turns an inch into something real.

  • The Future Belongs to Clear Thinkers, Not Fast Writers

    I’ve long argued that clear writing is the surest sign of clear thinking. Putting words to paper or screen forces choices, imposes structure, and strips away clutter. Writing creates clarity.

    AI hasn’t changed that. It has simply added the illusion that anyone can be the next Stephen King. But there’s only one Stephen King. Two, if you count Richard Bachman. Okay, three if you count that Joe Hill fella. But the idea that a prompt can turn anyone into a seasoned writer is a load of crap.

    When AI-assisted writing works, it works because the thinking behind it was clear in the first place (something Oxide’s Bryan Cantrill echoes in this public RFD). Someone came to the tool with context, intent, and a point of view. The AI helped with execution. The hard brainwork was already in motion.

    When the thinking isn’t there, AI fails fast. It spits out surface-level sludge that puts pretty nouns and verbs neatly together. It falls apart the second you break the surface. The author didn’t outsource writing; they outsourced thinking. And that’s, how shall I say it…bad.

    Clear writing still reveals clear thinking. AI doesn’t change that; it just makes it obvious who’s actually thinking before they hit the prompt.

    The Domain of Experience

    AI has become the domain of the “olds”: veterans who know what good thinking looks like, who bring years of pattern recognition and judgment, and who can use AI to sharpen their output. They have the experience to spot logical gaps, recognize weak arguments, and know when something sounds good to their ears but feels wrong in their gut.

    Getting great content out of AIs is difficult. It takes a lot of work and rework. Just imagine walking up to a smart person on the street and saying “make me an adventure” and expecting it to be anywhere near good. In the hands of experts, though, I think you could get great content. And, you could probably get more content. – Cote, The AI Apprentice’s Adventures

    Newer writers often stop at the first prompt because the output looks clean and convincing. The problem is that polish isn’t the same as insight or depth. Without the judgment that comes from wrestling with ideas and words over time, it’s harder to see when the model is basically just winging it.

    This reality is the defining rule of the AI age. Computing has always relied on its shorthand maxims, starting with the 1960s classic: Garbage in, garbage out. AI has simply added its own corollary: Wisdom in, resonance out.

    AI doesn’t invent wisdom; it mirrors the quality of the mind engaging with it. Thin prompts yield thin answers. But when you bring experience, nuance, and constraint to the table, the system reflects that back with greater fidelity. In the end, the limiting factor isn’t the model. It’s the judgment of the person using it.

    AI as Sparring Partner

    The real promise of AI is not as a replacement for thinking, but as its most rigorous catalyst yet. Used well, it forces you to test your thinking instead of skating past it. You have to question what it gives you, push on the weak spots, and decide what actually earns the right to stay on the page (or screen, as it may be). The tool widens your aperture, but the judgment is still yours.

    Use it as a sparring partner and the ideas get sharper. Most people don’t push that far, and that’s where the trouble starts. The people who do push, and who get the most out of AI, aren’t offloading the thinking; they’re pushing it further. The risk is the urge to take the shortcut.

    Garbage thinking still produces garbage writing. AI just hides it better.

  • A Small Use of AI That Makes a Big Difference

    On opening night of Monktoberfest, I caught a quick photo of the four authors of the new Progressive Delivery book on a boat in Casco Bay – Heidi Waterhouse, Kim Harrison, Adam Zimman, and James Governor. I added it to a thread Heidi posted to Bluesky about the book launch.

    I would have written alt text for that photo. I’m in the habit for the most part and do my best to think about others. But for a quick reply post? The mental overhead often adds more friction than the value of the reply, slowing me down enough that I will sometimes consider skipping it. With AI, it took seconds.

    When I post photos to Bluesky, I use a custom prompt/GPT to write the alt text. It describes what’s in the image, how it feels, and what someone who can’t see it might want to know. It’s a really basic prompt and I’m sure there are a bunch more like it out there. Here it is for reference:

    Create alt text for images posted to this chat. Review the image and provide descriptive text that helps a user with no or limited sight understand and experience the visual image. The description must fit in 2,000 characters.

    This sounds trivial until you realize how rarely it happens. Most images posted online have no alt text at all. Not because people don’t care about accessibility, but because describing an image takes mental energy that’s already been spent capturing and posting it. The moment has passed.

    For me, AI removes that friction. I upload an image, the system drafts a description, I tweak it if necessary. It’s a quick trip from finder to AI to post. Suddenly accessibility becomes the default.
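    The upload-draft-tweak loop above can be sketched in a few lines. This is a hypothetical sketch, not my actual setup: the `describe` callable stands in for whatever model does the describing (a custom GPT, a local vision model), and only the prompt text and the 2,000-character cap come from the post; the glue code is invented.

    ```python
    # Minimal sketch of the alt-text workflow. The describe() callable is a
    # stand-in for the model; the prompt and character cap are from the post,
    # the rest is hypothetical glue.

    ALT_TEXT_PROMPT = (
        "Create alt text for images posted to this chat. Review the image and "
        "provide descriptive text that helps a user with no or limited sight "
        "understand and experience the visual image. The description must fit "
        "in 2,000 characters."
    )

    MAX_ALT_CHARS = 2000

    def draft_alt_text(image_path: str, describe) -> str:
        """Ask the model for a description, then enforce the character cap."""
        text = describe(ALT_TEXT_PROMPT, image_path).strip()
        if len(text) > MAX_ALT_CHARS:
            # Truncate and mark the cut so the cap is never exceeded.
            text = text[: MAX_ALT_CHARS - 1].rstrip() + "…"
        return text

    # Usage with a stand-in model:
    alt = draft_alt_text("boat.jpg", lambda prompt, path: "Four authors on a boat in Casco Bay at dusk.")
    print(alt)
    ```

    The human step, reviewing and tweaking the draft before posting, stays exactly where it was; the sketch only automates the blank-page part.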

    When I was more active than I am today on Mastodon’s Hachyderm instance, this was built right into the image upload. One click. The AI-assisted descriptions made that norm easy to follow.

    Now personal prompts and custom GPTs make this available anywhere. Don’t get me wrong: AI can’t replace the human eye and brain. It sometimes misses nuance, gets details wrong, can’t read tone the way you intended (or numbers and letters; but I digress). But it gives you a starting point.

    Here’s what changes: when you add alt text consistently, you start noticing when others don’t. You see how many images float through your feed inaccessible to screen readers, meaningless to anyone who can’t see them. You realize how much gets shared with the assumption that everyone experiences it the same way.

    This is what good technology does. It removes the small obstacles that keep good intentions from becoming consistent practice.

    That’s worth automating.