Tag: artificial intelligence

  • The Inch Leonardo Never Had

    Not every idea deserves to live. But plenty of good ones die before they get a chance. They vanish under the weight of calendars, inboxes, and interruptions — the thousand small frictions that erase a thought before it has time to become something real.

Leonardo da Vinci lived this problem as fully as anyone we remember. His notebooks are filled with flashes of brilliance that never moved an inch toward realization. They reveal a mind where ideas arrived faster than execution could follow, and a compulsion to record them even when they might never be completed. The ideas stayed ink on paper. Imagine if he’d had something to carry those sparks just a little further.

    Today, we do.

    AI gives us the inch Leonardo never had: not just a way to keep an idea alive, but a way to work it before it’s fully formed. A sentence can be pushed, expanded, challenged. A paragraph can be reshaped or broken apart. A rough draft becomes something you can interrogate. All of it quickly enough to learn whether there’s anything there worth shaping at all.

    But that inch isn’t enough.

    Ideas still need something only humans provide: judgment.

I learned this early in my career working with Mike Zisman and Larry Prusak on IBM’s knowledge management business (well, they worked on it; I helped them communicate it). Much of that work, as I remember it, centered on the difference and interplay between explicit and tacit knowledge. What you can write down versus what you simply know. Facts versus instinct.

    AI is extraordinary at the explicit. It can generate variations, surface patterns, and produce options at scale. But it can’t do the tacit work. It can’t feel the off-note in a promising idea or sense when something ordinary is pointing to something deeper. It can generate possibilities, but it can’t tell the signal from the static or decide which ones matter.

    AI raises the premium on expertise. When ideas become cheap and abundant, discernment becomes scarce. The advantage shifts to people who can interpret what AI produces with context. They implicitly know when to push an idea further, when to reshape it, and when to let it go.

    That shift changes what expertise actually looks like. It’s no longer defined by how many ideas you can generate, but by how well you can tell which ones hold up under pressure. When beginnings are cheap, judgment is knowing which ones are worth the effort.

    This is the consequence of getting the inch Leonardo never had. AI widens the funnel of possibility, but it doesn’t make sense of what flows through it. It accelerates ideas without considering what happens when they meet reality.

    That responsibility now belongs to us.

    AI can extend a thought, multiply it, and push it forward faster than ever before. But it can’t decide what matters. That decision is what turns an inch into something real.

  • The Future Belongs to Clear Thinkers, Not Fast Writers

    I’ve long argued that clear writing is the surest sign of clear thinking. Putting words to paper or screen forces choices, imposes structure, and strips away clutter. Writing creates clarity.

    AI hasn’t changed that. It has simply added the illusion that anyone can be the next Stephen King. But there’s only one Stephen King. Two, if you count Richard Bachman. Okay, three if you count that Joe Hill fella. But the idea that a prompt can turn anyone into a seasoned writer is a load of crap.

When AI-assisted writing works, it works because the thinking behind it was clear in the first place (something Oxide’s Bryan Cantrill echoes in this public RFD). Someone came to the tool with context, intent, and a point of view. The AI helped with execution. The hard brainwork was already in motion. When the thinking isn’t there, AI fails fast. It spits out surface-level sludge that strings pretty nouns and verbs together neatly, then unravels the second you scratch the surface and ask it to mean something. The author didn’t outsource writing; they outsourced thinking. And that’s, how shall I say it…bad.

    Clear writing still reveals clear thinking. AI doesn’t change that; it just makes it obvious who’s actually thinking before they hit the prompt.

    The Domain of Experience

    Ironically, AI has become the domain of the “olds”.

Veterans who know what good thinking looks like, who bring years of pattern recognition and judgment, and who can use AI to sharpen their output. They have the experience to spot logical gaps, recognize weak arguments, and know when something sounds good to their ears but feels wrong in their gut.

Getting great content out of AIs is difficult. It takes a lot of work and rework. Just imagine walking up to a smart person on the street and saying “make me an adventure” and expecting it to be anywhere near good. In the hands of experts, though, I think you could get great content. And, you could probably get more content.

    (Cote, The AI Apprentice’s Adventures)

    Newer writers often stop at the first prompt because the output looks clean and convincing. The challenge is that polish isn’t the same as insight or depth. Without the judgment that comes from wrestling with ideas and words over time, it’s harder to see when the model is basically just winging it.

    This reality is the defining rule of the AI age. Computing has always relied on its shorthand maxims, starting with the 1960s classic: Garbage in, garbage out. AI has simply added its own corollary: Wisdom in, resonance out.

    AI doesn’t invent wisdom; it mirrors the quality of the mind engaging with it. Thin prompts yield thin answers. But when you bring experience, nuance, and constraint, the system reflects that back with greater fidelity. In the end, the limiting factor isn’t the model. It’s the judgment of the person using it.

    AI as Sparring Partner

    The real promise of AI is not as a replacement for thinking, but as its most rigorous catalyst yet. Used well, it forces you to test your thinking instead of skating past it. You have to question what it gives you, push on the weak spots, and decide what actually holds. The tool widens your field of view, but the judgment is still yours.

    Use it as a sparring partner and the ideas get sharper. Most people don’t push that far, and that’s where the trouble starts.

    Those who get the most out of AI aren’t offloading the thinking. They’re pushing it further. The risk is the urge to take the shortcut.

    Garbage thinking still produces garbage writing. AI just hides it better.

  • A Small Use of AI That Makes a Big Difference

    On opening night of Monktoberfest, I caught a quick photo of the four authors of the new Progressive Delivery book on a boat in Casco Bay – Heidi Waterhouse, Kim Harrison, Adam Zimman, and James Governor. I added it to a thread Heidi posted to Bluesky about the book launch.

I would have written alt text for that photo. I’m in the habit for the most part and do my best to think about others. But for a quick reply post? The mental overhead can outweigh the value of the reply itself, slowing me down enough that I sometimes consider skipping it. With AI, it took seconds.

    When I post photos to Bluesky, I use a custom prompt/GPT to write the alt text. It describes what’s in the image, how it feels, and what someone who can’t see it might want to know. It’s a really basic prompt and I’m sure there are a bunch more like it out there. Here it is for reference:

    Create alt text for images posted to this chat. Review the image and provide descriptive text that helps a user with no or limited sight understand and experience the visual image. The description must fit in 2,000 characters.

    This sounds trivial until you realize how rarely it happens. Most images posted online have no alt text at all. Not because people don’t care about accessibility, but because describing an image takes mental energy that’s already been spent capturing and posting it. The moment has passed.

    For me, AI removes that friction. I upload an image, the system drafts a description, I tweak it if necessary. It’s a quick trip from finder to AI to post. Suddenly accessibility becomes the default.
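
    If you’d rather script the trip than use a chat window, the prompt wraps into a few lines of code. What follows is a minimal sketch, not my actual setup: it assumes the OpenAI Python SDK with an API key in your environment, the model and file names are placeholders, and the prompt is lightly adapted from the one above.

    # Minimal sketch: draft alt text for one image with a vision-capable model.
    # Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set
    # in the environment; the model name and file name are illustrative.
    import base64
    from pathlib import Path

    from openai import OpenAI

    # Lightly adapted from my custom GPT's instructions.
    PROMPT = (
        "Create alt text for this image. Review the image and provide "
        "descriptive text that helps a user with no or limited sight "
        "understand and experience the visual image. The description "
        "must fit in 2,000 characters."
    )

    def draft_alt_text(image_path: str) -> str:
        """Send one image to the model and return its draft description."""
        image_b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
        client = OpenAI()  # picks up OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Draft first, then edit by hand before posting.
        print(draft_alt_text("boat-photo.jpg"))

    The output is a starting point, not the finished alt text; the human pass at the end is still the part that matters.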

    When I was more active than I am today on Mastodon’s Hachyderm instance, this was built right into the image upload. One click. The AI-assisted descriptions made that norm easy to follow.

    Now personal prompts and custom GPTs make this available anywhere. Don’t get me wrong: AI can’t replace the human eye and brain. It sometimes misses nuance, gets details wrong, can’t read tone the way you intended (or numbers and letters; but I digress). But it gives you a starting point.

    Here’s what changes: when you add alt text consistently, you start noticing when others don’t. You see how many images float through your feed inaccessible to screen readers, meaningless to anyone who can’t see them. You realize how much gets shared with the assumption that everyone experiences it the same way.

    This is what good technology does. It removes the small obstacles that keep good intentions from becoming consistent practice.

    That’s worth automating.

  • We’re Getting Closer to Jetson

    The Jetson'sCall me a nerd, but this is why I love doing what I do and why I’m excited about the future. For all the fun of Facebook, Twitter and Foursquare (not to mention all the supposedly non-social technologies used by the search giant Google), the collective data our generation is creating has the potential to – finally – build the Jetson’s-like future we’ve been promised for so many years:

To understand where the combination of mobile sensors, cloud databases and computer algorithms augmented by human action is leading us, consider the self-driving car. Stanley, a driverless vehicle, won the US Darpa (Defense Advanced Research Projects Agency) grand challenge in 2005 by navigating a course of slightly over seven miles in a little under seven hours. Last year, Google demonstrated an autonomous vehicle that has driven over 100,000 miles in ordinary traffic. The difference: Stanley used traditional artificial intelligence algorithms and techniques; the Google autonomous vehicle is augmented with the memory of millions of road miles put in by human drivers building the Google Street View database. Those cars recorded countless details – the location of stop signs, obstacles, even the road surface.

    This is man-computer symbiosis at its best, where the computer program learns from the activity of human teachers, and its sensors notice and remember things the humans themselves would not. This is the future: massive amounts of data created by people, stored in cloud applications that use smart algorithms to extract meaning from it, feeding back results to those people on mobile devices, gradually giving way to applications that emulate what they have learned from the feedback loops between those people and their devices.

I encourage you to read the entire Financial Times article (“Birth of the Global Mind”), written by one of tech’s smartest, Tim O’Reilly.