Category: Tech

  • The Inch Leonardo Never Had

    Not every idea deserves to live. But plenty of good ones die before they get a chance. They vanish under the weight of calendars, inboxes, and interruptions — the thousand small frictions that erase a thought before it has time to become something real.

    Leonardo da Vinci lived this problem as fully as anyone we remember. His notebooks are filled with flashes of brilliance that never moved an inch toward becoming real. They reveal a mind where ideas arrived faster than execution, and a compulsion to record them, even when they might never be completed. They stayed ink on paper. Imagine if he’d had something to carry those sparks just a little further.

    Today, we do.

    AI gives us the inch Leonardo never had: not just a way to keep an idea alive, but a way to work it before it’s fully formed. A sentence can be pushed, expanded, challenged. A paragraph can be reshaped or broken apart. A rough draft becomes something you can interrogate. All of it quickly enough to learn whether there’s anything there worth shaping at all.

    But that inch isn’t enough.

    Ideas still need something only humans provide: judgment.

    I learned this early in my career working with Mike Zisman and Larry Prusak on IBM’s knowledge management business (well, they worked on it; I helped them communicate it). Much of that work, as I remember it, centered on the difference and interplay between explicit and tacit knowledge. What you can write down versus what you simply know. Facts versus instinct.

    AI is extraordinary at the explicit. It can generate variations, surface patterns, and produce options at scale. But it can’t do the tacit work. It can’t feel the off-note in a promising idea or sense when something ordinary is pointing to something deeper. It can generate possibilities, but it can’t tell the signal from the static or decide which ones matter.

    AI raises the premium on expertise. When ideas become cheap and abundant, discernment becomes scarce. The advantage shifts to people who can interpret what AI produces with context. They implicitly know when to push an idea further, when to reshape it, and when to let it go.

    That shift changes what expertise actually looks like. It’s no longer defined by how many ideas you can generate, but by how well you can tell which ones hold up under pressure. When beginnings are cheap, judgment is knowing which ones are worth the effort.

    This is the consequence of getting the inch Leonardo never had. AI widens the funnel of possibility, but it doesn’t make sense of what flows through it. It accelerates ideas without considering what happens when they meet reality.

    That responsibility now belongs to us.

    AI can extend a thought, multiply it, and push it forward faster than ever before. But it can’t decide what matters. That decision is what turns an inch into something real.

  • The Future Belongs to Clear Thinkers, Not Fast Writers

    I’ve long argued that clear writing is the surest sign of clear thinking. Putting words to paper or screen forces choices, imposes structure, and strips away clutter. Writing creates clarity.

    AI hasn’t changed that. It has simply added the illusion that anyone can be the next Stephen King. But there’s only one Stephen King. Two, if you count Richard Bachman. Okay, three if you count that Joe Hill fella. But the idea that a prompt can turn anyone into a seasoned writer is a load of crap.

    When AI-assisted writing works, it works because the thinking behind it was clear in the first place (something Oxide’s Bryan Cantrill echoes in this public RFD). Someone came to the tool with context, intent, and a point of view. The AI helped with execution. The hard brainwork was already in motion. When the thinking isn’t there, AI fails fast. It spits out surface-level sludge that puts pretty nouns and verbs neatly together and unravels the second you break the surface and ask it to mean something. The author didn’t outsource writing; they outsourced thinking. And that’s, how shall I say it…bad.

    Clear writing still reveals clear thinking. AI doesn’t change that; it just makes it obvious who’s actually thinking before they hit the prompt.

    The Domain of Experience

    Ironically, AI has become the domain of the “olds”.

    Veterans who know what good thinking looks like, who bring years of pattern recognition and judgment, and who can use AI to sharpen their output. They have the experience to spot logical gaps, recognize weak arguments, and know when something sounds good to their ears but feels wrong in their gut.

    Getting great content out of AIs is difficult. It takes a lot of work and rework. Just imagine walking up to a smart person on the street and saying “make me an adventure” and expecting it to be anywhere near good. In the hands of experts, though, I think you could get great content. And, you could probably get more content.

    Cote, The AI Apprentice’s Adventures

    Newer writers often stop at the first prompt because the output looks clean and convincing. The challenge is that polish isn’t the same as insight or depth. Without the judgment that comes from wrestling with ideas and words over time, it’s harder to see when the model is basically just winging it.

    This reality is the defining rule of the AI age. Computing has always relied on its shorthand maxims, starting with the 1960s classic: Garbage in, garbage out. AI has simply added its own corollary: Wisdom in, resonance out.

    AI doesn’t invent wisdom; it mirrors the quality of the mind engaging with it. Thin prompts yield thin answers. But when you bring experience, nuance, and constraint, the system reflects that back with greater fidelity. In the end, the limiting factor isn’t the model. It’s the judgment of the person using it.

    AI as Sparring Partner

    The real promise of AI is not as a replacement for thinking, but as its most rigorous catalyst yet. Used well, it forces you to test your thinking instead of skating past it. You have to question what it gives you, push on the weak spots, and decide what actually holds. The tool widens your field of view, but the judgment is still yours.

    Use it as a sparring partner and the ideas get sharper. Most people don’t push that far, and that’s where the trouble starts.

    Those who get the most out of AI aren’t offloading the thinking. They’re pushing it further. The risk is the urge to take the shortcut.

    Garbage thinking still produces garbage writing. AI just hides it better.

  • A Small Use of AI That Makes a Big Difference

    On opening night of Monktoberfest, I caught a quick photo of the four authors of the new Progressive Delivery book on a boat in Casco Bay – Heidi Waterhouse, Kim Harrison, Adam Zimman, and James Governor. I added it to a thread Heidi posted to Bluesky about the book launch.

    I would have written alt text for that photo. I’m in the habit for the most part and do my best to think about others. But for a quick reply post? The mental overhead often adds more friction than the value of the reply, slowing me down enough that I will sometimes consider skipping it. With AI, it took seconds.

    When I post photos to Bluesky, I use a custom prompt/GPT to write the alt text. It describes what’s in the image, how it feels, and what someone who can’t see it might want to know. It’s a really basic prompt and I’m sure there are a bunch more like it out there. Here it is for reference:

    Create alt text for images posted to this chat. Review the image and provide descriptive text that helps a user with no or limited sight understand and experience the visual image. The description must fit in 2,000 characters.

    This sounds trivial until you realize how rarely it happens. Most images posted online have no alt text at all. Not because people don’t care about accessibility, but because describing an image takes mental energy that’s already been spent capturing and posting it. The moment has passed.

    For me, AI removes that friction. I upload an image, the system drafts a description, I tweak it if necessary. It’s a quick trip from finder to AI to post. Suddenly accessibility becomes the default.
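
    For anyone who would rather script that step than lean on a custom GPT, here is a minimal sketch of the same idea in Python, assuming the OpenAI SDK and an API key in the environment; the prompt is the one above, and the script name, model choice, and file handling are illustrative, not what I actually run.

    # describe_image.py: rough sketch of drafting alt text with the OpenAI API.
    # Assumes the openai Python SDK and OPENAI_API_KEY in the environment;
    # the model name and JPEG handling are illustrative choices.
    import base64
    import sys

    from openai import OpenAI

    PROMPT = (
        "Create alt text for images posted to this chat. Review the image and "
        "provide descriptive text that helps a user with no or limited sight "
        "understand and experience the visual image. The description must fit "
        "in 2,000 characters."
    )

    def draft_alt_text(image_path: str) -> str:
        client = OpenAI()
        # Send the local image inline as a base64 data URL alongside the prompt.
        with open(image_path, "rb") as f:
            encoded = base64.b64encode(f.read()).decode("utf-8")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": PROMPT},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Usage: python describe_image.py photo.jpg
        print(draft_alt_text(sys.argv[1]))

    Whatever comes back still gets a human read before it goes into the alt text field.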

    When I was more active than I am today on Mastodon’s Hachyderm instance, this was built right into the image upload. One click. The AI-assisted descriptions made that norm easy to follow.

    Now personal prompts and custom GPTs make this available anywhere. Don’t get me wrong: AI can’t replace the human eye and brain. It sometimes misses nuance, gets details wrong, can’t read tone the way you intended (or numbers and letters; but I digress). But it gives you a starting point.

    Here’s what changes: when you add alt text consistently, you start noticing when others don’t. You see how many images float through your feed inaccessible to screen readers, meaningless to anyone who can’t see them. You realize how much gets shared with the assumption that everyone experiences it the same way.

    This is what good technology does. It removes the small obstacles that keep good intentions from becoming consistent practice.

    That’s worth automating.

  • Nature’s Light Show: A Shared Moment and Tech’s Perfect Snapshot

    Maybe it’s just the jolt from that first sip of coffee this morning, but two things stuck in my brain from last night’s spectacular light show by Ma Nature:

    1. It was a collective, shared experience that echoed the unity in the early days of the pandemic lockdowns.

    2. Apple could not have scripted a better global ad for the iPhone’s camera capabilities.

    The cosmos and technology never cease to amaze and inspire me.

  • Two’s a coincidence, three’s a trend

    Two’s a coincidence, three’s a trend, as the old saying goes.

    I’ve had more than a few conversations lately with folks around the technology industry that have had a common theme running through them — not just in topic, but in tone, too.

    And that theme and tone have me thinking we’re starting a brand new cycle of tech that feels (and looks) a lot like the start of the Internet more than the start of the Web.

    A cycle where engineers move to the forefront tackling new infrastructure, architecture, and networking challenges that future waves of developers will build on.

    A cycle that makes the acceleration we witnessed over the last decade feel like a blip on the timeline of innovation.

  • Cloud computing’s next act

    Whether in reaction to economic conditions, or taking advantage of the leveling off of the core services that used to differentiate cloud providers, companies are beginning to take a closer look at their cloud sprawl and spend. Some are resetting strategies by taking things back in house; some are going in the opposite direction and spreading workloads across multiple providers to find the best fit for their business; and some are using this inflection point to reconsider whether they want to continue building on a legacy centralized architecture or prepare for a more decentralized and distributed future.

    So while things like egress costs and price performance appear to be about saving money, what they’re really about — to me — is something more profound: the beginning of a new phase for cloud computing that shifts control back to the customer.

    “Linode has phenomenally-generous bandwidth that, all told, has shown us savings of around 60% over AWS even without considering the savings on hardware,” said Jonathan. “It’s easy to get new servers whenever we want, the Linode API is extremely reliable, and pricing is never a surprise. We also use Linode Managed Databases, and we’ve found that Linode’s CPU performance per dollar blows everyone else out of the water.” 

    Jonathan Oliver, CEO, Smarty

  • Talking cloud with TFiR

    I had an opportunity recently to chat with TFiR host Swapnil Bhartiya about the current state of the hyperscale cloud computing market.

  • The GPT moat

    Back in 2016, I wrote a post about people naturally wanting to work for the good of humanity. I included a pullquote highlighting OpenAI’s non-profit mission.

    OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

    Fast-forward to December 2022 and the meteoric buzz and building around ChatGPT. And I am reminded of this four-year-old Monktoberfest talk by Bryan Cantrill on the too-frequent gulf between an organization’s stated principles and its actions.

  • Replicating the river of news

    Twitter was never a social network to me. I mean, sure, I made and had friends there. And I interacted with reporters, analysts, and influencers. But it was first and foremost a newsfeed. A wire service of industry and world news. A place to spot trends and stay on top of breaking events. It was RSS on ‘roids.

    And then it imploded.

    How do you replicate that river of news? It’s not like journalists stopped producing news. For some of us old timers, RSS fills some of that void. But it’s not the same. Many have migrated to the fediverse to maintain as much of those Twitter connections as possible. But it’s not the same, either. Twitter was different.

    The fediverse shows promise as a Twitter replacement, but it’s likely too byzantine for a generation raised on walled-garden technologies. But it’s what we’ve got today. So how do we use it to fill the Twitter void? I’m just spitballing here, but I could envision someone cranking up a Mastodon instance just for technology news outlets to let their publishing bots run free. An instantaneous RSS feed, if you will. I’m sure others also have ideas (sound off in the comments).
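
    To make that spitballing a bit more concrete, here is a rough sketch of what one of those publishing bots might look like, assuming Python with the feedparser and Mastodon.py libraries; the feed URL, instance address, and access token below are placeholders, not real endpoints.

    # rss_to_mastodon.py: rough sketch of a news outlet's publishing bot.
    # Assumes the feedparser and Mastodon.py libraries; the feed URL, the
    # instance address, and the access token are placeholders.
    import time

    import feedparser
    from mastodon import Mastodon

    FEED_URL = "https://example-news-outlet.com/feed.xml"  # placeholder feed

    mastodon = Mastodon(
        access_token="YOUR_BOT_TOKEN",            # placeholder credentials
        api_base_url="https://technews.example",  # the hypothetical news-bot instance
    )

    seen = set()  # links already posted; a real bot would persist this

    while True:
        for entry in feedparser.parse(FEED_URL).entries:
            if entry.link not in seen:
                # Headline plus link fits comfortably in Mastodon's default 500-character limit.
                mastodon.status_post(f"{entry.title}\n{entry.link}")
                seen.add(entry.link)
        time.sleep(300)  # poll every five minutes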

    The scale of technology is shifting, the weight transferring from random algorithms and advertising to individual control of the bits and bytes one puts out into the world. It’ll be messy. It’ll take time. But make no mistake, it’s happening.

  • Pint of View, Episode One: Eric & Mike Drink a Beer

    It’s a rare Sunday morning when Eric Norlin and I aren’t engaged in heady philosophical debate. Fingers furiously tapping away in chat windows, each trying to sway the other to the merits of our individual points of view. It’s an exercise in critical thought. A way to check our personal assumptions – and have them cross-checked. Often, at least from my side, it’s the argumentative equivalent of getting smashed into the boards by the burliest Canadian NHL defenseman (even though Eric lives just over Canada’s southern border).

    These are enjoyable, invigorating conversations historically conducted over coffee and kept to ourselves. Except Eric and I also enjoy and are invigorated by craft beer. You know where this is going.

    Below is the first episode — the worldwide premiere — of the future Emmy-nominated video podcast series, “Pint of View.” Or, as Eric and I refer to it, “Mike and Eric Drink a Beer and Argue About Stuff.” In episode one, we tee up the debate about the future of the Covid-19 work-from-home shift. We’re still working out the audio-visual kinks, but figured we’d follow the old startup adage of ship it and iterate as we go.

    Because time is now measured in ounces. Cheers.