The Problem of Good Enough

I grew up in the neon-drenched worlds of post-apocalyptic speculative fiction. These stories and worlds were as much a warning about the past and a meditation on the present as they were an attempt at predicting the future. I can’t find the exact William Gibson quote I’m thinking of now - although this one is pretty cool - but he said something in an interview to the effect that, because the past has led up to the present, the future already exists in the present moment, if only you could step outside of time to observe it.

The 80s was a particularly almost time. Technology was almost there. Concepts like AR, VR, and AI, which all feel ultra-modern, were in reality explored back then (and even before) mostly in theoretical discussion and incredibly clunky devices, while the thing that would truly revolutionize society - the iPhone and the subsequent smartphone and endlessly-connected generations - was never mentioned. Or else it was assumed to be a mundane upgrade to everyday items, such as the tablets in Star Trek being essentially digital clipboards without any of the personality that allowed smartphones and tablets to take over the world.

Most speculative fiction missed the future entirely, and the ones that predicted something like the world we now inhabit did so with the wrong mechanisms.

For the ones that imagined we’d have an expanded memory with access to an encyclopedia of knowledge, most imagined neural implants or some other augmentation, rather than the (comparatively mundane) idea of globally connected information storage and retrieval accessible through a pocketable device approximately 100,000 times more performant than the Apollo 11 guidance computer.

For the ones that imagined some kind of artificial intelligence, they generally fell into either the android or the ephemeral computer categories. Think Commander Data from Star Trek - or that show’s ever-present voice-activated onboard computer - or more comedic or terrifying examples, such as the disembodied computer voice fining John Spartan one half credit for a violation of the verbal morality code in Demolition Man, or The Terminator, or Blade Runner, or Tron. You get the idea.

I’m also aware I’m mixing 80s and 90s shows here, and really, that’s intentional to help explain: even as technology and computers progressed, even as the World Wide Web came into being, it still seemed both too fanciful and too mundane to be the future that everyone had been waiting for.

I’ve mentioned GeoCities et al before, and really, the late 90s and very early 2000s were quite an inflection point. The first iPhone didn’t release until 2007, and early social networks started in maybe 2005, but didn’t fully explode until they became apps on a phone. Indeed, Twitter’s original 140 character limit was a holdover from its beginnings on the SMS (text messaging) platform, where early text messages were datagrams of a relatively short length (160 characters), and Twitter was really a way of texting someone without necessarily knowing their number.

But the point is, this time was full of localized speculation. By that I mean, hundreds, thousands, hundreds of thousands, millions of people eventually, all hurling some version of their consciousness out into the void. The blogs and websites and pages that quoted Neuromancer and Snow Crash. The ones that imagined their own similar worlds, or pretended to be parts of those original stories. People wanted to live in The Matrix a decade or more before that movie came out; they wrote about it fifty years before, and achieved some level of it by placing their written consciousness online.

The current models of “artificial intelligence” are just the internet reflecting our own thoughts back at us. Decades of content floating in cyberspace - to use an outdated term - collected, collated, and indexed, searchable in natural language. It’s like asking Google a series of questions and pasting the answers into a document and editing it into a coherent narrative, but without that intermediate step.

This is both great and terrifying.

Of course, some aspects of generative AI are morally questionable. Creating “new” art by sometimes wholesale piecemealing of human artists’ collected works is both legally questionable and, to some, morally unjustifiable.

While I understand these complaints - and honestly, probably agree with some - I think there’s a more interesting critique of AI to be made, which goes all the way back to the foundations of the industrial revolution: the problem of good enough.

What’s the difference between a finely hand-crafted hardwood chair, with intricate carvings and a plush cushion stuffed with the softest mosses or cotton and covered with the finest felts, and a mass-produced pine four-legger with foam padding? Everything and nothing. If you just want something to sit on, they’re both good enough.

And therein lies the biggest objection I have to today’s AI - Large Language Models like OpenAI’s ChatGPT, or Copilot, or Claude, etc. - which is not in the technology itself, but in the way people use them.

Let’s take pre-LLM link aggregator websites as an example. Mostly-automated through scripting, these sorts of sites use various techniques to grab headlines and snippets of content and package them into something that looks like a curated feed, with AdSense or similar plastered on top. Popular back when people read things online, these sorts of sites made money from the ad revenue of people hitting them in search engine results, or in some cases, finding value in them as part of niche genres where the sites did at least have some sort of human editorial team behind them.

In the LLM world, entire websites can be built by asking the AI to write articles and then automating the publishing of them online. If it’s about subjects that are interesting to people, they’ll probably get some eyeballs as well, especially as AI gets better at generating audio and video too. And we all know this stuff is hokey and uninteresting, but it’s already working because the generated content is good enough in the appropriate contexts.

Just to be clear, this isn’t an endorsement of AI or LLMs - in fact, far from it. I’m not saying people should be doing this; rather, I’m making the observation that they are.

We as a society settled for good enough a long time ago. LLMs wouldn’t be a threat or a problem if we weren’t so busy trying to get to some mythical future or goal that we were willing to use good enough as a stepping stone to something imaginarily better.

So what’s the conclusion here? Really, this time, I don’t know that there is one. Scribes probably objected to the introduction of the printing press. Tradecrafters definitely objected to the methods of mass production, and in many ways they were right. Typists, engineers, traditional media artists, and many others objected to computers in the 70s and 80s, and perhaps they were right too. Intellectuals in the 90s and 2000s objected to the internet and to social media, and perhaps they were right as well. Many people today are objecting to AI and LLMs, and yes, they are also probably right.

But like every other technical revolution before them, LLMs are already redefining the present, and steering the future off in directions that nobody predicted, and the models available to us today aren’t even that intelligent.

I mentioned William Gibson at the start of this story, and there is a quote of his that you probably know, even if you’ve never read his stories or heard his name. It goes like this:

The future is already here – it’s just not evenly distributed.

I try to avoid using this quote myself, because after having read many of his books long before this quote became popular, I know it’s mostly used incorrectly.

But I do want to do the audacious thing and modify it a bit. Sacrilege to the Gibson fans, I know, but in the context of LLMs it’s really this:

The future is already here – it’s up to you how evenly distributed it is.

 
