With every new piece of technology—today it’s generative artificial intelligence like OpenAI’s ChatGPT—I’m fascinated by the possibilities but always ask: Will it scale? Can it get smaller, cheaper, faster, better? Early releases are usually clunky. After the initial “huh, I didn’t know that was possible,” often comes denial and ridicule. I’ve been guilty of this. So how do you figure out what works and what’s a dud?
ChatGPT uses machine learning to find patterns of patterns in training data, mostly written by humans, to produce human-sounding prose in response to prompts. Machine learning is the greatest pattern-recognition system ever invented. It’s why Alexa’s voice interface works and how Google can find you in photos from when you were 3.
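The "patterns of patterns" idea can be shown with a deliberately crude toy: a next-word predictor that just counts which word follows which in its training text. This sketch is nothing like ChatGPT's actual architecture, and the training sentence is invented here for illustration; it only shows the underlying principle of learning statistical patterns from human-written text.

```python
from collections import Counter, defaultdict

# Invented toy training text -- real systems train on vast swaths
# of human writing, not one sentence.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Count which word follows which: the crudest possible "pattern."
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" most often
```

Modern models replace the word counts with neural networks holding billions of parameters, but the principle is the same: absorb patterns from human text, then emit the most plausible continuation.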
I’ve played around with ChatGPT, and it’s pretty good—if you need to turn in a high-school freshman term paper. Its answers are dull, repetitive and often filled with mistakes, like most freshmen.
Speaking of dull, lawyers may have the greatest reason to be nervous. In February, online ticket fixer DoNotPay will use its AI chatbot, speaking into the defendant's earpiece, to coach someone fighting a speeding ticket in a live courtroom. DoNotPay has even offered $1 million to the first lawyer arguing before the Supreme Court who agrees to wear an earpiece and repeat what the bot says.
Will this work? Who cares? This is Kitty Hawk. Google, which funds its own generative-AI efforts, has declared a “code red,” worried about threats to its money-gushing search business, as it should. Microsoft was years late in responding to a quirky but scaling internet.
Pure digital technology almost always scales.