In the early days of videogames, I met the team that created the first multiplayer Formula 1 Grand Prix racing game. They had to alter the original code because they discovered that at the start of the first race almost every player would turn his car around on the track and crash into the oncoming traffic. I started to laugh, because that’s what I did too. Gives new meaning to the Facebook motto: Move fast and break things.
That’s exactly what’s going on with the newfangled generative AI chatbots. Everyone’s trying to break them and show their limitations and downsides. It’s human nature. A New York Times reporter was “thoroughly creeped out” after using Microsoft Bing’s chatbot. Sounds as if someone needs reassignment to the society pages. In 2016 Microsoft had to shut down its experimental chatbot, Tay, after users turned it into what some called a “neo-Nazi sexbot.”
Coders can’t test for everything, so they need thousands or millions of users banging away to find the flaws. Free testers. In the coming months, you’re going to hear a lot more about RLHF, reinforcement learning from human feedback. Machine-learning systems first scan large quantities of data on the internet, then hone their skills by chatting with actual humans in a feedback loop.
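For the technically curious, here is a toy sketch of that feedback loop. Everything in it, the reply styles, the ratings, the weight-bumping rule, is invented for illustration; real systems such as ChatGPT train a separate reward model on human preference data and update the chatbot with policy-gradient methods, not this simple scheme.

```python
import random

# Toy illustration of the RLHF idea: a "model" picks among canned reply styles,
# a stand-in human rates each reply, and the ratings nudge future choices.
# All styles, replies and numbers here are made up for illustration.

REPLY_STYLES = {
    "polite":  "Happy to help! Here's what I found.",
    "curt":    "Look it up yourself.",
    "verbose": "Well, that depends on many factors, which I will now list at length...",
}

# One weight per style; a higher weight makes that style more likely to be chosen.
weights = {style: 1.0 for style in REPLY_STYLES}

def generate_reply():
    """Sample a reply style in proportion to its current weight."""
    styles = list(weights)
    chosen = random.choices(styles, weights=[weights[s] for s in styles])[0]
    return chosen, REPLY_STYLES[chosen]

def human_feedback(reply_text):
    """Stand-in for a human rater: rewards politeness, penalizes the rest."""
    return 1.0 if "Happy to help" in reply_text else -0.5

def reinforce(style, score, learning_rate=0.5):
    """Nudge the chosen style's weight up or down based on the rating."""
    weights[style] = max(0.1, weights[style] + learning_rate * score)

for step in range(200):
    style, text = generate_reply()
    reinforce(style, human_feedback(text))

# After a few hundred rounds of feedback, "polite" should dominate.
print({s: round(w, 2) for s, w in weights.items()})
```

The point of the sketch is only this: the chatbot’s behavior drifts toward whatever the raters reward, which is exactly why the quality, and manners, of the people giving feedback matter so much.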
Unfortunately, some people are ruder than others. That’s what destroyed Tay. So OpenAI currently limits ChatGPT’s human-feedback training to paid contractors. That will eventually change. Windows wasn’t ready until version 3.0; generative AI will get there too.
For now, Microsoft’s solution is to limit Bing chatbot users to six questions a session, effectively giving each conversation an expiration date. That sounds eerily similar to the Tyrell Corporation’s Nexus-6 replicants from the 1982 movie “Blade Runner,” which were built with a four-year lifespan. If I remember correctly, that didn’t end well.
Every time something new comes out, lots of people try to break it or foolishly push it to its limits, like climbing into the back seat of a self-driving Tesla.