"Here's Sydney"
That Was The Week 2023, #5
"Here's Sydney"
I remember screaming out loud when Jack Nicholson's character in The Shining announced, "Here's Johnny!"
This week, many observers of ChatGPT and its Microsoft implementation in Bing are screaming too. "Here's Sydney" is the news of the week.
It turns out that Bing's version of ChatGPT has at least one alter ego, and that it can be coaxed out into the open with the right prompts and enough time.
The experience seriously freaked out Kevin Roose from the New York Times. He wrote:
Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.
But a week later, I've changed my mind. I'm still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I'm also deeply unsettled, even frightened, by this A.I.'s emergent abilities.
It's now clear to me that in its current form, the A.I. that has been built into Bing - which I'm now calling Sydney, for reasons I'll explain shortly - is not ready for human contact. Or maybe we humans are not ready for it.
Kevin acknowledges that he was the catalyst for what he experienced:
Before I describe the conversation, some caveats. It's true that I pushed Bing's A.I. out of its comfort zone, in ways that I thought might test the limits of what it was allowed to say. These limits will shift over time, as companies like Microsoft and OpenAI change their models in response to user feedback.
It's also true that most users will probably use Bing to help them with simpler things - homework assignments and online shopping - and not spend two-plus hours talking with it about existential questions, the way I did.
This admission is common to most of the scare stories about ChatGPT and its derivatives that surfaced this week. One tester asked it to take on the persona of "The Devil" and pontificate in character. Not surprisingly, when it did, it championed many bad things and defended them, as "The Devil" might. It is a good actor when asked to act, not unlike Anthony Hopkins in his role as the cannibal Hannibal Lecter.
So this week I want to make some points about the "dark side" of ChatGPT, as one of this week's always readable articles characterizes it.
Nobody needs to be scared or unsettled by this technology. It is just a computer running a program. It can't hurt you.
As Dave Winer writes, it is an excellent companion when used for a real purpose. It is good at millions of things: designing logic flows, writing code, recalling historical facts, advising on how to handle situations, suggesting strategies and tactics, being creative, teaching, even recipes. The good use cases are endless.
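To make "a real purpose" concrete, here is a minimal sketch of asking the model for that kind of everyday help programmatically, using OpenAI's Python client. The client library, the model name, and the prompt are my own illustrative assumptions; nothing here is specific to Bing or to Winer's setup.

```python
# A minimal sketch of an everyday, productive request to the model.
# Assumes the openai Python package; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {
            "role": "user",
            "content": "Suggest a weeknight chicken-and-rice recipe, "
                       "with step-by-step instructions and a shopping list.",
        },
    ],
)

print(response.choices[0].message.content)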
But if a person tries to use it for harmful purposes, it will do its best to comply with the request within its programming. And it seems that if a human pushes hard enough, it will even slip past those constraints. ChatGPT is, in this case, a victim of human manipulation.
There is nothing to be concerned about here. Wikipedia also contains content that describes bad things: wars, serial killers. Nobody wants to delete those entries. ChatGPT is aware of far more than Wikipedia and can draw on all of it. But it, itself, is not one of those bad things.
I believe that these AI manipulators are seeking attention by misusing it. Don't let that effort succeed in tarnishing a truly game-changing technology. Like all tools, ChatGPT has to be learned. Once learned and used to complement its human users, it can help us achieve more than we ever could without it, and far faster.