Elon Musk and an array of public figures have signed their names to an open letter that went viral this week, calling for a six-month pause on training language models more powerful than GPT-4, the technology underpinning ChatGPT.
Some strange inconsistencies with the signatories aside, the letter is odd. It criticizes the deployment of powerful chatbot technology as rash, but also over-hypes its capabilities, drawing on the doom-mongering about AI and killer robots that has captivated the press and distracted from more nuanced, real-world risks.
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks dramatically (emphasis from the authors). “Should we risk loss of control of our civilization?”
Of course not, but there are issues we should be more worried about now, like the concentration of AI capabilities among a few increasingly secretive companies, inequality as artists find their work plagiarized without compensation, and all the risks to come from companies racing to plug ChatGPT into their systems.
On that last point, the toothpaste is already out of the tube. OpenAI last week launched a new system that lets businesses plug ChatGPT into their proprietary databases, so the chatbot can carry out tasks on their systems like retrieving information, making bookings and even running new software that it creates.
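To make that concrete: under the plugin system, a business exposes an ordinary web API and a machine-readable description of what it does, and ChatGPT decides on its own when to call it. The sketch below is purely illustrative, not OpenAI's published interface; the bookings service, endpoints and data model are hypothetical stand-ins for the kind of integration described above.

```python
# A minimal, hypothetical sketch of the sort of HTTP API a business might
# expose to ChatGPT via a plugin. The model reads a description of these
# endpoints and chooses when to call them. Names and logic are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Example Bookings Plugin")

# Stand-in for the company's proprietary database.
BOOKINGS: dict[int, dict] = {}

class BookingRequest(BaseModel):
    customer: str
    date: str  # ISO 8601, e.g. "2023-04-14"

@app.get("/bookings/{booking_id}")
def get_booking(booking_id: int):
    # The chatbot can call this to retrieve information on a user's behalf.
    return BOOKINGS.get(booking_id, {"error": "not found"})

@app.post("/bookings")
def create_booking(req: BookingRequest):
    # ...and this to take an action: writing a new booking to the database.
    booking_id = len(BOOKINGS) + 1
    BOOKINGS[booking_id] = req.dict()
    return {"booking_id": booking_id, **req.dict()}
```

A framework like FastAPI generates an OpenAPI description of these endpoints automatically, which is the kind of machine-readable spec a plugin supplies so the model can learn what the API does. The point of the sketch is the second endpoint: once the chatbot can POST as well as GET, it is no longer just answering questions.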
While the plugin announcement didn’t get much attention in the mainstream press, many technologists saw it as a stunning leap forward for ChatGPT. Not only could it search and synthesize information it had been trained on, it could take action.
Think about that for a moment. Machine learning systems make decisions in an inscrutable black box. OpenAI spent seven months testing GPT-4 before releasing it into the wild, but its so-called “red team” engineers, who tested how it might be misused, could only cover a fraction of the ways it might be exploited by millions of real-world users. However much OpenAI has tested and prodded its system to make sure it is safe, no one really knows the full extent of its risks until it is deployed publicly. And those risks become more serious when ChatGPT can start doing things on the web.
Read the full article here by Parmy Olson, Advisor Perspectives.