
Elon Musk Wants to Pause AI? It’s Too Late for That

Advisor Perspectives

Elon Musk and an array of public figures have signed their names to an open letter that went viral this week, calling for a six-month pause on training language models more powerful than GPT-4, the technology underpinning ChatGPT.



Some strange inconsistencies with the signatories aside, the letter is odd. It criticizes the deployment of powerful chatbot technology as rash, yet it also over-hypes that technology's capabilities, drawing on the doom-mongering about AI and killer robots that has captivated the press and distracted from more nuanced, real-world risks.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asks dramatically (emphasis from the authors). “Should we risk loss of control of our civilization?”

Of course not, but there are issues we should be more worried about now, like the concentration of AI capabilities among a few increasingly secretive companies, inequality as artists find their work plagiarized without compensation, and all the risks to come from companies racing to plug ChatGPT into their systems.

On that last point, the toothpaste is already out of the tube. OpenAI last week launched a new system that will allow businesses to plug ChatGPT into their proprietary databases, allowing its chatbot to carry out tasks on their systems like retrieving information, making bookings and even running new software that it creates.

While the plugin announcement didn’t get much attention in the mainstream press, many technologists saw it as a stunning leap forward for ChatGPT. Not only could it search and synthesize information it had been trained on, it could take action.

Think about that for a moment. Machine learning systems make decisions in an inscrutable black box. OpenAI spent seven months testing GPT-4 before releasing it into the wild, but its so-called “red team” engineers, who tested how it might be misused, could only cover a fraction of the ways it might be exploited by millions of real-world users. However much OpenAI has tested and prodded its system to make sure it is safe, no one really knows the full extent of its risks until it is deployed publicly. And those risks become more serious when ChatGPT can start doing things on the web.

Read the full article here at Advisor Perspectives.

