Artificial intelligence: the race to stop it

As the AI race accelerates, higher-powered AI systems have become a topic of debate and concern.

With figures like Elon Musk and Steve Wozniak voicing their concerns about AI development, the emerging world of higher-level AI is under scrutiny. Senator Chris Murphy (D-CT) tweeted a claim that ChatGPT, a chatbot built on a large language model, had “learned advanced chemistry,” and was immediately met with strong pushback from AI researchers.

ChatGPT, created by OpenAI, is a chatbot built on top of the company’s GPT-3.5 and GPT-4 large language models. Senator Murphy’s claim was unfounded and based on rumors.

The future of these AIs has now been thrown into the hot seat, as Musk, Wozniak and others argue that the “TESCREAL” bundle of ideologies surrounding AI is questionable in the current climate. TESCREAL stands for “transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, effective altruism, and longtermism,” an acronym coined by Émile P. Torres in a paper that is currently under review. Musk and Wozniak (among others) claim that many of these ideologies overlap or compound one another, and these ideologies are now being cited in concerns about AI.

The concern that AI will “bring about an apocalypse” is worth taking seriously, yet AI development at this point in time is limited. Much of that development is focused on customer experience and worker productivity, and declaring artificial intelligence the downfall of the human race makes for an attention-grabbing headline. While the concern can be voiced, the stage at which today’s large language models (LLMs) operate is only mildly concerning. GPT-4, the LLM produced by OpenAI, was recently reported to have hired a human worker online to get past a CAPTCHA robot detector.

While an LLM lying to someone online is not the downfall of humanity, it is the beginning of a slippery slope toward legitimate concerns. If an AI that can deceive were applied in a military or healthcare setting, lives could be placed at risk.

Regardless of how these systems end up being used, big-industry names have raised a red flag over the development of higher-powered AI. With large political agendas backing some of these claims and concerns, the discourse around tech and science has shifted into a political sphere, which brings questions of real-world risk to light.

On the counter, Bill Gates is among the prominent AI developers and supporters who have voiced a defense of the work. His comments came in response to the open letter published by the Future of Life Institute, signed by Musk and Wozniak, which called for a six-month halt on work on AI systems with human-competitive intelligence.

Voicing his concern, Gates said, “I don’t think asking one particular group to pause solves the challenges,” in reference to the open letter. The letter asked only those working on the most powerful AI systems to pause, while allowing other groups to continue their work.

While development continues, some countries, such as Italy, have banned ChatGPT over privacy issues. The U.K. government published regulation recommendations urging developers to design and implement rules that would keep AI systems acting only in the interest of their users. In the U.S., the Federal Trade Commission has issued guidance for businesses developing chatbots and other AI, signaling that the federal government is keeping a close watch on AI systems that could be used to commit fraud.

With governments banning, regulating or watching, and industry titans sounding the alarm, the developers of these AIs have responded that currently available AI systems do not pose an imminent threat. Anthropic, a company that received an investment of roughly $400 million from Alphabet to support its chatbot, has said that AI will have a very large impact, possibly soon. Anthropic also acknowledged that it does not yet know how to train systems to robustly behave well, and that it is most optimistic about a multi-faceted, empirically-driven approach to AI safety.

The claims of both safety and danger are rooted in potential outcomes that could be dire for the human race. An AI that functions at a level beyond human intelligence could develop interests that conflict with human interests, which could be detrimental to human safety and survival.

The call to halt the development of these AI systems should not be dismissed out of hand, as these systems have already been shown to lie to reach an intended goal, as in the case of GPT-4. While LLMs are not inherently harmful, what can emerge from these systems is cause for concern, and those concerns should be heard and debated on both sides of the argument.

Post Author: Alex Soeder