The new AI service has already been documented sharing incorrect information alongside real pictures.
Google’s recent introduction of Bard, an AI service nearly identical to ChatGPT and billed as a “ChatGPT killer,” was seen failing in a test of capability. Google revealed the service on Monday, Feb. 6, and in its first public demonstration, Bard was shown giving an incorrect answer to the question presented. When asked to describe discoveries from the James Webb Space Telescope (JWST), Bard displayed genuine JWST pictures while falsely proclaiming that the JWST was the first telescope to photograph a planet outside of our solar system; in fact, the European Southern Observatory’s Very Large Telescope captured the first image of an exoplanet in 2004.
AI systems such as Bard and ChatGPT, while often perceived as artificial intelligence, are statistical language systems that sift through enormous volumes of text and build sentences one word at a time based on probability. At each step, the model strings words together by choosing whichever word is most likely to continue the text coherently, given everything written so far. Google’s Bard and OpenAI’s ChatGPT are far from true artificial intelligence. These systems simply assemble sentences from patterns in their training data; neither is individually thinking, and each is confined to those rails, incapable of thought outside of interaction with the people using it.
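As a minimal sketch of that idea, the toy Python program below picks each next word by sampling from a table of word-to-word probabilities. The words and numbers here are invented purely for illustration; real systems like Bard and ChatGPT learn billions of such statistics from huge text corpora and consider far more context than one preceding word, but the underlying operation, choosing a statistically likely next word, is the same.

```python
import random

# A toy "language model": for each preceding word, the probability of each
# possible next word. Real systems learn these statistics from enormous
# volumes of text; everything here is made up for illustration.
next_word_probs = {
    "<start>": {"the": 0.6, "a": 0.4},
    "a": {"telescope": 1.0},
    "the": {"telescope": 0.5, "planet": 0.3, "data": 0.2},
    "telescope": {"captured": 0.7, "orbited": 0.3},
    "captured": {"images": 0.8, "light": 0.2},
    "planet": {"orbited": 0.6, "captured": 0.4},
}

def generate(max_words=6):
    """String words together by repeatedly sampling a likely next word."""
    word, sentence = "<start>", []
    for _ in range(max_words):
        choices = next_word_probs.get(word)
        if not choices:  # no statistics for this word: stop generating
            break
        words, weights = zip(*choices.items())
        word = random.choices(words, weights=weights)[0]
        sentence.append(word)
    return " ".join(sentence)

print(generate())  # e.g. "the telescope captured images"
```

Note that nothing in this process checks whether the output is true, which is why a system built this way can confidently state a falsehood, as Bard did in its demonstration.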
The growing capability of these systems has brought further issues into the limelight. Across the country, students are utilizing these services to complete simple homework assignments or even to get assistance on exams. The future of these services is uncertain at best, and with the systems advancing quickly, it remains to be seen how they will impact universities.
Google’s Bard is in closed beta testing, with only a select percentage of users who apply allowed access, while ChatGPT is open to public use once an account is created. As a large language model (LLM), Bard has been nicknamed the “ChatGPT killer,” but with its limited accessibility and accuracy, as well as a debut statement that was factually false, Google’s Bard has started the “AI race” half a lap behind.
The only other mainstream LLM system similar to Bard and ChatGPT is Bing’s chatbot “Sydney,” which Microsoft reportedly began testing roughly six years ago and which saw its first public usage in 2021.
With chatbots moving to mobile phones and accessible from nearly anywhere, having these systems do busy work, complete minor assignments or even write entire essays has become easier than ever, and professors are struggling to differentiate between LLM-written assignments and human-written work.
New systems are being released to keep up with this new wave of LLMs, such as GPTZero and OpenAI’s AI Text Classifier. Both tools output a score for the probability that a sentence, paragraph or longer passage was created by an AI, and professors can use these services to “catch” the AI systems in the act. With such detectors, worries about students cheating, or completing homework without doing any of the actual work, may be curbed.
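How do these detectors work? Neither company has published its full method, but one widely discussed signal is “perplexity”: how predictable a passage is to a language model, since machine-generated text tends to be more statistically predictable than human writing. The Python sketch below is a hedged illustration of that one signal, not GPTZero’s or OpenAI’s actual code, scoring a passage with the small, open GPT-2 model via the Hugging Face transformers library.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small open language model to score how predictable a text is.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Lower perplexity = more predictable text, one (imperfect) hint of AI authorship."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # When labels are supplied, the model returns the average
        # next-token prediction loss; exponentiating gives perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The James Webb Space Telescope took the very first "
                 "pictures of a planet outside of our own solar system."))
```

A low score is only a hint, not proof: detectors built on signals like this are known to mislabel human writing at times, so professors will likely need to treat their output as one piece of evidence rather than a verdict.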