Meta, Facebook's parent company, was forced to pull its own AI demo after it generated instructions for making napalm and produced racist content.
The AI, known as "the Galactica project", was paused by Meta after it started producing hateful comments and inappropriate content.
AI stands for artificial intelligence: the simulation of human intelligence processes by machines, most often computer systems. AI has applications such as natural language processing, machine vision, speech recognition, and expert systems, and it centers on learning, reasoning, and self-correction.
Self-correction processes are designed to fine-tune algorithms continuously and ensure they produce the most accurate results possible.
Reasoning processes are about choosing the right algorithm to reach a desired outcome. Learning processes focus on acquiring data and creating rules for turning that data into actionable information. These rules are what we call algorithms.
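The "learning" and "self-correction" loop described above can be sketched in a few lines of Python. This is a deliberately tiny illustration of the general idea (fitting one parameter by gradient descent), not anything resembling Galactica's actual training code:

```python
# Toy illustration of "learning" and "self-correction": fit y = w * x to
# data points by repeatedly adjusting w to reduce the prediction error.
# Plain gradient descent on a made-up dataset, purely for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # the single "rule" (parameter) the model learns from the data
lr = 0.05  # learning rate: how strongly each correction is applied

for step in range(200):
    # Measure how wrong the current rule is (gradient of mean squared error).
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Self-correction: nudge w in the direction that reduces the error.
    w -= lr * grad

print(round(w, 2))  # settles near 2.04, the best-fit slope for this data
```

A large language model like Galactica does the same thing in spirit, except with billions of parameters instead of one, and text prediction error instead of a line fit.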
What is Galactica?
Galactica was pitched as a step toward the ideal scientific neural-network assistant: a system designed to handle information overload by finding and organizing information on a topic, leaving the user free to make plans and decisions based on what it surfaces.
Why was Galactica made?
Galactica was made to be a scientific assistant, and its training data consisted of research papers. It is a large language model with billions of parameters trained on billions of data points. Its training corpus drew on forty-eight million research papers, eight million lecture notes and textbooks, and two million code samples, yielding a dataset of 106 billion tokens. Galactica was even able to introduce itself, since generating scientific text is exactly what it was built to do. It was designed by Meta, the owner of Facebook, to help with retrieving scientific knowledge.
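A "token" in that 106-billion figure is a small chunk of text, the unit a language model actually reads. The sketch below shows how a corpus gets measured in tokens; it uses naive whitespace splitting for clarity, whereas real models such as Galactica use subword tokenizers (e.g. byte-pair encoding):

```python
# Rough illustration of measuring a corpus in tokens. Whitespace splitting
# is a simplification; production tokenizers split text into subword units.

def tokenize(text):
    # Naive tokenizer: lowercase the text and split on whitespace.
    return text.lower().split()

corpus = [
    "Galactica was trained on research papers.",
    "The dataset contained 106 billion tokens.",
]

total_tokens = sum(len(tokenize(doc)) for doc in corpus)
print(total_tokens)  # 12
```

Scale that counting up from two sentences to forty-eight million papers and you arrive at corpus sizes in the tens of billions of tokens.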
The misfires of Facebook’s AI
But when it was put to the test, the AI did not behave as its owners and developers intended. It produced hateful comments and inappropriate content, which led to its removal.
The Galactica project's demo lasted only a few days, because when testers used it, they found the AI was prone to generating confident nonsense; while some of it was harmless, some of the content was downright dangerous.
Among other things, the demo gave its testers incorrect instructions for making napalm in a bathtub. Napalm is an incendiary substance used in weapons.
Some people asked the AI about various scientific topics, but all it did was mirror the question back rather than produce the information requested. Galactica also told its audience about the supposed benefits of being white and of eating crushed glass.
The blame cannot be piled entirely on Meta's shoulders, because the AI came with a warning that language models can hallucinate, and that there is no guarantee of truthful or reliable output from them. The warning noted that even large models trained on high-quality data, such as Galactica, are not spared, and that people should never trust advice from a language model without some kind of verification.
Meta's further words of warning noted that the system performs poorly when there is little information on a topic, and that language models can be very convincing but very wrong.
Galactica's output can come across as overconfident, sounding entirely real and right while being subtly wrong in the background, on important points too. Because of the errors it produced, the AI has been called a bottomless source of adversarial examples for research on alignment, attribution, and hallucination.
However, some researchers found that Galactica was better suited to mathematical exploration than anything else, explaining that the AI demonstrated real potential in some applications involving mathematical content.
One tester said that while some of the falsely generated results might be merely amusing, others could be mistaken for truth, and that would be genuinely dangerous where scientific research is concerned. Another said that all Galactica does is produce grammatically correct text that feels real. Because of that, most readers would be misled: the text is biased and wrong, yet the wrongness is hard to detect, and that would influence how people think, especially if the output slipped into scientific submissions that felt genuine.
Meta ended the group testing session with gratitude, thanking all those who participated and tried the Galactica model demo.
That was when it announced it was pausing the AI, saying it appreciated the feedback sent by the community. However, Meta reportedly left the door open for those who want to learn more about Galactica, and for researchers who want to build on the work published in its paper.
When the Meta team brought forth Galactica, it seemed a clever way to find, distill, and synthesize scientific knowledge. Unfortunately, the AI proved not developed enough for public use: its output cannot be controlled, and much of what it produces is wrong, inappropriate, or far from what the user asked for. Releasing an AI with such a warning attached is a sure sign the team knew it wasn't ready. Whether they were seeking public criticism or help may never be known; what is clear is that the AI needs to go back to the drawing board.
There's more work to be done before Galactica can deliver the results Meta intended, and before it can do so without being racist, of course.
As this is just a first step into the metaverse and AI, the hope and belief is that Galactica gets fixed and becomes part of the new generation of the internet.