
Meta fires Galactica AI over controversial life choices

22 November 2022


Left to itself, Galactica became as balanced as a Kanye West Twitter post

Meta has been forced to "pause" a public AI experiment after it started spewing misinformation and offensive comments.

The Galactica project was meant to solve "information overload in science" by using AI to sift through 48 million papers and textbooks. However, those testing the system discovered it was churning out derogatory and dangerous information.

Galactica claimed that the white race was genetically superior and saw benefits in eating freshly crushed glass. It also gave incorrect instructions on how to make napalm in a bath, which was really irritating for those who need their napalm to work properly.

Within days of releasing the demo, bosses at Meta had to dramatically pull it from the internet, blaming the limitations of current language models.

The firm also admitted the system does not perform so well on topics where less information is available.

They added: "Language Models are often confident but wrong. Some of Galactica's generated text may appear very authentic and highly-confident, but might be subtly wrong in important ways. This is particularly the case for highly technical content."

AI adoption has been largely hamstrung by the garbage data used to build these systems. This is causing problems for many industries where accuracy is important. Many firms believe that Google Translate can translate documents and save them the cost of paying a human, only to find that they, at best, look like idiots or, at worst, get themselves dragged into an expensive court case.

