Meta has a new AI chatbot called BlenderBot 3, and it's been engaging in some surprising, or perhaps not so surprising, conversations. The prototype is designed to chat about pretty much any topic, at least according to Meta, Facebook's parent company. But Zuckerberg's corporation might not have banked on what its own AI would have to say about his best-known creation.
Writers at BuzzFeed and Vice have been testing out BlenderBot to rather comical effect. Asked about Mark Zuckerberg, the AI told BuzzFeed data scientist Max Woolf: "he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!". Now that's a zinger. The chatbot also had some choice words for the world's most popular social media platform. When prompted by Vice, the Meta AI said that "since finding out they sold private data without permission or compensation, I've deleted my account" and that "since deleting Facebook my life has been much better".
The chatbot parrots information it finds online and makes this clear to the user: clicking on one of its responses shows where it sourced its answer. This means it will pick up all the misinformation that can be found on the internet too. Meta appears to have been aware that its chatbot was likely to spout any number of misleading or even offensive things, just as previous AI chatbots, such as OpenAI's GPT-3 and Microsoft's earlier bots, have done.
Microsoft's Tay infamously became a racist conspiracy theorist, as it learned from Twitter users who taught it such behaviour, which eventually forced the company to apologise for its "wildly inappropriate and reprehensible words and images".
Meta seems to have taken some lessons from this incident: users must accept that BlenderBot is "likely to make untrue or offensive statements". Meta's researchers have conceded that the AI has a "high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt".
But the chatbot appears to have a thing against its own creators. When The Guardian's reporter told BlenderBot that he wasn't a fan of Facebook's ethics, the bot agreed. "Me too!" it said. "That is why I stopped using it and just stick with reading books instead".