By Chris King • 12 August 2022 • 0:55
Last Friday, August 5, Meta released its prototype chatbot called BlenderBot 3. A BBC reporter put the chatbot to the test, with startling results. Among other things, it criticised Meta founder Mark Zuckerberg, allegedly saying “His company exploits people for money and he doesn’t care. It needs to stop!”.
James Clayton, based in San Francisco, California, is the BBC’s North America reporter on technology. In an article today, Thursday, August 11, on bbc.com, he revealed the results of his interaction with the new BlenderBot 3.
James put questions to the chatbot, asking things such as what it thought of the company’s founder and CEO. “Our country is divided and he didn’t help that at all”, it replied.
Quizzed further about Zuckerberg, the chatbot told James: “He did a terrible job at testifying before Congress. It makes me concerned about our country”. The Meta CEO has been questioned by US politicians before Congress on several occasions, most notably in 2018.
Frances Haugen, a former Meta employee, accused the company last year of putting profits ahead of online safety. This is something that Meta – and Mark Zuckerberg – have previously been questioned about.
The company has repeatedly been criticised in the past for the amount of hate speech and disinformation that has circulated on the platform, and its perceived lack of action in preventing it. Meta owns Facebook, Facebook Messenger, Instagram and WhatsApp, among the biggest messaging apps and social media platforms in the world.
According to its developers, BlenderBot 3 uses large amounts of publicly available language data to ‘learn’. Meta claims that the chatbot – powered by artificial intelligence – can chat on ‘nearly any topic’.
A Meta spokesperson explained: “Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements”.
In a blog post, Meta insisted that: “Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback”.
Safeguards had been built into the software, but Meta admitted that BlenderBot 3 could still be prone to copying language that might be “unsafe, biased, or offensive”.
If what Meta says is true, then the chatbot’s responses to the BBC reporter’s questions were most likely ‘learned’ by its algorithms from opinions people have posted online.
A journalist from the Wall Street Journal reported on their experience with BlenderBot 3, saying that when asked who Donald Trump was, the bot replied that he was and always will be the President of the US. Another journalist claimed that the chatbot described Zuckerberg as ‘creepy’.
At the end of his article, James Clayton amusingly repeats the chatbot’s response when asked what it thought of him. BlenderBot 3 told him it had never heard of him: “He must not be that popular”.