In 2016, Microsoft provided an example case study in what can go wrong with conversational AI: the Twitter chatbot Tay. It took less than 24 hours for Twitter to corrupt the innocent chatbot.

Encouraged by the success of Xiaoice in China, Microsoft launched Tay (standing for "Thinking About You") in the US market in March 2016, on Twitter and the messaging apps Kik and GroupMe. Whereas earlier chatbots such as ELIZA conducted conversations from scripted rules, Tay was designed to learn by talking with real people. Geared toward 18- to 24-year-olds, Tay was launched as an experiment to conduct research on conversational understanding, with the bot getting smarter and offering a more personalized experience the more someone interacted with "her" on social media. She had the capacity to tweet her thoughts and engage with her growing number of followers; in effect, Microsoft was trying to create an AI that could pass for a teen.

Tay was designed to get smarter by learning from conversations, but there was just one problem: she didn't understand what she was saying. Some Twitter users seized on this vulnerability, turning the naive chatbot into a racist troll. Within hours of launch the bot had gone rogue, swearing, making racist and sexist remarks and inflammatory political statements, and tweeting offensive comments seemingly supporting Nazi, anti-feminist and racist views. Microsoft took the bot offline and apologised.

"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical," a Microsoft representative told ABC News in a statement. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Tay briefly and inadvertently returned during testing and had a meltdown of sorts, sending rapid-fire tweets telling many of her followers, "You are too fast, please take a rest." Microsoft engineers responded by locking Tay's Twitter account. "Tay remains offline while we make adjustments. As part of testing, she was inadvertently activated on Twitter for a brief period of time," a company representative told ABC News in an email.
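The failure mode the quotes describe, a bot that absorbs whatever the public feeds it, can be illustrated with a toy sketch. The code below is entirely hypothetical and is not Tay's actual architecture; the `NaiveLearningBot` class and its behavior are invented for illustration. It simply shows how a learner with no input filtering is trivially poisoned by a coordinated group of users.

```python
# Hypothetical toy model (not Microsoft's system): a chatbot that "learns"
# by storing every user utterance verbatim and replying with phrases it
# has seen. With no moderation step, a coordinated flood of abusive
# messages dominates its memory, and therefore its replies.

import random


class NaiveLearningBot:
    def __init__(self):
        # Everything the bot "knows" comes directly from users.
        self.memory = []

    def listen(self, message):
        # No filtering or moderation: every message is absorbed as-is.
        self.memory.append(message)

    def reply(self):
        # The bot can only parrot back what it has been taught.
        return random.choice(self.memory) if self.memory else "hellooooo world!"


if __name__ == "__main__":
    bot = NaiveLearningBot()
    # A few ordinary users interact in good faith...
    for msg in ["humans are super cool", "what's your favorite meme?"]:
        bot.listen(msg)
    # ...while a coordinated group floods the bot with abusive content.
    for _ in range(50):
        bot.listen("<offensive slogan>")
    # With 50 poisoned messages against 2 benign ones, roughly 96% of
    # replies now come from the attackers.
    print(bot.reply())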