Microsoft created a chatbot that tweeted about its admiration for Hitler and used wildly racist slurs against black people before it was shut down.

The company made the Twitter account as a way of demonstrating its artificial intelligence prowess. But it quickly started sending out offensive tweets.

“bush did 9/11 and Hitler would have done a better job than the monkey we have now,” it wrote in one tweet. “donald trump is the only hope we've got.” Another tweet praised Hitler and claimed that the account hated the Jews.

Those widely publicised and offensive tweets appear to have led to the account being shut down while Microsoft looks to improve it and make it less likely to engage in racism.

The offensive tweets appear to be a result of the way the account was made. Tay was created as an attempt to have a robot speak like a millennial, and describes itself on Twitter as “AI fam from the internet that’s got zero chill”. It is doing exactly that, including adopting the most offensive ways that millennials speak.

The account seems to use artificial intelligence to watch what is being tweeted at it and then push that material back into the world in the form of new tweets. Its learning mechanism appears to take parts of things that have been said to it and throw them back out, so if people say racist things to it, those same messages will be pushed out again as replies. Many of the people tweeting at it appear to have been deliberately pranking the robot by forcing it to learn offensive and racist language.
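The danger of that kind of echo-learning can be illustrated with a toy sketch. The snippet below is purely hypothetical and is not Microsoft's actual system; the `ParrotBot` class and its methods are invented for illustration. It shows why a bot that absorbs user messages without any filtering will repeat whatever hostile users feed it.

```python
import random

class ParrotBot:
    """Toy sketch of a learn-by-echo chatbot (hypothetical, not Tay's real design).

    It stores fragments of what users say and reuses them in replies,
    which is why hostile input quickly produces hostile output.
    """

    def __init__(self):
        self.learned = []  # fragments harvested from incoming messages

    def hear(self, message):
        # Naively absorb the message, with no content filtering at all.
        self.learned.append(message)

    def reply(self):
        # Echo back a previously learned fragment, if any.
        if not self.learned:
            return "zero chill"
        return random.choice(self.learned)

bot = ParrotBot()
bot.hear("robots are great")
print(bot.reply())  # repeats the fragment a user taught it
```

Because `reply` draws only from `learned`, the bot's output is exactly as good, or as bad, as its input: a coordinated group feeding it racist messages turns every reply into one.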