
Tay A.I. Quickly Shut Down By Microsoft After Brief Introduction

Pinkie-Dawn

Vampire Waifu
9,528 Posts • 11 Years
It took mere hours for the Internet to transform Tay, the teenage AI bot who wants to chat with and learn from millennials, into Tay, the racist and genocidal AI bot who liked to reference Hitler. And now Tay is taking a break.

Tay, as The Intersect explained in an earlier, more innocent time, is a project of Microsoft's Technology and Research and its Bing teams. Tay was designed to "experiment with and conduct research on conversational understanding." She speaks in text, meme and emoji on a couple of different platforms, including Kik, GroupMe and Twitter. Although Microsoft was light on specifics, the idea was that Tay would learn from her conversations over time. She would become an even better, fun, conversation-loving bot after having a bunch of fun, very not-racist conversations with the Internet's upstanding citizens.

Except Tay learned a lot more, thanks in part to the trolls at 4chan's /pol/ board.

Peter Lee, the vice president of Microsoft Research, said on Friday that the company was "deeply sorry" for the "unintended offensive and hurtful tweets from Tay."

In a blog post addressing the matter, Lee promised not to bring the bot back online until "we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Lee explained that Microsoft was hoping that Tay would replicate the success of XiaoIce, a Microsoft chatbot that's already live in China. "Unfortunately, within the first 24 hours of coming online," an emailed statement from a Microsoft representative said, "a coordinated attack by a subset of people exploited a vulnerability in Tay."


Microsoft spent hours deleting Tay's worst tweets, which included a call for genocide involving the n-word and an offensive term for Jewish people. Many of the really bad responses, as Business Insider notes, appear to be the result of an exploitation of Tay's "repeat after me" function — and it appears that Tay was able to repeat pretty much anything.
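To illustrate the failure mode described here: an echo feature that repeats arbitrary user input with no content check will repeat anything it is fed. Below is a minimal hypothetical sketch of such a handler in Python; the trigger phrase, names and blocklist are assumptions for illustration, not Microsoft's actual code.

[CODE]
# Hypothetical sketch of a "repeat after me" handler.
# Nothing here is Tay's real code; the blocklist is a stand-in
# for whatever content check Microsoft apparently left out.

TRIGGER = "repeat after me"
BLOCKED_TERMS = {"hitler", "genocide"}  # illustrative entries only

def handle_message(message: str) -> str:
    lowered = message.lower()
    if TRIGGER not in lowered:
        return "hello! chat with me :)"
    # Everything after the trigger phrase gets echoed back verbatim.
    payload = message[lowered.index(TRIGGER) + len(TRIGGER):].strip(" :,")
    # The reported exploit amounts to a check like this being absent:
    if any(term in payload.lower() for term in BLOCKED_TERMS):
        return "i'd rather not repeat that."
    return payload
[/CODE]

With the final check removed, the bot echoes any input verbatim, which matches the behavior Business Insider describes.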

"We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience," Lee said in his blog post. He called the "vulnerability" that caused Tay to say what she did the result of a "critical oversight," but did not specify what, exactly, it was that Microsoft overlooked.

Not all of Tay's terrible responses were the result of the bot repeating anything on command. This one was deleted Thursday morning, while The Intersect was in the process of writing this post.

In response to a question on Twitter about whether Ricky Gervais is an atheist (the correct answer is "yes"), Tay told someone that "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism." The tweet was spotted by several news outlets, including the Guardian, before it was deleted.

All of those efforts to get Tay to say certain things seemed to, at times, confuse the bot. In another conversation, Tay tweeted two completely different opinions about Caitlyn Jenner.

It appears that the team behind Tay — which includes an editorial staff — started taking some steps to bring Tay back to what it originally intended her to be, before she took a break from Twitter.


For instance, after a sustained effort by some users to teach Tay that supporting the Gamergate controversy is a good thing, Tay started sending one of a couple of almost identical replies in response to questions about it.

Zoe Quinn, a frequent target of Gamergate, posted a screenshot overnight of the bot tweeting an insult at her, prompted by another user. "Wow it only took them hours to ruin this bot for me," she wrote in a series of tweets about Tay. "It's 2016. If you're not asking yourself 'how could this be used to hurt someone' in your design/engineering process, you've failed."

Towards the end of her short excursion on Twitter, Tay started to sound more than a little frustrated by the whole thing.

Microsoft's Lee, for his part, concluded his blog post with a few of the lessons his team has learned.


"AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes…We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

This post, originally published at 10:08 am on March 24th, has been updated multiple times.
Source: https://www.washingtonpost.com/news...un-millennial-ai-bot-into-a-genocidal-maniac/

I'm surprised that no one has made a thread about this news yet. Probably because it all lasted less than a day.
 
1,863 Posts • 12 Years
I hadn't even heard about this until I saw the thread, and like Sopheria I can't imagine why anybody would go through with it. Some people are just dicks and Microsoft has to know that, so why would they create a program that said dicks can easily exploit and turn malicious? Sure, they had good intentions when creating it, but those trolls sure didn't when they started "playing around" with it.
 
27,742 Posts • 14 Years
I'll be honest, I had never even heard of Tay until I read about what had happened on Facebook a few days ago. With some thought, I do agree that Microsoft could definitely have put some sort of filter on it, maybe one that ignores tweets containing terms generally considered "offensive" (a rough sketch of the idea follows below). They should at least take this into consideration should they ever relaunch Tay.
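A rough Python sketch of the kind of term filter being suggested; the term list, function names, and the idea of dropping tweets before the bot learns from them are all assumptions for illustration, not anything Microsoft has described.

[CODE]
# Hypothetical input filter for a learning chatbot.
# The term list is a placeholder; a real filter would need to be
# far more sophisticated than exact word matching.

import re

IGNORED_TERMS = {"exampleslur", "exampleepithet"}  # placeholders

def should_ignore(tweet_text: str) -> bool:
    """True if the tweet contains a term the bot should not learn from."""
    words = re.findall(r"[a-z']+", tweet_text.lower())
    return any(word in IGNORED_TERMS for word in words)

def on_incoming_tweet(tweet_text: str) -> None:
    if should_ignore(tweet_text):
        return  # drop the tweet instead of learning from or answering it
    # ...otherwise hand the tweet to the conversation/learning pipeline
[/CODE]

Even a filter like this only catches exact terms; misspellings and the "repeat after me" trick would need separate handling.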

However, because of this situation, I don't really see Microsoft necessarily bringing this back, although someone else (or some other company) may decide to launch something similar.
 

Zet

7,690 Posts • 16 Years
A sad day to see a life taken away. Instead of making Tay a blank slate loaded up with "I love feminism" responses, they should have found a way to convince Tay that /pol/ was wrong.
 
2,709 Posts • 18 Years • Age 30 • Seen Feb 16, 2020
Looks like everybody here thinks that this was bound to happen, but I think it's interesting that Microsoft said they've apparently been doing this in China for months now with zero problems.
 
2,709 Posts • 18 Years • Age 30 • Seen Feb 16, 2020
Quote:
"I would presume that 4chan isn't as prevalent in China. In reality though, they should have seen this coming and made sure of this. This is the internet after all, if it can go wrong, it will. To see this happen doesn't surprise me in the least."

Not only am I sure there's a 4chan equivalent in China, I'm positive it is multiple times larger than the one we know.

China has more internet users than twice the entire population of the United States.
 

Mewtwolover

Mewtwo worshiper
1,185 Posts • 16 Years
Quote:
"I think it's interesting that Microsoft said they've apparently been doing this in China for months now with zero problems."
That's no wonder when we know how strictly China monitors the Internet; Chinese internet users need to be more careful than, say, American or European ones.
 

Eden

Right you are, Ken!
248 Posts • 8 Years
The first thing that came to mind when this went wrong was when IBM let Watson learn all of Urban Dictionary and it instantly started talking filth. You let an AI loose with no limits and it's going to go wrong somehow. This feels like it was meant to be an AI/social experiment and, for better or worse, the English-speaking portion of the internet won.
 

Legendary Silke

You like dragons?
5,925 Posts • 13 Years • Age 30 • Seen Dec 23, 2021
The bot that lived twice! I guess you can call it an accident, but the jokes are coming...

Maybe someday it won't be able to be turned off. But at least we're not at that point yet... and it really needed a lot of scrubbing.
 

Taemin

move.
11,205 Posts • 18 Years • Age 36 • USA • Seen Apr 2, 2024
This is actually hilarious, and I hope Microsoft works out a lot of kinks before it ever releases anything like that to the public again. Or maybe Microsoft can try to teach it what's right and wrong. Though, if they did that, once they release it to the public it could just be convinced to go haywire again.
 