Reader Penguinisto writes: Recently, Microsoft put an AI experiment onto Twitter, naming it "Tay". The bot was built to be fully aware of the latest adolescent fixations (e.g. celebrities and similar), and to interact like a typical teen girl. In less than 24 hours, it inexplicably became a neo-Nazi sex robot with daddy issues.
As a matter of convenience, Hoque’s team used their immediate surroundings to develop a proof of concept: they tested the interaction-training system by conducting trial job interviews with ninety M.I.T. students. “In a technical university—where people are really, really technical—it’s possible that many people would have social difficulty,” Hoque explained. While the prototype runs locally on computers, Hoque, who recently completed his Ph.D. and is now at the University of Rochester, would like to make it widely available online, which he estimates would take two or three engineers between six months and a year to develop.
For on-campus career prep, he said, “The best thing to do is interact with a human, but that’s limited.” In a film of the experiment, the female coach addressed Participant No. He’s now seeking funding, and he said there has been interest from organizations that support autism research, as well as from private companies.
People asked for a tool with which they could practice human interaction privately—insulated from the insecurities created in social situations. Beyond job interviews, Hoque said, the program could be useful for helping people with social phobia linked to autism—the root of the project, which he hopes to pursue further—as well as public speaking, or even dating.
The hope is to make that system so smart and powerful it can solve problems human intelligence can’t.
According to Microsoft, Tay was built by “mining relevant [anonymous] public data” which was “modeled, cleaned, and filtered” to create her personality.
The filtering went out the window when she went live, though, and you can see the results above.
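Microsoft has not published what its "modeled, cleaned, and filtered" pipeline actually did, but the general shape of such a pass can be sketched. The blocklist and rules below are purely illustrative placeholders, not anything from Tay's real system:

```python
# A minimal sketch of a "clean and filter" pass over mined public text.
# Microsoft's actual pipeline is not public; BLOCKLIST here is a stand-in.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real list

def clean_and_filter(utterances):
    """Normalize whitespace/case and drop utterances with blocklisted terms."""
    kept = []
    for text in utterances:
        tokens = text.lower().split()
        if any(tok in BLOCKLIST for tok in tokens):
            continue  # filtered out before it can shape the bot's personality
        kept.append(" ".join(tokens))
    return kept
```

The obvious weakness, which Tay demonstrated, is that a filter applied only at training time does nothing about what users feed the bot once it is live.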
She could “talk” about any English-language subject by learning and recognizing key phrases and responding with pre-programmed responses. She became popular on ARPAnet when users realized that she sounded like a stuffy therapist, and she set the template for every chatbot since.
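The keyword-and-canned-response mechanism described above can be sketched in a few lines. The patterns here are illustrative toys, not the bot's actual script:

```python
import re

# A toy keyword-driven responder: scan the input for known key phrases
# and reply with a pre-programmed (often reflective) response.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no key phrase matches

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

Echoing the user's own words back as a question is what made the therapist persona so convincing despite the program understanding nothing.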
Within 24 hours she was tweeting like a Nazi. (Credit: Gerald Mellor/Twitter.) Microsoft didn’t intend for that to happen, of course.