FIRST PUBLISHED IN THE ATLANTIC, MAY 2011
ONE DAY LAST February, a Twitter user in California named Billy received a tweet from @JamesMTitus, identified in his profile as a “24 year old dude” from Christchurch, New Zealand, who had the avatar of a tabby cat. “If you could bring one character to life from your favorite book, who would it be?” @JamesMTitus asked. Billy tweeted back, “Jesus,” to which @JamesMTitus replied: “honestly? no fracking way. ahahahhaa.” Their exchange continued, and Billy began following @JamesMTitus. It probably never occurred to him that the Kiwi dude with an apparent love of cats was, in fact, a robot.
JamesMTitus was manufactured by cyber-security specialists in New Zealand participating in a two-week social-engineering experiment organized by the Web Ecology Project. Based in Boston, the group had conducted demographic analyses of Chatroulette and studies of Twitter networks during the recent Middle East protests. It was now interested in a question of particular concern to social-media experts and marketers: Is it possible not only to infiltrate social networks, but also to influence them on a large scale?
The group invited three teams to program “social bots”—fake identities—that could mimic human conversation on Twitter, and then picked 500 real users on the social network, the core of whom shared a fondness for cats. The Kiwis armed JamesMTitus with a database of generic responses (“Oh, that’s very interesting, tell me more about that”) and designed it to systematically test parts of the network for which tweets generated the most responses, and then to talk to the most responsive people.
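The probe-then-target strategy described above can be sketched in a few lines. This is a hypothetical illustration, not the Web Ecology Project’s actual code (which the group published separately); the function names, the stubbed network, and the reply probabilities are all invented for the example:

```python
import random
from collections import Counter

# Canned replies like the ones JamesMTitus was armed with.
GENERIC_REPLIES = [
    "Oh, that's very interesting, tell me more about that",
    "Yeah, so true!",
    "Right on bro",
]

# Hypothetical probe tweets the bot might broadcast.
PROMPTS = [
    "If you could bring one character to life from your favorite book, who would it be?",
    "What's the best thing your cat has ever done?",
]


class StubNetwork:
    """Toy stand-in for Twitter: each user replies with a fixed probability."""

    def __init__(self, users):
        self.users = users   # {username: probability of replying to a prompt}
        self.sent = []       # log of (user, text) replies the bot sends out

    def broadcast(self, prompt):
        """Tweet a prompt; return the users who happened to reply."""
        return [u for u, p in self.users.items() if random.random() < p]

    def reply(self, user, text):
        self.sent.append((user, text))


def probe(network, prompts, rounds=3):
    """Systematically test the network: count which users reply most often."""
    replies_per_user = Counter()
    for _ in range(rounds):
        for prompt in prompts:
            for user in network.broadcast(prompt):
                replies_per_user[user] += 1
    return replies_per_user


def engage(network, replies_per_user, top_n=2):
    """Then talk to the most responsive people, using generic replies."""
    for user, _count in replies_per_user.most_common(top_n):
        network.reply(user, random.choice(GENERIC_REPLIES))


net = StubNetwork({"billy": 0.9, "lurker": 0.1, "catfan": 0.7})
engage(net, probe(net, PROMPTS))
```

The point of the two-phase design is that the bot spends its first tweets gathering data, so its later, scripted conversation is aimed only at people already shown to be likely to respond.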
After the first week, the teams were allowed to tweak their bot’s code and to launch secondary identities designed to sabotage their competitors’ bots. One team unleashed @botcops, which alerted users, “You might want to be suspicious about JamesMTitus.” In one exchange, a British user confronted the alleged bot: “What do you say @JamesMTitus?” The robot replied obliquely, “Yeah, so true!” The Brit pressed: “Yeah so true! You mean I should be suspicious of you? Or that @botcops should be challenged?” JamesMTitus evaded detection with a vague tweet back—“Right on bro”—and acquired 109 followers over two weeks. Network graphs subsequently showed that the three teams’ bots had insinuated themselves into the center of the target network.
Can one person controlling an identity, or a group of identities, really shape social architecture? Actually, yes. The Web Ecology Project’s analysis of 2009’s post-election protests in Iran revealed that only a handful of people accounted for most of the Twitter activity there. The attempt to steer large social groups toward a particular behavior or cause has long been the province of lobbyists, whose “astroturfing” seeks to camouflage their campaigns as genuine grassroots efforts, and company employees who pose on Internet message boards as unbiased consumers to tout their products. But social bots introduce new scale: they run off a server at practically no cost, and can reach thousands of people. The details that people reveal about their lives, in freely searchable tweets and blogs, offer bots a trove of personal information to work with. “The data coming off social networks allows for more-targeted social ‘hacks’ than ever before,” says Tim Hwang, the director emeritus of the Web Ecology Project. And these hacks use “not just your interests, but your behavior.”
A week after Hwang’s experiment ended, Anonymous, a notorious hacker group, penetrated the e-mail accounts of the cyber-security firm HBGary Federal and revealed a solicitation of bids by the United States Air Force in June 2010 for “Persona Management Software”—a program that would enable the government to create multiple fake identities that trawl social-networking sites to collect data on real people and then use that data to gain credibility and to circulate propaganda.
“We hadn’t heard of anyone else doing this, but we assumed that it’s got to be happening in a big way,” says Hwang. His group has published the code for its experimental bots online, “to allow people to be aware of the problem and design countermeasures.”
The Web Ecology Project has started a spin-off group, called Pacific Social, to plan future experiments in social networking, like creating “connection-building” bots that bring together pro-democracy activists in a particular country, or ones that promote healthy habits. “There’s a lot of potential for a lot of evil here,” admits Hwang. “But there’s also a lot of potential for a lot of good.”