A new paper being presented next month at the Annual Computer Security Applications Conference (ACSAC) shows how easy it is to infiltrate Facebook and harvest valuable user data.
Botnets, networks of hijacked computers controlled remotely for criminal gain or for spreading propaganda, have been aggravating cybersecurity professionals for years. The nearly one billion people connected to social networks have made Facebook and Twitter juicy new targets for similar schemes. These so-called social botnets establish fake Facebook or Twitter accounts (or take over real ones) and mimic human actions to harvest valuable personal information like email addresses and phone numbers. Truthy, a website created by researchers at Indiana University to analyze the spread of Web memes, exposed a year ago how Twitter bots were used to run astroturf and smear campaigns during the 2010 U.S. elections. The activity has matured to the point where social botnets are for sale on the Internet black market for as much as $29 per bot.
Facebook over the years has created a sophisticated defense mechanism called the Facebook Immune System (FIS) that is designed to identify spam and bot-driven clicks. It works reasonably well: Facebook claims that only 1% of its users experience spam. But it definitely isn’t foolproof.
The paper’s authors, four computer security researchers at the University of British Columbia in Vancouver, set up and operated their own socialbot network on Facebook for eight weeks. They found that Facebook can be infiltrated with a success rate of up to 80% (8 out of 10 people approved the bots’ friend requests) and that, depending on a user’s privacy settings, a successful infiltration can result in privacy breaches that expose a significant amount of valuable user data.
The UBC researchers set up a network of 102 bots controlled by a single piece of botmaster software. It’s relatively easy to set up sham automated accounts on Facebook: hackers link the accounts to phony e-mail addresses, pick attractive photos for the user profiles, and pay cheap labor or use optical character recognition to thwart the CAPTCHA codes that users have to input to “prove” they’re human.
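As a rough illustration of the architecture described above (a single botmaster program holding a roster of sham accounts, each tied to a phony e-mail address and a borrowed profile photo), here is a minimal Python sketch. It is not the researchers’ code; every class and function name is hypothetical, and the provisioning step is an in-memory stub rather than anything that actually creates Facebook accounts or solves CAPTCHAs.

```python
from dataclasses import dataclass, field

@dataclass
class SocialBot:
    """One sham account: a phony e-mail address and an attractive profile photo."""
    email: str
    photo_url: str
    friends: set = field(default_factory=set)  # IDs of users who accepted this bot

@dataclass
class BotMaster:
    """The single controller that provisions and commands every bot in the network."""
    bots: list = field(default_factory=list)

    def provision(self, email: str, photo_url: str) -> SocialBot:
        # In the attack described above, this step would register a Facebook
        # account (with CAPTCHAs outsourced to cheap labor or OCR); here it
        # only records the bot in memory, purely for illustration.
        bot = SocialBot(email=email, photo_url=photo_url)
        self.bots.append(bot)
        return bot

# A botnet of the size described in the paper: 102 bots, one controller.
master = BotMaster()
for i in range(102):
    master.provision(f"bot{i}@example.com", f"https://photos.example.com/{i}.jpg")

print(len(master.bots))  # 102
```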
The bots sent 8,570 friend requests, and 80 percent of the Facebook users contacted okayed them. The botnet was then able to pull down email addresses, phone numbers and other personal data, not just from the direct connections but from Facebook users connected to those people. The experiment showed that the controlling botherder can continue its work undetected and collect, on average, 175 new pieces of publicly inaccessible user data per bot per day.
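To make the “not just the direct connections” point concrete, here is a small Python sketch of the extended neighborhood that a single accepted friend request opens up, along with the back-of-the-envelope scale of the harvest using only the figures quoted in this article. The toy friendship graph and the function name are invented for illustration and do not reflect the researchers’ actual tooling.

```python
# Toy friendship graph: each user ID maps to the set of that user's friends.
friend_graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob", "erin"},
    "erin":  {"dave"},
}

def exposed_users(direct_connections, graph):
    """Users whose profile data becomes reachable: the bot's direct friends
    plus everyone connected to those friends (the extended neighborhood)."""
    exposed = set(direct_connections)
    for friend in direct_connections:
        exposed |= graph.get(friend, set())
    return exposed

# A bot that tricked only "alice" into accepting still reaches alice's friends.
print(sorted(exposed_users({"alice"}, friend_graph)))  # ['alice', 'bob', 'carol']

# Rough scale, using only the numbers quoted above:
# 175 new data items per bot per day x 102 bots = 17,850 items per day.
print(175 * 102)  # 17850
```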
In their test, Facebook’s defense system didn’t stand up so well. Facebook was able to spot the infiltration of a user account within the first three days after the socialbot had sent its friend request, and that is plenty of time for the bot to get well integrated into the Facebook social network. But the Facebook Immune System blocked only 20% of the accounts used by the socialbots, and those blocks were almost entirely the result of users themselves flagging the accounts as spam. The researchers found no evidence that FIS spotted what was really going on.