
How Facebook likes could profile voters for manipulation

2018-03-20 17:21 Last Updated At:18:21

Facebook "likes" can tell a lot about a person. Maybe even enough to fuel a voter-manipulation effort like the one a Trump-affiliated data-mining firm stands accused of — and which Facebook may have enabled.

FILE - This Jan. 17, 2017, file photo shows a Facebook logo displayed at a gathering of start-up companies at Station F in Paris. (AP Photo/Thibault Camus, File)

The social network is under fire after The New York Times and The Guardian newspaper reported that former Trump campaign consultant Cambridge Analytica used data, including user likes, inappropriately obtained from roughly 50 million Facebook users to try to influence elections.

Monday was a wild roller coaster ride for Facebook, whose shares plunged 7 percent in the stock's worst one-day decline since 2014. Officials in the EU and the U.S. sought answers, while Britain's information commissioner said she will seek a warrant to access Cambridge Analytica's servers because the British firm had been "uncooperative" in her investigation. The first casualty of that investigation was an audit of Cambridge that Facebook had announced earlier in the day; the company said it "stood down" that effort at the request of British officials.

Adding to the turmoil, The New York Times reported that Facebook security chief Alex Stamos will step down by August following clashes over how aggressively Facebook should address its role in spreading disinformation. In a tweet, Stamos said he's still fully engaged at Facebook but that his role has changed.

It would have been quieter had Facebook likes not turned out to be so revealing. Researchers in a 2013 study found that likes related to hobbies, interests and other topics can predict personal attributes such as sexual orientation and political affiliation. Computers analyze such data to look for patterns that might not be obvious, such as a link between a preference for curly fries and higher intelligence.
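To see how such predictions can work in practice, here is a minimal sketch, using invented data rather than the study's, of training a simple model on binary like vectors and then reading off which likes carry the signal.

```python
# Sketch only: not the 2013 study's code or data. Each user is a binary
# vector over page likes, and a simple linear model learns which likes
# correlate with a trait such as political affiliation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5_000, 300                          # hypothetical sizes
likes = rng.integers(0, 2, size=(n_users, n_pages))    # 1 = user liked page

# Synthetic ground truth: the trait loosely follows a handful of pages,
# standing in for real survey labels collected alongside the likes.
signal_pages = rng.choice(n_pages, size=10, replace=False)
trait = (likes[:, signal_pages].sum(axis=1) + rng.normal(0, 1, n_users) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# The largest coefficients point to the likes most predictive of the trait:
# the "curly fries" effect the researchers described.
top_pages = np.argsort(np.abs(model.coef_[0]))[-5:]
print("most predictive pages:", top_pages)
```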

Chris Wylie, a Cambridge co-founder who left in 2014, said the firm used such techniques to learn about individuals and create an information cocoon to change their perceptions. In doing so, he said, the firm "took fake news to the next level."

"This is based on an idea called 'informational dominance,' which is the idea that if you can capture every channel of information around a person and then inject content around them, you can change their perception of what's actually happening," Wylie said Monday on NBC's "Today." It's not yet clear exactly how the firm might have attempted to do that.

Late Friday, Facebook said Cambridge improperly obtained information from 270,000 people who downloaded an app described as a personality test. Those people agreed to share data with the app for research — not for political targeting. And the data included who their Facebook friends were and what they liked — even though those friends hadn't downloaded the app or given explicit consent.

Cambridge got limited information on the friends, but machines can use detailed answers from smaller groups to make good inferences about the rest, said Kenneth Sanford of the data science company Dataiku.
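A rough illustration of Sanford's point, again with invented data: a model fit on a small group that supplied detailed answers can then score a much larger group for whom only likes are known.

```python
# Hypothetical illustration: train on a small labeled sample ("app users"
# who answered questions), then infer the trait for a larger pool of
# "friends" where only like data is available. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_pages = 300

# Small labeled sample: like vectors plus self-reported answers.
app_likes = rng.integers(0, 2, size=(2_000, n_pages))
app_answers = (app_likes[:, :15].sum(axis=1) > 7).astype(int)  # synthetic labels

# Much larger unlabeled pool: likes only, no answers.
friend_likes = rng.integers(0, 2, size=(20_000, n_pages))

model = LogisticRegression(max_iter=1000).fit(app_likes, app_answers)
inferred = model.predict_proba(friend_likes)[:, 1]   # inferred trait scores
print("friends scored:", inferred.shape[0])
```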

Cambridge was backed by the conservative billionaire Robert Mercer, and at one point employed Stephen Bannon — later chief executive of President Donald Trump's campaign and a White House adviser — as a vice president. The Trump campaign paid Cambridge roughly $6 million, according to federal election records, although officials have more recently played down that work.

The type of data mining reportedly used by Cambridge Analytica is fairly common, but is typically used to sell diapers and other products. Netflix, for instance, provides individualized recommendations based on how a person's viewing behaviors fit with what other customers watch.
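The underlying mechanics are ordinary recommender math. The toy sketch below, with invented titles and viewing data rather than anything from Netflix's actual system, computes item-to-item similarity from a user-title matrix and suggests what to watch next.

```python
# Bare-bones "people who watched X also watched Y" recommender sketch:
# item-to-item cosine similarity over a user/title viewing matrix.
import numpy as np

titles = ["Drama A", "Drama B", "Comedy A", "Comedy B", "Docuseries"]
# Rows = users, columns = titles, 1 = watched.
views = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
])

# Cosine similarity between title columns.
norms = np.linalg.norm(views, axis=0)
sim = (views.T @ views) / np.outer(norms, norms)

def recommend(watched_index, k=2):
    """Return the k titles most similar to the one a user just watched."""
    scores = sim[watched_index].copy()
    scores[watched_index] = -1          # don't recommend the same title
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(titles.index("Drama A")))
```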

But that common technique can take on an ominous cast if it's connected to possible election meddling, said Robert Ricci, a marketing director at Blue Fountain Media.

Wylie said Cambridge Analytica aimed to "explore mental vulnerabilities of people." He said the firm "works on creating a web of disinformation online so people start going down the rabbit hole of clicking on blogs, websites etc. that make them think things are happening that may not be."

Wylie told "Today" that while political ads are also targeted at specific voters, the Cambridge effort aimed to make sure people wouldn't know they were getting messages aimed at influencing their views.

The Trump campaign has denied using Cambridge's data. The firm itself denies wrongdoing, and says it didn't retain any of the data pulled from Facebook and didn't use it in its 2016 campaign work.

Yet Cambridge boasted of its work after another client, Texas Republican Sen. Ted Cruz, won the Iowa caucuses in 2016.

Cambridge helped differentiate Cruz from similarly minded Republican rivals by identifying automated red light cameras as an issue of importance to residents upset with government intrusion. Potential voters living near the red light cameras were sent direct messages saying Cruz was against their use.

Even on mainstay issues such as gun rights, Cambridge CEO Alexander Nix said at the time, the firm used personality types to tailor its messages. For voters who care about tradition, it could push the importance of making sure grandfathers can offer family shooting lessons. For someone identified as introverted, a pitch might have described keeping guns for protection against crime.

It's possible that Cambridge tapped other data sources, including what Cruz's campaign app collected. Nix said during the Cruz campaign that the firm had five or six sources of data on each voter.

Facebook declined to make officials available for interviews and didn't immediately respond to requests for information beyond its statements Friday and Monday. Cambridge also didn't immediately respond to emailed questions.

Facebook makes it easy for advertisers to target users based on nuanced information about them. Facebook's mapping of the "social graph" — essentially the web of people's real-life connections — is also invaluable for marketers.

For example, researchers can look at people's clusters of friends and gain good insight into who is important and influential, said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. People who bridge different friend networks, for example, can have more influence when they post something, making them prime targets.
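The sketch below, using the open-source networkx library and a made-up friend graph rather than Facebook data, shows how a person bridging two clusters stands out on betweenness centrality, one standard measure of that kind of influence.

```python
# Hypothetical friend graph: two tight clusters joined by one person.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ana", "ben"), ("ben", "cal"), ("cal", "ana"),
                  ("dee", "eli"), ("eli", "fay"), ("fay", "dee")])
# "gus" belongs to both circles and links them.
G.add_edges_from([("ana", "gus"), ("gus", "dee")])

# Betweenness centrality: how often a person sits on shortest paths
# between others. Bridges between clusters score highest.
centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
# "gus" tops the list, the kind of account a targeting effort would prize.
```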

Two-thirds of Americans get at least some of their news on social media, according to the Pew Research Center. While people don't exist in a Facebook-only vacuum, it is possible that bogus information users saw on the site could later be reinforced by the "rabbit hole" of clicks and conspiracy sites on the broader internet, as Wylie described.

LONDON (AP) — The European Union said Tuesday that it's investigating Facebook and Instagram for suspected violations of the bloc's digital rulebook, including not doing enough to protect users from foreign disinformation ahead of EU-wide elections.

The European Commission, the EU's executive arm, said it's opening formal proceedings into whether parent company Meta Platforms breached the Digital Services Act, a sweeping set of regulations designed to protect internet users and clean up social media platforms under threat of hefty fines worth up to 6% of annual revenue.

European authorities are scrambling to safeguard elections amid official warnings that Russia is seeking to meddle with the vote in June, when citizens of the bloc's 27 nations pick lawmakers for the European Parliament.

The investigation includes an urgent request for Meta to provide information about its move to discontinue a key tool for monitoring elections.

“We have a well established process for identifying and mitigating risks on our platforms," Meta said in a statement. "We look forward to continuing our cooperation with the European Commission and providing them with further details of this work.”

Meta is being scrutinized “for suspected breach of DSA obligations to protect integrity of elections," European Commissioner Thierry Breton said in a social media post.

The Commission said it's looking into whether Meta is doing enough to curb the spread of “deceptive advertisements, disinformation campaigns and coordinated inauthentic behaviour” that could pose a risk to “electoral processes” and consumer protection.

Officials said they suspected Meta's content moderation system for advertisements was inadequate, allowing ads made with generative AI, including deepfakes, to be exploited by malicious foreign actors seeking to meddle in elections, even as the company makes money from them.

Experts worry that new generative AI systems could be used to disrupt the many elections being held around the world this year, by supercharging the ability to spread disinformation at scale.

The EU also suspects that Facebook and Instagram might be reducing the visibility in recommendation feeds of political content from accounts that post a lot of it, a practice known as shadowbanning, without being transparent about it with users, which would violate the DSA.

A third concern is Meta's decision to phase out CrowdTangle, a tool used by researchers, journalists and civil society groups for real-time monitoring of trending social media posts, including during elections. The Commission is giving Meta five days to respond with information on how it will remedy the lack of such a tool.

The Commission is also investigating whether Meta's mechanism for users to flag illegal content is good enough under the DSA, because it suspects it's neither easy to access nor user-friendly.

Brussels has been cracking down on tech companies since the DSA took effect last year, opening investigations into social media sites TikTok and X and e-commerce platform AliExpress. TikTok bowed to EU pressure last week and halted a reward feature on its new app after the Commission started demanding answers about it.

FILE - The Facebook logo is seen on a cell phone in Boston, USA, Oct. 14, 2022. The European Union said Tuesday, April 30, 2024, that it's scrutinizing Facebook and Instagram over a range of suspected violations of the bloc's digital rulebook, including not doing enough to protect users from foreign disinformation ahead of EU-wide elections. (AP Photo/Michael Dwyer, File)

