Big tech firms, including Facebook and Twitter, were held responsible for the misinformation campaigns that helped shape the present political climate.
Multiple congressional hearings finally got them to admit one thing: they had been too slow to stop the abuse of their platforms to shape public opinion and skew political results.
The midterms posed a new question: would the social sites be able to protect the integrity of the vote?
With the votes counted, it seems they’ve managed to hold off assaults this time. But it’s too early to rest easy, experts say.
‘I remain convinced that the real action will be after the election, when we start to see the voter fraud and hacking claims roll in,’ said Bret Schafer of the Alliance for Securing Democracy, which runs a tool that tracks propaganda bot activity.
Alex Stamos, Facebook’s former head of security, has said he expects bad actors to aggressively spread misinformation the day after the vote in an effort to undermine the integrity of the result.
The sites have used a combination of measures to counter voter-suppression efforts and misinformation.
On Monday, Facebook announced it had removed 115 accounts that had displayed ‘inauthentic’ behaviour, the term the site uses for fake or propaganda accounts. Thirty of those accounts were on the main Facebook site, while 85 were removed from image-sharing site Instagram. It had been told about the accounts by US law enforcement, it said.
‘So far we haven’t seen anything unexpected,’ Facebook said.
Earlier, Twitter said it had removed 10,000 bot accounts from its platform after being alerted by the Democratic Congressional Campaign Committee. The ‘campaign’ was found to be coming from a message board popular with members of the alt-right.
An eye-opening report from NBC News described how misinformation efforts telling people the vote was on Wednesday, not Tuesday, were apparently being thwarted by Twitter’s moderators or algorithms.
Independent analysis from the Atlantic Council think-tank suggests these posts reached only a handful – around 5,000 – of Twitter users. That’s 5,000 too many, perhaps, but Twitter will consider that a resounding success on a platform where viral tweets can quickly garner an audience of millions.
Some misinformation did slip through the cracks, however.
In one example, retweeted more than 4,000 times, a misleadingly edited video intercut images of flag burning with clips of a well-known US news anchor laughing. But soon after being discovered, the video was blocked on the platform.
Bot activity was muted and, according to one activity tracker, actually lower than on previous days over the past month.
In the hotly contested and highly polarised race between Republican Senator Ted Cruz and Democratic hopeful Beto O’Rourke (results have come in and O’Rourke lost, though by a small margin), bot-powered posts made up just 15-16% of posts for each candidate.
‘One possible explanation for low bot behaviour in Texas is more organic tweets – due to the popularity of the race – leading to a comparatively lower portion of bot activity,’ said Max Jenkins-Goetz of RoBhat Labs, the firm that created the bot tracker botcheck.me.
However, there were instances in which bots seemed to be amplifying views from genuine accounts, though they mostly stopped short of pushing misinformation.
The US Department of Homeland Security set up what it called a ‘situational awareness’ room to monitor various aspects of election security, including misinformation efforts and the integrity of election infrastructure, such as the electronic voting machines that many had warned were hopelessly insecure.
On social media, the DHS said it had discovered ‘intentional misinformation,’ but that it had been ‘rapidly addressed’ by the network.
‘We are not seeing any malicious activity associated with any of these technical glitches,’ the department said of ‘sparse’ voting machine issues.