Elizabeth Warren calls on Big Tech to step up the fight against fake news
The 2016 U.S. presidential election was defined — if not decided — by disinformation. While the fallout of that election cycle put significant pressure on social media companies like Facebook, Twitter, and YouTube to take action against propaganda and outright falsehoods, that content still crops up and spreads widely across those platforms. Disinformation may once again be a defining feature of the 2020 election, but if the dust settles with Elizabeth Warren in the presidency, she has a plan to ensure the problem isn't as prevalent in future election cycles.
This week, Warren announced a 14-point plan for fighting disinformation online, both as a candidate and as an elected official. She takes a three-pronged approach, starting with more transparency and integrity at the candidate level — surely a not-so-subtle shot at the Donald Trump campaign. The senator from Massachusetts also uses the plan to establish new levers for the government to more strictly regulate social media platforms — a continuation of her broader plan to break up Big Tech and the unruly web of businesses that many tech firms operate. Finally, the plan sets forth new rules for holding tech companies responsible for the content that is allowed to spread on their massive platforms.
The pledge starts with those running for office, where Warren promises not to "knowingly use or spread false or manipulated information," promote content from fraudulent accounts, or allow campaign staff or surrogates to spread disinformation. These points stand in contrast to the Trump campaign, which has actively made and promoted posts on Facebook that contain outright lies about other candidates. This type of behavior is not only allowed but tacitly encouraged by Facebook, which has made it clear that it will not fact-check political ads or posts made by political figures. Warren has previously tried to draw attention to this policy by running ads on Facebook that intentionally contain false information, claiming that Mark Zuckerberg and Facebook endorsed Donald Trump for the 2020 election. It seems Warren and her campaign will get out of the fake ad business as part of this plan to fight disinformation. They will also have to avoid other Trumpian tactics, like boosting surrogates when they spout misinformation and promoting content from random Twitter eggs who often turn out to be white nationalists and conspiracy theorists.
On the government side, Warren wants to use her administration to give more bite to the agencies that oversee tech firms. Her plan calls for the creation of "civil and criminal penalties" for knowingly spreading false information, particularly about when and how to vote. Voter suppression was a common goal of disinformation in 2016, including posts that contained false information about the election, calls to boycott the election meant to discourage people from voting, and fabricated stories of voter suppression and intimidation designed to keep people from going to the polls.
In addition to addressing intentional acts of voter suppression via misinformation, Warren also wants to bring back the cybersecurity coordinator at the National Security Council. Trump eliminated the position in 2018, arguing it was redundant because lower-level staffers already handle cybersecurity issues. Warren intends to restore the position — a stance supported by many lawmakers and security experts — and use it to help coordinate efforts to combat disinformation campaigns. Finally, Warren intends to convene a panel of leaders from a number of countries with the goal of improving information sharing and working together to fight disinformation. The plan acknowledges that the U.S. is not the only country facing these problems — online disinformation has been linked to the spread of lies in the lead-up to the Brexit vote in the United Kingdom, a spate of fake stories ahead of the 2018 election in Brazil, genocide in Myanmar, and a number of other incidents. Warren intends to work with other countries to address these issues in a coordinated fashion rather than treating them as isolated incidents.
The biggest part of Warren's plan focuses on the tech companies themselves. The senator and presidential candidate is calling on these companies to make fighting disinformation a top priority and to adopt best practices for stopping its spread. That starts with coordinating with the government and with other platforms to share information about disinformation campaigns: where they are coming from and what kind of information they are trying to spread. If Facebook and Twitter had coordinated with one another in 2016, perhaps they could have identified the trend of Russian state-sponsored actors disseminating fake stories and misinformation. That kind of cooperation may be key to addressing these widespread and coordinated campaigns in the future.
Warren is also calling on social media companies to more clearly label and highlight potential disinformation campaigns. That starts with clear content warnings for anything created or promoted by state-controlled organizations. Social media companies agree that this is necessary and have promised to take action, but they have largely failed to follow through. Facebook has yet to introduce its planned label for state-sponsored content, and a ProPublica report found that YouTube's promised efforts to identify propaganda have fallen short. Warren also wants these companies to actively alert users when they have been targeted by disinformation campaigns. Twitter and Facebook have taken steps toward features like this, but Warren is calling for them to give users enough information to know when they have interacted with a disinformation campaign.
Perhaps the biggest undertaking Warren is asking of tech companies is for them to pop open the hood of their operations and show us how they work. Her plan calls on companies like Facebook and Twitter to provide their data to researchers and watchdog organizations who can use it to identify trends and figure out just how this information spreads. Warren also wants these companies to show users exactly how their algorithms work so they can understand why they are seeing the content they are seeing — and to give them the option to opt out of algorithmic amplification so they aren't served targeted content. Facebook has taken some steps in this direction, including creating tools that let users see what terms advertisers use to target them and introducing an ad library for political ads. Problem is, the ad library has been deeply flawed and largely useless for researchers.
Warren's plan for disinformation recognizes that the issue isn't going anywhere and that it isn't simply a problem for presidential elections. Disinformation is at play all the time across the world as malicious actors — including political candidates — abuse and misuse these tools to spread bad information to millions of people. Addressing that will take a concerted effort: well-intentioned candidates, vigilance from tech companies, and real enforcement levers for governments. Accomplishing all of that will be a heavy lift, but so is winning the presidency, and Warren is still giving that a try.