The Information War: How the Biden Administration Can Deal with Misinformation Online

By Alan Cunningham

Misinformation was at the heart of the 2016 Presidential Election and has been a recurring problem for governments and the public across the globe ever since. Since the election of President Joe Biden, the proliferation of misinformation and conspiracy theories online has grown substantially. Even during the 2020 U.S. Presidential election, Vladimir Putin and the Russian government are known to have directed misinformation efforts against the Biden campaign.

Despite the real threat misinformation poses in our current environment, the Biden administration has been slow to combat misinformation online. It was only in July of 2021 that Biden called out social media and Big Tech over COVID-19 misinformation, saying platforms were "killing people" before walking back the statement. Beginning in July as well, and continuing into October, the Biden administration has been reviewing "whether to try to alter Sec. 230 in order to tackle COVID-19 vaccine misinformation on social media sites like Facebook and Twitter". The bulk of these actions, however, have been public statements; little has been done in the way of legislation or the creation of new organizations.

It goes without saying that misinformation on social media can have an extreme effect on the public's views and opinions on domestic and foreign policy. News agencies, both credible (like CBS News, The Hill, or AP) and discredited (like InfoWars, The Free Thought Project, or GlobalResearch), now advertise on social media and put out content through social media platforms (Facebook, Instagram, Snapchat, etc.).

News agencies now have the ability to reach people in a way never before conceived or imagined. Brian Ott, a professor of communications and rhetoric at Texas Tech University, opined, "So, one of the things we know is that people increasingly are getting their news from social media. This is deeply problematic, regardless of what end of the political spectrum someone might be on, because we know that as they get their news in that way, what tends to happen is that they tend to get news and information that already confirms biases they already have. So they're only confirming news that reinforces existing opinions". Professor Ott adds that the algorithms social media platforms use can further narrow down the content a user sees, serving up media that is heavily biased toward a single viewpoint.

As an example, if someone on social media starts reading articles from The American Conservative and The Blaze (conservative to right-wing outlets with shoddy reporting), then eventually, provided they follow certain articles, they will be served content from American Thinker and American Renaissance (far-right, conspiracy-theorist, white supremacist/nationalist outlets that are wildly inaccurate and heavily biased). Obviously, that is a significant problem for the public, government, and social media companies, but it shows how influential media and social media can be in changing someone's views, leading them to adopt positions they may never have developed without social media.

Groups like these, which traffic explicitly in clickbait, conspiracy theories, and unsubstantiated claims, have a very large reach online. InfoWars has 10 million viewers per month, The Free Thought Project gains "six to 15 million unique views per month and reaches 60 million people on Facebook", and GlobalResearch draws roughly 3.4 million views via its YouTube channel and another 2.7 million through direct articles.

The promulgation of fake news is a serious problem, as these kinds of stories are having a growing effect on the way domestic policy is conducted, and an even larger potential effect on the conduct of foreign policy.

For example, consider the beliefs that Syria's president, Bashar al-Assad, did not use chemical weapons against his own people, that the United Nations is spreading lies about North Korea's abuses, or that the government was involved in 9/11. If read and believed by a majority of the population, such claims could force the government to bend to the public's demands: opening inquiry after inquiry into baseless matters, halting government processes, and spending time discrediting groundless theories, all while further sowing domestic discord and allowing foreign powers to gain the upper hand abroad.

The idea that al-Assad did not use chemical weapons (when multiple independent reports examined the site and all available information, agreeing with both the UN's and the U.S. government's assertions) is an extremely impactful claim. If promulgated nationally, it could significantly change the way the U.S. conducts foreign policy in Syria, harming the people of Syria and regional security while advancing the goals of Putin and the Russian government. The same can be said of North Korea, as someone's views on the country could be tainted by such a pro-North Korea outlook. The capacity for growth within this industry is enormous, with legitimately debilitating effects on society and on the way governments conduct foreign policy.

In combating misinformation campaigns online, there are many potential solutions. In a 2001 paper written at the U.S. Army's Command and General Staff College, Major Simon Hulme, a member of the United Kingdom's Royal Engineers, described his own thoughts on how best to prevent disinformation and media inaccuracies on the internet, advising that an "independent regulatory body [must begin] the almost impossible task of monitoring and censoring information contained on the net" to ensure that information is accurate and correct.

To this day, there is still no body that regulates content online or verifies the accuracy of information put out by social media platforms, news aggregators, or self-described news agencies. While some companies, like Facebook, have taken up the mantle to a certain degree, many of the sites they removed are now back on Facebook under new accounts. I think a regulatory body is a great idea and would surely help stop much of the bad information that threatens democracy and rational discourse; however, there are more intricate questions involved. Who will run such a group, the government or the titans of the internet? Will it use academic definitions of what constitutes fake news and disinformation, or create its own? How will it deal with repeat offenders?

I argue that a combination of a regulatory agency and a team of information warfare specialists and counterintelligence investigators would be an effective way to combat this. The regulatory body would be headed by cybersecurity experts and administrators from academia and the State Department, supported by a small crew of former counterintelligence investigators, information warfare specialists from NATO's StratCom or the U.S. Air Force, and others from academia and journalism. This team of investigators, researchers, and analysts would compose written analyses of organizations that produce fake news and misinformation and publish them throughout the web. Government oversight (the Department of Justice or the Department of Commerce seems the best choice for an oversight authority) combined with academic standards for what constitutes fake news would provide ample accountability, along with strict, concrete terms defining which media organizations pose a threat to the public online and which groups may be working for foreign governments or in line with their interests.

This tactic most likely would not work on those who are already anti-government or distrustful of the government (which includes a large swath of people on both the political left and right); however, it could work well in reaching those who are on the precipice of becoming radicalized or entranced by misinformation. Furthermore, it would provide an official rebuke of organizations that engage in journalistically unacceptable and publicly damaging behavior, backed by a clear set of evidence in written form.

While some may find a unit like this unnecessary or unrealistic, it is important to note that Canada, Spain, Mexico, Turkey, Sweden, and others all have some form of task force aimed at stopping disinformation. Some of these, like Mexico's, are merely government-sponsored fact-checking organizations, while Turkey's has the ability to criminally charge persons found engaging in misinformation. The State of California tried to create a disinformation task force in 2018, but the bill was vetoed by Governor Jerry Brown as "unnecessary". It is worth noting that any task force or unit aimed at stopping misinformation online would need to operate within a very tight and restricted set of rules while ensuring that no freedoms of speech or press are constrained or restricted. It is imperative that such an operation be conducted legally and in a democratic manner, one respectful of the ideals of the United States and of the rights enumerated in the U.S. Constitution.

I would argue that stopping misinformation on the web is imperative and, given that blatant foreign intelligence interference in democratic processes is a known problem, it must be dealt with swiftly and expertly by the Biden administration before more damage is done.
