
The Wheel

St. Catherine University’s official student news, since 1935.

Special Edition Spring 2024: How concerned should you be about disinformation?

By KC Meredyk

In its announcement that the Doomsday Clock would remain at 90 seconds to midnight for 2024, the Bulletin of the Atomic Scientists listed artificial intelligence as one of the factors contributing to the world’s imminent doom. Created in 1947, the Doomsday Clock is a metaphorical device that “warns the public about how close we are to destroying our world with dangerous technologies of our own making.” The creators of the clock note that AI is a disruptive technology with “great potential to magnify disinformation and corrupt the information environment upon which democracy depends.”

As AI increasingly infiltrates the public consciousness and our daily lives, there is looming fear about how AI-generated disinformation and misinformation may influence and undermine this election season. Flash back to 2016 and the cries of Russian interference in the election through misinformation or, more accurately, disinformation campaigns. (Disinformation is deliberately misleading, whereas misinformation is unintentional.) Now, flash forward eight years to another election year. What are news agencies sounding the alarm about? Disinformation campaigns and generative AI’s potential use in them.

Generative AI makes it possible to produce more sophisticated disinformation campaigns at a lower cost. If you were to open your phone, search “misinformation” in news stories and start scrolling, you would quickly find something about election concerns or AI (or both, if you scroll long enough!).

To learn more about our news environment, I talked to Margret McCue-Enser, Ph.D., associate professor of communications. McCue-Enser pointed out that over the past 50 years, news has become increasingly commodified as advertising has become intertwined with it. The result has been a proliferation of news outlets with different versions and evaluations of reality.

Despite the concerns regarding misinformation this election season, there is one comfort: it is unlikely to affect how people vote. America is a two-party nation, and the vast majority of people vote along party lines, so misinformation isn’t likely to change anyone’s vote.

Less comforting, though, is that according to NBC News, misinformation does impact how people view the world, lending apparent evidence to false claims and fostering false beliefs like “the election was stolen” or “vaccines are dangerous.” Furthermore, evidence from the Brookings Institution shows that the intense coverage of misinformation has eroded trust in news sources in general.

Beyond the erosion of trust in news sources caused by the coverage of disinformation or “fake news,” AI has introduced more complexity to the situation. As seen in the cases of the Jan. 6 rioter trials and Elon Musk, the belief that fake news saturates the media has allowed some to claim that the evidence against them was faked with AI. This poses a broader threat to the democratic information environment because it could give politicians plausible deniability against any accusation, so long as the public believes disinformation saturates the news.

In reality, however, misinformation is less widespread than you might think. Disinformation is nowhere near as rampant as the news coverage around it makes it seem.

According to New York University’s Center on Social Media and Politics, during Russia’s 2016 disinformation campaign on Twitter (now X), a user saw 25 times more posts from national news and media than from Russian bot accounts. Another study from the same center, looking at Facebook users in 2020, found that credible sources were viewed and shared more often than non-credible sources. 

Now, on to the most popular social media platform for college-aged students: TikTok. I know it’s overdone, so let’s follow the trends and blame the youth.

TikTok’s algorithm functions differently from those of other social media platforms. Generally speaking, most platforms will only recommend content from people “like you” or in your same social networks. In my case, for example, social media algorithms will recommend hours and hours of cat content because that’s what I consume.

But as I am sure many have noticed, the “For You” page on TikTok will recommend content from outside these networks, according to the Brookings Institution. This introduces the potential for finding misinformation that you otherwise would not have stumbled upon on other apps.

The 2020 Facebook study found that while older people are more likely to share misinformation, younger people are more likely to believe it. Thus, TikTok is an unknown factor in the dangerous game of disinformation.

How concerned should you be about misinformation? It is something to worry about, but McCue-Enser emphasizes that the important thing is to practice good media literacy. AI or not, disinformation campaigns are likely already appearing on social media platforms.

It is our duty to cross-check what we learn online. A good rule of thumb is that a media source is more likely to be accurate if it is not dependent on advertising money. Public news outlets, for example, are not reliant on ad revenue.

Realistically, just make sure that you double-check any news you get from doom-scrolling or your weird aunt Karen against a credible news source. Your usual news sources are OK; you don’t have to seek out three articles from different platforms. If a video looks weird, check how many fingers the person in it has to make sure they aren’t AI-generated. Stay safe out there on the internet, and keep an eye out for misinformation.
