As authoritarian powers use AI to manipulate public opinion, the same technology offers ways to combat propaganda and hate speech. The race is on to see which side can develop faster.
Two faces of technology: AI tools have created an opportunity for the production of mass disinformation
As technology advances, those craving power also advance in how they use it against their opposition worldwide. Authoritarian governments may only be learning how to use artificial intelligence, but they have years of experience in silencing their opponents. Now the race is on to determine whether the new technological revolution will serve the public good, or become another black box.
Although AI has featured in our daily lives for years, it was only after the success of ChatGPT in late 2022 that this dilemma of “good” and “evil” entered wider discussion. OpenAI’s creation popularized mass access to AI, and its overnight success raised concerns about the technology’s consequences. Neither AI nor automation in media is new, but their meteoric rise and newfound public accessibility have left many questioning our future with AI.
In the short time since then, AI has already been used by authoritarian governments to suppress freedom of speech and journalism. In countries like China and Vietnam, machine learning is being used to monitor, detect and delete content that criticizes authoritarian governments or contradicts their official narratives, creating more obstacles for independent journalism in an environment already hostile to free speech. AI tools also accelerate the spread of misleading information, further threatening the fragile state of democracy worldwide.
Artificial intelligence also takes the craft of disinformation to a completely new level: fabricating visual content is now within everyone’s reach. With little to no human involvement, existing AI tools have made mass-produced disinformation possible. Fabricating disinformation is now faster, cheaper and more effective than ever.
In a recent report, NewsGuard, a service that rates the reliability of news websites, identified hundreds of unreliable AI-generated news and information sites freely “farming” convincing misinformation, and the number is only growing. Detected in various languages, the articles are created entirely by AI and shared primarily through social media.
A combination of hoaxes, outdated reports and false claims, these news pieces cover topics ranging from politics to entertainment. The trend worries even many of those who helped develop AI from its earliest stages, such as Sam Altman, CEO of OpenAI, who fears these models could be “used for large-scale disinformation.”
For as little as 30 dollars per month, an avatar created by the tech company Synthesia can do the work of a whole news crew. One of its many software-made reporters has already starred on Venezuelan state-run TV, presenting a falsified foreign-news report and pushing pro-government narratives. The software has also been detected in use in Africa and Asia, particularly by China.
A falsified photo of an explosion at the Pentagon was even reported to have moved stock markets. Generating such an image takes a matter of seconds with Photoshop’s new AI features. The spread of realistic fake images further undermines the basis of informed public decision-making.
With much of the world already in need of better media literacy, AI literacy is becoming a new challenge. Meanwhile, the line between real and fake is blurring in the flood of misinformation campaigns, threatening further polarisation, deepening mistrust of journalism and endangering the state of democracy.
Chances and opportunities
Nevertheless, many also see opportunities: used properly, AI could strengthen democracy by diversifying media content and helping to combat disinformation.
Google has introduced a tool known as “Genesis,” designed to take on certain journalistic tasks by generating news content. As such tools continue to develop, the texts they produce are expected to become increasingly difficult to distinguish from human writing.
Similar productivity tools are already being used in newsrooms, including to help overcome language barriers. Delegating part of the work to AI can save journalists time, allowing them to work more effectively and opening the door to higher-quality journalism.
Small newsrooms and regional outlets with limited human and financial resources are likely to be at the forefront of these developments. AI tools are also being used to monitor, detect and remove hate speech in real-time, contributing to a safer online environment.
At the same time, tools like Transverstia use AI algorithms to verify information and assess the trustworthiness of news stories. The tool can potentially analyse around 60,000 stories per day and is still being improved. But the number of stories needing a second check could be countless.
A Europol report from 2023 predicts that as much as 90% of online content could be AI-generated by 2026. The race between those disseminating fake news and those trying to combat it is an urgent one.
In this tight competition between “bad” and “good” actors, fortune will favor those who act fastest.
Policies to regulate AI, however, are lagging behind technological advancements. In June 2023, the European Union became the first to agree on draft legislation regulating AI, the Artificial Intelligence Act. The landmark legislation aims to set a regulatory standard for the rest of the world. The draft is a major step in the right direction, but the race to govern this technology is clearly still gathering pace.
Aren Melikyan is a journalist and a Mundus Journalism Scholar. He is currently pursuing a master's degree in Journalism and Globalization in a joint program run by Aarhus University and Charles University. During an internship at DW Akademie, he researched AI and its impact on the media.
Aren has worked and written for several international media outlets, mostly covering social issues and politics in Eastern Europe, with a focus on the South Caucasus.