With the advent of generative AI, the media development sector faces new disruptions of the global information sphere. Experts and partners of DW Akademie see the need for a dedicated joint effort.
AI is creating the threat of a new digital divide: between individuals who can afford access to the tools and resources and those who cannot. Between media enterprises that are able to make the necessary technology investments and those that cannot. Between groups who profit from the power of AI while being protected by meaningful rules and laws and those for whom neither is the case.
The media development sector must face the challenges the technical revolution entails. It must examine the implications for its fields of action, find answers to changing conditions for freedom of speech, access to information, and digital participation. It must realize opportunities. And it must view AI through its own lens and define its own positions and perspectives.
The questions that arise here are manifold: Who has access to technology? Who runs the systems and who can use them for which purposes? Whose values are represented in the data that AI is building on? How was this data collected? What about privacy, copyright, and data governance? How can vulnerable people and groups be protected from automated campaigns that are run against them?
Currently, several news organizations and networks are developing and presenting guidelines and principles on the use of AI in media and journalism, such as the "Global Principles for AI" by an international group of publishing organizations, or, with a more pronounced orientation towards journalism and newsrooms, the "AI Charter in Media" by an expert group convened by Reporters Without Borders (RSF). DW Akademie is part of the latter.
We at DW Akademie believe that more voices from a more diverse geography should be part of this conversation. We therefore talked to experts on media and AI from eight countries. Based on these interviews and discussions, we have come to the conclusion that the media development sector needs to take several actions. These should include not only ideas for the craft of journalism, but also a broader, systemic view of information ecosystems.
The experts we spoke to have stressed the importance of thoroughly analyzing the latest developments in AI and what they bring about in our information ecosystems.
It is essential to base the discussion on a better technical understanding, says Asme Teka, co-founder of Lesan, a German-Ethiopian AI startup for machine translation systems of Ethiopian languages. "We need to create a common ground of what exactly we mean when we talk about media and AI before establishing principles. Generative tools are bypassing many of the traditional principles of journalistic work."
Odanga Madung, Kenyan journalist and senior researcher at Mozilla Foundation, expects a widening divide between those who can harness the new technology and those who are simply misguided and exploited by it. "AI is going to be a transformative type of technology almost as powerful as the internet in terms of how it restructures our society," he says.
"AI is really going to mess up our belief system. There's a moment when we all as a society need to adapt to the fact that a lot of content is being generated, and that we could move from post-truth to a post-reality society," warns Julie Ricard, director at Data-Pop Alliance from Brazil. This applies to the mass dissemination of disinformation in connection with elections, for example, as she explains.
Experts anticipate fundamental changes and urge political thinking when dealing with the repercussions of the AI revolution.
"The mistakes that happened with the rise of social media platforms should not be repeated," says Jerry Sam, executive director at Penplusbytes, a digital media NGO from Ghana. Zoe Titus, director at Namibia Media Trust, points out: "We from the media development community need to advocate and provide the policy frameworks that are needed. It must be done from a human rights perspective: AI needs to be used for good. We need to campaign for just and equitable societies."
Every discussion about technology always has a political dimension, which has to be taken into account especially in the media context, Caesar Atuire emphasizes. The philosopher from the University of Ghana underlines that the AI discussion must start with the system question: "Artificial intelligence as it is being developed today is a reflection of who we are. And if we are already in a world that is entrenched with biases and prejudices, artificial intelligence is only going to entrench these biases and it is going to consolidate them."
"AI is the biggest disruptor," says Zoe Titus. "If we do not come to an agreement on how we integrate it in our discussion then we will not even be at the table in any space where we can speak about AI policy for media freedom," she adds.
Currently, most media actors are in the role of consumers, says Layal Bahnam, program manager at Maharat Foundation, a Beirut-based media freedom NGO. "Until now it's just like they give us the product we consume—there's no awareness to demand our right to be involved in any kind of development process."
According to the experts we spoke to, AI affects crucial fields of action of the media development sector, such as regulation, digital divides, fundamental rights, media viability, education, innovation, and Media and Information Literacy.
"Regulation of AI is among the most important issues on which the media sector must position itself," Caesar Atuire says. "We cannot imagine a positive and fair AI if we live with the systems that we have because these systems are generated by super powerful organizations that even nation states cannot hold to account."
According to Layal Bahnam, "AI regulation may be something good in Europe, but it's definitely not going to be good in our part of the world, because the way we see our regulations, they're always not in favor of the free flow of information and freedom of expression."
"Four billion people are not even connected to the internet. The risk is that we are having these major developments that are spearheaded by an elite few and leaving behind the largest part of the global population," warns Zoe Titus.
"Smaller media houses are not able to invest in premium AI tools but the big ones are. This is going to escalate or even deepen the divide between the landscape of community radios and commercial radios for instance," Jerry Sam says. Layal Bahnam even fears negative effects on the diversity of opinions if only financially strong media houses can make use of AI technology: "In the MENA region the big media organizations are often mouthpieces of the government. There is the risk of an AI-supported media capture," she says.
"We should emphasize different journalistic practices in different contexts," Asme Teka says. This is also important to compensate for the deficits caused by AI. "Normally in our part of the world we focus on community-based stories to bring about change to enhance advocacy by civil society. AI is not able to give out those sort of community feelings and sentiments. So we are losing that."
"There is a lot of fear and uncertainty in newsrooms about how to harness this technology," Zoe Titus says.
It remains important to make clear what core functions journalism still has, Odanga Madung is convinced. "Tell people to put the foot on the brake pad and just slow it down. Think about what exactly you're putting in your article because you are already operating within a legal framework. If you don't respect it, your credibility as a journalist goes out the window."
"We need AI literacy, not in terms of programming AI but in understanding what it is and what it can do. For me it also has a democratization aspect. We need to make sure that those conversations don't remain in a bubble of the tech savvy elites," Layal Bahnam points out. "I think it's very important to bring this knowledge to the regions that are poorer, that have less access to knowledge, where these debates are not even happening, because governments will not facilitate that," she adds.
Faced with fast-paced developments in the AI field, the media development sector must gain traction in its response in various fields.
It must analyze the transformation of media markets by AI and support media in developing new business models, for example by helping media houses recognize the value of their own data or leverage their reach by using large language models.
As AI threatens to shift the balance in media markets at the expense of small and independent media, the media development sector must help mitigate the risk of new cost traps and dependencies on AI service providers. There is a danger of a two-class society emerging between AI-augmented media houses and traditional ones.
Media development must also help tap the potential AI holds for innovation in very specific development settings. It must systematically network with players from the technology sector. Above all, it is important to develop new use cases and find out how AI can innovate new forms of constructive public dialogue and broader access to information.
Further, media development must adapt curricula in journalism education and teach the opportunities and risks of AI. And last but not least, the audience must be included: modern media development no longer works without the involvement of the user perspective.
Based on these considerations, we propose three next steps for the media development sector in the face of the ongoing AI revolution:
Julius Endert is senior consultant at the Policy and Learning department at DW Akademie.
Jan Lublinski is head of the Policy and Learning department at DW Akademie.