Funding for AI in Media: A Practical Look at What’s Actually Available

Artificial intelligence continues to reshape the information environment, and the media development community is exploring its implications — from ethical and editorial challenges to emerging regulation, newsroom safety, and the impact of generative AI on trust and public-interest journalism. As a result, there is a growing need for clear, practical information on a key question: What funding is available for media organisations that want to experiment with AI in their own work or to report on the societal impact of AI?

Author: GFMD Communications | 19 November 2025

Despite the strong interest in AI across the sector, dedicated funding streams remain limited. Existing opportunities are frequently embedded in broader innovation, digital-rights, or verification programmes. This article provides an overview of current funding options — and what donors are looking for — for teams considering AI-related projects that support public-interest journalism.

1. Funding for Newsrooms Experimenting With AI Tools

Google News Initiative – JournalismAI Innovation Challenge

GNI’s JournalismAI programme is one of the most established global funders for newsroom experimentation with applied AI. Its focus is on responsible automation and workflow support rather than the replacement of editorial roles.

Typical areas of support include:

  • AI-assisted investigative workflows
  • Transcription, translation and accessibility tools
  • Structured-data extraction
  • Verification and fact-checking support
  • Cross-newsroom collaborations

Competitive proposals clearly define the problem being addressed, demonstrate editorial–technical collaboration, and include plans for sharing results and lessons learned.

➡️ See GFMD’s profile on GNI here.

European Media and Information Fund (EMIF)

While not explicitly positioned as an AI programme, EMIF increasingly funds AI-supported verification, debunking, scraping and media-monitoring activities. Examples include:

  • automated monitoring pipelines
  • entity-matching tools
  • large-scale content analysis for fact-checking teams

Both established organisations and smaller outlets are eligible.

➡️ See GFMD’s profile on EMIF (under the Calouste Gulbenkian Foundation) here. 

Mozilla Technology Fund

Mozilla’s Reliable and Trustworthy AI tracks often support journalism-aligned innovation, particularly where it strengthens transparency or public oversight of algorithmic systems.

Suitable for organisations working on:

  • open-source tools
  • verification and audit methodologies
  • AI transparency and accountability research
  • collaborations with researchers or civic-tech partners

➡️ See GFMD’s profile on Mozilla here.

National and Regional Programmes

A number of national and regional schemes are open to media organisations under specific conditions:

  • Nordic innovation funds (Denmark, Sweden, Norway): support for accessibility tools, low-resource languages and digital public-interest technologies.
  • German federal and Länder AI programmes: encourage cross-sector civic-tech collaborations.
  • UK Alan Turing Institute: occasional calls open to media partners involved in public-interest AI research.
  • Local philanthropic initiatives: growing interest in AI-assisted local journalism and public-interest digital tools.

2. Funding for Journalism About AI

AI is not only a tool; it is also a major public-policy and democratic challenge. Several donors that do not fund AI tool development do support journalism examining the societal implications of AI systems.

Below are key opportunities:

Omidyar Network – Tech Journalism Fund (Rolling)

The Tech Journalism Fund is not an AI innovation programme. It does not fund the development of AI tools, automation systems or newsroom workflows. Instead, it supports journalism about the civic and societal impact of technology, including AI.

Areas commonly supported include:

  • algorithmic discrimination and digital rights
  • AI governance and regulatory developments
  • AI surveillance and its implications for civic space
  • platform and data-power dynamics
  • investigations at the intersection of technology, rights and equality

In short: Omidyar supports journalism about AI, not journalism powered by AI.

➡️ See GFMD’s profile on Omidyar here.

Luminate – Information Integrity & Tech Accountability

Luminate funds independent journalism that scrutinises powerful digital actors and emerging technologies. Relevant areas include:

  • investigations into AI governance
  • transparency in algorithmic systems
  • public-interest reporting on automated decision-making
  • civic impacts of data and platform power

Best suited to organisations with a strong accountability-reporting track record.

➡️ See GFMD’s profile on Luminate here.

Open Society Foundations (OSF)

OSF programmes regularly support journalism related to:

  • rights-based impacts of AI systems
  • AI governance and accountability
  • digital justice and equity issues
  • public-interest investigations involving emerging technologies

Funding structures vary by year and programme, but AI accountability is an increasingly common theme.

➡️ See GFMD’s profile on OSF here.

Mozilla (Journalism-Linked Tracks)

Beyond innovation tooling, Mozilla also funds:

  • reporting on algorithmic transparency
  • public-interest explainers on AI harm
  • investigations linked to tech governance and digital rights
  • cross-disciplinary collaborations

➡️ See GFMD’s profile on Mozilla here.

Other Digital-Rights Funders

Depending on geography, additional relevant funders include:

  • Ford Foundation (digital justice, algorithmic accountability)
  • Tides Foundation (tech accountability and civic rights)
  • European AI governance initiatives that include media as dissemination or research partners

3. What Donors Look for in AI-Related Proposals

Across both categories — AI experimentation and journalism about AI — donors consistently value:

  • A clearly defined problem: Proposals grounded in an identified editorial, civic or public-interest need.
  • Realistic expectations: Understanding of the limitations, risks and potential harms of AI systems.
  • Collaboration: Partnerships with researchers, civic-tech groups, universities or other newsrooms.
  • Ethical safeguards: Clear risk-mitigation strategies, transparency and editorial independence.
  • Knowledge-sharing: Plans for documentation, open methodologies or training sessions that extend project impact.

4. Examples of Pitchable AI-Related Projects

Projects using AI in the newsroom:

  • automated data extraction for investigations
  • low-resource language translation pipelines
  • AI-assisted verification triage
  • audience-needs clustering
  • structured editorial knowledge bases

Projects reporting on AI:

  • investigations into algorithmic harms in public services
  • explainers on AI governance and regulation
  • reporting on AI and elections
  • accountability reporting on facial-recognition systems
  • local journalism examining AI use in public administration

For those ready to dive deeper, explore more resources in the MediaDev Fundraising Guide.
