Google.org open sources AI readiness playbook as foundations ask where to begin with funding

Google's philanthropic arm releases 63-page roadmap, surveys and workshop guides, pushing funders to reclassify AI licenses and compute as program costs rather than overhead.

Google.org has open sourced its AI Readiness Playbook for Funders, a 63-page internal roadmap that helps philanthropic foundations build AI literacy across grantmaking, operations and communications teams.

The guidance will shape how foundations evaluate and fund AI projects across the social sector, including EdTech and skills-focused initiatives that rely on philanthropic capital.

The document lands at a moment when nonprofits are facing what Google.org describes as unprecedented demand and budget uncertainty, and it pushes funders to reclassify AI licenses, cloud infrastructure and model maintenance as program costs rather than overhead.

Carina Box, a program manager at Google, announced the release on LinkedIn. "Calling all funders! We're hearing a growing demand from foundations that want to support AI for social impact but aren't sure where to begin. As a first step, Google.org is open-sourcing our AI Readiness Playbook for Funders", Box wrote.

She continued: "This resource is built on Google.org's own 2025 AI journey rather than a policy document." Box went on to frame the playbook around five practical shifts: "closing the confidence gap with a five-minute readiness survey, moving from policy to practice through a Principles in Practice workshop, hosting mission-first role-specific sessions, vetting the problem rather than the tool during due diligence, and empowering the ecosystem by reclassifying technology as a program cost."

Maggie Johnson, vice president and global head of Google.org, writes in the playbook's opening letter that funders must act deliberately. "In the past, technology adoption could live with a single entity like a CTO's team or a systems administrator, but AI is different", Johnson writes. She adds that every staff member should understand the technology regardless of role, and urges action: "If funders do not act, the efficiency, creativity and scale that AI offers will exist in some places but not in the local communities that nonprofits reach."

Playbook splits AI strategy into three modules for foundation teams

The roadmap is structured across three modules. Module one focuses on strategic foundation and includes an AI Readiness Survey template, a Principles in Practice workshop guide, and an Ethics Inquiry Deck adapted from Google DeepMind and the Markkula Center for Applied Ethics. Module two covers implementation through role-based training, a 90-minute workshop guide, guidance on activating internal AI Champions, and a central hub model for collecting use cases across grantmaking, communications and operations teams. Module three turns outward with a due diligence framework for evaluating grantee AI proposals and a revised funding model for technology costs.

Google.org anchors its internal strategy on three pillars: Bold Innovation, Responsible Development and Collaborative Progress. The playbook also introduces the ACT framework for responsible AI use, which stands for Ask, Check and Tell, and a "jagged frontier" concept that frames AI capability as uneven, requiring teams to test every use case rather than assume consistent performance.

Reclassifying overhead to program costs signals a shift for EdTech grantees

For EdTech organizations reliant on philanthropic capital, the most consequential guidance sits in module three. Google.org advises funders to treat licenses, compute power, cloud infrastructure and model maintenance as program costs, arguing that inference expenses, the cost incurred each time a model is queried, scale with user volume and are routinely underestimated in grant budgets. The playbook directs funders to the Google Cloud pricing calculator to estimate these recurring costs.
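The scaling dynamic the playbook flags can be illustrated with a back-of-envelope calculation. The sketch below is a hypothetical model, not Google Cloud's actual pricing or methodology; all rates and usage figures are illustrative assumptions, and real estimates should come from the Google Cloud pricing calculator the playbook points to.

```python
# Hypothetical back-of-envelope estimate of monthly inference spend.
# The per-token price and usage figures are illustrative assumptions,
# not actual Google Cloud rates.

def monthly_inference_cost(
    users: int,
    queries_per_user: int,
    tokens_per_query: int,
    price_per_million_tokens: float,
) -> float:
    """Estimate monthly model-inference spend in dollars."""
    total_tokens = users * queries_per_user * tokens_per_query
    return total_tokens / 1_000_000 * price_per_million_tokens

# Example: a grantee pilot of 1,000 users grows to 50,000 users,
# with usage patterns otherwise unchanged.
pilot = monthly_inference_cost(1_000, 30, 2_000, 1.50)
scaled = monthly_inference_cost(50_000, 30, 2_000, 1.50)
print(f"pilot:  ${pilot:,.2f}/month")   # $90.00
print(f"scaled: ${scaled:,.2f}/month")  # $4,500.00
```

Under these assumed figures, a fiftyfold jump in users produces a fiftyfold jump in inference spend, which is why the playbook argues these recurring costs belong in the program budget rather than a fixed overhead line.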

The playbook also recommends funders structure grants with flexibility for prototyping and pivoting during the first 12 to 18 months of an AI project, proactively fund what Google.org calls boring infrastructure such as data cleaning and storage, and account realistically for technical talent costs that many social sector organizations struggle to cover.

Due diligence framework pushes funders to vet the problem, not the tool

The evaluation framework is adapted from the Patrick J. McGovern Foundation's AI for Good Diligence Guide and Google.org's own Stanford Social Innovation Review white paper, Investing in AI for Good. It distinguishes between back-office and beneficiary-facing AI solutions, applies tighter risk tolerance to the latter, and pushes funders to interrogate training data representation, explainability of predictive models, plans for monitoring data drift, and whether organizations have explored existing open-source tools before building new ones.

As a matter of principle, Google.org generally requires that grantee-built tools be open sourced, the playbook states, and it recommends funders ask whether proposed solutions are co-designed with the communities they serve.

An interactive NotebookLM experience accompanies the playbook, allowing users to query the document directly and generate personalized roadmaps for their own foundation's challenges. Google.org is now asking whether mid-sized foundations without dedicated technical strategy teams will adopt the framework at scale, and how fast that shift will reach EdTech grantees whose budgets rarely account for inference or model maintenance costs.
