From AI Curiosity to AI Readiness: Steps Nonprofits Can Take Now

Meena Das—nonprofit data and AI expert, and the founder and CEO of NamasteData—helps nonprofit organizations implement human-centric data and ethical AI practices. We asked Meena to share her expertise and guide nonprofit professionals on moving from AI curiosity to practical use.

 

AI awareness is knowing it exists. AI readiness is being able to use it responsibly.

Right now, many nonprofits are “AI aware.” Teams have attended webinars, tried ChatGPT at work or personally, and feel a mix of excitement and anxiety. That’s normal. But awareness alone can create a risky situation: people start using tools informally and in silos, without guidance, and the organization ends up with shadow AI—untracked, inconsistent, and sometimes harmful.

AI readiness means your organization can use AI in a way that is:

  • Aligned with mission and values

  • Safe for community and staff

  • Governed (clear rules, not vibes)

  • Practical (real use cases, measurable outcomes)

Readiness is not “everyone becomes an AI expert.” It’s “we have shared clarity, boundaries, and the ability to learn responsibly.” That’s why I want to share the behavioural patterns I see every day that mark this difference inside an organization.

AI Awareness

  • People are experimenting with AI individually

  • Conversations are mostly theoretical (“AI is the future” / “AI is scary!”)

  • No shared guidance on privacy, bias, or appropriate use

  • Tools are used for speed, not necessarily quality or equity

  • Leadership is unsure what to approve or prohibit

AI Readiness

  • The organization has a simple AI use policy and decision rules

  • Staff know what is allowed, what’s not, and what to do when unsure

  • Use cases are prioritized based on impact and risk

  • Data protection and community trust are built into the workflow

  • Learning is shared, documented, and improving over time

 

Moving from Curiosity to Practical Implementation

So, the question then becomes: what steps should we prioritize to move toward practical implementation? My go-to recommendation: “If you do only one thing, don’t start with tools. Start with guardrails and purpose.”

Here is a step-by-step process I recommend for getting started:

Step 1: Name your “why” for AI (in plain language)

Start by answering what problem you are trying to solve with AI.

Common nonprofit answers include reducing admin burden and burnout, improving the consistency and quality of communications, summarizing and analyzing qualitative feedback faster, supporting fundraising and donor stewardship, and drafting first versions (not final) of content and reports.

Your “why” should be specific enough that you can later evaluate whether AI helped.

Step 2: Create a minimalist AI policy (one page is fine)

Your policy should cover:

  • What data must never be entered into public AI tools (e.g., personal data, case notes, donor data, sensitive HR info)

  • Acceptable use examples (e.g., drafting, brainstorming, formatting, summarizing non-sensitive text)

  • Review expectations (e.g., human review required before anything goes out)

  • Bias and harm awareness (e.g., AI can stereotype; staff must check)

  • Accessibility and inclusion guidance (e.g., avoid excluding people through automation)

The goal is not legal perfection. The goal is shared safety.

Step 3: Choose 2–3 low-risk, high-value pilots

Pick use cases that help quickly without touching sensitive data, like turning a meeting agenda into a clearer facilitator guide, generating first drafts of emails or event blurbs (with human editing), creating FAQ drafts from existing policy text, or summarizing non-confidential reports for board briefings.

Make sure each pilot has:

  • A clear owner

  • A simple success metric (e.g., time saved, quality improved, fewer revisions)

  • A risk check (what could go wrong?)

Step 4: Build a “prompting + review” habit

Readiness depends on consistency. Teach staff a basic workflow:

  1. Provide context and audience

  2. Ask for output in a specific format

  3. Check for accuracy, tone, bias, missing voices

  4. Add organizational facts and local context

  5. Final human approval

Your organization doesn’t need 100 prompts. It needs a repeatable process.

Step 5: Decide what “good enough” looks like

A hidden readiness problem is perfectionism. Teams either over-trust AI or refuse it entirely because it’s imperfect. The mature stance is that AI is a draft assistant, not a truth machine. Decide where AI is appropriate—and where human judgment must lead.

 

Building an Engaged and Trustworthy AI Culture

Now, the biggest hurdle I see when nonprofits go through these steps is engaging their entire staff in the process. And I want to remind us that AI readiness is cultural. If people feel judged, they hide their experimentation. If leaders demand adoption, people comply without understanding. Both paths create risks.

So, here are some best practices I want to offer for staff engagement in this process:

1) Start with psychological safety: “curiosity over shame”

Make it safe to say:

  • “I used AI and it didn’t work.”

  • “I’m worried about bias.”

  • “I don’t want to use this tool.”

Your goal is collective learning, not compliance theatre.

2) Create an AI working group (small, cross-functional)

Include programs, fundraising, operations, IT/data, and someone close to community experience. Their job is not to “own AI.” Their job is to document learnings from pilots, update guidance, surface risks early, and keep values at the center.

3) Communicate in a way boards understand

Boards often hear “AI” and think about risk, reputation, and legal exposure. Give them a simple framework that covers:

  • What we’re using AI for (and not for)

  • How we protect privacy and community trust

  • What human review looks like

  • What success looks like and what we’ll monitor

4) Train in short, ongoing moments—not one big workshop

Offer 30- to 45-minute sessions that cover topics like:

  • “AI basics for our context”

  • “Privacy and data do’s/don’ts”

  • “Prompting and review practice”

  • “Bias spotting and accessibility checks”

Readiness builds through repetition.

The point of readiness is, first and foremost, trustable usefulness.

AI readiness isn’t about being modern. It’s about being responsible and practical. When you pair clear guardrails with small pilots, you prevent chaos, reduce fear, and give your nonprofit a real chance to benefit—without sacrificing the trust you have spent years building.