We’ve all heard the criticisms of using AI (artificial intelligence). For example, AI-generated content has included:
- Misinformation or inaccuracies
- Harmful bias
- Unvetted or unclear sources
- Illegal use of proprietary, copyrighted, and private information
Despite these problems, our 2024 Nonprofit Communications Trends Report found that most nonprofits use AI: only 19% of nonprofit communications professionals said they weren’t using it. Yet 93% of nonprofits do not have an internal policy about using AI.
Concerns from Nonprofit Communications Directors about AI
These issues, along with more general concerns about replacing the human touch with automation, lead nonprofit communicators to voice many specific worries about using AI for communications tasks, whether performed by the communications team or others on staff.
These include:
- The need to train staff to expect and look for problems in AI-generated copy, including factual errors and implicit bias, rather than simply trusting it.
- Discerning whether AI-generated content has been filtered through, or embraces, points of view that conflict with the organization’s values.
- The fear of producing repetitive, unoriginal content that sounds like every other organization using AI.
- Taking the heart and lived experiences out of organizational messaging.
- Undervaluing content writers on staff for their ability to produce original, tailor-made content.
- Staff relying on AI rather than on their own subject matter experts.
- Creating complacency among staff (e.g., staff relying on AI to write copy and getting disconnected from listening to and building relationships with real target audiences, including donors).
- Bearing responsibility for AI content developed and approved by complacent staff members outside the communications team when problems are discovered after publication.
- Overreliance on easy and fast AI solutions rather than investing in staff and in professional development and skill building.
- Inconsistent use among staff members, leading to inconsistent expectations (e.g., whether all drafts should be run through AI for optimization before being reviewed, or how quickly certain tasks should be completed when relying on AI assistance or not).
AI Is Here to Stay, So It’s Time to Create Policies About AI Use
No question, AI can save nonprofit communicators a great deal of time and energy. In fact, nonprofits have been relying on early forms of AI, such as spelling and grammar checkers, for decades. More recently, nonprofits have used these editing tools to reduce wordiness and complexity in their writing and to tweak its tone or style.
With the rise of ChatGPT and similar content-generating tools, AI now goes well beyond editing assistance. If you provide the right prompts, AI can create all of the content needed for a months-long, multichannel marketing campaign. And if you give it feedback with additional prompts, it can produce better drafts with each iteration.
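For example, a starting prompt might be something like, “Write a three-email welcome series for new donors to a community food bank, in a warm, conversational tone, with each email under 200 words.” A follow-up prompt such as “Make the second email more specific about how a $50 gift is used” then nudges the tool toward a stronger draft. (These prompts are only illustrative; your own campaign details will shape what you ask for.)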
But ultimately, it’s just another tool, and one its users must regulate for themselves. We encourage nonprofits to consider the concerns listed above as they develop policies.
Nonprofit Marketing Guide is also working on a set of best practices and sample policies around AI that we will share later this spring.
Until then, check out these resources:
- How to Create a Generative AI Use Policy (Tech Soup)
- Eight Steps Nonprofits Can Take to Adopt AI Responsibly (SSIR)
- A Guide to AI Ethics and Governance for Mission-Driven Organizations (Board Effect)