Creighton Frommer, Esq. and Stephanie Hooks, Esq., RELX; and Emily Noggle, Esq., Everbridge
We have all seen examples of generative AI being used to create impressive new content, such as stories, images, audio, and even video, based on simple or complex "prompts," the instructions given to an AI tool.
For nonprofits in particular, generative AI can help write and edit fundraising requests, social media posts, and annual reports; create useful graphics; and make content more accessible through translation and simplification.
However, generative AI also poses challenges and risks. Improper AI use can lead to accidental disclosure of sensitive or personal information, infringement of others' intellectual property rights, and the spread of unfair, false, or biased content. Nonprofits need to be aware of both the best practices and the pitfalls of using generative AI in their organizations.
Guidelines for Using Generative AI
Here are several do's and don'ts for nonprofits working with generative AI, grouped into four areas: selecting the right AI software, creating content, communicating and advocating, and decision making and analysis. These guidelines are not exhaustive or definitive, and the law around generative AI remains new and changing, but they offer practical and ethical advice for responsible use of generative AI. If you have follow-up questions, please contact an attorney.
Selecting Generative AI Software
- Do use generative AI tools from trusted providers that can protect your organization's private information from public disclosure. Confirm that the AI software will keep your private information from being used to train its public models.
- Do review all of the applicable legal terms that govern use of the AI tool, such as the terms of use, acceptable use policy, privacy policy, or data processing agreement. A commercial license (while possibly more expensive) may offer better, more protective terms for a nonprofit than a non-commercial, individual license.
- Do consider and test several generative AI tools when choosing a new one. Compare them by giving each the same prompt and judging how well the responses fit your organization's needs.
- Don't include personal or sensitive information in your prompts if you do not know whether the AI tool will learn from your prompts to train its public AI model. Consider asking such tools only generic questions instead.
Creating Content with Generative AI
- Do remember that generative AI software should merely aid your organization in creating content. Review all AI output before publishing it.
- Don't use generative AI to create content that you are not otherwise authorized to create yourself, and don't ask the tool to include real people or brands in its output.
- Do check output content for quality, accuracy, unfair bias, inclusiveness, and appropriateness for its context.
- Do respect the intellectual property rights of people and companies when using generative AI. Don’t ask generative AI to duplicate existing creative works.
- Don’t use generative AI to create misleading, deceptive, or harmful content, such as fake news, deepfakes, or propaganda. In some contexts, AI-generated content is not acceptable for publication.
- Do follow specific rules, if applicable, when publishing AI-generated content. Some publications may require adding notices, disclaimers, or watermarks to AI-generated content.
Communicating and Advocating for Your Cause
- Do provide clear and specific prompts so the AI generates accurate and relevant responses for internal and external communications. And always double-check the output.
- Do fact-check and verify important information created by the AI tool, as it may contain errors or biases.
- Do use AI as a tool to enhance and supplement human communication, not as a complete replacement.
- Do leverage AI to quickly personalize and target messaging for different audience segments to increase engagement and impact.
- Do consider using AI-powered chatbots or virtual assistants, if such services have been properly reviewed and licensed, to provide information and resources to supporters, volunteers, donors, and other interested parties.
- Do consider using AI to optimize your advocacy campaigns, such as identifying the best channels, timing, and messaging for maximum reach and influence.
Decision Making and Data Analysis
- Don’t rely solely on AI, especially for sensitive or high-stakes communications or decisions, such as financial or employment decisions.
- Do use AI to gather and analyze data related to your advocacy cause, such as public sentiment, trends, and relevant statistics, but challenge its analysis and conclusions.
- Do ensure that the AI tool adequately protects your uploaded data from being used by other users.
- Do ensure data quality and cleanliness before feeding it into an AI model to avoid “garbage-in, garbage-out” scenarios.
- Do validate the results of AI-driven data analysis with experts and other relevant sources of information.
- Don’t ignore ethical considerations, such as privacy, fairness, and transparency, when using AI for data analysis.
Conclusion
AI presents exciting opportunities for nonprofit organizations to enhance their operations and increase the efficiency and impact of their outreach. However, it is crucial for nonprofits to recognize that AI is not a perfect product and has inherent limitations and risks. Ultimately, when used responsibly, AI should be viewed as a complementary tool that augments and supports human communication and decision-making within nonprofit organizations.
Need Legal Advice?
If you are a PBPO client and would like more information or need assistance regarding this issue, contact PBPO at [email protected] or (513) 977-0304.
Not a client? Apply to become a PBPO client by submitting a Request for Legal Assistance form online, or contact us at [email protected].
About the Authors
Creighton Frommer, Chief Counsel, Intellectual Property, Technology & Procurement, RELX. Prior to becoming a lawyer, Creighton worked as a software developer. He also serves on the Advisory Board of Pro Bono Partnership of Atlanta and has served as President of the Association of Corporate Counsel Georgia Chapter.
Stephanie Hooks, Senior Counsel, RELX. An experienced legal professional with a focus on global procurement and technology initiatives, Stephanie currently supports the RELX Global Technology & Procurement team.
Emily Noggle, Senior Corporate Counsel, Everbridge. Emily is an accomplished in-house counsel with over 9 years of experience advising on complex commercial transactions and contracts. She is also a member of the Grow PBPO committee and enjoys volunteering with PBPO whenever possible. Prior to joining Everbridge, Emily served as Senior Counsel at RELX, following her tenure as Corporate Counsel at LexisNexis.
Disclaimer
This article presents general guidelines for Ohio nonprofit organizations as of the date written and should not be construed as legal advice. Always consult an attorney to address your particular situation.
© 2025 Pro Bono Partnership of Ohio. All rights reserved. Dated February 6, 2025