We Did Something Uncomfortable Last Year (And We Were One of the First to Do It)
We did something uncomfortable early last year.
We created an AI policy for our agency. Put it in writing. Made everyone sign it.
And we were one of the first PR firms in Arizona to do it.
Not because we’re anti-AI. We’re not. We use AI every day at HMA – for research, drafting, brainstorming, all the routine work that used to eat up hours of billable time.
But here’s what was keeping me up at night:
We didn’t have guidelines.
What We Actually Included
Our policy isn’t complicated. One page.
But it covers what matters:
Human oversight is mandatory. Every piece of AI-generated content gets reviewed by the account lead, then by me, before it goes to a client. No exceptions.
Transparency when it matters. We disclose to clients when AI was used in developing materials – especially if the majority of content came from AI.
Client confidentiality is protected. No inputting confidential client information into public AI platforms without written authorization. Period.
Here’s something a lot of firms don’t realize: the free version of ChatGPT uses your inputs to train its models. That means anything you put in could potentially end up in someone else’s output.
For a professional services firm handling confidential client information, that’s unacceptable.
We invested in enterprise AI subscriptions, paid services with data protection agreements that don’t use our inputs for training. It costs more, but it’s not optional. Usually $20-30 per user per month, and client confidentiality isn’t something you cut corners on.
PRSA Code of Ethics still applies. Just because AI can do something doesn’t mean we should. Our ethical obligations don’t change because the tool changed.
Then we made everyone sign it.
What Happened After
Honestly? Relief.
My team had clear rules.
Clients appreciated the transparency. When we tell them “we used AI to compile this research, and here’s how we verified it,” they trust us more, not less.
And prospective clients started asking about our AI practices during pitch meetings. Having a policy to share became a competitive advantage – we could show them exactly how we handle AI use.
Why More Agencies Need to Do This
Your team is already using AI. Whether you’ve given them permission or not.
Some are using it responsibly. Some aren’t. And you have no idea which is which.
An AI policy doesn’t limit your team – it protects them. And it protects your clients. And it protects your reputation.
When we implemented ours in early 2025, we made a point of talking about it openly.
Clients are asking. And if you don’t have an answer, someone else will.
What I’d Tell Another Professional Services Owner
Start with one page. Cover these five things:
- When AI can be used (and when it can’t)
- What review process is required before AI content reaches clients
- How to handle client confidentiality with AI tools
- Disclosure requirements
- Who’s accountable when something goes wrong
Then invest in enterprise AI subscriptions. Don’t use free versions that train on your inputs.
Make everyone sign the policy. Review it annually. Actually enforce it.
The Bottom Line
We’re using AI more strategically because everyone knows what “strategic use” actually means. The AI policy provides that clarity.
And when clients ask about our AI practices (and they do ask), we can hand them our policy and say: “Here’s exactly how we handle it.” That’s a competitive advantage – not because we’re using AI (everyone is), but because we’re using it intentionally, ethically, and transparently.
You don’t need to be perfect at AI to create a policy. You just need to be honest about what you need to protect. Your team needs clarity. Your clients need transparency. Your agency needs accountability. An AI policy gives you all three.
Was it one of the smartest things we did in 2025? Absolutely.
If you’re reading this thinking “we should probably do that, too,” you’re right. You should.
Abbie S. Fink is president and owner of HMA Public Relations, Arizona’s oldest continuously operating PR firm. In early 2025, HMA implemented a comprehensive AI use policy. To learn more about AI policy implementation for your firm, reach out to afink@hmapr.com.
Want help developing your agency’s AI policy? HMA Public Relations can guide you through creating practical, enforceable AI guidelines that protect your clients, your team, and your reputation. Contact us at info@hmapr.com or 602-957-8881.
