Since everyone from high school interns to chief people officers is exploring what AI has to offer, you have to think about AI and your brand together. AI's impact on your brand perception, value, and equity could be significant.
In what might be my most meta activity yet (other than wine-inspired Instagram purchases), I asked Anthropic’s Claude, “How should leaders think about the ethical employment of AI technology?” Claude’s responses appear below in italics, each followed by my thoughts.
*Given the profound impact AI can have, I believe leaders should anchor their AI deployment decisions around three core ethical principles:*
*First, human dignity and agency must remain paramount. Any AI implementation should enhance rather than diminish human potential. This means being thoughtful about automation: not just asking “can we automate this?” but “should we automate this?” Leaders need to consider how AI deployment affects employees’ sense of purpose, creativity, and growth opportunities. When implementing AI tools, they should actively involve employees in the process, helping them become skilled partners with AI rather than viewing it as a replacement.*
No argument there from me, but I think the response lacks some explanation as to why human dignity and agency are paramount. For one, it’s about maintaining a work culture that inspires the best outcomes. For any organization to thrive, everyone has to feel like an integral part of the work, understand why it matters, and appreciate how it’s done uniquely.
*Second, fairness and equity must be actively designed for. AI systems can inadvertently amplify existing biases or create new forms of discrimination. Leaders must rigorously test AI systems for bias, particularly in high-stakes decisions like hiring, promotion, or customer service. This requires diverse teams involved in AI development and deployment, and regular auditing of AI outcomes across different demographic groups.*
I’ll let slide the sentence ending in a preposition (come on, Claude!). Again, no argument, but that can be a big lift for small and mid-sized businesses and nonprofits. It’s particularly difficult for organizations whose leadership teams don’t include professionals from diverse backgrounds. One way to address this dynamic is to challenge AI to vet its own responses for cultural biases and insensitivities. Key takeaway for me: vet all the AI-generated content that your organization shares or acts on.
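If you have someone on your team who’s comfortable with a little scripting, that vetting pass can even be automated. Here’s a minimal sketch of one way it might look, using Anthropic’s Python SDK; the model alias, prompt wording, and sample draft are my own illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of the "challenge AI to vet its own output" idea,
# assuming the Anthropic Python SDK (pip install anthropic) and an API
# key in the ANTHROPIC_API_KEY environment variable. The model alias and
# prompt wording here are illustrative, not prescriptive.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def vet_for_bias(draft: str) -> str:
    """Ask Claude to review a draft for cultural bias before it ships."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review the following draft for cultural biases, "
                "insensitivities, or stereotypes. List any issues you find "
                "and suggest neutral rewording:\n\n" + draft
            ),
        }],
    )
    return response.content[0].text

# Hypothetical usage: run every AI-generated draft through the vetting pass.
print(vet_for_bias("Our orientation program welcomes young digital natives."))
```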
*Third, transparency and accountability are essential. Employees and customers should understand when they’re interacting with AI systems and how those systems make decisions. Leaders should establish clear governance structures for AI deployment, including processes for addressing concerns and mistakes. When AI systems affect important decisions, there should always be meaningful human oversight and clear appeals processes.*
Agreed, but AI usage is skyrocketing, which means a lot of the horses might escape the barn before the door is closed. I remember when corporate use of social media started to take off. I was at Thomson Reuters at the time, and in what seemed like minutes there were more than 50 active social media accounts across the corporation. We hadn’t had the time or foresight to put standards and governance into place, resulting in a lot of duplicated effort, confused audiences, and mixed messages.
I think that dynamic is true in this context as well. Before AI starts infiltrating all aspects of your operations, get some governance and standards in place, stat!
*Beyond these principles, I believe leaders must cultivate what I’d call “ethical foresight,” actively anticipating potential negative consequences of AI deployment before they occur. This means regularly asking questions like:*

- *How might this AI system be misused?*
- *What groups might be disadvantaged by this deployment?*
- *What human capabilities might atrophy if we automate this process?*
- *How do we maintain human connection and empathy in AI-mediated interactions?*
This one might be my favorite of Claude’s responses. It touches a bit on my previous points. The concept of human capabilities atrophying is an interesting one as well. The math I could do in my head decades ago is far superior to what I can do today, simply because there are so many ways to get the same answer from calculators, phones, and my Google Assistant.
Maintaining human connection and empathy has, perhaps, the biggest impact on an organization’s brand. Brands are emotional by design, and emotions are intrinsically human. Organizations have to balance their use of AI with authentic, emotional connections to both their internal and external audiences.
How to do that varies by organization, audience, and relationship, but here are three suggestions on how to go about it.
- Find regular opportunities and channels to connect person to person, authentically. Connection doesn’t just mean sending a personal message; it also means actively listening, whether through surveys, one-on-one conversations, or focus groups. Communication is bidirectional.
- Ensure that your employees feel an emotional connection to your brand and believe in your Why. Make sure they feel what you want the outside world to feel about your brand.
- Showcase and celebrate people, both inside and outside of your organization. Celebrate what makes them special and human!
*Most importantly, leaders need to recognize that ethical AI deployment isn’t a one-time decision but an ongoing process requiring constant evaluation and adjustment. We need to stay humble about our ability to predict AI’s impacts and remain willing to course-correct when we see unintended consequences emerging.*
The phrase “We need to…” made me chuckle a bit. Who’s this “we” to whom you refer? Reminds me of a friend who has this uncanny ability to get his husband to do something simply by saying “we need to…” Nonetheless, it’s good advice. As consumers and deployers of AI, we do need to stay vigilant, attentive, and ready to course-correct at any point.
How did Claude and I do? Do you agree or disagree with anything we’ve said, or have caveats to add? I won’t speak for Claude, but I’d certainly like to hear them.