How to Make AI Policies More Robust

by
Peter Purcell
December 11, 2024

AI’s rapid adoption has left many organizations uncertain about how to manage potential risks. Concerns over legal liability, data privacy, ethical considerations, and regulatory compliance have made some companies hesitant to embrace AI, particularly in areas like human resources and accounting.

A recent survey of 330 executives across the US revealed that more than half of organizations don’t currently have AI policies in place to mitigate risk. Many of the organizations that do have policies are missing key components that would make them effective.

In the same study, executives cited reasons their organizations have not established policies. The three most common were:

  • Perception of low risk to the organization
  • Lack of internal expertise
  • Rapid evolution of AI and uncertainty about the future

As AI becomes more pervasive, organizations need policies that are comprehensive, actionable, and effective.

How to Create More Robust AI Policies

1. Robust Training

One of the most significant gaps in AI policy implementation is the lack of comprehensive employee training. Even the most well-crafted policies are ineffective if employees don’t understand them or know how to apply them.

Setting expectations at the outset isn’t enough. Training programs should go beyond basic awareness to include:

  • AI Literacy: Educating employees on what AI is, how it works, its legal ramifications, and its potential applications and limitations.
  • Training on Specific Tools: Teaching employees how to properly use the tools relevant to their roles.
  • Data Privacy and Confidentiality: Ensuring employees understand how to protect sensitive information and avoid common pitfalls, such as inputting confidential data into public AI tools (see the sketch after this list).
  • Ethical Use: Highlighting the importance of transparency and responsible AI use in decision-making and reporting.
  • Using AI to One’s Benefit: Teaching employees how to write effective AI prompts and interpret AI outputs critically.
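
To make the data privacy point above concrete, here is a minimal sketch, in Python, of the kind of guardrail training can reinforce: a pre-submission check that warns an employee when a prompt appears to contain confidential data before it goes to a public AI tool. The pattern list and wording are illustrative assumptions, not a production-ready or exhaustive filter; any real implementation would reflect an organization’s own definition of sensitive data.

```python
import re

# Illustrative patterns only; a real policy would define its own categories
# of confidential data (client identifiers, financial records, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list:
    """Return warnings for anything that looks like confidential data."""
    return [
        f"Possible {label} detected; remove it before submitting."
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    ]

if __name__ == "__main__":
    example = "Summarize this email from jane.doe@example.com, SSN 123-45-6789."
    for warning in check_prompt(example):
        print(warning)
```

A simple check like this doesn’t replace training; it reinforces it by catching the most common mistakes at the moment they happen.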

Investing in training not only mitigates risks but also empowers employees to use AI tools effectively. Training should be continuous, paired with regular audits and reviews of the work processes where AI is involved.

2. Align the Executive Team

Another significant challenge in AI adoption is a lack of executive alignment. Misaligned perceptions of AI’s use and risks can result in fragmented policies that fail to address critical vulnerabilities.

To create cohesive and effective AI policies, organizations should:

  • Establish a Centralized AI Decision-Making Group: Bring together leaders from HR, legal, technology, finance, and other relevant departments to ensure policies are holistic and enforceable. Though these functions operate separately, their AI decisions affect one another, and alignment is essential, especially where priorities or risk tolerances differ.
  • Encourage Open Communication: Foster dialogue among executives to align on AI’s role, benefits, and risks within the organization.
  • Set Clear Objectives: Specifically define what the organization hopes to achieve with AI and ensure all policies and practices support these goals.

When leadership is aligned, policies are more likely to be effective and embraced across the organization.

3. Pay Attention to Regulatory Challenges

US and international laws create a complex compliance environment, especially while AI regulation is still in its formative years. For example, the European Union’s AI Act and laws in New York and Colorado impose stringent requirements on AI use.

To navigate these challenges, organizations must stay informed: monitor legal developments and adapt policies as new regulations emerge. Regular risk assessments are also important for evaluating how AI tools are used, particularly in areas like finance, where accurate reporting is vital. Many organizations in the study cited earlier reported concerns about litigation risk, and engaging legal experts can help.

Seizing the Opportunities of AI While Managing Risks

AI is a game-changer for many organizations looking to improve processes, reduce manual work, and make teams more efficient and effective. With comprehensive and aligned policies, organizations can mitigate risk while unlocking AI’s potential. The key lies in viewing AI not as a liability but as an opportunity—one that requires careful planning, collaboration, and a forward-thinking approach to governance.

At Trenegy, we help organizations leverage AI realistically and develop processes, policies, and procedures that align with business goals. For more information, reach out to us at info@trenegy.com.