By: Ryan Stickel on May 20th, 2024

Eight Artificial Intelligence Policy Principles for Businesses

You’ve probably heard a lot about artificial intelligence (AI) in the news lately. It’s also likely that you have employees who are already leveraging AI technology during their workday. While AI use can lead to increased productivity, it can also raise questions about the legality, safety and accuracy of the resulting work. So, how can we regulate AI use?

As with any new process or technology, AI opens up a whole new world of policies that your organization could and, in many cases, should implement. Let’s talk about AI and outline some principles that could apply to your business.

Before diving in, we must note that AI technology is advancing rapidly. It’s constantly learning and evolving, so what’s true about AI today might not be true tomorrow.

Please take this article as broad guidance about what to consider including in your policy. Understand that your policies and procedures should be proactive and flexible as you navigate these new technologies.

And as always, please consult your IT partner before implementing any new technological changes to your organization and consult your HR and/or legal team on any formal policy documents.

Why do I need an artificial intelligence policy?

New technologies, particularly powerful ones, require organizations to take responsibility for the safety of their employees and clients. We know AI provides plenty of productivity benefits, but with wave after wave of generative AI tools like ChatGPT arriving, it’s still a brand-new world.

There are countless AI tools on the market with different uses. Some generate text or images from user prompts, while others summarize information from provided materials and add further context. The catch is that these tools don’t pull their generated content out of thin air: they draw on various databases or the internet for source material, and they often don’t tell you exactly where it’s coming from.

This creates risks and ethical considerations that can’t be ignored. Each industry will have its own challenges related to AI, so keep that in mind as you shape your policies and understand that the perfect AI policy probably doesn’t exist at the moment.

What we can advise for now is that your business work to understand the tools being used and decide which of them you can confidently allow your employees to keep using. You want your workforce to leverage the latest and greatest to maximize productivity while remaining secure and on the right side of the law.

Possible AI Policy Principles for Your Business

1. Policy Compliance

For starters, your staff’s AI use must adhere to all existing company policies. This can include policies on harassment, confidential information, use of electronic resources and copyright protections.

2. Legal Compliance

This one is pretty straightforward: AI use should always adhere to all applicable laws. It should not be used for illegal purposes or to avoid legal requirements.

3. Ethical Responsibility

You don’t want anyone in your organization using AI to generate harmful or malicious content, or anything that runs counter to the company’s ethical standards. This includes, but is not limited to, generating defamatory content, engaging in discriminatory behavior or spreading misinformation.

OpenAI, the artificial intelligence research organization behind ChatGPT, regularly updates its models to curb the creation of harmful content. However, with so many users and an endless supply of prompts, some harmful outputs are bound to slip through the cracks.

Take the example from earlier this year, in which a group of 4chan users used an AI-powered image generator to create a wave of inappropriate images of Taylor Swift. While this is one of the worst use cases for generative AI and is unlikely to occur in your organization, it shows the power of these tools and why we must be diligent in monitoring their use.

4. Accuracy

Any material generated by AI should be checked for accuracy. While generative AI tends to be fairly accurate, it is not 100 percent reliable; at times it confidently produces factually incorrect material, often called "hallucinations."

Just as you would with any other reports, presentations, or communication materials, double-check AI-generated content for accuracy and ensure the reliability of your sources.

5. Copyright Protection

Copyright issues are an ongoing topic of debate with AI. You should avoid using AI-generated materials that include copyrighted or plagiarized content or otherwise infringe on third-party intellectual property rights.

For example, you could use a generative AI program to create an image of SpongeBob SquarePants standing in front of your company’s building. While that image is a one-of-a-kind original, it still does not give you the right to use SpongeBob’s likeness. Someone else owns the rights to that character.

6. Privacy and Data Protection

Many companies go to great lengths to protect their data through various cybersecurity and data protection services. But if you include private company data in a prompt for ChatGPT or a similar tool, that data now exists outside your organization, and you likely have little visibility into where it is stored or what measures are being taken to protect it.
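As a simple illustration of this principle, here’s a minimal Python sketch of one way a team might scrub obviously sensitive values from text before it ever reaches an external AI service. The patterns, the redact function and the example prompt are all hypothetical illustrations, not a complete safeguard; a real data protection program would go much further.

import re

# Illustrative patterns for common sensitive values.
# A real policy would cover far more (names, account numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable sensitive values with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize this note: call Jane at 555-123-4567 or jane@acme.com."
safe_prompt = redact(prompt)
print(safe_prompt)
# Prints: Summarize this note: call Jane at [REDACTED PHONE] or [REDACTED EMAIL].
# Only safe_prompt would ever be sent to the external AI tool.

The design choice worth noting is that the filtering happens on your side, before anything leaves the organization; once data is in a third-party prompt, you can no longer control it.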

7. Bias and Fairness

AI can inadvertently reflect biases present in the data it was trained on. And while it might sound silly to think a computer can have biases, humans created both the technology and the prompts for the AI. Bias will naturally surface at times, which underscores the need to proofread and fact-check any AI-generated materials.

8. User Feedback and Reporting

This principle helps ensure employees have a reliable and safe experience. With such rapidly changing technology, it’s essential that you remain open to feedback.

If an employee identifies a previously unknown safety issue with a tool, or discovers a new tool that appears safe and believes it should be approved for use, that input should be valued and assessed. Where this feedback goes will depend on your organization, but at some point it should reach the person accountable for the AI policy.

Expertise from an IT Partner

If you’re new to the world of AI, this might be a lot to unpack, but have no fear! Your IT partner should be ready and willing to discuss this topic with you and assist in developing any AI policies. In fact, they have likely just implemented one or are working on one themselves.
