Should Businesses Create an AUP for Generative AI?




In a recent article titled Generative AI Growth and Cybersecurity, Resham Ganglani, CEO at Halodata, wrote about some of the data and cybersecurity risks associated with the expanding use of generative AI tools.

“All organisations should create and enforce an Acceptable Use Policy (AUP) for LLMs and other AI tools for their staff. Just as most organisations now have AUPs for Internet use, company laptop use etc., the time has come for AI AUPs to be standard.”

This article summarises why AI AUPs are needed, discusses what such policies involve, and provides links to further articles covering what organisations should include in their own AUPs.

The Risks from Generative AI


In our previous article, we outlined the risks that can flow from using LLM (Large Language Model)-based generative AI systems such as OpenAI's ChatGPT and Google Bard. Here is a summary of those risks:

  • Data leakage – When individuals enter text into an LLM to obtain a response, they may inadvertently include Personally Identifiable Information (PII) or other sensitive data. That PII could then be stored by the LLM provider, accessed, and released to unauthorised parties.
  • Exposure of business IP – Users of LLMs may input not just personally identifiable information but also commercially sensitive or proprietary data. This occurred when Samsung Semiconductor employees used ChatGPT to modify source code: they submitted both the private code they wanted to change and confidential internal information for the LLM to use while generating the new code. After this incident, Samsung prohibited the use of LLMs.
  • Insecure generated source code – Many programmers have started using LLM-based tools such as GitHub Copilot and ChatGPT. However, the code these systems produce is often flawed. A study of over 1,600 code generation tasks performed with Copilot found that around 40% of the generated code contained already-known weaknesses from MITRE's CWE (Common Weakness Enumeration) list.
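To illustrate the kind of weakness such studies report, here is a minimal sketch (the function names are invented for illustration, and SQLite stands in for any database) of a pattern LLMs frequently emit: building an SQL query by string concatenation (CWE-89, SQL injection), contrasted with the parameterised form a code reviewer should insist on.

```python
import sqlite3

# Illustrative only: "unsafe_lookup" mimics code an LLM might generate;
# "safe_lookup" is the parameterised version that avoids CWE-89.

def unsafe_lookup(conn, username):
    # Vulnerable: the username is concatenated straight into the SQL text,
    # so an input like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def safe_lookup(conn, username):
    # Safe: the driver binds the value as a parameter, never as SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    malicious = "x' OR '1'='1"
    print(len(unsafe_lookup(conn, malicious)))  # returns every row: 2
    print(len(safe_lookup(conn, malicious)))    # returns no rows: 0
```

Both functions look plausible at a glance, which is exactly why AI-generated code needs the same (or stricter) review as human-written code.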

The early months of 2023 demonstrated rapidly growing interest in generative AI, and it will clearly play a role in business from now on. However, it will be up to leadership teams, particularly those in legal roles, to decide when and how staff can use it.


Why AUPs are Needed

As generative AI systems become common, organisations must consider and properly manage the potential risks. A crucial aspect of this risk management is user education, specifically about what data staff should not enter into LLMs, backed by the creation and enforcement of an Acceptable Use Policy.

Many organisations have existing AUPs covering other aspects of IT, such as laptop, smartphone, social media, and internet use. These are in place to protect both staff and the organisation. Creating an AUP to cover AI tools has now arguably become a requirement rather than a recommendation: everyone needs to know what is, and is not, acceptable when using these new tools.


What Should Be in a Generative AI AUP?

If you already have AUPs for other IT use, the content you include in an AUP for AI will probably be similar. However, if you don’t have existing policies, you will need to create a new one from scratch to cover the specific data and operations that are unique to your business. The good news is that many legal minds around the globe have spent time considering this, and they have made their thoughts on AI AUPs available for anyone to read.

As an aside, if you don't have any AUPs in place for other IT items, such as smartphone or internet use, then it might be time to consider creating those as well. But do them one at a time. If you attempt many AUPs simultaneously, the project can become overwhelming, and none may ever be completed and implemented.


Industry Thoughts and Examples on AI AUPs

Here are some articles published in 2023 that discuss AI AUPs. There are many more, but these three are good and cover what most organisations will need to address in their own policy.

Perkins Coie – Perkins Coie is one of the largest international law firms, operating across the USA and the Asia-Pacific region. In June 2023, they published Ten Considerations for Developing an Effective Generative AI Use Policy. The Perkins Coie article is an excellent starting point when considering an AI AUP.

CIO – An April 2023 article in the Managing Innovation & Disruptive Technology section of the CIO website addressed the need for AI AUPs. Titled 6 best practices to develop a corporate use policy for generative AI, the article covers why you need an AI AUP and how to create one.

DLA Piper – In July 2023, the global law firm DLA Piper published an article titled Generative AI – framing a business-centric policy to address opportunities and risks. In it, the authors examine frameworks for an AUP that enable the benefits of generative AI to be realised while mitigating the risks. DLA Piper has a strong worldwide presence, including offices in Singapore and other APAC cities.


If you have any questions about AUP for AI or any other aspect of IT that might impact your security, then reach out to us, and we’ll be happy to arrange a chat with an expert from our team or from one of our Partners or Vendors, who can advise you on how to proceed.
