What are the key components of a policy to regulate employee use of AI?

Following on from our recent podcast on generative AI and employment law, we describe below some key questions to consider if you’re thinking about introducing a workplace policy to regulate the use of generative AI, such as ChatGPT.

First, think about whether you need an AI policy at all. Why is the policy necessary? If external AI assistance is being used in your workplace, it is likely that you will need to put some limitations or a framework around its use. Some employers are imposing blanket bans on the use of generative AI because of the risks around accuracy, confidentiality, discrimination and data security at this early stage of its development. In those circumstances, a detailed policy would not be required.

Having established the need and a rationale for a policy, the following key points should be considered for inclusion.

  • Scope – list the types of AI tools that are covered, the parts of the business to which the policy applies and the categories of employee who are included. Does the policy cover employees/workers/consultants/contractors/volunteers/work experience placement students?

  • Consultation – are you legally required to consult your workforce? If not, then consider whether consultation with your workforce may be beneficial, given the level of concern about the impact of AI amongst many employees and the complexity of the issues.

  • Other policies – are there other policies which should be referenced or which would overlap with an AI policy? For example, IT policies, data security policies and Bring Your Own Device policies.

  • Ownership and roles – who is responsible for developing and maintaining the policy? How often will the policy be reviewed and, if necessary, updated? Given the pace of change in relation to generative AI, frequent review will be required. Will the policy be enforced by the same people who have ownership of the policy?

  • Record keeping and monitoring – will you require employees to record their use of AI? How will any monitoring take place and how will you ensure you comply with existing legal limitations on monitoring? Will this involve any new types of monitoring? If it does involve new monitoring, it may be necessary to undertake a data protection impact assessment / privacy impact assessment.

  • Use of generative AI – how do you anticipate AI tools being used for business activities in your organisation? Will their use be limited to internal purposes only, or extend to external client/customer work? Will decisions be based on generative AI outputs and, if so, which decisions and who will be affected by them? Will employees be allowed to use their personal devices, will it be mandatory for generative AI to be used only on employer-provided devices, or will both be an option? Will you require employees to ‘opt out’ of the application’s ability to train itself using data gathered after a user has entered a prompt? If not, this should be covered in your impact assessment.

  • Risk of inaccuracies – what would be the impact of inaccurate outputs from generative AI tools? Would these significantly affect individuals? How will any inaccuracies be managed and who will be accountable internally for that risk?

  • Guidelines – what standards will you require of employees using generative AI for their work? When developing these, you will need to take into account the risks that are likely to be triggered when using AI for business activities. A non-exhaustive list of examples is as follows.

    • Confidentiality of data – for example, employee, customer or supplier data.

    • Compliance with data protection policies and legislation.

    • Intellectual property rights and licensing – for example, any licensing conditions imposed by AI applications’ terms of use and restrictions on unauthorised entering of third-party data.

    • Equalities, discrimination and ethics – ensure that discriminatory language is not used when entering prompts into applications. Consider cross-referencing other equality/diversity policies, and any Code of Conduct and/or organisational ethics policies. Think about explicitly extending restrictions on bullying, harassment and discrimination to the use of generative AI, and ensure that generative AI is authorised for ethical, responsible use only.

    • Security measures – cross-referencing other IT security policies, requiring strong passwords, etc.

    • ‘Human in the loop’ requirements – what is your approach to ensuring that critical thought is applied to AI-generated outputs? How will you ensure that you comply with existing legal requirements in relation to automated decision making? How will you be able to explain any decisions made in reliance on AI-generated outputs?

  • Training – what training on AI will be offered, and how will it be provided? Will it be voluntary or mandatory? One of the main concerns currently being reported amongst employees is a lack of training on AI, despite a widespread belief that AI will have an important impact on job roles. What technical support will be provided on the use of applications? Who is the point of contact for training and support?

  • Enforcement – who should breaches be reported to, what sanctions will be imposed for breach of the policy and what factors will be taken into account when considering the seriousness of the breach? Consider whether you will require employees to provide access to devices on which they have used a generative AI application and whether they will need to provide passwords/login details.

These are some of the broad, high-level issues which should be taken into account at the policy scoping and design stage. The extent of the use of generative AI within a workplace will depend very much on an individual organisation’s requirements and the extent to which it wishes to build in protections against the various risks currently associated with employee use of AI applications.

Careful consideration should be given to the creation and implementation of AI policies (whether on the employer or employee side) – the requirements will depend very much on each individual employer; there is no ‘one size fits all’ approach. But a well-drafted, written policy which is clearly communicated to staff and regularly updated is recommended to provide reassurance to employers who wish to harness this powerful new technology whilst limiting the risks.

Contributor: Emma Erskine-Fox

This publication is intended for general guidance and represents our understanding of the relevant law and practice as at October 2023. Specific advice should be sought for specific cases. For more information see our terms & conditions.

Date published: 30 October 2023
