The BDA and artificial intelligence (AI)

The BDA is excited by the opportunities for innovation and efficiency offered by the proliferation of artificial intelligence (AI) models and tools.

This policy establishes guidelines for the responsible and ethical use of AI within the BDA. It aims to ensure compliance with applicable laws, protect data privacy, and promote transparency in AI usage, benefiting staff, members, other customers, and the organisation.

The BDA has adopted the Trades Union Congress (TUC) definition of AI, as currently set out in its draft bill:

An “artificial intelligence system” means a machine-based system that, for explicit or implicit objectives, infers from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Trades Union Congress (TUC) Artificial Intelligence (Regulation and Employment Rights) Bill

The policy sets out which AI tools and models may be used within our organisation, how they may be used, and by whom. It is not a policy which guides the use of AI within dietetic practice. Learn more about AI in practice in our Digital Vision.

Glossary of terms

  • AI – Artificial Intelligence (as defined within this policy)
  • AI systems, tools and models – software, apps, bots, etc. that the BDA might use
  • Deepfake videos – a type of synthetic media created using AI that mimics a real person's appearance or voice in a video or image
  • Enterprise products – core business software such as Teams and the CRM
  • Generative AI – a type of artificial intelligence that can create new content, such as text, images, audio, and video, based on existing data
  • Machine learning – the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data
  • Person in the loop – a person who remains actively involved in an ongoing process alongside the AI being employed
  • Voice clone generators – tools that use deep learning to replicate a person's voice from sample recordings

Scope of policy

This policy applies to all employees, volunteers, directors and others acting on behalf of BDA, a specialist group or committee, contractors, and third-party vendors who use or interact with AI systems on behalf of the BDA. It covers AI applications used for internal processes, member and customer interactions, and decision-making.

General principles

  1. AI usage must comply with UK laws, including but not limited to the Data Protection Act 2018, the UK General Data Protection Regulation (UK GDPR), the Equality Act 2010, and relevant employment and intellectual property laws.
  2. Staff, members and customers must be informed when interacting with AI systems where relevant. Any AI-driven decisions affecting individuals (e.g. recruitment, pricing, or member service eligibility) must be explainable and auditable.
  3. AI must not be used to replace dietetic skills in relation to the production of clinical or health-related information.
  4. AI must not be used to engage in discriminatory practices or generate harmful content.
  5. Personal data processed by AI systems must adhere to data protection regulations. Adequate security measures must be in place to protect AI systems from unauthorised access or misuse.
  6. Where necessary the BDA will retain a person in the loop of processes involving AI. Critical decisions affecting staff, members/customers, or the business must involve human oversight. AI systems should assist rather than replace human judgment in decision-making processes.
  7. AI systems must be regularly tested for accuracy, reliability, and fairness. Feedback loops should be in place to identify and rectify issues promptly.
  8. AI should only be used for purposes that align with the BDA’s core purpose and values. In particular, while adopting an agile and responsive approach to our work, we must maintain our standing as a trusted and credible organisation. We must also take into account climate impact and ensure proportionate and professional use that adds value for our customers. Employees are prohibited from using AI tools for unauthorised or personal purposes during work hours or when using BDA equipment.

Responsibilities

Finance Audit and Risk Committee

The Finance Audit and Risk Committee will consider risks associated with AI.

Chief Executive Officer (CEO)

The CEO holds overall responsibility for content and outputs created by the use of AI within the BDA. The CEO will report at least annually to the Board on AI utilisation by the BDA.

Management

  1. Ensure this policy is communicated and enforced across the BDA.
  2. Maintain a register of AI use which is reported to the Board annually for information.
  3. Be aware that AI tools may reproduce systemic biases which we may perpetuate through their use, and apply the general principles above to monitor this. An Equality Impact Assessment must be conducted as part of due diligence when considering the use of AI in a business process.
  4. Review how effective AI has been against stated objectives for each process where AI has been tried or deployed.
  5. Provide appropriate training and support for staff.

Staff, Directors and Volunteers

  1. Adhere to this policy when using AI tools. Staff, Directors and Volunteers (when involved in BDA related activity) must comply with all provisions and must seek assistance via the identified contacts if uncertain on any point. The use of AI should be paused until any such uncertainties are resolved.
  2. Report any misuse, errors, or risks associated with AI to management.

Suppliers

  1. Must comply with this policy when providing AI-related services or tools to the BDA.

The BDA’s use of AI

We allow and encourage the use of AI tools by our staff and volunteers but only within the parameters set out below.

Tools that staff are permitted to use may include digital support tools, machine learning tools used on Enterprise products or generative tools. These will meet the following criteria:

  1. Alignment with the general principles above
  2. Access approved via one of the below:
  • Approved by the BDA’s IT partners as safe (via admin approval to add plug-ins or by not accessing blocked websites)
  • Offered as part of an Enterprise product for which the BDA holds a license (e.g. the CRM, SurveyMonkey)

A BDA AI Toolkit, listing permitted tools and their suggested uses, will be available to all staff. The list detailed in the toolkit will also be available to Board directors and volunteers when undertaking BDA activity.

Decisions on AI tools generally benefit from being made by a diverse multidisciplinary team that understands the dependencies, so it is important that all BDA roles can contribute to this work.

Staff, volunteers and directors are encouraged to recommend additional tools for the BDA AI Toolkit. The process for their approval is via submission to the Wider Leadership Team (WLT). Anyone can contact a member of WLT to communicate their suggestion. Their suggestion will be considered by WLT against the criteria above and, if appropriate, the tool or model will be added to the toolkit.

The toolkit will be reported to the Board annually.

How use should be conducted

BDA staff are encouraged to investigate how AI might be used to enhance services for members or introduce efficiencies to their area of work and associated business processes, or the broader business.

Content such as articles, social media posts, guidelines and presentations should always be checked by a human before being published. Experience with these tools shows that their output is not always appropriate for professional channels such as the BDA’s, and requires separate application of the organisation’s brand and style guidelines.

Where AI has been used to support production, this should be transparent. Content may also be run through an approved AI platform for readability and quality improvement. AI should not be used to replace clinical judgement or skills and any content that contains clinical information must be carefully checked prior to publishing.

Accuracy

Any output generated by AI tools or models may be inaccurate. This includes information purported to be factual, e.g. legal, medical, or technical advice.

This applies regardless of media, i.e. whether the output is textual, graphic, audio, or of any other form. AI-generated output should never be assumed to be true or accurate, and users should always check any purported statements, facts, or representations before using them to inform or contribute to their work in any way.

Any content developed to be shared or for publication must be checked by an appropriately qualified individual.

The BDA does not use voice clone generators or create deepfake videos. We do not use AI tools to recreate the voice or appearance of anyone, as there are major ethical implications and risks attached to this.

The exception is if we are publishing something about AI and want to demonstrate what these tools can do. In these cases, we will only ever work with the consent of the original speaker and we will always make that clear to audiences.

Bias

Users should assume that any AI tool or model may have been trained on information that reinforces the existing privilege of some social groups and contains systemic biases against others, and that these biases can be reflected and perpetuated in its output.

Where potential for incorporation of such bias exists, users must consider:

  • The potential impacts of their using the relevant AI tool or model in the planned way e.g. whether decisions may be made in a manner that reinforces these
  • How such biases and/or their effects may be mitigated – the brief Equality Impact Assessment will support this

An Equality Impact Assessment is to be conducted before AI is used as part of a standard BDA process. Based on the points above, users must consider whether the planned use can be conducted in a way that does not risk harm to anybody and will not constitute discrimination under the Equality Act 2010. If they cannot be confident that it can, they should not proceed with the planned use, or should first seek advice from a member of the Senior Leadership Team (SLT).

Regulations and licence agreements

Use of enterprise systems should always be conducted in accordance with the terms of any licences (e.g. software licences) or agreements (e.g. user agreements) that allow and/or govern the use of permitted tools or models, whether those licences or agreements are held by or apply to the BDA, individual staff members, or others performing activities for the BDA.

If a user is in any doubt as to what any such licences or agreements require, they should contact the relevant member of the Wider Leadership Team and request further information.

Whenever a user uses an AI tool or model, they must do so in accordance with all laws relevant to the specific use. These may include, but are not limited to:

  • Advertising and marketing laws and regulations (including copyright and moral rights)
  • Laws dealing with defamation, libel, and slander
  • Anti-discrimination laws
  • Privacy and data protection laws
  • Laws restricting the disclosure of confidential information
  • Intellectual property laws

Use should always be conducted in accordance with relevant governmental and other industry-standard regulations, guidance, and codes of practice.

Additional requirements

Additionally, when using AI tools or models, users should always:

  • Redact any personal, proprietary or sensitive information and data before uploading to an AI tool. Uploading unredacted data may breach our data protection and GDPR responsibilities, and may also contribute to the training of a freely available large language model, reducing the value of the information to our community who have invested in it
  • Make any image designers, videographers or photographers aware that we may use image or audio tools to correct or make minor edits (such as resizing or removing background noise)
  • Prior to publishing or sharing information created or enhanced by AI, staff must ensure it is reviewed by an appropriately qualified human (e.g. a registered dietitian or subject matter expert) and follow the usual approval process. A disclaimer should be added if content has been developed using AI-generated summaries or ideas

For example: During the preparation of this guidance/document/image, ChatGPT (OpenAI) was used with prompts to summarise key points and improve readability in the [list sections]. After using this tool, the key contributors reviewed and edited the content and take full responsibility for this guidance.

When requesting any approvals required under this policy, users should communicate what the relevant generated output is, its purpose, and any additional considerations (e.g., the outcome of fact-checking conducted, the equality impact assessment and consideration of potential biases that may be represented in the output).

If, at any point, users are uncertain how to use AI tools or models in a risk-averse manner and in compliance with this policy and the law, they should not hesitate to contact the relevant Wider Leadership Team member to discuss their questions or concerns.

Reporting

Organisational reporting

The BDA will maintain a register of AI use which is reported to the Board annually for information.

Incident reporting

Any breaches or concerns related to the use of AI must be reported immediately to the Chief Executive Officer. Deliberate breaches of the policy may result in the disciplinary process being followed.

Raising concerns

Individuals wanting to raise concerns about the use of AI within the BDA should follow the BDA's Raising Concerns/Whistleblowing policy.

Intellectual property

Generally, any intellectual property rights created by or arising in works created by any of the BDA’s staff in the course of their employment will be the property of the BDA, unless alternative provisions are made in law or in an individual’s contractual arrangements with us (e.g. employment contracts or consultancy agreements).

This may include any intellectual property rights that an individual holds in any output created by an AI tool or model that the individual was responsible for creating (e.g. which was created by an AI model in response to parameters entered by the staff member) in the course of their employment.

AI users should be aware that they (and/or the BDA) may not always hold all intellectual property rights existing in AI output that they were responsible for creating.

Ownership of any intellectual property may depend on:

  • The relevant AI tool's or model's user agreements or licenses that apply to the user in relation to the use made of the tool or model. For example, the provisions within such agreements dealing with intellectual property ownership
  • Any pre-existing intellectual property rights in the output created, whether the works in which the rights exist were input into the tool or model as training data, as user input, or not at all
  • The terms of any licenses or agreements governing the use of the model’s or tool’s training data
  • Any other factors impacting intellectual property ownership, for example, existing license agreements, previous disputes, or rules on different types of intellectual property rights

Individuals must take care not to infringe the intellectual property rights of any other individual or organisation when:

  • sourcing or using an AI tool or model
  • contributing to training any AI tools or models
  • inputting data of any kind into any AI tools or models
  • receiving and using the output of any AI tools or models

Individuals should be careful not to use the output of any AI tool or model in a way that infringes any other party’s intellectual property rights. They should be aware that AI tools and models may have been trained on content containing intellectual property rights belonging to others and such data may have been used without a valid license for this use.

Even if training was conducted in accordance with a license, publication of any output may not be covered by the provisions of the license. Therefore, individuals should always check that they are complying with the terms of any relevant licenses.

This includes intellectual property licences granted to staff, to the BDA, or to another party, where staff or the BDA are covered by them via further licences or agreements (e.g. user agreements for relevant AI tools or models).

Individuals should use AI tools and models in accordance with their terms of use (and similar agreements) and with this policy, and should contact a relevant member of the Wider Leadership Team for assistance if they are unsure whether a particular use of given output is likely to constitute intellectual property infringement.

Data protection and privacy

All uses of AI within our organisation must be conducted in accordance with the UK’s data protection laws, including the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR).

No personal data (i.e. information about an individual from which they may be identified) belonging to anybody, including members/customers, staff, and members of the public, should be input into any tool or model unless express approval to do so in the manner and for the purposes in question has been obtained beforehand.

A member of the Senior Leadership Team must always sign off the inclusion or use of individual information when utilising AI. Such approval will only be granted when the proposed use is in reliance on a legitimate basis for processing (e.g. it is with data subjects’ consent) and in accordance with other data protection principles (e.g. this processing is necessary and appropriate for the relevant purpose).

Users must consider whether any AI-generated output they receive and use contains (or could contain) personal data belonging to anybody. This applies regardless of whether or not a user inputs any personal data into the relevant AI tool or model themselves to generate this output.

If output contains personal data, this output should not be used any further by the user who was responsible for generating it unless and until approval for such use is granted as above.

Additionally, users should always comply with the BDA’s other policies and procedures relevant to data protection and privacy, including our: Privacy Policy and Data Security Policy.

Protection of confidential information

Individuals must take care when using any of the BDA’s confidential or commercially sensitive information as input into, or to inform input into, any AI tool or model. If any restrictions have been placed on such use (e.g. by the AI tool’s terms or by line managers), these should be observed.

If any AI-generated output contains the BDA’s confidential or commercially sensitive information or if such information could be extrapolated from the output, this output should not be communicated outside of our organisation (e.g. via publication or communication to a client) without prior approval from SLT.

If a staff member has access to any confidential information belonging to a partner, collaborator, subsidiary, employee, or similar of the BDA, the rules set out within this section, above, also apply to this information.

Further, such information must only be used in accordance with any agreements governing the exchange and use of such information (e.g. any collaboration agreements, purchase or investment agreements, or non-disclosure agreements).