The BDA is excited by the opportunities for innovation and efficiency offered by the proliferation of artificial intelligence (AI) models and tools.
This policy establishes guidelines for the responsible and ethical use of AI within the BDA. It aims to ensure compliance with applicable laws, protect data privacy, and promote transparency in AI usage, benefiting staff, members, other customers, and the organisation.
The BDA has adopted the Trades Union Congress (TUC) definition of AI, as currently set out in its draft bill:
An “artificial intelligence system” means a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Trades Union Congress (TUC) Artificial Intelligence (Regulation and Employment Rights) Bill
The policy sets out which AI tools and models may be used within our organisation, how they may be used, and by whom. It does not guide the use of AI within dietetic practice. Learn more about AI in practice in our Digital Vision.
This policy applies to all employees, volunteers, directors and others acting on behalf of BDA, a specialist group or committee, contractors, and third-party vendors who use or interact with AI systems on behalf of the BDA. It covers AI applications used for internal processes, member and customer interactions, and decision-making.
The Finance, Audit and Risk Committee will consider risks associated with AI.
The CEO holds overall responsibility for content and outputs created by the use of AI within the BDA. The CEO will report at least annually to the Board on AI utilisation by the BDA.
We allow and encourage the use of AI tools by our staff and volunteers but only within the parameters set out below.
Tools that staff are permitted to use may include digital support tools, machine learning tools used on Enterprise products, or generative tools. These will meet the following criteria:
A BDA AI Toolkit, listing permitted tools and their suggested uses, will be available to all staff. The list detailed in the toolkit will also be available to Board directors and volunteers when undertaking BDA activity.
Decisions on AI tools generally benefit from being made by a diverse, multidisciplinary team that understands the dependencies, so it is important that all BDA roles can contribute to this work.
Staff, volunteers and directors are encouraged to recommend additional tools for the BDA AI Toolkit. The process for their approval is via submission to the Wider Leadership Team (WLT). Anyone can contact a member of WLT to communicate their suggestion. Their suggestion will be considered by WLT against the criteria above and, if appropriate, the tool or model will be added to the toolkit.
The toolkit will be reported to the Board annually.
BDA staff are encouraged to investigate how AI might be used to enhance services for members or introduce efficiencies to their area of work and associated business processes, or the broader business.
Content such as articles, social media posts, guidelines and presentations should always be checked by a human before being published. Experience with these tools shows that their output is not always appropriate for professional channels such as the BDA’s and needs separate application of the organisation’s brand and style guidelines.
Where AI has been used to support production, this should be transparent. Content may also be run through an approved AI platform for readability and quality improvement. AI should not be used to replace clinical judgement or skills and any content that contains clinical information must be carefully checked prior to publishing.
Any output generated by AI tools or models may be inaccurate. This includes information purported to be factual, e.g. legal, medical, or technical advice.
This applies regardless of media, i.e. whether the output is textual, graphic, audio, or of any other form. AI-generated output should never be taken to be true or accurate, and users should always check the accuracy of any purported statements, facts, or representations before using these to inform or contribute to their work in any way.
Any content developed to be shared or for publication must be checked by an appropriately qualified individual.
The BDA does not use voice clone generators or create deepfake videos. We do not use AI tools to recreate the voice or appearance of anyone, as there are major ethical implications and risks attached to this.
The exception is if we are publishing something about AI and want to demonstrate what these tools can do. In these cases, we will only ever work with the consent of the original speaker and we will always make that clear to audiences.
Users should assume that the AI tool or model used may have been trained on information that reinforces an existing privilege held by a social group and includes systemic biases that discriminate against particular groups of people; such biases can therefore be reflected and perpetuated via its output.
Where potential for incorporation of such bias exists, users must consider:
An Equality Impact Assessment is to be conducted before AI is used as part of a standard BDA process. Based on the points above, users must consider whether the planned use can be conducted in a way that does not risk harm to anybody and will not constitute discrimination under the Equality Act 2010. If they cannot be confident that it can, they should not proceed with the planned use, or should first seek advice from a member of the Senior Leadership Team (SLT).
Use of enterprise systems should always comply with the terms of any licences (e.g. software licences) or agreements (e.g. user agreements) that allow and/or govern the use of permitted tools or models, whether such licences or agreements are held by or apply to the BDA, individual staff members, or others performing activities for the BDA.
If a user is in any doubt as to what any such licences or agreements require, they should contact the relevant member of the Wider Leadership Team and request further information.
Whenever a user uses an AI tool or model, they must do so in accordance with all laws relevant to the specific use. These may include, but are not limited to:
Use should always be conducted in accordance with relevant governmental and other industry-standard regulations, sets of guidance, and codes of practice. These include, but are not limited to:
Additionally, when using AI tools or models, users should always:
For example: During the preparation of this guidance/document/image, ChatGPT (OpenAI) was used with prompts to summarise key points and improve readability in the [list sections]. After using this tool, the key contributors reviewed and edited the content and take full responsibility for this guidance.
When requesting any approvals required under this policy, users should communicate what the relevant generated output is, its purpose, and any additional considerations (e.g., the outcome of fact-checking conducted, the equality impact assessment and consideration of potential biases that may be represented in the output).
If, at any point, users are uncertain how to use AI tools or models in a risk-averse manner and in compliance with this policy and the law, they should not hesitate to contact the relevant Wider Leadership Team member to discuss their questions or concerns.
The BDA will maintain a register of AI use, which will be reported to the Board annually for information.
Any breaches or concerns related to the use of AI must be reported immediately to the Chief Executive Officer. Deliberate breaches of the policy may result in the disciplinary process being followed.
Individuals wanting to raise concerns about the use of AI within the BDA should follow the BDA's Raising Concerns/Whistleblowing policy.
Generally, any intellectual property rights created by or arising in works created by any of the BDA’s staff in the course of their employment will be the property of the BDA, unless alternative provisions are made in law or in an individual’s contractual arrangements with us (e.g. employment contracts or consultancy agreements).
This may include any intellectual property rights that an individual holds in any output created by an AI tool or model that the individual was responsible for creating (e.g. which was created by an AI model in response to parameters entered by the staff member) in the course of their employment.
AI users should be aware that they (and/or the BDA) may not always hold all intellectual property rights existing in AI output that they were responsible for creating.
Ownership of any intellectual property may depend on:
Individuals must take care not to infringe the intellectual property rights of any other individual or organisation when:
Individuals should be careful not to use the output of any AI tool or model in a way that infringes any other party’s intellectual property rights. They should be aware that AI tools and models may have been trained on content in which others hold intellectual property rights, and such data may have been used without a valid licence for this use.
Even if training was conducted in accordance with a licence, publication of any output may not be covered by the provisions of that licence. Therefore, individuals should always check that they are complying with the terms of any relevant licences.
This includes intellectual property licences granted to staff, to the BDA, or to another party, where individuals are covered by their terms via further licences or agreements (e.g. user agreements for relevant AI tools or models).
Individuals should use AI tools and models in accordance with their terms of use (and similar) and with this policy, and should contact a relevant member of the Wider Leadership Team for assistance if they are unsure whether a particular use of given output is likely to constitute intellectual property right infringement.
All uses of AI within our organisation must be conducted in accordance with the UK’s data protection laws, including the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR).
No personal data (i.e. information about an individual from which they may be identified) belonging to anybody, including members/customers, staff, and members of the public, should be input into any tool or model unless express approval to do so in the manner and for the purposes in question has been obtained beforehand.
A member of the Senior Leadership Team must always sign off the inclusion or use of individual information when utilising AI. Such approval will only be granted when the proposed use is in reliance on a legitimate basis for processing (e.g. it is with data subjects’ consent) and in accordance with other data protection principles (e.g. this processing is necessary and appropriate for the relevant purpose).
Users must consider whether any AI-generated output they receive and use contains (or could contain) any personal data belonging to anybody. This applies regardless of whether or not the user input any personal data into the relevant AI tool or model themselves to generate this output.
If output contains personal data, this output should not be used any further by the user who was responsible for generating it unless and until approval for such use is granted as above.
Additionally, users should always comply with the BDA’s other policies and procedures relevant to data protection and privacy, including our: Privacy Policy and Data Security Policy.
Individuals must take care when using any of the BDA’s confidential or commercially sensitive information as input into, or to inform input into, any AI tool or model. If any restrictions apply to such use, whether set by the AI tool or otherwise (e.g. by line managers), these should be observed.
If any AI-generated output contains the BDA’s confidential or commercially sensitive information or if such information could be extrapolated from the output, this output should not be communicated outside of our organisation (e.g. via publication or communication to a client) without prior approval from SLT.
If a staff member has access to any confidential information belonging to a partner, collaborator, subsidiary, employee, or similar of the BDA, the rules set out within this section, above, also apply to this information.
Further, such information must only be used in accordance with any agreements governing the exchange and use of such information (e.g. any collaboration agreements, purchase or investment agreements, or non-disclosure agreements).