LES DAMES D’ESCOFFIER

LONDON CHAPTER

  —  Policies and Governance  —  


ARTIFICIAL INTELLIGENCE (AI) USE POLICY


Version 1.0  |  February 2026

Registered Charity, England & Wales


An affiliate chapter of Les Dames d’Escoffier International (LDEI), USA

ARTIFICIAL INTELLIGENCE (AI) USE POLICY


Policy Overview

Organisation:     Les Dames d’Escoffier, London Chapter

Type:                   Registered Charity — Women in Food, Beverage & Hospitality

Policy:                 Artificial Intelligence (AI) Use Policy

Version:              1.0

Effective Date:  February 2026

Review Date:     February 2027

Approved by:     Board of Trustees


1. Purpose and Scope

Les Dames d’Escoffier, London (hereafter “the Chapter”) is a women-led membership organisation and registered charity dedicated to supporting and championing women leaders in the food, beverage, and hospitality industries. This Policy sets out how the Chapter will use Artificial Intelligence (AI) tools responsibly, ethically, and in compliance with United Kingdom law, including guidance from the Information Commissioner’s Office (ICO).

This Policy applies to:

  • All trustees, board members, and officers of the London Chapter

  • Volunteers, contractors, and consultants acting on behalf of the Chapter

  • All use of AI tools in connection with the Chapter’s activities, including communications, events, grant-making, scholarships, and mentorship programmes

  • Any processing of data shared with or received from Les Dames d’Escoffier International (LDEI), USA


2. Definitions

For the purposes of this Policy, the following definitions apply:

AI Tool: Any software, application, or system that uses machine learning, natural language processing, generative AI, or other algorithmic processes to assist with tasks. Examples include ChatGPT, Microsoft Copilot, Google Gemini, Claude, Grammarly, and similar tools.

Generative AI: AI systems capable of producing text, images, audio, or other content in response to user prompts.

Personal Data: Any information relating to an identified or identifiable living individual, as defined under the UK GDPR and the Data Protection Act 2018.

Special Category Data: Personal data that warrants higher protection under UK GDPR Article 9, including data relating to health, racial or ethnic origin, religious beliefs, or sexual orientation.

LDEI: Les Dames d’Escoffier International, the USA-based international organisation with which member data may be shared.

IDTA: The UK International Data Transfer Agreement, a UK-approved mechanism for lawfully transferring personal data from the UK to countries without UK adequacy status.

Data Bridge: The UK–US Data Bridge (in force October 2023), a framework under which certified US organisations can lawfully receive personal data from the UK without additional transfer safeguards.


3. Guiding Principles

The Chapter’s use of AI is guided by the following principles, which reflect our values as a women-led charity in the food and hospitality sector and align with the ICO’s framework for responsible AI use:

3.1 Responsible and Ethical Use

We will use AI in a manner consistent with our charitable objectives and our commitment to our members, volunteers, and the communities we serve. AI must not be used in ways that are discriminatory, misleading, harmful, or contrary to the Chapter’s values.

3.2 Human Oversight

AI tools are assistants, not decision-makers. All material decisions — including grant and scholarship awards, membership applications, communications with beneficiaries, and financial matters — must be reviewed and approved by a person with appropriate authority. AI-generated content must always be critically reviewed before use or publication.

3.3 Data Protection and Privacy

The Chapter is committed to full compliance with the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018, and all applicable guidance from the ICO, including the ICO’s guidance on AI and data protection. Personal data must never be entered into third-party AI tools unless a lawful basis exists and appropriate data processing agreements are in place.

3.4 Transparency and Honesty

Where AI has played a material role in creating content published or communicated by the Chapter, this should be acknowledged. We will not misrepresent AI-generated content as entirely human-authored where that distinction is material to the audience.

3.5 Fairness and Non-Discrimination

We recognise that AI systems can embed and amplify biases, including gender bias, racial bias, and cultural bias. We will monitor AI outputs for discriminatory content and will not use AI in any way that disadvantages a person on the basis of a protected characteristic under the Equality Act 2010.

4. Permitted Uses of AI

The following uses of AI are permitted, subject to compliance with this Policy:

  • Drafting, editing, and proofreading correspondence, newsletters, and promotional materials

  • Creating first drafts of grant communications, press releases, event descriptions, and social media posts — subject to human review and approval before publication

  • Research and information gathering to inform programme development or industry trends, where outputs are independently verified

  • Summarising publicly available information or internal meeting notes that contain no personal data

  • Generating images or visual assets for events and publications — subject to intellectual property review before use

  • Translation assistance for multilingual communications

  • Accessibility improvements, such as generating image alt-text

  • Administrative drafting tasks such as template responses or scheduling text, where no personal data is processed


5. Prohibited Uses of AI

The following uses of AI are strictly prohibited:

  • Entering personal data, membership details, financial information, or special category data into publicly accessible or free-tier AI tools (such as the free versions of ChatGPT, Google Gemini, or similar services)

  • Using AI to make decisions about membership, grants, scholarships, or the exclusion of any individual, or to automate such decisions without human oversight

  • Creating deepfakes, synthetic media depicting real individuals, or any misleading or fabricated content

  • Producing content that is discriminatory, offensive, or in breach of the Chapter’s values or the Equality Act 2010

  • Using AI to impersonate trustees, officers, members, or external stakeholders

  • Submitting AI-generated content in funding bids or grant applications without disclosure, where the funder’s terms require such disclosure

  • Using any AI tool that has not been assessed under this Policy where personal data may be processed


6. Data Protection and the ICO

6.1 ICO Registration

Any organisation that processes personal data is required to register with the ICO and pay the applicable data protection fee, unless exempt. The Chapter’s Data Protection Lead is responsible for ensuring that the Chapter’s ICO registration is current, accurately reflects its processing activities, and is renewed annually. ICO registration can be verified and renewed at ico.org.uk.


6.2 UK GDPR Compliance

Any use of AI that involves or could involve personal data must comply with the UK GDPR and the Data Protection Act 2018. Before using an AI tool to process personal data, the responsible person must:

  1. Confirm a lawful basis for processing under UK GDPR Article 6 (and Article 9 for special category data)

  2. Review the AI provider’s privacy policy and data processing terms to understand how data is handled

  3. Ensure a Data Processing Agreement (DPA) is in place where the AI provider acts as a data processor on the Chapter’s behalf

  4. Conduct a Data Protection Impact Assessment (DPIA) where the processing is likely to result in a high risk to individuals, in line with ICO guidance


The ICO has published specific guidance on AI and data protection, including on the use of generative AI tools. The Data Protection Lead should consult this guidance — available at ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence — before approving the use of any new AI tool.


6.3 International Data Transfers to LDEI (USA)

The Chapter may share member data and programme information with Les Dames d’Escoffier International (LDEI) in the United States as part of its affiliate obligations. Any such transfer of personal data must be carried out lawfully. The appropriate mechanism will depend on whether LDEI is certified under the UK–US Data Bridge:

  • If LDEI is certified under the UK Extension to the EU–US Data Privacy Framework (the “UK–US Data Bridge”, in force from 12 October 2023), personal data may be transferred to LDEI without additional safeguards under that certification.

  • If LDEI is not certified under the Data Bridge, transfers must be governed by a UK International Data Transfer Agreement (IDTA) or equivalent approved safeguard, accompanied by a Transfer Risk Assessment (TRA).


The Data Protection Lead is responsible for confirming LDEI’s certification status before any transfer takes place, and for ensuring that appropriate documentation is in place and kept up to date.


6.4 AI Tools and Third-Party Data Sharing

Many consumer-facing AI tools may use data entered into them to train their models or may share that data with third parties. To manage this risk:

  • Only enterprise or business-grade versions of AI tools that offer data isolation and contractually commit not to use inputs for model training may be used where member, applicant, or stakeholder data is involved

  • Free-tier AI tools may only be used for entirely non-personal, anonymised, or publicly available content


7. Intellectual Property

The copyright position on AI-generated content is not fully settled under current UK law. Until further clarity emerges, the Chapter will:

  • Not assert copyright ownership over content known to be entirely AI-generated without first seeking legal advice

  • Ensure that AI tools used for content creation do not reproduce third-party copyrighted material

  • Maintain a record of significant AI-generated content and the tools used to produce it

  • Review AI-generated text and images for potential copyright issues before publication or external use


8. Approved AI Tools

The following categories of tools may be used by the Chapter, subject to Board approval and compliance with this Policy. A register of approved tools shall be maintained by the Data Protection Lead:

  • Enterprise / Business Generative AI (e.g. Microsoft 365 Copilot, Google Workspace AI, Claude for Teams, ChatGPT Team / Enterprise): Permitted for drafting, summarisation, and administrative tasks. Personal data may only be processed where a DPA is in place and, where required, a DPIA has been completed.

  • Spelling and Grammar Tools (e.g. Grammarly Business): Permitted. Avoid inputting sensitive or personal data. Confirm the business version’s data terms before use.

  • Free-tier Generative AI (e.g. ChatGPT Free, Google Gemini Free): Permitted for non-personal, anonymised, or publicly available content drafting only. Must never be used with member, applicant, or stakeholder data.

  • AI Image Generators (e.g. Adobe Firefly, Microsoft Designer): Permitted for event and marketing assets. Intellectual property review required before publication.

  • AI Translation Tools: Permitted for non-personal content. A DPIA is required if personal data is involved.


Any tool not listed above must be reviewed and approved by the Board before use.

9. Roles and Responsibilities

  • Board of Trustees: Approve this Policy and any material revisions. Ensure adequate resources are available for compliance. Receive an annual report on AI use and data protection matters.

  • Chair / President: Champion responsible AI use across the Chapter. Act as the escalation point for concerns that cannot be resolved by the Data Protection Lead.

  • Data Protection Lead: Maintain the register of approved AI tools. Confirm ICO registration is current. Oversee DPIAs, DPAs, and LDEI data transfer arrangements. Act as the primary point of contact with the ICO.

  • All Officers & Volunteers: Follow this Policy at all times. Seek approval before using any new AI tool. Report concerns or potential breaches promptly to the Data Protection Lead.

10. Training and Awareness

The Chapter will ensure that all trustees, officers, and active volunteers who use AI tools in connection with the Chapter’s activities receive appropriate guidance on this Policy. This will be delivered through:

  • Induction materials provided to all new trustees and officers

  • Annual briefings or written updates, particularly where the Policy or ICO guidance has changed

  • Signposting to ICO guidance on AI and data protection, available at ico.org.uk

11. Monitoring and Review

This Policy will be reviewed annually by the Board, or sooner in the event of:

  • Significant changes to UK law, the UK GDPR, or ICO guidance on AI and data protection

  • Material changes to the AI tools used by the Chapter or to the tools available on the market

  • A data breach, ICO investigation, or incident involving AI

  • Changes to the Chapter’s relationship with LDEI or to the status of LDEI’s UK–US Data Bridge certification

The next scheduled review date is February 2027. The Board will record its review in the minutes of the relevant meeting.

12. Concerns and Complaints

Any member, volunteer, or stakeholder who has a concern about the Chapter’s use of AI, or who suspects a breach of this Policy or of data protection law, should raise it with the Data Protection Lead or the Chair in the first instance. Complaints will be handled in accordance with the Chapter’s general complaints procedure.

Where a concern relates to a potential breach of the UK GDPR or the Data Protection Act 2018, individuals also have the right to raise a complaint directly with the Information Commissioner’s Office (ICO) at ico.org.uk or by calling 0303 123 1113.


Approved by the Board of Trustees, Les Dames d’Escoffier, London Chapter

February 2026