Preventing Data Leaks When Using Public AI

Public AI tools provide significant value for general business activities such as idea generation and handling non-sensitive customer information. These technologies enable professionals to compose emails rapidly, craft marketing materials, and distil complex reports with remarkable efficiency. However, alongside these advantages come substantial risks for organisations, particularly those entrusted with customer Personally Identifiable Information (PII).

Most public AI platforms use the inputs they receive to enhance and train their underlying models. Consequently, any prompt entered into tools such as ChatGPT or Gemini may contribute to their future outputs. A single careless prompt from an employee could inadvertently disclose client data, internal business strategies, or proprietary intellectual property. For business leaders and managers in London and the Greater London Area, it is imperative to act proactively and prevent data breaches before they escalate into critical liabilities.

Financial and Reputational Safeguards

While integrating AI into business operations is now indispensable for maintaining a competitive edge, ensuring its safe deployment must remain the foremost concern. The repercussions of a data breach caused by careless AI usage far exceed the investment required for preventative measures. A solitary mistake could expose confidential strategies, proprietary systems, or client-sensitive information, resulting in severe financial penalties, regulatory action, loss of competitive advantage, and lasting reputational harm to your organisation.

Consider the real-world example involving Samsung in 2023: Several employees within the semiconductor division, in pursuit of efficiency, inadvertently disclosed confidential information by pasting it into ChatGPT. The leaked materials included source code for emerging semiconductors and confidential meeting transcripts, which were subsequently retained by the public AI model for training purposes. This incident was not the result of a sophisticated cyberattack, but rather human error compounded by insufficient policy and technical safeguards. The consequence was a company-wide prohibition on generative AI tools to prevent further incidents of this nature.

Six Essential Prevention Strategies

Outlined below are six actionable strategies to help organisations in London and the Greater London Area secure their engagement with AI tools and foster a culture of security awareness.

1. Develop a Comprehensive AI Security Policy

For such a critical matter, guesswork is unacceptable. The cornerstone of your defence is a clearly articulated policy specifying the appropriate use of public AI tools. This policy should define what constitutes confidential information and explicitly state which types of data, such as National Insurance numbers, financial records, merger discussions, or strategic roadmaps, must never be entered into any public AI model.

It is essential to educate all team members on this policy during their induction and reinforce it with regular (ideally quarterly) refresher sessions. A comprehensive policy eliminates ambiguity and establishes robust security standards for your organisation.
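
To make the policy easier to enforce, some teams also capture the prohibited data categories in a machine-readable form that internal tooling can check drafts against before they go anywhere near an AI tool. The sketch below is a minimal illustration of that idea in Python; the category names and detection patterns are assumptions for this example, not an official classification scheme.

```python
import re

# Illustrative only: prohibited data categories from a hypothetical AI usage
# policy, each paired with a rough detection pattern. A real policy needs far
# more robust classification than simple regular expressions.
PROHIBITED_CATEGORIES = {
    "national_insurance_number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "payment_card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def policy_violations(text: str) -> list[str]:
    """Return the names of any prohibited categories detected in the text."""
    return [name for name, pattern in PROHIBITED_CATEGORIES.items() if pattern.search(text)]

if __name__ == "__main__":
    draft_prompt = "Summarise this client record: NI number AB 12 34 56 C, renewal due in May."
    print(policy_violations(draft_prompt))  # ['national_insurance_number']
```

Patterns like these catch only obvious formats; they complement, rather than replace, the human judgement the policy is designed to build.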

2. Require Use of Business-Grade AI Accounts

Free public AI tools often come with complex data-handling terms, because their providers' primary objective is the continuous improvement of their models. Upgrading to business-grade solutions such as ChatGPT Team or Enterprise, Google Workspace, or Microsoft Copilot for Microsoft 365 is crucial: commercial agreements with these vendors typically ensure that customer data is not used for model training. In contrast, the free and Plus versions of ChatGPT, for example, default to using user data for this purpose, although some settings can be adjusted to limit this.

Business-tier agreements provide essential data privacy assurances, creating a vital technical and legal barrier between your sensitive information and the public domain. By investing in these solutions, organisations in London and the Greater London Area gain not only advanced features but also robust privacy and compliance guarantees.

3. Implement Data Loss Prevention Solutions with AI Prompt Protection

Human error and intentional misuse are inevitable risks. Employees may inadvertently input confidential data into a public AI chat or attempt to upload files containing sensitive client PII. These risks can be mitigated by implementing Data Loss Prevention (DLP) solutions that intercept and assess data before it reaches the AI platform. Tools such as Cloudflare DLP and Microsoft Purview offer advanced browser-level context analysis, scanning prompts and uploads in real time.

These systems automatically block data identified as sensitive or confidential. For unclassified data, they employ contextual analysis to redact information matching specific patterns, such as credit card numbers, project code names, or internal file paths. Collectively, these controls provide a protective net, detecting, logging, and reporting incidents before they become serious breaches.
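
Commercial DLP platforms apply far richer contextual analysis than any short example can show, but the underlying idea of pattern-based redaction and logging can be sketched in a few lines of Python. The rules, placeholder labels, and project naming convention below are assumptions for illustration; they are not how Cloudflare DLP or Microsoft Purview are actually configured.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dlp-sketch")

# Illustrative patterns only; real DLP tools use contextual classifiers,
# not bare regular expressions.
REDACTION_RULES = [
    ("CARD_NUMBER", re.compile(r"\b(?:\d[ -]?){13,16}\b")),
    ("FILE_PATH", re.compile(r"(?:[A-Za-z]:\\|/)(?:[\w .-]+[\\/])+[\w .-]+")),
    ("PROJECT_CODE", re.compile(r"\bProject\s+[A-Z][a-z]+\b")),  # hypothetical naming scheme
]

def redact_prompt(prompt: str) -> str:
    """Replace matches of each rule with a labelled placeholder and log the hit."""
    for label, pattern in REDACTION_RULES:
        prompt, hits = pattern.subn(f"[{label} REDACTED]", prompt)
        if hits:
            log.info("Redacted %d occurrence(s) of %s before submission", hits, label)
    return prompt

if __name__ == "__main__":
    raw = "Summarise Project Falcon status; budget sheet at C:\\finance\\q3\\forecast.xlsx"
    print(redact_prompt(raw))
```

In practice these checks run inside the DLP product at the browser or network layer, rather than in a script your staff have to remember to call, which is precisely what makes them a reliable safety net.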

4. Deliver Ongoing Employee Training

Even the most comprehensive AI policy is ineffective if it is never put into practice. Security is a dynamic discipline that must evolve with emerging threats; a one-off memo or compliance lecture is not enough.

Organise interactive workshops where staff can practise crafting secure and effective AI prompts based on real-life scenarios relevant to their daily responsibilities. This practical training empowers employees to de-identify sensitive information before analysis, making them active contributors to organisational data security while still leveraging AI for operational efficiency.
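
As a concrete example of the de-identification habit such workshops aim to build, the sketch below swaps client identifiers for neutral tokens before a prompt is assembled, and keeps the mapping locally so the AI's output can be re-identified afterwards. The names and email address are invented for illustration.

```python
# A minimal de-identification sketch: replace client identifiers with neutral
# tokens before prompting, and keep the mapping locally to restore them later.
def pseudonymise(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    mapping = {}
    for i, value in enumerate(identifiers, start=1):
        token = f"<CLIENT_{i}>"
        mapping[token] = value
        text = text.replace(value, token)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

if __name__ == "__main__":
    note = "Jane Smith (jane.smith@example.org) asked to move her renewal to May."
    safe_prompt, mapping = pseudonymise(note, ["Jane Smith", "jane.smith@example.org"])
    print(safe_prompt)  # identifiers replaced with <CLIENT_1>, <CLIENT_2>
    # ...send safe_prompt to the AI tool, then restore names in the response:
    print(reidentify(safe_prompt, mapping))
```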

5. Conduct Routine Audits of AI Tool Usage and Logs

Security programmes are only effective when they are actively monitored. It is essential to maintain visibility over how teams are using public AI tools. Most business-grade platforms offer administrative dashboards; review these regularly, weekly or monthly, to identify unusual activity patterns or alerts that could indicate potential policy violations.

The purpose of audits is not to assign blame but to uncover training gaps or weaknesses in technology infrastructure. Analysing logs may reveal which departments require additional guidance or highlight areas where policies need to be refined.
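
Because export formats differ between vendors, the sketch below assumes a hypothetical CSV export (ai_usage_export.csv) with department and flagged columns. It simply tallies flagged prompts per department, which is the kind of question a routine audit should be able to answer quickly.

```python
import csv
from collections import Counter

# Hypothetical export format: department, user, flagged (true/false), reason.
# Real admin dashboards and log exports vary by vendor.
def flagged_prompts_by_department(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("flagged", "").strip().lower() == "true":
                counts[row.get("department", "unknown")] += 1
    return counts

if __name__ == "__main__":
    for department, total in flagged_prompts_by_department("ai_usage_export.csv").most_common():
        print(f"{department}: {total} flagged prompt(s) this period")
```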

6. Foster a Culture of Security Awareness

Even the most robust policies and technical controls are insufficient without a supportive organisational culture. Leaders must exemplify secure AI practices and encourage open dialogue, allowing employees to raise concerns or questions without hesitation.

This cultural commitment transforms security into a shared responsibility, creating collective vigilance that surpasses the capacity of any single tool. In this way, your team becomes the strongest defence against data breaches.

Embedding AI Safety as a Core Business Principle

The adoption of AI in business workflows is now essential for competitiveness and efficiency. As such, prioritising safe and responsible integration is paramount. The six strategies outlined above provide a framework for organisations in London and the Greater London Area to harness AI’s potential while safeguarding their most valuable data assets.

To advance your organisation’s secure AI adoption, contact us today to formalise your approach and ensure comprehensive protection for your business.