AI - Its unavoidable, so lean in!

Rob Tregaskes

Oct 2025

A user monitoring the activity of a corporate controlled AI tool

A severe cybersecurity incident is the most dangerous risk most companies face. It is the greatest threat to a company’s ability to deliver for its customers, and so it’s value. This is the eleventh in a series of posts about cybersecurity risk, and how you can reduce it to give customers and investors confidence.

Everyone is using AI - your best choice is to take control

At OpenAI's DevDay event on 6th October Sam Altman shared that there were about 800 million weekly active users of ChatGPT. The explosion in AI use has been meteoric and set new precedents for what growth is possible, sitting on the shoulders of broad internet access and a mature smartphone market. Estimates for the number of smartphone users are around the 4.7 billion mark globally, so if these number are right, in less than three years, about 17% of smartphone users are using ChatGPT every week. Pragmatically in developed markets those penetration rates will be much higher. If you have a team of 5 or more, your team are already using AI at least weekly.

So what are risks of AI usage? There are a number of more broad risks around AI usage, but the two keys ones to be aware of are:

  • Accidental data leakage. The most common form of data leakage is associated with sensitive files and data being shared with an AI tool with unintentional implications such as becoming part of the training data, or exposing sensitive data to another user who should not have access.

  • Prompt injection. The amazing power of LLMs (Large Language Models, the technology underneath most modern AI tools) is that they can read and follow written instructions (most commonly referred to as 'prompts'). The downside of this power is that bad actors can write prompts into all sorts of places being read by the LLM and direct the LLM to do something malicious.

Practical steps to minimise AI risk

There are a number of steps that can be taken to minimise the risks associated with AI tool use, whilst keeping the tremendous productivity benefits they provide:

  • Get a paid account. Most paid accounts have the option to opt out of inclusion in training data, or opt out by default. For companies getting a paid account is particularly important as then you can then:

    • Ensure no organisational data is included in public model training.

    • Enforce sharing and access security controls such as Single Sign On (SSO) and capturing your corporate domain (e.g. patching.co) so that your data accessed by the AI tool can still only be accessed from the same places that the source data can already be accessed from.

    • Limit what the tool can be integrated with in terms of company data. For example, don't allow integrations to SharePoint / Google Drive / Box until your file storage is well organised and users only have access to what they need to have access to.

  • Go with ease of use and adoption until your needs are more refined. ChatGPT has the lead in the consumer contest, so get a ChatGPT business account to at least put some controls around what your staff are already doing. Eventually tools better integrated by core platform vendors  such as Microsoft Copilot / Google Gemini might be more secure option better fitting your business need, but until you have the technical capability to block unauthorised tools and properly configure the better integrated tools, having some control is better than none due to implementation delays.

  • Data loss prevention (DLP) tools just got far more important. Whether it is developers using AI tools to assist in coding, or staff using AI tools to review and summarise long documents and contracts, your company data will be flowing freely into AI tools all over the place. The more user friendly your IT controlled AI tools are the more likely staff will use those. The better your DLP tools, the more likely you are to identify unauthorised use of unsanctioned tools before the data leak becomes too serious.

  • Human review, ownership and disclosure are critical. AI tools hallucinate and get things wrong. The more advanced models have discussions with themselves to test and improve on the quality of their answer before delivery to the user, but the inescapable reality is that these are probabilistic not deterministic systems, so they won't always produce the same output and cannot be accountable. Make it clear in your organisation that whilst AI use is expected, it must still be explicitly disclosed, and the human user is accountable for the output or deliverable.

  • Avoid the 'lethal trifecta' for AI agents. When all three characteristics are in place the conditions exist for an attacker to use prompt injection within untrusted data to trick your agent into accessing your private organisational data and sharing it externally with an attacker.

    • Access to private organisational data.

    • Exposure to untrusted content (i.e the internet).

    • The ability to communicate externally.

  • Finally, building secure AI tools is very challenging from both a technical architecture and development governance perspective. Errors during development or training can result in polluted training data or data leaks that get worse over time and require expensive re-training. It is wise to assume that smaller companies with more limited resources are likely to have less secure models and to apply corresponding caution about how you use them.

Supplier certifications won't protect you...yet

It is increasingly common for companies to go through security audits and gain certifications to try and give customers comfort, with badges for ISO27001, Cyber Essentials Plus, SOC2 and more on newly launched trust portals. Unfortunately most of these will not protect against the risks from poorly managed AI development. There is however a new(ish) emerging standard, ISO 42001, which involves continuous monitoring and is specifically designed for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. It is not perfect as it is evolving alongside an AI sector moving at breakneck speed, but it should at least allay some concern about how well a company is managing AI risk in it's product development.

Bottom line

Use of AI tools is unavoidable - its like trying to avoid using the internet or computers. Therefore the least insecure path forwards is to embrace the change and try and establish some technical controls, whilst being pragmatic about the limits of those protections in a dynamic and exceptionally fast moving sector.

If you are navigating some of these challenges at the moment we can help. Our mission is to help you reduce your cyber risk, and so our help can be in whatever form is most helpful to you, from conducting an assessment of your current setup, to advising on system architecture and config, introducing trusted partners, training up staff or helping with op model development or hiring. Please reach out below 👇

How can we help you secure your growth?