AI as a co-worker: Enhancing productivity while ensuring security
Picture this: you’re at the office (or at home) and your new coworker, ChatGPT, never needs coffee breaks, loves handling repetitive tasks, and has a knack for cracking jokes that are… hit-or-miss.
Generative AI (Gen AI) tools like OpenAI’s ChatGPT and Microsoft’s Copilot are rapidly evolving, and while it’s tempting to let these digital assistants take over, there’s a nagging worry that there may be some spilling of sensitive data, causing privacy and security headaches. So, how do we utilise their power safely and securely?
Everybody is still getting to grips with the possibilities of Gen AI, from writing a blog post on AI (yes, I did consider it), to the creation of images – here’s one I made earlier.

First, we need to address what we know so far. These systems collect, store, and process large amounts of data from numerous sources – including every single prompt you enter. Then, the Large Language Model (LLM) gets to work and provides you with the perfect answer… or does it?
The risks:
Bias, inaccuracies and hallucination
We’ve seen LLMs not only become biased in their responses but also lack accuracy within their answers, mainly due to incorrectly understanding the context of the prompt and lacking human understanding. Due to this, the tool can start to hallucinate, generating responses that are not based on factual data but rather on the patterns it has learned from its training data.
Data leakage
Due to the fast-paced adoption of GenAI tools, employees lack the security-minded approach needed when using these tools. Employees will be entering company sensitive data into their prompts, leaving the organisation open to a third-party attack. Do you trust tools such as OpenAI’s ChatGPT and Microsoft’s Copilot to be securely storing your sensitive data?
Prompt injection attacks
This is where an attacker crafts a special prompt, designed in such a way that makes the Gen AI tool behave in a malicious or unintended way. Leveraged correctly, it can populate new and intricate types of malware that have the potential to avoid conventional detection methods, as well as being used to retrieve confidential information from the model itself. Additionally, we are seeing a rise in malicious AI-tools being sold on the dark web, such as WormGPT, which takes its name from OpenAI’s popular chatbot.
Phishing attacks
Whilst there are concerns about inaccuracies, the tool shines in creating malicious content that mimics real content. Attackers can leverage this to trick users into revealing sensitive information; it’s making it harder to spot phishing emails. Attackers are able to populate extremely realistic phishing campaigns in a matter of minutes. Due to this, organisations can expect to see an increase in the number of phishing attempts. Spoofing doesn’t stop there… Deep fakes – where a person in an existing image or video is replaced with someone else’s likeness using AI. This can be used to spread misinformation and create fraudulent identities, posing a significant challenge for security and trust in digital content.
What can we do?
In an attempt to mitigate the above risks, from a holistic viewpoint, we must consider three key areas: Security awareness, Governance, and Technology.
Security awareness
Educating employees is key. Handling sensitive data isn’t new, but with the introduction of AI tools, the business needs to reinforce the handling of sensitive data in the context of AI tool usage. Businesses need to make it clear what information employees can and can’t share with AI tools. Phishing training… yes, I said it. Employees are your first and last line of defence. With AI tools creating malicious content that mimics real content better than ever, it’s important that all employees are more security-minded when looking at their emails.
Governance
The past few years have seen the population of standards and tools designed to support organisations in managing and mitigating risks associated with AI systems, such as ISO/IEC 42001:2023 and NIST AI 600-1. At the time of writing, the NCSC is running a consultation for their AI Management Essentials (AIME) tool. “AIME is a resource that is designed to provide clarity to organisations around practical steps for establishing a baseline of good practice for managing artificial intelligence (AI) systems that they develop and/or use.” Use these standards and tools to guide your governance principles; it starts at the top. It will allow you to understand the people, processes, and technologies involved and the best ways to mitigate any risks.
Technology
I’m sure we will see a number of products come to market to help aid in the fight against AI threats. One thing you can do now is limit the scope at the source; only allow access to Gen AI tools you deem as “secure.” But how do we know if they are secure? Here are a few questions to consider:
- Is the vendor prioritising security? (e.g. bug bounty programs)
- Do they offer end-to-end encryption?
- Do they offer secure data storage?
- What compliance certifications do they have?
- What level of access controls do they offer?
This is starting to look like the remnants of a security vendor questionnaire, but the key point is, you should perform a thorough security evaluation of every vendor. Don’t just do it once—make ongoing assessments a routine part of your business operations, as new risks can emerge over time.
Harnessing generative AI can be extremely beneficial for your organisation’s performance and productivity, but it doesn’t come without its challenges. It’s important that, as a business, you understand the risks these new technologies can introduce. Supplied with this information, your security team can take smart steps to reduce these risks, thus minimising the potential business impact.