Security Stop Press : LLM Malicious “Prompt Injection” Attack Warning

  • September 13, 2023
  • News

The UK’s National Cyber Security Centre (NCSC) has warned of the susceptibility of existing Large Language Models (LLMs) to malicious “prompt injection” attacks. These are where a user creates inputs intended to cause an AI model to behave in an unintended way e.g., generating offensive content or disclosing confidential information.

This means that businesses integrating LLMs like ChatGPT into their business, products, or services could be leaving themselves open to risks like inaccurate, controversial, or biased data content, data poisoning and concealed prompt injection attacks.

The advice is for businesses to establish cybersecurity principles and make sure that they are able to deal with even the worst case scenario of whatever their LLM-powered app is permitted to do.

About us and this blog

We are a IT solutions and support company. In our BLOG you can find more information about services and solutions we provide and learn how they can benefit you and your business.

We offer professional IT support for small and medium size businesses, as well as support home based businesses.

To check how we can help improve your security and productivity, request your FREE IT health check today!

More from our blog

See all posts