I Never Said That – the AI Chatbot Did | The Good and the Bad of Large Language Models

ChatGPT and Security


While ChatGPT remains the fastest-growing consumer app, the FTC (Federal Trade Commission) is investigating OpenAI to determine whether the company’s data security practices violate consumer protection laws. At the same time, Europol has recently released a report on ChatGPT and the impact of Large Language Models on law enforcement.

 

The LLM developed by OpenAI is a fantastic new tool that presents great opportunities to security professionals, but it also raises some concerns. Just like any tool, it can be flawed and biased, and if used inappropriately, it can compromise the safety of sensitive data or help generate false content at scale.

 

According to Danielle Benecke, the global head of the machine learning practice at Baker McKenzie, many companies are currently torn between the fear of missing out on the opportunity and the fear of damaging their reputation or compromising safety. Even though AI offers a great chance to automate some processes, it also poses a myriad of threats.
 

DATA SAFETY 

Data safety sits at the very top of the list of concerns when it comes to LLMs. These language models are trained on vast amounts of data, including user-generated content. This means that sensitive or private information shared with the chatbot might be stored within the model’s parameters, potentially leading to data breaches or misuse of personal data. Moreover, as LLMs are often hosted by third-party companies, concerns arise regarding who owns the data generated when users interact with the model and how it may be used by the hosting company.
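As an illustration only, the Python sketch below shows one common precaution: stripping obvious identifiers such as email addresses and phone numbers from a prompt before it leaves your environment for a third-party-hosted model. The patterns and function names are hypothetical, and real redaction would need far broader coverage.

import re

# Hypothetical, minimal patterns for illustration; real redaction needs
# much broader coverage (names, customer IDs, addresses, secrets).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_prompt(prompt: str) -> str:
    """Replace obvious personal identifiers before the text is sent
    to an externally hosted LLM."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

user_input = "Summarise the complaint from jane.doe@example.com, tel. +45 12 34 56 78."
safe_input = redact_prompt(user_input)
print(safe_input)
# Only what remains in safe_input reaches the hosting company, which is
# exactly the data-ownership concern raised above.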
 

UNINTENDED OUTPUTS 

Can an AI be biased? You bet! LLMs require extensive datasets for training – but if that data is not representative enough, the model’s performance may be biased. The data they are exposed to may also contain limited or false information. As a result, the model could inadvertently generate biased or misleading outputs, reinforcing stereotypes and spreading misinformation. LLM users should be wary of what the model produces, because its outputs can sometimes be offensive, harmful, or inappropriate.
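One modest safeguard, sketched below purely as an illustration, is to route model output through a screening step and hold anything suspicious for human review before it is published or acted upon. The flagged phrases and the function name are invented for the example; a real deployment would pair this with a proper moderation service and reviewer workflow.

# Illustrative only: a naive screening gate that flags LLM output
# for human review instead of publishing it automatically.
FLAGGED_TERMS = {"guaranteed cure", "insider information", "wire the funds"}

def needs_human_review(model_output: str) -> bool:
    """Return True if the text contains phrases that should never go
    out without a person checking them first."""
    lowered = model_output.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

draft = "Our product is a guaranteed cure for every security incident."
if needs_human_review(draft):
    print("Hold for human review before publishing.")
else:
    print("Looks clean, but spot-check regularly anyway.")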

 

MALICIOUS MISUSE 

ChatGPT is quite resilient to attempts at misuse. It refuses to generate content concerning corruption or criminal activity unless masterfully manipulated. That being said, with a special prompt it can be jailbroken into a DAN (Do Anything Now) mode and exploited. Moreover, it is not the only LLM on the internet. LLMs can be used to create convincing fake content, such as fake news articles, emails, or social media posts, making it difficult to distinguish between genuine and manipulated information. It is to be expected that, over time, a ChatGPT-like model will emerge on the dark web that will happily generate plausible conspiracies, shitposts and fake news with very real consequences.

 

It might be a good idea to regulate how your company uses the available technology – the fear of missing out cannot be allowed to compromise safety. At the same time, if used appropriately, this new tool can offer substantial help, cutting the time spent on certain tasks and assisting in running and maintaining a security risk program – but this is a fine line that each organization must navigate independently.

Read more on how AI can affect the safety of your organization here. 


Read more?

We can help you today

If you want to see what the Human Risks platform can do for your company, contact us today.
