
The Cybersecurity of ChatGPT

And, so, ChatGPT has become a target for cybersecurity attacks. Recently, it was identified that a bug in an open-source library that it uses caused a data breach, and the service was “put to sleep” for a while as a result.

Actually, although OpenAI says that the level of the data breach was low, a figure of 1.3% could put the number of users breached at nearly one million. The affected library is the Redis open-source library, and the bug opened up the chat history of active users. In the code, Redis is used to cache, and to clear, each user's messages and replies.
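
As a rough illustration of this kind of per-user cache, here is a minimal sketch using the redis-py client. The key scheme and function names are my own assumptions for illustration, and not OpenAI's actual code; the point is that one wrong key, or one corrupted connection, serves one user's history to another:

import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def cache_reply(user_id: str, message: dict) -> None:
    # Each user's chat history lives under its own key.
    r.rpush(f"chat:{user_id}:history", json.dumps(message))

def clear_history(user_id: str) -> None:
    # Clear the per-user cache of messages and replies.
    r.delete(f"chat:{user_id}:history")

def get_history(user_id: str) -> list:
    # If a bug returns data for the wrong key (or from a corrupted
    # connection), another user's history leaks out here.
    return [json.loads(m) for m in r.lrange(f"chat:{user_id}:history", 0, -1)]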

In fact, software supply chain attacks are one of the fastest-growing routes to compromise. But the data breach was only a small part of the possible vulnerabilities, as the OpenAI team found that the same bug may have revealed payment information in the hours before they took the service offline. These details did not expose full credit card details, only the last four digits of the credit card number and the card's expiration date. But, under GDPR, this could still be seen as a significant data breach.
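
One common defence against this class of supply chain attack is to pin each dependency to an exact version and hash, so that a tampered package is rejected at install time. A minimal sketch using pip's hash-checking mode (the version and hash below are placeholders, not a recommendation):

# requirements.txt: pin the exact version and hash of the audited package
redis==4.5.4 --hash=sha256:<hash-of-the-audited-package>

# Install with hash checking enforced; any mismatch aborts the install:
pip install --require-hashes -r requirements.txt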

Leaking all over the place

But the data breach opens up a whole lot of questions about the actual security of ChatGPT, and any leakage could lead to a significant number of fines. This is because massive amounts of data need to be stored in order to train and retrain the model. Just like us, it holds things in its memory, and it could leak any of the secrets it has to anyone it meets.

Thus, ChatGPT’s memory and storage are likely to be a major attack surface, especially in leaking information between user chats and in failing to implement strict ethical and data compliance approaches. The toughest of all is possibly GDPR, which has a foundational principle around the right to be forgotten. This is not an easy task for ChatGPT, as it must actually track down and act on requests that relate to a specific person or to specific intellectual property. Recently, Italy blocked access to ChatGPT as it did not comply with GDPR.

And, so, a major clampdown is already happening in some companies, in order that staff do not leak sensitive information to ChatGPT. JPMorgan Chase is one company that is limiting access to it, as part of its controls on the use of third-party software. The way ChatGPT handles financial information could also be a major problem area for the bot.

The adversarial side of ChatGPT

And, the adversarial side of ChatGPT could see it used to generate spear phishing emails, as it overcomes the poor spelling and grammar of the past. In fact, ChatGPT could be used to target specific people, with information that is focused on them:

Dear Bob,

Your daughter - Alice - has fallen at Cyber School and has broken her leg.
We have taken her to Eve's Hospital on Main Street. You will find directions
to the hospital in the email attached. Please do not worry about her condition
as she does not seem to be in any pain, but we ask you to follow the steps
defined in the attachment.

Yours sincerely,

Eve (Principal of Cyber School)

Of course, the attached PDF could have a backdoor trojan added, and, in Bob’s panic, he will quickly open it and install a backdoor onto his computer (if his system is unpatched, of course). All of the details in the email could thus be targeted specifically at Bob: someone who has a daughter named Alice, who attends Cyber School (and which has a principal named Eve). In the past, this type of email would be generic, and full of spelling errors:

Dear Reciptiant

Your daugher has fallen at schoool and has leg broken. She is now in hosptal.
Open attachment and read the word to find the place and doctor appoitment. Help
is needed for you.

Be kind and respect

Doctor Kim
Medical Lead

ChatGPT could easily customise the email for whichever language is Bob’s native one, and each spear phishing email could be constructed differently (which could get it around email filters).
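
To see why differently constructed emails can slip past a filter, consider a simple signature-based filter that blocks messages matching the hashes of known phishing emails. This is a minimal sketch of the idea, not a model of any specific filtering product:

import hashlib

known_phish = "Dear Bob, Your daughter Alice has fallen at Cyber School..."
reworded = "Dear Bob, we are sorry to report that Alice fell at Cyber School..."

# An exact-match signature filter sees these near-identical messages
# as completely unrelated, so every rewording starts with a clean slate.
print(hashlib.sha256(known_phish.encode()).hexdigest())
print(hashlib.sha256(reworded.encode()).hexdigest())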

A worry, too, could come from malicious and not-so-malicious reviews written about products, where ChatGPT could be used to continually write fake reviews that look almost like the style an online reviewer would use.

Bug bounties for some of ChatGPT

OpenAI, though, is fighting back with its own bug bounty of up to $20K for finding bugs. Unfortunately, it does not cover the writing of malicious code by the model, but only the code and services that OpenAI itself uses.

Conclusions

I have a presentation this week, and I will not hold back on the risks of AI.