ChatGPT and other AI tools are designed to deliver value in exchange for data. The problem? Much of that data is sensitive.

Bad Privacy Blog by Claudiu Popa
2 min read · Jul 14, 2023

Why would anyone want to gain access to your company’s ChatGPT accounts?

Because they know that in the absence of regulation and policy enforcement, users are likely to enter sensitive #information, intellectual property, #personal data and strategically important business details.
#ChatGPT saves this data in chat logs by default, giving thieves exceptional visibility into company operations and creating a vast opportunity for targeted attacks, including profitable secondary markets for #cyberespionage, #extortion and other flavours of organized crime.

Referenced article: Over 100,000 ChatGPT accounts stolen via info-stealing malware (bleepingcomputer.com)

In addition to augmenting your online/cloud communications policy to include acceptable use of #AI, ensure that your users:
1. secure their accounts with #2FA or other multifactor authentication
2. reduce #cybercrime #risk by disabling or clearing their chat logs often
3. never enter sensitive information or #privacy details into any #cloud app, let alone any that purport to use #machinelearning or #LLM.
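The third point can also be partially automated. As a minimal, illustrative sketch (the patterns and placeholder labels below are assumptions, not a complete PII taxonomy), organizations can redact obviously sensitive strings before any text reaches a cloud AI tool:

```python
import re

# Illustrative patterns only: real deployments would use a vetted
# data-loss-prevention library, not two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a bracketed placeholder before submission."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))
# prints: Contact [EMAIL], card [CARD].
```

A filter like this is a safety net, not a substitute for policy: it catches the obvious leaks while training and acceptable-use rules address the rest.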

Claudiu Popa is a book collector, author and the co-founder of the Knowledgeflow Foundation, a nonprofit organization that empowers communities to weaponize digital literacy and critical thinking against disinformation. He is also the CEO of Datarisk Canada, one of the first information security companies focused on the protection of intangible assets.

