April 2023 - Assessing Cloud-Based AI Chatbot Risks: A Guide for Fellow CISOs
As Chief Information Security Officers, we are entrusted with the responsibility of protecting our organizations' data and ensuring the security of our digital infrastructure. In today's interconnected business environment, the adoption of cloud-based AI chatbots has become increasingly popular. I would like to share my insights on the potential risks these chatbots pose to data privacy and security and offer recommendations on how we can collectively mitigate these risks.
AI Chatbot Privacy Policies: Ensuring Compliance
As we evaluate and integrate AI chatbot solutions into our respective organizations, it's crucial to assess their privacy policies and how they align with our companies' internal data protection guidelines. AI chatbot providers often collect, store, and process user data, which can conflict with our own internal privacy and data protection policies, as well as with privacy regulations such as the GDPR and CCPA.
I encourage you to closely review the privacy policies of any chatbot service providers you may be considering and ensure that their data handling practices align with your organization's policies and regulatory requirements. This due diligence will help maintain compliance and protect our companies' reputations.
Vendor Due Diligence: Audits, Certifications, and Data Use
Before committing to a chatbot provider, verify its security posture through independent evidence rather than marketing claims. Request recent audit reports and certifications (for example, SOC 2 Type II or ISO/IEC 27001), and obtain written answers on how customer data is used: whether prompts are retained or used for model training, how tenant data is segregated, and how data is deleted at contract termination.
API Integration Risks and Internal Company Resources
When connecting a chatbot to internal company resources through APIs, we must be cautious of the risks associated with exposing sensitive data. It's important to carefully evaluate each API's security posture, including how it handles authentication, authorization, and data protection, in order to minimize potential exposure.
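One concrete way to limit this exposure is a deny-by-default authorization layer between the chatbot connector and internal resources. The sketch below is illustrative only: the service names and scope strings are hypothetical, not part of any particular product.

```python
# Hypothetical sketch: least-privilege scope checks for a chatbot/API
# integration layer. Service names and scopes are illustrative.

ALLOWED_SCOPES = {
    # The chatbot connector may only read the public knowledge base.
    "chatbot-connector": {"kb:read"},
    # Other services can be granted broader, but still explicit, scopes.
    "analytics-service": {"kb:read", "logs:read"},
}

def authorize(service: str, requested_scope: str) -> bool:
    """Deny by default: a service may use only scopes explicitly granted."""
    return requested_scope in ALLOWED_SCOPES.get(service, set())
```

Under this design, a request from the chatbot connector for anything beyond its granted scope (HR records, write access, an unknown service) is simply refused, so a compromised or over-eager chatbot integration cannot pivot into sensitive systems.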
Employee Guidelines and Policies for AI Chatbot Usage
Given the popularity of tools like ChatGPT, it's vital that we, as CISOs, provide clear guidance to our employees on the acceptable use of AI chatbots. Establishing a company-wide policy on chatbot usage will help ensure that employees understand the risks associated with sharing sensitive or confidential information through chatbot platforms.
Confidential Data and the Risk of Data Breach
As we know, AI chatbots are designed to process and understand vast amounts of information. However, when employees within our organizations inadvertently or intentionally input confidential or sensitive information into a chatbot, that data could be exposed in a breach if the chatbot's infrastructure is compromised.
To minimize this risk, I recommend educating employees about the proper use of AI chatbots and establishing clear guidelines regarding the types of information that should never be shared through these platforms. Additionally, we should only work with chatbot providers that offer robust security measures, including data encryption, secure storage, and ongoing monitoring for potential threats.
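Guidelines like these can be backed by a technical control: a pre-submission filter that scans prompts for patterns policy says must never leave the organization. The sketch below is a minimal, assumed example; the pattern list is illustrative and far from exhaustive, and a production deployment would use a proper DLP product.

```python
import re

# Hypothetical pre-submission filter: scan a prompt for patterns that
# policy forbids sharing with external chatbots. Patterns are illustrative.
BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """A prompt is safe only if no blocked pattern matches."""
    return not scan_prompt(prompt)
```

A filter like this can run in a browser extension, proxy, or gateway, blocking the request and pointing the employee to the acceptable-use policy when a match is found.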
Several prominent companies, such as Amazon, J.P. Morgan Chase, and Microsoft, have recognized the risks associated with AI chatbot usage and taken steps to either restrict or ban their use within their organizations. These measures aim to reduce the likelihood of data breaches, maintain compliance with data privacy regulations, and protect sensitive information.
For instance, Amazon has implemented strict guidelines on the use of AI chatbots to ensure that employees do not inadvertently disclose confidential information or violate privacy policies. J.P. Morgan Chase, on the other hand, has opted to ban the use of certain AI chatbot platforms, demonstrating their commitment to data security and privacy.
Microsoft has also chosen to impose restrictions on AI chatbot usage, focusing on educating employees about the potential risks and providing guidance on the acceptable use of chatbot technology within the organization.
Samsung's Experience with ChatGPT: A Cautionary Tale for CISOs
In a recent incident, Samsung employees reportedly leaked sensitive, confidential company information to OpenAI's ChatGPT on multiple occasions. These leaks exemplify the widespread popularity of AI chatbots among professionals and the risk of sensitive data exposure to chatbot providers.
Following the leaks, Samsung implemented an "emergency measure" that limited the size of ChatGPT prompts for each employee. Interestingly, these leaks occurred just three weeks after Samsung lifted a previous ban on ChatGPT usage due to concerns about this very issue. Now, Samsung is developing its own in-house AI solution.
OpenAI retains data submitted to ChatGPT and may use it to improve its AI models unless users explicitly opt out. This data retention policy heightens the risk of sharing sensitive information with AI chatbots. A recent study by cybersecurity company Cyberhaven found that 3.1% of employees at its customer organizations had submitted confidential data to ChatGPT at some point.
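Figures like the Cyberhaven statistic above come from auditing outbound prompts. If your organization already logs chatbot traffic at a proxy or gateway, a first-pass estimate of exposure can be as simple as the sketch below; the marker pattern is an assumed, illustrative stand-in for a real classifier.

```python
import re

# Hypothetical sketch: estimate what fraction of logged outbound prompts
# carry a confidentiality marker. The pattern is illustrative only.
CONFIDENTIAL = re.compile(r"(?i)\b(confidential|internal only|do not distribute)\b")

def flagged_share(prompt_log: list[str]) -> float:
    """Fraction of logged prompts matching a confidentiality marker."""
    if not prompt_log:
        return 0.0
    flagged = sum(1 for p in prompt_log if CONFIDENTIAL.search(p))
    return flagged / len(prompt_log)
```

Tracking this number over time gives a measurable baseline for whether awareness training and usage policies are actually reducing exposure.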
This cautionary tale from Samsung demonstrates the importance of addressing AI chatbot risks. Several large companies, such as Amazon, Walmart, Verizon, and J.P. Morgan Chase, have taken notice and responded by issuing warnings, restricting usage, or blocking AI chatbots outright to prevent data leaks.
As CISOs, it's essential that we learn from these incidents and establish policies and guidelines for AI chatbot usage within our organizations to safeguard sensitive information and maintain data privacy.
In conclusion, the adoption of cloud-based AI chatbots offers numerous benefits for our organizations, but it's our responsibility as CISOs to remain vigilant in safeguarding data privacy and security. Be careful that your organization does not get caught up in the hype cycle and abandon its founding principles and policies around data security and privacy. By carefully evaluating chatbot privacy policies, promoting employee awareness, and conducting thorough vendor due diligence, we can ensure the secure and compliant use of these technologies within our organizations.