A MoFo Privacy Minute: Do You Know What AI Tools Are Installed on Your Company’s Systems?
This is A MoFo Privacy Minute, where we answer the questions our clients are asking us in sixty seconds or less.
Question: The use of AI tools without a company’s formal approval (referred to as “Shadow AI”) is increasing. What are the risks, and how should companies respond?
Answer: Employees across industries are quietly integrating AI tools into their daily workflows—without formal approval or oversight. This phenomenon, known as “Shadow AI,” is rapidly becoming the norm rather than the exception. Twenty percent of organizations surveyed in a recent IBM report suffered a security incident involving Shadow AI, and such incidents were reportedly more likely than other types of security incidents to expose personal information and intellectual property.
While AI systems, such as meeting-transcription tools, can deliver significant productivity benefits to employees, using them without appropriate legal and cybersecurity oversight and protections creates significant risk.
Where an employee provides input containing confidential information or personal information to a third-party AI system not procured by the company (a “Shadow AI system”), the company will not be able to control how that information is subsequently used by the Shadow AI system provider, how it is protected, or what rights of deletion, if any, exist. Further, once the Shadow AI system is discovered, the provider may take the position that its agreement for the provision of the Shadow AI system is with the employee as an individual, as opposed to with the company. This may result in the company (absent cooperation from the employee) having limited recourse to request that the employee’s account with the provider (and any data related to or derived from that employee’s account) be deleted. In addition, by the time the use of the Shadow AI system has been discovered, any information held by the provider may have already been shared with third parties or used for the provider’s product development, depending on that provider’s privacy policy and terms of service.
Further, Shadow AI systems will not have been subject to the company’s standard vendor due diligence, and as such may not have adequate security controls to protect the company’s information or provide output that is sufficiently accurate for the given purpose. Even where a Shadow AI system does have such controls, the company’s failure to assess the vendor, document the engagement, and exercise appropriate service provider diligence creates risk.
In addition, where Shadow AI tools are used, it is important to bear in mind that the resulting AI-generated material may be disclosable in response to a data subject access request or as part of a disclosure exercise in litigation. The fact that the company did not approve the use of the tool does not mean the records it generated do not exist.
Training, technical preventive controls, and controls to detect new or existing Shadow AI use can all help.
From a technical perspective, companies can use blocklists to prevent employees from reaching unapproved AI services. Data Loss Prevention (DLP) tools can also be used to detect, block, and raise awareness of potential leakage of information to Shadow AI systems. Employers should additionally review existing policies and consider providing training, both across the employee population and with relevant technical teams, so that personnel are sufficiently aware of the risks of AI and Shadow AI.
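To illustrate the detection side only, the minimal sketch below scans outbound proxy-log entries for destinations that appear in a catalog of known AI services but not on a company’s approved list. The log format, domain names, and both lists are illustrative assumptions, not a real blocklist; in practice, companies would rely on a maintained secure web gateway or DLP product rather than a standalone script.

```python
# Minimal sketch: flag outbound requests to unapproved AI services in a proxy log.
# All domains and the log format below are hypothetical placeholders.

APPROVED_AI_DOMAINS = {"copilot.example-approved.com"}  # tools the company has vetted

KNOWN_AI_DOMAINS = {  # assumed catalog of generative-AI service domains
    "chat.example-ai.com",
    "transcribe.example-notes.ai",
    "copilot.example-approved.com",
}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) pairs where an employee reached a known AI
    service that is not on the approved list."""
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <destination-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample_log = [
        "2025-01-15T09:12:03Z alice chat.example-ai.com",
        "2025-01-15T09:13:44Z bob copilot.example-approved.com",
    ]
    for user, domain in flag_shadow_ai(sample_log):
        print(f"Potential Shadow AI use: {user} -> {domain}")
```

A script like this surfaces candidate incidents for review; it does not replace the policy, training, and vendor-diligence steps described above.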
Given the productivity benefits provided by AI tools, employees are likely to continue finding ways to incorporate them into their workflows. Companies should consider giving employees clear information on which AI tools are authorized and encouraging their use. If companies provide sanctioned ways for employees to use AI tools for the purposes they find helpful, employees are less likely to work around company safeguards by turning to Shadow AI systems.

