AI Compliance Tips for Investment Advisers
As artificial intelligence (AI) systems such as ChatGPT and other generative tools gain traction across the financial services industry, investment advisers are increasingly exploring ways to leverage AI in their operations. While these technologies may offer significant benefits for research, investment analysis, and administrative efficiency, they also introduce complex legal, regulatory, and fiduciary challenges. For SEC-registered investment advisers (RIAs), implementing these technologies raises specific obligations under the Investment Advisers Act of 1940 (the “Advisers Act”) and the rules thereunder.[1] Below, we outline key considerations for investment advisers that seek to use AI tools, with a focus on compliance obligations, governance practices, and practical steps to help mitigate legal and regulatory risk.
Generative AI models may produce information that appears plausible but is ultimately false, misleading, or misinterpreted (commonly referred to as “hallucinations”). As fiduciaries, investment advisers owe clients a duty of care to provide investment advice in their best interest, which requires that such advice be based on factually sound and accurate information. Thus, investment advisers should implement processes to validate information generated by AI, especially when such information informs the development of investment advice for clients. For example, if an investment adviser uses an AI tool to summarize information or data that will inform investment decisions, it should consider how to incorporate human reviewers into that process and conduct periodic testing to confirm that the output from the tool is accurate.
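One way to operationalize this is to route every AI-generated summary into a review queue and periodically spot-check approved outputs. The following is a minimal Python sketch of such a workflow; the AISummary and ReviewQueue types are illustrative assumptions, not part of any vendor’s API or a prescribed compliance standard.

```python
import random
from dataclasses import dataclass


@dataclass
class AISummary:
    source_id: str        # identifier of the underlying document or dataset
    text: str             # output produced by the AI tool
    reviewer: str | None = None
    approved: bool = False


class ReviewQueue:
    """Holds AI-generated summaries until a human reviewer signs off."""

    def __init__(self) -> None:
        self._pending: list[AISummary] = []
        self._approved: list[AISummary] = []

    def submit(self, summary: AISummary) -> None:
        # No AI output informs investment advice until a reviewer approves it.
        self._pending.append(summary)

    def approve(self, summary: AISummary, reviewer: str) -> None:
        # Record who signed off, then move the summary out of the pending queue.
        summary.reviewer = reviewer
        summary.approved = True
        self._pending.remove(summary)
        self._approved.append(summary)

    def sample_for_testing(self, rate: float = 0.1) -> list[AISummary]:
        # Periodic testing: spot-check a random sample of approved outputs
        # against the original sources to confirm the tool remains accurate.
        if not self._approved:
            return []
        k = max(1, int(len(self._approved) * rate))
        return random.sample(self._approved, k)
```

In practice, a qualified reviewer would compare the spot-check sample against the original source materials, and failure rates could be tracked over time as part of the firm’s periodic testing.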
Investment advisers must consider how any arrangements with an AI vendor will treat confidential information. Information exposed to an AI tool, for example, can be used to train the AI tool’s model and potentially be discoverable by other users, both internally and externally. An investment adviser, as a fiduciary, has a duty to safeguard clients’ confidential information. Typically, investment advisory agreements with clients and other service provider agreements include confidentiality provisions. RIAs are also subject to Regulation S-P, which imposes disclosure and compliance obligations regarding the safety and privacy of client information.
Investment advisers should confirm that agreements with an AI vendor include confidentiality provisions that sufficiently protect the information uploaded to the AI tool (including the investment adviser’s own confidential or proprietary information, as well as its clients’ confidential information) from model training or unrelated processing. Investment advisers should also conduct initial and ongoing due diligence into the vendor’s practices, including to assess the vendor’s data security practices.
AI use can expose sensitive information across internal teams or external vendors if not properly managed. Both RIAs and ERAs should apply strict access controls to restrict data visibility by role or department and implement data segregation mechanisms to prevent unauthorized exposure. Advisers should also periodically audit AI logs, prompts, and usage patterns for compliance.
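As a rough illustration of what such controls might look like in practice, the sketch below gates prompts by role before they reach an AI tool and appends each attempt to an audit log. The ROLE_PERMISSIONS mapping and submit_prompt helper are hypothetical; a firm’s actual entitlements would come from its own identity and access-management systems.

```python
import datetime
import json

# Illustrative role-to-data-category permissions; an adviser's actual
# entitlement matrix would come from its own access-control system.
ROLE_PERMISSIONS = {
    "research": {"market_data", "public_filings"},
    "operations": {"market_data"},
    "compliance": {"market_data", "public_filings", "client_records"},
}

AUDIT_LOG = "ai_prompt_audit.jsonl"


def submit_prompt(user: str, role: str, data_category: str, prompt: str) -> bool:
    """Gate a prompt by role before it reaches the AI tool, and log the attempt."""
    allowed = data_category in ROLE_PERMISSIONS.get(role, set())
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "data_category": data_category,
        "allowed": allowed,
        "prompt": prompt,  # retained so compliance can periodically audit usage
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return allowed  # caller forwards the prompt to the AI tool only if True
```

Writing the full prompt to an append-only log gives compliance a reviewable record of what was sent to the tool, by whom, and whether the request was permitted.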
AI platforms often function conversationally, creating iterative records that give rise to risks beyond the confidentiality concerns noted above. For example, information entered into such tools may be exposed to third parties, potentially waiving attorney-client privilege.
Investment advisers should also consider vendor data retention risks, including the extent to which data is processed and stored on vendor cloud servers and the limitations of so-called “zero-retention” settings. For example, even a vendor that claims it does not retain data could become subject to court orders or other compelled disclosures requiring it to preserve and produce outputs or processed documents that contain sensitive information, including chats a user has deleted or that the vendor would otherwise delete under its privacy obligations. Investment advisers should weigh these potential risks in light of the type of information they intend to expose to the AI tools.
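One practical way to weigh those risks is to classify documents by sensitivity and block anything above a chosen threshold from ever reaching an external vendor, regardless of the vendor’s stated retention settings. The sketch below assumes a simple four-level classification scheme of our own devising; the labels and threshold are illustrative, not a regulatory standard.

```python
from enum import IntEnum


class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2   # e.g., client records covered by Regulation S-P
    PRIVILEGED = 3     # e.g., attorney-client communications

# Illustrative policy: nothing above INTERNAL leaves the firm for an
# external AI vendor, even one advertising "zero-retention" settings.
MAX_EXTERNAL_SENSITIVITY = Sensitivity.INTERNAL


def may_send_to_vendor(label: Sensitivity) -> bool:
    """Return True only if the document's classification permits external AI processing."""
    return label <= MAX_EXTERNAL_SENSITIVITY


# Example: a privileged memo is blocked even if the vendor claims zero retention.
assert not may_send_to_vendor(Sensitivity.PRIVILEGED)
assert may_send_to_vendor(Sensitivity.PUBLIC)
```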
For RIAs, the use of AI also intersects directly with other obligations under the Advisers Act. For example, statements to clients and investors about a firm’s use of AI must be accurate and not misleading; the SEC has already charged investment advisers with making false and misleading statements about their use of artificial intelligence.[2]
Given the evolving regulatory and legal risks, investment advisers that adopt AI tools should consider establishing a formal governance process (e.g., an AI committee) to oversee AI tools and risk mitigation, including functions such as approving new AI tools and use cases, conducting initial and ongoing vendor due diligence, maintaining AI-related policies and procedures and associated training, and monitoring and testing AI outputs, access, and usage.
AI technologies present opportunities for investment advisers seeking to enhance research efficiency, data management, and internal operations. However, these benefits must be balanced against the adviser’s regulatory and compliance obligations and the need to manage legal and other risks. Before using any AI tool, investment advisers should adopt a strong AI governance framework, policies and procedures, and other safeguards to manage evolving legal, operational, and regulatory risks.
Please contact us if you have any questions concerning this alert or navigating AI compliance generally.
[1] While they are not subject to many of the SEC’s rules promulgated under the Advisers Act, exempt reporting advisers (ERAs) are “investment advisers” under the Advisers Act and thus subject to the Advisers Act’s antifraud provisions. Many of the considerations discussed in this article are also relevant for ERAs.
[2] See “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence,” SEC Press Release No. 2024-36.



