On June 10, the Financial Industry Regulatory Authority (FINRA) released its Artificial Intelligence (AI) in the Securities Industry Report (Report), the culmination of a two-year review by FINRA’s Office of Financial Innovation into the emerging challenges confronting broker-dealers (Firms) and other market participants as they introduce AI-based applications into their businesses. The Report provides an overview of AI technology, explores its diverse applications in the securities industry and identifies the challenges and legal considerations of leveraging this technology. FINRA requests industry feedback on topics covered in the Report by August 31, 2020.
Artificial Intelligence in the Securities Industry
FINRA acknowledges that there is no universally agreed-on definition of AI and refers to it as an “umbrella term” that encompasses a broad range of technologies and applications. The Report aptly summarizes the various technological models and points to specific examples of AI implemented in the securities industry, including the automation and customization of customer communications; the creation of real-time, holistic customer profiles; and the enhancement of compliance and risk management systems.
Key Challenges and Regulatory Considerations
FINRA highlights challenges and regulatory considerations for Firms to consider in the areas of model risk management, data governance, customer privacy and supervisory controls systems. Firms should also remain aware of additional considerations in the areas of cybersecurity, outsourcing and vendor management, books and records requirements, and workforce structure.
FINRA emphasizes, however, that this list is not exhaustive and that every Firm should conduct its own due diligence prior to implementation to determine the utility, legal impact and other potential risks of an AI-based application.
Model Risk Management
Firms should consider incorporating into their policies and procedures a model validation process that accounts for the intricacies of an AI-based application. Such a review would involve searching for potential biases in the input data and errors in the algorithm, verifying risk thresholds and articulating the explainability of the output. It is important that Firms achieve an adequate level of explainability to comply with their supervisory obligations under FINRA Rule 3110.[1]
Accordingly, Firms rely on compliance, audit and risk personnel to understand how AI-based applications function. Internal assessments may examine how a model’s outputs are derived, whether those outputs align with the Firm’s business goals, risk appetite and internal policies and procedures, and whether actions taken pursuant to those outputs comport with the Firm’s legal and compliance requirements.
Data Governance
Data is the lifeblood of any AI-based application. Without robust input data, an application cannot conduct comprehensive analyses, identify patterns, make predictions and readjust its model. FINRA notes that AI applications are best positioned to yield meaningful results when the underlying stream of data is large, valid and current. Firms striving to meet these conditions should consider engaging in the following practices:
- data source verification
- data integration
- data security
- data quality benchmarks and metrics
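The data quality benchmarks mentioned above could, in practice, be operationalized as automated checks run against incoming datasets. The following is a minimal sketch only; the record layout, field names and thresholds are hypothetical, not drawn from the Report:

```python
from datetime import datetime

# Hypothetical customer records; field names are illustrative only.
records = [
    {"account_id": "A1", "balance": 1000.0, "as_of": datetime(2020, 6, 1)},
    {"account_id": "A2", "balance": None,   "as_of": datetime(2020, 6, 1)},
    {"account_id": "A3", "balance": 250.0,  "as_of": datetime(2019, 1, 1)},
]

def completeness(records, field):
    """Fraction of records with a non-null value for `field`."""
    return sum(r[field] is not None for r in records) / len(records)

def stale_fraction(records, as_of_field, cutoff):
    """Fraction of records older than the `cutoff` date."""
    return sum(r[as_of_field] < cutoff for r in records) / len(records)

print(round(completeness(records, "balance"), 2))                         # 0.67
print(round(stale_fraction(records, "as_of", datetime(2020, 1, 1)), 2))   # 0.33
```

Metrics like these could feed the kind of benchmarks the Report contemplates, flagging a dataset for remediation before it is used to train or feed a model.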
Even a large, current dataset may be skewed if it contains built-in bias. One common type is demographic bias, often the product of historical practices, which must be recognized early in modeling to avoid discriminatory outcomes. Firms should enhance their policies and procedures to include a review of the underlying dataset for potential built-in biases. Firms should also apply specific data filters to assess how model outputs are affected, leverage proxies in lieu of demographic data and utilize a diverse group of staff to review the dataset and test the model outputs. A Firm that disregards the potential for inherent biases in its underlying data may risk falling short of its ethical obligations under FINRA Rule 2010.[2]
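One simple way to apply the kind of data filter described above is to compare model outcomes across demographic groups and flag divergences for human review. This sketch is purely illustrative; the group labels, outcomes and tolerance are hypothetical assumptions, not prescribed by FINRA:

```python
from collections import defaultdict

# Hypothetical (group, outcome) pairs, where outcome 1 = approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

def approval_rates(decisions):
    """Approval rate per demographic group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        approved[group] += outcome
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Flag the model for review if rates diverge beyond a chosen tolerance.
flagged = max(rates.values()) - min(rates.values()) > 0.2
print(rates, flagged)  # group_a 0.75 vs group_b 0.25 -> flagged
```

A check of this sort identifies disparities, not their cause; a flagged result would still require the diverse human review of the dataset and model that the Report describes.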
Customer Privacy
Firms will likely collect sensitive customer data such as personally identifiable information (PII) when employing AI-based applications. The collection of PII exposes Firms to potential customer privacy concerns if the information is not properly safeguarded.
Each Firm has a duty to protect a customer’s financial and personal information. In accordance with Securities and Exchange Commission Regulation S-P,[3] Firms are required to maintain written policies and procedures governing the security of customer information and records. The governing document should take into consideration a number of key issues, such as customer consent, user entitlements and access to data, and the redaction of sensitive information.[4] Numerous other legal regimes at the state, federal and international levels also govern customer data privacy, and Firms should consider them before implementing an AI solution.
Supervisory Control Systems
FINRA Rules 3110 and 3120 require Firms to maintain written supervisory procedures (WSPs) and supervisory control systems, obligations that extend to AI-based tools. FINRA reminds Firms to consider the following measures to construct a robust set of WSPs and controls:
- establish a cross-functional technology governance structure
- conduct extensive application testing
- create fallback plans
- verify personnel registrations
Moreover, Firms should conduct an overall assessment of the specific functions and activities that use AI-based applications and update their supervisory procedures accordingly. Areas that Firms should consider reviewing are trading applications, funding and liquidity risk management, and investment advice tools.
AI-based applications provide many benefits to Firms in the securities industry. Yet Firms should evaluate, prior to implementation, whether the application’s internal processes contain potential inefficiencies and whether its deployment would comply with all applicable laws. Firms should review the full Report and carefully consider its discussion of the various business and regulatory implications. Firms are also encouraged to provide feedback by August 31, 2020 on FINRA rules or regulations that may need to be modified or clarified as applied to AI-based applications in the securities industry.
[1] Firms are required to “establish and maintain a system to supervise the activities of its associated persons that is reasonably designed to achieve compliance with the applicable securities laws and regulations and FINRA rules.”
[2] Firms must observe high standards of “commercial honor and just and equitable principles of trade” in the course of conducting business.
[3] Privacy of Consumer Financial Information and Safeguarding of Personal Information.
[4] Similar requirements have been promulgated under National Association of Securities Dealers Notice to Members 05-49 (Safeguarding Confidential Customer Information).
Sidley Austin LLP provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship.
Attorney Advertising - For purposes of compliance with New York State Bar rules, our headquarters are Sidley Austin LLP, 787 Seventh Avenue, New York, NY 10019, 212.839.5300; One South Dearborn, Chicago, IL 60603, 312.853.7000; and 1501 K Street, N.W., Washington, D.C. 20005, 202.736.8000.