
AI as a blueprint for fintech startups

August 27, 2020

While most startup founders would prefer not to pore over laws, regulations and interpretive materials to design a perfect product, it’s an essential exercise for those developing financial services solutions. For fintechs and other finserv-related startups (e.g., regtech, suptech), understanding the regulatory obligations of customers and prospects will be core to your mission. In some cases, the process of interpretation and analysis might be a heavy lift involving expert outside counsel, lobbying efforts and specialized consulting services.

A complicating factor for any fintech looking to solidify its understanding of regulatory paradigms is the gray area where regulators have issued cursory guidance, or no guidance at all. One grayish realm where financial services regulators have shown interest but are largely treading lightly is the use of artificial intelligence (“AI”). A few regulators, however, are now applying institutional and intellectual rigor to the subject, given its use in almost every aspect of banking and finance.

In June, the Financial Industry Regulatory Authority’s (“FINRA”) Office of Financial Innovation issued a report called “Artificial Intelligence in the Securities Industry.” FINRA is the self-regulatory organization responsible for oversight of broker-dealers in the United States — in simpler terms, it regulates the big and small brokerage firms that offer financial advice. FINRA has consistently been one of the more technologically engaged regulators: in 2018 it solicited industry comments on AI, which led to the report, and it has produced reports on the use of regulatory technology and selected cybersecurity practices. (The U.K.’s FCA as well as the CFTC in the U.S. have also been key boosters of innovation in financial services, including the use of AI.)

FINRA’s AI Report is particularly interesting for fintechs since it explores how firms (read: fintech clients) are deploying AI as well as the agency’s expectations for AI oversight. Fintechs can use the report as a blueprint for identifying areas of potential AI product growth, and as a guidepost for the regulatory and operational concerns that firms, and by extension fintechs themselves, must manage when implementing AI.

The report consists of three parts: an overview of AI, applications of AI in the securities industry, and key challenges and regulatory considerations. This article hits the high points of the report and discusses the lessons fintechs can draw from it. Its findings will benefit nebulous, idea-on-a-whiteboard-stage startups and established incumbents alike.

In an effort to demystify AI, the first section of the report outlines the technologies that have been collectively referred to as AI, namely: machine learning (“ML”), natural language processing (“NLP”), computer vision and robotic process automation.

Although old hat to some, parsing out these definitional nuances is helpful for budding fintechs, as it can help them develop a language for describing how they use AI. Conversely, the process of defining AI highlights the types of technologies that do not fall under its umbrella. So, if your application performs simple keyword searches, or matches images or documents without leveraging these underlying technologies, be careful about referring to your application as AI-enabled.

FINRA also outlines the “components” of AI applications in this section, listing data, algorithms and human interaction as key for a functioning AI. While these concepts may seem obvious, fintechs should pay particular attention to the data and human interaction elements. Given the focus on bias in AI, fintechs must vet data sources to determine whether they include or exclude certain racial, gender, income or other categories of sensitive information in ways that would skew conclusions. Additionally, fintechs must focus on the human elements of their AI processes: explaining how models are tested, creating nontechnical explanations of them and describing how models are monitored post-deployment. The emerging discipline of machine learning operations, or “ML Ops,” which relies on human interaction as a continuous feedback loop to validate and retest the technical components of AI, is crucial here. Black-box algorithms that only engineers understand, and whose operations can’t be explained by sales staff or others at the company, will not be successful in the marketplace. FINRA was wise to include human interaction as an integral component of AI development.
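
To make that data-vetting step concrete, here is a minimal sketch of the kind of representation check a fintech might run before training a model. It assumes a tabular dataset with a sensitive-attribute column; the representation_report helper, the field names and the 10% floor are illustrative assumptions, not anything prescribed by FINRA.

```python
from collections import Counter

def representation_report(records, attribute, floor=0.10):
    """Summarize how a sensitive attribute is distributed in a dataset
    and flag groups that fall below a minimum representation floor."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {"share": round(share, 3), "underrepresented": share < floor}
    return report

# Toy usage: a skewed sample where two groups fall under the 10% floor.
sample = [{"gender": "F"}] * 8 + [{"gender": "M"}] * 90 + [{"gender": "X"}] * 2
print(representation_report(sample, "gender"))
```

A real program would pair checks like this with documentation of how any flagged gaps were remediated, which is exactly the human, explainable layer FINRA is pointing at.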

A semi-related last word on the practical implications of FINRA’s AI definitions for fintechs: patents. Although not addressed in the report, if your fintech is painstakingly creating unique models or processes to apply AI to financial services problems, consider how to protect those ideas with patents and seek the help of outside counsel. Part of any startup’s value is its intellectual property, so considering patents at an early stage is important. Patent development could have its own 10-part Extra Crunch exposé, but we’re trying to multitask here.

The second section of the report outlines three areas where firms are deploying AI: communications with customers, investment processes and operational functions. These areas are quite broad, but provide fintechs with a sense of where their customers are actively using AI today.

With respect to communications with customers, FINRA first points to virtual assistants like Alexa and Siri, which interpret voice commands and provide information about market data, account inquiries and other data. A quick scan, for example, of Alexa skills related to banking shows offerings from U.S. Bank, Capital One, J.P. Morgan, PayPal, Ally and many retail banks for everything from account queries to research updates.

AI is being employed to analyze customer queries from email and webforms. FINRA found that firms are “using AI-based applications to screen and classify incoming client emails based on key features, such as the sender’s identity, the email’s subject line and an automated review of the email message itself.” Finally, AI is being used to proactively contact customers to provide research, recommendations and market information based on AI-powered analysis of historical trends.
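
As a rough illustration of the email-screening pattern FINRA describes, the sketch below trains a toy text classifier to route inbound messages to queues. The queue labels, sample messages and choice of scikit-learn are assumptions for demonstration, not details from the report.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny labeled history of past client messages (hypothetical labels/queues).
history = [
    ("Unable to log in to my account", "account_support"),
    ("Please update my mailing address", "account_support"),
    ("What is your outlook on tech stocks?", "research_request"),
    ("Send me the latest market commentary", "research_request"),
    ("I want to file a complaint about a trade", "complaint"),
    ("My order was executed at the wrong price", "complaint"),
]
texts, labels = zip(*history)

# Vectorize the subject/body text and fit a simple Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(texts, labels)

# Route a new inbound message to the appropriate queue.
print(model.predict(["Unable to log in, please help"])[0])  # expected: account_support
```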

Investment processes, the second zone of AI application, includes the management of brokerage accounts and creation of holistic customer profiles as a means for making investment recommendations. Based on FINRA’s research, AI-driven profiles are still in the embryonic stage as “industry participants noted taking a cautious approach to employing AI tools that may offer investment advice and recommendations directly to retail customers, citing several legal, regulatory and reputational concerns.”

AI is being used to power trading strategies as part of portfolio management activities — it can “identify new patterns and predict potential price movements of specific products or asset classes.” However, FINRA observed some uncertainty regarding reliance on AI in this area, “particularly where the trading and execution applications are designed to act autonomously. Circumstances not captured in model training — such as unusual market volatility, natural disasters, pandemics or geopolitical changes — may create a situation where the AI model no longer produces reliable predictions, and this could trigger undesired trading behavior resulting in negative consequences.”

Despite the advancement of AI to fuel trading and portfolio management, the notion of autonomous AI clearly makes the regulator and firms uneasy. Furthermore, there appears to be some hesitation about AI’s ability to absorb massively disruptive events like the COVID-19 crisis. These indications of uncertainty should signal to fintechs where they might face pushback from potential customers in the use of AI. A fintech developing an AI-assisted portfolio management tool might consider how safeguards or other risk thresholds could be applied to alleviate concerns about fully “autonomous” AI. Fintechs should also ensure that models account for disruptive and unexpected events, both to support availability for business continuity purposes and to adapt to, or alert on, extraordinary activity during extreme events.
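
One way to picture such safeguards is to wrap the model’s raw signal in hard limits plus a circuit breaker that escalates to a human when conditions drift outside anything the model was trained on. The sketch below is purely illustrative; the size_order function, thresholds and signal convention are assumed, not drawn from FINRA’s report.

```python
from dataclasses import dataclass

@dataclass
class Guardrails:
    max_position: float = 10_000.0     # hard cap on order size (USD)
    vol_circuit_breaker: float = 0.05  # halt if daily volatility exceeds 5%

def size_order(model_signal: float, volatility: float, rails: Guardrails) -> float:
    """Translate a raw model signal in [-1, 1] into an order size,
    applying hard safeguards before anything reaches the market."""
    # Circuit breaker: conditions outside the training distribution
    # (e.g., extreme volatility) escalate to a human instead of trading.
    if volatility > rails.vol_circuit_breaker:
        raise RuntimeError("volatility circuit breaker tripped; escalate to the desk")
    # Clamp the signal and cap the resulting order size.
    signal = max(-1.0, min(1.0, model_signal))
    return signal * rails.max_position

print(size_order(0.4, volatility=0.02, rails=Guardrails()))  # 4000.0
```

The design point is that the model proposes and the guardrails dispose: no single model output can push the firm past its stated risk appetite.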

The last category, “operational functions,” is quite broad and includes everything from compliance and risk management to administrative functions. In the compliance domain, AI is being leveraged to supervise communications, including the dynamic video, audio and chat content from collaboration platforms, which firms are rapidly adopting as they shift to remote work as part of their pandemic response. On the risk management side, AI is being used for liquidity and cash management, credit risk and cybersecurity, as well as to identify and advise on regulatory rule changes. On the administrative side, AI is used to make labor-intensive processes more efficient by improving paper-based processing, document review and information extraction.

The third section of the report may be the most impactful for budding fintechs — key challenges and regulatory considerations. This section provides insight into how regulators and firms are approaching deployment and oversight of AI and how those activities intersect with existing FINRA rulesets.

First, FINRA tackles model risk management. According to FINRA, model risk management encompasses “model development, validation, deployment, ongoing testing and monitoring,” which puts pressure on fintechs to articulate how their models work, how they are trained, and how they are managed and updated over time. As mentioned above, demonstrating ML Ops chops will reinforce the discipline applied to AI processes and assuage the concerns of clients and regulators. (The Federal Reserve has discussed its expectations for reasonable model management in SR 11-7.)
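
In practice, the “ongoing testing and monitoring” leg often includes watching for drift between training data and production inputs. Below is a minimal sketch using the population stability index (PSI), a metric commonly used in model risk management; the synthetic data is illustrative, and the 0.25 threshold is a conventional rule of thumb rather than a regulatory requirement.

```python
import numpy as np

def psi(baseline, production, bins=10):
    """Population stability index between a training-time sample and a
    production sample of the same feature (or model score)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    p_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid division by zero in sparsely populated bins.
    b_pct = np.clip(b_pct, 1e-6, None)
    p_pct = np.clip(p_pct, 1e-6, None)
    return float(np.sum((p_pct - b_pct) * np.log(p_pct / b_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)  # training-time distribution
live = rng.normal(0.4, 1.2, 5000)   # drifted production distribution
score = psi(train, live)
print(f"PSI={score:.3f}", "-> review/retrain" if score > 0.25 else "-> stable")
```

Part of PSI’s appeal here is that it is cheap to compute and easy to explain to nontechnical reviewers, which dovetails with the explainability expectations discussed above.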

Another key consideration from FINRA’s perspective is how model risk management intersects with its Rule on Supervision. The regulatory notion of supervision often serves as a catchall for situations where a firm’s policies, procedures or oversight mechanisms fail. Of the supervisory obligation, FINRA states: “in supervising activities related to AI applications, firms have indicated that they seek to understand how those applications function, how their outputs are derived and whether actions taken pursuant to those outputs are in line with the firm’s legal and compliance requirements.” Pretty simple, right?

Fintechs and firms creating and deploying AI applications must be able to explain the design, function and outputs of their data models. Since a firm’s supervisory obligations extend to every facet of an AI system, the way those systems operate must meet relevant FINRA mandates. This means that, where applicable, a fintech must be able to demonstrate that its AI functions consistently with rules like the SEC’s Regulation Best Interest standard for investment advice, FINRA’s Best Execution Rule for order taking or the Communications with the Public Rule for advertising materials.

Regarding data governance, FINRA expects that firms can demonstrate that models are “complete, current and accurate” as well as account for demographic bias in models and data. To address the issue of data bias, FINRA connects the notion of ethical AI to Rule 2010, which requires high standards of commercial honor in dealing with customers. Rule 2010 has historically been applied broadly to misconduct in situations ranging from supervisory failures to money laundering lapses. The reference to Rule 2010 means that FINRA has discretion to make determinations about how “commercial honor” might relate to potentially unethical practices involving AI. Moreover, AI must be vetted for potential bias at both the technical and organizational levels: firms and fintechs that employ AI should demonstrate diversity in staffing teams and viewpoints as well as in data sources, integrations and benchmarks.

For fintechs, taking a proactive approach to data governance, ethics and bias, one that demonstrates how your AI aligns with regulatory expectations, will differentiate your company and product. Developing whitepapers, reports and technical overviews about data governance and operations can reduce friction during the vendor onboarding process and in conversations with regulatory authorities. So, for example, if you are developing an AI app for a broker-dealer that provides investment recommendations based on signals from third-party sources like social media, you must validate that the data from these sources isn’t skewed in a way that would negatively impact the recommendation.
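
A pre-ingestion audit of such a third-party feed might look like the sketch below, which flags sources dominated by a few tickers or showing one-sided sentiment. The record schema, the audit_feed helper and the thresholds are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

def audit_feed(records, max_share=0.30, max_abs_sentiment=0.8):
    """Flag tickers that dominate a signal feed or show one-sided sentiment."""
    counts = defaultdict(int)
    sent_sum = defaultdict(float)
    for r in records:
        counts[r["ticker"]] += 1
        sent_sum[r["ticker"]] += r["sentiment"]  # sentiment assumed in [-1, 1]
    total = sum(counts.values())
    findings = []
    for ticker, n in counts.items():
        share = n / total
        mean_sent = sent_sum[ticker] / n
        if share > max_share:
            findings.append(f"{ticker}: {share:.0%} of feed (coverage skew)")
        if abs(mean_sent) > max_abs_sentiment:
            findings.append(f"{ticker}: mean sentiment {mean_sent:+.2f} (one-sided)")
    return findings

feed = ([{"ticker": "BIGCO", "sentiment": 0.9}] * 7
        + [{"ticker": "SMALLCO", "sentiment": -0.1}] * 3)
print(audit_feed(feed))  # BIGCO flagged for both coverage and sentiment skew
```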

Given that AI is often leveraged for business applications that collect or use customer data, privacy is another core consideration for FINRA. Fintechs must familiarize themselves with the strictures of data privacy laws like the California Consumer Privacy Act and document internal privacy practices. Fintechs should also understand the contours of the red flags rule (Reg S-ID) and the privacy of consumer financial information rule (Reg S-P) if their AI will be part of critical outsourced activities that pertain to the analysis of financial data.

For good measure, FINRA tacks on several “additional considerations” regarding the use of AI that are particularly impactful for fintechs. The two most crucial for startups are cybersecurity and outsourcing/vendor management.

Cybersecurity must be an essential focus for any fintech startup. Aligning cybersecurity and information security controls to a commonly accepted framework, such as a SOC 2 Type 2 audit or the ISO and CSA standards, is near-mandatory when selling to financial services firms. Considering how to deploy industry-standard cybersecurity controls during the design phase of your product will pay dividends when you enter vendor assessment with a financial services prospect.

Fintechs should familiarize themselves with FINRA’s outsourcing and vendor management requirements to gain a sense of how to structure internal practices — from policies and procedures to insurance and technical controls — in a manner that meets finserv expectations (See NASD’s Notice to Members 05-48 as well as the FFIEC’s Appendix J for further guidance).

Any work fintechs can do in the early stages of developing their AI technology to implement organizational governance and corresponding supervisory controls will be massively helpful in smoothing the vendor assessment and onboarding process.

Taken as a whole, FINRA’s report offers member firms and fintechs important insights into expectations for the use and development of AI applications. Perhaps the most important takeaway is that oversight and deployment of AI will have impacts that extend far beyond the code itself. Savvy fintechs will treat compliance, governance, cybersecurity and operations as fundamental pillars of a successful business strategy. Planning now will minimize frustration and pain in the long run.

Theta Lake periodically met with FINRA’s OFI as part of its outreach initiative open to all startups, but did not provide any input on the report.