WASHINGTON — The Federal Reserve and other banking regulators are considering a formal request for public feedback about the adoption of artificial intelligence in the financial services sector, Fed Gov. Lael Brainard said Tuesday.
If the agencies move forward with the request for information, it could be the first step toward an interagency policy on AI.
Brainard said the RFI would accompany the Fed’s own efforts to explore how AI and machine learning can be used for bank supervision purposes.
“To ensure that society benefits from the application of AI to financial services, we must understand the potential benefits and risks, and make clear our expectations for how the risks can be managed effectively by banks,” Brainard said in remarks for a symposium on AI hosted by the central bank. “Regulators must provide appropriate expectations and adjust those expectations as the use of AI in financial services and our understanding of its potential and risks evolve.”
Financial institutions have started using AI for operational risk management, customer-facing applications and fraud prevention, Brainard said. Those functions could remake the way banks monitor suspicious activity, she added.
“Machine learning-based fraud detection tools have the potential to parse through troves of data — both structured and unstructured — to identify suspicious activity with greater accuracy and speed, and potentially enable firms to respond in real time,” she said.
AI could also be used to analyze alternative data for customers, Brainard said. That could be particularly helpful, she added, to the segment of the population that is “credit invisible.”
But Brainard also acknowledged challenges in the widespread adoption of AI and machine learning in banking. If models are based on historical data that has racial bias baked in, they could “amplify rather than ameliorate racial gaps in access to credit” and lead to “digital redlining.”
“It is our collective responsibility to ensure that as we innovate, we build appropriate guardrails and protections to prevent such bias and ensure that AI is designed to promote equitable outcomes,” she said.
Brainard also noted that there is often a lack of transparency in how AI and machine learning models work behind the scenes to accomplish tasks. As the financial services industry adopts the technology, expectations for explaining how models work should avoid a “one size fits all” approach, she said.
“To ensure that the model comports with fair-lending laws that prohibit discrimination, as well as the prohibition against unfair or deceptive practices, firms need to understand the basis on which a machine learning model determines creditworthiness,” she said.