The risk lies in the increasing difficulty of explaining how the technology reaches its decisions
Canada’s Office of the Superintendent of Financial Institutions recently indicated that it may tighten the regulations governing artificial intelligence.
Over the past few years, AI-powered solutions have been steadily deployed by banks and other financial entities as smart, cost-cutting measures.
However, this widespread adoption may expose lenders and the general public to risk, especially when it becomes more difficult to explain to stakeholders how the technology arrives at its decisions.
“AI presents challenges of transparency and explainability, auditability, bias, data quality, representativeness and ongoing data governance,” OSFI Assistant Superintendent Jamey Hubbs said last month, as quoted by the Financial Post.
“The credibility of analytical outcomes may erode as transparency and justification become more difficult to demonstrate and explain,” Hubbs added. “There may also be risks that are not fully understood and limited time would be available to respond if those risks materialize.”
In a contribution for Forbes last year, author and futurist Bernard Marr warned that such decision-making hazards could cause the technology to do more harm than good.
“Biased AI systems are likely to become an increasingly widespread problem as artificial intelligence moves out of the data science labs and into the real world,” he said.
“An algorithm might pick a white, middle-aged man to fill a vacancy based on the fact that other white, middle-aged men were previously hired to the same position, and subsequently promoted. This would be overlooking the fact that the reason he was hired, and promoted, was more down to the fact he is a white, middle-aged man, rather than that he was good at the job.”
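The feedback loop Marr describes can be made concrete with a minimal sketch. The data, group labels, and scoring function below are entirely invented for illustration; the point is simply that a naive model trained on biased hiring history learns the demographic pattern rather than job skill.

```python
# Hypothetical illustration of Marr's point: a "model" fit to biased
# historical hiring data reproduces the bias. All data is invented.
from collections import Counter

# Historical records: (demographic_group, skill_score, was_hired)
history = [
    ("group_a", 0.4, True),   # hired despite modest skill
    ("group_a", 0.5, True),
    ("group_a", 0.3, True),
    ("group_b", 0.9, False),  # passed over despite strong skill
    ("group_b", 0.8, False),
    ("group_b", 0.7, False),
]

def hire_rate_by_group(records):
    """Fraction of past candidates hired, per demographic group."""
    hired, total = Counter(), Counter()
    for group, _skill, was_hired in records:
        total[group] += 1
        if was_hired:
            hired[group] += 1
    return {g: hired[g] / total[g] for g in total}

def naive_score(group, skill, records):
    """Scores a candidate purely by the historical hire rate of
    their group -- skill never enters the score at all."""
    return hire_rate_by_group(records)[group]

# A weak group_a candidate outscores a strong group_b candidate,
# because the history, not the skill, drives the prediction.
print(naive_score("group_a", 0.2, history))   # 1.0
print(naive_score("group_b", 0.95, history))  # 0.0
```

A real system would use a trained classifier rather than raw hire rates, but the same effect appears whenever the historical labels themselves encode the bias.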
Marr stressed that AI still needs intensive refinement before being rolled out for large-scale use in mortgage and other critical financial sectors.
The implications for the mortgage space are particularly serious, as AI might not consider the human circumstances that lead to problems such as delinquency or misleading documentation, only the results of those problems.