Artificial Intelligence (AI) has become a driving force for innovation.
Its ability to sift through large volumes of complex data is streamlining decision-making and unearthing opportunities that were previously out of reach.
The benefits are especially evident in fintech. A sector built on data sets and numbers, it is the perfect setting for the extensive application of AI.
According to Mordor Intelligence, the global AI-in-fintech market was estimated at $9.91 billion in 2020, with a predicted average growth of 23% between 2021 and 2026.
Given the right parameters and data sets, AI can identify patterns in historical data, informing real-time decisions such as those taken in investment trading within a matter of seconds.
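As a toy illustration of how historical price data can drive an automated, real-time decision (a sketch, not any firm's actual strategy), a moving-average crossover compares short- and long-window averages and emits a buy/sell signal; the price series below is invented:

```python
# Toy moving-average crossover: a hypothetical example of turning
# historical price data into a trading signal. Not investment advice.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short=3, long=5):
    """'buy' when the short-term average rises above the long-term one,
    'sell' when it falls below, otherwise 'hold'."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

history = [100, 101, 99, 98, 97, 99, 102, 105]  # invented price series
print(crossover_signal(history))  # recent prices rising -> "buy"
```

Real systems replace the hand-written rule with a model learned from the data itself, as Osterrieder describes, but the decision loop is the same: historical data in, signal out, in seconds.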
Many of the largest financial institutions have used various forms of AI for many years, and as the technology develops, the potential application becomes even more varied.
The term “Artificial Intelligence” was first coined in 1956 by John McCarthy, although the technology that formed the basis of modern-day AI dates from a decade earlier. It was not applied in finance until 1982, when James Simons’ quantitative hedge fund Renaissance Technologies used its data to analyze statistical probabilities of trends in securities prices in any market, then built models to predict those trends.
“The major paradigm shift is that if you go back to 50 years ago, you had various theoretical models for decision making, for example, the Cohen Model for financial markets,” said Jörg Osterrieder, Professor of Finance and Risk Modelling at the ZHAW School of Engineering and Action Chair of the EU COST Action on Fintech and Artificial Intelligence in Finance (FIN-AI).
“Theoretical models had one or two parameters, and then you used data to check if your model was correct. Now it is exactly the opposite.”
“You don’t even need a model anymore. You don’t need to know how the financial markets work. You just need this data set you give to the computer, and it will learn your optimum trading strategy. It doesn’t know about the theoretical models.”
AI is now used in all areas of the fintech landscape, from chatbots to automated investment, even creating new, hyper-personalized financial products as individual datasets become more open.
The use of historical data is essential to AI. Fundamentally, every decision the technology makes is informed by its analysis of past data. This has its limits, as unforeseen events can render predictions null.
However, as data becomes more varied and computational power more robust, more scenarios can be simulated, and statistical evidence can inform a wider range of decisions and outcomes.
“If you read the news, you hear people talking about the AI revolution,” said Osterrieder. “That suggests there are always huge breakthroughs, but it’s an ongoing, steady development. It’s steady because increasingly more people are looking into it, with more computing power and more data being made available.”
“You can find individual examples of AI applications everywhere,” he continued. “They all have two requirements to use AI: one, they have to have a good data set, and two, it has to be something quantitative.”
These two simple-sounding requirements open the technology to multiple applications, increasing potential as widespread access to data becomes the norm.
A survey conducted for the World Economic Forum in 2020 showed that 85% of financial players worldwide already use some form of AI, and 65% were looking to adopt AI for mass financial operations.
Companies such as Ocrolus and Kensho Technologies use AI to form the basis of their product offering, while other companies integrate AI to inform specific areas. Fintech is becoming ever more synonymous with AI.
AI detection of money laundering
Osterrieder explained that within a business model, AI can increase profits through the creation of new customized products and improve efficiency through streamlined decision-making. In addition, security is heightened by reducing fraud and money laundering.
Several companies now use AI-based fraud and anti-crime detection software to ensure safety for their customers. The software can detect suspicious activity and provide an automated response using various techniques.
Given the volume of data that must be analyzed to detect such activity, AI seems like the perfect solution. In many instances, however, the technology has created problems of its own.
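A heavily simplified sketch of the underlying idea, assuming a plain z-score threshold on transaction amounts; real AML systems combine many more signals (counterparties, velocity, geography) with learned models, and all figures below are invented:

```python
# Minimal sketch of statistical anomaly flagging on transaction amounts.
# A transaction is flagged if it sits far from the mean of the account's
# history, measured in standard deviations (z-score).
from statistics import mean, stdev

def flag_suspicious(amounts, threshold=2.0):
    """Return indices of transactions more than `threshold` standard
    deviations from the mean amount."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

txns = [20, 35, 18, 27, 22, 31, 5000, 24]  # invented amounts
print(flag_suspicious(txns))  # only the 5,000 outlier at index 6 is flagged
```

The sketch also hints at why such systems misfire: an unusual but legitimate payment looks statistically identical to a suspicious one, which is how automated monitoring can end up freezing innocent accounts.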
Earlier this month, German neobank N26 came under fire after closing hundreds of accounts without warning.
Now under investigation by the Directorate of the Repression of Fraud (DGCCRF), the company issued a statement attributing the closures to its anti-financial-crime efforts. This follows its “heavy investment” in the area last year, when more than €25 million was used to expand its anti-financial-crime team and technology.
The company stated that such decisions are based on activity monitored through automated systems and AI-driven machine learning.
They are not alone. Many other banks, such as Revolut and Monzo, have also faced issues.
The explainability issue
Explainability is an issue that restricts the sector.
“If the AI forms a complicated model, it will have millions of parameters, so fundamentally, it’s impossible to really explain why a decision was made,” said Osterrieder.
He said that regulators worldwide request the reasoning behind decisions, which is challenging to provide. This limits the mass use of AI in certain areas.
It is an area on which EU COST FIN-AI, the group Osterrieder leads, has set its research focus. Funded by the EU Commission, the group investigates aspects of AI in fintech to support development in the field.
According to the research facility, AI solutions are often referred to as “black boxes” due to the difficulty in tracing the steps taken by the algorithms in making a decision.
Their working group is tasked with investigating the establishment of more transparent, interpretable, and explainable models.
Following the completion of a project titled Towards Explainable Artificial Intelligence and Machine Learning in Credit Risk Management, the research initiative suggested the development of a visual analytics tool for both developers and evaluators.
The tool was presented to give insights into how AI is applied to processes and to identify the reasons behind decisions taken, going some way toward encouraging mass adoption.
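The kind of explanation such tools aim to surface can be sketched in miniature: given any scoring function, perturbing one input at a time shows how much each feature moved the decision. The model, weights, and feature names below are invented for illustration and are not the project's actual tool:

```python
# Perturbation-based attribution sketch: measure how the score of a
# hypothetical credit model changes when each feature is removed.

def credit_score(features):
    """Toy linear credit model with invented weights."""
    weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    """Contribution of each feature: the score change when it is zeroed."""
    base = credit_score(features)
    contributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = 0.0
        contributions[name] = base - credit_score(perturbed)
    return contributions

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
print(explain(applicant))  # income and tenure help, debt hurts
```

For a transparent linear model like this one, the attributions are exact; for a model with millions of parameters, as Osterrieder notes, no such clean decomposition exists, which is the heart of the "black box" problem.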
Issue of data bias
In addition, the issue of data bias concerns some industry professionals. Though some see machine- and data-based decisioning as a way to avoid human subjectivity, it is not yet immune to bias.
In an interview with McKinsey, Liz Grennan, McKinsey expert associate partner, said, “Without AI risk management, unfairness can become endemic in organizations and can be further shrouded by the complexity.”
“One of the worst things is that it can perpetuate systematic discrimination and unfairness.”
Biases in AI arise in two ways: cognitive bias, which can be introduced into the system, consciously or subconsciously, through the programming of the machine learning algorithm; and incomplete data, where collection from a specific group fails to represent the wider population.
“Every model we have, even AI, is based on historical data,” said Osterrieder. “There’s just nothing else. We can play with that. We can change it, manipulate it, but it’s still historical data, so if there is a bias in the data, any model, unless you specifically force it to do something else, will have that bias again.”
Data bias is being investigated across all sectors of AI application. Facilitating impartial decisions based on unbiased data is seen as key to maximizing the potential of AI and building trust in the systems.
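Osterrieder's point can be demonstrated in a few lines: a "model" that simply learns per-group approval rates from skewed historical records reproduces the skew exactly. The groups and records below are invented for illustration:

```python
# Sketch of bias inheritance: fitting to skewed historical data
# reproduces the skew. Groups and outcomes are invented.

historical = [  # (group, approved) records with a built-in skew
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def fit_approval_rates(records):
    """'Learn' each group's approval probability from past outcomes."""
    counts, approvals = {}, {}
    for group, approved in records:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / counts[g] for g in counts}

model = fit_approval_rates(historical)
print(model)  # the learned "policy" mirrors the skew: A favored over B
```

Nothing in the fitting step is prejudiced; the unfairness lives entirely in the data, which is why, as the quote above notes, it reappears in any model not explicitly forced to behave otherwise.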
The EU Artificial Intelligence Act
The EU AI Act is the first proposed law on AI globally. It aims to regulate the application of AI, banning specific practices to protect consumer rights while still allowing the technology to develop.
The proposal stipulates which AI applications are unacceptable or high-risk, while also setting parameters for regulating accepted applications.
The title covering unacceptable applications of AI highlights the technology's intrusive potential.
Prohibited uses include subliminal techniques for subconscious influence, the exploitation of consumers based on vulnerabilities such as age, and “social score” classification systems based on social behavior over time.
In addition, the use of real-time remote biometric identification systems in public spaces is highly regulated, deemed appropriate only in narrowly defined cases such as identifying suspected criminals.
“High-risk” applications, such as CV-scanning tools that rank job applicants, face numerous legal requirements, while other, unlisted applications remain unregulated.
Transparency remains a crucial factor for application within the proposed law, as does risk management and data governance.
Barriers to development
As the AI sector within finance continues to grow, the focus turns to the future and the timeline to mass adoption.
“I think in the future, we will see developments in specialized places with specialized products, but we will not see major changes in finance. It’s very incremental,” said Osterrieder.
“We have a long way to go, but I don’t think it’s the AI itself. It’s more about the data and computing power.”
Various barriers face the further development of the technology, which may explain the incremental pace of change. Many were concerned about AI at its conception, but as the technology has developed and its limitations have become more apparent, it has become clear that uncontrolled mass adoption is unlikely.
“I think there are three things restricting development,” he continued. “One, it’s the data. We still have a lot of data, but we are not able to process it efficiently. It takes a lot of IT resources to process data efficiently, and we have a lot of unstructured data which has to be processed. The data issue is ongoing.”
“I think the second is computing power. If you have a very complex AI model, you need enormous computing power, which only the large companies have.”
“The third that will affect widespread adoption is the social aspect. Society and the regulators need to accept that a computer is now doing something that a human once did. To accept that, we need legislation, we need explainability, we need these unbiased decisions, and we need ethical guidelines.”