Greg Hancell, Manager of Global Consulting at OneSpan, talks to Finextra TV about how financial institutions can fight fraud using machine learning. The interview also contains insights on why explainable artificial intelligence is important and how banks can get started with continuous monitoring and contextual authentication.
Watch the interview in full or read the transcript below.
Hannah Wallace: We hear a lot of stories about account takeovers and hacking. How can financial institutions get better at detecting and mitigating these attacks?
Greg Hancell: There are three areas financial institutions need to focus on - technology, process and people. The main focus at the moment is on technology, which is really being driven forward by machine learning.
HW: So, machine learning, let's talk a little bit more about that. Can you pinpoint the main areas of machine learning that are helping mitigate these fraud risks?
GH: With machine learning, there are two main models that are applied to fraud detection: supervised machine learning and unsupervised machine learning.
Unsupervised machine learning uses anomaly detection, where it determines what is usual and what is unusual. With supervised machine learning, the model is trained on historical fraud data, so it can determine - 'is this event fraud or not?' - and predict a fraud score. A machine learning model can be applied in real time to every event that occurs, sending a score back. This allows a solution, or a user, to take action based upon these events. It also has the capability to think in multiple dimensions.
What is in a dimension? That would be data elements such as:
- a device;
- the user’s IP address;
- the user’s internet service provider;
- and many more.
These data elements are then computed into what are called features, which are used inside a machine learning model. So, if we take the example of the device, the features might be:
- How is that device used?
- What is the age of that device?
- Is that device new to the user?
- Is it new to the bank or is it new to the financial institution?
- Is it new to the corporation?
- What security measures are in place on that device?
- What biometric methods and authentication methods are enrolled on that device?
- What communication method is it using?
- What model is it?
- What operating system?
- Has anything changed?
All of these are questions you can ask around the device alone.
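The questions above can be thought of as feature engineering. As a rough sketch - the field names and structures here are invented for illustration, not a real banking schema - turning raw device data from an event into model features might look like this:

```python
from datetime import date

def device_features(event: dict, known_devices: dict) -> dict:
    """Turn raw device data from a login or payment event into features.

    `event` and `known_devices` are illustrative structures; a real
    system would draw on far more signals than these.
    """
    dev_id = event["device_id"]
    history = known_devices.get(dev_id)  # None if the bank has never seen it

    return {
        "is_new_to_user": history is None,
        "device_age_days": (
            (date.today() - history["first_seen"]).days if history else 0
        ),
        "os_changed": bool(history) and history["os"] != event["os"],
        "has_biometrics": event.get("biometrics_enrolled", False),
    }

# Example: a device first seen today, now reporting a changed OS.
known = {"dev-1": {"first_seen": date.today(), "os": "iOS 16"}}
feats = device_features({"device_id": "dev-1", "os": "iOS 17"}, known)
```

Each returned value answers one of the questions above ("Is that device new to the user?", "Has anything changed?") in a form a model can consume.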
Financial institutions that leverage machine learning can take this data and ask these questions in real time. Based upon the answers, they can then model that information in a high dimensional space, which has the capability to model lots of different parameters, often into the thousands of dimensions - far beyond a human's capability. What that analysis does is then give the financial institution the likelihood of an action being anomalous, or the likelihood of fraud, in real time.
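To make the "likelihood of an action being anomalous" concrete, here is a deliberately tiny sketch: one dimension (a transfer amount) scored against the user's own history with a z-score, squashed into a 0-100 scale. Real systems model thousands of dimensions, as described above; the scoring function here is invented for illustration.

```python
import math

def anomaly_score(history: list[float], value: float) -> float:
    """Score how unusual `value` is versus the user's history (0-100).

    A one-dimensional toy stand-in for high-dimensional anomaly models:
    distance from the user's own mean, measured in standard deviations.
    """
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = math.sqrt(var) or 1.0  # avoid division by zero on flat history
    z = abs(value - mean) / std
    # Squash the z-score into a 0-100 "likelihood of anomaly".
    return round(100 * (1 - math.exp(-z / 2)), 1)

usual = [50.0, 45.0, 55.0, 60.0, 40.0]   # typical transfer amounts
low = anomaly_score(usual, 52.0)          # close to normal behaviour
high = anomaly_score(usual, 5000.0)       # wildly out of pattern
```

A transfer near the user's usual range scores low; one far outside it scores near the top of the scale.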
Machine learning can also be used from an automation perspective. It's impossible to have a fraud expert in place 24/7, seeing all events. So, machine learning is removing that availability bias that we as humans face, as well as potentially a confirmation bias. Machine learning removes these kinds of human challenges and allows for the capability to take decisions on events in real time in an automated way.
Machine learning can also make decisions for other types of workflows - such as what type of authentication a financial institution should apply to a transaction. It can determine whether the strength of the authentication required relates to the risk. This can also be used to improve the customer experience - whereby financial institutions can determine that where the risk is low, there is no need to request authentication from the user at this point in time. If financial institutions are using continuous monitoring, then if the risk changes, they can serve up a stronger authentication method, such as a biometric. So, machine learning allows financial institutions to adapt authentication types to the level of risk as well.
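The risk-adaptive authentication described here can be sketched as a simple policy that maps a real-time risk score to an authentication step. The thresholds and step names below are invented for illustration, not a recommendation:

```python
def required_authentication(risk_score: float) -> str:
    """Map a real-time risk score (0-100) to an authentication step.

    Illustrative thresholds: low risk stays frictionless, higher risk
    steps up to stronger authentication.
    """
    if risk_score < 30:
        return "none"           # low risk: no challenge, frictionless
    if risk_score < 70:
        return "push_approval"  # medium risk: one-tap confirmation
    return "biometric"          # high risk: strong biometric challenge

# Continuous monitoring: re-evaluate as the session's risk changes.
session_scores = [12.0, 18.0, 85.0]
decisions = [required_authentication(s) for s in session_scores]
```

Early in the session the user is not challenged at all; when the risk jumps mid-session, the policy steps up to a biometric, matching the continuous-monitoring pattern described above.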
HW: Really interesting, certainly a space to watch. And while I've got you on the topic of machine learning, are there any other trends that we should be keeping an eye out for?
GH: In the future, a big challenge will be the capability to have explainable machine learning. It is important that financial institutions understand what their machine learning model has learnt. It’s no good if the model learns something, and then financial institutions believe it is detecting fraud, when in fact it's learned something incorrectly.
It could also have learned something from a biased perspective - making decisions incorrectly because it learned from skewed data. This could be unfair to somebody applying for credit or a loan, but it is equally a risk if the model has learned something incorrect from a fraud perspective.
So, financial institutions need to be able to explain what their machine learning model is learning and what data set it is learning upon. They also need to explain the weighting of the features I referenced - the device and other intelligence points - and how they influence the machine learning model. Why that's so important, if you are a fraud analyst, is that you need to know what a score means. If a score is 90, what does that mean? And why is it 90? How does that relate to the wider population? How does it relate to previous events? How many users score above 90? And so forth.
So, machine learning needs to be explainable in terms of understanding: what the score is; how it was determined by different elements that are used to arrive at that score; and the decision-making process as well.
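One common way to make a score explainable is to decompose it into per-feature contributions, so an analyst can see which elements drove the number. A minimal sketch, using a linear model with invented feature names and weights:

```python
def explain_score(features: dict, weights: dict) -> tuple[float, dict]:
    """Decompose a linear risk score into per-feature contributions.

    Feature names and weights are illustrative only; real explainability
    tooling handles far more complex, non-linear models.
    """
    contributions = {
        name: weights.get(name, 0.0) * value
        for name, value in features.items()
    }
    return sum(contributions.values()), contributions

weights = {"is_new_device": 40.0, "unusual_amount": 35.0, "foreign_ip": 15.0}
score, why = explain_score(
    {"is_new_device": 1, "unusual_amount": 1, "foreign_ip": 1}, weights
)
```

Rather than just seeing "the score is 90", the analyst sees that a new device contributed 40 points, an unusual amount 35, and a foreign IP 15 - answering the "why is it 90?" question posed above.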
On top of that, there needs to be the capability to audit. In machine learning models, there is a capability to apply what we call a champion-challenger approach - where one model is in place scoring, a new one is spun up alongside it, and fraud experts are able to see, in real time, whether the new model will outperform the existing model. When it does, it comes into place, and a new challenger is spun up. That's great, because it means you have relevant machine learning models that can handle drift and different types of new fraud, or changing patterns, spends or products. The risk, from an audit perspective, is that a previous model could have made a decision that impacts a customer. So, how does that link back to the customer that was impacted at that point in time? There needs to be auditability and traceability, as well as explainability, around machine learning.
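The champion-challenger promotion decision can be sketched as a simple comparison of the two models on the same events. The metric (recall on confirmed fraud), margin, and numbers below are all illustrative assumptions:

```python
def promote_if_better(champion_hits: int, challenger_hits: int,
                      total_frauds: int, margin: float = 0.05) -> bool:
    """Decide whether a shadow 'challenger' model should replace the
    live 'champion', based on fraud-detection recall on the same events.

    The margin guards against promoting on noise; in practice the
    comparison would use proper statistical testing and more metrics.
    """
    champ_recall = champion_hits / total_frauds
    chall_recall = challenger_hits / total_frauds
    return chall_recall > champ_recall + margin

# Both models scored the same 200 confirmed-fraud events in parallel.
promote = promote_if_better(champion_hits=150, challenger_hits=175,
                            total_frauds=200)
```

For the audit concern Greg raises, a production system would also log which model version produced each customer-facing decision, so past decisions remain traceable after a promotion.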
HW: Well, Greg, you've certainly painted the picture for us, and I think it's safe to say, watch this space. But for now, thank you very much for sharing your insights. It's been a pleasure.
GH: Thank you. It's been a pleasure. Thank you.