Neural networks, built on multi-layered structures of artificial neurons that loosely mimic the human brain, are at the forefront of the latest thinking in artificial intelligence (AI) sweeping the financial services industry, alongside highly sophisticated algorithmic innovation. Expected to create opportunities to automate ever more complex processes and decisions with a high degree of accuracy, this next step in the financial services AI journey is an exciting one.

But it’s not without its challenges. As I explained in my previous blog post, making the most of neural networks and avoiding pitfalls that can have serious consequences depend on understanding their benefits and drawbacks and taking the right approach to implementation.

The pros and cons of neural networks

Neural networks are capable of highly complex decision-making with superior accuracy. Within the financial services industry, they can be very useful in predicting future values, extracting meaning from unstructured data and recognizing objects in images. This makes them very attractive to firms that are ready to take the next leap in applying AI across the business.

However, their focus on accuracy can also be a drawback. Decisions made within financial services, such as loan approvals, often have monumental impacts on people’s lives, and individuals expect to understand why those decisions have been made. Neural networks don’t give the reasons, only the answers. The ever-increasing customer demand for explainability and transparency runs contrary to the opaque nature of neural networks: to date, these “black-box” solutions are unable to articulate the reasoning behind their decisions.
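To make that gap concrete, here is a minimal sketch in Python, using scikit-learn’s MLPClassifier on invented applicant features; nothing here reflects a real lending model. The trained network readily returns an approval probability, but its “reasoning” is nothing more than thousands of learned weights:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: each row is an applicant
# (say: income, debt ratio, years of credit history); labels are past outcomes.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.standard_normal(500) > 0).astype(int)

# A small multi-layered network of artificial neurons.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X, y)

# The network gives an answer: a probability of approval...
applicant = np.array([[0.4, 0.7, 0.2]])
print(model.predict_proba(applicant))

# ...but no reasons: its decision logic is spread across hundreds of
# learned weights, none of which a declined applicant could act on.
print(sum(w.size for w in model.coefs_))

Even in this toy example, the only artefacts the model can show for its decision are raw weight matrices, which is exactly why regulators and customers find these answers unsatisfying.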

What can go wrong…

Because these solutions are so complex, it can be very difficult for a human to articulate the reasoning behind the decisions they make. The focus on accuracy that defines neural networks can also make it difficult to identify bias before it has contributed to long-term discriminatory outcomes. The injection of bias through skewed data is always a risk when using AI, but it’s amplified in a neural network, which relies on huge amounts of data. The end result could be legal action for discriminatory practices. It’s paramount that the data used to train these models is of sufficient quality, scale and diversity to avoid potential bias; a simple first check is sketched below.
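One inexpensive safeguard is to audit the training data for skewed historical outcomes before any model is trained on it. The sketch below is in Python with pandas; the column names and the 0.2 tolerance are invented for illustration. It compares approval rates across a proxy attribute such as postcode region, where skew often hides:

import pandas as pd

# Hypothetical loan-approval training set; columns are illustrative only.
df = pd.DataFrame({
    "approved":        [1, 1, 1, 0, 0, 0, 1, 0],
    "postcode_region": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Historical approval rate per group. Large gaps here tend to be
# learned, and amplified, by a network trained on this data.
rates = df.groupby("postcode_region")["approved"].mean()
print(rates)

# Flag gaps above a chosen tolerance (0.2 is an arbitrary example).
if rates.max() - rates.min() > 0.2:
    print("Warning: skewed outcomes in training data - review before training.")

In practice a firm would run such checks across many attributes, at full scale and with statistical rigour, but the principle is the same: catch the skew before the network learns it.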

Neural networks also require massive amounts of processing power. IT infrastructure limitations could be a hindrance to effectively employing these models. Often, firms turn to the cloud to access the processing power they need to run their neural networks.

As with so many other new technologies, stakeholder trust and confidence are critical to the application and long-term use of neural networks. Gaps in data or processing power are usually noticeable immediately and can be rectified. The deeper issues with neural networks, however, are the potential for bias and the need for explainability and algorithmic transparency. Often these gaps aren’t noticed until the damage has been done.

From “black box” to “glass box”

Delivering explainability, transparency and freedom from bias may be more difficult when using neural networks, but not impossible. In my next blog post, I’ll offer a six-step methodology for making sure you’re using the right processes, practices, tools and controls to make responsible and ethical use of neural networks.

For detailed information on neural networks and how to apply them in financial services, please see Accenture’s report: Neural Networks: The Next Step for Artificial Intelligence in Financial Services.

Sabyasachi Roy

Managing Director – Financial Services Technology Advisory Artificial Intelligence, UKI
