
Financial AI compounds both efficiency and risk

AI can process large amounts of information very quickly, and financial institutions will start adopting AI-enabled tools to make accurate risk assessments, detect insider trading, and streamline daily operations


By Michelle Cantos

The formidable power of the digital economy, that is, economic activity resulting from online interactions between users and businesses, has the potential to provide India with $1 trillion in economic value by 2025. Companies across multiple industries, including the financial sector, are eager to reap the benefits of the digital economy. Enterprising institutions are increasingly adopting modern tools and techniques, such as artificial intelligence (AI)-enabled applications, to tap into this mountain of economic potential.


Because AI can process large amounts of information very quickly, financial institutions are starting to adopt AI-enabled tools to make more accurate risk assessments, detect insider trading, and streamline daily operations. However, researchers have also demonstrated how exploiting vulnerabilities in certain AI models can adversely affect a system's final performance.

Currently, threat actors possess limited access to the technology required to conduct disruptive operations against financial AI systems, and the risk of this type of targeting remains low. Nonetheless, there is a high risk of cyber threat actors leveraging these weaknesses for financial disruption or economic gain in the future.

Recent advances in adversarial AI research highlight the vulnerabilities in some AI techniques used by the financial sector. Data poisoning attacks, or manipulating a model’s training data, can affect the end performance of a system by leading the model to generate inaccurate outputs or assessments. Manipulating the data used to train a model can be particularly powerful if it remains undetected, since “finished” models are often trusted implicitly.
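To make the mechanism concrete, here is a minimal, hypothetical sketch of a label-flipping data poisoning attack, using scikit-learn and synthetic data. The credit-risk framing, dataset, and 15 per cent poisoning rate are illustrative assumptions, not a description of any production system.

```python
# A minimal sketch of a label-flipping data poisoning attack.
# Everything here is synthetic and illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-risk dataset: X holds applicant
# features, y is 1 if the loan later defaulted and 0 otherwise.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

print("clean accuracy:   ", train_and_score(y_train))

# The attacker silently flips 15% of the training labels. The model
# still trains without errors, so the tampering is easy to miss.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

print("poisoned accuracy:", train_and_score(poisoned))
```

The poisoned model trains without errors and looks just as "finished" as the clean one, which is precisely why implicitly trusted models are dangerous to leave unaudited.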

It should be noted that adversarial AI research demonstrates how anomalies introduced into a model do not necessarily point users toward an obviously wrong answer; rather, they can subtly redirect users away from the most correct output. Additionally, some cases of compromise require threat actors to obtain a copy of the model itself, whether through reverse engineering or by compromising the target’s machine learning pipeline.

The following are some uses of financial AI tools, along with their potentially exploitable weaknesses:

Sentiment Analysis
Use: Branding and reputation are variables that help analysts plan future trade activity and examine potential risks associated with a business. AI techniques, such as natural language processing, can help analysts quickly identify public discussions referencing a business and examine the sentiment of these conversations to inform trades or help assess the risks associated with a firm.

Potential Exploitation: Threat actors could insert fraudulent data to generate erroneous analyses of a publicly traded firm. For example, they could distribute fabricated negative information about a company, adversely affecting the business’s future trade activity or leading to a damaging risk assessment.
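As a toy illustration of how injected chatter can flip a sentiment-driven signal, the sketch below uses an invented keyword lexicon, fabricated posts, and a fictional ticker "ACME". Real sentiment-analysis pipelines are far more sophisticated, but the aggregation weakness is the same in principle.

```python
# Toy lexicon-based sentiment signal for a fictional stock "ACME",
# and how fabricated posts can flip it. Word lists, posts, and the
# decision rule are all invented for this example.
NEGATIVE = {"fraud", "lawsuit", "recall", "losses", "downgrade"}
POSITIVE = {"growth", "record", "beat", "upgrade", "expansion"}

def post_score(post):
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def trade_signal(posts):
    total = sum(post_score(p) for p in posts)
    return "BUY" if total > 0 else "SELL"

organic = [
    "Record quarter and strong growth for ACME",
    "Analysts issue upgrade after earnings beat",
]
print(trade_signal(organic))          # BUY

# A threat actor floods the stream with fabricated negative chatter.
fake = ["ACME faces fraud lawsuit and product recall"] * 5
print(trade_signal(organic + fake))   # SELL
```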


Portfolio Management
Use: Several financial institutions plan to employ AI applications to select stocks for investment funds or, in the case of AI-based hedge funds, to conduct trades automatically to maximize profits. Financial institutions can also leverage AI applications to help customize a client’s trade portfolio: an application can analyze a client’s previous trade activity and propose future trades analogous to those already in the portfolio.

Potential Exploitation: Actors could influence recommendation systems to redirect a hedge fund toward irreversible bad trades, causing the company to lose money. For instance, flooding the market with orders could confuse the recommendation system and cause it to start trading in a way that damages the company. Moreover, many of the automated trading tools used by hedge funds operate without human supervision and conduct trade activity that directly affects the market. This lack of oversight could leave future automated applications more vulnerable to exploitation, as there is no human in the loop to detect anomalous threat activity.
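A hypothetical sketch of the flooding scenario follows. The trading rule, volumes, and thresholds are invented for illustration, but they show how an unsupervised rule that follows order-flow imbalance can be misled by orders an attacker never intends to execute.

```python
# Hypothetical sketch of an unsupervised trading rule misled by spoofed
# order flow. The strategy, numbers, and thresholds are invented.
def imbalance_signal(buy_volume: float, sell_volume: float) -> str:
    # Naive rule: follow whichever side has meaningfully more resting volume.
    ratio = buy_volume / max(sell_volume, 1e-9)
    if ratio > 1.2:
        return "BUY"
    if ratio < 0.8:
        return "SELL"
    return "HOLD"

# Genuine order book: sellers slightly dominate, so the bot stays flat.
print(imbalance_signal(buy_volume=900, sell_volume=1000))         # HOLD

# Spoofing: an attacker posts large buy orders they intend to cancel,
# making demand look strong. With no human in the loop, the bot buys
# into the fake demand before the orders vanish.
print(imbalance_signal(buy_volume=900 + 2000, sell_volume=1000))  # BUY
```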

Compliance and Fraud Detection
Use: Financial institutions and regulators are aiming to leverage AI-enabled anomaly detection tools to ensure that traders are not engaging in illegal activity. These tools can examine trade activity, internal communications, and other employee data to ensure that workers are not capitalizing on advance knowledge of the market to engage in fraud, theft, insider trading, or embezzlement.

Potential Exploitation: Sophisticated threat actors can exploit weaknesses in classifiers to alter an AI-based detection tool so that it mischaracterizes anomalous illegal activity as normal. Manipulating the model in this way lets insider threats conduct criminal activity without fear of discovery.
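The sketch below illustrates this with an invented two-feature compliance dataset and a scikit-learn logistic regression standing in for the detection classifier; the features, figures, and poisoning volume are all assumptions made for the example.

```python
# Invented two-feature compliance dataset: each row is
# [trade size, minutes before a public announcement]; label 1 = illicit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(100, 10, 500),   # routine sizes
                          rng.normal(600, 60, 500)])  # well before news
illicit = np.column_stack([rng.normal(450, 20, 50),   # outsized trades
                           rng.normal(10, 3, 50)])    # minutes before news
X = np.vstack([normal, illicit])
y = np.array([0] * 500 + [1] * 50)

insider = np.array([[450.0, 5.0]])  # a large trade placed just before news

clean = LogisticRegression(max_iter=1000).fit(X, y)
print(clean.predict(insider))    # [1]: correctly flagged as illicit

# The insider seeds the tool's training data with rows shaped like their
# own trading, mislabelled as normal, before the model is retrained.
poison = np.column_stack([rng.normal(450, 20, 150),
                          rng.normal(10, 3, 150)])
X_tainted = np.vstack([X, poison])
y_tainted = np.concatenate([y, np.zeros(150, dtype=int)])

tainted = LogisticRegression(max_iter=1000).fit(X_tainted, y_tainted)
print(tainted.predict(insider))  # [0]: now mischaracterized as normal
```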

Trade Simulation
Use: Financial entities can use AI tools that leverage historical data from previous trade activity to simulate trades and examine their effects. Quant-fund managers and high-speed traders can use this capability to plan future activity strategically, such as identifying the optimal time of day to trade.

Potential Exploitation: By exploiting inherent weaknesses in an AI model, threat actors could lull a company into a false sense of security regarding how a trade will play out. Specifically, threat actors could determine when a company is training its model and inject corrupt data into the training dataset. Because these models are regularly retrained on the latest financial information to improve simulation performance, threat actors have recurring opportunities for data poisoning attacks. Additionally, some high-speed traders speculate that threat actors could flood the market with fake sell orders to confuse trading algorithms and potentially cause the market to crash.
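As a simplified illustration of poisoning a trade simulation, the toy example below derives the "optimal hour to trade" from synthetic historical returns and shows how injected fake records shift the recommendation; all figures are invented.

```python
# Toy trade simulation: pick the most profitable hour to trade from
# historical per-hour returns. All figures are synthetic.
import random
from collections import defaultdict

random.seed(0)
# Historical records: (hour_of_day, realized_return_in_bps). Hour 10
# is genuinely the most favorable hour in the clean data.
history = [(h, random.gauss(5 if h == 10 else 0, 2))
           for h in range(9, 16) for _ in range(200)]

def best_hour(records):
    by_hour = defaultdict(list)
    for hour, ret in records:
        by_hour[hour].append(ret)
    return max(by_hour, key=lambda h: sum(by_hour[h]) / len(by_hour[h]))

print(best_hour(history))   # 10

# An actor with write access to the training window injects fake records
# that make a late-day hour look highly profitable.
poisoned = history + [(15, 12.0)] * 300
print(best_hour(poisoned))  # 15: the simulation now recommends a bad time
```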

Risk Assessment and Modeling
Use: AI can assist the insurance sector’s underwriting process by examining client data and highlighting features it considers vulnerable ahead of market-moving actions (such as joint ventures, mergers and acquisitions, or research and development breakthroughs). Creating an accurate insurance policy ahead of such market catalysts requires a risk assessment that highlights a client’s potential weaknesses. Financial services firms can also employ AI applications to improve their risk models.

Potential Exploitation: If a company is conducting market-moving business with a foreign firm, state-sponsored espionage actors could use data poisoning attacks to cause AI models to over- or underestimate the value or risk associated with that firm, gaining a competitive advantage ahead of the planned trade activity. For example, espionage actors could feasibly use this advantage to turn a joint venture into a hostile takeover or to eliminate a competitor in a bidding process.

Businesses must be aware of the risks and vulnerabilities they may encounter when integrating AI applications into their workflows. These models are not static; they are routinely updated with new information to improve their accuracy, and this constant retraining leaves them vulnerable to manipulation. Companies should remain vigilant and regularly audit their training data to eliminate poisoned inputs. Additionally, where applicable, AI applications should incorporate human supervision to ensure that erroneous outputs or recommendations do not automatically result in financial disruption.
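One minimal form such an audit could take is sketched below, under the assumption of numeric training features: screen each incoming training batch against the established baseline with a robust z-score, and hold suspicious rows for human review rather than feeding them straight into retraining. The threshold and data are illustrative.

```python
# Sketch of a training-data audit gate: outlying rows in a new batch are
# held for human review instead of entering retraining automatically.
import numpy as np

def audit_batch(baseline, batch, z_max=4.0):
    """Split a batch into (accepted, held_for_review) by robust z-score."""
    median = np.median(baseline, axis=0)
    mad = np.median(np.abs(baseline - median), axis=0) + 1e-9
    z = np.abs(batch - median) / (1.4826 * mad)  # robust z-score per feature
    suspicious = (z > z_max).any(axis=1)
    return batch[~suspicious], batch[suspicious]

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(1000, 3))          # trusted history
batch = np.vstack([rng.normal(0, 1, size=(50, 3)),   # legitimate rows
                   rng.normal(12, 0.5, size=(5, 3))])  # poisoned rows

accepted, held = audit_batch(baseline, batch)
print(len(accepted), "rows accepted,", len(held), "held for human review")
```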

(The author is Strategic Intelligence Analyst at FireEye)


