The integration of computers into different finance processes is nothing new; high-speed trading and the dominance of algorithms in the markets are trends that have been discussed, analyzed, and reported on at length.
In areas such as fraud detection, risk management, credit rating, and wealth advisory, AI is already augmenting or even replacing human decision makers. In fact, not deploying AI capabilities in these fields can prove disastrous. With the ever-increasing amounts of data that need to be processed, AI systems are a must-have for improving accuracy.
The key point to remember in this conversation is that, as computers become increasingly sophisticated, there will also be drawbacks. As more and more trading is handled by computers, programs, and algorithms that operate without direct human oversight and intervention, there is a possibility that large swings in the market (that volatility word) will become more frequent.
As technological capabilities continue to improve, the amount of available data grows, and competitive pressures mount, the use of AI in finance will become pervasive. However, as with any new technology, the adoption of AI brings its own set of challenges. The concerns most often cited by regulators, customers, and experts can be grouped into the following categories:
- Bias
- Accountability
- Transparency
Potential Causes of Bias:
- An AI model is biased when it makes decisions that can be considered prejudiced against certain segments of the population. One might think that these are rare occurrences, since machines should be less 'judgmental' than humans. Unfortunately, as a series of widely reported incidents has shown, they tend to be far more commonplace, and AI failures can happen to even some of the largest companies in the world.
- How do these biases happen? One reason algorithms go rogue is that the problem is framed incorrectly. For instance, if an AI system calculating the creditworthiness of a customer is tasked with optimizing profits, it could soon drift into predatory behavior and target people with low credit scores for subprime loans. This practice may be frowned upon by society and considered unethical, but the AI does not understand such nuances.
- Another reason for unintended bias is a lack of social awareness: the data fed into the system already contains the biases and prejudices that manifest in the social system. The machine neither understands these biases nor considers removing them; it simply optimizes the model for the biases present in the data.
- Finally, the data itself may not be a representative sample. When there are few samples from certain minority segments, and some of those data points turn out to be bad, the algorithm can make sweeping generalizations based on the limited data it has, not unlike human decisions influenced by the availability heuristic (see the sketch after this list).
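A small simulation makes this last failure mode concrete. The following sketch (plain Python with NumPy; the 5% default rate and the group sizes are illustrative assumptions, not real lending data) shows how an estimate built on a handful of minority-segment records swings wildly even when both segments behave identically:

```python
import numpy as np

rng = np.random.default_rng(42)

# The true underlying default rate is identical for both segments: 5%.
TRUE_DEFAULT_RATE = 0.05

# The training data, however, is heavily imbalanced: 10,000 records
# from the majority segment versus only 20 from the minority segment.
majority_defaults = rng.random(10_000) < TRUE_DEFAULT_RATE
minority_defaults = rng.random(20) < TRUE_DEFAULT_RATE

# The majority estimate lands close to the true 5%...
print(f"Majority estimated default rate: {majority_defaults.mean():.3f}")

# ...but the minority estimate rests on 20 data points, so one or two
# bad loans swing it dramatically, and a model will happily generalize
# that swing to every future applicant from the segment.
print(f"Minority estimated default rate: {minority_defaults.mean():.3f}")

# Redrawing the small sample shows just how unstable that view is.
estimates = [(rng.random(20) < TRUE_DEFAULT_RATE).mean() for _ in range(1_000)]
print(f"Across 1,000 redraws the estimate ranges from "
      f"{min(estimates):.2f} to {max(estimates):.2f}")
```

A model trained on such a sample confidently encodes whichever extreme it happened to observe, which is precisely the sweeping generalization described above.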
Accountability Challenges:
- The core question is who is responsible when AI makes a wrong decision. If a self-driving car causes an accident, is it the fault of the owner who did not maintain the car correctly, or who did not respond when the algorithm made a bad call? Or is it purely an algorithmic issue? What about our earlier example of predatory lending: within what time frame is the firm employing the algorithm supposed to notice that something is amiss and fix it? And to what extent is it responsible for the damages?
- These are important regulatory and ethical issues which need to be addressed. There are risks related to the technology that must be carefully managed, especially when consumers are affected. This is why it's important to employ the concept of algorithmic accountability, which revolves around the central tenet that the operators of an algorithm should put in place sufficient controls to make sure it performs as expected (a minimal example of such a control follows this list).
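To make that tenet concrete, here is a minimal sketch of one such operator-side control (the `DecisionLog` structure and segment names are hypothetical; the trigger borrows the widely used 'four-fifths' disparate-impact heuristic). It reviews a batch of decisions and escalates to human review when one segment's approval rate falls too far below the best-served segment's:

```python
from dataclasses import dataclass

@dataclass
class DecisionLog:
    group: str      # customer segment the applicant belongs to
    approved: bool  # the algorithm's decision

def disparate_impact_check(logs: list[DecisionLog],
                           threshold: float = 0.8) -> dict[str, float]:
    """Return segments whose approval rate is below `threshold` times
    the best-served segment's rate (the 'four-fifths' heuristic)."""
    rates = {}
    for group in {log.group for log in logs}:
        group_logs = [log for log in logs if log.group == group]
        rates[group] = sum(log.approved for log in group_logs) / len(group_logs)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# The operator reviews yesterday's decisions before the model keeps running.
logs = (
    [DecisionLog("segment_a", True)] * 80 + [DecisionLog("segment_a", False)] * 20 +
    [DecisionLog("segment_b", True)] * 50 + [DecisionLog("segment_b", False)] * 50
)
flagged = disparate_impact_check(logs)
if flagged:
    print(f"Escalate for human review: {flagged}")  # segment_b at 0.625
```

A check like this does not fix a biased model, but it addresses the accountability requirement: the operator can demonstrate that deviations are detected and acted upon within a defined time frame.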
Missing Transparency:
- Many algorithms suffer from a lack of transparency and interpretability, making it difficult to identify how and why they reach particular conclusions. As a result, it can be challenging to detect model bias or discriminatory behavior.
- It's fair to say that this lack of transparency, and the prevalence of black-box models, is the underlying cause of the two challenges outlined above.
From anecdotal evidence and a review of market commentary, it does seem that the increasing technological dominance of trading may be producing several distinct effects.
First, while volatility, judged against historical levels, was low during the 2015–2018 period, this does not provide the entire picture. The decrease in volatility may not, as some have speculated, be associated with the increased efficiency generated by algorithmic trading programs, but rather with a related trend. ETFs, passive investing tools, and the growing assets invested in these options (trillions of dollars as of this writing) may also be having an outsized impact on volatility and trading patterns. Put simply, as larger and larger percentages of investors and funds invest through similar, if not identical, trading tools and platforms, this may very well have a depressive effect on market volatility.
This may seem like a positive effect to retail investors with jitters linked to increases in market volatility, but it masks an underlying problem. If investing decisions are made outside of human oversight and supervision, this can inadvertently lead to market selloffs, runoffs, and other actions that do not reflect the underlying economic reality.
This is a tremendous opportunity for financial advisors, planners, and other advisory-focused finance professionals to offer real-time, real-world, and actionable business insights to clients and customers in a market that can seem as if it operates outside the realm of normal possibility. Volatility, although depressed during 2017, seems to have returned to the market with force in 2018, emphasizing the importance of having a professional behind the wheel of the various automated services and processes. Simply executing certain processes, trades, and business transactions faster will offer no benefit to either the organization or its clients if those processes are poorly written or designed.
In order for practitioners to effectively leverage technology, they must understand not only how the technology itself works, but also how it can, and should, be applied to the business decision-making process itself.
Another area where AI can have, and already is having, an impact on the financial services landscape is the realm of ad hoc and management reporting, which constitutes a rather large percentage of the actual work performed by professionals in the space. Generating reports for management and supervisors simultaneously forms a plurality of the work performed by many accounting professionals and provides a way for professionals to quantitatively add value to the organization. Despite this, one of the key problems associated with internal management reporting, or ad hoc reporting, is that data is not generated consistently, systems do not communicate with each other, and there are inevitably time lags between when different classes of information are generated.
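A small example illustrates the reconciliation work this creates. The sketch below (pandas, with invented invoice numbers and amounts) joins extracts from two systems that do not talk to each other and flags the three classic symptoms: missing records, amount mismatches, and timing lags:

```python
import pandas as pd

# Hypothetical extracts: the general ledger posts daily, while the
# billing system records the same invoices with a lag.
gl = pd.DataFrame({
    "invoice_id": [1001, 1002, 1003],
    "gl_amount": [5_000.00, 1_200.00, 860.00],
    "posted": pd.to_datetime(["2018-03-01", "2018-03-01", "2018-03-02"]),
})
billing = pd.DataFrame({
    "invoice_id": [1001, 1002, 1004],
    "billed_amount": [5_000.00, 1_150.00, 430.00],
    "billed": pd.to_datetime(["2018-03-03", "2018-03-03", "2018-03-05"]),
})

# Outer-join on the shared key, then flag each class of problem.
recon = gl.merge(billing, on="invoice_id", how="outer")
recon["missing"] = recon["gl_amount"].isna() | recon["billed_amount"].isna()
recon["mismatch"] = (recon["gl_amount"] - recon["billed_amount"]).abs() > 0.01
recon["lag_days"] = (recon["billed"] - recon["posted"]).dt.days

print(recon[recon["missing"] | recon["mismatch"]])
```

In a real environment, every flagged row becomes a manual follow-up, which is exactly the low-value work that AI-driven reporting aims to eliminate.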
In the context of accounting professionals seeking to elevate both themselves and the work performed internally, the amount of time spent correcting errors and manually adjusting entries and information deprives professionals of the time needed to focus on higher-level activities. In other words, if accountants are spending too much time manually creating reports and fixing errors, those same professionals will never achieve the oft-cited role of strategic advisor or business partner.
Audit and attestation work, discussed previously and expanded upon throughout this text, represents a prime area where artificial intelligence will have an impact on the profession. Currently, the audit process has several pain points, chief among them the fact that the final audit opinion is heavily (if not exclusively) reliant on extrapolating from findings generated from a small sample of organizational information.
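A quick simulation shows why this reliance on sampling is such a pain point. In the sketch below (synthetic figures; the 2% error rate and the 60-item sample size are assumptions chosen for illustration), the misstatement projected from a standard-sized sample can land far from the true figure that a full-population, AI-driven test would surface directly:

```python
import random

random.seed(7)

# A hypothetical population of 10,000 invoices, 2% of which carry a
# $500 misstatement.
population = [500.0 if random.random() < 0.02 else 0.0 for _ in range(10_000)]
true_total = sum(population)

# The traditional approach: examine 60 items and extrapolate.
sample = random.sample(population, 60)
projected = sum(sample) / len(sample) * len(population)

print(f"True misstatement:     ${true_total:>10,.0f}")
print(f"Projected from sample: ${projected:>10,.0f}")
# Rerun with a different seed and the projection jumps in large steps,
# because each $500 error found in the 60-item sample extrapolates to
# roughly $83,000 across the full population.
```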
Even with the subsequent analytical procedures and substantive tests added to the audit examination process, audit failures are all too common. AI tools, such as those represented by the partnership between IBM Watson and KPMG, are already having a dramatic impact on audit testing, procedures, and how auditors interact with both current and future clients. This evolution and transition, from a compliance-oriented function focused exclusively on financial information to a more comprehensive process that can operate on a continuous basis, also connects to several other trends. Introduced here, but examined in more detail later in this book, the connection between assurance work, non-financial information, and the importance of this data to the decision-making process opens up a proverbial world of opportunities for accounting practitioners.
Tax reporting and the discussion of taxation issues are not normally associated with pleasant news, nor are they something management professionals look forward to, but that should not be perceived as the final state of the conversation. Specifically, and even in the current environment beset by changes in tax reporting, this debate and analysis can, and should, be perceived both as an opportunity and as part of the continuous management dialogue. Put simply, although the Tax Cuts and Jobs Act was passed right at the end of 2017 (December 22nd, to be specific), the ripple effects of this legislation are still being analyzed and processed by both individuals and organizations. Processing the sheer number of changes, running scenario analyses, and putting the results of these analyses into a format and report that is understandable for management decision making is both a role accounting professionals should play and a function enabled by AI tools. Taxes have an impact on the bottom line, will continue to guide investment and operational decisions moving forward, and will play a prominent role in the implementation and analysis of AI.
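To give a flavor of that function, the sketch below runs a deliberately oversimplified scenario analysis on the headline TCJA change, the move from a 35% top corporate rate to a flat 21% rate (the income scenarios are hypothetical, and a real model would layer in deductions, credits, and state taxes):

```python
# Headline TCJA change: a flat 21% corporate rate replacing the prior
# 35% top rate. Everything else (deductions, credits, state taxes)
# is deliberately ignored in this sketch.
PRE_TCJA_RATE = 0.35
POST_TCJA_RATE = 0.21

# Hypothetical pre-tax income scenarios for management review.
scenarios = {"downside": 2_000_000.0, "base": 5_000_000.0, "upside": 9_000_000.0}

print(f"{'Scenario':<10}{'Pre-tax':>14}{'Tax @35%':>12}{'Tax @21%':>12}{'Savings':>12}")
for name, income in scenarios.items():
    old_tax, new_tax = income * PRE_TCJA_RATE, income * POST_TCJA_RATE
    print(f"{name:<10}{income:>14,.0f}{old_tax:>12,.0f}"
          f"{new_tax:>12,.0f}{old_tax - new_tax:>12,.0f}")
```

The value an AI tool adds is not this arithmetic but the scale: running thousands of such scenarios across entities, jurisdictions, and interacting provisions, and reducing the output to a report management can act on.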
For financial institutions, it is clear that guidelines need to be put in place to help avoid bias, ensure safety and privacy, and make the technology accountable and explainable. AI doesn't have to be a black box; there are ways to make it more intuitive to humans, such as Explainable AI (XAI).
XAI is a broad term covering systems and tools that increase the transparency of the AI decision-making process to humans. The major benefit of this approach is that it provides insight into the data, variables, and decision points used to make a recommendation. Since 2017, significant effort has been put into XAI to solve the black-box problem. DARPA has been a pioneer in the effort to create systems that facilitate XAI, and the field has since gained industry-wide as well as academic interest. In the past year, we have seen a significant increase in the adoption of XAI, with Google, Microsoft, and other large technology players starting to create such systems.
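To give a concrete sense of what this tooling does, the sketch below applies one simple, model-agnostic XAI technique, permutation importance, to a synthetic credit model (the feature names and data are invented): shuffle one input at a time and measure how much the model's accuracy degrades, revealing which variables actually drive its decisions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic credit data: the first two features drive the outcome,
# the third is pure noise. Names are illustrative only.
n = 2_000
X = rng.normal(size=(n, 3))
y = ((0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.2 * rng.normal(size=n)) > 0).astype(int)
features = ["income", "credit_utilization", "zip_code_noise"]

# A random forest is a classic black box...
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...but shuffling one feature at a time and measuring the accuracy
# drop recovers which inputs the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:<20}{score:.3f}")
```

A readout like this does not open the box entirely, but it gives operators, auditors, and regulators a defensible account of which variables matter, which is exactly the transparency the challenges above call for.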
There are still challenges to XAI. The technology is nascent, and there are concerns that explainability compromises accuracy, or that adopting XAI compromises a firm's intellectual property. However, the success of AI will depend on our ability to create trust in the technology and to drive acceptance among users, customers, and the broader public. XAI can be a game changer, as it will help increase transparency and overcome many of the hurdles that currently prevent the broader adoption of AI.