However, there are still multiple risks when using AI, which is causing nervousness among FIs. These include operational, reputational, and legal risks, depending on how the technology is used, and these flaws can be magnified if the AI is malfunctioning, is hacked, or is producing biased or unethical outcomes.
Transcript for this episode:
AMIT TYAGI: The tricky part will come is what if the bank says,
well, this wasn’t our fault. This was the AI tool’s fault.
We get into an interesting debate about where the liability sits
and if you’re getting into a debate about where the liability
sits, you get into a debate about where the coverage should sit.
SPEAKER: Welcome to “All Eyes on FIs”, a podcast
series from the WTW Financial Institutions team. Our experts have
their eyes on risk management, regulatory changes, and coverage
challenges faced by financial institutions of all kinds and sizes,
from professional liability, to crime, and everything in
between.
MIGUEL CANO: Welcome everybody to episode four of “All Eyes on FIs”. My name is Miguel Cano and I am a senior associate broker here at WTW. I focus on management liability and digital asset risks for financial institutions. For today’s discussion, we’re going to be focusing on artificial intelligence.
We’re going to be talking about some of the use cases for financial institutions, the benefits, the risks. Then we’ll touch on the regulatory environment and finally, we’ll wrap it up with some of the potential implications for insurance policies.
Today, I have two amazing individuals who have joined me. First, we have Amit Tyagi. Amit is a partner and solicitor advocate in the London Insurance and Reinsurance group and a member of the firm’s cybersecurity team. Amit provides legal advice and incident response management to victims of cyber incidents, as well as responses to regulatory investigations and the defense of claims that arise. Amit also regularly advises the insurance market on technology-related claims and has a keen interest in what AI means for insurers and insureds. Amit, thank you very much for being here.
Additionally, we have Natalie Reid. Natalie Reid is a graduate analyst here at WTW. She’s a part of our financial institutions claims advocacy team, where she focuses on directors and officers, professional liability, and crime claims. Natalie also has a bachelor’s degree from Durham University. Natalie, thank you very much as well for being here.
To kick things off, Amit, I want to start off with you and I
want to ask you about what you see regarding the types of
innovations and use cases that your FI clients are currently
exploring or actually implementing with AI technology.
AMIT TYAGI: Thanks, Miguel and thank you both for inviting me on
this podcast. I’m very pleased to be here. CMS is one of the
largest law firms in the world and while we are privately exploring
what the use cases are for AI from a law firm perspective,
we’re also regularly speaking to our clients about the
opportunities and risks created by the use of AI, in particular in
the financial institutions space.
What I’d like to do is just quickly kick off with what I
mean by AI, because I think there’s quite a lot of debate about
what that term actually means at the moment. There’s so much
going around in the news, it’s sometimes difficult to separate
what people intend by that definition. In my view, it’s the use
of computer systems or software to perform tasks that normally
require human intelligence.
So data analytics, decision making, fraud detection, these are obvious things. But I guess the point I would like to make is that artificial intelligence is not a new thing. Businesses and financial institutions have been using technology to make their lives easier for a very long time.
I think what’s new at the moment, and the reason why people are so interested, is the leap taking place from what I would call relatively controlled electronic systems used to make life easier, to the new world of AI with a machine learning element that learns, predicts what’s going to happen in the future, informs the output, and uses that output to make decisions within the businesses and financial institutions that we’re speaking to.
Just to give you a couple of very simple examples of where FIs are already using AI. First of all, customer behavior analysis. Now, this can operate in a number of different spheres. So on one level, it can help the financial institution spot things like fraud, and if it can help spot fraud, it can help prevent fraud.
So flagging suspicious transactions, reviewing large volumes of data, spotting irregularities in the pattern, understanding where they have come in, and identifying that there might be an issue, and that it might be a fraud issue. That’s obviously one real-life use case of AI at the moment.
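To make that flagging idea concrete, here is a minimal Python sketch of anomaly-based transaction screening. It uses scikit-learn’s IsolationForest as a stand-in model; the single feature (transaction amount) and the contamination rate are illustrative assumptions, not any institution’s actual fraud logic.

import numpy as np
from sklearn.ensemble import IsolationForest

def flag_suspicious(amounts: np.ndarray, contamination: float = 0.25) -> np.ndarray:
    """Return a boolean mask of transactions that look anomalous."""
    model = IsolationForest(contamination=contamination, random_state=0)
    # IsolationForest labels outliers as -1 and inliers as +1.
    labels = model.fit_predict(amounts.reshape(-1, 1))
    return labels == -1

# Example: two large transfers hidden among routine small payments.
history = np.array([25.0, 40.0, 18.0, 32.0, 9500.0, 27.0, 8800.0, 30.0])
print(flag_suspicious(history))  # flags the two large transfers

In practice a tool like this would only surface candidates for human review, which is consistent with the point made later in this episode about keeping decisions explainable.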
But of course, it can also come from the other end of the spectrum as well. So we will see the amount of data that’s being produced about individuals and about corporate customers being used to effectively analyze what a particular customer is going to do, and you can see the use case for this.
Well, a) it should make the customer experience a lot better because if your bank or your financial institution already knows what you’re about to do, well, they should be able to do it more efficiently. But of course, if they also know what you’re about to do, they may be able to sell or market a product that’s more appropriate for that particular individual or that particular business.
Another couple of examples where I’m seeing AI used at the moment is in compliance and risk management. So AI can help financial institutions monitor and comply with complex regulations. Again, they will use tools like natural language understanding, reviewing knowledge graphs, and providing feedback on compliance, which will allow the individual, the human at the FI, to understand what their risk is and to try and operate against it. And of course, what AI will be able to do is monitor and track developments in the real world much more swiftly than an individual could. And finally, I’m hearing stories about AI being used for internal audits and investigations within financial institutions.
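As a rough illustration of that monitoring idea, the Python sketch below scans a feed of regulatory updates for terms a firm cares about and surfaces matches for human review. A real system would use natural language models and knowledge graphs rather than keyword matching, and the watchlist and feed items here are invented.

WATCHLIST = {"capital requirements", "model risk", "consumer duty"}

def relevant_updates(feed: list[str]) -> list[str]:
    """Return updates that mention any watched obligation, for human review."""
    return [item for item in feed
            if any(term in item.lower() for term in WATCHLIST)]

feed = [
    "PRA consults on model risk management principles",
    "Exchange announces revised trading hours",
]
print(relevant_updates(feed))  # only the model-risk item is surfaced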
So everything I’m hearing is mainly internally focused at the moment. I don’t know if that’s a fear about what’s happening externally, but those are the things I’m hearing about, and it’s certainly a great opportunity for financial institutions.
MIGUEL CANO: Thanks, Amit. That’s pretty incredible, and it definitely seems there’s potential there to have a lot of impact. I think we should touch on maybe some of the potential risks and, I guess, benefits that come along with implementing these technologies. Do you have any thoughts on that, Amit?
AMIT TYAGI: Yeah, I think as I mentioned at the beginning, the use of AI creates an enormous amount of opportunity and potential benefits for businesses. But it also comes with risks, so I think there’s a bit of give and take in these things. On the benefits side, of course, AI is going to enhance the efficiency, the accuracy, and the scalability of data processing and financial processes, with benefits across functions, and you can see this translating into quicker decisions on credit scoring.
I talked already about fraud detection. I’ve talked already about risk management. But ultimately, customer service is the thing that should change. All of these things should enable the FI to get to a result quicker and more accurately. If that’s right, and I accept it’s a big if at this stage, that should ultimately lead to cost savings for that financial institution, which can ultimately be passed on to the customer as well.
And if you think about the customer experience, I think that flags up another clear benefit that I touched on earlier, which is this ability to innovate and differentiate your products as an FI for a specific user. I think there are going to be way more customizable products and financial experiences that individual customers, be they individuals or be they corporates, will benefit from because of the use of AI in understanding their specific needs and opportunities.
And all of those things feed in, ultimately, to an increased top line for the financial institution. If you’re able to generate more revenue because you’re selling products that are more accurate and more relevant to your end-user customer, that should impact profitability, lower costs, and lead to a better overall service delivery.
But all of those things do come, as I say, with risks as well, and while there are loads of opportunities, what I’m also hearing a lot about is a slight nervousness from FIs to maybe understand what the real risk is behind these things. So just to give you some examples there: I think, obviously, AI can pose operational, reputational, and legal risks depending on how it’s used, and all of these risks become magnified if the AI itself is malfunctioning, or it’s hacked, or it’s producing biased or unethical outcomes. All of the risks that an FI could face, reputational damage included, can be magnified significantly because an AI tool may lead to a conclusion that perhaps can’t be understood and can’t be explained, and may lead to that damage without the business realizing it’s actually doing so. And I think that point about trustworthiness and understanding the decision-making process within FIs is going to be one of the biggest challenges that the use of AI presents to financial institutions. Because at the moment, if a regulator or a customer can’t understand why a decision has been made, then I think they’re not going to be able to trust it, and I don’t think that’s going to satisfy a lot of regulatory obligations at the moment, and if things go wrong, that will erode trust in the business. So I think until there is an ability for financial institutions to understand and explain how the AI they’re using has reached the conclusions and the decisions it’s taking, I think that’s a bit of a black box, and I think that will be an issue that businesses have to deal with.
NATALIE REID: I think that’s a very, very important point, or a few important points, that you’ve made, Amit, essentially highlighting that AI has been in the insurance industry for some time now, but its sophistication, and also some AI system errors that can arise, can exacerbate the risks to FIs and their customers. And with you mentioning customized AI tools for clients, I’m not sure if I’m reaching here, but we can somewhat argue that there is some client-sensitive data in there, and from that, I can highlight that from an FI/cyber claims trend perspective, we do see three escalating threats. So in descending order, we have data breaches, which can be malicious or accidental, we also have ransomware, and then finally, social engineering. So from what you’ve said, if there is an error in the system, these are definitely the three main AI or cyber threats that FIs should reflect on when ensuring the safeguarding of their clients and themselves. So yeah, I think that’s a really, really good point that you’ve made there, Amit, thank you.
AMIT TYAGI: Yeah, I think the use of this technology is not just in insurance companies; it’s in banks, it’s in asset management, and so it’d be interesting to hear how the use of this technology results in different risks and different issues for the insurance industry to become concerned about as well, and I think we’ll talk a little bit more about that later on.
MIGUEL CANO: Some very thoughtful answers from the both of you, so I appreciate that. I think now we want to move on to talk a little bit about what the regulatory environment looks like. So Natalie, I’m going to pass it off to you given that you’ve got the legal background; you can probably speak about this with Amit a lot better than I certainly could.
NATALIE REID: Yeah, sure I can do. Yeah, I hope we can agree on this, Amit, but there is no jurisdiction that currently has AI-specific legislation regulating machine or artificial intelligence. So for instance, you may see that UK law relies on various legislation to regulate AI, such as, let’s say, for instance, the Data Protection Act 2018, the Equality Act 2010, etc.
So with this legislative uncertainty, could you potentially touch on some of the approaches that we’re seeing regulators explore with regard to AI? And are regulators in the UK, EU, and US approaching this differently?
AMIT TYAGI: Yeah, it’s a really good question, Natalie, and I think the first point to make is that you’re right. I don’t think there’s any jurisdiction in the world that can point to itself and say, we’ve got quite a sophisticated, developed AI regulatory regime. I think all of the regulators around the world are grappling with one fundamental question about how to regulate the use of AI, and that question is, essentially, balancing the innovation opportunity against the risk that’s created by the use. This is a debate that’s raging right now in the UK, in the EU, and in the US in particular, and I think there are slightly different approaches that are already being signposted by the different regulators.
So to use those three broad churches as an example: I think in the UK, we are moving towards a pro-innovation approach to AI regulation. At the moment, there’s a government white paper, literally called “A pro-innovation approach to AI regulation”, working its way through the system, and what this has made clear is that the UK government’s approach is going to be, let’s try and be unprescriptive.
There’s going to be a risk-based outlook to it, focusing on regulating the riskiest applications as they manifest in a specific sector, but they are effectively saying that the existing regulatory framework is the starting point for regulating the use of AI going forward.
So for example, for consumer protection, they’re still
looking at the FCA. For data issues, they’ll be looking at the
ICO, the FCA, and the PRA. For model risk management, they’re
looking at the PRA. For operational resilience, they’re looking
at the PRA and the FCA. They are not envisaging a standalone
regulator for AI. They’re envisaging the existing regulators
taking on an additional remit to make sure the use of AI is
regulated correctly in their particular sphere.
Now, that to me says that it’s going to be a low-regulation environment, because you’re effectively relying on existing regulations to catch up with where you are at the moment, and the UK government has come out and said it sees the opportunity to be a deregulated space in this field and to promote the development of AI technology in the UK. So if you come out and say that’s one of your stated intentions, then I think it probably makes sense that you’re going to have a low regulatory impact.
If you contrast that with the EU, the EU seems to be mainly focused on the protection of EU citizens, and it’s fair to say they are going to have a more stringent approach, because all AI systems are going to need to be assessed even if it appears that they pose minimal risk.
And the EU is planning to classify AI technologies into three classes, including one called an unacceptable risk class, and if an AI system is deemed to be an unacceptable risk, it will be banned. An example of this is if there’s a risk of manipulation of the behavior of vulnerable groups.
If real-time and remote biometric systems are flagged within the system, the EU has come out and said it’s not interested in the AI regime permitting those types of businesses to operate. And then it will have a high-risk class as well, covering systems which negatively, or potentially negatively, impact safety or fundamental human rights.
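One way to picture the tiered approach described above is as a simple classification table. The Python sketch below uses the three classes mentioned here; the mapping of use cases to tiers is purely illustrative and is not taken from the legislation itself.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before and during use"
    MINIMAL = "light-touch obligations"

# Purely illustrative mapping, not the legislation's actual lists.
EXAMPLE_TIERS = {
    "manipulation of vulnerable groups": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "spam filtering": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Unknown uses default to HIGH so they are assessed, not waved through.
    return EXAMPLE_TIERS.get(use_case, RiskTier.HIGH)

print(tier_for("credit scoring").value)  # assessed before and during use

Defaulting unknown use cases to the high-risk tier mirrors the stated posture that systems should be assessed even if they appear low risk.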
I think what this says to me is that the EU is going to end up adopting a directive. It’s come out and said that there will be a liability directive which will favor the end user rather than the developer of the business. I might be wrong because, obviously, all these things come out in the wash and we need to see what the actual detail looks like, but the stated objective of seeking to ensure consumer protection tends to suggest to me that there will be much more regulatory oversight over AI businesses in the EU.
One thing I will say about the EU is that they’ve got on with it. So they were talking about potentially having some kind of rule in place by the end of this year. I think that’s optimistic, but it does look like by the beginning of next year, they will have a plan in place for having some kind of regulatory oversight.
That contrasts quite significantly with the US, where they’ve only just formulated a bipartisan group to create a commission focusing on how artificial intelligence should be regulated. Again, they’re wrestling with the same question. They want to promote innovation, but they want to protect consumers, and how they’re going to do that is the discussion that’s taking place.
My prediction is that it will end up being a little bit like data protection regulations around the world, where there will be a patchwork of regulations until there’s a Big Bang. The Big Bang for data protection was the GDPR, which we all know about, in the EU, and then we’ve seen subsequent legislation come in the US, we’ve seen it in the Middle East, and we’ve seen it in some Asian countries, broadly following the approach taken by the GDPR. My prediction is that because the EU is quite likely to be the first mover on AI, whatever regulations they end up introducing will probably be the regulations that end up being matched by other jurisdictions that come a bit later.
NATALIE REID: You clearly said that you see different approaches from all three jurisdictions, and you’re also highlighting that in the UK, there may be a low-regulation environment, or potentially a deregulated space. So do you have any advice or suggestions for companies who are exploring AI and how they might deal with the legislative and regulatory uncertainty?
AMIT TYAGI: Yeah, it’s a very, very difficult question. The first thing is that, I think, the regulation that does end up being implemented will have an impact on where businesses decide to operate. We see this quite a lot. In low-regulation environments, businesses tend to flock there because they see that as a lower cost base and a lower barrier to entry, and there’s anecdotal evidence that the UK is already seeing a bit of a flow of AI-related companies because of the promised low-regulation environment.
But how do businesses make sure they comply with the myriad obligations and the patchwork nature of these regulations? And I think the answer is: with great difficulty. The analogy I would draw is that it’s very similar to the ESG regulation that we’ve seen come in over the last 5 to 10 years. There are different rules in different countries, which doesn’t help businesses that tend to be multinational in nature these days and operate in a number of different regulatory environments.
So on one view, the different challenges may create an environment where you have to adopt the lowest risk appetite. Because if you’re operating in a number of jurisdictions, and you are running the risk of being subject to regulatory oversight by all three types of regulator that we talked about before, well, you’re probably going to have to go with the lowest, or the highest, depending on how you look at it, level of compliance, and that will be to make sure that you comply with the strictest level of regulatory oversight.
Subject to that, you may decide to move to a specific area where you think your regulatory oversight would be minimal, but I think the problem there comes with the extraterritorial effect of legislation, which I’m predicting will be very similar to the GDPR, which seeks to operate outside the jurisdiction if it’s capturing individuals who are European citizens. I think the AI regulations will go the same way.
So maybe, for financial institutions, the best practical advice I can give is: try and make sure that they align to a set of common standards, try and make sure that parallels can be drawn out of the different regulatory frameworks, and try and make sure that things like fairness, accountability, transparency, and explainability are at the forefront, because if you can hit those boxes, the chances are that you should be compliant in most of the regulatory environments that are still to come.
But you can’t navigate absolutely everything. I think being proactive and adopting a holistic approach to governance, making sure that they have clear policies, processes, and roles in place, making sure there’s a regular dialogue going on about regulation, collaborating with the relevant regulators and stakeholders, and making sure that the regulatory framework is being adhered to, that’s the only real practical advice I can give you. But my prediction is that some companies will prefer to move fast, and break things, and deal with the fallout at the back end. Some will move very slowly and make sure that they are 100% compliant in everything they do, but they will then suffer on innovation, and most will sit somewhere in the middle and have to work out what their risk-versus-reward appetite is.
For financial institutions, because of the risk-based approach that they tend to take to most decisions, my strong guess is that they will err on the side of caution, at least initially, and make sure that they are compliant with all of the regulatory requirements that may affect them, and that will then see a rise, exactly like we’ve seen in data protection and in cyber, of specialist jobs and specialist risk managers in those particular areas.
NATALIE REID: Thank you for that, Amit. That’s great advice for financial institutions in the AI space, and I think that, yeah, there’s a lot for them to think about before they go ahead and follow through with any regulatory processes, or any kind of processes they want to instill in the financial institution, to ensure that both the FI and their clients or customers are protected.
So yeah, I think that’s very, very important advice that Amit’s given. So thank you so much, and going on from there, is there any kind of ongoing litigation out there at the moment that you’re keeping a close eye on due to the potential implications of AI and its uses?
AMIT TYAGI: Yeah, there are. I’m a lawyer, so I’m obviously interested in things that are happening in the legal world, and if you wanted to, you could spend your whole life reading about AI and reading about developments. But there are a couple of cases that I’m keeping an eye on, just because I think their long-term, wider impact will be really interesting for seeing how the use of AI, not just by FIs but by all businesses, will eventually pan out.
The first one is about intellectual property, and it’s Getty Images versus Stability AI, which is a High Court claim in England. Essentially, Getty Images is claiming that Stability AI has unlawfully copied and processed all of its images. We all know Getty Images: if you google something on the internet and look for a picture, invariably, Getty Images will have one with a watermark on there.
They are alleging that Stability AI has ingested all of this
data, which is their intellectual property, and is utilizing it for
their AI tool and they are claiming damages as a result of the harm
suffered from that conduct. The reason I’m interested in this
is because this highlights one of the fundamental questions about
the use of AI going forward.
At the moment, these tools are built on open internet knowledge. So they have scraped the internet for all of the intellectual property that’s on there, but a lot of that intellectual property does belong to someone else. And if it’s found that it’s been illegal for them to use it, and I’m not just picking on Stability AI here, I’m talking about all the AI tools.
If it’s held that it was illegal for them to derive their knowledge from open-source information that belongs to someone else, well, I think that will have a fundamental impact on the ability of these tools to function, because they rely on the amount of intellectual data they’re able to ingest. If that becomes stymied, or there is a cost to it, either that will slow down progress, or it will increase cost, or both. So I’m quite interested to see how that case pans out in terms of their ability to use data from other sources.
And then the other case that I’m quite interested in is the class action against OpenAI, which has been launched in America. There’s a federal class action lawsuit against OpenAI, the Microsoft-backed company that developed ChatGPT, and it’s all about misappropriating personal information for training purposes.
The reason I’m interested in this case is to see whether the existing laws are actually fit for purpose, and what I mean by that is that the current laws being cited in this class action are all existing laws: the Computer Fraud and Abuse Act, the Electronic Communications Privacy Act, lots of state consumer rights laws, and common law torts.
What I’m interested to see is whether the existing law is sufficient to hold companies to account, or even for affected individual plaintiffs to bring a claim with the right cause of action behind it, or whether we need something more bespoke to AI risks. And this builds into what I was saying about the regulations earlier.
Are we going to see countries needing to develop entirely new regulatory regimes to deal with AI, to deal with the claims that come out, and to deal with the penalties and sanctions that businesses that misuse AI should face? Or will the existing regulatory framework, will the existing laws in these countries, be sufficient? And will it be left to the lawyers and the judges to manipulate the existing law to hold the right individuals to account if things do go wrong? So that’s why I’m interested in that particular case, and it’s a case of watch this space on both of them.
NATALIE REID: Thank you, and yeah, just going off of that, you’re essentially highlighting that litigation can prompt a change in legislation, rather than new legislation then guiding the process or progress of litigation. So I think that’s definitely a really, really interesting point to take there. So thank you so much for that, and yeah, given that we are an insurance broker, we can’t help but spend some time talking about insurance and the potential implications of AI for the policies themselves. Could I ask you, Miguel, if you could share a bit of insight into how we might see some of the current insurance policies change or be impacted?
MIGUEL CANO: Thanks, Natalie. I would say that in terms of actual implications for the policies at this point, there really haven’t been any as of yet. This technology is changing and evolving really quickly, but it’s still really early on. So in terms of claims, there haven’t been many that have arisen, and to the extent that there might be a claim related to AI, I think for the most part, as of right now, it would be covered. Just to give you two examples of this, one could be related to social engineering. Social engineering is where someone sends an email pretending to be the CFO, or someone gives a call pretending to be X individual, and then they trick somebody into sending money, essentially phishing. And within social engineering coverage, there’s typically something called a callback feature, where if you have to send X amount of money from one account to another, typically, the company has to call back and just confirm via voice that this is actually you and you actually meant to send this money.
Well, now, there’s AI technology that can actually replicate and create fake voices for individuals that sound nearly identical, and this type of technology can be used to facilitate this type of risk. As it stands right now, if this technology were used to carry out these types of attacks, most policies would still provide full coverage. There really haven’t been any changes to the language as of yet.
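To illustrate why a voice-only callback becomes a weak control once voices can be cloned, here is a hypothetical Python sketch of a confirmation flow that treats voice as spoofable and requires a second out-of-band factor above a threshold. The function names, threshold, and factors are invented for illustration, not drawn from any actual policy or product.

from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount: float
    beneficiary: str

def approve_transfer(request: TransferRequest,
                     voice_confirmed: bool,
                     one_time_code_ok: bool,
                     threshold: float = 10_000.0) -> bool:
    """Approve only if the callback succeeds and, above a threshold,
    a second out-of-band factor also checks out."""
    if not voice_confirmed:
        return False
    if request.amount >= threshold:
        # Voice alone is treated as spoofable by cloning tools.
        return one_time_code_ok
    return True

request = TransferRequest(amount=250_000.0, beneficiary="Example Ltd")
# A cloned voice can pass the callback but fails the second factor.
print(approve_transfer(request, voice_confirmed=True, one_time_code_ok=False))  # False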
So I think, as of now, for the most part, these types of claims are going to be covered. It will just be interesting to see, if claims actually arise, how carriers will decide to exclude or change their language to potentially limit their losses related to that.
Another quick example could be on the professional liability lines of business. Really, the technology, for the most part right now, isn’t being used for professional services; it’s used more internally. But to the extent that it begins to be used as an actual professional service, actually replacing a human to create and perform some service, as it stands right now, there really isn’t any exclusionary language that would prevent coverage if a claim were to arise from the AI committing some type of error in the service it actually provides.
So the implications, as of right now, really are none. If we start to see claims, if we start to see individuals actually start to use AI technology for these types of events, it’s likely that we’ll see carriers either start to ask more questions or change their language to exclude or limit their exposure there.
NATALIE REID: Thank you so much, Miguel, and moving on from impacts on current insurance policies, I think it would be great to also touch upon future insurance policies. So Amit, this question is for you. How do you think AI can change insurance policies in the future? And do we think the policies will always be fit for purpose, or will we essentially need to alter them in future?
AMIT TYAGI: It’s a really good question, Natalie, and I think I would just echo what Miguel says. At the moment, it’s very difficult to see how an existing insurance policy, let’s use a crime policy as an example, would necessarily need to change overnight because of the use of AI. Just because an employee uses AI to be dishonest doesn’t mean the policy shouldn’t respond, and the same goes for computer fraud.
But I think the really interesting thing is going to be: how do the insurance market, regulators, and lawmakers deal with AI in the future? And this is why I’m particularly interested in the two cases that I talked about before, because I can see a situation where the use of AI becomes so sophisticated, so ingrained in the day-to-day conduct of a business, that soon there will be new claims and new risks which will need to be provided for.
So the type of thing I’m talking about is if there is an AI tool that is so intrinsic to the provision of a professional service that it’s ultimately considered to be part of the service. That’s fine at a primary level: if it goes wrong, the client is still going to sue the bank, and the bank’s going to seek to recover under the insurance policy.
The tricky part will come is: what if the bank says, well, this wasn’t our fault? This was the AI tool’s fault. You get into an interesting debate about where the liability sits, and if you’re getting into a debate about where the liability sits, you get into a debate about where the coverage should sit as well, because perhaps the wording of the policy requires there to be a legal liability.
Now, that might be fine, because we might just end up using existing wording and getting clarification on where the liability does sit, and so coverage follows. But we might not, and there might be a need to develop specific policies that envisage risks where liability can’t be allocated.
So you end up with either strict liability claims or strict liability insurance policies, a bit like a parametric policy: if X happens, then Y is paid out, and there doesn’t need to be a finding of liability, just a specific trigger event that takes place. I’m completely hypothesizing here because I haven’t seen this yet.
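As a minimal sketch of that parametric idea, with invented trigger events and amounts: the payout depends only on whether a defined event occurred, with no finding of liability required.

# Invented trigger events and amounts, for illustration only.
PARAMETRIC_SCHEDULE = {
    "ai_model_outage_over_24h": 250_000.0,
    "regulator_confirmed_biased_output": 500_000.0,
}

def payout(trigger_event: str) -> float:
    """Pay the pre-agreed amount if a defined event occurred, else nothing."""
    return PARAMETRIC_SCHEDULE.get(trigger_event, 0.0)

print(payout("ai_model_outage_over_24h"))  # 250000.0, paid on the trigger alone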
But I think the point to make is, as the use of this technology grows, and as it becomes ingrained in the provision of professional services and in the internal use of fraud prevention, well, what additional risks does that create? And where does that come in, in an insurance context? And who will bear the risk? And who will actually pick up the claim when it comes in?
I don’t have answers to these questions, but I certainly do predict that policies will look different in the future, to either deal directly with AI risks or to deal with new types of claims that we haven’t even thought of yet.
MIGUEL CANO: Yeah, that’s great, Amit, and just to close things up here, I think that’s all the time we have. But Amit, I wanted to thank you, again, for taking the time to be here. Natalie as well, thank you for providing your insights.
AMIT TYAGI: Thank you.
NATALIE REID: Thank you.
MIGUEL CANO: And with that, we’ll conclude our fourth episode of “All Eyes on FIs”. I hope you found this insightful, and we’ll see you in the next podcast. Thanks very much.
SPEAKER: Thank you for joining this WTW podcast featuring the latest thinking and perspectives on people, capital, climate, and risk in the financial services industry. For more information, visit wtwco.com. Willis Towers Watson offers insurance-related services through its appropriately licensed and authorized companies in each country in which Willis Towers Watson operates.
For further authorization and regulatory details about our Willis Towers Watson legal entities operating in your country, please refer to our Willis Towers Watson website. It is a regulatory requirement for us to consider our local licensing requirements.
The information given in this podcast is believed to be accurate
at the date of publication. This information may have subsequently
changed, or have been superseded, and should not be relied upon to be accurate or suitable after this date. This podcast offers a general overview of its subject matter. It does not necessarily address every aspect of its subject or every product available in the market. And we disclaim all liability to the fullest extent permitted by law.
It is not intended to be and should not be used to replace specific advice relating to individual situations. And we do not offer, and this should not be seen as, legal, accounting, or tax advice. If you intend to take any action or make any decision on the basis of the content of this podcast, you should first seek specific advice from an appropriate professional.
Some of the information in this podcast may be compiled from
third-party sources we consider to be reliable. However, we do not
guarantee and are not responsible for the accuracy of such. The
views expressed are not necessarily t،se of Willis Towers Watson.
Copyright, Willis Towers Watson, 2023. All rights reserved.
Source: http://www.mondaq.com/Article/1394304