
IT Leadership // Security & Risk Strategy
Commentary
5/28/2020 08:00 AM
Lisa Morgan

Why AI Ethics Is Even More Important Now

Contact-tracing apps are fueling more AI ethics discussions, particularly around privacy. The longer term challenge is approaching AI ethics holistically.

Image: momius - stock.adobe.com

If your organization is implementing or thinking of implementing a contact-tracing app, it's wise to consider more than just workforce safety. Failing to do so could expose your company to other risks, such as employment-related lawsuits and compliance issues. More fundamentally, companies should be thinking about the ethical implications of their AI use.

Contact-tracing apps are raising a lot of questions. For example, should employers be able to use them? If so, must employees opt in, or can employers make participation mandatory? Should employers be able to monitor their employees during off hours? Have employees been given adequate notice about the company's use of contact tracing, where their data will be stored, for how long, and how the data will be used? Enterprises need to think through these questions and others because the legal ramifications alone are complex.

Contact-tracing apps underscore that ethics should not be divorced from technology implementations, and that employers should think carefully about what they can, cannot, should, and should not do.

"It's easy to use AI to identify people with a high likelihood of the virus. We can do this, not necessarily well, but we can use image recognition, cough recognition using someone's digital signature and track whether you've been in close proximity with other people who have the virus," said Kjell Carlsson, principal analyst at Forrester Research. "It's just a hop, skip and a jump away to identify people who have the virus and mak[e] that available. There's a myriad of ethical issues."

The larger issue is that companies need to think about how AI could affect stakeholders, some of whom they may not have considered.

Kjell Carlsson, Forrester

"I'm a big advocate and believer in this whole stakeholder capital idea. In general, people need to serve not just their investors but society, their employees, consumers and the environment and I think to me that's a really compelling agenda," said Nigel Duffy, global artificial intelligence leader at professional services firm EY. "Ethical AI is new enough that we can take a leadership role in terms of making sure we're engaging that whole set of stakeholders."

Organizations have a lot of maturing to do

AI ethics is following a trajectory akin to that of security and privacy. First, people wonder why their companies should care. Then, when the issue becomes obvious, they want to know how to implement it. Eventually, it becomes a brand issue.

"If you look at the large-scale adoption of AI, it's in very early stages and if you ask most corporate compliance folks or corporate governance folks where does [AI ethics] sit on their list of risks, it's probably not in their top three," said EY's Duffy. "Part of the reason for this is there's no way to quantify the risk today, so I think we're pretty early in the execution of that."

Some organizations are approaching AI ethics from a compliance point of view, but that approach fails to address the scope of the problem. Ethics boards and committees are necessarily cross-functional and otherwise diverse, which lets companies think through a broader scope of risks than any single function could alone.

AI ethics is a cross-functional issue

AI ethics stems from a company's values. Those values should be reflected in the company's culture as well as in how the company uses AI. One cannot assume that technologists, working on their own, can build or implement something that will necessarily produce the desired outcomes.

"You cannot create a technological solution that will prevent unethical use and only enable the ethical use," said Forrester's Carlsson. "What you need actually is leadership. You need people to be making those calls about what the organization will and won't be doing and be willing to stand behind those, and adjust those as information comes in."

Translating values into AI implementations that align with them requires an understanding of AI, the use cases, who or what could potentially benefit, and who or what could potentially be harmed.

"Most of the unethical use that I encounter is done unintentionally," said Forrester's Carlsson. " Of the use cases where it wasn't done unintentionally, usually they knew they were doing something ethically dubious and they chose to overlook it."

Part of the problem is that risk management professionals and technology professionals are not yet working together enough.

Nigel Duffy, EY

"The folks who are deploying AI are not aware of the risk function they should be engaging with or the value of doing that," said EY's Duffy. "On the flip side, the risk management function doesn't have the skills to engage with the technical folks or doesn't have the awareness that this is a risk that they need to be monitoring."

To rectify the situation, Duffy said three things need to happen: building awareness of the risks, measuring their scope, and connecting the dots among the various parties, including risk management, technology, procurement, and whichever department is using the technology.

Compliance and legal should also be included.

Responsible implementations can help

AI ethics isn't just a technology problem, but the way the technology is implemented can impact its outcomes. In fact, Forrester's Carlsson said organizations would reduce the number of unethical consequences simply by doing AI well. That means:

  • Analyzing the data on which the models are trained
  • Analyzing the data that will influence the model and be used to score the model
  • Validating the model to avoid overfitting
  • Looking at variable importance scores to understand how AI is making decisions
  • Monitoring AI on an ongoing basis
  • QA testing
  • Trying AI out in a real-world setting, using real-world data, before going live

"If we just did those things, we'd make headway against a lot of ethical issues," said Carlsson.

Fundamentally, mindfulness needs to be both conceptual, as expressed in values, and practical, as expressed in technology implementation and culture. However, there should be safeguards in place to ensure that values aren't just aspirational concepts and that their implementation does not diverge from the intent behind them.

"No. 1 is making sure you're asking the right questions," said EY's Duffy. "The way we've done that internally is that we have an AI development lifecycle. Every project that we [do involves] a standard risk assessment and a standard impact assessment and an understanding of what could go wrong. Just simply asking the questions elevates this topic and the way people think about it."

For more on AI ethics, read these articles:

AI Ethics: Where to Start

AI Ethics Guidelines Every CIO Should Read

9 Steps Toward Ethical AI

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include ...