Ethical Leadership in an AI World: Navigating Complex Challenges with Integrity was originally published on Ivy Exec.
In a recent survey, IBM found that global adoption of AI in business reached 35% in 2022 – a four-percentage-point increase from 2021.
Additionally, 44% of businesses plan to add AI technologies to their current applications and processes.
“AI is rapidly providing new benefits and efficiencies to organizations worldwide through new automation capabilities, greater ease of use and accessibility, and a wider variety of well-established use cases. AI is both being applied through off-the-shelf solutions like virtual assistants and embedded in existing business operations like IT processes,” the company noted.
But like any new technology, AI has its ethical risks. For instance, AI may be discriminatory, invade privacy, and unfairly manipulate users.
In this guide, we’ll talk about how to avoid missteps with this new technology and integrate only ethical AI processes into your workplace.
Unethical AI Practices
Artificial intelligence can reinforce real-world prejudices against women, people of color, and other minority groups.
“Search-engine technology is not neutral as it processes big data and prioritizes results with the most clicks relying both on user preferences and location. Thus, a search engine can become an echo chamber that upholds biases of the real world and further entrenches these prejudices and stereotypes online,” said UNESCO.
This bias extends beyond search engine results. For instance, an investigation by The Markup found that Black applicants were less likely to be approved by mortgage algorithms.
In some cases, the algorithms denied Black applicants earning over $100,000 a year while approving loans for White applicants earning less.
AI can also run into privacy issues.
These algorithms learn through machine learning programs that cull data from social media, devices, search engine queries, recommendation feeds, and elsewhere.
“However, as AI tracks every click, view, duration of view, post, keyword search, and like, it builds complex profiles on individuals. This can create worrisome privacy concerns if profiles are sold or used for purposes users did not consent to,” said the Maryville University blog.
In hiring, AI has also been used to make inferences about applicants, such as their mental health and political leanings.
With the ability to build such detailed profiles of users, companies can use this information to prey on customers’ vulnerabilities.
“While this information may simply be used by businesses to offer individuals more personalized services and to deliver targeted marketing, such knowledge can also be used to create manipulative tools that prey on people’s weaknesses and propensities and guide them toward specific decisions,” said Maryville.
How to Adopt Ethical Standards
Reid Blackman, author of Ethical Machines, and Beena Ammanath, Executive Director of the Deloitte AI Institute, talk about how to develop AI integrity standards at your workplace.
Consider what your standards will be
Once you understand the ethical risks of AI, it’s time to think about how you want to mitigate them and to what degree.
In other words, how ethical do you think your AI should be?
“Suppose, for instance, your AI hiring software discriminates against women, but it discriminates less than they’ve been historically discriminated against. Is your benchmark for sufficiently unbiased ‘better than humans have done in the last ten years’?” ask Blackman and Ammanath.
The goal is to figure out the minimum ethical standards you expect your AI to reach. Then, you can articulate these standards to your team, customers, and regulators.
“They demonstrate your due diligence has been performed should regulators investigate whether your organization has deployed a discriminatory model,” add the authors.
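One common way to make such a benchmark concrete is the “four-fifths rule” from US employment guidelines: the selection rate for one group should be at least 80% of the rate for the most-favored group. The sketch below (the outcome data and the 0.8 cutoff are illustrative assumptions, not figures from this article) shows how a team might audit a hiring model against that kind of minimum standard:

```python
def selection_rate(outcomes):
    """Fraction of applicants who received a positive decision (1 = offered)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one.
    Values below 0.8 fail the 'four-fifths' benchmark."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical hiring outcomes for two applicant groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.70 ≈ 0.57 -> fails the 0.8 benchmark
```

A number like this gives you something articulable to share with your team, customers, and regulators, rather than a vague promise that the model is “fair.”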
Figure out where and how you’re not meeting these ethical standards
Once you have identified the standards you want to meet, the next step is to figure out the “technological maturity” necessary to meet these ethical demands.
For instance, if you want to implement a completely bias-free recruitment algorithm, what does that entail?
“Having productive conversations about what AI ethical risk management goals are achievable requires keeping an eye on what is technologically feasible for your organization,” suggested Blackman and Ammanath.
Create solutions that help you meet your ethical AI goals
The last step is to understand how AI inputs can introduce bias and other problems. Until you understand how a model arrives at its decisions, you won’t be able to design solutions to its ethical issues.
“Other issues abound: how inputs are weighted, where thresholds are set, and what objective function is chosen. In short, the conversation around discriminatory algorithms has to go deep around the sources of the problem and how those sources connect to various risk-mitigation strategies,” the authors note.
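To see why threshold choice matters, consider this small sketch (the model scores and groups are invented for illustration): raising a single decision cutoff can widen the gap in approval rates between groups even though the model itself is unchanged.

```python
# Hypothetical model scores for applicants in two groups
scores_a = [0.62, 0.70, 0.55, 0.81, 0.49]
scores_b = [0.58, 0.45, 0.66, 0.52, 0.40]

def approval_rate(scores, threshold):
    """Share of applicants whose score clears the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

for threshold in (0.5, 0.6):
    print(f"threshold={threshold}: "
          f"group A {approval_rate(scores_a, threshold):.0%}, "
          f"group B {approval_rate(scores_b, threshold):.0%}")
# At 0.5 the gap is 80% vs 60%; at 0.6 it widens to 60% vs 20%
```

This is why the authors insist the conversation go beyond “the algorithm is biased” to the specific sources of the problem: the same scores produce very different group outcomes depending on where the threshold is set.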
Adopting Ethical AI in Your Workplace
Artificial intelligence is a relatively new technology, meaning it will have its share of ethical issues, including the existing problems with bias and privacy described above.
To ensure integrity in your AI solutions, consider your ethical standards, ways you’re not meeting these standards, and ideas to improve your processes.
Want to learn more about ethical AI? Watch the webinar from Bikram Ghosh, Associate Professor of Marketing at The University of Arizona, titled “The Business and Application of Artificial Intelligence.”