AI itself is not inherently ethical or unethical; it is simply a tool that can be put to many purposes. Ethical considerations arise in how humans develop and use it.
In software development, AI can be used in unethical ways, such as perpetuating biases or discriminating against certain groups of people. Technology designers and developers therefore need to be aware of the ethical implications of AI and take steps to ensure their systems are developed and used ethically.
This can include measures such as conducting regular audits to detect and address bias, ensuring that the data used to train the AI is diverse and representative, and designing AI systems to be transparent and accountable. Additionally, developers should be mindful of the potential impact of their AI systems on society as a whole and strive to create systems that benefit all members of society rather than just a select few.
Ultimately, the ethical use of AI in software development requires a thoughtful and deliberate approach, with a focus on creating systems that are fair, just, and beneficial to all.
The ethics of artificial intelligence, broadly, are complex and have been debated for decades. They engage with topics as weighty as what it is to be human, and the future of humanity. Today we will be focusing more specifically on how artificial intelligence and machine learning are applied currently, and in the near future, to the field of service delivery: AI analysis of datasets, customer engagement via chatbots, and assessment of risk and eligibility, among other examples.
AI services have been building momentum rapidly in recent months, spurring a new wave of attention from the public (and scrutiny from regulators). This has made AI ethics as hot-button as they have ever been—while at the same time, broader layoffs in tech have brought cuts to the “responsible AI teams” of major tech giants. The time is now for everyone in the service design field to be thinking through what responsible AI integration may—or may not—look like.
High-level frameworks such as the European Union’s 7-point Guidelines for Trustworthy AI and the Montreal Declaration for Responsible Development of AI already exist. But it falls to us in each industry and workplace to understand how those might apply to AI in our situations.
As with Rules as Code, it is useful to think of AI not as a totally separate thing done by robots that need their own rules, but as something that amplifies our existing processes. That means the ethical considerations already relevant to those processes still apply, and they need to scale up accordingly.
AI can apply what we do to more things, at a much faster pace, and often with less transparency. It is our responsibility to ensure that ethical considerations do not end up “out of sight, out of mind” when the work is being performed very quickly and not by us directly.
Many of the key ethical concerns around implementing AI in services fall into two general areas: data handling and decision-making.
AI’s analytic effectiveness relies on large samples of information from which it can discern patterns. As we’ve said, AI can be understood as an “amplifier” or “accelerator” of existing processes. In this way, the ethical concerns about AI’s use of data resemble our existing ethical concerns around data collection and use.
They include: Do we have sufficient consent to use this data? Do its sources know their data is being used? Is it being kept secure, and safe from breaches? Is it being handled in a way that preserves privacy where relevant—is disaggregated data, for example, being kept fully separate or might AI that uses both datasets be able to re-connect them? Could AI with access to other large datasets connect contextual information from responses with similar comments or information on public profiles?
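As a toy illustration of that last re-identification question, the sketch below joins two hypothetical tables, one of “anonymous” survey responses and one of client records, on quasi-identifiers alone. None of the column names or values refer to a real system.

```python
import pandas as pd

# "Anonymous" survey responses, with no names attached.
survey = pd.DataFrame({
    "postal_code": ["K1A 0A6", "M5V 2T6"],
    "birth_year": [1984, 1991],
    "free_text_response": ["service was slow", "very helpful staff"],
})

# A separate client-records table held elsewhere in the organization.
client_records = pd.DataFrame({
    "postal_code": ["K1A 0A6", "M5V 2T6"],
    "birth_year": [1984, 1991],
    "name": ["Client X", "Client Y"],
    "case_id": [1042, 2318],
})

# A system with access to both tables can re-link them on quasi-identifiers
# alone, undoing the apparent anonymity of the survey responses.
relinked = survey.merge(client_records, on=["postal_code", "birth_year"])
print(relinked[["name", "case_id", "free_text_response"]])
```

An AI pipeline does not even need an explicit join: given both datasets, it can learn the same associations on its own, which is why keeping data “separate” on paper is not the same as keeping it separate in practice.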
Canada’s Office of the Privacy Commissioner announced on April 4th that it would be investigating OpenAI’s popular ChatGPT over allegations that it collects and discloses personal information without consent. That move came just after Italy placed a temporary ban on ChatGPT over personal data collection concerns.
Meanwhile, generative AI models that produce images rather than text have faced suits from artists and stock-image companies over the way they are trained on existing images and at times appear to reproduce portions of those images. In these cases, the data at issue is copyrighted material rather than personal information.
There are also concerns about how data sets are managed, not just what they contain. Some observers worry that the massive amount of processing AI requires consumes too much energy and hardware, or produces too many emissions. And because AI developers need so much data and computing power, most have deals with Big Tech companies that can provide massive cloud capacity, which has prompted antitrust scrutiny around supplier-competitor overlap and worries that the carbon cost of AI is being obscured.
Many of these macro issues and legal questions will take years to sort out. In the short term, service designers should make sure they are aware of what the AIs they use are doing and whether those processes are consistent with their usual data-use practices.
AI’s fast and large-scale analytical abilities can be used to make or to help make decisions about hiring, lending, investing, advertising, allocating resources, and much more. Essentially, AI can be a useful tool in any choices that involve parsing a lot of data and discerning patterns in successful past results.
Here, too, existing problems can happen faster and on a larger scale. Algorithmic decision-making offers some promise of being less shaped by personal opinions and biases. But it runs the risk of locking in and widely applying the biases of its programmers, or, more subtly, the biases present in previous decisions. AI is not innovative; it follows history. Innovation has no algorithm.
If an AI is trained to replicate previous human decisions, it may end up arriving at the same tendencies that were, in the original sample, motivated by bias. For instance, Karen Mills of Harvard Business School has warned that using AI in bank lending runs a risk of reproducing racial bias that has historically been present in lending. There are also documented concerns that mortgage and criminal justice machine-learning algorithms learn from and reinforce past disparities.
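One practical response is to audit outcomes on a regular schedule. The sketch below is a minimal example of such a check, assuming a hypothetical table of past AI-assisted decisions with a protected-attribute column and an approval flag; it compares each group’s approval rate with the best-off group’s and flags ratios below the common four-fifths rule of thumb.

```python
import pandas as pd

def audit_selection_rates(decisions: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate (the common "four-fifths" rule of thumb)."""
    rates = decisions.groupby("group")["approved"].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "approval_rate": rates,
        "ratio_to_highest": ratios,
        "flag_for_review": ratios < threshold,
    })

# Example with made-up data: group B's approval rate is one third of group A's.
sample = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 1, 1, 0, 0],
})
print(audit_selection_rates(sample))
```

A flag is not a verdict; it is a prompt for the humans responsible to investigate why the disparity exists and whether the system needs to be corrected.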
Bias in AI is not just a hypothetical risk—significant racial and gender disparity has already been documented in AI facial recognition. That is in addition to the concerns about how widespread facial surveillance and recognition, as a tool, can be used to further social inequalities and racist treatment. This speaks to the dual ethical concerns with AI: Its internal processes may have ethical issues, and its use by humans may have ethical issues.
In addition to data sets reflecting prior bias, there are many other ways in which inequalities can subtly enter into AI decisions. Are some demographics less represented, or less accurately represented, in the data sets the AI is trained on? Are there language or internet access barriers to contend with?
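A simple representation check can surface the first of those questions early. The sketch below assumes hypothetical group labels drawn from the training data and benchmark population shares supplied by the service owner; a real audit would use the categories actually relevant to the service.

```python
import pandas as pd

def representation_gap(training_labels: pd.Series, population_share: dict) -> pd.DataFrame:
    """Compare each group's share of the training data with its share of the
    population the service is meant to reach."""
    train_share = training_labels.value_counts(normalize=True)
    benchmark = pd.Series(population_share)
    return pd.DataFrame({
        "training_share": train_share,
        "population_share": benchmark,
        "gap": train_share.sub(benchmark, fill_value=0.0),
    })

# Example with made-up labels: rural users are 40% of the population served
# but only 20% of the training data.
labels = pd.Series(["urban", "urban", "urban", "rural", "urban"])
print(representation_gap(labels, {"urban": 0.6, "rural": 0.4}))
```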
Again, AI is not a way to hand over responsibility and not worry about it, but a way to speed up or expand certain parts of a process that we still need to be rigorous, careful, and transparent about. Transparency and accountability are important in service work overall but become especially crucial when the processes being performed become more opaque.
As Caltech’s Science Exchange has described, AI models’ processes are much less clear than those of traditional programs, since an AI model that automatically detects and adopts useful patterns “may find patterns a human does not understand and then act unpredictably.” This necessitates a form of AI transparency called “explainability”: if an AI arrives at a decision, can the people responsible for it explain exactly why that decision was made to the people affected by it?
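Full per-decision explanations usually require dedicated tooling, but even a global view of which inputs drive a model is a useful starting point. The sketch below uses scikit-learn’s permutation importance on a toy model with made-up feature names; it shows which inputs most affect predictions overall, not why any single decision was made.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic data: three stand-in features, with only the first one predictive.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "prior_flags"], result.importances_mean):
    print(f"{name}: mean importance {score:.3f}")
```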
Another area in which AI must be deployed responsibly is interaction with the public: Chatbots are often used to help users navigate websites and services. They do not make binding decisions, but they advise and guide the user. In these cases, it should be made clear to the user that they are interacting with a virtual assistant rather than a real person, and, if applicable, that their responses will be incorporated into its learning process.
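In code, that disclosure can be as simple as a fixed message sent before any other reply. The wording and the send_message() transport in the sketch below are hypothetical, not drawn from any particular chatbot platform.

```python
DISCLOSURE = (
    "You are chatting with an automated virtual assistant, not a person. "
    "Your messages may be reviewed and used to improve this service. "
    "Type 'agent' at any time to ask for a human representative."
)

def start_session(send_message) -> None:
    """Open every conversation with the disclosure before any other reply."""
    send_message(DISCLOSURE)

# Example usage, with print() standing in for the real chat transport.
start_session(print)
```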
There are also many of the same liabilities that exist when a human representative is talking to a user: Are the things being said legal, appropriate, and accurate? And if they are not, who bears the liability?
In recent months the media has featured many examples of users trying to coax a troubling or controversial response out of the popular ChatGPT interactive AI. In some cases they succeed; in others they are met with the cautious or conscientious response the AI has been trained to give to requests that raise ethical or public-relations concerns. As chatbots advance, industries that use them will need to be wary of these PR or liability pitfalls.
According to a recent study, developers of AI generally agree that it is a morally neutral tool, but many were also wary that its use in practice could seldom remain neutral. “It’s never as simple as pushing a button or writing a line of code that says: don’t be evil,” one said. “We don’t really understand how these systems work [and] it’s difficult to build systems with good incentives when there is a profit motive.”
We’ve already flagged a few ways in which AI could be used for irresponsible or malicious purposes, such as circumventing copyright law or privacy rights, or pursuing biased and overreaching surveillance. AI creates new avenues for unethical business practices that could exploit or endanger customers, the general public, vulnerable populations, staff, or others. Businesses will also need to determine how to responsibly handle the labour impact of AI.
Whether AI will cause mass job losses is hotly debated, with some experts forecasting wide upheaval and others expecting disruption but not mass displacement. The jobs considered most vulnerable to AI replacement are white-collar and administrative roles, which some fear could push more people into lower-paid industries. But the productivity gains AI brings could also be an opportunity for at least some companies and industries to prosper and even add staff.
AI needs human handlers, but how many, and with what particular training? Will these be new roles, and will they offset roles lost to AI? Or will AI be more integrated into existing jobs, with many people becoming more responsible for handling AI alongside their regular work?
Employers should consider what a responsible and sustainable incorporation of AI will look like, and whether it creates the possibility to reallocate responsibilities, retrain workers, reduce hours, or raise pay, rather than simply being a chance to cut costs by replacing staff. The value there is not just about doing right by team members—it is also that proactively allocating staff to overseeing AI can help discern and tackle the many ethical issues mentioned above.
Ultimately the thing that will best help us ensure that AI processes are accountable to human ethics is to have humans in those processes applying ethics—thinking through what can happen and regularly monitoring and assessing what does happen.
Image Credit: Midjourney