The Ethical Risks of Automation: A Primer
As organisations increasingly pursue efficiency gains through automation, the ethical risks of automating complex decisions that require human judgment and oversight are coming into sharper focus.
Advances in technology and artificial intelligence (AI), together with the proliferation of data accessible to organisations across many industries, are creating opportunities to automate elements of business processes that previously relied on human decision makers, spanning both manual and cognitive tasks.
While the promised benefits of automated decision-making systems, such as increased speed and accuracy, are often highlighted, the associated ethical risks of unintended harmful impacts on customers, consumers, employees, the community at large, and the environment are often glossed over.
The key risk for organisations is that automation is treated as a purely technology-related issue, while its human, social, and economic implications are overlooked. CEOs cannot absolve themselves of accountability for the consequences of automated decision-making systems, especially where those decisions have material impacts on people, including potential bias, unfairness, or discrimination against vulnerable groups (e.g. based on ethnicity, gender, age, or health).
Organisations need to weigh the benefits and risks of automation initiatives carefully and consider the real-life impacts of these changes. Ethical implementation of automation in decision-making should form part of organisational risk management, accountability, and governance frameworks, and be assessed against an ethical framework that extends the organisation's values and code of conduct.
THE BALANCING ACT
To ensure that automated decision-making systems operate within the contexts for which they were designed, the appropriate boundaries for fair and safe use should be clearly communicated to all relevant stakeholders. High-risk uses of automation must have a clear escalation pathway, meaningful human oversight, and ongoing monitoring for potential harm.
To realise the benefits of automation ethically, organisations must be clear on whether those benefits extend beyond their own commercial interests to include clear benefits for their customers, while mitigating unintended social and economic harms.
Ethical risks arise when the pursuit of profit is prioritised above the concerns and interests of individuals. Any application of automated decision-making systems requires constraints and human judgment to ensure that human rights are respected and organisational values are upheld.