Ethical Challenges of Emerging Technologies

Introduction – Ethics as a Field of Study

The study of ethics deals with the analysis of concepts of right and wrong behavior. Ethical theories are primarily categorized into three areas:

  • Metaethics investigates the origin and meaning of the ethical principles of human life.
  • Normative ethics is oriented towards the practical aspects of life. It defines and regulates right and wrong behavior and the consequences thereof.
  • Applied ethics addresses ongoing controversial issues like LGBT rights, abortion, capital punishment, and the like.

Ethical Dilemma – An Example Relevant to Autonomous Vehicles

The Trolley Problem is a major research area in Ethics, and it is highly relevant to Emerging Technology examples like Autonomous/Self-Driving/Driverless Vehicles. It is also known as the “Bystander at the Switch” or “Switch” Dilemma.

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?

(From Wikipedia)

How IT Applications Challenge the Ethics of a Good Society

IT Applications are, often inadvertently, introducing biases against the following primary categories of citizens:

  1. Women
  2. People with Disabilities   
  3. People of Color
  4. Elderly people
  5. People of developing and under-developed countries

IT Applications are used to make courtroom decisions, hiring decisions, credit decisions, and other similarly important decisions. Any bias in those decisions begets discrimination, and that discrimination in turn breeds inequality across society. The root cause may be skewed data, an inaccurate algorithm, or both. The self-perpetuating effect of imprecise IT applications institutionalizes bias, discrimination, and inequality, with long-term negative impacts on society.
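The skew described above can be made measurable. The sketch below, with invented decision data and group labels, computes the selection rate per group and the disparate impact ratio between a protected group and a reference group; a ratio well below 1.0 signals that the automated decisions disadvantage one group.

```python
# Hypothetical illustration: measuring disparate impact in automated decisions.
# The data, group names, and the 0.8 rule of thumb are assumptions for the example.

def selection_rate(decisions, groups, target_group):
    """Fraction of people in target_group who received a positive decision (1)."""
    members = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(members) / len(members) if members else 0.0

def disparate_impact(decisions, groups, protected, reference):
    """Ratio of selection rates; values below roughly 0.8 are often flagged."""
    return (selection_rate(decisions, groups, protected)
            / selection_rate(decisions, groups, reference))

# 1 = approved, 0 = denied
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(disparate_impact(decisions, groups, protected="B", reference="A"))  # 1/3
```

A ratio of 1/3 here means group B is approved at one third the rate of group A, exactly the kind of skew the paragraph above warns about.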

Application Areas Compromising the Ethical Standards of Society

Emerging Technologies are all-pervasive, with a huge positive impact on society. Despite that beneficial influence, the application areas given below demand serious governance and diligent oversight:

Autonomous/Self-driving/Driverless Cars – How should the decisions around a fatal crash of an autonomous/driverless/self-driving vehicle be handled? It is a very important area for both the Government and the Private Sector to ponder before rolling these vehicles out on the roads. Public websites attempt to gather opinions on this dilemma.

For a detailed analysis of the ethics of autonomous/driverless/self-driving vehicles, please refer to this:

AI-led Hiring – Many forms of bias related to racism, sexism, ableism, homophobia, and xenophobia affect AI-led hiring decisions.

AI in the Courtroom – The COMPAS algorithm, used in the U.S. criminal justice system, was found to have a significant bias against people of color. For details:

AI in Creativity – It is arguably not serving the well-being of the creative communities.

Computer Vision-based Face Recognition for Mass Detention – It can legitimize violence against marginalized people.

Emerging Technologies applied with harmful intentions, such as Automated Weapons, AI in Surveillance with Privacy Infringement, and AI in the Manipulation of Human Judgement, mainly through Social Media, DeepFakes, etc., are raising serious concerns.

Governance around Ethics of Information Technology Professionals

Business requirements are determined by business users; in practice, developers do not decide the requirements of an application. The actual ethical issues begin at the point when the code is running without any error. Then questions like “Has the application been designed for inclusion?” arise.

The Association for Computing Machinery has published a Code of Ethics for software professionals. Details of the Code of Ethics are here:

Art. 22 GDPR defines the policy on automated individual decision-making (including profiling): the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her. Please refer to the following for the policy details:

Data Ethics principles should govern Data Collection, Data Storage, Data Access, Data Use, and, above all, Data Sharing. It is very important for organizations to respect the human beings behind the personal data they possess and, in turn, to respect the privacy of their customers.

There are ongoing discussions around the Governance of Algorithmic Bias in IT Applications. A significant number of policies are in the pipeline. For details, please refer to:

Actions to Ensure Social Good

In important application areas, AI decisions must pass through Human Judgement before being executed.
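A minimal sketch of such a human-in-the-loop gate is shown below. The domain names, confidence threshold, and routing labels are all assumptions for illustration: high-stakes domains and low-confidence predictions are routed to a human reviewer instead of being executed automatically.

```python
# Sketch of a human-in-the-loop gate (domain names and threshold are illustrative).
HIGH_STAKES = {"sentencing", "hiring", "credit"}

def route_decision(domain, confidence, threshold=0.95):
    """Return 'execute' only for low-stakes, high-confidence AI decisions;
    everything else is routed to human review before any action is taken."""
    if domain in HIGH_STAKES or confidence < threshold:
        return "human_review"
    return "execute"

print(route_decision("credit", 0.99))       # high-stakes -> human_review
print(route_decision("spam_filter", 0.99))  # low-stakes, confident -> execute
```

The design choice here is deliberately conservative: a high-stakes domain always requires human review, regardless of how confident the model is.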

Fair Data, Blind Algorithm – The data used to train ML algorithms needs to be diversified to filter out any bias. Similarly, algorithms must be scrutinized to deliver Trustworthy, Meaningful, and Explainable AI.

Training data must be diverse to prevent any leakage of bias from racial, gender, ethnic, and age standpoints. The collection of representative datasets from the affected communities is important.
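One simple way to diversify a dataset is to oversample under-represented groups until every group appears equally often. The sketch below assumes a hypothetical "group" field on each record; any real pipeline would balance on its own sensitive attributes.

```python
import random

# Illustrative sketch (field name "group" is an assumption): oversample
# under-represented groups so each group is equally frequent in training data.
def balance_by_group(rows, key):
    groups = {}
    for r in rows:
        groups.setdefault(r[key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly resample the minority group up to the majority size.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

data = [{"group": "A"}] * 6 + [{"group": "B"}] * 2
balanced = balance_by_group(data, "group")  # 6 of "A", 6 of "B"
```

Oversampling is only one option; collecting genuinely representative data, as the paragraph above notes, is preferable to duplicating a small sample.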

Ethics by Design – The traceability of any discriminatory decision made by AI back to the actual data element or characteristic of the algorithm is very important. Only the identification of the root cause can prevent its recurrence through the necessary redressal.

A Blind Test of an AI system, run with and without a particular feature of the potentially skewed dataset, can pinpoint bias leakage in the system.
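Such a blind test can be sketched as a feature-ablation check. In this hypothetical example, a toy linear scorer is run with and without a sensitive proxy feature (here an invented "zip_code" weight); a large gap in scores shows how much that feature drives the outcome.

```python
# Hypothetical feature-ablation ("blind") test. The scoring weights and
# feature names are invented for illustration, not from any real system.

def score(applicant, weights):
    """Toy linear model: weighted sum of the applicant's features."""
    return sum(weights.get(k, 0) * v for k, v in applicant.items())

def ablation_gap(applicants, weights, feature):
    """Mean absolute score change when `feature` is hidden from the model."""
    blind = {k: w for k, w in weights.items() if k != feature}
    gaps = [abs(score(a, weights) - score(a, blind)) for a in applicants]
    return sum(gaps) / len(gaps)

weights = {"experience": 1.0, "zip_code": 0.8}  # zip_code may proxy for race
applicants = [{"experience": 5, "zip_code": 1},
              {"experience": 3, "zip_code": 0}]

print(ablation_gap(applicants, weights, "zip_code"))  # 0.4
```

If hiding a feature that should be irrelevant changes the scores substantially, that feature is a likely channel of bias leakage.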

Bias Mitigation through pre-processing, in-processing, and post-processing of training data is necessary.

Bias Elimination in Hiring – To ensure that all applicants receive consideration for employment without regard to race, color, religion, sex, or national origin, diverse datasets are important for training ML Algorithms. Calculating accuracy separately for each discriminating feature helps identify the nature of the skewness.
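The per-group accuracy calculation mentioned above can be sketched as follows, with invented labels and group names: computing accuracy separately for each group exposes skew that a single overall accuracy figure can hide.

```python
# Illustrative per-group accuracy check (data and group labels are invented).
def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} computed separately for each group."""
    stats = {}
    for t, p, g in zip(y_true, y_pred, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (t == p), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.0}
```

Here the model's overall accuracy is 50%, yet it is perfect for group A and always wrong for group B, which is exactly the kind of skew a single aggregate metric would hide.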

Following AI Design Principles in practice and conducting IT and AI audits of applications is important. For details, please refer to:

Conclusion – The Future of IT Ethics

Ensuring that IT applications do not cause any negative impact on humanity is a collaborative effort of all the stakeholders involved. While programmers need to remain vigilant to prevent any cognitive bias from creeping into the code, business users should also stay alert to any occurrence of discrimination and bias during their day-to-day use of the application. Policies around AI Governance are emerging. It is a really good sign that questions like the following are being discussed:

  • Should Social Media companies suppress DeepFake at source?
  • Should search engines be transparent about their rankings?
  • Should Digital Privacy be a Constitutional Right?

Leading organizations are coming up with AI Ethics guidelines. For example,  

There is every reason to believe that the world will use Emerging Technologies to make itself better and more beautiful, not to destroy it.