Sudeep Duggal
Originally published in eth-ds
Artificial Intelligence (AI) is an umbrella term for algorithms that allow machines to understand the world and make predictions. It is ubiquitous, and you interact with an AI algorithm on a regular basis when you:
You may also unknowingly interact with algorithms used for:
AI and algorithms in general are automating decision-making in all spheres of our lives [1]. These decisions affect our quality of life, our choices and our future. Algorithms decide whether we can afford health insurance, are allowed access to public utilities, can post bail, get a job, receive the right healthcare treatment and travel globally. Algorithms encode the biases of the historical data on which they are built. Algorithms, and the choices made in building technologies, have been shown to be discriminatory, misogynistic and racist, and to affect minority groups disproportionately [2, 3, 4, 5, 6, 7]. For example, one type of predictive policing algorithm tries to predict where crime is likely to happen so that those areas can be policed proactively. In the U.S., the data used to build these algorithms contains a majority of data points from neighborhoods with minority communities that have historically been over-policed due to racist policies. As a result, the algorithm's predictions are skewed towards over-policing those same neighborhoods, effectively reinforcing racial biases.
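To make that feedback loop concrete, here is a minimal, purely illustrative simulation sketch in Python. The two neighborhoods, their (identical) true crime rates, and the patrol-allocation rule are invented assumptions for this sketch; they are not drawn from any real predictive policing system or dataset.

```python
# A minimal sketch of the predictive-policing feedback loop described above.
# Neighborhoods, rates, and the allocation rule are hypothetical.
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME true underlying crime rate.
true_crime_rate = {"A": 0.10, "B": 0.10}

# Historical records: neighborhood A was over-policed, so far more of its
# crime was recorded, even though the true rates are identical.
recorded_crime = {"A": 80, "B": 20}

def allocate_patrols(records, total_patrols=100):
    """Naive 'predictive' allocation: patrol in proportion to past records."""
    total = sum(records.values())
    return {n: round(total_patrols * c / total) for n, c in records.items()}

def simulate_year(records):
    """More patrols in a neighborhood -> more of its crime gets recorded."""
    patrols = allocate_patrols(records)
    for n, n_patrols in patrols.items():
        # Each patrol makes a number of observations; detection of an incident
        # depends on patrol presence, scaled by the (identical) true rate.
        detected = sum(random.random() < true_crime_rate[n]
                       for _ in range(n_patrols * 10))
        records[n] += detected
    return patrols

for year in range(1, 6):
    patrols = simulate_year(recorded_crime)
    print(f"Year {year}: patrols={patrols} recorded={recorded_crime}")
```

Running the sketch, neighborhood A keeps receiving roughly four times the patrols of B every year, solely because its historical records were inflated; the equal underlying crime rates never surface, so the original imbalance is reinforced rather than corrected.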
Most importantly, technologies and the algorithms they are built upon have the power to decide the kind of government we have. Left unchecked, they can be enablers of authoritarianism, totalitarianism and potentially fascism [8, 9, 10, 11]. For example, social media is being used to spread misinformation to gain political power and undermine democracy. Social media companies could make a design decision to curb the spread of misinformation - they need only make sharing posts more cumbersome. This would push people to reflect on what they are sharing and to share only the posts they deem important. Instead, social media companies try to make their apps “frictionless”, minimizing the number of clicks it takes to complete a task. That design decision boosts the spread of misinformation.
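As a rough illustration of that design choice, here is a hypothetical sketch contrasting a one-tap share handler with one that adds friction. The Post class, function names and prompt wording are all invented for illustration; real platforms implement this in their client apps, not like this.

```python
# A hypothetical sketch of "frictionless" vs. "friction-added" sharing.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    url: str

def share_frictionless(post: Post) -> bool:
    """One tap: the post is reshared immediately, maximizing spread."""
    return True  # no pause, no prompt

def share_with_friction(post: Post, confirm) -> bool:
    """Deliberate friction: ask the user to reflect before resharing.
    `confirm` is a callback standing in for the confirmation dialog."""
    prompt = (f"Have you read '{post.text[:40]}...' at {post.url}? "
              "Reshare only if you think it is accurate and important (y/n): ")
    return confirm(prompt)

if __name__ == "__main__":
    post = Post("someone", "Breaking: shocking claim with no source",
                "https://example.com/article")
    # Frictionless path: the reshare happens instantly.
    print("frictionless reshare:", share_frictionless(post))
    # Friction path: a stub callback stands in for the UI prompt here.
    print("reshare after prompt:", share_with_friction(post, lambda p: False))
```

The extra prompt is a single, cheap design change, which is the point of the example: the choice between the two handlers is made by the company, not forced by the technology.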
Discussions about the definition of ethics may not prove fruitful, given differences in individual values. However, we do have an agreed-upon, enforceable list of shared human values - the United Nations Universal Declaration of Human Rights (UDHR). These include values such as equal rights for all without discrimination of any kind, the right to privacy, freedom from inhumane treatment, and just and favorable working conditions. Researchers have proposed using the UDHR as a guide for building ethical technology [8, 12].
What questions should we ask ourselves when deciding what to build and how to build it, and also what not to build and how not to build it? These questions will help us make choices when designing and building ethical technology - technology that is in line with the shared values of the UDHR. A good starting point is the Assessment List produced by the High-Level Expert Group on Artificial Intelligence set up by the European Commission.
We will look at that in the next post!