
Racist Algorithm: How Programs Endanger Human Lives

Algorithms are powerful tools, but they can only be as clever as we humans have trained them to be. They are now used in a wide variety of areas: screening job applications, spreading propaganda, or assessing insurance risk. Algorithms learn from the data that researchers make available to them, and in doing so they also take on our prejudices. Why this is so dangerous and how it can be counteracted: a commentary on programmed discrimination.

A well-intentioned approach

The most recent case of a misguided algorithm comes from US hospitals and their caring health insurers. To reduce costs, more and more of these healthcare institutions are investing in preventive care for patients. One approach is an algorithm that classifies patients according to the level of care they require. At first glance a laudable idea: it saves time and money.

For classification purposes, these algorithms evaluate data from patient files: diagnoses, treatments, medications. From this, a risk value is calculated that predicts how a person’s state of health will develop over the next year. Based on this result, the patient is offered better prevention and health care. So if the algorithm calculates that diabetes, hypertension and chronic kidney disease add up to a fatal combination, precautions must be taken: the physician could, for example, put the patient on an intensive programme to lower blood sugar levels. Problem solved, algorithms save lives!
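
To make the idea tangible, here is a minimal sketch of how such a chart-based risk value could be pieced together from a few features. The feature names, weights and numbers are invented for illustration; this is not the scoring used by any real insurer.

```python
# Minimal sketch of a chart-based risk value. Feature names, weights and
# thresholds are invented for illustration, not any real insurer's scoring.

CHRONIC_CONDITIONS = {"diabetes", "hypertension", "chronic kidney disease"}

def risk_score(diagnoses, n_medications, n_treatments):
    """Combine a few patient-file features into a value between 0 and 1."""
    score = 0.0
    score += 0.2 * len(CHRONIC_CONDITIONS & set(diagnoses))  # each chronic diagnosis adds risk
    score += 0.05 * min(n_medications, 5)                    # many medications as a weak signal
    score += 0.05 * min(n_treatments, 5)                     # recent treatments as a weak signal
    return min(score, 1.0)

# The "fatal combination" from the text pushes the (made-up) score close to 1,
# so the patient would be flagged for an intensive prevention programme.
patient = {"diabetes", "hypertension", "chronic kidney disease"}
print(round(risk_score(patient, n_medications=6, n_treatments=2), 2))  # 0.95
```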

A discriminatory algorithm

Not quite, I’m afraid. In a new study, Ziad Obermeyer, a health policy researcher at the University of California, Berkeley, and his colleagues investigated the effectiveness of such a risk prediction program at a large research hospital. They wanted to find out how well the algorithm’s predictions matched reality. The team soon noticed that the program assigned a “strangely low” risk value to many Black patients despite their deteriorating health.

Comparing Black and white patients to whom the algorithm had assigned similar risk values, the researchers found that the Black patients were significantly sicker: they had, for example, higher blood pressure or poorly controlled diabetes, yet these factors did not seem to influence the algorithm. The consequence was predictable: those affected were excluded from an additional preventive programme because of their skin colour, simply because a value calculated by an algorithm was not high enough.

For the published study, the researchers examined almost 50,000 patient records from which the algorithm learns and evolves. They found that the program used bills and insurance payments as an indicator of a person’s general health – a common practice in academic and commercial health algorithms. In plain language, this means the algorithm looks primarily at the costs a patient incurs.

In fact, the system does not predict how a person’s health will develop over the next year at all. The value is essentially an estimate of how much that patient will cost over the next year. Why this in turn correlates with a person’s skin colour is quickly explained: in the billing data provided to the algorithm for training, Black patients incur lower costs because they receive less medical treatment. As a result, they are not assigned a high risk value.
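
A deliberately simplified illustration of this label problem, with invented numbers rather than the algorithm the study examined: if the model’s target is billing cost instead of health, two equally sick patients end up with very different “risk” values.

```python
# Toy illustration of the label problem: the "risk" model is really a cost
# model. All figures are invented for illustration only.

def predicted_risk(annual_cost_usd, max_cost_usd=20_000):
    """A model whose target is next year's cost, normalised to [0, 1]."""
    return min(annual_cost_usd / max_cost_usd, 1.0)

# Two patients with the same illness burden, but one has historically
# generated lower bills because they received less treatment.
equally_sick = {"patient_A": 15_000, "patient_B": 7_000}

for name, cost in equally_sick.items():
    print(name, predicted_risk(cost))
# patient_A 0.75 -> offered the prevention programme
# patient_B 0.35 -> excluded, although the health need is identical
```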

Amputation rates are higher among Black patients

Of course, there is no racist algorithm behind this that has a problem with Black people. Behind the whole affair lies a much bigger and, above all, social problem. Healthcare costs for Black patients tend to be lower, regardless of their actual state of health. In the US, racial and ethnic minorities face considerable barriers in medical care. Even when they do receive care, its quality is often not equivalent to that received by other groups. The issue is complex and involves factors such as ability to pay, patient preferences, differential treatment by physicians, and geographical variability.

For example, a study of 400 hospitals in the US showed that Black patients with heart disease received older, cheaper and more conservative treatments than white patients. After surgery, they are discharged from hospital earlier, and Black women receive less radiotherapy for breast cancer. Even the amputation rate is higher among Black patients. In her 2015 book “Just Medicine: A Cure for Racial Inequality in American Health Care”, Dayna Bowen Matthew attributes this lower standard of care to the implicit biases of physicians.

This thesis may well be part of the explanation, but a look at the US healthcare system reveals further hair-raising conditions. Both undocumented immigrants and legal residents who have lived in the US for less than five years are excluded from public health insurance. The result is a two-tier system that provides comprehensive care for the privately insured and mediocre care for everyone else. In addition, Black people tend to live further away from hospitals than white people, which makes regular visits difficult. They also tend to have less flexible working hours and more childcare responsibilities.

Better decisions with algorithms?

We humans are not capable of making truly objective decisions. To compensate, machines and algorithms are increasingly being programmed to free us from prejudice and subjective perception. But this example, and many others, show that if we put prejudice in, nothing objective comes out.

There are two reasons why such software is not merely mistaken but discriminates against population groups with astonishing consistency. First, every algorithm comes from human hands: whatever we select as training data is what the system will learn. If the developers choose that training data poorly, the algorithm simply reproduces the problem. Computer programs have not developed a life of their own or opinions about different groups of people; they can only produce discriminatory results from the data they are given.

The second problem with poorly trained algorithms lies not so much in the input as in the output. Often we only recognize faulty training data once a system has found correlations and patterns in it. In recruiting, programs for the automatic pre-selection of candidates are increasingly being used. In one example, an algorithm found a correlation between longer commutes and higher employee turnover. As a result, it recommended applicants who lived close to the company, which in turn disproportionately excluded Black applicants, who are more likely to live on the outskirts.
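
A small sketch of this proxy effect, with invented applicant data: the screening rule never sees a protected attribute, yet filtering on commute distance amounts to filtering on neighbourhood.

```python
# Invented applicant data to illustrate the proxy effect described above.
applicants = [
    {"name": "A", "distance_km": 4,  "district": "city centre"},
    {"name": "B", "distance_km": 6,  "district": "city centre"},
    {"name": "C", "distance_km": 22, "district": "outskirts"},
    {"name": "D", "distance_km": 28, "district": "outskirts"},
]

# The rule only looks at commute distance, never at the district ...
shortlist = [a["name"] for a in applicants if a["distance_km"] <= 10]

# ... yet everyone from the outskirts is filtered out.
print(shortlist)  # ['A', 'B']
```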

And the solution to the problem?

One positive thing can at least be taken from the whole story of the US hospitals: it holds a mirror up to society, because we are the origin of the discrimination. But what can be done to prevent discriminatory algorithms from making such decisions in the future? Remove every factor that might hint at dark skin? If one were to strip the training data of all attributes from which the system could infer skin colour, one would in effect have to delete almost all of the data. The goal must be to include these ethnic groups rather than exclude them, without their skin colour once again becoming a stumbling block.
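
A brief sketch with invented records shows why simply deleting the sensitive attribute rarely helps: other fields, here a postcode, still encode the same split, so a model trained on the “cleaned” data can reconstruct it.

```python
# Invented records: the explicit attribute is removed, but a correlated
# field (the postcode) still separates the two groups perfectly.
records = [
    {"postcode": "10001", "race": "black", "annual_cost": 6_000},
    {"postcode": "10001", "race": "black", "annual_cost": 7_000},
    {"postcode": "20002", "race": "white", "annual_cost": 14_000},
    {"postcode": "20002", "race": "white", "annual_cost": 15_000},
]

anonymised = [{k: v for k, v in r.items() if k != "race"} for r in records]

costs_by_postcode = {}
for r in anonymised:
    costs_by_postcode.setdefault(r["postcode"], []).append(r["annual_cost"])

# The postcode alone recovers the original grouping, so a model trained on
# the anonymised data can still learn the same biased pattern.
print(costs_by_postcode)  # {'10001': [6000, 7000], '20002': [14000, 15000]}
```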

A good approach could therefore be algorithms that explain themselves: opening up the black box to trace how the algorithms arrive at their decisions. The most recent concept of this kind comes from the non-profit organization OpenAI, which wants two algorithms to debate their decision-making process in human language. Developers could then follow this discussion and elicit from the systems the factors on which a decision is based.
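
The simplest form of a model that explains itself is a linear score that reports each feature’s contribution alongside the result. The sketch below, with invented features and weights, only illustrates that general idea; it is not OpenAI’s debate-based proposal.

```python
# Sketch of a self-explaining linear score with invented, binary features and
# weights; not OpenAI's debate approach, just the simplest form of the idea.
WEIGHTS = {"diabetes": 0.30, "hypertension": 0.20, "high_prior_cost": 0.15}

def score_with_explanation(features):
    """Return the total score and each feature's individual contribution."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items() if k in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation({"diabetes": 1, "hypertension": 1, "high_prior_cost": 1})
print(round(total, 2))  # 0.65
print(parts)            # {'diabetes': 0.3, 'hypertension': 0.2, 'high_prior_cost': 0.15}
# A developer can read off exactly which factor contributed how much to the
# decision -- the kind of traceability the self-explanation idea aims at.
```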

Questioning the systems is certainly a good start, but it does not get to the heart of the problem. Tobias Matzner, an ethicist of technology and computer scientist at the University of Paderborn, thinks it is mistaken to try to create objectivity through technology. He considers it wrong to simply open the black box called “algorithm” and check whether the “right thing” is inside, because that would suggest that we humans know what is right. And he has a point: how do we know that we are the ones making the right decisions, as if we had a monopoly on wisdom?

The very attempt to make predictions about people on the basis of all the data available about them invites categorical prejudice, along the lines of “people who do this will also do that”. Suddenly algorithms discover certain patterns in the data, and people become suspects because of their information. Is this the pigeonhole society we will be living in in the future? I think we urgently need a debate about whether and why we really need supposedly objective algorithms to pass judgment on people. In the end, we are only outsourcing the responsibility for a decision. In a few years’ time, algorithms may well be able to take some of these judgments off our hands. Until then, however, we should not treat human lives as guinea pigs.

Simon Lüthje

I am co-founder of this blog and am very interested in everything that has to do with technology, but I also like to play games. I was born in Hamburg, but now I live in Bad Segeberg.
