To stop a tech apocalypse we need ethics and the arts

Sara James, La Trobe University and Sarah Midford, La Trobe University

If recent television shows are anything to go by, we’re a little concerned about the consequences of technological development. Dystopian narratives abound.

Black Mirror projects the negative consequences of social media, while artificial intelligence turns rogue in The 100 and Better Than Us. The potential extinction of the human race is at stake in Travellers, and Altered Carbon frets over the separation of human consciousness from the body. And Humans and Westworld see trouble ahead for human-android relations.

Narratives like these have a long lineage. Science fiction has been articulating our hopes and fears about technological disruption at least since Mary Shelley’s Frankenstein (1818).

However, as the likes of driverless cars and robot therapists emerge, some previously fictional concerns are no longer imaginative speculation. Instead, they represent real and urgent problems.


What kind of future do we want?

Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians”. This would mean being leaders in technological development, ethics, and human rights.

Finkel isn’t alone in his concern. But it won’t be simple to address these issues in the development of new technology.

Many people in government, industry and universities now argue that including perspectives from the humanities and social sciences will be a key factor.

A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.

According to the chair of the ACOLA board, Hugh Bradlow, the report aims to ensure that “the well-being of society” is placed “at the centre of any development.”

Human-centred AI

A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies. The idea underpinning their work is that “AI should be collaborative, augmentative and enhancing to human productivity and quality of life”.

Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.

The institute is set to double in size in the next year thanks to a £13.3 million (A$25 million) contribution from the Open Philanthropy Project. Its founder, philosopher Nick Bostrom, said:

There is a long-distance race on between humanity’s technological capability, which is like a stallion galloping across the fields, and humanity’s wisdom, which is more like a foal on unsteady legs.


What to build and why

The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI:

Languages, art, history, economics, ethics, philosophy, psychology and human development courses can teach critical, philosophical and ethics-based skills that will be instrumental in the development and management of AI solutions.

Hiring practices in tech companies are already shifting. In a TED talk on “Why tech needs the humanities”, Eric Berridge – chief executive of the IBM-owned tech consulting firm Bluewolf – explains why his company increasingly hires humanities graduates.

While the sciences teach us how to build things, it’s the humanities that teach us what to build and why to build them.

Only 100 of Bluewolf’s 1,000 employees have degrees in computer science and engineering. Even the Chief Technology Officer is an English major.


Education for a brighter future

Similarly, Matt Reaney, the chief executive and founder of Big Cloud – a recruitment company that specialises in data science, machine learning and AI employment – has argued that technology needs more people with humanities training.

[The humanities] give context to the world we operate in day to day. Critical thinking skills, deeper understanding of the world around us, philosophy, ethics, communication, and creativity offer different approaches to problems posed by technology.

Reaney proposes a “more blended approach” to higher education, offering degrees that combine the arts and STEM.

Another advocate of the interdisciplinary approach is Joseph Aoun, President of Northeastern University in Boston. He has argued that in the age of AI, higher education should be focusing on what he calls “humanics”, equipping graduates with three key literacies: technological literacy, data literacy and human literacy.

The time has come to answer the call for humanities graduates capable of crossing over into the world of technology so that our human future can be as bright as possible.

Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions. And that future might turn out to be one of the calamities we have already seen on screen.

Sara James, Senior Lecturer, Sociology, La Trobe University and Sarah Midford, Senior Lecturer, Classics and Ancient History and Director of Teaching and Learning (Undergraduate), School of Humanities and Social Sciences, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How can we make sure that algorithms are fair?

Karthik Kannan, Purdue University

Using machines to augment human activity is nothing new. Egyptian hieroglyphs show the use of horse-drawn carriages even before 300 B.C. Ancient Indian literature such as “Silapadikaram” has described animals being used for farming. And one glance outside shows that today people use motorized vehicles to get around.

Where in the past humans augmented themselves in physical ways, the augmentation is now becoming more intelligent as well. Again, all one needs to do is look to cars – engineers are seemingly on the cusp of self-driving cars guided by artificial intelligence. Other devices are in various stages of becoming more intelligent. Along the way, interactions between people and machines are changing.

Machine and human intelligences bring different strengths to the table. Researchers like me are working to understand how algorithms can complement human skills while at the same time minimizing the liabilities of relying on machine intelligence. As a machine learning expert, I predict there will soon be a new balance between human and machine intelligence, a shift that humanity hasn’t encountered before.

Such changes often elicit fear of the unknown, and in this case, one of the unknowns is how machines make decisions. This is especially so when it comes to fairness. Can machines be fair in a way that people understand?

When people are illogical

To humans, fairness is often at the heart of a good decision. Decision-making tends to rely on both the emotional and rational centers of our brains, what Nobel laureate Daniel Kahneman calls System 1 and System 2 thinking. Decision theorists believe that the emotional centers of the brain evolved long ago, while the brain areas involved in rational or logical thinking developed much more recently. The rational and logical part of the brain, what Kahneman calls System 2, has given humans an advantage over other species.

However, because System 2 was more recently developed, human decision-making is often buggy. This is why many decisions are illogical, inconsistent and suboptimal.

For example, people's preferences are often intransitive, a well-known yet illogical phenomenon: a person who prefers choice A over B and B over C does not necessarily prefer A over C. Or consider that researchers have found that criminal court judges tend to be more lenient with parole decisions right after lunch breaks than at the close of the day.

Part of the problem is that our brains have trouble precisely computing probabilities without appropriate training. We often use irrelevant information or are influenced by extraneous factors. This is where machine intelligence can be helpful.

Machines are logical…to a fault

Well-designed machine intelligence can be consistent and useful in making optimal decisions. By their nature, algorithms can be logical in the mathematical sense – they simply don't stray from the program's instructions. In a well-designed machine-learning algorithm, one would not encounter the intransitive preferences that people frequently exhibit, for example. Within the margins of statistical error, the decisions of machine intelligence are consistent.

The problem is that machine intelligence is not always well designed.

As algorithms become more powerful and are incorporated into more parts of life, scientists like me expect this new world, one with a different balance between machine and human intelligence, to be the norm of the future.

Judges’ rulings about parole can come down to what the computer program advises. THICHA SATAPITANON/Shutterstock.com

In the criminal justice system, judges use algorithms during parole decisions to calculate recidivism risks. In theory, this practice could overcome any bias introduced by lunch breaks or exhaustion at the end of the day. Yet when journalists from ProPublica conducted an investigation, they found these algorithms were unfair: white men with prior armed robbery convictions were rated as lower risk than African American women who had been convicted of misdemeanors.

There are many more such examples of machine learning algorithms later found to be unfair, including Amazon's recruiting tool and Google's image labeling.

Researchers have been aware of these problems and have worked to impose restrictions that ensure fairness from the outset. For example, an algorithm called CB (color blind) imposes the restriction that any discriminating variables, such as race or gender, should not be used in predicting the outcomes. Another, called DP (demographic parity), ensures that groups are proportionally fair. In other words, the proportion of people receiving a positive outcome is the same across the groups defined by the discriminating variable.
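To make these two criteria concrete, here is a minimal sketch in Python. The data, the column names and the 0.5 threshold are invented purely for illustration and do not come from any real system.

```python
import numpy as np
import pandas as pd

# Toy data: each row is an applicant with a protected attribute ("group")
# and a model score. All values are invented purely for illustration.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000),
    "score": rng.uniform(0, 1, size=1000),
})

# "Color blind" (CB): the protected attribute is simply excluded from the
# features a model would be allowed to use.
features_cb = df.drop(columns=["group"])

# A single decision threshold applied to everyone.
df["decision"] = (df["score"] >= 0.5).astype(int)

# Demographic parity (DP): compare the positive-decision rate across groups.
positive_rates = df.groupby("group")["decision"].mean()
dp_gap = abs(positive_rates["A"] - positive_rates["B"])
print(positive_rates)
print(f"Demographic parity gap: {dp_gap:.3f}")  # 0 would mean equal rates
```

A DP-style training constraint would push this gap toward zero, while the CB restriction only removes the protected column from the inputs; neither, on its own, looks at the process that produced the scores.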

Researchers and policymakers are starting to take up the mantle. IBM has open-sourced many of its fairness algorithms and released them under the “AI Fairness 360” banner. And the National Science Foundation recently accepted proposals from scientists who want to bolster the research foundation that underpins fairness in AI.

Improving the fairness of machines’ decisions

I believe that existing fair machine algorithms are weak in many ways. This weakness often stems from the criteria used to ensure fairness. Most algorithms that impose a “fairness restriction”, such as demographic parity (DP) and color blindness (CB), focus on ensuring fairness at the outcome level. If two people come from different subpopulations, the imposed restrictions ensure that the outcomes they receive are consistent across the groups.

Beyond just the inputs and the outputs, algorithm designers need to take into account how groups will change their behavior to adapt to the algorithm. elenabsl/Shutterstock.com

While this is a good first step, researchers need to look beyond the outcomes alone and focus on the process as well. For instance, when an algorithm is used, the subpopulations that are affected will naturally change their efforts in response. Those changes need to be taken into account, too. Because existing approaches have not taken them into account, my colleagues and I focus on what we call “best response fairness.”

If the subpopulations are inherently similar, their effort level to achieve the same outcome should also be the same even after the algorithm is implemented. This simple definition of best response fairness is not met by DP- and CB-based algorithms. For example, DP requires the positive rates to be equal even if one of the subpopulations does not put in effort. In other words, people in one subpopulation would have to work significantly harder to achieve the same outcome. While a DP-based algorithm would consider it fair – after all, both subpopulations achieved the same outcome – most humans would not.

There is another fairness restriction, known as equalized odds (EO), which does satisfy the notion of best response fairness – it ensures fairness even when the responses of the subpopulations are taken into account. However, to impose the restriction, the algorithm needs to know the discriminating variable (say, black/white), and it ends up setting explicitly different thresholds for the subpopulations – so the thresholds for white and black parole candidates would differ.
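As a rough sketch of why EO pushes the algorithm toward group-specific thresholds, the illustrative Python below (again with invented data, names and numbers) computes true- and false-positive rates for each group under a single shared threshold; equalising those rates generally requires choosing a different threshold for each group, which is exactly why the algorithm needs access to the protected attribute.

```python
import numpy as np

# Invented data for illustration only.
rng = np.random.default_rng(1)
n = 2000
group = rng.choice([0, 1], size=n)            # protected attribute
label = rng.binomial(1, 0.4, size=n)          # true outcome
# Scores whose distribution differs by group, so one shared threshold
# produces different error rates for each group.
score = 0.6 * label + 0.15 * group + rng.normal(0, 0.25, size=n)

def group_rates(threshold, g):
    """True-positive and false-positive rate for one group at a threshold."""
    mask = group == g
    pred = score[mask] >= threshold
    actual = label[mask] == 1
    return pred[actual].mean(), pred[~actual].mean()

# One shared, group-blind threshold: the TPR/FPR pairs differ across groups.
for g in (0, 1):
    tpr, fpr = group_rates(0.5, g)
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")

# Equalized odds would instead pick a (possibly different) threshold per group
# so that these TPR/FPR pairs match -- which is why the algorithm needs access
# to the protected attribute.
```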

While that would help increase the fairness of outcomes, such a procedure may violate the notion of equal treatment required by the Civil Rights Act of 1964. For this reason, a California Law Review article has urged policymakers to amend the legislation so that fair algorithms that utilize this approach can be used without potential legal repercussions.

These constraints motivate my colleagues and me to develop an algorithm that is not only “best response fair” but also does not explicitly use discriminating variables. We demonstrate the performance of our algorithms both theoretically and empirically, using simulated data sets and real sample data sets from the web. When we tested our algorithms with the widely used sample data sets, we were surprised at how well they performed relative to open-source algorithms assembled by IBM.

Our work suggests that, despite the challenges, machines and algorithms will continue to be useful to humans – for physical jobs as well as knowledge jobs. We must remain vigilant that any decisions made by algorithms are fair, and it is imperative that everyone understands their limitations. If we can do that, then it’s possible that human and machine intelligence will complement each other in valuable ways.


Karthik Kannan, Professor of Management and Director of the Krenicki Center for Business Analytics & Machine Learning, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Please Alexa’: are we beginning to recognise the rights of intelligent machines?

Paresh Kathrani, University of Westminster

Amazon has recently developed an option whereby Alexa will only activate if people address it with a “please”. This suggests that we are starting to recognise some intelligent machines in a way that was previously reserved only for humans. In fact, this could very well be the first step towards recognising the rights of machines.

Machines are becoming part of the fabric of everyday life. Whether it is the complex technology we are embedding inside our bodies or the machines around us, the line between human and machine is blurring. As machines become more and more intelligent, it is imperative that we begin discussing whether it will soon be time to recognise the rights of robots, as much for our sake as for theirs.

When someone says that they have a “right” to something, they are usually saying that they have a claim or an expectation that something should be a certain way. But what is just as important as rights are the foundations on which they are based. Rights rely on various intricate frameworks, such as law and morality. Sometimes, the frameworks may not be clear cut. For instance, in human rights law, strong moral values such as dignity and equality inform legal rights.

So rights are often founded upon human principles. This helps partially explain why we have recognised the rights of animals. We recognise that it is ethically wrong to torture or starve animals, so we create laws against it. As intelligent machines weave further into our lives, there is a good chance that our human principles will also force us to recognise that they too deserve rights.

But you might argue that animals differ from machines in that they have some sort of conscious experience. And it is true that consciousness and subjective experience are important, particularly to human rights. Article 1 of the Universal Declaration of Human Rights 1948, for example, says all human beings “are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.

However, consciousness is not the only basis of rights. In New Zealand and Ecuador, rivers have been granted rights because humans deemed their very existence to be important. So rights don't emerge only from consciousness; they can also rest on other criteria. There is no one correct type or form of rights. Human rights are not the only rights.

As machines become even more complex and intelligent, just discarding or destroying them without asking any questions at all about their moral and physical integrity seems ethically wrong. Just like rivers, they too should receive rights because of their meaning to us.

The Whanganui river in New Zealand has been granted the same rights as humans. Duane Wilkins, CC BY-SA

Imagine a complex and independent machine providing health care to a human over a long period of time. The machine resembles a person and applies intelligence through natural speech. Over time, the machine and the patient build up a close relationship. Then, after a long period of service, the company that created the machine decides it is time to switch off and discard it, even though it works perfectly. It seems ethically wrong to simply discard an intelligent machine that has kept the patient alive and built a relationship with them, without even entertaining its right to integrity and other rights.

This might seem absurd, but imagine for a second that it is you who has built a deep and meaningful relationship with this intelligent machine. Wouldn't you be desperate to find a way to stop it being turned off and your relationship being lost? It is as much for our own sake as for the sake of intelligent machines that we ought to recognise their rights.

Sexbots are a good example. The UK’s sexual offences law exists to protect the sexual autonomy of the human victim. But it also exists to ensure that people respect sexual autonomy, the right of a person to control their own body and their own sexual activity, as a value.

But the definition of consent in section 74 of the Sexual Offences Act 2003 in the UK specifically refers to “persons” and not machines. So right now a person can do whatever they wish to a sexbot, including torturing it. There is something troubling about this. And it is not because we believe sexbots to have consciousness. Instead, it is probably because, by allowing people to torture robots, the law stops upholding the values of personal and sexual autonomy that we consider important.

These examples very much show that there is a discussion to be had over the rights of intelligent machines. And as we rapidly enter an age where these examples will no longer be hypothetical, the law must keep up.

Matter of respect

We are already recognising complex machines in a manner that was previously reserved only for humans and animals. We feel that our children must be polite to Alexa because, if they are not, it will erode our own notions of respect and dignity. Unconsciously, we are already recognising that how we communicate with and respect intelligent machines will affect how we communicate with and respect humans. If we don't extend recognition to intelligent machines, it will affect how we treat and consider humans.

Machines are integrating into our world. Google's recent experiment with natural-language assistants, in which the AI sounded eerily like a human, gave us an insight into this future. One day, it may become impossible to tell whether we are interacting with machines or with humans. When that day comes, rights may have to change to include machines as well; as we change, our rights will naturally have to adapt too.

Paresh Kathrani, Senior Lecturer in Law, University of Westminster

This article is republished from The Conversation under a Creative Commons license. Read the original article.
