Excerpts from Coeckelbergh on AI Ethics

“Frankenstein can be seen as a Romantic novel that warns of modern technology, but it is informed by the science of its day. For example, the use of electricity—then a very new technology—plays an important role: it is used to animate the corpse. It also makes references to magnetism and anatomy. Thinkers and writers at the time debated about the nature and origin of life. What is the life force?”

“In theistic religion, transcendence means that a god is “above” and independent of the material and physical world, as opposed to in the world and part of the world (immanence). In the Judeo-Christian monotheistic tradition, God is seen as transcending his creation. God can also be seen at the same time as permeating all creation and beings (immanence), and, for example, in Catholic theology, God is understood as revealing himself immanently through his son (Christ) and the Holy Spirit. Frankensteinian narratives about AI seem to stress transcendence in the sense of a split or gap between creator and creation (between Homo deus and AI), without giving much hope that this split or gap can be bridged.”

“If general AI is possible at all, then we don’t want a kind of ‘psychopath AI’ that is perfectly rational but insensitive to human concerns because it lacks emotion.”

“For these reasons, we could reject the very idea of full moral agency altogether, or we could take a middle position: we have to give AIs some kind of morality, but not full morality. Wendell Wallach and Colin Allen use the term “functional morality” (2009). AI systems need some capacity to evaluate the ethical consequences of their actions. The rationale for this decision is clear in the case of self-driving cars: the car will likely get into situations where a moral choice has to be made but there is no time for human decision making or human intervention. Sometimes these choices take the form of dilemmas. Philosophers talk about trolley dilemmas, named after a thought experiment in which a trolley barrels down a railway track and you have to choose between doing nothing, which will kill five people tied to the track, or pulling a lever and sending the trolley to another track, where only one person is tied down but is someone you know.”

“It seems that we will have to make these moral decisions (beforehand) and make sure developers implement them in cars. Or perhaps we need to build AI cars that learn from humans’ choices. However, one may question whether giving AIs rules is a good way to represent human morality, if morality can be ‘represented’ or reproduced at all, and if trolley dilemmas capture something that is central to moral life and experience. Or, from an entirely different perspective, one may ask whether humans are in fact good at making moral choices. Why imitate human morality at all? Transhumanists, for example, may argue that AIs will have a superior morality because they will be more intelligent than us.”

“An algorithm is a set and sequence of instructions, like a recipe, that tells the computer, smartphone, machine, robot or whatever it is embedded in, what to do. It leads to a particular output based on the information available (input). It is applied to solve a problem. To understand AI ethics, we need to understand how AI algorithms work and what they do.”
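
To make the recipe metaphor concrete, here is a minimal Python sketch of an algorithm in this sense: a fixed sequence of instructions that maps an input to an output. The function name, the thresholds, and the sensor-reading problem are all invented for illustration; they do not come from Coeckelbergh's text.

```python
# A hypothetical "recipe": a fixed sequence of instructions that turns
# an input (a sensor reading) into an output (a label).

def check_reading(reading: float, low: float = 0.0, high: float = 100.0) -> str:
    """Return a label for the input by following fixed instructions."""
    if reading < low:
        return "too low"
    if reading > high:
        return "too high"
    return "ok"

print(check_reading(42.0))   # -> ok
print(check_reading(150.0))  # -> too high
```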

“It may decide based on a decision tree: a model of decisions and their possible consequences, often graphically represented as a flowchart. An algorithm that does this contains conditional statements: decision rules in the form of if … (conditions) … then … (outcome). The process is a deterministic one. Drawing on a database that represents human expert knowledge, such an AI can reason through a lot of information and act as an expert system. It can make expert decisions or recommendations based on an extensive body of knowledge, which may be difficult or impossible for humans to read through.”
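
A short sketch may help show what such conditional decision rules look like in code. The toy triage rules below are entirely hypothetical and stand in for the expert knowledge a real system would encode; the point is only that each branch is an explicit if … then … rule, so the process is deterministic.

```python
# A toy rule-based "expert system" written as nested if/then rules.
# The symptoms, rules, and recommendations are invented for illustration.

def recommend(fever: bool, cough: bool, short_of_breath: bool) -> str:
    # Each branch is a conditional decision rule:
    # if (conditions) then (outcome).
    if fever:
        if short_of_breath:
            return "seek urgent care"
        if cough:
            return "see a doctor within 24 hours"
        return "rest and monitor temperature"
    if cough:
        return "use an over-the-counter remedy and monitor symptoms"
    return "no action needed"

# The same inputs always yield the same output: the process is deterministic.
print(recommend(fever=True, cough=True, short_of_breath=False))
```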

“Machine learning refers to software that can ‘learn’. The term is controversial: some say that what it does is not true learning because it does not have real cognition; only humans can learn. In any case, modern machine learning bears “little or no similarity to what might plausibly be going on in humans’ heads” (Boden, 2016). Machine learning is based on statistics; it is a statistical process. It can be used for various tasks, but the underlying task is often pattern recognition. Algorithms can identify patterns or rules in data and use those patterns or rules to explain the data and make predictions for future data.”

“This is done autonomously in the sense that it happens without direct instruction and rules given by the programmer. In contrast to expert systems, which rely on human domain experts who explain the rules to programmers who then code these rules, the machine learning algorithm finds rules or patterns that the programmer has not specified. Only the objective or task is given. The software can adapt its behavior to better match the requirements of the task.”

“Scientists used to create theories to explain data and make predictions; in machine learning, the computer creates its own models that fit the data. The starting point is the data, not the theories. In this sense, data is no longer ‘passive’, but ‘active’: it is “the data itself that defines what to do next” (Alpaydin, 2016). Researchers train the algorithm using existing data sets (e.g. old emails) and then the algorithm can predict results from new data.”
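
Here is a brief sketch, following the text's email example, of what such training might look like in code. It assumes the scikit-learn library (the book names no library), and the four labelled emails are invented; a real system would be trained on thousands. Note that the programmer specifies only the task (classify as spam or not); the statistical model finds the patterns itself.

```python
# A minimal supervised-learning sketch: train on labelled old emails,
# then predict a label for a new one. Assumes scikit-learn is installed.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

old_emails = [
    "win a free prize now",
    "meeting moved to 3pm",
    "claim your free money",
    "lunch tomorrow?",
]
labels = ["spam", "ham", "spam", "ham"]  # human-provided labels

# Turn the texts into word counts and fit a statistical model to them.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(old_emails)
model = MultinomialNB().fit(X, labels)

# Predict a label for new, unseen data.
new_email = ["free prize waiting for you"]
print(model.predict(vectorizer.transform(new_email)))  # likely ['spam']
```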

“Reinforcement learning, finally, requires an indication of whether the output is good or bad. It is analogous to reward and punishment. The program is not told which actions to take but ‘learns’ through an iterative process which actions yield reward.”
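
As a rough illustration of that iterative, reward-driven process, here is a bare-bones tabular Q-learning sketch in Python. The one-dimensional walk-to-the-goal environment, the reward of 1, and all parameters are invented for illustration; the book describes no specific algorithm. The agent is never told which action is correct, only whether the outcome was rewarded.

```python
# Tabular Q-learning on a tiny invented environment: states 0..4 in a
# line; reaching state 4 yields reward 1, everything else yields 0.

import random

n_states, goal = 5, 4
actions = [-1, +1]                        # step left or step right
Q = [[0.0, 0.0] for _ in range(n_states)]  # value estimate per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):                      # iterate over many episodes
    state = 0
    while state != goal:
        if random.random() < epsilon:
            a = random.randrange(2)            # explore: try a random action
        else:
            a = Q[state].index(max(Q[state]))  # exploit current estimates
        nxt = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if nxt == goal else 0.0   # the reward/punishment signal
        # Nudge the value estimate toward reward plus discounted future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned policy should prefer stepping right in every state.
print([q.index(max(q)) for q in Q[:goal]])  # expected: [1, 1, 1, 1]
```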

“As Boden (2016) remarks, AI lacks our understanding of relevance. One should add that it also lacks understanding, experience, sensitivity, and wisdom. This is a good argument for why, in theory and in principle, humans need to be involved. But there is also an empirical argument for not leaving humans out of the picture: in practice, humans are involved. Without programmers and data scientists, the technology simply doesn’t work.”

References:
Coeckelbergh, M. (2020). AI Ethics. MIT Press.

