The way we talk about Ethics is broken

An analogy with the Neural Network Algorithm

Alex Lenail
11 min read · Jul 5, 2015

I’m going to try to explain to you why I believe discussions on the subject of ethics, especially of the abstract variety, in which seemingly well-warranted postulations about the nature of morality abound, are not only entirely futile but morally hazardous. But in order to do this, I need to explain to you how I arrived at this conclusion, and that’s going to require that I provide you with some background in Machine Learning…

The Artificial Intelligence community is abuzz about a buzzword which has begun to permeate the public psyche: Deep Learning, the return of Neural Networks, just bigger.

Neural Networks are a category of machine learning algorithm originally inspired by the brain. They abstract and formalize ‘neurons’ as units which send a signal to the next layer of neurons if the combined strength of the signals received by that neuron (via metaphorical dendrites) exceeds some threshold. A linear combination of the previous layer of neurons’ outputs and the respective weights of the connections from those neurons is computed, and the sigmoid function g of that linear combination determines whether a neuron will or won’t ‘fire’.
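In one standard notation, the computation just described is:

$$ a_j = g\Big(\sum_i \theta_{ji}\, x_i\Big), \qquad h = g\Big(\sum_j \theta_j\, a_j\Big), \qquad g(z) = \frac{1}{1 + e^{-z}} $$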

Read as: ‘the activation a of a neuron is the sigmoid g of the sum of each previous neuron’s output x multiplied by the strength theta of the connection from that neuron. Finally, the prediction h is again the sigmoid g of the linear combination of the appropriate weights theta and activations a.’
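Here is a minimal sketch of that forward pass in code, assuming nothing beyond numpy; the layer sizes and random weights are made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    # g(z): squashes a linear combination into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, theta1, theta2):
    # Hidden activations: each hidden neuron takes a weighted sum
    # of the previous layer's outputs and passes it through g.
    a = sigmoid(theta1 @ x)
    # Prediction: a weighted sum of the hidden activations, again through g.
    h = sigmoid(theta2 @ a)
    return h

# Made-up dimensions: 4 inputs, 3 hidden neurons, 1 output.
rng = np.random.default_rng(0)
x = rng.random(4)
theta1 = rng.normal(size=(3, 4))  # weights into the hidden layer
theta2 = rng.normal(size=(1, 3))  # weights into the output neuron
print(forward(x, theta1, theta2))
```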

This genre of neuromorphic computation has existed for decades; the recent hubbub about so-called “deep learning” comes from the fact that this class of algorithms scales rather well with data and compute, whereas others don’t benefit as much from additional information or processing power. As Moore’s law progressed, this class of algorithm overtook domains in which it had traditionally been outperformed.

Generally speaking, machine learning seeks to find a mapping between vectors in some input space and vectors in some output space. The classical problem is recognizing handwritten digits, which humans have no trouble with:

From MNIST

Here, the input is a picture and the output is a number. From a computer’s vantage point, the task is rather difficult, because the machine’s representation of each of the digits is a long vector of pixel intensity values. Our visual cortices quickly extract higher level features from this pixel data (9, 8 and 6 have loops; 1, 2, 7 have straight bars), and assemble them into even higher level representations of the input, until the ‘meaning’, or in this case, the actual number, is derived from the raw image data.

With neural networks that operate on visual information, we can actually see the features they extract from the low-level visual information. These turn out to look a lot like Gabor wavelets, a wonderfully beautiful coincidence which probably isn’t coincidental at all.

The topology of a neural network, the way the neurons are structured, usually forces dimensionality reduction of the traditionally large input space into smaller and smaller spaces. In the context of numbers, the input to our eyes and the algorithm could be a 128x128px image (a 16,384-dimensional vector, i.e. a list of 16,384 pixel intensity values), but the output we perceive and would like the algorithm to recognize is a value between 0 and 9. The images above can be thought of as the basis that spans the space of input images, a basis upon which any image of a number can be reconstructed with reasonable accuracy. A linear combination of these fundamental building blocks of ‘raw image’ should provide the means to recompose any handwritten digit, if properly superimposed.
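A toy sketch of that superposition idea, with random stand-ins for the learned basis images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned feature images: k basis vectors, each 128*128 pixels long.
k = 64
basis = rng.normal(size=(k, 128 * 128))

# A handwritten digit would be approximated as a weighted superposition
# of those fundamental building blocks.
weights = rng.normal(size=k)
reconstruction = weights @ basis          # shape: (16384,)
image = reconstruction.reshape(128, 128)  # back to a 128x128 picture
```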

If the topology of the neural network has many layers of neurons between the input (left) and output (right), then we call the network ‘deep’. This is the origin of the nomenclature “Deep Learning”: deep networks learn to recognize patterns at a higher level than lines or shapes, such as faces or, famously, cats.

Observe the above picture as raw information being transformed into meaning, using a simplistic mathematical model of a neuron and an even simpler model of how neurons are arranged in the brain.

Deep learning is changing the way many Artificial Intelligence tasks are being carried out. If you have a smartphone, you probably have a voice assistant (Siri or Google), both of which rely on deep learning to decode your speech into language and your language into meaning. This approach is quickly becoming ubiquitous across a variety of artificial intelligence tasks. But what does this have to do with ethics?

The canonical ethical dilemma used to gauge popular moral leanings and probe young students of ethics on their biases is called the Trolley Problem.

From Wikipedia

It is formulated by Wikipedia in the following way:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

Additional variants of the problem posed by later philosophers modify the original premise, for example the ‘Fat Man Scenario’:

As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you — your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?

Or further, the ‘Fat Villain’, in which the fat man on the bridge is the man who tied up the five men on the tracks, or ‘The Loop’, where you must actively choose to kill a man to prevent the death of five rather than simply choosing the death of one over the death of five.

Depending on how you feel you would act in these dilemmas, ethicists claim, you fall into one of two camps: a believer in ‘Utilitarianism’ or a devotee of ‘Deontological Ethics’. Utilitarianism (or ‘Consequentialism’) is the notion that actions have consequences and the most ethical action is defined as that which leads to the best outcome, an idea encapsulated by the catchphrase “The Ends Justify The Means”. It is, furthermore, perfectly analogous to the Bellman Equation, put forth by Richard Bellman in 1957, which has been the bedrock of optimal control and of the branch of machine learning built upon it, reinforcement learning.
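In one standard form, the Bellman optimality equation reads:

$$ V^*(s) \;=\; \max_a Q^*(s, a) \;=\; \max_a \sum_{s'} P(s' \mid s, a)\,\big[\, R(s, a, s') + \gamma\, V^*(s') \,\big] $$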

Read as: the greatest value V* to be derived in a situation s is defined by the best action a which can be taken from that situation s, which is the action you expect to yield the best outcome, which is the same as the action which will put you in the best position to make the next good decision. In other words, you should choose the action a from situation s which has the highest probability P of getting you to another situation s’ where you’ll be able to make a series of great decisions in the future.

Utilitarianism, then, lends itself well to rigorous mathematical analysis. Did the action do more good than harm? If so, then it is ethical. Given the possibility of quantifying good and harm, ethics become math.

Deontological Ethics, oftentimes referred to as ‘Kantianism’, is the stance that individuals have a moral duty to adhere to a rules-based decision-procedure, e.g. “killing is wrong, no matter the consequences”, and that the motives and intentions of a person who carries out an action are the sole basis of its morality, rather than the outcomes of that action.

Jeremy Bentham (Utilitarianism) and Immanuel Kant (Kantianism)

Abstract ethical debates seem to eternally revolve around these two paradigms, and focus on the relative merits and costs of each of these approaches to life. These decision patterns are well defined, but humans hardly ever actually abide by them. Why?

Ethics fundamentally concern themselves with the rightness of decisions, so an ethical system can be formalized as a function over the infinite set of scenarios and decisions, one that maps each (scenario, decision) coordinate to a level of righteousness.

Imagine here a projection of the infinite space of situations onto a single dimension (the x axis), which captures all the many subtleties of human life. We also project the infinite space of decisions onto a single dimension y which similarly captures all the information of the higher dimensional space. Then a coordinate in this space is a scenario and decision, and a system of ethics is a function f(x, y) on that domain which determines how ethical the decision taken from that situation may be, where 1 is maximally ethical, and -1 is maximally unethical.

Ethical discussions traditionally concern themselves with the morally hazardous (situation, decision) subspaces and boundary conditions. The trolley problem, for example, is such a subspace, which helps us gauge the general shape of the ‘ethics’ function a person abides by.

Under this analogy, if we (temporarily) assume all decisions are either right or wrong, (discretizing the z-axis), learning ethics becomes isomorphic to the common task of binary classification from Machine Learning. Let’s extend this analogy a little further and see what we find.
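To make the analogy concrete, here is a minimal sketch; the two-dimensional features, the labels, and the boundary are all invented, since no such dataset of right and wrong exists:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a (situation, decision) pair projected down to two numbers,
# labeled 1 for 'right' and 0 for 'wrong' (an entirely made-up boundary).
X = rng.normal(size=(200, 2))             # columns: [situation_x, decision_y]
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# A linear classifier draws a single straight line through that plane.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.3, -0.1]]))         # is this (situation, decision) 'right'?
```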

A linear classifier, hypothetically dividing ‘right’ from ‘wrong’

Deontological Ethics, recall, is a simple analysis of the space of decisions: in any situation, some set of decisions is always wrong, independent of outcome. In the analogy to machine learning, this system of ethics maps to the set of decision-stump-based models (decision trees, boosting, etc.), some of the most naïve models.
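In that spirit, deontology caricatures down to a one-rule stump; the forbidden categories below are invented for illustration:

```python
# A deontological 'classifier': some decisions are wrong regardless of
# the situation or its outcome. (Categories invented for illustration.)
FORBIDDEN = {"kill", "lie", "steal"}

def deontological_judgment(situation, decision):
    # The situation is ignored entirely; only the kind of act matters.
    return "wrong" if decision in FORBIDDEN else "permissible"
```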

Utilitarianism evaluates the likely outcomes of each decision, and ascertains which one to take based on an analysis of those predictions. By analogy, that maps to a transform on the input space, and to choosing between outcomes (and their probabilities) instead of actions. In machine learning, we call this a feature transformation: the moral rectitude of a decision is nonlinear in the input space but linear in the transformed space (which in this case is the predicted outcome of the decision from the situation).

Once you know whether a (situation, decision) will promote ‘good’, the ethics become very simple, according to Utilitarianism.
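Sketched in the same terms, with a stand-in outcome model (predicting consequences is, of course, the genuinely hard part):

```python
def predict_outcome(situation, decision):
    # Feature transformation: map (situation, decision) to a predicted outcome.
    # A stand-in hard-coded for the trolley problem; in reality this is the
    # hard, uncertain part.
    return ({"lives_saved": 5, "lives_lost": 1} if decision == "pull_lever"
            else {"lives_saved": 0, "lives_lost": 5})

def utilitarian_judgment(situation, decision):
    # In the transformed (outcome) space, the decision rule is linear:
    # a simple weighted sum of goods and harms.
    outcome = predict_outcome(situation, decision)
    utility = outcome["lives_saved"] - outcome["lives_lost"]
    return "right" if utility > 0 else "wrong"
```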

But decisions aren’t made in either of these ways, nor should they be. Neither of these models accurately or satisfactorily approximates the distinction between right and wrong, or describes how humans face ethical choices. There’s something missing, some subtlety which these models don’t seem to capture: they seem far too mechanical to cope with all of the grey areas of human existence.

The way humans respond to The Trolley Problem sheds some light on the nature of the gap between human decision-making and these two famous ethical approaches. Recall the first scenario: The Switch, in which the decision to save five results in the death of one. What would you do?

Most people will flick the switch. However, hardly anyone claims they will throw the innocent Fat Man to his death, even though from a consequentialist standpoint, the outcomes are identical. If most would flick the switch (anti-Kant) but not push the fat man to his death (anti-utilitarian), then although both of these models seem plausible, a majority of polled humans don’t abide by either of them. Many people devote themselves to coming into alignment with one of these systems, but I claim that misguided endeavor results from old, broken ethical rhetoric being allowed to retain a primacy it should have ceded long ago. These two systems, endlessly debated, do not actually drive our decision-making, nor should they.

How might a more complex machine learning model approach the task of discriminating between right and wrong (situation, decision) coordinates? Imagine for a moment a Neural Network’s approach.

Inputs that may play into a human’s decision in the trolley problem, which traditional formulations don’t take into account. When the trolley problem is presented to someone, all sorts of assumptions about the situation play a much larger part than we might think in the decision they profess they would take.

What are the hidden units, the internal nodes, in this metaphorical algorithm? They are each unique transformations of the features of the (situation, decision) into deeper, less articulable, and yet more salient features. They are the concepts and ideals which as a whole represent the inexpressible notions of right and wrong.

When confronted with the Trolley Problem, humans envision the scenario, pull by a variety of analogies from any experiences which are at all similar, approximate values for a huge number of feature variables, make a great many predictions and assumptions about a great many potential decisions at once, and then choose from those the decision they are most drawn to by some inarticulable intuition, one which they may subsequently seek to justify, perhaps even using Utilitarian or Deontological rhetoric.
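Pushed all the way into code, the analogy might look something like this sketch; the input features and the random weights are imaginary, standing in for assumptions and intuitions nobody could actually enumerate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Imaginary features a person might implicitly estimate when hearing the
# trolley problem (all invented for illustration):
# [certainty the lever works, how culpable the one person seems,
#  how physically close you are, how much time there is to decide]
x = np.array([0.9, 0.1, 0.3, 0.2])

rng = np.random.default_rng(0)
theta1 = rng.normal(size=(8, 4))      # hidden 'ideals': inarticulable intermediate features
theta2 = rng.normal(size=(1, 8))      # how those ideals combine into a verdict

hidden = sigmoid(theta1 @ x)          # deep, hard-to-name transformations of the scenario
rightness = sigmoid(theta2 @ hidden)  # an intuition of how right 'pull the lever' feels
```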

Humans flick The Switch but refuse to push the Fat Man for reasons they can hardly express. We don’t have the declarative semantic structures to handle all the complexity of our decision-making so we use metaphorical language, describing situations as “grey areas”.

But we make very good decisions the vast majority of the time, because decision-making is a task for which we have an extraordinary aptitude, greater than any mechanistic model, greater than we can even describe in words. Deontological Ethics and Utilitarianism are not only wrong, but they dramatically undersell human potential.

Our behavior has been meticulously refined over our entire lives by an optimization algorithm machine learning researchers have yet to invent. Given a situation, we perform a series of deeply non-linear transformations of the input we receive from our senses; combinations of contextual cues and instinctual and learned motives and ideas provide an intuition of rightness, the source of our decisions.

Jeremy Bentham and Immanuel Kant were pioneers of ethical thinking in their time, but the fact that humans have flouted their teachings for centuries indicates to me that they must have missed something. Abstract Ethical arguments around these two simplistic paradigms only serve to undersell our own capacity for righteous action, and may lead us to worse decisions.

Machine Learning, and the Neural Network algorithm in particular, provides a framework for re-examining the brain from a mathematically regimented standpoint, and furthermore, a language to grapple with some of the more sophisticated phenomena it produces, such as ethical decision-making.
In a parallel sense to how the brain inspired the Neural Network algorithm, the algorithm can inspire ways of understanding the brain.

Next time you make a decision which causes you to think twice, try to examine the inner workings of your own mind, both conscious and unconscious, which might be driving your thinking. It might surprise you.

If you noticed that the entire article I wrote seems to contradict itself: “Abstract Ethical discussions aren’t productive, so let me discuss ethics, abstractly,” I would point out that this article is a meta-ethical discussion, not an ethical one (an important distinction). I’m not talking about right and wrong, I’m talking about talking about right and wrong, which I do think can be productive.

I’d also like to point out that I don’t think ethics can never be discussed productively, I just think we ought to do a better job of discussing them.
For example, I believe literature (fiction) affords ethical decision-making the subtlety it deserves, and I think people can learn a lot from reading literature.

Besides that, I’d love feedback on these ideas.

Thanks to everyone who endured discussions with me about this, including but not limited to: Kevin Paeth, Arun Varma, and Carolyn Saund.
