On the ethics of smart traffic

One of the hallmarks of a responsible, grown-up person has for ages been the shouldering of responsibility, and subsequently enjoying the freedoms that come with it. Getting your driver's license is one of these hallmarks, a secular rite of passage into adulthood. The right, and responsibility, to pilot a huge chunk of steel over public roads among other similarly dangerous hunks of steel is a powerful symbol of freedom, and a question of personal responsibility. We differentiate between children and adults in how much responsibility they have over their actions, and in how far they are held accountable for them. In Finland, children cannot be held legally responsible and cannot be criminally prosecuted before the age of fifteen. This means that if a child of twelve stabs someone to death, they will not face a court but social workers and psychologists. Between the ages of fifteen and eighteen, a person is granted extra leniency in criminal matters. This is a reasonable acknowledgement of the immaturity of a developing mind. The parts of your brain that deal with realistically weighing risks and rewards keep developing into your mid-twenties, but at eighteen, you're legally considered an adult, able to make your own choices in life.

At the foundation of this view of adult responsibility is the idea of the sovereign individual, who makes free choices using their own free will. Out of those free choices comes accountability for their reasonably foreseeable outcomes. Some philosophers, like Sam Harris, have famously argued that free will does not really exist, but we do not build our society or our values on such claims. In a strictly technical sense, Harris might be correct in his assertion that all our actions are the consequence of environmental and internal forces beyond our control, but that does not mean that organizing a society around the concept of free will makes no sense. We act out in the world the belief that every individual possesses the capability to make choices.

We manifest belief in free will in our actions, our social structures and our psychology. After all, treating people as though they are fully responsible for their actions creates a society that discourages antisocial behavior. Social discouragement is a factor that influences individuals, whether they strictly speaking have absolute free will or not. To conceptualize your existence as a leaf blown around by forces far beyond your control is not an empowering experience, though seeing others as being pushed and pulled by factors beyond their control can serve as a source of compassion towards them. Free will, whether it strictly exists or not, is still a useful concept, and we need it to make sense of the social environment around us. It is at the root of being held accountable for your actions.

The existence of free will and moral responsibility are core concepts when trying to find answers to ethical dilemmas, like the runaway railway trolley thought experiment (better known as the “trolley problem”). The thought experiment goes something like this: a trolley is speeding towards five men working on the tracks. There's a junction ahead, and you stand at the lever where you can, with a simple pull, divert the trolley onto another track where a single man is working alone. The trolley is fast and heavy, and whoever it hits will be killed. Is it ethically justified to pull the lever and doom the single worker to his death while saving five lives? Or should you do nothing and leave the five men to their fate, sparing the one? Variations of the dilemma explore how the choice changes when the single person is a stranger or a close family member, or when instead of pulling a lever you have to physically push a fat man onto the tracks to stop the trolley. Answers and intuitions vary accordingly.

[Illustration of the trolley problem. By McGeddon – own work, CC BY-SA 4.0]

This is a question of the ethics of action and inaction. Answers vary from person to person, but most people who are given time to think steer towards the utilitarian view, in which it's better to save five lives at the cost of one, even if that requires you to directly intervene in the situation. There's a tendency to think that by doing nothing you are not influencing the world around you, and thus can't be held accountable for what happens afterwards. On this view, a moral wrong is happening anyway, and participating in it would make the observer culpable, a part of the tragedy. The opposing view says that what the person at the switch faces is a moral obligation, and that inaction is itself a form of action. There are plenty of situations where inaction seems to be the preferred choice for many, especially when confronting evil comes at great personal cost.

There is no clear, right answer to this or to myriad other moral dilemmas. The example is usually laid out so that we have time to rationalize and think about the issue, but when it comes to steering a vehicle, we are usually severely time-constrained and have to react instinctively. There are situations in traffic where the vehicle you're driving is in a state of motion where you can only minimally influence its direction, because the speed is already beyond your control. You might find yourself having to make a split-second decision whether to steer your car to the right, possibly killing yourself and your passengers, or to the left, swerving into oncoming traffic and possibly killing someone coming from the opposite direction. The stakes are even higher if this situation presents itself in a city center, with pedestrians and cyclists all around you. These situations can resemble the trolley problem, except they present themselves without warning and give you no time to think about the optimal solution.

When these situations end in tragedy and people get hurt, we assign blame and responsibility to the driver. Their decision is, ultimately, what gets analyzed and dissected in a court of law and found to be either correct or lacking. The driver is responsible both for the car and for their own condition to drive it safely. If you lose control of your car because there's ice on the road, we find fault with the driver for not lowering their speed. If a tire blows out and the car flips, we find fault with the driver for not ensuring that the tires were in proper condition for driving on a public road. If you fall asleep at the wheel, you probably should have taken a nap. If you drive drunk and crash your car into someone, you're generally judged to be an asshole beyond comprehension. This is one of the reasons we do not let children drive: we know that they are not ready to take on the responsibility that comes with the freedom of driving.

Automating the driver

In the coming years, this focus of blame is going to move to a new target. The car itself is going to take over much of the burden of driving, ultimately placing the driver in the passenger's seat. Experiments with self-driving technology began years ago, and limited solutions like the Tesla Autopilot are already deployed in consumer vehicles today. The rapid deployment of machine learning and artificial intelligence technologies, combined with falling hardware costs, will eventually phase out the driver altogether. This will most probably bring a sharp decrease in overall accidents, since most accidents today are still caused by human error on the part of the driver, not mechanical or technological faults in the car itself. Eliminating the driver from the process of moving the car from one place to another should result in much safer traffic.

Today, this evolution still has a lot of unanswered questions. The primary questions right now are technical ones: how do we give the car's computer enough information, and in the right format, to build a working model of its surroundings? The second part is the prediction model: the computer needs to mimic human intuition about how the objects surrounding the car are going to act in the next few seconds. This means building mathematical models of whether a pedestrian, a cyclist or another car is going to continue moving forward or possibly make a turn. This is, naturally, downstream from successfully identifying objects in the world in the first place, which is not a trivial task for a computer. Distinguishing between objects in the world is hard, and classifying those objects into the categories used as a basis for behavioral prediction is a resource-intensive task prone to error.
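
As a rough illustration of what the prediction step involves, consider the toy sketch below. It is not how any production system works; it simply extrapolates an observed track under the assumption that the object keeps its current speed and heading, and every name in it is invented for this example.

```python
# Toy sketch of the prediction step: extrapolate where a tracked object
# (pedestrian, cyclist, another car) will be a short time from now.
# Real systems use learned models over rich sensor data; the constant-
# velocity assumption here is only meant to illustrate the idea.

from dataclasses import dataclass

@dataclass
class Track:
    x: float   # position, metres
    y: float
    vx: float  # velocity, metres per second
    vy: float

def predict_position(track: Track, dt: float) -> tuple:
    """Assume the object keeps its current speed and heading for dt seconds."""
    return (track.x + track.vx * dt, track.y + track.vy * dt)

# A pedestrian crossing the street at 1.4 m/s:
pedestrian = Track(x=3.0, y=-2.0, vx=0.0, vy=1.4)
print(predict_position(pedestrian, dt=2.0))  # where will they be in two seconds?
```

The hard part is everything a sketch like this leaves out: pedestrians turn, cyclists wobble, drivers brake, and the classifier feeding these tracks is itself uncertain.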

The third step in automated driving is applying the pre-programmed “driving policy”, which guides how the car should navigate through the world it “sees” and tries to make sense of. The crashes happening now are mainly system design failures in the sensing and prediction phases, and those will be sorted out as the technology matures. There will, of course, be crashes involving malfunctioning computers in the future, but my intuition is that these will be much rarer than the human errors we experience now.
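
To make the three steps concrete, here is a deliberately simplified sketch of how a driving policy sits on top of sensing and prediction. The structure (sense, predict, decide) follows the description above; every function name and threshold is made up for illustration, and nothing here resembles a real vehicle's control stack.

```python
# Simplified sketch of the pipeline described above: the outputs of sensing
# and prediction feed a "driving policy" that picks a high-level action.
# All names and thresholds are invented for illustration.

def driving_policy(predicted_world: dict) -> str:
    """Map the predicted state of the world to a high-level action."""
    # Assume the prediction step has already computed the closest gap, in
    # metres, between the car's planned path and any object over the next
    # two seconds.
    closest_gap = predicted_world["closest_predicted_gap_m"]

    if closest_gap < 1.0:
        return "emergency_brake"
    if closest_gap < 5.0:
        return "slow_down"
    return "continue"

print(driving_policy({"closest_predicted_gap_m": 0.8}))  # -> emergency_brake
```

Even in this caricature, the interesting questions live in the thresholds: who chose them, and on what grounds.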

Making moral choices

By handing the task of navigating traffic to a computer, we face the question of how to model morality and ethics in the driving policy that guides the choices the driving computer makes. How can a computer be tasked with making judgments like the trolley problem described above? These are questions that defy the cold logic of machines, and they don't have answers that can be expressed as code with discrete outcomes. That's why they are dilemmas: problems without clear answers. How much is the surrounding culture allowed to dictate what moral values are programmed into the driving policy? The equal value of human life is not a globally accepted concept.

Should we push for a utilitarian outcome for machine-instructed ethical considerations, or should the machine take more detail into account as the intelligence of the computer gains new ground? Should we trust the computer to make ethical judgments about whether it's more permissible to let the car plow into a group of five older alcoholics sitting at a bus stop, or into a single mother pushing a pram, when no other options are available? What if the choice is simply between a man and a woman? I hope we can all agree that there are factors that should not be given any weight in this kind of decision-making, like the race, faith or gender of the people in question. But judging by the climate of some academic thought on the subject today, that might not be as true as I'd like to think. A poll conducted in universities today on a trolley problem where the single person working the track is an intersectionally oppressed person of color, weighed against the lives of five white, straight men, might stack the deck in a surprising and morally unsound way.

Questions like these highlight how improbable it is that a strictly utilitarian approach is attainable. Even if we publicly agree that all human life is valuable, we are usually privately ready to make concessions that some lives are more valuable than others. Which lives those are is ultimately a question of our culture, personal ethics and values. How do we account for the uncomfortable fact that sometimes the only ethically sound option is to steer the car in such a way that the driver is sure to die, so that strangers on both sides of the road may live on? Would you buy a car that placed you on the ethical scales at your actual weight, and no higher? In a dystopian world-view, computers might rank-order people by their virtues, and use that scale to weigh the worth of your life in the rare situation where a software program is deciding whether you live or die. A shadow of this view is on the horizon.
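
One way to see why a strictly utilitarian policy is so uncomfortable is to write down what it would literally have to contain. The sketch below is hypothetical and deliberately crude, not a proposal: the moment you implement “choose the lesser harm” in code, someone has to type in the weights.

```python
# Hypothetical sketch of a utilitarian "lesser harm" rule. Its only point is
# that any such implementation forces the programmer to assign explicit
# numeric weights to human lives.

def expected_harm(people_at_risk: int, probability_of_death: float,
                  weight_per_life: float = 1.0) -> float:
    # weight_per_life = 1.0 encodes "all lives are equal". Any other value,
    # or any per-person weighting, is a moral judgment written into code.
    return people_at_risk * probability_of_death * weight_per_life

def lesser_harm(option_a: float, option_b: float) -> str:
    """Pick the option with the lower expected harm."""
    return "A" if option_a <= option_b else "B"

# Swerve (risking the single occupant) vs. stay the course (risking five pedestrians):
swerve = expected_harm(people_at_risk=1, probability_of_death=0.9)
stay = expected_harm(people_at_risk=5, probability_of_death=0.7)
print(lesser_harm(swerve, stay))  # "A": the car sacrifices its occupant
```

Every disagreement in this section reappears as a parameter: change weight_per_life for different groups of people and the poll result above becomes policy.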

The ethics algorithm

The machines that will drive our cars in the future need clear, explicit instructions on how to handle these questions. We are tasking programmers with coming up with definite solutions to age-old ethical questions, and with putting them into practice in the choices made by moving machines. This will be reality within a few decades, and we either need clear answers to go forward with, or we must abandon the pursuit of further machine automation altogether.

We can't build cognitive dissonance into thinking machines. That code will not compile unless the juxtaposition of two mutually exclusive views is resolved. People can happily profess all lives to be of equal value and dodge any question that forces a terrible ethical choice about that principle, but machines can't. They have to make a choice, since inaction due to indecision is not an option. When people face these choices, they act according to their instincts but, crucially, they have to live with the consequences. They are responsible. Society will judge them, because they are free, emancipated individuals capable of making their own choices.

If we give machines the freedom to plot routes through heavy traffic and to steer thousands of tons of steel along them at high speed, how do we assign the individual responsibility that comes with that (simulated) freedom? You can't put a computer on trial, but should we impose some level of responsibility on the programmers? If so, what level of responsibility, and are these people personally responsible for every wrong call the computer makes in the future, under any circumstances? Maybe.

It might ultimately be decided that we should leave the toughest questions up to blind chance: a digital coin flip that makes the call in a situation where both outcomes are guaranteed to be bad. Whether we're ready to place ourselves and our families into that equation as the possibly expendable ones remains to be seen.
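
What leaving it to blind chance would look like is at least easy to write down; whether it is acceptable is the whole question. A minimal sketch, assuming such a tie-break is ever wanted at all:

```python
import random

def tie_break(option_a: str, option_b: str) -> str:
    """When two outcomes are judged equally bad, let chance decide."""
    return random.choice([option_a, option_b])

print(tie_break("swerve", "stay the course"))
```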
