Freedom of choice
Scott Adams recently wrote an article called “Would You Take Orders From Machines?”. Although the idea is interesting, in my humble opinion Scott is missing the point.
Orders
The first problem with his article is the use of the term “orders”. We don't take orders from machines; we merely use the information they provide to us. An order is an authoritative command, direction, or instruction; thus, the notion involves subordination and obedience, which implies unconditionally following the order. When a soldier is given an order but doesn't follow it, the soldier is disobedient and his role within the military system is compromised. When I give my PC an order to copy a file, I expect it to do exactly that. If instead it copies the file to a different location or simply removes it, that indicates a problem with the machine, which may lead to its replacement.
Let's talk about Scott's GPS example. Drivers don't take orders from their GPS. They simply use the information the GPS provides to choose a route. They may, on the other hand, take into account information from other sources and take a route which differs from the one suggested by the GPS. For instance, they may notice that the suggested road looks congested and take a detour. Or they may know that a new road opened yesterday and take the shortcut. Or the driver's husband may call and ask them to buy some groceries on the way. Or they may notice a nice park and decide to walk through it to the destination instead of driving.
With this basic example, it's easy to notice two substantial differences: (1) the possibility of having multiple sources of information, and (2) the absence of any notion of obedience.
Orders usually come from a single source. Imagine a soldier receiving an order to defend a position from his commander, then an order to retreat from another commander, and at the same time an order to attack from a third commander. Or imagine my PC being controlled by me and ten other people, all giving contradictory orders. This won't work for long.
What happens when I decide to walk through the park instead of driving? What happens if I turn right in order to buy some groceries, while my GPS told me that I should turn left?
Not much. I won't be punished by my GPS. It will not consider me disobedient and it will not replace me with another driver. I am not a faulty element of a system when my decisions contradict the suggestions of my GPS.
It would be a different story if a soldier started to decide which orders he should follow, or if my PC started to decide which files should be copied.
An excellent illustration is the computerization of aircraft. Planes are mostly flown by machines, which might make it look like pilots have no actual choice. But look at what happens in an emergency. In TransAsia Airways Flight 235, would the machine override the pilots' decision to turn off an engine? Would it prevent them from doing it? What happens is that the machine steps back and lets the skillful pilots make the decisions which may mean the difference between hundreds of deaths and a safe landing.
But wait, you may ask: what about Scott's example of stoplights? The illustration belongs to a different category, but the implications are the same. Stoplights don't give orders. They merely provide us with information. The fact that we don't run a red light is explained by our social behavior, by the fact that we agree to follow the law and do what our morals tell us to do. The machine aspect of the stoplight is irrelevant here; it is simply an avatar. What is relevant is the law and our morals. I won't murder my neighbor (even if he makes a lot of noise when I try to concentrate on my article) because it's against the law and against my morals. Similarly, I stop at a red light, guided once again by the law and my morals.
If the notion of an avatar is unclear, let me illustrate it with the example of a soldier. When a soldier receives an order through his radio, is he obeying the radio or his commander? Right, the radio is only a tool. Imagine now that instead of talking through the radio, the commander can remotely activate one of two lights on a small device the soldier carries. A green light means “Fire at will”; a blue light means “Stay low and move to the extraction zone”. Is the soldier following the orders of a small device, or those of his commander?
While stoplights are not activated manually by a policeman, the idea is the same. We, the people, made those stoplights; we defined the rules which make them turn red or green. A stoplight doesn't “decide” that your car should stop. A stoplight just powers different lamps according to some basic rules. It doesn't understand those rules, and it doesn't understand the meaning of the red or the green light, nor its consequences for the world.
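To make this concrete, here is a minimal sketch of what a stoplight actually does, written in Python purely for illustration; the durations and state order are my own made-up assumptions, not any real controller's specification. The point is what is missing: no model of drivers, no concept of the law, no meaning attached to “red”.

```python
import time

# Purely illustrative: a stoplight modeled as a trivial state machine.
# The durations and the state order are made-up assumptions.
CYCLE = [
    ("green", 30),   # seconds the green lamp stays on
    ("yellow", 4),
    ("red", 35),
]

def run_stoplight():
    """Cycle through the lamp states forever.

    Note what is *not* here: no drivers, no law, no understanding of
    what "red" means. Just lamps switched on and off by a timer.
    """
    while True:
        for color, duration in CYCLE:
            print(f"lamp on: {color}")
            time.sleep(duration)

if __name__ == "__main__":
    run_stoplight()
```

Everything that gives the red lamp its authority lives outside this loop, in the people who wrote the rules and in the drivers who agree to respect them.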
Freedom of choice
The second problem is the notion of choice and freedom of choice. According to Scott, “My too-clever point is that someday humans will be enslaved by their machines without realizing it.”
What is freedom of choice? Do we actually have freedom of choice?
Without getting too philosophical, it is clear that nearly every choice we make is somehow constrained by our environment.
When I buy groceries, am I completely free in my choice? Not really: for starters, I can buy only what is available in the store. I can choose whether I want oranges or apples, unless there are no oranges left, or the owner of the shop decides to double the price of apples.
When I go walking, am I free to go wherever I want? Not really: there is private property, and there are districts of the city which are not safe, and there are plenty of other factors which influence my decision.
It still looks like I have freedom of choice. I can buy oranges even if every grocery store in my city decides to sell apples only: I can still drive to another city. And I can walk in any district of the city, at my own risk. But the fact remains that the consequences attached to our choices make us free only in appearance.
Machines don't take our freedom of choice from us. Their presence is simply irrelevant. They become another factor which may influence my choices, but this doesn't mean that I'm more constrained than I would be without them. The nice part is that since they don't give orders, but merely suggest and provide information, I always have the choice to treat their suggestions as relevant or to ignore them.
Imagine a basic case. I'm about to go to a restaurant with my friends. Am I free to choose one in the first place? Probably not, because some are closed, some are too far away and some don't serve vegetarian food (and one of my friends is vegetarian). Moreover, another friend may hate my favorite restaurant, so my choice can be overruled because those fucking social rules force me to consider the opinions of my friends for I don't know what reason!
Now, I can ask my smartphone for suggestions. Let's see... hm, there is a restaurant I don't know that is not far away, serves vegetarian food, can take my reservation online right now, and has a good rating. Let's go there!
Who made the decision now?
My smartphone? It's just a stupid machine; it doesn't even know what a restaurant is. All it knows are bytes, and those bytes have no meaning for it. So who?
Maybe the developers who created the app I used to find the restaurant? They may not even know this restaurant exists.
What about the community, through the ranking of this particular restaurant? Well, it seems like we're getting a bit closer. So my choice is based not on an order given by a machine, but rather on my trust in the community, my trust in the aggregated feedback of other human beings.
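If you want to see how little “deciding” the smartphone does, the whole suggestion boils down to something like the sketch below (Python, with invented data, field names and thresholds; real apps are fancier but not different in kind): filter a list by distance, vegetarian options and online booking, then sort by the community's rating.

```python
# Illustrative sketch only: the data, field names and thresholds are invented.
restaurants = [
    {"name": "Chez Luc",   "km": 1.2, "vegetarian": True,  "online_booking": True,  "rating": 4.6},
    {"name": "Steak 45",   "km": 0.8, "vegetarian": False, "online_booking": True,  "rating": 4.8},
    {"name": "Green Leaf", "km": 9.5, "vegetarian": True,  "online_booking": False, "rating": 4.9},
]

def suggest(restaurants, max_km=3.0):
    """Return nearby, vegetarian-friendly, bookable candidates sorted by rating.

    The filtering is purely mechanical; the 'rating' values are aggregated
    human feedback, and that is where the actual judgment lives.
    """
    candidates = [
        r for r in restaurants
        if r["km"] <= max_km and r["vegetarian"] and r["online_booking"]
    ]
    return sorted(candidates, key=lambda r: r["rating"], reverse=True)

print(suggest(restaurants))  # only "Chez Luc" satisfies every constraint here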
This means that the fact that omnipresent machines influence our choices has no real implications. Yes, we use those machines to make decisions, but this doesn't make us slaves, nor do we actually take orders from machines. We are free, just as we were free before we bought a GPS, a smartphone and dozens of other gadgets intended to make our lives easier by enslaving us.