The age of self-driving cars seems closer than ever. When you look at what Google, Tesla and other companies are doing to advance the arrival of the fully autonomous vehicle, it becomes quite clear that the day is not far away.
When you think about it, we have had something like semi-autonomous vehicles on the roads of Texas for some time. Think cruise control. These days, there are cars that can keep themselves in their lanes while underway, park themselves and apply their own brakes when they sense that an accident is imminent.
The one thing separating today's common vehicles from fully autonomous ones is that a human driver still needs to be at the controls. One reason is that, when certain accident situations arise, snap judgments involving morals and ethics have to be made, and developers haven't yet resolved what kind of moral compass self-driving cars should be given.
This is an important question because when accidents happen, the issue of liability arises. The way things are now, car accident victims have a right to pursue compensation and recovery based on a determination of individual accountability or product safety.
The tricky reality autonomous vehicle developers face, according to some experts, is that there will be accident scenarios on the road in which injury or death is unavoidable. So the question becomes, should self-driving vehicles be programmed to protect the occupants of the vehicle at all costs, or should they be programmed to sacrifice them for the sake of those in the other vehicle? What's the threshold for making the call?
This isn't an issue that needs to be resolved immediately, but industry observers do agree that it needs to be addressed at some point if self-driving vehicles are ever to be fully accepted by society.
For now, all we can do is wait and watch.