Imagine this scenario: you are in your new driverless car and a situation arises (it doesn't matter how) in which the driver (the computer) has to decide between crashing into a group of young schoolchildren, probably killing several, or slamming the car into a wall and probably killing the passenger, i.e. you! Simple ethics would recommend taking a least-harm approach, but that means possibly killing you. Would you buy a car programmed to kill you? Or would you prefer to buy one that would make the less ethical choice and always seek to protect the car's occupants? These ethical dilemmas are coming to the fore with the advent of autonomous systems. Several years ago the UK's Royal Academy of Engineering published a report on the ethics of emerging technologies and autonomous systems. More recently MIT Technology Review posted a piece titled "Why Self-Driving Cars Must Be Programmed to Kill". My colleague, Paul Ralph, also just gave a radio interview on this subject.
from The Universal Machine http://universal-machine.blogspot.com/