Fully autonomous vehicles are the dream of many people. Right away I think of autonomous taxis, scheduled through an app on your phone that's linked to your work schedule or calendar, that pick you up and drop you off. Being a science fiction fan, I also see a future in asteroid mining and the transportation required to move that material to wherever it needs to be, for example, a base on a distant planet or the construction site of a large spaceship. Our main topic today, though, autonomous cars, will inevitably get into situations where they need to make difficult decisions that most people would have trouble making. A huge question on everyone's mind is: how do we program the AI to make ethical decisions?
Ethics: “moral principles that govern a person’s behavior or the conducting of an activity.” With this definition in hand, and after reading the ACM Code of Ethics, I would say that ethical programming can be grouped into a few different areas.
First, write respectable code. What is respectable code? To me, and to the ACM Code of Ethics, respectable code is code that does no harm to anyone who uses it. That covers everything from compromising the privacy of users to writing scripts that do intentional damage to machines or to people’s lives.
Second, maintain a professional image. You should always try your best when writing code for others, and you should avoid projects where you know very little about the subject. The last thing you need at your job site is to be asked a question and have no idea how to respond because you know nothing about the topic. When you do take on a job within your abilities, follow standard practices such as giving your functions and variables proper, descriptive names so the next person can easily read your code and make adjustments as needed. This also feeds back into writing respectable code.
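Just to show what I mean by descriptive naming, here is a tiny toy sketch of my own (the function names and numbers are made up for illustration, not taken from any real vehicle codebase):

```python
# Hard to read: the next person has to guess what these parameters mean.
def chk(d, g=2.0):
    return d < g


# Easier to read: the intent lives in the names themselves.
def is_following_too_closely(distance_to_lead_vehicle_m: float,
                             minimum_safe_gap_m: float = 2.0) -> bool:
    """Return True if the gap to the car ahead is below the safe minimum."""
    return distance_to_lead_vehicle_m < minimum_safe_gap_m


if __name__ == "__main__":
    print(is_following_too_closely(1.5))   # True
    print(is_following_too_closely(10.0))  # False
```

Both functions do the same thing, but only one of them can be understood, reviewed, and safely changed by the next person on the team.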
Third, be a leader when opportunities arise. By that I mean, when you know more about a subject, you should be able to communicate effectively with your team and lead them to the final product. I’ve seen several cases where someone knew more about a subject but, for some reason, felt it was unnecessary to share that knowledge and lead the team to a win.
With autonomous cars, we can apply a lot of these same principles when writing our programs. If you had the chance to jump into the first autonomous taxi service, wouldn’t you feel safer knowing the creator of that vehicle follows the ACM Code of Ethics, versus some random team with sloppy coding standards? I’m sure we all know by now that clean code is easier to read and work on, which makes production quicker and less prone to bugs. You can see why a professional image is important.
It’s well known by now that cars are dangerous, so now imagine writing a program that does everything a human does when driving. There are a lot of rules to adhere to. First, the car needs to stay on the road at all times and avoid other cars and pedestrians. That is the basic package when it comes to autonomous cars; I would say Tesla is at this stage right now, because the driver still needs to stay focused on the road while the car is in autonomous mode. One of the most controversial topics with autonomous vehicles is the “lesser of two evils” problem. If you were about to crash and your only choices were to hit someone or turn directly into a brick wall, which would you choose? Which one should the AI choose for you? There are several different versions of this problem, but the idea is always the same: who dies when death is the only choice?
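To make the point concrete, here is a purely hypothetical sketch of my own (none of these names or numbers come from a real autonomous-driving system) showing why this is an ethics question and not just an engineering one: somewhere, somebody has to pick the weights.

```python
# Hypothetical sketch of the "lesser of two evils" problem in code form.
# The whole example is invented for illustration; the point is that the
# weights encode whose safety counts for how much, and choosing them is
# exactly the ethical decision most people would have trouble making.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    risk_to_occupants: float    # estimated chance of serious harm, 0.0 to 1.0
    risk_to_pedestrians: float  # estimated chance of serious harm, 0.0 to 1.0


def choose_maneuver(options: list[Maneuver],
                    occupant_weight: float = 1.0,
                    pedestrian_weight: float = 1.0) -> Maneuver:
    """Pick the option with the lowest weighted harm score."""
    def harm_score(m: Maneuver) -> float:
        return (occupant_weight * m.risk_to_occupants
                + pedestrian_weight * m.risk_to_pedestrians)
    return min(options, key=harm_score)


if __name__ == "__main__":
    options = [
        Maneuver("brake and hit the wall",
                 risk_to_occupants=0.7, risk_to_pedestrians=0.0),
        Maneuver("swerve toward the pedestrian",
                 risk_to_occupants=0.1, risk_to_pedestrians=0.9),
    ]
    print(choose_maneuver(options).name)
```

The code itself is trivial; the hard part is that any value you put in those weights is a moral claim about whose life matters more, and that is the part the ACM Code of Ethics pushes us to take seriously.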
I am a strong supporter of autonomous vehicles because I believe they can make the roads safer for everyone. The “lesser of two evils” problem is a specific scenario that doesn’t happen very often, but it is an issue that needs a lot of attention.