Reading Notes
This article discusses the ethical issues surrounding the programming of autonomous vehicles. It re-imagines the storied "trolley problem," in which a person is presented with a no-win situation, must choose a course of action, and must then justify that decision. The author does not take a position on how a car should react in a similar situation, and I agree with that restraint. Such a decision should not be contemplated by one individual or one development team; it should be a public conversation. A single correct answer is unlikely to emerge, but a better answer can be reached with input from a broader range of constituents.
In this article, the author shares an experience in which he was responsible for developing code that was intentionally misleading as part of a marketing strategy. I understand why he made the decisions he did and why he felt personally responsible for the unintended outcome. Given the limited information, I cannot pass judgment on whether he was right or wrong to code the site as instructed. For all I know, the drug led to the deaths of a few people (which is terrible) but at the same time saved hundreds or thousands of lives. I am not attempting to excuse his choice: perhaps the quiz was intentionally and fraudulently misleading, but perhaps the drug in question was also the best option for anyone who took the quiz. Ultimately, I appreciate the underlying message of the piece: developers must evaluate and take responsibility for the products they help create.