The classic formulation is that a runaway trolley is about to hit five people on a track, and the only way to save them is to divert the trolley onto another track, where it will hit one person; the subject is asked whether they would consider it morally right to divert the trolley. There are many variants on this problem, adjusting the circumstances, the number and nature of the people at risk, the responsibility of the subject, and so on, in order to fully explore why people make the decisions that they make. The problem is frequently discussed in connection with AI, both to investigate machines' capacity for moral reasoning and for practical reasons (for example, if an autonomous car had to choose between, on the one hand, a collision that threatens its occupants and, on the other, putting pedestrians in harm's way).

This problem is mentioned in 1455: Trolley Problem, 1938: Meltdown and Spectre, and 1925: Self-Driving Car Milestones. It is also alluded to, though not directly mentioned, in 2175: Flag Interpretation and 2348: Boat Puzzle. The AI on the right is not just trying to answer the question, but to develop a new variant (apparently one with three tracks), presumably to test others with.
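The variants above are often contrasted with a purely utilitarian baseline: always act to minimize casualties. A minimal sketch of that rule, with hypothetical track names and counts (not from the source):

```python
# Toy utilitarian rule for trolley-style dilemmas: pick the track with
# the fewest people at risk. Track labels and numbers are illustrative.

def choose_track(casualties_by_track):
    """Return the track whose casualty count is smallest."""
    return min(casualties_by_track, key=casualties_by_track.get)

# Classic formulation: five on the main track, one on the side track.
print(choose_track({"main": 5, "side": 1}))   # -> side

# A hypothetical three-track variant.
print(choose_track({"a": 5, "b": 1, "c": 3})) # -> b
```

Of course, the interest of the thought experiment is precisely that many people reject this purely numerical rule, which is why the variants probe circumstances and responsibility rather than just headcounts.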