https://docs.google.com/presentation/d/1v1GKMfIFsE1PEjMf1_OlDPtQPl3to2ns64JjwaNoRGg/edit?usp=sharing My name is Akshar Oza, and I am a freshman studying neuroscience on the pre-med track (considering a minor in human physiology or chemistry). I'm from Madison, Wisconsin (I also did a semester at UW-Madison before coming to BU), and my main post-grad goal is to attend medical school. Outside of academics, I work at a hospital back home and have been since high school. My family and I have been getting into watches over the last couple of years. I play tennis, have a strong interest in geopolitics (primarily as applied to modern global conflicts) and military aviation, and love to travel with my family. One fun fact about me is that I can read (and somewhat write) Russian Cyrillic, and I am now learning the small differences in the very similar Ukrainian Cyrillic script.

I found your presentation topic both timely and thought-provoking, especially considering the ethical complexities surrounding autonomous weapons. It raises important questions about accountability and moral responsibility in warfare. If machines are making life-and-death decisions, who bears the ethical burden?
The use of autonomous weapons could lower the political and emotional barriers that normally deter war. When a country does not risk the lives of its own soldiers, decision-makers may feel less pressure to seek peaceful solutions, even though civilians and enemy soldiers remain at risk.
As for the trolley problem you raised: if I had to choose, I would sacrifice the soldiers. I believe that soldiers, by taking their oath, accept a moral obligation to protect the country and its citizens, even at the cost of their own lives. Citizens, on the other hand, are relatively defenseless and innocent, and it is the government's duty to protect them. In such a situation, therefore, the lives of citizens should be prioritized.
I thought your presentation topic was very relevant to modern-day events, especially with the emergence of AI and the question of whether such machinery can be trusted to make humane decisions. I'm curious whether you have a counterargument to your own standpoint: a parallel case in which a machine could be trusted to make such choices?