Presentation link: https://www.canva.com/design/DAGj6cPuI10/sSf05lB-X6eF9HkOpI8EBA/edit?utm_content=DAGj6cPuI10&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton
Name: Adi Almukhamet
I am an international student from Almaty, Kazakhstan
Major: Computer Science
Post-Grad Goals: Work in Big Tech and help my dad with his blockchain projects
Hobbies: Long-distance running, gym, piano
I wish I had the superpower of slowing time, so that I would have enough time to think through every decision and make time for everything I want to do
I would consider sincerity and patience my superpowers, since I feel these are qualities that keep decreasing in people over time
My guilty pleasure is gaming with my friends
I really love his insight about AI, particularly when he asked whether an AI's failure is the fault of its code or a reflection of failures in human society. It reminds me of the worries about whether AI will develop autonomous awareness. So, I have a question about autonomous awareness: would an AI's autonomous awareness simply be the result of a more advanced, artificially developed algorithm, or would it be a spontaneous qualitative change that occurs once the technology reaches a certain level?
I think that your take on the topic is very interesting. The idea that humans themselves could be blamed for AI's faults never occurred to me, even though everyone knows that humans were the ones who created AI. Personally, I think I subconsciously view AI as another person rather than a man-made entity, and that may be why I never blamed humans for AI's faults but rather the AI itself.
I think you provided excellent examples for our discussion, along with a multi-faceted analysis of both sides. Your final point that humans need to create successful ethical AI is very convincing.
The debate around AI is highly contentious. The benefits AI could bring would definitely help many individuals and industries. However, no one truly knows what some AI technologies might turn into. Therefore, as mentioned in the presentation, transparency in AI decisions is crucial. The proposed solution brought up the concept of licensed creators, but the criteria for licensing are vague, as there is no clear definition or fixed standard. Hence, I think this part is still ambiguous and needs some clarification.
I agree with one of the comments brought up in class: the solution of letting victims audit the code could potentially introduce bias. How can we ensure the audit stays objective if our ultimate goal is to make an ethical AI? It's hard to standardize the creation and modification of AI because the problem stems from humans ourselves. At the same time, it's hard to never expose AI to human biases. Those biases are part of society, a part of the human civilization that AI needs to learn in order to function as a tool that knows everything.
I agree with Toshith. I have one nitpick I wanted to raise: in your "AI Deadlock" slide, you mentioned that if we "remove" human biases from AI, the AI will generate new biases to avoid the human ones. I wonder what your opinion is on a possible solution: what if we never introduced AI to human biases in the first place? It seems to me that actively removing human biases poses a threat because it causes the AI to swing in the opposite direction, but what if we never set it in motion? Would it still generate its own biases, as Tay demonstrated?
The presentation mentioned that AI doesn't fail ethics; rather, humans fail at building ethical AI. That made me wonder whether it's even possible to create truly ethical AI, or whether it will always carry some form of human bias, no matter how hard we try.