Watched the conversation between Yuval Harari and Fei-Fei Li today about AI and ethics at Stanford Memorial Auditorium.
Yuval Harari has written a couple of books, all of which I've read, and Fei-Fei Li is a well-known tech figure in the Bay Area, now heading the Human-Centered AI institute at Stanford. If this was a battle, the opening salvos were pretty intense, like bombs that create distortions in spacetime… "What about love?" shot Fei-Fei. "Do you really feel that an AI can achieve or supersede human love?"
YH: Well, just flip open Tinder.
FFL: I hope there's more to love than that.
YH: Well, it depends on whether you're talking about Greek platonic love, Buddhist love, or the love of mammalian courtship.
FFL: Is there more to it than mammalian courtship? Or do we fear AI may…
YH: Well, yeah! It already affects, influences, and tells you what to feel and think and whom to date and…
The conversation continues from there:
Yuval gives the financial system as an example of a system that nobody understands — not even leaders and experts. AI could suffer the same problem.
Yuval thinks the way forward is deeper, more introspective human self-understanding, as well as personal AI. A personal AI is an AI that knows, or comes to know, a designated person (possibly better than that person knows himself) in order to help him get what he wants.
Yuval closes by answering the "free will" question in a most surprising way. This is by far the most refreshing thing I've heard about AI this year. A history professor at the Hebrew University of Jerusalem says the Bible is wrong to insist on free will, and that self-understanding as a means to lift oneself from suffering is more important for human beings. This of course rejects the Bible and promotes an Eastern Buddhist worldview. I mean, wow! For a philosopher and scientist to flat-out say that is absolutely brave and amazing!
Fei-Fei of course works on the HAI effort to promote exchange between disciplines, from the humanities to computer science to the natural sciences; although ultimately, this effort is just one part of a whole society working through the problems of a new transformative technology. This position is of course a reasonable deflection to the other side pressing computer scientists for an answer about safety and fairness for all of humanity.
Fei-Fei Li says there is no theorem forcing a trade-off between the explainability of a model and the performance (accuracy) of AI systems. That too is way, way out there, IMHO. The intuitive reading of "interpretability," combined with what we know about performance, definitely pits simplicity against performance, because explanation involves communication, which we understand to have limited bandwidth. But Fei-Fei Li is an expert on this subject, and she also has knowledge of fields like neuroscience. Maybe there are boundaries to the monotonic relationship between the two beyond which that relationship reverses. This thought is definitely new to me.
This stimulating discussion will keep me excited for a long time.
There is something scary about this whole thing, though. The humbleness Harari expresses. Something inside me is afraid for him… afraid for us, if we were to take his path… like, what if he finds out in 20 years' time that he's not gay? Or that Buddhism was a multi-millennium joke and Judaism was really the truth? His position is very, very unsettling in that it allows for very, very big changes in beliefs. Fei-Fei's confidence is reassuring, but it definitely strikes fear in me too. I mean, for god's sake, she was born in China! And she spent a year outside the USA in her adulthood, in China! Like, what kind of evil espionage training was she getting? She worked for Google and is a professor at Stanford!!! And she's up there answering questions about how the powerhouses of AI will not take advantage of everyone in the world who has no AI??!! Like, wtf? Do we expect a straight answer? Her people (students and her gang of researchers) invented a lot of this shit, and who is to say what she did, or more importantly, did not, say or post on arXiv? Only her scientific cronies or countrymen could say. Fei-Fei's answer is that you should talk to her, and that she co-leads a center for doing exactly that. How satisfying is that?
Anyways… I’m really still in awe of Harari and Li. But I can definitely see a reasonable number of reasonable people casting reasonable doubt on this reasonable exercise in reason.
P.S. Writing this while also watching BlacKkKlansman. Wow, America was such a fucked-up place… I'm really scared now. What a long, tortured struggle for freedom endured by African Americans and Jewish people. I'd write a song about my own people's predicament had I the talent. (My people being a disaggregated segment of the colored people of America.) Sigh…