Equality of Benefit

I’ve been involved in a lot of discussion around bias, equality, and fairness in algorithmic decision making. Without going into an excessive amount of background and detail, the gist of my belief at the moment is that equality of utility is the safest thing for companies to aspire to.

What is equality of utility? Let’s reduce to binary decision making: given an individual x, with observable features f(x) and a protected attribute p(x), suppose the company must choose between two actions {a, b}. What is a workable definition of fairness or equality in such a decision with respect to the protected attribute p?

Suppose god, a neutral third party, bestows on us a utility function u, curried so that its evaluation on an individual, u(x), is itself a function over actions: u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to x of the company taking action b.

Let g be the decision process of the company; g(•) is the decision the company makes, either a or b, for the situation at hand. Then the right thing to do is

g(f(x)) = argmax_{i ∈ {a,b}} u(x)(i) = g(f(x), p(x))

Simple: we do as god says is best for the customer, acting as if we have the knowledge of an oracle, even when we know of some reason to discriminate.
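The rule above can be sketched in a few lines of code. The utility oracle here is hypothetical (no company actually has access to god's u), and the names `decide` and `utility` are mine; this only illustrates the criterion, not how to obtain it:

```python
def decide(x, utility, actions=("a", "b")):
    """Pick the action that maximizes the individual's utility.

    `utility(x)` returns a dict mapping each action to u(x)(action).
    Note that the protected attribute p(x) never changes the criterion:
    whether or not it is observed, we take the argmax of u(x).
    """
    u_x = utility(x)
    return max(actions, key=lambda action: u_x[action])


# Toy example: an individual for whom action "b" happens to be more beneficial.
toy_utility = lambda x: {"a": 0.2, "b": 0.7}
individual = {"features": [1, 0], "protected": "group1"}
print(decide(individual, toy_utility))  # "b"
```

The point of the sketch is that `decide` consults only the utility oracle, so adding p(x) to its inputs cannot change the outcome, which is exactly the equality expressed by the formula above.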

The halting thought problem

One wonders if there are thoughts that people cannot have, because those who did have them did not live on to pass the thought to others.

Just as a crashed computer cannot infect another computer with the virus that crashed it, could there be some group of thoughts, inducible in normal humans, that cause problems so catastrophic that those who accidentally reached them in the course of evolution never lived to tell?

Yoshua Bengio’s talk at the 2016 Bay Area deep learning school inspired this nightmare last night… What if, just as we don’t need to drive a car and crash a thousand times to learn how not to crash, we have learned not to think of many things that would crash our own programming? Another somewhat subliminal trigger for this nightmare was his computer crashing three or four times during the presentation.

What if the reason we don’t know how the human brain learns is that this knowledge would cause instability in individuals or society, crashing them, so severe that we have built-in mechanisms that prevent us from understanding it? What if, by the same unknown mechanism that makes it easy for us to understand physics and logic and emotions, we are inhibited from having some class of knowledge or skill?

An obvious example would be if I derived a deterministic way to cause a person to stop peeing, or rather, if I learned to stop myself from peeing. But there could be less obvious examples, things that cause psychological changes or induce forgetfulness: say, if I figured out how to forget and then immediately forgot how to do so. This kind of “bug” or boundary condition seems very possible.

Aside from admitting that this is structurally possible, there is the added effect of evolution. If true, evolutionary theory tells us these thoughts are precluded strictly because those of us who survive have evolved away these dangerous edge cases, or otherwise developed very strong and redundant inhibitory systems to prevent them from occurring.

Lastly, is knowledge of how we learn such dangerous knowledge? Do we constantly have mini-crashes, where just as we are about to learn the secrets of learning itself, something peripheral, like presentation software, crashes in us and prevents the thought from occurring?

Thankfully, we know we have not evolved away the ability to think about this possibility; perhaps there is still hope?

A possible symbiosis

So… To take myself out of the nitty-gritty for a moment, machine-human symbiosis still seems possible.
Some decades ago, while in high school, I wondered about this matter. At the time we had 80486 computers and RAM measured in megabytes. My conclusion about computers replacing many human jobs or functions, or becoming more valuable than humans, was that it is inevitable that we strive to live with them. Much as white people have learned to live with black people, and as we care about endangered species more than we care about some people’s economic welfare, we can learn to live with computers as equals and sacrifice some human pursuits to that end.

Alas that was more than two decades ago.

Last night, I heard Kai-Fu Lee address a crowd and discuss AI businesses in China. His answer was somewhat sobering. To paraphrase: in the age of AI, where product managers are gradually replacing with AI any human function that requires less than five seconds of human thought, the future of human work lies in things that take more thought. Under time and audience pressure, he gave art, music, the appreciation of art, things that require a personal touch, as examples of new jobs.

Personally, I can empathize. As one who sees the foremost advances of AI and robotics, and whose job is partly to make money by disruptively using this new technology to replace old systems, I can definitely see his human_replaced_counter ticking up, with projections for it to grow very fast.

(Much else was discussed in his talk, of course; this was just a short question at the end.)

It might just be me, but I could almost see tears as he answered this question. There isn’t a comfortable answer when you have to admit that someone, or something, else will definitely beat you at something. When that something is your livelihood, and there is even a robo-VC now, it is harder to be objective. Of course we do not want to be paralyzed by paranoia. But we should think hard!

What will we do when machines take over our jobs? What will we do when machines take over our lives and live for us?

And where is the problem? Why don’t all the Uber drivers replaced by bots go on welfare and attend free community college? They could get their degrees from the comfort of home on Ng’s Coursera or Thrun’s Udacity. They could learn to do something else, perhaps learn to write programs? “Learn to Code,” as it is now colloquially called. They could take my job after that. Then I could go get my MD and PhD and go heal people, or do philosophy?
(Footnote: what we don’t want to see is a flow of talented and educated people into jobs they are overqualified for: coders driving Ubers, MDs and PhDs writing code. The prevalence of this phenomenon stirs a deep, dark anxiety that I cannot name. The decisions to do so are individually very rational. However, it would seem that society’s investment in the educational infrastructure that created these MDs and PhDs has not achieved sufficient ROI for society’s sake. That is, if they train in physics, should they not attend to the physics matters the training was designed for, rather than be occupied with completely unrelated subjects? If not, why do we invest so much in physics higher education? s/physics/any other subject/; again, it is worrisome if society works this way, though there is no fault in the individuals or institutions involved, each of which is arguably producing maximally and with the best of intentions.)

Wouldn’t that be a blast? It would be like the 24th century of Star Trek: no wars, no worries about money or scarce resources. As technology advances, society advances. We will no longer struggle against our fellow man but against greater obstacles, striving only to better ourselves and humanity.

Such a grand future awaits us!