Equality of Utility II

Some time ago we investigated the equality of benefits. Roughly speaking, let us degenerate real-world actions into a discretely selectable set of choices a\in A, given an individual x who has observable features f(x) and protected feature p(x). Suppose the company has to choose among this set of actions a \in A. What is a workable definition of fairness or equality in such a decision making effort with respect to the protected properties p?

Let god bestow us, a neutral third party, with a utility functor u whose evaluation on the individual, u(x), results in a function: u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to individual x of the company taking action b.
Let g be the decision process of the company; g(f(x)) is the decision the company makes, some a \in A, for the individual x. Then the right thing to do is
g(f(x)) = argmax_{a\in A}(u(x)(a)) = g(f(x), p(x))
Simple, we do as god says: act as if we have the knowledge of an oracle–even when we know some discriminable information that we then choose to ignore.
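As a minimal sketch, here is what the rule looks like in Python, assuming god’s functor u is handed to us as an ordinary callable so that utility(x) returns a function over actions; the names decide, utility, and the toy action set are hypothetical, purely for illustration:

    # Sketch of the oracle decision rule: g(f(x)) = argmax_{a in A} u(x)(a).
    # "utility" stands in for god's functor u: utility(x)(a) is the utility
    # to individual x of the company taking action a.

    def decide(x, actions, utility):
        u_x = utility(x)              # u(x): a function of the action
        return max(actions, key=u_x)  # argmax over the action set A
        # p(x) never enters the computation, so g(f(x)) = g(f(x), p(x)).

    # Toy example with three hypothetical actions for one individual.
    A = ["send_ad", "send_coupon", "do_nothing"]
    utility = lambda x: {"send_ad": -0.1, "send_coupon": 0.7, "do_nothing": 0.0}.get
    print(decide("x", A, utility))    # -> "send_coupon"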

This is not as easy as it looks in a formula. Think of a person with a clown nose and one without: your behavior will likely be very different between those two persons, even if you decide that a clown nose has absolutely nothing to do with the task at hand.

Additionally, the nature of our imperfection dictates that the systems we build are imperfect. What if we cannot achieve God’s will? What if we fail to do the virtuous thing even when we know what the right thing to do is?

What could a neutral third party reasonably demand of a faulty company? One suggested approach is to establish probabilistic equality among protected classes. Suppose there are some number of classes m\in M, corresponding to values of p(x), whose utility we must protect. (So, for example, M could be the Cartesian product of age, sex, race, birthplace, religion, and political party.)

E(u(x)(g(f(x))) | p(x) = m) = c\ \forall m\in M

That is, the expected customer utility for each class is identically some value c. This is a simplification, as there are other notions of equivalence between stochastic variables.
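As a rough sketch of how one might check this constraint empirically, assume we have a log pairing each individual’s protected class m = p(x) with the realized utility u(x)(g(f(x))) of the action actually taken; the function names and the tolerance below are hypothetical:

    # Estimate E(u(x)(g(f(x))) | m) per protected class and check that the
    # class means are (approximately) the same constant c.
    from collections import defaultdict

    def expected_utility_by_class(records):
        # records: iterable of (protected_class m, realized utility) pairs
        totals, counts = defaultdict(float), defaultdict(int)
        for m, realized in records:
            totals[m] += realized
            counts[m] += 1
        return {m: totals[m] / counts[m] for m in totals}

    def satisfies_equality(records, tolerance=0.05):
        means = list(expected_utility_by_class(records).values())
        return max(means) - min(means) <= tolerance

    # Toy log of (class, realized utility) pairs.
    log = [("class_A", 0.9), ("class_A", 0.7), ("class_B", 0.8), ("class_B", 0.85)]
    print(expected_utility_by_class(log))  # -> approximately {'class_A': 0.8, 'class_B': 0.825}
    print(satisfies_equality(log))         # -> True, within the 0.05 tolerance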

Note this framework has some slight benefit over the traditional machine learning framework of evaluating equality on the confusion matrix of the classifier g’s performance. Here are a few motivating examples that I have suffered myself:

Situation 1: I noticed that my coworker was getting Tesla car advertisements while I did not receive one. Even though my utility in not receiving the advertisement was a negligible loss–because I cannot afford a Tesla–I still felt angry. I might even be tempted to find a protected attribute of mine and claim that Tesla discriminated against me in its advertising campaign: What! They think a middle-aged Asian man can’t have a midlife crisis or can’t afford to splurge on a Tesla? This is a true negative with respect to predicted response/conversion on a Tesla car ad, but one offensive enough to cause problems. In retrospect, receiving the ad would have had positive utility for me: when I reached out to Tesla I learned more about how the car would work for me. But the decision seems to produce a negative sentiment in its subject. (The company has, since my drafting of this blog entry, sent me repeated invitations to test drive the S, perhaps due to a recent but small increase in my disposable cash, an offer I may take up at a suitable time. This is just an example.)

Situation 2: I am offended when I do receive an advertisement for STD testing, and in particular for the hepatitis family of diseases. For god’s sake, there’s an Asian Liver Center at Stanford whose very purpose is to check people like me for hepatitis and other liver problems common in Asian livers. In this case, god bless me, I am free of hepatitis and liver problems of any kind, so this is a false positive in advertising. I am offended. And in reality one may argue that the benefit of this advertisement to me, in increasing my chances of early detection, is positive–E(u(huan)(g(f(huan)))) > 0–yet I still feel offended. This case is a false positive for advertisement conversion. It had positive utility to show it to me. And yet it produced negative sentiment.

Situation 3: I just received a piece of snail mail from a Redwood City mortuary advertising their services to Mr. and Mrs. Chang. I am terrified. I feel this is a death threat of some form, putting the idea of me dying in Redwood City into my head. The letter has a hand-addressed envelope. This is a false positive for advertising relevance (I did not die, not yet anyway, and I am not planning on dying), it has zero utility for me, and I am definitely feeling very negative sentiment.

These are but several of many possible situations where the company could do the right thing in front of God, and in front of the board, and yet still err by producing very negative sentiment. At the risk of running out of numbers, I have not tried to enumerate every type starting at 1.

To summarize, there are several factors that ultimately enter into a company’s decision making process; nonexclusively, they are:

  • the expected utility E(u(x)(g(f(x)))) for x, and whether it is defensible in front of an oracle, God, or a court of law;
  • how any action will make the subject individual feel–the sentiment it produces–irrespective of objective utility;
  • whether the utility function is universally accepted;
  • and finally the company’s bottom line.

With these considerations in mind, we can now continue with our exploration of fairness.

In a world… where there are nebulous words!

It occurs to me to write these down on a special occasion… Initially I was considering an optimization problem in a situation where localized optimization is essentially playing a zero-sum game against an opponent who is much more powerful.

In this situation, even though we have big data, and even though we have deep learning, it remains that there is bigger data and yet more sophistication elsewhere. One of the challenges of the nascent big-data- and deep-learning-enabled AI industry is problem selection.

There are people who are trying to cure cancer and save lives. And there are people trying to trade stocks, win political campaigns, or engage in armed conflict (not that these are the same things). The continued admonishments against AI come from those who fear the latter. I would imagine there are very few who would oppose the former.

That! That is the underlying restriction on the technology: what it can do for the former cause is practically restricted by what it does for the latter. The same applies to all technologies, of course. We’ve had the internet and social media; a typical Californian would probably take a few minutes to recognize there being anything exceedingly unusual about the potential downside of yet another meme…

Also, consider aliens, of the interstellar variety; one should always be mindful of our real competition. There is likely a far greater intelligence out there. Let us not doubt, and let us certainly not delay, the development of our own Big Intelligence as a matter of due course in our kind’s progress.

Equality of Benefit

I’ve been involved in a lot of discussion around bias, equality, and fairness regarding algorithmic decision making. Without going into an excessive amount of background and detail, the gist of my belief at the current moment is that equality of utility is the safest thing for companies to aspire to.

What is equality of utility? Let’s degenerate to binary decision making: given an individual x, who has observable features f(x) and protected feature p(x), suppose the company has to choose between two actions {a, b}. What is a workable definition of fairness or equality in such a decision making effort with respect to the protected properties p?

Let god bestow us, a neutral third party, with a utility functor u whose evaluation on the individual, u(x), results in a function: u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to individual x of the company taking action b.

Let g be the decision process of the company; g(•) is the decision the company makes, either a or b, for the situation. Then the right thing to do is

g(f(x)) = argmax_{i\in{a,b}}(u(x)(i)) = g(f(x), p(x))

Simple, we do as god says is best for the customer: act as if we have the knowledge of an oracle–even when we know of some reason for discrimination.
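In the binary case the argmax collapses to a single comparison; a trivial sketch, with hypothetical names:

    # With only two actions a and b, g(f(x)) = argmax_{i in {a,b}} u(x)(i)
    # is just a comparison of the two utilities; p(x) plays no part in it.
    def decide_binary(u_x_a, u_x_b):
        return "a" if u_x_a >= u_x_b else "b"

    print(decide_binary(0.2, 0.5))  # -> "b": action b is better for this x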

The halting thought problem

One wonders if there are thoughts that people cannot have, because those who did have them did not live on to pass the thought to others.

Just as a crashed computer cannot infect another computer with the virus that crashed it, could there be some group of thoughts, inducible in normal humans, that cause problems so catastrophic that those who accidentally reached them in the course of evolution never lived to tell?

Yoshua Bengio’s talk at the 2016 Bay Area deep learning school inspired this nightmare last night… What if, just as we don’t need to drive a car and crash a thousand times to learn how not to crash a car, we have learned not to think of many things that would crash our own programming? Another somewhat subliminal message behind this nightmare was his computer crashing three or four times during the presentation.

What if the reason we don’t know how the human brain learns is that this knowledge would cause instability in the human individual or in society–would crash them–so severe that we have built-in mechanisms that prevent us from understanding it? Just as it is easy for us, through some unknown mechanism, to understand physics and logic and emotions, etc., could that same mechanism inhibit us from having some class of knowledge or skill?

The obvious example would be if I derived a deterministic way to cause a person to stop peeing–or rather, if I learned to stop myself from peeing. But there could be other, less obvious examples: things that cause psychological changes, such as inducing forgetfulness–say, if I figured out how to forget and then immediately forgot how to do so. This kind of “bug” or boundary condition seems very possible.

Aside from admitting that it is structurally possible, there is the added effect of evolution. If true, the theory of evolution tells us that these are precluded from happening precisely because those of us who survive have evolved away these dangerous edge cases, or have otherwise developed very strong and redundant inhibitory systems to prevent them from occurring.

Lastly, is knowledge of how we learn such dangerous knowledge? Do we constantly have mini-crashes: just as we are about to learn the secret of learning itself, something peripheral, like presentation software, crashes in us and prevents that thought from occurring?

Thankfully we know that we have not evolved away the ability to think about such a possibility; perhaps there is still hope?

A possible symbiosis

So… To take myself out of the nitty-gritty for a moment, it still seems possible for there to be machine-human symbiosis.
Some number of decades ago, while in high school, I wondered about this matter. At that time we had 80486 computers and RAM measured in megabytes. My conclusion about computers replacing many human jobs or functions, or becoming more valuable than humans, was that it is inevitable that we strive to live with them. Much as white people have learned to live with black people, and much as we care for endangered species more than we care about some people’s economic welfare, we can learn to live with computers as equals and sacrifice some human pursuits to that end.

Alas that was more than two decades ago.

Last night, I heard Kai-Fu Lee address a crowd and discuss AI businesses in China. His answer was somewhat sobering. To paraphrase, I believe what he said was that in the age of AI, where computer product managers are gradually replacing with AI the human functions that require less than 5 seconds of human thought, the future for humans will be in things that take more thought; under time and audience pressure he gave art, music, the appreciation of art, and things that require a personal touch as examples of new jobs.

Personally, I can empathize. As one who sees the foremost advances of AI and robotics, and a person whose job is partially to make money by disruptively using this new technology to replace old systems, I can definitely see his human_replaced_counter ticking up, and projections for it to grow very fast.

(Much else was discussed at his talk, of course; this was just a short question at the end of the talk.)

It might just be me, but I could almost see tears as he answered this question. There isn’t a comfortable answer when you have to admit that someone, or something, else will definitely beat you at something. When that something is your livelihood, and there is even a robo-VC now, it is harder to be objective. Now of course we do not want to be paralyzed by paranoia. But we should think hard!

What will we do when machines take over our jobs? What do we do when machines take over our lives and live for us?

And where is the problem? Why don’t all the Uber drivers replaced by bots go on welfare and go to free community college? They can get their degrees in the comfort of their homes on Ng’s Coursera or Thrun’s Udacity. They can learn to do something else–perhaps learn to write programs? Learn to Code, as it is now colloquially called. They can take my job after that. That way I can go get my MDs and PhDs and go heal people, or do philosophy?
(Footnote: what we don’t want to see is a flow of talented and educated people to jobs they are overqualified for: coders driving Ubers, MDs and PhDs writing code. The prevalence of this phenomenon stirs a deep, dark anxiety that I cannot name. The decisions to do so are individually very rational. However, it seems to me that society’s investment in the educational infrastructure that created these MDs and PhDs has not achieved sufficient ROI, for society’s sake. I.e., if they train in physics, should they not attend to the physics matters the training was designed for, rather than to completely unrelated subjects? If not, why do we have so much investment in physics higher education? s/physics/another subject/; again, it is worrisome if society is this way, but there are no worries for the individual or the institutions involved in this process, each of which is arguably producing maximally and with the best of intentions.)

Would that be a blast? It’ll be like the 24th century of Star Trek: we will have no wars, no worries about money or scarce resources. With the advancement of technology, society advances. We will no longer struggle against our fellow man but against a greater obstacle. We will only strive to better ourselves and humanity.

Such a grand future awaits us!
