Equality of Utility II

Some time ago we investigated the equality of benefits. Roughly speaking, let us degenerate real-world actions into a discretely selectable set of actions a\in A, given an individual x who has observable features f(x) and protected feature p(x). Suppose the company has to choose among a set of actions a \in A. What is a workable definition of fairness or equality in such a decision-making effort with respect to the protected properties p?

Let God bestow upon us, a neutral third party, a utility functor u whose evaluation on an individual yields a function u(x): u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to individual x of the company taking action b.
Let f extract the observable features and let g be the decision process of the company; g(f(x)) is the decision the company makes, some a \in A, for the individual x. Then the right thing to do is
g(f(x)) = argmax_{a\in A}(u(x)(a)) = g(f(x), p(x))
Simple: we do as God says, acting as if we have the knowledge of an oracle, even when we know some discriminable information that we then choose to ignore.
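As a minimal sketch of this oracle rule, consider the following. Everything here is hypothetical: the utility values are invented, and `decide` simply implements g(f(x)) = argmax over actions while deliberately ignoring the protected feature.

```python
def utility(x, action):
    # Stand-in for God's utility functor u: this is u(x)(a).
    # The numbers are invented for illustration only.
    return x["utilities"][action]

def decide(features, protected, actions, utility_fn):
    # The protected attribute is accepted but deliberately ignored,
    # so g(f(x)) == g(f(x), p(x)).
    _ = protected
    return max(actions, key=lambda a: utility_fn(features, a))

x = {"utilities": {"show_ad": 0.3, "no_ad": 0.1}}
best = decide(x, protected="age>40", actions=["show_ad", "no_ad"],
              utility_fn=utility)
```

Here `best` is `"show_ad"`, since it has the higher hypothetical utility; passing a different `protected` value changes nothing, which is the point of the identity above.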

This is not as easy as it looks in a formula. Think of a person with a clown nose and one without: your behavior will likely differ greatly between those two persons, even if you decide that a clown nose has absolutely nothing to do with the task at hand.

Additionally, the nature of our imperfection dictates that the systems we build are imperfect. What if we cannot achieve God's will? What if we fail to do the virtuous thing even when we know what the right thing to do is?

What could a neutral third party reasonably demand of a faulty company? One suggested approach is to establish probabilistic equality among protected classes. Suppose there are some number of classes m\in M, corresponding to values of p(x), whose utility we must protect. (So, for example, M could be the Cartesian product of age, sex, race, birthplace, religion, and political party.)

E(u(x)(g(f(x))) | m) = c\ \forall m\in M

That is, the expected customer utility for each class is identically some value c. This is a simplification, as there are other notions of equivalence for random variables.
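A hedged sketch of auditing this constraint empirically, assuming we can observe samples of realized utility u(x)(g(f(x))) labeled by protected class m (the sample values and tolerance below are invented):

```python
from collections import defaultdict

def groupwise_expected_utility(samples):
    # samples: list of (protected_class m, realized utility u(x)(g(f(x))))
    # Returns an empirical estimate of E(u(x)(g(f(x))) | m) per class.
    totals, counts = defaultdict(float), defaultdict(int)
    for m, u in samples:
        totals[m] += u
        counts[m] += 1
    return {m: totals[m] / counts[m] for m in totals}

def satisfies_equality(samples, tol=0.05):
    # The constraint asks all class means to equal one constant c;
    # empirically we only check they agree within a tolerance.
    means = groupwise_expected_utility(samples)
    return max(means.values()) - min(means.values()) <= tol

samples = [("A", 0.5), ("A", 0.7), ("B", 0.6), ("B", 0.6)]
```

With these invented samples, both classes have mean utility 0.6, so the equality holds within tolerance.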

Note this framework has some slight benefit over the traditional machine learning framework of evaluating equality on the confusion matrix of classifier performance g. Here are the most inspiring examples that I suffer from:

Situation 1: I noticed that my coworker was getting Tesla car advertisements while I did not receive one. Even though my loss of utility in not receiving the advertisement was negligible, because I cannot afford a Tesla, I still felt angry. I might even be tempted to find a protected attribute of mine and claim that Tesla discriminated against me in its advertisement campaign: What! They think a middle-aged Asian man can't have a midlife crisis, or can't afford to splurge on a Tesla? This is a true negative for the prediction of response/conversion on a Tesla car ad, but it is offensive enough to cause problems. In retrospect, receiving the ad would have had positive utility for me: when I reached out to Tesla I learned more about how the car would work for me. But the decision seemed to produce a negative sentiment in its subject. (The company has, since my drafting of this blog entry, sent me repeated invitations to test drive the S, perhaps due to a recent but small increase in my disposable cash, which I may consider calling upon by taking the offer to test drive at a suitable time. This is just an example.)

Situation 2: I am offended when I do receive an advertisement for STD testing, and in particular for the hepatitis family of diseases. For God's sake, there is an Asian Liver Center at Stanford whose purpose is to check me for hepatitis and other liver problems present in Asian livers. In this case, God bless me, I am free of hepatitis and liver problems of any kind, so this is a false positive in advertising. I am offended. In reality one may argue that the benefit of this advertisement to me, in increasing my chances of early detection, is positive, E(u(huan)(g(f(huan)))) > 0, and yet I still feel offended. This case is a false positive for advertisement conversion; it had positive utility to show it to me, and yet it produced negative sentiment.

Situation 3: I just received a piece of snail mail from a Redwood City mortuary advertising their services to Mr. and Mrs. Chang. I am terrified. I feel this is a death threat of some form, putting the idea of me dying in Redwood City into my head. The letter has a hand-addressed envelope. This is a false positive for advertising relevance (I did not die, not yet anyway, and I am not planning on dying). It has zero utility for me, and I am definitely feeling very negative sentiment.

These are but several of many possible situations where the company could do the right thing in front of God, and in front of the board, but still be erring and thereby producing very negative sentiment. At the risk of running out of numbers trying to enumerate them all, I have not attempted to number every type starting at 1.
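The three situations suggest that confusion-matrix outcome, objective utility, and subject sentiment are independent axes. A small sketch tabulating the anecdotes above (the labels are taken from the situations as described; the dictionary keys are invented names):

```python
# Each anecdote classified on independent axes: classifier outcome
# (relative to ad relevance/conversion), utility, and produced sentiment.
situations = {
    "tesla_ad_withheld": {"outcome": "true negative",
                          "utility": "positive (in retrospect)",
                          "sentiment": "negative"},
    "std_testing_ad":    {"outcome": "false positive",
                          "utility": "positive",
                          "sentiment": "negative"},
    "mortuary_mailer":   {"outcome": "false positive",
                          "utility": "zero",
                          "sentiment": "very negative"},
}

# A confusion-matrix audit alone would miss the problem: every row can be
# defensible in expected utility yet still produce negative sentiment.
all_negative = all("negative" in s["sentiment"] for s in situations.values())
```

Here `all_negative` is `True` even though the outcomes and utilities differ in every case, which is exactly the mismatch the anecdotes illustrate.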

To summarize, several factors ultimately enter into a company's decision-making process; non-exclusively, they are:

  • the expected utility E(u(x)(g(f(x)))) for x, and whether it is defensible in front of an oracle, God, or a court of law;
  • how any action will make the subject individual feel, i.e. the sentiment it produces, irrespective of objective utility;
  • whether the utility function is universally accepted;
  • and finally, the company's bottom line.
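One hedged way to combine these factors is a weighted score. Everything here is invented for illustration: the weights, the input values, and the idea that the four factors are commensurable on one numeric scale at all.

```python
def decision_score(expected_utility, predicted_sentiment, utility_consensus,
                   profit, weights=(0.4, 0.3, 0.1, 0.2)):
    # Weighted sum over the four factors listed above. The weights are
    # hypothetical; a real company would have to set them by policy.
    w_u, w_s, w_c, w_p = weights
    return (w_u * expected_utility + w_s * predicted_sentiment
            + w_c * utility_consensus + w_p * profit)

# An action with positive expected utility can still score poorly if the
# sentiment it produces is negative enough (cf. the mortuary mailer).
score = decision_score(expected_utility=0.2, predicted_sentiment=-0.9,
                       utility_consensus=0.5, profit=0.1)
```

With these invented inputs the score comes out negative, even though the expected utility and profit are both positive: sentiment can dominate.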

With these considerations in mind, we can now continue with our exploration of fairness.

In a world… where there are nebulous words!

It occurs to me to write these down on a special occasion… Initially I was considering an optimization problem in a situation where localized optimization essentially plays a zero-sum game against an opponent who is much more powerful.

In this situation, even though we have big data, and even though we have deep learning, it remains that there is bigger data and yet more sophistication elsewhere. One of the challenges of the nascent big-data- and deep-learning-enabled AI industry is that of problem selection.

There are people who are trying to cure cancer and save lives. And there are people trying to trade stocks, win political campaigns, or engage in armed conflict (not that these are the same things). The continued admonishments against AI come from the people who fear the latter; I would imagine there may be very few who would oppose the former.

That! That is the underlying restriction on the technology: what it can do for the former cause is practically restricted by what it does for the latter. The same applies to all technologies, of course. We've had the internet and social media; a typical Californian would probably take a few minutes to recognize there being anything exceedingly unusual about the potential downside of yet another meme…

Also, consider aliens, of the interstellar variety: one should always be mindful of our real competition. There is likely a far greater intelligence out there. Let us not doubt, and let us certainly not delay, the development of our own Big Intelligence as a matter due in the course of our kind's progress.

Equality of Benefit

I've been involved in a lot of discussion around bias, equality, and fairness regarding algorithmic decision making. Without going into an excessive amount of background and detail, the gist of my belief at the current moment is that equality of utility is the safest thing for companies to aspire to.

What is equality of utility? Let's degenerate to binary decision making: given an individual x, who has observable features f(x) and protected feature p(x), suppose the company has to choose between two actions {a, b}. What is a workable definition of fairness or equality in such a decision-making effort with respect to the protected properties p?

Let God bestow upon us, a neutral third party, a utility functor u whose evaluation on an individual yields a function u(x): u(x)(a) is the utility to individual x of the company taking action a, and u(x)(b) is the utility to individual x of the company taking action b.

Let g be the decision process of the company; g(•) is the decision the company makes, either a or b, for the situation. Then the right thing to do is

g(f(x)) = argmax_{i\in\{a,b\}}(u(x)(i)) = g(f(x), p(x))

Simple: we do as God says is best for the customer, acting as if we have the knowledge of an oracle, even when we know of some reason for discrimination.
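For this binary case the rule reduces to a single comparison. A minimal sketch, with an invented utility functor (the values and the `u = lambda ...` construction are mine, purely for illustration):

```python
def decide_binary(x, p, u):
    # g(f(x)) = argmax_{i in {a, b}} u(x)(i); the protected feature p
    # is received but ignored, so g(f(x)) == g(f(x), p(x)).
    _ = p
    return "a" if u(x)("a") >= u(x)("b") else "b"

# Hypothetical curried utility functor u: u(x) is a function over actions.
u = lambda x: lambda action: {"a": 0.1, "b": 0.4}[action]
choice = decide_binary(x={"features": None}, p="protected", u=u)
```

Here `choice` is `"b"`, the action with the higher hypothetical utility for this individual, regardless of what was passed for p.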