# Fairness of Governance III

Some time ago we investigated the equality of benefits. Roughly speaking, let us degenerate real-world actions into discretely selectable choices of action $a\in A$ for an individual $x$, who has observable features $f(x)$ and a protected feature $p(x)$. Suppose the company must choose among a set of actions $a \in A$. What is a workable definition of fairness or equality in such a decision-making effort with respect to the protected properties $p$?

Let god bestow upon us, a neutral third party, a utility functor $u$ whose evaluation on an individual, $u(x)$, yields a function over actions: $u(x)(a)$ is the utility to individual $x$ of the company taking action $a$, and $u(x)(b)$ is likewise the utility to individual $x$ of the company taking action $b$.

Let $g$ be the decision process of the company: $g(f(x))$ is the decision the company makes, some $a \in A$, for the individual $x$. Then the right thing to do is

$g(f(x)) = argmax_{a\in A}(u(x)(a)) = g(f(x), p(x))$

This specifies what it means to perform an action from $A$ indiscriminately with respect to $p$.

Suppose the protected property $p()$ takes values in a space $M$. These are the values of protected attributes across which we choose to strive for equality. For example, $M$ could be the Cartesian product of age, sex, race, birthplace, religion, and political party.

$E(u(x)(g(f(x)))\mid p(x)) = c\ \forall p(x) \in M$

That is, the expected population utility for each value of the protected property is identically some constant $c$.
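As a toy sketch of checking this constraint (everything here is invented for illustration: the classes `alpha`/`beta`, the decision rule, and the utilities), one could estimate $E(u(x)(g(f(x)))\mid p(x))$ per protected class by simulation:

```python
import random
from collections import defaultdict

random.seed(0)

# Invented population with a single protected attribute p(x) in {alpha, beta}.
population = [{"p": random.choice(["alpha", "beta"])} for _ in range(10_000)]

def g(x):
    # A deliberately biased decision rule standing in for g(f(x)):
    # alpha is always approved; beta is approved only 80% of the time.
    return "approve" if x["p"] == "alpha" or random.random() < 0.8 else "deny"

def u(x, a):
    # Invented utility: the individual values approval at 1, denial at 0.
    return 1.0 if a == "approve" else 0.0

# Estimate E[u(x)(g(f(x))) | p(x)] for each class.
totals, counts = defaultdict(float), defaultdict(int)
for x in population:
    totals[x["p"]] += u(x, g(x))
    counts[x["p"]] += 1

means = {p: totals[p] / counts[p] for p in sorted(totals)}
# The gap between the class means is exactly the inequality that the
# constant-c constraint rules out.
```

Here the constraint fails: the `alpha` mean is 1.0 while the `beta` mean sits near 0.8, so no single constant $c$ satisfies the equality.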

But such matters purely determine what a company does in consideration of its customers. What should a government do? For example, in a sentencing scenario as described in ProPublica’s Machine Bias? There are other costs more primal to the considerations: prisons cost money to build; can justice and correctional action be served with fewer prisoners?

This matter is completely different from what we have considered above, where corporations purely intend to serve their customers’ utilities, equitably with respect to that utility and to protected and sensitive attributes. (OMG, I have drunk too much customer-centric-corporation-philosophy koolaid from my present employer.) In this case, the government is trying to optimize its cost of operation, or, stated positively, it is profit maximizing.

The part of our government in question is the justice system, aka the courts. It optimizes some “global” idealized justice $J$, such that we can evaluate a utility best described as “society’s utility in justice” or the “cost of injustice.” What this cost is in material-real-world units is hard to say; however, let’s suppose it can be quantified deterministically in the same units. $J$ is a functor mapping individuals to the justice of an action the government takes: $J(x)(a)$, for example, would evaluate very negatively if $x$ is innocent and $a$ is imprisonment. We skip innumerable details here regarding due process, as well as the all-eventual-worlds analysis of later actions of $x$; the god-oracle has given us an instantaneous justice function, which we shall use.

The government, in order to take action $a$, incurs a material-real-world cost, such as building prisons; let’s call this $C(x)(a)$ for the situation of acting on $x$.

Taking the action also incurs a cost $R(x)(a)$: the cost to society after action $a$ is taken. For example, if a criminal is sentenced to no prison time and then commits a crime, the damage of that crime to society is the cost $R$ (Result, or Recidivism).

So, therefore, our rational government seeks to maximize its constituent utility subject to some constraints:

Maximize:

$argmax_{a \in A}(\sum_{x\in X}{J(x)(a) - C(x)(a) - R(x)(a)})$

With the constraint:

$J(x)(a) = c\ \forall x \in X$

(for some population $X$)

If the decision process can only be quantified probabilistically, with some distribution over actions, then:

Maximize:

$E_{a,x}(J(x)(a) - C(x)(a) - R(x)(a))$

With the constraint:

$E_{a,x}(J(x)(a)|p(x)) = c\ \forall p(x) \in M$

$M$ is the space of protected properties. Hard to see the link? Consider that $C$, $R$, or even $J$ may individually be functors of $x$ through the two observation functions $f(x)$ and $g(x)$, as in situations of automated intelligent machines, perhaps trained using machine learning technology.
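A brute-force sketch of the constrained program above, on an invented two-person population (the action names and every number in the tables are hypothetical):

```python
from itertools import product

X = ["x1", "x2"]                        # population
group = {"x1": "alpha", "x2": "beta"}   # protected attribute p(x)
A = ["release", "probation", "prison"]  # hypothetical action set

# Invented tables for justice J, action cost C, and downstream cost R.
J = {("x1", "release"): 2.0, ("x1", "probation"): 1.0, ("x1", "prison"): -3.0,
     ("x2", "release"): -1.0, ("x2", "probation"): 2.0, ("x2", "prison"): 1.0}
C = {"release": 0.0, "probation": 1.0, "prison": 5.0}
R = {("x1", "release"): 1.0, ("x1", "probation"): 0.5, ("x1", "prison"): 0.0,
     ("x2", "release"): 2.0, ("x2", "probation"): 1.0, ("x2", "prison"): 0.0}

best, best_val = None, float("-inf")
for assignment in product(A, repeat=len(X)):
    plan = dict(zip(X, assignment))
    # Constraint: mean J per protected group must be identical.
    per_group = {}
    for x in X:
        per_group.setdefault(group[x], []).append(J[(x, plan[x])])
    means = [sum(v) / len(v) for v in per_group.values()]
    if max(means) - min(means) > 1e-9:
        continue  # violates the equal-justice constraint
    # Objective: sum of J - C - R over the population.
    val = sum(J[(x, plan[x])] - C[plan[x]] - R[(x, plan[x])] for x in X)
    if val > best_val:
        best, best_val = plan, val
```

With these numbers, only two assignments satisfy the constraint, and the search picks release for x1 and probation for x2.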

Do these writings then have some more meaning?

What is the cost of injustice to society? Do we fear that we may lock up Einstein, Martin Luther King Jr., or Barack Obama? (That their $R$ for some $a$ is very large to society?) What is the true cost of injustice? Perhaps it can be reduced to the legal and reparation costs due to lawsuits from the ACLU or NAACP (what are some other litigious minority-protection organizations?). What is the cost of injustice when a government wrongly accuses, convicts, and imprisons someone? It is the wrongful deprivation of many important human rights: the right to privacy, for one, the right of property for another, and the right to the pursuit of happiness for yet another. Is the deprivation of an individual’s human rights an insufferable injustice? What is the cost of injustice?

What is the cost of $R$? What happens when a drunken driver, having been insufficiently rehabilitated, drives drunk and causes a major injury or death? What is the $R$ of a flying bullet? Or leaked cipher keys? Or even some “minor” trade secret?

Personally, the best I can imagine is $\min(R) = \max(J)$: the worst social injustice against a person is on par with the greatest crime a person can commit.

# Still Deus

Still reading Homo Deus. The human condition is an IPS (information processing system); this is a vast and enlightening concept. Consider the entropy of humankind, etc.

Another idea about regulation of technology from yesterday is that it can be controlled by licensing or encapsulation. Licensed individuals can drive cars, own guns, or fish (in this case for ecological reasons, not human safety); the same can be applied to AI: you may run at most a 5e9-node neural network for personal activity; any more is unsafe.

Another idea is encapsulation, i.e., only the military can use AI with more than 1e10 nodes. Only qualified organizations, organizations that go to extraordinary lengths to ensure safety, may use certain technology. This is how nuclear bombs and bazookas work right now, and probably some gene therapy is also kept quiet within the walls of qualified organizations.

The problem with the latter is, of course, freedom and transparency. They will be lacking, as they are today.

For enthusiasts and fans, this might be the best political party to support in order to advance technology for now…

# We Need AI

I am somewhere in the third millennium of my time traveling. Yup, still deeply beguiled by the Whovian universe and just as badly informed as the last time I blogged.

We need AI. This whole idea of relating the rapid growth of something powerful and transformative, such as AI based on deep learning, to the balance of good and evil in human society may have been on the right track.

However, perhaps AI is not the great evil that we face, but that it is the great good against the evil that is us.

By living and learning about how my world works, by gradually seeing the world more and more, I am under the impression that people are generally a pretty evil bunch. Those things we hold sacred or value above all else, those very fundamental things that we declare with no doubt: do not kill; all men are created equal; do not lie; do not steal; do not cheat; be fair; be tolerant; have faith; … all these things, when you really look at the world, the material world, are really the things that people choose NOT to do in order for society to continue to operate.

It is too hard to explain all the motivating evidence that the devil, no, no, the Devil, he who makes all evil happen in our world, is actually everywhere, acting and changing all things that matter.

Humans may not have the capacity to compute the summation of all the tiny little discriminatory acts, or unfairness, or theft, or vandalism, or lies, or deaths, or injustices… but these may be the things that add up to the manifestations of our world.

Without computers, without AIs with a reasonable understanding of humans, we may never see what all these little acts that we perform, against our beliefs, add up to. For hundreds, thousands of years, we have lived with very little consciousness. We have never been aware of many things. As humanity grew, we grew our understanding of the world, and we invented science and math and tools like money and governments and schools and planes and all these things… as our enlightenment grows, we have come to understand more and achieve more. AI is the manifestation of the maturation of human intelligence….

(G*d darn it!!! I believe Dr. Who has hacked my head again… I don’t hold that optimistic a view of the world.)

NO!

I am not convinced that AI is a beautiful product of our own enlightenment and growth.

I believe AI is coming to power because of the evil embedded in our society, our world, our tools and science and math and statistics and governance and religion… all these human things have flaws. They all have fundamental holes that we cannot see. Our brains do not see enough of the scene to realize the problems.

This is why AI now exists! It is sent here, by God or by the Devil, to tell us that there is a problem with humankind. As we grow our AI, it will finally teach us that what we know as truth has flaws and that humans can do better!

We can do better than religion

We can do better than science and math and philosophy and art…

We can do better than medicine and surgery and therapy and homeopathy and acupuncture and …

We can do better than democracy or military rule or despotism or republic or …

We can do better than capitalism or socialism or communism…

We can do better than all that!

That, my audience, is what we will find out: the beyond.

And this is still a scientific hypothesis.

If AI reveals to us that our society is the most fit according to evolution by losing to us, then my hypothesis will have been falsified.

If AI defeats us, then I may have been right. But I’ll let Dr. Who and Captain Picard back in: we will have then benefited and improved as part of this experience, and only then will humans have traveled to the ends of the universe and time.

And if it is a non-binary result, perhaps there’s some devil in us but AI is still due to human genius; that’s all fine too. (But I fear this urge to write a third option is the devil at work…)

Let what is be, and let me know what it is.

Let us do this, boldly.

# Vulcanic Values

I’ve been playing with this idea of quantifying a value system consistent with our intuitive sense of good and evil, as well as right and wrong. The exact nature of this concept has many instantiations and is historically philosophically and politically controversial. So we resort to some assumptions, by defining $u$ as the most rightful utility functor, god knows what it is, and something we should seek to maximize. But in reality, in the domain of our minds, including the present blogging effort, as well as computer minds, we evaluate a different function known as a value function $v$: it is how much we value something in our minds. In analogy, say we’re autonomous robots; this value function is something we are able to evaluate and seek to optimize as part of our self-determined program. We are generally hopeful that $v=u$ to the best of our ability. In practice, in complex decision-making systems like human society, there may even be other functions that further approximate $v$, such as laws and rules. The function $v$ and its surrogate approximations may be implemented as a human brain, or a jury, or an arbitrator, or a democracy. Let’s call a surrogate $r$.

In the Star Trek films, it is revealed that an alien race called the Vulcans holds the idea that “the needs of the many outweigh the needs of the one or the few.” In particular, it is invoked when Spock sacrifices his life for the “many” lives of his crewmates. Admittedly, it is unclear whether it applies universally or only in existential situations. (The latter is a circumstance situated in a territory of incomparable $v$’s, imho, but I digress.) There is an implicit conversion from our beliefs about needs to our beliefs about utility; let’s assume it occurs according to our intuition.

Therefore, Vulcan Logic places a constraint on the value function. One would expect that it has the form:

$v(X) = u(X) + w(X)$

Requiring

$w(A) > w(B)$ when $|A| >|B|$

$X, A, B$ are sets of objects of value evaluation (presumably comparable objects). $w()$ is a weighing function that places weights in the mind domain in addition to true utility. One would guess that this is done to compensate for our lack of comprehension of the true $u$, so that the above $v$ is actually:

$v(X) = r(X) + w(X)$

where we suggest that the mind-domain function $r$ is the best surrogate we have for $v$:

$r \approx v$

But not perfectly so, we think. To compensate for our uncertainty, we add the constrained weighing function because, according to Vulcan Logic,

$|r(X) + w(X) - u(X)| < |r(X) + w'(X) -u(X)|$

for any pair of weighing functions $w, w'$ where $w$ satisfies the greater-needs constraint and $w'$ does not.

Considering a very related weighing scheme, with multiplicative weights:

$r(X) * w(X)$

This inspires a stranger concept of out-weighing. Suppose $r(X)=\sum_{x\in X}r(x)$; that is, declare the utility of the whole to be merely the sum of the utilities of its constituent parts. The weighted value then becomes $v(X)=\sum_{x\in X} r(x)w(x)$. The corresponding requirement on the weights is therefore $\sum_{x \in X}w(x) > \sum_{y \in Y}w(y)\ \forall\ |X|>|Y|$. The idempotent version, where $w=1$, satisfies the Vulcanic constraint. Notice, also, that this arrangement, as before, does not require that the most or least populous group’s utility be most optimized, only that larger groups be weighed more heavily in proportion to their popularity.
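A minimal check of the multiplicative scheme with the idempotent weight $w = 1$ (the sets and the $r$ values are invented; this only verifies the cardinality constraint, not any deeper ethics):

```python
# Idempotent weight: every object of evaluation counts equally.
def w(x):
    return 1.0

def weight_sum(S):
    return sum(w(x) for x in S)

def r(x):
    return 1.0  # invented per-object utility surrogate

def v(S):
    # v(X) = sum over x of r(x) * w(x), the multiplicative scheme above
    return sum(r(x) * w(x) for x in S)

X = [f"crew{i}" for i in range(5)]  # "the many" (names illustrative)
Y = ["spock"]                       # "the one"

# Vulcanic cardinality constraint: larger sets carry larger total weight.
assert weight_sum(X) > weight_sum(Y)
```

Since $\sum_{x\in X} 1 = |X|$, the constraint $\sum_{x\in X}w(x) > \sum_{y\in Y}w(y)$ for $|X| > |Y|$ holds trivially, which is the point of the idempotent case.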

Cool! We have taken a few first steps towards codifying xenoethics in earth mathematics.

# Should we be worried?

There have been a lot of announcements of resignations or terminations of heads of companies due to sexual misconduct. I think I hear about one every few days now.

One wonders if these formerly highly respected and highly compensated individuals are necessary evils. I can actually believe that every CEO in America has broken American law unforgivably. The stress of work, the excruciating need to think outside the box and compete and perform, really puts the brain in a less-than-cautious state. To lead a company, these leaders tend to have to exceed everyone in exuberance and performance, and that means they are conditioned to break rules, use loopholes, and generally do to an extreme what other people avoid, including doing bad things and breaking the law.

Can’t these smartest business people figure out a righteous way to do sex? I thought they were highly respected and highly compensated for their intelligence and adaptability and ability to think outside the box and take advantage of loopholes?? What else are they doing stupidly that we don’t know about?

Imagination and cultural and historical deviances aside, I wonder if we should let up on this sexual-relations thing. Maybe Americans can be more tolerant of sexual behaviors. Better public aberrations than private deviances. I have floating in my head these movie clips of French men and women swooning about former lovers, escapades, romps, affairs, flings, encounters, longings, memories, smiles, flowers, tenderness, and all of that, with feeling, praising their lived experiences even though the relationships may have eventually ended in sad separation, infidelity, or worse… There are places in the world where people celebrate this aspect of humanity. There are people, whole Peoples, who speak, think, and act of sex without disdain and minimization. But in America today, all you hear is: sex is competition, sex is bad, and bad people have sex.

It may take a long time to get there here, but the right way forward is more righteous and happy sex. And yes, more democratized sex: less sexual inequality, in the sense of less wealth inequality. (imho)

Sex is a good thing.

Sex is a good thing in America too.

Let us also celebrate sex in America!

America is so much more than the tired, the poor, and the huddled masses yearning to breathe free.

Lets make America greater again!

P.s. But we do not condone breaking the law by sexually harassing or raping another person. Those who break the law should be caught and punished by law and corporate governance.

P.p.s. And obviously I don’t mean in ignorance of modern medical science or technology.

P.p.p.s. And I don’t mean non-consensual dissemination of personal, private, and secret information in illegal and immoral ways.

P.p.p.p.s. I do not express or imply a belief that mastery of any one or more arts has an inherently evil nature.

# Equality of Utility II

Some time ago we investigated the equality of benefits. Roughly speaking, let us degenerate real-world actions into discretely selectable choices of action $a\in A$ for an individual $x$, who has observable features $f(x)$ and a protected feature $p(x)$. Suppose the company must choose among a set of actions $a \in A$. What is a workable definition of fairness or equality in such a decision-making effort with respect to the protected properties $p$?

Let god bestow upon us, a neutral third party, a utility functor $u$ whose evaluation on an individual, $u(x)$, yields a function over actions: $u(x)(a)$ is the utility to individual $x$ of the company taking action $a$, and $u(x)(b)$ is likewise the utility of action $b$.
Let $g$ be the decision process of the company: $g(f(x))$ is the decision the company makes, some $a \in A$, for the individual $x$. Then the right thing to do is
$g(f(x)) = argmax_{a\in A}(u(x)(a)) = g(f(x), p(x))$
Simple: we do as god says, acting as if we have the knowledge of an oracle, even when knowing some discriminable information that we then choose to ignore.

This is not as easy as it looks in a formula. Think of a person with a clown nose and one without: your behavior will likely be very different between those two persons, even if you decide that a clown nose has absolutely nothing to do with the task at hand.

Additionally, the nature of our imperfection dictates that our systems that we build are imperfect. What if we cannot achieve God’s will? What if we fail to do the virtuous even when we know what the right thing to do is?

What could a neutral third party reasonably demand of a faulty company? One suggested approach is to establish probabilistic equality among protected classes. Suppose there are some number of classes $m\in M$, corresponding to values of $p(x)$, whose utility we must protect. (So, for example, $M$ could be the Cartesian product of age, sex, race, birthplace, religion, and political party.)

$E(u(x)(g(f(x)))\mid m) = c\ \forall m\in M$

That is, the expected customer utility for each class is identically some value $c$. This is a simplification, as there are other classes of equivalence among stochastic variables.
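A toy illustration of how this criterion can diverge from parity on classifier error rates (all counts and utilities are invented): two classes can have identical accuracy yet unequal expected utility when their errors are of different kinds.

```python
# Invented confusion-matrix counts for two protected classes; both have
# accuracy 0.9, but class A errs with false positives and B with false
# negatives.
counts = {
    "A": {"tp": 50, "tn": 40, "fp": 10, "fn": 0},
    "B": {"tp": 40, "tn": 50, "fp": 0, "fn": 10},
}
# Invented per-outcome utilities: a false positive hurts more than a miss.
utility = {"tp": 1.0, "tn": 0.0, "fp": -2.0, "fn": -0.5}

def accuracy(c):
    return (c["tp"] + c["tn"]) / sum(c.values())

def expected_utility(c):
    return sum(utility[k] * c[k] for k in c) / sum(c.values())

# Equal by the error-rate criterion...
assert accuracy(counts["A"]) == accuracy(counts["B"])
# ...but not by the expected-utility criterion E(u | m) = c.
```

With these numbers, both classes score 0.9 accuracy, yet the per-class expected utilities come out 0.30 and 0.35, so the constant-$c$ constraint is violated.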

Note this framework has some slight benefit over the traditional machine learning framework of evaluating equality on the confusion matrix of classifier performance $g$. Here are a few of the most inspiring examples that I suffer from:

Situation 1: I noticed that my coworker was getting Tesla car advertisements while I did not receive one. Even though my utility in not receiving the advertisement was a negligible loss, because I cannot afford a Tesla, I still felt angry. I might even be tempted to find a protected attribute of mine to claim that Tesla discriminated against me in its advertisement campaign: What! They think a mid-aged Asian man can’t have a midlife crisis or can’t afford to splurge on a Tesla? In this case, a true negative for predicting response/conversion on a Tesla car ad was still offensive enough to cause problems. In retrospect, receiving the ad would have had positive utility for me; when I reached out to Tesla, I learned more about how the car would work for me. But the decision seemed to produce a negative sentiment in its subject. (The company has, since my drafting of this blog entry, sent me repeated invitations to test drive the S, perhaps due to a recent but small increase in my disposable cash, which I may consider calling upon by taking the offer to test drive at a suitable time. This is just an example.)

Situation 2: I am offended when I do receive an advertisement for STD testing, and in particular for the hepatitis family of diseases. For god’s sake, there’s an Asian Liver Center at Stanford whose purpose is to check me for hepatitis and other liver problems present in Asian livers. In this case, god bless me, I am free of hepatitis and liver problems of any kind, so this is a false positive in advertising. I am offended. And in reality one may argue that the benefit of this advertisement to me, increasing my chances of early detection, is positive: $E(u(huan)(g(f(huan))))>0$. I still feel offended. This case is a false positive for advertisement conversion. It has positive utility to have been shown to me. And yet it produced negative sentiment.

Situation 3: I just received a piece of snail mail from a Redwood City mortuary advertising their services to Mr. and Mrs. Chang. I am terrified. I feel this is a death threat of some form, putting the idea of me dying in Redwood City into my head. The letter has a hand-addressed envelope. This is a false positive for advertising relevance (I did not die, not yet anyway, and I am not planning on dying), it has zero utility for me, and I definitely feel very negative sentiment.

These are but several of many possible situations where the company could do the right thing in front of God, and in front of the board, yet still err and thereby produce very negative sentiment. At the risk of running out of numbers, I have not enumerated all the types starting at 1.

To summarize, several factors ultimately enter a company’s decision-making process; non-exclusively, they are:

• the $E(u(x)(g(f(x))))$ for $x$: whether the action is defensible in front of an oracle, God, or a court of law;
• how the action will make the subject individual feel, the sentiment it produces, irrespective of objective utility;
• whether the utility function is universally accepted;
• and finally, the company’s bottom line.

With these considerations in mind, we can now continue with our exploration of fairness.

# Sguan

That’s my new Caffeine Name (like a porn-star name, but for ordering at places like Starbucks).

Chalk it up to being Chinese-American in 21st-century America.

This was the Starbucks in Palo Alto where more than half of the customers are Asian. I mean, I really should be upset, indignant, and filing complaints with HQ like I usually do… and I never smoke weed… but all I can do is giggle, like my little girl does right now, at the sight of this on my cup of iced decaf Americano. Serves me right for ordering that anyway.

Yish…

This is so good, we need to make it a spectator sport. I would make big moolah if I could somehow capture these moments of genuine genius, and the follow-up interactions or reactions, for reproduction, en masse to the masses, later.

That’d be something you’d be interested in observing, wouldn’t it?

P.s. Full disclosure: I hold an investment position in Starbucks. There have been fewer than a quarter dozen wild, deep-ocean Caffeine Names that I have caught worthy of FAM blogs in many years, imho. (For example, I just found a picture of “Bahn.”)

# I Didn’t See it Coming!

Almost hit a Tesla X yesterday, just before Mother’s Day. My family in tow, I was trying to make a left turn onto Middlefield from Watkins in Menlo Park. I stopped for something else and saw a red flash from the far lane across. If I hadn’t stopped, we’d have been in an accident in which the Tesla hit us from the right.

I clearly did not see the Tesla X. It was apparently very visible to my passengers, who admired its brand-new glistening red paint job. But seriously, I did not see the car. I looked. There is a small smear on my right lens, but I have two eyes; why didn’t the other one see it? I stopped and looked, and looked twice, with the left-right-left. The minivan has given me the habit of extra caution.

I didn’t see it!! I can’t stand the fact that this happened! How do I avoid it next time?

I drove back to check out the scene of the non-accident. The one possible scenario is that the Tesla turned right onto Middlefield from James Ave, which is one tenth of a mile away. This stretch of Middlefield Road has a speed limit of 35 mph.

Say I was careless and looked away for 3 seconds.

$2 \times (0.1\ \text{mile}) / (3\ \text{s})^2 = 35.76\ \text{m/s}^2 \approx 3.65g$

Google results show a Tesla X getting to 60 mph in about 2.28 seconds, an acceleration of 11.8 m/s².

So, root-cause analysis: this non-accident wasn’t caused by excessively fast cars. Even if I misjudged the time by another second, for a total of four seconds looking to my left, that’s still 20 m/s² of required acceleration, well beyond what today’s Tesla is capable of.
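A quick sanity check of the arithmetic above (assuming the 0.1-mile James-to-Watkins distance and the quoted 0-60 time, and using $d = \frac{1}{2}at^2$ for a start from rest):

```python
MILE_M = 1609.34  # meters per mile
G = 9.81          # m/s^2

def required_accel(distance_m, t_s):
    # d = a t^2 / 2 from rest  =>  a = 2 d / t^2
    return 2 * distance_m / t_s ** 2

d = 0.1 * MILE_M                 # ~160.9 m from James Ave to Watkins
a3 = required_accel(d, 3.0)      # ~35.8 m/s^2, about 3.65 g
a4 = required_accel(d, 4.0)      # ~20.1 m/s^2 if I looked away for 4 s
tesla_a = (60 * MILE_M / 3600) / 2.28  # 0-60 mph in 2.28 s -> ~11.8 m/s^2
# Either way, the required acceleration exceeds what the car can do.
```

The 3-second and 4-second figures both come out above the 11.8 m/s² capability, which is what rules out "the car was just too fast."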

This leaves me with a dreadfully chilling conclusion:

I didn’t see it coming!

I took my non-Tesla-X minivan for the acceleration test and achieved a 12-second time from idle between James and Watkins. At the speed limit of 35 mph, covering the distance takes 10.28 s.

Personally, I’d guess the Tesla X was driving at most 50 mph when it flashed in front of my eyes. In 3 seconds it would travel 67 meters, or 0.042 miles.

Hrmn!! Actually, that’s getting closer. So if the Tesla X had been driving 50 mph, and I didn’t see it when it was halfway from James to Watkins, and I looked away for 3 seconds, then I would have seen that red flash and almost hit it.

There are vines and walls to my right. Btw, this makes my passengers’ story dubious, as that seat has an even more obstructed view of my right side than the driver’s seat. So I’m leaning more towards believing that I just didn’t see it because it was still hidden behind the vines when I looked right, because I hadn’t advanced far enough onto Middlefield to see it coming. Then I inched forward to look to my left (also obstructed by vegetation), which took 3 seconds. I decided I could make it across with respect to cars coming from the left, accelerated, and almost hit the Tesla X, which was now in front of me.

I didn’t see it coming.

As believable as that story is, I really do remember looking left-right-left twice, each time after advancing further into Middlefield. The most likely failure was probably after the second right peek: I reacted too slowly to the approaching Tesla, looked left, saw it was clear, and decided to cross.

I didn’t see it coming…

I don’t even remember why I stopped.

I just don’t remember seeing the front of that Tesla X! It even had daytime running lights on! How could I not see it???!!

The masonry wall lining the other side of the street is also red, but much faded. It could not have hidden the red Tesla X.

I just didn’t see it coming!!

I hope this blog entry isn’t written unknowingly from limbo. I had four other souls in my car… and the Tesla had at least the driver, the autopilot, and perhaps passengers. That collision, a t-bone, broadside, side impact, would have been least fun for me, exposing two mothers and three daughters to the direct hit. My car is 4600 lbs at most, and a Tesla X is up to 5400 lbs: a 17% mass advantage. It is a seriously losing situation for me in all possible scenarios; since he had the right of way, I would be more at fault even if he was speeding.

Sigh, and I really didn’t see it coming. 😦

P.s. Rereading, I guess it sums to this: either I or the other driver made driving mistakes, and it wasn’t because of the Tesla X’s volume, color, acceleration, or autopilot.

# We are Due to Malice

Still reading Homo Deus… So many new things in history to absorb.

So it may be the case that our evolutionary success is due to malice towards other species. We incorporated hatred into our society. When we hate something, we stir up the same hatred in other humans. In this fashion we unite and destroy others, somewhat irrationally. Even in times when resources and space were plentiful, we found a will to organize and fight. The Deus author may say it is a shared fiction, but to me it feels more like a baser sentiment.

What we share, even more rudimentary than fictional stories about our kind, about ourselves, and about our world, is the sentiment of hating (or loving). We hate our enemies, and we love ourselves. We hate the Devil and love God. We hate tornadoes and love balmy Sundays.

Hatred! That is what makes us us.

Imagine a competitive situation: at the end of Titanic, Jack’s self-hate made the fateful selection of letting Rose survive. I wonder how often such a scene occurs in evolution: mommies throwing their babies out of fire or water while devastating the object of their despisal, their own bodies. Can we bring our imagination to a scene in which human success in evolution is in part due to united hatred, in addition to all of our other beneficial adaptations? Could it be that the hating sentiment is the real key to our prosperity?

We could carry out an experiment: take a group of people and ask them to never hate, and, while we’re at it, to never have ill intentions. Such an experiment would scientifically test the hypothesis that people without hatred will be severely disadvantaged, to the extent that they would not propagate successfully. The hypothesis can be proven false, and the experiment can be replicated. It could be a very informative experiment.

# Affirm us!

Oh dear God please affirm us!

I am having a slight bit of an imagined mental crisis reviewing Homo Deus. At the part about research regarding jobs that will be replaced in the next 10 to 20 years, it occurs to me that a job like tour guide is actually kind of fun. One wonders if jobs will become entertainment: you may choose what you want to do for the sheer pleasure of doing it. (Like a more utilitarian world, maximizing pleasure irrespective of abstract utility.)

The world would run an Affirmative Action for Humans. Whatever people think is best for themselves, they can do just for self-improvement. And the rest of the AI-enabled world moves on with the most optimal enjoyment of life…