Intelligence Kernel: What Services Need to Be Fair

In previous posts (1, 2, etc.) we wondered about practical definitions of fairness, considering all the advantages of our modern sciences and technologies. One particularly stimulating idea was equality of effort in servicing. Equality of effort demands that a servicing entity (the Server) make equal effort to provide service to all its customers (the Customers). My example was for the servers at a restaurant in Little Mexico to serve me real cow components cooked the same way and on the same plates as all my fellow customers, who speak Spanish and none of whom abstain from eating meat. Since my peculiar affliction of birth, education and vegetarianism is no fault of the restaurant owners, it might be very reasonable to demand only that they try hard enough to please me. That is, to demand that they try but not demand that they succeed in pleasing me. But some of that servicing involves a live human person thinking, their facial muscles moving to smile as they greet me. How do we request that the Server make sufficiently fair mental and emotional effort? We consider the mental component of that question in this blog entry.

In this day and age of impending computerized AGI, we may suppose WLOU (without loss of usefulness) that we can write the Server's mental process in a functional manner: the Server's brain B, upon receiving my identity I and a service order O, produces an output B(I, O). Suppose that there is another person H for whom the Server makes the same consideration B for an identical service order O, producing B(H, O). How do I know that B doesn't have a clause inside that says:

B(I,O) := 0
B(x,O) := O^6

Clearly, it did no thinking for me, producing 0, while it raised O to the sixth power for everyone else, which is arguably a lot more thinking than 0. One inclination is to declare that we must have identity-blind thinking. If we disallow thinking about the Customer when servicing, then the effect will be equal effort:

B(x,O) = B’(O) = O^6

It only matters what the Customer asked for, not who is asking. The problem with this is that while it achieves some type of fairness, it is unrealistic and ultimately not fair. I would most definitely want the Server to think about me when serving me. Imagine that I walk into the restaurant with a child on my back. A Server who is indifferent to me will pull the chair back and ask me to sit. An intelligent, attentive and thoughtful Server would first inquire whether I preferred a child seat or a booster seat, bring my choice out and place it next to my chair, and then pull my chair back for me (and if I looked masculine enough, perhaps forgo pulling back the chair, as it is suggestive).
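To make the two extremes concrete, here is a minimal Python sketch (the function names and the customer labels are mine, purely for illustration): the first function hard-codes a do-nothing clause for one customer, mirroring B(I,O) := 0 and B(x,O) := O^6; the second is identity-blind and never sees who is asking.

# Sketch of the two extremes discussed above; names and labels are illustrative only.

def serve_discriminatory(customer, order):
    # The discriminatory brain B: a special clause spends no thinking on one customer.
    if customer == "I":
        return 0
    return order ** 6

def serve_blind(order):
    # The identity-blind brain B': equal effort for all, but it cannot be thoughtful,
    # because it never sees who is asking.
    return order ** 6

print(serve_discriminatory("I", 2))   # 0  -- no effort spent on me
print(serve_discriminatory("H", 2))   # 64 -- full effort for everyone else
print(serve_blind(2))                 # 64 -- the same effort for all, but blind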

So, in fact, we expect that to be thoughtful, the Server must consider me:

There is no B’ such that

B(x, O) = B’(O)

This requirement is difficult in some situations. For example, if part of the Server’s job is to open the door as I approach, would it not do so for everyone? Due to the nature of the Service, the Server cannot consider different actions for different people.

Sadly, the workaround I must suggest, a bit like Kant exaggerating to a thug, is for the program to look the individual up in a lookup table of exceptions. From a program-analysis perspective, x would then become needed in an irreducible way. And in reality this is a good habit: programming for extensibility. For example, we may later implement special rules for opening doors for wheelchairs, stretchers and crutches.
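Here is a sketch of that workaround, assuming a hypothetical exception table keyed by customer (the table entries and the function name are my own invention): the default door-opening behaviour is identity-blind, but the lookup makes x genuinely needed, and the table can later grow entries for wheelchairs, stretchers, or crutches.

# Sketch of the exception-table workaround; names and entries are hypothetical.
# The lookup is what makes the customer x irreducibly needed by the program.

DOOR_EXCEPTIONS = {
    # customer -> special handling; extensible for wheelchairs, stretchers, crutches
    "customer_on_crutches": "hold the door open wide and wait",
}

def open_door(customer):
    # Default, identity-blind behaviour...
    default_action = "open the door as the customer approaches"
    # ...unless the exception table says otherwise; x is consulted here.
    return DOOR_EXCEPTIONS.get(customer, default_action)

print(open_door("anyone"))                  # the default action
print(open_door("customer_on_crutches"))    # the special handling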

Another workaround is to adjust the scope and analyze the possible outcomes. A program does not need to be intelligent if it does not have the freedom to make any choices. Recall that we have previously attempted to quantify (by cardinality) and order (by ordinality) the ideas of freedom, empowerment, liberty, and rights, based on the cardinality of the available choices. A thought with no freedom of choice is not an intelligent thought.

In reality, the thoughts leading to the outcome are probably a mix of intelligent and unintelligent thought functions:

Inputs: x, O
a = f1(x)
b = f2(O)
c = f3(a, b)
d = f4(c)
Output: d

Each of the f’s above is irreducible. In particular, the function f3, being an irreducible function, requires both inputs a and b to compute its output. (Examples of such functions include addition, subtraction, multiplication, division, if-then-else, etc.) f3 is where we truly decide what to do, differently, for different persons making the same order O, and it is precisely here that we can inject intelligence. Therefore we shall refer to these irreducible function nodes, which dominate the dataflow both forward and backward, as the Intelligent Kernel. To be fair, we must ensure that all Intelligent Kernels are fair. This idea is somewhat reductionist; we are in effect saying that, in the domain of computable functions, the best we can express Intelligence as is irreducible functions having at least two inputs. But it is not as absurd as it sounds: the transistors that do most of the thinking for computers compute exactly such functions in binary.
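As a sketch of the dataflow above (the concrete bodies of f1 through f4 are placeholders of my own choosing), f3 is the node that genuinely combines the customer-derived value a with the order-derived value b, so it is the only place where fairness of effort toward different Customers can be inspected:

# Sketch of the dataflow above; the bodies of f1..f4 are illustrative placeholders.

def f1(x):            # considers only the customer
    return len(x) % 7

def f2(order):        # considers only the order
    return order ** 2

def f3(a, b):         # the Intelligent Kernel: an irreducible function of both inputs
    return a + b      # addition is one of the irreducible examples named in the text

def f4(c):            # post-processing, customer-independent
    return 2 * c

def serve(x, order):
    a = f1(x)
    b = f2(order)
    c = f3(a, b)      # the only node where identity and order meet
    return f4(c)

print(serve("I", 5), serve("Hernandez", 5))   # same order O, different Customers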

In the case of my nearly $50k worth of Tesla solar system, the lack of thought inspires a different idea of thoughtfulness. What happened was that the Tesla engineer (the Server) in charge of designing my system cobbled together a system where the solar panels can charge the batteries fully for a few months around the summer solstice each year. For the other half of the year, due to my particular situation as a Customer, the solar panels generate so little energy that the battery is effectively empty. Based on the response of Tesla support teams (and I mean entire teams of people, because I spoke to many over the ensuing years), I would imagine he thought he was pretty smart. The way Tesla goaded customers into their fold was a “total annual generation” metric. The support team repeated to me several dozen times that the total annual generation would be great. The unfortunate part is that there is also a battery system that is part of the Server’s design, and in the half of the year when the solar panels cannot generate much of anything, the battery is also useless. It would seem to me that the engineer at the Server somehow did not consider the empty batteries in the winter. This problem could have been avoided if he had run a very basic simulation of the system and noticed that the batteries would basically not charge for half the year. In the present framework, we must insist that the simulation function T take the Customer into consideration. (In the Tesla-versus-me case, the shading on the Customer’s house in the colder half of the year.)

t_1, t_2, ..., t_n are the n time steps of a discrete simulation
s_0 is the state of things on the day the system is activated

s_{i+1} = T(x, s_i)

That T must not be decomposable into some T’ with

T(x, s_i) = T’(s_i)
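Here is a back-of-the-envelope version of such a simulation, in the same sketchy spirit (every number, including the monthly shading profile and battery size, is made up by me, not taken from Tesla): the transition T takes the Customer's shading into account, and even this crude loop would have shown the battery sitting empty for half the year.

# A crude monthly simulation sketch; all numbers are invented for illustration.
# T(x, s_i): the next battery state depends on the Customer's shading profile x,
# with the month index playing the role of the time step i.

SHADING = [0.8]*3 + [0.1]*6 + [0.8]*3    # fraction of sun lost each month at this Customer's house
CAPACITY = 13.5                           # kWh of battery storage (made up)
GENERATION = 600.0                        # kWh/month the panels could make if unshaded (made up)
LOAD = 450.0                              # kWh/month the house consumes (made up)

def T(x_shading, state, month):
    surplus = GENERATION * (1.0 - x_shading[month]) - LOAD
    return max(0.0, min(CAPACITY, state + surplus))

state = 0.0                               # s_0: the battery on activation day
for month in range(12):
    state = T(SHADING, state, month)
    print(f"month {month+1:2d}: battery at {state:4.1f} kWh")

Running it, the battery reads full only in the unshaded summer months and sits at zero the rest of the year, which is exactly the outcome a Customer-aware T would have flagged before installation.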

(Man! I hate driving nowadays with the growing number of Tesla cars on the roads. Without these thoughtfulness requirements, it is hard for me to have much confidence in their products.)
