The QIM: a Measure of Fairness in Servicing

In a previous episode, we discussed the components of QIM, as well as some ways to interpret, measure, and perform the decomposition empirically. I suppose it is high time to write out the model more formally.

As common sense dictates, a service of interest, in the context of fairness analysis, is any system, machine, human, or a mixture of both–collectively the Service. There is an ever-present environment, a context, within which the system physically, legally, and morally resides–collectively the Situation. There are the Consumers of this service, and a Metric computable from observations. The service has a fixed Duration of interest to each of its Consumers.

The question QIM tries to answer is: under the Situation, does said Service provide equal service, as measured by the Metric, to its Consumers during the relevant Durations?
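For concreteness, one way to write this down (my notation here, nothing canonical from the earlier episode) is: let S be the Service, Σ the Situation, c_1, …, c_n the Consumers, d_i the Duration of interest to c_i, and m the Metric, so that m(S, c_i, d_i | Σ) is the service observed by consumer c_i. QIM then asks whether

\[
  m(S, c_i, d_i \mid \Sigma) \approx m(S, c_j, d_j \mid \Sigma) \quad \text{for all pairs } i, j,
\]

or, collapsed into a single disparity number,

\[
  \Delta = \max_{i,j} \bigl| \, m(S, c_i, d_i \mid \Sigma) - m(S, c_j, d_j \mid \Sigma) \, \bigr|,
\]

with the Service deemed fair when Δ stays below whatever tolerance the Situation allows.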

This seems a positively silly thing to write about. It is obvious that the local taco shop in my neighborhood does not provide me with the same service as other visitors–those who can order animal guts, brain, ear, tongue, and intestine stew in Spanish, and enjoy it, whereas I, a non-Spanish-speaking vegetarian who has never had that type of food, cannot order it and choose not to appreciate it at all. The Situation and Metric for this simple neighborhood taco shop seem absurdly difficult to define well enough for the exercise to be useful. (There is also a homeomorphic Cantonese cuisine, amongst many other foods, whose dishes I can neither pronounce nor eat.)

As with all human matters, one would expect some compromise when the mind corners itself. For me, that compromise is effort. In the course of the shop servicing its customers, I would be made happier knowing equal effort was made to service me as was made to service my Spanish-speaking neighbors. Why, you ask? It seems a peculiarly pernicious thing to demand pains of your server when the inability to enjoy the food is due entirely to me, the Consumer (this, btw, is not common; realistically, in discrimination situations, the server's taste is not… so innocent, imho); the disadvantaged Consumer seems at fault! To this I must answer: but I paid the same as my neighbors (probably also not true; I had to pay more, but for argument's sake, and without any loss, let's say I paid exactly the same price as they did), so the restaurant should, as a matter of fairness in service, give me the same respect it gives its other customers. The ingredients and materials used to prepare the food, the man-hours, the natural gas, the plates it is served on, the diligence and persistence of mind and body that produce the final product: these must be no less for me than for my Spanish-speaking neighbors.

In one respect, this approach is useful: ultimately, the effort a Service makes is more fully under its control than the outcome of the service is. This does not make me happy, as I will never enjoy beef tongue, eating it or otherwise, but that is not something the server can affect.

(I’ll surely regret this. I recall a time, maybe a decade ago, on this blog when I said I’d never floss until the day I cleaned my other end with similarly intrusive externalities… I floss now.)

In a second respect, this demand is made partly out of respect for money. For money to maintain its integrity, its value, demands must be made at its expenditure. For if the money buys me less, then money is worth less to me. Out of my, the Consumer’s, respect for the money-based market-economy Situation, I demand that my money buy the server’s expense and exertion. In a different situation, say as a vegetarian, I might wish to demand that no cow be harmed in the making of cow-tongue soup, but that is not the more generally applicable, economics-driven Situation I am currently addressing.

(And further, these are often stated terms of service, using expressions such as “performance in good faith” and “fulfillment by all reasonable/commercial efforts”. Here my stipulation is that the performance be in good faith towards accomplishing the service and, beyond being merely reasonable, be of equal effort among Consumers.)

Therefore, QIM can be applied to this effort-based Metric of the Service as well.
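To make the bookkeeping concrete, here is a toy sketch (Python chosen arbitrarily; the consumer names, effort units, and tolerance are all hypothetical) of what checking QIM over an effort-based Metric might look like:

# Toy sketch of a QIM check over an effort-based Metric.
# Everything here is hypothetical: the consumer IDs, the effort
# log, and the tolerance are illustrative, not a real system.

def qim_disparity(effort_by_consumer):
    """Return the largest pairwise gap in effort received."""
    efforts = list(effort_by_consumer.values())
    return max(efforts) - min(efforts)

def is_fair(effort_by_consumer, tolerance):
    """QIM verdict: was effort equal across Consumers, up to tolerance?"""
    return qim_disparity(effort_by_consumer) <= tolerance

# Effort here could aggregate man-hours, ingredients, gas, plates...
effort_log = {
    "spanish_speaking_neighbor": 1.00,  # normalized effort units
    "me": 0.95,
}
print(is_fair(effort_log, tolerance=0.10))  # True: gap of 0.05 is within tolerance

The interesting work, of course, is in producing the effort log at all, which is where the interpretation and measurement business from the previous episode comes in.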

Trek

Seems like there is a Trek Discovery season II. Wonder what’s left in season I, though. So far the folks are still growing into their characters. The Captain is stealing the show a bit; honestly, the development of Michael is… rather supernatural. For a human minority, I kind of wanted to see her succeed as a human… but she does, I suppose, reintroducing herself to the new security chief…

Picard facepalm at Sarek getting caught preferring Spock over Michael.

Generalization Initialization

I’ve been talking to coworkers about a recent batch of papers claiming deep neural networks can or cannot generalize effectively.

I feel I do not have the same respect for this problem as my coworkers. I do not fear it as they do.

Let’s see, how bad could this be?

I suppose an example of this problem is learning to identify a cat. The robot may find out through reinforcement learning that a cat is best identified by scaring it suddenly and hearing the surprised meow. So few mute cats exist that accuracy is only negligibly decreased by this overfitting. The obvious problem is that mute cats do exist, and Hollywood will make a movie about the one that was used to defeat the AI that overpowered its human creators.

(And the reverse could be true as well, for example toy dogs finding out that scaring children into a crying fit is the best way to tell a child from an adult.)

The intelligent reader will quickly point out that there are plenty of things covered in deepnets-101 that prevent this from happening. (Well, maybe not necessarily for reinforcement learning, but straight-up deep nets have nice regularizers.)
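For instance, a minimal sketch of the sort of thing I mean, assuming PyTorch (my choice of framework; the layer sizes and knob settings are illustrative, not from any of the papers in question):

import torch
import torch.nn as nn

# A small classifier with two of the usual deepnets-101 regularizers:
# dropout inside the network, and L2 weight decay in the optimizer.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),        # randomly zero activations during training
    nn.Linear(128, 2),
)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-2,
    weight_decay=1e-4,        # L2 penalty on the weights
)

Dropout and weight decay both push the net away from brittle scare-the-cat shortcuts, though of course they are no guarantee.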

What else could happen? Was there a meme around the internet about the indistinguishability of dogs and fried chicken? The fear is that Cortana would grab the dog and microwave it when you ask it to reheat the leftovers from KFC. The generalization in this case is too general: the model found anything that could resemble a dog instead of just the dogs. Then again, this was just a meme; I am not sure it would withstand serious analysis.

More sophisticated problems, often jokingly put on display, are the kinds of mistakes that mentally ill people make. Well, mentally ill people and geniuses. The AI could make framing errors: throwing a person into a pool to clean some dirt off his shoulder. The solution is not within any reasonable framing of the problem, but it could be chosen due to the wrong type of generalization.

There is also the problem of leakage. For example, a learning system could overfit training data consisting of FBI profiles so badly that it becomes a detector of whether the FBI has investigated a person rather than a detector of true crimes. The failure to truly generalize to other populations, for whom the FBI never collected information, is caused by the learning system picking up the biases and errors of the whole FBI apparatus, itself composed of many error-capable humans. The theory, at least for today’s systems, is that they are at least as bad as the humans they learn from.
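A crude way to see the leak, as a toy sketch on synthetic data (nothing here resembles real FBI data; every variable is made up for illustration), is to evaluate the trained model against the true target on a population the proxy label never covered:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic features. "true_crime" is the target we actually care
# about, but the available training label "investigated" is a biased
# proxy: investigations happen only where the agency already looked.
X = rng.normal(size=(2000, 5))
true_crime = (X[:, 0] > 0.5).astype(int)
looked_at = X[:, 1] > 0                 # which population was covered
investigated = true_crime & looked_at   # proxy label: crime AND coverage

model = LogisticRegression(max_iter=1000).fit(X, investigated)

# Accuracy against the real target, split by coverage:
pred = model.predict(X)
for name, mask in [("covered population", looked_at),
                   ("never-covered population", ~looked_at)]:
    acc = (pred[mask] == true_crime[mask]).mean()
    print(f"{name}: {acc:.2f}")
# The gap between the two numbers is the leakage showing itself.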

This now indeed seems to be a very interesting problem to consider. But there may not be a one-stop-shop solution to all of AI’s problems; generalization is probably just one of many things we must solve for in future systems. This is a great opportunity for scientific advancement and the development of specializations, such as Robopsychology, and psychohistory, and…

But for real.