Discovery Season II premiere

Michael's smile is very pleasant to see. This is a great addition to her character. Her pronounced frown still happens a lot, but it does not become her the way her smile does. The smile after the sneeze, the smile when she sees her father. Nice!

Sarek: perhaps the makeup, or natural aging, has drastically improved this character. His head doesn't bobble in that very un-Vulcan way anymore…

There's something off about Pike. He's got swagger, as we expect, but there's some strange delay between his dialogue and everyone else's. There are a few successful interactions, but overall it feels disconnected.

Anyway, I guess the lack of focus is most painfully highlighted when it is regained. "We got him, right, ladies?" And the ladies respond, affirmatively.

Maybe it's because the bridge crew doesn't look at Pike. Recall the Chekov and Sulu days: they turn their whole bodies toward the captain, full eye contact… locked gaze till dismissal. Data, Troi, and the others in their fancy sofa chairs, Riker with his tilted head, but all faces still pointed at the captain. Man, Gene Roddenberry's indelible mark on Star Trek is fading away…

I don't know, maybe they all did look, but it just doesn't feel the same. And Pike's speeches. What's wrong with them? They sound great, but I don't get it. The words don't sink in like other speeches do…

Fairness by Bounded ETA Matching

At some point, we have to admit that we are limited beings. Our present systems of ownership and reward dictate that owners enjoy gains and suffer losses in proportion to their share of ownership (most directly, shareholders of a public company, bondholders, the inventor of a machine, etc.). As we have seen elsewhere, this system tends to benefit larger owners more when things go well. If there is a maximum amount of ownership beyond which one's happiness can no longer be enhanced, then the larger owners will always reach it before others. But we are endowed with the capacity to contemplate fairness. We have enough free will to demand fairness for ourselves. We have gained the confidence that this is achievable by making decisions and taking actions collectively.

It might be worth the time to distinguish all that the world's opportunities afford us over a lifetime from those benefits given to us as part of our social contract with that which governs. The government does not promise us the world; it does not promise us food or health care, and it does not guarantee that we will have jobs. The arguments for big and small governments are alike in that they offer to make limited efforts to render fair and just service in the role of a government. I suppose you could say they also offer the judicial system to challenge the government's effort in terms of fairness and justice, and to seek remedy when it is lacking. So it seems completely obvious and reasonable to me that we should offer up a thought system in which we fundamentally plan on the government rendering limited services, with known limits to those services. What, then, is the most fair service it may render to each individual? We therefore identify this upper bound on government services in developing novel ideas of fairness.

One is inspired to think of a different kind of fairness. Suppose we do not concern ourselves with how ownership and causation align with incentives for agents, and only consider fairness. One can imagine a system as follows. Let m*_{p,s_t} be the maximum foreseeable measurement of benefit for an individual p at state s_t at time t (and by measurement we mean the best determination of the quantity of quality benefit). We should seek an action that equalizes the expected time of achievement of such benefits among all equals in the population; for the next action to be fair, it should move us toward this. Let's say that this expected time to arrive (ETA) is calculable using a function E(m*, s_t, p, a); then we assert that all choices should be made in an effort to balance:

\forall_{p,q\in P} \quad E(m*, s_t, p, a) = E(m*, s_t, q, a)
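To make the constraint concrete, here is a toy sketch. The `etas` table is a hypothetical stand-in for precomputed values of E(m*, s_t, p, a); the action names and numbers are invented for illustration. The BETAMly fair choice is the action whose ETAs are closest to equal across the population.

```python
# Toy illustration: each candidate action maps to an ETA per person,
# standing in for E(m*, s_t, p, a). All values are made up.
etas = {
    "status_quo":   {"me": 12.0, "rich": 3.0},  # very unequal ETAs
    "betam_reform": {"me": 6.0,  "rich": 6.0},  # equal ETAs
}

def spread(eta_by_person):
    # How far this action is from satisfying the all-pairs equality.
    vals = list(eta_by_person.values())
    return max(vals) - min(vals)

# The fairest action minimizes the spread of ETAs (zero = fully BETAMly fair).
fair_action = min(etas, key=lambda a: spread(etas[a]))
print(fair_action)  # betam_reform
```

In this toy form, the all-pairs equality constraint collapses into "minimize the spread of ETAs," which is zero exactly when every pair is equal.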

So, for a practical example, one way to achieve identical ETAs is to apply an equal proportional decrement when deciding each action. Suppose my detriment (negative, as a measurement of my benefit) from being part of this governed union is my effective tax rate. Then the next annual change to the tax code should be required to move my tax rate by the same proportion toward the ideal tax rate as a very well-off person's. If my tax rate at time t is 50%, his tax rate at that same time is a cool 10%, and the country needs an effective tax rate of 20% to operate efficiently, then the change to my tax rate and the change to his tax rate should satisfy:

c_{mine} / (0.2 - 0.5) = c_{rich} / (0.2 - 0.1)

Simplifying, the change to my effective tax rate should be three times the change to his, in the opposite direction:

c_{mine} = - 3 * c_{rich}
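The arithmetic above can be checked directly. Writing each person's change as c_p = k · (target − r_p) for a common proportionality constant k (any k works; it is the fraction of the gap closed this year), the ratio of the changes follows from the rates alone:

```python
# Rates from the example in the text; k is an arbitrary common constant.
target, r_mine, r_rich = 0.20, 0.50, 0.10
k = 0.1  # fraction of each person's gap closed by this year's change

c_mine = k * (target - r_mine)  # proportional change for me (negative)
c_rich = k * (target - r_rich)  # proportional change for the rich person

print(round(c_mine / c_rich, 9))  # -3.0
```

Since k cancels in the ratio, the 3-to-1 opposite-direction relationship holds no matter how aggressively the gap is closed.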

If this equality is satisfied for the whole population, then we have made a fair change under Bounded ETA Matching Fairness (BETAM Fairness).

[[Upon rereading this, I suppose it will be easier if c is explained as the same additive change we plan to effect on our tax rates each period, over a number of periods, so that we reach the expected, ideal, necessary, and (for this example) equal target effective tax rates simultaneously.]]
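The bracketed note can be sketched as a schedule: each person applies the same additive change per period, sized so that everyone lands on the common target after T periods at the same time. T and the rates below are made-up numbers for illustration.

```python
# Each person's per-period additive change c_p = (target - r_p) / T,
# so all rates converge to the target simultaneously at period T.
T = 3
target = 0.20
rates = {"me": 0.50, "rich": 0.10}

steps = {p: (target - r) / T for p, r in rates.items()}

for _ in range(T):
    rates = {p: r + steps[p] for p, r in rates.items()}

print(rates)  # both rates arrive at (numerically) 0.20 in the same period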

At a very high level, the proportional improvement approach seems to be an overdetermined system that can never be satisfied: there are n^2 - n equality constraints for n different levels of effective tax rates. It also remains to be seen whether a system can stipulate negative taxes for some and positive taxes for others while moving toward total equality. These are very interesting problems that seem to have very concise mathematical answers.

Another aspect of this idea worth pointing out is that m*_{p,s_t} might take on different values for each individual. How can this be? A simple and often quoted resolution is that people should get what they need: a 5-foot-tall man needs different clothes than a 6-foot-tall woman when garments are being made. So it is within my reason that the optimal gain by each person from their government differs from others', even when measured by very universal surrogates of value such as money and status. It might well be that Bezos/Gates/Musk/Page/Jobs/Brin/pick_any really should get billions of dollars and I should only get a few million (again mixing gains and detriments resulting from entrepreneurship, labor, malfeasance, politics, and membership in society, as we do in these types of discussions during the early 21st century, for no obvious reason except for sharpened contrast here). This model does not itself stipulate that everyone is equal. One certainly can set the optimal benefit from government to be equal for each natural person when using BETAM Fairness, as a matter of normative ethical dictum. But one can also set them to unequal values as the result of a whole-society, all-pairs negotiation; by whichever means it is carried out, the resulting target for each party (persons, governments, corporations, etc.) is used to compute BETAMly fair actions or policies.

So, we have described an approach to fairness in which the goal is to optimize everyone's happiness so that they are expected to climax at the same time. Other time-sensitive measurements of benefit from government might be concerned with things like:

  • Lifetime benefit extremes: the biggest tax bill/benefit check each person gets.
  • The gap between maximum and minimum benefit.
  • Smoothness of an individual's benefit transitions: losing $10/day forever versus losing $160k in one trade… err, in your IRA…
  • Total lifetime benefit: for example, the young pay more taxes than the elderly, and the elderly benefit by getting money from the government; over a lifetime, these benefits average out.
  • The general shape of a person's benefit curve, which should grow in a positive direction over time…
  • Etc.

Ps., and certainly one can also request that the QIM change in a BETAMly fair way. But we should probably look into each specific metric to ensure that such an approach is actually physically, socially, economically, and politically feasible. Ideas like being fair to the government, treating it as a subject deserving fair and just treatment; ideas like taking tax payments from the richer and giving to the poorer in the same tax year, based on the poorer wage-earner's income that year; ideas like a universal measurement of the quantity of quality benefit to the (very) heterogeneous but deserving subjects: these must be lunacy beyond that exhibited by the storied exploits of Robin Hood and his merry men.

Pps. It is not lost on me that an old data-science adage may hold: measuring a metric decreases its objectivity. If we insisted that there be income equality, it might be very easy for the world to create expenses for the formerly poor such that their actual quality of life does not improve as the numbers next to their AGI field do, and that they still have no meaningful disposable income to spend. A sort of unfair inflation may take place due to radical socio-economic adjustments. But it still isn't so bad an idea to document these ideas for posterity to puzzle over; maybe they will want to start an enterprise to realize these ideals.

Ppps. It has been pointed out to me that business folk have their own sayings about the effects of measuring a metric: "what gets measured gets managed" and "what gets managed gets done." But in this case it's probably more like a third one: "what makes profit gets made." If there's no financial gain in this, it is very difficult to accomplish by business means. Perhaps there is a faith-based approach to promoting equality and fairness; a faith-based organization might get further with these ideas than a business venture.

From HTML to Marketing2Vec

A curious thought came to mind. I answered my own annoyance at the 30 minutes it took me to search for and calculate the electric-only and gasoline-only energy cost of a Chevy Volt with an imagined Google that just answers the question when I ask, "What is the electric-only mileage of a 2018 Chevy Volt?"

The marketing folks and the EPA mucked it up with "mixed mileage," which is useless for deciding whether the charger I'm sitting at is cheaper or more expensive than filling up at Costco with my gas-rebate Visa card.
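The comparison I actually wanted is simple once the electric-only and gasoline-only figures are separated out. All numbers below are assumptions for illustration, not published Volt figures or real prices:

```python
# Cost per mile on electricity vs. gasoline. Every constant here is an
# assumed placeholder, not an official EPA or manufacturer number.
kwh_per_100mi = 31.0     # assumed electric-only consumption, kWh/100 mi
price_per_kwh = 0.30     # assumed rate at this charger, $/kWh
mpg_gas_only = 42.0      # assumed gasoline-only mileage, mi/gal
price_per_gallon = 3.20  # assumed Costco price after the card rebate

electric_cost_per_mile = kwh_per_100mi / 100 * price_per_kwh
gas_cost_per_mile = price_per_gallon / mpg_gas_only

print(f"electric: ${electric_cost_per_mile:.3f}/mi, gas: ${gas_cost_per_mile:.3f}/mi")
```

A blended "mixed mileage" number makes exactly this two-line calculation impossible, which was the whole annoyance.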

Now, my demand is pretty special, but I am entitled, as a human being and economic agent, to optimize my expenditure. The EPA and the carmakers want to sell cars, and are therefore free to express the information whichever way they want.

In the future, my AI can certainly read their publications and answer my question (including with the answer, "it cannot be determined from available information"). But one wonders what happens to our language and culture when one party has so much freedom (and incentive) to bias the represented ideas that it becomes effectively impossible, or at least economically irrational, for a second party to take on the expense of understanding what is said to their own advantage. In fact, the training that goes into this has awesome sophistication; the communication is produced in good faith to offer helpful information according to our social standards; it stands ethically unchallenged; and it is profitable for the producers to produce. I.e., what I earn in 30 minutes (plus the subsequent time spent complaining about it) is more than the 2 cents I would save over the duration of my ownership of this car at this particular charger, at the current Costco gas price, provided BoA/AAA doesn't terminate my gas-rebate card program.

There are a lot of people trained in this kind of communication. They include branding folks, sales people, and public relations people. They also include those skilled at encoding it into the HTML my browser receives. In the future, marketers may have the skills to create an AI-document, a product2vec or advertisement2vec, if you will. My own AI will be compatible with that standard of communication (like my browser can read HTML); it will interpret the marketing vector it receives, understand it, and present it to me in place of the browser. My AI of course understands my economic needs and my preferences. It will therefore dig for things I need and want.

Since obfuscation occurs in human languages and expressions, one wonders how much obfuscation will be embedded in those future AI marketing vectors. Will it be economically feasible for humanity to figure out the right amount of obfuscation to allow?

Alternatively, this might be the fall of AI, if we come to the consensus that we are all very unhappy and that this whole social order built on internet and computer technologies should just fall. Technology would simply fail to unite humanity and move us onward. I would have no problem with that. People have to change for the system to change. Not everyone can be like RBG and change the system before people change wholeheartedly.