The Log Inequality Measure and the Inverse

Consider replacing the square in the QIM with the log function. This inequality measure is useful for modeling realistic expectations.

When I compare my net worth with that of the very rich, I am actually not very hurt by the gap. They are distant from me, and I couldn’t care less if they owned a moon; as long as they don’t drop it on me, I’m fine with it.

But if I compare my net worth with that of my college freshman classmates, or my graduate school classmates, or my coworkers, then suddenly even a difference of $100 can put me in a very bad mood. The underlying cause of this bias is unknown to me. But if my feelings were a guide to what is truly unequal, I could write it, approximately, as

log(a-b)

It is quite noticeable that this curve, for the LIM, has a very different shape from the QIM’s. Perhaps that is because my k-nearest neighbors occupy more attainable positions. It is likely that I can get an equally large serving of cow guts as the señor at the next table. It is unlikely that I can wrestle Micro$oft to the mat by writing a new operating system. In fact, one could almost imagine

(a-b)^{-1}

with the infinity at complete equality set to zero. The Inverse Inequality Metric (IIM), along with its partner the LIM, can perhaps be most useful in personal service efforts to gain equality. For example, I can try smiling a bit more at the cashier and waitress in my neighborhood restaurant while I order cow tongue in Spanish. A little respect will impact my C little, while it may lead to increased E and consequently a larger piece of the cow (dX).
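As a toy sketch of how differently these three measures behave (assuming the QIM from earlier is the squared difference (a-b)^2, and clamping the IIM’s infinity at equality to zero), compare a billionaire against a coworker who is $100 richer than me:

```python
import math

def qim(a, b):
    # Quadratic Inequality Measure: dominated by distant comparisons
    return (a - b) ** 2

def lim(a, b):
    # Log Inequality Measure: compresses large gaps
    return math.log(abs(a - b)) if a != b else 0.0

def iim(a, b):
    # Inverse Inequality Metric: largest for near-equals;
    # the infinity at complete equality is clamped to zero
    return 1.0 / abs(a - b) if a != b else 0.0

me, coworker, billionaire = 5e4, 5.01e4, 1e9
print(qim(billionaire, me), qim(coworker, me))  # QIM: the billionaire dominates
print(lim(billionaire, me), lim(coworker, me))  # LIM: the gaps are comparable
print(iim(billionaire, me), iim(coworker, me))  # IIM: the coworker dominates
```

Under the QIM the billionaire is all that matters; under the IIM it is the $100 difference next door that stings.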

Tax-Free Tax System

If the money paid toward taxes is untaxable, shouldn’t the system apply that rule to the tax itself? Suppose we want to tax a person with income A at a nominal tax rate R. The tax amount is then

RA(1-p)=pA

where p is the unknown proportion of A paid in taxes this year, a.k.a. the effective tax rate. Dividing both sides by A:

R-Rp=p

R/(1+R) = p

A 30% nominal tax rate resolves to an effective rate of 23.1% of income in this system.

So to achieve a target effective rate under this system, one would solve for R:

R = p/(1-p)

Say some bracket should have an effective rate of 30%; under this system, the nominal rate on taxable income would need to be set to 42.9%.
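The two conversions above can be checked with a few lines of Python (hypothetical helper names, just restating the formulas):

```python
def effective_rate(R):
    # nominal rate R applied to income net of the tax itself:
    # R * A * (1 - p) = p * A  =>  p = R / (1 + R)
    return R / (1 + R)

def nominal_rate(p):
    # the inverse direction: R = p / (1 - p)
    return p / (1 - p)

print(round(effective_rate(0.30), 3))  # 0.231 (30% nominal -> 23.1% effective)
print(round(nominal_rate(0.30), 3))    # 0.429 (30% effective needs 42.9% nominal)
```

The two functions are inverses of each other, which is the self-consistency the system is after.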

However you massage dung, it’s still money you have to pay. But the system should be self-consistent: we should not have to pay taxes on money we spent paying taxes in the same year we earned it.

Phished by (800) 922-0206

I just read my 8-digit Verizon password reset temporary password to this 800 number. Half asleep, I had really thought it was Verizon trying to help me recover two iPhone XR’s ordered on my account.

I read them the numbers right underneath this text:

“Verizon Msg: For the security of your account, Verizon will never contact you for this code. Your My Verizon temporary password is dumbanddumber”

DOH!

But the funny thing is, Verizon seems to have a second filter that randomly asks for another field of personal information after someone uses the temporary password. So the entity trying to phish me called me five times in the next five minutes from a landline, (673) 180-4668. I guess they were hoping I was still on the hook and might give them that second field of information.

To my credit, I realized in time, ignored those five calls, and called Verizon to deactivate any changes for the next 24 hours. It seemed they had succeeded in changing my password; I could not log in. But I reset my password using the same mechanism. The downside is, I don’t know what they did to my account in the meantime. They could have downloaded statements containing detailed information about every call I made. They could have ordered two iPhone XR’s… Verizon claims nothing happened, but that’s likely just support-line ass coverage. They don’t want to admit anything happened even if it did, at least not at a casual customer request.

Let’s see what happens…

The Deep Universal Regressor

There’s this idea in Deep Learning that neural networks are universal function approximators: they can approximate any function you can provide data for.

It has confounded me for a long time exactly how they do this for continuous-valued output, but recently, through the grapevine that is the Deep Learning community, I finally discovered one answer to this question.

Consider some deep neural network taking in X and producing some penultimate layer of activations, A. We want to write a formula for producing a Y that approximates \hat{Y}.

Oh boy, who are we kidding, let’s just drop down to tensorflow code…

You want to do

Y = inverse_sigmoid(tf.reduce_mean(tf.sigmoid(A), axis=-1))

Being careful, of course, to calculate the pooling not across the batch but per input, and not to double sigmoid-activate A; the last activation must be sigmoid-compatible. Note that sigmoid produces numbers strictly between zero and one, so the mean, or any convex combination, of a bunch of such numbers also lies in that range, suitable for input to the inverse_sigmoid. And of course, if you need to, A could have been activated with the likes of tf.exp or tf.square and then filtered through tf.sigmoid.
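Here is a minimal, runnable numpy sketch of this pooling (the `inverse_sigmoid` helper is hypothetical, with a small clip since sigmoid never quite reaches 0 or 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def inverse_sigmoid(y, eps=1e-7):
    # logit: clip keeps the argument strictly inside (0, 1)
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

def sigmoid_pool_regress(A):
    # A: (batch, units) penultimate activations;
    # pool over each input's units, not across the batch
    return inverse_sigmoid(np.mean(sigmoid(A), axis=-1))

A = np.array([[0.5, -1.0, 2.0],
              [3.0,  3.0, 3.0]])
Y = sigmoid_pool_regress(A)  # shape (2,)
# a constant row pools back to itself: the second row yields ~3.0
```

The constant-row check is a quick sanity test that the sigmoid and its inverse really cancel through the mean.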

For example, if you think \hat{Y} ultimately grows with tf.log(A), and you have already made sure A is positive, then you can simplify out the exponential and compute

Y = inverse_sigmoid(tf.reduce_mean(A / (A + 1), axis=-1))
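The simplification rests on the identity sigmoid(log a) = 1/(1 + e^{-log a}) = 1/(1 + 1/a) = a/(a+1), which a quick numerical check confirms:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# sigmoid composed with log collapses to the rational form a/(a+1)
for a in (0.1, 1.0, 7.5, 100.0):
    assert abs(sigmoid(math.log(a)) - a / (a + 1)) < 1e-12
print("sigmoid(log a) == a/(a+1) checks out")
```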

The sigmoid/sigmoid^{-1} pair can also be replaced with other bounded activations like tanh or \frac{x}{\sqrt[k]{1+x^k}} and their respective closed-form inverses. It can also be replaced with unbounded constricting activation pairs such as x^{\frac{1}{2k+1}} and x^{2k+1} for a chosen whole number k.

Tada!

This solves several problems at once: your deep neural network needs constricting nonlinearities like the sigmoids, you need to produce a continuous output that may grow at non-linear rates relative to the activations, your computational resources are limited, and you have a lucid hunch as to how the two are related.

Hopefully this helps you and saves a significant amount of brain activity and experimentation. Your problem will probably need a special architecture using some configuration of this pooling layer.

P.s. The use of sigmoidal functions seems to beckon a probabilistic interpretation. The desigmoid, that’s the inverse sigmoid, can be interpreted as a lookup from the CDF of a random variable: the value at which it achieves that accumulation. Essentially, in the most basic configuration, this regressor uses each element of A in the penultimate layer to support (or to reflect evidence) that the desired Y is larger. In a human brain, this positive-only thinking seems overly restrictive. What if a field of A is a positive signal that strictly means a smaller Y? One way is to use a second FCL to remove the effect of one sigmoid from another. A second, intuitive idea would be to do the following:

Y = tf.math.atanh(tf.reduce_mean(tf.tanh(A), axis=-1))

In one step, this regressor can consider both support for a larger and for a smaller value of Y.
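A numpy sketch of the tanh variant makes the two-sided voting visible (again with a small clip, since tanh only approaches -1 and 1):

```python
import numpy as np

def tanh_pool_regress(A):
    # tanh maps activations into (-1, 1): negative activations vote
    # Y down, positive ones vote Y up, in a single pooling step
    m = np.clip(np.mean(np.tanh(A), axis=-1), -1 + 1e-7, 1 - 1e-7)
    return np.arctanh(m)

A = np.array([[2.0, -2.0],   # equal support for larger and smaller: cancels
              [1.0,  1.0]])  # constant row: pools back to itself
Y = tanh_pool_regress(A)
# first row pools to ~0; second row pools back to ~1.0
```

The first row shows what the sigmoid version cannot do in one step: a unit actively pulling the estimate down.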

P.p.s. I want to also put in a plug for our wonderful democracy. The computation of the mean explicitly mixes the votes of each activation in the penultimate layer equally: each neuron gets an equal vote as to the result. Politics aside, and in addition to convex combinations, all other range-preserving combinations are fair game, e.g. the geometric mean, softmax-weighted averages, etc., depending on the relationship between X and Y and the network that produced A.

Smile, Pay and Hope

So, this fat old lady cuts in front of me at the Los Altos Whole Foods meat counter…

After a few seconds I say loudly to the clerk “I’m next in line, please do not let anyone else cut in front of me.”

“Oh sorry, were you in line? I didn’t see you!” the fat old lady said in pretend seriousness.

This was one very maddening experience. Just because there are three Chinese-looking people standing in line, and all Chinese people supposedly look alike, it doesn’t mean that three separate Chinese people are the same person! We’re all in line and we each get a turn. And yes, you are thinking: she could have legitimately thought we were all in the same family. With three separate shopping carts? What a very insulting scenario to have in her mind, a family of Chinese pushing three shopping carts! What are we, pigs that each eat a cart full of food?!

Some days, it’s kind of important to put on that smile and carry on like the world doesn’t have any gunk like what I just saw.

Whole Foods used to be so friendly… Perhaps this also speaks to Amazon’s ownership?

This past weekend we went to buy some chickens from a local poultry vendor. Wow, that was a very sad experience. I drove an hour with my daughter to her store. She tried to sell us a heat lamp and feed. When I looked down to check her prices against Amazon’s, I got an earful of anti-competition rant about how Amazon squeezed everyone out. “I’m not going to tell you anything about these chickens if you ask for Amazon’s prices.”

I started to talk capitalist sense into her, to explain that price competition is good and organized production is more prolific… But I thought better of it. I feared various kinds of retribution for any additional expression of disagreement. In this wilderness, maybe her crew could come chasing after me with their ATVs and pitchforks… or guns… who knows what could happen.

I put on a smile and paid full price.

It was only a smile and some extra money, paid to facilitate the happiness of the people we care about. Hopefully all that politics doesn’t get in the way of our common pursuit of individual happiness.