Paperless Bathroom

(Interview question)

A man enters a public bathroom that has no paper towels, whose door opens inward into the bathroom, and whose levered lock latches behind him. The lack of paper towels, along with other abundant evidence, suggests that most users do not wash their hands. What can he do to escape the room?

.

.

.

.

.

An idea below

.

.

.

.

.

One suggestion is for him to take a generous portion of the liquid soap typically found in such a bathroom into his less favored hand. He should then lather the soap generously onto the lever, so as to substantially coat every surface of the lock lever, or wherever else hand contact is required to open the door. He then opens the door, expecting the slippery soap to provide some disinfecting and deodorizing benefit against whatever matter coats the door's lock lever. Upon absconding, he is at liberty to wash his dirty, soapy hand in a more sanitary bathroom, or simply to rub and rinse the soap off at a nearby drinking fountain.

* Extra credit for addressing environmental issues. For example, a side effect of this exit is that he leaves the inside door's lock lever soapy. But in the grand scheme of things, this does no great injury to anyone. If a mindless person opens the door, he surely benefits from the soap-cleaned handle. That person can also just rinse the soap off at the water fountain.

* Extra credit for suggesting that humanity really should have invented a popularizable foot-operated or touch-free bathroom door by now.

* Extra credit for discussions of building codes, and for suggesting that modification of the law, in an orderly fashion of course, be a means of moving forward.

* Extra credit for discussions of starting a grassroots movement to perform this act, which will surely cause “the authorities” to take measures and buy paper towels instead of more soap. But maybe use recycled material for those.

* No points off for someone who wants to coast through after other people dirty themselves to exit.

Deep Universal Regressor Explored

Some months ago I wrote of a discovery regarding the training of deep regressors using SGD. I have since come to realize that exponentiating the raw parameters before using them is, sometimes, reasonable. It would appear that for approximate second-order optimizers, like the Adam I actually used instead of the SGD I thought I used, the exponential has the effect of modulating the variance aspect of the optimization. The signal-to-noise ratio of the gradient \partial A / \partial W, for identically distributed A in the two cases A = W and A = e^W, will vary but is largely dependent on A. If A is small, the SNR is stronger for A = e^W; if A is large, the SNR is stronger for A = W.
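
A minimal sketch of the rescaling at work, assuming PyTorch and made-up toy values: by the chain rule, \partial L / \partial W = (\partial L / \partial A) \cdot A when A = e^W, so the raw gradient (and its noise) gets multiplied by A itself.

    import torch

    def grads(a_values):
        # Case 1: A = W, the parameter is used directly.
        w_direct = a_values.clone().requires_grad_(True)
        ((w_direct - 1.0) ** 2).sum().backward()

        # Case 2: A = e^W, the parameter is exponentiated before use.
        w_exp = a_values.log().requires_grad_(True)
        ((w_exp.exp() - 1.0) ** 2).sum().backward()

        return w_direct.grad, w_exp.grad

    g_direct, g_exp = grads(torch.tensor([0.01, 1.0, 100.0]))
    print(g_direct)  # 2(A - 1): scale independent of the parameterization
    print(g_exp)     # 2(A - 1) * A: damped for small A, amplified for large A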

The combination of methods tends to follow the gradient more eagerly for neuronal activations that depend less on their input than for those that depend more. This appears to run counter to my originally documented intuition that the larger the dependence of a neuron's output on its input x, the larger the gradient step; the difference is mainly due to the use of an approximate second-order optimizer rather than plain SGD. My modification, as you would expect, allows the gradients to move more freely even when they are not normally distributed. The exponentiation of weights is akin to the log-transform we used in linear regression analysis: it lets us use linear methods that rely on normality of errors on systems with non-normally distributed, often heavy-tailed, errors.
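
As a concrete illustration of that analogy, here is a sketch, assuming NumPy and a made-up multiplicative-noise model: taking logs turns skewed multiplicative errors into additive, roughly normal ones that ordinary least squares can handle.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 10.0, size=1000)
    # Multiplicative, skewed noise on the raw scale.
    y = 2.0 * x ** 1.5 * np.exp(rng.normal(0.0, 0.3, size=x.size))

    # After the log-transform: log y = log 2 + 1.5 log x + eps, eps ~ normal.
    X = np.column_stack([np.ones_like(x), np.log(x)])
    coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    print(coef)  # approximately [log(2) ~ 0.69, 1.5]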

Therefore, although my success with this method stems more from the fortuitous conditioning of my problem than from the universality of the approach, it can make sense for a very large, albeit non-universal, set of problems. Likewise, any power parameterization A = W^p is equivalent to the corresponding inverse power transform of power 1/p.
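
To make that equivalence explicit, under the assumption W > 0: A = W^p gives W = A^{1/p}, and \partial A / \partial W = p W^{p-1} = p A^{(p-1)/p}, so the raw gradient is again rescaled by a power of A, with the identity parameterization recovered at p = 1.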

Stay tuned; more to come as I remove more bugs from the experiments.

Underflowing learning rates

Sometimes early stopping helps to regularize models; other times it seems to have numerical benefits. Learning rate decay schedules like linear, polynomial, or cosine allow the rate to sit very close to zero for a while. It seems that this can sometimes underflow the updates to only some parameters while other parameters' updates remain nonzero. The result is that an inaccurate gradient is applied and the model drops in performance. It is detectable, of course, but one can probably just snapshot models and choose an earlier one when performance starts going south.
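
A small sketch of what I mean, assuming NumPy and half-precision parameters with made-up magnitudes: at a tiny decayed learning rate, some per-parameter updates are absorbed by rounding while others still land, so the applied update no longer points along the gradient.

    import numpy as np

    lr = np.float16(1e-3)               # late in a long decay schedule
    params = np.float16([0.5, 0.5])
    grads = np.float16([1e-3, 1.0])     # very different per-parameter scales

    new_params = params - lr * grads
    print(new_params - params)          # [0.0, ~-0.001]: one update vanished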

Ah, okay: some time passes, and I found a slew of papers from several years ago pointing out that the problem is with the \epsilon used in Adam. It looks like an underflow because the \sqrt{v} term in the denominator fell below \epsilon. Kudos to the people who found this obvious problem.
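
For completeness, a sketch of the effect those papers describe, assuming NumPy and toy moment estimates: once \sqrt{v} falls below \epsilon, the denominator is pinned at \epsilon and the step collapses, which looks exactly like an underflow; lowering \epsilon restores the step.

    import numpy as np

    def adam_step(m_hat, v_hat, lr=1e-3, eps=1e-8):
        # Per-parameter Adam update magnitude: lr * m_hat / (sqrt(v_hat) + eps)
        return lr * m_hat / (np.sqrt(v_hat) + eps)

    print(adam_step(1e-2, 1e-4))               # ~1e-3: eps negligible, step ~ lr
    print(adam_step(1e-10, 1e-20))             # ~1e-5: eps dominates, step crushed
    print(adam_step(1e-10, 1e-20, eps=1e-12))  # ~1e-3: lowering eps restores it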

P.S. And extra kudos to the people who actually just “fixed it” for me by lowering the value inside their software package.

Harvard, please keep the B's flowing

Just heard about Federal District Court Judge Allison D. Burroughs's decision that Harvard's race-correlated admissions policy was not only legal, but right for all universities in America. Today is a day that will live in infamy! This is an absolute travesty of justice! I object to this ruling, very sadly.

I empathize with Chinese and other Asian applicants who think meritocratic achievement should be the only objective criterion for admission at a place of learning. But honestly, can you really afford to say no to the next Bill Gates or Mark Zuckerberg just to admit a Chinese kid who scored better?

If I were on Harvard's board, and thus a fiduciary, I would absolutely not mess with its admissions. I mean, it would be financially irresponsible to do so. And if my concern were the creation of wealth and happiness for humanity, then in retrospect, I would not hesitate to render the same judgement.

Of course there are a lot of other people who are white and attended college there. I knew a few, and they all seemed like the absolute best specimens of humanity. If you wanted to send a ship full of people to space and take the best of us with them, the Harvard freshman class would probably be the best bet.

They might not be very self-sufficient, even as a group, but that's probably not what Harvard is for. Harvard trains leaders; this is a declared and time-honored objective of the institution. Leaders do a very specific thing in human society, but their jobs are very limited. Not everyone can lead. So perhaps we need to refine what we said just now and state that the Harvard freshman class is the finest collection of future human leaders.

I wonder if anyone has ever thought through the theory of stable racial diversity in a democratic society as implemented in America.

If we were to engage in any kind of attenuation of racial representation, wouldn't it make sense to set two bounds on the most privileged admissions? An upper bound: no single race (say white, or Hispanic, ...) shall exceed half of the whole population. And a lower bound: every minority race shall have minimal representation (at least one male and one female from each major race).

This admittedly naive proposal seems to guarantee that one race cannot outvote all the other races, and the lower bound has the quality of Noah's Ark, trying to propagate all races.

That idea will probably, in turn, beg for a quantification of liberty and happiness, because we cannot measure or state the merit of a system without an ideal that we can all agree to aspire to. Lacking that, the dual argument may be that power is evil in the absolute, and since an absolute majority is absolute power in a democracy, the proportional upper bound above follows.