Starbucks Impossible Needs Tomato

Just tried the Impossible pork sausage sandwich at Starbucks. Unsurprisingly, it tastes impossibly like pork sausage. It is a little less greasy than one would expect. This is apparently a trend: Beyond Meat alt-meats tend to be extra tasty (in the sense of having MSG) but also very, very greasy. The Beyond meats are great except visually; consumers can be made nervous by the amount of dripping grease. The Impossible Burger does not drip grease but arguably could imitate fattier meat. To all these imitators:

There seem to be some physical properties of trans fats and polyunsaturated fats that have not yet been mimicked by the alt-meat tech we have in 2020. This may be one of the reasons why memories of good food can last in our minds for decades with great vividness and fondness. It is possibly not only because of the impression the food makes while you eat it, but also because of the residual fats stuck in the nose, on the tongue, between the teeth, and in the throat, as well as the smell that bubbles up as burps during digestion. This continuing but gradually diminishing stimulation of our senses reinforces our minds with tingles of the same great taste and smell for hours, possibly days, after consumption. From what we know of neural networks today, this type of continued, repeated, but diminishing stimulus helps us remember more deeply. (More specifically, I mean we mimic this experience with replay buffers and learning rate decay in deep learning.) During this phase of "learning" we experience the desire to eat the same food without having it. Thus, it seems that the sage summation:

Having is not as good as wanting.

is unfathomably wise for its time. My addition, like a robo-psychologist adding a formula modeling human cognitive psychology to the appendix of psychohistory, states further that:

Wanting is not as good as having and still wanting.

Since I asked her to remove the egg from the sandwich, it was too salty for me. I think it could only be improved if my sandwich had its egg back and tomato added. The tomato would add some vegetable to this otherwise completely tasty sausage sandwich.

Can Covid Spread by Dishwashers and Washing Machines?

My relative, who lives in Queens, tells me that a local doctor who serves the Chinese community for free told her that using single-use utensils is very effective against cross-infection at home. It seems that even though he is trained in the Western style (a dentist, in fact, a profession that probably deals with cleaning utensils very frequently), he suggests using single-use utensils. This experience seems to suggest that maybe Covid can spread via the dishwasher.

I just observed my dishwasher on its highest germ-fighting mode (a 2.5-hour wash including steam cleaning, electric water temperature boosting, the whole #! (it's pronounced "shebang")). The machine seems to begin with a cycle where it wets the dishes with cold water. It then drains the initial water, along with the food stuff it washed off, before heating up the water and mixing in soap. I'm going to stop there and not suggest that this virus can survive hot water and soap and that it flies into my nose on a soap bubble, but that is surprisingly easy to visualize after watching cartoons with my kids for the past few days…

The water is expelled through a hose connected to an InSinkErator (a garbage-disposal chewer of sorts that blends the food that falls into it before sending it down the drain). Along the way there is also a pipe connecting to an air gap. This air gap is not effective in preventing the passage of virus; in fact it's probably where some escape, right into the nose of the person washing dishes or others walking by. MAN!! Why hasn't the CDC suggested this? Do Chinese people have to do all the work, short of sacrificing baby animals to save their supreme master, to fix Covid?

Anyways, I'm obviously blogging mid-Covid infection. I either got it from taking care of my kids or at a Father's Day celebration in a restaurant: you know, the usual, no good deed goes unpunished.

This is one nasty bugger! The version I got has a feature wherein it makes you feel cold every few hours (or, conversely, gives you a fever every few hours; I'm not too sure which because my mind wasn't that clear then). Feeling the chill, you automatically pile on layers and layers of warm things. My wife is under 3 or 4 layers right now despite my 4-day-wiser advice. The heat makes you sweat, then you flip off the covers and experience cold, and sneezing and coughing ensue; since the virus usually attaches at the upper airway, this ensures its spread. This, as exciting as it may seem from an engineering design perspective, is quite horrifying.

But wait, there’s more!

The disease seems very adept at causing panic. With all these flipping symptoms, one probably rushes to the hospital, infecting everyone along the way. Or else one may demand support from family members, once again causing new exposures.

For me, I seem to have experienced every symptom of every cold I've ever had: fever, chills, sneezing, tearing, coughing (post-nasal drip and lower as well), muscle aches, swollen joints, dizziness, lethargy, sleeping for days unable to wake to alarms. And somehow this virus can activate these symptoms several times a day. For example, the customary chills/fever tend to subside after one bout (two at most, I guess) for a typical infection, but with Covid I had chills 3 to 4 times a day. And then the swollen joints… and some new issues, like coughs that cause spasms shooting pain into the upper-upper arm (medial and before the biceps), and a sense of smell much more sensitive than before. It's possible that all these symptoms were happening simultaneously in the first few days, and that the virus can affect the senses and highlight some feelings individually. An alternative explanation would be that the virus is probing my body for different vectors of attack that it knows about… or that my body knows about? Since there are suggestions in the media about it somehow triggering the immune system to do extra work, I wonder if those old feelings were produced by my own body; perhaps they've always been a result of my immune system's actions? Anyways, without more precise measurement and comparison, it is difficult to come to a conclusion I can believe in…

For me, I would definitely try to sleep less. Rest I would, but sleeping allows the body to do strange things without higher-brain moderation. You end up lying naked on sheets that are completely drenched, which is ideal neither for comfort nor for health. Seriously isolate. I slept in my car for several hours when I was feeling the worst, and I turn on my super-heavy-duty HEPA filtration system while my dishwasher is running.

Last of all, my hyperactive elder child and my, frankly, anemic and very social wife are both very thin. Their symptoms came far later and weaker than for myself and my younger child, who are not hyperactive, not anemic, and somewhat reclusive (relatively and euphemistically speaking 🤓). And we've had it long and hard. So, in retrospect, I would lose some weight. That seems to help.

That's it for now. Hope everyone gets better soon!

P.S. I would repeat the official instructions from my hospital as well: 48 hours of fever, blood SpO2 below 94%, or worsening of symptoms including trouble breathing necessitates a visit to the hospital ASAP. In all other cases, isolate and keep healthy. The mumbo-jumbo pseudosciences here are only for your enlightened entertainment.

P.P.S. This message has experienced at least 10 days of isolation since my infection… hopefully it doesn't bring new troubles to anybody.

All Black Lives Matter, not all White Lives Matter

Let me start by saying that all human biological lives matter. I believe that is the foremost of what our American founding documents aspire to provide for citizens: "Life, Liberty and the pursuit of Happiness." So when George Floyd was deprived of his biological life without due process, at the hands of police brutality, that was wrong and violates our most basic beliefs about how the country should be.

The problem arose when we, the colored minorities, became enamored with the lives of a small faction of white Americans. These are the millionaires and billionaires; these are your bosses and their bosses; these are the people on the other side of the glass ceiling and the bamboo fence. These people live largely on the labor of others, spending their own resources mostly on dominance over those who provide for them. These are people filled with hubris when considering other people's lives, who demand entitlement for what and who they are. These are people who have little to worry about almost all the time. These are the people who never "had to do anything" and "still did it," and who always had to make sure you know that "they didn't have to and did," thereby communicating that they are entitled to something from you.

Either by intentional self-promotion or accidental exhibition, we get the impression that some people are like that, that they enjoy being that way, and that they deserve it. Over centuries of watching these happy white people go about their happy white lives, perhaps we began to aspire to it.

This developed aspiration, for myself or "my people" to achieve "White Livelihood," is wrong. Those white lives that cruelly subjugate other lives as unequals act neither justly nor fairly. Your cultural identity should have enough pride and integrity to distinguish between the righteous and the evil. The "White Life" I attempted to describe above is not worthy of your noble identity. Be proud, be strong, and rise above that kind of "white people." Hopefully, I mean this not just metaphorically or spiritually. All of our souls, wrapped in white, brown, black, and yellow skins, can rise above them economically and live better lives in harmony.

Therefore, I wholeheartedly oppose government-sponsored race-dependent or race-correlated services. Our constitution and laws should repeatedly affirm that we are a nation that strives not to discriminate, for any reason! The government should reaffirm that the services it provides to its citizens do not discriminate. The government should ensure that essential infrastructure does not discriminate: utilities, financial tools, medical care, education, transportation, and food-related services should always be provided without discrimination, as a matter of law, on pain of termination.

My faith that this will work relies partially on the fact that there are a whole lot of white lives, past and living, that are really kind and that do believe in racial equality. I mean, the people who wrote this constitution were all white, right? These dreams of equality and freedom were dreamt by all but put into practice by those blessed souls wrapped in white, right? Let's have faith in our white fellow man; let us demand that they live up to their ancestors' wisdom; let us believe it! Let us really believe it. Let us really believe and really live it: equally and freely and happily!!

ACA-5, whether it passes or not, is a clear sign that minorities in America need real leadership. After reading about the issues around affirmative action for a few hours, I feel I have a basic understanding of it. I will not copy-paste Wikipedia's explanation of the two bills, but everyone who lives in California should learn about this. The last time we, colored minorities, had thoughtful leadership was when MLK Jr. was still around. Dr. King, according to present-day news media, advocated something called class-based affirmative action. Dr. King believed that all who suffered poverty should be helped. The fact that black Americans have been systematically suppressed economically for many generations means that blacks would be helped the most by class-based affirmative action, and Dr. King felt that that would be a just way to administer affirmative action.

It makes a lot of sense, because money is a very important source of opportunity. Without money, we most definitely do not have opportunity. And money is incontrovertibly measurable, so it is a very good way to choose who needs help.

A related idea is aptitude-based affirmative action. Is it possible to choose whom to help based on actual measured aptitude? What a poor student needs, aside from money for food, with respect to what schools can do, is more academic help. If an Asian kid has trouble passing PE, he should be given nutritional supplements and additional coaching attention to improve his physique. If a Latino kid needs a bit of extra tutoring in coding, that's great, give it to him. And for God's sakes, please educate new immigrants on language, etiquette, financial management, basic health and safety, and driving. I know this is a little bit communistic: instead of solving the problem with money, I want to provide for the needs of the poorly established. But hear me out.

Would it not make sense to give the disadvantaged what they really need? Give them the skills that our society values. And by this I mean that students who do not qualify for college really should not go to college. They should be given what they need: if community college is what they need, then let's fund community colleges. If MOOCs are what really work, let's fund more MOOCs. And because our society would then have more support for minorities, colleges could more freely be blind to protected attributes.

I do not want our great universities to lower their academic standards. We will be left helpless if we do not have a truly competitive higher education system on the world stage. Let’s try to build an honest and skilled workforce. Let’s improve diversity starting earlier in the education system so that we help the disadvantaged most effectively.

Method of Coffee Aroma and Flavor Extraction

I’ve discovered an interesting way to brew coffee and here it is:

  1. Chill water to just above freezing, fill half a container.
  2. Grind good frozen coffee beans medium-coarse.
  3. Soak the coffee grounds in the cold water, stir until wet.
  4. Pour hot water (212ish Fahrenheit) into the container right on top of the floating wet grounds.
  5. Stir slightly then seal to reduce exposure to air to a minimum.
  6. Place in fridge until time of consumption.

In step 5, be mindful that cooling will tend to create a vacuum in a sealed container. There is no need to resist the temptation to shake the sealed container vigorously and repeatedly. As long as the amount of air being mixed with the soaking grounds remains small and unrefreshed, the brew will succeed. After shaking, the preparer should ensure that all grounds are soaked in the liquid. In step 6, we recommend that you at least refrigerate until the grounds settle to the bottom, so that the pour does not require filtering. For all the beans we have used, the extraction produces a floating oil layer after cooling and settling. The aroma is quite irresistible even when the fluids are cold.

The hot water extracts aroma and flavor, but it cools quickly, so the temperature does not facilitate additional chemical reactions, either in the suspension or with the air. The result is some very aromatic and very flavorful cold coffee. If you prefer it hot, you can heat it up or add hot water just prior to consumption. Also, the proportion of beans to water and the container size can be adapted to your liking and facilities. To start, use standard recommendations for brewing coffee, which as of now, according to the top Google search result for someone in Northern California, is two tablespoons of beans for every 6 oz. of water between 195 and 205 degrees Fahrenheit.

The energy involved in a change of water temperature is mass times specific heat times the change in temperature in Celsius. Say your cool water is at a normal refrigerator temperature of 40 Fahrenheit (4C) and the boiling water is at 212F (100C); since the masses and specific heats are the same, the final temperature will be 52 Celsius (125.6F). Depending on how close you bring the water to boiling, the room humidity, and the air pressure, this may come out much lower or higher than the temperature the ncausa recommends. In fact, steam right at the boiling point carries a latent heat of 2257 J/g, so if any steam condenses into the mix it can bring the rest of the water up to (but not past) the boiling temperature. In any case, due to the enlarged range of temperatures and energy levels our frozen beans experience when brewed with the present method, they undergo very complete extraction. Another reason we find the present method effective is the waiting period in the refrigerator: this extended soak is similar to cold-brew preparation methods.
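Assuming equal masses of water and ignoring heat lost to the container, the mixing arithmetic above can be sketched as follows (a back-of-envelope check, not a calibration):

```python
def mix_temperature(m_cold, t_cold, m_hot, t_hot):
    """Final temperature (Celsius) when mixing water with water.

    Energy balance: m_cold * c * (T - t_cold) == m_hot * c * (t_hot - T);
    the specific heat c is the same on both sides and cancels out.
    """
    return (m_cold * t_cold + m_hot * t_hot) / (m_cold + m_hot)

def c_to_f(t_celsius):
    """Convert Celsius to Fahrenheit."""
    return t_celsius * 9 / 5 + 32

# Equal parts of 4 C fridge water and 100 C boiling water:
t_final = mix_temperature(1, 4, 1, 100)
print(t_final, c_to_f(t_final))  # 52.0 C, i.e. 125.6 F
```

Tilting the ratio of cold to hot water is the easiest way to tune the peak brew temperature.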

If you are economically challenged due to the Covid-19 crisis, one additional step is to repeat steps 2-6 without discarding the grounds after drinking the first batch. Ideally, the old grounds should still be immersed in unconsumed coffee water. Add grounds and cold water as in the earlier steps, then continue normally. We've found no discernible loss of quality in the extraction. Due to the cooled temperature, old grounds continue to contribute additional caffeine and taste while new grounds provide more lively aromas. Fewer grounds can be used in subsequent brews than in the very first (or a fresh) batch when recycling cool old grounds. Caveat emptor: our container size has limited our experimentation in this respect to at most ten (10) days.

Note that the soaking of the beans is very similar to Mate preparation, in which we put the Mate into dummy water. The wetting and dummying-down of the Mate helps prevent the destruction of flavor, vitamins, and other valuables. In the case of coffee, the cooling also prevents vaporization of the oil, which holds much of said valuables. Presoaking with cold water, which has a higher specific heat than the coffee grounds themselves, also means the beans heat up much more slowly when the extra-hot water is applied. The water serves as a temperature buffer and slows the temperature increase to ensure a smooth extraction; this is a common reason cited for the use of dummy water in Mate preparation despite its lack of taste.

Other reasons for my attempt at this most likely come from watching a lot of Gordon Ramsay shows where he very passionately teaches other people to cook. It may also be from watching Bobby Flay or another one of the American Iron Chefs on the show inspired by the Japanese show Iron Chef. The relevant culinary technique is the act of soaking seafood in an ice bath immediately after cooking it. Or it might be from watching Phelps swim and then dunk his whole body in an ice bath. My first exposure to how addictive Mate can be was when I saw it on a show called Mozart in the Jungle on Amazon Prime Video. Last, but most certainly not least, some Chinese tea drinkers have a habit of washing the tea leaves before steeping; this achieves a temperature-buffering effect similar to the one used in the present method. Tea flavors, Chinese green tea in particular, are more subtle than most coffee, but a layer of tea leaves cannot hold as much water as the same thickness of coffee grounds while being less robust to high temperatures. Popular methods of preparation such as matcha or fermentation all but destroy said attributes of properly brewed tea. A challenge for my readers will be to update my method for use with Chinese green tea, or to create anew a better tea-steeping method.

This is a notable day: India just lost 20 soldiers in hand-to-hand combat with Chinese troops; the second wave of COVID-19 is raging in America; all H1B visas just got canceled; a dust storm from the Sahara is about to reach America; America and China are exchanging 4 flights each week!! (actually 8: 4 from Chinese airlines and 4 from American airlines are permitted by tit-for-tat diplomacy); the Black Lives Matter movement continues its struggle for racial equality in many, many protests; and the stock market has reached a record high again. I hope when you read this you will have a much better outlook for your tomorrow than I have today. May this improved coffee-extraction method brighten your day and bring enhanced vibrancy to your life.

The Least Valuable Pursuit

I've been watching a lot of sci-fi recently, mixed in with the occasional deep learning course. The Berkeley Deep Reinforcement Learning class seems exceedingly interesting.

But one concern arose in my mind while perusing the learning-as-entertainment Internet. There are a lot of smart kids learning and working on increasingly sophisticated reinforcement learning algorithms. A few years or decades down the road, probably everyone will be working on one of a few large Artificial Intelligences (and by Artificial Intelligence here I mean an identifiable collection of machine-learned knowledge with associated algorithms, software, and hardware systems for interacting with the human world and the physical world). It'll be like the Google Search Engine: many, many very, very sophisticated moving parts within, while outside, our world will provide a whole human sub-culture and sub-economy to sustain it.

The people who work on these systems become less and less valuable. The whole gist of Deep Reinforcement Learning is that, in a computer-simulatable world, an automated learning algorithm can figure out how to act very gainfully very quickly. The speed at which this learning can happen is suggested by the DeepMind paper on an AI called AlphaZero: it learned, in a few days, to play human games better than humans have learned to play them over thousands of years.

Then an inevitable self-recursive thought arises: what is more simulatable than the machine learning process itself? (Think AutoML-Zero.) The whole point of machine learning, the act of AI, is to encode the world in a way our computational models can accept; then the rest is to improve metrics which we have dutifully taught the computers. There is nothing more simulatable than the process of building an AI system. Ergo, it will be optimized away, and everyone working in AI will be the first to be displaced by AI.

That's right: if everyone follows through rationally, the order of chaos appears to be that first AI scientists and engineers are automated away, then truck drivers and software engineers.

The pursuit of AI work has to be the least valuable in the long run. The effort to replace one's own intelligence by advancing an external intelligence is ultimately self-defeating.

If not everything plays out rationally, we arrive at chaos a little later, but on the way there we may have that social problem where chunks of the job market are systematically replaced by AI companies. The trouble being that those chunks of the job market are people with voting power and less brain plasticity than a SARSA model… democracy may stop AI.

Aptly, my brain, having been immersed in related matters for a while, just came up with this paper title:

Learning to Harmonize Human and Artificial Intelligences

But I would be self-defeating if I were to work on publishing that, wouldn't I? Malevolent and optimistic aspirations may not be the right lead in all circumstances.

Where is the equilibrium in this conundrum? Is there a mid-way? Is there a path wide enough for both to move forward? Is there a root Intelligence in our world wherefrom all of ours sprang? Has that root Intelligence the wisdom to unite us all? Should we have asked "from whom?"


What of Death in the age of AI

I had a funny conversation with an old friend today. He mentioned that some Netflix show had within its plot a near future where an antagonist suffered a death in the family. The support offered to that future's human beings is an AI built upon the digital and social data recorded during the lifetime of the deceased. Said AI can talk with the living, mimicking the deceased.

Oh, the lovely thoughts that come to mind when death encroaches onto thy neurons. There is a slight chance that a digital recreation is better than a person's own recreations (by way of imagination). The main reasons being that it would have a better and more independent PRNG than the human brain, and that it would have more data than any individual is ever exposed to.

I would definitely spring for the Linear Algebra package, for I had just spent half an hour complaining to my father that my poor Linear Algebra skills are in the way of my advancement. I definitely want my avatar, the Huan Chang Memorial Chatbot, to know all of Linear Algebra, and I want my kids and my dad, in person or as their own AIs, to see me with Linear Algebra Kungfu!

Next on the list would definitely be a spelling and grammar checker. 'Nuff said. Maybe a room simulator that gets messier and messier, just for those people in my life who hate messes.

Given how much time I spend online, I definitely want my bot to have redundant connections and lifetime subscriptions to things like arXiv, Wikipedia, Wolfram Alpha, Weather Underground, …, and probably CBS All Access for future Star Trek shows. Maybe a low-latency feed to Wall Street so I can watch it crash repeatedly.

Another thing I may want is for my AI bot to run on a cloud having only servers physically located in my home towns. This is kind of a digital-age version of having your ashes brought home. I have made a few (and may make more) places home in my lifetime, so my gaibot will have plenty of physical redundancy on different continents with different geopolitical climates.

With all this effort, I should also charge a fee for conversing with the Huan Chang Memorial Chatbot. Let's set the family-and-friends price at $0.02 per exchange.

Alright! I have got to get cracking on my social media and digital records. Thinking about these matters brings the issue of digital integrity to the forefront of my mind: more important than ever, I will demand that my gaibot have digital integrity!

Invest While you Spend: a Tale of Freedom Joe Forever

There are a lot of fintech companies doing hyper-personal financial management: certainly Acorns, Stash, and others like Wealthfront, Betterment, etc. The ideas implemented are simple but amazingly cool.

The trick being used is that these companies either sit in the middle of all of your purchases via a debit card, or otherwise gain access to your spending through Yodlee-style account aggregation. Then, at the time of each transaction, they can trigger an investment. Implementations differ between vendors, and I haven't found one that I really like, but essentially there are three ideas behind this:

  1. The investment is made in the company you spent money at. This makes sense because, in making a purchase, you are in some sense increasing the value of the company, and buying shares is just a way to recuperate the lost future investment gain on the spent money. "Invest in what you use/buy/love."
  2. The investment is made as a percentage of your spending. If we assume that you use toilet paper and drink coffee today, it is reasonable to assume you will do the same in 10, 20, 30, maybe even 50 years. Given any rate of return, you can calculate how much money you have to put aside to eventually be able to make that same purchase at the same frequency without adding more money.
  3. A completely automated process executes the investments according to a modern, data-driven, hyper-personal, scientific, effective, and safe design.

So let's say I drink a $5 Starbucks coffee daily. My financial advisor guarantees with his life that he can provide a 10% inflation-adjusted annual return on any and all investments. The setup is then as follows: the fintech company will withdraw an additional 15% of each purchase from my balance and immediately invest the money. So, that's $5 to Starbucks and $0.75 to investments, for a total of $5.75 out of my bank account. That's pretax; you could also include tax in the calculations and pay future sales and income taxes as well, but the present calculation does not factor in tax. If we keep doing this daily, then in 20 years we will have accumulated enough money to drink a coffee every day forever. NB: the design is for the daily per-purchase investment into the coffee fund to accumulate to a level at which, in retirement, you no longer need to grow the coffee fund; its investment earnings will pay for all future coffee drinking.
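A quick sketch to sanity-check the 20-year claim. Everything here is my own illustrative assumption: the 10% annual return is compounded daily, contributions are $0.75 per $5 cup, and taxes are ignored. Under these assumptions a flat 15% add-on lands just short of a perpetual daily coffee; the break-even savings fraction comes out near 15.7%:

```python
DAILY_RATE = 0.10 / 365   # assumed: 10% annual return, compounded daily
DAYS = 20 * 365           # 20 years of daily purchases
COFFEE = 5.00             # the daily cup

def fund_after(daily_saving, rate=DAILY_RATE, days=DAYS):
    """Future value of a fixed daily contribution (ordinary annuity)."""
    return daily_saving * ((1 + rate) ** days - 1) / rate

# A perpetuity paying COFFEE per day forever needs principal COFFEE / rate:
needed = COFFEE / DAILY_RATE          # exactly $18,250 at this rate
fund_15 = fund_after(0.15 * COFFEE)   # roughly $17,500: just short of needed
break_even = 1 / ((1 + DAILY_RATE) ** DAYS - 1)  # savings fraction ~ 0.157
```

With annual instead of daily compounding the gap is a bit wider, so treat "15% for 20 years" as the right order of magnitude rather than an exact guarantee.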

[Chart: How to Get Free Coffee]
You can plug your tolerance for a saving rate and what you feel is a believable sustained inflation-adjusted return on investments into the chart above to see how long you'll have to save before retiring to the same lifestyle you have today. If you let your imagination run wild a little and believe in a stable cumulative return quoted as an inflation-adjusted annual percentage yield, then saving for retirement actually doesn't seem that bad! 20 years is how long I've worked already! If I had thought of this when I was a fresh college grad, I'd be sipping free coffee by now. But it's not too late: I can still work another 20 years and get my free coffee thereafter.
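If the chart isn't handy, the waiting time follows in closed form from setting the annuity equal to the perpetuity principal: s((1+r)^n - 1)/r = 1/r, hence (1+r)^n = 1 + 1/s and n = ln(1 + 1/s)/ln(1 + r). A small sketch, with my own illustrative numbers (not financial advice):

```python
import math

def days_to_free_coffee(saving_fraction, annual_return, periods_per_year=365):
    """Days of saving saving_fraction * price per purchase until the fund's
    earnings cover the purchase forever: solve (1+r)^n == 1 + 1/s for n."""
    r = annual_return / periods_per_year
    return math.log(1 + 1 / saving_fraction) / math.log(1 + r)

# Saving 15% per cup at an assumed 10% inflation-adjusted annual return:
years = days_to_free_coffee(0.15, 0.10) / 365   # about 20.4 years
```

Raising the savings fraction shortens the wait considerably: at s = 0.5 the same formula gives roughly 11 years.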

There are also other issues, like the dollar-cost-averaging effect and the need to rebalance the investments. There also seem to be additional games you can play to increase investment risk as you get older, because the fund will not need to sustain your caffeine needs for as long as forever. This is the opposite of all the investment advice you receive today: normally you are asked to reduce the risk of your portfolio as you draw nearer to death. But in reality, if you are sure you don't have to pay for free coffee forever, the equation changes, and suddenly you can take on much more risk with the extra coffee money.

Another concern is that your tastes or lifestyle expectations may change; one should analyze the directive to prepare for the same cup of joe for the rest of your life. For example, a woman may not need to buy as many feminine hygiene products after menopause. The proposal uses your every spending habit today as a surrogate for measuring your future lifestyle, but this may very well not be what you look forward to in retirement. These are more advanced financial, psychological, physiological, and philosophical topics reserved for homework or future blogging.

Compute in deltas

So, for some years I've been stuck, unable to figure out delta computing. I use the symbol \Game because it looks similar, on my phone, to the symbol I want. But here, I will use \Delta in place of \Game.

The small delta gives the difference between two programs: \delta(p_1,p_2) is a program that, when applied to the program p_1, produces another program that takes any input x of p_1 and produces r_1=\delta(p_1,p_2)(p_1)(x), a result equivalent to the second program run on the same input and environment, r_2=p_2(x), such that r_1\equiv r_2 for some useful definition of \equiv. This is the program difference (pd) between two programs.

The large delta then gives us the program differential operator (PD). \Delta(p,a) produces a function that yields the change in p when a pd of its argument a, \delta(a_1,a_2), is offered. That is: \Delta(p,a)(\delta(a_1,a_2))\equiv \delta(p(a=a_1),p(a=a_2)), where the RHS partial evaluations are performed by partially specifying just the parameter a and leaving the rest free.
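To make the two operators concrete, here is a toy encoding of my own, not a full formalization: programs are plain Python functions, a pd is a patch that substitutes the target program wholesale, and \equiv is checked extensionally on a finite sample of inputs.

```python
from functools import partial

def delta(p1, p2):
    """Program difference (pd): a patch that, applied to a program
    equivalent to p1, yields a program equivalent to p2.  In this toy
    model the patch simply substitutes p2 wholesale."""
    return lambda p: p2

def Delta(p):
    """Program differential operator (PD): maps a change of the first
    argument (a1 -> a2) to the pd of the partial evaluations
    p(a=a1) and p(a=a2)."""
    return lambda a1, a2: delta(partial(p, a1), partial(p, a2))

def equiv(q1, q2, samples):
    """A useful (finite, extensional) definition of equivalence."""
    return all(q1(x) == q2(x) for x in samples)

def p(a, x):          # the program under study: p(a, x) = a * x
    return a * x

pd = Delta(p)(2, 3)            # pd between p(a=2) and p(a=3)
patched = pd(partial(p, 2))    # apply the pd to p(a=2)
assert equiv(patched, partial(p, 3), range(10))
```

A richer model would represent the patch as an edit on program text or syntax trees rather than outright substitution, which is where the reversibility discussion below becomes interesting.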

An understanding of a pair of pd operators (\delta_1, \delta_2) allows for reversible change if \delta_1\circ \delta_2 is an appropriately typed identity function. A single \delta is an irreversible change. For example, reading from a true random number generator would be an irreversible program. Inside the realm of a computer, simply reading an input from outside the computer is an irreversible program within the computer, because it cannot effect the unpressing of the key. Even though, outside the computer, we may know that the state preceding the "no" was the question "are you sure?" from "rm -rf /", the computer cannot know that for sure with its own faculties. That is to say, you cannot either, even if you are inside the computer and have access to just the computer's memories and interfaces. Invertible pairs are intuitive, such as (\sin(x),\sin^{-1}(x)).

Our accessible realm of compute in an execution is therefore an accumulation of: an initial state, irreversibly computed outputs, and the compute graph of reversible deltas. By modeling information this way, we can explicitly consider more general changes of state, as well as give rise to a framework for understanding, interacting with, and developing software programs more effectively.

p.s. Btw, these ideas can be equally well expanded into operational and denotational semantics, each with their own idiosyncrasies.

p.p.s. Can we circumvent first-order logic by currying functions instead of using \forall? Elsewhere I have worked out the reparameterization to achieve \forall_{a_1,a_2}\; \Delta(p,a)(\delta(a_1,a_2))\equiv \delta(p(a=a_1),p(a=a_2)). One of several examples of this kind of reparameterization is \Delta(p,\delta_a) \equiv \delta(p), where each of the LHS and RHS now takes two parameters typed for a and yields a function that computes the pd of p when its parameter a changes from the first to the second. To achieve the first-order approximation effect of derivation in ordinary calculus on the reals, all we need is to specify a loose \equiv^1, the first-order equivalence, and so on. There are also sub-first-order equivalences, such as having at least the same number of characters in the program code, being written in the same language, etc. First-order equivalence should minimally require that the programs have sufficiently compatibly typed inputs and outputs. Subsequent higher-order equivalences include progressively more identical runtime behaviors, or progressively more matching meaning. Here, again, is another example of why the presently described paradigm is beneficial: if a program is stochastic, how do we determine whether another program is equivalent to it, other than that the code is identical? By isolating the irreversible compute of receiving (identical) external entropy, the remaining program can be evaluated in the f^{th} order using conventional \equiv^f. Still higher-order equivalences may require that the programs have the same runtime/memory/resource complexities. Which, btw, inspires an n^{th} ordering \geq^n that requires all equivalences \forall k<n\; \equiv^k and then, at the n^{th} level, requires the LHS to be better than the RHS, such as having lower runtime complexity, etc. The details of all these developments are documented more fully elsewhere.
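A minimal sketch of two rungs of this hierarchy (the names equiv1 and equiv_behavior are my own illustrative stand-ins, not from the source; the first order checks only arity compatibility, the higher order probes behavior on sampled inputs):

```python
import inspect

def equiv1(p, q):
    """First-order equivalence (sketch): compatible number of inputs."""
    return len(inspect.signature(p).parameters) == \
           len(inspect.signature(q).parameters)

def equiv_behavior(p, q, probes):
    """A higher-order equivalence: first order, plus identical behavior
    on a set of probe inputs."""
    return equiv1(p, q) and all(p(x) == q(x) for x in probes)

square_a = lambda x: x * x
square_b = lambda x: x ** 2
halve    = lambda x: x / 2

assert equiv1(square_a, square_b)
assert equiv_behavior(square_a, square_b, range(-5, 6))
# same arity, different behavior: first order holds, higher order fails
assert equiv1(square_a, halve) and not equiv_behavior(square_a, halve, [2])
```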

p.p.p.s. Where is this headed? Well, aside from modeling the universe, one possibility is to achieve truly symbolic differentiation and do back-prop on program code. One can ask for the PD of a program’s unit test wrt the program. We then pass in the pair (false, true) to arrive at a program (code) mutator that repairs the input program, producing a program that causes the unit test to pass, after which we use the higher orderings to search for a better program.
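As a toy, the “pass (false, true) into the PD wrt the test” idea degenerates to search over candidate mutations. Everything below is a hypothetical sketch: the mutator is just enumeration over a hand-written candidate pool, standing in for a real symbolic PD.

```python
def repair(candidates, unit_test):
    """Toy program mutator: find any candidate program that flips the
    unit test from False to True (brute-force stand-in for a real PD)."""
    for candidate in candidates:
        if unit_test(candidate):
            return candidate
    return None

buggy = lambda x: x + 1            # fails the test below
unit_test = lambda p: p(3) == 6    # the spec we "back-prop" against
candidates = [lambda x: x - 1, lambda x: 2 * x, lambda x: x + 2]

fixed = repair(candidates, unit_test)
assert not unit_test(buggy) and unit_test(fixed)
assert fixed(3) == 6
```

A higher ordering \geq^n would then rank the passing candidates, e.g. preferring the one with lower runtime complexity.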

One can dream…

Deep Universal Regressors Elsewhere

I just chanced upon a fascinating article called Neural Additive Models: Interpretable Machine Learning with Neural Nets (FAMX.3 for me due to my interest, but others may feel this draft is a 2 or 3 due to brevity). The proposed ExU is a layer that has a foreactivated parameter (see my own blog discussions on the need for nonlinearity over raw parameters here, here, and here, etc.):

h(x)=f(e^w \cdot (x-b))

I’m very excited that people like Geoffrey Hinton and Richard Caruana are thinking and writing about the stuff that I’m thinking and writing about, at about the same time, and arriving at similar solutions. In this case they performed foreactivation on a weight matrix. This paper is, of course, a collection of a massive amount of experimentation, far more than I had the resources to accomplish. These smart folks also solved the problem of sign that I had struggled with a bit: the sign is washed out by having multiple layers (64 in their successful examples).

Oh! That was obvious, now that they say it. The tanh-autoactivated sign I wanted to multiply onto the front of the e^w was not necessary after all. As long as there is at least one “linear” layer at the output of the subnetwork that does not use the ExU or another sign-restricting foreactivation on the parameters, the output can have full range in \mathbb{R} irrespective of input, and therefore can be a universal regressor.
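A minimal NumPy sketch of this arrangement: a one-feature subnetwork with an ExU hidden layer followed by a plain linear readout, using untrained random parameters and assuming f is ReLU as in the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)

def exu(x, w, b):
    """ExU unit: h(x) = f(e^w * (x - b)), with f = ReLU. Exponentiating the
    raw parameter w 'foreactivates' it, so the effective weight e^w > 0."""
    return np.maximum(0.0, np.exp(w) * (x - b))

def feature_net(x, w, b, w_out):
    """ExU hidden layer, then an unconstrained linear layer: the readout's
    free-sign weights let the output span all of R despite e^w > 0."""
    h = exu(x, w, b)     # (n, 1) broadcast against (1, hidden) -> (n, hidden)
    return h @ w_out     # (n, 1)

hidden = 16
w = rng.normal(scale=0.5, size=(1, hidden))
b = rng.normal(scale=0.5, size=(1, hidden))
w_out = rng.normal(scale=0.5, size=(hidden, 1))  # signs unrestricted

x = rng.normal(size=(8, 1))
y = feature_net(x, w, b, w_out)
assert y.shape == (8, 1)
```

The point of the sketch is the division of labor: every ExU weight is forced positive by the exp foreactivation, and the single ordinary linear layer at the end recovers the sign.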

My only concern is the effort required to arrive at their awesome results: no fewer than 4 hyperparameters had to be tuned using Bayesian optimization. I think my own laziness demands that there be a way to tune a model using only learning-rate warmup and decay; the dynamical nature of a model and its data should be taken care of entirely by the model and an automated training process. The foreactivation is one such mechanism.

Of course, I only have access to the initial draft posted on 2020-04-29. I am very hopeful that in subsequent revisions and sequels this highly flexible and highly interpretable modeling technique can be made easier to use.