The Right-Sizing of Humanity

I was playing with a deep neural network tutorial recently. The fun of deep learning is starting to wear off after three years or so of continued exposure. Adjusting learning rates, batch sizes, filters, penalties, and regularizations. Trying out algorithms that promise to perform without undue experimentation with these hyperparameters… It used to be so fun, so exciting, to make even the smallest improvements. But today it's quite tedious and quite boring.

A quick meta-thought brings to mind a training procedure: every time I want to change the training, whether by interrupting SGD mid-stride or by tinkering with a hyperparameter, I could write the change as Python code. And then ask a model to learn to write these changes for me, based on my supervision.

The outcome, in the limit, is that the model will be able to autonomously make the changes that I would want to make, as if I were watching it. It would just do what I want to do, with the same patience as me, and maybe even the same typo rate. Note that it doesn't optimize the final metric; it just mimics what I would do.
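A minimal sketch of what that supervision could look like, assuming we log each manual intervention as a (training-state, action) pair and imitate the nearest logged state. Everything here is hypothetical: the state features, the action names, and the log itself are illustrative, and a toy nearest-neighbor lookup stands in for a learned model.

```python
import math

# Hypothetical log of my manual interventions: each entry pairs the
# training state I was looking at (epoch, recent loss slope) with the
# action I took ("halve_lr", "double_batch", or "none").
intervention_log = [
    ((5, -0.40), "none"),
    ((20, -0.02), "halve_lr"),
    ((35, 0.01), "halve_lr"),
    ((50, -0.30), "none"),
    ((60, 0.05), "double_batch"),
]

def imitate(state, log=intervention_log):
    """1-nearest-neighbor imitation: do whatever I did in the most
    similar logged state. It mimics me; it optimizes no metric."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(log, key=lambda entry: dist(entry[0], state))[1]
```

In this toy version the epoch dominates the distance, which is fine for a sketch; a real version would learn from many such logs and emit the code for the change rather than a canned action label.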

At the moment, I feel that experience would be gratifying. There are various interpretations of that event.

We can say that the program has removed human desire. Not in the sense that we cannot or need not desire, but that desires which are quickly satisfied are not really desires.

We can also say that humans will have achieved idleness. We do nothing, and yet everything is done. We have achieved nil-activity; we accomplish all through inaction.

We can also say that the work to achieve this model-tuning automation is a human-minimizing activity. If it optimizes human resource consumption, the model will remove the need for humans. And if the model is still imperfect, it will tend to minimize our usefulness. This is by far the most horrifying interpretation of the event. Working on that model literally is an effort to minimize human involvement. (That has the MDL for my desires at the moment.)

So here we have arrived at one way to inspect the “future of jobs” situation.

Since we are still in control, we actually have the ability to decide where we want to go with this. In the most pathetic case, we will institute Affirmative Action to affirm Humanity: the Law shall favor by race, the Human race, requiring all AI to have 1bps of entropy added to their actions to preserve the need for humans. In a less pathetic approach, we are squeezed out of the skills-for-hire arena, but humans still engage each other in socializing and networking and the things that only humans do. That's still quite, quite pathetic, though, IMMHO today.

Can we ponder the question: “what is the right size of humanity?” What is the amount of involvement we really want? Right now I hate tuning parameters, but I can certainly see a situation, say, a robot lying dying in the middle of the road, its loving and beloved human child crying.

I can say, “Move aside and let me through, I can help! The 2019-model-year bots use algorithms that have been in the public domain since 2018 (and they don't work! cd /dev; they are bastards of random with zero, properly homed in null). I have SGD training! I can save that bot!!” The child's watery eyes are now filled with hope, meeting my fiercely determined and confident eyes at the midpoint of the edge linking us two humans. (Think Goldblum cross Eastwood cross Moore.)

“But a bot could do that better than you!” my annoyingly observant reader will quickly point out, and move on to another, more interesting blog.

But for me, there seems to be something I care about in that moment. There's something I care for in that moment. And one can easily achieve consensus that there is something humanity cares for in that moment. Is it hope? Is it kindness? Is it sympathy? Is it the desire to decrease perceived entropy? Is it the interdependence of humans that is really of note? Is it my usefulness that I really care about? To the bot, or to the child? Is respect all I want, even when it's only payable in arrears? Is it…??? What is it? Can we quantify it? Or do its identity and essence rely on its lack of computerized representation?

Perhaps an AI can be made to tell us this idea that it cannot describe within its domain? An AI to give humanity its best meaning and purpose. Any progress in characterizing it seems like a truly imaginative and inventive step forward, be it taken by humans or by computers.

To be continued…

P.s. I realize it was more than 25 years ago that I first wrote about this matter. I dreamt in high school of making AI. I wrote, for my 11th-grade Advanced Social Science class, to take the position that a symbiosis is not only acceptable but also a desirable and inevitable outcome. We should co-evolve, I wrote. Somehow, that position still echoes in the FAM Blog. It would be fascinating CS work to integrate with philosophy, perhaps named Computational Philosophy, a field of philosophical endeavor: Human kind and Computer kind, together, hands on keyboards…

But that's an interesting question in itself. Because we, as a kind, do ingest a lot of very intimate things from our surroundings: water, air, viral DNA/RNA, etc. Things like antibiotics we take as part of humanity; because enough people use them, on average there's some non-zero antibiotic in everyone. Then there are vaccines. Significant resources have been devoted to the continued injection of antibiotics and vaccines into people, so much so that they are us. Computers are us, part of us. They have physical presence as well as biological and social functionality, as part of us personally and as part of our society.

While there are some who object to the mass enforcement of mandatory vaccination, their effect is limited. One would imagine that the people yammering against computers becoming an irreplaceable part of our lives are the vaccine opposers. They are the people who ask questions like “who will buy drugs if a vaccine prevents the disease?” and “are you still you if a computer does the shopping for you?”

Don’t care, and yes.

Refactor Autoactivations

I've been thinking about autoactivations recently. This is one of those great innovations that have stood the test of time; it still works after a lot of debugging and exposure to new data and models.

I find that I have been referring to autoactivations as pre-activations, because they occur in deep neural nets before the parameters are actually mixed with the input data (or the previous layer's activations). But look at the two expressions:

  1. To pre-activate a parameter means to apply the nonlinearity before it is used, e.g. preheating the oven. The suffix is a verb, and it happens before something else.
  2. But a pre-activation is actually an adjective meaning before any activations, e.g. pre-trial motions. Its suffix is a noun, and it becomes the subject that is preceded.

And actually, a similar problem applies to the ante- prefix. So, to avoid confusion, we should probably refer to autoactivations as foreactivations, and say we foreactivate the layer. This prefix also means before, and it works both for nouns: foresight, foreknowledge, forethought, forerunner, foreword, foreman; and for verbs: forecast, foreshadow, foredone, foreshorten, forewarn, forestall, foredoom. In each case the suffix is the prior thing, the thing that comes before, never the thing preceded by another.
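The post doesn't pin down autoactivations formally; assuming the term refers to applying the nonlinearity before the parameters touch the data (the pre-activation ordering familiar from He et al.'s pre-activation ResNets), the two orderings can be sketched as follows. The function names and shapes are illustrative, not from the original.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # a small batch of inputs
W = rng.standard_normal((3, 3))   # layer weights
b = np.zeros(3)                   # layer bias

def post_activation(x, W, b, act=np.tanh):
    # Conventional ordering: mix the parameters with the input first,
    # then apply the nonlinearity.
    return act(x @ W + b)

def foreactivation(x, W, b, act=np.tanh):
    # "Foreactivation" ordering: apply the nonlinearity to the incoming
    # signal first, then mix with the parameters.
    return act(x) @ W + b
```

The two produce different outputs from the same weights: post-activation outputs are squashed into the nonlinearity's range, while foreactivation outputs are unbounded linear mixtures of squashed inputs.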

So, let us all try out foreactivations and related approaches. The speed-up in training will surely be a good thing for humanity; at the least, we won't be consuming as much energy as we do training models without foreactivations.

😀

Harassment of Asian Women by Asian Men

I just read a rather alarming article about Asian men harassing Asian women for dating and marrying white and other non-minority races. Admittedly this is alarming because I may have expressed jealousy towards some white kids having more tofu from Chinese girls than I. And also, some of them seem to know more about Chinese culture than I do. I think casting aspersions on the motives of Asian women becoming embroiled with non-minority races has certainly crossed my mind a million times, even if it never materialized into a blog entry. (It will before this entry publishes.)

Personally, I don't feel it unethical or illogical to have these thoughts. I'm not terribly ashamed of them either. But by the same token, a lot of the envy comes from seeing some very, very, very successful friends and acquaintances make the Asian-women-non-Asian-men relationship work. There are really nice best-of-both-worlds marriages, starting with TWO ceremonies (parties)… Two homes on different continents to take new family members to… Two successful cultures, religions, languages. And those beautifully exotic hybridized kids… Love them to pieces! So much so that the feeling really hurts sometimes.

But that is not to say harassment is right. Asian women have probably suffered more than non-Asian women throughout history, but today they deserve the respect of a fellow human being. We Asian men should never harass Asian women for their independently made decisions!! And incidents of harassment should of course be treated with urgency.

Asian men do have a hard time in America, as the female author of the harassment piece notes. Although it sounds a little belittling: these weaklings who cannot compete reproductively resort to emailed harassment of lost mates. I mean, god, as an Asian man, reading this piece makes me hate Asian mankind even more than I already do, having all my inferiorities and inferiority complexes. It really doesn't make me respect Asian women more when she publicly writes these belittling pieces about a group of people whom she chose not to associate with.

Yes, just as women have to fart and poop on rare occasions, I believe non-minority women are capable of racist behavior too. (Not speaking from personal experience, of course; no woman I know is racist, but at large there must be enough racism to make the pairing challenging.) Therefore the celebration of those few successful Asian-men-white-women relationships should not be disparaged. The lack of celebration for Asian-women-non-Asian-men relationships is absolutely no reason to diminish the success of those who ventured and succeeded.

But I am a bigger person.

I think a rational concern, if I were an elder to an Asian girl, would be the balance of power. Yes, yes, yes, there's love, but a little balance never hurts. Perhaps one thing that Asians don't understand about each other is what we think of each other's power in a relationship. Who decides where to live? Who decides what to eat? Who decides what to drive? Who decides when to have kids? Who decides how to spend money? Who decides…

Asian men may imagine that in a mixed relationship the girl lacks power, that she gives up rights and freedoms she would enjoy in a racially equal relationship, for… whatever it is that she does it for with the non-minority, maybe better sex? Asian women probably think they have all the power in the relationship. I mean, if she didn't feel safe, she probably wouldn't enter into that relationship. (There, I am a bigger person: I respect the rationality of girls when it comes to love.)

These disparate perspectives probably cause all the angst of the article. So, IMMHO, can Asian girls prove they have access to their rights and privileges? Can the girls show us, and the world, that they are in an equal partnership? And Asian men who can't get Asian women: can you try white girls? I'm guessing they can be quite nice to you, if you find your way around the surrounding racism and into her heart. My guess is that women of all races are feeling, thinking, rational beings.

Ugh, random rambling; hope this note will still make sense when it sees the light of day.