I was playing with a deep neural network tutorial recently. The fun of deep learning is starting to wear off after three years or so of continued exposure. Adjusting learning rates, batch sizes, filters, penalties, and regularizations. Trying out algorithms that promise to perform without undue experimentation with these hyper-parameters… It used to be so fun, so exciting, to make even the smallest improvements. But today, it’s quite tedious and quite boring.
A quick meta-thought brings to mind a training procedure: every time I want to change the training, whether interrupting SGD mid-stride or tinkering with a hyper-parameter, I could write the change as Python code. And then ask a model to learn to write these changes for me, based on my supervision.
The outcome, in the limit, is that the model will be able to autonomously make the changes I would want to make, as if I were watching it. It would just do what I want to do, with the same patience as me, and maybe the same typo rate, even. Note that it doesn’t optimize the final metric; it just mimics what I would do.
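To make the idea concrete, here is a toy behavioral-cloning sketch. Everything in it is hypothetical (the feature tuple, the action names, the demonstration log are all made up for illustration): log the state of the run each time I intervene, record what I did, and have a predictor imitate me. A 1-nearest-neighbor lookup stands in for the model.

```python
import math

# Each record: features of the run at intervention time, and the action I took.
# Features here are illustrative: (epoch, train_loss, val_loss, learning_rate).
demonstrations = [
    ((1, 2.30, 2.35, 0.10), "keep_going"),
    ((5, 0.90, 1.40, 0.10), "halve_lr"),    # val loss diverging -> cut lr
    ((9, 0.40, 1.80, 0.05), "early_stop"),  # overfitting -> stop the run
    ((3, 1.50, 1.55, 0.10), "keep_going"),
]

def predict_action(state, demos):
    """1-nearest-neighbor over logged states: do whatever I did the last
    time the run looked most like this."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(demos, key=lambda d: dist(d[0], state))[1]

# A new run at epoch 6 with a widening train/val gap lands nearest the
# "halve_lr" demonstration.
print(predict_action((6, 0.85, 1.45, 0.10), demonstrations))  # -> halve_lr
```

In practice the "action" would be a generated code diff rather than a label, and the predictor would be a far richer model, but the supervision signal is the same: my own interventions, not the final metric.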
At the moment, I feel that experience would be gratifying. There are various interpretations of that event.
We can say that the program has removed human desire. Not in the sense that we cannot or need not desire, but desires that are quickly satisfied are not really desires.
We can also say that humans will have achieved idleness. We do nothing, and yet everything is done. We’ve achieved nil-activity; we accomplish all through inaction.
We can also say that the work to achieve said model-tuning automation is a human-minimizing activity. If it optimizes human resource consumption, the model will remove the need for humans. And if the model is still imperfect, it will tend to minimize our usefulness. This is by far the most horrifying interpretation of the event. Working on that model is literally an effort to minimize human involvement. (That has the minimum description length, MDL, for my desires at the moment.)
So here we have arrived at one way to inspect the “future of jobs” situation.
Since we are still in control, we actually have the ability to decide where we want to go. In the most pathetic case, we will institute Affirmative Action to affirm Humanity: the Law shall favor by race, the Human race, requiring all AI to have 1 bps of entropy added to their actions to sustain the need for humans. In a less pathetic scenario, we are squeezed out of the skills-for-hire arena, but humans still engage each other in socializing and networking and things that only humans do. That’s still quite, quite pathetic, though, IMMHO today.
Can we ponder the question: “what is the right size of humanity?” What is the amount of involvement we really want? Right now I hate tuning parameters, but I can certainly imagine a situation: say, a robot lies dying in the middle of the road, its loving and beloved human child crying.
I can say, “Move aside and let me through, I can help! The 2019-model-year bots use algorithms that have been in the public domain since 2018 (and they don’t work! cd /dev; they are the bastard child of random with zero, properly homed in null). I have SGD training! I can save that bot!!” The child’s watery eyes are now filled with hope, meeting my fiercely determined and confident eyes at the midpoint of the edge linking us two humans. (Think Goldblum crossed with Eastwood crossed with Moore.)
“But a bot could do that better than you!” my annoyingly observant reader will quickly point out, and move on to another, more interesting blog.
But for me, there seems to be something I care about in that moment. Something I care for. And one could easily reach a consensus that there is something humanity cares for in that moment. Is it hope? Is it kindness? Is it sympathy? Is it the desire to decrease perceived entropy? Is it the interdependence of humans that is really of note? Is it my usefulness that I really care about? To the bot, or to the child? Is respect all I want, even when it’s only payable in arrears? Is it…??? What is it? Can we quantify it? Or do its identity and essence rely on its lack of computerized representation?
Perhaps an AI can be made to tell us about this idea that it cannot describe within its domain? An AI to give humanity its best meaning and purpose. Any progress in characterizing it seems like a truly imaginative and inventive step forward, be it taken by humans or by computers.
To be continued…
P.S. I realize it was more than 25 years ago that I first wrote about this matter. In high school I dreamt of making AI. For my 11th-grade Advanced Social Science class, I wrote to take the position that a symbiosis is not only acceptable but also a desirable and inevitable outcome. We should co-evolve, I wrote. Somehow, that position still echoes in the FAM Blog. It would be fascinating CS work to integrate with philosophy, perhaps named Computational Philosophy, a field of philosophical endeavor: Human kind and Computer kind, together, hands-on-keyboards…
But that’s an interesting question in itself. Because we, as a kind, do ingest a lot of very intimate things from our surroundings: water, air, viral DNA/RNA, etc. Things like antibiotics we take as part of humanity: enough people use them that, on average, there’s some non-zero amount of antibiotic in everyone. Then there are vaccines. Significant resources have been devoted to the continued injection of antibiotics and vaccines into people, so much so that they are us. Computers are us, part of us. They have both physical presence and biological and social functionality, as part of us personally and as part of our society.
While there are some who object to the mass enforcement of mandatory vaccination, their effects are limited. One can imagine the people yammering against computers becoming an irreplaceable part of our lives… They are the vaccine opposers. They are the people who ask questions like “who will buy drugs if a vaccine prevents the disease?” and “are you still you if a computer does the shopping for you?”
Don’t care, and yes.