I’ve been watching a lot of sci-fi recently. Mixed in among it all, the occasional deep learning course. The Berkeley Deep Reinforcement Learning class seems exceedingly interesting.
But one concern arose in my mind while perusing the learning-as-entertainment Internet. There are a lot of smart kids learning and working on increasingly sophisticated reinforcement learning algorithms. A few years or decades down the road, probably everyone will be working on one of a few large Artificial Intelligences, and by Artificial Intelligence here I mean an identifiable collection of machine-learned knowledge with associated algorithms, software, and hardware systems for interacting with the human world and the physical world. It’ll be like the Google Search Engine: many, many very, very sophisticated moving parts within it, while outside, our world will even provide a whole human sub-culture and sub-economy to sustain it.
The people who work on these systems become less and less valuable. The whole gist of Deep Reinforcement Learning is that, given a computer-simulatable world, an automated learning algorithm can figure out how to act very gainfully very quickly. The speed at which this learning can happen is suggested by the DeepMind paper on an AI called AlphaZero: it learned, in a few days, to play human games better than humans have learned to play them over thousands of years.
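To make that gist concrete, here is a minimal sketch of tabular SARSA (the classic RL update, which comes up again below) learning to act in a tiny simulated world. The toy environment, the function names, and all the hyperparameters are my own illustration, not from any paper: a one-dimensional chain of states where the only reward is at the rightmost end, and the agent learns from nothing but simulated experience.

```python
import random

# A toy, fully simulatable world: a 1-D chain of states 0..N-1.
# The agent starts at state 0 and gets reward +1 only on reaching state N-1.
N_STATES = 6
ACTIONS = [-1, +1]  # step left or step right

def chain_step(state, action):
    """Advance the simulated world one tick; returns (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True
    return next_state, 0.0, False

def sarsa(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular SARSA: learn Q(s, a) purely from simulated experience."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in range(len(ACTIONS))}

    def policy(s):
        # Epsilon-greedy, with random tie-breaking so early exploration is unbiased.
        if rng.random() < epsilon:
            return rng.randrange(len(ACTIONS))
        qs = [Q[(s, a)] for a in range(len(ACTIONS))]
        top = max(qs)
        return rng.choice([a for a, q in enumerate(qs) if q == top])

    for _ in range(episodes):
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = chain_step(s, ACTIONS[a])
            a2 = policy(s2)
            # The SARSA update: nudge Q(s, a) toward the observed one-step return.
            Q[(s, a)] += alpha * (r + gamma * (0.0 if done else Q[(s2, a2)]) - Q[(s, a)])
            s, a = s2, a2
    return Q

Q = sarsa()
# The greedy action in each non-terminal state, after training.
greedy = [max(range(len(ACTIONS)), key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(greedy)
```

After a few hundred simulated episodes the greedy policy walks straight toward the reward, which is the whole point: when the world can be simulated cheaply, experience is free and the learning is fast.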
Then an inevitable self-recursive thought arises: what is more simulatable than the Machine Learning process itself? (Think AutoML-Zero.) The whole point of Machine Learning, the craft of AI, is to encode the world in a way that our computational models can accept; the rest is to improve metrics which we have dutifully taught the computers. There is nothing more simulatable than the process of building an AI system. Ergo, it will be optimized away, and everyone working in AI will be the first to be displaced by AI.
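The self-recursion fits in a few lines. Below is a deliberately tiny caricature (my own toy setup, not anything from the AutoML-Zero paper): an inner loop that "learns" by gradient descent, and an outer loop that treats the entire learning process as just another metric to optimize, here by random search over the learner's step size.

```python
import random

# Meta-learning in miniature. Inner loop: gradient descent on f(x) = (x - 3)^2.
# Outer loop: random search over the learning rate, i.e. optimizing the
# learning process itself by simulating it end to end.

def inner_train(lr, steps=20):
    """Run one whole learning process and report how well it learned."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of (x - 3)^2
        x -= lr * grad
    return (x - 3) ** 2      # final loss of this "learner"

rng = random.Random(0)
best_lr, best_loss = None, float("inf")
for _ in range(100):
    lr = rng.uniform(0.0, 1.0)   # propose a learner (here, just its step size)
    loss = inner_train(lr)       # simulate its entire training run
    if loss < best_loss:
        best_lr, best_loss = lr, loss

print(best_lr, best_loss)
```

The outer loop never needs a human in it; it only needs the inner learning process to be cheap to simulate, which it is by construction.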
That’s right: if everyone follows through rationally, the order of chaos appears to be that AI scientists and engineers are automated away first, then truck drivers and software engineers.
The pursuit of AI work has to be the least valuable in the long run. The effort to replace one’s own intelligence by advancing an external intelligence is ultimately self-defeating.
If not everything plays out rationally, we arrive at chaos a little later, but on the way there we may face the social problem where chunks of the job market are systematically replaced by AI companies. The trouble being that those chunks of the job market are people with voting power and less brain plasticity than a SARSA model… democracy may stop AI.
Aptly, my brain, having been immersed in related matters for a while, just came up with this paper title:
Learning to Harmonize Human and Artificial Intelligences
But I would be self-defeating if I were to work on publishing that, wouldn’t I? Malevolent and optimistic aspirations may not be the right lead in all circumstances.
Where is the equilibrium in this conundrum? Is there a middle way? Is there a path wide enough for both intelligences to move forward? Is there a root Intelligence in our world wherefrom all of ours sprang? Has that root Intelligence the wisdom to unite us all? Should we have asked “from whom?”