Generalization Initialization

I’ve been talking to coworkers about a recent batch of papers claiming deep neural networks can or cannot generalize effectively.

I feel I do not have the same respect for this problem as my coworkers do. I do not fear it as they do.

Let’s see, how bad could this be?

I suppose an example of this problem is learning to identify a cat. The robot may find out through reinforcement learning that a cat is best identified by scaring it suddenly and hearing a surprised meow. So few mute cats exist that accuracy is only negligibly decreased by this overfitting. The obvious problem is that mute cats do exist, and Hollywood will make a movie about the one that was used to defeat the AI that overpowered its human creators.
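To make the failure mode concrete, here is a minimal sketch of shortcut learning with made-up toy data: a classifier that keys on a single spurious feature gets perfect training accuracy but fails on the rare mute cat.

```python
# Hypothetical toy data: (whiskers, pointy_ears, meows_when_startled) -> label.
# In training, every cat happens to meow when startled.
train = [
    ((1, 1, 1), "cat"),
    ((1, 1, 1), "cat"),
    ((0, 0, 0), "dog"),
    ((1, 0, 0), "dog"),
]

def shortcut_classifier(features):
    # Learned shortcut: "it's a cat iff it meows when startled."
    return "cat" if features[2] else "dog"

train_acc = sum(shortcut_classifier(f) == y for f, y in train) / len(train)
print(train_acc)  # 1.0 -- perfect on training data

mute_cat = (1, 1, 0)  # whiskers and pointy ears, but silent
print(shortcut_classifier(mute_cat))  # "dog" -- the shortcut fails
```

The shortcut is cheaper than learning what a cat actually looks like, and nothing in the training data penalizes it.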

(And the reverse could be true as well: for example, toy dogs finding out that scaring children into a crying fit is the best way to tell a child from an adult.)

The intelligent reader will quickly point out that there are plenty of things covered in deepnets-101 that prevent this from happening. (Well, maybe not necessarily for reinforcement learning, but straight-up deep nets have nice regularizers.)
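One of those deepnets-101 safeguards is L2 weight decay. A minimal sketch with hypothetical numbers: the penalty term pulls a weight toward zero unless the data keeps pushing back, so the model can’t lean too hard on any one spurious feature.

```python
# Sketch of L2 weight decay on a single weight. The regularized loss is
# loss + lambda * w**2, so its gradient adds the term 2 * lambda * w.
def grad_step(w, data_grad, lr=0.1, weight_decay=0.5):
    return w - lr * (data_grad + 2 * weight_decay * w)

w = 5.0  # a weight that has latched onto a spurious feature
for _ in range(50):
    # With no data signal (data_grad=0), each step shrinks w by a factor
    # of (1 - lr * 2 * weight_decay) = 0.9: pure decay toward zero.
    w = grad_step(w, data_grad=0.0)
print(round(w, 4))  # close to 0 after 50 steps
```

In a real network the data gradient fights the decay, and only features with consistent evidence keep large weights.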

What else could happen? Was there a meme around the internet about the indistinguishability of dogs and fried chicken? The fear is that Cortana would grab the dog and microwave it when you ask it to reheat the leftovers from KFC. The generalization in this case is too general: the model found anything that could resemble a dog instead of just the dogs. But this was just a meme; I’m not sure it would withstand serious scrutiny.

More sophisticated problems, often jokingly put on display, are the mistakes that mentally ill people make. Well, mentally ill people and geniuses. The AI could make framing errors: throwing a person into a pool to clean some dirt off of his shoulder. The solution lies outside any reasonable framing of the problem, but it could be chosen due to the wrong kind of generalization.

There is also the problem of leakage. For example, a learning system could overfit training data consisting of FBI profiles so much that it becomes a detector for whether the FBI has investigated a person rather than for actual crimes. It fails to truly generalize to populations the FBI never collected information on, because the learning system picks up the biases and errors of the whole FBI apparatus, which consists of many error-capable humans. The theory, at least for today’s systems, is that such a system is at least as bad as the humans it learns from.
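A toy simulation makes the leakage point visible (all numbers here are invented): two groups offend at the same rate, but labels only exist where investigations happened, so a model fit to the labels reproduces the sampling bias rather than the underlying rate.

```python
import random

random.seed(0)

# Hypothetical population: groups A and B both "offend" at a 10% rate.
population = [
    {"group": g, "crime": random.random() < 0.10}
    for g in ("A", "B")
    for _ in range(10_000)
]

# Labels exist only where investigators looked: group A.
labeled = [p for p in population if p["group"] == "A"]

# A "model" that just memorizes the observed base rate per group.
predicted_rate = {
    "A": sum(p["crime"] for p in labeled) / len(labeled),
    "B": 0.0,  # never investigated -> looks perfectly crime-free
}

true_rate_B = sum(p["crime"] for p in population if p["group"] == "B") / 10_000
print(predicted_rate["B"], round(true_rate_B, 2))  # 0.0 vs roughly 0.1
```

The model’s error isn’t in the optimizer; it’s baked into who generated the labels in the first place.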

This now indeed seems to be a very interesting problem to consider. But there may not be a one-stop-shop solution to all of AI’s problems. Generalization is probably just one of many things we must solve for in future systems. This is a great opportunity for scientific advancement and the development of specializations, such as Robopsychology, and psychohistory, and…

But for real.
