Anthony Repetto
2 min readSep 1, 2019


Thank you for your thoughtful response!

I agree that each neuron is an abstraction, and a ‘gravity’ neuron would emerge in any network — though it would be constrained by the aspects of gravity’s impact which were visible in limited training data. So, expanding the network’s understanding of gravity, using existing monolithic architectures, requires re-training the network. That’s part of what I was hoping to avoid.

There’s been some progress in retaining old learning while acquiring new tasks or adapting to new environments, yet our artificial networks remain distinctly brittle and forgetful compared to us. I don’t think that difference is a matter of quantity; a bigger monolithic network won’t get us lifelong learning. Our own brains use a subtler technique to form generalizations of abstractions. (Recognizing that “fluid dynamics == electromagnetism” is currently beyond monolithic deep nets…they still struggle with object permanence!)

We can keep trying to understand the brain, to build more capable networks… yet the Lottery Ticket Hypothesis demonstrates that there may be many more mathematically convenient algorithms out there. I’d bet on those.
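(For readers unfamiliar with it: the Lottery Ticket Hypothesis is found via iterative magnitude pruning — train, zero out the smallest-magnitude weights, rewind the survivors to their initialization, and retrain the sparse “winning ticket.” Here’s a minimal, hypothetical sketch of the pruning step alone, on a toy weight list rather than a real network; the function name and values are illustrative, not from any library.)

```python
def magnitude_prune(weights, mask, fraction):
    """Zero out the smallest-magnitude weights that are still surviving (mask == 1)."""
    survivors = [(abs(w), i) for i, (w, m) in enumerate(zip(weights, mask)) if m]
    survivors.sort()  # smallest magnitudes first
    n_prune = int(len(survivors) * fraction)
    new_mask = list(mask)
    for _, i in survivors[:n_prune]:
        new_mask[i] = 0  # this weight is pruned from the "ticket"
    return new_mask

# Toy trained weights; all six start out unpruned.
weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
mask = [1] * len(weights)

# One pruning round: drop the 50% smallest-magnitude weights.
mask = magnitude_prune(weights, mask, 0.5)
print(mask)  # -> [1, 0, 1, 0, 1, 0]
```

In the full procedure this round repeats several times, and after each round the surviving weights are reset to their original initialization before retraining — that rewind is what makes the sparse subnetwork a “winning ticket” rather than just a pruned model.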

(…And, I agree that embodied robots are the natural way to comprehend real and valued abstractions — back in 2004, I had a rather stilted, awkward argument with Will Wright about exactly that topic. He bet that John Holland’s genetic algorithms would produce intelligence first, using toy environments. I bet on Jeff Hawkins’ error-correcting layers, running in a robot with a rich sensor array. Now, I think we’ll find some stupidly powerful shortcut — and it’ll be able to adapt from simulation to reality with ease, like getting in a car with smoother steering.)
