I've been busy writing up a new paper, and expect the reviews back on another soon, so ... sorry for the lack of posts. But this should be of interest:
The Dana Foundation has just posted an interview with Terrence Sejnowski about his recent Science paper, "Foundations for a New Science of Learning" (with coauthors Meltzoff, Kuhl & Movellan). Sejnowski is a kind of legendary figure in computational neuroscience: he founded the journal Neural Computation, developed the primary algorithm in independent component analysis (infomax) and contrastive Hebbian learning, and played an early role in linking the mathematical concept of "prediction error" to dopamine function.
One snippet from the interview:
Q: In what ways has the study of how children learn been used to solve engineering problems?
A: Children's brains are still developing and we need to understand how that helps them to learn. One example is imitation learning, which has been studied by Andrew Meltzoff, Ph.D., at the University of Washington in Seattle, who is trying to understand what makes children such effective learners. Babies and children are really good at imitation. Right out of the womb, babies can imitate facial expressions. If you stick out your tongue, a baby who can barely see will repeat your action. Children have fantastic abilities to mimic actions and behaviors. They learn a lot simply by observing and mimicking, and they will try to repeat not only the action itself - say, reaching out with the arm - but the purpose of the action - say, picking up a ball. This is something humans do much more effectively than any other animal.
Engineers, having seen that imitation is highly effective in humans, combined imitation learning with reinforcement learning to boost the performance of control systems. In apprenticeship learning, for example, a powerful computer tracks the actions of an expert human controlling a complex system, and then programs the reinforcement system to imitate and learn the very complex motor commands that the human makes. Engineers are now able to reproduce human skills that were previously thought beyond the reach of machines. For example, Andrew Ng, Ph.D., at Stanford has used apprenticeship learning with reinforcement to automatically control helicopters that do stunts like flying upside down.
Read more of the interview here.
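The apprenticeship-learning idea Sejnowski describes, recording an expert's actions and then training a controller to reproduce them, can be illustrated with a minimal sketch. This is not Ng's helicopter method; it is just the simplest supervised form of imitation (behavioral cloning) on a hypothetical toy task, with all names and the environment invented for illustration:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Toy task (illustrative): a 1-D corridor where the "expert"
# always steps toward a goal at position 4.
GOAL = 4

def expert_action(state):
    """Expert policy: move toward the goal (+1 right, -1 left)."""
    return +1 if state < GOAL else -1

# 1. Record expert demonstrations as (state, action) pairs.
demos = [(s, expert_action(s))
         for s in (random.randint(0, 8) for _ in range(200))]

# 2. Imitate by supervised learning: for each observed state,
#    adopt the action the expert chose most often there.
counts = defaultdict(Counter)
for state, action in demos:
    counts[state][action] += 1

def imitated_policy(state):
    if state in counts:
        return counts[state].most_common(1)[0][0]
    return random.choice([-1, +1])  # undemonstrated state: guess

# The imitated policy matches the expert on demonstrated states,
# e.g. it moves right from state 1 and left from state 7.
```

In full apprenticeship learning this imitation step only initializes the controller; reinforcement learning then refines it against a reward signal, which is what lets the system exceed a literal replay of the demonstrations.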
Imitation certainly is a powerful learning tool, one that babies and children master even as the rest of us gradually lose the skill with age. I've been amazed by my 1-year-old son, who accurately imitates all the silly sounds I make, then takes it a step further, attaching meaning to each sound and using them to communicate. This seems to be the basis of language acquisition, so it'll be interesting to hear him once he starts imitating recognizable words in any of our home languages.
It also makes sense that engineers would extend the concept to enable machines to learn more advanced tasks. That, in tandem with progress in the physical structures of robots and other machines, makes them capable of ever finer nuances. I recently saw a robot perform a traditional Japanese dance at a science expo in Tokyo, impressively fluidly and gracefully... and most surprisingly -- imperfectly!
I wonder how all this will eventually be applied. Hoping for the best.
Hi Chris :)
This reminds me a lot of the first time I saw my PI do an electrophysiology recording. Something clicked as I repeated the actions in my mind.
It suddenly all made sense.
There's also an interesting point raised by this quote: visual cues seem to be the more potent ones. For example, someone could have explained to me a thousand times how to find my electrode during an experiment, and I would not have grasped the concept as well as I did from a single visual experience of it.
What are the underlying mechanisms that permit us to link visual information to procedural memory, or to declarative semantic memory? Could language/sound encode more information, or give access to exclusive information?
A picture is worth a thousand words.