| 14:00 – 14:15 | Welcome and Introduction |
| 14:15 – 14:45 | "Using Simulated Robots to Understand Variability and Error in Development", Ori Ossmy (Birkbeck, University of London, UK) |
| Abstract: What is the optimal variability and penalty for errors in infant skill learning? Behavioral analyses indicate that variability and errors are frequent but trivial as infants acquire foundational skills. In learning to walk, for example, changing trajectories and falling are commonplace but appear to incur only a negligible penalty. Behavioral data, however, cannot reveal whether variability and a low penalty for falling are beneficial for learning. In this talk, I will demonstrate how simulated robots can be used as an embodied model to test infants’ learning. I will discuss how robotics provides a fruitful avenue for testing hypotheses about infant development and, reciprocally, how observations of infant behavior may inform research on artificial intelligence. |
| 14:45 – 15:15 | "How Retinas Shape Visual Development", Samantha Wood (Indiana University Bloomington, USA) |
| Abstract: Why do visual systems have the organization they do? Are the unique properties of the visual system hardcoded into the brain by genetics or learned through experience? One way to test this is with computational models of vision. Most models of visual development focus on how brains learn, but they neglect a crucial fact: brains only learn from the inputs the body provides. In this talk, I will present results from models trained on simulated prenatal retinal waves and models trained on postnatal visual inputs shaped by the optics of the eye (e.g., cones vs. rods, foveation, dynamic vision sensing). Our results demonstrate that generic neural networks (transformers) spontaneously develop key signatures of biological visual systems when trained on prenatal and postnatal retinal experiences, including orientation selectivity, shape perception, and hierarchically increasing receptive fields. Thus, core features of visual organization may emerge naturally from general learning applied to embodied sensory data, without requiring detailed genetic instruction. |
| 15:15 – 15:30 | Poster Spotlights |
| 15:30 – 16:00 | Coffee & Tea Break + Posters |
| 16:00 – 16:30 | "The Intelligence of the Hand", Lorenzo Jamone (University College London, UK) |
| Abstract: Human dexterity remains unmatched by modern robots, yet developing more dexterous robotic systems is crucial for tackling tasks in semi-structured, unstructured, and hazardous environments. My team is dedicated to studying "the intelligence of the hand" in humans and robots, to bridge this gap and enhance the functionality and intelligence of robotic hands. In this talk, I will share highlights from our recent work in tactile perception, haptic exploration, grasping, and manipulation, showcasing how these advancements are bringing us closer not only to creating truly dexterous robots but also to understanding how humans use their hands in different tasks. |
| 16:30 – 16:45 | "Brains in Motion: How we Learn to Move and Adapt Across Development", Laura Faßbender (JLU Gießen, Germany) |
| 16:45 – 17:00 | "Infant Age Classifier for a Baby Brain-Computer Interface", Djordje Veljkovic (NTNU, Norway) |
| 17:00 – 17:30 | Panel discussion with all the speakers |