Posture affects infants’ capacity to identify objects, study finds
The iCub robot has helped advance scientific understanding of word-object mapping, thanks to joint efforts from the ITALK and POETICON++ projects.
Generally taken for granted, our capacity to immediately recognise, name
and associate thousands of objects with memories – under various
viewing conditions – remains a mystery. It is well known that
top-down knowledge arising from previous experience with our environment
plays a key role in this process. But what if there is no such
knowledge, as when infants first start mapping words to objects? Does
the learning process rely strictly on repeated word-object
associations, or do factors such as spatial location and body posture have
an impact as well?
To find out, scientists at Indiana University teamed up with two
EU-funded projects – ITALK and POETICON++ – to run tests on a humanoid
robot model and later verify the results in new infant studies. Various
experiments were conducted on the robot, including one in which two
different objects were placed to its right and left – in a
way that forced the robot to position itself differently to view one or
the other. Once the robot turned left, the name of the left-hand object
was pronounced, and vice versa.
After the two objects had been presented several times, the team
repeated the procedure with no object in view, and then with the objects
visible but not named. Finally, the locations of the two objects were
swapped, and the robot still made the correct name-object association in
71 % of trials. When the body-posture variable was removed from the
experiments, however, this score dropped to 46 %. Tests on infants
showed very similar results.
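
To make the underlying idea concrete, the toy Python sketch below illustrates how a body-posture cue can act as a bridge between a spoken name and an object that was never in view when the name was heard. It is not the actual iCub control architecture used in the study; the cue names (such as "posture_left" or "name_modi"), the simple Hebbian co-occurrence counts and the one-step spreading activation are all illustrative assumptions.

```python
import random

# Toy Hebbian associative memory: strengths between co-occurring cues.
class AssociativeMemory:
    def __init__(self):
        self.weights = {}  # (cue_a, cue_b) -> association strength

    def associate(self, cues, amount=1.0):
        # Strengthen the link between every pair of co-active cues.
        for a in cues:
            for b in cues:
                if a != b:
                    self.weights[(a, b)] = self.weights.get((a, b), 0.0) + amount

    def recall(self, cue, candidates, use_posture=True):
        # Spread activation one step (cue -> bridge cue -> candidate) and
        # pick the best-supported candidate; posture cues can be switched off.
        scores = {}
        for cand in candidates:
            score = self.weights.get((cue, cand), 0.0)
            for (a, b), w in self.weights.items():
                if a == cue and (use_posture or not b.startswith("posture")):
                    score += w * self.weights.get((b, cand), 0.0)
            scores[cand] = score
        best = max(scores.values())
        return random.choice([c for c, s in scores.items() if s == best])

memory = AssociativeMemory()

# Training: each object is seen at a fixed side, and its name is spoken
# while the robot holds the matching posture but the object is out of view.
for _ in range(4):
    memory.associate(["visual_A", "posture_left"])    # look left, see object A
    memory.associate(["visual_B", "posture_right"])   # look right, see object B
    memory.associate(["name_modi", "posture_left"])   # hear "modi" while turned left
    memory.associate(["name_dax", "posture_right"])   # hear "dax" while turned right

# Test: which object does "modi" refer to, with and without the posture cue?
candidates = ["visual_A", "visual_B"]
print("with posture cue:   ", memory.recall("name_modi", candidates, use_posture=True))
print("without posture cue:", memory.recall("name_modi", candidates, use_posture=False))
```

With the posture cue available, the name is linked to the correct object through the shared posture; with the cue removed, no path connects name to object and the choice falls to chance, loosely mirroring the 71 % versus 46 % pattern reported above.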
‘This study shows that the body plays a role in early object name
learning, and how toddlers use the body's position in space to connect
ideas,’ said Linda Smith from Indiana University, who conducted the
study. ‘A number of studies suggest that memory is tightly tied to the
location of an object. None, however, have shown that bodily position
plays a role or that, if you shift your body, you could forget.’
The robot used for this study is none other than iCub, a humanoid
robot developed under the EU-funded project RobotCub and adopted by over
20 laboratories worldwide. The robot, which is characterised by its
highly realistic body movements, is also central to the ITALK and
POETICON++ projects, which provided it with the capacity to acquire
complex cognitive and behavioural skills based on infant-inspired
language learning.
‘The creation of a robot model for infant learning has far-reaching
implications for how the brains of young people work,’ Smith concludes.
Whilst additional research is needed to determine whether the tie between
posture and learning is limited to infants, the link has potentially
far-reaching implications. Many cognitive developmental disorders are
accompanied by problems with motor development, a relationship that is
still not well understood. It is hoped that the study will help advance
scientific knowledge in this field.
published: 2015-04-02