Scientists Train AI Using Headcam Footage From Human Toddler

Researchers have not only built an AI child, but are now training AI using headcam footage from a human baby as well.

In a press release, New York University announced that its data science researchers had strapped a camera to the head of a real, live, human toddler for 18 months to see how much an AI model could learn from it.

Most large language models (LLMs), like OpenAI’s GPT-4 and its competitors, are trained on “astronomical amounts of language input” that are many times larger than what infants receive when learning to speak a language during the first years of their lives.

Despite that data gap, the systems the NYU scientists trained on the baby cam data were in fact able to learn a “substantial number of words and concepts,” all from only about one percent of the child’s total waking hours between the ages of six months and two years.

Translation: using only a fraction of the data usually required to train an AI model, these researchers were able to teach their system to learn like a baby.

That sort of thing is enough to make most Silicon Valley types, who are understandably concerned about the enormous amounts of energy, water, and data needed to train AI models, salivate.

As the MIT Technology Review reports, the baby in question is named Sam, and he lives in Australia with his parents and two cats. The project resulted in roughly 61 hours of footage shot while Sam wore the…
