The Columbia Chronicle

Cooking robot may offer artificial culinary intelligence

By Sports & Health Editor

January 26, 2015

One of the greatest questions in the development of artificial intelligence is how to provide robots with a software template that enables them to recognize objects and learn actions by watching humans. Researchers from the University of Maryland Institute for Advanced Computer Studies and the National Information Communications Technology Research Centre of Excellence in Australia have developed a software system that allows robots to learn actions and make inferences by watching cooking videos from YouTube.

“It’s very difficult [to teach robots] actions where something is manipulated because there’s a lot of variation in the way the action happens,” said co-author Cornelia Fermüller, a research scientist at the University of Maryland’s Institute for Advanced Computer Studies. “If I do it or someone else does it, we do it very differently. We could use different tools, so you have to find a way of capturing this variation.”

The intelligent system that enabled the robot to glean information from the videos includes two artificial neural networks that mimic the human eye’s processing during object recognition, according to the study. The networks enabled the robot to recognize the objects it viewed in the videos and determine the type of grasp required to manipulate objects such as knives and tomatoes when chopping, dicing and preparing food.

“In addition to [accounting for variation], there is the difficulty involved in capturing it visually,” Fermüller said. “We’ve looked at the goal of the task and then decomposed it on the basis of that.”

Fermüller said the group classified the two types of grasping the robot performed as “power” versus “precision.” Broadly, a power grasp is used when an object needs to be held firmly in order to apply force, such as when holding a knife to make a cut. Holding a tomato in place to stabilize it is a precision grasp, a finer-grained action that calls for accuracy, according to the paper.
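The data flow the study describes can be sketched in a few lines. This is a hypothetical toy illustration only: the real system uses two trained convolutional neural networks, while the stub functions and labels below are invented stand-ins for the object-recognition module, the grasp classifier, and the inference step.

```python
def recognize_object(region):
    # Stand-in for the object-recognition network (hypothetical labels).
    labels = {"blade": "knife", "red_round": "tomato"}
    return labels.get(region, "unknown")

def classify_grasp(obj):
    # Stand-in for the grasp-classification network: a power grasp
    # holds firmly to apply force (a knife); a precision grasp
    # stabilizes with accuracy (a tomato).
    return "power" if obj == "knife" else "precision"

def infer_action(objects):
    # As Fermueller describes, the action is inferred from the hand
    # (grasp) and object descriptions together.
    grasps = {obj: classify_grasp(obj) for obj in objects}
    if grasps.get("knife") == "power" and "precision" in grasps.values():
        return "cutting"
    return "unknown"

objects = [recognize_object(r) for r in ("blade", "red_round")]
print(infer_action(objects))  # -> cutting
```

The point of the decomposition is that the action itself is never classified directly; it is read off from the combination of grasp types and objects, which is what makes the approach robust to the variation Fermüller mentions.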
When observing human activity in real life, robotic systems are able to perceive the movements and objects they are designed to recognize in three dimensions over time, Fermüller said. However, when the movement and objects are viewed in a video, that information is not as immediately understood.

“The way we think of videos is as a three-dimensional entity in the sense that there are two dimensions of space and one dimension of time,” said Jason Corso, an associate professor of electrical engineering and computer science at the University of Michigan. “It’s not as 3D as the world we live in, but one can use a video … which is a spacetime signal, and from it correspond feature points that could be used to reconstruct the 3D environment that is being seen or imaged in that video.”

According to the paper, the development of deep neural networks that can efficiently capture raw data from video and enable robots to perceive actions and objects has revolutionized how visual recognition in artificially intelligent systems functions. The algorithms programmed into the University of Maryland’s cooking robot are one example of this neural functioning.

“So what was used here was really the hand description and object tool description, and then the action was inferred out of that,” Fermüller said. Previous research on robotic manipulation and action recognition has been conducted using hand trackers and motion-capture gloves to overcome the inherent limitations of trying to design artificial intelligence that can learn by example, she said.

“Part of the problem is that robot hands today are so far behind what biological manipulation is capable of,” said Ken Forbus, a professor of computer science and education at Northwestern University. “We have more dynamic range in terms of our touch sensing. It’s very, very difficult to calibrate, as there’s all sorts of problems that might be real problems, and any system is going to have to solve them.”

Forbus said some of the difficulty in robotic design arises from the fact that the tools robots are outfitted with are far behind the ones humans are born with, both physically and in terms of sense perception.

“There is tons of tacit knowledge in human understanding—tons,” Forbus said. “Not just in manipulation, [but] in conceptual knowledge.”

According to Forbus, artificial intelligence researchers have three ways to incorporate this type of conceptual thinking into intelligent systems. The first option is to design robots that can think and analyze in a manner superior to humans; the second is to articulate the tacit knowledge humans possess by boiling it down into a programmable set of rules. The third is to model the AI on the kind of analogical thinking humans use as they discern information and make generalizations that provide a framework for how to act during future experiences.

“That’s a model that’s daunting in the sense that it requires lots and lots of [programmed] experience,” Forbus said. “But it’s promising in that if we can make analogical generalization work in scale … it’s going to be a very human-like way of doing it.”
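Corso’s description of a video as a spacetime signal can be made concrete with a toy example. The dimensions and the tracked point below are hypothetical values chosen purely for illustration, not anything from the study.

```python
# Two spatial dimensions plus one temporal dimension: a grayscale
# video as a 3D array, where video[t][y][x] is the pixel intensity
# at position (x, y) in frame t. Sizes here are toy values.
frames, height, width = 3, 4, 6
video = [[[0] * width for _ in range(height)] for _ in range(frames)]

# A feature point tracked across frames is a trajectory of (x, y)
# coordinates -- correspondences like these are what allow the 3D
# scene to be reconstructed from the 2D-plus-time signal.
trajectory = [(2, 1), (3, 1), (4, 2)]  # one hypothetical (x, y) per frame

# Per-frame displacement of the tracked point (its apparent motion).
motion = [(x2 - x1, y2 - y1)
          for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
print(motion)  # -> [(1, 0), (1, 1)]
```

Matching many such feature points between frames, rather than following a single one, is what lets vision systems recover the 3D structure Corso refers to.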

Silenced nerve cell leaves ‘love hormone’ unrequited

By Assistant Sports & Health Editor

November 3, 2014

The brain is ruled by the interactions of chemicals and receptors. However, new research emphasizes that those reactions often radiate beyond the brain, affecting the body’s physiological processes ...

Pediatric medicine's damaging oversight

October 13, 2014

In 1889, pediatric surgeon William Hill was removing the tonsils from children with obstructed airways to alleviate their struggles with breathing when he noticed unforeseen side effects of the procedure. T...

Bolstered brain research aims inward

By Assistant Sports & Health Editor

October 13, 2014

As a part of the first wave of new federal funding initiatives, research and development into novel technologies to better understand the brain will begin this year. “Most of what you will see in this remarkable set of [grant awards] are technological advances,” said Dr. Francis Collins, director of the National Institutes of Health, during a Sept. 30 press conference. “We are seriously tackling an understanding of the ...

Stanford graduate hooks rap scene with indie-rock beats

October 6, 2014

Kristine Flaherty, commonly known as K.Flay, is a rising artist in the underground hip-hop scene, exhibiting a distinct combination of indie-rock beats and smooth rhymes.

Despite wielding a degree in psychology and sociology from Stanford University, Kristine Flaherty, more commonly known by her stage name K.Flay, spends her days dropping beats in San Francisco’s Bay ...

‘Being Flynn,’ looking backward

By Drew Hunt

February 29, 2012

Director Paul Weitz (“Little Fockers,” “Cirque du Freak: The Vampire’s Assistant”) opens his newest film with an aged but no less ubiquitous Robert De Niro as he walks into a parking garage and gets behind the wheel of a bright yellow taxicab. An image as loaded as that one is enough to pull an audience straight out of your film, but Weitz is content to let minds wander. And why not? After all, the biggest criti...

Big East Powerhouse

By Etheria Modacure

January 31, 2011

The NCAA Women’s Final Four will be held in Indianapolis, and one program looks to make the short trip from Chicago to the Conseco Fieldhouse. The DePaul University women’s basketball team has been winning games with an aggressive playing style on both sides of the basketball court and may be one of the select four teams playing for a championship in April. The Blue Demons are off to one of their best starts in program histo...
