The Columbia Chronicle

SGA

SGA to hold first meeting of semester

February 2, 2015

Having spent much of the Fall 2014 semester forging a relationship with the college community, the Student Government Association is reconvening Feb. 3 for its first senate meeting of the new semester. ...

Self-reflection may distinguish lucid dreamers

By Sports & Health Editor

February 2, 2015

Though frequent lucid dreaming is uncommon, the nocturnal phenomenon has been a topic of interest to psychologists and sleep scientists for centuries. New research published in The Journal of Neuroscience has established a link between certain cognitive functions and the likelihood of being able to lucid dream, shedding new light on the hazy subject.

“Metacognitive monitoring is essentially the ability to monitor your own thoughts,” said Elisa Filevich, lead author of the study and postdoctoral fellow at the Max Planck Institute for Human Development. “It’s knowing what’s inside your mind.”

This ability to self-reflect has been associated more with lucid than with non-lucid dreams, leading researchers to suspect a connection to the anterior prefrontal cortex—the brain area that controls conscious processing and enables humans to consider and gain perspective on their own thoughts and actions. The Jan. 21 study is the first to test a link between lucid dreaming ability and the metacognitive function of self-reflection at the neural level.

“Dreams are normally not subject to this metacognitive monitoring,” Filevich said. “If you really were able to critically reflect on what you’re thinking, then you would notice that there are logical inaccuracies, logical failures—that things don’t follow one another. The only reason why you don’t realize you’re in a dream is because you’re not really thinking about what you’re thinking.”

Study participants in a functional MRI machine were given two thought-monitoring tasks. In a portion of each, they were asked to consciously self-reflect, to stay aware of their thoughts and what they were perceiving around them. Based on the instructions given, the subjects indicated how internally or externally oriented their thoughts were.
The fMRI data showed greater blood flow to the brain regions associated with metacognitive functioning in those participants who, based on a series of questionnaires and surveys, indicated that they regularly experienced lucid dreams.

“We knew that we were expecting frontopolar cortex [activity based on previous research showing] that people with higher metacognitive ability have bigger brain matter volume in the prefrontal cortex,” Filevich said. “That was exactly where we expected the difference between lucid dreamers and non-lucid dreamers to be, and that’s what we got.”

According to Benjamin Baird, a postdoctoral researcher at the University of Wisconsin-Madison, the lucid dreaming literature has noted that it is uncommon for people to reflect on their current state of consciousness much of the time.

“Most people in their everyday lives don’t go around wondering whether they’re dreaming or not,” Baird said. “The kind of metacognition that’s talked about in terms of lucid dreaming is also something that doesn’t happen very frequently in the waking state.”

Baird said current research also shows that ordinary, non-lucid dreams routinely feature metacognitive-type processes.

“If you look at people’s reports of their dreaming experiences, they are making judgments about things [and] considering other people’s reactions,” Baird said. “Those kinds of things happen frequently throughout the waking state and dreaming. The question is which ones we want to call metacognition.”

Memory and perception are two domains at the focus of metacognitive research, Baird said. Although structures in the anterior prefrontal cortex relate to both of those abilities, there is also evidence that other parts of the brain region may relate to thought-monitoring skills.

According to Dr. Allan Hobson, a professor of psychiatry at Harvard Medical School and author of multiple papers on dreams and dream consciousness, one theory that may help explain the occurrence of lucid dreams is the hybrid state hypothesis.

“What consciousness is doing is constantly updating our predictive blueprint about the world, and yet our predictive blueprint of the world is constantly entering into whatever conscious state we are in,” Hobson said. “In waking, the predominant information is external, and in dreaming the predominant information is internal. [When] lucid dreaming, we produce an alternation between these two states.”

According to a January 2015 paper co-authored by Hobson, the highest incidence of both intentional and spontaneous lucid dreaming was observed in young people, peaking at the age of 9. Neurobiological changes children experience at this age begin to activate the frontal lobe, which is engaged during lucid dreaming. These changes take place in the same area of the brain associated with self-monitoring, metacognitive abilities.

Filevich said that to better answer the question of a causal link between anterior prefrontal cortex activity and lucid dreaming, she hopes to teach people how to lucid dream and measure whether doing so increases the gray matter in the part of the brain corresponding to self-reflection.

“[We want to see] whether it’s a completely trainable ability or if it comes with preconditions—whether your specific brain configuration helps you,” Filevich said.

Cooking robot may offer artificial culinary intelligence

By Sports & Health Editor

January 26, 2015

One of the greatest questions in the development of artificial intelligence is how to provide robots with a software template that enables them to recognize objects and learn actions by watching humans. Researchers from the University of Maryland Institute for Advanced Computer Studies and the National Information Communications Technology Research Centre of Excellence in Australia have developed a software system that allows robots to learn actions and make inferences by watching cooking videos from YouTube.

“It’s very difficult [to teach robots] actions where something is manipulated because there’s a lot of variation in the way the action happens,” said co-author Cornelia Fermüller, a research scientist at the University of Maryland’s Institute for Advanced Computer Studies. “If I do it or someone else does it, we do it very differently. We could use different tools, so you have to find a way of capturing this variation.”

The intelligent system that enabled the robot to glean information from the videos includes two artificial neural networks that mimic the processing the human eye performs for object recognition, according to the study. The networks enabled the robot to recognize objects it viewed in the videos and determine the type of grasp required to manipulate objects such as knives and tomatoes when chopping, dicing and preparing food.

“In addition to [accounting for variation], there is the difficulty involved in capturing it visually,” Fermüller said. “We’ve looked at the goal of the task and then decomposed it on the basis of that.”

Fermüller said the group classified the two types of grasping the robot performed as “power” versus “precision.” Broadly, power grasping is used when an object needs to be held firmly in order to apply force—like when holding a knife to make a cut. Holding a tomato in place to stabilize it is considered precision grasping—a more fine-grained action that calls for accuracy, according to the paper.
When observing human activity in real life, robotic systems are able to perceive the movements and objects they are designed to recognize in three dimensions over time, Fermüller said. However, when the movement and objects are viewed in a video, that information is not as immediately understood.

“The way we think of videos is as a three-dimensional entity in the sense that there are two dimensions of space and one dimension of time,” said Jason Corso, an associate professor of electrical engineering and computer science at the University of Michigan. “It’s not as 3D as the world we live in, but one can use a video … which is a spacetime signal, and from it correspond feature points that could be used to reconstruct the 3D environment that is being seen or imaged in that video.”

According to the paper, the development of deep neural networks that can efficiently capture raw data from video and enable robots to perceive actions and objects has revolutionized how visual recognition in artificially intelligent systems functions. The algorithms programmed into the University of Maryland’s cooking robot are one example of this neural functioning.

“So what was used here was really the hand description and object tool description, and then the action was inferred out of that,” Fermüller said. Previous research on robotic manipulation and action recognition has used hand trackers and motion-capture gloves to overcome the inherent limitations of trying to design artificial intelligence that can learn by example, she said.

“Part of the problem is that robot hands today are so behind what biological manipulation is capable of,” said Ken Forbus, a professor of computer science and education at Northwestern University. “We have more dynamic range in terms of our touch sensing.
It’s very, very difficult to calibrate, as there’s all sorts of problems that might be real problems, and any system is going to have to solve them.”

Forbus said some of the difficulty in robotic design arises from the fact that the tools robots are outfitted with lag far behind the ones humans are born with, both physically and in terms of sense perception.

“There is tons of tacit knowledge in human understanding—tons,” Forbus said. “Not just in manipulation, [but] in conceptual knowledge.”

According to Forbus, artificial intelligence researchers have three ways to incorporate this type of conceptual thinking into intelligent systems. The first option is to try to design robots that can think and analyze in a manner superior to humans. The second is to articulate the tacit knowledge that humans possess by boiling it down into a programmable set of rules. The third is to model the AI on the type of analogical thinking humans use as they discern information and make generalizations that provide a framework for how to act in future experiences.

“That’s a model that’s daunting in the sense that it requires lots and lots of [programmed] experience,” Forbus said. “But it’s promising in that if we can make analogical generalization work in scale … it’s going to be a very human-like way of doing it.”

Communication and Media Innovation

Department merger spawns search for new chair

January 26, 2015

Nearly a year after the college’s announcement of the merger of the Advertising and Public Relations programs and the Journalism Department, an interdisciplinary committee has selected a name for the new ...

Large assets, bigger profits: Plus-size women, clothing bloggers fight to fit into the fashion industry

By Managing Editor

December 8, 2014

The highest-earning plus-size supermodel in the fashion industry is known by one name—not her first or last, just “Emme.” The 51-year-old made her way into the fashion industry in the 1990s, when ultra-thin airbrushed models graced the covers of fashion magazines, advertisements and runways. Although the experience was exciting for Emme, she was not immune to prejudice: a famous photographer referred to her as a “fatty...

Buried languages leave lifelong trace

By Assistant Sports & Health Editor and Contributing Writer

December 1, 2014

Languages that people are exposed to at a young age form circuits in the brain that the body does not forget, even if the individual does. The existence of this buried information persists after child...

Columbia rushes around anti-Greek policy

December 1, 2014

An email sent to students’ LoopMail accounts Nov. 9 about the campus bookstore selling sorority and fraternity gear despite the collegewide policy prohibiting Greek life left some students confused an...

‘Feeling of presence’ demystified

By Assistant Sports & Health Editor

November 17, 2014

Mountaineers sometimes report the sensation of a nearby presence as they scale great heights, as if someone was right behind them—off to the left or the right a bit—but out of sight. Healthy minds ...

tDCS

DIY devices jolt brain, improve function

November 17, 2014

For some, a cup of coffee is the best way to focus on work. But for Vincent Wood, a junior neuroscience major at the University of Pittsburgh, that extra jolt of energy comes from a different source.Wood use...

Columbia community mourns student’s death

By Managing Editor

November 3, 2014

Jake McConnell, a sophomore cinema art + science major, died Oct. 29 in his dorm room at The Dwight, 642 S. Clark St.McConnell, a 20-year-old native of suburban Crystal Lake, Illinois, came to the college to study film and creative writing. McConnell’s death was announced Oct. 29 in a collegewide email from President Kwang-Wu Kim and Vice President of Student Success Mark Kelly. “We are deeply saddened to share news ...

Mayor proposes to ban ‘box’ law in Chicago

October 20, 2014

Mayor Rahm Emanuel proposed a new city ordinance on Sept. 29 called Ban the Box, which would eradicate the current box law that creates barriers to employment for convicted felons and those with criminal re...
