The Columbia Chronicle

Ald. Bob Fioretti Concedes: ‘We Did Do Change’

February 25, 2015

Former Ald. Bob Fioretti conceded his bid for mayor Tuesday at his election party, held at the Holiday Inn Mart Plaza, 350 W. Mart Center Drive. In his speech, he addressed issues in Chicago, such as c...

Student’s big idea creates Big Ideas Columbia

Senior public relations major Kathryn Walters, along with the Office of Student Communications, organized Big Ideas Columbia, an event for students to learn what it means to be successful during and after college.

By Senior Campus Reporter

February 23, 2015

The Office of Student Communications is teaming up with senior public relations major Kathryn Walters to produce Big Ideas Columbia. Big Ideas Columbia will be held Feb. 26 from 4–6 p.m. at Film Row ...

Genetic variety in gut critters tied to disease

By Sports & Health Editor

February 23, 2015

The human gut is a teeming locus of bacterial enterprise. This ever-changing clump of activity—our gut microbiome—spans the length of the stomach and intestines and is colonized by trillions of microb...

Featured Athlete: Yulia Shupenia

February 16, 2015

Yulia Shupenia, a 20-year-old sophomore communications major at DePaul University, was named the BIG EAST Women’s Tennis Athlete of the Week on Feb. 9 by the BIG EAST Conference. Shupenia plays on D...

Sad music plucks spectrum of emotional notes

By Sports & Health Editor

February 16, 2015

For all its heartrending shrewdness, Neil Young’s single “Only Love Can Break Your Heart” climbed to No. 33 on the U.S. Billboard Hot 100 chart in 1970. The contradictory notion of love being the solita...

Brain reorganization cluttered after sense restored

By Sports & Health Editor

February 9, 2015

When people are born, their brains are primed to receive instructions on how to wire themselves based on the kinds of sensory input information they receive. However, for individuals born with sensory impa...

Self-reflection may distinguish lucid dreamers

By Sports & Health Editor

February 2, 2015

Though frequent lucid dreamers are uncommon, the nocturnal phenomenon has been a topic of interest to psychologists and sleep scientists for centuries. New research published in The Journal of Neuroscience has established a link between certain cognitive functions and the likelihood of being able to lucid dream, shedding some new light on the hazy subject.

“Metacognitive monitoring is essentially the ability to monitor your own thoughts,” said Elisa Filevich, lead author of the study and postdoctoral fellow at the Max Planck Institute for Human Development. “It’s knowing what’s inside your mind.”

This ability to self-reflect has been associated more with lucid than non-lucid dreams, leading researchers to suspect a connection to the anterior prefrontal cortex—the brain area that controls conscious processing and enables humans to consider and gain perspective on their own thoughts and actions. The Jan. 21 study is the first to test a link between lucid dreaming ability and the metacognitive function of self-reflection at the neural level.

“Dreams are normally not subject to this metacognitive monitoring,” Filevich said. “If you really were able to critically reflect on what you’re thinking, then you would notice that there are logical inaccuracies, logical failures—that things don’t follow one another. The only reason why you don’t realize you’re in a dream is because you’re not really thinking about what you’re thinking.”

Study participants in a functional MRI machine were given two thought-monitoring tasks. In a portion of each, they were asked to consciously self-reflect—to stay aware of their thoughts and what they were perceiving around them. Based on the instructions given, the subjects indicated how internally or externally oriented their thoughts were.
The fMRI data showed greater blood flow to the regions of the brain associated with metacognitive functioning in those participants who, based on a series of questionnaires and surveys, indicated that they regularly experienced lucid dreams.

“We knew that we were expecting frontopolar cortex [activity based on previous research showing] that people with higher metacognitive ability have bigger brain matter volume in the prefrontal cortex,” Filevich said. “That was exactly where we expected the difference between lucid dreamers and non-lucid dreamers to be, and that’s what we got.”

According to Benjamin Baird, a postdoctoral researcher at the University of Wisconsin-Madison, the lucid dreaming literature has noted that it is uncommon for people to reflect on their current state of consciousness much of the time.

“Most people in their everyday lives don’t go around wondering whether they’re dreaming or not,” Baird said. “The kind of metacognition that’s talked about in terms of lucid dreaming is also something that doesn’t happen very frequently in the waking state.”

Baird said current research also shows that ordinary, non-lucid dreams routinely feature metacognitive-type processes.

“If you look at people’s reports of their dreaming experiences, they are making judgments about things [and] considering other people’s reactions,” Baird said. “Those kinds of things happen frequently throughout the waking state and dreaming. The question is which ones we want to call metacognition.”

Memory and perception are two domains at the focus of metacognitive research, Baird said. Although structures in the anterior prefrontal cortex relate to both of those abilities, there is also evidence that other parts of the brain region may relate to thought-monitoring skills.

According to Dr. Allan Hobson, a professor of psychiatry at Harvard Medical School and author of multiple papers on dreams and dream consciousness, one theory that may help explain the occurrence of lucid dreams is the hybrid state hypothesis.

“What consciousness is doing is constantly updating our predictive blueprint about the world, and yet our predictive blueprint of the world is constantly entering into whatever conscious state we are in,” Hobson said. “In waking, the predominant information is external, and in dreaming the predominant information is internal. [When] lucid dreaming, we produce an alternation between these two states.”

According to a January 2015 paper co-authored by Hobson, the highest incidence of both intentional and spontaneous lucid dreaming was observed in young people, peaking at the age of 9. Neurobiological changes children experience at this age begin to activate the frontal lobe, which is engaged during lucid dreaming. These changes take place in the same area of the brain associated with self-monitoring, metacognitive abilities.

Filevich said that in order to better answer the question of a causal link between anterior prefrontal cortex activity and lucid dreaming, she hopes to teach people how to lucid dream and measure whether this increases the gray matter in the part of the brain corresponding to self-reflection.

“[We want to see] whether it’s a completely trainable ability or if it comes with preconditions—whether your specific brain configuration helps you,” Filevich said.
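The kind of group contrast the study reports—comparing a brain measure between lucid and non-lucid dreamers—can be illustrated with a toy calculation. All numbers below are invented for illustration; the study's actual analysis used fMRI activation maps, not this simple test.

```python
from statistics import mean, stdev

# Hypothetical percent-signal-change values in the frontopolar cortex
# for two groups of participants. These figures are made up; the study
# did not report data in this form.
lucid = [0.42, 0.51, 0.38, 0.47, 0.55, 0.44]
non_lucid = [0.21, 0.30, 0.25, 0.18, 0.27, 0.23]

def welch_t(a, b):
    """Welch's two-sample t-statistic: the difference in group means
    scaled by the combined standard error of the two samples."""
    na, nb = len(a), len(b)
    se = (stdev(a) ** 2 / na + stdev(b) ** 2 / nb) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(lucid, non_lucid)
print(f"lucid mean={mean(lucid):.3f}, non-lucid mean={mean(non_lucid):.3f}, t={t:.2f}")
```

A large positive t-statistic here would correspond to the pattern the researchers describe: reliably greater signal in the frequent lucid dreamers than in the comparison group.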

Cooking robot may offer artificial culinary intelligence

By Sports & Health Editor

January 26, 2015

One of the greatest questions in the development of artificial intelligence is how to provide robots with a software template that enables them to recognize objects and learn actions by watching humans. Researchers from the University of Maryland Institute for Advanced Computer Studies and the National Information Communications Technology Research Centre of Excellence in Australia have developed a software system that allows robots to learn actions and make inferences by watching cooking videos from YouTube.

“It’s very difficult [to teach robots] actions where something is manipulated because there’s a lot of variation in the way the action happens,” said co-author Cornelia Fermüller, a research scientist at the University of Maryland’s Institute for Advanced Computer Studies. “If I do it or someone else does it, we do it very differently. We could use different tools, so you have to find a way of capturing this variation.”

The intelligent system that enabled the robot to glean information from the videos includes two artificial neural networks that mimic the processing the human eye performs to recognize objects, according to the study. The networks enabled the robot to recognize objects it viewed in the videos and determine the type of grasp required to manipulate objects such as knives and tomatoes when chopping, dicing and preparing food.

“In addition to [accounting for variation], there is the difficulty involved in capturing it visually,” Fermüller said. “We’ve looked at the goal of the task and then decomposed it on the basis of that.”

Fermüller said the group classified the two types of grasping the robot performed as “power” versus “precision.” Broadly, power grasping is used when an object needs to be held firmly in order to apply force—like when holding a knife to make a cut. Holding a tomato in place to stabilize it is considered precision grasping—a more fine-grained action that calls for accuracy, according to the paper.
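The power-versus-precision distinction described in the paper can be sketched as a small lookup. The rule table below is purely hypothetical—the actual system infers the grasp type from neural-network visual features, not from hand-written rules.

```python
# Toy illustration of the "power" vs. "precision" grasp labels the
# paper describes. The real system learns this mapping from video;
# these (tool, action) rules are invented for illustration.
POWER, PRECISION = "power", "precision"

GRASP_RULES = {
    ("knife", "cut"): POWER,        # firm hold to apply cutting force
    ("knife", "dice"): POWER,
    ("tomato", "hold"): PRECISION,  # gentle, accurate stabilization
    ("spoon", "stir"): POWER,
}

def choose_grasp(tool: str, action: str) -> str:
    """Return the grasp type for a (tool, action) pair, defaulting to
    a precision grasp for unknown pairs (a conservative, gentle hold)."""
    return GRASP_RULES.get((tool, action), PRECISION)

print(choose_grasp("knife", "cut"))    # power
print(choose_grasp("tomato", "hold"))  # precision
```

The interesting part of the research is precisely that no such table is programmed in: the grasp label is an output the networks produce after watching many differently filmed demonstrations of the same task.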
When observing human activity in real life, robotic systems are able to perceive the movements and objects they are designed to recognize in three dimensions over time, Fermüller said. However, when the movement and objects are viewed in a video, that information is not as immediately understood.

“The way we think of videos is as a three-dimensional entity in the sense that there are two dimensions of space and one dimension of time,” said Jason Corso, an associate professor of electrical engineering and computer science at the University of Michigan. “It’s not as 3D as the world we live in, but one can use a video … which is a spacetime signal, and from it correspond feature points that could be used to reconstruct the 3D environment that is being seen or imaged in that video.”

According to the paper, the development of deep neural networks that can efficiently capture raw data from video and enable robots to perceive actions and objects has revolutionized how visual recognition in artificially intelligent systems functions. The algorithms programmed into the University of Maryland’s cooking robot are one example of this neural functioning.

“So what was used here was really the hand description and object tool description, and then the action was inferred out of that,” Fermüller said.

Previous research on robotic manipulation and action recognition has been conducted using hand trackers and motion capture gloves to overcome the inherent limitations of trying to design artificial intelligence that can learn by example, she said.

“Part of the problem is that robot hands today are so behind what biological manipulation is capable of,” said Ken Forbus, a professor of computer science and education at Northwestern University. “We have more dynamic range in terms of our touch sensing. It’s very, very difficult to calibrate, as there’s all sorts of problems that might be real problems, and any system is going to have to solve them.”

Forbus said some of the difficulty in robotic design arises from the fact that the tools robots are outfitted with are far behind the ones humans are born with, both physically and in terms of sense perception.

“There is tons of tacit knowledge in human understanding—tons,” Forbus said. “Not just in manipulation, [but] in conceptual knowledge.”

According to Forbus, artificial intelligence researchers have three ways to incorporate this type of conceptual thinking into intelligent systems. The first is to try to design robots that can think and analyze in a manner superior to humans. The second is to articulate the tacit knowledge humans possess by boiling it down into a programmable set of rules. The third is to model the AI on the type of analogical thinking humans use as they discern information and make generalizations that provide a framework for how to act in future experiences.

“That’s a model that’s daunting in the sense that it requires lots and lots of [programmed] experience,” Forbus said. “But it’s promising in that if we can make analogical generalization work in scale … it’s going to be a very human-like way of doing it.”

St. Sabina Church celebrates Martin Luther King Jr. Day

January 26, 2015

St. Sabina Church, a Christian church in the Auburn-Gresham neighborhood on Chicago’s South Side, celebrated Martin Luther King Jr. Day on January 18 in honor of what would have been King’s 86th birthday as well a...

Buried languages leave lifelong trace

By Assistant Sports & Health Editor and Contributing Writer

December 1, 2014

Languages that people are exposed to at a young age form circuits in the brain that the body does not forget, even if the individual does. The existence of this buried information persists after child...

‘STEM’ disparity has early origin

Portrait of Marie Curie in her laboratory

By Assistant Sports & Health Editor

December 1, 2014

The gap between the sexes in the fields of science, technology, engineering and mathematics has narrowed considerably since the 1970s. Although that disparity has been addressed in certain respects, the root ...

Backward motion spins compass on brain’s maps

By Assistant Sports & Health Editor

November 24, 2014

Detailed maps of the physical world are formed in different regions of the brain as the central nervous system receives information from the five senses. The sense of sight helps humans develop topographic brain maps that give an accurate representation of where they are in space.

Researchers from the Scripps Research Institute in La Jolla, California, investigated whether movements animals make repeatedly in their environments c...
