Students face education improvements

Photo Illustration by Angela Conners

New facial detection software can measure students’ engagement levels

By Sports & Health Editor

As final exams and school projects begin to consume students’ lives, researchers are developing new software that can discern how engaged they are in the classroom.

A study published April 10 on IEEE Xplore, the IEEE’s digital library, introduced new facial detection software that can measure students’ engagement levels. Researchers from Emotient, a facial recognition technology development company, collaborated with researchers from Virginia State University and the University of California, San Diego to create the software.

“[The inspiration] was a combination of the opportunity to pursue automatic facial expression recognition research and applying it to education,” said Jacob Whitehill, co-founder and research scientist at Emotient.

The software can predict a student’s attention level with 70 percent accuracy, Whitehill said, adding that it can also predict students’ test scores more accurately than their previous grades can. The software is built on machine learning, which uses statistics-based algorithms to compare images and videos and learn what distinguishes them; in this case, it compares images of the students’ facial expressions, Whitehill said.

To test the software, students were recorded while using online learning software on their iPads and laptops, Whitehill said. The team then selected moments in which participants looked engaged or distracted and added the facial responses to a catalogue that processed the images to determine a student’s engagement level, Whitehill said.
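In broad strokes, that workflow resembles training a supervised classifier on hand-labeled video frames. The sketch below is a minimal illustration of that idea, not Emotient’s actual pipeline; the feature vectors, labels, and model choice are all hypothetical stand-ins.

```python
# Illustrative sketch only -- not Emotient's actual pipeline.
# Assumes each video frame has been reduced to a numeric feature
# vector (e.g., facial landmark positions) and hand-labeled by
# human annotators as "engaged" (1) or "distracted" (0).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in data: 500 frames, 20 facial features each.
features = rng.normal(size=(500, 20))
labels = rng.integers(0, 2, size=500)  # 1 = engaged, 0 = distracted

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# A statistics-based model learns which feature patterns
# distinguish engaged expressions from distracted ones.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```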

Jeff Cohn, professor of psychology at the University of Pittsburgh, said he thinks educators need to be able to accurately measure their students’ engagement levels to address learning concerns such as slipping grades or difficulty understanding the material.

“While we’re teaching, we’re [constantly] trying to evaluate our students’ responses to what we’re saying,” Cohn said.

Whitehill said educators have shown increased interest in this technology in the last five years as universities have begun implementing more online resources. He said facial recognition software research can improve online lecture content and offer students a way to provide more honest feedback.

“What we can do instead [of regular evaluations] is get an automatic and massively wide-scale sense of how our students respond to what they’re learning from,” Whitehill said. “By using this technology, you can identify parts that need to be improved.”

According to Javier Movellan, a researcher at the Machine Perception Laboratory at UC San Diego and lead researcher at Emotient, the company has developed several cameras that detect facial expressions associated with primary emotions, which are recognized across cultures, and with cognitive states, such as confusion or frustration.

The cameras used in the engagement study record muscle movements, Movellan said. When the camera detects which facial muscles moved in a particular area, it can then determine the subject’s emotion.
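As a rough illustration of that last step, the sketch below matches detected muscle movements, coded as Facial Action Coding System “action units,” against known combinations. The action units are real FACS codes, but this lookup table is a simplified, hypothetical example rather than how Emotient’s cameras work.

```python
# Simplified illustration: mapping detected facial muscle movements
# (FACS action units) to basic emotions. Real systems use far richer
# statistical models; these combinations are textbook approximations.
EMOTION_SIGNATURES = {
    frozenset({"AU6", "AU12"}): "enjoyment",       # cheek raiser + lip corner puller
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU4", "AU5", "AU7", "AU23"}): "anger",
}

def infer_emotion(detected_aus: set[str]) -> str:
    """Return the emotion whose signature best overlaps the detected AUs."""
    best, best_overlap = "unknown", 0
    for signature, emotion in EMOTION_SIGNATURES.items():
        overlap = len(signature & detected_aus)
        if overlap > best_overlap:
            best, best_overlap = emotion, overlap
    return best

print(infer_emotion({"AU6", "AU12"}))  # -> enjoyment
```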

In addition to student feedback, Movellan said the software could be deployed in retail settings to analyze customer experiences. He said the technology could also be applied in medicine: doctors could monitor the effects of a new drug by tracking patients’ facial expressions daily and observing their emotional reactions to the medication.

“Basically the technology is ready to be used in real-life environments,” Movellan said. “[We are] very close.”

Whitehill said the facial detection software is similar to human observation because neither is entirely accurate. However, the software is consistent, he said, whereas two human observers judging a subject’s engagement level are likely to disagree about the subject’s specific emotion. People are generally better able to determine a person’s emotions because they can account for the context of the situation, Cohn said.
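That disagreement between observers can be quantified. The snippet below is a toy example using Cohen’s kappa, a standard chance-corrected agreement statistic; the ratings are invented for illustration.

```python
# Illustrative only: measuring how much two human raters agree when
# labeling the same frames. Cohen's kappa corrects raw agreement for
# chance (1.0 = perfect agreement, 0.0 = chance-level agreement).
from sklearn.metrics import cohen_kappa_score

rater_a = ["engaged", "engaged", "distracted", "engaged", "distracted"]
rater_b = ["engaged", "distracted", "distracted", "engaged", "engaged"]

print("Cohen's kappa:", round(cohen_kappa_score(rater_a, rater_b), 2))
```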

For example, smiles of embarrassment and enjoyment both show the Duchenne marker, the contraction of the orbicularis oculi, the sphincter muscle that encircles the eye, Cohn said, adding that a person’s behavior within a given context is important when inferring emotion.

“Software does not recognize emotion,” Cohn said. “It may be recognizing expressions and assigning emotion labels, but to know what someone is feeling, that requires inference.”