Team’s New Tool Advances the Art of Busting Hidden Software Bugs
Computer Scientists Create Framework That Tracks Down Hard-to-Find Variability Bugs

One of the biggest challenges to fixing software bugs can be finding them.

With support from the National Science Foundation, computer scientists at The University of Texas at Dallas are going after some of the hardest-to-find errors, called variability bugs, which appear only when software is configured to work with certain hardware.

The researchers presented a framework they developed to detect variability bugs at the recent Association for Computing Machinery (ACM) Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering in Estonia. Variability bugs cannot typically be detected using off-the-shelf software analysis tools, said Dr. Shiyi Wei, assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.

“It is hard to test and analyze software for variability bugs because of all of the different hardware involved,” Wei said. “You have to test many software configurations to find these bugs, and it’s not possible to test all the configurations.”

A software program can have millions of configurations that allow it to run on different models and brands of computers and devices. Typical software analysis tools test only one configuration at a time, said Austin Mordahl, a software engineering doctoral student and computer science research assistant working on the project.

“The analogy I use is that it’s like ordering a pizza, where the initial code base for a program is the entire palette of topping options you have available at the beginning, and the final product contains selected elements. But off-the-shelf tools are only able to analyze the finished pizza,” Mordahl said. “So, if you don’t select the part of the code that has a bug in it to be included in the final product — let’s say you skipped the anchovies — then no matter how good your off-the-shelf tool is, it will never find the issue because the bug simply doesn’t exist in your executable.”
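Mordahl's scenario maps naturally onto C preprocessor flags, one common way configurable C programs select features at build time. The minimal sketch below is an invented illustration, not code from the study; the ENABLE_ANCHOVIES flag and topping_len function are hypothetical names chosen to echo the analogy.

```c
#include <stdio.h>
#include <string.h>

#ifdef ENABLE_ANCHOVIES
/* Hypothetical feature code: crashes if topping is NULL, but it only
 * exists in builds compiled with -DENABLE_ANCHOVIES. */
static size_t topping_len(const char *topping) {
    return strlen(topping); /* bug: no NULL check */
}
#endif

int main(void) {
#ifdef ENABLE_ANCHOVIES
    /* This configuration contains the variability bug. */
    printf("length: %zu\n", topping_len(NULL));
#else
    /* Built without the flag, the buggy code was never compiled in. */
    printf("no anchovies selected\n");
#endif
    return 0;
}
```

Compiled without -DENABLE_ANCHOVIES, the buggy function does not exist in the executable at all, which is why a tool that analyzes one finished configuration at a time can never report it.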

To identify variability bugs, the researchers tested 1,000 configurations of each of three programs, or 3,000 configurations in total. The project detected and confirmed 77 bugs, 52 of which were variability bugs.

Mordahl won the ACM Student Research Competition at the International Conference on Software Engineering in May for a paper describing this research titled “Toward Detection and Characterization of Variability Bugs in Configurable C Software: An Empirical Study.”

Millions of people rely on highly configurable software, yet these systems lack adequate automated tools to keep them secure and reliable, Wei said. He and his fellow researchers plan to continue developing and improving their framework, and they hope their data set will support future research on analyzing highly configurable code.

“Configured software is one of the most common types we use,” Wei said. “That’s why it’s so important to improve the quality of this software.”

The UT Dallas researchers collaborated on the project with computer scientists at The University of Texas at Austin, the University of Maryland and the University of Central Florida.

 

Researchers Create Automated System for Improving Computer Bug Reports

When software doesn’t work properly, many frustrated users fill out online bug reports. Too often, however, their explanations are unclear or incomplete, leaving developers without enough information to resolve the issue, said Dr. Andrian Marcus, professor of computer science in the Jonsson School.

With funding from the National Science Foundation, Marcus is working with other computer scientists to create a more effective way for users to report problems to developers. The researchers have developed a tool that provides feedback to users on the quality of their reports. The research was presented recently at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, where it won an ACM SIGSOFT Distinguished Paper Award.

“Thousands of these reports are passed on to developers,” Marcus said. “The first thing they have to do is reproduce the bug.”

Without enough information to reproduce the error, developers spend an excessive amount of time resolving the issue, if they can fix it at all, he said. The researchers aim to bridge the gap between users’ descriptions of what happened and the technical information that developers need.

Marcus teamed up with Dr. Vincent Ng, professor of computer science and an expert in natural language processing and machine learning, and two doctoral students: Jing Lu, a research assistant in computer science, and Oscar Chaparro PhD’19, now an assistant professor at the College of William & Mary. The UT Dallas researchers also collaborated with colleagues at William & Mary and the University of Sannio in Italy.

The team built an automated approach that can analyze the text in a bug report, assess the quality of the information it contains, such as the steps needed to reproduce the problem, and provide feedback to users who report bugs.

“Our research is about automatically identifying these components and allowing the machine to determine the steps to reproduce an error,” Marcus said.
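The team's tool uses natural language processing and machine learning rather than simple pattern matching, but the idea of automated feedback on a report's content can be sketched with a toy keyword heuristic. The sketch below is entirely hypothetical: the phrases, function names and feedback messages are invented for illustration and do not come from the researchers' system.

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Case-insensitive substring search (toy helper). */
static int contains_ci(const char *text, const char *phrase) {
    size_t n = strlen(phrase);
    for (const char *p = text; *p; p++) {
        size_t i = 0;
        while (i < n && p[i] &&
               tolower((unsigned char)p[i]) == tolower((unsigned char)phrase[i]))
            i++;
        if (i == n)
            return 1;
    }
    return 0;
}

int main(void) {
    const char *report =
        "The app crashes when I click Save. "
        "It should just store the file.";

    /* Warn if the report never spells out reproduction steps. */
    if (!contains_ci(report, "steps to reproduce") && !contains_ci(report, "1."))
        puts("Feedback: please list numbered steps to reproduce the problem.");

    /* Warn if no expected behavior is described. */
    if (!contains_ci(report, "should") && !contains_ci(report, "expected"))
        puts("Feedback: please describe what you expected to happen.");

    return 0;
}
```

A real system in this spirit would replace the keyword checks with trained language models, but the feedback loop it illustrates, scanning a report and telling the user what is missing, matches the workflow Marcus describes.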

The platform identified errors in bug reports with more than 90% accuracy. The researchers’ long-term goal is to create an interactive system that will produce better reports.

“We don’t teach people how to write bug reports, so they write gibberish,” Marcus said. “If we can create a system that elicits the information from a conversation, we believe there will be better bug reports coming out of it.”

Media Contact: The Office of Media Relations, UT Dallas, (972) 883-2155, [email protected].