A new algorithm can triage programming bugs, highlighting dangerous flaws.

What’s new: Microsoft researchers developed a machine learning model that reads the titles of bug reports and recognizes those describing flaws that compromise security. Further, it sorts security bugs by severity.

How it works: Along with stellar software products, Microsoft developers produce 30,000 coding errors monthly. Users or other developers who encounter one can file a report. These reports are generated automatically, including a brief title that describes the bug plus detailed information that may include passwords or other sensitive details. To protect user privacy, Microsoft’s model reads only the title.

  • The team collected 13 million bug reports between 2001 and 2018 to serve as training, validation, and test data. Security experts approved the training data before training and evaluated the model’s performance in production.
  • The data was annotated either by hand or automatically using software that recognizes similarities between previous bugs and new ones.
  • The team extracted features using TF-IDF and trained separate models based on Naive Bayes, AdaBoost, and logistic regression, as described in a recent paper. Logistic regression achieved the best results.
  • The system first assigns low priority to bugs with no security impact, then grades security bugs as critical, important, or low-impact.
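The pipeline in the bullets above — TF-IDF features over report titles, a logistic regression classifier, and a two-stage split into security triage and severity grading — can be sketched in a few lines of scikit-learn. This is an illustrative reconstruction, not Microsoft’s code: the titles, labels, and `triage` helper below are invented for demonstration, and the real system was trained on millions of internal reports.

```python
# Minimal sketch of the two-stage triage described above, assuming scikit-learn.
# All sample data is hypothetical; the actual corpus is Microsoft-internal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: bug-report titles with severity labels.
titles = [
    "Buffer overflow in authentication handler",
    "SQL injection in search endpoint",
    "Typo in settings dialog label",
    "Crash when opening empty project file",
    "Privilege escalation via debug API",
    "Button misaligned on high-DPI displays",
]
labels = [
    "critical", "critical", "not_security",
    "not_security", "important", "not_security",
]

# Stage 1: does the title describe a security bug at all?
stage1_labels = ["security" if l != "not_security" else "non_security" for l in labels]
security_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
security_clf.fit(titles, stage1_labels)

# Stage 2: grade security bugs by severity (critical / important / low-impact).
sec_titles = [t for t, l in zip(titles, labels) if l != "not_security"]
sec_labels = [l for l in labels if l != "not_security"]
severity_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
severity_clf.fit(sec_titles, sec_labels)

def triage(title: str) -> str:
    """Hypothetical helper: low priority for non-security bugs, else a severity grade."""
    if security_clf.predict([title])[0] == "non_security":
        return "low priority (no security impact)"
    return severity_clf.predict([title])[0]
```

Note that only the title string ever reaches the vectorizer, mirroring the privacy constraint described above; the detailed report body is never fed to the model.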

Results: The model recognized security bugs with 93 percent accuracy and achieved 98 percent area under the ROC curve. Based on an extensive review of earlier work on automated bug-hunting, the researchers believe their system is the first to classify software flaws based on report titles alone. They expect to deploy it in coming months.

Behind the news: Software bugs are responsible for some of history’s most infamous tech headaches.

  • In 2014, the Heartbleed bug rendered huge swaths of the internet vulnerable to hackers.
  • Issues in General Electric’s energy management software compounded a local power outage into a widespread blackout in 2003.
  • Many software systems developed prior to the mid-1990s used two digits to represent the year. This design flaw led to global panic and expensive efforts to make sure systems did not crash at the dawn of the year 2000.

Why it matters: In 2016, the U.S. government estimated that cybersecurity breaches — many of them made possible by software defects — cost the nation’s economy as much as $109 billion. Being able to spot and repair the most dangerous flaws quickly can save huge sums of money and keep people safer.

We’re thinking: As producers of our fair share of bugs, we’re glad to know AI has our back.
