Microsoft Security Risk Detection uses artificial intelligence to help developers find buggy code and detect security vulnerabilities in their apps.
Microsoft Security Risk Detection, made publicly available Friday, uses artificial intelligence (AI) to help software developers find bugs in their code and other vulnerabilities. The cloud-based tool, previously known as Project Springfield, is meant to complement the work being done by developers and security experts, according to a blog post.
Microsoft's David Molnar, who leads the group behind Microsoft Security Risk Detection, said in the post that the tool performs fuzz testing, a quality assurance method that feeds unexpected or malformed inputs to software to expose buggy code and security problems. As the volume of software being developed grows, so does the need for this kind of testing, and it becomes harder to manage by hand.
The AI is not meant to replace human workers, but to augment the work they're already doing, the post said.
SEE: Bug Bounty: Web Hacking (TechRepublic Academy)
"We use AI to automate the same reasoning process that you or I would use to find a bug, and we scale it out with the power of the cloud," Molnar said in the post.
To conduct its fuzz testing, Microsoft Security Risk Detection asks "what if" questions to determine the root cause of a given issue, the post said. By doing this over and over, it narrows its focus, looking for problems that other tools may have missed. It is helpful for companies that build software in-house, or for those that customize off-the-shelf software or open source tools, the post said.
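The post doesn't detail how Microsoft Security Risk Detection works internally, but the basic idea behind fuzz testing can be sketched in a few lines: start from a valid input, mutate it at random, feed each mutant to the code under test, and record any input that triggers an unexpected crash. The toy parser, its planted bug, and the mutation strategy below are all illustrative assumptions for this sketch, not part of Microsoft's tool.

```python
import random


def parse_header(data: bytes) -> int:
    """Toy parser: expects a b"HDR" magic string followed by a version byte."""
    if data[:3] == b"HDR":
        return data[3]  # planted bug: IndexError when input is exactly b"HDR"
    raise ValueError("bad magic")


def mutate(seed_input: bytes, rng: random.Random) -> bytes:
    """Apply one random mutation: flip a bit, truncate, or append a byte."""
    data = bytearray(seed_input)
    op = rng.randrange(3)
    if op == 0 and data:            # flip one random bit
        data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
    elif op == 1 and data:          # truncate at a random point
        del data[rng.randrange(len(data)):]
    else:                           # append a random byte
        data.append(rng.randrange(256))
    return bytes(data)


def fuzz(target, seed_input: bytes, trials: int = 500, seed: int = 0):
    """Run mutated inputs through `target`, collecting unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = mutate(seed_input, rng)
        try:
            target(data)
        except ValueError:
            pass                    # graceful rejection: expected behavior
        except Exception as exc:    # anything else is a bug worth reporting
            crashes.append((data, exc))
    return crashes


crashes = fuzz(parse_header, b"HDR\x01")
for bad_input, exc in crashes[:3]:
    print(f"crash on {bad_input!r}: {type(exc).__name__}")
```

Here the fuzzer stumbles onto the input `b"HDR"` by truncation, exposing the out-of-bounds read that well-formed test inputs would never reach; production fuzzers refine the same loop with coverage feedback and, in Microsoft's case, AI-guided reasoning about which inputs to try next.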
Electronic signature company DocuSign has been testing the new tool. The firm's senior director of software security, John Heasman, said the company has been using it as "an extra step of assurance."
By automating some of the common security processes around bug testing, the tool supports digital transformation efforts as well, Molnar said in the post. It also helps smaller companies, which may not have access to the same resources as larger enterprises, improve their security posture.
Cybersecurity has recently emerged as one of the core enterprise applications for AI. Startups like Armorway have been working on using AI to predict breaches, and IBM's Watson has also been trained to help detect and mitigate potential threats.
The 3 big takeaways for TechRepublic readers
- Microsoft Security Risk Detection uses AI to help businesses better detect bugs in their software and identify potential security vulnerabilities.
- The new tool conducts fuzz testing, and is meant to augment existing efforts, not fully replace human security experts.
- Cybersecurity is fast emerging as a new initiative for AI in the enterprise, as startups and tech giants have been working with AI tools to improve security offerings.
Also see
- Information security policy (Tech Pro Research)
- iCloud security flaw put iPhone, Mac passwords at risk (ZDNet)
- How the DoD uses bug bounties to help secure the department's websites (TechRepublic)
- Writing Windows or Linux apps? Microsoft just launched a cloud-powered bug hunter to find the flaws in your code (ZDNet)
- Ex-Facebook engineers launch Honeycomb, a new tool for your debugging nightmares (TechRepublic)