By the end of 2017, some 61% of businesses had implemented artificial intelligence (AI) in their organizations, a 23% jump from the previous year, according to Narrative Science. And the incorporation of AI into business will only rise: The number of medium and large enterprises using machine learning is predicted to double by the end of 2018, according to Deloitte.

Machine learning is a form of AI that interprets massive amounts of data, applies algorithms to the material, and makes predictions based on its observations. Common technologies that employ machine learning include facial recognition, speech recognition, translation services, and object recognition.
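That observe-then-predict loop can be sketched in a few lines of pure Python. The nearest-neighbor rule and the data below are purely illustrative, not any particular product's algorithm:

```python
# Toy illustration of the machine-learning loop: observe labeled
# examples, then predict labels for new observations.
# (All data here is made up; real systems use far richer models.)

def predict(training_data, point):
    """Label `point` with the label of its nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: distance(ex[0], point))
    return label

# (features, label) pairs: e.g. (login attempts, MB transferred)
observations = [
    ((2, 10), "normal"),
    ((3, 12), "normal"),
    ((90, 500), "suspicious"),
    ((120, 650), "suspicious"),
]

print(predict(observations, (4, 11)))     # lands near the "normal" cluster
print(predict(observations, (100, 600)))  # lands near the "suspicious" cluster
```

The point of the sketch is only the shape of the loop: past observations in, a prediction about a new observation out.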

SEE: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

Businesses typically use machine learning for locating and processing large data sets that no human could sort through in a timely manner, if at all. Major companies like Amazon, IBM, Google, and Microsoft use machine learning to improve business functionality. But some organizations are implementing machine learning for a more narrow purpose: cybersecurity.

While many assume machine learning makes cybersecurity professionals’ lives much easier by better tracking security issues, that’s not necessarily the case. Just like any new technology, machine learning still has its flaws–problems that turn the tech into more of a headache than a helping hand in the security space.

Here are five ways machine learning may make life harder for cybersecurity pros.

1. Machine learning-equipped hackers

Machine learning can be helpful in defending against attackers, but destructive when used by the wrong people. “An arms race is occurring as each side tries to one-up the other to make a better AI,” said Ryan Ries, AI/machine learning expert at Onica.

Machine learning works faster than humans, a quality that is typically celebrated. That speed is far less welcome, however, when it powers cyberattacks.

“Human attackers will perform reconnaissance on a potential victim before launching a cyber attack, investigating things like what software they are running, the version of that software, any known vulnerabilities for said version, or any unpublished zero-day exploits shared among the hacker community that could improve their attack. This process can take many hours,” said Emil Hozan, security analyst at WatchGuard Technologies. “But with machine learning, this research process can be carried out much more quickly and efficiently. Machine learning/AI hacking can also learn from past experiences; what didn’t work on a similar previous hack attempt could be skipped over in favor of a new tactic.”
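The lookup Hozan describes, cross-referencing a target's software versions against known flaws and skipping tactics that failed before, is trivially automatable. A hedged sketch, in which every service name and vulnerability ID is invented:

```python
# Hypothetical version-to-vulnerability lookup, the same cross-referencing
# an ordinary vulnerability scanner performs. Every name and ID is invented.

KNOWN_VULNS = {
    ("exampled", "2.4.1"): ["CVE-0000-0001", "CVE-0000-0002"],
    ("exampled", "2.5.0"): [],  # patched version: nothing known
}

FAILED_BEFORE = {"CVE-0000-0001"}  # "learning" from past failed attempts

def plan(inventory):
    """Return vulnerability IDs worth trying for each discovered service."""
    candidates = []
    for service, version in inventory:
        for vuln in KNOWN_VULNS.get((service, version), []):
            if vuln not in FAILED_BEFORE:
                candidates.append(vuln)
    return candidates

print(plan([("exampled", "2.4.1"), ("exampled", "2.5.0")]))
# the previously failed exploit is skipped; one candidate remains
```

A human performs this cross-referencing over hours; an automated system performs it in milliseconds and remembers every past failure, which is Hozan's point.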

2. Lack of transparency

In most cybersecurity systems, when a flaw is detected, the administrator can go in and see what caused the alert, according to Gartner research vice president Anton Chuvakin. However, with machine learning-based systems, the cause of alerts cannot be pinpointed, presenting a lack of transparency. Sometimes, these alerts end up being false positives, said Chuvakin.

“Not only can it be wrong, but it’s also harder to ascertain and, as people say in security, harder to triage what it means,” said Chuvakin. “Are we in real trouble? Are we somewhat in trouble, or are we not in trouble at all?”
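The contrast Chuvakin draws can be made concrete: a rule-based detector can report the rule that fired, while a learned model may expose only a score. A minimal sketch, with arbitrary weights standing in for learned parameters:

```python
# A rule-based alert explains itself; a learned score does not.
# The weights below are arbitrary stand-ins for learned parameters.

def rule_based_alert(failed_logins):
    if failed_logins > 10:
        return True, "rule: more than 10 failed logins"  # reason attached
    return False, None

WEIGHTS = [0.8, -0.3, 1.2, 0.05]  # what does each weight "mean"? unclear

def learned_alert(features, threshold=1.0):
    score = sum(w * f for w, f in zip(WEIGHTS, features))
    return score > threshold  # only True/False: the analyst must triage why

print(rule_based_alert(12))                  # alert with a reason
print(learned_alert([2.0, 1.0, 0.5, 4.0]))   # alert with no reason
```

When the second detector fires, the administrator gets no equivalent of "more than 10 failed logins" to inspect, which is the transparency gap described above.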

3. Feeding the right data

Machine learning systems don’t work when just any and all data are fed to them. These systems are actually a little picky: Modern machine learning algorithms rely on very specific data to work, said Chuvakin.

“When we spoke to some of the vendors, they told us that the challenges are often not about machine learning, but more about how you feed in the right data,” said Chuvakin.

SEE: Enterprise IoT research: Uses, strategy, and security (Tech Pro Research)

If companies want a quality output, the input has to be quality as well. “I would say that use of machine learning puts higher pressure on security professionals to deliver better quality input data, better quality sensor data,” Chuvakin said. “The old-school systems may be less sensitive to quality inputs.”
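Chuvakin's point about input quality is easy to demonstrate: the very same model degrades when its sensor data gets noisy. A toy sketch on synthetic data:

```python
# Same "model", clean sensors vs. degraded sensors. Data is synthetic.
import random
random.seed(1)  # deterministic noise for the demo

def classify(value, threshold=50):
    """A fixed, already-trained 'model': flag large readings."""
    return "suspicious" if value > threshold else "normal"

# Clean sensor readings paired with ground-truth labels.
clean = [(v, "normal") for v in range(0, 40)] + \
        [(v, "suspicious") for v in range(60, 100)]
# Degraded sensors: the same events, each reading off by up to +/-30.
noisy = [(v + random.uniform(-30, 30), label) for v, label in clean]

def accuracy(data):
    return sum(classify(v) == label for v, label in data) / len(data)

print(accuracy(clean))  # perfect on quality input
print(accuracy(noisy))  # noticeably worse on degraded input
```

Nothing about the model changed between the two runs; only the quality of the sensor data did, which is exactly the pressure on security teams that Chuvakin describes.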

4. Humans still need to make the system work

Since machine learning systems can’t explain why something was flagged, something (or somebody) else is needed. Many people worry that AI will take jobs, but with the specialized skills needed for machine learning to work, more jobs might actually be created, said Chuvakin.

“For the system to work, you have to have a security data scientist, which is obviously really rare and really expensive. It’s just a peculiar consequence of some of the advanced math being used in the product,” said Chuvakin. “Not only are the systems not always explainable, but to actually tune the product to operate effectively, you have to have skills that most security operations teams don’t have.”

It doesn’t appear that machine learning will replace security professionals; most companies will actually need additional security pros to make these systems function properly.

5. The tech talent shortage

The specialized skills necessary to make machine learning work create another problem: the need for hard-to-find talent. Talent shortages in the tech world, especially among data scientists, are no secret. Handling machine learning systems is difficult, so it’s hard to find individuals able to help with such niche systems, leaving cybersecurity pros in a bind.

“Machine learning is actually dramatically more difficult than most people realize,” said Chuvakin. “Companies are having trouble finding talent. But think about it: If you have a large company and they want to use machine learning for business and they’re having trouble hiring, do you think the security team that doesn’t make money will be able to hire the right talent to do machine learning? The answer is very often no.”