In 2016, the White House recognized the importance of AI at its Frontiers Conference. The concept of driverless cars became a reality, with Uber's self-driving fleet in Pittsburgh and Tesla's new models equipped with the hardware for full autonomy. And Google DeepMind's AlphaGo program beat the world champion at Go, 10 years ahead of predictions.

The "increasing use of machine learning and knowledge-based modeling methods" is a major trend to watch in 2017, said Marie desJardins, associate dean and professor of computer science at the University of Maryland, Baltimore County. How will this play out? TechRepublic spoke to the experts to look at three overarching ways that AI will make an impact.

1. AI’s growing influence

"The interest from the general population in the societal impacts of AI will grow, especially where these impacts are already tangible," said Vince Conitzer, professor of computer science at Duke University. "Many of the general themes have been previously identified (technological unemployment, autonomous weapons, machine learning systems based on biased data, machine learning systems being used to identify and suppress dissent, AI systems making moral judgments), but specific new technological developments and their failures will further drive and inform the conversation."

Conitzer sees the interest in AI reaching specific groups outside the industry as well. “Lawyers will grapple with how the law should deal with autonomous vehicles, economists will study AI-driven technological unemployment, sociologists will study the impact of ever more effective AI-based recommender systems and personal assistants, and so on,” he said.

Fabio Cardenas, CEO of Sundown AI, agrees. More directly, he sees AI impacting specific roles within organizations, "such as accounting, finance, supply chain, HR, or other fields where work was performed by specialists," he said. "This growing sphere will allow the AI to spread to several departments in multiple industries across the globe."

2. AI going rogue

The Terminator-esque scenario is a common trope of AI writing. But is there any truth to it? According to Cardenas, "AI going rogue" could become a reality in 2017. He envisions "AI created for nefarious purposes by [a] small group of cyber bandits to defraud institutions or individuals." "This rogue AI will enable hacking into systems that were once thought un-hackable," he said.

Cardenas also sees this happening through corrupting AI that already exists. "If the training set of the AI is compromised, the hackers could introduce bias or exemptions in order to subvert the AI's predictive capabilities for their own gain," he said.
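
To make the attack Cardenas describes concrete, here is a minimal sketch of training-set poisoning via label flipping, assuming a scikit-learn-style workflow; the synthetic "transaction" data and the particular flipping rule are hypothetical illustrations, not details from Cardenas.

```python
# Minimal sketch of label-flipping data poisoning (hypothetical example).
# An attacker who can tamper with training labels biases a fraud classifier
# toward missing exactly the cases they care about.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "transactions": 2 features, label 1 = fraud.
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips half of the fraud labels to "legitimate" in the training set.
poisoned = y_train.copy()
fraud_idx = np.flatnonzero(poisoned == 1)
poisoned[fraud_idx[: len(fraud_idx) // 2]] = 0

clean_model = LogisticRegression().fit(X_train, y_train)
bad_model = LogisticRegression().fit(X_train, poisoned)

# The poisoned model scores noticeably worse on untampered test data.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", bad_model.score(X_test, y_test))
```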

One way this could happen, Cardenas said, is through “AI developed to make other AI smarter.” AI can self-improve by checking for blind spots in training data, making adjustments, “or, if we are lucky, writing code to improve the other AI,” he said. The result, he said, will help optimize AI. “We’re still far from a super intelligent AI, but this trend will get us closer.”
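
A toy sketch of the "AI auditing AI" loop Cardenas alludes to appears below; the blind-spot heuristic used here (measure per-class error on held-out data, then oversample the weakest class and retrain) is an assumption chosen for illustration, not his method.

```python
# Toy sketch of one model "auditing" another for blind spots (hypothetical
# heuristic): find the class with the worst held-out error, oversample it,
# and retrain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_classes=3, n_informative=5,
                           weights=[0.7, 0.2, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# "Auditor" step: measure per-class error on held-out data.
preds = model.predict(X_val)
errors = [(cls, np.mean(preds[y_val == cls] != cls)) for cls in np.unique(y)]
weak_cls = max(errors, key=lambda e: e[1])[0]

# Adjustment step: duplicate training examples of the weakest class.
idx = np.flatnonzero(y_train == weak_cls)
X_aug = np.vstack([X_train, X_train[idx]])
y_aug = np.concatenate([y_train, y_train[idx]])

improved = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
print("before:", model.score(X_val, y_val),
      "after:", improved.score(X_val, y_val))
```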

Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, also sees AI “failures” as a trend of 2017.

“The most interesting and important trend to watch in AI, and the one I am now closely tracking, is: AI failures will grow in frequency and severity proportionate to AI’s capability,” he said. In other words, as we make gains, we also increase the likelihood of “malevolent” AI.

3. Moral issues in AI

How do we prevent AI from going rogue? It's a question many AI researchers are concerned with. Across the board, experts worry about the ethical implications of AI. After all, it's easy to see ways that AI can make mistakes, from reinforcing biases to spouting racial slurs to, in an extreme case, failing to prevent a fatal accident.

SEE: MIT’s ‘Moral Machine’ crowdsources decisions about autonomous driving, but experts call it misguided (TechRepublic)

"The traditional AI community within computer science will increasingly address societal and moral issues in their work," said Conitzer. "AI researchers are already interested in these topics, but we are only at the beginning of figuring out how to make concrete technical contributions along these lines." Solving these issues, he said, will fall to "an ever growing number of computer scientists."

Moshe Vardi, a computer science professor at Rice University, also said he sees this as a trend for 2017. “Ethical issues related to AI will continue to gather attention, around impact of automation on labor, lethal autonomous weapons, algorithmic fairness and transparency, and more,” he said.

And Toby Walsh, professor of AI at the University of New South Wales, agrees. We will see "an autonomous car accidentally killing an innocent pedestrian, cyclist or passenger of another vehicle," he said. This will highlight the importance of these issues as we develop and regulate AI, he added.

The value-alignment problem (how do we ensure AI operates with the same values as humans?) "stops being seen as a problem for superintelligence, but one for algorithms and machine learning programs we make today," said Walsh.