Building a slide deck, pitch, or presentation? Here are the big takeaways:
- South Korean university KAIST opened an AI weapons lab in February in partnership with the firm Hanwha Systems.
- A larger group of artificial intelligence researchers boycotted KAIST, which eventually responded that it has no intention to build "lethal autonomous weapons systems."
South Korean university KAIST recently opened a research lab to work on artificial intelligence (AI)-powered weapons, and the AI research community was not happy about it. On Wednesday, more than 50 researchers announced a boycott of the university until it promised to stop working on such weapons.
The boycott was organized by Toby Walsh, a professor at the University of New South Wales in Sydney, and centered around an open letter signed by the participants. The letter notes that the published goals for the Research Center for the Convergence of National Defense and Artificial Intelligence at KAIST were to "develop artificial intelligence (AI) technologies to be applied to military weapons, joining the global competition to develop autonomous arms."
The United Nations has been discussing a ban on such weapons for some time. Even with those discussions underway, the researchers said in the letter, KAIST is attempting to "accelerate the arms race to develop such weapons."
SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)
Those who signed the open letter pledged that they would not visit the university, host visitors from the university, or contribute to any research project at KAIST. According to the letter, the boycott will continue until the researchers are assured that KAIST's Research Center for the Convergence of National Defense and Artificial Intelligence won't develop any "autonomous weapons lacking meaningful human control."
According to a recent Reuters report, KAIST responded soon after the letter was published, stating that it had "no intention to engage in development of lethal autonomous weapons systems and killer robots."
KAIST was formerly known as the Korea Advanced Institute of Science & Technology. According to the letter, it opened the research center on February 20 in partnership with Hanwha Systems, a defense firm within the Hanwha Group, a company that makes explosives.
As noted by Reuters, KAIST President Sung-Chul Shin said the university was "significantly aware" of ethical issues around AI, and reaffirmed that it won't conduct research on "autonomous weapons lacking meaningful human control."
Autonomous weapons and the risk of AI, in general, have been a topic of debate in tech circles for years. Tesla CEO Elon Musk went as far as to say that AI was "more dangerous than nukes," and that it needs a governing body to oversee its implementation.
Additionally, a recent report examined the potential malicious uses of AI, including the ideas that rogue nation states or terrorists could use the technology for their own ends.
"If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before," the letter said.
In closing, the letter urges KAIST not to develop the weapons, and to "work instead on uses of AI to improve and not harm human lives."
Conner Forrest has nothing to disclose. He doesn't hold investments in the technology companies he covers.
Conner Forrest is a Senior Editor for TechRepublic. He covers enterprise technology and is interested in the convergence of tech and culture.