A new Google research project, the People + AI Research initiative (PAIR), aims to address bias in machine learning and offers tools to those who work in AI.
Artificial intelligence (AI) is progressing at a breakneck pace, raising questions about how smart machines will become, how machines and humans can work together, and what tasks AI will take over in the new era of machine intelligence. On Monday, Google announced a step toward addressing the human-machine dilemma: a new project, the People + AI Research initiative (PAIR), that centers on collaboration.
In a blog post published Monday, Fernanda Viégas and Martin Wattenberg, both senior staff research scientists on the Google Brain team, announced Google's plan to draw its researchers together in an effort to understand how people interact with AI.
According to the post, "The goal of PAIR is to focus on the 'human side' of AI: the relationship between users and technology, the new applications it enables, and how to make it broadly inclusive. The goal isn't just to publish research; we're also releasing open source tools for researchers and other experts to use."
The new initiative is an attempt to take a holistic look at who AI touches, including coders, researchers, business professionals, and employees who will use AI tools.
The post outlines three areas that PAIR's research will address:
- Educational resources for engineers to build machine learning systems.
- Support for professionals like doctors, technicians, designers, farmers, and musicians who want to use AI in their work.
- Tools to help everyday users take advantage of--and benefit from--AI.
Google's PAIR builds on a host of other research into AI/human collaboration. The initiative will also host visiting academics Brendan Meade of Harvard and Hal Abelson of MIT, who will contribute research on AI in education and science.
"One key to the puzzle is design thinking," the post stated. "Instead of viewing AI purely as a technology, what if we imagine it as a material to design with? Here history might serve as a guide: For instance, advances in computer graphics meant more than better ways of drawing pictures--and that led to completely new kinds of interfaces and applications."
The announcement also unveiled two open-source visualization tools--Facets Overview and Facets Dive--meant to support engineers who are beginning to work in machine learning (ML). An important component of this is transparency around the huge data sets these systems rely on. "One of the ways that ML engineering seems different than traditional software engineering is a stronger need to debug not just code, but data too," the post stated.
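To make the idea of "debugging data" concrete, here is a minimal sketch of the kind of per-feature summary a tool like Facets Overview surfaces: counts of missing values, distinct values, and the most common value per feature. This is an illustrative example in plain Python, not the Facets API; the function name and structure are assumptions for the sake of the example.

```python
from collections import Counter

def feature_overview(rows, features):
    """Summarize each feature across a list of dict records.

    Reports the sort of statistics used to spot data problems
    (missing values, suspiciously few distinct values, skew).
    """
    stats = {}
    for f in features:
        values = [r.get(f) for r in rows]
        present = [v for v in values if v is not None]
        stats[f] = {
            "missing": len(values) - len(present),       # rows lacking this feature
            "unique": len(set(present)),                 # distinct observed values
            "top": Counter(present).most_common(1)[0] if present else None,
        }
    return stats

# A tiny dataset with one missing age and a heavily skewed label.
rows = [
    {"age": 34, "label": "yes"},
    {"age": 29, "label": "yes"},
    {"age": None, "label": "yes"},
    {"age": 41, "label": "no"},
]
print(feature_overview(rows, ["age", "label"]))
```

A summary like this makes it obvious at a glance when a training set is missing values or dominated by one class--exactly the kind of transparency the post argues ML engineering needs before any model is trained.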
"We believe AI can go much further--and be more useful to all of us--if we build systems with people in mind at the start of the process."
The 3 big takeaways for TechRepublic readers
- Google has just launched a new project, People + AI Research initiative (PAIR), that centers around human-machine collaboration.
- The project is an attempt to offer educational resources for researchers, business professionals, and employees who will use AI tools.
- The initiative also addresses bias in AI by including tools that increase transparency in the data sets that machine learning algorithms train on.
Also see
- Google weaves AI and machine learning into core products at I/O 2017 (TechRepublic)
- Google DeepMind: The smart person's guide (TechRepublic)
- Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)
- The Machine Learning and Artificial Intelligence Bundle (TechRepublic Academy)
- Should Google be your AI and machine learning platform? (ZDNet)
- Bias in machine learning, and how to stop it (TechRepublic)
- Can AI really be ethical and unbiased? (ZDNet)