I’ve been thinking more about “moonshots” in recent years: big, hairy, audacious goals that countries and huge companies take on, right at the edge of humanity’s capabilities, to tap people’s potential. The primary reason moonshots are top of mind is that I attended a forum at the White House this week in which the next frontier for exploration was defined as the three pounds of neural tissue between our ears.
The Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative will focus on developing tools and technologies that give neuroscientists, physicians, and humanity more insight into how the brain works, why it works that way, and how it behaves in different situations. The initiative is explicitly multidisciplinary, seeking to establish centers and research clusters that combine advances in engineering and physics with neuroscience.
In a post describing this new US national moonshot, Dr. Francis Collins, director of the National Institutes of Health (NIH), connected today’s gaps in knowledge to those of last century:
“When on May 25, 1961, President Kennedy announced plans to go to the moon by the end of the decade, most Americans (not to mention space scientists!) were stunned because much of the technology needed to achieve a moonshot didn’t yet exist. Likewise, medical research today faces a wide gap between our current technologies for studying the brain and what will be needed to realize BRAIN’s ambitious goals. Right now, we’re pretty good at studying individual brain cells and we also are able to image the whole brain when someone is holding very still inside a neuroimaging machine (such as a PET or MRI scanner). What’s missing are tools to see what’s really going on within the brain’s neural circuitry — the crucial middle level at which most of human cognition and behavior is generated, as well as ways to look at the brain when people are moving around and interacting in the real world.”
Projects funded under BRAIN seek to combine landmark discoveries like the sequencing of the human genome with new tools for mapping neuronal connections, nanoscience, and biomechanical engineering in collaborative environments, said John Holdren, the director of the White House Office of Science and Technology Policy. New photonic capabilities enable researchers to use pulses of light to see how cells affect behavior, applying neuroscience and physics to see how the brain is connected. This will go a long way toward producing a full, clear picture of how the brain functions at the speed of thought, said Holdren, leading to the treatment of disease and better learning outcomes. Like Dr. Collins, he highlighted the lack of available and effective tools to study the brain in action as a key barrier to scientists measuring neural circuits in real time. While such tech doesn’t exist today, if the BRAIN Initiative succeeds, it will.
The most exciting thing to happen in the year since the initiative was announced is the energy and enthusiasm it has generated in the engineering, science, and industrial communities, said Mark Schnitzer, who has built miniature optical systems that interface with the brains of animals, pioneering the use of optogenetics in research at Stanford.
Schnitzer said that he expects the results of the new research will be profound, from unlocking the central mysteries of brain function to harnessing brain computational strategies for humanity’s technological purposes. He also noted the importance of the BRAIN Initiative combining existing technology and science, listing a series of advances that will have applications in future applied research.
- Wireless technologies will enable non-invasive monitoring, measurement, and control.
- Miniaturized cameras, microscopes, fiber optics, sensors, and other advances in the photonics industry will enable systems to collect and transmit data at higher speeds.
- Algorithms developed for use in military defense and counterintelligence surveillance will enable scientists to track small organisms.
- Adaptive imaging and robotics will give scientists new tools for precise observation.
- Computation and machine-learning that has been applied towards consumer behavior may help scientists to understand brain behavior and disorders.
In sum, the combination of technology, engineering, and neuroscience will drive discovery and clinical utility.
A video of the White House conference about the BRAIN Initiative is embedded below. If you watch nothing else, jump to the 50:00 mark for the discussion between Dr. Geoffrey Manley, a neurosurgeon, and professor Kerry Ressler. The pair discuss the role of tech in the treatment of traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD), both of which affect millions of people annually in the US and many more around the world.
Given the dual use of some of the tools and therapies that may result from this work, integrating ethics into neuroscience and applied research from the beginning will be crucial. For instance, using optogenetics to target the sections of the amygdala that trigger the “fight or flight” response, or brain implants that could control feelings, has huge implications for people suffering from PTSD but could also be used in military contexts, creating soldiers without fear.
As this research bears fruit, doctors will gain new insight into the causes and effective treatments for some of the most severe and debilitating brain disorders and conditions.
I do wonder, though, if the resources behind the BRAIN Initiative are sufficient. I’ve been reading reviews of a new book by Google chairman Eric Schmidt, in which he and his coauthor present a carefully manicured version of how Google works. I heard Schmidt on WAMU last month, talking to Diane Rehm about the book, and the conversation led me to Google X, the part of Google that’s focused on moonshots, from the self-driving car to newer efforts to define a healthy human, create a drone delivery service, or deploy wireless broadband through balloons, among other things.
Google X is, in many ways, a “skunkworks”: a 21st-century version of the Skunk Works at Lockheed that spawned the U-2 spyplane, the SR-71 Blackbird, and other big jumps forward in aerospace design during World War II and the years that followed. It’s a way to insulate a small team from the bureaucracy of large institutions, inoculating its members against groupthink, enabling higher-risk experiments, and shielding those responsible for the inevitable failures that follow.
The original moonshot — the US Apollo Program, answering President John F. Kennedy’s call to send a man to the moon — was anything but a skunkworks. At its peak, the Apollo Program employed more than 400,000 Americans and was supported by more than 20,000 industrial firms and universities, and $24 billion in spending over the course of the program. NASA’s total budget reached nearly 4.5% of total US federal spending in 1965-66. The Apollo Program didn’t just send astronauts to the moon and bring them back: it led to advances in computing, telecommunication, and avionics that were used in the US aerospace industry and beyond.
By way of contrast, the national resources that are deployed to solve grand challenges in the 21st century, funding the moonshots of today, feel comparatively small, at least taken one by one. How much is being spent and by whom isn’t as straightforward to estimate as I’d like. The Defense Advanced Research Projects Agency (DARPA), which operates something like a skunkworks writ large within the US Department of Defense, spends about $2.8 billion annually on research and development (R&D). The results the nation has seen from that investment are significant, from the ARPANET that preceded the internet and the hypertext system that preceded the World Wide Web to a series of active projects that seem straight out of science fiction.
Given those benefits, as Brad Plumer noted in The Washington Post last year, the potential for a federal R&D crash has some observers concerned, although the role of government in commercializing scientific and engineering research is hotly debated. Plumer wrote:
“The key question here is how much of this innovation might have happened without government involvement. And as Stanford’s Roger Noll explains in this NBER essay, economists have fairly nuanced views on this.
Many economists agree that private companies tend to under-invest in very basic scientific research, since it’s hard for one firm to reap the full benefits from those discoveries. So the federal government, which now funds 60 percent of all basic research in the United States, is likely irreplaceable here. What’s more, studies have found that many types of government R&D spur private companies to conduct their own additional research. That is, the two are complementary, not substitutes.
But the situation is murkier for other forms of public R&D. Many government programs are focused on advancing commercial technologies in specific industries — and those could well be crowding out private-sector activities. What’s more, some government R&D programs are so focused on demonstrating their usefulness to Congress that they stick with ‘safe’ research that the private sector would have done anyway.”
The US spends about 2% of its annual budget (or would, if Congress passed one) on science and technology research according to the Center on Budget and Policy Priorities. If total annual outlays end up being about $3 trillion in 2014, as estimated by the Congressional Budget Office (CBO), that would add up to about $60 billion in federal spending. According to the CBO, total agency spending on R&D was $140 billion in 2012. National Science Foundation (NSF) statistics show $138 billion in total federal obligations for R&D in 2012 (Figure B).
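For readers who want to check the back-of-envelope math, here is a minimal sketch using only the estimates quoted above (the dollar figures are the article’s, not independently sourced):

```python
# Back-of-envelope check of the spending figures cited in the text.
# All amounts are in billions of dollars, taken from the article's estimates.
total_outlays = 3_000   # CBO's ~$3 trillion estimate for total 2014 outlays
science_share = 0.02    # ~2% of the budget, per the CBPP

science_spending = total_outlays * science_share
print(f"~2% of ${total_outlays}B is about ${science_spending:.0f}B")  # ≈ $60B

# The broader 2012 R&D totals are larger because they use a wider
# definition of R&D than the CBPP's science-and-technology slice.
cbo_rd_2012 = 140   # CBO: total agency spending on R&D, 2012
nsf_rd_2012 = 138   # NSF: total federal obligations for R&D, 2012
print(f"Gap between the CBO and NSF 2012 figures: ${cbo_rd_2012 - nsf_rd_2012}B")
```

The point of the arithmetic is that the ~$60 billion and ~$140 billion numbers are answers to different questions, depending on how broadly “R&D” is defined.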
More broadly, the CBO’s December 2013 report on investment (defined as purchases of education and training, physical capital, and R&D) estimates that in 2012, the federal government spent $531 billion on investment, or about 15% of total federal spending and 3% of gross domestic product (GDP). Of that total, the CBO put 2012 non-defense R&D spending at about $64 billion, with the bulk of it going to the NIH (Figure C).
That all adds up to the largest national expenditure on basic science and applied R&D in the world. As Plumer highlighted, trends in federal spending on these investments as a percentage of GDP worry some observers, including the Information Technology and Innovation Foundation (ITIF), which released a 2012 report warning of the impact of sequestration on long-term economic value creation and negatively compared trends in US investment to growing commitments from other nations.
The ITIF explicitly connected federal investments in R&D at universities to subsequent private sector spinoffs that have collectively generated hundreds of billions in economic value, tens of thousands of jobs, and significant societal benefits, from vaccines to search engines to genetic therapies (Figure D).
The transition from funding to performance of research to commercialization is an important one, and the role of government in that process is the crux of political differences, as Plumer also observed:
“When the Congressional Budget Office reviewed the evidence in 2007, it concluded that government-funded basic research generated ‘substantially positive returns,’” he wrote. “And it found that, on the whole, government R&D helped spur additional private-sector R&D rather than displace it. Yet the CBO also noted that some types of government programs may very well be crowding out private research. Likewise, a 2007 survey by the Bureau of Labor Statistics found that the social returns for many public R&D programs were ‘near zero.’”
In this context, the NIH awarding $46 million to more than 100 researchers in 15 states and three nations to pursue the BRAIN Initiative might be considered relatively small potatoes, even if the total level of commitments between the public and private sector adds up to $300 million. (Given caps on appropriations put in place by the Budget Control Act of 2011, the next Congress may not add to it.) My sense is that, like the $31 million in funding the NSF awarded to 17 projects under the Data Infrastructure Building Blocks (DIBBs) program this week, both of these investments could have extraordinary returns in the long term.
Even if the combined funding amounts are small historically, the foci of the BRAIN Initiative are ambitious, as Dr. Collins laid out: “Our goal? To produce the first dynamic view of the human brain in action, revealing how its roughly 86 billion neurons and its trillions of connections interact in real time.” As Antonio Regalado reported in July 2014, other governments are pursuing brain projects as well, from the European Union to China.
If successful, the projects funded by the BRAIN Initiative (PDF) “will revolutionize our understanding of how we think, feel, learn, remember, and move, transforming efforts to help the more than 1 billion people worldwide who suffer from autism, depression, schizophrenia, epilepsy, traumatic brain injury, Parkinson’s disease, Alzheimer’s disease, and other devastating brain disorders,” wrote Dr. Collins.
Practically, we might see the development of mobile devices that can do positron emission tomography (PET) or functional magnetic resonance imaging (fMRI) scans, much as ultrasounds have become mobile, along with diagnostic or therapeutic capabilities in brain modulation through magnetic fields. Down the road, we might even see the emergence of a prototype that’s not so different from the device Bones uses on the Starship Enterprise. As with other disruptive technologies, the emergence of these capabilities will challenge us as a society, but from here, the upside looks much greater than the downside.