On July 15, 1965, the Mariner 4 spacecraft snapped a series of photographs of Mars during its flyby of the Red Planet. These were the first “close-up” images taken of another planet from outer space, according to NASA. One of these first grainy photographs depicted a massive crater nearly 100 miles in diameter. Now, NASA’s Jet Propulsion Laboratory (JPL) is tapping artificial intelligence (AI) to help with its cosmic cartography efforts, using these technologies to identify “fresh craters” on Mars.
For more than 14 years, the Mars Reconnaissance Orbiter (MRO) has transmitted detailed images of Mars back to researchers on Earth, and scientists have used its data to spot more than 1,000 new Martian craters. As JPL notes, finding these craters has typically involved researchers poring over photographs for minute changes in the alien landscape.
Typically, the orbiter’s Context Camera—a low-resolution camera that covers large amounts of surface area—first detects these new potential craters. In these images, only blast marks surrounding the impact crater will “stand out,” per JPL. Next, scientists use the High-Resolution Imaging Science Experiment (HiRISE) for further analysis.
Unlike the Context Camera, HiRISE can resolve minute surface features. This process is time-intensive, however: it takes a researcher about 40 minutes to scan a single Context Camera photo. To expedite the process, the JPL team designed a tool aptly named the "automated fresh impact crater classifier."
Training the tool involved feeding the crater classifier more than 6,800 Context Camera photos, including known impacts that had been confirmed via HiRISE. The training set also included images without fresh impacts to "show the classifier what not to look for." After training, JPL unleashed the classifier on the full Context Camera repository of roughly 112,000 photos.
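JPL has not published the classifier's implementation, but the training setup described above, labeled positive examples (confirmed impacts) alongside negatives ("what not to look for"), is standard supervised binary classification. The sketch below illustrates the idea on synthetic stand-in data with a minimal logistic-regression classifier; the feature values, class means, and hyperparameters are all hypothetical, not JPL's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for image features: fresh-impact blast marks darken
# a patch, so give the "crater" class a lower mean than the "no crater" class.
n_per_class, n_features = 200, 64
crater = rng.normal(loc=-0.5, size=(n_per_class, n_features))
no_crater = rng.normal(loc=0.5, size=(n_per_class, n_features))
X = np.vstack([crater, no_crater])
y = np.array([1] * n_per_class + [0] * n_per_class)  # 1 = fresh impact

# Minimal logistic-regression classifier trained with gradient descent.
w = np.zeros(n_features)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "crater"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

preds = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real system would use a convolutional network on image pixels rather than a linear model on hand-made features, but the train-on-positives-and-negatives loop is the same shape.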
The AI tool runs on a supercomputer cluster made up of “dozens of high-performance computers that can operate in concert with one another.” While the human-helmed identification process takes about 40 minutes, the AI tool can manage classification in about five seconds on average, according to JPL. However, even with this compressed timeline, the team would still need to find a way to efficiently tackle the massive repository.
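The article's own figures make the scale of the problem concrete. A quick back-of-envelope calculation, using the roughly 112,000 images, the five-second average per image, and the 700-odd parallel classifier copies JPL describes, shows why distribution was necessary:

```python
# Back-of-envelope throughput using the figures JPL cites.
images = 112_000
seconds_per_image = 5   # average AI classification time per image
workers = 700           # copies of the classifier running in parallel

serial_hours = images * seconds_per_image / 3600
parallel_minutes = images * seconds_per_image / workers / 60

print(f"one machine: ~{serial_hours:.0f} hours")       # ~156 hours
print(f"{workers} workers: ~{parallel_minutes:.0f} minutes")  # ~13 minutes
```

Even at five seconds per image, a single machine would grind for roughly six and a half days; spreading the work across the cluster collapses that to minutes.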
“It wouldn’t be possible to process over 112,000 images in a reasonable amount of time without distributing the work across many computers,” said JPL computer scientist Gary Doran in the release.
“The strategy is to split the problem into smaller pieces that can be solved in parallel,” Doran continued.
To accomplish this, researchers deployed more than 700 copies of the crater classifier across the cluster in parallel, Doran explained in the release.
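The strategy Doran describes, splitting the repository into smaller pieces that independent classifier copies can solve in parallel, can be sketched in a few lines with Python's standard library. Everything here is a hypothetical stand-in: `classify` is a placeholder for one classifier invocation, and the process pool is a single-machine analogue of JPL's multi-computer cluster.

```python
from concurrent.futures import ProcessPoolExecutor

def classify(image_id: int) -> bool:
    """Hypothetical stand-in for running the crater classifier on one image."""
    return image_id % 1000 == 0  # placeholder decision rule, not a real model

def find_candidates(image_ids: range, max_workers: int = 8) -> list:
    """Split the repository across workers; each classifies its slice
    independently, mirroring the 'smaller pieces in parallel' strategy."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        flags = list(pool.map(classify, image_ids, chunksize=500))
    return [i for i, hit in zip(image_ids, flags) if hit]

if __name__ == "__main__":
    hits = find_candidates(range(10_000))  # stand-in for the image repository
    print(f"{len(hits)} candidate images flagged for HiRISE follow-up")
```

On JPL's cluster the workers are separate high-performance machines rather than processes on one box, but the decomposition is the same: no image depends on any other, so the problem is embarrassingly parallel.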
For now, a human scientist still must verify the classifier's work after the fact.
“AI can’t do the kind of skilled analysis a scientist can,” said JPL computer scientist Kiri Wagstaff in the release. “But tools like this new algorithm can be their assistants. This paves the way for an exciting symbiosis of human and AI ‘investigators’ working together to accelerate scientific discovery.”
In late August, JPL said the classifier detected "a dark smudge" in the Noctis Fossae region that HiRISE later confirmed to be a "cluster of craters." The team has since submitted more than 20 additional candidate sites, each awaiting HiRISE confirmation.
“There are likely many more impacts that we haven’t found yet,” said Ingrid Daubar, a scientist involved with the work who holds appointments at JPL and Brown University. “This advance shows you just how much you can do with veteran missions like MRO using modern analysis techniques.”