What do you do when you have a $2.5 billion vehicle on a world so distant it would take between 4 and 24 minutes to tell it to stop? Before the Curiosity rover moves from waypoint to waypoint on Mars, Ray Arvidson, a planetary scientist in Arts & Sciences at Washington University in St. Louis who serves as a surface properties scientist for the mission, meets by phone with NASA engineers to discuss and plan the rover’s route. Together, the scientists pick a path that skirts deep sand, steep inclines, sharp rocks and cul-de-sacs.
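
Picking a route that skirts hazards is, at heart, a cost-minimising search over a terrain map, the kind of problem classic grid planners such as A* solve. The Python sketch below is purely illustrative and bears no relation to NASA's actual planning software: the function name, the toy cost map and the hazard encoding are all invented for the example.

    import heapq
    import itertools

    def plan_route(grid, start, goal):
        """A* search over a 2-D terrain-cost grid.

        grid[r][c] is the cost of driving through a cell; None marks
        an impassable hazard (deep sand, sharp rocks, a cul-de-sac).
        Returns the list of cells from start to goal, or None if no
        safe route exists.
        """
        rows, cols = len(grid), len(grid[0])
        tie = itertools.count()  # breaks ties between equal-cost entries

        def h(cell):
            # Manhattan distance: never overestimates on a 4-connected
            # grid whose cheapest cell costs 1, so A* stays optimal.
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(h(start), 0, next(tie), start)]
        parent = {start: None}
        best = {start: 0}
        while frontier:
            _, cost, _, cell = heapq.heappop(frontier)
            if cell == goal:
                path = []
                while cell is not None:  # walk parents back to start
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            if cost > best.get(cell, float("inf")):
                continue  # stale queue entry, a better one was processed
            r, c = cell
            for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nbr
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                    new_cost = cost + grid[nr][nc]
                    if new_cost < best.get(nbr, float("inf")):
                        best[nbr] = new_cost
                        parent[nbr] = cell
                        heapq.heappush(
                            frontier, (new_cost + h(nbr), new_cost, next(tie), nbr)
                        )
        return None

    # Toy traversability map: 1 = easy going, 8 = steep incline,
    # None = hazard the rover must skirt entirely.
    terrain = [
        [1, 1,    8,    1],
        [1, None, None, 1],
        [1, 1,    1,    1],
    ]
    print(plan_route(terrain, (0, 0), (0, 3)))

Run as-is, the planner steers around both the impassable cells and the steep one, returning the longer but flatter route across the bottom row.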

Two years ago, Arvidson prepared for these teleconferences by clicking through images made by cameras aboard the rover or satellites orbiting Mars. But path planning from images requires sophisticated photo-interpretation skills. Foreground objects obscure background objects, and the flat images make it difficult to gauge size and distance. The planner must also mentally stitch one image to the next to reconstruct the flow of the landscape.

How much easier it would be if the planner could just step onto the Martian surface and walk around, inspecting boulders, sand pits and scree as if he or she were actually there.

And that’s exactly how Arvidson does it today. He was a beta tester for OnSight, a system co-developed by Microsoft and the Jet Propulsion Laboratory that integrates data from the Curiosity rover to produce a 3-D simulation of the Martian landscape. The imagery is projected onto a see-through screen in Microsoft’s HoloLens head-mounted display.
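
Whatever OnSight does internally, the core geometric step in turning flat rover imagery into walkable 3-D terrain is back-projecting per-pixel depth (derived, for example, from a stereo camera pair) into points in space. Below is a minimal sketch assuming a simple pinhole camera model; the function name and every value are invented for illustration.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project a depth image into a 3-D point cloud.

        depth: H x W array of ranges (metres) at each pixel;
        fx, fy, cx, cy: pinhole-camera intrinsics (focal lengths
        and principal point). Returns an (H*W, 3) array of
        X, Y, Z points in the camera frame.
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Toy example: a flat 2 x 2 patch of ground 3 metres away.
    print(depth_to_points(np.full((2, 2), 3.0), fx=100.0, fy=100.0, cx=0.5, cy=0.5))

Meshing and texturing those points is what turns them into terrain a planner can walk across in a headset.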

