Space Exploration Software

Space Exploration / April 18, 2015

In the Human Interfaces Group at the Jet Propulsion Laboratory, we build software for people, just like you probably do. Admittedly, some aspects of our work are a little unusual.

We believe in user-centered design, evidence over opinion, and the importance of learning quickly through sketches and prototypes. We assume we don’t have the perfect idea right at the beginning, so we try to make it better, then great, as quickly as possible. Time and budget are always constraints. Sometimes the speed of light is a constraint, too.

We communicate with our spacecraft using the Deep Space Network — three sites across the globe with huge radio antennas. As I write this, dish 35 is downlinking data from Voyager 2. The data, traveling at the speed of light, currently takes 1.24 days to arrive.
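The delay is simple to estimate from distance alone. Here is a back-of-the-envelope sketch; the ~16 billion km figure is an assumption, roughly Voyager 2's heliocentric distance in 2015, so treat the numbers as illustrative rather than authoritative:

```python
# Back-of-the-envelope light-time delay for a deep-space link.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
DISTANCE_KM = 16e9         # assumed: ~Voyager 2's distance from Earth, 2015

one_way_s = DISTANCE_KM / C_KM_PER_S          # signal travel time, one way
round_trip_days = 2 * one_way_s / 86_400      # command out + response back

print(f"one-way light time: {one_way_s / 3600:.1f} hours")
print(f"command round trip: {round_trip_days:.2f} days")
```

At that distance a single command-and-response exchange takes over a day, which is why operators monitor downlinks rather than interact with the spacecraft in anything like real time.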

Among many tasks for many missions, our group is working on new displays for DSN operators so that they can better monitor, debug, and correct any issues with data communication. It’s critical that they spot problems quickly and can immediately jump to the low-level information.

We believe in measured automation. We design software for experts, but we need to make sure they’re using their brainpower to do their jobs, not figuring out how to wrangle subpar software.

The complexity of our missions and of our data is increasing. With Mariner 4, in 1965, you could plot the data by hand (which they did at JPL, as confirmation). Today, the Mars Curiosity rover is capturing an enormous amount of data. In the future, Mars 2020 will improve upon that. We must produce software tools that substantially expand human capacity, or else we’ll hit a ceiling.

When we do our job well, we get to expand the possibilities for science and exploration. Every day, we must focus on ensuring that we are building the right tool for the job.

We try to avoid guesswork by testing our assumptions and ideas as early as possible, and working with our customers throughout the entire process. We ask ourselves a lot of questions: Who’s our audience? What are we trying to accomplish? Is the thing we think is a problem actually a problem? If it is a problem, is it worth solving? Does the solution we have in mind effectively address the problem? Does our design embody our intended solution? Is the design usable by our audience in the context of their work? What’s our most dangerous untested assumption?

We don’t get to A/B test our way through design decisions. Sometimes our entire user base is just a handful of people, and they work in the building next door. That level of access is great, and it’s pretty different from designing a web application for anyone with internet access.

So how do we know what’s right? We use a range of techniques for communicating and evaluating what possible futures might look like. Long before we have detailed interface sketches, we use narratives, storyboards, and other imagery to show our ideas. We use paper prototypes and experience prototypes to simulate the future and evaluate people’s use of the tools we have yet to build.

We build software for humans. We design and build iteratively. There’s much more software running on the ground than on the spacecraft. It’s much more straightforward to update ground software than flight software, so we can maintain a certain level of agility in deployment, as well.

Much of our current work runs in a web browser, although we also look beyond the web. We’re currently building 3D immersive collaborative software for deciding what Curiosity, our newest Mars rover, does every day, and there are many more opportunities to use augmented and virtual reality for science. These platforms are just beginning to be explored. Our intuition and design and prototyping techniques are much less useful in those new realms — we need new tools to build new tools.

We’re a federally funded lab that exists to accomplish the near-impossible on a regular basis. In short: enterprise software for space exploration.