CubeRover – Image Viewer Split View
Designing an image comparison interface for a light-weight lunar rover
Role –
Interaction Designer
Team –
Image viewer team, TeleOps
Tools –
Figma, Principle
Timeline –
Fall 2019
Project Overview
Setting a precedent for future private lunar missions
In 2019, NASA announced Project Artemis to send humans back to the Moon in 2024. As one of the first missions to kickstart Project Artemis, CubeRover, a light-weight lunar rover developed by Astrobotic, will be dispatched to the Moon in 2021 with the goal of investigating the properties of lunar regolith in order to estimate the impact range and velocity of lunar landing ejecta.

In order to carry out its mission, the CubeRover team will rely on ground teleoperation software to operate the rover remotely from Earth. As a designer on the teleoperation software team, I was tasked with building an image viewer interface that enables scientists to edit, manage, and compare images of the lunar surface taken by the rover.
Opportunity Space
Absence of an image comparison feature
During the lunar mission, the rover periodically takes photos of the lunar surface using its front and rear cameras. However, the systems operations team wanted an additional image comparison feature for the following reasons:
Compensating for the inaccuracy of the MonoSLAM algorithm
MonoSLAM is a monocular visual SLAM (simultaneous localization and mapping) algorithm that tracks the movement of distinctive feature points across two images to estimate the distance traveled. However, because MonoSLAM is fallible, the operations team wanted a way to manually compare images as a fallback for measuring the rover's trajectory and real-time location.
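For context, the general idea behind this kind of feature-based motion estimate can be sketched in a few lines of Python with OpenCV. This is only an illustrative sketch of the technique (ORB feature matching plus a median pixel displacement), not the actual MonoSLAM pipeline used on CubeRover; the file names and the pixels-to-meters scale factor are hypothetical.

```python
# Illustrative sketch only: estimate apparent motion between two rover frames
# by matching distinctive feature points. Not the CubeRover implementation;
# file names and the PIXELS_TO_METERS scale are hypothetical placeholders.
import cv2
import numpy as np

PIXELS_TO_METERS = 0.002  # hypothetical calibration factor

def estimate_displacement(path_a: str, path_b: str) -> float:
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    # Detect distinctive keypoints and compute descriptors in both frames.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors between the two frames (Hamming distance for ORB).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    # Measure how far each matched point moved, in pixels.
    shifts = [
        np.linalg.norm(np.array(kp_b[m.trainIdx].pt) - np.array(kp_a[m.queryIdx].pt))
        for m in matches
    ]

    # Use the median shift as a robust estimate, then convert to meters.
    return float(np.median(shifts)) * PIXELS_TO_METERS

print(estimate_displacement("frame_001.png", "frame_002.png"))
```

Because any single mismatched feature can skew the estimate, and lighting on the lunar surface changes dramatically between frames, a manual side-by-side comparison remains a useful sanity check on the algorithm's output.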
Comparing the different angles of light reflection captured in images during route circumnavigation
Understanding the different angles of light reflection on the lunar surface and objects indicates which locations to explore versus avoid, a valuable resource that feeds into the Map team's route planning process.
Concept Ideation
How might we design an image comparison interface that is efficient and intuitive?
With this question in mind, I wanted to deliver an image comparison interface with the following features:
Concept Explorations
Experimenting with micro-interactions that provide visual guides for the users
Concept Validation
Paper missions
Weekly design critiques through virtual paper missions gave me an opportunity to test and validate my design decisions with the operations team. The two prototypes were tested during the virtual paper missions, where engineers were asked to perform task-specific interactions.
Final Design
Complete interaction flow of the split view interface
Based on the insights from the virtual paper missions, the final flow features a hover preview interaction.
Takeaways
Learning the value of up-front communication to resolve lingering questions & navigating ambiguity
The main challenge of the project came from a communication lag between the design team and one of our user groups, the mission science team. The science team was looking for a very specific feature to alleviate the difficulty of comparing the light reflection angles of two images. However, we ended up coming up with concepts that did not align with the science team's request because we didn't fully understand their mission goals.

In addition, the project was a difficult one to onboard onto. CubeRover aims to successfully dispatch a rover to the Moon, and, to be frank, we didn't have any interface design precedents to inform our design process.