Conversational user interface for Airbnb
Abbi is a conceptual conversational user interface (CUI) for Airbnb, designed to help users navigate the check-in and check-out process. Specifically, Abbi can contact hosts, surface specific check-in/check-out information, and even recommend restaurants and experiences.
I worked on identifying use cases and pain points in the current Airbnb user journey, as well as iterating on the visuals and motion for various CUI states.
Motion Study Part 1 —
Starting with simple shapes and colors, I studied how the speed and degree of an object's movement related to the emotion it evoked. This motion study became a basis for understanding how a CUI can use shape and speed to communicate with the user.
Motion Study Part 2 —
For the next step, I decided to work with fewer constraints by adding more design elements, such as additional colors and shapes.
Motion Study Part 3 —
For the final step of the motion study, I collaborated with Allissa and Maddy to synthesize our three individual studies. We mapped our motion studies onto a graph along two axes — positive/negative valence and intensity.
Identifying the use case —
Once we had familiarized ourselves with the similarities across our motion studies, we came up with a CUI concept for Airbnb. We then briefly looked into the user journey and pain points of a typical Airbnb user.
We decided to optimize our CUI for the check-in process, about which many Airbnb users expressed confusion and frustration due to the large volume of information they had to sort through, since Airbnb does not enforce a universal check-in procedure. Currently, it is largely up to the host to send out check-in information on whatever platform they prefer, e.g., Google Docs or PDFs.
To highlight the features of Abbi through a one-minute motion graphic, our group decided to focus on the check-in process of an Airbnb user. We created a user persona based on the previous step.
There are already many VUIs on the market — Apple’s Siri, Microsoft’s Cortana, Samsung’s Bixby, and Amazon’s Alexa, to name a few. It’s natural for people to associate certain motions with certain actions because of their experience with pre-existing VUIs. The big question was: how do we know whether people connect a motion with an action because they are used to it, or because it is genuinely intuitive and natural?