Organised by ITaaU (IT as a Utility), the UX Boot Camp is an intensive one-day workshop held at the design studio of ustwo in London. ustwo develops apps, one of which (Monument Valley) won Apple’s Design Award in 2014 and was recently nominated for multiple BAFTAs. ITaaU promotes interdisciplinary work within academia to fulfil the potential of IT resources and services, and is funding the Mobile App Data as a Utility for Public Health (ADUP) project. The collaboration with ustwo opens the doors to academics and others interested in learning, from a successful company, some of the processes involved in mobile app design.
The focus: process, concepts and creativity (not coding!)
Perhaps surprisingly for a workshop on developing and testing mobile apps, no coding skills were required. This allowed for a holistic understanding of the overall design process while retaining the methodological detail of each development stage (and the rapid movement between those stages) – very relevant and valuable learning for my PhD. Attendees came from varied academic backgrounds, and the group size of around 15 people (split into smaller groups of 3 or 4) was ideal for generating diverse ideas around our task for the day while still allowing whole-group discussion of those ideas.
Our task involved designing an app for a gym interested in improving membership.
Georgios and Isabelle from ustwo explained the concept of lean UX design and the move away from lengthy testing and recruitment processes requiring large groups of users at once (and the extensive documentation of those processes). Instead, our work for the day involved establishing assumptions about the user and the business, generating hypotheses, and then testing these directly through user interviews conducted in small batches. Our initial assumptions were based on the profile of a potential user (a persona), some information about the gym, and our design brief. We were particularly interested in addressing the assumptions that were not only less well known, but that we thought were high-risk and would have a high impact on the app’s success. With these in mind, we designed a few screens on paper, used the Pop app to make them interactive on a smartphone, and created more specific hypotheses.
Asking users questions about our assumptions and initial design flagged up some fundamental issues with the app’s content. These could be addressed quickly by going back and trying out other ideas using the same process (turning another assumption into a hypothesis and creating a new accompanying design to show to different users). Our second round of user interviews indicated that these ideas might be more successful, and from there we could begin to consider testing more specific design changes.
The workshop ended with each group presenting its app idea, and its experience of the development process, to everyone. It was interesting to see that, despite being given the same design brief, each group’s interpretation and focus were very different – partly because of differences in the designers’ initial visions but, importantly, also because the ideas took different paths based on the preferences of individual users. Georgios and Isabelle were helpful, enthusiastic and hands-on throughout the day, providing tips for every group. We learned, for example, that conflicting visions for how the app might best be improved could be resolved by getting them all down on paper and letting the user discuss and decide their own preferences.
We were given beer and an impressive tour of the studio. The pace of the day was excellent and exemplified how quickly an app incorporating user feedback can be designed.
The evaluation of health behaviour change apps – could lean UX design be incorporated?
This method raises interesting points for the evaluation of mHealth apps. There is a highly iterative and tightly bound relationship between evaluation and development. This could be beneficial within health studies, but it is not currently supported by the established paradigms in the field. After the feasibility stage of intervention evaluation within health studies, randomised controlled trials (RCTs) may focus on a ‘complete’ version of the intervention that remains stable throughout the study. This precludes quickly and continuously improving the intervention for better outcomes, seeing the impact of small changes, and perhaps identifying ‘active ingredients’ through those small changes. Strict recruitment processes and the large datasets required for RCTs contribute to the total time needed to conduct them, which is often years.
Despite issues such as representativeness stemming from input from small numbers of users, the relationship between development and evaluation is an important one, not just for HCI/computing science researchers but likely for health researchers too. The qualitative user evaluations involved in lean UX design were fairly unstructured relative to clinical outcome research, yet highly informative, useful and efficient, with extensive evaluative information gained within a short space of time. Perhaps lean UX design and other similar iterative processes could be considered for use within clinical mHealth evaluations.