Searching for an MVP with Agile
The brief stated that Fernbank has a mobile app they want to expand with a variety of features that will enhance their guest experience.
When I downloaded a copy of the app to get a feel for it, I noticed that there was no way to purchase tickets directly through the app.
As a team we had agreed to produce our product using an Agile framework in a two-week sprint, and this primed us to search for a Minimum Viable Product (MVP) through rapid prototyping grounded in Lean UX research.
It seemed so obvious at first
Avoiding cognitive biases in an Agile environment
We had the brief, we had the scope, we just needed to do some simple research and get to generating the deliverables, right?
Thinking in this way biased our initial research findings and this threatened to lead us down a development path that wasn’t going to deliver the robust solution we were aspiring to create.
"Going faster can be troublesome: you may be decreasing your thinking discipline either consciously or subconsciously. And that can lead to more opportunities to fall prey to cognitive traps such as biases and fallacies. The more biased you are, the more distorted your system view becomes." (Agile Alliance)
The opportunity for our team was to think quickly but clearly about whether the user experiences we were comparing were actually an appropriate fit.
Plan Your Work, Work Your Plan
Deliverables like personas are the result of design processes, so it was necessary to detail the various steps that go into, say, user testing.
By no means was that initial planning document complete, but it did enable the team to visualize the various tasks. I could then distribute those tasks across the days available and delegate them effectively using Scrum Poker.
At the end of each work day I spent some time reflecting on the progress made by individual team members (typically relayed in brief verbal check-ins).
I used this time to review the brief, review the planning document, and review the research findings to help me clarify the scope of our work and align our efforts toward meeting our obligations.
Whatever happened to that app?
I began my discovery process by testing the app’s functionality with clickthroughs and annotating screen grabs of the various tasks I set out to accomplish.
Fairly quickly I saw that the app was no longer being supported: little could be accomplished in the app itself, which instead pointed users toward Fernbank’s responsive website.
Because the app was no longer being supported, I turned my attention to learning about how the app was developed. I was guided by several questions:
- What were the business goals of the app?
- How did the business promote the app?
- Whatever happened to the app?
I went looking for any stories that Fernbank had shared about their mobile app. I was hoping to find their marketing copy, or other release notes. I found that the current Fernbank site no longer references their mobile app.
Before the extinction event: the marketing campaign
I found a news article from Georgia Public Broadcasting that announced the launch of the Fernbank mobile app. This gave me a date as well as a sense of the business goals Fernbank assigned to the app.
Using the publication date from the GPB radio story, I then turned to the Internet Archive’s “Wayback Machine” to see if they had cached any versions of the Fernbank website that included information about the development and release of the app.
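For readers curious how that lookup works in practice, the Wayback Machine exposes a public "availability" API that returns the cached snapshot closest to a given date. The sketch below is illustrative; the Fernbank URL and timestamp are assumptions standing in for the values I actually used.

```python
from urllib.parse import urlencode

# Public endpoint documented by the Internet Archive.
WAYBACK_API = "https://archive.org/wayback/available"

def snapshot_query(url, timestamp):
    """Build an availability query for `url` near `timestamp` (YYYYMMDD)."""
    return WAYBACK_API + "?" + urlencode({"url": url, "timestamp": timestamp})

def closest_snapshot(payload):
    """Pull the closest snapshot URL out of an availability response, if any."""
    snap = payload.get("archived_snapshots", {}).get("closest", {})
    return snap.get("url") if snap.get("available") else None

# Abridged example of the JSON shape the endpoint returns:
sample = {
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20140315000000/http://fernbankmuseum.org/",
            "timestamp": "20140315000000",
        }
    }
}

print(snapshot_query("fernbankmuseum.org", "20140315"))
print(closest_snapshot(sample))
```

Fetching the query URL (with `urllib.request` or a browser) and feeding the JSON response to `closest_snapshot` gives you a direct link into the archive near the app's launch date.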
After the release: the App Store, Google Play, LinkedIn
Through LinkedIn I reached out to the developers who listed Phunware’s Fernbank app on their resumes, intending to interview them about the app and why it was no longer supported.
After deploying our initial screener instrument and gathering that data we set out to synthesize our findings through affinity mapping.
With these insights we were in a position to create personas and map their journey through our mobile experience.
We had responses from users, but were they the right users?
We created a screener instrument, we deployed our instrument, but were we capturing the appropriate respondents?
Using a Lean UX model—doing guerrilla research—can mean getting usable results quickly, but it can also bake in biases (sampling bias, confirmation bias, etc.) that, down the line, result in squandered resources.
Solution 1: in-app ticket purchasing?
Through interviews we learned that respondents expected to be able to purchase tickets and access their membership information through a Fernbank app.
Given that 100% of the respondents reported this, we felt it pointed to our minimum viable product.
We thought we had it all sussed out.
It seemed so clear: ticketing ought to be available through this app.
After four days of initial research our group participated in a mid-point critique session. These sessions have been uniformly helpful: we draw inspiration from our peers, and staging our initial findings forces each of us to focus our plans for completing the work.
As my teammate explained their initial hypothesis (users want to buy tickets through the mobile app), it became clear that they couldn’t justify the conclusion: they had failed to recruit the appropriate respondents.
In the initial interviews we gathered insights about how people reportedly behave when buying tickets for events and museums, but we also captured a large number of respondents who were describing their behaviors and expectations for the My Disney Experience app. This was in part because a (vocal) team member was planning a family trip to Disney World later in the year, and those family members were over-represented among the initial respondents.
I synthesized the findings from those initial responses, but it became clear that none of our respondents were familiar with the Fernbank Museum’s mobile app, nor had they ever visited Fernbank.
Finding the right folks
Over the next two days I recruited Fernbank guests and members.
I then interviewed these respondents in order to get a clearer sense of the pain points that they experience when visiting Fernbank.
This was revelatory.
Yes, these respondents expected the ability to purchase tickets and store their membership information, but they were also deeply frustrated by the ticketing process when visiting Fernbank. With this information I felt more confident about how to design our solution.
Toward a better ticketing solution
At the end of the first week of our two-week sprint we had begun wireframing screens from the app’s current state.
Now equipped with robust insights from current and former members we were in a position to revise our personas in light of these findings.
This also enabled a revision of a user scenario that could be used as part of our presentation.
We then held a design studio in which we visually synthesized the findings from our research. In this studio we were able to find the optimal paths for our users to achieve their goals.
We left the studio with both a revised user flow and rough approximations of what our app screens would look like at each of those moments.
In user testing we learned how intuitive our design solution was for completing tasks, and we gathered unexpected insights into user behaviors and expectations.
We recorded those findings and revised our prototype.
Based on feedback from the stakeholders, I’ve been revising the clickable prototype to bring it into compliance with Fernbank’s current branding scheme.
I’ve also adjusted the prototype to match the footprint of the most recent iPhone.
In the next two-week sprint I would suggest clarifying the Apple Wallet transition as well as the sharing transition.
I’d also begin developing the account screens and processes: what do users expect from these areas?