Customers were angry when OVO Energy decided to ditch their native apps. Embedded within the squad tasked with rebuilding the apps in React Native, I helped the team take a lean UX approach.
We started by redesigning how customers submit meter reads through the app, aiming to reduce errors and increase satisfaction. By facilitating collaborative research and design sessions and validating our ideas through usability testing and experiments, we addressed the errors users made when submitting meter reads and lifted customer satisfaction from 2 stars to 4.5 stars in the app stores.
OVO Energy's new app strategy meant moving from natively built customer apps to apps built in React Native. We'd received a customer backlash after replacing the existing native apps with the mobile site in an app wrapper, a temporary measure while the new React Native apps were being built.
The squad's mission was to increase customer happiness through the design and build of the new React Native apps, and to do it quickly to stem the flow of negative feedback from customers threatening to leave.
App analytics showed that submitting meter reads was the most used feature. Analysing app store reviews showed this activity was also receiving the most complaints, and as there were other business implications to getting this right, we agreed it should be our focus.
I built up a picture of what was going on by analysing app analytics, app store reviews and session recordings.
From this analysis, I identified that the key frustrations were:
Vague email and in-app comms requested reads within a 5-day window, so customers were unaware they needed to provide a read on a specific day. This confused customers who, after following our advice, received a monthly statement that included 'estimated readings'.
After submitting their read, customers were frustrated further by feedback suggesting their read was incorrect because it was lower than our last estimated read.
The form required users to input their read in a specific format, but we gave no help with this. For example, you cannot submit your read unless you've put zeros in front of it, so a read of 7900 would need to be entered as 07900. Using video analytics tools, I observed users repeatedly clicking on the disabled submit button and then giving up (it only becomes enabled once you've entered a number in the required format).
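To illustrate the kind of fix this problem invites, here is a minimal sketch in TypeScript. The function name, signature and digit count are my own assumptions for illustration, not the shipped app's code: rather than disabling submit until the user types the leading zeros themselves, the app could normalise the read on their behalf.

```typescript
// Hypothetical helper (names are mine, not from the shipped app): pad a
// meter read to the register's digit count so "7900" becomes "07900"
// automatically, instead of leaving the submit button disabled.
function normaliseMeterRead(raw: string, registerDigits: number): string | null {
  const trimmed = raw.trim();
  // Reject anything non-numeric or longer than the meter register can show.
  if (!/^\d+$/.test(trimmed) || trimmed.length > registerDigits) {
    return null;
  }
  // Left-pad with zeros rather than forcing the user to type them.
  return trimmed.padStart(registerDigits, "0");
}
```

Handling the padding in code removes the format burden from the user entirely, which is a stronger remedy than instructional copy alone.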
Our page was cluttered and the hierarchy between different sets of tabs was confusing, which made the task of submitting reads cognitively demanding. For example, we saw users review their historic gas reads then accidentally submit their gas read in the electricity input field.
Armed with my insights and a competitor review of how others had tried to improve the meter-read experience, I ran a workshop with my squad. Once we'd built a shared understanding of the problems, I ran a 'design studio' exercise.
We generated and critiqued each other's ideas, then agreed on principles for what good looks like.
I then refined those ideas into a low-fidelity prototype that aimed to address the issues we'd encountered.
We had three main options for how users input their meter read once they've tapped the 'Submit reading' button on the overview cards. There were pros and cons to each, and it wasn't clear which would deliver the best experience, so we devised an A/B test to give us the answer.
Then, collaborating with a UI designer, we iterated on the designs as we moved to higher fidelity.
I refined how we communicated when customers should submit a read and explained the implications of providing it on a different date. As this date is the same for both fuels, it made sense to add it to the top of the page rather than repeating the prompt on each fuel's overview card. I validated that this guidance was clear through comprehension testing with users.
As we iterated on the designs, a colleague attending a review commented that the thank-you state shown once a read is submitted wasn't obvious to him. This created doubt within the squad as to whether it was a genuine problem. I saw an opportunity to validate the hypothesis through usability testing, as I knew the planned A/B test wouldn't answer this question. Usability testing would also let us iron out any other issues before launching the A/B test.
I quickly set up an interactive prototype and launched a remote unmoderated test on usertesting.com to get to the bottom of it.
We learnt that everyone who submitted their read for one fuel, e.g. electricity, was clear the task was done and moved on to submitting their read for the other fuel. We therefore avoided wasting time fixing a problem that didn't exist.
The testing also highlighted the difficulties with the current input fields: users were unclear which field to start entering their read in, and unaware of the need to enter leading zeros. We wondered whether a simple line of explanatory copy might help, and if it did, we should consider using it in the A/B test; if that variant won, it would be the simplest to implement.
I worked with our copywriter on some instructional microcopy, and we went through cycles of user testing to see whether users could complete the task of submitting a read right first time with this new line.
The first iteration of the instructions enabled most users to complete the task; however, one tester was confused by the message and added the zeros after her read rather than before. We made the instructions more explicit in the next version, which achieved a 100% success rate.
I led the design of the A/B test, identifying the best way to set it up to give us meaningful results. We created an in-app satisfaction survey, triggered after customers submitted both readings, which would help us decide which variant was delivering the best experience.
We were conscious that this would ignore people who didn't successfully submit their reads, which could bias the results, so we also tracked error rates to identify whether any version was creating more failures than the others. We launched all three variants on iOS and two on Android (the number wheel was excluded from the Android test because of technical difficulties).
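For readers curious how an A/B test like this is typically wired up, here is a minimal TypeScript sketch. The variant names and hashing approach are my own illustrative assumptions, not OVO's implementation: the key property is that bucketing is deterministic, so a customer sees the same input style on every visit.

```typescript
// Hypothetical variant names for the three input styles under test.
type Variant = "freeText" | "numberWheel" | "prepopulated";

// Deterministically bucket a customer into a variant from their account id,
// so the same customer always sees the same input style between sessions.
function assignVariant(accountId: string, variants: Variant[]): Variant {
  let hash = 0;
  for (const ch of accountId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return variants[hash % variants.length];
}
```

On Android, where the number wheel was excluded, the same function would simply be called with a two-element variant list.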
When I analysed the data, I found that the prepopulation version delivered the best satisfaction without increasing error rates, so we rolled it out across devices. I also shared our learnings across the OVO group, and the solution was rolled into the mobile app for the Lumo brand.
After launching the new way to submit meter reads, and continuing to replace web views with React Native components, we improved customer satisfaction, as evidenced by app store reviews rising from 2 stars to 4.5 stars.