Overview
I was recruited to improve the usability of an IoT Asset Tracking product ecosystem. To inform these changes I performed a heuristic analysis, corroborated my findings using ethnographic observation, then synthesised required changes into low-fi prototypes for the product development team.
post_keywords: Usability Analysis, Heuristics, Ethnography, Think Aloud, Qualitative, Synthesis, Work Item Prioritisation
Problem Definition
The IoT tracking product had seen early adoption among customers in healthcare, construction, and education. Quantitative analysis of system usage data showed that users were not habitually using the system. Brief conversations with users quickly revealed that parts of the system were frustrating to use, often breaking user mental models or failing to convey system status. My task was therefore to improve the system to reduce frustration and improve usage statistics.
Audience
Given the nature of the difficulties, my target audience was as generic a sample of users as I could find. Users could have any level of technical experience; I was purely interested in first-time interaction with the system across a number of standard tasks, to determine bottlenecks and pain points. As this was my first work on the system, in a company with no previous design experience, I opted not to produce personas and instead focused on task completion. I would later produce a set of personas as the design practice matured. Participating users were recruited through an open call advertised both internally and externally.
Role & Team
I was the sole user researcher on this project, responsible for:
- Participant recruitment
- Determining sample bias
- Performing ethnographic observation
- Planning user Think Aloud sessions
- Facilitating sessions
- Training additional facilitators
As the project progressed, I opted to train additional facilitators from within the product team. This increased internal exposure to design practice and fostered empathy for the user.
Constraints
Design Process
Product Immersion - Pitches from throughout the company
Being brand new to the product, I started my research by gathering pitches on the nature of the solution from individuals across the company, including C-suite employees, the Sales team, Software, and Infrastructure. The outcome was a very fractured picture of the system with no clear purpose. It was apparent that the initial concept for the system had been lost as the company responded to customer needs.
Understanding that this would be difficult to define internally, I instead turned my focus towards what the system was for the user. This understanding is an ongoing process, now informed by a user group. At the most basic level: “The system tells you where your things are and what condition they are in”.
Heuristic Analysis - My pass through the system using UX rules of thumb
Having had difficulty determining the system’s function to the user at the pitch level, I decided to get hands-on with the system. This let me note function and intention across the platform to then test with users. The set of heuristics used combined Jakob Nielsen’s Usability Heuristics (https://www.nngroup.com/articles/ten-usability-heuristics/) with interactivity laws such as Fitts’s Law:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
I also categorised each instance by level of severity and as an issue of one of the following (a sketch of one possible log structure follows the list):
- Usability (requiring design change),
- Taxonomy (if the words used were confusing or inappropriate),
- Responsiveness (due to the existing implementation),
- Bug (if it was an obvious error in function).
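To keep the audit comparable across system areas, each finding needed its heuristic, severity, and category recorded consistently. The snippet below is a minimal sketch of one hypothetical way such a log could be structured and tallied; the field names and example entries are illustrative placeholders, not the actual audit data.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical shape of one heuristic finding; fields are illustrative only.
@dataclass
class Finding:
    heuristic: str   # e.g. "Visibility of system status"
    severity: str    # "critical" | "high" | "medium" | "low"
    category: str    # "usability" | "taxonomy" | "responsiveness" | "bug"
    area: str        # system area, e.g. "Map View"
    note: str        # short description of the violation

# Placeholder entries for illustration, not real findings from the audit.
findings = [
    Finding("Visibility of system status", "critical", "usability",
            "Map View", "No indicator while asset positions refresh"),
    Finding("Consistency and standards", "medium", "taxonomy",
            "Admin", "Same concept named differently across screens"),
]

# Simple tallies like these are enough to produce the breakdown charts
# by heuristic and by severity/category.
by_heuristic = Counter(f.heuristic for f in findings)
by_severity_and_category = Counter((f.severity, f.category) for f in findings)
print(by_heuristic.most_common())
print(by_severity_and_category.most_common())
```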
The analysis resulted in 158 issues spread across the system with varying levels of severity. The most common critical heuristic violation was #1: Visibility of System Status, followed by a large number of issues against #4: Consistency and Standards and #6: Recognition Rather than Recall. See the graphics below for more detail and a breakdown by system area.
The breakdown of issues from the heuristic evaluation, by heuristic and severity. It was very clear that the user’s experience needed a lot of attention.
After completing the heuristic analysis, I sat down with the development team to work out a pathway for fixes and possible solutions to the issues identified. The graphic below shows the issue tracker shortly before this discussion.
It became apparent that only some of these issues could be solved by bug and responsiveness fixes. Most of the critical-to-medium severity issues required deeper exploration through user research, especially since the most common heuristic violations concerned the immediate user-interaction feedback loop.
User Observation
To help inform the broader improvements needed in the system following the heuristic analysis, I needed more data. I contacted a number of our internal expert users and external customers to observe them as they worked with the system. My findings from these ethnographic sessions confirmed similar experiences and difficulties; all users had adapted their interaction with the system to avoid the identified issues:
- Where the system lacked feedback on status, users employed additional checks in their workflow to pre-empt issues.
- Users had made notes on their workflow in the system so they would not have to remember how to navigate confusing screens.
User Think Aloud Sessions
To test further, I planned Think Aloud sessions with internal users and external customers to explore users’ mental models and attitudes towards the system. Users could be entirely fresh to the system, with no prior knowledge, as I wanted to understand how seamless the system was to interact with at any given moment - no recall needed other than login details.
I recruited 11 individuals to spend time completing a number of common tasks in the system. Throughout the sessions I asked users to ‘Think Aloud’, giving me details of their process and reasoning. Where appropriate I would probe deeper on their responses and actions - keeping the sessions near-conversational, though with the user dominant - to gain deeper insight. To keep experimental consistency between sessions, I created a script with a set of tasks and used the same system data for all participants.
Script
Intro
Hello, welcome to our ‘think aloud’ session. I’m Kevin Doherty, a researcher looking into how this system works for you, and I’m joined by ________, ________ internal customer experience associate.
Explain Method
Today we’re using a method of testing ktrack called ‘think aloud’. We will ask you to complete a few tasks while speaking your thoughts aloud. This helps us to understand how the system’s design is currently working for users. When you think you have finished your task, please say ‘done’ and we will give you your next task. An example of this process for the task ‘can you tell me how many turns there are on a walk from your home to the nearest shop?’ would be: [Perform think aloud example] Have I been clear enough in my explanation so far?
Final Informed Consent Check
To take part in this session you’ve completed our information & consent form. I just want to double-check: are you comfortable taking part in this session? And do you consent to us recording the computer screen and the room audio? Only the research team will have access to these recordings. [Wait for response] [If remote] Can you please share your screen? Any questions before we begin? OK, I will start recording and give you the first task. [Start Recording]
Tasks
Map View & Filtering
1. Can you log in to the system and navigate to the Map View?
   - Username: ______
   - Password: ______
2. Can you tell me how many assets are in the _____ location?
3. Can you tell me how many of those are ‘asset type’?
4. Can you tell me how many assets are in the ____ location?
5. Can you show me only the ____ on the Map View?
6. Can you tell me the current location of the asset ____ ?
Asset & Location History
7. Where has this asset been? (On December 1st 2021)
8. Can you show me all the assets that have been in location ____ ?
Admin
9. Can you edit an asset for me?
   - I’d like asset ____ to become an asset type of ____ .
   - Can you add a new location to ____ please? You can call it anything you like.
Logout
10. Finally, can you now please log out of the system?
Outro
Thank you, that concludes our think aloud session. I’ll now stop recording. Recording has stopped. Do you have any questions? Thank you for your time today! I’ll be in touch to get your donut order ahead of the findings being published.
Data Synthesis
Using thematic analysis, each user session was coded using ‘in-vivo’ and ‘open’ coding methods to extract initial data points. Open coding was an appropriate choice given the general nature of interaction and function within the system; it could extract the useful data while not discounting potentially insightful actions or attitudes. In-vivo coding was used in combination to maintain the user’s voice throughout - a key part of promoting empathy with the user’s experience of the software when the data was finally synthesised and presented to the product development team. Once all user sessions were complete, the data was grouped in an affinity diagram, producing thematic categories from similar data points. These thematic categories were then further collated into ‘work-areas’ that highlighted, through grounded evidence, the system features needing improvement (see the graphic below).
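As a conceptual aside, the rollup from coded excerpts to ranked work-areas can be pictured as a simple grouping-and-counting exercise. The sketch below is a toy illustration of that structure only; the participants, codes, quotes, and theme names are invented placeholders, and in practice this synthesis was done on a Miro board rather than in code.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (participant, open code, in-vivo quote).
# Placeholders for illustration, not real session data.
coded_excerpts = [
    ("P01", "status unclear", "I can't tell if it has actually updated"),
    ("P04", "status unclear", "is it doing anything right now?"),
    ("P02", "navigation recall", "I always forget which menu this lives in"),
]

# Affinity step: each open code sits under a broader thematic category.
theme_for_code = {
    "status unclear": "System feedback",
    "navigation recall": "Findability",
}

# Collect the supporting quotes under each theme.
themes = defaultdict(list)
for participant, code, quote in coded_excerpts:
    themes[theme_for_code[code]].append((participant, quote))

# Rank candidate work-areas by the weight of grounded evidence behind them.
for theme, evidence in sorted(themes.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{theme}: {len(evidence)} supporting excerpts")
```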
Work Prioritisation
Finally, I sat down with the wider product development and business teams to present my findings and decide which areas for improvement had the highest priority. I fed this into Azure DevOps and gave the product development team access to the Miro board containing all anonymised user insights, then began close collaboration with the team on rapid prototyping designs for these improvements. This work is still ongoing.
Retrospective
- Overall, the project was a great success for the introduction of user-centred design practices to the company. When I presented to the wider team they found the findings enlightening, and the work proved the catalyst for changing the way the product team looks at work - it now uses the design thinking process.
- This fostered in me an interest in system areas that promoted users’ functional fixedness. I became keen on finding these areas and improving them so that users would not enter this state.
- After this I was keen to deepen our team’s knowledge of our users. I worked to understand the system conceptually and began to segment our users into outcome-based personas - from those wondering ‘where’ something is through to those using the system data for strategic policy decisions.
- If I were performing the initial run-through again, I would push harder to implement in-system analytics to better understand users’ close interaction with the system. While I had qualitative evidence, the ability to use quantitative performance metrics in a mixed-methods triangulation would have provided a better initial benchmark. This would be useful both for showing improvements to the system through this work and for reinforcing the ROI of design at C-suite level in a small company.