Google Assistant - Language Learning
Project Format: Semester-long group course project for SI 622: Needs Assessment and Usability Evaluation
Role: UX Researcher
Time Span: September 2021 - December 2021 (4 months)
Task: Investigate the potential for integrating language learning into Google Assistant
Design Process
Overview
This course familiarized students with evaluating the usability, usefulness, and acceptability of interactive software systems. It also taught me how to choose among diverse evaluation methods for specific goals and types of systems, and how to write reports and give presentations that effectively communicate evaluation results and argue for the validity and relevance of the methods chosen.
My group consisted of me and four other MSI graduate students, and we were paired with the Google Assistant division of Google for the semester-long project. We were assigned several questions regarding the usability of Google Assistant in a language learning context, such as: How can Assistant devices actively help users improve their language skills? What are the current pain points in learning a new language? Are there any limitations with today's assistant devices? What are some opportunity areas?
Methodology + My Roles
Interaction Map
Research Goal: To understand how the Google Assistant application is currently structured and to identify user journeys.
Outcome: Our interaction map provided a static representation of the Google Assistant system that helped us understand all the possible actions within the application. This, in turn, helped us structure the research goals for our user interviews.
Time Span: Two days.
My Role: I was responsible for taking screenshots of the different possible interactions within the Google Assistant mobile application and mapping those interactions.
User Interviews
Research Goals: Identify the different user groups of Google Assistant and their specific needs, then construct three personas representing some of the diverse dimensions of the people we interviewed.
Outcome: We discovered the motivations behind different users' visits to the Assistant app, which prepared us to explore how comparable platforms structure, manage, and present their information. We used the information we gathered and the personas we developed throughout the rest of our research methods.
Time Span: Ten days.
My Role: I first worked collaboratively to develop the interview script. Of the five interviews, I was the interviewer for one and the note-taker for another. I also helped design the three user personas.
Comparative Evaluation
Research Goal: Identify strategies used by competing language learning and virtual assistant products that facilitate effective language learning.
Outcome: We discovered which features are most significant and worth including in a language learning context.
Time Span: Five days.
My Role: I analyzed three of Assistant’s direct competitors in the language learning space: Duolingo, Babbel, and Rosetta Stone.
Survey
Research Goal: To better understand how satisfied different user groups are with Google Assistant, and to discover what a larger population of users looks for in a language learning platform.
Outcome: The survey helped us further identify user groups and their goals regarding language learning, which we compared against the Google Assistant application to determine its shortcomings. This prepared us to develop our heuristic evaluations so we could examine those shortcomings further.
Time Span: Ten days (survey deployed for 7 days, 3 days for analysis).
My Role: I worked collaboratively to develop survey questions that aligned with our research goals. The questions were created through multiple iterations, including brainstorming, narrowing the question scope, and revising according to our professor's recommendations. The survey consisted of multiple-choice questions, select-all-that-apply questions, a slider question, and a free-response question.
Heuristic Evaluation
Research Goal: To pinpoint where usability issues exist in the Google Assistant application through the lens of Nielsen's Ten Heuristics.
Outcome: The findings from the heuristic evaluations were used to prioritize which violations most affected the user, which allowed us to structure usability tests to focus on those violations and how they affect the user’s experience.
Time Span: Five days.
My Role: I first worked collaboratively with my team to identify which of Nielsen’s heuristics apply to the Assistant application. Then, I conducted an individual heuristic evaluation before coming together with my team and aggregating our results.
Usability Tests
Research Goal: To observe how users actually interact with the Google Assistant application and pinpoint where it falls short of the user goals and tasks we identified in our interviews and survey.
Outcome: The usability tests allowed us to understand where the application failed and where users struggled to complete the assigned tasks.
Time Span: Two weeks.
My Role: I worked collaboratively to write the usability test tasks and script. We then updated the tasks to be more specific based on feedback from our professor.
Final Video (above)
Goal: To present our high-level findings and recommendations.
Time Span: One week.
My Role: I was responsible for creating and presenting part of the findings and recommendations in the final video.