Selected Works

Music Recommendations #1: Why People Don't Use Them

Summary

As a team of one, I owned a feature of a music streaming service and collaborated with ML data scientists to address a problem: streaming counts for personalized songs remained low despite a recent overhaul of the recommendation system.

I approached it qualitatively because we needed to understand why streaming counts weren't growing rather than just measure them. Through heuristic evaluation, journey mapping, and usability testing, I identified that UI complexity was preventing users from discovering recommendations.

My insights led to interface simplification and to the recognition that users prefer familiar music in certain contexts. Follow-up quantitative research implementing these findings resulted in a 6% increase in streaming counts.

Mockup of the product

Key Insights

  • User experience is fragmented by expertise level: New users struggle with navigation and exhaust their cognitive resources before discovering recommendations, while long-term users rely on personal shortcuts that bypass new features entirely.
  • Familiarity drives engagement more than novelty: Users prefer recognizable songs in recommendation lists, challenging the team's assumption that diversity in recommendations is always preferable.
  • Context significantly influences music selection behavior: Users adopt more conservative selection strategies when choosing music for specific activities (like driving or jogging), gravitating toward familiar songs and avoiding exploration.

Detailed research process

Music Recommendations #2: How UX Research Improved KPIs

Summary

As a team of one, I owned a music streaming feature and collaborated with ML data scientists. The problem was twofold: demonstrating how familiar songs affect user engagement, and overcoming the data scientists' reluctance to move away from diversity as the remedy for the "filter bubble".

To address the data scientists' skepticism, I took a quantitative approach and reframed the study: rather than testing familiarity directly, we would test whether increased recommendation diversity improved engagement.

Using A/B testing and log data analysis, I found that increased diversity attracted only explorative users, not the majority, and produced no significant change in listening habits. The data scientists accepted my evidence-based suggestion to add a few familiar songs to playlist recommendations, resulting in a 6% increase in streaming counts, a key KPI.

Recommendation feature of the mockup

Key Insights

  • User segmentation reveals distinct recommendation preferences: Not all users respond to diverse recommendations in the same way. Explorative users showed higher click-through rates and page views with diverse playlists, while most users preferred familiar content.
  • Recommendation diversity functions primarily as an attraction mechanism: Even when explorative users clicked through more with diverse recommendations, their actual streaming behavior remained consistent, suggesting diversity attracts attention but doesn't significantly change listening habits.
  • Familiarity and diversity serve different purposes at different stages: Familiar songs may build initial trust in recommendations for new users ("they know what I like"), while continuing users may need more diversity to maintain interest and prevent recommendation fatigue.

Detailed research process

Building Critical User Journeys for a Finance Product Roadmap

Summary

As part of a task force, I collaborated cross-functionally with diverse stakeholders, including leadership, to solve a problem: clarifying our mobile finance app's Critical User Journeys (CUJs) to inform the next year's roadmap.

I approached it with a dynamic mixed-methods framework, incorporating both existing data and fresh user perspectives for fast-paced, efficient collaboration. By triangulating behavioral log data, user scenario reviews, and user interviews, I proposed candidate user journeys and iterated on them with stakeholders to finalize the CUJs.

The CUJs were successfully integrated into roadmap planning, gained company-wide recognition, and were applied directly in feature prioritization decisions.

A mockup of the product

Key Insights

  • Understanding user perspectives across the product lifecycle is critical: Because CUJs accumulate from user behaviors over several years, it is essential to hear from both new and long-term users.
  • Facilitating cross-functional collaboration is key to success: Unlike single-topic studies, identifying product-level CUJs requires teamwork and communication across departments, guided by qualitative research and stakeholder feedback.
  • Addressing internal stakeholder perspectives drives adoption: As stakeholders' concerns and questions about the CUJs were addressed, they became more supportive and actively referenced the outcome in future planning.

Detailed research process

Foundational Research for Building a Data Product for AI

Summary

As the UX research lead at an early-stage startup, I collaborated cross-functionally with technical stakeholders to develop a web product for diagnosing AI dataset quality. The challenge was creating a user-friendly MVP for a highly technical product with few market benchmarks.

Starting with a simple prototype, I employed a generative research approach using qualitative methods. First, I built stakeholder consensus on UX by democratizing the research process. Then, with leadership, I defined target user groups and critical product features. Through in-depth interviews and analysis, I developed comprehensive user scenarios that informed the product's design iterations, which I then validated with usability testing.

This collaborative approach led to a successful product launch at CES 2024 and secured partnerships with major industry players, establishing clear UX strategies for future product plans.

AI Dataset Quality Diagnostic Report

Key Insights

  • The fundamentals of UX remain essential: Even in the technical domain of data and AI, user-centered design principles are crucial for the product development process.
  • Understanding users' data backgrounds is paramount: Data literacy and skills vary widely among data experts depending on their backgrounds and experience levels, which requires thoughtful targeting of the "main" user group.
  • Ensuring trustworthiness through usability is critical for an AI product: Potential customers may be concerned about the trustworthiness of an AI-powered product and look to its usability for reassurance.

Detailed research process