NanoMood is an AI-powered platform that provides symptom assessment of neuropsychiatric function through a data-centric approach.
My Role: UI/UX Designer, UX Researcher
Tools: Figma, FigJam
Duration: July 2024 – May 2025
Team: 2 UX Designers, 2 UX Researchers, Co-Founders, 1 Developer, 1 ML Engineer

My Impact

This was a redesign project where I led the end-to-end MVP design, rethought the user experience to address usability issues, conducted user testing with 30+ clinicians and patients, and pitched in 10+ investor meetings to drive early traction and validate core features.

Context

Mental health conditions are often underdiagnosed and poorly managed due to complex symptoms and limited monitoring capabilities. NanoMood addresses this challenge through AI-powered, continuous symptom assessment using multimodal data.

Problem

Current mental health care relies on infrequent assessments, making it difficult to track patients’ progress, detect early warning signs, or tailor treatments effectively. Without continuous and user-friendly data collection, clinicians and patients are left with incomplete insights into day-to-day mental health patterns.

Key Deliverables

Developed an integrated system with two key components: a patient app for easy symptom tracking and visualization, and a physician dashboard with AI-powered tools for risk assessment and treatment planning.

How Might We Statement

How might we improve mental health care by designing intuitive tools that continuously collect and visualize multimodal symptom data to support more personalized and proactive treatment?

Solution Preview

Clinician Platform

Patient Mobile Application

Research

To lay the foundation for NanoMood’s redesign, we conducted an in-depth research phase to understand both patients’ and clinicians’ pain points, needs, and goals. This included competitive benchmarking, a detailed audit of the existing product, and persona development—all of which shaped our design priorities.

Competitive Analysis

We analyzed several leading platforms in the mental health and neuropsychiatric space to benchmark best practices and uncover opportunities for differentiation. The analysis focused on:

Usability: How well competitors facilitated real-time data input, monitoring, and reporting for users and healthcare professionals.
Features: Identifying standard and innovative features offered by competitors, such as multimodal data integration, AI-driven insights, and risk stratification tools.
Visual Appeal: Comparing visual design styles to evaluate trends, clarity, and how effectively they conveyed complex health information.
User Engagement: Studying techniques competitors used to retain user engagement, such as reminders, gamification, or personalized content.

Key Takeaways:

Feature Benchmarking: Real-time insights and mood tracking were standard; many used overly technical formats.
Visual Clarity Gaps: Most platforms used dense charts or raw data—hard for patients to understand.
Opportunity for Simplicity + Trust: NanoMood could stand out with more human-centered, plain-language explanations and friendly visuals.

Previous Application Design Audit

For Patients:
  • The interface lacked clarity and consistency, making it less engaging and harder to navigate.
  • Inputting multimodal data was cumbersome, leading to frustration and reduced user adherence.
  • Visual feedback on symptom trends was minimal, leaving users with little actionable insight.

For Physicians:
  • The platform provided a fragmented view of patient data, making it difficult to interpret trends or prioritize care.
  • Limited support for risk stratification and treatment planning hindered timely and effective decision-making.
  • The overall design felt outdated and visually cluttered, detracting from usability and professional trust.

User Personas

To better define our core user types and needs, we created research-backed personas representing our primary stakeholders: patients and clinicians.

Key Takeaways:

Clarified Target Needs: Maya helped us emphasize clarity and comfort; Dr. Morgan emphasized efficiency and holistic visibility.
Kept Users at the Center: Personas served as design anchors at every key decision moment.

Findings

Based on the research insights, I established four success criteria to evaluate whether the final designs effectively address the identified problems.
  • Clarity and Understanding
    ◦ Is the multimodal nature of the app easy for patients to comprehend?
    ◦ Are the data visualizations clear, concise, and intuitive for all users?
    ◦ Can patients easily understand how to tag their data, ensuring accurate symptom tracking?
  • Clinician Usability
    ◦ Can clinicians view all patient data in a visually clear and organized manner?
    ◦ Is the "All Patients" dashboard easy to navigate and sort through based on key priorities, such as risk level or admission needs?
    ◦ Does the platform enable clinicians to quickly interpret a concise summary of mental health questionnaires for better decision-making?
  • Patient Engagement and Accessibility
    ◦ Are patients able to engage with the platform effortlessly, with clear workflows for data input and symptom monitoring?
    ◦ Does the system provide actionable insights and personalized feedback that patients can easily understand and act on?
  • Effectiveness of Summaries and Insights
    ◦ Are the summaries of patient data, such as trends, risk scores, and activity, presented in a way that is easy to interpret for clinicians?
    ◦ Does the app support seamless communication and understanding between patients and clinicians through well-structured data and visual elements?

Clinician Platform User Flow

Before:
The early version of the dashboard displayed a dense overview of patient data and graphs, but lacked a clear interaction flow. Features like filtering, note-taking, and session tracking were either missing or buried, making it harder for clinicians to act on insights efficiently.

Design Goal
We restructured the dashboard into a more task-oriented flow, separating patient filters, report access, and note management.

Mobile Application User Flow

Before:
The early version of the app had a fragmented structure, with multiple screens dedicated to similar tasks. This led to a cluttered experience and unnecessary complexity for users navigating between repeated flows.

Design Goal
We streamlined the experience by consolidating overlapping screens, grouping related features, and simplifying the overall flow to reduce cognitive load and improve accessibility.

User Testing

To ensure NanoMood addressed real needs, we conducted two weeks of structured user testing across both the patient and clinician platforms. Our focus: onboarding, conversational AI interactions, and LLM-powered health insights. These sessions helped us uncover usability challenges, capture user expectations, and refine the experience through targeted iterations.

Participants & Sessions

Patients: We recruited ~10 UCSD students who owned wearable devices and were interested in mental health.

Clinicians: We conducted five 1:1 sessions with physicians from UCSD Health, and one larger session with over 10 clinicians led by our co-founders.

Testing Scope

We divided the user testing into three key flows for the patient platform:
1. Onboarding & Account Creation
2. AI Chatbot Interaction
3. Health Data Visualizations: generated using LLMs developed by our machine learning engineering team to translate raw wearable data into personalized mental health insights (a conceptual sketch follows below)
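The actual pipeline belongs to our ML team and isn't reproduced here; purely as an illustration of the concept, a sketch along these lines (the client, model name, prompt, and function are all hypothetical) could turn one day of wearable metrics into a plain-language insight:

```python
# Hypothetical sketch of the insight-generation concept. The real pipeline was
# built by NanoMood's ML team; the model, prompt, and schema below are assumptions.
import json
from openai import OpenAI  # stand-in LLM client for illustration

client = OpenAI()

def summarize_wearable_day(metrics: dict) -> str:
    """Translate one day of raw wearable metrics into a plain-language insight."""
    prompt = (
        "You are a mental-health companion. Given today's wearable metrics, "
        "write 2-3 plain-language sentences a patient can act on. "
        "Avoid jargon and avoid diagnoses.\n\n"
        f"Metrics: {json.dumps(metrics)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example input: one day of wearable data
print(summarize_wearable_day({
    "sleep_hours": 5.2, "resting_hr": 74, "steps": 2100, "hrv_ms": 38
}))
```

The constraint that mattered most in testing was the output format: short, jargon-free sentences rather than raw numbers or charts.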

For clinicians, we walked them through:
1. A demo of both patient and clinician-facing dashboards
2. Key workflows like reviewing summaries, tracking medication response, and navigating patient insights

I personally conducted these sessions alongside another designer. While my teammate guided participants through the tasks, I asked follow-up questions, captured observations, and helped synthesize takeaways.

What We Measured

Key Findings and Iterations

Problem 1 of 6: Onboarding was originally split across three separate screens.

Insight:
Observation: Users seemed frustrated by the multiple steps on each screen.
→ During testing, participants hesitated and said the process felt unnecessarily long. When tested with a consolidated version, however, no confusion or drop-off occurred.

Action Taken:
Consolidated the three onboarding screens into a single streamlined flow, cutting the number of steps while keeping all required inputs.

Before
After

Problem 2 of 6: Users didn’t understand what data they were sharing

Insight:
User: “Why do we need to upload this data?”
→ Users were confused about what information was being collected and why.

Action Taken:
Added information icons beside each data input explaining its purpose and relevance to personalized insights.

Before
After

Problem 3 of 6: Complex graphs left users confused instead of informed.

Insight:
User: “These graphs are very complicated—I don’t even know how to read scatter plots.”
→ Users felt overwhelmed and couldn’t interpret the visualizations.

Action Taken:
Replaced scatter plots with simpler formats and introduced a “data interpretation” section summarizing key takeaways in plain language.

Problem 4 of 6: Lack of Real-Time Alerts

Insight:

User: “Can the app send a notification when it detects something unusual in my data?”
→ Patients wanted timely awareness of significant changes in their health metrics.

Action Taken:
Built a system for tagging/confirming events and added a notification flow to flag irregular biometric patterns.

New Feature Incorporated
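The production detection logic lived in the ML pipeline and isn't shown here; as a simplified, hypothetical sketch of the flagging idea, a rolling baseline comparison over one biometric stream could decide when to trigger the notification flow:

```python
# Simplified, hypothetical sketch of the irregular-pattern flag; the production
# detection logic and thresholds are assumptions, not NanoMood's actual model.
from statistics import mean, stdev

def flag_irregular(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading that deviates strongly from the patient's recent baseline."""
    if len(history) < 7:  # need a minimal baseline window
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma >= z_threshold

# Example: a week of resting heart rate, then an unusual spike
resting_hr_week = [62, 64, 61, 63, 65, 62, 64]
if flag_irregular(resting_hr_week, latest=88):
    # hand off to the notification flow; the patient can then confirm or deny the event
    print("Unusual resting heart rate detected - sending notification")
```

Pairing the notification with the tagging/confirming step keeps false detections from flowing straight into the patient's record.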

Problem 5 of 6: AI Mislabeling or Missing Events

Insight:

Clinician: “Sometimes the metric data can detect a false event or miss an episode. I think there should be some sort of feature for patients to confirm these episodes and add any missed episodes.”
→ Clinicians emphasized the need for more accurate episode logging.

Action Taken:
Enabled patients to confirm, deny, or tag health events before AI-generated interpretations are finalized.
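One way to model this review step (the names, states, and fields here are illustrative assumptions, not the shipped schema) is to hold AI-detected events in a pending state until the patient confirms, denies, or tags them:

```python
# Illustrative sketch of the confirm/deny/tag review step; states and field
# names are assumptions, not NanoMood's actual data model.
from dataclasses import dataclass, field
from enum import Enum

class EventStatus(Enum):
    PENDING = "pending"              # AI-detected, awaiting patient review
    CONFIRMED = "confirmed"          # patient verified the episode
    DENIED = "denied"                # patient marked it as a false detection
    PATIENT_ADDED = "patient_added"  # episode the AI missed, logged manually

@dataclass
class HealthEvent:
    timestamp: str
    description: str
    status: EventStatus = EventStatus.PENDING
    tags: list[str] = field(default_factory=list)

def finalize_interpretation(events: list[HealthEvent]) -> list[HealthEvent]:
    """Only reviewed, non-denied events feed the AI-generated interpretation."""
    return [e for e in events
            if e.status in (EventStatus.CONFIRMED, EventStatus.PATIENT_ADDED)]
```

Gating the interpretation on reviewed events directly addresses the clinician's concern about false or missed episodes.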

Problem 6 of 6: Switching Between Tools During Sessions

Insight:

Clinician: “Most clinicians use separate note-taking software—switching mid-session would be a barrier.”
→ Workflow interruptions were a major concern for clinicians.

Action Taken:
Added a built-in note-taking feature in the clinician platform to streamline workflow and reduce app-switching.

New Feature Incorporated

Positive Feedback Highlights

Clear Progress Tracking:
Clinician: “I like that it shows pre- and post-medication comparisons.”
→ Clinicians appreciated how the app helps visualize treatment impact and improves session quality by providing a holistic view of patient progress.

Seamless Chatbot Experience:
→ Patients interacted with the chatbot naturally, without confusion, making it easy to share concerns in their own words.

Empowering Mental Health Understanding:
Patient Insight:
→ Many users shared that the platform helped them better understand patterns in their own mental health and made it easier to express their experiences and concerns.
Clinician Insight:
→ Multiple clinicians noted that the app’s summaries and visualizations could support more productive sessions and offer a more holistic view of each patient’s mental health journey.

Final Solution

Clinician Platform

Comprehensive Patient Profiles: Centralized health data, activity trends, and mental health episodes.
At-a-Glance Overview: Centralized dashboard for managing all patients efficiently.
Efficient Documentation: Quickly document patient conditions with structured note templates.
Comprehensive Mental Health Summary: Score summaries of standard assessments such as PHQ-9, GAD-7, and PCL-5 (standard scoring sketched below).
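The summary card follows the standard published scoring of these instruments; as a minimal sketch (the dashboard's actual implementation is assumed), a PHQ-9 total is the sum of nine 0–3 item responses mapped to the published severity bands:

```python
# Minimal sketch of PHQ-9 scoring for the summary card; the dashboard's real
# implementation is assumed. Severity bands follow the published PHQ-9 cutoffs.
def phq9_summary(responses: list[int]) -> tuple[int, str]:
    """Sum nine 0-3 item responses and map the total to a severity band."""
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

print(phq9_summary([2, 1, 2, 1, 1, 0, 1, 2, 1]))  # (11, 'moderate')
```

GAD-7 and PCL-5 summaries follow the same pattern with their own item counts and cutoffs.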

Patient Mobile Application

Personalized Profile Setup: Guides users through essential health data entry for tailored insights.
Health Insights: Simplifies metric data and personalizes it to reveal patterns in the user's mental health.
Chatbot Prototype: Simulates real AI interactions to test user behavior and expectations in mental health support.

Reflection

Working on NanoMood's platform in a startup environment was an enriching experience that taught me the value of designing an MVP by balancing simplicity, functionality, and feasibility within tight timelines. Collaborating with a multidisciplinary team, including developers, machine learning experts, and user researchers, allowed me to align technical constraints, data-driven insights, and user needs into a cohesive product. The iterative process, driven by continuous feedback loops, ensured that the platform remained user-centered while leveraging cutting-edge AI capabilities. This experience emphasized the importance of cross-team collaboration, adaptability, and clear communication in creating impactful solutions for complex challenges like mental health care.
