
Adaptive Cleaning, Personalized to Your Home


Roomba's cleaning intelligence already adapts based on dirt and room type. This project gave it something new: context. By bringing household composition into the decay model, we made personalization real: not just a setting, but a behavior change.




Role: Product Design Manager & Feature Design Lead, iRobot

Team: 2 designers, 1 writer, and 1 product manager, plus data scientists and developers

Tools: Figma, Figma Make, Lovable, Copilot, Usertesting.com

Skills: UX Strategy & Research, Systems Design, Design Leadership, Cross-functional Collaboration

Deliverables: Home Profile (Shipped March 2026) + roadmap for improvements & personalization



CONTEXT

Roomba's adaptive cleaning system, based on the Clean Score, was already live. It uses a probabilistic decay model to predict how quickly a room becomes recommended for cleaning.


But for all its intelligence under the hood, the system knew nothing about who actually lived in the home. Every household got the same decay rates. A family with three kids and two shedding dogs was treated exactly like a single adult in a one-bedroom. The robot had no way to account for the difference, and users had no way to tell it.



OPPORTUNITY

We saw a clear gap: household context — the people, kids, and pets that drive real cleaning needs — wasn’t meaningfully part of the model.


The Classic app had a rough version of this in Dirt Detective settings, where users could enter basic household information. But the inputs were limited and the feature never fully influenced the cleaning system. When Roomba Home launched, that functionality didn’t carry forward.


This work revisited the concept and integrated it directly into the model.


The opportunity was to give the system a way to learn who lives in a home upfront, and let that meaningfully shape how it behaves. But "let users tell us about their home" is easy to say and hard to do well.


We had real constraints:


  • Too many questions will prevent setup completion

  • Inputs that don't actually change behavior are just noise

  • Data collection has to feel trustworthy, not invasive

  • Whatever we build has to be worth the engineering investment

This wasn't really a survey project. It was a question of which inputs were worth asking for at all.


KEY INSIGHTS

I partnered with a member of our research team to launch a 100-person survey and card sort — specifically to ground our assumptions in real behavior before we started designing.

  • Kids under 5 change how often everything gets cleaned

    Parents report significantly more frequent cleaning cycles across the board. Having young children in the home is one of the strongest predictors of cleaning frequency — and a clear input for a system trying to stay ahead of mess.

  • Pets change which rooms get cleaned, and how often

    Pet owners clean specific rooms far more often than non-pet owners. The effect isn't evenly distributed — it concentrates in the spaces where pets spend the most time, making shedding behavior a highly actionable input.

  • Weather topped the list

    Weather was the most cited driver of mess across all participants.



TRANSLATING REAL WORLD CONTEXT INTO THE MODEL

The Clean Score uses a probabilistic model that controls how quickly a room becomes recommended for cleaning. Historically, those parameters were set by standard inputs: when the room was last cleaned, its room type, and previous dirt detections. We expanded them to include household composition.

Working directly with data science, we mapped out how different inputs could shift those parameters — surfacing cleaning recommendations sooner in households where rooms need attention more frequently.


Brainstorming how various household inputs map to model parameters and decay rate ranges. Numerical thresholds are redacted - some things must stay off the internet :)

We also needed guardrails to keep the system predictable: 

  • A room can never go from clean to recommended for cleaning in under a day. Anything faster would feel unsustainable, no matter how chaotic the household.

  • A room can't go more than 10 days without a cleaning recommendation, regardless of input. Even the calmest, best-kept rooms need a touch-up at least every 10 days.

Those constraints weren't arbitrary. They kept the personalization from tipping into false urgency or sluggishness, making the system feel trustworthy rather than erratic.
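As a rough sketch of how these pieces fit together, the decay-plus-guardrails logic might look like the following. The multipliers and the exponential-decay form are my own illustrative assumptions (the real thresholds are redacted); only the 1-day and 10-day guardrails come from the design itself.

```python
import math
from dataclasses import dataclass

# Guardrails from the design: a room is never recommended in under
# 1 day, and never goes more than 10 days without a recommendation.
MIN_DAYS, MAX_DAYS = 1.0, 10.0

@dataclass
class Household:
    adults: int = 1
    kids_under_5: int = 0
    shedding_pets: int = 0

def decay_rate(base_rate: float, home: Household) -> float:
    """Scale a room's base decay rate by household composition.
    The multipliers are illustrative, not iRobot's actual parameters."""
    rate = base_rate
    rate *= 1.0 + 0.30 * home.kids_under_5        # young kids speed up decay
    rate *= 1.0 + 0.25 * home.shedding_pets       # shedding pets do too
    rate *= 1.0 + 0.05 * max(home.adults - 1, 0)  # extra adults, mildly
    return rate

def days_until_recommended(base_rate: float, home: Household,
                           threshold: float = 0.5) -> float:
    """Days until an exponentially decaying clean score,
    score(t) = exp(-rate * t), falls below the recommendation
    threshold, clamped to the guardrail window."""
    days = math.log(1.0 / threshold) / decay_rate(base_rate, home)
    return min(max(days, MIN_DAYS), MAX_DAYS)
```

With these made-up numbers, a home with two toddlers and two shedding dogs surfaces recommendations days sooner than a single-adult home, but the clamp guarantees every result lands between 1 and 10 days.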



PLACING THE SURVEY IN THE USER JOURNEY

Before we could design the survey itself, we had to figure out where it lived. We mapped out two options: introduce the survey during setup, before the user sees anything, or wait until after mapping, once they have spatial context and a saved map.


Option 2 was tempting at first. It gained us more spatial context, and potentially better answers.


However, we landed on option 1, and it wasn't a close call once we thought it through. 

  • This survey is the first thing users see for a reason. Personalization that comes first feels like a promise. Personalization that comes late feels like an afterthought. 

  • The Home Profile sits inside a broader home-centered IA built around the idea of telling us about your home. That framing sets the stage for everything that follows.

  • Competitive research backed the call. Even though competitor apps remain robot-centered, asking home-level questions at setup is a consistent pattern.



DECIDING WHICH INPUTS ARE WORTH ASKING FOR

❌ Manual decay controls We explored letting users adjust how quickly individual rooms decay. It was tempting as a power user feature, but it conflicted with the core promise of the product: Roomba should just know. We cut it before testing.


❌ Room-by-room usage patterns We prototyped asking things like "how often do you cook?" or "which rooms are your pets frequently in?" The timing was the problem: the Home Profile appears before users have seen their map. Asking for spatial detail before they have spatial context would've introduced guesswork, not clarity. We flagged it for a future in-context moment instead.


❌ Home square footage We explored whether larger homes might correlate with less frequent cleaning recommendations. Research showed no material patterns, so we dropped it.


✅ Number of adults We considered whether general occupancy could move the model in a meaningful way. Research showed this didn't do much on its own, but we kept it as a baseline input for household size context.


✅ Number of children Kids under 5 are one of the strongest predictors of cleaning frequency. We considered a binary "do you have kids?" to reduce any sensitivity around the question, but alpha testing showed users answered the specific number without hesitation.


✅ Number of pets We considered granular pet types like cats, dogs, fish, etc., but research pointed to shedding behavior as the actual driver, not species. We landed on one question: how many pets that shed hair live in your home? Simple to answer, directly connected to how the home feels.


❌ Weather and environmental factors This was the hardest call, because weather was the only perceived mess driver that came up in all types of households. Integrating it would've required more granular location permissions, external API dependencies, and costs that weren't worth the impact we could deliver in V1. We documented the rationale and scoped it for a future iteration.

ALIGNING ON HOW IT WORKS

This wasn't the first time we'd wrestled with how to translate clean scores into something meaningful for users. Early on, we tested the algorithm using only raw numerical scores before the visualization even existed.

That foundation shaped how we approached this project. Working with data science, we made informed guesses about how these inputs would shift the score and change how quickly a room surfaced as recommended for cleaning.


Before the UI was ready, we tested those assumptions in alpha using an updated version of the original survey from the Classic app — an earlier, rougher version of this same idea that lived under Dirt Detective. The math could be right and the experience could still feel off. This time, we didn't live with assumptions or translate numbers in a Mural board. We tested the experience for feel and tweaked it accordingly.


EVOLUTION FROM COLD TO CONVERSATIONAL

Getting the questions right took more iteration than expected. Early drafts were functional but cold: technically clear, but they read like a form.



The next pass introduced warmer framing and personalized confirmation states based on what users shared. If you told us you had pets, the completion screen acknowledged it. The survey started to feel like a conversation rather than a data collection exercise.
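The idea can be sketched as a tiny piece of logic keyed off the answers. The strings and branching below are placeholders of my own, not the shipped copy:

```python
def confirmation_copy(kids_under_5: int, shedding_pets: int) -> str:
    """Pick a completion message that acknowledges what the user
    shared. Placeholder strings, not iRobot's actual copy."""
    if kids_under_5 and shedding_pets:
        return "Kids and pets? We'll keep up with your busiest rooms."
    if shedding_pets:
        return "We'll pay extra attention where your pets spend time."
    if kids_under_5:
        return "Little ones make big messes. We're on it."
    return "Got it. We'll keep your home feeling fresh."
```

The point of the pattern is simply that the same inputs driving the model also drive the tone, so the survey reads back what it heard.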



We carried that thinking into the final designs, tightening the copy further and pushing personalization and warmth at every stage, not just the final screens. We partnered with data science again to define the final profiles and which combinations of inputs landed a user in each one.



HOW WE WORKED ACROSS TEAMS

This wasn't a feature that design could own alone.

  • Data science defined how inputs map to model parameters and set the decay guardrails

  • Engineering scoped feasibility, built the ingestion layer, and kept performance within bounds

  • Product helped prioritize scope and decide what to defer 

Within my team, most of us had a hand in this project. I had done the foundational work and user research, one designer built the first wires, I guided the refinement, another turned my AI-generated concepts into custom illustrations, and our copywriter shaped my early AI copy into something that actually sounded like us. 

I led end-to-end while staying hands-on where it mattered, keeping the vision for this feature alive through execution. On a small team, knowing when to delegate and when to stay close mattered as much as the design itself.


BRINGING IT ALL TOGETHER: WHAT WE SHIPPED

Home Profile gives Roomba real world context. Two questions are asked once and change how the robot behaves from day one, gaining accuracy over time. The algorithm updates shipped alongside the new experience.


Household composition now shapes how quickly rooms surface as recommended for cleaning, without users ever seeing the math behind it.


Introducing customization to new and existing users

The survey reached users at different moments depending on where they were in their journey. New users saw it during first-time setup, before their map, as part of getting their robot ready. Existing users got it at the end of a new app version walkthrough as a prompt to bring their robot up to speed on their home. 



Asking less, doing more

Two questions, presented early, set the stage for an intelligent smart-home experience.

 

The copy and context below each question respond dynamically to what users enter, so the experience feels less like a form and more like the app is already paying attention.


Tailored to your home

Answers feed directly into the model, surfacing one of five recommendation profiles: Balanced Everyday, Pet-focused, Family-focused, Always-on and Standard Day-to-day.

Instead of exposing the math, the profile gives users a plain-language summary of how their robot will behave.
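To make the mapping concrete, here is one way the inputs could resolve to a profile. The five names come from the shipped feature, but the thresholds and precedence are my own guesses, not the data-science-defined combinations:

```python
def recommendation_profile(adults: int, kids_under_5: int,
                           shedding_pets: int) -> str:
    """Resolve household inputs to one of the five profile names.
    Thresholds and precedence are illustrative guesses."""
    if kids_under_5 and shedding_pets:
        return "Always-on"            # both major mess drivers present
    if shedding_pets:
        return "Pet-focused"
    if kids_under_5:
        return "Family-focused"
    if adults >= 3:
        return "Balanced Everyday"    # larger adult-only household
    return "Standard Day-to-day"
```

A simple precedence rule like this keeps the profile explainable: users can see why they landed where they did from the answers they just gave.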


Always editable

Life happens and households evolve over time. The survey doesn't disappear after setup. Users can return to Home Profile anytime through Account Settings and update their household using the same screens, with no extra dev overhead.



REFLECTIONS
  • Restraint is a design decision Every input we considered had a reasonable argument for inclusion. The real work was figuring out which ones were actually worth the tradeoff. Trimming anything that didn't change robot behavior is what kept the survey from becoming a burden.

  • The math and the experience are not the same thing The model could be right and the experience could still feel off. Testing for feel in alpha — not just for accuracy — is what gave us confidence the system would actually land with users.

  • Small teams move fast when everyone plays to their strengths This feature touched every person on the design team. Having people who could own their piece — and trust each other to — is what made it possible to move quickly without losing quality. The wires, the illustrations, the copy: each one got better because the right person was driving it.



WHAT'S NEXT?

It’s still early to measure long-term impact, but the hypothesis is that users who complete Home Profile should receive more accurate cleaning recommendations from the start. We expect this to increase the use of Smart Clean, the automation that sends Roomba to intelligently clean the recommended rooms.


Because household context is now part of the existing decay model, the system is already set up to support deeper personalization as Roomba Home continues to evolve.
