AI Assistant
Robin's first AI agent for workplace analytics.
Role
UX Designer
AI Design Strategist
Timeline
Oct’24 - Dec’24 (Beta)
Feb'25 - March'25 (MVP)

Overview
Context
Robin captures extensive data across space utilization, booking patterns, employee behavior, and visitor metrics. Its Workplace Analytics product provides dashboards with actionable insights that help facilities managers optimize operations, reduce costs, and enhance employee experiences.
However, in recent months, we noticed a 2-5% decline in analytics usage month over month, particularly among facilities and operations managers.
At the same time, Robin wanted to position itself as an AI-forward leader in workplace technology.

The Solution: AI Assistant
I was tasked with leading the design of Robin's first AI-powered product—an intelligent agent that would transform how users access workplace analytics.
Starting as a conversational assistant built on generative AI, the product evolved into an agentic system capable of understanding intent, handling query variations, and retrieving contextual insights.
My objective was to eliminate technical barriers and enable intuitive, instant access to actionable insights—delivered in a formatted, user-ready way—while establishing Robin's credibility in AI innovation.
The Outcome
Beta Insights
- 30% of beta participants engaged with the AI Assistant
- 4-5 follow-up questions per session indicating genuine exploration
MVP Outcome
- 70% of customers activated the Assistant within Q1
- 40-50% repeat usage, signaling genuine value beyond curiosity
The Challenge
Reason behind Low Usage
While Robin’s dashboards provided valuable data, accessing advanced metrics like occupancy or office usage often meant building custom dashboards on top of the existing ones. This added effort slowed decision-making and left many managers without the timely insights they needed.
The 2–5% monthly usage drop was mainly due to:
High complexity
The analytics tools felt overwhelming and intimidating, especially for users new to Robin or workplace analytics in general.
Time-consuming workflows
Building custom dashboards and extracting meaningful data required substantial time investment.
Missed opportunities
The delays in accessing critical insights prevented managers from making timely, data-driven strategic decisions about their workplace operations.
Objective
We set out to fundamentally transform how users interact with workplace analytics, enabling intuitive access to insights through three core objectives:
Eliminate complexity
Simplify workflows so any user could easily access workplace data without technical barriers
Enable instant access
Deliver workplace insights rapidly, removing delays that hindered decision-making
Deliver actionable answers
Provide formatted, contextual responses that users could immediately apply to their workplace strategies
Product Strategy
Two paths emerged to achieve this vision:
1. Enhance our existing dashboard system with AI-powered automation and improved reporting tools.
2. Introduce a lightweight, conversational solution where users could ask questions and receive instant answers.
​
We chose the second path and launched in two phases:
Phase 1 : Beta
We decided to release an early beta version of the AI assistant to power users as a fast, strategic response—validating whether conversational AI could reduce friction and drive adoption. We wanted to see what worked and expose any limitations.
Phase 2: MVP
Based on beta learnings, we evolved from a simple generative chatbot to an intelligent agent, added more in-product guidance, and redesigned key interactions to build trust.
Business Goal
This product goal aligned with Robin's broader strategic opportunity: establishing the company as an AI-forward leader in the evolving workplace technology landscape.
Our goal was to transform data complexity into clarity with one simple act—
type your question and get the answer.
Success Criteria
Adoption
Increase usage of the assistant compared to dashboards, showing that analytics became more accessible.
Faster Insights
Reduce time from hours of setup to instant answers.​​
Deeper Engagement
Encourage follow-up questions that signal curiosity and exploration.​
Research
1. User Research
Objective
Our research aimed to uncover the key challenges users faced with the existing analytics dashboards that hindered adoption. We set out to define the types of questions users expected the assistant to answer and how it could provide meaningful support.
In addition, we explored users’ familiarity and comfort with other AI tools to identify potential friction points in adopting a conversational assistant.
Approach
We conducted 1:1 sessions with both internal administrators and external workplace admins/facilities managers to learn how they use Robin Analytics and where AI might fit into their daily workflows.
Insights
Managers lacked direct access to insights
Most relied on others for reports since creating dashboards for specific questions was too technical and time-consuming.
Ad hoc queries weren’t well supported
Dashboards handled trends well but made answering simple, one-off questions tedious and inefficient.
Conversational tools felt intuitive
With familiarity from ChatGPT-like assistants, natural language promised a lower learning curve and easier entry into analytics.
2. AI Pattern Research
Objective
As AI was a new design space for me, I wanted to understand how designing for AI products differs from traditional digital products. My goal was to learn the best practices for chatbot experiences and identify interaction patterns that would feel seamless, especially for novice users.
Approach
I explored various LLM-based chatbots already on the market to experience how users typically interact with them. In addition, I studied established best-practice resources and articles on conversational AI design to ground my design decisions in proven guidelines.
Insights
Set Clear Expectations
Successful chatbots work within well-defined boundaries. From the start, users should know what the assistant can and cannot do, reducing false expectations and building trust in its capabilities.
Design for Guidance
Providing query examples, help text, and graceful error recovery prevents frustration. When the bot doesn’t understand, fallback options or rephrasing suggestions keep the experience smooth and usable.
Gather Feedback & Evolve
A chatbot should be treated as a living product. Capturing user feedback and monitoring real interactions helps refine its scope, improve accuracy, and evolve features to meet genuine needs over time.
Phase 1 : Beta Solution
To address low adoption and the need for quick insights, we designed an AI assistant allowing managers to query analytics in natural language.
This beta release, scoped to advanced analytics users, validated whether conversational access could reduce friction.
Designing the Entry Experience
We explored a sidebar layout but realized users didn’t need to maintain context with other dashboards or reports while chatting with the assistant. Transitioning to a full-screen model allowed us to prioritize space for conversation, reduce distractions, and position the assistant as the entry point to the analytics module — aligning with the business goal of framing Robin Analytics as a smart, intelligent solution.
Full Screen over Sidebar
Setting Expectations
To manage expectations around the beta, we focused on clarity and transparency from the first interaction. These constraints were framed as deliberate guardrails — ensuring the experience felt reliable while staying within technical and cost boundaries.
​
- Defined scope - Limited to bookable office resources.
- Transparency - Disclaimers highlighted potential accuracy gaps.
- Session rules - Conversations automatically end after 8 hours of inactivity.

Guidance for Users
To reduce uncertainty, we provided sample questions at the start of a conversation, helping users understand what kinds of queries worked best. These examples were drawn from the most frequently asked questions uncovered during our research, making them both familiar and immediately useful.
​An always-available FAQ link offered additional support without overwhelming the main interaction.
Gathering Feedback
As a beta release, feedback was critical. We embedded a lightweight, opt-in feedback mechanism within the interface, ensuring it appeared only when relevant and could be easily dismissed. This kept the experience non-intrusive while still capturing valuable insights from engaged users.
Static Designs




Beta feedback
After beta launch, extensive user feedback revealed four fundamental barriers preventing broader adoption.
The Blank Slate Problem
Users without a specific question in mind didn't understand what the AI could do. When we asked non-users why they hadn't engaged with the beta, the response was consistent: "We don't know what to ask."
​
The six example questions we provided at launch weren't sufficient to communicate the assistant's full capabilities. Users needed a way to discover and explore what the system could answer before formulating their own queries—essentially, they needed to see the possibilities before knowing what to ask.
The Phrasing Problem
Even users who knew what information they needed struggled with how to ask for it. They understood what workplace data they were looking for, but couldn't phrase questions in ways the assistant interpreted reliably, which often led to wrong answers.
"I don't think the issue is with the AI itself—it's more about the learning curve. If I had a better understanding of how to phrase questions, I'd get more value. A better learning tool on how to use the AI effectively would be incredibly helpful."
Accuracy & Complexity Limitations
To deliver real value, we needed to expand beyond our initial scope of bookable office resources to include employees, meetings, schedules, and user data. However, our LLM-based approach struggled to handle this increased data complexity while maintaining acceptable accuracy.
The Trust Barrier
As a new technology making its debut in enterprise workplace management, users hesitated to rely on AI for important operational decisions. Without visibility into how the system reached its conclusions or what data it accessed, adoption remained limited to early adopters willing to experiment.
Phase 2 : MVP Solution
Based on these insights, we evolved the product through four strategic changes for MVP launch, each directly addressing a critical adoption barrier.
Guided Onboarding with Categorized Questions
To address the "We don't know what to ask" problem, we introduced a welcome screen with curated question categories—organized around common workplace analytics needs like space utilization, booking patterns, and employee behavior.
This solved the blank slate problem by showing users what the AI could do before they needed to ask. Users could browse and discover capabilities through exploration, building a mental model of the system's scope without facing an intimidating empty input field.
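For illustration only, here is a minimal sketch of how the categorized starter questions might be modeled as data the welcome screen renders; the category names, sample questions, and types are assumptions, not Robin's actual implementation.

```typescript
// Illustrative sketch only: categories and questions are hypothetical,
// not Robin's shipped data model.
interface QuestionCategory {
  id: string;
  label: string;
  questions: string[];
}

const starterCategories: QuestionCategory[] = [
  {
    id: "space-utilization",
    label: "Space utilization",
    questions: [
      "What was our average desk occupancy last month?",
      "Which floors are underused on Fridays?",
    ],
  },
  {
    id: "booking-patterns",
    label: "Booking patterns",
    questions: [
      "How many meeting rooms were booked this week?",
      "Which day of the week has the most desk bookings?",
    ],
  },
];

// The welcome screen renders each category as a browsable group;
// selecting a question submits it as the user's first message.
```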

Typeahead Suggestions
To help users phrase questions effectively, we introduced typeahead that surfaced contextual suggestions as they typed. Even with just a few words entered, users could see properly formatted questions, reducing friction and helping them discover effective phrasing patterns.
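A minimal sketch of how such typeahead could work, assuming suggestions are drawn from a curated question bank and matched by simple substring search (a production system would likely use fuzzier or semantic matching):

```typescript
// Illustrative sketch: surface curated, well-phrased questions that match
// what the user has typed so far. Substring matching keeps the example
// simple; a real system would likely use fuzzy or embedding-based search.
function suggestQuestions(
  input: string,
  questionBank: string[],
  limit = 5
): string[] {
  const query = input.trim().toLowerCase();
  if (query.length < 3) return []; // wait for a few characters before suggesting
  return questionBank
    .filter((question) => question.toLowerCase().includes(query))
    .slice(0, limit);
}

// Example: typing "occupancy" surfaces fully formed questions the user can
// pick, teaching effective phrasing instead of leaving them to guess.
```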

Agentic Architecture
To address the accuracy issue, we evolved from an LLM-based approach to an agentic architecture. This enabled us to include more data modules beyond bookable resources—such as user data, meeting schedules, and employee information—while maintaining high accuracy across all queries.
This architectural shift also ensured we were future-proof, allowing us to scale capabilities as workplace analytics needs expanded without compromising accuracy.
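To make the architectural shift concrete, here is a heavily simplified sketch of an agentic routing layer: rather than asking a single LLM prompt to answer from raw data, the agent plans which data module ("tool") to query and grounds the answer in that module's result. The tool names and signatures are assumptions for illustration, not Robin's implementation.

```typescript
// Illustrative sketch of an agent loop. Tool names and shapes are
// hypothetical; the point is that answers are grounded in structured
// data modules rather than generated directly from a single prompt.
interface ToolResult {
  source: string;
  data: unknown;
}

interface Tool {
  name: string;
  description: string;
  run: (args: Record<string, string>) => Promise<ToolResult>;
}

// Each workplace data module is exposed as a tool the planner can choose.
const tools: Tool[] = [
  {
    name: "bookings",
    description: "Desk and room booking metrics",
    run: async () => ({ source: "bookings", data: null }),
  },
  {
    name: "occupancy",
    description: "Space utilization and occupancy",
    run: async () => ({ source: "occupancy", data: null }),
  },
  {
    name: "people",
    description: "Employee, visitor, and schedule data",
    run: async () => ({ source: "people", data: null }),
  },
];

// The LLM is asked only to plan: pick a tool and arguments for the query.
// New modules are added by registering new tools, which is what makes the
// approach easier to scale than one monolithic prompt over raw data.
async function answerQuery(
  question: string,
  plan: (q: string, t: Tool[]) => Promise<{ tool: string; args: Record<string, string> }>
): Promise<ToolResult> {
  const step = await plan(question, tools);
  const tool = tools.find((t) => t.name === step.tool);
  if (!tool) throw new Error(`No tool available for "${step.tool}"`);
  return tool.run(step.args);
}
```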

Chain of thought + Progressive Disclosure
To build trust, we needed to show how the AI reached its conclusions.
I researched transparency patterns in leading AI products. While chain-of-thought (used by ChatGPT, Claude) provides full visibility, it can overwhelm novice users with dense output. So I designed a hybrid approach combining chain-of-thought with progressive disclosure. By default, reasoning appears in a collapsible format—providing transparency without overload. Expert users can expand for full detail when needed.
​
This approach solved multiple problems simultaneously:
- building trust through transparency
- turning the increased latency from our agentic architecture into engagement
- demonstrating the system's sophisticated reasoning capabilities.
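A rough sketch of the progressive-disclosure pattern described above (assumed React component and prop names, not the shipped code): reasoning steps stay collapsed by default and expand on demand.

```tsx
// Illustrative sketch: reasoning is collapsed by default (progressive
// disclosure); expanding reveals the full chain of thought. Component
// and prop names are hypothetical.
import { useState } from "react";

interface AssistantResponseProps {
  steps: string[]; // e.g. ["Interpreting question…", "Querying bookings module…"]
  answer: string;
}

export function AssistantResponse({ steps, answer }: AssistantResponseProps) {
  const [expanded, setExpanded] = useState(false);

  return (
    <div>
      <button onClick={() => setExpanded(!expanded)}>
        {expanded ? "Hide reasoning" : `Show reasoning (${steps.length} steps)`}
      </button>
      {expanded && (
        <ol>
          {steps.map((step, i) => (
            <li key={i}>{step}</li>
          ))}
        </ol>
      )}
      <p>{answer}</p>
    </div>
  );
}
```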

Outcome
We set out to change how managers access workplace insights, and the MVP delivered:
70% Adoption Rate
70% of customers activated the AI Agent within the first quarter—exceptional for enterprise SaaS and a significant increase from 30% beta engagement.
40-50% Repeat Usage
40-50% became repeat users, with sessions averaging 4–5 follow-up questions—indicating genuine exploration and value beyond initial curiosity.
Learnings
Beta is a powerful tool
Releasing early helped balance business strategy with user research. It gave us real-world validation, surfaced usability issues, and shaped a more informed roadmap.
Designing for AI is different from designing standard digital products
In traditional tools, outputs are predictable—users click a button or apply a filter and always get the same result. With AI, outcomes are probabilistic, which means users bring different expectations: some expect near-human intelligence, while others are skeptical of accuracy. This requires designing for trust, transparency, and recoverability at every step.