Timeline

July 2021 - June 2022

Role

Senior Product Designer

Context

As a Senior Product Designer at Voiceflow, a leading platform for designing, prototyping, and testing conversational AI, I was tasked with improving our users' testing capabilities. Voiceflow enables developers and designers to create complex conversational flows for chatbots and voice assistants.

Our team had noticed a growing demand from our enterprise clients for more sophisticated testing tools.

The challenge

Prior to this project, Voiceflow users faced significant limitations when testing their AI assistants:

  1. Tests could only be initiated from a default starting point, failing to account for diverse real-world scenarios.

  2. Testing different scenarios was a time-consuming and manual process.

  3. Testing conditions lacked variability: every run began from the same state with the same variable values.

  4. Users struggled to identify edge cases or unexpected user behaviors.


These limitations hindered the ability of builders to create robust and versatile AI assistants, potentially leading to suboptimal user experiences in real-world applications.

Users & Audience

Our primary users were conversational AI developers and designers, particularly those working on enterprise-level projects with complex user scenarios.

My Role

As Senior Product Designer, I:

  • Conducted user research and data analysis

  • Led ideation workshops and concept development

  • Created and iterated on designs from wireframes to high-fidelity mockups

  • Collaborated closely with engineering and product management

  • Oversaw user testing and feature implementation

Uncovering the Real Problem

When I joined Voiceflow, our testing tools were basic, handling input/output testing, flow validation, and simple simulations. However, user feedback and support calls highlighted significant testing limitations.

Our goal was to increase user retention by 30% through enhanced product capabilities.

To understand the issues, I worked closely with our customer success team, listened to calls, and analyzed support tickets.

Users frequently mentioned "scenarios" and "personas," and described spending hours on manual workarounds to simulate different contexts. That pattern was the breakthrough: rather than patching the existing tester, we needed to reimagine how AI assistants are tested. The result was "Test Personas," a feature for creating and managing diverse user profiles for comprehensive AI testing.

Bringing the Team Along

To help the team understand the complexity of AI interactions, I organized an unconventional "Day in the Life of AI" workshop. Team members role-played as AI assistants and users, acting out various scenarios. This experiential approach not only generated excitement but also produced a wealth of ideas that shaped our solution.

The Solution

We created the Test Personas feature, allowing users to:

  1. Create and manage test personas with predefined variables (see the sketch after this list)

  2. Select specific personas to simulate diverse interactions

  3. Save and reuse test scenarios

  4. Seamlessly integrate the new testing capabilities into existing workflows
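
To make the idea concrete, here is a minimal sketch of what a persona and a saved scenario could look like as data. This is illustrative TypeScript, not Voiceflow's actual data model or API; the type names, fields, and the `runTest` helper are all hypothetical.

```typescript
// Illustrative sketch only: these types and the runTest helper are
// hypothetical, not Voiceflow's actual data model or API.

// A test persona bundles the predefined variables that seed a test run.
interface TestPersona {
  name: string;
  variables: Record<string, string | number | boolean>;
}

// A saved scenario pairs a persona with a starting point in the flow
// and the user turns to replay, so it can be rerun instead of rebuilt.
interface TestScenario {
  persona: TestPersona;
  startBlockId: string; // where in the conversation flow the test begins
  transcript: string[]; // scripted user turns to replay
}

// Hypothetical runner: seeds conversation state from the persona,
// starts at the chosen block, and replays the scripted turns.
function runTest(scenario: TestScenario): void {
  const state = { ...scenario.persona.variables };
  console.log(`Running "${scenario.persona.name}" from block ${scenario.startBlockId}`);
  for (const turn of scenario.transcript) {
    // A real runner would advance the flow and assert on responses here.
    console.log(`user: ${turn}`, state);
  }
}

// The same flow exercised under two different starting conditions.
const firstTimeVisitor: TestPersona = {
  name: "First-time visitor",
  variables: { accountAge: 0, hasOrderHistory: false },
};
const enterpriseVip: TestPersona = {
  name: "Enterprise VIP",
  variables: { accountAge: 730, hasOrderHistory: true },
};

runTest({ persona: firstTimeVisitor, startBlockId: "welcome", transcript: ["Where is my order?"] });
runTest({ persona: enterpriseVip, startBlockId: "order-lookup", transcript: ["Where is my order?"] });
```

The detail the sketch tries to capture is that the persona, the entry point, and the scripted turns are stored together; that bundling is what makes a scenario reusable across runs instead of rebuilt by hand each time.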

Exploring Alternatives

Before settling on the Test Personas feature, we explored several other solutions:

  1. Enhanced Scripting Tool: While appealing to technical users, it proved too complex for many clients.

  2. AI-Generated Test Cases: Lacked nuanced understanding of specific use-cases and raised concerns about predictability.

  3. Pre-built Scenario Library: Couldn't cover the vast diversity of use-cases across different industries.

  4. Real User Testing Integration: Deemed too costly and time-consuming for regular testing needs.

We chose Test Personas for its flexibility, efficiency, consistency, scalability, and user familiarity. The enthusiastic response to our early prototype confirmed we were on the right track.

Impact

After a month-long beta test with 50 enterprise users:

  • 60% increase in unique scenarios tested

  • 95% of beta users rated the feature as "very useful"

  • 30% increase in bug detection during testing

Learnings

  1. The best insights often come from unexpected places; always keep your ears open.

  2. Experiential exercises can bridge understanding gaps in complex topics like AI.

  3. Showing a working prototype, even if imperfect, can be more powerful than any presentation.

  4. Balancing user needs with technical constraints requires constant communication and creativity.

Let's Work Together?

If you've got an exciting challenge and you'd like to work together, get in touch.
