5 Things Teams Should Consider When Building a Virtual Assistant
Conversational AI is a multidisciplinary effort; it takes multiple skill sets to get it right. On top of that, many companies are still in the experimentation phase of incorporating assistants into their business, still learning how assistants can create the most value for customers.
For teams moving from experimentation to serious development, building a virtual assistant can feel like uncharted waters. However, a few clear patterns have emerged among teams who are successfully running assistants in production. This post will detail those strategies to help conversational AI teams build on the wisdom of the community.
1) Choose your use case with precision
Depending on your industry, picking a use case for your virtual assistant might seem easy at first glance. However, clearly defining the scope of your project before development begins is key to avoiding scope creep later. Start with a simple set of skills your users are likely to want in the first iteration, and expand only when you are ready.
If you already know what you need to build and want to accelerate the process, the Rasa starter packs can reduce early development time considerably. Starter packs are free to use and come with training data and the skills users typically look for early on. Rasa offers starter packs for the following use cases: financial services, insurance, retail, and help desk. See “clone a starter pack” in the Rasa docs for instructions on cloning them.
2) Build using real conversations
To avoid conversations that don’t feel natural, you should build your assistant with training data sourced from actual conversations. Because synthetic data doesn’t reflect the way real users speak, it can lead to more conversations that fall out of scope and more users that encounter the dreaded “I don’t understand” message. To improve user experience, you should focus on supporting dialogue that actually occurs, not trying to anticipate all possible phrases people might say to the assistant.
With Rasa, it is easy to gather real user data by sharing your assistant with a volunteer group, or even just close friends and family, to see how people communicate with your creation. Rasa can then help you feed those real conversations back into your training data so the assistant better supports users in the future. One hallmark of a great user experience is that users can speak naturally and be understood, rather than having to adapt to the assistant to accomplish their goal.
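As a rough illustration of what this looks like in practice, real utterances collected from testers can be added directly to the assistant's NLU training data, shown here in Rasa's YAML format (the intent names and example phrases below are hypothetical, not from a real starter pack):

```yaml
version: "3.1"

nlu:
# Real utterances often include slang, typos, and shorthand
# that synthetic data misses.
- intent: check_balance
  examples: |
    - what's my balance
    - how much money do i have rn
    - can u tell me whats in my checking
- intent: transfer_money
  examples: |
    - send 50 bucks to savings
    - move money between my accounts
```

Keeping the messy, informal phrasing intact is the point: the model learns the way your users actually talk.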
3) Create proper test coverage early
While sharing your assistant early is vitally important, it won't catch every issue that surfaces once you publish your assistant to a wider audience. As you build out stories for your assistant, you can create tests for those stories at the same time. Just as in test-driven development, you begin with tests that model the intended behavior of the virtual assistant, then build the assistant itself. Your initial tests serve as a validation point as you expand and refactor, ensuring proper coverage from the very earliest stages of development.
Creating a robust testing strategy for your virtual assistant will yield benefits many times over in the long run. As you develop your assistant you will want to ensure it retains its previous capabilities as you continue to add more skills. Adding test stories as you develop creates a system for regression testing that can both catch mistakes and confirm proper behaviors.
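To sketch what this looks like, Rasa test stories live in YAML files (conventionally under `tests/`) and are run with `rasa test`; the intent and action names below are hypothetical:

```yaml
# tests/test_stories.yml — run with `rasa test`
stories:
- story: happy path balance check
  steps:
  - user: |
      what's my balance
    intent: check_balance
  - action: utter_account_balance
```

Each test story asserts both the predicted intent for a real user message and the action the assistant should take in response, so a regression in either NLU or dialogue policy shows up as a failing test.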
4) Iterate on mistakes and reinforce correct behavior
We can all dream of a world where conversational assistants make no mistakes and all conversations occur as intended, but sadly that is not the case. When you initially share your assistant with volunteers and when you push your assistant live, you will want to continually review conversations to identify what is working and what might not be.
Rasa allows you to tag and filter user conversations to identify problem areas, potential new skills, successful conversations, and more. Establishing this workflow early helps you not only expand your assistant's capabilities based on what your user base needs, but also correct assistant behavior before small issues become larger ones.
5) Use analytics for a better understanding
User insights are a driving force behind many conversational AI projects, but it is crucial to distinguish between metrics that actually correlate with the success of your virtual assistant and “vanity metrics”. Also avoid letting any single metric drive decisions; individual metrics rarely paint the full picture on their own.
Instead, use analytics to get an overview of AI model health, user activity related to the assistant, and deployment performance—always with your overarching business goal in mind. Remember that virtual assistants improve over time, and by continually reviewing user conversations in tandem with metrics, you can create exceptional user experiences.
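As a loose sketch of the kind of aggregate view this enables, the snippet below computes a completion rate and a fallback rate from exported conversation logs. The log structure and field names here are hypothetical illustrations, not a real Rasa export schema or API:

```python
from collections import Counter

def summarize_conversations(conversations):
    """Aggregate health metrics from exported conversation logs.

    Each conversation is assumed to be a dict like
    {"events": [...], "completed": bool} — a hypothetical
    export format used only for this sketch.
    """
    stats = Counter()
    for convo in conversations:
        stats["total"] += 1
        if convo.get("completed"):
            stats["completed"] += 1
        # Count conversations where the assistant fell back to
        # an "I don't understand" response at least once.
        if any(event.get("action") == "action_default_fallback"
               for event in convo.get("events", [])):
            stats["with_fallback"] += 1
    total = stats["total"] or 1  # avoid division by zero
    return {
        "completion_rate": stats["completed"] / total,
        "fallback_rate": stats["with_fallback"] / total,
    }

# Example: one conversation completed cleanly, one hit fallback.
logs = [
    {"events": [{"action": "utter_greet"}], "completed": True},
    {"events": [{"action": "action_default_fallback"}], "completed": False},
]
print(summarize_conversations(logs))
# → {'completion_rate': 0.5, 'fallback_rate': 0.5}
```

Pairing a rate like this with a business goal (say, reducing handoffs to human agents) keeps the metric tied to actual success rather than vanity numbers.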
Building conversational AI applications requires the right workflows, a user-centered approach, and the proper tools. In the end, success can be defined in several ways. For some Rasa customers, success has meant deflecting routine customer service contacts, creating personalized customer experiences, and providing a multi-channel, automated technical support assistant.