How to migrate your existing Google DialogFlow assistant to Rasa

A number of Rasa users have migrated from hosted chatbot development platforms such as LUIS, Wit.ai and, most commonly, DialogFlow (formerly Api.ai). The trend is clear: more and more developers start building their assistants on hosted platforms like DialogFlow, but eventually they hit development limitations and decide to switch to an open source solution - Rasa Stack.

Here are the main reasons why developers choose Rasa Stack over the hosted platforms:

  • It’s open source.

    • You can customize everything. You can tune the models, integrate new components or even customize the entire framework to best suit your needs.
    • You own your data. You can choose where to host your applications, and you have full ownership over who can access your application’s data.
  • Machine learning-powered dialogue management.

    • The assistant uses machine learning to learn the patterns from real conversations. That means no predefined rules, and no state machines.
    • You don’t need massive amounts of training data to get started. You can build a scalable assistant bootstrapping from a very small sample of training data.

Migration from DialogFlow to Rasa is one of the most common requests from the Rasa community. This post is going to cover a step-by-step process of migrating an existing DialogFlow assistant to Rasa.

You can find the code of the example assistant used in this tutorial here.

Outline

  1. The DialogFlow assistant
  2. Step 1: Exporting the DialogFlow assistant
  3. Step 2: Training the Rasa NLU model using exported data
  4. Step 3: Migrating contexts and training Rasa Core model
  5. Step 4: Your DialogFlow assistant now runs on Rasa. What’s next?
  6. Useful resources

The DialogFlow Assistant

As we said, the goal of this post is to show a step-by-step process for migrating an existing DialogFlow assistant to Rasa.

In order to illustrate the process, we are going to use an example - a custom-built search assistant called ‘place_finder’, capable of looking for places like restaurants, shops, or banks within a user-specified radius and providing details about the returned place’s address, rating, and opening hours. The assistant gets all these details by using a webhook to make API calls to the Google Places API.

Below is an example conversation with the place_finder assistant:

[Screenshot: example conversation with the place_finder assistant]

If you want to replicate this tutorial, you can find the data files and the webhook code of ‘place_finder’ assistant inside the dialogflow-assistant directory of the repository of this tutorial. Alternatively, you can follow this guide using your own custom assistant and adjust the steps below to suit your case.

Whichever option you choose, at this point you should have your DialogFlow assistant - now it’s time to migrate it to Rasa!

Tip: If you want to play around with the place_finder assistant, you should get a Google Places API key and place it inside the credentials.yml file of the repository.

Step 1: Exporting the DialogFlow Assistant

We recommend starting with migrating the NLU part of the DialogFlow assistant.

To migrate it to Rasa, all you have to do is export the project files and use them to train the Rasa NLU model - no data changes or reformatting are needed. Here’s how.

  1. You can export the project data by navigating to the settings tab of your agent and choosing the ‘Export and Import’ tab.

  2. On the ‘Export and Import’ tab choose the option ‘Export as ZIP’. This will export the project data as a zip file and download it to your local machine.

  3. Once downloaded, you can unzip the package and inspect the files inside. The DialogFlow project directory consists of the following files and directories:

  • entities - a directory which contains json files of the entities your DialogFlow assistant was trained to extract
  • intents - a directory which contains json files of the intents your DialogFlow assistant was trained to understand
  • agent.json - a file which contains the configuration of the DialogFlow assistant
  • package.json - a file which contains the information about the software used to build the assistant

[Screenshot: contents of the exported DialogFlow project directory]

These files can be used directly to train the Rasa NLU model. You only need to define an NLU pipeline configuration file and pass it to the Rasa NLU train script, which is exactly what Step 2 below covers.
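
For reference, each file in the intents directory describes one intent and its training phrases. An intent file from a DialogFlow (v1) export looks roughly like this - abridged, with illustrative values such as the ‘@place’ entity type:

```json
{
  "name": "place_search",
  "auto": true,
  "contexts": [],
  "webhookUsed": true,
  "userSays": [
    {
      "data": [
        { "text": "find me a ", "userDefined": false },
        { "text": "restaurant", "alias": "query", "meta": "@place", "userDefined": true }
      ],
      "isTemplate": false,
      "count": 0
    }
  ]
}
```

Rasa NLU reads the annotated spans in ‘userSays’ (text, alias, meta) as entity examples, so no manual re-annotation is needed.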

Step 2: Training the Rasa NLU model using exported data

Rasa NLU allows full customization of the models. Before you train the NLU model you have to define a configuration of the pipeline. A processing pipeline defines how the training examples are parsed and how the features are extracted. The configuration has to be saved as a .yml file inside your project directory. Below is an example pipeline configuration which you can use:

File: rasa-assistant/config.yml

language: "en"

pipeline:
- name: "nlp_spacy"                   # loads the spacy language model
- name: "tokenizer_spacy"             # splits the sentence into tokens
- name: "ner_crf"                     # conditional random field entity extractor
- name: "intent_featurizer_spacy"     # creates sentence vector representations
- name: "intent_classifier_sklearn"   # defines the classifier
- name: "ner_duckling"                # uses duckling to parse the numbers
  dimensions: ['number']

Once you define the pipeline, you are good to train the NLU model using the steps below:

  1. To train the model, use the following command which calls the Rasa NLU train function, loads the pipeline configuration and training data files and saves the trained model inside a project called ‘current’:

    python -m rasa_nlu.train -c config.yml --data data/place_finder -o models --project current --verbose
    

    You can test the trained model by running it as a Rasa server which you can launch using the following command:

    python -m rasa_nlu.server --path ./models
    

    The command above will start a local server on port 5000.

  2. Now you can query the Rasa NLU model by sending requests like the following one. It will pass the message ‘Hello’ to the Rasa NLU model and return the response.

    curl 'localhost:5000/parse?q=Hello&project=current'
    

The response of the Rasa NLU model includes the results of intent classification and entity extraction. For example, the message ‘Hello’ was classified as the intent ‘Default Welcome Intent’ with a confidence of 0.83. Here is the full response returned by the NLU model:

{
  "intent": {
    "name": "Default Welcome Intent",
    "confidence": 0.8342492802420313
  },
  "entities": [],
  "intent_ranking": [
    {
      "name": "Default Welcome Intent",
      "confidence": 0.8342492802420313
    },
    {
      "name": "thanks",
      "confidence": 0.09805256693160122
    },
    {
      "name": "goodbye",
      "confidence": 0.05392708191432759
    },
    {
      "name": "address",
      "confidence": 0.003986386948676723
    },
    {
      "name": "place_search",
      "confidence": 0.0037102872949153686
    },
    {
      "name": "rating",
      "confidence": 0.003059348479049656
    },
    {
      "name": "opening_hours",
      "confidence": 0.0030150481893980153
    }
  ],
  "text": "Hello",
  "project": "current",
  "model": "model_20180827-110057"
}

And that’s it! You have just migrated the NLU part of the DialogFlow assistant to Rasa!

Tip: The data format exported by DialogFlow is different from the one that Rasa NLU uses. If you want to improve your Rasa NLU model after the migration, you can grab a data file produced inside the model directory after the initial training is finished (in this example the path to this file is rasa-assistant/models/current/model_xxxxxxx/trainingdata.json), add new examples to it and retrain the NLU model pointing the train function to this file.
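
For instance, a small script along these lines can append new examples to that training data file before retraining (the file paths, example sentence, and entity annotation below are illustrative):

```python
# Sketch: extend Rasa NLU JSON training data with additional labeled examples.
import json


def add_example(nlu_data, text, intent, entities=None):
    """Append one labeled example to Rasa NLU JSON training data."""
    nlu_data["rasa_nlu_data"]["common_examples"].append({
        "text": text,
        "intent": intent,
        "entities": entities or [],
    })
    return nlu_data


# In practice, load the file produced by the first training run, e.g.:
# with open("models/current/model_xxxxxxx/trainingdata.json") as f:
#     data = json.load(f)
data = {"rasa_nlu_data": {"common_examples": []}}  # minimal stand-in

# Entity spans are character offsets into the text.
add_example(data, "is there a pharmacy nearby?", "place_search",
            entities=[{"start": 11, "end": 19,
                       "value": "pharmacy", "entity": "query"}])

# Then save it and point the train script at the extended file:
# with open("data/nlu_data.json", "w") as f:
#     json.dump(data, f, indent=2)
```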

Step 3: Migrating contexts and training the Rasa Core model

Note: If your custom DialogFlow assistant doesn’t use any contexts or webhooks, you can skip this part of the tutorial and go straight to Step 4.

If it does, then follow the steps below.

  1. To migrate the remaining part of the assistant - context handling and custom actions - you need some training data.

    DialogFlow performs dialogue management through the concept of ‘contexts’, while Rasa uses machine learning to predict which action the assistant should take at a specific state of the conversation, based on previous actions and extracted details.

    It means that in order to train the Rasa Core model, you need some training data in the form of stories. Since DialogFlow’s dialogue management is a rule-based approach, you cannot export any training data which you could use directly to train the Rasa Core dialogue model.

    The good news is that you have access to conversations history on DialogFlow and you can use it as a basis for generating training data for Rasa Core model.

    Here is an example conversation on DialogFlow:

    [Screenshot: example conversation history on DialogFlow]

    In order to convert this conversation into a Rasa Core training story, you have to convert user inputs to corresponding intents and entities, while the responses of the agent have to be expressed as actions. Here is how the above conversation looks as a Rasa story:


    File: rasa-assistant/data/stories.md

    ## story_01
    * Default Welcome Intent
      - utter_greet
    * place_search{"query": "restaurant", "radius": "600"}
      - action_place_search
      - slot{"place_match": "one"}
      - slot{"address": "Ars Vini in Sredzkistraße 27, Hagenauer Straße 1, Berlin"}
      - slot{"rating": "4.4"}
      - slot{"opening_hours": "true"}
    * opening_hours
      - utter_opening_hours
    * rating
      - utter_rating

    To train the model you will need some additional stories representing different conversation turns.
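
    For example, here is one more hypothetical story covering a conversation in which the user provides the radius in a follow-up message and no place is found - the exact turns and the "none" slot value are illustrative, but the intent and action names all come from this tutorial:

    ```md
    ## story_02
    * Default Welcome Intent
      - utter_greet
    * place_search{"query": "bank"}
      - utter_what_radius
    * place_search{"radius": "500"}
      - action_place_search
      - slot{"place_match": "none"}
      - utter_no_results
    * goodbye
      - utter_goodbye
    ```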

  2. Create the domain of your assistant which defines all intents, entities, slots, templates, and actions the assistant should be familiar with. For example, the templates of the place_finder assistant look like this:

    File: domain.yml

    templates:
      utter_greet:
        - "Hello! How can I help?"
      utter_goodbye:
        - "Talk to you later!"
      utter_thanks:
        - "You are very welcome."
      utter_what_radius:
        - "Within what radius?"
      utter_rating:
        - "The rating is {rating}."
      utter_address:
        - "The address is {address}."
      utter_opening_hours:
        - "The place is currently {opening_hours}."
      utter_no_results:
        - "Sorry, I couldn't find anything."
    

    Tip: All intents and entities defined in the domain file must match the names defined in training examples.
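
    Besides the templates, the domain file also lists the intents, entities, slots, and actions. A sketch of the remaining sections, based on the names used throughout this tutorial (the slot types are an assumption):

    ```yaml
    intents:
      - Default Welcome Intent
      - place_search
      - opening_hours
      - rating
      - address
      - thanks
      - goodbye

    entities:
      - query
      - radius

    slots:
      place_match:
        type: text
      address:
        type: text
      rating:
        type: text
      opening_hours:
        type: text

    actions:
      - utter_greet
      - utter_goodbye
      - utter_thanks
      - utter_what_radius
      - utter_rating
      - utter_address
      - utter_opening_hours
      - utter_no_results
      - action_place_search
    ```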

  3. Define custom actions. While simple text responses can live inside the domain file (just like in the snippet above), more complicated actions, like making an API call or connecting to a database to retrieve data, should be wrapped in a custom action class. In DialogFlow, place_finder used a webhook to retrieve data from the Google Places API, so to migrate it to Rasa you should turn that webhook into a custom action class like the following:

    File: rasa-assistant/actions.py
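
A minimal sketch of what such a class could look like with the Rasa Core 0.x SDK. The class and slot names follow the story above, but the Google Places request details, the places_url helper, and the stubbed response handling are assumptions - adapt them to the webhook code you exported from DialogFlow:

```python
# Sketch of a custom action for Rasa Core 0.x (rasa_core_sdk).
try:
    from rasa_core_sdk import Action
    from rasa_core_sdk.events import SlotSet
except ImportError:
    # Tiny stand-ins so the sketch can be read without Rasa installed;
    # a SlotSet event is a plain dict of this shape.
    class Action(object):
        pass

    def SlotSet(key, value):
        return {"event": "slot", "name": key, "value": value}


class ActionPlaceSearch(Action):
    def name(self):
        # Must match the action name used in stories.md.
        return "action_place_search"

    @staticmethod
    def places_url(query, radius, api_key):
        # Hypothetical helper: builds the Google Places request URL.
        base = "https://maps.googleapis.com/maps/api/place/textsearch/json"
        return "{}?query={}&radius={}&key={}".format(base, query, radius, api_key)

    def run(self, dispatcher, tracker, domain):
        query = tracker.get_slot("query")
        radius = tracker.get_slot("radius")
        # In the real action you would call the API here, e.g.
        #   result = requests.get(self.places_url(query, radius, API_KEY)).json()
        # and pick the best match from the response. Stubbed for the sketch:
        place = {"name": "Ars Vini",
                 "formatted_address": "Hagenauer Straße 1, Berlin",
                 "rating": "4.4", "open_now": "true"}
        dispatcher.utter_message("I found {} for you.".format(place["name"]))
        return [
            SlotSet("place_match", "one"),
            SlotSet("address", place["formatted_address"]),
            SlotSet("rating", place["rating"]),
            SlotSet("opening_hours", place["open_now"]),
        ]
```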

    This class assigns a name to the custom action, makes the API call, retrieves the requested data, generates a response to send back to the user, and sets the details that should be kept throughout the conversation as slots.

  4. That’s all you need to train the Rasa Core model which will predict how the assistant should respond to user inputs. Now it’s time to train it!

    You can train it using the following command, which calls the Rasa Core train function, loads the domain and story training data files, and stores the trained model inside the ‘current’ project:

    python -m rasa_core.train -d domain.yml -s data/stories.md -o models/current/dialogue --epochs 200
    
  5. And this is it: you have successfully migrated an assistant from DialogFlow to Rasa! You can now test it locally. By running the command below you will start the custom actions webhook:

    python -m rasa_core_sdk.endpoint --actions actions
    

    Now, you can load the agent using the command below which will load both Rasa NLU and Rasa Core models and launch the assistant in the console for you to chat:

    python -m rasa_core.run -d models/current/dialogue -u models/current/nlu_model --endpoints endpoints.yml
    

Step 4: Your DialogFlow assistant is now running on Rasa. What’s next?

The sky's the limit for what you can do with Rasa-powered assistants. You can customize the models, integrate additional components, or connect your assistant to the outside world using connectors for the most popular messaging platforms. You can even hook it up to other cool frameworks and tools to make it even more fun!

If you have migrated your assistant to Rasa, we would love to learn about your experience! Join the Rasa Community Forum and let us know!

Useful resources: