Unified Routing – Effort-Based Routing

Within Dynamics 365 we can now use effort-based routing to assign conversations or records to agents. This means that from Omnichannel for Customer Service we could route case records, web chat conversations and more to agents based on effort estimate models. I will describe this new feature in this post.

Before I start, I guess I should suggest why this might be required! Often complex queries can consume an unexpectedly large amount of agent time. Therefore, wouldn’t it be useful if you could spot a potentially complex query and route it to specific agents? That way your front-line agents can be kept free to handle regular queries as promptly as possible.

In this post I will explain how to create an effort-based routing model and use it in your workstream classification / routing.

Below you can see that I have opened my “Customer Service Admin Center”. Under the Routing heading, you will find the effort-based routing option.

We’d use this option to route emails, cases or other records to an agent based on the amount of time the agent is likely to need to respond, or maybe the time needed to resolve the issue.

Setting up effort-based routing involves two stages. Firstly, you will need to define an effort estimate model and train it. Then the published model can be used in routing logic; for example, you could route all cases likely to involve high effort to a particular queue.

Creating an Effort Model

When creating an effort-based model, you will first give your model a name.

Next you will select which record type will be used as the source data for this model. In my example I selected cases, but I could have selected any of the record types I have configured for record routing, such as emails.

I can now select the attributes to consider. I opted for my case title and description. It is possible to add additional attributes as required. (Up to a maximum of 10 attributes.)

My next task is to decide how to measure the actual effort.

My options include …

  • Single duration field – in this approach I’d have one attribute that contains the duration taken to resolve or respond.
  • Calculated duration – here you’d select two date / time fields, and the difference between the two would signify the “effort”.
  • SLA KPI instance – usefully, we can also use SLA data to derive effort estimates. This is handy if all your cases already have a “first response by” or “resolve by” SLA.
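To illustrate the “calculated duration” idea, here is a minimal sketch in Python (my own illustration, not Dynamics code; the field names `created_on` and `resolved_on` are hypothetical) showing how an effort value in minutes can be derived from two date / time fields:

```python
from datetime import datetime

def effort_minutes(start: datetime, end: datetime) -> int:
    """Return the "effort" as whole minutes between two date/time fields."""
    return int((end - start).total_seconds() // 60)

# Example: a case created at 09:00 and resolved at 13:30 the same day.
created_on = datetime(2023, 5, 1, 9, 0)
resolved_on = datetime(2023, 5, 1, 13, 30)
print(effort_minutes(created_on, resolved_on))  # 270 minutes
```

Dynamics performs this calculation for you when you pick the two fields; the sketch simply shows what the resulting training value represents.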

Next you can optionally filter the records and define a timeframe to fetch data. You might use this (for example) to only include cases of certain types, or maybe you only want to consider cases from a particular origin.

Limiting the timespan of data to consider might be useful if you have high volumes of data!

Once saved you can train your model. Notice that this can take several hours!

Any AI model is only as good as the data available for training! Usefully the potential performance of a model is graded. Notice how my first attempt received a “D” rating. (Aka not very good!)

Model grades explained ….

  • A – your model is performing well. (But it might still be possible to improve the model.)
  • B – the model is often correct but could be improved.
  • C – the model is “slightly better” than a random guess. In most cases the model will need to be improved.
  • D – something is wrong! Maybe you simply don’t have enough data to create a useful model.

I guess getting a “D” rating might be expected in a test system.

Tip / Note:
I had some difficulty re-training my model. I found that deleting it and re-creating it with different parameters was necessary to try to create a model with a better grade.

Once you have created and trained your model, it can be used in your work classification and routing logic. (Meaning you do need to wait for the training process to complete before the model can be added into your routing logic!)

My grade was initially “D”, as my test data was very poor! There is a minimum requirement of 50 rows to train a model, but Microsoft do suggest providing 1,000 (or more) records with a “typical” distribution, meaning creating a useful model from just test data may be a challenge. However, in most live scenarios this should be straightforward.

Once your model is trained you will need to publish it.

Once published we have a dry run option. My model retained a performance grade of “D”. But after some data improvement I did get the dry run to return some results!

When using the dry run option, you select a record and click test. The model will then be used to show the type of scores you can expect.

Below you can see that in my example the model predicted 1067 minutes to resolve this case with a 68% confidence score. My confidence score is probably lower than I would like. But with a model of grade “D” I felt quite happy with this result!

I admit that my first model wasn’t successful. I assume in part because of its “D” rating. I continued to create cases and create new models.

With some effort I did improve my model. Slightly! It was now rated as being “slightly better” than a random guess.


I can now use my model in routing, meaning I can classify cases using my model and then use that information to route to particular queues.

So, I opened the workstream I’d already created to route cases and amended the work classification settings.

Below you can see that I have added a work classification to return the potential effort estimate. I will then use that in my routing.

Below you can see that whilst creating my effort classification, I gave it a name and then set the type to “Effort estimation”. I can then pick the effort model I want to use.

I have opted to use my case title and description as inputs for my classification.

Notice that what will be returned is an effort value in minutes and a confidence score. (I’m expecting my confidence scores to be very low, as I am working on a test system with limited data.)

Next, I created a new routing rule. This rule would look for complex / time-consuming cases and route those work items to a specific queue.

Ideally, I would have used both my effort confidence score and the effort estimate in this routing logic, as it might make sense to ignore anything with a low confidence score. But knowing my effort model was of “limited quality”, I decided to just use the effort value.

You can see that in my condition I selected a “Many to One” relationship called “Effort prediction result (Effort estimate)”. I could then pick the “Effort value (minutes)” field from my effort model.

My test routing logic ended up looking like this. Essentially, I have two rules that will be evaluated in order. The first rule will check the effort estimate and route to my “high effort queue” when a large “job” is spotted. After that my normal routing logic will be applied, which in my simple example involves routing anything else to the default queue I use for entity records.
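Conceptually, the ordered rule evaluation works something like the sketch below (my own illustration in Python, not anything Dynamics exposes; the queue names and the 240-minute threshold are assumptions I’ve made up for the example):

```python
HIGH_EFFORT_THRESHOLD = 240  # minutes; illustrative threshold, not a product default

def route_case(effort_minutes: int, confidence: float) -> str:
    """Evaluate routing rules in order and return the target queue."""
    # Rule 1: large "jobs" go to the high effort queue.
    # (Ideally we'd also require a minimum confidence score here.)
    if effort_minutes > HIGH_EFFORT_THRESHOLD:
        return "High Effort Queue"
    # Rule 2: everything else falls through to the default queue.
    return "Default Queue"

print(route_case(284, 0.18))  # "High Effort Queue"
print(route_case(30, 0.90))   # "Default Queue"
```

The key point the sketch captures is the ordering: the high-effort rule is checked first, and only work items that fail it reach the default routing.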

My Testing!

Importantly, I waited 15 minutes for my changes to apply.

I then tested that cases were being routed as expected. Which all looked great! But how could I prove my effort model had returned a meaningful result? For that I turned to the diagnostics option.

In your “Customer Service admin center”, under the routing option you will find a diagnostics option. I turned my diagnostics on and created a case. (To see how it would be routed.)

Below you can see that my diagnostics showed the case had been routed to my “High Effort Queue”. That immediately suggested everything was working as expected.

But I wanted to confirm this and see what effort and confidence score had been returned.

You can see that by opening the diagnostics routing option I can see more details. In the “Classification” option I could see that an effort value of 284 minutes had been returned. That was a very high estimate, but this was a complex case! (I’d entered a lengthy description on my case and highlighted several issues.) But the confidence level was just 18%. (Actually, that was higher than I’d expected!)

By looking at the “Route to queue” option, I could also confirm that my routing logic had worked as expected. As my first rule was triggered and it routed the case to my “High Effort Queue”.


What do I think of this new feature!

I admit that creating my effort model took a few attempts. Not least because of my poor-quality training data. So maybe that experience would be much easier in a real-world scenario.

And I also admit that I had a few teething problems with my work classification logic. But once I’d set that correctly, the effort routing worked well.

I already have a customer who has been asking about how they can deflect complex conversations away from their front-line agents. This feature looks like it will exactly meet their requirements.

All in all, I am impressed with effort-based routing and I’m looking forward to implementing a real scenario. Nice job Microsoft.
