Next Best Action Framework
A process for 1-to-1 marketing optimization
Next Best Action, or NBA, selection is a common process within decisioning. If you’re not sure what decisioning is, check out my last post, “Decisioning & Next Best Action” (link at the bottom of the page); it won’t take long and it should set the stage well for this conversation. Next Best Action is a pretty simple concept. In any given interaction with a customer you must evaluate the ways in which you can respond; these are your possible actions. If you want to optimize that interaction, you have to evaluate the possible actions you can take and determine which one best fits your optimization goal. Decisioning platforms give you an ideal environment to make these selections, but it’s the design of your NBA process that determines how well you are truly optimizing each individual interaction for each specific customer. If you read my last post on decisioning, you’ll recognize the visual below. In this post we are going to dive into each step of that common process flow for NBA selection.
Filter
This first step is logically the simplest, but it’s also where it’s easiest to step on your own toes. Filtering possible actions comes down to filtering out actions that aren’t logical to take. It’s not filtering down to a set of actions that you think would be best based on your prior experience or your existing targeting strategies for that particular action. The idea is to use this part of the process to ensure you don’t put a message/offer/recommendation in front of a customer that isn’t applicable to them at all or can’t be taken advantage of by them. Secondarily, getting rid of those inapplicable actions will also lighten the processing load in subsequent steps.
Here are some examples of good actions to filter out and their justifications:
- An offer for a free month of HBO to a customer who already has HBO.
- An exclusive rewards credit card application prompt to a customer who doesn’t have the credit score to qualify.
- A payment incentive or reminder to a customer who already made their payment.
Here are some examples of bad reasons to filter those same offers out:
- No free month of HBO offer because the customer has been with us for 10 years and never purchased a premium channel.
- No rewards credit card application prompt because the customer only uses their current card X times per month and probably isn’t interested.
- No payment incentive to a customer whose payment is due because your existing predictive model doesn’t rate them as a high churn risk.
Again, the idea in this step is not to efficiently target customers. I’m not saying that your broader rules don’t work to isolate customers more likely to respond to an action. I’m sure they do, and I’m sure they’ve made your efforts more efficient in the past. By allowing generally less optimal offers to be scored, you let those prior targeting factors, along with numerous others, be considered in determining the value of taking that action. You can then weigh that much more personalized evaluation against the other possible actions for that specific customer, instead of in a vacuum. Remember, the goal isn’t what works best in most situations. We are looking for 1-to-1 optimization. Just because an action is generally not ideal for this type of customer doesn’t mean it isn’t the best option for the circumstances of an individual customer interaction.
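To make the distinction concrete, here’s a minimal sketch of an eligibility filter. The action names, customer fields, and thresholds are hypothetical; the point is that the checks are hard applicability rules, not targeting heuristics.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Hypothetical customer attributes used by the eligibility checks below
    has_hbo: bool
    credit_score: int
    payment_made: bool

# Each action carries only hard eligibility rules: "can this customer use this at all?"
# Note there are no targeting heuristics here (tenure, usage, churn risk); those
# belong in scoring, where they compete against every other factor.
ACTIONS = {
    "free_month_hbo":     lambda c: not c.has_hbo,
    "rewards_card_offer": lambda c: c.credit_score >= 700,  # assumed qualifying threshold
    "payment_incentive":  lambda c: not c.payment_made,
}

def filter_actions(customer: Customer) -> list[str]:
    """Return only the actions that are actually applicable to this customer."""
    return [name for name, eligible in ACTIONS.items() if eligible(customer)]

# Example: a long-tenured customer who has never bought a premium channel
# still gets the HBO offer scored; we only drop what they truly can't use.
print(filter_actions(Customer(has_hbo=False, credit_score=650, payment_made=True)))
# ['free_month_hbo']
```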
Score
The point of scoring is to be able to judge each action in equal terms, even when those actions may be very different, so you can prioritize. This can seem like a pretty esoteric task, and it can lead to some pretty unique solutions. Not in a good way. In the way that only the “experts” who built it understand your scoring methodology. There’s a good chance they understand the math, the statistical models, and the steps taken to come up with that score, but still don’t have a firm grasp on what the score represents. When that’s the case, it’s probably because it doesn’t represent anything particularly intelligible.
It’s a common misstep to veer away from a tangible goal for what your scoring process approximates. It usually starts out as a pretty graspable formula. Then factors are piled on one by one to provide levers to pull. These levers are put in place so you can influence the results, often by just multiplying the previous score formula by yet another factor. There are two major problems here. First, the tendency is to start stacking multiplicative relationships, and when you do that it’s hard to manipulate those factors appropriately to get the intended results. Second, and much more importantly, the intent of scoring for next best action is not to semi-arbitrarily manipulate the results. Here’s an example of what one of those scoring formulas gone wrong might look like:
Prioritization Score = ((Customer Response Propensity) * (Customer Lifetime Value) * (Offer Priority Factor) + (Retention Offer Bonus Factor) * (Customer Churn Risk)) * (Executive Priority Factor)
This looks pretty sophisticated at first. It’s leveraging the output from a couple of different predictive models. It gives you a few levers to pull if you want a specific offer to be presented more or less often. If a customer is likely to churn you can give special churn offers a little boost, and if you have executive direction to take a certain action more or less often you have a great override switch right there at the end. It’s tempting to decide that your prioritization score doesn’t need to result in a value that is graspable. The purpose is to rank your possible actions against each other, and they are all scored via the same mechanism, so the comparison is fair. That’s true, but it misses the big picture entirely. Why are we ranking all these potential next actions in the first place? It’s to optimize your efforts by selecting the best action(s) to take. The entire optimization effort hinges on exactly what “best” means, and on matching that optimization goal to your prioritization scoring.
“Best” for most businesses means generating the most value, and most businesses measure value in dollar terms. A process has long existed in the financial and gambling worlds for assessing multiple options and optimizing the selection based on the value generated: Expected Value, or EV. Expected Value is quite simple: EV = Probability * Value. What is the probability of X occurring, and what is the value if it occurs? Here’s a simple example of comparing the EV for two product offers:
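(The offers, propensities, and dollar values below are hypothetical, purely to illustrate the comparison; a minimal sketch in Python:)

```python
# Hypothetical example: two product offers for the same customer.
# propensity = modeled probability the customer accepts the offer
# value      = dollar value to the business if they do accept

offers = {
    "premium_channel_upgrade": {"propensity": 0.20, "value": 120.00},
    "device_protection_plan":  {"propensity": 0.05, "value": 300.00},
}

for name, offer in offers.items():
    ev = offer["propensity"] * offer["value"]  # EV = Probability * Value
    print(f"{name}: EV = ${ev:.2f}")

# premium_channel_upgrade: EV = $24.00
# device_protection_plan: EV = $15.00
# The upgrade wins even though the protection plan is worth more per acceptance,
# because this customer is much more likely to take it.
```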
To optimize for value using the idea of EV, you don’t necessarily have to feed those inputs into a formula separately. You could also approach it with a single predictive model meant to predict something that approximates an expected value. In my opinion there is value in having both inputs identified and evaluated separately. The historic evidence for a model predicting the probability of a customer making the desired response to a specific action is pretty set in stone: did you take the marketing action, and did the customer respond in the desired manner? Value can be much more dynamic, even when you use direct revenue or profit from a response as that value. Those numbers can easily change for the same offer over time (new price point, different cost factors, etc.). A model trained on the realized value of historic occurrences of that same action can’t account for that variability and predict current expected value very well.
An EV-based system for scoring your actions doesn’t have to be as simple as the example outlined above. You can factor in a variety of goal actions and their values, then add all of those expected values together (sketched below). You can also have a much more complex process for determining the value of an action being responded to. All the broader business concerns and priorities can also be factored into an EV-based formula, but to do so you must determine their value in dollar terms. This is definitely more difficult than coming up with some factor between 0 and 1, but it also forces you to reconcile how much revenue or profit you are willing to give up in pursuit of that goal. You may find other frameworks better suited to your particular optimization goal than EV, but I think it’s appropriate for most businesses. We’ll dive deeper into prioritization scoring strategy and developing inputs in future posts.
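For instance, extending the earlier sketch, a single action might have several goal outcomes, each with its own modeled probability and dollar value. Everything here is hypothetical; the outcomes, probabilities, and values are only meant to show the shape of the calculation.

```python
# Hypothetical multi-outcome EV: sum probability * value across every outcome
# we care about. Business priorities enter the same way, as dollar values,
# rather than as arbitrary 0-to-1 multipliers.

def expected_value(outcomes):
    """Sum probability * value over (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

retention_offer_ev = expected_value([
    (0.25, 180.00),  # accepts and stays the year: retained margin (assumed value)
    (0.25, -15.00),  # accepts but would have stayed anyway: cost of the discount
    (0.50,   0.00),  # ignores the offer: no change in value
])
print(f"retention offer EV = ${retention_offer_ev:.2f}")  # $41.25
```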
Rank & Select
The ranking portion of this process is as simple as it sounds. In most scenarios the selection is also simple. Often it will come down to selecting the top X actions according to the scoring process. If you’re looking for the single next best action, the highest prioritization score, rank 1, is what gets selected. If the interaction requires more than one action to be taken, e.g. putting the 3 best product offers on a pop-up carousel when someone logs into their account, take the top 3. There are some cases where you need to make alterations to a top X selection process. Just like overdoing it on that first filtering step, it’s easy to negatively impact 1-to-1 optimization by getting complex here.
Mostly you should only be including rules here to get rid of actually conflicting actions, for example showing a customer an offer for both $10 off and $20 off the same product. In cases where you need to make rule-based decisions between eligible actions for the customer interaction, try to make them here instead of before scoring. This allows the score to be a factor in those decisions, and it ensures unbiased eligibility, score, and rank data for future analysis of the process. That data is invaluable when it comes to diagnosing why certain actions are being selected more or less often than other actions, or than you expected.
A common type of selection rule is to allow one offer per product category when selecting multiple offers to display to the customer. I wouldn’t necessarily default to having rules like this, but when the evidence supports them, use them. When optimizing a multiple-offer display it’s not really plausible to account for the surrounding offer context when modeling the probability of any one of those offers, and rules can help cover that gap. For example, a bank may have data showing that when customers are presented with multiple credit card offers at one time, they are less likely to apply for any of them. In this case it makes sense to limit selection to only one credit card offer when presenting multiple offers to the customer.
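Here’s a minimal sketch of that rank-and-select step: take the top 3 scored offers while enforcing an assumed one-per-category rule. The offer names, categories, and scores are hypothetical.

```python
# Hypothetical (offer, category, prioritization_score) tuples coming out of scoring.
scored_offers = [
    ("platinum_card",   "credit_card", 41.25),
    ("cashback_card",   "credit_card", 38.10),
    ("auto_loan_promo", "lending",     29.40),
    ("savings_bonus",   "deposits",    22.75),
    ("mortgage_refi",   "lending",     18.60),
]

def select_top(offers, n=3, max_per_category=1):
    """Rank by score, then take the top n while enforcing a per-category cap."""
    selected, category_counts = [], {}
    for name, category, score in sorted(offers, key=lambda o: o[2], reverse=True):
        if category_counts.get(category, 0) >= max_per_category:
            continue  # rule-based exclusion happens after scoring, so it can be logged
        selected.append((name, score))
        category_counts[category] = category_counts.get(category, 0) + 1
        if len(selected) == n:
            break
    return selected

print(select_top(scored_offers))
# [('platinum_card', 41.25), ('auto_loan_promo', 29.4), ('savings_bonus', 22.75)]
```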
The other common use case for adding selection rules is managing more operational outbound messages through a decisioning process alongside optional marketing actions. There will be outbound communications that, for practical, ethical, or contractual reasons, need to be selected over marketing actions that scored higher whenever they are available. It may be difficult to foresee scenarios where your decisioning process is choosing between such drastically different actions at the moment. We’ll discuss reconciling outbound activity with an NBA strategy in my next post.
I have no doubt we could continue to explore examples and tips for applying this [filter > score > rank & select] NBA framework for hours. We’ll do so in the future. If you made it this far, your gears are probably turning on how to apply this optimization framework to your use case, so I’ll wrap it up. Hopefully this serves as a good starting point for that brainstorming. I’ll leave you with two reminders before you go and get your hands dirty.
- Don’t step on the toes of your 1-to-1 optimization by applying unnecessary filters up front or in the selection rules at the end.
- The quality and direction of your optimization comes down to how you handle the scoring process. If you don’t know exactly what your scoring process measures, then you don’t know what you’re optimizing towards.
Have questions or want to share your thoughts on NBA selection? Hit that comment section below. If you’re interested in reading more about decisioning and next best action, or marketing analytics and optimization in general, follow me here and connect with me on LinkedIn.
In case you missed my last post on Decisioning and NBA: