Step 8: ROI, Plan, and Forecast
This is a deep dive into the eighth step of the Marketing Optimized Framework posted previously.
Why
What "works" changes and everything marketing does should be a test to see if what we knew to work holds true or something new works better
Find out which activities, media channels, and assets worked best at progressing your prospects, and do more of those
The more consistent our conversion rates at key points in the deal life cycle, the more accurate our predictions for Planning and Forecasting
What
Unlike many of the previous steps, this one is mostly analysis and planning activity, often associated with Quarterly Business Reviews (QBRs). We start with an analysis of what was effective and cost-efficient (these, along with the average contract value, are the balancing act marketing performs). With conversion rates at each stage of the marketing and sales funnels, we can plan the amount of activity needed to meet sales targets for the next cycle. As the teams execute their plans, we watch Leading Indicators (forward-looking) to predict marketing's contribution and to see whether any adjustments are needed.
How
With Step 7, the active marketing processes are complete; now we look back in order to plan forward.
Effective and Efficient
These are Lagging Indicators and should always tie directly to the measures that tell us whether we moved the business in the right direction and by how much (i.e., number of deals and deal value). This may make some marketers uncomfortable, since we are a few steps removed from being able to directly influence the close of a deal, but it is ultimately what the business needs to grow and thrive, and it is what we should be reviewed against over the long cycle. Later we will talk about Leading Indicators, which are things we as marketers can directly affect but are not the ultimate outcomes that move the business.
The most accurate way to analyze which activities, media channels, and assets worked best is through the lens of Data Science. As I have advocated before, this removes biases and assumptions and looks at what the data tells us. The weighting of the assets should not be done by an arbitrary rule (First Touch, Last Touch, equal weight across all touches, U-shape, W-shape, etc.) but by weights derived from observed data. These arbitrary rules impose the pattern of an idealized journey regardless of the buyer's actual touchpoints... instead of forcing a pattern, why not let the data tell us what the pattern looks like (drawn from all the touchpoints we have tracked)? While there are useful insights in some of these rules, they do not function well as Attribution (analysis to differentiate the value of marketing touchpoints). The best approach is a model that statistically weights the touches of the buying group within a reasonable time frame of the Deal creation date.
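Below is a minimal sketch of that idea in Python, assuming a touches.csv export with one row per touchpoint and deal_id, channel, and deal_won columns; the file, the column names, and the choice of a simple logistic regression are illustrative assumptions, not a prescribed model.

```python
# Let observed data set the touchpoint weights instead of an arbitrary rule.
# Assumes touches.csv: one row per touchpoint with deal_id, channel, deal_won (0/1).
import pandas as pd
from sklearn.linear_model import LogisticRegression

touches = pd.read_csv("touches.csv")

# One row per deal: how many touches each channel contributed.
X = pd.crosstab(touches["deal_id"], touches["channel"])
y = touches.groupby("deal_id")["deal_won"].max().reindex(X.index)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Larger positive coefficients = channels whose presence correlates with winning;
# use these as observed weights rather than imposing First/Last Touch or U/W shapes.
weights = pd.Series(model.coef_[0], index=X.columns).sort_values(ascending=False)
print(weights)
```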
First Touch is a useful insight into which assets (and marketing channels) bring new contacts into the marketing database, but there is a better approach if your platform tracks unknown touches and links them once the visitor becomes known... that way you can see all the visits where someone came to the website without filling out a form, and how the culmination of those visits finally resulted in a form fill and a known contact. Last touch (before deal creation) can be useful to see whether face-to-face events help make the connection with the sales team and get the deal rolling. Similarly, with last touch (before deal close) you might look for patterns of events that help "seal the deal," where the buying team meets our sales support team and the deal closes. Again, while these are interesting insights at key moments, they are not Attribution. (Share in the comments below the important moments you track.)
You might observe that, on average, your prospects spend about 3 months researching before a deal is created... to re-train your attribution model, start with the Deal create date, go back 3 months, and pull all activity data for all members of the buying team across all Deals. You should include all activities; otherwise you are intentionally warping your understanding. Attribution will not be perfect until we can capture every interaction everywhere (obviously beyond our ability), but I would rather have an almost complete picture than one with many gaps.
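A minimal sketch of building that lookback window, assuming deals.csv and activities.csv exports with the column names shown; the names and the 3-month window are illustrative assumptions.

```python
# Pull all activity within the observed research period before each Deal's creation.
import pandas as pd

deals = pd.read_csv("deals.csv", parse_dates=["created_date"])           # deal_id, created_date
activities = pd.read_csv("activities.csv", parse_dates=["activity_date"])  # deal_id, contact_id, channel, activity_date

LOOKBACK = pd.DateOffset(months=3)   # the observed average research period

joined = activities.merge(deals[["deal_id", "created_date"]], on="deal_id")
training_window = joined[
    (joined["activity_date"] >= joined["created_date"] - LOOKBACK)
    & (joined["activity_date"] <= joined["created_date"])
]
```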
As part of the analysis, you should separate the assets from the deployment channel so you can understand the performance of the media channel apart from the performance of the asset, since each asset is likely delivered across multiple channels.
An important realization is that each campaign works well for different parts of the funnel, and not all will directly contribute to pipeline creation or pipeline conversion. Display Ads, for instance, are often used to reach new contacts and bring them into the database to feed outbound programs. Not every contact is worth the same; the focus should be on bringing in contacts that fit the buying team. For programs like these that target the earliest part of the marketing funnel, we can see the stage conversion, but we often forget to include the web visits prior to conversion (some platforms are unable to do this). These may need to be aggregated at the company level and included in the attribution model that way.
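A minimal sketch of that company-level roll-up, assuming a web_visits.csv export where visits have already been resolved to a company_domain; the file and column names are illustrative assumptions.

```python
# Aggregate pre-conversion web visits to the company level so early-funnel
# programs can be represented in the attribution model.
import pandas as pd

visits = pd.read_csv("web_visits.csv")   # company_domain, channel, one row per visit
company_touches = (visits.groupby(["company_domain", "channel"])
                         .size()
                         .rename("touches")
                         .reset_index())
```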
Knowing what performed well at converting prospects at a given stage lets us next examine which campaigns were cost-efficient. It does us no good if it costs more to bring in the prospects than the average value of the Deal. Another efficiency metric to review is how many contacts the campaign attempted to reach versus how many prospects responded. There is a hidden cost of contact fatigue... we've all seen it... a marketer targets nearly the entire database to try to get enough responses for their campaign. But this prevents those contacts from receiving other campaigns they are a better fit for, because of contact touch limits, which are designed to ensure we don't overwhelm contacts with too many communications. The worst performers should be highlighted for corrective action.
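A minimal sketch of that efficiency review, assuming a campaigns.csv export with campaign, spend, contacts_targeted, and responses columns; the column names, the average deal value, and the flagging thresholds are illustrative assumptions.

```python
# Flag campaigns that cost more than they can plausibly return, or that blast a
# large audience for a thin response (the contact-fatigue pattern).
import pandas as pd

AVG_DEAL_VALUE = 80_000   # assumed average contract value

campaigns = pd.read_csv("campaigns.csv")
campaigns["response_rate"] = campaigns["responses"] / campaigns["contacts_targeted"]
campaigns["cost_per_response"] = campaigns["spend"] / campaigns["responses"]

flagged = campaigns[(campaigns["cost_per_response"] > AVG_DEAL_VALUE)
                    | (campaigns["response_rate"] < 0.01)]
print(flagged[["campaign", "spend", "response_rate", "cost_per_response"]])
```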
Plan
Knowing what worked well at converting prospects at each stage, and how efficient those campaigns were, we can plan marketing for the next cycle. This is not a complete wipe and rebuild from the ground up every time. Ideally there will be few surprises (underperformers and outperformers)... for the few we do have, we drop the underperformers, add the outperformers into the mix, and create new content to test for this cycle.
With a well-functioning marketing machine, you should not see large swings from cycle to cycle, and you will have conversion rates by stage as well as by Program and campaign (for those that are reused). You will also have the cost of the campaigns by type and program group. Some portion of the Sales Target will need to be influenced by Marketing. Most of this will come from Pipeline Conversion campaigns and only a little from Pipeline Creation campaigns (depending on your average sales cycle length). Pipeline Creation campaigns are usually meant to prepare prospects for future cycles, when they enter Pipeline Conversion programs. Too often marketing teams think only about the actions in "their lane" and not about how those actions feed the pipeline in these terms. Adjust the mix of campaigns to ensure you have enough prospect touches for both the current cycle and future cycles. This gives us a high-level plan for budgeting and next steps.
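A minimal sketch of working the marketing-influenced portion of the sales target back through stage conversion rates to size next cycle's activity; the target and the rates are illustrative assumptions standing in for your own historical numbers.

```python
# Work the target backwards through the funnel to see how much volume each stage needs.
target_deals = 40                              # deals marketing must influence this cycle
stage_rates = [                                # (stage transition, observed conversion rate)
    ("inquiry -> MQL", 0.30),
    ("MQL -> SQL", 0.40),
    ("SQL -> opportunity", 0.50),
    ("opportunity -> closed-won", 0.25),
]

needed = target_deals
for stage, rate in reversed(stage_rates):
    needed = needed / rate
    print(f"{stage}: ~{needed:.0f} must enter this stage")
```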
The way you organize the hierarchy of your marketing activities will depend on the variety of campaigns you have and the focus of your reporting. The example below focuses on the assets; how they are deployed is captured in fields at the campaign level, including metadata such as Marketing Channel, Paid vs. Organic, and Outbound vs. Inbound.
There also needs to be a sanity check: are there enough companies "in-market" to meet the sales numbers (the number of companies "in-market" for that solution times the average deal value)? Obviously, we won't know of every company that is "in-market" (especially if there isn't much Intent data coverage for your type of product or service), but we can look at the historical ratio (known companies "in-market" vs. deals closed) to gauge the likelihood of hitting the number.
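A minimal sketch of that sanity check; every number here is an illustrative assumption meant only to show the arithmetic.

```python
# Rough "is there enough market?" check using the historical close ratio.
in_market_companies = 600        # companies currently showing intent for the solution
avg_deal_value = 80_000
historical_close_ratio = 0.05    # historically: deals closed / known in-market companies

likely_deals = in_market_companies * historical_close_ratio
likely_revenue = likely_deals * avg_deal_value
print(f"~{likely_deals:.0f} deals, ~${likely_revenue:,.0f} against the sales target")
```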
(How do you rough out a marketing plan? Share in the comments.)
Forecast
As we start the cycle again and marketing executes campaigns, we need reporting to help forecast how we are doing. We use Leading Indicators that have strong correlations with the measures the business is evaluated on (i.e., the Lagging Indicators). The simple reason we use Leading Indicators here is to detect problems while there is still time to course correct (a change of tactics, a redistribution of budget, etc.) and keep the pipeline healthy. Sometimes partway into a cycle (i.e., a quarter) you might need, or be given, additional funds to spend; keep in mind that some programs don't scale as well, and the lift from the additional funds may not be linear. Historical data on the efficiency of the campaign can help with this.
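A minimal sketch of checking which Leading Indicators actually track the lagging outcome, assuming a weekly_metrics.csv with one row per week; the metric names are illustrative assumptions.

```python
# Rank candidate leading indicators by how strongly they correlate with the
# lagging outcome we are ultimately measured on.
import pandas as pd

weekly = pd.read_csv("weekly_metrics.csv")
leading = ["mqls_created", "meetings_booked", "opportunities_created"]

correlations = weekly[leading + ["closed_won_value"]].corr()["closed_won_value"]
print(correlations.drop("closed_won_value").sort_values(ascending=False))
```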
Typically these are measures of flow and quality at key stages in a prospect's lifecycle. There are many more metrics each individual team will watch, but an overall view might look something like this. (What do you monitor to check whether your campaigns are performing as expected and will meet targets? Tell us in the comments.)
Tools
Here are some tools I have had exposure to for attribution, campaign efficiency, planning, and forecasting. (Share in the comments tools that you love)