With vast experience in marketing, sales, new product development, market research, and acquisitions across both the semiconductor industry and information services, Jeremey Donovan brought a wealth of knowledge to this fascinating and thorough interview. Now Head of Sales and Sales Operations at leading sales engagement platform SalesLoft, and a published author, Jeremey explains his tried and tested approaches to the unpredictable world of forecasting.
Rory Brown (RB): Hi Jeremey, thank you so much for chatting with me, it's a pleasure to be talking to someone with so much experience and knowledge.
Jeremey Donovan (JD): Thanks for having me. I'll start with the 'why' behind forecasting. I think the why is important. I think your forecast is actually a leading indicator, or a forcing mechanism, of understanding whether or not you're actually following disciplined processes for deal reviews and so on. Those disciplined processes, in turn, drive your win rate, and those win rates, in turn, drive your business. Forecasting in and of itself is a means to an end, which is that sort of cascade that happens in order for you to succeed in your business. I guess there are other pieces of forecasting, one of which is to quell fear. We talked about the rational side, which is growing your business; I think there's also the behavioural side, which is giving people comfort. Executives and sales leaders and investors, just giving them comfort in what's going on. Then there is that third point, which I guess is on the rational side, which is that, especially in a growth business, you forecast in order to figure out what your capacity and expenditure are going to have to be. How many people do I need to hire? How much marketing do I need to do? All of this financial planning relies on accurate forecasting.
RB: You've mentioned before how a good forecast will tell you that you have a disciplined business that follows a process, can you tell me about what you’ve been looking at there?
JD: Yes, I think a good example of this is if you look at the more sophisticated people out there: when they talk about forecasting, they always include some discussion of sales stages, if you will, of sales methodology, and then they also invariably talk about some qualification framework. These qualification frameworks, another way to describe them is as deal health and hygiene assessments. Here's what I mean by that: the stages are probably very similar whether you're talking about SMB or enterprise. But the qualification framework that you feather into your stages will differ. In the transactional world you're probably going to use something simple like BANT or one of its cousins, while in the enterprise selling world you'll use something like MEDDIC, which seems to be fairly universal there, and then feather that into the exit criteria of your stages.
You have this information in your stages, and then, in theory and in practice, the stages are associated with probabilities. From that you come up with a weighted pipeline forecast as one means of doing it. If the weighted pipeline that you forecast at the beginning of a month or a quarter is very different from where you end the month or the quarter, then something in there is wrong. Your close dates are wrong, where you actually are in your stages is wrong, your probabilities are wrong, something is wrong. So it's this diagnostic tool: it's your actual variance to forecast which tells you something's wrong, and then you need to double-click on that to figure out what it is. I think people aren't disciplined enough about going back in time and saying, "Okay, this was my expected conversion rate and I hit this number at the end of the period." Was it actually the pipeline that you had open at the beginning of the period, or did you pull deals forward, or did bluebirds fly in? I think there's an intellectual honesty exercise that needs to happen, and it can easily get masked by the classic, "I need a 2X or 3X pipeline coverage ratio." You've got to be careful about that.
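As a rough illustration of the weighted-pipeline mechanic described above, here is a minimal Python sketch. The stage names, probabilities, and deal amounts are invented for the example, not figures from SalesLoft:

```python
# Illustrative weighted-pipeline forecast: sum of (deal amount x stage probability).
# The stage probabilities below are hypothetical, not real benchmarks.
STAGE_PROBABILITY = {
    "discovery": 0.10,
    "demo": 0.25,
    "proposal": 0.50,
    "negotiation": 0.75,
}

def weighted_pipeline(deals):
    """deals: iterable of (amount, stage) pairs expected to close this period."""
    return sum(amount * STAGE_PROBABILITY[stage] for amount, stage in deals)

deals = [(10_000, "demo"), (20_000, "proposal"), (5_000, "negotiation")]
forecast = weighted_pipeline(deals)  # 2,500 + 10,000 + 3,750 = 16,250
```

Comparing this number at the start of the period with the actual result at the end is the variance check Jeremey describes: a large gap means the close dates, stages, or probabilities are off.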
RB: You're talking about the pipe that could pull in, the pipe that could be created, the bluebirds, the pipe that was already there. Basically, what was the pipe that you actually closed?
JD: Or what you had committed to close, absolutely.
RB: Yes. What are you doing to get on top of that right now?
JD: Now, I guess, comes the math side of it. Right now, we have three approaches and we're triangulating across the three. The least sophisticated, but probably the most common out there, is what we call the bid; some people call it the commit, it has different names. But effectively, at a certain point, let's say the 20th of the month, we'll ask our first-line managers to commit to a number for the month, and then we measure.
RB: Like a judgment number? So they survey their landscape, and they say, "This is what I'm going to land"? It's not deal-specific?
JD: Yes, it's not deal specific, which I don't love about it but it's one of our triangulation points. They will look at a lot of data in order to figure that out and then apply some optimistic or pessimistic judgment on that number and then we roll that up for a final number. You could use carrots or sticks to keep them honest. We actually use a carrot to keep them honest, which is every manager gives us their bid on the 20th of the month and then the manager whose final number is closest to the number they forecast gets intrinsic and extrinsic value. The glory, I guess, the intrinsic part of being the most right but then they also get $500 for calling the number.
RB: So that's the 20th, so all they have to do is be close within 10 days?
JD: Yes. We actually do it every week. Every week they have to call a number for the month, but we only do the “contest”, if you will, on the 20th.
RB: And on the 20th can't it be a commit based on the actual deals that are committing?
JD: Yes, if you take the enterprise side. The roll-up of the commit on the enterprise side is going to be much closer and more known. I mean, your question gets at what forecast methodology you should use in what segment. That's another thing I've noticed as I've talked to a bunch of sales ops leaders and sales leaders: you've got sales process as one dimension, and you've got the qualification or deal-health framework you use as another. The other thing is that I think the forecast processes are very different for different segments, as they should be.
For instance, in the enterprise, the characteristics are that you have longer sales cycles, fewer deals, and larger deals. What that means is that generally your forecast window, whether it's a month or a quarter, is shorter than your sales cycle. When that's the case, probably the best approach you can use is deal-by-deal inspection, and then you put deals into forecast categories - I think the Salesforce defaults are commit, best case, and pipeline. Then the most sophisticated thing you see on top of that is that for each manager you apply a set of probabilities to commit, best case, and pipeline, adjust by manager, and roll that up; that way you're taking into account any of their over/under biases. I think that's the right approach for those enterprise deals because so much of the information that you would use to forecast does not exist in Salesforce; it just would not be practical for that to happen.
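The per-manager adjustment described here can be sketched in a few lines of Python. The category probabilities and dollar amounts below are hypothetical, purely to illustrate how the same submitted numbers roll up differently depending on a manager's track record:

```python
# Sketch of manager-adjusted forecast categories: each manager's commit /
# best case / pipeline calls are discounted by probabilities calibrated
# from that manager's history. All figures here are invented examples.
def adjusted_rollup(calls, probs):
    """calls: {category: amount} a manager submits.
    probs: {category: historical hit rate for this manager}."""
    return sum(amount * probs[cat] for cat, amount in calls.items())

# Hypothetical calibrations: one manager historically over-calls,
# the other is conservative.
optimist = {"commit": 0.80, "best_case": 0.40, "pipeline": 0.10}
pessimist = {"commit": 0.95, "best_case": 0.60, "pipeline": 0.15}

calls = {"commit": 100_000, "best_case": 50_000, "pipeline": 200_000}
a = adjusted_rollup(calls, optimist)   # 80,000 + 20,000 + 20,000 = 120,000
b = adjusted_rollup(calls, pessimist)  # 95,000 + 30,000 + 30,000 = 155,000
```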
The flip side of that is, well, you can go to the extreme, which is cases where it's just super high-velocity, transactional, and there you'd use either a demand model or a capacity/productivity model. We do that for our sales development function: we know how many SDRs we have, we know what their productive capacity is. We actually go so far as to model the individual and where they are in their ramp.
RB: And you've got enough data at that level because they're high activity?
JD: Yes. We know where this individual is in their ramp and what they're going to produce in a given period, within reason, so I think that's the right model there. Then the more typical thing is the thing that's in the middle, where you have this mix of a lot of smaller and medium-sized deals. There I think the weighted pipeline is the most trustworthy thing, but then you still have to somehow have an estimate of what's out of pipeline, what's going to come in from outside the pipeline.
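The capacity/productivity model for the SDR function can be sketched as below. The full-ramp productivity figure and the ramp curve are assumptions for illustration, not SalesLoft's actual numbers:

```python
# Hypothetical capacity model: team forecast = sum over SDRs of
# (full productivity x ramp factor for their month of tenure).
FULL_PRODUCTIVITY = 10  # opportunities per month at full ramp (assumed)
RAMP = {1: 0.25, 2: 0.50, 3: 0.75}  # month of tenure -> fraction of full output

def sdr_capacity(tenures):
    """tenures: list of each SDR's month of tenure (1-based).
    SDRs past month 3 are treated as fully ramped."""
    return sum(FULL_PRODUCTIVITY * RAMP.get(month, 1.0) for month in tenures)

team = [1, 2, 5, 9]  # two ramping SDRs, two fully ramped
forecast_opps = sdr_capacity(team)  # 2.5 + 5.0 + 10.0 + 10.0 = 27.5
```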
RB: Just coming back a bit: the three approaches, we talked about the bid, and you're about to go on to number two.
JD: We use this bid as one point of the triangle, if you will. The next one we do is the weighted pipeline approach. There we do change the probability by segment and by stage. That's our second triangulation point. Then our third one is actually a more sophisticated variation of the pipeline coverage approach. Many people just have this 2X-to-3X thing, but what we've actually modelled is, historically, what our pipeline coverage needs to be in order to end at a particular number. We can do that at any moment.
RB: So you can see where coverage needs to be now, where it should be today for next month, for example? And whether that means you don't have enough by the end of the month?
JD: Yes. For example, if you're on the 14th day of the month and your open pipeline is $100, where do you expect to end at the end of the month? Not from a stage-weighted point of view, but just historically? If you were sitting at $100 on the 14th of the month a year ago, where did you end that month? Or the 15th or the 16th or the 17th? We have a model that allows us, on any day of the week, to project out in time where we think we're going to end based on that sort of historical trend and seasonality.
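The historical-coverage projection can be sketched as a simple lookup: for a given day of the month, apply the historical ratio of final closed amount to open pipeline on that day. The ratios below are made up purely to show the mechanic:

```python
# Sketch of the day-of-month coverage projection. Each ratio answers:
# "historically, what fraction of the pipeline open on this day ended up
# closing by month end?" These ratios are invented examples.
HISTORICAL_CLOSE_RATIO = {14: 0.40, 15: 0.42, 16: 0.45}

def project_month_end(day_of_month, open_pipeline):
    """Project the month-end closed amount from today's open pipeline."""
    return open_pipeline * HISTORICAL_CLOSE_RATIO[day_of_month]

# On the 14th with $100 of open pipeline, and 40% of such pipeline
# historically closing, you'd project $40 for the month.
projection = project_month_end(14, 100)
```

In a real model the ratios would be fitted per segment and smoothed for seasonality, but the core idea is this historical lookup rather than stage-weighted probabilities.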
RB: A question for you, to play devil's advocate on that third one. You're looking at coverage in terms of value, or weighted value?
JD: You could use either one, but it really is total value.
RB: My challenge with that, and where I've seen a lot of people go wrong (they're not doing anything as sophisticated as that), is that they look at their X, they get into the quarter, and most of that X is at the wrong part of the funnel to make the quarter. We need to get rid of that, right?
JD: I agree, I don't like it. My way of reasoning through it myself is: let's say you're trying to forecast the month and you're three days away from the end of the month. If you have good pipeline hygiene, you should not have anything early-stage forecast to close in that month. In theory, whatever is open as you go into those final three days should be stuff that could close, in which case the approach is sound. But all forecasting relies on really, really good, fresh data and really, really good hygiene.
RB: Just to humour me a little: those three models that you use, could you run through each one and tell me, if it's off, if you look back historically and it says, "This method told us X 25 days ago, and we finished on Y, which is quite different," what would you automatically assume? What would your hypothesis be as to what happened, or what would you explore to work out why it was off, and what would it tell you about that particular business segment?
JD: So how we would diagnose the variance? Yes, on the commit approach, I think that one really comes down to the individual manager: who was high and who was low. Over time, in theory, you'd see patterns. Maybe manager A is consistently high; then you have a choice: you either try to educate them and get them to lower their number, or you just apply a different probability to their commit.
RB: Which would you choose, personal preference?
JD: I think it's very hard for people to adjust. One of my failings I suppose is that I-
RB: Math over behaviour change?
JD: Yes, math over behaviour change. It's not an either/or; you can probably try to do both, and hopefully over time they correct, but I wouldn't blindly assume that they would. Interestingly, by the way, the other two models, the weighted pipeline model and the pipeline coverage model, although they rely on decently different math, track very, very closely together for us. Just take the pipeline coverage one: assuming that your historical probabilities are correct, my best guess is that variance comes from inaccuracy in stages, like the rep was just being sloppy on deal hygiene. I think that's often going to be the root cause: basically, the deal hygiene is not good.
RB: You've got a team of 10 people, but each of the 10 does everything really differently. If three of the people that do it differently have most of the pipeline that month, all of a sudden you've got a different result.
JD: Yes. I would add by the way, if you have a high degree of variability across your teams, assuming that they're serving similar segments, that's probably a sign that your sales process is not being rigorously followed or that you don't have one. It’s one or the other.
You guys may be more sophisticated, but I think there's a variation on the weighted version. We're just weighting by stage and segment, which is better than random probabilities, but I think you can get so much more precise, accurate, whatever word you want to use, and sophisticated by factoring in other things that are better signs of deal health. The strongest predictor of opportunity health that I've been told is, "When was the last inbound email from the prospect?" If you haven't had one of those in a while, then you probably don't have a deal.
RB: You talked a little bit about how in enterprise your forecasting window is shorter than the sales cycle, and you talked about your capacity model, which is great if you're lucky enough to have that high volume turning through. In the middle, which is where most SaaS companies sit, I would say, is that kind of window where maybe you've got 30- to 45-day sales cycles, but you're trying to forecast 90 days out. In that scenario, what sort of stuff have you thought about or tried to get visibility?
JD: You have to glue two things together because you're in that fuzzy middle area. I think what I would glue together is the weighted pipeline model, which should give you a pretty good estimate of what you have, what's known. Then the other piece you need to forecast is basically the run-rate piece, and there are a lot of run-rate models that you could potentially use. In a way, our pipeline coverage approach actually is a way of modelling run rate, but there are other ways to model it. A really good example of that is upgrade business. Upgrade business is often a percentage of your installed base, so you can probably have a reasonably good forecast of your upgrade business just by knowing what your installed base is. And I think there are two situations: there are companies who tend to upgrade at renewal, in which case they have to look at what's renewing in a period. Then there are other businesses that have continuous upgrades that might happen at any time, and that's probably easier because then you just use a percentage of your total number; then just basic time-series stuff is also a reasonable way to do it.
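The two upgrade run-rate situations can be sketched as below. The upgrade rates and installed-base figures are invented purely to illustrate the distinction:

```python
# Hypothetical upgrade run-rate models. Continuous upgrades apply a rate to
# the whole installed base; upgrade-at-renewal applies a rate only to the
# revenue renewing in the period. All rates and amounts are assumptions.
def continuous_upgrade_forecast(installed_base, monthly_upgrade_rate):
    """Upgrades can happen at any time: rate applies to the full base."""
    return installed_base * monthly_upgrade_rate

def renewal_upgrade_forecast(renewing_revenue, upgrade_rate_at_renewal):
    """Upgrades happen only at renewal: rate applies to what renews."""
    return renewing_revenue * upgrade_rate_at_renewal

# A $1M installed base upgrading at ~2% per month (assumed figures):
monthly_upgrades = continuous_upgrade_forecast(1_000_000, 0.02)  # 20,000
# $200k renewing this quarter with 10% of renewals upgrading:
renewal_upgrades = renewal_upgrade_forecast(200_000, 0.10)       # 20,000
```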
RB: Talking about the culture of forecasting: first of all, in an ideal world, how should reps, managers, team leaders, and directors view the forecast? What should their understanding be of why the forecast is important?
JD: Well, at the end of the day, so much of it is about their personal interests.
RB: Which is quite against the grain already because in businesses, people don't see a forecast as useful for them. They see it as I'm giving information to someone else so they can report on me. Right?
JD: Yes. I think if you're a rep, the reason it's important to forecast is so that you can forecast your earnings. Then the more extreme version of it is your job security: if you don't have enough in the pipeline to hit your number, or some reasonable percentage of your number, then you're not going to make money and you put your job at risk. I think there's a real personal benefit to having accurate forecasts. Then it's probably similar for managers, certainly for first-line managers. I think when you get to executive leadership, then it becomes more important for planning.
RB: What does a good process look like for the types of forecasting meetings that take place in your business, from rep and manager conversations right up to the board? How do they take place? You guys have probably played around with this?
JD: It's one of the benefits of getting venture capital from a large VC. One of our VCs is Insight Partners. They have a lot of investments in SaaS companies and they have a sales centre of excellence. They don't require us to follow it, but they have given us a recommended forecast calendar. It basically works like this: reps are obviously responsible for making sure that their pipeline is up to date on an ongoing basis, but they're especially responsible for it by end of day Monday. We roll up the rep numbers at the end of the day on Monday, the first-line managers then roll up for the directors by end of day Tuesday, and the directors roll up for the RVPs on Wednesday. That way we can deliver the CRO the current monthly and quarterly projection each Thursday.
RB: That's pretty clear. Every week.
JD: Every week, you just do that over and over again. The focus will vary, though, also per their guidance: if you take the 13 weeks in a quarter, what some weeks focus on will vary. For instance, at one extreme, in week one the most important deals to focus on are probably the pushes from the prior quarter, so there's more emphasis there. Then, again per Insight Partners' guidance, in week 9 or 10 we want to carve out some time to look at deals that are forecast to close in the following quarter. Once you get into weeks 11, 12, 13, you have to be focused exclusively on what's going to close this quarter. Right around that week 9 or 10 mark, it's a little bit of calm before the storm: we'll do a little bit of planning so you're in good shape for the following quarter.
RB: Excellent. To offer the mic to you, what else comes to mind when you hear the topic of forecasting?
JD: Stepping back, there's people, process, and technology. Salespeople and sales managers are generally not mathematicians. I guess a point here is that whatever you do has to be easy, understandable, and doable by them. For better or worse, this is one I've thought deeply about: if you were to draw a two-by-two of accuracy, low and high, against complexity, low and high, then, weirdly to me, people will willingly trade a degree of accuracy for less complexity.
Whatever you build for people, don't build something elaborate. You need to build something that they can understand; otherwise, you're going to fail at the adoption piece. I think your other point from earlier, that they need to understand what's in it for them to spend the time forecasting, is super important. I guess another thing is: don't ask people to forecast what they're incapable of forecasting. A good example of this is asking reps to estimate how much they're going to close that's not in their pipeline right now; I think that is a waste of time and energy. They can't do that. You're much better off centralising that estimate. Only ask people to forecast what they can. I could probably go on and on about people-oriented things because they are critical.
RB: The overarching message is keep it easy and simple and not hard to follow for them.
JD: Yes. Exactly. Then the last bit is on technology. There's a lot you can do. Again, I think even sales operations people are not statisticians. I think it's in people's interest right now to explore - it gets at what you guys do, right? They should be exploring ways that people have made the math understandable, simple, and applicable to their businesses. There was a first generation of AI, ML, predictive stuff that was done in marketing that fell a little flat. I think we're past that first generation and we're now onto gen 2 at least, where this stuff actually works. It is predicated on having good underlying data. If you've got good underlying data, you can get really, really strong predictions, especially for some of those elusive pieces like the run-rate part of your business.
RB: It's been interesting. The challenge is, we don't really go to market talking about our machine learning or AI. I think that's the wrong way to go, because people are still put off by it and they don't trust it.
JD: I agree. They don't.
RB: Because they don't know what it's doing. If you've got a Lego car and it's too complicated to take it to bits, you don't trust that it's going to do what it needs to do. What we've found is, in some cases we've made more complicated mathematical models, but they're still easy to interpret and they give you various levers you can pull. So if you're going to miss next quarter, here are the four levers as to why. The problem with machine learning is that it might give you visibility another two or three or four weeks ahead of that, but you can't reverse-engineer it very well. You can't look in and see the guts and go, "Well, the reason it's doing this is because of X, Y, and Z."
JD: I think it's like why the commit persists. The commit persists because sales leaders who are not mathematicians generally understand the commit and they are very much afraid of the-- well they don't trust what they don't understand.