There’s a play on words I like to use whenever someone is publicly disparaging another member of their own profession: “professional discourtesy.” It can be seen as a form of revenge, and like revenge it’s often best served cold. Sour grapes are also best served cold, and jealousy and spite are often behind expressions of professional discourtesy as well.
So it may be hard to believe that, while this post is definitely an exercise in professional discourtesy, I’m neither being vengeful nor spiteful when I say that the large analyst firms’ vendor ratings reports, those Magic Quadrants and Waves and MarketScapes, are really doing a massive disservice to the industry, both to the vendors who struggle mightily to make it to the Holy Land in the top right corner and to the customers who often pay rather large sums for the privilege of seeing where the different vendors stack up against one another in various sectors of the market.
Maybe it will help convince you my motives are pure if I state that I’ve built a solid portion of my mildly successful career in the shadow of these reports: their gaps and omissions form the basis for a good deal of my work advising customers and vendors about their strategic decisions. The more these reports fail to serve their purported purpose – helping customers ultimately be successful by making the right strategic decisions about their technology choices – the more consulting work I and a corps of independent analysts like me can pull down.
So I love these guys, really I do, even if by saying so I’m angling for a “damning with faint praise award.” (Maybe my friend Jon Reed wants to add that category to his (in)famous Hits and Misses posts?)
Love ‘em or not, a recent conversation I had with a friend, a seasoned analyst relations exec and fellow skeptic, finally catalyzed my thinking about what’s so wrong with these reports – and in particular some vendors’ slavish devotion to fighting for a spot within the upper right quadrant of those boxy little graphics.
So here goes.
My first beef is how non-differentiating these reports can be. All too often an analyst firm bake-off report has a cluster of competitors crowding that “magic” upper right corner in a dense, barely discernible pack. Sometimes the circles are color- and size-coded, so you can usually find the vendor you’re looking for, but there is no empirical way to know what those relative positions mean when seven vendors are all clustered in a tight knot that occupies a space the size of a cherry on the printed page. All this tells me is how similar they are to each other. It says nothing about what makes them different.
Meanwhile, the customer has paid some real money to find out that their short list consists of seven vendors. How much does that help the process of moving an RFP forward?
In most of the deals I’ve looked at or been involved in, not nearly enough. Especially because, for example, the report I’m looking at right now has another six vendors about 10 mm from the main cluster – and that’s literally all the data a customer has to work with to differentiate among the remaining vendors’ offerings: six more vendors 10 mm away from the central seven-vendor upper-right cluster, five or so degrees to the right or left of the invisible center line that marks the midpoint between the two axes. That’s really not a lot of data on which to base a potentially multi-million-dollar deal. I should add that this particular firm seems to specialize in consistently weak differentiation: a lot of its reports have this clustering problem.
Another firm’s reports tend to be more differentiating, but the customer is still left to parse their choices based on somewhat vague positions along two equally vague axes. Can anyone really make a decision by “scoring” their vendor on one of these graphs? I really hope they don’t. And even the four-paragraph write-up for each vendor that follows is mostly a catalogue of generalizations about quality, coverage, focus, value, and other 30,000-foot concepts that don’t necessarily move the needle on a strategic decision far enough to be defensible.
Why? For two reasons. The first is that a real transformative software project is always about the details, not the generalities. So what if vendor X has “the best knowledge management offering,” or just bought an integration company so it now has strong support for heterogeneous end-to-end processes? Does anyone in their right mind award a million-dollar contract to a vendor based on that kind of platitude? Unfortunately, yes – hence the massive, endemic software implementation failure rate. (Again, an error that has been the foundation of a lot of what I’ve been doing for a living for the last 30 years. So another tip of the hat to the big analyst firms from me and my financial advisor.) In every complex deal I’ve seen, the real trick has been to discern how different the top vendors are from one another, not how similar.
The second reason is more about the nature of complex projects, and this is an issue that permeates my entire critique of these reports. If my project includes a new cloud ERP system, and my company is a discrete manufacturer, for example, it can be helpful to understand in general which vendor has what capability, and to a certain extent knowing which are the top vendors can provide some guidance. But only some: no ERP system stands alone, and the functions an ERP system enables are potentially tied to everything in the order-to-cash cycle, even if the ERP side only handles a portion of that process.
Making a cloud ERP system usable takes a lot more than just deploying an ERP module, however capable, and in fact for most end-users in this order-to-cash cycle, the ERP system will be running behind the scenes anyway. Meaning it’s really not so important that one vendor’s ERP is crowded into the upper right corner next to six others – that’s basic table stakes. What really makes or breaks the project is how that ERP software supports the larger processes – new or revised – that are essential to the success of the order-to-cash transformation, many of which include steps or tasks that aren’t part of the ERP system’s built-in functionality. In other words, it may be – and often is – much more important to know how an ERP system handles a particular special case in a particular corner of the company’s business than its relative position in the corner of an analyst report.
In case you’re wondering, none of that information is detailed in any of the bake-off reports I’ve studied – at best there are generalities about API strategies (has one) and partner programs (has one, too) rather than how well those aspects of the vendor’s business actually work. The real strategic issue – does it support my special strategic requirements? – isn’t in the bake-off report.
Speaking of partners: no piece of software has value if it can’t be implemented successfully, and the ultimate irony of every one of these product bake-offs is that none of them ever references in any detail the vendor’s success rate in translating all that vagueness about relative value into actual customer success. (Why? Because these analyst firms don’t track that issue. Why don’t they track it? Other than that it’s really, really hard to do, I don’t know – ask them.)
Workday is my favorite poster child for this problem: they always score high in the vendor ratings for the usual feature/functionality reasons – Workday plays that game well. But take a look at Raven Intelligence’s data on Workday implementations and it’s a very different picture – as in, they’re simply not the leader in actual customer success, despite their high marks in the bake-off reports or their misleading claims of 90+ percent customer satisfaction ratings.
There is no correlation between a high rating in a bake-off report and a vendor’s ability to manage a successful implementation directly or through a partner. None!
It shouldn’t be a question I have to ask, but here goes: Do customers need to have successful projects that deliver real value or do they just need to buy good software? If you don’t know the answer you can stop reading right now and go back to shitposting or Wordle or whatever you were doing before you unfortunately clicked a link to this post.
But wait, before you go. There’s also the problem of the criteria used to qualify for inclusion in a bake-off report in the first place. (And no, despite rumors to the contrary, the ability to pay outright for the privilege isn’t one of them, though you’d better be staffed up to handle the workload of complying with all the data requirements these firms will burden a vendor with in order to be part of the report.) One report recently given to me – a legitimate reprint, nothing pirated – was missing a company that was arguably one of the category leaders in that slice of the market.
Why weren’t they included? They didn’t fit the criteria.
A category leader, if not the category leader, didn’t fit the criteria? Anyone else wondering whether this is the fault of the vendor? Or is there something deeply flawed about the criteria? In this case the possible category leader was excluded over a technicality involving a somewhat arcane aspect of the technology. (Because the only other reason is that the vendor didn’t pay enough to be in the report, which never happens, right?) That issue shouldn’t by itself disqualify any vendor, as long as the stuff that actually matters to the customer – does it solve my business problem, does it play well with the rest of my infrastructure, can I implement it at a reasonable cost, and will that implementation be successful according to established criteria – is covered. But no: arcane technical criteria beat real-world value, at least in the minds of the authors of this particular report.
To tone down the “professional discourtesy” just a tad, when all is said and done these reports do serve a purpose, but they hardly deserve to live rent-free in the brains of so many marketers, AR professionals, and software execs, many of whom spend considerable energy bowing to these false gods. Nor should they figure so prominently in the perspective of that enabler of everything wrong with the analyst/vendor/customer relationship, ARInsights – which purports to rate individual analysts’ value to vendors by, not surprisingly, assigning the lion’s share of the rating to the number of reports a firm publishes in a given timeframe.
(Why does ARInsights do this? They apparently have no choice. Their unabashed answer is that the number of reports a firm publishes is easy to quantify, while more complex measures take too much work. They’re like the proverbial ballpark reporter who could never hope to actually play the game they’re critiquing: ARInsights is in no position to analyze either the quality of these reports or their real impact. Nor do they ever talk to customers about what customers need from analysts – as far as I can tell from my conversations with them.)
But I digress. Using these analyst firm reports as a way to short-list vendors for a project can be useful, but only to a point. One thing that would help would be a comprehensive, solution-based rating that doesn’t just focus on a single piece of technology or a point solution. A company looking for a way to improve its supply chain visibility and resiliency, for example, won’t be particularly well served by a report that purports to rate the multi-enterprise supply chain vendors, omits a category leader (again!), and rambles on about a complex rating system that nonetheless leaves the reader gazing at an undifferentiated cluster of almost 10 companies – vastly different in size, scope, and breadth of offering – that are all supposedly worthy of consideration. Did I mention the report is rather pricey?
While a more complex rating report would be useful, the real gold standard is found in the opinions and perspectives of other customers about a specific product’s ability to successfully interact and coexist in a complex, heterogeneous landscape as part of a complex end-to-end process – along with the vendor’s ability to either successfully implement the software themselves or find the right partner to do so for the customer.
Ideally those opinions aren’t about individual products as much as they are about the processes that were enabled: available-to-promise, configure-price-quote, regulatory compliance, and so on. It’s not enough to know that an ERP system can do order management – what a customer really needs to know is whether it can help lower days sales outstanding by speeding up some part of that process.
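To make that yardstick concrete, here’s the standard days-sales-outstanding calculation, with a worked example of my own – the numbers are made up for illustration, not drawn from any report:

DSO = (accounts receivable ÷ total credit sales) × number of days in the period

So a hypothetical company carrying $10 million in receivables against $120 million in annual credit sales has a DSO of about 30 days (10 ÷ 120 × 365 ≈ 30). An order management module that reliably shaves a week off invoicing or collections moves that number in a way no quadrant position ever will.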
Because – here’s the radical idea for the close – at the end of the day the customer, whether they articulate this as well as they should or not, doesn’t want a discrete product, or a service, or a piece of technology. They want the solution to a business problem, or a means to avail themselves of a new opportunity; they want an outcome they can measure, and they want it as soon as possible and as cheaply as possible. And it should work well for its users and play well with the rest of the technology stack.
So please, vendors: stop feeding a cycle of selling point solutions that by themselves don’t deliver value, just because that’s how you know how to build them and that’s how the big analyst firms (incorrectly) measure your value to your customers. You and the big analyst firms can do better. Much better. And our mutual customers will thank you for it, I promise. Me too. But only after I retire.
Ludovic Leforestier says
Really good blog post, Josh. I agree the implementation aspect is generally overlooked in evaluative research – but good analysts know better and take this into account.
Now, with much of the reference checking going to Gartner Peer Insights – and even though that site is now open to partners (including SIs) – reference conversations for Gartner Magic Quadrants aren’t as rich as they used to be.
In the post linked below, we advise vendors to be selective about which evaluative reports they participate in. And we educate our clients on this.
https://www.starsight.biz/2022/05/09/how-much-do-i-need-to-pay-to-be-in-a-gartner-magic-quadrant-and-4-other-analyst-relations-myths/
Joshua Greenbaum says
Thanks, Ludovic. Your “analyst relations myths” are spot on too. Well said.