Most expense management vendor selections don’t fail at the final decision. They fail earlier, when the shortlist is built on the wrong criteria, when demo scripts substitute for real evaluation, or when the constraints driving the decision are never named out loud.
The CFO who runs a structured shortlisting process gets a better outcome than one who runs a feature comparison. The difference isn’t analytical skill. It’s knowing which part of the process is actually doing the work.
Key Takeaways
- Most shortlists are built from brand recognition and peer recommendations, not structured criteria. That’s a reasonable starting point but a poor filter.
- The criteria that appear in evaluation matrices often don’t predict satisfaction after go-live. The criteria that do are harder to assess in a demo.
- Constraints such as ERP compatibility, implementation timeline, and budget ceiling shape vendor decisions more than features do, and they're rarely mapped before evaluation starts.
- The vendors who make evaluation easy are telling you something about how they’ll handle everything else.
How Shortlists Actually Get Built
The official version of a vendor shortlist involves an RFP, a formal scoring matrix, and a cross-functional evaluation team. That version exists in some organizations. In most, the shortlist starts with three or four names someone already knew.
A category leader gets included because they came up in a Google search. A mid-market vendor makes the list because a peer mentioned them at a conference. A new entrant appears because a board member forwarded an article. From there, a few more names get added through a quick analyst review, and the shortlist is set before any criteria have been formally defined.
None of that is wrong, exactly. Most of the time it produces a usable starting point. The problem is that the shortlist built this way reflects familiarity, not fit. Brand recognition and recency bias are doing more filtering work than any explicit criteria.
The finance leaders who get the best outcomes from vendor selection are the ones who apply a genuine filter before demos begin, not during them.
The Criteria That Survive Contact with Reality
Every evaluation starts with a long list of requirements. Integration with the existing ERP. Mobile expense capture. Policy enforcement controls. Multi-currency support. Configurable approval workflows. After a few demos, the list tends to collapse, because almost every vendor demo claims to satisfy almost everything on it.
The criteria that differentiate vendors well aren’t the ones that appear in feature checklists. They’re the ones that are hard to assess from a demo:
Implementation timeline and what drives it. Every vendor will tell you their implementation is fast. Ask for the median time-to-live for companies at your size and on your ERP. Ask what causes implementations to take longer, and whether yours has any of those factors. The answer tells you more than the headline number.
Support model after go-live. The implementation team is almost always more capable than the ongoing support team. Ask who owns your account after launch, what their response SLAs are, and how configuration changes get handled once you’re live. Ask for a reference call with someone who has been a customer for two or more years, not someone in their first year.
Pricing model behavior at scale. Many platforms appear competitively priced at the point of sale and become expensive as transaction volume grows, as you add integrations, or as you need features that sit in a higher tier. Ask for a three-year pricing scenario at your expected growth trajectory, in writing.
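That three-year scenario is simple arithmetic, and it's worth running yourself before accepting a vendor's version. The sketch below is purely illustrative: the base fee, volume tiers, per-transaction rates, and growth rate are hypothetical assumptions, not any vendor's actual pricing.

```python
# Hypothetical three-year cost projection for an expense platform.
# All figures (base fee, tiers, rates, growth) are illustrative
# assumptions, not real vendor pricing.

def annual_cost(transactions, base_fee, tiers):
    """Base subscription plus a per-transaction rate from a volume tier.

    tiers: list of (volume_threshold, per_transaction_rate) with ascending
    thresholds; the rate of the highest threshold <= volume applies.
    """
    rate = tiers[0][1]
    for threshold, tier_rate in tiers:
        if transactions >= threshold:
            rate = tier_rate
    return base_fee + transactions * rate

# Illustrative scenario: 20,000 transactions in year one, growing 30%/year,
# with the per-transaction rate dropping at volume breaks.
tiers = [(0, 0.50), (25_000, 0.45), (50_000, 0.40)]
volume = 20_000
total = 0.0
for year in range(1, 4):
    cost = annual_cost(volume, base_fee=12_000, tiers=tiers)
    total += cost
    print(f"Year {year}: {volume:,} transactions -> ${cost:,.0f}")
    volume = int(volume * 1.3)
print(f"Three-year total: ${total:,.0f}")
```

The point of writing it down is that the shape of the curve, not the year-one number, is what the negotiation should be about; ask the vendor to commit to their version of this table in writing.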
Data ownership and portability. If you need to leave, what do you own? How do you get your historical expense data out? Vendors rarely volunteer this information, but it’s a fair question and the answer varies significantly.
The Constraints That Drive More Than Features
The features on the shortlist evaluation sheet rarely determine the final decision. Constraints do.
ERP and systems compatibility is the most common deal-killer that should have been identified at the start. A platform that doesn’t have a native, maintained connection to your ERP is a different product than one that does, regardless of how it scores on everything else. This is worth verifying before any demo, not after. Understanding how expense platforms connect to ERPs and other systems is a prerequisite for a meaningful shortlist, not an item to evaluate in round two.
The actual budget ceiling, not the ROI argument. CFOs know the ROI case can be made for almost any reasonable software investment. The real constraint is what was approved in the budget. A platform that pencils out beautifully on a total cost of ownership model may still lose to one that costs less upfront, because the person approving the budget isn’t the same person running the evaluation.
The internal implementation deadline. There’s almost always a date driving the selection, whether it’s a fiscal year-end, an audit, a contract expiry, or a board-level commitment. That date is a constraint on which vendors are realistic candidates. A platform with a six-month implementation runway is not a real option if you need to go live in ten weeks.
The internal champion. Software selections are rarely made by committee in practice. Usually one person is driving the process, and their preferences and concerns shape the evaluation, sometimes visibly and sometimes not. Understanding who that person is and what they’re optimizing for matters more than it’s comfortable to admit.
The Demo Problem
Vendor demos are not evaluations. They are sales presentations. Every vendor demo is designed to show the product's strengths in a controlled sequence and to steer around the places where weaknesses would be visible. The standard demo flow is optimized through thousands of repetitions to move prospects to the next stage.
This doesn’t mean demos are useless. It means they should be treated as introductions, not assessments.
A few things that cut through the scripted demo:
Ask vendors to run a scenario against your specific data, your ERP structure, or your policy rules. Vendors who handle this well have a product that real companies actually configure and run. Vendors who deflect are telling you something.
Ask what the product doesn’t do well. Every platform has genuine limitations. A vendor who can’t name one hasn’t earned trust as an evaluation partner.
Ask to see the expense audit and policy controls in a live configuration, not in a sandbox with pre-loaded perfect data. Exception handling, escalation paths, and edge cases are where platforms diverge most.
Ask for a reference call that isn’t coordinated by the vendor’s sales team. Any vendor worth selecting will have customers who would agree to an unscripted conversation.
What a Structured Shortlisting Process Actually Looks Like
The finance leaders who run the best evaluations do roughly the same things:
They define their must-have criteria before any demos begin and share them with all vendors in advance. This tells each vendor what actually matters and forces the evaluation to stay anchored.
They separate must-haves from nice-to-haves and enforce that separation when scoring. Features that were labeled “nice to have” before demos shouldn’t become deciding factors because one vendor demo made them look appealing.
They align all internal stakeholders on evaluation criteria before the process starts, not during it. When IT, procurement, and finance are scoring vendors against different criteria, the selection process becomes a negotiation rather than an evaluation.
They run reference calls before final scoring, not after. References are most useful when they can still change the outcome.
They assign a single decision-owner who is accountable for the recommendation and has the authority to make it. Vendor selections that require unanimous agreement almost always end in compromise.
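The must-have/nice-to-have discipline in the steps above amounts to a simple scoring rule: gate first, score second. As an illustrative sketch (the criteria names, weights, and ratings here are hypothetical, not a recommended rubric):

```python
# Illustrative vendor scoring with a must-have gate: a vendor that fails
# any must-have is excluded outright, no matter how well it scores on
# nice-to-haves. Criteria, weights, and ratings are hypothetical.

MUST_HAVES = {"native_erp_connector", "go_live_within_deadline"}
NICE_TO_HAVE_WEIGHTS = {"mobile_capture": 3, "analytics": 2, "card_program": 1}

def score_vendor(meets, ratings):
    """Return a weighted score, or None if any must-have is unmet."""
    if not MUST_HAVES <= meets:  # gate first, score second
        return None
    return sum(NICE_TO_HAVE_WEIGHTS[c] * ratings.get(c, 0)
               for c in NICE_TO_HAVE_WEIGHTS)

# A vendor with a dazzling demo but no native ERP connector is out:
flashy = score_vendor({"go_live_within_deadline"},
                      {"mobile_capture": 5, "analytics": 5, "card_program": 5})
solid = score_vendor({"native_erp_connector", "go_live_within_deadline"},
                     {"mobile_capture": 4, "analytics": 3})
print(flashy)  # None: excluded at the gate
print(solid)   # 3*4 + 2*3 + 1*0 = 18
```

The design choice worth copying is that the gate returns an exclusion, not a low score: a missing must-have can't be averaged away by strong nice-to-have ratings, which is exactly the failure mode the scoring discipline exists to prevent.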
The Final Dynamic Most Evaluations Miss
The vendors who win in a structured shortlisting process are usually the ones who made the evaluation process itself easier: who were transparent about limitations, who provided references without friction, who answered direct questions directly.
That’s not a coincidence. The same organizational qualities that make evaluation easy tend to show up again in implementation, in support, and in the long-term partnership. A vendor who controls information during the sales process will control information when something goes wrong post-launch.
How a vendor behaves when you’re evaluating them is the most accurate preview available of how they’ll behave when you’re their customer.