
Why outcome-based consulting works better than time-and-materials for financial services

Written by Paul Sanders | Apr 21, 2026 2:58:13 PM

Outcome-based pricing changes the conversation entirely. Consulting stops being a cost line that finance grumbles about every month, and starts being something with a fixed number on one side and an actual result on the other.

In Short

For financial services firms running cloud, identity or security programmes, outcome-based consulting is a better fit than time-and-materials (T&M) because it aligns incentives, caps budget risk, and forces both sides to agree what "done" means before anyone starts. This post covers why T&M quietly works against you in regulated environments, how to scope outcomes properly, how to measure real ROI, and why a boutique consultancy with an associate network can deliver serious programmes without the Big Four overhead.

T&M vs outcome-based at a glance

| | Time-and-materials (T&M) | Outcome-based |
|---|---|---|
| What you're paying for | Consultant hours | A defined result |
| Budget certainty | Low. Overruns are common | High. Price is fixed upfront |
| Who carries delivery risk | Client | Consultancy |
| Incentive to finish early | None. Finishing early reduces revenue | Strong. Margin depends on efficiency |
| Scope management | Scope creep benefits the consultancy | Scope is locked before work starts |
| Best suited for | Genuinely open-ended discovery work | Known outcomes, regulated environments |


Why traditional time-and-materials models fall short in financial services

T&M pays the consultancy more when things go slower, which is a problem when you're trying to deliver a regulated transformation on a fixed board budget.

If you're running tech or security in a financial services firm, you've probably sat through a T&M pitch that felt a bit... off. The numbers never quite add up. The scope creeps. The timeline drifts. And the longer it all takes, the better it is for the consultancy billing you.

That's the bit I can't get past. T&M literally pays the consultancy more when things go slower. I'm not saying every firm abuses it, but the incentive is baked in from day one, and in my experience people follow the incentives they're given (even when they don't realise it).

For a CTO or CISO in financial services right now, this is a proper problem. You're being asked to move to the cloud securely, get zero trust actually working, keep the FCA happy, and be "AI ready" on top of it. Finance wants tight cost control and a clean ROI story. Good luck delivering all of that on an open-ended hourly engagement where nobody's genuinely on the hook for the outcome.

That's the core issue. You're paying for hours, not results. There's no reason for the consultancy to sharpen the scope or push back on nice-to-haves, because that would just reduce their own revenue. What should be a strategic programme quietly turns into a very expensive timesheet.

Outcome-based is the opposite of that. Agree what "done" looks like up front, price against that, and let the consultancy carry some of the commercial risk for delivering it. If I'm honest, it's also a much harder conversation to have at the start, because both sides actually have to pin down what success is. But once you've done that, everyone's pointed in the same direction for the first time.

How outcome-based consulting actually works

In an outcome-based engagement, you're buying a specific result at a fixed price, not consultant hours, with the deliverable and success criteria agreed before any work starts.

You're not buying consultant hours, you're buying a specific result. A secure M365 tenancy that actually meets FCA requirements. An Azure landing zone with PIM properly set up. A JML process that doesn't fall over the first time someone changes department. Whatever it is, you agree up front what "done" looks like, what it costs, and roughly when you'll have it.

That changes the conversation internally as well. Consulting stops being this vague line item that finance side-eyes every month, and becomes a proper investment with a number next to it. For a tech leader trying to keep a board happy, that's a much easier business case to put together. You know what you're getting, when you'll get it, and what it's going to cost. No creeping timesheets. No awkward "we're going to need another sprint" chat in month four.

The bit that genuinely surprises people though is how much faster things tend to move. When both sides are pointed at the same outcome and the same deadline, decisions happen quicker, blockers get cleared, and people stop relitigating the same conversation every fortnight. Which matters a lot in fintech, because if your platform isn't keeping up with the business, the business will just go around you (and usually does).

The relationship ends up being about "did we deliver the thing" rather than "did we log enough hours this month". If I'm honest, that's also a harder model to sell into, because it forces both sides to agree what success actually is before anyone starts. But once you've done that bit properly, it tends to be the last difficult conversation you have.

How to define success metrics that actually mean something

Good success metrics translate a business objective into a specific, verifiable technology outcome, so you can tell at the end whether the engagement delivered or didn't.

Get this bit wrong and you're straight back into "are we there yet" territory, which is exactly what you were trying to get away from. For a financial services org, that means taking a business objective and actually translating it into something measurable on the tech side.

Take "we want people to work securely from anywhere" as an example. Perfectly fine as a business goal. Useless as a success metric. What you actually need is something like: conditional access policies deployed across the estate, MFA rolled out to every persona (not just the easy ones), and DLP configured to protect client data without making people want to throw their laptop out of a window. Now you've got something you can check against.
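One way to keep yourself honest is to write the criteria down as binary checks before work starts. Here's a minimal sketch of that idea in Python; every name, field, and threshold is a hypothetical illustration, not a real engagement's scope or a real API.

```python
# Sketch: turning "work securely from anywhere" into checkable
# success criteria. All names and figures are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SuccessCriterion:
    name: str
    check: Callable[[dict], bool]  # evaluates a snapshot of estate data

CRITERIA = [
    SuccessCriterion(
        "Conditional access deployed estate-wide",
        lambda s: s["ca_policies_deployed"] >= s["ca_policies_scoped"],
    ),
    SuccessCriterion(
        "MFA enabled for every persona, not just the easy ones",
        lambda s: s["personas_with_mfa"] == s["personas_total"],
    ),
    SuccessCriterion(
        "DLP active on every location holding client data",
        lambda s: s["dlp_covered_locations"] == s["client_data_locations"],
    ),
]

def engagement_done(snapshot: dict) -> bool:
    """Outcome is binary: every agreed criterion passes, or it's not done."""
    return all(c.check(snapshot) for c in CRITERIA)

# Hypothetical end-of-engagement snapshot
snapshot = {
    "ca_policies_deployed": 12, "ca_policies_scoped": 12,
    "personas_with_mfa": 9, "personas_total": 9,
    "dlp_covered_locations": 4, "client_data_locations": 4,
}
print(engagement_done(snapshot))
```

The point isn't the code; it's that each criterion either passes or it doesn't, so "are we done" stops being a matter of opinion.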

The point of decent metrics is they connect the tech work to something the business actually cares about. Nobody on the board is counting Azure resources. They care whether the capability is there. Has onboarding dropped from three days to three hours? Are the compliance controls automated and auditable, or still held together with a SharePoint spreadsheet and a prayer? Those are the questions that come up in front of the exec team when they're asking where the money went.

Forcing yourself to define success up front also stops the project quietly drifting into a technical perfection exercise. You'd be amazed how often a transformation ends up with a lovely platform that doesn't do the thing the business originally asked for. If the outcome isn't written down, that drift is almost guaranteed.

This matters even more when you're working with independent consultants and an associate network rather than a big firm. Everyone needs to know what they're building towards from day one, because the structure is leaner and there's no room for vague. In my experience, that's actually a feature, not a bug. Most of the big-firm projects I've seen go sideways went sideways because nobody had agreed what "done" meant before they started.

How fixed-price engagements actually reduce risk

When the engagement is scoped properly, fixed-price reduces risk for both the client and the consultancy compared to T&M, because it caps budget exposure and makes accountability unambiguous.

One question that comes up almost every time with fixed-price, outcome-based engagements is around risk. What if something changes? What if the requirements shift halfway through? Who actually carries the cost? It's a fair question, and the honest answer is that when the engagement is structured properly, fixed-price reduces risk for both sides compared to T&M, not the other way round.

For the client, the obvious one is that budget overrun just isn't a thing any more. You know what the engagement costs on day one, and the consulting team has a direct incentive to deliver efficiently (because their margin depends on it). That's a very different conversation to the T&M version, which is usually some variant of "we're forecasting another 40 days".

For a financial services org putting in something critical like an identity framework or a secure cloud platform, this matters. With outcome-based, there's nowhere to hide behind timesheets or change requests. You either delivered the thing you agreed to, or you didn't. Which tends to focus everyone on fixing the problem rather than writing a very detailed document about the problem.

The trick is doing the scoping properly up front. What's in, what's out, what the edge cases look like, what happens when something changes. This is where consultants who've done this sort of engagement a few times are actually worth the money. They've hit the edge cases before. They know where the regulatory stuff catches people out in financial services. They can spot where the complexity is likely to appear and plan for it, rather than discovering it on your time.

Which is the whole point of working with a trusted network of associates rather than consultants figuring it out on your budget. It's not that fixed-price removes risk by itself, it doesn't. The risk goes down because you've got people in the room who've already made the mistakes somewhere else. That's the bit people undervalue.

Measuring real ROI beyond project completion

Real ROI on a consulting engagement is measured by what the work enables afterwards (faster rollouts, fewer incidents, lower run cost) not just whether the project finished on time and on budget.

On time and on budget is the minimum. The real test is whether it keeps paying back long after the consultants have moved on. For a tech leader in financial services, that means ROI has to mean something in business terms. Faster time-to-market for new products. Fewer security incidents. Lower run cost. A tech environment that doesn't actively repel the engineers you're trying to hire.

Outcome-based engagements are genuinely easier to measure on this, because what you're delivering tends to tie directly to a business capability. If you've rolled out PIM across Azure properly, you can actually put a number on the drop in standing admin privileges (and the drop in risk that comes with it). If you've automated JML through persona-led access management, you can count the IT hours you got back and the access-related audit findings you've stopped having to deal with.
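The JML arithmetic really is as simple as it sounds. A back-of-envelope version, where every figure is an assumption for illustration rather than client data:

```python
# Illustrative ROI sketch for an automated JML process.
# All inputs are assumed figures, not real client numbers.

jml_events_per_year = 600        # assumed joiner/mover/leaver events annually
hours_per_event_manual = 4.0     # assumed IT effort per event before automation
hours_per_event_automated = 0.5  # assumed effort per event after
blended_hourly_cost = 45.0       # assumed internal IT cost per hour (GBP)

hours_saved = jml_events_per_year * (
    hours_per_event_manual - hours_per_event_automated
)
annual_saving = hours_saved * blended_hourly_cost

print(f"IT hours recovered per year: {hours_saved:.0f}")   # 2100
print(f"Annual saving: £{annual_saving:,.0f}")             # £94,500
```

Swap in your own event counts and rates; the structure of the calculation is the part that matters, because it's what lets you put an actual number in front of finance.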

The bigger return, though, usually isn't from what the engagement delivered. It's from what the engagement quietly enables afterwards. A properly secured M365 tenancy isn't a compliance tick-box. It's the foundation that lets you go and do Purview properly. It's what makes Copilot a sensible rollout rather than a data leak story waiting to happen. It's what lets your teams actually collaborate wherever they're working from. A well-configured Azure landing zone is the same: it's the thing that makes experimenting with new services a normal Tuesday rather than a three-month governance conversation.

That's where the real return on this kind of work lives, and if I'm honest, it's also the part that's hardest to put on a business case on day one. You're partly measuring things that now don't happen, which is never a fun slide to present. But in every transformation I've seen actually pay off, the thing that made the rest possible was the platform and foundation work. That's usually the quietest bit of the programme, and the bit people try to cut scope from first. Worth remembering.

How a boutique consultancy delivers complex programmes without the Big Four bench

Our boutique consultancy (Yobah) delivers serious, multi-workstream programmes by drawing on a trusted associate network of specialists, rather than a permanent bench, which keeps overhead low and expertise high.

The question I hear most often from tech leaders in financial services is, basically: how does a boutique consultancy deliver a complex, multi-workstream programme without the bench of a Big Four? It's a fair question. The honest answer is that it makes you rethink what "bench" really means, and whether the traditional model is doing you any favours in the first place.

The associate network we work with at Yobah has been built up over years of real engagements across identity, security architecture, modern workplace and change. These aren't CVs that a recruiter sent over last week. They're specialists we've worked alongside on actual client programmes, who I've seen do the work properly, and who I'd happily put in front of a CTO or CISO tomorrow. When a piece of work comes in, we put together the right team for what the client actually needs. Not a team padded out to hit a margin number, and not one with the "details to be figured out" once the contract's signed.

That gives you a few things that matter for a fintech. You get specialists who have done this exact work before, whether that's conditional access for a financial services context, Azure landing zones built to FCA standards, or persona-led access for an org that's properly complicated (and they usually are). You skip the overhead that's baked into a Big Four rate card, where you're partly paying for layers of management and internal process that don't touch your programme. And because we engage on outcomes rather than T&M, everyone's pointed at the same thing from day one.

Honestly, you don't need a 200-person logo on the engagement letter to deliver a serious programme. Most of the time, you need a handful of the right people pointed at a clearly defined outcome, with the experience to handle the complexity when it appears (because it always does). And the economics only really work when nobody's watching timesheets: you get a secure, compliant foundation you can build on, the associates get to do work that actually uses their expertise, and nobody's relationship is mediated through someone else's utilisation target.

That's it really. A trusted network of the best independents in the industry, a clear outcome, and a model where the incentives aren't fighting each other. The size of the logo on the slide deck doesn't come into it.

Frequently asked questions

Is outcome-based consulting more expensive than T&M?

Usually no, once you've accounted for scope creep and overruns on the T&M side. The headline number on an outcome-based engagement can look higher because the consultancy is carrying the delivery risk and has priced that in. But you know the ceiling on day one, which you rarely do on T&M.
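To make that concrete, here's the comparison with some assumed numbers; the rates, day counts, and overrun factor are all illustrative, not market data:

```python
# Illustrative comparison: fixed-price quote vs a T&M proposal that
# overruns. Every figure here is an assumption for the sake of example.

fixed_price = 180_000.0    # assumed outcome-based quote (delivery risk priced in)

tm_day_rate = 1_200.0      # assumed blended T&M day rate (GBP)
tm_days_quoted = 120       # what the T&M proposal estimates
tm_overrun_factor = 1.35   # assumed 35% overrun from creep and drift

tm_quoted = tm_day_rate * tm_days_quoted
tm_actual = tm_quoted * tm_overrun_factor

print(f"T&M headline estimate: £{tm_quoted:,.0f}")  # looks cheaper on paper
print(f"T&M after overrun:     £{tm_actual:,.0f}")
print(f"Fixed-price ceiling:   £{fixed_price:,.0f}")
```

Under these assumptions the T&M headline undercuts the fixed price, but the delivered cost overshoots it, and crucially, only one of the two numbers was knowable on day one.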

What happens if requirements change halfway through an outcome-based engagement?

You handle it the same way a serious build project would, through a change control process. The difference is that scope changes are the exception, not the default, and they're priced and agreed explicitly rather than disappearing into next month's timesheet.

Can outcome-based pricing work for discovery or strategy phases?

Not always. If the work is genuinely open-ended (early-stage discovery, strategy definition, or architecture assessment with unknown findings) a small time-boxed T&M or fixed-fee discovery piece usually makes more sense. Once the outcome is clear enough to scope, move to outcome-based for delivery.

Does outcome-based consulting work for regulated financial services firms?

Yes, and arguably better than T&M. Regulated firms need budget certainty, clear accountability, and auditable delivery, all of which outcome-based provides. The key is working with consultants who understand the regulatory context (FCA, PRA, operational resilience) so the scoping accounts for it up front.

How do you stop scope creep in a fixed-price engagement?

By scoping properly at the start and being honest about what's in and what's out. Good consultants will push back on vague requirements before signing, because they're the ones carrying the delivery risk. Bad ones will sign anything and then argue about it later. Choose accordingly.

Can a boutique consultancy really deliver a programme at the scale of a Big Four engagement?

For most enterprise programmes, yes. You rarely need a 200-person bench. You need a handful of senior specialists who've done it before, an associate network to scale into when you need specific expertise, and a model where the incentives aren't fighting each other.