
Marketing attribution is an essential tool in any marketing stack. But it gets blamed for problems it was never designed to solve. When teams treat multi-touch attribution as a causality engine instead of a credit distribution system, they end up with dashboards full of confident numbers that mislead more than they inform. That is a people and process problem, not a tool problem.
We pulled from 24 episodes of Humans of Martech, featuring practitioners like Ron Jacobson (Rockerbox), Nadia Davis (GOVTECH attribution specialist), Constantine Yurevich (visit scoring pioneer), Kacie Jenkins (Sendoso SVP of Marketing), and Sundar Swaminathan (former Uber Growth Marketing Data Science Lead), to build the deepest guide to attribution you will find.
Attribution, MTA, and measurement are not the same thing
Most attribution debates start with a vocabulary problem. Practitioners use “attribution,” “multi-touch attribution,” and “measurement” interchangeably, then argue past each other because they are describing different things. Before going further into which methods work and which fall short, it helps to be precise about what these terms actually mean.
Barbara Galiza, a marketing measurement consultant, flagged this confusion directly. When she hears marketers talking about attribution, she notices they are almost always thinking about click-based, touch-based models. Her view is that attribution is much broader than that. “To me, attribution is about understanding the results of campaigns. And MTA and click-based, touch-based models is one way that you can attribute. It’s a little part of that. It’s not the whole picture.” Holdout tests, incrementality experiments, and media mix modeling are all attribution methods too. They just work differently and answer different questions.
“When people think about attribution, they’re thinking about click-based, touch-based models. To me, attribution is a lot bigger than that. MTA is one way that you can attribute. It’s a little part of that. It’s not the whole picture.”
Barbara Galiza, The inconvenient truths of attribution no one wants to admit
Barbara also draws a line between attribution and measurement that most teams blur. Attribution focuses specifically on tying marketing activity to business results: leads, pipeline, revenue. Measurement is broader. It covers engagement, messaging resonance, channel effectiveness, and customer behavior, not all of which connect directly to a conversion. Attribution is a subset of measurement. Conflating the two leads teams to either over-rely on attribution when other measurement approaches would be more useful, or dismiss attribution entirely when what they actually need is a different attribution method.
MTA specifically is a click-based behavioral tracking system. It records which digital touchpoints appeared before a conversion and distributes credit across them. Barbara points out that this makes it genuinely well-suited for bottom-funnel, click-driven channels like paid search, where high-intent users click an ad and convert. The problem is MTA tends to over-credit search because that is where the conversion gets logged, while missing the awareness-building that happened upstream: the video ad, the podcast mention, the piece of content someone read 3 weeks earlier. It can tell you what happened after the click. It cannot tell you what caused the click.
The working definitions
- Measurement — the full picture. Everything that tells you how marketing is performing: engagement, reach, brand lift, conversion rates, channel effectiveness, and business outcomes. Attribution is one component of measurement.
- Attribution — the subset of measurement focused specifically on connecting marketing activity to business results like leads, pipeline, and revenue. Methods include MTA, MMM, incrementality testing, and holdout experiments. Choosing the right method depends on what question you are trying to answer.
- Multi-touch attribution (MTA) — one specific attribution method. It tracks observable digital touchpoints (clicks, page views, form fills) and distributes credit across them in the path to conversion. It works well for click-based, bottom-funnel channels. It does not prove causality and misses channels that do not generate clicks. Note: MTA is not the same as multi-channel attribution, a distinction Attribution App’s research highlights. Multi-channel attribution measures broad channel buckets (paid, organic, email). MTA can track any individual interaction — a specific ad, a direct mail piece, an offline event — and assign proportional credit. The granularity is meaningfully different.
- Media mix modeling (MMM) — an attribution method that estimates channel impact using aggregate spend and outcome data. Useful for long-term budget planning and measuring channels that MTA cannot see (TV, radio, out-of-home). Requires 2 or more years of data to produce reliable results.
- Incrementality testing — an attribution method that measures causality directly by comparing audiences who received marketing against those who did not. The most reliable way to answer whether a campaign actually drove a conversion, not just preceded one.
Everything in this guide operates within that framework. When practitioners criticize MTA, they are usually criticizing how it gets used, asking it to answer questions about causality or offline influence that it was never designed to handle. When they defend it, they are describing what it actually does well: fast, real-time visibility into which channels and content show up in conversion paths. Both things are true, and understanding the vocabulary is the starting point for using any of these tools well.
What multi-touch attribution was built to do and where it gets misused
Multi-touch attribution does exactly what it says: it tracks which touchpoints appeared before a conversion and distributes credit across them. That is a genuinely useful capability. The problem is not the tool. It is what teams ask it to do. MTA is a credit distribution system. It is not a causality engine. When it gets treated as one, the gap between correlation and actual cause is where confidence in the data collapses.
Rajeev Nair, who leads measurement strategy using causal AI frameworks, described how this misuse has become the default. Attribution has been hijacked by tracking. What started as a way to understand which actions led to a customer buying something has become a digital breadcrumb trail used to assign credit based on who touched what ad and when, regardless of whether any of it actually changed the buyer’s mind. Causality is where that expectation breaks down. Attribution is supposed to tell you what caused an outcome, not what appeared next to it, not what someone happened to click on right before.
“Attribution is supposed to tell you what caused an outcome. Not what appeared next to it. Not what someone happened to click on right before. Actual cause and effect. Instead, we get dashboards full of correlation dressed up as insight.”
Rajeev Nair, Causal AI and a unified measurement framework
Ron, who built the MTA platform at Rockerbox, points to a specific implementation choice that created long-running problems. Early companies like Visual IQ, Convertro, MarketShare, and Adometry built their models on the promise of tracking every digital impression via third-party cookies. He calls this the “original sin” of MTA, not because the methodology was wrong but because the implementation created blind spots that were too easy to ignore. There is no third-party cookie for linear TV, direct mail, or radio. Building an MTA system on that foundation meant some of the most influential marketing channels were invisible by design.
Pranav Piyush, CEO of Paramark, illustrated the causality gap with a simple example: someone searches “Humans of Martech” on Google, clicks the organic link, and subscribes. Your MTA tool credits organic SEO. But what actually got them to type that search query in the first place? A friend’s recommendation? A LinkedIn post? A podcast mention? The MTA model has no idea, and it does not care. It assigns credit to the last observable touchpoint and calls it a day.
“I ask people, somebody clicked on a link on Google, did that click cause the downstream conversion? And in some cases, maybe, in some cases, maybe not. Unless and until you recognize and acknowledge the fact that you don’t know what got me to put that term in the search box.”
Pranav Piyush, Why multi-touch attribution is broken
Ron described 3 common situations among clients arriving at Rockerbox: they are using platform-reported numbers, relying on Google Analytics, or they have tried and failed with other measurement methods. Rarely does anyone show up with MTA already running. When they do, they are often missing critical data from platforms like Pinterest, TikTok, or offline channels. The model looks complete on the dashboard but captures a fraction of the actual buyer journey.
The downstream consequence is what Attribution App calls attribution anarchy: every channel platform reports its own numbers, each claiming 100% credit for the same conversion, and nobody can answer the CFO’s actual question: which of these budgets is working? This is the real problem MTA was built to solve. A B2B deal that closes after a sales call was actually shaped by a trade show 6 months earlier, 3 nurture emails, a webinar, and a competitor comparison page. Last-touch gives the sales call full credit. First-touch gives the trade show full credit. Neither shows how those pieces worked together, and neither can tell you which ones to cut without killing pipeline.
- MTA tracks observable digital touchpoints: clicks, form fills, page views. What it cannot see (TV, radio, offline conversations, peer referrals) is outside its scope. Knowing that boundary is the difference between using it well and blaming it for gaps it was never built to close.
- Platform-reported attribution stacks credit across every channel that touched a conversion. Deduplication requires pulling data out of individual platforms and running your own analysis.
- MTA distributes credit across touchpoints that preceded a conversion. Whether those touchpoints caused the conversion is a separate question, one that requires incrementality testing to answer.
- Brand awareness, word of mouth, and offline influence shape buying decisions without leaving digital fingerprints. A healthy measurement stack accounts for these through MMM, self-reported attribution, and leading indicators like branded search volume, not by expecting MTA to capture them.
Choosing the right multi-touch attribution model for your sales cycle
No single MTA model works for every business. The right model depends on your sales cycle length, the number of decision-makers involved, and the specific question you are trying to answer at that moment. Practitioners who lock into one MTA model without understanding its trade-offs waste months building the wrong thing.
Nadia ran attribution in the GOVTECH market, where sales cycles drag on for 2 years and stretch across a dozen decision stages. She dealt with constant data decay, elected officials cycling in and out, and databases bloated with mismatched or obsolete records. Her solution was to swap models based on the goal. When the objective was market expansion, she turned to First Touch to identify which channels pulled in new contacts from the right segment. When the goal shifted to understanding deal velocity, she switched to a U-shaped model that weighted the first and last touches while distributing remaining credit across the middle.
“The success of any ABM or demand gen tactic starts and stops with marketing operations. If you don’t have that excellence in how you bring things together, everything else is secondary.”
Nadia Davis, How to decide if attribution data is good enough
At Uber, Sundar took a different approach. Rather than chasing the perfect model, his team anchored on consistency. They used last-click attribution as a baseline, knowing full well it had limitations, because its consistency provided comparable data over time. The framework followed the MECE principle (Mutually Exclusive, Collectively Exhaustive), ensuring every conversion got assigned exactly once and no channel got double-credited.
Steffen Hedebrandt, co-founder of Dreamdata, gave customers the ability to switch MTA models on the fly during analysis. The biggest revelation for most users was comparing models side by side. For paid channels, they could see 5 different MTA model views of the same data. This exposed how dramatically conclusions shifted depending on the model, which is exactly why locking into a single model without understanding its trade-offs is dangerous.
What each model trades off
| Model | Best for | Blind spot |
|---|---|---|
| First Touch | Understanding which channels drive net-new awareness | Ignores everything that happens after initial contact |
| Last Touch | Consistent baseline for comparing channels over time | Credits the closer, not the creator of demand |
| U-Shaped | Long sales cycles with clear entry and conversion points | Underweights the middle of the journey |
| W-Shaped | Adding weight to opportunity creation alongside first/last | Requires clean CRM data at every stage |
| Linear | Fair credit distribution when no touchpoint dominates | Treats a banner impression the same as a demo request |
| Markov Chain | Identifying which touches consistently push accounts forward | Needs large datasets to produce reliable results |
“Attribution is not an exact science but a directional tool. Is the juice worth the squeeze?”
Sara McNamara, Pathfinding via Attribution
One model the table understates is linear. Linear attribution is technically a multi-touch model and it looks rigorous because it distributes credit across every touch rather than collapsing it onto 1 or 2 endpoints. But as Attribution App puts it, linear creates an illusion of completeness while flattening the behavioral signal. A banner impression and a high-intent product demo receive identical credit. That is mathematically fair but analytically useless. Linear is a reasonable starting point for teams with no model at all, but it should not be mistaken for sophistication.
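To make those trade-offs concrete, here is a minimal sketch of how four of the models in the table split credit over the same journey. This is illustrative only: the channel names are hypothetical, and the 40/40/20 U-shaped weights are common defaults rather than a standard.

```python
def distribute_credit(path, model="linear"):
    """Split 100% of conversion credit across an ordered touchpoint path."""
    n = len(path)
    credit = {ch: 0.0 for ch in path}
    if n == 1 or model == "first_touch":
        credit[path[0]] = 1.0
    elif model == "last_touch":
        credit[path[-1]] = 1.0
    elif model == "linear":
        for ch in path:
            credit[ch] += 1.0 / n  # every touch weighted identically
    elif model == "u_shaped":
        if n == 2:
            credit[path[0]], credit[path[-1]] = 0.5, 0.5
        else:
            credit[path[0]] += 0.4   # first touch: 40%
            credit[path[-1]] += 0.4  # last touch: 40%
            for ch in path[1:-1]:    # middle splits the remaining 20%
                credit[ch] += 0.2 / (n - 2)
    return credit

journey = ["podcast", "organic_search", "nurture_email", "paid_search"]
for m in ("first_touch", "last_touch", "linear", "u_shaped"):
    print(f"{m:>12}: {distribute_credit(journey, m)}")
```

Running it on one four-touch journey produces four different verdicts about the same data, which is Steffen's side-by-side comparison in miniature.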
Nadia also worked with chain-based MTA models built on Markov chains in account-based marketing contexts. These models revealed which touches consistently pushed accounts forward, letting her cut spend on channels that only created the illusion of momentum. The trade-off was clear: Markov models need large datasets to produce reliable results, and they still cannot tell you with certainty whether a campaign truly caused a conversion.
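The mechanic behind those chain-based models is the removal effect: estimate how often journeys convert, then delete one channel from the chain and measure how much conversion probability disappears. Below is a minimal first-order sketch under simplified assumptions (tiny hypothetical journeys, value iteration for absorption probabilities); production implementations handle far larger state spaces and need the big datasets Nadia mentions.

```python
from collections import defaultdict

def transition_probs(journeys):
    """First-order transition probabilities from observed journeys.
    Each journey is (list of channels, converted?); START, CONV, and
    NULL are added as the entry state and absorbing end states."""
    counts = defaultdict(lambda: defaultdict(int))
    for touches, converted in journeys:
        states = ["START"] + touches + ["CONV" if converted else "NULL"]
        for a, b in zip(states, states[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def p_conversion(probs, removed=None, iters=200):
    """P(reaching CONV from START); a removed channel routes to NULL."""
    p = defaultdict(float)
    p["CONV"] = 1.0
    for _ in range(iters):  # value iteration to absorption
        for state, nxt in probs.items():
            if state != removed:
                p[state] = sum(w * (0.0 if b == removed else p[b])
                               for b, w in nxt.items())
    return p["START"]

journeys = [(["podcast", "paid_search"], True),
            (["paid_search"], True),
            (["organic", "nurture_email"], False),
            (["podcast"], False)]
probs = transition_probs(journeys)
base = p_conversion(probs)
for ch in ("podcast", "paid_search", "organic", "nurture_email"):
    removal_effect = 1 - p_conversion(probs, removed=ch) / base
    print(f"{ch:>14}: removal effect {removal_effect:.2f}")
```

A high removal effect means the channel sits on paths that actually reach conversion; a near-zero effect flags the illusion-of-momentum channels Nadia cut.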
One honest question before investing in any of these models: is MTA complexity warranted at your current scale? Attribution App’s research points to a rough readiness threshold: the methodology tends to pay off when you are running ads on multiple digital networks simultaneously, connecting offline channels to online conversions, or experiencing duplicate attribution claims across platforms. Below roughly $250K in B2C marketing spend or $500K in B2B, a well-configured last-touch or first-touch model may serve better than a complex MTA setup producing noisy results from insufficient data. The model is the easy part. Clean, unified data across channels is the hard part.
How privacy regulations changed the attribution playbook
The decline of third-party cookies forced a fundamental rethink of how attribution data gets collected and stitched together. Teams that relied on cross-site tracking for multi-touch attribution are rebuilding their measurement infrastructure, and the replacement architecture looks different. MTA itself is not going away. The specific implementation built on third-party cookies is.
Ron described how companies like Visual IQ, Convertro, and Adometry built their attribution systems on cookie-based tracking. When Apple, Firefox, and eventually Chrome started restricting third-party cookies, those vendors lost the data layer their models depended on. Most of them disappeared. But the lesson Ron drew from this was not that MTA is dead. It is that MTA built on borrowed tracking infrastructure was always fragile. The teams rebuilding on first-party data and direct platform integrations are doing it right the second time.
Barbara remained cautious about vendors claiming to solve the cookie problem through first-party data and probabilistic methods. Validation is still the major hurdle. While companies like Rockerbox leverage platform partnerships and statistical modeling, Barbara pointed out that proving these methods actually produce accurate results remains an open question. She advocated for media mix modeling as an alternative that estimates channel impact without requiring user-level tracking.
“The demise of third-party cookies has exposed the flaws in traditional MTA models, but it also presents a chance to rebuild attribution from the ground up. By combining first-party data, direct platform partnerships, and statistical modeling, marketers can create a more accurate, channel-specific approach.”
Ron Jacobson, Why multi-touch attribution excels in credit distribution but fails in causality
Siobhan Solberg, a data privacy strategist, took the privacy argument further, suggesting a shift in mindset. Instead of aiming for perfect accuracy, marketers should focus on understanding general trends and flows. Proportional data can still inform decisions without violating privacy. She acknowledged that achieving precise MTA data is already difficult given Safari and iPhone privacy settings that restrict data collection. The question is whether your measurement ambitions require individual-level tracking or whether directional signals are good enough.
“I kind of need to track as much data as I can, because I don’t have a multi touch attribution solution set up yet, and I might need this data to stitch that customer journey together. What are your thoughts on how multi touch attribution competes with data minimization and data protection?”
Siobhan Solberg, A guide to ethical marketing with data minimization
- Audit your current attribution stack for third-party cookie dependencies. Anything built on cross-site tracking is already producing degraded data.
- Invest in first-party data collection through server-side tracking, CDPs, and direct platform integrations.
- Recognize that complete user-level journey stitching is harder at scale than it used to be. Build a measurement stack that layers MTA alongside aggregate methods like MMM and incrementality testing rather than asking any single tool to do everything.
- Align your measurement ambitions with your actual privacy obligations. Tracking less data does not mean understanding less; it means understanding differently.
The dark funnel and what attribution will never measure
A significant portion of the buyer journey happens in places your attribution tools cannot see: Slack channels, group chats, podcast conversations, hallway discussions at conferences, and peer-to-peer recommendations. This “dark funnel” represents real demand generation that never shows up in a dashboard, and pretending it does not exist warps your entire measurement strategy.
Nadia traced the term back to 2012 and a journalist at The Atlantic, not a marketer. “Dark social” was coined to describe all the places people go for advice that tracking tools cannot reach. Despite what some B2B influencers claim, dark social is not a new concept or a marketing innovation. It is a label for something that has always existed: word of mouth.
“I looked up the term because I’m like, who coined this? It was 2012 and it was the Atlantic. It never came from a marketer. How can this not come from a marketer? It was a journalist who was trying to write an article on all the things where people go for advice.”
Nadia Davis, How to decide if attribution data is good enough
Rutger Katz, co-founder of Bynder, argued that MTA does not truly show what caused a customer to convert. Instead, he pointed to the importance of listening directly to customers to understand what actually influenced their decision. Much of what drives conversions happens in untraceable peer-to-peer networks, industry communities, and informal discussions between colleagues. Customers may report that they heard about a product through Google, but that Google search was triggered by something they heard in a community that no tracking pixel will ever capture.
Tara Robertson, a B2B marketing leader, pushed back on the notion that marketers should reorient their entire strategy around the dark social concept. She acknowledged the historical relevance of word-of-mouth marketing but objected to the focus on tracking every possible interaction at the expense of authentic engagement. If a marketer captures attention and prompts a Google search that ends in an ad click, that does not necessarily mean SEM drove the conversion. The real driver was the attention capture that happened elsewhere.
“While attribution software may claim comprehensive metrics, Tara’s experience contradicts that assertion. What really got her attention was the often misguided focus on tracking every possible interaction, to the detriment of authentic engagement.”
Tara Robertson, Cost-Effective Growth and Creative Attention in B2B
The practical response is to stop trying to attribute what cannot be attributed and start building measurement systems that account for the gap. Self-reported attribution (“how did you hear about us?”) captures some dark funnel signal. Directional indicators like branded search volume, direct traffic trends, and community engagement metrics fill in more. The goal is triangulation, not precision.
Incrementality testing: proving marketing actually caused the sale
Incrementality testing asks the only question that matters for budget decisions: would this sale have happened without marketing? Unlike MTA models that distribute credit after the fact, incrementality tests measure the actual causal impact of your spend by comparing audiences who received marketing against audiences who did not.
Ron framed incrementality as a baseline question. Without a concept of baseline performance, everything looks like it was driven by marketing. The core test is straightforward: hold out a percentage of your audience from a campaign, then compare conversion rates between the exposed group and the holdout. The difference is your incremental lift.
“Would that dollar revenue have happened without marketing is kind of the way I think about it. We need some sort of a baseline that deserves to get some credit for revenue, credit for conversions. And if you don’t have a concept of a baseline, it’s as if everything was driven by marketing.”
Ron Jacobson, Why multi-touch attribution excels in credit distribution but fails in causality
The biggest resistance to holdout tests comes from fear of lost revenue. Marketers worry that the 10% of their audience held out from a campaign represents missed conversions. Ron addressed this directly: by the time the test concludes, you already know whether the audience is incremental or not. If they convert at the same rate without seeing the ads, you were paying for conversions that would have happened anyway. Delayed holdout tests, where the held-out group eventually receives the campaign, can reduce this anxiety while still producing clean causal data.
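The readout arithmetic is straightforward. Here is a minimal sketch, assuming you can count audience sizes and conversions in each group; the numbers below are invented to illustrate a 90/10 split, and the two-proportion z-test is one common way to check that the gap is not noise.

```python
from math import sqrt

def incremental_lift(exp_conv, exp_n, hold_conv, hold_n):
    """Compare conversion rates between exposed and holdout groups.
    Returns (absolute lift, relative lift, z-score); |z| > ~1.96
    suggests the gap is unlikely to be random at the 95% level."""
    p_exp, p_hold = exp_conv / exp_n, hold_conv / hold_n
    lift = p_exp - p_hold
    # Two-proportion z-test using the pooled conversion rate.
    p_pool = (exp_conv + hold_conv) / (exp_n + hold_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / exp_n + 1 / hold_n))
    return lift, lift / p_hold, lift / se

# Illustrative: 100k audience, 10% held out for the campaign window.
lift, rel, z = incremental_lift(2_070, 90_000, 180, 10_000)
print(f"lift {lift:.2%} absolute, {rel:.0%} relative, z = {z:.1f}")
```

If the z-score came back near zero, the exposed group converted at the holdout's rate, which is Ron's point: you were paying for conversions that would have happened anyway.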
Pranav argued that holdout testing should be standard practice, particularly for email and in-app communications where you have full control over the delivery surface. The perceived opportunity cost is almost always overblown. If holding out 5% of your audience from an email campaign causes a meaningful revenue dip, that is actually valuable information about how much that campaign is worth.
“Holdout testing should be a standard practice, particularly for email and in-app communications, where you have control over the surface.”
Pranav Piyush, Why multi-touch attribution is broken
Sundar provided a more nuanced view. He pointed out that an incrementality test and an A/B test are structurally identical: 2 groups, measured for differences. The statistical test does not know one group is getting nothing. You can also run incrementality tests on spend levels ($100 vs $200), testing the incremental value of marginal spend rather than all-or-nothing holdouts.
How to start
- Start with channels you control (email, push notifications, in-app messages) where holdouts are simple to implement.
- Hold out 5-10% of your audience. Measure conversion rate differences over a meaningful time window.
- Run spend-level incrementality tests on paid channels: compare $X spend vs $2X spend to find diminishing returns.
- Use delayed holdouts to reduce internal resistance: the held-out group receives the campaign after the measurement window closes.
- Reserve geo-testing for channels where user-level holdouts are impossible (TV, radio, billboards).
Visit scoring: a complementary approach when journey tracking breaks down
Visit scoring takes a different angle from traditional MTA. Instead of trying to connect every touchpoint in a fragmented buyer journey, it scores each website visit based on engagement behavior, then attributes that score to the traffic source immediately. Where last-click attribution captures only the final step, visit scoring registers buying signals at every session, including sessions that happen weeks before a purchase.
Constantine built this methodology starting from an honest constraint: stitching together the full customer journey across devices and sessions is often impossible. Fragmented sessions, multiple devices, and in-app browsers make the clean linear journey a useful fiction. Think about your own behavior: you see a product on Instagram’s in-app browser, switch to Safari for research, check reviews on your laptop that night, and purchase days later on a completely different network. MTA that depends on connecting those sessions will lose most of the story.
“Understanding the full customer journey is impossible. Even our best modeling barely captures fragments of reality, bound by device limitations and the chaotic way people actually browse online.”
Constantine Yurevich, Visit Scoring: an alternative to MMM and MTA
The methodology works by tracking what people actually do on your site: time invested, pages explored, product comparisons made, and content consumed. You create probabilistic models that match these engagement patterns to real sales outcomes. When you analyze those patterns at scale and optimize toward traffic sources that deliver visitors exhibiting genuine research behavior, revenue grows and ROAS improves across your marketing mix.
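A rough sketch of that pattern: train a probabilistic model on past sessions labeled by whether the visitor eventually purchased, then score new sessions the moment they end. Everything below is hypothetical (the engagement features, the training rows), and logistic regression stands in for whatever model a production system would actually use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per session: minutes on site, pages viewed, product
# comparisons, content pieces read. Labels mark whether that
# visitor eventually purchased. All values are invented.
X = np.array([[12.0, 9, 2, 3],
              [ 0.5, 1, 0, 0],
              [ 6.0, 5, 1, 1],
              [ 0.2, 1, 0, 0],
              [ 9.0, 7, 3, 2],
              [ 1.0, 2, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = eventually purchased

model = LogisticRegression().fit(X, y)

# Score a session as soon as it ends and credit its traffic source
# immediately: no cross-device journey stitching required.
session = np.array([[7.5, 6, 2, 1]])
score = model.predict_proba(session)[0, 1]
print(f"visit score {score:.2f} -> credited to this session's source")
```

The immediate scoring step is the whole trick: value registers at the session level, so a high-engagement research visit from a display campaign earns credit weeks before any purchase lands.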
Visit scoring captures value in situations where traditional MTA misses signals: users who show high engagement but purchase later, users who switch devices mid-journey, users who return through different channels before buying, and users who take longer than your attribution window to decide. Because the session is scored immediately and the traffic source credited on the spot, upper-funnel campaigns get proper credit and paid platforms receive 10x more optimization signals.
“Replace last-click myopia with visit scoring to capture the 90% of signals you’re currently throwing away. Score every session based on engagement patterns that predict future purchases, then attribute that value to its traffic source immediately.”
Constantine Yurevich, Visit Scoring: an alternative to MMM and MTA
Constantine also offered a practical shortcut for teams overwhelmed by measurement complexity: stop measuring channels you will use regardless. Your organic channels, like your blog and social presence, will happen whether you track them or not. Focus your measurement firepower exclusively on paid channels where budget decisions hang in the balance. For everything else, just do it without the tracking overhead. This cuts your analytics complexity in half while giving you actionable data for the channels that actually impact your bottom line.
Marketing mix modeling vs multi-touch attribution vs incrementality: which tool answers which question
MMM, MTA, and incrementality testing are not competing methodologies. They answer entirely different questions, and teams that pick one and expect it to do everything end up with expensive models that mislead more than they inform. The most effective measurement stacks use all 3 in combination, each calibrated against the others.
Rajeev built a unified measurement framework that treats each tool as serving a distinct purpose. Marketing mix modeling handles long-term planning and budget allocation across channels. Experiments (incrementality tests) sharpen the model’s edge with causal validation. Multi-touch attribution (MTA) manages daily execution, providing quick directional signals for operational decisions. His team uses multipliers, calibrated from MMM and experiments, to adjust MTA outputs and remove duplicate credit from platforms that all claim the same conversions.
“Treat measurement as a multi-tool, not a magic eight ball. Use marketing mix modeling for long-term planning, experiments to sharpen the model’s edge with causal validation, and attribution to manage daily execution. Each method answers a different question.”
Rajeev Nair, Causal AI and a unified measurement framework
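The multiplier mechanic itself is one line of arithmetic. Here is a minimal sketch with invented numbers; in practice each multiplier comes from your own experiments and MMM calibration, per channel, and gets refreshed over time.

```python
def calibrate(platform_reported, multipliers):
    """Scale platform-claimed conversions by experiment-derived multipliers.
    multiplier = incremental conversions measured in a holdout test
                 / conversions the platform claimed over the same period.
    Values below 1.0 strip out duplicate and non-incremental credit."""
    return {ch: round(n * multipliers.get(ch, 1.0))
            for ch, n in platform_reported.items()}

# Illustrative: platforms jointly claim 1,900 conversions in a month
# when only about 1,000 actually happened.
claimed = {"meta": 800, "google": 700, "tiktok": 400}
multipliers = {"meta": 0.55, "google": 0.70, "tiktok": 0.40}
print(calibrate(claimed, multipliers))
# {'meta': 440, 'google': 490, 'tiktok': 160} -- a deduplicated 1,090
```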
Matthew Castino, who leads marketing measurement at Canva, saw this evolution firsthand. The team invested heavily in MMM and incrementality because those methods answered high-level budget and causality questions. But eventually the limits surfaced. Aggregate models could not show how people discovered the product, where they fell out of conversion flows, or how user quality varied by channel. Those signals pointed directly back to MTA. Canva now keeps MTA in the mix alongside MMM and incrementality, treating it as a behavioral lens rather than a revenue credit system.
“Attribution went through a strange cycle in the industry. Teams trusted it, questioned it, pushed it aside, and then realized that the replacement methods did not cover everything.”
Matthew Castino, How Canva measures marketing
How they fit together
| Method | Question it answers | Time horizon | Data requirement |
|---|---|---|---|
| Marketing Mix Modeling (MMM) | How should I allocate budget across channels next quarter? | Months to years | Aggregate spend and outcome data, 2+ years ideal |
| Multi-Touch Attribution (MTA) | Which touchpoints are involved in conversion paths? | Days to weeks | User-level event data across channels |
| Incrementality Testing | Did this campaign actually cause additional conversions? | Weeks to months per test | Holdout groups or geo-splits with clean measurement |
| Visit Scoring | Which traffic sources bring visitors who show buying behavior? | Real-time per session | On-site engagement data and conversion history |
Constantine warned against simplified MMM models that only analyze platform data. They use circular logic to overvalue your highest-spend channels: a convenient self-fulfilling prophecy. And geo testing, he argued, is useful only when you suspect a channel delivers zero value. For everything else, you are paying for math with confidence intervals so wide they become meaningless.
Attribution App offers the cleanest framing for how these tools fit together: MTA operates at the tactical execution layer, giving granular, real-time, channel-level feedback on what is showing up in conversion paths. MMM operates at the strategic planning layer, trend-aware and offline-inclusive, suited to quarterly and annual budget decisions. Incrementality testing operates as the validation layer, running alongside both to confirm which campaigns are actually driving results versus riding existing demand. Each layer answers a different question. The strongest measurement programs run all 3 rather than picking 1 and expecting it to cover the others.
Where multi-touch attribution genuinely earns its place
Multi-touch attribution earns its place in the measurement stack when teams use it for what it actually does well. It is the fastest tool for surfacing which content shows up consistently in deals that close, which channel sequences move accounts forward, and how user quality varies by traffic source. Those are real questions worth answering, and no other method answers them as quickly or at MTA's session-level granularity.
Nadia has heard the “MTA is dead” declarations many times. Her take is pragmatic: the teams making that call loudest tend to be the ones who had a bad run and gave up, while larger, more disciplined teams kept making it work with structure and rigor. For Nadia, MTA is not a scoreboard assigning credit to channels. It is an analytics tool for understanding sequences. In the GOVTECH market, she identified 1 sequence that reliably moved deals forward: an in-person event conversation followed by an on-demand webinar. Prospects in that market could not attend live without signaling buying interest publicly, so on-demand let them engage anonymously while still going deep enough to trigger a meaningful follow-up.
“Attribution keeps you in business. It’s what lets you prove to executives why you need the budget, why your team exists, and why the work matters.”
Nadia Davis, How to decide if attribution data is good enough
Nadia’s updates to executives focused on scenarios rather than dashboards. She showed which sequences worked for which segment and backed them with evidence. “If you have the conversation at the event and they watch on demand, you’re in. The BDR calls, and they already feel like they know you.” When both touches happened, momentum built. When only 1 happened, it stalled. Without MTA connecting those 2 touches to deal outcomes, that insight would have stayed invisible in the data.
Ashley Faus, Head of Content at Atlassian, uses MTA specifically for content insights, not credit assignment. She draws a distinction that most attribution debates skip: there is a difference between which channel drove revenue and which content contributed to a buying decision. MTA cannot reliably answer the first question, but it can answer the second. When Ashley analyzed buying journeys, she found that tutorial articles buried in support documentation showed up consistently alongside closed deals, even when they rarely registered as a top touchpoint in standard models.
“There might have been amazing tutorial articles that were buried in a support doc somewhere that doesn’t show up very often in the MTA model, but maybe those pieces of content are in fact incremental and was the tipping point for pipeline. And without them, X amount of deals wouldn’t have closed.”
Ashley Faus, Building content that matches actual human thinking
Ashley is clear-eyed about what MTA cannot do. “Can we please not with last touch or first touch,” she said, explaining that the value is directional visibility into the journey, not a causal verdict. She uses path data to identify which assets consistently precede conversions, test adjacent channels that might reach similar audiences, and evaluate whether messaging changes shorten or lengthen the sales cycle. For a content-heavy go-to-market motion, that behavioral map is irreplaceable.
Steffen Hedebrandt built Dreamdata to solve this problem after seeing it firsthand at Airtame. His team had invested in a full content operation (writers, a videographer, a designer, and an editor) but could not prove the revenue impact to leadership. When they ran MTA analysis, comparison pages pitting Airtame against competitors were consistently showing up in deal paths that eventually closed. The organic content team had been creating real pipeline value the entire time. Sales was closing deals that started with SEO content, but without the data connecting those dots, marketing had no way to defend the investment. “You plant the seed until the sales guy closes the deal. And that’s why it’s so critical to have some kind of clue on how those dots really connect.”
Ron offered the most concise case for keeping MTA in the stack: it is a credit distribution system that is still a useful guidepost for understanding where your efforts are making an impact. His advice is to stop pitching the model itself and start asking the path-to-conversion questions it actually answers: how long do different audience segments take to convert, how do retention paths differ from acquisition paths, and what changes when you add or remove a channel from the mix.
Matthew at Canva brought MTA back after a period of investing heavily in MMM and incrementality. The aggregate methods answered the planning and causality questions, but gaps surfaced. MTA caught shifts in channel performance days faster than MMM could update. When creative performance moved, MTA signaled it immediately while the larger model was still stabilizing. It also revealed user quality differences between channels, identifying traffic sources that drove volume but produced users with thin retention, a signal MMM flattened entirely. “In marketing, good quality data is hard to come by. It rarely makes sense to throw anything away.”
There is also a structural advantage MTA has over MMM that does not get discussed enough. Attribution App describes it as the deterministic edge: unlike MMM, which relies on statistical inference and aggregate correlations, MTA provides a visible record of what actually happened. You cannot point to a specific deal and trace how an MMM model reached its conclusion. MTA can. Every credit allocation is tied to actual recorded events: first interaction, each subsequent touchpoint, final conversion. When a CFO asks why marketing claims to have influenced a particular account, MTA gives you the evidence trail to show them. MMM gives you a coefficient.
That board-level accountability matters more than it used to. According to Forrester data cited by Attribution App, 82% of CMOs now report goals aligned directly to revenue targets. Attribution App’s own data puts marketing budgets at 8% of company revenue in 2024, down from 9% the year prior. In an environment where every dollar needs a defensible justification, MTA’s ability to connect specific marketing activity to pipeline in a visible, traceable way is one of its most underappreciated strengths. It also surfaces something single-touch models miss entirely: channel interdependence. A paid search ad that appears to close deals often looks less impressive once MTA shows it is harvesting demand that a content piece created 2 months earlier. That dynamic is invisible to last-touch. MTA makes it explicit, which changes how you allocate budget between acquisition channels and mid-funnel nurture.
- Use MTA to analyze content performance in buying journeys, not to assign channel credit. Which assets consistently appear alongside closed deals is a question worth answering.
- Look for sequences, not individual touchpoints. MTA surfaces whether 2 touches in the right order move accounts differently than either touch alone.
- Treat MTA as your fast-feedback layer. It catches channel shifts and creative performance changes weeks before MMM can confirm them.
- Compare MTA path data against self-reported answers to surface tracking blind spots. Persistent gaps between what customers report and what the model shows are informative, not a problem to fix.
- Ask path-to-conversion questions instead of revenue causation questions. Time to conversion, new vs retained customer paths, and channel mix effects are things MTA answers reliably.
Self-reported attribution vs software: what Ahrefs and Canva learned
Self-reported attribution (asking customers “how did you hear about us?”) captures signals that no tracking tool can detect, but it introduces its own biases. The most mature measurement teams combine self-reported data with software attribution, using each to validate and challenge the other rather than picking one as the single source of truth.
Sam Oh, VP of Marketing at Ahrefs, described what might be the most radical approach to attribution in B2B SaaS. Dimitri, Ahrefs’ founder and CEO, challenged the necessity of detailed attribution models entirely: if revenue is increasing, the strategies must be working. Instead of chasing perfect attribution, Ahrefs prioritizes product development and creating valuable content. Sam acknowledged this was initially difficult coming from an agency background where clients demanded clear metrics linking every dollar to revenue, but the logic holds: attribution models often have flaws and can mislead decision-making with bad data.
“Attribution models often have flaws and can mislead decision-making with bad data. Ahrefs’ approach focuses on the broader picture: if revenue is increasing, their strategies must be effective.”
Sam Oh, Ahrefs’ VP of Marketing
Michael Rumiantsau, CEO of Narrative BI, described the manual reality behind most attribution efforts. His team pins data points on every new signup to determine its originating channel, cross-referencing with Intercom and their internal database. While effective to some extent, the process is cumbersome and prone to error. This is the gap attribution software is supposed to fill, but the solutions available often create as many problems as they solve.
Kacie combined every available data point (UTMs, self-reported attribution, and multi-touch models) to create a comprehensive picture. Board members and leadership seek simple answers to “what drove the most revenue?” but that is rarely a question with a singular answer. Her approach treated each data source as one input into a triangulated view rather than elevating any single method as the definitive answer.
“Board members and leadership often seek simple answers, asking, ‘What drove the most revenue?’ This is rarely a question with a singular answer, and it certainly doesn’t lie solely in the last touchpoint.”
Kacie Jenkins, Capturing the true impact of marketing
- Add a “how did you hear about us?” field to your highest-intent conversion points (demo requests, trial signups, not blog subscriptions).
- Compare self-reported answers against your software attribution data monthly. Persistent gaps reveal where your tracking has blind spots.
- Weight self-reported data more heavily for channels that software attribution structurally undercounts (podcasts, communities, peer referrals).
- Present both views to leadership. The tension between them is informative, not a problem to solve.
Proving marketing’s revenue impact without perfect attribution
You do not need a perfect attribution model to prove marketing drives revenue. The practitioners who keep their budgets and their seat at the leadership table do it by combining directional data, concrete experiments, and storytelling that speaks the language of finance, not the language of marketing dashboards.
Moni Oloyede, a marketing ops leader, took one of the strongest positions on this topic: most attribution as practiced today is a waste of time. The real failure is not in the models but in the assumption that every marketing activity requires direct revenue attribution. Trade shows are the perfect example. Companies spend massive sums on events that yield maybe a handful of attributable deals a year, sparking the annual budget complaint ritual, yet they keep showing up. The executives demanding ROI spreadsheets somehow intuitively grasp what their attribution models miss: some activities build relationships that generate revenue over timelines no dashboard can track.
“They will happily take a freaking PowerPoint presentation that says, we sent an email, it had a reply-to, and it had this much success to it.”
Moni Oloyede, Attribution is a waste of time
Moni’s advice was to document results in whatever format best captures real value, whether that is traditional analytics or a simple presentation showing concrete outcomes. Your CEO needs clear results to share with the board. They will gladly accept a straightforward presentation showing real impact over a complex attribution model nobody fully understands.
Nadia framed the capability gap differently. Marketing leaders who can quantify the financial impact of creative work are the ones who keep their budgets. Some people struggle with making decisions without near-perfect certainty. Others use data to validate their instincts and move forward despite ambiguity. The practitioners who thrive make decisions with incomplete information, accepting calculated risk instead of waiting months to gather every possible data point.
“Marketing leaders who can quantify the financial impact of creative work are the ones who keep their budgets, and their seat at the table.”
Nadia Davis, How to decide if attribution data is good enough
Guta Tolmasquim, CEO of Purple Metrics, built an entire product around connecting brand to revenue. The approach that worked was focusing on 1 clear change at a time and tracking its impact on revenue without distractions. You earn credibility with finance partners by showing how brand decisions move purchase behavior in measurable ways. When you build discipline into measurement and align it with actual sales, you transform branding from a creative exercise into a proven growth lever.
Connecting brand spend to revenue when MTA ignores it
Brand marketing is the biggest blind spot in most attribution stacks. MTA cannot measure it because brand awareness works through memory and recognition, not clickable touchpoints. MMM can estimate it but only at high levels of aggregation. The teams that successfully connect brand to revenue do it through controlled experiments and creative measurement that finance leaders actually trust.
Guta worked with a consumer brand that ran a TV campaign where the attribution algorithm predicted a huge sales lift. The team spent confidently. When the campaign ended, sales barely moved. The warehouse stayed stocked, the dashboard stayed flat. Eventually they found that the creative had shifted just enough to disconnect from what customers expected. The model never accounted for that change because it treated all TV campaigns as identical.
Guta described another client whose sales spiked so quickly that the team thought the data was broken. When they investigated, they learned that new legislation had forced a competitor to route customers to their product. The attribution model had no category for what happened. These experiences taught Guta that brand measurement works only when you isolate variables: 1 change at a time, tracked against revenue, without distractions from other campaigns running simultaneously.
“Brand measurement works best when you focus on one clear change at a time and track its impact on revenue without distractions. You can earn credibility with your finance partners by showing how brand decisions move purchase behavior in measurable ways.”
Guta Tolmasquim, Connecting brand to revenue with attribution algorithms
Kacie described the challenge of making the case for brand investments when working for a CEO who does not believe in marketing. Her approach was to invest in trust-building, strong branding, and thoughtful outbound that generated measurable pipeline alongside brand lift. The key was never separating brand from demand. Every brand activity had a measurable component, even if the primary goal was awareness rather than direct conversion.
“When did marketing become this department where we need to assign a dollar to every single thing that we do? It’s made the job a lot trickier for sure.”
Kacie Jenkins, Capturing the true impact of marketing
- Run 1 brand campaign change at a time and measure its impact on revenue in isolation.
- Track leading indicators that precede revenue: branded search volume, direct traffic, social mention velocity, inbound demo request volume.
- Present brand metrics in financial language. “Branded search increased 23% after the campaign, correlating with a 15% increase in inbound pipeline” works better than impressions and reach.
- Use holdout testing for digital brand campaigns where possible. Hold out a geo or audience segment and compare conversion rates.
The marketing ops infrastructure that makes attribution actually work
Attribution debates usually start in the wrong place. Teams argue about models and vendors while ignoring whether the infrastructure exists to make any model produce reliable output. Attribution App frames the root cause plainly: most teams are not failing at attribution models. They are failing at data unification. Connecting offline and online events, reconciling duplicate attribution across platforms, and resolving multi-stakeholder tracking in B2B accounts are infrastructure problems. The model is the easy part. Funnel stage definitions, channel taxonomies, campaign naming conventions, and shared language across sales and marketing determine the quality of every attribution number before the model runs.
Nadia was direct about this: the success of any ABM or demand gen tactic starts and stops with marketing operations. If you do not have that excellence in how you bring things together, everything else is secondary. Vague funnel definitions, inconsistent UTM parameters, and misaligned lifecycle stages between marketing and sales make attribution data unreliable at the foundation level. No model can fix bad inputs.
Phil Gamache and Jon Taylor, co-hosts of Humans of Martech, explored lifecycle reporting as the bedrock of measurement. Getting to revenue data is harder than it looks. Impressions and sessions are easy: log into Google Analytics, pull numbers from ad platforms, done. Things get hairy when you start working with contacts, deals, and new customers. Without common definitions, you cannot even start building useful reports. If marketing and sales do not agree on what constitutes an MQL, every number downstream from that definition is unreliable.
“If you and sales don’t agree on what constitutes an MQL, it’s going to be hard to be successful creating good reports.”
Phil Gamache and Jon Taylor, How skilled do you need to be at marketing reporting?
Simon Heaton, Director of Growth Marketing at Buffer, implemented server-side tracking through Segment as the foundation of their measurement stack. The CDP captured on-site events (page loads, button clicks, CTA interactions) while avoiding the data loss from browser-based tracking. Some events came out of the box; others required custom instrumentation with specific nomenclature for different buttons and CTAs. The lesson was that measurement infrastructure is not a one-time setup but an ongoing engineering effort that requires dedicated resources.
“We implemented this through CDP Segment. Segment captures all of our on-site events, like page loads, we have button clicks instrumented, a whole bunch of really interesting events.”
Simon Heaton, Buffer’s Director of Growth Marketing
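For a flavor of what that instrumentation looks like, here is a hypothetical server-side event in the style Simon describes, assuming Segment's analytics-python library. The event names and properties are invented for illustration, not Buffer's actual schema.

```python
import analytics  # Segment's analytics-python library

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

# Server-side events survive the browser privacy restrictions that
# drop client-side pixels. Specific nomenclature per button and CTA
# is what makes the downstream attribution data usable.
analytics.page("user_123", name="Pricing")
analytics.track("user_123", "CTA Clicked", {
    "cta_id": "pricing-hero-start-trial",
    "page": "/pricing",
})
analytics.flush()  # send queued events before the process exits
```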
- Align marketing and sales on lifecycle stage definitions before touching attribution models.
- Standardize UTM parameters and campaign naming conventions across every channel and team; a minimal normalization sketch follows this list.
- Implement server-side tracking through a CDP to reduce data loss from browser privacy restrictions.
- Document your channel taxonomy. Every team member should map any campaign to the same channel category.
- Audit your CRM data quality quarterly. Attribution is only as good as the contact and opportunity records feeding it.
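The UTM and taxonomy bullets are the easiest ones to automate. Here is a minimal sketch using only the standard library; the channel map is illustrative and should come from your documented taxonomy, not from this snippet.

```python
from urllib.parse import parse_qs, urlparse

# One shared taxonomy: every (source, medium) pair maps to exactly
# one channel bucket. Entries here are illustrative.
CHANNEL_MAP = {
    ("google", "cpc"): "paid_search",
    ("google", "organic"): "organic_search",
    ("linkedin", "paid-social"): "paid_social",
    ("linkedin", "paid_social"): "paid_social",  # alias for naming drift
    ("newsletter", "email"): "email",
}

def channel_for(landing_url):
    """Classify a landing URL into the shared channel taxonomy.
    Lowercases and trims UTM values so 'Google ' and 'google' do not
    become two different channels in reporting."""
    qs = parse_qs(urlparse(landing_url).query)
    source = qs.get("utm_source", [""])[0].strip().lower()
    medium = qs.get("utm_medium", [""])[0].strip().lower()
    return CHANNEL_MAP.get((source, medium), "unclassified")

print(channel_for("https://example.com/?utm_source=Google&utm_medium=cpc"))
# -> paid_search; anything unmapped lands in 'unclassified' for audit
```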
How AI and causal models are reshaping attribution
AI-powered attribution tools promise to solve the measurement problem through machine learning, probabilistic modeling, and causal inference. Some of these approaches represent genuine methodological advances. Others dress up the same flawed correlation logic in fancier packaging. The difference comes down to whether the model can actually prove causation or just assign credit more creatively.
Rajeev worked on causal AI frameworks that move beyond correlation. The core distinction: traditional MTA tells you what happened before a conversion, causal models attempt to tell you what actually caused it. His team calibrates MTA outputs against MMM and experimental results, using multipliers to adjust platform-reported numbers and remove duplicate credit. It is messy and not academically pure, but it is more useful than pretending the sum of platform claims equals your revenue.
“Align your tools to the decisions you actually need to make, and stop forcing one model to do everything.”
Rajeev Nair, Causal AI and a unified measurement framework
Chris Golec, who founded Demandbase and pioneered Account Based Marketing, is now building Channel99 to tackle attribution with AI. His assessment of the current state is blunt: most solutions capture around 5% of customer signals while creating elaborate dashboards that mask massive gaps in understanding. Channel99 targets 2 critical problems: deduplication of credit across platforms that all claim the same conversions, and prediction of marketing ROI before dollars are spent rather than after.
Ashley offered a grounding counterpoint. She joked that “DocuSign is our highest converting channel. That contract goes over and 99.9% of the time somebody freaking signs it.” The point was serious: while we agonize over which touchpoint “drove” the conversion, modern buying journeys include dozens of interactions across multiple channels, teams, and time periods. AI models that claim to untangle this complexity should be evaluated on whether they actually change your decisions, not whether their math looks impressive.
“DocuSign is our highest converting channel. That contract goes over and 99.9% of the time somebody freaking signs it.”
Ashley Faus, Building content that matches actual human thinking
Michael positioned AI-powered BI tools as a way to democratize data narratives. Rather than requiring a data science team to interpret attribution models, AI can surface actionable patterns from the data and present them in plain language. The gap between what attribution models produce and what marketing teams can actually use remains one of the biggest unsolved problems in martech. Tooling that closes that gap, whether through AI or better UX, changes outcomes more than a mathematically superior model that nobody understands.
Frequently asked questions
What is the best MTA model for B2B companies?
There is no single best model. Nadia swapped between First Touch, U-shaped, and Markov chain models depending on whether she was measuring market expansion, deal velocity, or channel influence. Start with last-click as a consistent baseline, then layer in additional models to answer specific questions.
Should I use multi-touch attribution or marketing mix modeling?
Use both. MTA answers operational questions (which touchpoints are in conversion paths), while MMM answers strategic questions (how should I allocate budget next quarter). Rajeev’s framework uses MMM for planning, experiments for causal validation, and MTA for daily execution decisions.
How do I start running incrementality tests?
Start with channels you control, like email or push notifications, where holdouts are simple. Hold out 5-10% of your audience from a campaign and compare conversion rates. Pranav argued this should be standard practice for any owned channel. Expand to paid channels using spend-level tests ($100 vs $200) once you have the methodology working.
How do I measure the impact of dark social and word of mouth?
You cannot measure dark social directly; that is the whole point. Use self-reported attribution fields on high-intent forms, track branded search volume trends, and monitor direct traffic patterns. Nadia and Rutger both recommended listening directly to customers rather than trusting tracking tools to capture what happens in Slack channels and peer conversations.
How do I prove marketing’s value to a skeptical CEO?
Stop leading with attribution dashboards. Moni found that executives will happily accept a simple presentation showing concrete results over a complex model nobody understands. Combine directional data from multiple sources, run small experiments that prove specific campaign impact, and present results in financial language that ties marketing activity to revenue outcomes.
Is self-reported attribution reliable?
Self-reported attribution captures signals that software misses (podcasts, peer referrals, communities) but introduces recency and recall bias. Kacie combined self-reported data with UTMs and multi-touch models, treating each as one input rather than the definitive answer. The gaps between self-reported and software data are informative, not a problem to eliminate.
What is visit scoring and how does it differ from lead scoring?
Visit scoring evaluates individual website sessions based on engagement behavior (time on site, pages explored, content consumed) and attributes that score to the traffic source immediately. Unlike lead scoring, which evaluates a person across multiple interactions over time, visit scoring works at the session level and is designed to optimize paid media spend by identifying which traffic sources bring visitors who behave like buyers.
How big does my team need to be before investing in multi-touch attribution?
Nadia described how MTA became a bundled feature in platforms like HubSpot and Marketo, leading small teams to believe they could run MTA without dedicated expertise. In smaller SaaS companies, that promise collapsed because 1 person was running ads, handling demand gen, and trying to build models between meetings. You need at least 1 person with dedicated time for measurement infrastructure before an MTA model will produce trustworthy results.
Does the loss of third-party cookies make multi-touch attribution obsolete?
MTA built on third-party cookies is dead. MTA built on first-party data, server-side tracking, and platform partnerships is alive and adapting. Ron described how Rockerbox rebuilt its methodology channel by channel using first-party data and direct integrations, a process that aligns with how Meta and Google themselves now use modeled conversions to fill tracking gaps.
How do I attribute revenue to brand marketing?
Isolate 1 variable at a time. Guta found that brand measurement only works when you track 1 clear change against revenue without other campaigns running simultaneously. Use branded search volume, direct traffic, and inbound pipeline as leading indicators. For digital brand campaigns, run geo-holdout tests to measure incremental lift.
Why do ad platforms always overcount conversions?
Every platform claims credit for the same conversion because each one sees only its own touchpoint data. Rajeev’s team uses calibration multipliers derived from MMM and experiments to adjust platform-reported numbers and remove duplicate credit. Constantine put it more bluntly: the big platforms sell attribution solutions they do not even trust for their own internal decisions.
Explore the episodes
- Ron Jacobson: Why multi-touch attribution excels in credit distribution but fails in causality — How MTA’s cookie-dependent past created tracking gaps, why rebuilding on first-party data fixes it, and why incrementality testing answers the causality question MTA was never meant to answer.
- Constantine Yurevich: Visit Scoring, an alternative to MMM and MTA — Why cross-device journey stitching breaks down and how scoring visitor engagement behavior at the session level captures signals that attribution windows miss.
- Nadia Davis: How to decide if attribution data is good enough to guide strategy — Swapping models based on goals, chain-based MTA for ABM, and why marketing ops excellence is the prerequisite for any measurement model.
- Rajeev Nair: Causal AI and a unified measurement framework — How to combine MMM, MTA, and experiments into a calibrated measurement system that accounts for platform overcounting.
- Kacie Jenkins: Capturing the true impact of marketing and avoiding reductive metrics — Combining UTMs, self-reported attribution, and multi-touch models to prove marketing impact to skeptical leadership.
- Pranav Piyush: Why multi-touch attribution is broken and what you should do instead — The case for holdout testing, the causality gap in MTA, and what to do when your attribution model stops being useful.
- Barbara Galiza: The inconvenient truths of attribution no one wants to admit — Why attribution is a much broader category than MTA, where MTA actually excels (click-based bottom-funnel channels), and how to layer methods to fill the gaps.
- Moni Oloyede: The marketing ops identity paradox and why attribution is a waste of time — Why not all marketing activities require direct revenue attribution and how to prove value without perfect measurement.
- Sundar Swaminathan: How Uber measures the ROI of marketing — Attribution at scale, the MECE principle for avoiding double-counting, and how incrementality tests work at Uber.
- Guta Tolmasquim: Connecting brand to revenue with attribution algorithms — Isolating brand impact on revenue, earning credibility with finance partners, and the stories of attribution models that got it spectacularly wrong.
- Matthew Castino: How Canva measures marketing — Why Canva brought MTA back after investing heavily in MMM and incrementality, and what aggregate models miss.
- Siobhan Solberg: A guide to ethical marketing with data minimization — Balancing multi-touch attribution with privacy obligations and why proportional data is often good enough.
Want more practitioner perspectives? Subscribe to Humans of Martech for weekly conversations with the operators building the future of marketing technology.