139: Ron Jacobson: Why multi-touch attribution excels in credit distribution but fails in causality

What’s up everyone, today I have the pleasure of sitting down with Ron Jacobson, Co-founder and CEO of Rockerbox.

Summary: Multi-touch attribution doesn’t tell you what really caused a conversion or revenue; it’s a credit distribution system. It’s still a useful guidepost for understanding where your efforts are making an impact. Incrementality testing, on the other hand, digs deeper—helping you pinpoint what’s really driving results by answering, “What would’ve happened without this campaign?” But getting there isn’t about finding the perfect model; it’s about asking the right questions. Don’t get stuck in the basics like Google Analytics: true measurement demands first-party data and statistical modeling, especially as third-party cookies fade. For startups, the goal is momentum—nail one channel before diving into complex measurement. Build success first, then refine with tools like MTA or MMM to truly understand what drives growth.

Jump to a Section

About Ron

Ron Jacobson - Humans of Martech interview on incrementality, MMM and multi-touch attribution
  • Ron started his career as a software engineer before transitioning to product management at AppNexus, where he ran the platform analytics team and later the real-time platform product team.
  • He then took the entrepreneurial plunge, co-founding Rockerbox—first as a programmatic advertising platform, then as a multi-touch attribution platform.
  • Today they’ve added a suite of marketing measurement tools that also leverage marketing mix modeling.

Rethinking the Role of Multi-Touch Attribution

Imagine multi-touch attribution as a central point of impact, like a futuristic sphere radiating energy outward—each crack and glowing line representing a touchpoint in the customer’s path. While the energy spreads, creating visible effects, it doesn't reveal the exact origin or true catalyst of the event.

Multi-touch attribution (MTA) often sparks debate around its effectiveness in driving marketing decisions. While many recognize it as a flawed tool, few fully grasp the extent to which it misses a crucial element: causality. When asked whether MTA should be seen as a credit distribution mechanism rather than a way to measure causality, Ron agrees wholeheartedly, explaining that this is exactly how his team has framed the discussion for years.

Ron emphasizes that MTA’s purpose isn’t to assign cause-and-effect between marketing touchpoints and revenue generation. Instead, it’s a retrospective tool designed to distribute credit across various touchpoints in a customer’s journey.

Ron argues that marketing teams need to shift their focus from chasing causality to understanding how customers interact with marketing efforts. This approach helps marketers assess what channels or strategies might be working, even if the exact causal impact remains elusive.

A specific example Ron highlights is when clients test new channels like OTT, CTV, or linear TV. Frequently, these clients aren’t sure if the new channel is even making an impact. The issue, he notes, isn’t necessarily that the marketing is ineffective—it’s that the data simply doesn’t reflect customer engagement due to gaps in tools like Google Analytics. While causality is still out of reach, MTA can at least show that the new channel is on the customer’s path to purchase, providing some reassurance that the efforts are not entirely in vain.

Ron points out that this shift in perspective helps marketing teams function more effectively. Rather than getting bogged down by the impossibility of determining exact causality, teams can use MTA to answer more immediate, practical questions: What are the touchpoints that seem to drive the most engagement? Where should we focus next? It’s not about perfectly predicting outcomes, but about gathering insights that improve day-to-day operations.
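The distinction Ron draws—credit distribution, not causality—is easy to make concrete. A minimal sketch of splitting one conversion’s credit across a touchpoint path (the weighting schemes are standard textbook ones, not Rockerbox’s actual model, and the channel names are hypothetical):

```python
# Toy multi-touch attribution: distribute one conversion's credit across
# the customer's path. Note this assigns credit, not causality.

def distribute_credit(path, scheme="linear"):
    """Return {touchpoint: credit} summing to 1.0 for a single conversion."""
    n = len(path)
    if scheme == "linear":        # equal share to every touchpoint
        weights = [1 / n] * n
    elif scheme == "u_shaped":    # 40% first, 40% last, 20% split between
        if n == 1:
            weights = [1.0]
        elif n == 2:
            weights = [0.5, 0.5]
        else:
            weights = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    credit = {}
    for touchpoint, w in zip(path, weights):
        credit[touchpoint] = credit.get(touchpoint, 0.0) + w
    return credit

path = ["paid_social", "email", "branded_search"]
print(distribute_credit(path))               # linear: ~1/3 each
print(distribute_credit(path, "u_shaped"))   # 0.4 / 0.2 / 0.4
```

Whichever scheme you pick, the credit always sums to exactly one conversion—which is the point: the model redistributes what already happened rather than telling you what caused it.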

Key takeaway: MTA isn’t designed to establish causality, but rather to help distribute credit among touchpoints. When marketers focus on how customers engage with their efforts rather than trying to measure cause-and-effect, MTA becomes a valuable tool in refining strategy.

Back to the top ⬆️

Understanding the Value of Path to Conversion

When diving into the value of the path to conversion, we often struggle with the fact that it doesn’t fully address causality. Just because a customer clicks on a Google link and converts doesn’t necessarily mean that click caused the purchase. It’s possible the customer had already been influenced by a social ad or an email from days prior. Understanding the motivations behind these actions remains elusive.

Ron’s take on this is refreshingly straightforward. He suggests ignoring the model entirely when pitching multi-touch attribution (MTA). Instead, focus on the question: What can you learn from understanding the customer’s path to conversion? By treating MTA as an alternative lens to last-click or first-touch attribution, Ron emphasizes that it provides more context but doesn’t necessarily give a definitive answer to causality. He argues that last-touch attribution, for example, isn’t the best method for understanding the full customer journey.

The real value of analyzing the path to conversion, according to Ron, comes from the variety of questions you can answer. Questions like time to conversion, comparing paths for new versus retained customers, or how adding a new channel influences customer behavior. Retention, in particular, has gained importance as rising interest rates push companies to focus on profitability, and understanding how existing customers engage without paid media is crucial.
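Some of the path questions Ron lists, like time to conversion, are straightforward to compute once paths are stored as timestamped events. A small sketch, with illustrative record shapes and field names rather than any particular vendor’s schema:

```python
from datetime import datetime

# Hypothetical path records: each is a list of (timestamp, channel) events,
# with the final event being the conversion. Shapes are illustrative only.
paths = [
    [("2024-05-01", "paid_social"), ("2024-05-04", "email"), ("2024-05-09", "search")],
    [("2024-05-02", "search"), ("2024-05-03", "search")],
]

def days_to_conversion(path):
    """Days between the first touch and the converting (last) touch."""
    first = datetime.fromisoformat(path[0][0])
    last = datetime.fromisoformat(path[-1][0])
    return (last - first).days

print([days_to_conversion(p) for p in paths])  # [8, 1]
```

The same event log supports the other questions in this section—path length for new versus retained customers, or how often a newly added channel appears on converting paths—without any change to the underlying data.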

Ron points out that the path to conversion isn’t just a credit distribution mechanism but a core dataset that allows marketers to do their jobs more effectively. By looking beyond conversions alone and examining full paths, even those that don’t lead to a sale, marketers can better assess conversion rates and session data. Still, he concedes that none of this answers the critical question of whether marketing spend was truly incremental or whether a customer would have converted without it.

Key takeaway: While path to conversion analysis doesn’t solve for causality, it opens the door to deeper insights. Marketers can use it to answer key questions about customer behavior, retention, and channel effectiveness, but should remain aware of its limitations in proving incremental impact.

Back to the top ⬆️

The Future of Martech in Anaheim, California 🪐

MOps-Apalooza is back on Nov 4-6, 2024 and there are only a few tickets left! The conference is tech agnostic, so no vendor kool-aid to drink; sessions are super practical and topics are wiiide ranging. Connect, learn and grow among the best in the industry.
If you can’t make it in person, the entire event is being live streamed. Get your ticket before they run out 👇

Defining Incrementality in Marketing

When we discuss incrementality, the core question is simple: Would the business results still have happened without marketing? It’s a shift in mindset from how we traditionally report on marketing outcomes. Instead of simply attributing revenue to specific touchpoints, incrementality forces us to ask whether that revenue would exist at all if we hadn’t spent that marketing dollar—much like asking if a tree falling in a forest makes a sound if no one is there to hear it.

Ron emphasizes the importance of having a baseline when assessing incrementality. Without this, everything looks like it’s driven by marketing, which isn’t always true. For him, the key is understanding the marginal return on that last dollar spent. In other words, is each dollar spent still driving profitable results? This approach helps marketers gauge if they’re spending wisely and achieving their business goals.

The real challenge comes in determining the best methodologies to uncover incrementality. Ron explains that while modeling tools like multi-touch attribution (MTA) aren’t designed to measure incrementality, they provide valuable insights when combined with testing methodologies. He highlights that running a holdout test, for example, can reveal incremental results, and applying that test’s findings to MTA reporting allows marketers to optimize daily decisions while still understanding broader trends.

Ultimately, Ron advises marketers to focus less on the methodologies themselves and more on the questions they need answers to. Whether you’re trying to allocate next quarter’s budget or determine the effectiveness of a new creative, the right approach depends on what you’re trying to uncover. By starting with the right questions, marketers can select the best tools or methods to answer them, rather than getting caught up in finding a one-size-fits-all solution.

Key takeaway: Incrementality in marketing comes down to understanding what results wouldn’t exist without marketing. Instead of getting lost in finding the perfect methodology, marketers should focus on the specific questions they need answers to and choose the appropriate tools to uncover insights.

Back to the top ⬆️

Focusing on the Right Marketing Metrics

When clients come with specific data points or hypotheses, even if they seem like vanity metrics, Ron is more than happy to engage. He believes that having something concrete to work with, no matter how off-target, is better than a vague request for “better attribution.” He’ll challenge their focus, asking why they’re obsessed with a certain metric like brand lift when profitability is on a downward trend. But the fact that they’re coming with a specific issue to solve is a good starting point.

What Ron struggles with more is when clients come with a completely undefined request. “I need better attribution” is a phrase he hears too often, and it’s akin to someone going to a personal trainer saying, “I need to get fit,” without specifying whether the goal is to lose weight, build muscle, or improve endurance. When asked for better attribution, Ron and his team must dig deeper, asking numerous questions to uncover what the client is actually trying to achieve.

Through these conversations, Ron helps clients narrow down their focus. Maybe they need a combination of methodologies like media mix modeling (MMM) for wholesale and MTA for direct-to-consumer (DTC) efforts. Perhaps down the line, they’ll need to integrate all of this data into a warehouse solution as they scale their analytics team. The key is guiding them from a vague problem statement to actionable insights by understanding the right metrics to focus on and the right mix of solutions to deploy.

Ron’s approach highlights a broader truth: most marketing teams don’t need “better attribution” in the abstract—they need clarity on their goals and the right tools to meet them. Once the root question is identified, the solution becomes much clearer, whether it’s refining their current metrics or implementing new methodologies.

Key takeaway: Vague requests like “better attribution” often signal deeper, underlying questions. Marketers should start by identifying their specific business goals and then work with the right mix of tools to meet those needs. Clarity in goals leads to better solutions.

Back to the top ⬆️

Why Marketers Struggle with Attribution

The demand for “better attribution” often stems from frustration with current tools. But according to Ron, most clients coming to Rockerbox aren’t dealing with MTA (multi-touch attribution) yet. In fact, they’re often starting from the basics: either platform-reported numbers or Google Analytics. Occasionally, they’ve tried other measurement methodologies that failed them, but rarely do they arrive already equipped with MTA.

Ron describes three common client situations. They’re either:

  1. using platform numbers,
  2. relying on Google Analytics,
  3. or they’ve tried and failed with other methods.

It’s actually rare for a client to come with an MTA solution already in place. When that does happen, they’re often missing critical data from platforms like Pinterest, Snapchat, TikTok, or channels like OTT and linear TV. That gap leaves their existing MTA model incomplete—much like standing on a mountain peak and seeing only fragments of a vast, broken landscape, without access to the full view.

The reality is, many companies need more than a simple attribution model. Ron highlights how Rockerbox steps in to help build a more comprehensive view, especially when clients are working with fragmented data. From direct TV to digital channels, getting a clear picture of what’s driving conversions requires a more sophisticated approach than what most clients start with.

For Ron, the bigger challenge is bridging that gap—moving clients from Google Analytics and platform metrics to something more actionable. It’s not just about offering MTA but building a more accurate and integrated path to conversion that captures the full scope of customer interactions across channels.

Key takeaway: Most marketers aren’t starting with MTA—they’re relying on basic tools like Google Analytics or platform-reported numbers. Moving beyond these basics requires a holistic view of data across all channels, ensuring that critical touchpoints aren’t overlooked.

Back to the top ⬆️

Overcoming the Flaws of MTA with First-Party Data

Many marketers approach multi-touch attribution (MTA) with a flawed assumption, thanks to the original reliance on third-party cookies. Ron describes this reliance as the “original sin” of MTA—early companies like Visual IQ, Convertro, MarketShare, and Adometry built their attribution models on the promise of tracking every digital impression via third-party cookies. It was a compelling vision, but fundamentally flawed, because it overlooked significant blind spots. As Ron points out, “there’s no third-party cookie for linear TV, direct mail, or radio.” The very foundation upon which those early attribution models were built was incomplete from the start.

With the gradual phasing out of third-party cookies, MTA models relying on them crumble. But Ron sees this as an opportunity to rebuild attribution on more solid ground—starting with first-party identity resolution. Instead of relying on a single point of connection across channels, the focus shifts to collecting the most granular data available for each channel and filling in the gaps with deterministic data or statistical models. This isn’t just a fix—it’s a more accurate and future-proof way of looking at attribution.

Take walled gardens like Snapchat, Pinterest, and TikTok. By striking direct partnerships with these platforms, Rockerbox gains access to deterministic data such as impression-level logs, which can be connected with customers’ first-party data in a privacy-compliant manner. This effectively breaks down the so-called “walls” that have long been seen as barriers to effective attribution. Ron humorously notes how this approach is akin to “tearing down the wall,” a playful nod to Ronald Reagan’s famous speech.

For offline channels like direct mail or OTT/CTV, the solution comes in the form of modeling. Without third-party cookies, Rockerbox leverages IP addresses and log files to build probabilistic models. For example, they track when and where ads were served and compare that with user behavior on clients’ websites, using statistical analysis to attribute conversions. Additional layers like post-purchase surveys and promo code tracking further strengthen these models, ensuring they remain robust and actionable over time.

The result is a channel-by-channel approach that doesn’t just patch over the gaps left by third-party cookies but redefines attribution altogether. In fact, this methodology aligns with how even giants like Meta and Google are moving forward, using modeled conversions to fill in the gaps in their data.

Key takeaway: The demise of third-party cookies has exposed the flaws in traditional MTA models, but it also presents a chance to rebuild attribution from the ground up. By combining first-party data, direct platform partnerships, and statistical modeling, marketers can create a more accurate, channel-specific approach to attribution that works even in walled gardens and offline channels.

Back to the top ⬆️

MTA vs MMM vs Incrementality

Understanding how to apply different marketing measurement models is crucial to maximizing the impact of your budget. When asked about the strengths and weaknesses of Media Mix Modeling (MMM), Multi-Touch Attribution (MTA), and incrementality testing, Ron emphasized that each model has a specific role depending on the stage of your business and the type of data you have.

Ron begins by discussing Media Mix Modeling. As he points out, MMM is most useful for businesses with at least two years of historical data and annual marketing spends upwards of $5 million. “If you’re a company that’s a week or a month old, MMM isn’t worth discussing,” he explains. The power of MMM lies in analyzing aggregated historical data to determine which channels are driving business results. This makes it ideal for forward-looking budgeting and high-level planning. However, it’s not particularly useful for day-to-day optimizations, as it only breaks down performance at a broad level, like channel or tactic, and is mostly leveraged by the C-suite for strategic decision-making.

On the other hand, incrementality testing is designed to answer the more granular question: Was that last marketing dollar truly incremental? This is achieved through A/B tests or geo-based tests, where media is scaled up or held back in one location to compare against a control. While incrementality testing provides a clear answer on the effectiveness of marketing spend, Ron points out that it comes with operational challenges—such as needing to restructure campaigns—and carries real costs, particularly when media is paused in key markets. Plus, the results are often only relevant for a limited period. “A test run in January might not apply by March,” he notes.

Finally, Ron discusses MTA, the go-to model for in-platform optimization. MTA shines when tracking the user journey through different touchpoints, helping marketers understand paths to conversion and time-to-conversion metrics. But Ron suggests that combining MTA with the insights from MMM or incrementality testing can supercharge performance. By applying the results from these broader models as multipliers within MTA, marketers can make more informed daily adjustments to their campaigns, bridging the gap between high-level strategy and real-time optimization.

Key takeaway: Each attribution model serves a different purpose. MMM is great for high-level, long-term planning and budgeting, incrementality testing uncovers the true value of your marketing spend, and MTA optimizes daily decisions. Combining insights from multiple models can provide the most comprehensive view of your marketing performance.

Back to the top ⬆️

Why Startups Should Avoid Overcomplicating Measurement Early On

When asked about the challenges startups face with testing and measurement, Ron’s advice is refreshingly blunt: in the early days, startups shouldn’t be focusing on complex measurement models. “Don’t talk to me or my competitors,” he says. For early-stage startups, the key is to find one channel that works and scale it until it breaks. Trying to juggle multiple channels and sophisticated testing frameworks too early will spread resources too thin, often without yielding meaningful results.

Ron emphasizes that in the early stages, startups can usually tell if something is working just by running paid media campaigns and tracking basic performance. For many, that’s all the measurement they need. The common mistake he sees is companies coming to Rockerbox looking for a magic solution when, in reality, their marketing channels simply aren’t effective. The underlying business needs work, not a new measurement tool.

He compares startups chasing measurement solutions too early to someone blaming a scale for not losing weight. The scale is only there to measure progress, not fix underlying issues. If a startup’s marketing isn’t profitable or growing, no amount of measurement will fix that. Measurement companies can help when you have a system in place, but without a working channel, there’s nothing to measure or optimize.

Ron suggests that once startups have one or more channels that are effective and scaling, that’s the time to consider bringing in measurement tools. At that point, measurement can help amplify success, acting as “gasoline on the fire.” But before that, it’s just a distraction from the real task: finding the one thing that works and pushing it as far as possible.

Key takeaway: Startups should focus on finding one successful marketing channel and scaling it before diving into complex measurement tools. Measurement is valuable once the basics are working, but it won’t fix fundamental business issues.

Back to the top ⬆️

What If We Just Stopped Reporting on Marketing?

When asked about companies like Wistia and Ahrefs that don’t rely on sophisticated attribution models like MTA or MMM, Ron doesn’t shy away from admitting that, in some cases, it simply isn’t necessary. These companies focus entirely on creating value for their audience through content, product improvements, and customer experience. And their metric for success? Revenue growth. As long as the revenue charts are pointing upward, they don’t bother with intricate reporting or tracking tools.

Ron’s take is straightforward: if it’s working, don’t change it. “They have a very high-class lack of problems,” he remarks, pointing out that these companies are in an enviable position. If a business can grow consistently and profitably without heavy measurement, why fix what isn’t broken? His advice to companies in this situation is clear—keep doing what’s working, and only reevaluate if things start to break down.

But what happens if it does break? Ron notes that when growth plateaus or channels stop delivering, it might be time to dig deeper into the data. That’s when a company would benefit from attribution models like MTA or MMM to help optimize and discover new opportunities. However, until that happens, there’s no need to complicate things. Ron isn’t about to tell a company that’s thriving without complex reporting to start tracking every touchpoint meticulously.

For Ron, it’s about balance. Not every company needs to invest in attribution models or third-party measurement if they’ve already found their formula for success. But for those that hit a wall or want to scale smarter, that’s where deeper insights from tools like Rockerbox come into play.

Key takeaway: If your business is growing consistently and revenue is rising, you don’t need to overcomplicate things with sophisticated attribution models. Keep focusing on what’s working. However, when growth stalls or channels start to underperform, that’s when tools like MTA or MMM can help you refine and optimize your efforts.

Back to the top ⬆️

The Value of Delayed Holdout Tests in Incrementality Testing

When discussing the hesitation many companies have with running holdout tests, the core concern is clear: by holding out a percentage of their audience, they might be leaving potential revenue on the table. These tests, designed to assess the incremental value of a campaign, can create friction, especially if marketers are worried about missing out on conversions from the portion of the audience not receiving the ads. This leads to the question of whether delayed holdout tests, where the held-out audience eventually receives the campaign, can balance insights with mitigating lost revenue.

Ron’s perspective is direct—by the time the test concludes, you should already know whether the campaign is incremental or not. If the exposed audience converted at a meaningfully higher rate than the holdout, the campaign is incremental, and you can go after the held-out group with confidence. If the test reveals it isn’t, then you’ve learned that serving those users media wouldn’t have made a difference, and holding them out was the right call.

Ron also points out that once you’ve run the test, the cost has already been absorbed. The holdout group was part of the test, and there’s no point in doubling back without applying the insights you’ve gained. If the campaign is proven effective, go after the audience you held back—but if it’s not, there’s no reason to waste more budget on it. The test itself has already done the heavy lifting of providing clarity.

In essence, the beauty of a well-executed holdout test is that it allows companies to make data-driven decisions based on clear evidence of incremental value. And by applying those insights immediately, businesses can ensure they aren’t just minimizing risk but optimizing future campaigns with a clearer sense of direction.

Key takeaway: Delayed holdout tests can provide reassurance for companies worried about missed revenue, but once the test is complete, the results should guide the next steps. If the audience is proven to be incremental, you can confidently serve them the media. If not, the test has already saved you from inefficient spend.

Back to the top ⬆️

Should Every Marketing Initiative Be Tested?

When asked whether every improvement, especially obvious ones like better onboarding emails, needs to be rigorously tested, Ron suggests a nuanced approach. While testing has clear value, not every company or situation calls for it. He references two iconic companies to illustrate this: Google, known for its obsession with testing every detail, like their famous blue button experiment, and Apple, which tends to trust its instinct and taste. Both approaches have led to success, but they reflect different organizational cultures.

Ron points out that testing isn’t always necessary, especially if a company’s internal culture leans towards decisiveness and trust in its team’s expertise. However, for businesses that embrace testing as part of their DNA, there are still operational costs involved. Testing takes time—not just to execute, but to analyze and implement results. You also need to consider how running multiple tests might limit what you can do in parallel. The complexity of keeping tests clean and isolated means you can’t just run dozens at once without risking “leakage” of results.

Despite the challenges, Ron acknowledges that testing can offer tremendous value, especially when there are specific questions or uncertainties. He gives an example: if a media mix model shows a high ROI for a channel but with a wide confidence interval, it’s worth pausing and running a test to validate that performance. Testing, in this case, ensures you’re making decisions based on solid data, not assumptions.

Ultimately, Ron emphasizes that the decision to test should stem from the questions you need answered. Start with the problem, and if testing is the right solution to gain clarity, go for it. But don’t fall into the trap of testing for the sake of testing—make sure it aligns with your goals.

Key takeaway: Testing isn’t always essential, and whether or not to test depends on your company’s culture and the specific questions you’re looking to answer. While it can provide valuable insights, consider the time and operational costs involved. Focus on solving specific problems rather than applying testing as a default.

Back to the top ⬆️

Understanding Time-Based and Geo-Based Testing in Marketing

When asked about the effectiveness of time-based and geo-based tests for determining incrementality, Ron dives into the nuances of these methods. While traditional A/B testing has its value, he emphasizes that it doesn’t inherently measure incrementality or causality. Just because two messages are sent to different groups doesn’t mean the message itself is the reason for conversion—or lack thereof. The true way to measure causality is to withhold messaging from one group entirely and compare its outcomes against those of the group that did receive the message.

Ron acknowledges the flaws in geo tests, such as the impossibility of finding two identical geographic regions. However, he still sees them as valuable tools, especially when used with full awareness of their limitations. The key, he argues, is honesty about these flaws. Geographic differences, news cycles, and demographic variability all play a role, but these tests can still provide useful directional insights if applied carefully.
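Even with those caveats, the arithmetic of a geo test is simple: compare the treatment region’s result against what the control region implies it would have been. A toy sketch with synthetic numbers and no significance testing—one of many ways to normalize across non-identical regions:

```python
# Toy geo test: media scaled up in the treatment region, held flat in control.
# A pre-period ratio adjusts for the regions not being identical in size.
pre  = {"treatment": 1_000, "control": 1_250}   # conversions before the test
post = {"treatment": 1_380, "control": 1_300}   # conversions during the test

# Expected treatment conversions had nothing changed, scaled by the control
# region's own drift over the same window (this absorbs seasonality).
expected = pre["treatment"] * (post["control"] / pre["control"])
incremental = post["treatment"] - expected
print(f"expected: {expected:.0f}, incremental: {incremental:.0f}")
```

The control-drift adjustment is exactly where the flaws Ron lists live: a local news cycle or demographic quirk in either region silently distorts `expected`, which is why he stresses honesty about the method’s limits.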

There’s no one-size-fits-all answer, Ron explains. The decision to run these tests depends heavily on your business’s stage, scale, and risk tolerance. If the outcome of a test could potentially lead to doubling your marketing spend, and that failure could sink your business, then a more cautious approach is warranted. But if the stakes aren’t as high, testing can be more aggressive. Ron emphasizes that understanding the business context is essential, and companies need skilled teams and measurement partners who grasp the nuances of the business they are measuring.

Ron illustrates this point with an example from a client in the golf apparel industry. For this client, sponsorship effectiveness varied significantly depending on the type of golf tournament being played—whether it was a major or a regular event. Without this knowledge, any model built for their business would miss a critical factor in evaluating sponsorship performance. It’s this kind of detailed understanding that makes testing and measurement truly meaningful.

Key takeaway: Time-based and geo-based tests can offer insights, but they come with inherent limitations. The decision to use them should be based on your business’s specific context, the potential risks, and what you hope to learn. Understanding your industry’s nuances is crucial to making these tests effective and avoiding costly missteps.

Back to the top ⬆️

The Power of a Clean Data Warehouse in Marketing Measurement

One of the most overlooked yet critical components in marketing measurement is the data warehouse—the foundation that fuels everything from attribution models to campaign optimization. Ron emphasizes that marketers often miss this step, relying on martech tools with built-in data instead of focusing on the underlying infrastructure that powers meaningful insights. He breaks it down into three essential components: having the right data, ensuring proper categorization, and connecting the data seamlessly.

First, having the right data is paramount. If you’re running an eye-catching video ad on Pinterest or Snapchat, you need to know which users are engaging with it. Without that data, measuring the true impact is impossible. Ron notes that Rockerbox has done the heavy lifting by forming partnerships that grant access to such crucial data. But it’s not just about external data sets—marketers need to make sure they’re capturing everything top-of-funnel and flowing it into their data warehouse for future analysis.

Next, the data must be properly categorized. This is where many companies falter, as different platforms use different terms for essentially the same things. For instance, “prospecting” in Google might be called “upper funnel” in Facebook and “branding” on Snapchat. Without a unified way of categorizing this data, your reporting will be fragmented, giving an inaccurate picture of performance. Even something as small as a typo in a campaign name can disrupt the categorization and skew the results.
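The categorization problem Ron describes—“prospecting” versus “upper funnel” versus “branding”—is typically handled with an explicit mapping layer in the warehouse. A minimal sketch; the vocabulary table and bucket names are illustrative:

```python
# Map each platform's own funnel vocabulary onto one shared taxonomy so that
# reporting aggregates cleanly. An unmapped label (e.g. a campaign-name typo)
# falls through to "uncategorized" instead of silently forming a new bucket.
TAXONOMY = {
    ("google", "prospecting"): "upper_funnel",
    ("facebook", "upper funnel"): "upper_funnel",
    ("snapchat", "branding"): "upper_funnel",
    ("google", "remarketing"): "lower_funnel",
}

def categorize(platform, campaign_label):
    key = (platform.lower(), campaign_label.lower().strip())
    return TAXONOMY.get(key, "uncategorized")

print(categorize("Google", "Prospecting"))  # upper_funnel
print(categorize("Google", "Prospectng"))   # uncategorized (typo surfaced)
```

Surfacing typos as “uncategorized” rather than dropping them mirrors Ron’s warning: a single misspelled campaign name shouldn’t quietly skew the whole report.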

Finally, Ron discusses the importance of connecting the data. This isn’t just about matching marketing efforts with conversions, but also tying in internal business data. For example, connecting the path to conversion for order ID #125 to the actual profit and loss (P&L) for that order. This involves identity resolution, join keys, and a deep understanding of how to link different data sets across marketing and business systems. At Rockerbox, they’ve split their focus between data products (focused on building the foundation) and analysis products (focused on advanced marketing analytics) to ensure companies can maximize the value of their data.
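The order-ID example above boils down to a join across systems on a shared key. A minimal sketch, assuming hypothetical field names and numbers (this is not Rockerbox’s implementation):

```python
# Hypothetical sketch: linking a marketing path-to-conversion record
# to internal order economics via a shared join key (order_id).
# Field names and figures are illustrative.

paths = {  # from the measurement system
    125: ["facebook_prospecting", "google_brand_search", "direct"],
}

orders = {  # from the internal order/P&L system
    125: {"revenue": 180.00, "cogs": 95.00, "shipping": 12.00},
}

def order_profit(order_id: int):
    """Return (touchpoint path, gross profit) for one order,
    or None if either side of the join is missing."""
    if order_id not in paths or order_id not in orders:
        return None
    o = orders[order_id]
    profit = o["revenue"] - o["cogs"] - o["shipping"]
    return paths[order_id], round(profit, 2)

print(order_profit(125))
```

In practice the hard part is identity resolution, which is making sure both systems agree on what `order_id` (or the equivalent join key) refers to; once they do, the join itself is straightforward.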

A great example Ron gave is using post-purchase survey data and promo code data to measure marketing effectiveness. Let’s say a customer visits your site directly, but in a post-purchase survey, they mention TV as the source where they first heard about you. That’s valuable data, crediting TV with some level of contribution to the purchase. However, if you’re not collecting that survey data cleanly or passing it properly to your measurement tools, it’s almost as if the data doesn’t exist.
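The survey example can be sketched the same way: if the self-reported source is captured and keyed to the order, the measurement layer can use it to reinterpret “direct” traffic. Field names and the precedence rule here are illustrative assumptions, not a described methodology:

```python
# Hypothetical sketch: overlaying post-purchase survey answers onto
# click-based attribution. An order that looks like "direct" traffic
# may be self-reported as TV-driven. All names are illustrative.

clickstream_source = {"ORD-1": "direct", "ORD-2": "google_search"}
survey_response = {"ORD-1": "tv"}  # only some buyers answer the survey

def effective_source(order_id: str) -> str:
    """Prefer the self-reported source when the clickstream shows
    'direct' (no observable paid touch); otherwise keep the
    observed source."""
    observed = clickstream_source.get(order_id, "unknown")
    reported = survey_response.get(order_id)
    if observed == "direct" and reported:
        return reported
    return observed

print(effective_source("ORD-1"))  # tv
print(effective_source("ORD-2"))  # google_search
```

If the survey answer never makes it into the warehouse keyed to the order, this lookup is impossible, which is Ron’s point: uncollected data might as well not exist.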

Without a solid data foundation, Ron warns, companies are at a long-term disadvantage. The ability to measure, optimize, and attribute accurately all hinges on having the right data infrastructure in place.

Key takeaway: Building a strong data foundation is crucial for accurate marketing measurement. Ensure you have the right data flowing into your warehouse, categorize it consistently across platforms, and connect it with your internal business data. Without this, any advanced analytics like MMM or MTA will be less effective and could even lead to misguided conclusions.

Back to the top ⬆️

Balancing Career and Family as a Founder

When asked about balancing the demands of being a CEO, a father, and a former avid runner, Ron offers an honest response: there’s no perfect balance. As the co-founder of Rockerbox, his mind is often consumed by the company, even during personal time. He admits that even while watching his child play, thoughts of the business linger in the back of his mind. It’s a constant struggle, and balance isn’t something that comes naturally.

However, Ron finds clarity in small moments, particularly in the role of being a father. His child’s needs are immediate and unchanging—whether it’s a good day or bad, children demand your attention. For Ron, this serves as a grounding force, reminding him that some aspects of life operate on their own terms, independent of professional stress.

Running has also been a significant outlet for Ron. Though he admits he’s not as consistent as he’d like, he notes the profound impact physical activity has on his mental state. On difficult days, when stress takes a toll, going for a run helps reset his perspective. It’s a simple but powerful way for him to shift gears, improve his mood, and come back refreshed. He wishes he could make more time for it.

In the end, Ron’s message is clear: balancing the roles of CEO, father, and individual is difficult, and while there may not be a perfect formula, finding small, reliable outlets—like parenting and running—can make the difference.

Key takeaway: Balance is elusive, especially for entrepreneurs, but small, grounding rituals—whether it’s being present with family or staying active—can offer moments of clarity and help recharge. Prioritize these pockets of relief when possible to maintain well-being amid the chaos.

Back to the top ⬆️

Episode Recap

In the world of marketing, causality is the ultimate prize. We want to know—down to the last click—what truly drives a sale, where the tipping points are, and how to allocate resources for maximum impact. Multi-Touch Attribution (MTA), while not the key to causality, still holds value as a guidepost. Instead of being written off, it’s a step in the journey—a way to start understanding where your efforts are having an impact. The problem comes when MTA is expected to do more than it’s designed for. It doesn’t solve the puzzle of cause and effect, but it gives you a map to help steer the ship. Use it as a tool to refine your strategy, not as the final answer.

Where MTA falls short, incrementality testing steps in. This is the true battleground where you face off against the question: “What would have happened if we hadn’t run that campaign?” Getting clear on incrementality lets you uncover real, measurable impacts—the kind you can take to the C-suite with confidence. But here’s the twist: the perfect model doesn’t exist. Chasing after the ultimate methodology is a waste of time. Instead, smart marketers focus on asking the right questions and using the best tools available to answer them. If you can isolate where your efforts are making a difference, you’re closer to understanding causality.

Most marketers start with tools like Google Analytics or the standard reporting platforms, and it’s tempting to stay there—playing it safe, relying on the data that’s easiest to grab. But true attribution isn’t that simple. To really get a handle on what’s driving results, you need to go deeper—connecting the dots across channels, bringing in first-party data, and using statistical models to fill in the gaps. As third-party cookies crumble, this becomes even more critical. The game is changing, and only those willing to adapt and rebuild their approach to measurement will come out on top.

If this sounds complicated, that’s because it is. But the lesson here isn’t to throw everything at attribution early on. Startups and smaller teams need to get one channel working before worrying about perfect measurement. Build momentum first, then add complexity as you grow. Once you have a foundation of success, that’s when it’s time to layer in more sophisticated models like MTA or MMM to fine-tune and amplify what’s already working.

Ultimately, measurement is a marathon, not a sprint. The tools you use today will evolve, and so will your approach to understanding causality. But keep your focus on the core question: what’s truly driving growth? When you’re clear on that, everything else starts to fall into place.

Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Ron👇

✌️


Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
