176: Rajeev Nair: Causal AI and a unified measurement framework

A promotional graphic featuring three logos: MoEngage, Knak, and RevenueHero.

What’s up everyone, today we have the pleasure of sitting down with Rajeev Nair, Co-Founder and Chief Product Officer at Lifesight.

Summary: Rajeev believes measurement only works when it’s unified or multi-model, a stack that blends multi-touch attribution, incrementality, media mix modeling and causal AI, each used for the decision it fits. At Lifesight, that means using causal machine learning to surface hidden experiments in messy historical data and designing geo tests that reveal what actually drives lift. Attribution alone can’t tell you what changed outcomes. Rajeev’s team moved past dashboards and built a system that focuses on clarity, not correlation. Attribution handles daily tweaks. MMM guides long-term planning. Experiments validate what’s real. Each tool plays a role, but none can stand alone.


Recommended Martech Tools 🛠️

We only partner with products that are chosen and vetted by us. If you’re interested in partnering, reach out here.

📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.

🎨 Knak: No-code email and landing page creator to build on-brand assets with an editor that anyone can use.

🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster, easier and without back-and-forth.

About Rajeev

Illustration of a person with curly hair wearing a blue shirt, set against a cosmic background filled with planets, stars, and geometric shapes.

Rajeev Nair is the Co-Founder and Chief Product Officer at Lifesight, where he’s spent the last several years shaping how modern marketers measure impact. Before that, he led product at Moda and served as a business intelligence analyst at Ebizu. He began his career as a technical business analyst at Infosys, building a foundation in data and systems thinking that still drives his work today.

Digital Astrology and the Attribution Illusion

Lifesight started by building traditional attribution tools focused on tracking user journeys and distributing credit across touchpoints using ID graphs. The goal was to help brands understand which interactions influenced conversions. But Rajeev and his team quickly realized that attribution alone didn’t answer the core question their customers kept asking: what actually drove incremental revenue? In response, they shifted gears around 2019, moving toward incrementality testing. 

They began with exposed versus synthetic control groups, then evolved to more scalable, identity-agnostic methods like geo testing. This pivot marked a fundamental change in their product philosophy: from mapping behavior to measuring causal impact.

Rajeev shares his thoughts on multi-touch attribution and the evolution of the space.

The Dilution of the Term Attribution

Attribution has been hijacked by tracking. Rajeev points straight at the rot. What used to be a way to understand which actions actually led to a customer buying something has become little more than a digital breadcrumb trail. Marketers keep calling it attribution, but what they’re really doing is surveillance. They’re collecting events and assigning credit based on who touched what ad and when, even if none of it actually changed the buyer’s mind.

The biggest failure here is causality. Rajeev is clear about this. Attribution is supposed to tell you what caused an outcome. Not what appeared next to it. Not what someone happened to click on right before. Actual cause and effect. Instead, we get dashboards full of correlation dressed up as insight. You might see a spike in conversions and assume it was the retargeting campaign, but you’re building castles on sand if you can’t prove causality.

Then comes the complexity problem. Today’s marketing stack is a jungle. You have:

  • Paid ads across five different platforms
  • Organic content
  • Discounts
  • Seasonal shifts
  • Pricing changes
  • Product updates

All these things impact results, but most attribution models treat them like isolated variables. They don’t ask, “What moved the needle more than it would’ve moved otherwise?” They ask, “Who touched the user last before they bought?” That’s not measurement. That’s astrology for marketers.

“Attribution, in today’s marketing context, has just come to mean tracking. The word itself has been diluted.”

Multi-touch attribution doesn’t save you either. It distributes credit differently, but it’s still built on flawed data and weak assumptions. If you’re measuring everything and understanding nothing, you’re just spending more money to stay confused. Real marketing optimization requires incrementality analysis, not just a prettier funnel chart.

To Measure What Caused a Sale, You Need Experiments

Even with perfect data, attribution keeps lying. Rajeev learned that the hard way. His team chased the attribution grail by building identity graphs so detailed they could probably tell you what toothpaste a customer used. They stitched together first-party and third-party data, mapped the full user journey, and connected every touchpoint from TikTok to in-store checkout. Then they ran the numbers. What came back wasn’t insight. It was statistical noise.

Every marketing team that has sunk months into journey mapping has hit the same wall. At the bottom of the funnel, conversion paths light up like a Christmas tree. Retargeting ads, last-clicked emails, discount codes, they all scream high correlation with purchase. The logic feels airtight until you realize it’s just recency bias with a data export. These touchpoints show up because they’re close to conversion. That doesn’t mean they caused it.

“Causality is essentially correlation plus bias. Can we somehow manage the bias so that we could interpret the observed correlation as causality?”

What Rajeev means is that while correlation on its own proves nothing, it’s still the starting point. You need correlation to even guess at a causal link, but then you have to strip out all the bias (timing, selection, confounding variables) before you can claim anything actually drove the outcome. It’s a messy process, and attribution data alone doesn’t get you there.

That’s the puzzle. You can’t infer real marketing effectiveness just from journey data. You can’t say the billboard drove walk-ins if everyone had to walk past it to enter the store. You can’t say coupons created conversions if they were handed out after someone had already walked in. Attribution doesn’t answer those questions. It only tells you what happened. It doesn’t explain why it happened.

To measure causality, you need experiments. Rajeev gives it straight: run controlled tests. Put a billboard at one store, skip it at another. Offer discounts to some, hold them back from others. Then compare outcomes. Only when you hold a variable constant and see lift can you say something worked. Attribution on its own is just a correlation engine. And correlation, without real-world intervention, tells you absolutely nothing useful.

Key takeaway: Attribution data without controlled testing isn’t useful. If you want to know what drives results, design experiments. Stop treating customer journeys like gospel. Use journey data as a starting point, then isolate variables and measure actual lift. That way you can make real decisions instead of retroactively rationalizing whatever got funded last quarter.

Back to the top ⬆️

The Limitations of Incrementality Tests and How Quasi Experiments Can Help

Most teams think they’re being scientific when they run an incrementality test. But the truth is, these tests are fragile. Geo tests are high-effort and easy to mess up. Quasi experiments are directional at best and misleading at worst. If you’re not careful with design, timing, and interpretation, you’ll end up with results that look rigorous… but aren’t.

Why Most Teams Get Geo Testing Completely Wrong

Geo testing gets romanticized as this high-integrity measurement method, but most teams treat it like a side quest. They run it once, complain it was expensive, then go back to attribution dashboards because they’re easier to screenshot in a slide deck. The truth is, geo testing takes guts. It means pulling spend from regions that bring in real revenue. That’s not a simulation. It’s a real-world test with real-world consequences.

Rajeev breaks it down without the fluff. The entire goal is to recreate something you never get in marketing: a counterfactual. What would have happened if you didn’t run the campaign? You can’t rewind time, so you simulate it by creating two sets of regions (control and treatment) and assume they behave similarly. Then you intervene in just one. If you’ve done it right, you get a clean signal. If you haven’t, you get noise that looks like insight but isn’t.
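
To make the counterfactual concrete, here's a minimal sketch (in Python) of the comparison a geo test ultimately boils down to: a difference-in-differences between matched treatment and control regions. The region series and numbers below are invented for illustration, not Lifesight's implementation.

```python
import numpy as np

# Illustrative daily revenue per region, before and after the intervention.
# Real inputs would come from your sales data for matched region sets.
treatment_pre  = np.array([102, 98, 105, 101, 99], dtype=float)
treatment_post = np.array([121, 118, 125, 119, 122], dtype=float)
control_pre    = np.array([100, 97, 103, 102, 98], dtype=float)
control_post   = np.array([104, 101, 106, 103, 105], dtype=float)

# The control regions stand in for the counterfactual: what the treated
# regions would have done if the campaign had never run.
treatment_change = treatment_post.mean() - treatment_pre.mean()
control_change   = control_post.mean() - control_pre.mean()
estimated_lift   = treatment_change - control_change

print(f"treatment change: {treatment_change:+.1f}/day")
print(f"counterfactual drift (control): {control_change:+.1f}/day")
print(f"estimated incremental lift: {estimated_lift:+.1f}/day")
```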

The objections always sound the same. It’s too expensive. It takes too long. The team is already overloaded. Rajeev gets it, but he doesn’t let those excuses slide. He calls out how teams overestimate the cost and underestimate the upside. You can run geo tests in smaller, lower-revenue areas, just 5 to 10 percent of your total business, and still get statistically powerful results. You just need to care enough to design it right.

Geo testing works best when paired with a layered measurement stack:

  • Use attribution to spot fast directional trends
  • Use modeling to generate smart hypotheses
  • Use geo tests to confirm the bets that matter

“It’s a slower process,” Rajeev says, “but you can adopt the learning with more confidence.”

That confidence is the whole point. You don’t need dozens of geo tests a year. You need a few that answer the right questions and give your team the conviction to act without second-guessing the numbers.

Quasi Experiments Help You Test Without Tanking Your Quarter

Most teams run from incrementality testing because they’re scared of the opportunity cost. That fear is valid. You pull spend from a top-performing channel, slow down acquisition, and cross your fingers for clean results. Rajeev suggests a better starting point: quasi experiments. These let you infer causality from existing historical variation without having to disrupt your active campaigns. They’re structured ways to surface signal from past chaos.

A quasi experiment is like finding a natural test you accidentally ran in the past. Instead of stopping campaigns or launching a formal test, you look at moments where spend changed on its own (like when you paused Facebook for two weeks last year) and see if performance changed in a meaningful way. You’re not setting up the test from scratch; you’re analyzing the real-world messiness to see if patterns emerge.

Instead of pausing paid or burning budget to run a test, you look at the natural fluctuations in your media mix over time. That’s the foundation of any good MMM. If the spikes and dips in spend across channels already mirror performance shifts, you might not need a new test to see what’s driving lift. Rajeev treats this as your hypothesis engine. It narrows your focus before you start tinkering with real dollars.
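
As a rough sketch of that hypothesis engine, the snippet below scans a spend series for natural breaks (sharp, unplanned drops) and reads the directional revenue shift around each one. The thresholds, windows, and data are all hypothetical:

```python
import numpy as np

def natural_breaks(spend, rel_drop=0.5, window=7):
    """Flag days where average spend over the next `window` days fell
    sharply versus the prior window -- candidate accidental experiments."""
    breaks, last = [], -window
    for t in range(window, len(spend) - window):
        before = spend[t - window:t].mean()
        after = spend[t:t + window].mean()
        if before > 0 and after < before * (1 - rel_drop) and t - last >= window:
            breaks.append(t)
            last = t
    return breaks

def revenue_shift(revenue, t, window=7):
    """Directional read only: seasonality and other confounders are
    NOT controlled here, which is exactly the quasi-experiment caveat."""
    return revenue[t:t + window].mean() - revenue[t - window:t].mean()

# Illustrative series: a channel paused for two weeks mid-history.
rng = np.random.default_rng(0)
spend = np.concatenate([np.full(30, 1000.0), np.full(14, 100.0), np.full(30, 1000.0)])
revenue = 4.5 * spend + rng.normal(0, 300, spend.size)

for t in natural_breaks(spend):
    print(f"break near day {t}: revenue shift {revenue_shift(revenue, t):+.0f}/day")
```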

But even with the best hypothesis, you still need rigor. Rajeev gives a sharp example: a brand spends $1 million monthly and sees $5 million in return. The next month they add $50,000 to try a new channel, and revenue jumps to $6 million. Looks like a win. But if it’s December and they sell winter jackets, the lift might be seasonal, not causal. That’s where quasi experiments fall short, and why you still need controlled testing.

Geo testing is your fix. By running parallel tests in two regions that behave the same in past Decembers, you can isolate the effect of that new $50,000. It’s not fast. But it’s real. Rajeev calls out a key tradeoff: short tests require bigger spend, long tests can be cheaper but take patience. Either way, design matters more than deployment. Bad design leads to bad reads. The math won’t save you.
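
That spend-versus-duration tradeoff falls straight out of test power. Here's a back-of-envelope sketch, using a standard two-sample normal approximation and a hypothetical noise level: more incremental spend means a bigger detectable lift, which shrinks the days required.

```python
from math import ceil
from statistics import NormalDist

def days_needed(daily_sigma, detectable_lift, alpha=0.05, power=0.8):
    """Two-sample normal approximation: days per arm needed to detect a
    given absolute daily lift against day-to-day revenue noise."""
    z = NormalDist()
    z_alpha, z_power = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) * daily_sigma / detectable_lift) ** 2)

# Hypothetical day-to-day noise of $8k per region set. Spending more
# produces a bigger expected lift, which shortens the required test.
for lift in (2_000, 4_000, 8_000):
    print(f"detect ${lift:,}/day of lift: ~{days_needed(8_000, lift)} days per arm")
```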

“You can operationalize testing all day, but if you design it wrong, none of it matters.”

Key takeaway: Use quasi experiments to surface signals and sharpen your hypotheses, then validate them with a few high-quality geo tests each quarter. Choose regions with low revenue risk but high signal value. Budget three to five weeks. Assign someone to own the integrity of the test. Treat the design like a product; if it can’t detect the effect, it’s not worth running. Modeling and attribution can guide you, but clarity comes from disciplined experimentation.

Back to the top ⬆️

The 2025 AI and Marketing Performance Index 🤖

Research conducted by GrowthLoop shows a huge disconnect between the hype around AI and the real world that marketers work in today. 

In this report, you’ll discover:

  • How teams are leveraging AI within their marketing cycles, from audience segmentation to insights
  • How marketers think about the AI <> human partnership with their existing stacks
  • The biggest hurdles marketers face in AI adoption, and the ways they’re overcoming them

Cover of the report titled 'The 2025 AI and Marketing Performance Index' featuring graphs and key insights on optimizing marketing strategies and AI adoption.

Why Causal Inference Matters More Than Ever in Marketing Measurement

Marketers love to throw around the word causal. It sounds sharp. Scientific. Confident. But most of what gets labeled causal in marketing is just dressed-up correlation. Rajeev breaks it down with a dose of clarity most vendors hope you never hear. If you’re not randomizing, balancing groups, and removing bias, you’re just chasing noise in fancy charts. And that’s the easy part. The harder truth? Even your best experiments only tell you what worked this quarter for your business. Next quarter, that result might flip. There is no universal law of paid social.

Everything in marketing is what Rajeev calls “quasi-causal.” You never get clean conditions. You don’t control who sees your ad, what they ate that morning, or what macro trend is nudging their behavior. That’s why geographic testing has become a rare bright spot. It lets you isolate effects using natural market variation. Want to compare Facebook to TikTok? You can’t rely on each platform’s own study. They’re not using the same methods, and worse, they’re grading their own homework. Geo testing gives you a shared baseline. It’s still messy, but at least the mess is consistent.

The next layer is causal machine learning. This is where things get weird. Think of it like this: you’ve got mountains of historical data. Hidden inside are accidental experiments, moments where one region saw a change, another didn’t, and the rest stayed stable. Causal ML tries to detect those moments and measure lift like you planned it all along. It’s faster. Sometimes it’s the only ethical option, especially when actual experimentation would cross a line. For example, you can’t assign one group to smoke cigarettes and the other to abstain. But you can find groups that did those things on their own, then model the differences.

Still, causal ML has gaps. The tooling is years behind what marketers get with regression models. There’s no agreed way to benchmark performance. So Rajeev’s team borrows maturity from the ML world and layers it onto causal structures. They start with a causal graph, which defines how variables influence each other. Then they run traditional models like Bayesian regression. This combo lets them codify assumptions and explain results in a way that actually makes sense to stakeholders. Instead of pretending every variable is independent, the model reflects real dependencies, like how a TV ad boosts brand search, which inflates your Google ROI.

“Google brand search isn’t an independent variable,” Rajeev explains. “You’re not spending there in isolation. People are searching because they saw your ad somewhere else. You can’t ignore that.”
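
Here’s a toy illustration of why that dependency matters, using simulated data and plain least squares standing in for the Bayesian regression Rajeev describes. When brand search is treated as an independent channel, it absorbs credit that belongs to the upstream TV spend; modeling the graph’s dependencies reallocates it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# A tiny causal graph, encoded in the simulation itself:
#   tv_spend -> brand_search -> revenue, plus tv_spend -> revenue directly.
tv_spend = rng.uniform(50, 150, n)
brand_search = 0.8 * tv_spend + rng.normal(0, 5, n)   # TV drives brand queries
revenue = 2.0 * tv_spend + 3.0 * brand_search + rng.normal(0, 20, n)

def ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])         # add intercept
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# Naive read: brand search as an independent channel. It soaks up
# the credit that actually belongs to the upstream TV spend.
naive = ols(brand_search.reshape(-1, 1), revenue)
# Graph-aware read: model both variables together.
joint = ols(np.column_stack([tv_spend, brand_search]), revenue)

print(f"naive brand-search effect: {naive[1]:.2f} (true value: 3.0)")
print(f"joint effects: tv {joint[1]:.2f}, brand search {joint[2]:.2f}")
```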

Causal AI shifts the work from running live experiments to scanning your historical data for hidden ones already baked in, so you can uncover causality without rerunning the whole playbook.

Key takeaway: Build causality like you’re building something real, not just ticking a box. Use historical data to surface natural experiments hiding in plain sight, then apply models that can handle the mess. That way you can learn what’s truly driving outcomes without spinning up new tests every quarter. If you rely on platform lift studies or click-path reports, you’re not measuring impact, you’re buying someone else’s narrative.

Back to the top ⬆️

Why You Need a Unified Measurement Stack to Make Better Decisions

People love arguing about which measurement methodology is best. Should you trust MMM or incrementality? Is MTA totally dead?

Instead of championing a single approach, Rajeev wants teams to stop treating measurement like a philosophy class and start treating it like a toolbox. Different questions need different tools. If you’re trying to cut spend, shift budget, or swap out creatives, you shouldn’t be dragging the same model across all three. That’s how you end up stuck in analysis purgatory, debating coefficients while your competitors ship three new campaigns.

Making Sense of Conflicting Marketing Models

Lifesight is built around the idea of a unified measurement, or multi-model, approach. Rajeev maps decisions onto three time horizons:

  1. Long-term strategic
  2. Mid-term tactical
  3. Short-term operational

Each one requires a different model. You can’t use MMM to choose which creative to pause this afternoon, and you can’t use multi-touch attribution to justify a quarterly reallocation of budget. These aren’t competing tools; they’re scoped for different layers of reality. Rajeev points out that even MTA, which gets trashed in most analytics circles, still has a job when you’re choosing between 30 Facebook creatives with a $10k budget to spend today.

“If your incrementality test says Facebook works, you still need to decide which creatives to scale. That’s where touch-based attribution has a role.”

Rajeev calls the framework unified measurement, but he’s clear that it’s more than a model mashup. It’s a planning system. The surface looks simple; just a tool where you type in goals, constraints, and budget ranges. But underneath, there’s a tight orchestration of multiple models, each running in the background to simulate tradeoffs. You don’t need to dig into adstock decay curves unless you want to. Yet every assumption is documented, every input traceable. No secret sauce, no hidden knobs behind the curtain.

This kind of setup only works when you start with the question, not the dashboard. Rajeev credits Stefano Puntoni’s book Decision-Driven Analytics for influencing that philosophy. You begin with the decision you want to make, then check what the data actually supports. If you start with the data and fish for answers, you’re already lost.

Instead of using every model to answer the same question, use each one to answer the question it was built for. Let’s unpack that a bit more.

Stop Asking One Model to Answer Every Question

Most teams want a model to give them a single, definitive answer. Where should we spend more? What’s driving revenue? Which channels are dragging us down? They expect one tidy output they can drop into a slide deck. Rajeev sees this request constantly. But the truth is, no single method can answer all those questions well. Measurement isn’t a monolith. It’s a layered system with each method tuned for a different type of decision.

Let’s say you’re running a brand that sells on both Shopify and Amazon. Your spend is spread across Facebook, Google, email, and Amazon’s ad network. Some of that spend lifts both channels. Some cannibalizes the other. Rajeev’s team builds a model for each revenue stream, then merges them into a unified view. It’s not perfect, but it respects how the world actually works. Facebook might drive a sale on Amazon, and email might nudge someone toward your DTC site three days later. Your model should reflect that mess.

“We apply absolute modeling rigor at each level, then build a master model focused on overall revenue,” Rajeev says.

Even then, you can’t trust the model blindly. If Facebook prospecting and Google top-of-funnel are highly correlated, the model won’t be able to tell which is doing the heavy lifting. That’s where testing becomes critical. Rajeev recommends running structured experiments (like a geo test on Facebook) to validate assumptions. Once you’ve tested a variable, you can feed that outcome back into your model to calibrate it. That calibration step tightens the model’s accuracy across the board, not just for the one channel you tested.
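
A minimal sketch of that calibration step, with hypothetical numbers: derive a multiplier from the tested window, then use it to temper the model’s estimates elsewhere.

```python
def calibration_multiplier(modeled_lift: float, tested_lift: float) -> float:
    """Ratio of experimentally measured lift to model-estimated lift for
    the same channel and window."""
    return tested_lift / modeled_lift

# Hypothetical: during the geo test window the MMM credited Facebook with
# $400k of lift, but the experiment only measured $250k incremental.
m = calibration_multiplier(modeled_lift=400_000, tested_lift=250_000)

# The multiplier then tempers the model's estimate for untested periods.
next_quarter_modeled = 1_200_000
print(f"multiplier {m:.2f} -> calibrated lift ~${next_quarter_modeled * m:,.0f}")
```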

Attribution gets folded in for operational awareness. You still need to know which ad set is catching fire or which creative is tanking. But Rajeev calls out the obvious: attribution platforms overcount. They all claim credit. When that happens, you can’t just trust the numbers as-is. That’s why his team uses multipliers, calibrated from MMM and experiments, to adjust attribution and remove duplicate credit. It’s messy, and it’s not academically pure, but it’s more useful than pretending the sum of platform claims equals your revenue.
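
Conceptually, the deduplication looks something like the sketch below. The claimed figures and multipliers are invented for illustration; in practice they’d be calibrated from MMM and experiments as Rajeev describes.

```python
# Platform-claimed revenue vs. calibrated multipliers derived from MMM
# and experiments. All figures are invented for illustration.
claimed = {"facebook": 600_000, "google": 500_000, "email": 200_000}
multipliers = {"facebook": 0.55, "google": 0.70, "email": 0.40}

deduped = {ch: rev * multipliers[ch] for ch, rev in claimed.items()}
print(f"platforms claim ${sum(claimed.values()):,}")
print(f"deduped incremental estimate ${sum(deduped.values()):,.0f}")
for ch, rev in sorted(deduped.items()):
    print(f"  {ch}: ${rev:,.0f}")
```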

Key takeaway: Treat measurement as a multi-tool, not a magic eight ball. Use marketing mix modeling for long-term planning, experiments for the causal validation that sharpens the model’s edge, and attribution, sparingly, for quick daily decisions that only need a directional signal. Each method answers a different question. Align your tools to the decisions you actually need to make, and stop forcing one model to do everything.

Back to the top ⬆️

Human Challenges vs. Technical Challenges of Measurement

Marketers over-rely on attribution and they over-measure everything. Rajeev sees both as symptoms of the same problem: using data to defend decisions instead of improving them. Attribution gets twisted into validation. Measurement turns into micromanagement. The fix is targeted rigor where decisions matter, paired with models that earn trust through forecast accuracy, not post-hoc storytelling.

Stop Using Attribution to Validate Your Job

Attribution models were supposed to help marketers make smarter decisions. Instead, they’ve become tools for survival. In tech, where CMOs often cycle out before their strategies take root, the pressure to prove impact gets extreme. Budget justification replaces honest measurement. Rajeev calls this the “soft problem” of attribution, the part where human incentives twist clean data into corporate storytelling.

Most vendors know this game. You want your model to show lift? Pull back on the media mix model and push weight to multi-touch. Want to validate that expensive campaign? Crank up self-reported attribution. In platforms where assumptions are buried, this manipulation goes unnoticed. But in unified systems that surface every assumption in the interface, the manipulation is right there in the open. It’s not foolproof, but it makes the spin easier to spot.

Rajeev sees that as necessary friction. Marketing sits in a probabilistic world, while the rest of the C-suite reads from deterministic spreadsheets. There’s no ambiguity in a $3 million loss. There’s plenty in a vague “uplift” from a new YouTube campaign. To close that trust gap, transparency has to replace bravado. A model should show where the guesses live, where the confidence is earned, and where the uncertainty remains.

To make it real, Rajeev recommends treating measurement like a scientific process, not a faith-based KPI generator. Before optimizing budget, test the model’s forecast. Feed it next month’s spend plan and see if it predicts performance accurately. If it does, the model has earned its place. If it doesn’t, go back and refine. Forecast first, optimize later. That keeps you grounded in real-world accountability instead of chasing vanity metrics.
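
In practice, that gate can be as simple as a holdout forecast check. Here’s a sketch with hypothetical weekly numbers and an arbitrary accuracy threshold:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error of the model's forecast."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)))

# Hypothetical holdout: feed the model next month's spend plan, record
# its weekly revenue forecast, then compare against what actually lands.
forecast = [510_000, 492_000, 530_000, 505_000]
actual   = [498_000, 505_000, 512_000, 521_000]

error = mape(actual, forecast)
print(f"holdout MAPE: {error:.1%}")
if error < 0.10:   # the threshold is a judgment call, not a standard
    print("forecast holds up -- the model has earned optimization duty")
else:
    print("refine the model before trusting its budget recommendations")
```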

“Your self-reported attribution numbers will always look prettier than incrementality ones,” Rajeev says. “But incrementality can be proven or disproven. That’s where the real value lives.”

Stop Measuring Everything

Most teams treat measurement like a superstition. If it moves, track it. If it doesn’t move, track it harder. Every campaign, every ad variant, every subject line gets its own micro-dashboard. But Rajeev doesn’t buy into the measurement panic. He thinks teams are confusing thoroughness with effectiveness.

You don’t need a double-blind experiment to validate your Thursday newsletter. But if you’re spending 10 percent of your total budget on a new channel, then yes, you should absolutely know what it’s doing. Rajeev argues for a tiered system of rigor:

  • Channels that spend real dollars and have the potential to scale deserve careful measurement
  • High-velocity tests like creative variations need lighter-touch tracking to avoid analysis paralysis
  • Always-on programs like SEO and lifecycle can be monitored without micromanaging every variable

“You can only control what you can measure. But you don’t need to apply the same level of rigor across everything.”

For early-stage brands or teams still stuck in single-channel mode, Rajeev suggests starting small. A scrappy framework is fine. The danger comes from pretending you’re running real experiments when you’re not. If the test design is sloppy, the inference will be wrong, and you’ll scale the wrong thing. That’s not strategy; that’s just noise in a nice chart.

Once your operation gets messy (offline channels, indirect sales, inconsistent touchpoints), you need formal measurement discipline. That includes causal testing, media mix models, and triangulated methods. But even then, the job isn’t to measure more. It’s to measure smarter. Rigor is a resource. Spend it where it counts.

Key takeaway: Start by tracking all major channels just enough to spot patterns in volume, spend, and lift. Once you know which ones drive real budget shifts or campaign decisions, double down on those. Build deeper models where the stakes are high. Before you optimize anything, run a forecast with your model to see how well it predicts outcomes; that’s where trust is earned. Show your team the assumptions, and show finance where the logic holds and where it doesn’t. Measurement should help you steer, not just explain what already happened.

Back to the top ⬆️

When Will AI Agents Run Marketing Measurement on Their Own?

There’s a version of the future where AI agents are sitting in your dashboards right now, making optimization decisions before you’ve had your morning coffee. That future is getting a lot of airtime. Vendors are racing to ship “autonomous” assistants with catchy names, lofty promises, and zero accountability when things break. Rajeev has seen this story before. He’s seen what happens when automated systems react to each other without a human filter, like the flash crash of 2010, where bots in financial markets tripped over each other and wiped out billions in minutes.

At Lifesight, they’re building their agent, Mia, with a slower hand. Rajeev isn’t chasing full autonomy. He’s building tools that help marketers ask better questions and get clearer answers. Most marketers still struggle with statistical concepts like causal inference and confidence intervals. Training them to interpret raw output from measurement models is a losing battle. Giving them a conversation layer that speaks plain English? That’s something they’ll actually use.

“If your agent recommends refreshing Facebook creative A, B, and C, there better be a reason you can inspect,” Rajeev says. “Otherwise it’s just guesswork in a lab coat.”

Instead of ripping decisions out of the marketer’s hands, Mia waits. She generates recommendations, surfaces the reasoning, and requires human sign-off. Rajeev doesn’t see that changing anytime soon. The cost of being wrong (because of hallucination, missing context, or misinterpreted ambiguity) is still too high. You’re dealing with live budgets and real customer journeys. Until the systems can reason like domain experts, they’ll need oversight.

The roadmap is intriguing, though. Protocols like MCP and A2A are emerging to help agents talk to each other in structured, trackable ways. Eventually, you could have agents querying systems, analyzing lift, syncing creative performance, and coordinating actions across platforms. But until that future is less of a demo and more of a daily reality, Rajeev is focused on one thing: helping marketers interact with their data the way they think, not the way engineers model it.

Key takeaway: Explore AI agents that assist with measurement, not fully automate it. Let them generate recommendations with transparent reasoning. Require human validation before acting, especially when budgets and customer experience are on the line. Use agents where ambiguity kills rule-based workflows, but don’t skip the hard part of thinking critically about what the agent is actually telling you.

Back to the top ⬆️

How to Train Your Brain to Stay Content Without Chasing Wins

Happiness gets marketed like a product: something you earn through achievement, hustle, or life optimization. Rajeev doesn’t buy it. He thinks it’s more like marketing mix modeling. Even if you stop spending, there’s still a baseline signal. He believes happiness works the same way: there’s a base level you always return to, no matter how extreme the high or low.

There’s a name for it: the hedonic treadmill. You hit a milestone, feel the buzz, then drift back to where you started. Promotions fade. Bad news numbs. If your baseline sucks, you’ll keep chasing peaks that don’t last. Rajeev wants to raise the baseline itself. And he’s not talking about buying a new journal or joining a 5 a.m. club.

He draws from Eastern philosophy, especially schools of Hindu and Buddhist thought that describe happiness as a neurological event, just neurotransmitters like serotonin and oxytocin firing in the brain. The question he’s been asking himself lately: “Can you get those neurons to fire without anything good happening around you?” In other words, can you train yourself to feel content without waiting for a raise, a like, or a big win?

“I’ve been thinking a lot about whether you can sit quietly somewhere and make those neurons fire,” Rajeev said. “If happiness is just brain chemistry, then why not learn to control it, instead of waiting on the world to cooperate?”

And then there’s the wildcard: having a daughter. That experience rewired something in him. More patience. Less grasping. Fewer ego loops. He didn’t frame it as a shortcut, just as something that cracked open a layer of awareness. And while he won’t pretend to have it all figured out, his version of balance doesn’t come from time blocking. It comes from teaching his mind not to need the next thing.

Key takeaway: If your happiness resets to the same baseline after every high, stop chasing peaks and focus on lifting the floor. Raise your internal default by studying your own emotional patterns, experimenting with stillness, and learning what triggers your brain’s own reward system. You can train your mind to feel content without outside validation, and that shift can change how you show up at work, at home, and everywhere else.

Back to the top ⬆️

Episode Recap

Artwork featuring Rajeev Nair, Co-Founder and Chief Product Officer at Lifesight, surrounded by a cosmic background with planets and stars, showcasing a blend of technology and creativity.

Attribution was supposed to help marketers understand what worked. Building Lifesight, Rajeev Nair chased that promise for years, tracking every touchpoint, building dense identity graphs, wiring up click paths that looked like insights. But when the numbers came in, they felt hollow. None of it explained causality. None of it proved lift. Just a pile of events and guesses dressed up as strategy.

That’s when Rajeev and the Lifesight team changed course. They shifted from attribution to experimentation. They tested in the real world, running geo splits, designing controlled trials, digging into quasi experiments from messy campaign data. It was slower. Riskier. But finally, the answers held up.

Rajeev doesn’t worship one method. He builds stacks. MMM for long-term planning. Attribution for daily tweaks. Incrementality for clarity. Each tool answers a different question, and the trick is knowing which one to use when. Over-measuring every campaign creates noise. So does trusting every model as gospel. His advice: design fewer, sharper tests. Make assumptions visible. Test them out loud.

The human side matters just as much. Teams misuse data to justify decisions already made. Attribution becomes job insurance. That’s why Lifesight’s AI assistant, Mia, doesn’t automate decisions. It flags ideas, shows its work, and asks for a human yes. Because models don’t own budgets… people do.

In a world full of dashboards pretending to be answers, Rajeev is after something else. A measurement system that earns trust. One that helps you see what actually moved the needle, and gives you the guts to act on it.

Listen to the full episode ⬇️ or Back to the top ⬆️


Follow Rajeev 👇

✌️


Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
