Apple • Spotify • Pocket Casts • YouTube • Overcast • RSS

What’s up everyone, today we have the pleasure of sitting down with Constantine Yurevich, CEO and Co-Founder at SegmentStream.
Summary: Multi-touch attribution is a beautifully crafted illusion we all pretend to believe in while knowing deep down it’s flawed. The work is mysterious, but is it important? The big ad platforms sell us sophisticated solutions they don’t even trust for their own internal decisions. Is it time we accept marketing causation is a thing we can’t measure? Visitor behavior scoring is a really interesting alternative or extra ingredient to consider. Scoring is often thought of as a lead management tool that helps prioritize your SDRs’ time, but the team at SegmentStream applied the same methodology to attribution. Enter synthetic conversions. Instead of just tracking conversions, track meaningful visit signals: time spent, pages explored, comparisons made. This allows you to connect upper-funnel campaigns to real behavior patterns rather than just looking at who converted in a single session.
In this Episode…
- Why Marketing Attribution Still Matters Despite Its Flaws
- Simplified MMM is a Measurement Fantasy You’re Being Sold
- Geo Holdout Testing Fails Because Companies Get The Science Wrong
- Why Marketing Will Never Have True Causation
- Your CMO Is Buying Fake Attribution Data (And Everyone Knows It)
- Visitor Scoring Transforms Marketing Attribution
- Marginal ROAS Exposes the Lie of Average Returns
- Activating Attribution Insights Through Automated Bidding
Recommended Martech Tools 🛠️
We only partner with products that are chosen and vetted by us. If you’re interested in partnering, reach out here.
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster, easier and without back-and-forth.
🦩 Census: Universal data layer that unifies & cleans data from all your sources and makes it available for any app and AI agent to use.
🎨 Knak: No-code email and landing page creator to build on-brand assets with an editor that anyone can use.
About Constantine and SegmentStream

- SegmentStream was founded in 2018 in London
- In Feb 2022, they raised their first funding round of $2.7M
- SegmentStream is now trusted by more than 100 leading customers across the globe including L’Oreal, KitchenAid, Synthesia, Carshop, InstaHeadShots, and many others
Why Marketing Attribution Still Matters Despite Its Flaws
Attribution chaos continues to haunt marketers drowning in competing methodologies and high-priced solutions. Constantine blasts through the measurement fog with brutal practicality when tackling the Multi-Touch Attribution (MTA) debate. While many have written MTA’s obituary due to its diminishing visibility into customer journeys, his take might surprise you.
The attribution landscape brims with alternatives that look impressive in PowerPoint presentations but crumble under real business conditions:
- Geo holdout testing sounds brilliant: Turn off ads in half your markets, keep them running in others, measure the difference. Simple! Except it’ll cost you millions in lost revenue during testing. Constantine points out the brutal math: “For some businesses, this is like losing 1 million, $2 million during the test. Would you be willing to run a test that’s gonna cost you $1 million?” These tests require a minimum 5% revenue contribution from the channel to even register effects, making them impractical for anything but your biggest channels.
- MMM promises statistical rigor: But demands absurd amounts of data covering everything from your competitors’ moves to presidential elections and global conflicts. Good luck collecting that comprehensive dataset spanning 2-3 years, then validating whether the TV attribution your fancy model spits out actually reflects reality.
“Mathematically, everything works fine, but when you apply it in reality, there is no way to test it. You just see some numbers and there is no way to test it.”
For scrappy D2C brands, SaaS startups, and lead gen businesses, Constantine argues MTA still delivers more practical value than its supposedly superior alternatives. You won’t achieve perfect attribution, but you can compare campaigns at the same funnel stage against each other. Your lower-funnel campaigns can be measured against other lower-funnel efforts. Mid-funnel initiatives can compete with similar tactics.
Constantine drops a bombshell observation that should make you question the industry’s MMM evangelism: “If Google and Facebook so willingly open-source different MMM technologies and they really believe in this technology, why wouldn’t they implement it into their own product?” These data behemoths with unparalleled user visibility still rely on variations of touch-based attribution internally. Something doesn’t add up.
Key takeaway: Stop chasing perfect attribution unicorns. MTA delivers practical campaign comparisons within funnel stages despite its flaws. For most businesses, sophisticated alternatives cost more than they’re worth in lost revenue during testing or impossible data requirements. Compare apples to apples (lower-funnel to lower-funnel campaigns) with MTA, test different creatives, and focus on relative performance improvement. The big platforms themselves don’t fully trust their publicly promoted alternatives – why should you bet your marketing budget on them?
Back to the top ⬆️
Simplified MMM is a Measurement Fantasy You’re Being Sold

Marketing Mix Modeling has roared back into fashion as third-party cookies crumble and marketers scramble for measurement alternatives. Constantine cuts through the hype with brutal clarity. Traditional MMM demands massive resources, making it impractical for companies spending just a couple million monthly on digital ads. The investment-to-insight ratio falls flat for most brands.
Into this measurement vacuum rushed a wave of “simplified” MMM solutions promising the impossible: quick setup, pretty dashboards, and supposedly accurate attribution using only platform data. These stripped-down models deliver exactly what you’d expect (but might not recognize) – they invariably credit your highest-spend channels with driving your best results. Constantine doesn’t mince words about this circular logic:
“The channel where you invest the most and the channel that has the most impressions looks really incremental on your dashboards.”
The credibility gap becomes glaringly obvious when you apply common sense. If Facebook or Google truly generated the astronomical lift these simplified models claim, you’d notice it immediately in your business results without needing sophisticated analytics. When dashboards show massive performance spikes while actual revenue stays flat, something’s deeply wrong with your measurement.
The fatal flaw? Effect isolation. Most brands run multiple campaigns across platforms simultaneously:
- Facebook ads
- Google search campaigns
- Brand awareness initiatives
- Retargeting programs
These overlapping efforts create a complex web of influences that simplified models completely fail to untangle. Without proper experimental design or comprehensive data inputs, these models produce what amounts to expensive, colorful guesswork.
Constantine acknowledges rare exceptions where simplified approaches might work – like when Coca-Cola drops $50 million on a concentrated Christmas campaign. But for typical DTC brands running evergreen campaigns across multiple channels, these models create a dangerous illusion of precision while delivering fundamentally flawed insights.
Key takeaway: Ditch simplified MMM models that only analyze advertising platform data. They systematically overvalue your highest-spend channels through circular logic. True incrementality measurement requires comprehensive models incorporating broader market factors, competitive activity, and consumer behavior patterns. If a measurement solution seems suspiciously easy to implement, it probably sacrifices accuracy. Your marketing deserves better than pretty dashboards built on incomplete data.
Back to the top ⬆️
Geo Holdout Testing Fails Because Companies Get The Science Wrong
Marketing’s obsession with scientific-sounding measurement has birthed a monster: the geo holdout test. This expensive experiment promises to reveal your marketing’s true impact through the magic of “controlled testing.” Too bad it’s built on statistical quicksand that swallows marketing budgets while spitting out garbage insights.
Constantine rips apart the foundational myth that geo holdout tests equal proper A/B testing. Real A/B tests randomize at the individual level – you take your entire audience, randomly split it, then randomly expose one group to your campaign. What happens in geo testing? Something wildly different:
- You cherry-pick regions you *think* behave similarly (goodbye, randomization)
- You construct a “synthetic control” (corporate-speak for “we made up a comparison group”)
- You pray external factors don’t torpedo your comparison
“The primary quality of the proper A/B test is randomization. In geo holdout tests, we do not do randomization. We decide which states or which cities behave similarly. And then we build synthetic control.”
In the messy real world, these tests crumble faster than cheap cookies. Constantine’s seen clients spectacularly botch implementation: “They have Facebook ads spending $50K a month and they shut down 50% of the states, but still the rest of the budget is automatically relocated to the control group.” You just accidentally pumped more money into your control regions! Your test is immediately worthless, but the pretty dashboard won’t tell you that.
The statistical confidence intervals make these tests practically useless. Constantine breaks down the brutal math: “You’ve measured your incrementality as 5%, but usually for 5% incrementality with 5% minimal detectable effect, margin error is plus minus 4%. So your actual incrementality is between 1% to 9%.” When translated to ROI, you’ve proven your campaign is… somewhere between hemorrhaging money and wildly profitable. Congratulations on your $50K experiment that told you absolutely nothing actionable!
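The arithmetic behind that "between 1% and 9%" range is worth seeing on paper. Here is a minimal sketch of how a lift estimate and its margin of error translate into a useless-wide incremental ROI range; the spend and baseline revenue figures are hypothetical, chosen only to echo the $50K example above.

```python
# Sketch: translate a measured incrementality and its margin of error into
# the range of incremental ROI the test actually supports.
# Numbers are illustrative, not from a real experiment.

def roi_range(spend, baseline_revenue, measured_lift, margin_of_error):
    """Return the (low, high) incremental ROI implied by the lift estimate."""
    low_lift = measured_lift - margin_of_error    # worst plausible lift
    high_lift = measured_lift + margin_of_error   # best plausible lift
    low_roi = baseline_revenue * low_lift / spend
    high_roi = baseline_revenue * high_lift / spend
    return low_roi, high_roi

# Measured 5% lift, +/-4% margin of error, on $1M baseline revenue
# and a $50K test budget.
low, high = roi_range(spend=50_000, baseline_revenue=1_000_000,
                      measured_lift=0.05, margin_of_error=0.04)
print(f"Incremental ROI is somewhere between {low:.1f}x and {high:.1f}x")
```

With these inputs the test "proves" your ROI sits somewhere between 0.2x (hemorrhaging money) and 1.8x (wildly profitable), which is exactly the non-answer the section describes.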
We’re living in what Constantine calls “the dark era of marketing measurement.” The cookie apocalypse and tracking prevention have created fertile ground for measurement snake oil. “During such times when this is chaos, there are a lot of experts that appear on the market and they try to sell you like gold. While this gold is not gold, but no one knows.” The technical complexity creates perfect cover – CMOs lack the statistical background to spot the flaws, while data scientists often lack business context to see why their elegant models fail in practice.
Key takeaway: Stop wasting money on geo holdout testing unless you suspect a channel delivers zero value. The method only makes sense when testing something you believe might be completely non-incremental (like brand campaigns you suspect do nothing) or upper-funnel tactics with no direct metrics. For everything else, you’re paying for mathematical theater that can’t deliver on its promise of causality. The wide confidence intervals alone make most results meaningless for decision-making.
Back to the top ⬆️
Why Marketing Will Never Have True Causation
Marketing’s obsession with proving causation borders on religious devotion. We’ve built elaborate temples to attribution, sacrificed budgets to measurement gods, and recited the sacred chants of “data-driven decisions.” Constantine tears this cathedral down with one observation: “In marketing we have only correlation. It’s not true that we have causation.” Let that sink in while your CFO demands proof that brand campaigns “caused” revenue growth.
The attribution industry sells mathematical fairy tales wrapped in scientific-looking packages. Even A/B testing, our supposed gold standard, collapses under scrutiny. Constantine exposes the dirty secret: “When you do an A/B test, you need a methodology to measure the results of this A/B test.” Google’s “incrementality lift studies” randomly assign users to test groups (good science!) but then measure outcomes using… last-click attribution (circular logic!). You’re using the broken thing to validate the broken thing. Marketers shell out thousands for fancy dashboards that simply regurgitate flawed assumptions with prettier charts.
“This is a really tough market because you will encounter people that will have no clue what you are doing.”
Your own purchase behavior demolishes attribution models daily. Think about your last significant purchase:
- You saw Instagram ads for weeks
- A friend mentioned the product casually
- You read three comparison articles
- You watched a YouTube review
- You checked prices across sites
- Two weeks later, you clicked a retargeting ad and bought
Which touch point “caused” your purchase? All of them? None of them? Now multiply this complexity across thousands of customers, add competitive noise, fold in market trends, sprinkle economic factors, and season with psychological quirks. Good luck isolating causation in that stew of influences.
Smart marketers embrace correlation and build practical frameworks. When brand campaigns correlate with organic search growth, track it. When new creative correlates with rising conversion rates, document it. When PR spikes correlate with consideration metrics jumping, note it. Constantine suggests working with reality instead of fighting it. Even medical research with controlled trials and laboratory conditions struggles with causation. Marketing operates in the messy real world with infinite variables – believing you’ll crack causation where medical science struggles reflects dangerous hubris.
Key takeaway: Rethink whether marketing causation exists, or at least whether the means to measure it does. Build a practical measurement system based on correlations instead:
1. Identify and track proxy metrics that correlate with business outcomes (organic brand searches, branded traffic, consideration metrics).
2. Document patterns without claiming definitive causation.
3. Create a correlation dashboard that shows how marketing activities and business outcomes move together over time.
4. Test activities sequentially when possible, creating cleaner before/after snapshots.
5. Maintain healthy skepticism toward any vendor promising “true causation”.
You’ll waste less money on measurement theater and focus more on what actually matters: making marketing decisions based on the clearest signals available, however imperfect they may be.
Back to the top ⬆️
Your CMO Is Buying Fake Attribution Data (And Everyone Knows It)
The attribution industry runs on an open conspiracy that would shock you if you weren’t already neck-deep in it. Constantine rips away the professional veneer covering marketing’s dirtiest secret: CMOs actively seek validation, not truth, from their measurement partners. Your dashboards aren’t showing you reality – they’re showing you whatever story keeps the budget flowing.
“Imagine you’re a CMO who dumped millions into DV360 display campaigns with zero data justifying that spend,” Constantine explains. Now quarterly board meetings loom like storm clouds on your horizon. What do you do? Simple: “You buy an MMM solution and they justify your numbers.” This isn’t speculation or cynicism – it’s industry practice. Constantine has colleagues at e-commerce companies who’ve been told point-blank by MMM providers:
“If you don’t like the report, we can make it different. You decide what you believe is the incrementality of your YouTube ads, we can adjust it, no problem.”
Read that again. Your supposedly objective, scientific attribution model comes with a “choose your own adventure” option. The slick sales deck promised causal proof of marketing impact. The reality? You’re buying expensive confirmation bias dressed in data science clothing.
The whole charade works because of marketing’s revolving door. Most CMOs bounce between companies faster than attribution models can stabilize:
- Get hired amid promises of growth
- Dump budget into flashy channels
- Commission attribution that validates choices
- Showcase positive attribution at leadership meetings
- Update LinkedIn profile
- Rinse and repeat at next company
Constantine sees this pattern constantly: “In many companies, the rotation of CMOs happens almost every year. That’s why you need to survive one year and then go to the next company.” You don’t need accurate attribution – you need plausible attribution that buys you time until your inevitable job hop. The reports exist to “justify investment” and buy “at least one more year” before reality catches up.
Only the rare CMOs who stick around 5-10 years play a totally different game. They grasp a brutal truth: attribution theater means nothing if business plateaus. Constantine cuts through the bullshit: “You don’t need to show attribution. You don’t need to show effectiveness of your YouTube or Facebook ads. What you need to show is that you grow 20-30% year over year. If you don’t show this, you’re fired.” Long-termers optimize for actual business growth because they’ll face consequences for missed targets. They can’t hide behind clever attribution models when the P&L screams stagnation.
This creates the ultimate measurement conundrum that nobody wants to address: there exists no objective standard to verify if your attribution system works. “No one knows what is accurate,” Constantine admits with refreshing candor. “There is no such thing as accurate because you don’t have anything to compare it to.” You can’t benchmark against the “real world” because nobody has a perfect window into how the real world works. Constantine demolishes the typical marketing case study with appropriate skepticism: “When someone says, ‘We evaluated our TikTok ads and figured out that actual ROAS is five times more than expected, and now we’ve got 1500% more revenue from TikTok,’ my question is: how much did you grow year over year? When you measure results of your attribution by your own attribution, it doesn’t make any sense.”
Key takeaway: Your attribution system probably validates all your existing spending patterns because that’s exactly what it was designed to do. Break free from this expensive self-deception by focusing on year-over-year growth, investigating areas where competing attribution methods dramatically disagree, and implementing incrementality tests for your largest channel investments. Most importantly, ask your vendor how they would show you that your strategy is completely wrong, so you can be sure they aren’t just selling confirmation bias.
Back to the top ⬆️
Visitor Scoring Transforms Marketing Attribution
The messy reality of digital attribution makes marketers lose sleep. Fragmented sessions, multiple devices, and in-app browsers obliterate the clean customer journey we desperately want to see. Constantine tore down this problem to its foundation and rebuilt attribution from scratch. “Understanding the full customer journey is impossible,” he states with refreshing clarity. Even our best modeling barely captures fragments of reality, bound by device limitations and the chaotic way people actually browse online.
Think about your own behavior. You see a product on Instagram’s in-app browser, switch to Safari for research, check reviews on your laptop later that night, and maybe purchase days later on a completely different network. Traditional attribution crumbles in this scenario. Constantine’s team made a crucial pivot: stop obsessing over conversions and start measuring what you can actually see—genuine interest through on-site behavior patterns.
“If you bring five visitors with a 20% probability to buy, statistically it’s the same as bringing one visitor with 100% probability.”
Their visitor scoring approach evaluates quality through actions that matter:
- Time invested exploring products
- Depth of research behavior
- Engagement patterns that mirror successful buyers
- Contextual relevance to actual purchase behavior
The brilliance lives in the probabilistic framework. A thousand high-engagement visits from Paraguay mean nothing if that region historically produces zero sales. The system builds contextual buckets, combining behavior signals with geographic and product-specific patterns. Someone researching Paris-to-Prague flights won’t influence attribution for US-to-Dubai bookings. The model distributes conversion credit where statistical evidence points to actual influence, replacing the absurd notion that all traffic carries equal potential.
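The probabilistic idea above can be sketched in a few lines. This is a toy illustration of the "five visitors at 20% equals one visitor at 100%" logic combined with contextual bucket scaling; the bucket weights and probabilities are made-up numbers, not SegmentStream's actual model.

```python
# Toy sketch of probabilistic visitor scoring with contextual buckets.
# All weights and probabilities are illustrative assumptions.

# Historical propensity per contextual bucket (e.g. region): a region with
# zero sales history contributes nothing, however engaged the visit looks.
BUCKET_WEIGHT = {"US": 1.0, "Paraguay": 0.0}

def visit_score(behavior_probability, bucket):
    # Scale the behavior-based purchase probability by the bucket's history.
    return behavior_probability * BUCKET_WEIGHT.get(bucket, 0.5)

def expected_conversions(visits):
    # Expected conversions = sum of per-visit purchase probabilities.
    return sum(visit_score(p, bucket) for p, bucket in visits)

five_warm_us = [(0.20, "US")] * 5   # five visitors, 20% each
one_sure_us = [(1.00, "US")]        # one visitor, 100%

print(expected_conversions(five_warm_us))        # same expected value...
print(expected_conversions(one_sure_us))         # ...as one certain buyer
print(expected_conversions([(0.90, "Paraguay")]))  # scaled to zero by history
```

Under this framing, both traffic mixes are statistically equivalent, while a flood of high-engagement visits from a never-converting region scores nothing.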
Constantine cuts through attribution mythology with surgical precision. “For unknown D2C brands to assume someone would bother remembering your brand after an impression… I cannot understand this logic,” he explains. His team found that meaningful engagement leaves statistical fingerprints. When you analyze those patterns at scale and optimize toward sources that deliver visitors exhibiting genuine research behavior, revenue grows and ROAS improves across your marketing mix.
Key takeaway: Ditch your broken attribution model and score visitor behavior instead. Track what people actually do on your site—time invested, pages explored, product comparisons made—and create probabilistic models that match these patterns to real sales. Attribution fails when trying to connect every dot across fragmented journeys. Focus on traffic sources that deliver visitors who show genuine buying signals through their behavior, not just those who happen to convert in a single tracked session. When implemented at scale, this approach delivers dramatically improved ROAS by connecting upper-funnel campaigns to the behavior patterns they genuinely influence.
Back to the top ⬆️
How Visit-Based Attribution Unlocks Hidden Marketing Insights

Your attribution models are lying to you. Every day, marketers chase conversions while completely missing the goldmine of behavioral data that predicts future purchases. Constantine cuts through the noise: “Sometimes for your upper funnel campaigns, you might not even have conversions… when you go to the creative level, it’s just not enough signals.” Facebook, Google, TikTok – none can optimize properly on scraps of conversion data.
Visit scoring flips this broken model on its head. By analyzing user behavior patterns in each session, you gain 10X more optimization signals than conversion tracking alone. The math is simple but brutal: waiting for purchases means throwing away 90% of your useful data. Constantine’s team found something remarkable in their analysis – session behaviors predict future outcomes with shocking accuracy, regardless of whether a purchase happens in that visit.
“You always know that cookie is consistent within one visit, so you never lose the traffic source. As soon as the visit ended, we can evaluate this visit, give it a score, and immediately attribute it to the traffic source that initiated this visit.”
The cross-device nightmare haunts every marketer’s attribution dreams. Users browse on mobile, buy on desktop, and your attribution model completely falls apart. Visit scoring solves this because it works at the session level where cookies remain intact. Each visit gets scored based on engagement patterns before the user jumps devices, giving proper credit to traffic sources that traditional models completely miss.
Upper-funnel campaigns suffer most from broken attribution. Think about your Facebook and TikTok campaigns – how many times have you killed promising ad sets because “the conversions aren’t there”? Visit scoring captures the value when users:
- Show high engagement but purchase later
- Switch devices mid-journey
- Return through different channels before buying
- Take longer than your attribution window to decide
Key takeaway: Replace last-click myopia with visit scoring to capture the 90% of signals you’re currently throwing away. Score every session based on engagement patterns that predict future purchases, then attribute that value to its traffic source immediately. This gives upper-funnel campaigns proper credit and provides 10X more optimization signals for platforms like Facebook and TikTok. Start by identifying your highest-value engagement behaviors (not just conversions), then create a simple scoring model that weights these actions proportionally to their predictive power.
Stop Chasing Credit and Measure What Matters
Marketing attribution lives in a strange twilight zone. You know exactly half of your budget works brilliantly, but damned if you can figure out which half. Constantine cuts through this fog with a refreshingly practical take on how attribution should actually work in the real world.
When platforms can’t connect the dots between touchpoints, they default to evaluating each visit in isolation. “We evaluate each visit separately,” Constantine explains, “and in a sense, it’s the last touch because within one session, it’s only one touch. You cannot build multi-touch attribution for one visit.” This creates a fundamental constraint that forces pragmatic decisions about value assignment.
But the game changes completely when platforms can stitch together customer journeys. Constantine’s algorithm takes an incremental approach:
“First you came from Facebook and we evaluated that your probability to convert increased from zero to 20%, for example, but then you returned from direct and now your probability increased to 25%. We will not assign 25% to direct because 20% were already generated by initial. So we’ll assign only the fraction, only the change.”
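The rule in that quote is easy to make concrete: each touch is credited only with the change in conversion probability it produced, never the running total. A minimal sketch, using the Facebook-then-direct numbers from the quote:

```python
# Sketch of incremental (delta-based) credit assignment across a journey.
# Probabilities follow the Facebook/direct example in the quote.

def incremental_credit(journey):
    """journey: ordered list of (source, conversion_probability_after_visit).
    Returns each source's share: the probability change it produced."""
    credits = {}
    previous = 0.0
    for source, prob in journey:
        credits[source] = credits.get(source, 0.0) + (prob - previous)
        previous = prob
    return credits

journey = [("facebook", 0.20), ("direct", 0.25)]
credits = incremental_credit(journey)
print(credits)  # facebook keeps the 0-to-20% lift; direct gets only the delta
```

Facebook is credited with the full 0.20 lift it generated, while the later direct visit gets only the 0.05 increment, not the cumulative 0.25.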
This gets to the heart of Constantine’s most useful insight: the crucial distinction between significant and insignificant traffic sources. Significant sources burn real money and can scale with investment. Insignificant sources like organic, email, and direct traffic often can’t meaningfully scale regardless of how much you try to attribute to them.
Some marketers obsess over attribution to an absurd degree. “I’ve seen some marketers who even put UTM parameters on their cart abandonment emails,” Constantine notes with obvious frustration. “Why would you ever do this? Marketing measurement is already so complex.”
For marketers who genuinely want to measure incrementality, Constantine has a simpler solution: run A/B tests in your CRM. It’s straightforward, provides clear answers, and avoids the measurement quagmire altogether.
This philosophy culminates in a practical recommendation that will save you countless hours of analysis paralysis: when a customer journey includes both significant and insignificant touchpoints, attribute the entire conversion value to the first significant source. Your paid channels deserve the scrutiny; your organic channels will happen regardless.
Key takeaway: Dump your complex attribution models for channels you’ll use anyway. Focus your measurement firepower exclusively on paid channels where budget decisions hang in the balance. For everything else, just do it without the tracking overhead. This cuts your analytics complexity in half while giving you actionable data for the channels that actually impact your budget.
Back to the top ⬆️
Marginal ROAS Exposes the Lie of Average Returns
Average ROAS metrics seduce marketers with false confidence while their incremental ad dollars silently hemorrhage value. Constantine cuts through the industry’s measurement obsession with a brutal distinction that could save your next $100K allocation. Most platforms show you cumulative performance that masks the cliff you’ve already driven over, creating what Constantine calls “a very misleading metric” that keeps you spending long after returns have collapsed.
Picture the scenario playing out in marketing departments everywhere: Your Google Ads spend of $1,000 daily generates $2,000 in revenue. Conventional wisdom says double down. You add another $1,000, but this fresh investment returns only $500. Your dashboard still glows with a seemingly healthy 1.25x total ROAS ($2,500 revenue on $2,000 spend), while in reality, you’re burning money on every new dollar spent at a disastrous 0.5x marginal return.
“If you’re gonna look at average ROAS, you invested 2000 and you’ve got 2,500 in return, it’s still profitable. You can continue investing. So marginal ROAS is very hard to calculate and I would say probably we are the only platform that actually calculates this.”
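The scenario's arithmetic fits in a few lines, and seeing both numbers side by side makes the trap obvious. A simple sketch using the figures from the example above:

```python
# The average-vs-marginal ROAS arithmetic from the scenario above.

def average_roas(total_revenue, total_spend):
    # What the dashboard shows: cumulative return over all spend.
    return total_revenue / total_spend

def marginal_roas(revenue_before, revenue_after, spend_before, spend_after):
    # What actually matters for the next dollar: return on incremental spend only.
    return (revenue_after - revenue_before) / (spend_after - spend_before)

# $1,000/day returned $2,000; doubling spend added only $500 more revenue.
print(average_roas(2_500, 2_000))                  # 1.25 -- still "profitable"
print(marginal_roas(2_000, 2_500, 1_000, 2_000))   # 0.5  -- new dollars lose money
```

The dashboard's 1.25x average keeps glowing green while every incremental dollar returns fifty cents, which is why budget decisions based on the first number quietly destroy value.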
Constantine advocates for measurement precision that scales with investment significance. For high-stake decisions, the decimal points matter enormously:
- Budget-light activities (email to existing lists, a single SEO specialist) require basic confirmation they’re directionally positive
- Mid-tier investments deserve A/B testing when incremental value feels questionable
- Six- and seven-figure allocations demand decimal-level precision about marginal returns
The platform blindspot compounds this problem. Google Ads, Facebook, and virtually every analytics interface Constantine encounters showcase only average performance data. Without visibility into marginal effectiveness, marketing teams continue pushing budgets well beyond their effective frontier. When deciding where to place your next $100K, Constantine stresses that the difference between a 0.8 and 1.1 marginal ROAS represents the line between progressive value creation and actively destroying company resources.
Key takeaway: Stop making budget decisions based on average ROAS—it masks diminishing returns and tricks you into wasting money. Calculate the return on your incremental spending separately from your overall campaign performance. Before increasing any channel’s budget, ask: “What will my additional $10K actually produce in new revenue?” not “Is my overall campaign still profitable?” This shift transforms your marketing from blindly following misleading metrics to making genuinely data-informed spending decisions.
Back to the top ⬆️
Activating Attribution Insights Through Automated Bidding
Your marketing attribution data is collecting dust. Constantine dropped a bomb during our conversation: internal metrics at SegmentStream revealed 90-95% of clients ignored their own attribution recommendations. Companies fork over six figures annually for fancy measurement tools, then let those insights rot in dashboards. “We invested a lot of money into this, but we just wanted to know, do our clients really use analytics?” The harsh truth? They don’t.
The gap isn’t laziness, it’s structural. Your attribution platform speaks a different language than your ad platforms. That 2X ROAS you see in your analytics dashboard? Google calculates something wildly different based on its own pixels and conversion paths. The disconnect creates absurd scenarios in the real world:
- A customer researches extensively across multiple sessions
- They share product links with friends or family through messaging apps
- They’re clearly showing high purchase intent
- But Facebook registers this as “wasted spend” because no conversion happened in that specific browser
> “At the end of the visit, if we see that this visit is valuable, if we see that Facebook should be bringing more customers like this because this customer was really interested in the brand and the product and has very high probability to buy, we create a synthetic conversion.”
SegmentStream tackled this head-on with synthetic conversions—signals that communicate visit value back to ad platforms even when traditional conversions don’t occur. These aren’t fake purchases. They’re intent indicators fired through conversion APIs to Google, Facebook, LinkedIn, or TikTok. For clients with zero last-click conversions in upper-funnel campaigns, this changed everything. They stopped optimizing for meaningless metrics like landing page views and started targeting genuine purchase intent.
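The shape of that feedback loop can be sketched roughly. This is purely illustrative: the threshold, field names, and `send` callback are hypothetical placeholders, not SegmentStream's implementation or any platform's real conversion API payload.

```python
# Illustrative sketch only: fire a "synthetic conversion" signal for a
# high-intent visit. Field names and the 0.3 threshold are invented for
# this example; real conversion API payloads differ per platform.

def maybe_fire_synthetic_conversion(visit, send, threshold=0.3):
    """If the scored visit looks valuable enough, send an intent signal
    back to the ad platform via the provided transport callback."""
    if visit["purchase_probability"] >= threshold:
        send({
            "event_name": "synthetic_conversion",
            "value": visit["purchase_probability"],
            "source": visit["traffic_source"],
        })
        return True
    return False

sent_events = []
maybe_fire_synthetic_conversion(
    {"purchase_probability": 0.45, "traffic_source": "facebook"},
    sent_events.append)
print(sent_events)
```

In production the `send` callback would post to a platform's conversions endpoint; the point is that the platform receives an optimizable signal even though no purchase happened in that browser session.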
The final puzzle piece? Making action effortless. SegmentStream built an algorithm that translates attribution insights into platform-specific commands. You click one “apply” button, and the system adjusts target ROAS and campaign budgets across hundreds of campaigns instantly. “Before, they needed to go inside Google Ads and apply changes to budget for every single campaign that took hours,” Constantine explains. “Now they can click apply and all these changes happen in a second.” This automation pushed adoption rates to 80%, dramatically closing the execution gap.
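The "one-click apply" idea can be sketched as a translation step: channel-level recommendations fan out into per-campaign changes, which an integration layer then pushes to each ad platform. The data shapes below are hypothetical, invented for illustration only.

```python
def plan_updates(campaigns: list[dict], recommendations: dict) -> list[dict]:
    """Turn channel-level ROAS recommendations into per-campaign changes."""
    updates = []
    for c in campaigns:
        rec = recommendations.get(c["channel"])
        if rec is None or rec["target_roas"] == c["target_roas"]:
            continue  # nothing to change for this campaign
        updates.append({"campaign_id": c["id"], "target_roas": rec["target_roas"]})
    return updates

campaigns = [
    {"id": "c1", "channel": "google", "target_roas": 2.0},
    {"id": "c2", "channel": "facebook", "target_roas": 3.0},
]
recs = {"google": {"target_roas": 2.5}, "facebook": {"target_roas": 3.0}}
print(plan_updates(campaigns, recs))  # only c1 needs a change
```

Scaled to hundreds of campaigns, this is what collapses hours of manual Google Ads edits into a single apply action.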
Key takeaway: Your attribution data becomes powerful only when you act on it daily. Implement synthetic conversions to give ad platforms the signals they need about high-value visitors who don’t convert immediately. Then automate budget adjustments based on marginal ROAS calculations. This two-step system eliminates the manual spreadsheet work that keeps your insights trapped in dashboards and transforms upper-funnel performance within weeks, not months.
Back to the top ⬆️
Scoring for Lead Management vs Attribution

Marketing attribution shouldn’t be magic. Constantine’s visitor scoring system rips away the curtain on what traditional lead scoring kept hidden. His company built something that shows you exactly why a prospect got a particular score – tracking their entire path through your website, pinpointing when their score changed, and explaining what drove those shifts.
Think about this: a client once questioned why a visitor who spent 30 minutes on their site and reached the checkout page received almost no score. The answer? Location data. The visitor browsed from Indonesia where this client had never made a sale. The system knew this geographical pattern mattered. People rarely browse from one continent and buy from another.
“There are a lot of mechanisms under the hood that scale down the score if it’s not aligned with specific buckets because people do not move between locations that much.”
This transparency shift mirrors what happened with Google’s smart bidding. Marketers resisted at first – they’d spent years manually selecting keywords and adjusting bids. Google responded by showing which factors influenced the algorithm:
- Sunday visitors converting more frequently
- Users from specific cities performing better
- Certain demographics driving higher conversion rates
While Constantine believes this transparency offers limited tactical value, he recognizes its psychological importance. People trust systems they can understand.
His team already lets you investigate why any visitor received their specific score value. You can trace their complete journey – time spent, pages viewed, actions taken – and see precisely which moments triggered score changes. Next, they plan to expose even more variables that influence these decisions.
This visibility transforms how marketers work. You’ll know not just who your high-value prospects are but exactly why they’re valuable. You can feed these insights to your content team, guide your sales conversations, and allocate budget with confidence.
Key takeaway: Visitor scoring works best when you combine sophisticated algorithms with clear explanations. Look for systems that show you which factors drive scores (like geographical anomalies) while maintaining the complex pattern detection humans might miss. This helps you turn scoring insights into content strategy, sales enablement, and smarter budget decisions across channels.
Back to the top ⬆️
The Key to Happiness is to Stop Following Up

Constantine threw his CRM system in the digital trash can. Not metaphorically. Actually deleted it. “I don’t like CRM systems. I don’t like follow-ups. I don’t like chasing clients,” he states with refreshing candor. As a CEO handling significant sales responsibilities, he made a decision that would make most sales directors break out in hives: completely abandoning the follow-up process that’s considered the oxygen of any sales operation.
What replaced this sales infrastructure void? Only activities that genuinely energize him:
- Creating valuable content that attracts the right prospects
- Engaging authentically on LinkedIn without forced outreach
- Having conversations exclusively with people already interested in their solutions
- Strategic collaboration with the product team to build what matters
This selective focus sparked initial bewilderment. “At first my partner was a little bit like, ‘oh, you’ve met with this client like one month ago. Would you like to follow up?'” Constantine recalls. The answer remained consistently simple: “No.”
“There is a reason why this client is not reaching out to me. They might have many different priorities and I will be pushing, but there’s gonna be resistance. And myself, I don’t like when someone pushes me.”
His theory flips traditional sales wisdom on its head. When prospects truly need his solution and experience enough pain, they remember their initial positive conversation and reach out naturally. No artificial deadline pressure. No awkward check-in calls. The relationship begins from mutual interest rather than obligation.
This philosophy now permeates their entire company culture. They’ve systematically eliminated activities many companies consider mandatory across sales, marketing, and development. The result? Work flows naturally. Team members do what they genuinely enjoy. The company attracts clients who appreciate their authentic approach rather than those who respond to persistence techniques.
Key takeaway: Delete one energy-draining business ritual from your workflow this week. Start with the task you actively avoid, whether it’s CRM updates, following up with leads, or meaningless reporting. Replace it with high-value work that leverages your natural strengths. Track what happens when you stop chasing lukewarm prospects and instead create content or experiences that make the right people seek you out. The quality of both your work and client relationships will improve dramatically when you stop performing business theater and focus exclusively on genuine value creation.
Back to the top ⬆️
Episode Recap

Multi-touch attribution is a beautifully crafted illusion we all pretend to believe in while knowing deep down it’s flawed. The big platforms sell us sophisticated solutions they don’t even trust for their own internal decisions. You’ve seen it: pretty dashboards with clean attribution models that somehow always validate your current spending patterns. The work is mysterious, but is it important?
Stop throwing money at simplified MMM models that only analyze platform data. They use circular logic to overvalue your highest-spend channels, a convenient self-fulfilling prophecy. Rethink your geo testing strategy: it’s useful only when you suspect a channel delivers zero value. For everything else, you’re paying for math ‘theater’ with confidence intervals so wide they become meaningless.
The sad truth is that we need to accept that marketing causation might be fundamentally unmeasurable. Build practical systems around this reality by focusing on year-over-year business growth metrics instead of attribution models. Run competing methodologies side by side and look hard at where they disagree. When vendors pitch you, ask them point blank: “Tell me how your system will show if my current strategy is completely wrong.” Watch them squirm – they’re selling comfort, not causal insights.
Visitor behavior scoring is a really interesting alternative or extra ingredient to consider. Often thought of as a tool for lead management to help prioritize your SDR’s time, the team at SegmentStream started using the same scoring methodology, but with an attribution application. Track what people actually do on your site: time spent, pages explored, comparisons made. Create models that match these patterns to actual sales. You can connect upper-funnel campaigns to real behavior patterns rather than just looking at who converted in a single session.
Also, stop wasting measurement time on channels you’ll use anyway, like organic SEO… you’re still going to do it. Put your measurement muscle behind paid media where budget decisions really matter. Calculate the return on your incremental spending separately from your overall campaign performance. Before you increase any budget, ask yourself what your extra $10K will actually produce in new revenue, not whether the whole campaign seems profitable.
Make your attribution data work daily, not just sit in reports. Use synthetic conversions to signal ad platforms about valuable visitors who don’t convert right away. Automate your budget adjustments based on marginal returns calculations. You’ll transform your upper-funnel performance in weeks, not months, and spend less time drowning in spreadsheets.
Tired: what gets credit. Wired: what creates actual value. This distinction might just keep you employed past the typical 18-month CMO expiration date that looms over marketers chasing marketing data refinement.
Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Constantine 👇
✌️
—
Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
