200: Matthew Castino: How Canva measures marketing

What’s up everyone, today we have the pleasure of sitting down with Matthew Castino, Marketing Measurement Science Lead @ Canva.

Summary: Canva operates at a scale where every marketing decision carries huge weight, and Matt leads the measurement function that keeps those decisions grounded in science. He leans on experiments to challenge assumptions that models inflate. As the company grew, he reshaped measurement so centralized models stayed steady while embedded data scientists guided decisions locally, and he built one forecasting engine that finance and marketing can trust together. He keeps attribution in play because user behavior exposes patterns MMM misses, and he treats disagreements between methods as signals worth examining. AI removes the bottlenecks around geo tests, data questions, and creative tagging, giving his team space to focus on evidence instead of logistics.

Recommended Martech Tools + Services 🛠️

We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.

🔌GrowthBench: The GrowthBench Freelancer Network delivers top growth talent — flexible contracts, zero hiring lag.

🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster and easier, without the back-and-forth.

🎨 Knak: No-code email and landing page creator to build on-brand assets with an editor that anyone can use.

📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.

About Matthew

A man with glasses, wearing a green Canva t-shirt, stands confidently with his arms crossed, in front of a vibrant, colorful digital landscape featuring rocky terrains, a futuristic structure, and a large orange planet in the background.

Matthew Castino blends psychology, statistics, and marketing intuition in a way that feels almost unfair. With a PhD in Psychology and a career spent building measurement systems that actually work, he’s now the Marketing Measurement Science Lead at Canva, where he turns sprawling datasets and ambitious growth questions into evidence that teams can trust.

His path winds through academia, health research, and the high-tempo world of sports trading. At UNSW, Matt taught psychology and statistics while contributing to research at CHETRE. At Tabcorp, he moved through roles in customer profiling, risk systems, and US/domestic sports trading: spaces where every model, every assumption, and every decision meets real consequences fast. Those years sharpened his sense for what signal looks like in a messy environment.

Matt lives in Australia and remains endlessly curious about how people think, how markets behave, and why measurement keeps getting harder, and more fun.

Canva’s Prioritization System for Marketing Experiments

A vibrant, sci-fi landscape depicting a towering rocket launching from a rocky surface, accompanied by two astronauts, planets, and a colorful sky filled with shooting stars.

Canva’s marketing experiments run in conditions that rarely resemble the clean, product-controlled environment that most tech companies love to romanticize. Matthew works in markets filled with messy signals, country level quirks, channel specific behaviors, and creative that behaves differently depending on the audience. Canva built a world class experimentation platform for product, but none of that machinery helps when teams need to run geo tests or channel experiments across markets that function on completely different rhythms. Marketing had to build its own tooling, and Matthew treats that reality with a mix of respect and practicality.

His team relies on a prioritization system grounded in two concrete variables.

  1. Spend
  2. Uncertainty

Large budgets demand measurement rigor because wasted dollars compound across millions of impressions. Matthew cares about placing the most reliable experiments behind the markets and channels with the biggest financial commitments. He pairs that with a very sober evaluation of uncertainty. His team pulls signals from MMM models, platform lift tests, creative engagement, and confidence intervals. They pay special attention to MMM intervals that expand beyond comfortable ranges, especially when historical spend has not varied enough for the model to learn. He reads weak creative engagement as a warning sign because poor engagement usually drags efficiency down even before the attribution questions show up.

“We try to figure out where the most money is spent in the most uncertain way.”
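
As a rough illustration of that spend-times-uncertainty logic, here is a toy Python sketch. The channel names, spends, and MMM interval bounds are invented for illustration, not Canva's actual prioritization code:

```python
# Hypothetical sketch of a spend-times-uncertainty prioritization score.
# Channel names and numbers are illustrative, not Canva's data.

channels = [
    # (name, monthly_spend_usd, mmm_roi_low, mmm_roi_high)
    ("paid_search_us", 2_000_000, 0.9, 1.4),
    ("tiktok_sea",       400_000, 0.2, 2.8),
    ("youtube_emea",     900_000, 0.6, 2.1),
]

def priority_score(spend, roi_low, roi_high):
    """Rank test candidates by dollars at stake times how wide the
    MMM credible interval is: big spend plus big uncertainty first."""
    uncertainty = roi_high - roi_low  # width of the MMM interval
    return spend * uncertainty

ranked = sorted(channels, key=lambda c: priority_score(c[1], c[2], c[3]), reverse=True)
for name, spend, lo, hi in ranked:
    print(f"{name}: score={priority_score(spend, lo, hi):,.0f}")
```

Note that the mid-size channel with the widest interval can outrank the biggest budget: uncertainty, not spend alone, drives where the next experiment goes.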

The next challenge sits in the structure of the team. Matthew ran experimentation globally from a centralized group for years, and that model made sense when the company footprint was narrower. Canva now operates in regions where creative norms differ sharply, and local teams want more authority to respond to market dynamics in real time. Matthew sees that centralization slows everything once the company reaches global scale. He pushes for embedded data scientists who sit inside each region, work directly with marketers, and build market specific experimentation roadmaps that reflect local context. That way experimentation becomes a partner to strategy instead of a bottleneck.

Matthew avoids building a tower of approvals because heavy process often suffocates marketing momentum. He prefers a model where teams follow shared principles, run experiments responsibly, and adjust budgets quickly. He wants measurement to operate in the background while marketers focus on creative and channel strategies with confidence that the numbers can keep up with the pace of execution.

Key takeaway: Run experiments where they matter most by combining the biggest budgets with the widest uncertainty. Use triangulated signals like MMM bounds, lift tests, and creative engagement to identify channels that deserve deeper testing. Give regional teams embedded data scientists so they can respond to real conditions without waiting for central approval queues. Build light guardrails, not heavy process, so experimentation strengthens day to day marketing decisions with speed and confidence.

Back to the top ⬆️

How Canva Tested Branded Search Incrementality

A vibrant urban street scene illuminated by neon lights, depicting people walking in the rain amidst towering buildings adorned with bright advertisements.

Geographic holdout tests gave Matt a practical way to challenge long-standing spend patterns at Canva without turning measurement into a philosophical debate. He described how many new team members arrived from environments shaped by attribution dashboards, and he needed something concrete that demonstrated why experiments belong in the measurement toolkit. Experiments produced clearer decisions because they created evidence that anyone could understand, which helped the organization expand its comfort with more advanced measurement methods.

The turning point started with a direct question from Canva’s CEO. She wanted to understand why the company kept investing heavily in bidding on the keyword “Canva,” even though the brand was already dominant in organic search. The company had global awareness, strong default rankings, and a product that people searched for by name. Attribution platforms treated branded search as a powerhouse channel because those clicks converted at extremely high rates. Matt knew attribution would reinforce the spend by design, so he recommended a controlled experiment that tested actual incrementality.

“We just turned it off or down in a couple of regions and watched what happened.”

The team created several regional holdouts across the United States. They reduced bids in those regions, monitored downstream behavior, and let natural demand play out. Performance barely moved: growth and revenue held steady, and the spend did not create additional value at the level the dashboards suggested. High intent users continued converting, which showed how easily attribution can exaggerate impact when a channel serves people who already made their decision.
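
The read-out for a holdout like this is essentially a difference-in-differences comparison: the change in the holdout regions minus the change in the untouched control regions. A minimal sketch with invented signup counts, not Canva's data:

```python
# Minimal difference-in-differences sketch for a geo holdout.
# Daily signup counts are made up for illustration.

holdout = {"before": 10_000, "after": 9_950}   # regions with bids reduced
control = {"before": 10_000, "after": 10_100}  # regions left untouched

def incremental_lift(holdout, control):
    """Change in the holdout minus the change in the control:
    what the paid clicks added beyond organic demand."""
    holdout_change = holdout["after"] - holdout["before"]
    control_change = control["after"] - control["before"]
    return holdout_change - control_change

print(incremental_lift(holdout, control))  # -150 daily signups, a rounding error at this volume
```

A result near zero, as in this invented example, is exactly the "performance barely moved" outcome: the paid clicks were mostly cannibalizing organic demand.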

The outcome saved Canva millions of dollars, and the savings were immediately reallocated to areas with better leverage. The win carried emotional weight inside the company because it replaced speculation with evidence. It also shifted how teams talked about experiments. People saw that tests could settle arguments about spend, influence budget decisions, and give leadership confidence in choices that might otherwise feel risky.

Brand search often creates its own mythology, and Matt uses the Canva test to highlight where that mythology breaks apart. Canva already owned its brand term, so when the paid link disappeared the click simply moved to the top organic result instead of evaporating. He warns smaller companies against assuming the same outcome, because weaker organic rankings leave room for competitors or aggregators to intercept demand. He pushes for experimentation when possible, but he also emphasizes judgment, the simple exercise of tracing what a user would realistically do if the ad vanished. This practical lens avoids the trap of treating strategies from dominant SaaS brands as universal rules, and it keeps spend decisions tied to real behavior rather than inherited habits.

Matt often points back to this experiment because it built internal belief in a way no presentation or model could. Controlled tests provided clarity, they traveled well across teams, and they helped Canva build a culture that trusted measurement grounded in real-world behavior.

Key takeaway: Geographic holdout tests give you a clean way to validate whether branded search spend creates incremental value. If your brand already dominates organic results, you can pause or reduce branded bids in specific regions, measure the before and after, and confirm whether the spend actually drives additional growth. That way you can redirect budget toward channels that create real impact.

Back to the top ⬆️

Structuring Global Measurement Teams at Canva

A vibrant digital illustration of a tropical island landscape featuring multiple small islands with palm trees, surrounded by calm waters, with floating rock formations in the sky and a colorful sunset backdrop.

Centralized measurement works when a company operates in a single hub, and Matt describes how that setup once fit neatly inside Canva. Modeling, experimentation, and attribution all lived within one group, and nearly all marketing activity ran out of Sydney. The team could walk into the same room, share context instantly, and push recommendations into execution without friction. That structure collapsed once the company pushed into regions where cultural nuance shapes what resonates and what falls flat.

Matt explains that global expansion exposed the limits of a one-team-handles-everything philosophy. Markets with different creative norms, seasonal rhythms, and user expectations forced the company to rethink how measurement would actually support growth. High intent users in the United States still responded to familiar content, but broader audiences in other regions quickly disengaged when templates or imagery lacked cultural relevance. Regional teams needed more than a universal model. They needed the ability to translate measurement outputs into decisions shaped by local context.

The measurement function split into two distinct layers to support that reality. Matt now leads the group responsible for the tooling, the modeling, the experimentation frameworks, and the research that keeps the science consistent. The second layer consists of embedded data scientists who sit inside regional teams. They provide the interpretation of those models, and they shape recommendations around local needs. They understand when a creative strategy aligns with cultural patterns, and they know when a seasonal event requires real shifts in spend or messaging. That way they can turn centralized outputs into meaningful actions.

Communication practices had to evolve alongside the structure. Matt points out that technical changes inside a model can easily derail a conversation if the explanation focuses on smoothing parameters or marginal ROAS fluctuations. Teams across the company care about decisions, not the inner workings of internal math. His group learned to lead with what a change means for budgets, creative, or regional plans. The clarity of the narrative matters as much as the precision of the measurement, and the team now treats translation as a core competency instead of an afterthought.

Key takeaway: Global measurement teams function best when modeling stays centralized and interpretation sits close to the regions making decisions. Clear translation of model outputs into practical actions builds trust, accelerates adoption, and keeps the science useful instead of abstract.

Back to the top ⬆️

How Canva Integrates Marketing Measurement Into Company Forecasting

A vibrant digital landscape featuring a swirling tornado amidst a colorful sunset, surrounded by mountains and celestial bodies.

Marketing measurement gains real authority once its numbers feed the same planning machinery that finance and product use. Matthew described how most companies still run parallel systems that never quite align. Marketing produces incrementality estimates, finance produces ROI models and top line forecasts, and product teams build their own demand curves. Each group feels confident in its model, yet none of the systems speak cleanly to each other during annual planning. Matthew stepped into that gap because the organization needed a shared model that could connect the entire growth story.

Finance became the starting point because measurement and finance share a vocabulary. Matthew said that finance moves quickly when evidence shows that spend fails to produce returns. That urgency gave his team influence because they could point to investments that created lift and investments that stalled out. He talked about how this collaboration helped them earn trust. Finance forced the measurement team to be sharper with definitions and assumptions. That pressure created a healthier model and created momentum to involve leadership in bigger planning conversations.

The harder lift came when Canva tried to merge marketing’s measurement structure with multi year forecasting. Matthew described how their internal models did not match the company’s long range plan, which made it difficult to decide where growth would actually come from. He and his team began stitching the systems together by mapping every growth lever into a unified model. He said the team had to account for several pieces all at once, including:

  • the expected size and direction of future marketing budgets
  • the compounding nature of brand familiarity, loyalty, and memorability
  • interactions between organic demand and paid investment
  • how different tactics shape demand curves across multiple years
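
One common way to express the compounding brand effect in that list is a geometric adstock transform, where spend in one period carries over into later periods with decay. The carryover rate and spend series below are illustrative assumptions, not Canva's actual model:

```python
# Toy geometric-adstock sketch of how brand investment compounds.
# Carryover rate and spends are invented for illustration.

def adstocked(spend_by_period, carryover=0.6):
    """Each period's effective 'brand stock' equals this period's
    spend plus a decayed share of the accumulated stock."""
    stock, out = 0.0, []
    for spend in spend_by_period:
        stock = spend + carryover * stock
        out.append(round(stock, 1))
    return out

print(adstocked([100, 0, 0, 100]))  # [100.0, 60.0, 36.0, 121.6]
```

The last value shows the compounding: the second burst of spend lands on top of residual brand stock, so its effective weight exceeds the raw dollars.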

Matthew brought up brand effects because they create the largest uncertainty in long range measurement. He explained how Canva believes deeply in the value of awareness and recognizability. The challenge comes from translating that belief into a measurable curve that estimates future lift. He described the work as energizing because it forces long conversations about how the company actually grows. It also forces his team to build new modeling structures that can handle compounding effects without distorting short term performance signals.

He said the goal is a single growth engine that leadership can use to compare every lever in one place. Marketing, product, pricing, and marketplace initiatives all enter the model with the same rules and the same units. That way resource allocation becomes a strategic decision supported by shared evidence instead of a negotiation between teams. Matthew mentioned that his data scientists enjoy this direction because they get to build something that informs the entire business and not only marketing.

Key takeaway: Build one forecasting system that connects marketing’s incrementality models with finance projections and multi year company plans. Treat every growth lever as part of the same structure so leadership can compare options with clean logic. That way you can make resource decisions with fewer debates, clearer investment choices, and a shared understanding of how growth forms over time.

Back to the top ⬆️

Using MMM Scenario Tools To Align Finance And Marketing

A solitary figure stands at the edge of a colorful, abstract landscape, gazing towards a winding path and a directional sign against a vibrant sunset sky.

Marketing budgets usually drift into conflict because teams bring different goals to the same meeting. Matt avoids that trap by starting every finance conversation with a concrete choice about the company’s priority for the year. He asks whether the objective is growth, efficiency, monetization, or free user expansion because each one leads to a different curve of spend and payoff. Finance leaders respond well when the conversation stays anchored to a shared target rather than a vague desire to be “smart with spend.” Clear intent creates calmer rooms.

He explained how Canva built a scenario tool that sits on top of their MMM output. It gives finance the controls instead of burying them in a static report. They can adjust spend objectives, pick profit thresholds, and see how budgets would move across markets. It turns a normally tense meeting into a collaborative session because people can explore outcomes without arguing about hypotheticals. Matt said that watching finance leaders drive the tool themselves has been surprisingly helpful for trust building.

“It lets them choose what they want to optimize for and see how the scenarios shift.”

The geography component makes the tool even more useful. The US team pays closer attention to monetization. Southeast Asia focuses on free user growth because freemium expansion shapes the long term trajectory. The tool visualizes those differences in a way that removes guesswork. Finance leaders understand why the same dollar behaves differently across regions, and that shared understanding shapes better conversations about where budgets should land.
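
A toy sketch of the kind of tradeoff such a scenario tool explores: given diminishing-returns response curves for two markets, brute-force the budget split that maximizes total signups. The curve shapes, scales, and budget are invented, not Canva's MMM output:

```python
import math

# Hypothetical diminishing-returns response curves for two markets.
# All parameters are illustrative, not Canva's.

def signups(spend, scale):
    """Concave response: each extra dollar buys fewer signups."""
    return scale * math.log1p(spend / 1000)

def best_split(total, scales, step=1000):
    """Brute-force the split of `total` across two markets that
    maximizes combined signups."""
    best = max(
        (signups(s, scales[0]) + signups(total - s, scales[1]), s)
        for s in range(0, total + 1, step)
    )
    return best[1], total - best[1]

print(best_split(10_000, scales=(500, 300)))  # (6000, 4000)
```

Changing the objective in a real tool amounts to swapping the response curves (signups, revenue, profit) while the allocation machinery stays the same, which is why letting finance drive the knobs themselves builds trust.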

The real friction shows up when the horizon stretches beyond the quarter. Performance spend feels comfortable because it behaves like a vending machine that pays back quickly. Brand investment operates on a slower clock. Finance leaders feel uneasy when the near term revenue forecast barely moves, and Matt understands the pressure behind that reaction. He keeps pushing for long range planning because companies lose momentum when they starve future demand. Canva has built enough shared history across team members to make those longer conversations productive.

Key takeaway: Use MMM powered scenario tools that give finance teams the ability to explore objectives, tradeoffs, and geographic differences on their own. Shared visibility turns abstract budget debates into grounded conversations. Pair fast feedback channels like performance spend with long term brand investment so you can protect future growth while keeping finance confident in short term returns.

Back to the top ⬆️

Why Attribution Still Matters at Canva

A figure walks along abandoned railway tracks, framed by a glowing orange and pink sunset filtering through an overgrown passageway, creating a mysterious and atmospheric scene.

Attribution went through a strange cycle in the industry. Teams trusted it, questioned it, pushed it aside, and then realized that the replacement methods did not cover everything. Matt saw this pattern inside Canva as the team grew its measurement toolkit. They invested heavily in MMM and incrementality because those methods answered high level budget and causality questions. But eventually the limits of that shift surfaced, and the gaps pointed directly back to user level attribution.

Matt keeps attribution in the mix because it exposes behaviors that aggregate models flatten. You can see how people discover the product, where they fall out of flows, and how user quality varies by channel. Those signals influence real work. They help you refine landing pages and evaluate whether a channel is driving high LTV users or sending low value traffic. Matt has watched attribution uncover situations where channels delivered volume but produced users who rarely stuck. He learned that this type of variance gets lost inside MMM and that user level signals help teams avoid misleading comfort.
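
A tiny invented example of the gap attribution exposes: two channels with similar signup volume but very different stickiness, a user-quality signal a spend-level model never sees:

```python
# Illustrative user-level signal attribution surfaces and MMM flattens.
# All numbers are invented.

channels = {
    "channel_a": {"signups": 10_000, "retained_90d": 3_500},
    "channel_b": {"signups": 11_000, "retained_90d": 900},
}

def retention_rate(c):
    """Share of signups still active at 90 days."""
    return c["retained_90d"] / c["signups"]

for name, c in channels.items():
    print(f"{name}: {retention_rate(c):.1%} retained at 90 days")
```

By volume alone the two channels look interchangeable; the retained-user view is what flags the second as sending low value traffic.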

“In marketing, good quality data is hard to come by. It rarely makes sense to throw anything away.”

Contradictions between MMM and attribution create important moments for investigation. Matt treats these mismatches as prompts rather than anomalies. When MMM claims a channel performs well but attribution shows weak quality or thin contribution, the measurement team pauses. They check definitions, inputs, lag effects, and user behavior. That way they can identify the interpretation that is least wrong and understand where the models might be distorting reality. Matt enjoys that tension because it forces teams to think beyond dashboards and confront the messy parts of human behavior.

Speed is another reason attribution stays essential. New creative can drive a drop in CPAs inside platform data within days. Attribution catches those shifts quickly. MMM needs more time to stabilize. Matt has seen teams lose momentum when they wait for MMM updates before acting. Attribution helps them produce faster feedback loops, especially when they are operating in channels with volatile auction dynamics or short lived creative breakthroughs. Acting quickly matters when the window for impact is narrow.

Matt sees ongoing confusion with attribution in the industry because teams still hope for a single measurement system that solves everything. He often reminds people that every method carries blind spots and that progress comes from combining them. A blended toolkit creates a fuller view of performance. Attribution identifies behavior, MMM creates strategic baselines, and experiments anchor causality. He encourages marketers to treat every method as a contributor and to treat conflict between them as valuable direction rather than a threat to whichever method feels more comfortable.

Key takeaway: Use attribution to understand user behavior, diagnose friction, and track quality differences across channels. Compare attribution with MMM to catch contradictions early and use those moments to investigate definitions, lag effects, and variance that the aggregate view hides. Move quickly when attribution shows meaningful shifts in CPA or user quality, and let MMM provide long term budget calibration. A blended toolkit produces sharper decisions and keeps you from relying on any single model that cannot answer every question.

Back to the top ⬆️

How Canva Builds Feedback Loops Between MMM and Experiments

A futuristic astronaut operating a scientific device on a platform overlooking a vibrant alien landscape with a large planet in the background.

Experiments give marketing teams the only form of evidence that feels real under pressure. Matt explains that experiments inside Canva involve manual planning, coordination with regional teams, and strict sequencing so tests do not collide. Modeling fills the broader measurement layer by covering every market and every channel at once. Matt treats the model as a workhorse that needs constant guardrails because most teams feed it far less data than the model requires to behave reliably.

He lays out the core problem in simple terms. MMMs often rely on only a few hundred or a few thousand daily observations, yet teams expect them to estimate hundreds of parameters. Those parameters include baseline behavior, long term seasonality patterns, and channel efficiency curves. Matt describes how this imbalance produces unstable signals whenever channels move together. Facebook and TikTok often follow the same spend patterns. The model cannot distinguish their individual contributions when the data hides any meaningful variation. That limitation creates inflated certainty in outputs that look clean on a dashboard but feel questionable the moment you ask the model to explain itself.
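
The Facebook/TikTok identifiability problem is easy to see in miniature. In this invented example the two spend series move in lockstep, so their correlation is essentially perfect and no regression can pull their individual contributions apart:

```python
# Sketch of why correlated channel spends break MMM attribution:
# when two channels move together, the model cannot separate them.
# Spend series are invented.

facebook = [10, 12, 11, 13, 12, 14, 13]
tiktok   = [5, 6, 5.5, 6.5, 6, 7, 6.5]  # half of Facebook, same shape

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(facebook, tiktok), 3))  # near 1.0: coefficients unidentifiable
```

When correlation sits this high, the model can assign almost any split of credit between the two channels and fit the data equally well, which is exactly why Matt forces spend variation through experiments.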

Matt treats experiments as a correction mechanism that pushes the model back toward observable reality. He builds experiments around situations where the data lacks variation, situations where leadership wants a clear result, and situations where the risk to the business is low enough to justify a controlled test. He mentions that Canva continues to grow its experimentation program and explores ways for AI to automate the routine setup work. He sees experimentation as a core part of measurement hygiene instead of a one off validation exercise.

He also highlights the expiration date on past experiments. A test that ran eight months ago may no longer reflect current conditions. Creative has changed. Budget pacing has shifted. Competitive dynamics look different. Matt jokes about world wars and new creatives to show how quickly the ground moves under these datasets.

“Experiments are core. The model is the best possible thing when you do not have an experiment, but you need both feeding each other.”

He treats every experiment as a time bound snapshot that needs to be refreshed through a continuous loop. The team uses modeling outputs to flag uncertainty, prioritizes experiments in those areas, and feeds the learnings back into the model. That loop gives Canva a way to manage measurement across dozens of markets without relying on any single source of truth.
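
One hedged way to picture that loop, combined with the expiration-date idea above, is a freshness-weighted blend between an experiment's measured lift and the model's estimate. The half-life and numbers below are assumptions for illustration, not Canva's calibration method:

```python
# Sketch of folding an experiment result back into the model: rescale
# the MMM's channel estimate toward the geo test's measured lift,
# weighted by how fresh the experiment is. All numbers invented.

def calibrated_roi(mmm_roi, experiment_roi, months_since_test, half_life=6):
    """A fresh experiment dominates; as it ages, its weight decays
    and the estimate drifts back toward the model."""
    weight = 0.5 ** (months_since_test / half_life)  # freshness weight
    return weight * experiment_roi + (1 - weight) * mmm_roi

print(calibrated_roi(mmm_roi=2.0, experiment_roi=1.2, months_since_test=0))  # 1.2
print(calibrated_roi(mmm_roi=2.0, experiment_roi=1.2, months_since_test=6))  # 1.6
```

The decay is the point: an eight-month-old test no longer overrides the model outright, which matches treating every experiment as a time bound snapshot.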

Key takeaway: Use MMM to identify where confidence is weakest, design experiments that isolate those gaps, and feed the results back into the model before the environment shifts again. That way you can maintain accuracy without expecting the model to solve problems the dataset was never built to support.

Back to the top ⬆️

Canva’s AI Workflow Automation for Geo Experiments

A futuristic landscape depicting a chessboard and a lone figure in a spacesuit, gazing out over vibrant mountains and a surreal sunset.

AI sits inside Canva’s measurement stack as a relief valve for the workloads that pile up around experimentation. Geo experiments highlight the issue clearly because the underlying Python and R notebooks require real technical fluency. Matt remembers the backlog well. One specialist held the keys to the geo splits, the synthetic controls, and the tuning knobs. Every team needed that person to run even simple configurations, and the queue grew longer every quarter.

“We were getting fifteen people pinging one person to run an experiment. That person was drowning.”

The team is now building a natural language layer that turns those technical notebooks into something a broader group can use. A marketer can request a test in a specific country, define the daily spend, and set the duration. The system turns that intent into valid parameters. Matt calls this a way to open the workflow to more people without sacrificing the guardrails that keep experiments trustworthy. The change gives teams more freedom to operate, and it gives the measurement function stronger leverage because it no longer serves as a manual switchboard.
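
A guardrail layer like the one described might look roughly like this once an LLM has parsed a marketer's request into parameters. The field names, region list, and limits are hypothetical, not Canva's actual system:

```python
# Hypothetical guardrail check for a natural-language geo-test request
# after an LLM has parsed it into parameters. Fields and limits are
# invented for illustration.

ALLOWED_REGIONS = {"US", "AU", "DE", "BR", "PH"}

def validate_geo_test(params):
    """Reject configs that would produce an untrustworthy test
    before anything reaches the experimentation notebooks."""
    errors = []
    if params.get("region") not in ALLOWED_REGIONS:
        errors.append("unknown region")
    if not (7 <= params.get("duration_days", 0) <= 90):
        errors.append("duration must be 7-90 days for a readable signal")
    if params.get("daily_spend_usd", 0) <= 0:
        errors.append("daily spend must be positive")
    return errors

print(validate_geo_test({"region": "AU", "duration_days": 28, "daily_spend_usd": 5000}))  # []
```

The natural-language front end widens access; a deterministic validation layer like this is what keeps the resulting experiments trustworthy.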

Snowflake’s Cortex layer powers the second category of work. Matt wants stakeholders to ask everyday data questions without taxing the data science team. He talks openly about the irony in many companies. They invest in dashboards but still funnel hundreds of trivial questions through their analysts because no one can find anything. A strong semantic layer gives AI enough context to answer these questions accurately. That way you can reduce the routine noise and help data scientists focus on designing experiments and refining models.

The creative tagging effort creates a different kind of leverage. Matt is using large language models to parse creative assets across formats. Static ads, videos, and audio files all become structured feature sets. These features feed downstream models that identify which characteristics correlate with performance. Teams gain evidence that informs creative direction, and they avoid long debates based on taste or instinct alone. He sees this as an efficient way to understand creative at scale because it works across languages and mediums without specialized annotation crews.
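
The output of a tagging pipeline like that presumably lands in a fixed schema that downstream performance models can consume. A minimal sketch with an invented tag vocabulary:

```python
# Sketch of turning LLM creative tags into a structured feature row.
# The tag vocabulary is invented for illustration.

TAG_VOCAB = ["has_human_face", "shows_product_ui", "bright_palette", "text_overlay"]

def tags_to_features(llm_tags):
    """One-hot encode the tags an LLM extracted from an asset, so
    every creative (static, video, audio) lands in the same schema."""
    present = set(llm_tags)
    return {tag: int(tag in present) for tag in TAG_VOCAB}

print(tags_to_features(["has_human_face", "text_overlay"]))
# {'has_human_face': 1, 'shows_product_ui': 0, 'bright_palette': 0, 'text_overlay': 1}
```

A fixed vocabulary is what makes the features comparable across languages and formats: every asset, however it was tagged, becomes one row a performance model can correlate with results.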

Key takeaway: AI delivers real value when it tackles the technical and repetitive work that slows your team down. Natural language tools help you run geo experiments without waiting for scarce experts. Warehouse querying reduces the volume of routine data questions. Creative tagging turns subjective decisions into evidence driven choices. Concentrate your AI investment on bottlenecks that quietly drain your time, and you will expand experimentation capacity, reduce operational drag, and push your measurement work into more strategic territory.

Back to the top ⬆️

Why Strong Coworker Relationships Improve Career Satisfaction

A vibrant digital illustration of a mystical forest at sunset, featuring tall trees, colorful foliage, and glowing mushrooms along a winding path.

Work feels lighter when the people around you raise the collective temperature in the right direction. Matthew spoke about this with a steady confidence that comes from lived experience rather than theory. He described how much satisfaction he draws from working with colleagues who think deeply, move quickly, and support one another when deadlines compress. He meant real collaboration, the kind where someone jumps in without being asked and you feel the tension drop from your shoulders.

“I can’t tell you how fortunate I am to work with such richly intelligent people who are also great to work with.”

Matthew encourages you to treat relationship building as a practical skill. He believes that your workday becomes radically better when you invest in trust and camaraderie. He talked about the value of believing that the people around you share the same intentions because it stabilizes conversations during disagreements. Teams rely on social capital when they debate roadmaps, restructure priorities, or deal with messy projects. He has seen how quickly alignment forms when the team already understands one another’s motives. That way you can navigate conflict with less friction and far more clarity.

Matthew also shared the emotional rhythm of working from home with two young kids. He described how he leaves his office after a difficult sprint and immediately steps into a different world. He hears small feet running toward him. He sees two kids with their own priorities, often involving a trampoline and zero interest in marketing models. That moment delivers an abrupt sensory shift. He can feel the grass, hear the laughter, and let his mind reset. Parenting forces him to switch contexts whether he is ready or not, and he treats that forced shift as a grounding mechanism.

His garden and time outside serve a similar purpose. He steps into slower cycles, touches soil, watches plants recover after rain, and reconnects with something that does not care about metrics or quarterly cycles. He talked about this with a kind of calm that suggests those rituals keep him balanced. They remind him that his job matters, but family and daily life move to their own rhythm. He closed with a line that every marketer knows deep down. At the end of the day, they are just ads. You can care deeply about the craft without letting the work devour all your bandwidth.

Key takeaway: Build strong relationships with coworkers who share your intentions because those bonds make hard projects manageable and disagreements less draining. Make time to invest in trust so you can collaborate with less friction and better momentum. Create grounding rituals outside of work, especially ones that force your attention into the present, such as time with kids or time outside. Treat the work with care while remembering that it occupies only one part of your life.

Back to the top ⬆️

Episode Recap

A digital artwork featuring Matthew Castino, Marketing Measurement Science Lead at Canva, against a vibrant cosmic landscape with mountains and abstract elements, promoting the 'Humans of Martech' podcast.

Matt’s work at Canva follows a clear pattern. He hunts for the places where spend is heaviest and understanding runs thin, and he treats those areas as priority territory. If MMM bounds stretch, if creative loses traction, or if lift tests wobble, he pushes for an experiment. He works this way because he wants marketers to move with confidence instead of fear, and he wants measurement to feel practical instead of theoretical.

The branded search test became one of the team’s turning points. Canva’s CEO questioned why the company kept paying for its own name, so Matt suggested shutting the keyword off in a handful of regions. High intent users shifted to the organic link, and Canva saved millions. The result changed how teams talked about measurement. It showed people that experiments settle arguments faster than debates and replaced long-held assumptions with something firmer.

As the company expanded, Matt realized the old centralized model could not support the needs of regional teams. Markets behaved differently. Creative norms shifted by culture. Interpretations bottlenecked in Sydney. He rebuilt the structure so centralized models and tooling stayed consistent while embedded data scientists sat inside each region to translate insights into local decisions. Forecasting played a similar role. He created one model that united marketing incrementality, finance projections, and multi year growth levers so leaders could plan with a shared understanding instead of parallel narratives.

Attribution reentered the picture once MMM reached its limits. User level signals exposed friction, quality differences, and drop-off points that aggregated models smoothed out. Matt kept attribution in the toolkit because it catches patterns that matter in fast feedback loops. Conflicts between MMM and attribution became signals to investigate, not reasons to discard a method. Experiments filled the remaining gap. They grounded the model when variation was thin and conditions shifted faster than history could stabilize.

AI helped relieve the operational strain that sat underneath all of this. Natural language tools removed the reliance on a single specialist for geo experiments. Snowflake Cortex reduced the flow of small data questions that once slowed analysts down. Creative tagging turned global assets into structured inputs for performance analysis. The result was a system that let Matt’s team spend more time designing thoughtful tests and less time acting as a manual switchboard. Canva’s measurement stack ended up functioning like a living system, shaped by evidence, corrected by experiments, and supported by AI in the exact places where pressure used to build.

Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Matt👇

✌️


Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
