Apple • Spotify • Pocket Casts • YouTube • Overcast • RSS

What’s up everyone, today we have the pleasure of sitting down with Dr. John Whalen, Cognitive Scientist, Author, and Founder at Brilliant Experience.
Summary: John has spent his career studying how people actually think, and his conclusion is uncomfortable for anyone who believes their marketing decisions are more rational than they are. In this episode, John explores how synthetic users built from cognitive science principles can fill the massive research gap that most teams quietly ignore, and why removing the human interviewer from the room might be the fastest way to finally hear the truth.
In this Episode…
- How Synthetic User Research Works and When to Trust It
- Building Synthetic Users From Real Interview Data
- How Synthetic User Segments Replace the Average Persona
- Bridging Qualitative and Quantitative Research
- How Synthetic Users Help Answer the Why Behind Behavioral Data
- How to Design Marketing Content for AI Agents and Human Users Simultaneously
Recommended Martech Tools 🛠️
We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🎨 Knak: Go from idea to on-brand email and landing pages in minutes, using AI where it actually matters.
🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster, easier and without back-and-forth.
🔌 GrowthBench: Twilio’s top-tier consulting partner, turning your Twilio investment into a customer engagement engine.
About John

Dr. John Whalen is a Cognitive Scientist, Author, and Founder of Brilliant Experience, where he applies cognitive science principles to help organizations design products and experiences that align with how people actually think and make decisions. He’s also an educator, teaching two AI customer research courses on Maven.
His work explores the intersection of human psychology and marketing, including the emerging practice of pre-testing ideas on synthetic users to give brands a faster and more informed competitive edge. He is also the author of a book on the science of designing for the human mind, bringing academic rigor to practical business challenges.
How Synthetic User Research Works and When to Trust It

Synthetic user research sounds like something creepy out of a dystopian science fiction film, and John is the first to admit the terminology does nobody any favors. When asked what synthetic users actually are and what they mean for research, he admitted that if he had been on the branding team, he would have pushed hard for something like “dynamic personas” instead. The name creates unnecessary friction before the conversation even starts. And that friction matters when you’re trying to get skeptical executives or meticulous researchers to take the whole thing seriously.
Under the hood, specialized AI tools simulate how a defined audience segment would respond to a question, concept, or stimulus, without recruiting, scheduling, incentivizing, or waiting on real human participants. John runs a class where he collects genuine human data first, then feeds comparable inputs into these tools to benchmark accuracy head-to-head. The results are pretty wild. AI-generated responses align with real human findings somewhere between 85% and 100% of the time on major topics and consumer needs. That is not a peer-reviewed clinical trial, and John is not pretending otherwise. But 85% alignment is enough signal to stop reflexively dismissing the method and start asking harder, more specific questions about exactly where it fits into a research stack.
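To make the shape of that benchmark concrete, here’s a toy sketch of the head-to-head comparison. The theme names are invented for illustration; the real class exercise compares full interview outputs, not just theme lists.

```python
# Toy version of the head-to-head benchmark: what share of the themes
# surfaced by real human interviews does the synthetic run also surface?
human_themes = {"price anxiety", "setup friction", "trust in peer reviews"}
synthetic_themes = {"price anxiety", "setup friction", "brand loyalty",
                    "trust in peer reviews"}

alignment = len(human_themes & synthetic_themes) / len(human_themes)
print(f"{alignment:.0%} of human-identified themes reproduced")  # 100% here
```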
So what does this mean for you and your company? Think of all the decisions that currently live in a black hole of zero structured input. How many product calls, campaign concepts, and messaging pivots happen with nothing more than a conference room full of people who all follow the same talking heads on LinkedIn? John argues that low cost, round-the-clock accessibility, and minimal public exposure make these tools a natural fit for precisely those moments: pressure-checking a hypothesis at 11pm, testing whether a pitch direction even makes sense before it touches a client, or deciding whether a concept deserves the time and money required for proper validation.
“If these are only going to keep getting better and better, which they are, then logically, what kinds of decisions right now go completely by gut and no research, and what could we use to help us frame that?”
One of the more underappreciated angles John raises is global inclusivity. Large organizations routinely test in the US and Western Europe, then extrapolate those findings to markets in Southeast Asia, Latin America, or Sub-Saharan Africa because local research budgets simply do not exist. Big no-no. Synthetic personas trained on broader, more representative data could at minimum provide directional signals for those markets, making research more geographically honest without a proportional spike in spend.
The early AI bias problem, where models essentially mirrored the worldview of a narrow, tech-adjacent demographic slice, was real and valid and well-documented. But training data keeps expanding, and the gap between “Silicon Valley assumption” and “what people in Nairobi or Jakarta actually think” is narrowing in ways that deserve acknowledgment.
Key takeaway: Synthetic user research earns its place not as a replacement for real human data, but as a low-cost, always-available pressure valve for the enormous volume of decisions that currently happen with no research input at all, so before you dismiss it as gimmicky, ask yourself honestly how many of your last ten strategic calls were backed by anything more rigorous than internal consensus.
⬆️ Back to the top
How Synthetic Users Make Stakeholders Want More Real Human Research

Those big hairy static research decks have a fundamental limitation that anyone who has sat through a stakeholder presentation already understands. You hand over a slide deck, someone reads it, and then three days later they have five more questions you can’t answer without going back to the field. Brutal feeling.
Interrogating a Live Persona
John argues that synthetic users solve this problem in a surprisingly indirect way: when a stakeholder can keep interrogating a live AI persona, the conversation never closes. They start poking at the model, asking things like “would you like this?” or “why would you feel that way about that?” and somewhere in that process, something shifts. They stop treating research as a report and start treating it as a living, always-on thing.
What John has observed across a half-dozen client engagements is that this interactivity makes leaders ravenous for it. His team positions synthetic user outputs as directional, explicitly not as data, closer to hypothesis generation than validation. But still crazy valuable. When a stakeholder gets genuinely excited about a pattern they’re seeing in a synthetic persona, the natural next thought tends to be “if this could actually be true, we need to go test it with real humans.” The synthetic user functions as a preview of the variance you might find in the field, not a substitute for going there.
“Think of this as almost a preview of what you could have with your humans. So you’re being more prepared for what might be to come, what might be the distribution of different responses.”
Instant Reactions
There’s a second use case John describes, about discovering new questions. When a stakeholder first sits down to scope a research project, they often don’t know what they’re actually asking. Spinning up a synthetic user in the room and throwing that rough, half-formed question at it live tends to produce a response the stakeholder immediately reacts against, not because the answer was illuminating, but because it was slightly off. “That isn’t quite what I want” is one of the most valuable sentences you can hear from a client. Suddenly you’ve learned more about what they actually care about than thirty minutes of intake conversation would have yielded. The synthetic user’s output almost doesn’t matter in that moment. The reaction is the data.
Testing Niche Segments
John also points to a structural advantage for research teams working with limited budgets and crazy timelines. Most orgs can’t realistically test every audience segment they want to reach, and prioritization decisions get made on gut feel more often than anyone will admit. Synthetic users give teams an early directional read on which segments are worth the investment before expensive fieldwork begins.
Asking Impossible Questions
He also notes the ability to ask what he calls impossible questions: things a real respondent would never tell an interviewer and would never admit on a survey, surfaced through the looser, more speculative nature of large language model interaction. You can even prompt outputs in specialized frameworks like jobs-to-be-done format, which no survey instrument lets you do in real time. That’s a genuinely strange capability, and the industry is still figuring out how much to trust it.
Key takeaway: Position synthetic user sessions as live hypothesis engines inside stakeholder scoping meetings, because the moment a decision-maker says “that’s not quite what I mean,” you’ve captured the actual research question faster and more precisely than any intake form will ever give you.
How Synthetic Users Complement Real Human Research

Pre-testing on synthetic users draws a fair amount of skepticism, most notably the risk that AI-generated responses miss the kind of visceral, emotional moments, the genuine hugs and tears, that surface in real-world contextual inquiry. John doesn’t dodge it: large language models carry real blind spots in their current form, and anyone pretending otherwise is selling you something.
One of the clearest examples he offers involves basic physical grounding. Ask an LLM about reaching something at twenty feet and it will cheerfully tell you to just reach up. No embodied sense of what twenty feet feels like. No physical intuition built up over decades of navigating an actual body through an actual world. That gap is structural, not a quirk to work around.
Beyond spatial reasoning, synthetic users also tend to be (paradoxically) too logical. Real humans are gloriously irrational, shaped by cognitive biases, emotional tripwires, flat-out contradictions, and crappy memories that statistical models sand smooth. John frames synthetic outputs as notional possibilities rather than definitive verdicts, because real research routinely surfaces counterintuitive findings that no model would ever predict.
“We want to say this is a possibility or a notional. Real humans and the way that we think, we are all subject to our frailties and imperfections, and synthetic users may fall into the trap of being too logical or too systematic.”
Where John finds synthetic users genuinely powerful is pre-meeting rehearsal: his team regularly builds synthetic versions of specific stakeholders (his ICP) before high-stakes presentations, feeds the model the key points of the pitch, and asks what the first rebuttals, questions, and objections are likely to be. We actually do this at Humans of Martech when prepping interview questions for guests. John finds the accuracy wild: a question the model surfaces as probable will often come out of that stakeholder’s mouth during the actual meeting, sometimes word for word.
He also uses synthetic experts to interrogate his own cognitive blind spots, deliberately inviting alternative disciplinary perspectives into his analysis. As a cognitive scientist by training rather than a brand strategist, that kind of intellectual friction is not optional for him; it is how he catches the frameworks he would never naturally reach for.
The more honest critique of the “shortcuts versus rigor” debate is that it constructs a false binary from the start. Synthetic pre-testing and real-world contextual inquiry aren’t at odds with one another, they are solving different problems at different stages of a research process. One delivers speed, scale, and a way to stress-test your thinking before you burn resources. The other delivers raw, unfiltered human texture that no generative model can fabricate, the kind you feel in a room when someone’s voice cracks unexpectedly. Treating them as competitors is the actual methodological error, not the tools themselves.
In a marketing context, that’s like saving long-term holdout incrementality experiments for big-ticket ideas and using synthetic users for the debate you’re having with the product team about the best subject line for your onboarding email sequence.
Key takeaway: Run your pitch or concept through a synthetic stakeholder model before your next high-stakes meeting, collect the likely objections it generates, and build your responses into the presentation itself, then follow up with real human research wherever emotional nuance and lived irrationality are the variables that actually determine the outcome.
How to Build Synthetic Users From Real Interview Data

Synthetic users sound like a shortcut. Feed some prompts into ChatGPT, get back a fake persona, call it a day. But John is describing something considerably more rigorous than that, and the gap between what most people are doing and what he’s actually teaching deserves a hard look. When asked about the practical mechanics of building synthetic users, he broke it down into two distinct tracks: purpose-built SaaS products, and custom agentic pipelines you construct yourself.
On the SaaS side, John rattled off a list of tools most marketers have never encountered, including Verve, Yabble, Subconscious AI, Ask Rally, and Delve, each with a different methodology and a different angle of attack. His team has been interviewing the founders of these platforms and stress-testing them before going on record about their value.
That kind of due diligence is rare, almost anachronistically so. Most of the synthetic user conversation in martech right now involves people breathlessly recommending tools they’ve used for two weeks. John’s posture is that of an orchestrator, someone who understands what each tool is actually doing beneath the polished interface, rather than someone who just trusts the output because it arrived in a clean PDF.
The second track is where things get genuinely thorny. Using tools like Claude Code, Gemini, or Codex, he builds agentic systems that walk through the synthesis process in visible, auditable steps. The workflow looks something like this (a minimal code sketch follows the list):
- Ingest raw interview transcripts and extract key personality traits, motivations, vocabulary patterns, and life context from each one.
- Cluster those extractions into three or four major archetypes that represent the real distribution of people in that data set.
- Bring those archetypes to life as interactive synthetic users you can interview individually or run as a simulated focus group.
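Here’s what those three stages might look like in code, assuming a generic LLM client. The prompts, field names, and three-archetype default are our illustrative assumptions, not John’s exact implementation.

```python
# Sketch of the extract -> cluster -> interview pipeline. `complete()` is a
# stand-in for whichever model client you use (Claude, Gemini, GPT, ...).
from dataclasses import dataclass

def complete(prompt: str) -> str:
    """Stand-in for an LLM call; wire in your own client here."""
    raise NotImplementedError

@dataclass
class Extraction:
    source_id: str
    notes: str  # traits, motivations, vocabulary patterns, life context

def extract(source_id: str, transcript: str) -> Extraction:
    # Stage 1: pull the key signals out of one raw interview transcript.
    notes = complete(
        "From this interview transcript, extract personality traits, "
        "motivations, vocabulary patterns, and life context:\n\n" + transcript
    )
    return Extraction(source_id, notes)

def cluster(extractions: list[Extraction], k: int = 3) -> list[str]:
    # Stage 2: collapse the extractions into k archetypes that reflect
    # the real distribution of people in the data set.
    corpus = "\n\n".join(f"[{e.source_id}] {e.notes}" for e in extractions)
    raw = complete(
        f"Group these interviewees into {k} archetypes. Describe each one, "
        f"separated by blank lines:\n\n{corpus}"
    )
    return raw.split("\n\n")

def interview(archetype: str, question: str) -> str:
    # Stage 3: bring an archetype to life and put questions to it directly.
    return complete(
        f"You are this person:\n{archetype}\n\nAnswer in character: {question}"
    )
```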
“A lot of the people in my classes are like, that really sounds like what I would get in my focus group or in my interviews. It’s kind of creepy.”
What separates this from someone lazily prompting an LLM to “act like a 35-year-old marketing manager” is the layered evaluation architecture John builds around it. Each stage of the pipeline produces a visible byproduct you can inspect and argue with. If the model is extracting quotes from interviews, a separate lightweight agent does exactly one job: check whether each quote is real. Yes or no. Fix it before moving forward. No exceptions, no vibes-based trust.
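That particular check is deterministic enough that it doesn’t even need a model. A minimal sketch, assuming extracted quotes should appear verbatim (after whitespace normalization) in at least one source transcript:

```python
import re

def normalize(text: str) -> str:
    # Collapse whitespace and case so line wrapping can't cause false failures.
    return re.sub(r"\s+", " ", text).strip().lower()

def unverified_quotes(quotes: list[str], transcripts: list[str]) -> list[str]:
    """Single-purpose validator: return every quote that appears in no
    source transcript, so the pipeline can fix it before moving forward."""
    corpus = [normalize(t) for t in transcripts]
    return [q for q in quotes if not any(normalize(q) in t for t in corpus)]

# An empty return means every quote is real; anything else blocks the run.
```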
This single-purpose validation loop is borrowed from advanced agentic system design, and applying it to qualitative research synthesis is genuinely clever. He describes the broader pattern as an evals framework, a classic agentic evaluation approach where the system critiques its own outputs, absorbs feedback, and refines its analysis before producing the next pass. The first cycle will probably be rough, and that’s built into the expectation. The point is you can see exactly why it was rough and correct it, the same way you’d mark up a junior analyst’s first draft and send it back covered in red.
Most organizations deploying synthetic users can’t answer one basic validation question: would this have produced the right go/no-go decision on an actual campaign? Rigorous benchmarking of synthetic user accuracy against real-world segment distributions remains largely undone. The tools are getting more powerful. The outputs are getting more convincing. The methodology for proving they’re actually correct is not keeping pace. Building your own stepwise pipeline, with visible intermediate outputs and embedded evaluators, at least gives you something to audit when the results look off. A black-box SaaS tool that hands you a persona report with zero visibility into the steps it took to get there offers confidence without accountability, which is a dangerous thing to sell someone who’s about to brief a creative team.
Key takeaway: Build synthetic users through a staged pipeline where each step produces an inspectable intermediate output, because a persona you cannot audit at the extraction, clustering, and validation stages will quietly embed errors that compound all the way into your creative brief.
How Synthetic User Segments Replace the Average Persona

The average persona was always a compromise. Marketers built it not because it reflected reality, but because one human brain can only hold so much complexity at once. John acknowledges this directly when asked whether synthetic users risk becoming the same blunt instrument: personas were always a cognitive convenience, a shorthand fiction dressed up as strategy.
The shift now is one of scale and granularity. Where a traditional persona might flatten an entire target audience into a single composite archetype, synthetic users allow for something far more textured. A consumer in Osaka carries different cultural cues, purchasing anxieties, and aesthetic preferences than one in Santiago or Melbourne. Those differences matter enormously in product development and messaging. The average persona erases them. Synthetic segments don’t have to.
“Logically we can create a world model and then ask what the world speaks to this concept and how we might refine it. You could essentially have hundreds of little studies that we’re doing globally, and then collectively, what is that telling us?”
What separates this from the old approach is live signal detection. John describes teams pulling real behavioral data from platforms like Twitter and Instagram across geographies, feeding current cultural context directly into synthetic profiles rather than leaning on frozen demographic assumptions. Product teams testing a new concept can suddenly see not just how a generalized “35-year-old urban professional” might react, but how that reaction splinters across a dozen distinct regional mindsets at the same time.
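As a rough illustration, and not any specific tool’s actual API, folding live regional signal into a persona prompt could look something like this:

```python
def persona_prompt(region: str, base_profile: str, recent_posts: list[str]) -> str:
    # Ground the persona in live local signal instead of a frozen
    # demographic sketch; cap the sample to keep the prompt small.
    signal = "\n".join(f"- {post}" for post in recent_posts[:20])
    return (
        f"You are a consumer in {region}. {base_profile}\n\n"
        "Recent local conversation you have been exposed to:\n"
        f"{signal}\n\n"
        "React to product concepts in character, grounded in this context."
    )

prompt = persona_prompt(
    "Osaka",
    "Mid-career professional, price-sensitive, design-conscious.",
    ["Everyone is comparing cashless points campaigns this week"],
)
```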
Research that once required a large dedicated team, or simply never happened at all, becomes tractable. That is a fundamentally different category of competitive intelligence, not an incremental improvement on what came before.
The uncomfortable truth is that most organizations are still building personas the old way, and the reason has nothing to do with method quality. Upgrading the process feels disruptive. The data infrastructure, the prompting discipline, and the interpretive frameworks required to run hundreds of synthetic micro-studies demand real investment, real commitment, and someone willing to argue for it internally. Meanwhile, the competitive asymmetry keeps widening. Brands still collapsing global audiences into a single representative human are making high-stakes strategic decisions based on an average that, statistically speaking, describes no actual living person.
Key takeaway: Synthetic user segments only escape the trap of the average persona when they are fed with live, geographically specific behavioral data rather than static demographic assumptions, which means the most important operational question for any research team is not how many segments they have built, but how recently those segments were updated with real-world signal.
How AI Moderation Surfaces Confessions That Human Researchers Cannot

There is a specific kind of truth that never surfaces in a standard focus group. The kind where someone admits they bought something because their spouse pressured them, or that watching their parents lose everything to debt still governs every financial choice they make thirty years later. Those confessions exist inside people. They almost never escape into a room with a clipboard and a stranger behind a one-way mirror.
When asked about the most uncomfortable truths that synthetic agents have surfaced during the research process, John shared something more interesting than a single dramatic example. He reframed the question entirely to explain what makes both AI-moderated interviews and synthetic users so disarming. People reveal more when the thing asking the question has no feelings to hurt and no eyebrow to raise. A human moderator, no matter how skilled, carries an implicit social contract into every session. Somewhere in the back of their mind, the respondent is always performing, always managing the impression they are leaving behind.
“People have this tendency to be more willing to open up to an AI moderator for things about their finances or their insecurities or their medical situations, because it’s sort of neutral. It won’t be like, ‘My God, you have that situation and you’re still eating Twinkies.'”
John is careful not to oversell the effect. Respondents know, in some dim and peripheral way, that a human will eventually read the transcript. But that awareness sits far enough away that it stops triggering the same defensive editing. What comes through instead are the messy, irrational, profoundly human forces that actually dictate purchasing behavior. Deep-seated financial anxiety rooted in childhood scarcity. The quiet admission that a spouse holds the real veto power in the household. These are not outlier confessions from unusually candid participants. They are the invisible architecture underneath most buying decisions, and conventional brand research walks right past them.
The practical implication stings a little if you run qualitative research for a living. Consistently clean, articulate, socially acceptable answers from respondents are a warning signal, not a green light. You are probably looking at a curated version of your customer. John frames this not as an indictment of traditional research but as a structural feature of human social dynamics that no amount of moderator training fully resolves. He is candid that a genuinely expert human moderator can go deeper and sharper than an AI. But reaching 85% of that quality across a dozen languages, at speed, without the interpersonal friction that keeps sensitive truths buried, is a categorically different research capability than anything that existed five years ago.
Key takeaway: When customers answer to a human researcher, they are partly answering to their own self-image, so removing that social dynamic through AI moderation creates the psychological distance people need to reveal the financial anxieties, personal insecurities, and hidden influences that actually drive the decisions your marketing is trying to reach.
How Synthetic Users Bridge Qualitative and Quantitative Research

Trigger warning: The quant versus qual debate has consumed research budgets and fractured marketing and product teams for decades. On the quant side, marketing analytics teams want statistical significance; on the qual side, product teams want human nuance and usability feedback. Both camps are usually right, and both are usually talking past each other. John believes AI may finally give both sides enough of what they need to stop fighting.
When asked about his claim that AI enables statistically relevant qualitative data, John was quick to temper expectations. There is no definitive proof yet, and he said as much directly. But the directional evidence is strong enough to take seriously. AI-moderated qualitative research now makes it possible to run qual studies at genuine scale, something that was essentially impossible when every interview required a human moderator burning through time and budget. That constraint alone kept the two camps permanently at odds. When you add synthetic users into the picture, the possibility space expands in ways most research teams haven’t fully reckoned with.
“We can go and build a set of synthetic users that represent our data, run a quant test, and then run it with synthetic users and see if we get the same stats.”
Statistics is ultimately about representing a population. Synthetic users, when built properly from real behavioral and demographic data, are also attempting to represent a population. If those synthetic cohorts reliably reproduce the statistical patterns observed in real-world quant testing, that is a meaningful validation signal, not a minor footnote. Researchers can then use them to pre-test ideas, stress-test assumptions, and get directional reads on lift potential before committing to expensive field studies that take weeks to produce a number.
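One way to run that comparison, sketched with invented counts: put the same multiple-choice question to both cohorts and test whether the two answer distributions are statistically distinguishable.

```python
# Assumes scipy is installed; counts are invented for illustration.
from scipy.stats import chi2_contingency

real      = [312, 145, 43]   # field-survey counts for options A, B, C
synthetic = [298, 160, 42]   # same question put to the synthetic cohort

chi2, p, dof, _ = chi2_contingency([real, synthetic])
# A large p-value means the two samples are statistically indistinguishable,
# which is (weak, directional) evidence the cohort reproduces the real stats.
print(f"chi2={chi2:.2f}, p={p:.3f}")
```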
John also traces this back to the underlying mechanics of how machine learning actually works, which is where the hype earns a little of its oxygen. Neural networks were built on a stubbornly biological premise: can we model the way neurons in the human brain form and dissolve synaptic connections based on repeated exposure to patterns? The answer turned out to be yes, well enough at least to produce algorithms capable of mapping relationships across datasets so large no human analyst could process them. Natural language processing then extended that capability into human communication itself, which is what eventually handed researchers the generative AI tools now rewriting how discovery work gets done.
Nobody yet knows whether synthetic users will predict a 2% lift versus a 3% lift with real precision. Anyone claiming otherwise is sprinting past the evidence. But the directional signal is real, and serious organizations are already building internal validation programs to test it. Waiting for a definitive academic paper before experimenting is how teams end up two years behind the curve.
Key takeaway: Synthetic users operate on the same statistical logic as traditional sampling by attempting to faithfully represent a real population, so running your current quant tests against a well-constructed synthetic cohort is one of the most cost-efficient ways to start building your own evidence base rather than inheriting someone else’s conclusions.
How Synthetic Users Help Answer the Why Behind Behavioral Data

Behavioral data is one of the most misleading success stories in modern marketing. You can track every scroll, every hesitation, every abandoned cart, but none of that tells you what was actually happening in the person’s head. The click is visible. The cognition behind it is not. When asked about whether synthetic users represent a genuine solution to this gap, John took the question seriously, agreed with the premise, and immediately added the kind of friction that separates thoughtful practitioners from hype-chasers.
The strongest argument for synthetic users, in John’s view, is their ability to reach populations that would otherwise be impossible to study. Consider the research problems that have always been treated as unsolvable:
- What do ultra-high-net-worth individuals actually want from a financial product when a $300 incentive is an insult, not a motivation?
- What drives someone who abandoned a checkout flow at the very last step?
- What does an elite rock climber need from gear that a mid-level enthusiast would never even think to articulate?
These are not edge cases. They represent entire customer segments that traditional survey panels and focus groups simply cannot access at any meaningful scale, and the industry has been quietly pretending otherwise for decades.
“These groups that you can’t reach out to at all are a great example of letting synthetic users surface the real possibilities, and then asking what evidence do we have to support or contradict that.”
John frames the right way to use this capability as converging lines of evidence. Synthetic user outputs function as a hypothesis-generation engine, surfacing plausible explanations for behavior that your clickstream data captures but cannot decode. Stress-test those explanations against everything else you already know: qualitative interviews, ethnographic observations, longitudinal cohort data. If the synthetic user’s reasoning holds across multiple sources, you have something worth acting on. If it conflicts with what actual users said in recorded sessions, that tension itself becomes the most interesting thing in the room.
Where John draws a sharp line is on blind adoption. He describes watching junior designers light up at synthetic user outputs and immediately want to ship decisions based on them, and his response is a measured but firm slowdown. The stakes are not uniform across industries: A potato chip brand running with a slightly wrong synthetic insight loses some media budget and moves on. A company building cardiac surgical tools doing the same thing is operating in an entirely different moral and legal register, one where speed-for-rigor tradeoffs have names attached to them. The appropriate level of skepticism scales with consequence, and anyone flattening that distinction is courting a very specific kind of expensive mistake.
Key takeaway: Synthetic users are most valuable when applied to the research problems you cannot solve any other way, specifically the populations you cannot recruit, the questions you cannot ask at scale, and the behaviors your data captures but cannot explain, provided every output is treated as a testable hypothesis rather than a finding.
How to Design Marketing Content for AI Agents and Human Users Simultaneously

Most marketers are still building for one audience. They optimize for humans, run their A/B tests, tweak their copy, and call it a day. But when asked about the emerging challenge of designing for both human visitors and AI agents simultaneously, John made something clear: we are entering territory where those two audiences have almost nothing in common, and treating them as interchangeable is a costly mistake.
The behavioral gap between a human browser and an AI agent is profound. A human responds to visual hierarchy, emotional tone, and brand feel. An agentic system scanning your site to fulfill a shopping task cares about none of that. It runs on tokens, efficiency, and extractable signal. John pointed out that agents can read text rendered white on white, which means content that is invisible to your human visitors is fully legible to machines. One audience leans into ambiguity and atmosphere. The other is allergic to both.
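You can see this for yourself in a few lines: a parser gets every text node regardless of styling (sketch assumes beautifulsoup4 is installed).

```python
from bs4 import BeautifulSoup

html = (
    '<p>Visible copy</p>'
    '<p style="color:#fff;background:#fff">the "hidden" pitch</p>'
)
# An agent parsing the DOM gets every text node, styled or not.
print(BeautifulSoup(html, "html.parser").get_text(" ", strip=True))
# -> Visible copy the "hidden" pitch
```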
John offered one of the more practical suggestions for anyone trying to get ahead of this: open a reasoning-capable AI tool and actually watch it think. Most of these systems display their internal reasoning process in real time, often as a collapsible chain-of-thought. Sit with it. Watch the model second-guess itself, correct its assumptions, and narrate its own uncertainty. As John put it:
“They’ll say, oh I think the user wanted this, but I gave them that, that might suck, maybe I should think about it a different way. Seeing that little line of reasoning could be really helpful to guide that.”
There is also a more adversarial layer forming underneath all of this. Agentic systems are becoming gatekeepers of product discovery and purchasing decisions, and some marketers will inevitably start optimizing content to manipulate those systems the same way they once gamed search rankings. The agents themselves will be tuned to resist that manipulation.
Anyone who lived through the early SEO wars knows exactly where this road goes. John framed this honestly, acknowledging that agentic tools are still moving targets and that understanding how they actually approach a problem requires ongoing study rather than inherited assumptions. Reverse engineering their behavior will be part of the competitive landscape, and so will building content that is structurally honest, unambiguous, and legible to whatever entity encounters it first.
Key takeaway: Watch a reasoning-capable AI agent work through a task in real time, read its chain-of-thought output, and use what it fixates on, skips, and questions to audit whether your content is actually communicating what you think it is to a non-human reader.
How Downtime and Analog Experiences Drive Creative Innovation

Dr. John Whalen wears a lot of hats. Cognitive scientist. Author. Founder. Speaker. Consultant. Father. And, perhaps most importantly, dog dad. When asked how he decides what deserves his energy at any given moment, John doesn’t launch into a productivity framework or reference some time-blocking system he read about in a business book. His answer is far more human than that, and honestly, more useful.
His dog Lola is the unlikely muse here. John describes how dogs greet each day with this almost embarrassing level of enthusiasm, completely unburdened by deadlines, tax seasons, or quarterly reviews. There’s something worth sitting with in that observation. Humans have constructed elaborate cognitive scaffolding around productivity and purpose, but dogs haven’t gotten that memo, and they seem fine. Better than fine, actually. John laughs at himself for being one of those dog walkers who listens to podcasts mid-stroll, joking that Lola was very interested in a recent episode about agentic AI workflows. But underneath the humor is a genuine point: even when he’s trying to unplug, the pull of constant information consumption is strong enough to follow him into the park.
“Connecting back to our analog selves, we know from psychology how crucial it is to just have a little bit of nature, a little bit of downtime, and actually we can be so much more innovative by just giving ourselves that veg time to let the ideas bubble up.”
The cognitive science here is well-established, even if the industry largely pretends otherwise. Periods of mental rest activate what researchers call the default mode network, the part of the brain responsible for creative synthesis, perspective-taking, and the kind of lateral thinking that doesn’t happen on command. Pack your calendar tight enough and that network goes silent. You’re technically working but cognitively hollowed out, moving fast through shallow water. John has spent his career studying how the human mind performs at its ceiling, and the answer keeps pointing somewhere inconvenient: strategic idleness outperforms grind.
For someone operating across so many domains, John’s personal system boils down to something almost suspiciously plain. Celebrate the people and animals around you. Get off your phone. Go outside. Let your mind wander without an agenda attached to it. The martech world in particular fetishizes busyness, treating packed schedules and relentless output as proxies for actual value, as if suffering through another 9 AM standup is somehow proof of intellectual seriousness. John pushes back on that framing, not with a contrarian manifesto, but by pointing to a dog who finds pure, unironic joy in an ordinary Tuesday morning.
Key takeaway: Block time on your calendar specifically for unstructured, screen-free downtime, because the default mode network, the neurological engine behind creative synthesis and strategic thinking, only activates when you stop feeding it inputs and actually let it run.
Episode Recap

Dr. John Whalen’s visit covered a lot of ground, but the throughline was consistent: most marketing decisions happen without any meaningful research input, and synthetic users offer a practical, low-cost way to close that gap without pretending to replace the real thing.
From using AI-moderated interviews to surface the financial anxieties and personal insecurities that people won’t admit to a human researcher, to building synthetic cohorts from real interview data that capture the full spread of user psychology rather than collapsing everyone into a single average persona, John showed how cognitive science principles can make both AI-assisted and traditional research meaningfully sharper.
If you work in lifecycle, research, product, or marketing strategy and you’ve ever made a significant call based on nothing more than internal consensus, this episode will reframe how you think about what “good enough” evidence actually looks like.
Listen to the full episode ⬇️ or Back to the top ⬆️

Follow John 👇
- Design for How People Think: Using Brain Science to Build Better Products
- AI for Customer Research: Future-Proof Your UX & Product Skills
- Agentic AI for Research: Build Custom Workflows & Synthetic Users
✌️
—
Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)