214: Austin Hay: Claude Code is creating a new class of elite marketers and the mental models that make it click

Text promoting sponsors of the event, featuring logos of RevenueHero, Mammoth Growth, MoEngage, and Knak.

What’s up everyone, today we have the pleasure of sitting down with Austin Hay, Martech, Revtech, and GTM systems advisor, and AI builder, writer, and ex-founder.

Summary: You’ll be hard-pressed to find someone who understands martech better and is further along in their Claude Code journey than Austin Hay. He maps the 2 chasms separating most marketers from big AI leverage, makes the case for a new class of professional he calls the white collar super saiyan, and walks through the automations he’s actually built: from a rage-fueled 3am Chrome extension to a multi-agent OKR pipeline that replaced a months-long planning process. If you think you’re using AI seriously and haven’t touched a terminal yet, this one is going to sting. In the best way.

In this Episode…

Recommended Martech Tools 🛠️

We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.

🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster, easier and without back-and-forth.

🦣 Mammoth Growth: Customer data agency that turns fragmented data into a unified foundation, unlocking sharper marketing insights and action.

📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.

🎨 Knak: Go from idea to on-brand email and landing pages in minutes, using AI where it actually matters.

About Austin

A cheerful character with blond spiky hair and glasses, wearing an orange vest against a vibrant sunburst background with clouds.

Austin is a rare repeat guest; he was first featured in episode 151 last year. He’s spent 15 years moving between the technical and strategic ends of marketing, starting as the 4th employee at Branch, building and selling a mobile growth consultancy that was acqui-hired by mParticle, and eventually rising to VP of Growth before moving on to Ramp as Head of Martech.

He also built The Marketing Technology Academy, an online learning center for martech, which he would eventually sell to Reforge, becoming the instructor for their Martech course.

He later co-founded Clarify, a CRM startup he took from zero to $100K+ ARR while completing a Wharton MBA. Today he works as a fractional advisor to scaling companies on martech, revtech, and GTM systems, teaches thousands of practitioners through his Martech course at Reforge, and writes the Growth Stack Mafia newsletter on Substack. In case that wasn’t enough, Austin is also building new courses on how to dominate marketing with AI at ClaudeMarketers.com.

How Code-Driven AI Workflows Outperform Chat-Based Prompting

A stylized character resembling a fighter holds a glowing orb with a pixelated ghost icon, surrounded by computer code and data processing elements.

Most marketers use AI the same way they used Google in 2005. Open the interface, type something in, read what comes back, copy it somewhere. Austin Hay did this for months. He was not an early Claude Code adopter. He says this upfront, almost as a confession. He thought it was another chatbot.

What broke him was specific. He was querying financial data at his startup, Clarify, through Runway, an FP&A platform connected to QuickBooks. Every SQL change required the same round trip: write the query in terminal, copy it to Claude, get feedback, paste it back, run it. He built a folder just to manage the back-and-forth. The model couldn’t see his local files. The chat UI had upload limits. He was stuck in what he calls a world of calling and answering. Functional. But slow. And bounded in a way you eventually stop ignoring.

Claude Code gave him access. When you type claude in a terminal, the model reads your actual files; the data as it lives in your repository, not a paste you copied, not a summary you wrote. It runs commands against your system, observes what happens, and acts on the result. The round trip ends. You stop relaying information and start working in the same environment. That is a different thing than a smarter chatbot.

The shift combined with several unlocks arriving at once:

  • Opus as a model
  • MCPs that worked reliably
  • the Max plan that made unlimited credits economical
  • an agent architecture built around memory files and commands

All of it hit critical mass for Austin in January. He says the last 6 months felt like 3 years. You can hear in how he talks about it that he means it.

The 2 chasms he had written about in his newsletter turned out to be real and distinct.

  1. Adopting AI at all is chasm 1.
  2. Crossing from chat to code is chasm 2.
A diagram titled 'The AI Chasms for White Collar Workers' depicting two AI chasms and various tools like CC, Cursor, co-work, repl.it, and web/app chats.
https://growthstackmafia.com/p/converging-on-white-collar-super

Most practitioners have cleared the first. Almost none have cleared the second. And the view from the other side, Austin says, is unrecognizable.

“It’s this culmination of many things that I think really hit this critical mass in about January of this year.”

Key takeaway: Install Claude Code, open a terminal, point it at a folder with files you actually work with (SQL queries, drafts, data exports, notes) and run a real task on them. The gap between giving AI access to your environment and describing your environment through a chat window is immediate and felt, and that feeling is what changes the mental model.

Back to the top ⬆️

How to Start Building With Claude Code When You Have No Time

A muscular character standing in a brightly colored training gym with weights and equipment in the background.

Short on time like the rest of the world? You have a 9-to-5. Your weekends disappear. Nobody at your company is running AI hackathons. “Learn the command line” isn’t really advice you can act on between your Thursday syncs.

Austin points at the part most people misunderstand: they know step 1 (chat interface) and they see step 3 (Claude Code in terminal) and they conclude the gap is too wide. Step 2 exists. And step 2 is where everything clicks.

Anthropic’s rollout is layered deliberately:

  1. Chat first: ask a question, read the answer, copy the output.
  2. Cowork space second: Claude works inside a folder on your computer, local or cloud-based, and you’re giving it real files to act on.
  3. Coding interface third: terminal, commands, agents.

The cowork space is a distinct step with its own payoff. It’s where the model stops being a question-answering machine and becomes an environment you work inside.

“Once people understand that Claude lives in a folder on your computer and you can throw stuff in that folder and have it work for you: that’s the next step.”

When you upload documents inside a Claude project and ask it to work on them, you learn something you can’t get from chat: Claude lives in a folder. It acts on what’s in front of it. That sounds obvious. It does not feel obvious until you’ve done it. And once you feel it, the jump from cowork to terminal starts feeling like a small step forward rather than a cliff.

Where this leads, eventually, is automation that runs without you. A cron job fires at 6am. A script processes your data. A workflow runs in the cloud while you’re on a call or asleep. Austin maps the progression clearly: folder on your machine, then a local cron, then a cloud-deployed process that runs continuously. The people building now are building the muscle memory to get there faster. You don’t have to start in the deep end. But you have to start somewhere.
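That progression (a folder on your machine, then a local cron, then the cloud) can be sketched in miniature. Everything below is a hypothetical example, not Austin’s actual setup: a script that acts on whatever lands in a folder, with the crontab line in the comment being what promotes it from “thing you run” to “thing that runs at 6am without you.”

```python
# briefing.py — a minimal sketch of "folder first, cron second".
# The folder path and file convention are hypothetical examples.
#
# Step 1: the script just acts on whatever is in a folder.
# Step 2: a local cron entry runs it without you, e.g.:
#   0 6 * * * /usr/bin/python3 /home/you/briefing.py
# Step 3: the same script moves to a cloud scheduler and runs continuously.

from pathlib import Path

def build_briefing(inbox: Path) -> str:
    """Turn the first line of every note in the folder into a digest."""
    lines = []
    for note in sorted(inbox.glob("*.txt")):
        text = note.read_text().strip()
        if not text:
            continue  # skip empty notes
        lines.append(f"- {note.stem}: {text.splitlines()[0]}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_briefing(Path.home() / "briefing-inbox"))
```

The point isn’t the script itself; it’s that once the work lives in a folder instead of a chat window, scheduling it is one crontab line away.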

Key takeaway: Start in Claude’s cowork space, not the terminal. Upload a folder of documents you already work with regularly (meeting notes, a newsletter draft, recurring reports, templates) and ask Claude to perform a real task on them. That interaction builds the foundational mental model before you write a single line of code. When you naturally graduate from copy pasta and Claude starts automating files for you… it starts to click.

Back to the top ⬆️

The Programming Concepts Non-Developers Need to Build With Claude Code

An anime-style character with spiky hair angrily typing on a vintage computer, with colorful wires scattered around and a bright pink background.

Austin has been saying “learn the command line” for a decade. That advice predates AI by years. The reason it matters now is completely different from the reason it mattered then.

The 3 foundations:

  1. command line (how computers work),
  2. object orientation (how APIs work),
  3. one programming language (how the web works).

You don’t need to master any of them. You need to understand them. Because without that base layer, you can use the tools that exist today, but you can’t evaluate what Claude does when it uses them on your behalf.

“When you have those 3 things, you can teach yourself anything.”

When you work with Claude Code, the model writes code, executes commands, and makes architectural decisions inside your actual environment, your local files (or the cloud). If you don’t know what a GCP service is, you can’t tell whether the one Claude chose is correct. If you don’t understand SQL, you can’t audit the query it wrote (don’t ask Claude the correct pronunciation though). You’re delegating judgment to a model without the ability to check its work. Austin describes the result as “really dangerous, silly, and wasteful.”

The 4th mental model is the one most specific to AI work: separating deterministic steps from probabilistic ones. Deterministic tasks produce the same output every time (parse a file, extract fields, format JSON). Probabilistic tasks involve inference (write a summary, generate options, draft a first pass). Mash both into a single prompt and you get inconsistent results and conclude the tool is unreliable. Separate them and push the reliable parts into code, and the inconsistency disappears. Your workflow becomes a pipeline. A pipeline runs without you.
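Here’s what that separation looks like as a minimal sketch. The parsing and formatting are deterministic Python that produces the same output every run; only one function touches the model. The `claude -p` headless call is an assumption about a local Claude Code install, so it’s the only part that isn’t plain, testable code.

```python
# A sketch of separating deterministic steps from probabilistic ones.
import json
import subprocess

def parse_records(raw: str) -> list[dict]:
    """Deterministic: same input, same output, every single run."""
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

def format_report(records: list[dict], summary: str) -> str:
    """Deterministic: a stable layout the rest of the pipeline can rely on."""
    header = f"{len(records)} records processed"
    return f"{header}\n\n{summary}"

def summarize(records: list[dict]) -> str:
    """Probabilistic: the one step delegated to the model.
    Assumes the Claude Code CLI is installed locally (`claude -p` headless mode)."""
    prompt = "Summarize these records in 3 bullets:\n" + json.dumps(records)
    out = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True)
    return out.stdout.strip()
```

Everything you might want to assert on (record counts, field names, layout) lives in code; the model only supplies the one sentence of inference in the middle.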

For Phil (that’s me!), Claude was like a teaching partner while I was building. There is no dumb question for Claude. Ask it wtf JSON is. Hit something unfamiliar and ask it to explain. The learning doesn’t require clearing your schedule. It requires staying curious every time you hit a wall.

Key takeaway: Pick 1 of the 3 foundations this week and spend an hour on it. Learn what a cron job does. Read a 10-minute explainer on what an API call is. Open a Python tutorial and write a function. Each of these changes what you can ask Claude to do and what you can evaluate when it responds.

Back to the top ⬆️

Examples: How to Turn Repeating Prompts Into Automations That Run Themselves

An illustration of a person in a red outfit squatting in front of a washing machine while interacting with a digital interface displaying text related to blog assembly and content generation.

The highest-ROI tinkering happens at the intersection of repetition and painful annoyance. If you’re doing it every week and you hate it, it’s a candidate for automation. That’s the full selection criteria. It doesn’t need to be more complicated than that.

The principle that changes everything for consistent output: stop prompting for things you want to repeat (maybe you even have frequent prompts saved in a doc somewhere that you paste into Claude or GPT); instead, push that into code. Code? That sounds scary. It actually isn’t.

Probabilistic tasks (summarize this, write a draft, generate options) can stay as prompts and leverage Claude directly. Deterministic tasks (parse this file, extract these fields, format this output) become scripts. The moment you run the same prompt twice expecting the same result, you’ve hit the ceiling of what prompting can do. A shell script doesn’t drift.

Here are a couple of practical examples:

Newsletter agent

Austin’s news briefing setup is the prototype. He was getting newsletters from sources including Humans of Martech (yeeee), Lenny’s pod, Elena Verna, Brian Balfour, and the Wall Street Journal, and was drowning in them.

  1. So he had Claude collect the RSS feeds and built a multi-agent pipeline: each source gets its own Claude session, which compresses 5 to 10 articles down to 3.
  2. Those go to a second agent that filters to the strongest ones.
  3. A third agent makes the final cut. Every morning the briefing arrives without him touching it.
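The three-stage shape of that pipeline can be sketched with the agents injected as plain functions. In Austin’s setup each stage would be a Claude session; here they’re parameters, so the orchestration (the deterministic part) is visible on its own. All names are illustrative.

```python
# A sketch of the compress -> filter -> final-cut pipeline shape.
from typing import Callable

# An "agent" is anything that takes a list of items and returns a shorter one.
Agent = Callable[[list[str]], list[str]]

def briefing_pipeline(feeds: dict[str, list[str]],
                      compress: Agent,
                      filter_best: Agent,
                      final_cut: Agent) -> list[str]:
    # Stage 1: each source gets its own pass, 5-10 articles down to 3
    per_source = [item for articles in feeds.values()
                  for item in compress(articles)]
    # Stage 2: a second agent keeps only the strongest
    strongest = filter_best(per_source)
    # Stage 3: a third agent makes the final cut for the morning briefing
    return final_cut(strongest)
```

Because the structure lives in code, swapping a source or a stage never changes the rest of the pipeline; only the agents (the probabilistic parts) do model work.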

Linear MCP example

The Linear MCP example scales this up.

Every manager knows the Friday afternoon ticket review. You open the board, stare at 20 or 30 cards in various states of rot, try to remember what’s blocked and what’s just been ignored, manually chase down the ones with no update, and type out the same “hey, what’s the status on this?” message for the 4th week in a row. It’s not hard work. It’s just draining, forgettable, and somehow always takes longer than it should.

With the Linear MCP, you tell Claude to pull every open ticket, sort them by priority or theme, and walk through the board with you. For anything sitting untouched for 3 days, Claude drafts the follow-up (a nudge to the requester asking for a status update) and posts it directly through the MCP. You’re not copying anything. You’re not switching tabs. You’re not typing the same message again. Claude does the triage, writes the updates, and sends them. Crazy.
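The triage rule underneath that workflow is simple enough to write down. This is not the Linear MCP itself, just the deterministic logic it automates, with hypothetical field names: find open tickets untouched for 3+ days and draft the nudge.

```python
# A sketch of the stale-ticket triage rule. Ticket fields ("updated_at",
# "state", "requester", "title") are hypothetical, not Linear's schema.
from datetime import datetime, timedelta

def stale_tickets(tickets: list[dict], now: datetime, days: int = 3) -> list[dict]:
    """Open tickets with no update in the last `days` days."""
    cutoff = now - timedelta(days=days)
    return [t for t in tickets
            if t["state"] == "open" and t["updated_at"] < cutoff]

def draft_nudge(ticket: dict) -> str:
    """The follow-up message the agent would post through the MCP."""
    return (f"Hey {ticket['requester']}, what's the status on "
            f"\"{ticket['title']}\"? It's been quiet for a few days.")
```

In practice Claude runs this judgment itself via the MCP; the value of seeing it as code is that you can audit exactly what “sitting untouched for 3 days” means before letting an agent message your team.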

OKR automation with Granola and Figma

The OKR process is where it gets really ambitious.

Most teams know how this goes. Someone schedules the planning sessions. You get everyone in a room or a Zoom, talk through priorities, and then spend the next 2 weeks chasing people down to actually get their thoughts into a doc lol. Someone writes a draft. It goes into a Google Doc graveyard. 3 rounds of async (or god forbid sync) comments later, half the team has weighed in and half hasn’t, and whoever owns the process ends up stitching it together manually and calling it done. It’s collaborative in theory but really it’s a project management nightmare that drags on for weeks.

Here’s what Austin cooked up.

  • He had individual conversations with every team member — all recorded with Granola.
  • Took all those transcripts and fed them into a series of agents that divided the responses into groups and themes, then synthesized a full set of OKRs from the raw conversation data.
  • Then he brought the team back together on a Figma board, presented the OKRs, and let people react in real time: emojis, comments, thumbs up, thumbs down, all of it dropped directly on the board.
  • The Figma MCP pulled every reaction back into text, fed it into another pass, and the final output went straight back into Figma.

No one had to write a draft or chase anyone down. The organic conversations were the core work.

“2 years ago at Ramp that was a months-long process. Now we’re talking probably 5 hours.”

Key takeaway: Identify the task you run more than once a week that produces inconsistent output when you prompt for it. Build it into a script using Claude Code — describe the input, the desired output, and ask Claude to write the script. The first version will be rough. Run it twice and it becomes a workflow. Run it daily and it disappears from your to-do list.

Back to the top ⬆️

Why Spending All Your Time in Meetings Is a Career Liability

A colorful illustration of a modern conference room with a group of people sitting around a large table. One person with pink hair is lying on the table, while others are seated in chairs, looking towards a blank screen.

Austin doesn’t put this mildly haha… you could be a 5-time startup founder, a VC partner, a senior executive with a full calendar. He says the same thing to all of you:

“If you’re spending all your time in meetings, you are cooked, personally. That is my honest-to-God opinion.”

No shade to meetings though, sure some of them produce real value. Relationships get built there. But a schedule dominated by them, with no protected time for building or tinkering, produces a specific kind of professional: someone who understands the landscape theoretically and has none of the hands-on pattern recognition that only comes from doing. Today this is dangerous because that gap compounds. The theory ages out in a matter of weeks. The hands-on experience doesn’t, or at least the reps don’t.

Even while consulting, Austin was thinking: if I were to join this company right now, how would I do this job with as little human intervention as possible? His career has been defined by finding the boring tasks in his day and finding ways to automate them. The meetings are context. The work that flows from them is where the leverage is.

What actually sits inside a meeting-heavy day, when you look carefully, is a lot of repetitive downstream work:

  • You take a call.
  • You digest information, you take notes, you sometimes make a decision, come to a consensus.
  • Then you extract action items.
  • Write a follow-up. Post in Slack. Update a ticket.

Each of those is a gap, and every gap is an automation candidate. What could you build that would make the post-call process 5 to 10 minutes, fully automated, not copy-pasted? Seems daunting if you look at the whole thing, but just start with one of them.

The entry point is a call recorder. If you have transcription data coming out of every call, you have structured input. From there, you decide what to do with it:

  • Summarize and send to your CRM.
  • Post highlights to a Slack channel.
  • Create a Linear ticket for each action item.
  • Each is a small Claude Code project.

Each one saves time you didn’t know you were losing. Working backwards from where you already are (the meetings, the calls, the recurring reviews) is the fastest path to finding what’s worth automating.
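One small piece of that post-call chain can be sketched directly. Extracting action items from a transcript is deterministic once they’re tagged; the `ACTION:` convention below is an assumption (in practice you’d have the model tag them during summarization), but the extraction itself never needs a prompt.

```python
# A sketch: pull tagged action items out of a call transcript.
# The "ACTION:" tagging convention is a hypothetical example.
import re

def extract_action_items(transcript: str) -> list[str]:
    """Return every line tagged ACTION:, with the tag stripped."""
    return [m.group(1).strip()
            for m in re.finditer(r"ACTION:\s*(.+)", transcript)]
```

From here each item could become a Slack message or a Linear ticket; that routing is exactly the kind of small Claude Code project Austin describes.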

Key takeaway: Audit 1 week of calls and write down every manual step you take after hanging up, like notes, action items, follow-ups, Slack messages, ticket updates. Pick the one that happens most often. Set up a call recorder with transcription and build a Claude Code automation that handles that step automatically. The recording is the context. The automation is the leverage.

Back to the top ⬆️

Why the Best First Claude Code Project Is the Task That Already Annoys You

A vibrant illustration of a green-skinned character riding a bicycle, showcasing an energetic expression, set against a bold red and orange background.

Austin bought an Eight Sleep mattress. His credit card had a $400 cashback offer linked to the purchase. He didn’t click it in time. The money was gone.

He was enraged. So he stayed up until 3am and built a Chrome extension with Claude and GPT that auto-clicks every credit card offer every time he logs in.

“I stayed up till 3am with Claude and GPT to build a Chrome extension that every time I logged in would just click all the offers. Never again.”

The task is absurd. The stakes are trivial. That is exactly the point. The best first Claude Code project is the one where the annoyance is fresh enough that you actually finish it. Austin’s framing: if something is annoying or interesting, go build it. The first version takes time. By the 5th, you reach for the terminal before you reach for a workaround.

Context determines output quality more than anything else. The more structured and specific your input, the more directly usable the result. This is why Whisper Flow took off: typing full context in exactly the format Claude needs is friction. Talking it out naturally, letting transcription handle the structure, dramatically reduces the barrier to giving good context. Voice inside Claude Code, which Anthropic has announced, matters for the same reason.

Austin is a beast: he runs 6 or 7 terminal windows at a time now, each running a Claude agent on a different task. He moves between them, checks progress, restarts what needs work. The whole system runs on Warp, an AI-native terminal replacement with model assistance built in, useful when you’re still learning which command does what. Every manual script he used to run himself has been migrated to a scripts folder and turned into a command. The work doesn’t live in his head anymore. It lives in folders.

Key takeaway: Find the task you did manually this week that made you irrationally frustrated. Build a Claude Code solution for it, even a partial one. The specificity of the problem makes the context easy to write, the emotion makes you finish it, and the result will be sitting there working the next time you need it.

Back to the top ⬆️

Why T-Shaped Marketers With Claude Code Will Cover the Work of Entire Teams

A character with spiky blonde hair in a red outfit stands with their back to the viewer, gazing at a dramatic sky filled with orange and yellow clouds.

Before Austin explains what a white collar super saiyan is, he turns the question back on Phil. “What is a Super Saiyan, Phil?” For anyone who missed Dragon Ball Z in the 90s: a Saiyan is a warrior class. Powerful, but bounded. A Super Saiyan is the same person after a specific breaking point — the hair goes gold, strength multiplies, the ceiling of what’s possible becomes unrecognizable. The transformation requires accessing something latent, not becoming someone different.

“People who can do a lot of these jobs all in one are going to command much higher salaries. I don’t think jobs are going away as much.”

The devil’s advocate question to Austin’s model: does a single person covering an entire marketing stack just create a single point of failure? His answer is direct: this is happening whether anyone has an opinion about it or not. The opportunity is in positioning for it.

He makes the distinction concrete with a case from his AtoB consulting work. A team member there handles event planning and community management, work AI genuinely can’t replicate: navigating venues, building relationships, reading a room. Her core function is durable. But the surrounding work (lead lists, conference invites, tracking and measurement) is entirely automatable. Whether she automates it herself or someone else does it for her determines where she ends up.

The destination Austin sees: a world where broad, T-shaped professionals dominate. Deep expertise in 1 vertical, strong cross-functional knowledge in adjacent areas (statistics, data modeling, attribution, telemetry) and the ability to execute across all of them using AI. Someone like this covers the surface area of a 3-person team, commands the salary to match, and produces higher-quality work because the feedback loop between strategy and execution has compressed to nothing.

Key takeaway: Identify 2 or 3 adjacent domains to your current role — attribution if you’re in lifecycle marketing, data modeling if you’re in analytics, paid if you’re in organic — and start building small automations in those areas using Claude Code. Breadth built through your own hands is worth far more than breadth you read about.

Back to the top ⬆️

Why Marketing Taste Matters More Than Technical Skill in the AI Era

A green humanoid character with large antennae sits alone at a restaurant table, looking pensive. The table is set with a plate of food, glasses of wine, and a bottle, against a backdrop of plush red seating.

Austin has been watching Mad Men. And it’s changed how he thinks about where marketing is heading.

“At the same time that all this technical marketing is getting easier to do, it’s also less effective. What’s more effective is being creative and thoughtful.”

Before programmatic targeting and attribution models and automated lifecycle sequences, the job was knowing people well enough to say the right thing to the right person. The technical complexity of modern martech was supposed to make that easier. Austin argues it has driven marketing toward uniformity rather than effectiveness. Everybody runs the same lookalike audiences on the same platforms with AI-generated copy optimized by the same algorithms. The result is more marketing that sounds the same.

The Ramp campaign is the clearest example he has. Kevin from The Office, the most famous accountant of the early 2000s, beloved and nostalgic and exactly the kind of specific cultural reference that either lands perfectly or falls flat.

The campaign works because someone asked: who would an accountant actually love? Then found the answer and executed on it. That’s a creative decision. AI can generate a thousand variants of that ad once the idea exists. But the idea comes from someone who understands people.

The split Austin sees forming: operators and creatives, roughly analogous to the account team and creative team structure in the Mad Men era. Operators handle technical execution. Creatives provide taste. In an environment where AI has compressed the operator skill requirements, the creative side becomes the constraint. The signal in hiring shifts accordingly.

For hiring, Austin’s tests are practical. Ask for a portfolio of campaigns they’ve run. Ask them to describe the most creative idea they’ve had recently. Ask them to look at a piece of work and tell you how they’d make it better. Run a live prompt session on a design: how would they frame it, what would they change, how would they critique it? The goal is to find out whether someone has opinions, and whether those opinions are any good. The standard for taste in creative hiring has always been “when you see it, you know it.” He doesn’t think that changes in an AI-assisted world.

Key takeaway: Run a collaborative session in your next creative hire before the final round. Bring a campaign you think is mediocre and one you think is excellent. Ask them to tell you which is which and why. The quality of the reasoning, not just the answer, tells you whether the taste is real or performed.

Back to the top ⬆️

How Early-Career Professionals Build Judgment When Entry-Level Work Gets Automated

Two animated children playing in a sandy area; one child is building a sandcastle, while the other sits with a smile, observing the scene. The sky is blue with white clouds.

Austin used to be skeptical of formal education. He came out differently. Looking back at how he built his actual breadth of knowledge (the range that lets him move across martech, data modeling, attribution, product, and operations without starting from scratch) the answer was uncomfortable: he had spent a lot of time learning in structured environments.

The MBA he completed at Wharton while co-founding Clarify shows up not as a credential in his telling but as forced exposure to adjacent domains. Finance, operations research, organizational behavior; topics a growth marketer would normally avoid. He says he was surprised at the end of it by how frequently he draws on knowledge from areas he barely went deep in. Go through a course on Bayesian modeling and it changes how you think about marketing attribution. Breadth acquired through structured exposure turns out to be harder to replicate on your own than depth in a single domain.

“There are lots of ways to meet people, but often not a lot of ways to meet the right people.”

The more pressing issue for early-career professionals, Austin argues, is access to the right learning environments. Courses and communities have proliferated. The filtering hasn’t kept up. What made traditional programs valuable was the audience as much as the curriculum. The right people in the room, introduced at the moment when you’re still forming your professional identity, is a specific kind of advantage that’s hard to manufacture.

Phil’s concern about the entry-level jobs problem is probably something you’ve thought about too… The tasks that used to build foundational judgment (running a campaign manually, writing SQL queries, wiring up an email sequence) are increasingly automated before young professionals have a chance to do them. What fills that gap is unclear. Austin’s partial answer: find the most filtered community you can access, stay curious across adjacent domains, and pursue things that cost you something. The pattern of what marks you is the closest thing to lasting professional development that exists.

Key takeaway: Evaluate your professional communities by their filtering mechanism, not their size. A small group of people who are 3 to 5 years ahead of you in adjacent disciplines is worth more than a large community of peers at the same level. Find the former and protect the time you spend in it.

Back to the top ⬆️

How Austin Hay Runs His Career as a Flywheel

A character in a suit running down an empty road towards a vibrant sunset, with colorful clouds and fields on either side.

Clarify, the CRM startup Austin co-founded, produced a year he describes without regret. He raised a Series A, finished an MBA, launched the product, ran 2 ultramarathons, and lost his dog. All in the same calendar year. He doesn’t recount this as a hardship narrative. He offers it as the context that clarified what actually deserves his energy.

When he emerged from founding, he found something specific: he gets energy from building, writing, and learning. From the process of sitting with a problem and working through it. The credentials, the raises, the press hits are downstream effects that don’t refill the tank. Founding was hard partly because it consumed the source material.

“How do I make the most amount of money working the least amount of time? People usually run this optimization in their heads around the money side, but they don’t consider the time element.”

Austin names 4 things that shape how he allocates energy now.

  1. Know yourself. Understand what work actually gives you energy.
  2. Optimize for money-to-time ratio rather than income alone. The metric is financial output per hour invested, not total compensation.
  3. Community. After hard family years and 2 years of weekend travel for the MBA, the quality and proximity of the people around him became a deliberate priority.
  4. Physical difficulty. He told his running partner 2 or 3 years ago that he would never run a marathon. Since then he’s completed an Olympic triathlon, a half marathon, a full Olympic race, a half Ironman, an ultra, and is now training for a full Ironman. The goal is the proof of concept. Doing hard things proves you’re capable of more.

The flywheel that organizes all of this: learning generates the thinking that populates his writing and consulting work. The writing brings clients and community. The community generates more learning. Financial outcomes follow the compounding, not the other way around. His North Star is protecting time for learning, writing, and building every morning before noon. Lenny gave him the advice to never take a call before 1pm. At first it sounded like a luxury. He realized it’s a priority. Priorities get set. They don’t get found.

Key takeaway: Audit your last week for the 2 or 3 hours where you were most absorbed in your work. Identify the conditions (time of day, type of problem, depth of focus) and redesign your schedule to protect them. The people who do their best work before noon and take calls in the afternoon made a decision. Make it.

Back to the top ⬆️

Episode Recap

Illustration of a smiling man with glasses and blonde hair, wearing an orange shirt and blue jacket, alongside the text 'Humans of Martech' and the name 'Austin Hay' with his title 'Fractional CMO / COO & Allied AI Marketer' in a colorful, vibrant background.

Austin Hay came into this conversation as a self-described chatbot skeptic and left it as one of the clearest voices on what separates practitioners who are genuinely building with AI from those who are still copy-pasting into a chat window. The 2 chasms framing is worth sitting with: chasm 1 is adopting AI at all, chasm 2 is moving from chat to code. Most marketing professionals have cleared the first. Almost none have cleared the second. And the compounding effect of that gap is already showing up in who gets hired, what they get paid, and how much leverage they have over their own time.

The tactical thread running through everything Austin described is repetition as signal. If you’re doing something more than once a week, it’s a candidate for automation. If it produces inconsistent output when you prompt for it, it belongs in a script. If it annoys you enough to stay up until 3am, that’s the energy you need to actually finish the thing. The Chrome extension story sounds like a punchline and it’s actually the methodology.

The bigger argument underneath all of it is about taste. As technical execution gets cheaper and more accessible, the constraint shifts to creative judgment, the ability to ask the right question, pick the right cultural reference, evaluate whether the output is any good. Austin’s Mad Men framing is the clearest way to see where this lands: operators and creatives, and a market that is starting to price creative taste much higher than it did when technical complexity was the bottleneck.

For early-career professionals, the honest answer is that the tasks that used to build foundational judgment are disappearing faster than replacements are appearing. Austin’s answer: find filtered communities, pursue adjacent domains, do things that cost you something. It’s the closest thing to a durable one that exists right now.

Listen to the full episode ⬇️ or Back to the top ⬆️


Follow Austin👇

✌️


Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
