203: Jordan Resnick: How to distinguish fake traffic from real machine customers


What’s up everyone, today we have the pleasure of sitting down with Jordan Resnick, Senior Director, Marketing Operations at CHEQ.

Summary: Distinguishing fake traffic from real machine customers starts where metrics break down. Jordan shows how AI-driven bots now scroll, click, submit forms, and pass validation while quietly filling dashboards with activity that never turns into revenue. The tell is behavioral texture. Sessions move too fast. Paths skip learning. Engagement appears without intent. Real machine customers behave with rhythm and purpose, returning, evaluating, integrating. Teams that recognize the difference lock down the conversion point, block synthetic demand before it reaches core systems, keep sales calendars clean, and only report once traffic has earned trust.


Recommended Martech Tools and Agencies 🛠️

We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.

🎨 Knak: Go from idea to on-brand email and landing pages in minutes, using AI where it actually matters.

📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.

🦣 Mammoth Growth: Customer data agency that turns fragmented data into a unified foundation, unlocking sharper marketing insights and action.

🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster and easier, without back-and-forth.

About Jordan

Jordan Resnick is Senior Director of Marketing Operations at CHEQ

Jordan Resnick is Senior Director of Marketing Operations at CHEQ, where he leads the systems, data, and workflows that support go-to-market security across a global customer base. His work sits at the intersection of marketing operations, revenue operations, attribution, automation, and analytics, with a clear focus on building infrastructure that holds up under scale and scrutiny.

Before CHEQ, Jordan ran marketing operations for two major business units at Atlassian, where he supported a vast array of stakeholders, enabling complex GTM motions targeting global markets. Earlier roles at Perkuto and MERGE combined hands-on execution with customer-facing leadership, integration design, and process ownership. His career also includes more than a decade as an independent operator, delivering marketing operations, automation, content, and technical solutions across a wide range of organizations. Jordan brings a deeply practical, execution-driven perspective shaped by years of building, fixing, and maintaining real systems in production environments.

Demystifying Go-to-Market Security

A stylized illustration of a fortified medieval castle at night, surrounded by a dark sky and swirling clouds, with silhouettes of people lining up outside the gates.

Go-to-market security shows up when growth metrics look strong and revenue outcomes feel weak. Marketing operations teams live in that gap every day. Jordan describes GTM security as a business-facing discipline that protects the integrity of acquisition, funnel data, and downstream decisions that depend on clean signals. The work sits inside marketing operations because that is where traffic quality, lead flow, and revenue attribution converge.

When asked about how GTM security differs from traditional fraud prevention, Jordan frames the difference through decision-making pressure. Security teams historically focus on defending infrastructure and minimizing exposure. Marketing ops teams focus on maintaining momentum while spending real budget. GTM security evaluates risk in context, with an eye toward revenue impact, forecasting accuracy, and operational trust across teams that rely on shared data.

Jordan grounds the concept in specific failure points that operators recognize immediately. GTM security examines where bad inputs quietly enter systems and multiply.

  • Paid traffic that inflates sessions without creating buyers.
  • Analytics skewed by automated interactions that look legitimate.
  • Form submissions that pass validation and still never convert.
  • Sales pipelines filled with activity that drains time and morale.

Each issue compounds because systems assume the data is real. Teams keep optimizing against numbers that feel precise and still point in the wrong direction.

“You are putting money into driving people to your website, and the first question should be how many of those people are real and able to buy.”

Invalid traffic behaves like a contaminant. It flows from acquisition into attribution models, forecasting tools, CRMs, and revenue dashboards. Marketing celebrates growth, sales chases shadows, and finance questions confidence in the entire funnel. The problem rarely announces itself as a security incident. It surfaces as confusion, missed targets, and internal friction.

GTM security matters because it gives marketing ops teams a framework to protect the inputs that shape every downstream decision. The work runs alongside traditional security while staying anchored in go-to-market outcomes. That way you can spend with confidence, trust your reporting, and hand sales teams signals grounded in real buying behavior.

Key takeaway: Treat go-to-market security as part of your core marketing operations workflow. Validate traffic quality, filter lead integrity, and block funnel contamination before data enters analytics and sales systems. That way you can protect budget efficiency, restore confidence in reporting, and align growth decisions with real customer behavior.


The Fake Traffic Surge

An artistic depiction of a massive wave looming over a traffic jam, with cars in vibrant colors stuck on the road as the sky transitions to a dramatic gradient of pink and blue.

AI-powered automation now sits at the center of the fake traffic surge, and the data from CHEQ makes that pattern hard to dismiss. Invalid traffic's jump from 11.3 percent to 17.9 percent of activity happened because automation became accessible to almost anyone with intent. Writing scripts once required time, skill, and trial and error. AI removes that friction and replaces it with speed and scale, which changes who can participate and how quickly abuse spreads.

Jordan ties that accessibility directly to incentives that marketing teams quietly tolerate. Fraud still generates money. Inflated traffic still props up dashboards. Higher visit counts still circulate in board decks without hard questions attached. AI accelerates activity that already existed and widens the group capable of producing it. That combination turns fake traffic into background noise instead of a visible threat, especially when volume metrics continue to earn praise.

“You don’t need to be a hardcore coder to write a script anymore. You can get AI to do it for you.”

Automation also introduces a layer of ambiguity that most teams are not prepared to handle. Bots now perform legitimate tasks that look suspicious inside analytics tools. Some scan pricing pages. Some analyze product specs. Some gather research for downstream purchasing decisions. Jordan points out that people already configure agents to place orders, and that behavior blends seamlessly into traffic logs. Marketing systems treat those visits the same way they treat fraud, which creates confusion across attribution and forecasting.

That confusion pushes teams toward blunt fixes that create new problems. Blanket blocking removes useful signals. Loose filtering leaves waste untouched. Jordan frames the real work as classification rather than suppression. Teams now need to separate intent categories instead of chasing a single definition of fake traffic. That work forces uncomfortable conversations about which metrics deserve trust and which exist only because nobody benefits from challenging them.

Fake traffic keeps growing because systems reward volume and rarely penalize distortion. AI makes production easier, incentives keep demand high, and measurement practices lag behind reality. Marketing ops teams that continue to treat traffic as a vanity metric will keep absorbing cost without realizing where it comes from.

Key takeaway: Build a traffic quality discipline that focuses on intent, not labels. Review inbound traffic sources weekly, group automation by purpose, and tag traffic that can never influence revenue as operational noise. That way you can protect attribution, spend budget on real buying signals, and stop reinforcing metrics that reward volume over truth.

How the Dead Internet Theory Connects to Bot Traffic Growth

An artistic representation of a dystopian landscape filled with zombie-like figures interacting with computers, set against a vivid sunset sky.

The Dead Internet Theory describes an internet where a large share of activity comes from bots rather than people, starting around 2016 and accelerating quietly ever since. The idea surfaced in forum posts in 2021 and framed automation as a force that reshapes perception, conversation, and participation at scale. The theory goes further by suggesting coordination by governments or corporations, but its staying power comes from something simpler. Daily experience online increasingly matches the symptoms the theory describes.

When asked about the theory, Jordan approaches it with curiosity rooted in pattern recognition. He does not claim evidence of centralized control, but he treats skepticism as a reasonable response to history. Institutions have repeatedly presented polished narratives while operating very differently behind the scenes. That context matters when traffic reports show bots making up a substantial portion of internet activity. The theory’s core claim aligns with observable behavior, even when its most dramatic explanations remain unverified.

The theory also explains why the internet feels crowded and hollow at the same time. Feeds fill quickly. Comment sections refresh constantly. Engagement metrics climb without producing meaningful outcomes. Jordan connects that sensation to what marketing teams see inside analytics tools. Automated traffic inflates visibility and interaction while disconnecting those signals from revenue, intent, and trust. The Dead Internet Theory gives language to that disconnect and helps marketers articulate why performance often feels inflated and fragile at once.

Jordan emphasizes intent as the missing link. Bots exist because people design and deploy them with objectives. Those objectives include fraud, influence, competitive disruption, and vanity metrics. Viewing bot traffic as random background noise hides the incentives driving it. The theory resonates because it assumes agency behind automation and treats volume as a consequence of strategy rather than accident.

That same logic carries into content. Jordan draws from his content background to describe how AI-generated writing blends easily with low-effort human copy. The result is a web full of material that looks finished while offering little substance. He uses AI tools for drafting and sees their utility, but he insists on human review as a requirement for publication.

“I would absolutely use it for a first draft. What I have a problem with is taking that first draft and just posting it without applying human thought.”

The Dead Internet Theory ultimately functions as a practical lens for marketers. It reframes traffic, engagement, and content as signals that require scrutiny rather than celebration. That framing changes how teams work day to day.

  • Teams verify traffic sources before trusting growth.
  • Analysts question engagement patterns before updating forecasts.
  • Content owners review AI-assisted output before publishing.

These habits restore human judgment in systems dominated by automation and protect decision-making from synthetic noise.

Key takeaway: Use the Dead Internet Theory as an operating assumption, not a belief system. Expect automation to influence traffic, engagement, and content at scale, then design workflows around verification and review. That way you can identify real human behavior, protect reporting credibility, and make decisions grounded in signals that reflect genuine demand and intent.

How to Detect Bot Traffic Through Behavioral Patterns

An illustration of a focused individual analyzing data on screens with colorful graphs, using a magnifying glass to scrutinize the information.

Hyper-realistic bots now generate engagement that looks calm, patient, and deliberate. They watch videos, scroll pages, hover over buttons, and move a cursor with just enough variation to blend in. That behavior flows directly into analytics tools, where conversion rates rise and funnel performance appears healthy. Marketing teams feel confident reviewing those numbers until pipeline outcomes fail to match the story those dashboards tell.

Jordan frames the problem as an arms race driven by behavior, not technology. Detection systems evolve, and attackers quickly adapt their tactics to stay within acceptable thresholds. Bots slow down. Bots pause. Bots simulate curiosity. That shift forces marketing ops teams to stop relying on surface metrics and start evaluating whether sessions follow a believable human sequence. Human visitors pursue an outcome. Bots pursue coverage.

Jordan pays closest attention to sessions that feel incoherent when replayed mentally. He looks for patterns that break basic expectations of time, navigation, and intent. Several signals tend to surface together when traffic turns synthetic:

  • Page velocity that exceeds human reading speed while still touching most major pages.
  • Navigation paths that jump into deep URLs without passing through discovery points.
  • Engagement that spans unrelated sections of the site with no narrative thread.

“I tend to look for what’s not explainable in analytics,” Jordan says. “Are they moving through the site faster than a human buyer would? Are they clicking on everything in patterns that don’t make sense?”

Time on site becomes more useful when treated as a behavioral constraint rather than a performance badge. A visitor who spends minutes on a site while opening dozens of disconnected pages creates friction once you imagine a real person behind the screen. The same tension appears when sessions reach pages that require prior knowledge, such as hidden URLs or resources that sales usually shares directly. Those paths rarely occur by chance in genuine buying journeys.
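The behavioral tests Jordan describes can be sketched as a small session screen. Everything here is illustrative: the thresholds, the `DISCOVERY_PAGES` set, and the section heuristic are assumptions for the sketch, not CHEQ's actual detection logic.

```python
# Illustrative thresholds -- assumptions, not CHEQ's detection logic.
MIN_SECONDS_PER_PAGE = 5      # faster than a human can plausibly read
MAX_SECTIONS_PER_SESSION = 4  # breadth with no narrative thread
DISCOVERY_PAGES = {"/", "/product", "/pricing"}  # hypothetical site map

def flag_session(pageviews):
    """Return behavioral red flags for one session.

    pageviews: list of (unix_timestamp, url_path) tuples in visit order.
    """
    flags = []
    paths = [p for _, p in pageviews]
    # 1. Page velocity that exceeds human reading speed
    if len(pageviews) >= 2:
        per_page = (pageviews[-1][0] - pageviews[0][0]) / (len(pageviews) - 1)
        if per_page < MIN_SECONDS_PER_PAGE:
            flags.append("page velocity exceeds human reading speed")
    # 2. A deep URL reached without passing through discovery points
    if paths and paths[0] not in DISCOVERY_PAGES:
        flags.append("entered a deep URL without passing discovery")
    # 3. Engagement spanning unrelated site sections with no thread
    sections = {p.split("/")[1] for p in paths if "/" in p and p.split("/")[1]}
    if len(sections) > MAX_SECTIONS_PER_SESSION:
        flags.append("engagement spans unrelated sections")
    return flags
```

A session that opens three pages in two seconds, starting on a deep docs URL, raises the first two flags at once; a leisurely path that begins on the homepage raises none.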

Jordan also stresses the importance of classification once suspicious behavior appears. Automated traffic includes legitimate crawlers, AI agents, monitoring tools, and malicious actors, and each group behaves differently. CHEQ helps teams separate those categories at scale, but the operational discipline matters just as much. When anomalies appear, teams need to review session paths, time distribution, and source data until the behavior tells a coherent story. That work protects attribution models and prevents inflated engagement from guiding budget decisions.

Key takeaway: Schedule recurring reviews of raw sessions alongside aggregated reports. Pull a small sample of high-engagement visits and trace their paths minute by minute. Watch for speed that exceeds human behavior, navigation that bypasses discovery, and engagement that covers everything without intent. That way you can flag synthetic traffic early, protect conversion data, and give GTM teams numbers that reflect real buyer behavior rather than simulated activity.

How Go-to-Market Teams Reduce Fake Traffic and Lead Pollution

Two animated characters, one in a knight-like armor and the other with a boxy head, sit on a pink couch playing video games in a cozy room filled with posters and a warm lamp.

Fake traffic creates operational drag long before revenue feels the impact. Metrics inflate, systems fill with junk, and sales teams lose confidence in what marketing hands them. Jordan frames every defensive tactic around one question: how much scale, cost, and downstream friction will your organization tolerate before intervention becomes mandatory? From there, he walks through a full spectrum of controls, starting with what most teams already have and moving toward more advanced coordination.

The tactics tend to surface in long vendor checklists, so Jordan brings order to them by separating signal from noise. Each layer serves a specific purpose, and each one matches a different stage of organizational maturity.

  1. Simple non-AI controls like data hygiene and validation checkpoints.
  2. Multi-layered security using CAPTCHA and MFA.
  3. AI-driven fraud prevention using behavioral signals.
  4. Fingerprinting techniques.
  5. Dynamic JavaScript-based tracking pixels like SafePixel.
  6. Real-time scoring and API-level traffic filtering.
  7. Post-conversion lead verification tooling.
  8. Federated learning across organizations.

1 – Simple non-AI controls like data hygiene and validation checkpoints.

These controls live inside tools most teams already use. Native bot filtering in platforms like HubSpot or Marketo blocks obvious junk once it is enabled. Domain rules prevent free email addresses when enterprise buyers matter. Pattern spotting catches repeat formats, odd punctuation, and familiar fake signatures that appear again and again. Jordan also points to basic site security, including CMS-level spam prevention, as an upstream filter that reduces noise everywhere else.

“Turn on the bot filtering. It’s amazing how many of us don’t even do that.”
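The domain rules and pattern spotting above amount to a validation checkpoint you can sketch in a few lines. The free-mail list and fake-name pattern here are hypothetical examples; tune them to your own ICP and to the fake signatures you actually see.

```python
import re

# Hypothetical hygiene rules -- tune both lists to your own traffic.
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}
FAKE_NAME_PATTERN = re.compile(r"^(test|asdf|qwerty)", re.IGNORECASE)

def passes_hygiene(email, first_name):
    """Minimal validation checkpoint of the kind described above.

    Rejects free-mail domains (when enterprise buyers matter) and
    familiar fake-signature patterns. A sketch, not real tooling.
    """
    email = email.strip().lower()
    if "@" not in email:
        return False
    domain = email.rsplit("@", 1)[1]
    if domain in FREE_EMAIL_DOMAINS:
        return False
    if FAKE_NAME_PATTERN.match(first_name.strip()):
        return False
    return True
```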

2 – Multi-layered security using CAPTCHA and MFA.

These controls add friction at the form level. CAPTCHAs stop low-effort bots and slow down real users at the same time. Jordan treats this as a go-to-market decision, not a pure security call. Honeypot fields provide a quieter option. Hidden inputs attract bots while staying invisible to humans, which flags bad submissions without harming conversion.
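The honeypot check itself is trivial on the server side. This sketch assumes the form includes a hidden input named `company_website` (the field name is illustrative); humans never see it, while bots that auto-fill every field reveal themselves.

```python
def is_honeypot_hit(form_data):
    """Flag submissions that filled a hidden honeypot field.

    Assumes a CSS-hidden input named 'company_website' exists on the
    form. The field name is a hypothetical example for this sketch.
    """
    return bool(form_data.get("company_website", "").strip())
```

Flagged submissions can be dropped silently, which is the point: conversion for real visitors is untouched.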

3 – AI-driven fraud prevention using behavioral signals.

Behavioral analysis watches how sessions move rather than what they submit. Mouse movement, scroll behavior, and timing patterns expose automation that looks human at a glance. Jordan notes that most teams cannot run this logic in real time without dedicated tooling. Some teams analyze sessions after the fact and block sources once patterns emerge.

4 – Fingerprinting techniques.

Fingerprinting inspects device characteristics, browser properties, and network signatures. Jordan describes this work as deeply technical and usually shared with data science or security teams. Impossible combinations, such as mobile devices reporting nonexistent browsers, surface once teams dig into raw traffic logs. Blocking happens after analysis, which makes future visits cleaner.
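The "impossible combinations" idea can be illustrated with a couple of contradiction rules. Real fingerprinting inspects far more signals (canvas, fonts, TLS characteristics) than this sketch; the rules below are assumptions for illustration only.

```python
def impossible_combination(fp):
    """True when reported device signals contradict each other.

    fp: dict of raw fingerprint fields. Rules are illustrative, not
    an exhaustive or production-grade contradiction set.
    """
    ua = fp.get("user_agent", "")
    # A 'mobile' device claiming a desktop-only Windows browser build
    if fp.get("device_type") == "mobile" and "Windows NT" in ua:
        return True
    # A physical screen smaller than the viewport it claims to render
    if fp.get("screen_width", 0) and fp.get("viewport_width", 0):
        if fp["screen_width"] < fp["viewport_width"]:
            return True
    return False
```

As Jordan notes, this analysis usually runs over raw logs after the fact, with blocking applied to make future visits cleaner.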

5 – Dynamic JavaScript-based tracking pixels like SafePixel.

These scripts collect engagement signals that bots struggle to mimic consistently. Human interaction generates irregular, messy behavior over time. Bots struggle to replicate that pattern across sessions. That way you can score traffic quality using interaction depth rather than static form data.
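One way to quantify "irregular, messy behavior" is the variability of gaps between interaction events: scripted input is often suspiciously uniform, human input rarely is. This metric and its interpretation are assumptions for the sketch, not how SafePixel actually scores traffic.

```python
from statistics import mean, pstdev

def interaction_irregularity(event_gaps_ms):
    """Coefficient of variation of gaps between interaction events.

    event_gaps_ms: milliseconds between successive scrolls, moves,
    or clicks. Higher values mean messier, more human-like input.
    Illustrative heuristic only -- not SafePixel's scoring model.
    """
    if len(event_gaps_ms) < 2:
        return 0.0
    mu = mean(event_gaps_ms)
    if mu == 0:
        return 0.0
    return pstdev(event_gaps_ms) / mu
```

A bot firing events on a fixed timer scores 0.0; a human fidgeting through a page scores well above it.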

6 – Real-time scoring and API-level traffic filtering.

Real-time scoring evaluates sessions as they happen. API-level filtering excludes suspicious traffic before it enters analytics, databases, or automation. Jordan frames this as a cost control measure. Every blocked record saves storage, enrichment credits, and sales time.

7 – Post-conversion lead verification tooling.

Verification happens after the form submits. Email validity, phone realism, and domain credibility surface quickly. Jordan shares that smaller teams often perform this step manually with strong accuracy. Larger organizations require automation to keep pace and preserve trust with sales.
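The manual version of this step reduces to a few plausibility checks. Real verification tools also do MX lookups and carrier checks; the patterns below only cover the surface-level realism described above, and the thresholds are assumptions.

```python
import re

PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,20}$")
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[a-z]{2,}$", re.IGNORECASE)

def verify_lead(email, phone):
    """Post-conversion plausibility checks, sketched.

    Returns a list of issues; an empty list means nothing obviously
    fake surfaced. Not a substitute for verification tooling.
    """
    issues = []
    if not EMAIL_RE.match(email.strip()):
        issues.append("email format implausible")
    digits = re.sub(r"\D", "", phone)
    # Reject malformed numbers and repeated-digit filler like 1111111111
    if not PHONE_RE.match(phone.strip()) or len(set(digits)) <= 2:
        issues.append("phone number implausible")
    return issues
```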

8 – Federated learning across organizations.

Federated learning trains shared models across companies without exchanging sensitive data. Jordan treats this as collective memory. Bot operators reuse tactics across industries, and shared learning shortens detection cycles for everyone involved.

Jordan returns to scale at the end of the discussion. Inflated traffic counts irritate analysts. Wasted spend, bloated systems, and angry sales teams create pressure that leadership feels immediately.

“When salespeople are getting mad and nobody trusts MQLs anymore, that’s a big problem.”

Key takeaway: Start with controls you already own, including bot filtering, validation rules, and pattern checks. Add friction selectively, favoring backend detection over front-end obstacles. Introduce behavioral analysis, fingerprinting, and real-time filtering as volume grows and manual judgment breaks down. Invest heavily once fake traffic wastes money, fills systems, or damages sales trust. Go-to-market security belongs inside the funnel, where bad traffic gets stopped before it becomes operational debt.

Preventing Fake Leads From Reaching Sales

A crowd of cartoonish robots standing closely together behind a glass wall.

Fake demo requests drain teams in predictable, frustrating ways. CRMs swell with activity that looks healthy on paper, sales calendars fill up, and confidence rises for a moment. The cleanup arrives fast. Reps chase names that feel synthetic, emails bounce, and real buyers wait longer than they should. Jordan treats this as an intake design problem, and he pushes ops teams to take ownership of what reaches sales in the first place.

JavaScript validation gives teams control at the exact point where intent shows itself. Behavior exposes more than any form field ever will. Time on page signals attention. Mouse movement reflects human hesitation and correction. Click order shows whether someone understands what they are filling out. When those signals break pattern, teams can intervene quietly. Jordan describes flows where suspicious visitors never reach a real booking experience, even though the interface looks convincing. Sales teams stay focused, and noise disappears without a confrontation.

Routing logic belongs in the same layer. Jordan encourages teams to make qualification decisions in real time, not after a lead lands in the CRM. Firmographic inputs and enrichment data can determine what happens next immediately. In practice, that often includes:

  • Company size rules that control access to rep calendars.
  • Domain or region checks that trigger an automated response instead of a meeting.
  • Alternate paths that guide smaller accounts toward lighter-touch options.

That way you can respect inbound interest while protecting sales capacity. Tools like RevenueHero bake this logic directly into the form and booking experience, combining qualification, routing, and calendar control in one flow. Teams avoid stitching together brittle workarounds and keep protection consistent by default.
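A minimal version of that real-time routing decision looks something like this. The thresholds, regions, and route names are hypothetical examples, not RevenueHero's API or configuration.

```python
FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com"}
ENTERPRISE_MIN_EMPLOYEES = 200   # illustrative firmographic threshold
SERVED_REGIONS = {"NA", "EMEA"}  # illustrative sales coverage

def route_lead(email, employees, region):
    """Decide in real time what the visitor sees next.

    Returns 'rep_calendar', 'self_serve', or 'automated_response'.
    All thresholds and route names are assumptions for this sketch.
    """
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS or region not in SERVED_REGIONS:
        return "automated_response"  # polite email instead of a meeting
    if employees >= ENTERPRISE_MIN_EMPLOYEES:
        return "rep_calendar"        # large account: show the calendar
    return "self_serve"              # smaller account: lighter-touch path
```

The key design choice is that the decision runs at submit time, before anything lands in the CRM, so rep calendars only ever see traffic that already cleared the bar.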

Confirmation steps still pull weight, even when teams resist them. Jordan points to simple techniques he used years ago, including double email entry and reply-to-confirm messages before meetings stayed on the calendar. These checks filter behavior in a way automation rarely defeats.

“Nobody is going to fill out a demo request and then respond to an email saying yes, that really was me, unless they are real.”

That confirmation step always reduces volume. It also sharpens focus and protects rep time. The real decision centers on tolerance for missed leads versus tolerance for wasted hours.

Lead validation forces clarity. Short forms inflate activity and move qualification work onto sales. Behavioral checks, routing rules, and confirmation steps move that work upstream into systems designed to handle it. Jordan’s point resonates because it asks teams to design for trust, time, and follow-through instead of surface-level momentum.

Key takeaway: Build lead protection into the moment of conversion. Use behavioral JavaScript signals, firmographic routing, and confirmation steps inside your booking flow, or adopt tools like our sponsor RevenueHero that ship this logic out of the box. That way you can keep fake requests out of sales calendars, preserve rep focus, and accept volume tradeoffs deliberately instead of paying for them later in lost time.

How to Calculate Revenue Impact of Fake Traffic

A stylized character wearing a gas mask and a hooded jacket stands in a rain of floating dollar bills.

Revenue conversations around fake traffic often stall because teams treat the problem as abstract risk rather than operational loss. Jordan frames the issue as a measurable drag on performance that already exists inside most dashboards. A material share of internet traffic is fake, and that same share flows directly into paid clicks, form fills, and MQL volume. Those interactions consume budget and sales capacity while producing zero revenue, which quietly reshapes funnel math in ways leaders rarely see.

Paid media provides the fastest path to credibility because the inputs are familiar to finance and growth teams. Jordan points to widely cited research showing that 20 to 40 percent of traffic can be invalid across channels. That percentage becomes powerful when applied directly to internal numbers. A campaign that delivers 100 clicks and 10 purchases carries a very different conversion profile once fake traffic is removed from the denominator. The recalculated rate reflects actual buyer behavior rather than inflated activity.

“If 20 to 40 percent of the internet is fake, then 20 to 40 percent of those clicks you’re paying for are never going to buy because they’re fake.”

Jordan grounds the math with a physical analogy that executives grasp immediately. A retail store filled with robots still looks busy, but the robots clog aisles, distract staff, and slow checkout. Digital funnels behave the same way. Fake traffic occupies space that should belong to real buyers, and it degrades performance signals that teams depend on for planning and forecasting.

The same logic extends beyond acquisition into the rest of the funnel, where the damage often compounds. Marketing teams can surface the revenue impact by walking leadership through a simple sequence:

  1. Identify the percentage of leads flagged by sales or operations as spam or invalid.
  2. Remove those records from historical conversion calculations.
  3. Recompute opportunity creation and close rates using only legitimate demand.

That way you can show how much revenue disappears when fake activity crowds out real prospects at every stage.
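The math above reduces to a few lines. The 30 percent invalid share used here is an assumed midpoint of the 20 to 40 percent range Jordan cites, and the 100-click campaign is the example from the episode.

```python
def adjusted_conversion(clicks, purchases, invalid_share):
    """Recompute conversion once invalid traffic leaves the denominator.

    invalid_share: estimated fraction of clicks that are fake
    (research cited in the episode puts this at 0.20-0.40).
    Returns (reported_rate, adjusted_rate).
    """
    reported = purchases / clicks
    real_clicks = clicks * (1 - invalid_share)
    adjusted = purchases / real_clicks
    return reported, adjusted

# The 100-click, 10-purchase example at an assumed 30% invalid share:
reported, adjusted = adjusted_conversion(100, 10, 0.30)
# reported = 0.10; adjusted ~= 0.143 -- real buyers convert far better
# than the blended number suggests.
```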

Jordan also highlights the risk of overcorrecting. AI agents increasingly conduct research on behalf of real buyers, especially in complex B2B purchases. Precision filtering preserves demand signals while removing noise, which keeps revenue teams focused on humans who are actually in market.

Key takeaway: Pull fake traffic out of your funnel math and recalculate everything that follows. Apply credible fraud benchmarks to paid media, strip invalid leads from conversion rates, and present leadership with revised numbers based only on legitimate demand. That exercise turns fraud from a vague concern into a concrete revenue conversation that teams can act on immediately.

How to Report Marketing Performance When Bot Traffic Skews Metrics

A large, robotic creature towers over a devastated landscape, surrounded by flames and dark clouds, with a dramatic and chaotic atmosphere.

Marketing performance collapses the moment inflated numbers escape into internal reporting. Bot traffic turns clean-looking dashboards into liabilities, especially when early celebration travels faster than verification. Jordan treats this scenario as an operational failure mode that marketing ops teams should expect and design around. Once teams accept that junk traffic exists and will find its way into campaigns, discipline becomes the difference between credibility and embarrassment.

Jordan pushes for skepticism as a daily operating posture. Performance reporting deserves the same scrutiny as a data pipeline change or a production deploy. That means slowing down before sharing results and pressure-testing what looks impressive. He calls out top-of-funnel metrics as especially fragile because they inflate easily and decay under inspection. Visits, form fills, and early conversion rates rarely hold their shape once revenue and pipeline enter the discussion. Teams that anchor updates in business outcomes feel less excitement early, but they earn trust that compounds.

“We should be looking at things like revenue and pipeline creation. Ultimately that’s what the business is looking at.”

That framing reshapes how teams decide what earns airtime. Jordan describes a simple filter he applies before sending reports upward. He asks whether the metric survives deeper analysis and whether it influences revenue conversations. Metrics that fail either test stay internal. Metrics that pass both move forward. That habit changes behavior quickly because it reduces the temptation to celebrate volume and redirects energy toward signals that matter.

When inflated results slip through and reality surfaces later, Jordan advocates for ownership without hedging. Reporting errors follow the same rules as operational mistakes. They require acknowledgment, correction, and documentation. He shares examples where dramatic wins shrank after data science flagged anomalies. In those moments, he chose to update leadership with revised numbers and context. That choice preserved trust even when the story lost momentum.

Jordan also acknowledges environments where leadership still wants optimism preserved. In those cases, he suggests clarity through comparison. Slides can present the original numbers alongside adjusted figures after investigation. Context explains the delta and protects future decisions. Teams that avoid this practice drift toward performance reporting as spectacle, where inflated metrics linger because they feel safer than accuracy. Jordan prefers workplaces that reward precision and accountability, even when numbers disappoint.

Key takeaway: Build reporting habits that delay celebration until traffic quality is verified, anchor updates to revenue and pipeline, and correct the record openly when bots inflate results. That way you can protect credibility, guide better decisions, and prevent vanity metrics from steering strategy.

Trust Erosion From Fake Traffic

A stylized landscape featuring a cracked earth surface with vibrant colors, including yellow trees and blue sky, suggesting a transition from nature to a more desolate terrain.

Fake traffic weakens confidence inside marketing teams because it injects doubt into every number that hits a slide. Reporting starts to feel fragile when dashboards show growth that no one fully believes. Budget conversations slow down. Stakeholders begin asking for secondary validation, then tertiary validation, because trust in the data quietly slips away. The emotional weight lands on the person presenting the report, who now feels responsible for defending numbers shaped by forces outside their control.

Jordan describes the response as a discipline grounded in rigor and candor. He treats every metric as something that must earn credibility through investigation. That means chasing anomalies until there are no loose ends and documenting each filter used to remove bots. He also encourages teams to invest in tools like CHEQ when the organization can support it, because automated defense reduces the mental load placed on analysts. Even with strong tooling, he frames reporting as a statement of what the team knows at a specific point in time, supported by evidence and clear reasoning.

“This is what we know today. These are the numbers. These are the controls we put in place to remove bot activity. I cannot promise perfection, but I can explain why I trust this view of the data.”

That framing matters because attribution has always carried blind spots, even in clean environments. Jordan shares a personal story from childhood that shaped how he thinks about measurement. A moving company sponsored a sports promotion that upgraded a fan’s seats mid-game. The memory stayed with him for decades. When he became a customer years later, analytics systems would credit organic search or direct traffic. The original influence would remain invisible to dashboards, despite being real and durable. That gap exists independently of fake traffic, and bot activity simply amplifies it.

Jordan treats analytics as a pattern recognition exercise supported by judgment. He looks for signals that repeat across campaigns and time periods, then layers in human reasoning. You can apply the same mindset in your own reporting by building habits that keep trust intact:

  • Document assumptions, filters, and exclusions in plain language.
  • Explain what the data supports and where uncertainty remains.
  • Track trends across multiple campaigns instead of reacting to single spikes.
  • Pair charts with narrative context so stakeholders understand why results moved.

Psychological confidence returns when teams stop asking numbers to carry certainty on their own. Reporting works best when metrics guide discussion and reasoning, not when they serve as verdicts. Jordan keeps alignment steady by treating data as a shared problem to solve, supported by transparency and consistent logic.

Key takeaway: Build trust in reporting by showing your work every time. Explain how numbers were filtered, why they are credible, and where uncertainty still exists. That way you can defend budgets, maintain morale, and make decisions grounded in repeated evidence rather than fragile precision.

How Marketing Ops Should Adapt Systems for Machine Customers

A stylized illustration of a humanoid robot interacting with a computer interface, holding cash, surrounded by digital elements and abstract shapes.

Machine customers already move through B2B buying systems with speed and precision that most human buyers never match. Autonomous agents research vendors, scan documentation, compare pricing, and sometimes place orders without human review. That activity leaves fingerprints across analytics, form fills, and CRM records. Teams that still assume every buyer looks and behaves like a person create friction where demand already exists.

When asked whether this behavior exists today, Jordan frames the challenge as an organizational awareness problem rather than a tooling gap. Marketing ops teams see these patterns first because they live in the data every day. They notice compressed research cycles, unusually consistent form submissions, and traffic that behaves with intent rather than randomness. Those signals matter because they point to real evaluation and purchasing activity, not noise.

Jordan describes marketing ops as the connective tissue between security, sales, IT, and data teams. That role carries influence when it comes with evidence. Teams need to show what machine-driven buying looks like in practice and why old security assumptions need revision. That conversation works best when grounded in specifics:

  • Sessions that touch pricing pages, documentation, and demo requests in tight sequences.
  • Accounts that progress from first touch to purchase without traditional sales engagement.
  • Traffic patterns that trigger fraud rules due to speed rather than malicious behavior.
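The signals above can be sketched as a rough classification heuristic. This is an illustrative sketch, not CHEQ’s actual detection logic: the page paths, thresholds, and the `classify_session` function are all assumptions for the example. The idea is that machine customers are fast *and* purposeful, junk bots are fast and indiscriminate, and humans move at reading speed.

```python
from dataclasses import dataclass

# Hypothetical high-intent pages; in practice these come from your own site map.
HIGH_INTENT_PAGES = {"/pricing", "/docs", "/demo-request"}

@dataclass
class PageView:
    path: str
    at: float  # seconds since session start

def classify_session(views: list[PageView]) -> str:
    """Separate likely machine buyers from junk bots and humans.

    Heuristic: machine customers move fast AND follow a purposeful path
    through evaluation pages; junk bots move fast but touch pages
    indiscriminately; humans move at reading speed. Thresholds are
    illustrative placeholders.
    """
    if len(views) < 2:
        return "human"
    duration = views[-1].at - views[0].at
    pages_per_minute = len(views) / max(duration / 60, 0.01)
    intent_hits = sum(v.path in HIGH_INTENT_PAGES for v in views)
    purposeful = intent_hits / len(views) >= 0.5

    if pages_per_minute > 20 and purposeful:
        return "machine_buyer"   # fast and focused: escalate, don't block
    if pages_per_minute > 20:
        return "junk_bot"        # fast and aimless: filter out
    return "human"
```

A rule like this would sit upstream of lead routing, so the “fast and focused” bucket reaches sales review instead of disappearing into a block list.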

“We are in the trenches. We see the numbers. We see the trends. We have to communicate that we are no longer dealing with a single type of buyer.”

Operational systems feel the pressure next. CRMs and automation workflows rely on human-centric signals like hesitation, back-and-forth communication, and incomplete data. Machine customers behave with consistency, speed, and accuracy. That behavior often collides with lead scoring models, routing logic, and security plugins that treat velocity as risk. Teams need workflows that recognize buying patterns instead of defaulting to blocks that quietly remove real demand.
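That shift, from treating velocity as risk to recognizing buying patterns, can be expressed as a small routing rule. A minimal sketch, assuming upstream enrichment has already flagged a lead for velocity and for structured intent; the flag names and the `route_lead` function are hypothetical, not a real system’s API.

```python
def route_lead(high_velocity: bool, structured_intent: bool) -> str:
    """Route fast traffic instead of blanket-blocking it.

    Old rule: high velocity => reject. Revised rule: velocity alone is
    not risk; velocity without intent is.
    """
    if high_velocity and structured_intent:
        return "review_queue"    # likely machine customer: human check, not a block
    if high_velocity:
        return "reject"          # fast and aimless: likely junk bot
    return "standard_scoring"    # normal human path through existing lead scoring
```

The point of the review queue is exactly what Jordan describes: demand that old rules would silently suppress becomes a documented escalation instead.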

Jordan remains realistic about the limits. No one has a perfect system for distinguishing helpful automation from abuse at scale. Progress comes from iteration, shared learning, and collaboration with teams who specialize in detection and security. Marketing ops teams already trade examples in Slack groups and community forums. That collective awareness helps teams evolve controls based on observed behavior rather than inherited fear.

Key takeaway: Review how your systems classify and block automated behavior today. Pull examples of blocked sessions and leads that show structured research and buying intent, then review them with marketing ops, security, and sales together. Adjust routing and scoring rules to account for machine-driven buying patterns, and document clear criteria for escalation instead of blanket suppression. That way you can protect your funnel while capturing demand that already exists and currently disappears without explanation.

Funnel Audits With Security Teams to Reduce Bot Traffic

A busy airport terminal scene depicting a large crowd of travelers waiting in line near security checkpoints. Colorful stylized figures are shown moving through the terminal, with digital signs displaying flight information in the background.

Fake traffic quietly corrodes go-to-market systems by inflating performance signals while draining downstream teams. Jordan frames the response as collaboration across marketing ops, security, IT, and data teams. The funnel already contains the evidence, so the work belongs inside regular operational reviews instead of isolated security controls.

Jordan takes a clear stance against forcing shared KPIs onto security and marketing, and his reasoning reflects how incentives actually work inside companies. Security teams operate with a mandate centered on prevention, containment, and risk reduction. Marketing ops teams carry pipeline pressure and revenue expectations because they sit alongside sales in the go-to-market motion. Different mandates shape different behaviors, and pretending those incentives match creates confusion instead of alignment.

Jordan points toward routine funnel audits as the practical answer. These reviews work when they run on a predictable cadence and follow a concrete structure. Marketing ops brings funnel data that includes visits, leads, MQLs, SQLs, and opportunities. Security and IT explain which controls are active and why certain patterns triggered them. Data teams validate the trends and call out anomalies. The conversation stays grounded because everyone looks at the same numbers at the same time.

Those discussions surface tension that usually stays hidden in dashboards. Blocking aggressively can suppress legitimate demand alongside junk traffic. Jordan speaks plainly about that reality.

“We have numbers we need to hit, and we are not going to hit them if you are blocking legitimate people just to err on the side of caution.”

That statement reframes the relationship. Security earns visibility into revenue impact. Marketing gains insight into real threat signals. Trust grows because decisions tie back to shared evidence instead of assumptions.

Teams that build this muscle tend to formalize it quickly. Many start with:

  • A recurring review scheduled monthly or quarterly with the same funnel snapshot every time.
  • A shared definition of junk traffic that sales, marketing ops, and security all recognize.
  • A running log of security changes and their measured impact on conversion rates.

That way you can tune controls deliberately, protect pipeline quality, and keep growth moving without relying on blunt blocking rules.
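The running log in particular is easy to start before any tooling exists. Below is a minimal sketch of one possible schema, assuming a shared CSV that the audit reviews each cycle; every field name here is an assumption for illustration, not a standard.

```python
import csv
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ControlChange:
    changed_on: date
    control: str             # e.g. "form CAPTCHA", "WAF rate limit"
    change: str              # what was tightened or loosened
    conv_rate_before: float  # visit-to-lead rate (%) in the prior period
    conv_rate_after: float   # same rate after the change settled

    @property
    def impact_pts(self) -> float:
        """Conversion-rate delta in percentage points."""
        return round(self.conv_rate_after - self.conv_rate_before, 2)

def append_log(path: str, entry: ControlChange) -> None:
    """Append one change to the shared CSV, writing a header on first use."""
    row = asdict(entry) | {"impact_pts": entry.impact_pts}
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(row)
```

Even this bare structure gives the audit what it needs: each control change arrives with a measured revenue-signal delta attached, so threshold debates stay grounded in evidence.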

Key takeaway: Run recurring funnel audits that include marketing ops, security, IT, and data teams, and anchor every discussion in real conversion data. Bring the full funnel, document which controls are active, measure their impact on revenue signals, and adjust thresholds together. This cadence turns bot mitigation into a shared operating rhythm instead of a reactive cleanup exercise.

Detachment as a Career Survival Skill

A diver submerged underwater, surrounded by vibrant waves and marine life, with boats visible above the water's surface in an artistic, colorful style.

Work-life balance starts with how meaning gets assigned during the workday. Jordan describes balance as a mindset that stays active while work is happening, not something deferred until evenings or weekends. Work already consumes fixed time and attention, especially for employees and founders who feel constant obligation. The real leverage sits in how much emotional weight gets attached to outcomes, recognition, and approval. When meaning remains present during work, pressure becomes easier to carry and energy lasts longer.

Jordan treats effort as non-negotiable and attachment as optional. He commits fully to the quality of his work while accepting how quickly most outputs decay. Websites age fast. Campaigns expire. Systems get rebuilt. That awareness shapes his expectations and keeps disappointment from lingering. He focuses on steady improvement and forward motion, knowing that outcomes sit beyond full control. That posture keeps momentum intact even when results fall short of hopes.

“Give it your all. Perform really well at work. At the same time, work is work. It’s going to disappear.”

Time outside of work becomes another place where values get tested. Jordan pays close attention to whether his off-hours restore his nervous system or simply numb it. Activities chosen for convenience often feel good briefly, then leave frustration behind. He prefers continuity across the week rather than emotional spikes tied to Fridays and Sundays. Energy stays more stable when workdays and personal time feel connected instead of opposed.

The grounding practices Jordan returns to are physical, slow, and resistant to optimization. They include:

  • Gardening and watching progress unfold on a natural timeline.
  • Building forest trails and working directly with the land.
  • Caring for animals and responding to needs that demand presence.
  • Reading history and mythology to widen perspective beyond short cycles.

These practices shape how he shows up at work because they train focus and acceptance. Jordan draws inspiration from cultures that prize craft, repetition, and impermanence. Attention goes fully into the task at hand, then releases cleanly when it ends. That rhythm keeps ambition intact while preventing identity from collapsing into output.

Key takeaway: Build emotional detachment as part of your daily operating system. Bring full effort into work while keeping identity separate from outcomes. Choose off-work activities that calm your body and sharpen attention, then carry that steadiness into meetings, decisions, and deadlines. When effort stays high and attachment stays measured, work consumes less energy and delivers more of it back.

Episode Recap

A colorful illustration of a city highway bustling with cars under a blue sky, featuring the text 'Humans of Martech' and a portrait of a smiling man labeled 'Jordan Resnick, Sr Director, Marketing Operations at CHEQ'.

Go-to-market security shows up when your numbers stop feeling solid. Traffic climbs, leads roll in, dashboards look clean, but revenue drags its feet. Sales feels annoyed. Forecasts feel fragile. You can sense the disconnect even if you cannot point to a single broken system. Jordan starts from that tension because it is familiar to anyone running marketing ops today.

You are paying for activity that behaves well and produces nothing. Bots and AI-driven automation scroll, click, and submit forms like polite visitors. They pass validation. They inflate metrics. Then they vanish before money enters the picture. That noise spreads fast, from paid media to analytics to CRM to revenue reporting, until teams argue about numbers instead of deciding what to do next. AI makes this easier and cheaper to produce, while incentives still reward volume. That combination keeps the problem invisible longer than it should be.

Jordan connects this to why the internet feels crowded and hollow at the same time. Automation fills feeds, funnels, and dashboards with motion that lacks intent. The Dead Internet Theory resonates here because it gives language to a feeling marketers already have. Engagement rises while outcomes thin out. Once you start looking at traffic through that lens, surface-level growth loses its shine and behavior becomes the signal that matters.

Jordan explains how real buyers move with purpose and bots move with coverage. Sessions that race through pages faster than anyone could read, jump into deep URLs without context, or touch everything with no coherent story all break basic human patterns. Reviewing raw sessions alongside reports changes how you see performance. Defense follows the same logic. Start with controls you already have, then layer in behavioral analysis, filtering, and routing that stop bad traffic before it pollutes systems or burns sales time. Protect conversion moments, not just dashboards. Tools like RevenueHero fit naturally here because they keep junk leads out of calendars without creating friction for real buyers.

Treat go-to-market security as part of daily marketing operations. Verify traffic before celebrating it. Anchor reporting in revenue and pipeline. Work with security teams through shared funnel reviews instead of reacting after trust erodes. That way you can spend with confidence, give sales cleaner signals, and build growth that feels real instead of convincing.


Follow Jordan👇

✌️


Intro music by Wowa via Unminus
Cover art created with Midjourney
