Apple • Spotify • Pocket Casts • YouTube • Overcast • RSS

What’s up everyone, today we have the honor of sitting down with Aleyda Solís, SEO and AI search consultant.
Summary: AI search is rewriting how people find information, and Aleyda explains the shift with clear, practical detail. She has seen AI crawlers blocked without anyone noticing, JavaScript hiding full sections of sites, and brands interpreting results that were never based on complete data. She shows how users now move freely between Google, TikTok, Instagram, and LLMs, which pushes teams to treat discovery as a multi-platform system. She encourages you to verify your AI visibility, publish content rooted in real customer language, and use topic clusters to anchor strategy when prompts scatter. Her closing point is simple. Community chatter now shapes authority, and AI models pay close attention to it.
In this Episode…
- Crawlability Requirements for AI Search Engines
- LLMs As A New Search Channel In A Multi-Platform Discovery System
- AI Search Visibility Analysis for SEO Teams
- Creating Brand-Led Informational Content for AI Search
- Choosing SEO Topics That Drive Brand-Aligned Demand
- How Topic Level Analysis Shapes AI Search Strategy
- LLM Search Console Reporting Expectations
- Why LLM Search Rewards Brands With Real Community Signals
Recommended Martech Tools 🛠️
We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.
🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster and easier, without back-and-forth.
🦣 Mammoth Growth: Customer data agency that turns fragmented data into a unified foundation, unlocking sharper marketing insights and action.
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🎨 Knak: Go from idea to on-brand email and landing pages in minutes, using AI where it actually matters.
About Aleyda

Aleyda Solís is an international SEO and AI search optimization consultant, speaker, and author who leads Orainti, the boutique consultancy known for solving complex, multi-market SEO challenges. She’s worked with brands across ecommerce, SaaS, and global marketplaces, helping teams rebuild search foundations and scale sustainable organic growth.
She also runs three of the industry’s most trusted newsletters: SEOFOMO, MarketingFOMO, and AI Marketers, where she filters the noise into the updates that genuinely matter. Her free roadmaps, LearningSEO.io and LearningAIsearch.com, give marketers a clear, reliable path to building real skills in both SEO and AI search.
Crawlability Requirements for AI Search Engines

Crawlability shapes everything that follows in AI search. Aleyda talks about it with the tone of someone who has seen far too many sites fail the basics. AI crawlers behave differently from traditional search engines, and they hit roadblocks that most teams never think about. Hosting rules, CDN settings, and robots files often permit Googlebot but quietly block newer user agents. You can hear the frustration in her voice when she describes audit after audit where AI crawlers never reach critical sections of a site.
“You need to allow AI crawlers to access your content. The rules you set might need to be different depending on your context.”
AI crawlers also refuse to process JavaScript. They ingest raw markup and move on. Sites that lean heavily on client-side rendering lose entire menus, product details, pricing tables, and conversion paths. Aleyda describes this as a structural issue that forces marketers to confront their technical debt. Many teams have spent years building front-ends with layers of JavaScript because Google eventually figured out how to handle it. AI crawlers skip that entire pipeline. Simpler pages load faster, reveal hierarchy immediately, and give AI models a complete picture without extra processing.
Search behavior adds new pressure. Aleyda points to OpenAI’s published research showing a rise in task-oriented queries. Users ask models to complete goals directly and skip the page-by-page exploration we grew up optimizing for. You need clarity about which tasks intersect with your offerings. You need to build content that satisfies those tasks without guessing blindly. Aleyda urges teams to validate this with real user understanding because generic keyword tools cannot describe these new behaviors accurately.
Authority signals shift too. Mentions across credible communities carry weight inside AI summaries. Aleyda explains it as a natural extension of digital PR. Forums, newsletters, podcasts, social communities, and industry roundups form a reputation map that AI crawlers use as context. Backlinks still matter, but mentions create presence in a wider set of conversations. Strong SEO programs already invest in this work, but many teams still chase link volume while ignoring the broader network of references that shape brand perception.
Measurement evolves alongside all of this. Aleyda encourages operators to treat AI search as both a performance channel and a visibility channel. You track presence inside responses. You track sentiment and frequency. You monitor competitors that appear beside you or ahead of you. You map how often your brand appears in summaries that influence purchase decisions. Rankings and click curves do not capture the full picture. A broader measurement model captures what these new systems actually distribute.
Key takeaway: You need to explicitly allow AI crawlers to access your content, and that control often lives at the hosting or CDN layer, not just robots.txt. AI crawlers use different user agents, so the validation rules aren’t the same as search bots. Google is the outlier. Googlebot renders JavaScript and works through most challenges. AI crawlers don’t. They largely consume raw HTML and move on. If your critical content or navigation depends on client-side JavaScript, AI crawlers won’t see it. To them, that content effectively doesn’t exist.
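One way to sanity-check the robots layer described above is to parse your rules with Python’s standard-library `urllib.robotparser` and ask whether each crawler gets through. A minimal sketch, assuming a robots.txt that allows Googlebot but blocks everything else by wildcard; the AI bot names are the commonly published user agents, and remember that hosting or CDN rules can still block these bots even when robots.txt permits them:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt that lets Googlebot in but never names AI bots,
# so they fall through to the wildcard rule and get blocked.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /
"""

def crawler_allowed(robots_text: str, user_agent: str, path: str = "/") -> bool:
    """Check whether a given user agent may fetch a path under these rules."""
    parser = RobotFileParser()
    parser.parse(robots_text.splitlines())
    return parser.can_fetch(user_agent, path)

# Commonly published AI crawler user agents (verify against each vendor's docs).
for bot in ["Googlebot", "GPTBot", "ClaudeBot", "PerplexityBot"]:
    print(bot, crawler_allowed(robots_txt, bot))
```

Running a check like this per bot makes the “Googlebot works, everything else is blocked” failure mode visible before you start diagnosing visibility problems that are really access problems.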
Back to the top ⬆️
LLMs As A New Search Channel In A Multi-Platform Discovery System

SEO keeps getting declared dead every time Google ships a new interface, yet actual search behavior keeps spreading across more surfaces. Aleyda reacted to JT’s “LLMs as a new channel” framing with immediate agreement because she sees teams wrestling with a bigger shift. They still treat Google as the only gatekeeper, even though users now ask questions, compare products, and verify credibility across several platforms at once. LLMs, TikTok, Instagram, and traditional search engines all function as parallel discovery layers, and the companies that hesitate to accept this trend end up confused about where SEO fits.
Aleyda pointed to the industry’s long dependence on Google and described how that dependence shaped expectations. Many teams built an entire worldview around a single SERP format, a single set of ranking factors, and a single customer entry point. Interface changes feel existential because the discipline was defined too narrowly for too long. She sees this tension inside consulting projects when stakeholders ask whether SEO is dying instead of asking where their audience now searches for answers.
Retail clients provided her clearest examples. They already treat TikTok and Instagram as core search environments. They ask for guidance on how to structure content so it gets discovered through platform-specific signals. They ask for clarity on how product intent gets inferred through tags, comments, watch time, and creator interactions. Their questions treat search as a distributed system, and their behavior hints at what the wider market will adopt. Aleyda considers this a preview, because younger customers rarely begin their journey inside a traditional search engine.
Her story from a conference in China made the point even sharper. She explained how Baidu no longer carries the gravitational pull many Western marketers assume. People gather information through Red Note, Douyin, and several specialized platforms, and they assemble answers through a blend of formats.
That experience changed Aleyda’s expectations for Western markets. She believes SEO now means understanding where your audience searches, learning how each platform surfaces information, and adapting your content to match the mechanics of each environment. She treats this shift as a natural expansion of the craft and a sign that organic discovery has become a multi-platform discipline.
“SEO keeps evolving, and now it includes different channels because user behavior is more split and diversified.”
Key takeaway: Map every platform where your audience looks for answers, study the ranking signals of each one, and shape your content so it performs inside their unique systems. That way you can build organic visibility that travels with users across Google, TikTok, Instagram, LLM assistants, and any next-wave discovery surface.
AI Search Visibility Analysis for SEO Teams

AI engines reward the brands that actually measure their standing inside these new ecosystems. Aleyda responded to the question about layering GEO or AI search responsibilities into an existing SEO team with a detailed playbook that starts with simple measurement, then expands into competitive intelligence.
She said teams should quantify how much traffic LLM platforms already send, study the queries that drive it, and compare historical SEO wins to how AI engines now describe the same topics. She warned that personalization inside these systems creates noise, yet patterns still surface when you test enough personas and use cases. Those patterns reveal whether your brand shows authority or feels invisible.
Aleyda frames the research as a series of practical checks. She recommends that teams examine:
- which platforms drive the most traffic to you and your competitors
- where your brand appears within top AI answers for core queries
- how consistently your information is cited across personas
- which platforms supply those citations, especially if you have never touched them
These checks often expose data sources that shock teams. She has seen AI engines rely on forgotten pages with outdated ratings or half-abandoned community hubs. She said these neglected surfaces now feed LLM summaries, which means old content with emotional charge can shape how a brand gets described. This is where she encourages SEOs to work closely with community managers because reputation work and AI visibility overlap more than most teams expect.
“I realized my hosting company was blocking AI bots. All the answers looked wrong and the share of voice was terrible. I only found it because I dug deep into the validation.”
Her hosting story brought a very human frustration into a technical conversation. She had validated every angle until nothing made sense. She eventually discovered that her hosting provider had quietly blocked AI crawlers. She said this is common, which means many brands make strategic decisions based on incomplete visibility. She recommends treating AI crawler access as something you must verify early. That way you can avoid diagnosing imaginary problems.
Aleyda also challenged the idea of launching a separate role for AI search or GEO work. She believes skilled SEOs already have the toolkit for this analysis. They understand ranking patterns, citation behavior, competitor research, and platform testing. She said teams should rely on these strengths and invest only when AI platforms become significantly different from one another. She compared this future scenario to how companies hire TikTok or YouTube specialists once algorithms diverge enough to justify deeper focus.
Key takeaway: Validate your presence inside AI engines before restructuring your org. Measure traffic from LLM platforms, test your brand across personas, track where citations originate, and confirm that AI crawlers can access your site. Use your existing SEO talent to run these checks and partner with community managers to address outdated or negative data sources. That way you can strengthen AI visibility with practical steps that fit your current team.
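The presence and citation checks described above can be run as a simple tally over sampled AI answers. This is a hypothetical sketch: the exported records and their field names (`answer_text`, `cited_domains`) are assumptions for illustration, not any real tool’s schema.

```python
from collections import Counter

# Hypothetical export: one record per sampled AI answer for a core query,
# collected across different personas and phrasings.
sampled_answers = [
    {"answer_text": "Acme and Globex both offer...", "cited_domains": ["acme.com", "reddit.com"]},
    {"answer_text": "Globex is a popular choice...", "cited_domains": ["globex.com"]},
    {"answer_text": "Many teams pick Acme for...",   "cited_domains": ["acme.com", "g2.com"]},
]

def share_of_voice(answers, brands):
    """Fraction of sampled answers that mention each brand by name."""
    counts = Counter()
    for record in answers:
        text = record["answer_text"].lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    return {brand: counts[brand] / len(answers) for brand in brands}

def citation_sources(answers):
    """How often each domain is cited across the sample -- this is where
    forgotten pages or half-abandoned community hubs tend to surface."""
    return Counter(d for record in answers for d in record["cited_domains"])

print(share_of_voice(sampled_answers, ["Acme", "Globex"]))
print(citation_sources(sampled_answers).most_common(3))
```

Even a rough tally like this turns “do we show up?” into a number you can track over time and compare against competitors, and the citation counter is how you spot the outdated data sources Aleyda warns about.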
Creating Brand-Led Informational Content for AI Search

Top-of-funnel traffic models strain under AI Overviews because generic content collapses instantly into a short summary. Aleyda described how informational queries that once justified entire content hubs now resolve inside Google or an LLM with no reason for a user to click. She has seen teams publish keyword-matched explainers for years, and she believes that AI simply exposed how little weight those pages carried. Her stance centers on building informational content that carries specificity, personal stake, and brand context, because those qualities survive compression.
Aleyda recommends keeping TOFU investments alive with very different intent. She encourages teams to publish material that shapes a buyer’s early thinking and reinforces the brand later. She pointed to formats that pull directly from real customers and real experience, such as research, case studies, trend work, competitive breakdowns, and community-driven FAQs. These assets hold texture that AI cannot replicate because they reflect the lived environment of the product and the category. She explained that many companies skip these formats in favor of scalable, low-effort content even though the harder work produces the content that actually influences decisions.
“This is the content you want to have even if you take SEO out of the equation,” Aleyda said. “It helps customers and makes you sell more.”
Aleyda advised teams to treat help centers and forums as strategic assets rather than customer service artifacts. She has seen early stage questions from users act as powerful signals for search intent, which creates a natural TOFU footprint when those pages are optimized with clarity and depth. She described how users exploring an AI answer often want additional context or proof, and she believes they return to the brands they recognize. Strong informational content builds that recognition because it shows up repeatedly across use cases and personas.
Aleyda also acknowledged the difficulty of attribution in an LLM dominated environment. She believes teams should still invest because high quality informational content strengthens the entire journey by giving customers something meaningful to evaluate. She sees this work as long term brand scaffolding that supports presales and post sales experiences. Her recommendation centers on publishing material that would still be worth creating without SEO as an incentive. That way you invest in content that informs decisions, carries credibility, and maintains influence even when search models intercept the first click.
Key takeaway: Publish informational content that carries depth, specificity, and direct usefulness. Create original research, case studies, competitive comparisons, FAQs, and help center resources that reflect real customer questions. Focus on formats that hold up when LLMs summarize the basics. That way you build familiarity early, reinforce credibility later, and influence buying decisions even when attribution grows murky.
Choosing SEO Topics That Drive Brand-Aligned Demand

Keyword lists tend to pull marketers into strange corners of the internet. Aleyda describes how teams chase whatever shows up as high volume in the tools even when the terms carry no connection to what customers actually want. She explains that SEO became a tactical checkbox exercise inside many companies. Teams gather keywords, rank them by volume, and call it strategy. Traffic becomes the scoreboard even when that traffic has the business value of a puddle.
Aleyda shares how this plays out in practice. She recalls working with a major cruise line that immediately rejected top-volume terms like “cheap cruises.” The brand positioned itself far away from the bargain segment and refused to dilute that identity. That moment shaped how she evaluates keyword research because it showed her how quickly SEO can drift when the work is disconnected from product and customer reality. She has seen keyword research handed off to interns who have never spoken with customer support or product teams. Those teams carry the vocabulary that buyers use, and ignoring that vocabulary produces topic lists that might rank but rarely convert.
“Good SEO is the one that talks with product, talks with customer service, talks with digital PR, with social media, with branding, and makes sure to target queries that align with the brand.”
Aleyda shares an example from a well known software platform that ranked for nearly anything remotely related to tech. Years of accumulated authority gave them a false sense of stability. The team published content far outside their core expertise, and it ranked because the domain was strong. A Google update finally corrected that pattern and wiped out a massive portion of their traffic. Aleyda points out that the traffic came from pages with almost no relevance for the business. They generated impressive impressions but very few customers. Anyone who has inherited a bloated SEO content library can recognize the pattern.
Aleyda encourages teams to reorient around topics that reflect brand value, customer language, and product truth. LLM search only increases the need for this discipline. She sees marketers obsessing over individual prompts listed in LLM trackers even though prompts scatter across millions of variants. The odds of two users typing the same phrasing are minimal. Even if they do, the answers shift based on context, history, and location. She recommends treating prompt trackers as color, not strategy. The strategic layer comes from a clear sense of which topics matter for the product, which terms match customer expectations, and which stories reinforce trust over time.
Aleyda believes the strongest SEO programs treat topic selection like product positioning. Teams build content that strengthens the brand instead of diluting it. They filter ideas through personas, support conversations, and product roadmaps. They prioritize relevance even when volume looks small. That discipline produces content libraries that survive algorithm updates and surface in LLM responses because they reflect genuine expertise rather than opportunistic traffic grabs.
Key takeaway: Build your topic strategy from customer conversations, product clarity, and brand positioning. Validate themes with support and product teams, then use keyword and prompt tools to refine rather than dictate direction. Create content for the queries that reinforce your expertise and attract qualified demand, and skip the terms that inflate traffic without helping anyone buy.
How Topic Level Analysis Shapes AI Search Strategy

Topic level analysis gives teams a reliable way to work with stakeholders who do not understand keyword volumes or prompt chatter. Aleyda uses it whenever conversations drift into the vague territory of “prompt strategy” because she sees how easily teams get distracted by isolated examples. She prefers to anchor everything in the topics customers actually explore. She identifies the categories that matter, the brands customers associate with those categories, and the long tail questions that surface around each one. That way you can steer conversations toward strategy instead of debating prompt curiosities.
Aleyda likes tools that cluster prompts, but she uses them to understand context rather than chase specific entries. She described a retailer with dozens of product lines and a fragmented sense of what customers asked for. She entered each core topic into a clustering tool and quickly identified the top brands, the common prompt variations, and the sentiment wrapped around each mention. She then evaluated women’s sneakers, men’s jackets, and kids’ golf gear as separate ecosystems. She focused on the visibility of her client inside each category and compared it against the competitors who consistently appeared in the same clusters.
Aleyda organizes her analysis with simple structures that keep teams oriented.
- She defines the topic cluster.
- She identifies the competitors who appear inside it.
- She studies how often each brand is mentioned and how people feel about those mentions.
- She tracks the prompts that best represent the cluster over time.
That way you can stay grounded in demand that actually connects to revenue. She gave a clear example. A footwear-only retailer gains nothing from measuring visibility across every prompt tied to Adidas. Adidas sells across many more categories. Aleyda narrows the lens to sneaker related prompts and builds the competitive frame from there. This creates cleaner signals that help teams understand whether they are rising or falling in the areas that matter.
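The cluster-level framing above can be expressed as a small sketch. The data here is hypothetical (invented cluster names, brands, and field names), but it shows the mechanic: tag each sampled prompt with its topic cluster, then measure per-cluster brand visibility so you only compete where you actually sell.

```python
from collections import defaultdict

# Hypothetical tracker export: each sampled prompt tagged with its topic
# cluster and the brands that appeared in the AI answer.
prompt_samples = [
    {"cluster": "women's sneakers", "brands_mentioned": ["Adidas", "Nike", "YourBrand"]},
    {"cluster": "women's sneakers", "brands_mentioned": ["Adidas", "Nike"]},
    {"cluster": "men's jackets",    "brands_mentioned": ["Patagonia", "North Face"]},
]

def cluster_visibility(samples, brand):
    """Per-cluster share of sampled answers that mention the brand.
    Clusters the brand doesn't sell in can simply be filtered out."""
    totals, hits = defaultdict(int), defaultdict(int)
    for sample in samples:
        totals[sample["cluster"]] += 1
        if brand in sample["brands_mentioned"]:
            hits[sample["cluster"]] += 1
    return {cluster: hits[cluster] / totals[cluster] for cluster in totals}

print(cluster_visibility(prompt_samples, "YourBrand"))
# A footwear-only retailer would then read only the sneaker cluster,
# not every prompt where a multi-category brand like Adidas appears.
```

Because the topic structure stays stable while individual prompts drift, a per-cluster rate like this is trackable over time in a way a raw prompt list is not.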
Aleyda treats prompts as reference points rather than targets. She monitors how they shift, how competitors move in and out of view, and how sentiment changes as LLMs ingest new data. She avoids the trap of turning prompt lists into performance dashboards because they drift too quickly and rarely reflect real buying behavior. Topic structure remains stable enough to analyze, which makes it the better foundation for long term tracking.
Aleyda works with prompt libraries when she needs inspiration, but she relies on topic analysis when she needs truth. She sees far more value in understanding the markets underneath each prompt cluster. That way she can give teams a clear sense of where they stand, why they stand there, and what levers actually move them forward.
Key takeaway: Define the topics that matter to your business, then analyze the competitors, citations, and sentiment inside each cluster. Track a handful of representative prompts over time to observe movement without turning them into goals. This lets you measure AI search visibility where customer intent is strongest and helps you prioritize the work that meaningfully shifts your position within those high value categories.
LLM Search Console Reporting Expectations

LLM platforms keep dodging the question of visibility, and the industry feels the gap every time someone tries to measure what their content actually does inside ChatGPT, Perplexity, or Claude. Phil pressed Aleyda on whether these assistants will ever launch real reporting tools. She shared a candid view of why the consoles still do not exist and why they probably will, but only after a few conditions line up.
Aleyda pointed to the simplest explanation. These assistants do not refer much traffic, and most user behavior stays inside the chat window. That dynamic creates awkward optics for platforms that claim to redefine search. She described the situation as a mix of small numbers and regulatory pressure, which pushes companies into silence. Any reporting that exposes weak traffic patterns invites more scrutiny. She said the companies are likely waiting for the market to settle. In her words:
“I believe that they will come, but once the market is more settled and they are able to show more proof of their value.”
Aleyda highlighted a pattern that repeats across every ad platform. Real reporting appears only when revenue depends on it. She expects the tipping point to come when LLMs start selling ads consistently across conversational surfaces. Once marketers spend real budgets, they need performance data. The advertisers on the other end push hard for metrics, and platforms respond by releasing dashboards. She also mentioned ChatGPT’s shopping integrations as an early example. Merchants cannot justify spend without visibility into impressions, clicks, and conversions. The integrations will force data out into the open.
Aleyda believes that a search console for LLMs is a matter of timing. Growth teams inside these companies will eventually need to prove that ad surfaces produce measurable value. That creates a direct incentive to publish traffic, ranking logic, query mapping, or some early form of impression reporting. Even partial telemetry changes the work of any marketer who relies on LLM placement. It shifts optimization from guesswork to measurable behavior, and that shift always follows monetization.
Key takeaway: LLM reporting will appear when advertisers demand measurable outcomes, not before. Watch for two signals: consistent ad formats inside assistants and deeper commerce integrations. Those features force platforms to release performance metrics. That way you can plan content for LLM visibility with real data instead of intuition.
Why LLM Search Rewards Brands With Real Community Signals

LLM-powered search engines already elevate small expert brands because they read the web through a different set of signals than legacy SEO systems. Aleyda keeps hearing from founders who woke up to sudden spikes in leads after ChatGPT began mentioning their products for niche questions. Their teams never built polished taxonomies or deep internal linking. Their backlink profiles barely existed. Their advantage came from public chatter and early community traction that lived outside formal SEO playbooks.
Aleyda shares stories from startups that attribute roughly ninety percent of their recent customers to ChatGPT citations. Those founders spent the past year creating content that felt human and unscripted. They posted product walkthroughs on YouTube, answered questions in Discord, talked openly on social, and built small communities that cared about the work. Their names appeared in scattered places across the web. LLMs noticed these traces and used them as evidence of relevance, authenticity, and expertise.
She points to a pattern that helps readers understand why this works.
- LLMs track repeated mentions across social and video, which act as ambient signals of trust.
- LLMs rely heavily on conversational proof, such as how customers describe products to each other.
- LLMs reward highly specific problem solving content, even when it comes from tiny sites.
- LLMs respond quickly to new topics, so early movers earn visibility before incumbents mobilize.
Those signals build a different type of authority. Aleyda believes many SEOs underestimate how much weight LLMs place on lived user behavior. A short Reddit thread or a detailed YouTube comment can influence retrieval more than an entire long-form content hub. Aleyda has watched early stage companies outperform established brands because they created stories people repeated, which gave models raw material to learn from.
She expects this pattern to accelerate as new queries and topics appear in AI search. Fresh expertise gains traction without competing in the same narrow stack of blue link rankings. LLM engines thrive on new language, new conversations, and new examples. Brands that invest in real problem solving and community resonance build momentum that transfers directly into AI generated recommendations.
“All of a sudden my startup started getting cited in ChatGPT. It became my biggest traffic driver.”
Aleyda sees LLM-powered discovery as a channel that rewards creators who make something people talk about. You build authority through usefulness, conversation, and visibility in places models actively observe. That way you can earn search presence long before traditional SEO signals mature.
Key takeaway: LLM search engines prioritize community signals, conversational proof, and specific problem solving content. You gain meaningful visibility when your product becomes something people reference in public forums, social threads, and video comments. Focus on helpful explanations, active participation in your niche, and stories customers repeat. These signals shape AI generated recommendations faster and more effectively than traditional SEO mechanics.
Prioritizing Work That Matches Personal Purpose

Energy allocation becomes far more predictable when purpose acts as the filter. Aleyda described her system with a kind of grounded practicality that many operators talk about but rarely execute. She organizes her work around the parts of SEO that bring her genuine satisfaction. She cares about direct client conversations, problem solving, and the rhythm that comes from understanding a business deeply. She built her consultancy to protect that rhythm, and she made sure her structure keeps her close to real advisory work rather than drowning her in layers of management.
Aleyda’s filter also helps her resist the pressure to scale for prestige. Growth culture loves to glorify team size, yet she treats independence as the real indicator of progress. She said:
“I really like to talk with customers. That is why I do not have the big agency.”
That single sentence shows how she thinks. She wants the freedom to choose clients, shape her calendar, and jump into projects that align with her skills. She chose a boutique model because she wanted the right kind of work, not the biggest operation in the room. You can hear the years of accumulated self-awareness in how she explains it.
Her personal routines follow the same design principle. She wanted better fitness. She also knew the gym would bore her and eventually vanish from her schedule. So she built a system she would actually continue. She jumps rope at home because it fits any environment. She plays paddle tennis because it is social, fast, and fun. She selected activities that keep her moving without turning exercise into another obligation she would resent. Many operators force themselves into routines that collapse under stress. Aleyda studies her own behavior patterns and then chooses formats that survive real life.
She handles community in the same intentional way. Remote work gives her the flexibility to travel, speak, and meet people who strengthen her thinking. She gains energy from being around practitioners, so she structures her year to include those moments. Conferences, meetups, and events act as fuel rather than interruptions. Her calendar reinforces what keeps her motivated, and she gives herself permission to build around the things that strengthen her work.
Aleyda’s entire system works because she treats purpose as a practical constraint instead of a vague aspiration. She chooses work that energizes her, habits she will continue, and environments that support her personality. She created a model that protects her attention instead of draining it.
Key takeaway: Build your weekly structure around the work that energizes you, not the work you feel obligated to chase. Anchor your decisions in a purpose filter so you can select projects, clients, and habits that reinforce your strengths. Choose activities and routines you will repeat under pressure. That way you can design a sustainable operating system that keeps you aligned with what matters rather than wrestling with a calendar that works against you.
Episode Recap

AI search is rewriting discovery, and Aleyda guides you through it with the calm of someone who has fixed more broken websites than she can count. She talks about blocked crawlers, JavaScript swallowing content, and audits where entire product sections never reached an AI model at all. You feel the frustration because she has lived it.
She expands the frame quickly. People now search across Google, TikTok, Instagram, and LLMs in the same breath. Retailers already treat these platforms as equal inputs. Her stories from China make the point stronger. Users patch together answers from many places, not from one dominant engine. She believes this is where Western behavior is headed.
Her guidance on AI visibility focuses on what you can measure. She wants you to check where LLMs mention you, which pages they rely on, and whether your hosting even lets AI bots through. She once spent days debugging odd rankings until she discovered that her own provider had blocked every AI crawler. It is the kind of mistake any team could make.
Her content advice stays grounded. Thin explainers collapse instantly under AI summaries. Content built from customer language, research, and lived experience holds its shape. Help centers, real questions, and specific problem solving create signals that AI models actually trust.
Her stance on keywords follows the same thread. You build around customer vocabulary and brand positioning, not around inflated volume charts. She tells stories of brands that chased large numbers only to lose everything because none of that traffic mattered.
Topic clusters give structure to the noise. She groups related prompts, studies competitors inside each cluster, and evaluates sentiment and visibility over time. This creates clarity in a space where prompt lists drift constantly. She believes LLM reporting will appear only when ads push platforms to provide it. Until then, we are flying with partial dashboards.
Finally, small brands earn surprising visibility because they show up in real conversations, from Discord threads to YouTube comments. LLMs absorb that chatter and treat it as proof. Aleyda has watched founders wake up to unexpected demand because their names kept appearing in places people actually talk.
Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Aleyda👇
✌️
—
Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)