Apple • Spotify • Pocket Casts • YouTube • Overcast • RSS

What’s up everyone, today we have the pleasure of sitting down with Anna Aubuchon, VP of Operations at Civic Technologies.
Summary: Anna breaks down how old build versus buy habits hold teams back, how yearly AI contracts quietly drain momentum, and how modern integrations let operators assemble powerful workflows without engineering bottlenecks. She contrasts scattered one-off AI tools with the speed that comes from shared patterns that spread across teams. Her biggest story is that Civic replaced slow dashboards and long queues with orchestration that pulls every system into one conversational layer, letting people get answers in minutes instead of mornings.
In this Episode…
- How AI Flipped the Build Versus Buy Decision
- Why In House AI Provides Better Economics And Control
- How to Treat AI as an Insourcing Engine
- Replacing Enterprise BI with LLMs and a Self-serve Analytics Stack
- Ops People are Creators of Systems Rather Than Maintainers of Them
- Why Natural Language AI Lowers the Barrier for First-Time Builders
- Technical Literacy Requirements for Next Generation Operators
- Why Creative Practice Strengthens Operational Leadership
Recommended Martech Tools 🛠️
We only partner with products and agencies that are chosen and vetted by us. If you’re interested in partnering, reach out here.
🎨 Knak: Go from idea to on-brand email and landing pages in minutes, using AI where it actually matters.
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🔌 GrowthBench: Twilio’s top-tier consulting partner, turning your Twilio investment into a customer engagement engine.
🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster, easier and without back-and-forth.
About Anna

Anna Aubuchon is an operations executive with 15+ years building and scaling teams across fintech, blockchain, and AI. As VP of Operations at Civic Technologies, she oversees support, sales, business operations, product operations, and analytics, anchoring the company’s growth and performance systems.
She has led blockchain operations since 2014 and built cross-functional programs that moved companies from early-stage complexity into stable, scalable execution. Her earlier roles at Gyft and Thomson Reuters focused on commercial operations, enterprise migrations, and global team leadership, supporting revenue retention and major process modernization efforts.
How AI Flipped the Build Versus Buy Decision

AI tooling has shifted so quickly that many teams are still making decisions with a playbook written for a different era. Anna explains that the build versus buy framework people lean on carries assumptions that no longer match the tool landscape. She sees operators buying AI products out of habit, even when internal builds have become faster, cheaper, and easier to maintain. She connects that hesitation to outdated mental models rather than actual technical blockers.
AI platforms keep rolling out features that shrink the amount of engineering needed to assemble sophisticated workflows. Anna names the layers that changed this dynamic. System integrations through MCP act as glue for data movement. Tools like n8n and Lindy give ops teams workflow automation without needing to file tickets. Then ChatGPT Agents and Claude Skills launched with prebuilt capabilities that behave like Lego pieces for internal systems. Direct LLM access removed the fear around infrastructure that used to intimidate nontechnical teams. She describes the overall effect as a compression of technical overhead that once justified buying expensive tools.
She uses Civic’s analytics stack to illustrate how she thinks about the decision. Analytics drives the company’s ability to answer questions quickly, and modern integrations kept the build path light. Her team built the system because it reinforced a core competency. She compares that with an AI support bot that would need to handle very different audiences with changing expectations across multiple channels. She describes that work as high domain complexity that demands constant tuning, and the build cost would outweigh the value. Her team bought that piece. She grounds everything in two filters that guide her decisions: core competency and domain complexity.
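Anna's two filters lend themselves to a tiny decision sketch. This is an illustrative encoding, not a formula from the episode; the labels and the rule that high domain complexity points toward buying are assumptions drawn from her analytics and support-bot examples.

```python
def build_or_buy(strengthens_core_competency: bool, domain_complexity: str) -> str:
    """Toy version of Anna's two filters.

    domain_complexity: 'low', 'medium', or 'high' (illustrative labels).
    """
    # Civic's analytics stack: core competency, light build path -> build.
    if strengthens_core_competency and domain_complexity in ("low", "medium"):
        return "build"
    # The support bot: high complexity, constant tuning -> buy.
    return "buy"

print(build_or_buy(True, "low"))    # analytics-style case
print(build_or_buy(False, "high"))  # support-bot-style case
```

The point of the sketch is the ordering of the questions, not the thresholds: ask about core competency first, then let complexity decide how much tuning you want to own.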
Anna also calls out a cultural pattern that slows AI adoption. Teams buy AI tools individually and create isolated pockets of automation. She wants teams to treat AI workflows as shared assets. She sees momentum building when one group experiments with a workflow and others borrow, extend, or remix it. She believes this turns AI adoption into a group habit rather than scattered personal experiments. She highlights the value of shared patterns because they create a repeatable way for teams to test ideas without rebuilding from scratch.
She closes by urging operators to update their decision cycle. Tooling is evolving at a pace that makes six month old assumptions feel stale. She wants teams to revisit build versus buy questions frequently and to treat modern tools as a prompt to redraw boundaries rather than defend old ones. She frames it as an ongoing practice rather than a one time decision.
Key takeaway: Reassess your build versus buy decisions every quarter by measuring two factors. First, identify whether the workflow strengthens a core competency that deserves internal ownership. Second, gauge the domain complexity and decide whether the function needs constant tuning or specialized expertise. Use modern integration layers, workflow builders, and direct LLM access to assemble internal systems quickly. Build the pieces that reinforce your strengths, buy the pieces that demand specialized depth, and share internal workflows so other teams can expand your progress.
Back to the top ⬆️
Why In House AI Provides Better Economics And Control

AI tooling has grown into a marketplace crowded with vendors who promise intelligence, automation, and instant transformation. Anna watches teams fall into these patterns with surprising ease. Many of the tools on the market run the same public models under new branding, yet buyers often assume they are purchasing deeply specialized systems trained on inaccessible data. She laughs about driving down the 101 and seeing AI billboards every few minutes, each one selling a glossy shortcut to operational excellence. The overcrowding makes teams feel like they should buy something simply because everyone else is buying something, and that instinct shifts AI procurement from a strategic decision into a reflex.
“A one year agreement might as well be a decade in AI right now.”
Anna has seen how annual vendor contracts slow companies down. The moment a team commits to a year long agreement, the urgency to evaluate alternatives vanishes. They adopt a “set it and forget it” mindset because the tool is already purchased, the budget is already allocated, and the contract already sits in legal. AI development moves fast. Contract cycles do not. That mismatch creates friction that becomes expensive, especially when new models launch every few weeks and outperform the ones you purchased only months earlier. Teams do not always notice the cost of stagnation because it creeps in quietly.
Anna lays out a practical build versus buy framework. Teams should inspect whether the capability touches their core competency, their customer experience, or their strategic distinctiveness. If it does, then in house AI provides more long term value. It lets the company shape the model around real customer patterns. It keeps experimentation in motion instead of waiting for a vendor update. It removes the premium pricing that comes with packaged AI products, many of which charge enterprise rates for work that an internal team could run on cheaper infrastructure. Buying stays useful for commodity capabilities that do not shape differentiation, but building becomes the smarter investment when precision and speed matter.
She also calls attention to the emotional habits that drive AI procurement inside startups. Buying feels safe. It feels like progress. It creates the illusion of momentum. Anna encourages teams to recognize that real momentum comes from iteration. In house models allow the team to retrain whenever customer behavior shifts. They open a deeper understanding of internal data. They keep ownership of the intelligence that governs decision making. They also prevent the operational drag that comes from waiting on a vendor roadmap that does not match the company’s pace.
Readers who want something immediately useful can anchor their AI strategy in control. AI decisions shape the pace of the business. The team that owns its models owns its speed. The team that signs long contracts inherits someone else’s timeline.
Key takeaway: Evaluate every AI purchase with a clear rule. Buy only when the capability is non core and commoditized. Build whenever the work influences differentiation, customer experience, or long term strategy. That way you can avoid vendor lock in, protect your pace, reduce long term cost, and keep control of the intelligence that drives your company forward.
Back to the top ⬆️
How to Treat AI as an Insourcing Engine

Vendor ecosystems accumulate advantage with every customer interaction, and Anna calls attention to how quietly that dynamic compounds. Vendors refine their systems as thousands of users feed edge cases into the product. They strengthen patterns. They improve model behavior. They fine tune performance based on data you never see. Companies walk away with only the content they supplied, while the vendor walks away with the upgraded model. Anna believes this creates an invisible cost structure that teams rarely calculate, even though it affects long term capability and bargaining power.
Her perspective reflects years of watching operations teams carry the weight of tasks that consume whole workweeks. She argues that AI gives operators a chance to reclaim time for strategic decisions. She describes operations as a function that often carries repetitive work that hides the creative and analytical work beneath it. AI breaks that bottleneck. It lifts the mechanical work off the team and hands them a chance to think more clearly about system design and customer experience.
“You walk away with what you injected, but the vendor walks away with everything they learned.”
Her story about support automation shows how insourcing plays out inside Civic. The company adopted an AI support bot to replace a costly level one BPO, but the real shift happened inside the internal team. They took ownership of the intelligence layer. They learned to read user interactions, manage context, and optimize model behavior. They created a system where:
- operators controlled the quality of knowledge
- teams reviewed patterns directly
- insights stayed in house
- iteration happened inside the company rather than inside a vendor contract
That shift sharpened Civic’s understanding of its customers and reduced reliance on external support without losing control of how the machine behaved.
The analytics transformation followed the same logic. Civic had invested heavily in a traditional BI stack that required engineers and data specialists to maintain. They brought visualization and analysis back inside the business. Operators gained direct access to data that had previously been behind layers of tooling and ticket queues. They began asking stronger questions because they could interact with the underlying information themselves. The change improved decision making and lowered costs, and it gave the team a sense of ownership over a function that had been outsourced for convenience.
Anna’s view of insourcing focuses on capability instead of just cost cutting. She believes AI creates room for operators to think more deeply about the systems they own. She sees insourcing as a path to operational fluency, better control over critical loops, and a business that grows stronger with each iteration.
Key takeaway: AI insourcing strengthens your long term leverage. Keep vendors for infrastructure, but bring intelligence work back inside your team. Focus on owning the learning loop, reviewing interactions directly, and shaping how your systems behave. That way you can cut hidden costs, gain more control over your data patterns, and give your operators meaningful time for deeper, strategic work.
Back to the top ⬆️
Replacing Enterprise BI with LLMs and a Self-serve Analytics Stack
Moving BI Workloads Out of Dashboards and Into LLMs

Replacing a central BI tool inside a company that relies on shared dashboards requires more than a technical refresh. It requires a shift in how people access and interrogate data. Phil asked Anna to walk through that operational leap, including the sequencing, the internal friction, and the difference in speed and cost once Civic brought everything in-house. Anna described a team buried under requests, a single data engineer carrying the weight, and BI tools designed for a slower era. Every new question moved through the same bottleneck. Reports arrived the next morning, long after the moment the question actually mattered.
AI orchestration created the first real opening. Anna routes warehouse data into an LLM client and interacts with it through natural language. She can pull sales data, add product usage patterns, layer in web traffic, and fold in CRM activity, all inside a single thread. That work used to require multiple tools. Now it happens inside one conversation. She uses orchestration to coordinate the scattered AI features hiding in each product. That way she can produce clean outputs, connect insights across systems, and trigger workflows that deliver finished work instead of raw charts. She describes the change as a shift from dashboards that only replay history to conversations that reflect the way people think when they are trying to solve a problem.
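The layering Anna describes can be pictured as a single conversation object that pulls from several registered sources and accumulates results in one running context. Everything below is a hypothetical stand-in: an in-memory sqlite table plays the warehouse, and a stub plays the CRM.

```python
import sqlite3

class Conversation:
    """Stand-in for an orchestration thread that layers in data sources."""

    def __init__(self):
        self.sources = {}   # name -> callable that returns rows
        self.context = []   # accumulated (source, rows) layers

    def register(self, name, fetch):
        self.sources[name] = fetch

    def layer_in(self, name, *args):
        """Pull one source's data into the running context."""
        rows = self.sources[name](*args)
        self.context.append((name, rows))
        return rows

# Example: sales data from a warehouse plus CRM activity,
# combined in one thread instead of two separate tools.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount INT)")
db.executemany("INSERT INTO sales VALUES (?, ?)",
               [("west", 120), ("east", 80)])

convo = Conversation()
convo.register("warehouse", lambda sql: db.execute(sql).fetchall())
convo.register("crm", lambda: [("west", "3 open deals")])

sales = convo.layer_in(
    "warehouse",
    "SELECT region, SUM(amount) FROM sales GROUP BY region")
crm = convo.layer_in("crm")
print(len(convo.context))  # two layers in one conversation
```

In a real setup the fetch callables would be MCP connectors or API clients and an LLM would sit on top, but the shape is the same: one thread, many sources, one accumulating context.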
“I can go layer by layer into the data, ask the exact questions I care about, and get proactive nudges like ‘have you considered this pattern?’”
That line captures why every team felt the impact.
- GTM teams combine CRM data, usage events, and website activity in minutes. Campaigns ship faster and feel more grounded in what users actually do.
- Product teams explore behavior patterns that separate casual users from power users. They use that clarity to shape onboarding and refine UX.
- Finance teams monitor AI spend in real time, which prevents expensive surprises when API usage spikes.
Anna highlights a truth that many teams gloss over during AI rollouts. Clean data remains the anchor. Schema documentation still matters. Clear table relationships still matter. Consistent naming still matters. Data engineers continue to build the prompts that leadership relies on. Everyone else runs those prompts and climbs deeper into the data through natural language. She asks the AI to show its reasoning because the reasoning reveals whether the answer deserves trust. Her team sets limits on tool calls, uses permissions to protect PII, and keeps high-sensitivity work on secured devices. Civic Nexus carries those guardrails so teams can move quickly and stay safe.
Key takeaway: Treat AI orchestration as your operational layer, not a novelty bolted onto legacy BI. Route warehouse data into an LLM client so you can combine metrics across systems inside a single conversation. Document schemas clearly, clean up naming conventions, and let data engineers own the business-critical prompts. Ask the model to show its reasoning to validate accuracy. Apply permission controls for sensitive data and limit tool calls so your costs stay predictable. When you combine structure with orchestration, you remove the bottlenecks that slow BI work, and every team can act on data during the moment it matters.
Back to the top ⬆️
Guardrails That Keep AI Querying Accurate

AI driven querying creates enthusiasm and anxiety at the same time. Marketers want direct access to data because it removes the slow handoff cycles that stall daily work. Data engineers worry about the fallout when someone builds a deck using the wrong table for active users. Anna deals with this tension constantly, and she returns to one principle. AI behaves predictably when the data layer is predictable. AI behaves erratically when the data layer is chaotic.
Anna has watched models latch onto mismatched naming conventions and produce answers that feel precise but point in the wrong direction. One team called everything “users.” The warehouse called the same records “accounts.” The LLM followed the warehouse, and the marketers thought the system had invented an entirely new dataset. The confusion came from language that drifted over time and never made its way back into the schema. AI mirrored exactly what it saw.
“If you do not have good schema documentation, it will infer what it can out of whatever you have set up.”
Anna outlines a pattern that keeps teams grounded. Engineers and analysts write the business critical prompts because they understand the warehouse deeply and know how metrics behave. These prompts include board level KPIs, recurring financial indicators, and any number that directly affects decisions at scale. Everyone else can layer natural language on top of those definitions. That creates a safe middle ground where the model helps people explore data without rewriting the core logic.
She recommends a few habits that keep the system coherent. First, teams should treat schema documentation as a living artifact. Second, they should use chain of thought prompting so the model explains how it reached an answer. Third, they should review outputs that influence downstream work. These steps do not slow the work down. They create predictability because the team can understand how an answer was generated. Anna points out that Civic also uses role based guardrails tied to privacy and security. Sensitive fields stay locked behind device requirements, role limits, and restricted tool calls. The company avoids privacy issues because the AI can only reach what the system explicitly permits.
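The role based guardrails around sensitive fields can be pictured as a redaction pass that runs before any row reaches the model. The field names and roles below are illustrative assumptions, not Civic's actual schema.

```python
# Sensitive columns are stripped per role before data reaches the model.
SENSITIVE_FIELDS = {"email", "ssn"}

ROLE_POLICIES = {
    "analyst": {"blocked": SENSITIVE_FIELDS},  # exploratory access, PII masked
    "admin":   {"blocked": set()},             # full access on secured devices
}

def redact(rows, role):
    """Return rows with this role's blocked fields masked."""
    blocked = ROLE_POLICIES[role]["blocked"]
    return [
        {k: ("<redacted>" if k in blocked else v) for k, v in row.items()}
        for row in rows
    ]

rows = [{"user_id": 1, "email": "a@example.com", "plan": "pro"}]
print(redact(rows, "analyst"))
```

Running the mask before the model sees anything is what makes the guarantee structural: the AI can only leak what the system explicitly handed it.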
AI generated analytics becomes useful when the warehouse is organized, the core prompts come from experts, and the system enforces boundaries. Anna has seen teams gain more confidence in their numbers when these habits become routine.
Key takeaway: Keep your warehouse clean, document naming and structure in the language your teams actually use, and reserve mission critical prompts for people who understand the data relationships. Use chain of thought prompting so you can audit the model’s reasoning. Enforce role based guardrails around sensitive fields and tool calls. These habits give you reliable answers, faster exploration, and fewer late night surprises when someone presents metrics built from the wrong table.
Back to the top ⬆️
Using Role Based AI Guardrails Across MCP Servers

Phil pressed on the political slog behind ripping out a BI tool that half the company swears by. Anna described a team that hit a wall with their legacy platform. People waited for dashboards that crawled. Costs ballooned as usage grew. Every vendor promised a “modern” alternative that felt like a slightly different flavor of yesterday. The discontent spread because everyone felt the friction in their day-to-day work. The team needed more speed, more direct access to data, and fewer license handcuffs dragging behind them.
Civic Nexus changed the conversation. Anna explained how the team built an orchestration layer that let their LLM tap into every important system, not only event data. Nexus pulled in CRM tables, support case histories, engineering tasks, documentation pages, product logs, and everything in between. The team connected those sources under one interface, and the energy shifted immediately.
“We could access all of it, and we could interface it together. That is when your imagination starts running wild.”
The excitement came with nerves. Anna described a very human moment inside the company, the one where people imagine someone leaving a laptop open at a coffee shop or typing a messy prompt that accidentally exposes private information. Their culture revolves around privacy and identity protection, so the idea of full access produced a tightness in the room. Teams that care deeply about user privacy feel every possible failure scenario in their gut, and her team felt all of it at once.
That fear turned into structure. Nexus let them build guardrails for each MCP server. Every source could have its own rules. They could block PII at the field level. They could hide email addresses for certain roles. They could limit actions inside Postgres without limiting actions inside Jira. The safety model became a living system rather than a static permission table. Anna mentioned that they reject nearly all outside AI services because vendors fail their compliance filters. Building internally gave them the freedom to move, but the guardrails allowed them to trust what they built.
The combination ended up changing more than their BI workflow. It changed how their teams engage with data. They gained faster answers, deeper flexibility, and tighter control of private information. Internal BI replacement became realistic because the safety layer matched the power of the orchestration layer.
Key takeaway: Role based AI guardrails make internal BI rebuilds practical. Start by identifying each data source that touches sensitive information. Assign a guardrail profile to each one, including rules that block PII, limit actions, and restrict specific fields. That way you can open broad AI access for your team without exposing anything you cannot afford to lose. This single shift gives you more velocity than a BI vendor can offer and keeps privacy inside your walls where it belongs.
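The per-source guardrail profiles described above might look something like this sketch: each MCP-style source gets its own allowed actions and blocked fields, and every tool call is checked against that profile first. Source names and rules here are assumptions for illustration.

```python
# One guardrail profile per data source, as in the Nexus setup described
# above: Postgres is read-only with PII blocked, Jira allows writes.
GUARDRAILS = {
    "postgres": {"allowed_actions": {"read"},          "blocked_fields": {"email"}},
    "jira":     {"allowed_actions": {"read", "write"}, "blocked_fields": set()},
}

def authorize(source, action, fields):
    """Check one tool call against its source's guardrail profile."""
    profile = GUARDRAILS[source]
    if action not in profile["allowed_actions"]:
        return False, f"{action} not permitted on {source}"
    leaked = set(fields) & profile["blocked_fields"]
    if leaked:
        return False, f"blocked fields requested: {sorted(leaked)}"
    return True, "ok"

print(authorize("postgres", "write", ["id"]))    # writes blocked on Postgres
print(authorize("jira", "write", ["summary"]))   # writes allowed on Jira
```

Because each source carries its own profile, tightening one system never requires loosening another, which is what lets the safety model evolve as a living system rather than a static permission table.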
Back to the top ⬆️
Ops People are Creators of Systems Rather Than Maintainers of Them

Ops teams gain real power when they stop acting like internal pit crews and start behaving like system designers. Anna pushes hard on this shift because she has lived through the before and after. She talks about support, sales ops, product ops, and business ops as groups that can either run tickets or build engines. She chooses engines. Her teams spend less time reacting and more time architecting, which changes the culture from the inside out.
Anna tells her teams to ignore the boundaries of whatever tool sits in front of them. She encourages them to anchor their work around outcomes, not interfaces. She says it plainly in conversation.
“Do not be inhibited by what the AI tool can offer you. Your imagination is the limit.”
That line hits because most ops pros unconsciously build their workflows around the tool’s constraints. Anna asks them to design the ideal system first, then push the tool to meet that design. She frames the work like product design, not operational cleanup.
The shift becomes obvious when you look at their day to day. They plan in terms of patterns, not tickets. They build workflows that scale people, not workloads. They take on projects that reduce future maintenance instead of optimizing the present queue. When Anna explains what this looks like, she often breaks it down into simple elements such as:
- define the real business objective,
- imagine the ideal system without worrying about limitations,
- design the workflow that makes that system real,
- only then decide how AI or automation fits into the picture.
This sequence gives operators permission to think bigger than the backlog.
Her favorite example comes from home. She jokes that her child once needed help navigating Midjourney, and now that same child can spin up a full video or experiment with “vibe coding” on newer tools. That story reinforces what everyone in ops feels right now. Anyone can build. The walls around technical creation are gone. Anna wants her teams to ride that wave, not compete with it. She believes ops becomes far more valuable when people embrace creativity, imagination, and experimentation as core skills.
She warns her teams that the AI tools limiting them today will look outdated tomorrow. She wants people designing systems that survive tool churn. She wants teams that see structure, flow, and intent as their real craft. When operators take that mindset seriously, they stop thinking about replacing tasks with AI and start building systems that scale the company.
Key takeaway: Treat ops work as system design, not ticket work. Start with the objective, imagine the ideal workflow, and design from that vision before touching any tool. AI becomes more powerful when you lead with creativity and structure instead of constraints. Your leverage grows when you architect workflows that scale people, not maintenance.
Back to the top ⬆️
Why Natural Language AI Lowers the Barrier for First-Time Builders

Natural language AI has turned into the closest thing to an open door for anyone who wants to build but has been conditioned to believe they lack the right background. Anna pushes this point hard because she sees how often people talk themselves out of experimenting. Anyone who can describe a workflow, outline an experience, and articulate a purpose already holds the most important ingredients for building with these systems. The gatekeeping around technical creation dissolves once you realize that natural language now acts as the interface.
Anna’s conversations with her daughter show how simple the mindset can be when no one has taught you to be afraid of the details. Her daughter imagines something, talks through the idea, and lets the system assemble the structure. Adults tend to tense up at the mention of authentication, databases, or APIs. Meanwhile, tools at Civic let you copy a prompt from documentation and generate a complete integration using plain language. Anna wants readers to focus on clarity of intent because the underlying machinery no longer demands expertise.
“Think about all the cool things you want to do, then use your words and start explaining it.”
Many women describe the same hesitation when approaching AI. The word “tools” sparks anxiety. The idea of “coding” feels sealed off. The fear of messing up is louder than the curiosity to try something small. Anna has lived through male-dominated environments in finance and blockchain payments, and she sees AI as a moment of genuine opportunity. She believes people with strong reasoning, process instinct, and business logic have an advantage right now because the systems reward articulation more than technical recipes.
Readers who want a practical entry point can ground themselves in a simple sequence:
- Write down the thing you want to exist.
- Describe the actions a person would take inside it.
- Clarify the benefit or outcome it should create.
- Feed that description into an AI system that interprets natural language.
This is what Anna calls “vibe coding,” and she frames it as a creative exercise instead of a technical challenge. She believes consistent experimentation builds confidence faster than people expect and that momentum compounds in meaningful ways. She encourages immediate action, even if the idea feels tiny, because continued waiting creates inertia that becomes harder to break.
Key takeaway: Natural language AI expands access to building by turning clear thinking into functional output. You can start with a description of the idea, the steps, and the value, then let the system generate the structure. Small experiments create rapid confidence, and that confidence becomes the engine for bigger projects.
Back to the top ⬆️
Technical Literacy Requirements for Next Generation Operators

AI is pushing operators to reevaluate how work moves through a company, and Anna leans into this shift with a level of candor that cuts through the usual chatter. She sees the strongest operators as the ones who treat curiosity like oxygen. Curiosity pushes people to experiment, question, probe, and wander into the parts of their workflow that no one has touched in years. Imagination sits right beside it and gives them enough creative horsepower to design something better instead of repeating whatever the last system produced.
“Curiosity feeds imagination, and those two things shape the operators who will do well in this area.”
Anna argues that companies already have people who can do this work. Leaders often rush to hire for AI experience, yet many of the most promising builders already sit inside the organization. These employees understand the context, the bottlenecks, and the workflows that never get written down. She recommends starting with people who have mid level technical literacy because they can collaborate with engineers and still think like operators. That combination creates momentum quickly.
Her BI dashboard story shows what this mindset looks like in practice. She spent long nights trying to rebuild dashboards one to one using generative tools. The dashboards looked sharp, yet they broke, stalled, and refused to refresh correctly. The effort produced polished visuals with shaky reliability. She eventually stepped back and asked what she was trying to accomplish in the first place. She wanted people to receive the right data at the right moment, and she wanted that information to live where they already spent their time.
Slack became the obvious answer. Most teams live in it for large stretches of the day. She used Civic Nexus to orchestrate a workflow that runs real SQL queries and sends data straight into Slack channels. That workflow removed the need for a visual dashboard entirely. It also shortened the distance between someone needing information and someone receiving it. Teams did not open new tabs or load interfaces. They simply read the message and kept working.
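A minimal version of that dashboard-free pattern: run a real SQL query on a schedule and format the result as a Slack message. The sqlite table and the webhook call (shown as a comment) are placeholders, not Civic's actual setup.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE signups (day TEXT, n INT)")
db.executemany("INSERT INTO signups VALUES (?, ?)",
               [("mon", 14), ("tue", 21)])

def daily_digest(conn):
    """Run the query and render it as a plain-text Slack message."""
    rows = conn.execute("SELECT day, n FROM signups ORDER BY day").fetchall()
    lines = [f"{day}: {n} signups" for day, n in rows]
    return "Daily signups\n" + "\n".join(lines)

message = daily_digest(db)
# In production this text would be POSTed to a Slack incoming webhook, e.g.:
# requests.post(WEBHOOK_URL, json={"text": message})
print(message)
```

No chart, no tab, no refresh: the query result lands as text in the channel people are already reading.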
Anna encourages operators to take that same lens to every part of their work. She suggests you start by asking three questions.
- What outcome were you originally trying to achieve?
- Does the current artifact still deliver that result effectively?
- Can the workflow move into the environment where people already operate?
Those questions expose the parts of your process that only exist because they existed before. Once you see them clearly, you can redesign the workflow with tools that actually fit the way your team operates now.
Key takeaway: Curiosity and imagination help operators redesign workflows instead of inheriting outdated ones. Start by identifying the real outcome behind each process, then move the work into the places where your team already spends time. That way you can create lighter systems, cut unnecessary tools, and deliver information in a form people use immediately.
Back to the top ⬆️
Why Creative Practice Strengthens Operational Leadership

Balance in high-pressure roles depends on habits that refill your capacity, and Anna builds that capacity through a mix of community time and quiet creative work. She treats balance as something you construct piece by piece. She also treats it as a working system that needs constant maintenance, especially when the job requires nonstop context switching and decisions that carry weight for teams. Operations work drains attention quickly, and she protects the small rituals that reset her energy so she can stay sharp.
Wheel throwing plays a central role in how she resets. She described the experience with a kind of grounded enthusiasm that only comes from someone who has spent hours fighting with clay. She talked about sitting at the wheel with nothing but the spinning base, the weight of the clay, and whatever idea she brings into the room. She laughed about how “what you thought was going to be a bowl turns into a pinch pot,” and anyone who has tried pottery will understand exactly what she means. The clay slumps, collapses, buckles, or accepts the shape if you coax it correctly. Every attempt carries its own lesson, and every failure becomes a chance to refine your hands.
Her creative practice gives her a structured way to learn from repetition. She returns to the wheel because the work forces presence. She centers the clay, she adjusts pressure, she watches small changes ripple through the form, and she accepts that some pieces belong in the reclaim bucket. The rhythm feels meditative. It gives her a way to clear the mental clutter that builds during the week. She also gains a sense of camaraderie from sitting in a studio where everyone else is fighting the same tiny battles with their own clay. The space feels quiet, and the quiet turns restorative.
Anna carries those lessons into how she leads. She treats wheel throwing as a training ground that improves her patience under pressure. She treats the repeated collapse of a piece as an exercise in emotional regulation. She walks into work with a steadier posture because she spent the weekend practicing the art of recalibrating without frustration. Operational leaders often talk about resilience in vague terms. Anna builds resilience through a craft that forces her to iterate in real time and recover from small failures again and again.
Key takeaway: Creative rituals that involve hands-on repetition give operators a practical way to strengthen patience, focus, and emotional steadiness. If you work in a demanding role, choose a hobby that teaches you to reset quickly and iterate without stress. That way you can return to your team with a clearer mind and a deeper reservoir of resilience.
Back to the top ⬆️
Episode Recap

AI has rewritten the daily reality of operators, and Anna walks through that shift with the calm of someone who has rebuilt enough systems to trust her instincts. She starts with the old build versus buy mindset. Teams keep buying AI tools out of habit, then feel stuck when contracts drag behind the pace of new models. She watches that slowdown happen quietly. It drains momentum without anyone naming it.
Her energy lifts when she talks about modern building blocks. Integrations connect cleanly, workflow tools let operators build without engineering tickets, and LLMs remove the fear of complexity. She evaluates every decision by asking whether it strengthens a core skill or drags the team into constant tuning. That filter shapes what Civic builds and what it buys. She describes how isolated AI experiments weaken companies. People buy tools one by one and create tiny islands of automation. Shared patterns create far more lift. One team tries something, another adapts it, and the whole company moves faster because everyone is pulling from the same set of ingredients.
The story gets wild when she talks about replacing enterprise BI. She remembers slow dashboards, long queues, and a single engineer carrying the load. Orchestration flipped the experience. Every system flowed into one place. People asked questions in plain language and received answers in minutes instead of mornings. That speed created a different culture entirely.
The power made people nervous at first. Civic works with sensitive identity data, and broad access stirred real fear. They turned that fear into precise guardrails. Each system gained its own rules. Sensitive fields stayed sealed. The team built a safety structure that let them move quickly without losing control.
She wants operators to design systems, not babysit tools. She wants them to picture the outcome first, then decide how AI fits. She tells stories about her daughter building things by simply describing what she wants. That mindset, she argues, is more valuable than any technical credential.
Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Anna👇
✌️
—
Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)
