Apple • Spotify • Pocket Casts • YouTube • Overcast • RSS

What’s up everyone, today we have the pleasure of sitting down with Guta Tolmasquim, CEO at Purple Metrics.
Summary: Brand measurement often feels like a polite performance nobody fully believes, and Guta learned this firsthand moving from performance marketing spreadsheets to startup rebrands that showed clear sales bumps everyone could feel. She kept seeing blind spots, like a bank’s soccer sponsorship that quietly cut churn or old LinkedIn pages driving conversions no one tracked. When she built Purple Metrics, she refused to pretend algorithms could explain everything, designing tools that encourage gradual shifts over sudden upheaval. She watched CMOs massage attribution settings to fit their instincts and knew real progress demanded something braver: smaller experiments, simpler language, and the courage to say, “We tried, we learned,” even when results stung. Her TikTok videos in Portuguese became proof that brand work can pay off fast if you track it honestly.
In this Episode…
- How Brand Measurement Connects to Revenue
- How to Convince CFOs to Fund Brand Marketing
- How Attribution Models Handle Channel Composition and Forecast Errors
- How to Build Trust in Marketing Measurement Solutions
- Creating Attribution Algorithms That Respect Brand Complexity
- How Algorithm Blindness Limits Predictive Marketing
- When Marketing Data Refuses to Flatter Your Strategy
Recommended Martech Tools 🛠️
We only partner with products that are chosen and vetted by us. If you’re interested in partnering, reach out here.
🦸 RevenueHero: Automates lead qualification, routing, and scheduling to connect prospects with the right rep faster and easier, without the back-and-forth.
📧 MoEngage: Customer engagement platform that executes cross-channel campaigns and automates personalized experiences based on behavior.
🎨 Knak: No-code email and landing page creator to build on-brand assets with an editor that anyone can use.
About Guta

Guta is the founder and CEO of Purple Metrics, a São Paulo-based company building technology that brings marketing attribution and brand measurement under one roof. Before launching Purple Metrics, she founded and led Brand Gym, a branding agency serving startups and tech companies, and spent over a decade as a brand strategist for major clients like Coca-Cola, Pernod Ricard, and L’Oréal.
Guta has also taught brand research and experience at institutions such as Istituto Europeo di Design and Miami Ad School and was a teaching assistant for Scott Galloway’s brand strategy sprint at Section4. Her work blends rigorous strategy, hands-on execution, and a belief that branding deserves the same measurement discipline as performance marketing.
How Brand Measurement Connects to Revenue
Brand measurement drifted away from commercial reality when marketers decided to chase every click and impression. Guta traced this pattern back to the 1970s when companies decided to separate branding and sales into distinct functions. Before that split, teams treated branding as a sales lever that directly supported revenue. The division created two camps that rarely spoke the same language. One camp focused on lavish creative campaigns, and the other became fixated on dashboards filled with shallow metrics.
Guta started her career in performance marketing because she valued seeing every dollar accounted for. She described those years as productive but ultimately unsatisfying. She moved to big enterprises and spent nearly a decade trying to make brand lift reports feel credible in boardrooms. She eventually turned her focus to startups and noticed a clearer path. Startups often have budgets that force prioritization. They pick one initiative, implement it, and measure its direct impact on revenue without dozens of overlapping campaigns.
“When you only have money to do one thing, it becomes obvious what’s working,” Guta explained. “You almost get this A/B test without even planning for it.”
That clarity shaped her view of brand measurement. She learned that disciplined isolation of variables makes results easier to trust. When a startup rebranded, sales moved in a way that confirmed the decision. The data was hard to ignore. Guta saw purchase volumes increase after brand updates, and she knew these signals were stronger than any generic awareness metric. The companies she worked with never relied on sentiment scores alone because they tracked actual transactions.
Guta later built her own product to modernize brand research with a sharper focus on financial outcomes. She designed the system to map brand activities to revenue signals so marketing could prove its impact without resorting to vague reports. The product found traction because it respected the mindset of finance leaders and offered direct evidence that branding drives growth. Guta believed this connection was essential for any team that wants to secure resources and build trust across departments.
Key takeaway: Brand measurement works best when you focus on one clear change at a time and track its impact on revenue without distractions. You can earn credibility with your finance partners by showing how brand decisions move purchase behavior in measurable ways. When you build discipline into measurement and align it with actual sales, you transform branding from a creative exercise into a proven growth lever.
Back to the top ⬆️
Examples Where Brand Investments Shifted Real Business Outcomes

Brand investments often get treated as trophies that decorate a budget presentation. Guta shared a story that showed how sponsorships can drive specific business results when you track them properly. A Brazilian bank decided to sponsor a soccer championship. On the surface, the campaign looked like a glossy PR move. When Guta’s team measured what they called “mindset metrics,” they found that soccer fans reported higher loyalty toward the bank. The data set off a chain reaction that forced everyone involved to reconsider how they viewed sponsorships.
The bank pulled internal reports and discovered a clear pattern. Fans who followed the soccer sponsorship churned at much lower rates than other customers. Guta said the marketing team realized they were sitting on a revenue engine they never fully understood. They began to see sponsorship as a serious retention tool rather than a vanity spend. That shift did not happen automatically. Someone had to ask whether the big brand push was connected to any measurable outcomes, and then look carefully for the link between sentiment and behavior.
Guta described another client who rebranded their product suite under one name. They planned to delete the old LinkedIn pages that showed the previous brand identities. The team assumed nobody cared about those pages because LinkedIn conversions looked low in standard reports. Guta’s data proved otherwise. Those profiles accounted for more than 10% of conversions. Even though LinkedIn often buries links and limits reach, buyers visited those profiles before searching on Google and converting later.
“Organic is a myth. It’s just conversions you forgot to measure.”
Guta said this with the calm certainty of someone who has studied enough attribution to see where the gaps live. She explained that once you recognize how long it takes for a sponsorship impression to spark a branded search or a sale, you change how you plan. You stop guessing about campaign timing. You start working backward from the conversion window. If you expect a surge in July, you begin your campaigns in May so your budget has time to mature into real conversions instead of wasted impressions.
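The backward-planning logic described above can be sketched in a few lines. This is a hypothetical illustration, not Purple Metrics' actual method; the 45-day lag and two-week buffer are assumed numbers for the example.

```python
from datetime import date, timedelta

def campaign_start(target_surge: date, median_lag_days: int, buffer_days: int = 14) -> date:
    """Work backward from the expected conversion window: launch early
    enough for impressions to mature into conversions by the surge date."""
    return target_surge - timedelta(days=median_lag_days + buffer_days)

# If conversions typically lag first exposure by ~45 days, a July 1 surge
# implies launching in early May.
print(campaign_start(date(2024, 7, 1), median_lag_days=45))  # 2024-05-03
```

The point is the direction of the calculation: you start from the conversion window and subtract the measured lag, rather than launching and hoping the timing works out.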
Key takeaway: Map the path between your brand investments and your conversions with concrete data instead of assumptions. Use mindset metrics to identify early loyalty signals, then confirm whether those signals correlate with retention and branded search. When you see exactly how long each channel takes to drive revenue, you can plan campaigns months in advance and protect your budget with evidence that proves your strategy is working.
Back to the top ⬆️
The Tangible Outcomes of Brand: Purchase Intent and Memory Structures
Branding often carries a reputation as a soft layer of sentiment layered on top of performance campaigns, but Guta shares that it operates through a more rigorous mechanism than most teams realize. Branding creates memory structures that store signals in a person’s mind. When customers enter the market ready to buy, they retrieve those signals almost instantly. Their brains pull up familiar visuals, a sense of trust, or a specific promise that speeds up the choice. Guta has seen this happen repeatedly when people move straight from awareness to purchase without even visiting the company’s website again.
Guta describes how many marketing teams get stuck in a single-track mindset. They keep trying to hammer home immediate behaviors without any effort to create longer-term recall. She shares that brands can think about their work in two tracks running side by side:
- One track plants attributes in memory so customers can recall the brand later.
- The other track activates specific behaviors like trying, subscribing, or purchasing.
When companies only focus on activation, they may end up with viral content that does not translate into any buying behavior. Guta has watched teams measure short-term engagement while ignoring whether the campaign left any meaningful residue in people’s minds. That gap leads to wasted budgets and confused leaders.
“You can have the yellow line and green line running in the same tracks,” Guta explains. “You can take TikTok and run two types of communications on it. And it’s actually very effective, as we’ve seen in data from the big tech platforms.”
Guta points to big platforms like TikTok and YouTube, where brands combine subtle branding reminders with direct calls to action in the same placement. She has seen companies stack performance and memory-building in a single channel and lift results by pairing them. This method does not require a big budget or a dedicated brand team. Any marketer willing to blend memory and action can see improvements in conversion and recall.
Key takeaway: You can create faster buying decisions by combining memory structures with behavior prompts in your marketing. Start planning campaigns with both goals in mind. Use channels like TikTok and YouTube to plant reminders of who you are and trigger action at the same time. When your audience feels familiar with your brand, they will respond to your offers more quickly and with less hesitation.
Back to the top ⬆️
How to Convince CFOs to Fund Brand Marketing
It is interesting to consider how marketers keep pitching brand investment with vague words about purpose while the CFO wonders if they have ever seen a budget spreadsheet. Guta has seen these conversations fall apart because marketers forget that finance teams speak a different language. When the CEO demands proof that impressions and blog views drive revenue, brand leaders often respond with stories that feel disconnected from basic economics.
Guta learned this lesson the hard way while building her first company in Brazil. She and her co-founders had no investors and no cushion. Every dollar paid for actual growth. She noticed that performance marketing spend always climbed as they scaled. The most obvious buyers eventually dried up. Each new campaign cost more to convince people who were further from the brand. She started thinking about brand spending as a necessary lever to balance rising acquisition costs, rather than a side project for the creative team.
She prefers to explain the dynamic in clear financial terms. Performance marketing always shows up in the cost column. It becomes an operating expense attached to each sale. This cost keeps rising because platforms like Google and Meta eventually saturate your most receptive audience. At that point, you need to keep spending to re-engage the same buyers over and over. Brand investments can create an economic buffer. When more people recognize your company and trust it, you get a baseline of organic demand that keeps per-sale costs from ballooning.
“Marketing is a cost,” she said. “When you build a brand, you create an expense that acts more like an investment. It compounds over time instead of resetting to zero.”
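The economic buffer Guta describes can be illustrated with a toy blended-CAC calculation. This is a sketch with made-up numbers, not her model: the idea is simply that brand-driven organic demand spreads the same paid spend across more total conversions.

```python
def blended_cac(paid_spend: float, paid_conversions: int, organic_conversions: int) -> float:
    """Blended acquisition cost: organic conversions earned by brand
    recognition dilute the per-sale cost of the paid budget."""
    total = paid_conversions + organic_conversions
    return paid_spend / total

# Same $50k paid budget, with and without a brand-driven organic baseline:
no_brand = blended_cac(50_000, paid_conversions=500, organic_conversions=0)      # 100.0
with_brand = blended_cac(50_000, paid_conversions=500, organic_conversions=250)  # ~66.7
```

Framed this way, brand spend is not a separate line of "soft" activity; it is the variable that keeps the denominator growing while paid costs saturate.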
Guta suggests a small test budget instead of a massive campaign. She believes every CFO can agree to a 5 percent experiment, especially when it has a short feedback loop. She started her own test on TikTok after hearing marketers in her WhatsApp group complain about the lack of professional content in Portuguese. She recorded videos about branding in her spare time and watched them spark conversations. Those early videos eventually turned into a B2B channel with nearly 100,000 followers and a TikTok case study. She believes any team can start small by identifying what their customers already care about and creating useful content in the channels they use most often.
Guta also points out that you do not even need to label it branding. She has learned that the word alone can trigger skepticism because it feels soft and slow. She likes to describe these efforts as actions that lower the cost of acquisition and improve marketing efficiency. That way you can present a plan that feels concrete instead of theoretical. In her TikTok example, her second video went viral quickly.
Sales teams started reporting that leads were mentioning the content in early calls. She tracked the conversion lag through her product and saw that some brand actions created results within a week. She thinks this blend of qualitative signals and short-term performance helps everyone stay confident. The CFO sees traction. The sales team sees proof in their conversations. You avoid a tug-of-war over whose metrics matter most.
Key takeaway: If you want finance leaders to support brand investments, describe them as tools to lower acquisition costs and improve operational margins. Use clear, economic language and propose a small budget test with an immediate timeline. Start with a channel where your audience already spends time, create content that feels genuinely useful, and look for early signals in both data and sales conversations. That way you can build evidence that brand investments pay off without relying on vague concepts or year-long timelines.
Back to the top ⬆️
How Attribution Models Handle Channel Composition and Forecast Errors

Guta shared several examples that show how marketing attribution models often struggle to keep up with messy, unpredictable reality. She has seen brands invest heavily in influencers and expect predictable returns, only to discover that what works brilliantly for one company quietly collapses for another. That pattern repeats across tactics and confuses leaders who expect certainty. Guta described PR as a category where measurement falls apart completely, especially in Brazil, where massive audiences see your coverage but leave no measurable trail.
“I do not know how many people have seen my article that I published in the largest business magazine in Brazil,” Guta said. “I do not know what to add to the model in terms of data.”
That data gap forces marketing teams to rely on judgment instead of dashboards. Guta explained that many people crave a simple narrative about one channel driving all the revenue. She called this the fantasy of a “magic channel,” which feels reassuring because it sounds controllable. In practice, the outcome comes from composition. The mix of tactics, timing, and context matters more than any single lever. TikTok campaigns created another layer of unpredictability. Guta described cases where short videos sparked unexpected surges in blog traffic and free trials. The user journey looked nothing like a neat funnel. People would:
- Watch a video while waiting for the bus.
- Search the brand’s name later that night.
- Bookmark a pricing page on their work laptop.
- Start a free trial days later after seeing the logo again.
Forecasting models can fail in ways that feel spectacular. Guta recalled a client who bought prime TV spots in Brazil, where daily audiences rival the Super Bowl. The algorithm predicted a huge sales lift, and the team felt confident enough to spend without hesitation. When the campaign ended, sales barely moved. The warehouse stayed stocked, the dashboard stayed flat, and no one could explain it. Eventually, they found that the creative had shifted just enough to disconnect from what customers expected. The model never accounted for that change because it treated all TV campaigns as identical.
Guta also described a client whose sales spiked so quickly that the team thought the data was broken. When they called to figure it out, they learned that new legislation had forced a competitor to route customers to their product. The client could not publicly communicate the partnership, so the attribution model had no category for what happened. This experience forced everyone to admit that marketing analytics can only explain what you can measure, and there are always invisible forces in the mix.
Key takeaway: Attribution models can help you see patterns, but they will never pinpoint a magic channel that works in isolation. Focus on the composition of your marketing mix instead of searching for a silver bullet. When you see unexpected flatlines or sudden surges, start by asking three questions: Did your creative change? Did outside events shift demand? Are your customers using paths your model does not track? That way you can adapt faster, respect the complexity of real markets, and keep your budget aimed at combinations that actually drive results.
Back to the top ⬆️
How to Build Trust in Marketing Measurement Solutions
Marketing attribution has turned into an endless debate that leaves most teams feeling frustrated and skeptical. Guta shared how her company grew tired of the constant arguments about whether marketing mix modeling, incrementality, or multi-touch attribution deserved the crown. She said that her team decided to build something that could handle the messy reality of brand channels like sponsorships, PR, podcasts, and influencers. She described this moment as a blend of new data expectations, expanding technical capabilities, and a cultural shift in how marketers relate to measurement itself.
To create something credible, Guta relied on a very practical strategy. She publicly asked marketers to share the raw spreadsheets they used to measure brand effects. Many people ignored her request, but a few sent over detailed files that showed exactly how they tracked and tested campaigns. These files contained everything from AB test logic to incremental lift calculations. Guta explained that those examples became the raw material for her team’s modeling work. She shared that it took four years to move from theory to a product that felt trustworthy enough to launch.
“I talked to academics, I read over thirty papers, and I kept asking people to share their metrics,” she said. “You cannot build this alone.”
Guta explained that trust grows when clients can see your forecasts compared to the numbers they already monitor. She described how teams cross-check Purple Metrics predictions with their click data and MTA dashboards. When the projections match what clients observe in their own systems, credibility improves quickly. This consistency feels concrete and reduces the sense that measurement tools are just black boxes making guesses. She said this process requires constant iteration because every company has its own context and expectations.
She also shared her frustration with marketing’s tendency to ignore academic research. She described how other disciplines, like behavioral economics, constantly blend experiments with rigorous study. Marketing often stops at catchy LinkedIn posts and simplified blog content. Guta believes teams need to study the original research behind popular terms. She said that understanding the real methods behind attribution frameworks helps marketers explain results without resorting to vague claims or borrowed jargon.
Key takeaway: If you want your measurement models to earn trust, collect real examples from the practitioners who manage budgets and design campaigns. Share your methods openly, compare predictions to clients’ existing reports, and keep improving the model with fresh data. When you can show that your forecasts align with the metrics people already use and back up your methods with credible research, you create evidence that buyers can see and verify. That way you can replace uncertainty with clarity and make measurement feel like a shared process instead of a hidden calculation.
Back to the top ⬆️
Creating Attribution Algorithms That Respect Brand Complexity
It is interesting to consider how much energy marketers spend trying to prove that a LinkedIn post can trigger a chain reaction that leads straight to revenue. Guta has seen this obsession up close. She explained that Purple Metrics helps clients measure the ripple effect of LinkedIn reach during a specific week and see what follows next in the funnel. This does not mean they claim to connect every post to closed deals. Guta thinks a lot of vendors spin big stories about “full-funnel attribution,” but most of them are just patching together guesses. Modeling the delay between brand interactions and conversions in B2B feels like trying to predict the weather with a wind sock and a hunch.
Purple Metrics briefly toyed with a different idea. They considered combining declared data—like asking prospects “Where did you hear about us?”—with behavioral signals. In theory, that combo sounds clean. In reality, declared data can be a mess. People forget, gloss over details, or just pick something familiar. Guta described it plainly:
“People can’t run a business on declared answers.”
She saw early customers using that product, and it technically worked, but it never felt solid enough to build into something bigger.
Instead of forcing that path, Guta focused on the bigger friction points marketers face. She sees an entire parade of AI-powered tools that promise to solve everything at once: measurement, creative optimization, channel recommendations. Yet none of these tools can erase the chaos inside a marketing department when it comes time to reallocate budget. An algorithm might spit out a recommendation to shift dollars from Google to out-of-home billboards. That sounds logical in a spreadsheet, but in the real world, that means moving budgets between teams, getting sign-off from new vendors, and reworking campaign plans. It is a logistical circus.
To help marketers deal with that, Guta’s team built a feature called “baby steps.” The algorithm holds back the full recommendation and shows a partial adjustment instead. Marketers can shift some budget, watch what happens, and let the system learn before making bigger moves. Guta told a story about an early moment when she asked their lead data scientist whether he would actually follow the algorithm’s advice if he were a CMO. He looked her in the eye and said no. Not because the math was wrong but because it would cause too much upheaval too fast. That moment cemented her belief that algorithms must adapt to people, not the other way around.
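A damped reallocation like "baby steps" can be sketched as moving only a fraction of the way toward the model's target each cycle. This is an assumed implementation for illustration; the 20% step size and channel names are invented, not taken from Purple Metrics.

```python
def baby_step(current: dict[str, float], recommended: dict[str, float],
              step: float = 0.2) -> dict[str, float]:
    """Shift each channel only part of the way toward the recommended
    allocation, so the team can observe results before committing fully."""
    return {ch: current[ch] + step * (recommended[ch] - current[ch]) for ch in current}

current = {"google": 70_000.0, "ooh": 10_000.0}
recommended = {"google": 40_000.0, "ooh": 40_000.0}
print(baby_step(current, recommended))  # {'google': 64000.0, 'ooh': 16000.0}
```

Repeating the step as new results arrive converges on the recommendation gradually, which matches the organizational reality Guta describes: budgets move between teams and vendors, so small moves are the only ones that actually get executed.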
Key takeaway: You will never find a perfect system that connects every social impression to a sale in B2B. Guta’s experience shows that declared data is unreliable and behavioral models alone cannot account for how marketing teams operate in practice. If you want to build attribution that works, focus on incremental budget shifts, let the system learn over time, and design your tools to match how teams actually get work done. That way you can avoid the fantasy of instant clarity and start improving results step by step.
Back to the top ⬆️
How Algorithm Blindness Limits Predictive Marketing

It is interesting to consider how marketers crave an AI they can interrogate, like a colleague who always has receipts ready. You might imagine typing, “Explain why you think out-of-home ads deserve every penny,” and the algorithm would show you tidy charts, engagement stats, and a calm justification. Guta has spent years in the weeds of prediction models, and she has learned that reality rarely plays along with this fantasy.
When her team sees a forecast that looks off, the process kicks in with almost comic regularity. The customer success crew sends a panicked message to engineering. Engineering calls the data scientists. Someone inevitably mutters, “What the hell?” Then the data scientists sift through everything and respond with their equivalent of a shrug: the data is complete, nothing is broken, and the prediction stands.
“The algorithm is blind. It only knows what happened in the data. It doesn’t know what happened on Instagram,” Guta said.
She finds this blindness maddening. The model can see performance metrics like reach and conversions. It cannot grasp why a surge happened beyond the structured signals it ingests. Guta wants to design a more intelligent layer of context, so the machine can learn without depending on you to feed it gossip about political rants or local policy shifts. She is testing systems that:
- Blend quantitative results with descriptive reasoning
- Flag when reality diverges sharply from forecasts
- Help teams understand if the lift came from a genuine trend or just random noise
She compares prediction to using Google Maps. If you drive across town, the ETA feels precise. If you drive across the country, you pack an extra jacket and a little patience because you know something unexpected will happen. Marketing teams will need the same mindset. Predictions are not an ironclad promise. They are a starting point that requires your judgment when the road gets weird.
Key takeaway: Algorithm blindness is real. Your models will never fully understand the messy reasons behind a sudden spike or slump. Build workflows that combine clean data with human context, so you can spot when predictions fail to see the bigger picture. That way you can make smarter adjustments without chasing ghosts in the metrics.
Back to the top ⬆️
When Marketing Data Refuses to Flatter Your Strategy
It’s interesting to consider how multi-touch attribution tools end up as elaborate camouflage for shaky marketing strategies. Guta has seen entire companies buy these platforms to create a comforting illusion that decisions are airtight. She shared blunt stories of clients who would rather cancel their Purple Metrics contract than face a report showing their branding budget was a money pit. That moment when the data refuses to flatter you feels like standing under bright fluorescent lights with no place to hide.
Many leaders insist they want to be data-driven, but when the numbers come in, their instincts start clawing for the old story. Guta described companies where attribution modeling becomes an elaborate ritual. The CMO tweaks the settings, moves from U-shaped to W-shaped, then adjusts the decay rate. Every change gives a little more credit to their pet channels. She has watched this pattern so many times that she sounds almost bored describing it.
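The settings-tweaking ritual is easy to see in miniature. Below is a generic sketch of two common credit rules (a U-shaped position model and a linear one); the channel names are invented, and this is not any vendor's implementation. The same touchpoint path earns different credit under each rule, which is exactly why the rule gets "adjusted" until it flatters someone's pet channel.

```python
def u_shaped(touches: list[str]) -> dict[str, float]:
    """U-shaped (position-based) credit: 40% first touch, 40% last touch,
    the remaining 20% split evenly across the middle. Assumes >= 2 touches."""
    credit = {t: 0.0 for t in touches}
    credit[touches[0]] += 0.4
    credit[touches[-1]] += 0.4
    middle = touches[1:-1]
    for t in middle:
        credit[t] += 0.2 / len(middle)
    return credit

def linear(touches: list[str]) -> dict[str, float]:
    """Linear credit: every touch gets an equal share."""
    credit = {t: 0.0 for t in touches}
    for t in touches:
        credit[t] += 1 / len(touches)
    return credit

path = ["linkedin", "podcast", "google_ads"]
print(u_shaped(path))  # {'linkedin': 0.4, 'podcast': 0.2, 'google_ads': 0.4}
print(linear(path))    # each touch gets ~0.33 — same data, different story
```

Neither rule is "the truth"; both are conventions. The danger Guta points at is switching conventions after the fact to make the numbers agree with the plan.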
“Most CEOs will tell you they want data. But if you really want to learn scientifically, you need to be ready to fail sometimes,” she said.
In her view, companies can only grow if they agree on a few principles before the metrics ever hit the screen:
- Everyone commits that experiments sometimes fail.
- No one will get fired just because a hypothesis turns out to be wrong.
- Decisions will be made with the best data available in the moment.
- Future information will arrive, and you cannot punish your past self for not knowing it.
These ideas sound simple, but in reality, they threaten fragile egos. Guta recalled a client who faced the board after a disastrous brand test. Instead of deflecting, the team simply said, “We tried it, it failed, we learned, and we are moving on.” That moment showed a culture with real maturity.
Attribution tools can be a gift if your company has the stomach to look at results without twisting them into an apology. They can also become a justification engine, a polished excuse to never change anything that feels comfortable. Guta believes the difference is leadership willing to say out loud, “We made the best decision we could, and now we know better.”
Key takeaway: If you plan to use attribution software, set clear expectations upfront. Define what you will do when the data conflicts with your instincts. Agree to protect the people brave enough to test new ideas. That way you can build a culture where experimentation drives progress instead of fear.
Back to the top ⬆️
Building a Personal Happiness System Like a Product

Guta treats her happiness like a product she is shipping. She builds it in layers, starting with core modules and adding supporting features as life evolves. She stays vigilant for signals that something has slipped out of alignment. You can feel her designer mindset in every part of her routine. She explained that she always needs a few elements working in concert: physical activity, proximity to water, rich social connections, and the electricity of city life.
She grew up in Rio, a city that hums with salt air and constant movement. That early connection to the ocean stuck. She said, “I need to go back to the sea every once in a while. It resets me.” When her schedule drifts too far from those essentials, she starts to feel tension. For her, it shows up as fatigue and stress. Rather than ignoring those red flags, she treats them as data points. She digs into the cause and tweaks her environment. If she is running on empty, she checks whether she has spent enough time near water or carved out hours to read.
Her personal operating system has a few recurring features:
- Dedicated time with friends, even when deadlines are crushing
- Small travel plans that keep her looking forward
- A short, non-anxious list of books she wants to read
- Intellectual friction that sparks curiosity
She has no patience for long-term roadmaps that generate guilt. Instead, she maintains a living backlog of ideas. When she notices she is drained, she opens her “bug report,” figures out what broke, and gets back to her baseline. She avoids over-optimizing every hour, which is a trap so many founders and operators fall into. That mindset lets her stay nimble instead of buried under her own expectations.
Guta’s system works because she keeps it lightweight and personal. She does not borrow someone else’s formula. She pays attention to the metrics that actually matter to her well-being. Then she keeps refining, just like any product worth building.
Key takeaway: Treat your energy and happiness like a product in active development. Build your routine around the core elements that sustain you, watch for stress as a signal to recalibrate, and keep your plans flexible enough that they energize you rather than weigh you down. That way you can maintain momentum without losing yourself in the process.
Back to the top ⬆️
Episode Recap

Brand measurement often feels like a polite performance, with marketers lining up slides that promise growth while everyone quietly wonders where the proof lives. Guta spent years inside that tension. She started out in performance marketing because she wanted to see every dollar and every decimal. Those campaigns felt productive, but something nagged at her. Even the cleanest numbers ignored the real reasons people chose one brand over another.
Big enterprises only deepened that frustration. Every conversation about brand lift turned into a careful dance of assumptions no one fully trusted. She walked away and started over. Startups gave her something clearer. When you can only afford one initiative, you learn fast if it works. Guta helped one small company rebrand, and she watched purchase volumes climb in a way nobody could argue with. There was no fancy narrative. The sales line moved, and everyone felt it.
She kept running into the same blind spots. A bank sponsored a soccer championship expecting feel-good PR, but loyal customers ended up staying longer. Another team deleted old LinkedIn pages they thought were useless and lost a tenth of their conversions overnight. Guta said it felt like the universe reminding her that marketers love to pretend clean charts can explain everything.
When she built Purple Metrics, she decided not to sell the fantasy that an algorithm always knows where revenue comes from. Her data scientist once told her he wouldn’t follow the model’s recommendations himself because it would cause too much chaos too fast. That stuck with her. She made sure the product encouraged gradual shifts instead of big leaps no one could stomach.
She has watched teams spend months trying to connect every LinkedIn impression to a sale, as if a single spreadsheet could replace judgment. She has seen CMOs tweak attribution settings until the numbers looked comfortable. She believes progress comes when leaders accept that some tests will flop and they will have to say, “We tried, we learned, and we’re moving on.”
Guta prefers plain language when she talks to finance teams. She calls branding the work that lowers acquisition costs. That reframe makes it easier for everyone to get behind. She once filmed a few TikTok videos in Portuguese because no one else was doing it. Those clips grew into a channel with almost 100,000 followers. Some leads mentioned the videos on their first sales call. When she traced the results, she saw proof within a week.
Guta’s story feels like a reminder that no perfect model will ever save you from the messy reality of how people buy. She believes the real progress comes from steady experiments, smaller bets, and the courage to measure what happens without flinching. If you’re tired of marketing promises that sound too clean, her perspective will feel like fresh air.
Listen to the full episode ⬇️ or Back to the top ⬆️

Follow Guta 👇
✌️
—
Intro music by Wowa via Unminus
Cover art created with Midjourney (check out how)