89: The viability of warehouse-native martech: Insights from 10 industry experts

What’s up folks, today we’ll be joined by various martech pros sharing their opinions on the topic of warehouse-native martech.

The marketing technology architecture landscape has been undergoing what you might call a seismic shift, and many don't even realize it. At the center of this transformation is a remarkable development: warehouse-native marketing technology, a breakthrough that promises to reshape the entire industry for the better but comes with plenty of questions and skepticism.

Here’s today’s main takeaway: As we navigate the potential transformation to warehouse-native martech, the single most critical action is to prioritize achieving high-quality, well-structured data; it’s the golden key to unlocking the full potential of these emerging tools and strategies.

This episode explores the various facets of warehouse-native martech and its viability, pulling in insights from industry experts, piecing together a comprehensive view of this groundbreaking shift.

Jump to a Section 👇

What is warehouse-native martech, aka connected apps?

In December 2021, Snowflake introduced a new term: 'connected applications'. Unlike traditional managed SaaS applications, connected applications process data in the customer's data warehouse, giving customers control over their data. Benefits include:

  • preventing data silos, 
  • removing API integration backlog, 
  • enabling custom analytics, 
  • upholding data governance policies, 
  • improving SaaS performance, 
  • and facilitating actionable reporting

In other words, instead of making a copy of your data warehouse (DWH) like most CDPs and MAPs do today, everything lives on top of the DWH, and you don't have to pay to duplicate your database.
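To make the contrast concrete, here's a minimal Python sketch of the two patterns. The table, columns, and segment definition are hypothetical, and stdlib sqlite3 stands in for a real cloud warehouse: a traditional tool syncs a copy of the users table into its own store and segments against that snapshot, while a warehouse-native tool pushes the segment definition down as SQL at activation time.

```python
import sqlite3

# Stand-in for the cloud warehouse (hypothetical schema).
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE users (id INTEGER, plan TEXT, last_seen_days INTEGER)")
warehouse.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [(1, "free", 2), (2, "pro", 40), (3, "pro", 1)],
)

# Traditional SaaS pattern: copy the whole table into the vendor's own store,
# then build segments against the (soon stale) copy.
vendor_copy = list(warehouse.execute("SELECT id, plan, last_seen_days FROM users"))
copied_segment = [row[0] for row in vendor_copy if row[1] == "pro"]

# Warehouse-native pattern: no copy; the segment definition is pushed down as
# SQL and read straight from the warehouse at send time.
live_segment = [
    row[0]
    for row in warehouse.execute(
        "SELECT id FROM users WHERE plan = 'pro' AND last_seen_days < 7"
    )
]

print(copied_segment)  # all pro users, from the copied snapshot
print(live_segment)    # only the recently active pro user, straight from the DWH
```

The difference shows up the moment the data or the segment logic changes: the copied snapshot has to be re-synced, while the warehouse-native query is always evaluated against the source of truth.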

Some companies solving this for product analytics are Rakam, Indicative, and Kubit.

Census and Hightouch also fit this mold: they're warehouse-native activation tools that sit on top of a DWH and don't store any of your data.

Some messaging companies solving this use case natively on the cloud warehouse are Vero, MessageGears, and Castled.

Revolutionized Data Handling in Customer Engagement Platforms

India Waters currently leads growth and technology partnerships at MessageGears. She explains how her company’s differentiation comes from its unique handling of customer data.

Unlike competitors such as Salesforce Marketing Cloud or Oracle, which require a copy of customer data to live within their tool, MessageGears directly taps into modern data warehouses like Snowflake or Google BigQuery. This unique approach is born out of the inefficiency and high costs of older platforms that necessitate copying and moving data into multiple marketing tools.

India vividly portrayed the challenge this old approach creates, imagining the confusion and resource consumption of working with out-of-date data across numerous tools. By eliminating the need to keep a copy of customer data, MessageGears solves this problem for big companies, cutting waste and creating a more coherent understanding of the customer's journey. Clients like OpenTable, T-Mobile, and Party City can now work with the most up-to-date data, using it as a source of truth for better analytics and customer experiences.

Reflecting on how MessageGears had to become thought leaders in this approach, India acknowledged that it took time for the industry to understand and accept this innovative method. But as awareness has grown, the approach is now seen as a logical and necessary step in the evolution of customer data handling.

Hear directly from India below 👇

Takeaway: MessageGears’ refusal to follow the traditional path of copying customer data into its tools is a game-changer in the world of customer engagement platforms. By plugging directly into modern data warehouses, they’ve solved a problem that has plagued big companies, enabling them to use the most up-to-date data for insights and experiences. The industry has evolved, and MessageGears is leading the way with an approach that makes sense for today’s data-driven world.

Rethinking User Database Size Pricing in Martech

While MessageGears has been around since 2011, more and more startups are waking up to the idea of directly accessing brands' first-party data instead of relying on cloud data syncs. We also chatted with Arun Thulasidharan, CEO & co-founder at Castled.io, a warehouse-native customer engagement platform that sits directly on top of cloud data warehouses.

Arun and his team set out to disrupt traditional martech and fix one of its fundamental problems: the significant disconnect between the number of users a company pays to store in its database and the actual value derived from them.

He emphasized that having millions of users doesn’t necessarily translate to substantial revenue or value, especially for smaller B2C companies. He critically questioned whether traditional pricing models based on user database size were really delivering value for businesses. Arun then went on to explain how Castled.io approaches this differently, choosing a more logical and direct connection between cost and benefit.

Unlike other martech firms that charge based on customer numbers, Castled.io bases its pricing on the number of team members using the tool. Arun argues that this is a more accurate reflection of the value a company gets from the service, as more marketers using the tool likely means a more substantial investment in the platform.

He also touched on how they handle data look-back periods and the importance of data retention for retargeting and reengagement. With traditional systems, data engineers might have to wait for months, while with Castled.io, the data is readily available in the data warehouse. The integration of data warehousing and marketing tools, according to Arun, is the future of martech pricing – something he sees as a “no-brainer.”

Hear directly from Arun below 👇

Takeaway: Traditional martech pricing models have significant inconsistencies, often failing to align the number of customers with the real value obtained. Castled.io challenges this paradigm by pricing their services based on the number of team members using the tool and ensuring that data retention aligns with business needs. This more logical and direct approach may be an essential step forward for the martech industry, promoting fairness and value over mere numbers.

Aligning Pricing Metrics with Customer Needs

MessageGears and Castled.io’s groundbreaking approach in martech isn’t merely an isolated occurrence. It’s part of a broader trend that calls for a deliberate rethinking of pricing metrics within the industry. This movement emphasizes the alignment of price with real value and accessibility.

It’s worth highlighting the intricacies of selecting the right pricing metric. We spoke with Dan Balcauski, a SaaS pricing expert who highlights that it’s not just about being innovative; it’s about making choices that truly resonate with customer needs and market demands.

Dan delved into the complexities of pricing metrics and how they can be used to either aid or hinder competitive differentiation. Though he admitted that his knowledge of the specific market wasn’t extensive, he was able to break down the various facets of pricing strategy, sharing an intriguing case study to illustrate his point.

Dan emphasized the importance of choosing a pricing metric that aligns with customers’ business requirements and the perceived value of the product. This metric, according to him, must balance fairness, predictability, and operational ease for both the buyer and the seller.

He highlighted the example of Rolls Royce’s innovative approach to jet engine pricing, where they chose to charge “power by the hour” instead of selling the engines outright. This usage-based model aligned the interests of the buyer and seller, streamlining many ancillary aspects such as maintenance and replacement.

However, Dan also warned against unnecessarily complex or “cute” pricing metrics. He stated that success in implementing innovative pricing strategies likely comes easier to industry leaders or highly innovative products. Trying to be different just for the sake of it can lead to confusion and additional costs in educating the market.

Hear from Dan directly below 👇

Takeaway: In the world of martech, warehouse-native pricing changes are a nuanced subject. As Dan’s analysis reveals, the successful implementation of a pricing strategy requires a careful balance of alignment with customer needs, perceived value, predictability, and operational efficiency. Innovative approaches can bring success, but they must be implemented thoughtfully and with a true understanding of the market. Being different for its own sake may lead to complexity without adding real value.

The Undeniable Movement Towards a Universal Data Layer

Before getting into the weeds of the viability of this shift, let’s get the lowdown from one of the most respected voices in martech.

You guessed it: we're talking about Scott Brinker, creator of the Martech Landscape, VP of Platform Ecosystem at HubSpot, and the acclaimed force behind chiefmartec.com, widely hailed as the martech world's ultimate wellspring of knowledge and insight.

Scott sees a clear trend in martech towards consolidating data across the company into the warehouse, and making that data accessible across various applications. He doesn’t hesitate to point out that this is a bit different from being truly warehouse-native, which raises questions about the architecture layers and the way data interacts operationally with the warehouse.

On the exciting side, Scott highlights the robust experimentation in the field. However, he’s keen to identify the challenges too, such as the need to rationalize data that is inherently messy when consolidated into data lakes and warehouses. The sheer volume and complexity of data require layers of interpretation and structuring, something that individual martech products often provide.

Scott also highlights the performance dimension, noting that while technological advances have improved the read/write performance of data in a warehouse, there are still cases where millisecond differences in performance can have critical impacts on user experience or search engine rankings. He sees the need for operational databases fine-tuned to specific engagements as a continued necessity in the martech architecture.

In the end, Scott recognizes the undeniable movement towards a universal data layer where martech companies are being driven to contribute and leverage data from the warehouse. However, he doesn’t see it as something that will entirely replace all localized and context-specific databases in the immediate future.

Hear from Scott directly 👇

Takeaway: Scott provides a balanced and insightful perspective on the warehouse-native approach in martech, seeing it as an interesting and evolving aspect but not a complete solution. He emphasizes that while consolidation and accessibility of data are crucial, the complex nature of data, performance considerations, and the need for specific databases mean that the warehouse-native concept is still more of a developing direction rather than an established end point in the martech landscape.

The Necessity of Cloning Data in Warehouse Native Martech

As we talk about shifts in the data management landscape, Pini Yakuel, CEO of Optimove (a marketing automation platform), provides a practical example of these changes, discussing how their CDP component is built on top of Snowflake.

Pini dived into the subject of warehouse-native martech with a keen eye for architectural details. He spoke candidly about the convenience of copying data from one place to another and the efficiency of Snowflake, allowing for a seamless client experience. A clear advocate for this technology, Pini mentioned how companies can leverage Snowflake to have data easily accessible without having to move it around. Snowflake-to-Snowflake data mirroring, for instance, eliminates the need for ETL, providing a significant advantage.

However, Pini didn’t shy away from the challenges either. The same technology that enables quick data processing doesn’t necessarily translate into fast response times for user experience. For instance, Snowflake, being an analytical data warehouse in the cloud, may not respond quickly enough for UX requirements.

Pini concluded with an optimistic note about the future, mentioning that Snowflake and BigQuery are emerging as significant players. But, he also acknowledged that the need to have copies of data close for certain operations still exists, leaving room for technology to evolve further.

Hear directly from Pini 👇

Takeaway: While warehouse native martech, especially through platforms like Snowflake, offers incredible convenience and has been a game-changer, it’s essential to recognize the need for closer data positioning in some cases. The current landscape is promising, but the future might hold even better ways to copy and utilize data without hindrance.

The Misguided Myth of Zero Data Copy

Whether or not it's technically possible, not everyone is on board with the notion of zero-copy data: using martech without ever needing to copy data into any of your tools. Enter Michael Katz, CEO and co-founder at mParticle, the leading independent packaged customer data platform.

When asked about the concept of zero data copy and why he considers it misguided, Michael passionately dove into the core of the argument. He began by arguing that the premise behind it, that copying data creates inefficiency, particularly in terms of access cost, is fundamentally flawed. In his view, the cost of storage is negligible compared to the cost of computation, a fact well understood in the industry, so creating duplicate copies of data doesn't significantly change the overall cost structure.

Michael then went on to emphasize that it’s been demonstrated time and time again that replicating data brings tremendous efficiency for various uses and applications. He further expanded on his argument by noting that the belief in zero data copy not only misleads but also directs individuals and companies down a path of solving non-existent problems. He remarked that the focus should be on minimizing costs to maximize resources for growth, not chasing an illusion of efficiency.

Adding another layer to his argument, Michael revealed the dirty secret behind many reverse ETL companies, citing a persistent churn problem. These companies, he pointed out, offer what appears to be an “easy button” solution, but when the button is pressed, things turn out to be far from easy.

Hear directly from MK 👇

Takeaway: Michael’s debunking of the zero data copy concept is a compelling reminder that chasing illusions can lead to more harm than good. The true focus should be on understanding the problem at hand and allocating resources wisely, rather than getting lost in the allure of simplified solutions that often prove ineffective. This insight urges us to be more discerning in evaluating the effectiveness and underlying motives of the tools and strategies we adopt in the world of martech.

📫 Never miss an episode or key takeaway 💡

By subscribing to our newsletter we’ll only send you an email when we drop a new episode, usually on Tuesday mornings ☕️ and we’ll give you a summary and key takeaways.


Solving the Puzzle of Compute Charges in the Cloud Data Warehouse

Many industry experts agree with Michael that one of the biggest hurdles for warehouse-native martech is compute charges: creating load on your DWH/Snowflake can add up quickly. Here's what Arun from Castled.io had to say about his solution for this compute challenge.

When asked about how to tackle the prevalent problem of compute charges in existing cloud data warehouses, Arun clearly outlined the importance of addressing this issue. In his view, it’s more than just a concern about expenses; it’s an integral part of deciding to have a data warehouse, which still holds great value to many.

Arun dove into the core of the problem, explaining that once a data warehouse has been implemented, businesses often aim to enable not only data analytics but also marketing, where significant investments are made. This leads to one of the major drivers of compute charges: hiring analytics engineers in bulk, many of whom lack the experience to write optimal SQL queries.

Arun’s perspective on the solution is straightforward and rooted in his experience. For him, once the data is collected in the data warehouse, the most scalable model involves using warehouse-native applications like Castled.io. These applications reduce the charges by running all kinds of load tests to ensure minimal and optimal expenditure. Arun emphasized the care taken to ensure that even a minor filter change doesn’t lead to unnecessary extra charges.

Hear directly from Arun 👇

Takeaway: Arun’s insights highlight a common yet overlooked aspect of cloud data warehouse management: compute charges. By understanding the root causes and adopting warehouse-native applications, companies can not only minimize these charges but also maximize the value and efficiency of their data warehouses. His approach illustrates a thoughtful and scalable way to ensure that technology investments align with financial considerations.
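One simple guardrail in the spirit of what Arun describes, a sketch only, with a hypothetical schema and stdlib sqlite3 standing in for the warehouse, is to normalize and cache segment queries so that re-running an unchanged filter never triggers a second billable scan:

```python
import sqlite3

# Hypothetical warehouse with a single events table.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE events (user_id INTEGER, event TEXT)")
wh.executemany("INSERT INTO events VALUES (?, ?)",
               [(1, "open"), (2, "click"), (2, "open"), (3, "click")])

query_runs = 0       # proxy for billable compute
_result_cache = {}   # (normalized SQL, params) -> cached result

def count_segment(sql, params):
    """Run a segment-count query, but reuse cached results so an unchanged
    filter never triggers a second (billable) warehouse scan."""
    global query_runs
    key = (" ".join(sql.split()).lower(), params)  # normalize whitespace/case
    if key not in _result_cache:
        query_runs += 1
        _result_cache[key] = wh.execute(sql, params).fetchone()[0]
    return _result_cache[key]

q = "SELECT COUNT(DISTINCT user_id) FROM events WHERE event = ?"
count_segment(q, ("click",))   # first run: hits the warehouse
count_segment(q, ("click",))   # identical filter: served from cache
count_segment(q, ("open",))    # changed filter: one more warehouse scan
print(query_runs)  # 2, not 3
```

Real warehouse-native vendors presumably go much further (query rewriting, incremental materialization, load testing), but the principle is the same: only a genuinely changed filter should cost you a scan.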

Is Warehouse Martech More Beneficial for Cloud Providers Than Customers?

Despite hearing this solution on compute charges and the benefits of zero-copy data, Michael Katz, CEO of mParticle, held firm on his stance, coming back to the value delivered to customers.

Michael began by laying out a common structure of the marketing tech stack, mentioning different components such as analytics, customer engagement platforms, experimentation tools, and customer support services like Zendesk. In this context, he highlighted that between five and ten different categories could be observed across most martech stacks.

Michael then questioned the real beneficiaries of building everything natively on a Cloud Data Warehouse. He argued that such an approach seems to favor the data warehouse provider rather than delivering genuine value to the customer. Moreover, he expressed skepticism about the notion that having all vendors run their own compute cycles on the data warehouse would necessarily lead to cost savings. He pointed out that while theoretically possible, no one has conducted a side-by-side comparison to prove that assumption.

Further, Michael emphasized that whether dealing with providers like Snowflake or mParticle, everyone is in essence reselling cloud compute, either with a markup or bundled into services. The assumption of inherent cost savings, he asserted, doesn’t stand up to scrutiny, and the claim that avoiding the creation of multiple copies of data will automatically save money is not necessarily true.

Hear MK‘s take directly 👇

Takeaway: Michael’s examination of the warehouse native approach reveals that what might seem like a cost-saving strategy on the surface might not deliver real benefits to the customers. This insight warns against blindly accepting theoretical advantages without concrete evidence, encouraging a more nuanced understanding of how value is truly generated in the martech world.

Why Zero Data Copy in Martech is Not a Black-and-White Issue

Michael’s scrutiny of the warehouse native approach invites a broader conversation about adaptability and tailored solutions in martech. It challenges the standard view, paving the way for alternative methods that don’t cling to conventional wisdom. Recognizing that one approach doesn’t fit every scenario, some companies are proposing a hybrid approach and shaping the conversation around customization and efficiency.

In this camp is Tamara Gruzbarg, VP Customer Strategy at ActionIQ – an enterprise Customer Data Platform.

When asked about the widespread arguments dismissing zero data copy as a flawed concept, Tamara offered a thoughtful perspective. She didn’t outright reject the notion, but rather emphasized the importance of not viewing it in black and white terms. In her view, the concept of zero data copy isn’t necessarily something that will work for everyone in the immediate future, but that doesn’t mean the industry shouldn’t be moving in this direction.

Tamara continued to explain that once sufficient work has been done to create a robust data environment within a client’s internal structure, there’s a real opportunity to leverage that investment. It’s about using the data from its original location to minimize costs, rather than insisting on either 0% or 100% adherence to a zero copy or fully composable CDP model.

Speaking from her experience at ActionIQ, she emphasized the value of creating a “future-proof” environment where different components from the best vendors or internal solutions can be utilized. This approach allows for adaptability, not locking into a rigid framework, and instead opting for a path that works for the individual needs of a company, with the capacity to optimize over time.

Hear directly from Tamara 👇

Takeaway: Tamara’s insight sheds light on the nuanced reality of the zero data copy debate. Rather than clinging to absolutes, she encourages a more flexible approach that aligns with the individual needs and future directions of a company. Her focus on creating a future-proof environment underscores the importance of adaptability and optimization in the ever-changing martech landscape, without falling prey to rigid ideologies.

Warehouse Native Martech Impacting Enterprise More Than SMBs

The push for flexibility and optimization in data handling hints at a wider trend affecting large enterprises. This focus on warehouse native solutions aligns more closely with the complex needs of large organizations than with SMBs, setting the stage for a broader industry shift that some experts continue to explore.

One of these experts is Wyatt Bales, a senior exec at BluprintX, an enterprise-focused martech and growth agency.

When asked about the potential future of martech being warehouse native, Wyatt presented a comprehensive view on the subject. He emphasized that this path is indeed the way forward for enterprises, defining these as organizations with 10,000 employees or more. Wyatt agreed that traditional tools, such as duplicated databases and interfaces for marketing automation, are being replaced by more sophisticated and flexible solutions.

He shared insights from current projects, where customers are rethinking their approach and moving towards more direct communication through APIs and delivery services. This transition, according to Wyatt, is not only efficient but also resonates with the changing needs of enterprise clients.

However, he didn’t see this trend affecting the Small and Medium Business (SMB) sector in the same way. The traditional path of migrating from simpler tools like MailChimp to more advanced platforms like Marketo still holds relevance for SMBs. Wyatt predicts an emerging trend where SMB markets might see the integration of work management tools, such as Asana, with marketing automation platforms. This would provide an end-to-end solution that meets the specific needs of smaller businesses.

Wyatt also highlighted the importance of adaptability in skillsets, particularly within the context of warehouse-native solutions. Emphasizing the value of SQL knowledge, he discussed how organizational decisions and structures are changing, affecting even hiring and staff positioning. The future, according to Wyatt, is not only about mastering specific tools but also having the ability to talk about cloud storage, integrations, and other technological advancements. He stressed the importance of versatility in skillsets, particularly in a landscape that is rapidly shifting towards warehouse native solutions.

Hear directly from Wyatt 👇

Takeaway: The future of martech is clearly leaning towards warehouse native solutions for enterprises, reflecting a desire for flexibility, efficiency, and direct control. However, this shift is not universal, and Wyatt points out that SMBs will continue to have different needs and paths. The landscape is evolving, and success will depend on adaptability, both in technology and in the skillsets of those navigating this complex ecosystem.

API Connections Versus Warehouse Native Approach

This argument, that the shift is more impactful for enterprise, is echoed by MessageGears when talking about the difference between API integrations and the warehouse-native approach. Here's India Waters from MessageGears again.

India described the contrasting experiences of these two models by focusing on the real-world implications. She broke down the seemingly straightforward task of setting up individual APIs for real-time data access, especially in small to medium-sized businesses.

The problem, India explained, lies in the constantly changing environment. Whether it’s adding new fields or updating existing ones, the complexity of these tasks grows exponentially. When businesses try to synchronize tools like SalesLoft, Salesforce Pardot, or even something as specific as demand-based sales tools, the complexity doesn’t just double; it becomes an almost unmanageable challenge. Imagine a company like Best Buy or Home Depot, with countless customers and enormous volumes of first-party data. The complexity becomes a daunting puzzle.

India’s solution through MessageGears provides a refreshing perspective. By allowing businesses to view their modern data warehouse without the burden of storing data, the approach untangles the web of syncing, matching, and complying with new data privacy laws. India expressed a frustration with those who still don’t get this new approach, highlighting how the warehouse native model renders concerns like HIPAA compliance almost irrelevant.

Hear directly from India below 👇

Takeaway: India’s insights shed light on the intricacies of API connections versus the warehouse native approach. Her detailed explanation helps us understand how even simple tasks can become a tangled web as business grows. By adopting innovative solutions like MessageGears, businesses can bypass these complexities, align with modern data privacy laws, and efficiently manage their data, demonstrating a forward-thinking approach to the technological future.

Does Warehouse Native Martech Replace Reverse ETL Tools?

Some of the emerging tools replacing API integrations are called reverse ETL: they push data from your warehouse into your business tools. Startups solving this include Hightouch and Census. The question, though, is whether warehouse-native martech (sitting on top of the warehouse) also replaces the need for reverse ETL solutions, just as you might prefer using a bridge to cross a river rather than paying for a ferry. Here's Arun again from Castled:

When asked about whether warehouse native can replace reverse ETL tools, Arun provided a perspective that goes beyond a simple yes or no. His insights highlight the intricate balance between technology and purpose.

Arun explained that while warehouse native solutions can indeed eliminate the need for reverse ETL pipelines, it’s essential to understand why a business would want to do so. The motivation to adopt warehouse native shouldn’t be solely to eliminate reverse ETL; otherwise, the solution may fall short. With companies like Customer.io actively incorporating reverse ETL into their systems, a mere desire to remove reverse ETL isn’t enough.

Arun’s approach emphasizes the problem-solving capabilities of the warehouse native approach. If there are tangible limitations in existing tools, and if a warehouse native solution can solve those problems, then the path becomes clear. But starting on this path just to eliminate reverse ETL, without considering the broader issues, would be a mistake.

Hear directly from Arun 👇

Takeaway: Arun’s insights underscore the importance of aligning technology with genuine needs. Warehouse native solutions offer the ability to bypass reverse ETL, but this shouldn’t be the sole driver. Businesses need to identify real challenges that can be addressed by warehouse native solutions, creating a synergy between technological innovation and problem-solving. Anything less is a fleeting pursuit that’s likely to fall short.
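For readers newer to the category, the reverse ETL pattern debated above can be sketched in a few lines. This is a toy illustration with a hypothetical contacts table and an in-memory dict standing in for a downstream tool's API; real vendors would call, say, a CRM's REST endpoint with batching, retries, and change detection.

```python
import sqlite3

# Hypothetical warehouse table of marketable contacts.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE contacts (email TEXT, lifecycle_stage TEXT)")
wh.executemany("INSERT INTO contacts VALUES (?, ?)",
               [("a@x.com", "lead"), ("b@x.com", "customer")])

# Stand-in for a downstream SaaS tool's API.
crm_records = {}

def crm_upsert(email, fields):
    crm_records[email] = {**crm_records.get(email, {}), **fields}

def reverse_etl_sync():
    """The core reverse ETL loop: read modeled rows out of the warehouse
    and upsert them into the business tool, keyed on a unique identifier."""
    for email, stage in wh.execute("SELECT email, lifecycle_stage FROM contacts"):
        crm_upsert(email, {"lifecycle_stage": stage})

reverse_etl_sync()
print(crm_records["b@x.com"])  # {'lifecycle_stage': 'customer'}
```

A warehouse-native tool removes this loop entirely by reading the contacts table in place, which is exactly the trade-off Arun says should be driven by real limitations, not by the loop's existence alone.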

Established Platforms vs Warehouse Native Marketing Automation

Obviously, reverse ETL platforms are going to have some hot takes on this question. One of them comes from Tejas Manohar, co-founder and co-CEO of Hightouch, a reverse ETL tool that's taken a controversial stance against the packaged CDP, claiming it's dead and that they can replace it.

Tejas noted that while warehouse-native tools are on the rise, he doesn't envision them taking over established platforms like Salesforce Marketing Cloud or Iterable. To Tejas, these tools can't replicate the variety of channels and functions available in existing martech solutions.

Tejas explained that marketers need to utilize their data across all channels, and solutions like Hightouch make this process simple. He was unafraid to share that he’s not bullish on the trend of warehouse native marketing tools dominating the space, as they do not address the unique needs of marketers. This includes all sorts of concerns that a Customer Engagement or ESP platform handles, not related to the data warehouse, such as data quality, governance, privacy, and identity risk.

However, Tejas clarified that his stance does not mean there’s no room for new businesses in martech like warehouse native. On the contrary, he sees a wealth of opportunities to build in this field, especially with localization and integration. What he doesn’t foresee is a platform shift that replaces giants like Salesforce and Adobe. The focus should be on integrating the data and marketing sides of the business, and Hightouch is positioned as an ideal solution for this.

Hear from Tejas directly 👇

Key Takeaway: Warehouse native tools and CDPs are growing, but Tejas argues that they will not replace the multifaceted capabilities of existing martech providers. While they may add some new functionality, their integration with traditional platforms seems more likely. The focus, he believes, should be on how marketers can use data effectively across all channels, and he sees Hightouch as the perfect solution to bridge the gap between data and marketing needs.

Effortless Data Movement and Apps as Lightweight Caches on Core Warehouses

Not all reverse ETL vendors take a negative view of the warehouse-native approach, though. Boris Jabes, co-founder and CEO at Census, another reverse ETL tool, has a different perspective.

When Boris was asked about the future of warehouse native martech and its potential to replace reverse ETL, his response not only highlighted a promising vision but also revealed Census’s pioneering role in the field.

Boris acknowledged the attraction of a world where warehouse native martech diminishes fragmentation and promotes consistency in written data. He was quick to point out that Census has been a trailblazer in this domain, adopting a warehouse native solution even before the term was coined. This, he said, is a testament to the company’s innovation and leadership in the space.

He detailed Census’s offerings, such as the Audience Hub, a segmentation experience native to the warehouse. These solutions not only reflect Census’s deep understanding of warehouse native systems but also underscore the company’s commitment to letting marketers activate data without hassle.

However, Boris also emphasized the challenges and necessities of this path. Perfect data in the warehouse is key. Understanding the relationships between different sets of data, customizing relationships, and validating data before use are all integral to making warehouse native martech work seamlessly.

Boris’s vision culminated in the anticipation of a world where data movement systems are no longer a concern, and every application becomes a lightweight cache on the core data warehouse. Though he believes in this future, he cautioned that it may take time to come to fruition and urged companies to focus on transforming and modeling user data.

Hear from Boris directly 👇

Key Takeaway: Boris’s insights cast a spotlight on the potential of warehouse native martech, with Census leading the way before it was even a recognized term. His vision of applications as lightweight caches on core data warehouses paints a compelling future. Yet, it’s grounded in the reality that clean, well-structured data and a deep understanding of relationships between data sets are crucial to making this dream a reality. The path is laid out; the journey, according to Boris, requires focus, innovation, and a commitment to quality.
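Boris's "lightweight cache" framing can be sketched as a read-through cache with a short TTL: the app keeps no durable copy of customer data, only ephemeral entries, and falls back to the warehouse on a miss or expiry. The schema and class below are hypothetical, with stdlib sqlite3 standing in for the warehouse.

```python
import sqlite3
import time

# Stand-in for the core data warehouse (hypothetical schema).
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE profiles (user_id INTEGER, ltv REAL)")
wh.execute("INSERT INTO profiles VALUES (1, 99.0)")

class WarehouseCache:
    """Read-through cache: the app holds only short-lived entries and treats
    the warehouse as the single durable source of truth."""
    def __init__(self, conn, ttl_seconds=60):
        self.conn = conn
        self.ttl = ttl_seconds
        self._cache = {}  # user_id -> (value, fetched_at)

    def get_ltv(self, user_id):
        hit = self._cache.get(user_id)
        if hit and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                      # fast path: serve from cache
        row = self.conn.execute(
            "SELECT ltv FROM profiles WHERE user_id = ?", (user_id,)
        ).fetchone()                           # slow path: query the warehouse
        value = row[0] if row else None
        self._cache[user_id] = (value, time.monotonic())
        return value

cache = WarehouseCache(wh)
print(cache.get_ltv(1))  # 99.0 (fetched from warehouse, then cached briefly)
```

This also makes Scott's earlier performance caveat concrete: the cache exists precisely because an analytical warehouse can't always answer in the milliseconds a user-facing experience demands.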

Closing Thought on Warehouse Native Martech

The shifting tides of warehouse-native technology are promising but they come with a fair share of skepticism. This shift is not just a simple tool swap, but a nuanced evolution requiring careful understanding and strategic decision-making, shaped by a company’s unique needs and data maturity. 

  • Is zero data copy really achievable?
  • Does it save costs for the customer, or does it benefit the cloud warehouse companies?
  • How long will local database copies be a requirement?
  • Can compute charges be solved with higher-quality queries?
  • Will warehouse-native martech affect enterprise or startup companies more?
  • Does warehouse-native martech replace the need for reverse ETL pipelines?

Yet, amid the complexity, and all the questions, a promise shines through – a future of reduced data pipelines, seamless integration, and more efficient, direct data access. The challenge, as well as the opportunity, lies in the journey towards that future, a journey fueled by the symbiosis of pioneering tools and clean data.

You heard it here first folks: As we navigate the transformation to warehouse-native martech, the single most critical action is to prioritize achieving high-quality, well-structured data; it’s the golden key to unlocking the full potential of these emerging tools and strategies.

Listen to the full episode 🎧


Intro music by Wowa via Unminus
Cover art created with Midjourney
Music generated by Mubert https://mubert.com/render 


