dbt
dbt Labs is redefining how data teams work with SQL. Instead of waiting on complex ETL processes, dbt lets data analysts and data engineers build production-ready transformations directly in the warehouse, using code, version control, and CI/CD. This community-driven approach puts power back in the hands of practitioners while maintaining governance and scalability for enterprise use.
With a rapidly growing open-source community and an enterprise-grade cloud platform, dbt is at the heart of the modern data stack. It’s the go-to solution for teams who want faster analytics, higher quality data, and the confidence that comes from transparent, testable transformations.
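To give a feel for the workflow, here is a minimal sketch of the kind of step a CI/CD job might run, assuming dbt-core 1.5+ (which exposes the programmatic runner) and an already configured dbt project and profile; it is an illustration, not dbt's only or official entry point.

```python
# Minimal CI-style sketch: run and test a dbt project from Python.
# Assumes dbt-core >= 1.5 and a dbt project with a valid profile in the
# working directory -- adjust selection and paths to your own project.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Build the models, then run the tests defined alongside them.
for step in (["run"], ["test"]):
    res: dbtRunnerResult = runner.invoke(step)
    if not res.success:
        # Fail the pipeline the same way a CI job would.
        raise SystemExit(f"dbt {step[0]} failed: {res.exception}")

print("dbt run + test completed successfully")
```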
Learn more
Teradata VantageCloud
Teradata VantageCloud: Open, Scalable Cloud Analytics for AI
VantageCloud is Teradata’s cloud-native analytics and data platform designed for performance and flexibility. It unifies data from multiple sources, supports complex analytics at scale, and makes it easier to deploy AI and machine learning models in production. With built-in support for multi-cloud and hybrid deployments, VantageCloud lets organizations manage data across AWS, Azure, Google Cloud, and on-prem environments without vendor lock-in. Its open architecture integrates with modern data tools and standard formats, giving developers and data teams freedom to innovate while keeping costs predictable.
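As one illustration of that openness, the sketch below connects to a VantageCloud instance from standard Python database tooling using the teradatasql driver and runs a trivial query; the hostname and credentials are placeholders for your own environment.

```python
# Sketch: querying Teradata VantageCloud from Python with the teradatasql
# driver (PEP 249 style). Host, user, and password are placeholders --
# substitute the values for your own VantageCloud environment.
import teradatasql

with teradatasql.connect(
    host="example.vantage.teradata.com",  # placeholder hostname
    user="demo_user",                     # placeholder credentials
    password="demo_password",
) as conn:
    with conn.cursor() as cur:
        # Simple sanity check: ask the database for the current timestamp.
        cur.execute("SELECT CURRENT_TIMESTAMP")
        print(cur.fetchone())
```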
Learn more
Arroyo
Arroyo is a stream processing engine built to make real-time processing as simple and accessible as batch. It ships as a single, compact binary, so you can develop locally on macOS or Linux and deploy to production with Docker or Kubernetes, scaling from zero to millions of events per second. Anyone who knows SQL can build dependable, efficient, and accurate streaming pipelines: transform, filter, aggregate, and join data streams with sub-second response times. Data scientists and engineers can ship complete real-time applications, models, and dashboards on their own, without a specialized team of streaming experts.
Arroyo is built for modern, elastic cloud infrastructure and runs on everything from simple container runtimes like Fargate to large distributed Kubernetes deployments. Your streaming pipelines stay stable, and don't trigger alerts, simply because Kubernetes decides to reschedule your pods.
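To make the SQL-first workflow concrete, here is a hedged sketch that submits a filter-and-aggregate streaming query to a locally running Arroyo instance over HTTP. The SQL is illustrative of Arroyo-style streaming SQL, and the API URL, port, and request payload are assumptions for the sake of the example; consult the Arroyo documentation for the exact pipeline API and connector options.

```python
# Sketch: submitting a streaming SQL pipeline to a local Arroyo instance.
# The SQL shows a typical filter + windowed aggregation; the API address and
# request body are illustrative assumptions, not a verified contract --
# check the Arroyo docs for the real API and source connector settings.
import requests

PIPELINE_SQL = """
CREATE TABLE events (
    user_id TEXT,
    amount  FLOAT,
    ts      TIMESTAMP
) WITH (
    connector = 'kafka',                 -- assumed source configuration
    bootstrap_servers = 'localhost:9092',
    topic = 'events',
    format = 'json',
    type = 'source'
);

SELECT
    user_id,
    count(*)    AS purchases,
    sum(amount) AS revenue
FROM events
WHERE amount > 0
GROUP BY user_id, tumble(interval '1 minute');
"""

resp = requests.post(
    "http://localhost:5115/api/v1/pipelines",  # assumed local API address
    json={"name": "revenue_per_minute", "query": PIPELINE_SQL, "parallelism": 1},
    timeout=30,
)
resp.raise_for_status()
print("created pipeline:", resp.json())
```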
Learn more
Google Cloud Dataflow
Google Cloud Dataflow is a fully managed, serverless, and cost-effective service that unifies streaming and batch data processing. It automates the provisioning and administration of resources, and horizontal autoscaling adjusts worker resources dynamically to keep utilization high and latency low. Pipelines are written with the open-source Apache Beam SDK, and processing is reliable and consistent with exactly-once semantics. Dataflow speeds up the development of streaming data pipelines and significantly reduces data latency, while its serverless model lets teams focus on writing code rather than managing server clusters, removing most of the operational burden of data engineering.
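Because pipelines are authored with the Apache Beam SDK, a minimal word-count pipeline in Python looks like the sketch below; the project, region, and bucket names are placeholders, and switching the runner to "DirectRunner" runs the same pipeline locally for testing.

```python
# Minimal Apache Beam pipeline that can be submitted to Dataflow.
# Project, region, and bucket names are placeholders -- replace with your own.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",             # use "DirectRunner" to test locally
    project="my-gcp-project",            # placeholder project ID
    region="us-central1",
    temp_location="gs://my-bucket/tmp",  # placeholder staging bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.txt")
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Count" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/wordcount")
    )
```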
Learn more