The third milestone of Imply’s Project Shapeshift brings industry-leading developer ease of use and operational efficiency to Apache Druid.
Imply, the company founded by the original creators of Apache Druid®, today unveiled the third milestone in Project Shapeshift, an initiative designed to evolve Apache Druid and solve the most pressing issues developers face when building real-time analytics applications. The milestone's new capabilities are detailed below.
Apache Druid, the analytics database for when real-time matters, is a popular open source database and 2022 Datanami Readers' Choice winner used by developers at thousands of companies, including Confluent, Salesforce, and Target. Because of its performance at scale and under load, along with its comprehensive features for analyzing streaming data, Druid is relied on for operational visibility, rapid data exploration, customer-facing analytics, and real-time decisioning.
Project Shapeshift was announced at Druid Summit 2021 and it marked a strategic initiative from Imply to transform the developer experience for Druid across three pillars: cloud-native, simple, and complete. In March 2022, Imply announced the first milestone with the introduction of Imply Polaris, a cloud database service for Druid. In September 2022, Imply announced the largest architectural expansion of Druid in its history with the addition of a multi-stage query engine.
“Druid has always been engineered for speed, scale, and streaming data. It’s why developers at Confluent, Netflix, Reddit, and thousands of other companies choose Druid over other database alternatives,” stated FJ Yang, Co-Founder and CEO of Imply. “For the past year, the community has come together to bring new levels of operational ease of use and expanded functionality. This makes Druid not only a powerful database, but one developers love to use too.”
Companies including Atlassian, Reddit, and Paytm utilize Imply for Druid because its commercial distribution, software, and services simplify operations, eliminate production risks, and lower the overall cost of running Druid. As a value-add to existing open source users, Imply guarantees a reduction in the cost of running Druid through its Total Value Guarantee.
Project Shapeshift Milestone 3 includes the following major contributions to Apache Druid and new features for Imply Polaris:
Schema definition plays an essential role in query performance, as a strongly-typed data structure makes it possible to columnarize, index, and optimally compress data. But defining a schema at load time places an operational burden on engineering teams, especially with ever-changing event data flowing through Apache Kafka and Amazon Kinesis. Databases such as MongoDB use a schemaless data structure because it gives developers flexibility and eases ingestion, but at a cost to query performance.
Today, Imply announces a new capability that makes Druid the first analytics database to combine the performance of a strongly-typed data structure with the flexibility of a schemaless one. Schema auto-discovery, now available in Druid 26.0, enables Druid to automatically discover data fields and types, and to update tables to match changing data, without administrator intervention.
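In practice, schema auto-discovery is switched on per ingestion spec. A minimal sketch of a Kafka streaming ingestion spec fragment follows; the topic name, datasource name, and timestamp column are illustrative assumptions, while `useSchemaDiscovery` is the Druid 26.0 setting that enables the feature:

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "topic": "clickstream",
      "inputFormat": { "type": "json" }
    },
    "dataSchema": {
      "dataSource": "clickstream",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": {
        "useSchemaDiscovery": true
      }
    }
  }
}
```

With `useSchemaDiscovery` set to `true`, Druid infers column names and types from the incoming events themselves rather than requiring an explicit dimension list, so new fields appearing in the stream are picked up automatically.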
“Now with Apache Druid you can have a schemaless experience in a high-performance, real-time analytics database,” said Gian Merlino, PMC Chair for Apache Druid and CTO of Imply. “You don’t have to give up having strongly-typed data in favor of flexibility as schema auto-discovery can do it for you. Net, you get great performance whether or not you define a schema ahead of time.”
“Druid handling real-time schema changes is a big step forward for the streaming ecosystem,” stated Anand Venugopal, Director of ISV Alliances at Confluent. “We see streaming data typically ingested in real-time and often coming from a variety of sources, which can lead to more frequent changes in data structure. Imply has now made Apache Druid simple and scalable to deliver real-time insights on those streams, even as data evolves.”
Druid 26.0 also expands join capabilities with support for large, complex joins. While Druid has supported joins since version 0.18, earlier join capabilities were deliberately limited to maintain high CPU efficiency and query performance. When queries required joining large data sets, external ETL tools had to be used to pre-join the data.
Druid now supports large joins at ingestion time, implemented architecturally as shuffle joins. This simplifies data preparation, reduces reliance on external tools, and extends Druid’s capabilities for in-database data transformation. The new shuffle joins are powered by Druid’s multi-stage query engine, and in the future the community plans to extend shuffle joins to join large data sets at query time in addition to ingestion time.
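As an illustration, an ingestion-time join can be expressed through Druid’s SQL-based ingestion, which runs on the multi-stage query engine. The table and column names below are hypothetical; the sort-merge (shuffle) strategy is requested via the `sqlJoinAlgorithm` query context parameter:

```sql
-- Hypothetical SQL-based ingestion that joins two large tables at ingestion time.
-- Submitted with query context {"sqlJoinAlgorithm": "sortMerge"} to request a shuffle join.
REPLACE INTO enriched_orders
OVERWRITE ALL
SELECT
  o.__time,
  o.order_id,
  o.amount,
  c.customer_name,
  c.region
FROM orders o
JOIN customers c
  ON o.customer_id = c.customer_id
PARTITIONED BY DAY
```

Because the multi-stage engine can shuffle and sort both sides of the join across workers, neither table needs to fit in memory on a single node, which is what previously forced pre-joining in external ETL tools.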
Imply Polaris, the cloud database service for Apache Druid, is the easiest deployment model for developers. It delivers all of Druid’s speed and performance without requiring expertise, management, or configuration of Druid or the underlying infrastructure.
This cloud database was built to do more than cloudify Druid; it also optimizes data operations and delivers an end-to-end service from stream ingestion to data visualization.
Today, Imply also announces a series of product updates to Polaris that enhance the developer experience.