Analytics app development at scale
Eric Tschetter, Field CTO of Imply, chats with EITN about the database technology and its roadmap.
EITN: Is the Imply database, which is based on Apache Druid, in the same solution area as graph database technology?
Eric: Imply provides a commercial distribution and cloud database service for Apache Druid, the leading real-time analytics database.
EITN: If not, how is it different and what are its use cases?
Eric: Apache Druid is the right choice for developers building an analytics application at any scale, for any number of users, and across streaming and batch data. These applications span analytics use cases including observability, clickstream analytics, security analytics, IoT/telemetry, and external analytics.
Druid's three unique differentiators are sub-second response at any scale; true stream ingestion with limitless scale, low latency, and guaranteed consistency; and non-stop reliability, so it never goes down and never loses data.
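To make the query side of that concrete, here is a minimal sketch of Druid's SQL-over-HTTP interface (POST to the /druid/v2/sql endpoint). The localhost:8888 router address matches the Druid quickstart default; the "clickstream" datasource and its columns are hypothetical stand-ins for this example.

```python
# Minimal sketch: issue a SQL query against a local Druid cluster.
# Assumes the quickstart router at localhost:8888 and a hypothetical
# "clickstream" datasource; adjust both for a real deployment.
import json
import urllib.request

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

# Druid accepts standard SQL; aggregations like this are the kind of
# query Druid is designed to answer interactively over large datasources.
query = {
    "query": """
        SELECT channel, COUNT(*) AS events
        FROM clickstream
        WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
        GROUP BY channel
        ORDER BY events DESC
        LIMIT 10
    """
}

request = urllib.request.Request(
    DRUID_SQL_URL,
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    # By default, results come back as a JSON array of row objects.
    for row in json.load(response):
        print(row)
```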
EITN: What is Apache Druid in a paragraph, and what was its original intention?
Eric: Apache Druid is an open-source, real-time analytics database originally built for an ad-tech start-up and now used by developers at 1000s of leading organisations across industries. True to those original design criteria, developers turn to Apache Druid for its unique ability to enable interactive analytics at any scale, high concurrency at the best value, and insights on streaming and batch data. Its hyper-efficient architecture uniquely delivers sub-second response on billions to trillions of rows for 100s to 1000s of concurrent users with near-infinite scale.
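On the streaming side, Druid ingests directly from Kafka via its supervisor API: once a spec is submitted, Druid consumes the topic continuously and events become queryable as they arrive. The sketch below posts a spec to the standard supervisor endpoint; the host, topic name, and schema are assumptions made for illustration.

```python
# Hedged sketch: start native Kafka ingestion by submitting a supervisor
# spec to Druid. Assumes the quickstart router at localhost:8888, a local
# Kafka broker at localhost:9092, and a hypothetical "clickstream-events"
# topic carrying JSON events with timestamp, channel, and user fields.
import json
import urllib.request

SUPERVISOR_URL = "http://localhost:8888/druid/indexer/v1/supervisor"

spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "clickstream",  # hypothetical target datasource
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["channel", "user"]},
        },
        "ioConfig": {
            "topic": "clickstream-events",  # hypothetical Kafka topic
            "inputFormat": {"type": "json"},
            "consumerProperties": {"bootstrap.servers": "localhost:9092"},
        },
    },
}

request = urllib.request.Request(
    SUPERVISOR_URL,
    data=json.dumps(spec).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.load(response))  # on success, Druid returns the supervisor id
```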
EITN: What is lacking in the area of data and analytics and databases? What more can be done?
Eric: The industry at large is facing the next wave of technical hurdles for analytics, driven by how organisations want to derive value from data. The first wave, in the 2000s, was about solving large-scale data processing and storage challenges, which HDFS, MapReduce, and Spark addressed. The second wave, in the 2010s, was about solving large-scale query processing, which drove the emergence of cloud data warehouses (e.g. Snowflake, Redshift, and BigQuery) and distributed SQL engines (e.g. Impala, Presto, Athena). Now the challenge organisations are trying to solve is large-scale analytics applications that enable interactive data experiences. That’s where Druid with Imply comes in.
EITN: Could you share Imply’s roadmap for the next 3 to 5 years?
Eric: Imply will continue to drive the best developer experience for building modern analytics applications. Developers building analytics applications are looking for the most capable database with the simplest experience, so they can build any application without constraints on performance or scale and at the best economics. We’ll continue innovating across the core database architecture and cloud service to deliver on this mission.