
· 3 min read
Michael Canzoneri

Unfortunately, my semi-retirement has come to an end.

It’s been an incredible ten months waking up and not having to be anywhere other than school dropoff.  However, all good things must turn into even better things!

I’m happy to announce that I’ve joined my old friends from DataStax, Matthias Broecheler and Daniel Henneberger, as a co-founder of DataSQRL.

And what a time to join! Our self-funded venture is already profitable, with a growing customer base.

What does DataSQRL do?

DataSQRL is your “Operating System for Data”: it builds data products in minutes. Gone are the months wasted trying to integrate multiple vendor technologies into a data pipeline.

DataSQRL abstracts and automates the building of data pipelines through our SQL interface, which we’ve affectionately dubbed SQRL.  Yes, pronounced “Squirrel,” just like the fuzzy little guys running around your yard.

Have you struggled with Apache Kafka?  Flink?  Streaming?  Data in motion? Integrating your Artificial Intelligence into your systems?  Anything “Real-Time?” Struggle no more, and get started here!

We provide an open-source tool that can help you, along with professional services.

Now that that is out of the way, onto more interesting things…

What did I do over these last ten months? You’re wondering…

What didn’t I do?  I…

  • Spent a ton of time watching my autistic son make incredible progress
  • Helped produce a shark documentary, Unmasking Monsters Below, with my friend, Kevin Lapsley
  • Traveled to more than 30 different places, both nationally and internationally (pictures below)
  • Read 41 books
  • Wrote 200 pages in my journal
  • Started writing a new book
  • Started writing a new screenplay
  • Continued working on my photography skills
  • Watched SO MUCH college football (We are!)
  • Cooked my wife breakfast (almost) every day
  • Made a ton of new recipes
  • Worked out 4-6 days per week
  • Mentored several young professionals, helping them find their way
  • Caught up with some old friends
  • Served as an advisor to ten AI-based startups
  • And attended a few concerts…

More on all of this as the months roll on…

What’s next for me and DataSQRL?

If you’re attending Data Days in Austin, TX, next week, check out Matthias’ presentation.

Otherwise, make sure you follow us here on LinkedIn.

I promise that some of my upcoming posts will cover what I did over the last ten months.  You know me… However, here are some highlights in pictures…

A 3x Silicon Valley Unicorn veteran with IPO experience, award-winning screenwriter, and 2006 Time Magazine “Person of the Year,” Canz is often mistaken for Joe Rogan while walking down the street.  He can be found on LinkedIn, IMDB, and helping people learn how to pronounce “Conshohocken.”

#autism #journaling #imdb #joerogan #movies #metallica #concerts

Picture Highlights

  • Innescron, County Sligo, Ireland
  • Mystic, CT
  • Innescron, County Sligo, Ireland
  • Newport, RI
  • MetLife Stadium, NJ (Metallica concert)
  • Napa Valley, CA

· 10 min read
Matthias Broecheler

A common problem in search is ordering large result sets. Consider a user searching for “jacket” on an e-commerce platform. How do we order the large number of results to show the most relevant products first? In other words, what kind of jackets is the user looking for? Suit jackets, sport jackets, winter jackets?

Often, we have the context to infer what kind of jacket a user is looking for based on their interactions on the site. For example, if a user has men’s running shoes in their shopping cart, they are likely looking for men’s sports jackets when they search for “jacket”.

At least to a human that seems pretty obvious. Yet, Amazon will return a somewhat random assortment of jackets in this scenario as shown in the screenshot below.

Amazon search results for `jacket`

To humans, the semantic association between “running shoes” and “sport jackets” is natural, but for machines, making such associations has been a challenge. With recent advances in large language models (LLMs), computers can now compute semantic similarities with high accuracy.

We are going to use LLMs to compute the semantic context of past user interactions via vector embeddings, aggregate them into a semantic profile, and then use the semantic profile to order search results by their semantic similarity to a user’s profile.

In other words, we are going to rank search results by their semantic similarity to the things a user has been browsing. That gives us the context we are missing when the user enters a search query.

In this article, you will learn how to build a personalized shopping search with semantic vector embeddings step-by-step. You can apply the techniques in this article to any kind of search where a user can browse and search a collection of items: event search, knowledge bases, content search, etc.
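To make the idea concrete, here is a minimal sketch of the final ranking step, assuming the embeddings already live in Postgres with the pgvector extension. The table and column names are hypothetical: `products` holds one embedding per item, and `user_profiles` holds the aggregated embedding of each user’s past interactions.

```sql
-- Minimal sketch (assumes pgvector and hypothetical tables/columns):
--   products(id, name, embedding vector(1536))
--   user_profiles(user_id, embedding vector(1536))  -- aggregated from past interactions
SELECT p.id, p.name
FROM products p
JOIN user_profiles u ON u.user_id = :user_id
WHERE p.name ILIKE '%jacket%'              -- the user's search query
ORDER BY p.embedding <=> u.embedding       -- pgvector cosine distance: closest first
LIMIT 20;
```

Ordering the keyword matches by their distance to the user’s semantic profile is what turns a generic result list into a personalized ranking.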

· 11 min read
Matthias Broecheler

Let’s build a personalized recommendation engine using AI as an event-driven microservice with Kafka, Flink, and Postgres. And since Current23 is starting soon, we will use the events of this event-driven conference as our input data (sorry for the pun). You’ll learn how to apply AI techniques to streaming data and what talks you want to attend at the Kafka conference - double win!

We will implement the whole microservice in 50 lines of code thanks to the DataSQRL compiler, which eliminates all the data plumbing so we can focus on building.

Watch the video to see the microservice in action or read below for step-by-step building instructions and details.

What We Will Build

We are going to build a recommendation engine and semantic search that uses AI to provide personalized results for users based on user interactions.

Let’s break that down: Our input data is a stream of conference events, namely the talks with title, abstract, speakers, time, and so forth. We consume this data from an external data source.

In addition, our microservice has endpoints to capture which talks a user has liked and what interests a user has expressed. We use those user interactions to create a semantic user profile for personalized recommendations and personalized search results.

We create the semantic user profile through vector embeddings, an AI technique for mapping text to numbers in a way that preserves the content of the text for comparison. It’s a great tool for representing the meaning of text in a computable way. It's like mapping addresses (i.e. street, city, zip, country) onto geo-coordinates. It’s hard to compare two addresses, but easy to compute the distance between two geo-coordinates. Vector embeddings do the same thing for natural language text.

Those semantic profiles are then used to serve recommendations and personalized search results.
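As a rough illustration of what such a profile can look like (not the exact code from the article): assuming the talk embeddings are stored in Postgres with pgvector, the profile can be the average of the embeddings of the talks a user has liked, and recommendations are the talks closest to that profile. Table and column names below are hypothetical.

```sql
-- Hypothetical tables: likes(user_id, talk_id), talks(id, title, embedding vector(1536))
WITH profile AS (
  SELECT l.user_id, AVG(t.embedding) AS embedding   -- centroid of liked talks (pgvector AVG)
  FROM likes l
  JOIN talks t ON t.id = l.talk_id
  GROUP BY l.user_id
)
SELECT t.title
FROM talks t, profile p
WHERE p.user_id = :user_id
ORDER BY t.embedding <=> p.embedding                 -- closest talks first
LIMIT 10;
```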

· 14 min read
Matthias Broecheler

When developing streaming applications or event-driven microservices, you face the decision of whether to preprocess data transformations in the stream engine or execute them as queries against the database at request time. The choice impacts your application’s performance, behavior, and cost. An incorrect decision results in unnecessary work and potential application failure.

To preprocess or to query?

In this article, we’ll delve into the tradeoff between preprocessing and querying, guiding you to make the right decision. We’ll also introduce tools to simplify this process. Plus, you’ll learn how building streaming applications is related to fine cuisine. It’ll be a fun journey through the land of stream processing and database querying. Let’s go!

Recap: Anatomy of a Streaming Application

If you're in the process of building an event-driven microservice or streaming application, let's recap what that entails. An event-driven microservice consumes data from one or multiple data streams, processes the data, writes the results to a data store, and exposes the final data through an API for external users to access.

The figure below visualizes the high-level architecture of a streaming application and its components: data streams (e.g. Kafka), stream processor (e.g. Flink), database (e.g. Postgres), and API server (e.g. GraphQL server).

Streaming Application Architecture

An actual event-driven microservice might have a more intricate architecture, but it will always include these four elements: a system for managing data streams, an engine for processing streaming data, a place to store the results, and a server to expose the service endpoint.

This means an event-driven architecture has two stages: the preprocess stage, which processes data as it streams in, and the query stage, which processes user requests against the API. Each stage handles data, but they differ in what triggers the processing: incoming data triggers the preprocess stage, while user requests trigger the query stage. The preprocess stage handles data before the user needs it, and the query stage handles data when the user explicitly requests it.

Understanding these two stages is vital for the successful implementation of event-driven microservices. Unlike most web services with only a query stage or data pipelines with only a preprocess stage, event-driven microservices require a combination of both stages.

This leads to the question: Where should data transformations be processed? In the preprocessing stage or the query stage? And what’s the difference, anyways? That’s what we will be investigating in this article.
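To make the two stages tangible, here is the same hypothetical aggregation (hourly page views from a `clicks` stream) expressed once as a preprocessing transformation in Flink SQL and once as a request-time query in Postgres. The table and its columns are assumptions for illustration.

```sql
-- Preprocess stage (Flink SQL): results are computed as events arrive,
-- before any user asks for them.
SELECT window_start, page, COUNT(*) AS views
FROM TABLE(TUMBLE(TABLE clicks, DESCRIPTOR(event_time), INTERVAL '1' HOUR))
GROUP BY window_start, window_end, page;

-- Query stage (Postgres): the same result, computed only when a request comes in.
SELECT date_trunc('hour', event_time) AS hour, page, COUNT(*) AS views
FROM clicks
WHERE event_time >= now() - INTERVAL '24 hours'
GROUP BY 1, 2;
```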

· 9 min read
Matthias Broecheler

Stream processing technologies like Apache Flink introduce a new type of data transformation that’s very powerful: the temporal join. Temporal joins add context to data streams while being efficient and fast to execute.

Temporal Join

This article introduces the temporal join, compares it to the traditional inner join, explains when to use it, and why it is a secret superpower.
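As a preview, here is what a temporal join looks like in Flink SQL, using assumed example tables: each order is joined against the version of `currency_rates` that was valid at the order’s event time (this requires `currency_rates` to be a versioned table keyed on `currency`).

```sql
-- Enrich each order with the exchange rate in effect at the time of the order.
SELECT o.order_id,
       o.amount * r.rate AS amount_usd
FROM orders AS o
JOIN currency_rates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```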


· 9 min read
Matthias Broecheler

In the world of data-driven applications, Apache Flink is a powerful tool that transforms streams of raw data into valuable results. But how do you make these results accessible to users, customers, or consumers of your application? Most often, we found the answer to that question was: GraphQL. GraphQL gives users a flexible way to query for data, makes it easy to ingest events, and supports pushing data updates to the user in real-time.

Flink hearts GraphQL

In this blog post, we’ll discuss what GraphQL is and why it is a good fit for Flink applications. Like peanut butter and jelly, Flink and GraphQL don’t seem related but the combination is surprisingly good.


How To Access Flink Results?

Quick background before we dive into the details. Apache Flink is a scalable stream processor that can ingest data from multiple sources, integrate, transform, and analyze the data, and produce results in real time. Apache Flink is the brain of your data processing operations.


But Apache Flink cannot make the processed results accessible to users of your application. Flink has an API, but that API is only for administering and monitoring Flink jobs. It doesn’t give outside users access to the result data. In other words, Flink is a brain without a mouth to communicate results externally.

To make results accessible, you have to write them somewhere and expose them through an interface. But how? We have built a number of Flink applications, and in most cases the answer was: write the results to a database or Kafka and expose them through an API. Over the years, our default choice for the API has become GraphQL. Here’s why.
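The “write the results to a database” half of that pattern can be as simple as a JDBC sink in Flink SQL; the GraphQL server then serves the Postgres table behind the API. The connector options and the `hourly_page_views` source below are illustrative assumptions, not code from the article.

```sql
-- Sketch of landing Flink results in Postgres for an API layer to serve.
CREATE TABLE page_views_sink (
  page STRING,
  views BIGINT,
  window_start TIMESTAMP(3),
  PRIMARY KEY (page, window_start) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:postgresql://localhost:5432/app',
  'table-name' = 'page_views'
);

INSERT INTO page_views_sink
SELECT page, views, window_start
FROM hourly_page_views;   -- hypothetical result of earlier stream processing
```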

· 5 min read
Matthias Broecheler

Apache Flink is an incredibly powerful stream processor. But to build a complete application with Flink, you need to integrate multiple complex technologies, which requires a significant amount of custom code. DataSQRL is an open-source tool that simplifies this process by compiling SQL into a data pipeline that integrates Flink, Kafka, Postgres, and an API layer.

DataSQRL allows you to focus on your application logic without getting bogged down in the details of how to execute your data transformations efficiently across multiple technologies.

We have built several applications in Flink: recommendation engines, data mesh endpoints, monitoring dashboards, Customer 360 APIs, smart IoT apps, and more. Across those use cases, Flink proved to be versatile and powerful in its ability to instantly analyze and aggregate data from multiple sources. But we found it quite difficult and time-consuming to build applications with Flink.

DataSQRL compiled data pipeline

To start, you need to learn Flink: the Table and DataStream APIs, watermarking, windowing, and all the other stream processing concepts. Flink alone gets our heads spinning. And Flink is just one component of the application.

To build a complete data pipeline, you need Kafka to hold your streaming data and a database like Postgres to query the processed data. On top, you need an API layer that captures input data and provides access to the processed data. Your team must learn, implement, and integrate multiple complex technologies. It takes a village to build a Flink app.


That’s why we built DataSQRL. DataSQRL compiles the SQL that defines your data processing into an integrated data pipeline that orchestrates Flink, Kafka, Postgres, and API - saving us a ton of time and headache in the process. Why not let the computer do all the hard work?

Let me show you how DataSQRL works by building an IoT monitoring service.
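To give a flavor of the kind of logic such a pipeline encodes, here is plain stream-SQL for one monitoring transformation (the maximum temperature per sensor per minute). This is an illustrative sketch with assumed table and column names, not actual SQRL code from the article.

```sql
-- Highest temperature reported by each sensor in each one-minute window.
SELECT sensor_id,
       window_start,
       MAX(temperature) AS max_temp
FROM TABLE(TUMBLE(TABLE sensor_readings, DESCRIPTOR(event_time), INTERVAL '1' MINUTE))
GROUP BY sensor_id, window_start, window_end;
```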

· 8 min read
Matthias Broecheler

When creating data-intensive applications or services, your data logic (i.e. the code that defines how to process the data) gets fragmented across multiple data systems, languages, and mental models. This makes data-driven applications difficult to implement and hard to maintain.

SQRL is a high-level data programming language that compiles into executables for all your data systems, so you can implement your data logic in one place. SQRL adds support for data streams and relationships to SQL while maintaining its familiar syntax and semantics.

Why Do We Need SQRL?

Data layer of a data-driven application

The data layer of a data-driven application comprises multiple components: There’s the good ol’ database for data storage and queries, a server for handling incoming data and translating API requests into database queries, a queue/log for asynchronous data processing, and a stream processor for pre-processing and writing new data to the database. Consequently, your data processing code becomes fragmented across various systems, technologies, and languages.

· 5 min read
Matthias Broecheler

We need to make it easier to build data-driven applications. Databases are great if all your application needs is storing and retrieving data. But if you want to build anything more interesting with data - like serving users recommendations based on the pages they are visiting, detecting fraudulent transactions on your site, or computing real-time features for your machine learning model - you end up building a ton of custom code and infrastructure around the database.

Watch the video version

You need a queue like Kafka to hold your events, a stream processor like Flink to process data, a database like Postgres to store and query the result data, and an API layer to tie it all together.

And that’s just the price of admission. To get a functioning data layer, you need to make sure that all these components talk to each other and that data flows smoothly between them. Schema synchronization, data model tuning, index selection, query batching … all that fun stuff.

The point is, you need to do a ton of data plumbing if you want to build a data-driven application. All that data plumbing code is time-consuming to develop, hard to maintain, and expensive to operate.

We need to make building with data easier. That’s why we are sending out this call to action to uplevel our database game. Join us in figuring out how to simplify the data layer.

We have an idea to get us started: Meet DataSQRL.

· 2 min read
Daniel Henneberger

After two long years of research, development, and teamwork, we're excited to announce DataSQRL 0.1! DataSQRL is a tool for building APIs from your data streams and datasets by defining your use case in SQL.

DataSQRL v0.1 release: A SQRL is Born

This is our first “official” release of DataSQRL after many months of testing and bug-fixing.
Check out the release notes on GitHub for a rundown of all the features.