
Matthias Broecheler · 9 min read

In the world of data-driven applications, Apache Flink is a powerful tool that transforms streams of raw data into valuable results. But how do you make these results accessible to users, customers, or consumers of your application? Most often, we found the answer to that question was: GraphQL. GraphQL gives users a flexible way to query for data, makes it easy to ingest events, and supports pushing data updates to the user in real time.
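
To make those three interaction styles concrete, here is a rough sketch of the GraphQL operations a client might send to an imaginary click-count API. The type and field names (clickCounts, recordClick, clickCountUpdates) are made up for illustration; the operations are shown as Java text blocks.

```java
// Hypothetical GraphQL operations for an imagined click-count API.
// Nothing here is a real schema; it only illustrates the three interaction styles.
public class GraphQLOperations {

    // Query: the client asks for exactly the fields it needs, nothing more.
    static final String QUERY = """
            query {
              clickCounts(page: "home") { page count }
            }
            """;

    // Mutation: ingest an event into the backend.
    static final String MUTATION = """
            mutation {
              recordClick(page: "home", userId: "u-42") { accepted }
            }
            """;

    // Subscription: the server pushes updated results to the client in real time.
    static final String SUBSCRIPTION = """
            subscription {
              clickCountUpdates(page: "home") { page count }
            }
            """;
}
```

The query reads data, the mutation ingests an event, and the subscription streams updated results to the client as they change.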

[Image: Flink hearts GraphQL]

In this blog post, we’ll discuss what GraphQL is and why it is a good fit for Flink applications. Like peanut butter and jelly, Flink and GraphQL may not seem like they belong together at first, but the combination is surprisingly good.

How To Access Flink Results?

Quick background before we dive into the details. Apache Flink is a scalable stream processor that can ingest data from multiple sources, integrate, transform, and analyze that data, and produce results in real time. Flink is the brain of your data processing operations.
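
As a rough sketch of what that looks like in code, here is a tiny job that counts page views in ten-second windows. It assumes the Flink 1.x DataStream API, and an in-memory source stands in for a real Kafka or file source.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class ClickCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // An in-memory source keeps the sketch self-contained; a real job would
        // read from Kafka, files, a CDC stream, etc.
        env.fromElements("home", "pricing", "home", "docs", "home")
           .map(page -> Tuple2.of(page, 1L))
           .returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(t -> t.f0)
           .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
           .sum(1)      // continuously updated view counts per page
           .print();    // stand-in for a real sink (Kafka, database, ...)

        env.execute("click-count");
    }
}
```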

But Apache Flink cannot make the processed results accessible to users of your application. Flink has an API, but that API is only for administering and monitoring Flink jobs. It doesn’t give outside users access to the result data. In other words, Flink is a brain without a mouth to communicate results externally.

To make results accessible, you have to write them somewhere and expose them through an interface. But how? We have built a number of Flink applications, and in most cases the answer was: write the results to a database or Kafka (as sketched below) and expose them through an API. Over the years, our default choice for that API has become GraphQL. Here’s why.
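
Before we get to the reasons, here is a minimal sketch of the first half of that pattern: a Flink job handing its results to Kafka via the KafkaSink connector so an API layer can pick them up. The broker address, topic name, and JSON payloads are placeholders.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ResultsToKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the real result stream computed by the Flink job.
        DataStream<String> results = env.fromElements(
                "{\"page\":\"home\",\"count\":3}",
                "{\"page\":\"docs\",\"count\":1}");

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")        // assumed local broker
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("click-counts")             // topic the API layer reads from
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        results.sinkTo(sink);
        env.execute("results-to-kafka");
    }
}
```

The other half of the pattern, exposing those results to consumers through GraphQL, is what the rest of this post is about.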