{"id":46827,"date":"2025-02-27T00:00:00","date_gmt":"2025-02-27T08:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/using-kafka-with-griddbs-time-series-containers\/"},"modified":"2025-11-13T12:57:15","modified_gmt":"2025-11-13T20:57:15","slug":"using-kafka-with-griddbs-time-series-containers","status":"publish","type":"post","link":"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/","title":{"rendered":"Using Kafka with GridDB&#8217;s Time Series Containers"},"content":{"rendered":"<p>In today&#8217;s article, we will discuss using Kafka in conjunction with GridDB, a topic we have covered before:<\/p>\n<ul>\n<li><a href=\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\">Stream Data with GridDB and Kafka<\/a><\/li>\n<li><a href=\"https:\/\/griddb.net\/en\/blog\/using-griddb-as-a-source-for-kafka-with-jdbc\/\">Using GridDB as a source for Kafka with JDBC<\/a><\/li>\n<li><a href=\"https:\/\/griddb.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\">Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka<\/a><\/li>\n<li>Udemy course: <a href=\"https:\/\/www.udemy.com\/course\/create-a-working-iot-project-with-iot-database-griddb\/\">Create a working IoT Project &#8211; Apache Kafka, Python, GridDB<\/a><\/li>\n<\/ul>\n<p>This article focuses on a new feature which allows Kafka to use GridDB as a sink that creates <code>TIME_SERIES<\/code> containers, meaning that with some configuration we can push time series data from Kafka topics directly into GridDB; prior to v5.6, we were limited to Collection containers.<\/p>\n<p>There will be some similarities with our previous blog on this topic, &#8220;Stream Data with GridDB and Kafka&#8221;. 
The differences here are that we have packaged all the moving parts of Kafka and GridDB into Docker containers for easier portability and ease of use, and that we will, as alluded to earlier, be using Time Series containers. If you follow along with this blog, you will learn how to use Kafka to stream time series data directly into a GridDB time series container, all inside Docker.<\/p>\n<h2>High Level Overview<\/h2>\n<p>Before we get into how to run this project, let&#8217;s briefly go over what this project does and how it works. We will get Kafka and GridDB running inside of Docker containers, and once those are ready, we will run a Python script which acts as a Kafka <code>producer<\/code> to push random data up to the <code>broker<\/code>. This simulated IoT data will then sit in a Kafka queue (though it&#8217;s more accurately a <code>distributed log<\/code>) until a <code>consumer<\/code> is available to read those values.<\/p>\n<p>In our case, GridDB will act as the <code>sink<\/code>, meaning it will <code>consume<\/code> the data topics made by our Python script and save that data into tables which will be created by Kafka based on the topic schemas set within our Python script.<\/p>\n<p>To properly communicate how and where to save the Kafka topics, we will need to set up a GridDB Kafka sink properties file. But first, we will need to grab and build the latest version (v5.6) of the GridDB Kafka Connector and share it with our running Kafka installation so that we may save time series data directly into time series containers.<\/p>\n<p>Within that properties file, we will need to set the container type to <code>time_series<\/code> along with various other important details.<\/p>\n<h2>Getting Started<\/h2>\n<p>Let&#8217;s discuss how to run this project.<\/p>\n<h3>Prerequisites<\/h3>\n<p>To follow along with this blog, you will need Docker and Docker Compose for running Kafka and GridDB. 
We will also need python3 installed to create the data to be pushed into Kafka as topics (and then eventually saved into GridDB).<\/p>\n<p>Finally, we will need to grab and build the GridDB Kafka Connect jar file.<\/p>\n<h4>GridDB Kafka Connect (Optional)<\/h4>\n<p>You can download the latest version here: <a href=\"https:\/\/github.com\/griddb\/griddb-kafka-connect\">griddb-kafka-connect<\/a>. To build, make sure you have <code>maven<\/code> installed and run:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ mvn clean install<\/code><\/pre>\n<\/div>\n<p>The <code>.jar<\/code> file will be created inside of the <code>target<\/code> directory under the name <code>griddb-kafka-connector-0.6.jar<\/code>.<\/p>\n<p>Note: the jar file is also included in the source code provided by the repo in the next section. If you clone the repo and run this project via Docker Compose, you do not need to download or build the jar file yourself.<\/p>\n<h3>Source Code<\/h3>\n<p>You can find the source code on the griddbnet GitHub page:<\/p>\n<p><code>$ git clone https:\/\/github.com\/griddbnet\/Blogs.git --branch 7_kafka_timeseries<\/code><\/p>\n<h3>Running the Project<\/h3>\n<p>Once you have the source code and Docker installed, you can simply run:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker compose pull && docker compose up -d<\/code><\/pre>\n<\/div>\n<p>Once it&#8217;s done, you can check that the Kafka connector has the GridDB sink properties file in place by running the following:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ curl http:\/\/localhost:8083\/connectors\/\n\n[\"griddb-kafka-sink\"]<\/code><\/pre>\n<\/div>\n<p>You can also take a look at the contents of the Kafka sink to see what it contains:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ curl http:\/\/localhost:8083\/connectors\/griddb-kafka-sink<\/code><\/pre>\n<\/div>\n<p>Once that&#8217;s done, you can run the Python 
script, which acts as a Kafka producer.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ python3 -m pip install kafka-python\n$ python3 scripts\/producer.py<\/code><\/pre>\n<\/div>\n<h3>GridDB Sink Properties<\/h3>\n<p>In Kafka and other stream\/event-driven architectures, the concepts of sources and sinks describe the direction of the flow of data. The sink is where data flows <em>in<\/em>, or where the data ends up &#8212; in this case, we want our data payloads to persist inside of GridDB as time series data inside of a time series container. And so we set the properties file as such:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">        connector.class= com.github.griddb.kafka.connect.GriddbSinkConnector\n        name= griddb-kafka-sink\n        cluster.name= myCluster\n        user= admin\n        password= admin\n        notification.member= griddb-server:10001\n        container.type= TIME_SERIES\n        topics= meter_0,meter_1,meter_2,meter_3\n        transforms=  TimestampConverter\n        transforms.TimestampConverter.type=  org.apache.kafka.connect.transforms.TimestampConverter$Value\n        transforms.TimestampConverter.format=  yyyy-MM-dd hh:mm:ss\n        transforms.TimestampConverter.field=  timestamp\n        transforms.TimestampConverter.target.type=  Timestamp<\/code><\/pre>\n<\/div>\n<p>As compared to our previous article, the main changes are the <code>container.type<\/code> designation and the transforms properties. The transforms properties tell our Kafka cluster which string value will be converted into a timestamp, along with other details to help that process along. 
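<\/p>\n<p>As an illustration of the timestamp format at play (this snippet is not part of the repo&#8217;s scripts): the Java-style pattern <code>yyyy-MM-dd hh:mm:ss<\/code> corresponds, assuming 24-hour timestamps, to Python&#8217;s <code>%Y-%m-%d %H:%M:%S<\/code>, so the producer must emit timestamp strings of exactly this shape for the converter to parse them:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">from datetime import datetime\n\n# Java-style pattern from the sink properties: yyyy-MM-dd hh:mm:ss\n# Python equivalent (assuming a 24-hour clock):\nPY_PATTERN = \"%Y-%m-%d %H:%M:%S\"\n\nstamp = datetime(2025, 2, 27, 8, 0, 0).strftime(PY_PATTERN)\nprint(stamp)  # 2025-02-27 08:00:00\n\n# A string in this shape parses back into a datetime without error\nassert datetime.strptime(stamp, PY_PATTERN) == datetime(2025, 2, 27, 8, 0, 0)<\/code><\/pre>\n<\/div>\n<p>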
The other values simply let our broker know where to send the data topics: our GridDB Docker container, with a hostname of <code>griddb-server<\/code>.<\/p>\n<p>The topics listed are the names of the data topics, which will also be the names of our GridDB time series containers.<\/p>\n<h3>Python Producer Script<\/h3>\n<p>There isn&#8217;t much to say here that you can&#8217;t get from simply reading the (simple) source code. The only thing I will add is that if you wished to docker-ize the producer script as well, you would change the server location from <code>localhost<\/code> to <code>broker:9092<\/code>:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">#p=KafkaProducer(bootstrap_servers=['localhost:9092'])\np=KafkaProducer(bootstrap_servers=['broker:9092'])<\/code><\/pre>\n<\/div>\n<p>One other thing to note is that though we are making time series containers with a timestamp as the row key, you still need to declare your payload data fields as type <code>string<\/code> (I teased this above when discussing the <code>transform<\/code> property in the sink section).<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">\"schema\": \n{\n    \"fields\": [ \n        { \"field\": \"timestamp\", \"optional\": False, \"type\": \"string\" },\n        { \"field\": \"kwh\", \"optional\": False, \"type\": \"double\" }, \n        { \"field\": \"temp\", \"optional\": False, \"type\": \"double\" } \n    ], \n    \"name\": \"iot\", \"optional\": False, \"type\": \"struct\" \n}   <\/code><\/pre>\n<\/div>\n<p>The key here is that though the type is <code>string<\/code>, we must make the <em>first<\/em> field our targeted timestamp. Then, in the sink configuration for this dataset, we set <code>transforms.TimestampConverter.field<\/code> to the name of the field we want converted to type timestamp. 
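<\/p>\n<p>Putting the schema and the transform together, one serialized record might look like the following sketch; the topic name <code>meter_0<\/code> and the field values here are illustrative, and the real producer lives in <code>scripts\/producer.py<\/code>:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import json\nfrom datetime import datetime\n\n# Sketch of one serialized record as Kafka Connect's JsonConverter\n# expects it: an envelope holding the schema plus a payload.\n# (A sketch only -- the actual producer is scripts\/producer.py.)\nschema = {\n    \"fields\": [\n        {\"field\": \"timestamp\", \"optional\": False, \"type\": \"string\"},\n        {\"field\": \"kwh\", \"optional\": False, \"type\": \"double\"},\n        {\"field\": \"temp\", \"optional\": False, \"type\": \"double\"},\n    ],\n    \"name\": \"iot\", \"optional\": False, \"type\": \"struct\",\n}\n\npayload = {\n    \"timestamp\": datetime(2025, 2, 27, 8, 0, 0).strftime(\"%Y-%m-%d %H:%M:%S\"),\n    \"kwh\": 1.5,\n    \"temp\": 21.3,\n}\n\nrecord = json.dumps({\"schema\": schema, \"payload\": payload}).encode(\"utf-8\")\n# With kafka-python, this value could then be sent with, e.g.:\n#   p.send(\"meter_0\", value=record)<\/code><\/pre>\n<\/div>\n<p>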
With these things in place, Kafka and GridDB will create your tables with the set schema and the proper container type.<\/p>\n<h2>Running Kafka in Docker Containers<\/h2>\n<p>In our previous article about Kafka, we simply ran Kafka and GridDB on bare metal, meaning we started the servers through the CLI with commands. Though it worked well, it&#8217;s a bit confusing because you need 3-4 terminals open and need to remember to run things in sequence. For this article, we have prepared a Docker Compose file which allows you to download and run everything with 2-3 commands!<\/p>\n<h3>Confluent Docker Containers<\/h3>\n<p>First, let&#8217;s discuss the Docker images provided by Confluent, a company which provides enterprise support and tooling for Kafka. They nonetheless provide their Docker images freely, and we will use them in our Docker Compose file.<\/p>\n<p>Essentially, what Docker Compose does is allow us to create a set of &#8220;services&#8221; (AKA Docker containers) which we can run in unison with a simple command, with rules that declare which containers rely on others. For example, we can set the various Kafka containers to rely on each other so that they start up in the correct sequence.<\/p>\n<p>We opted for this because, as explained above, running Kafka is not an easy process &#8212; it has many different parts that need to run. 
For example, running this seemingly simple project, where we push data from Python script &#8211;> Kafka topics &#8211;> GridDB, takes five services in our Docker Compose file.<\/p>\n<h3>Docker Compose Services<\/h3>\n<p>The following are all of the services.<\/p>\n<ul>\n<li>GridDB<\/li>\n<li>Kafka Zookeeper<\/li>\n<li>Kafka Broker<\/li>\n<li>Kafka Schema Registry<\/li>\n<li>Kafka-Connect<\/li>\n<\/ul>\n<p>Another service which we omitted, but could include, is a Kafka data producer.<\/p>\n<p>The Kafka Zookeeper can be thought of as the brains, or the main coordinating component, of Kafka. The broker is the service which handles the data topics; in production it is often run alongside many other brokers for redundancy. When we want to point our producer of data topics at Kafka, we point it at the broker.<\/p>\n<p>The Kafka schema registry enforces the schemas used by your topics. In our case, it&#8217;s useful for deserializing the JSON payloads sent from our Python producer.<\/p>\n<p>The Kafka Connect container is where we add our third-party libraries for use with Kafka: the GridDB Kafka Connect jar and our GridDB sink properties file. The Connect container is a bit unique in that we need to make sure that the container is up and running first, and then push a JSON file to it with the GridDB sink property instructions. 
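<\/p>\n<p>To make that concrete, here is a sketch of the JSON body such a registration request carries; the values mirror the sink properties shown earlier, while the actual request is issued by <code>scripts\/create-griddb-sink.sh<\/code>, whose exact contents may differ:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import json\n\n# Sketch of the JSON body the Kafka Connect REST API accepts when\n# registering a connector (POST http:\/\/localhost:8083\/connectors).\n# Values mirror the sink properties shown earlier.\nsink = {\n    \"name\": \"griddb-kafka-sink\",\n    \"config\": {\n        \"connector.class\": \"com.github.griddb.kafka.connect.GriddbSinkConnector\",\n        \"cluster.name\": \"myCluster\",\n        \"user\": \"admin\",\n        \"password\": \"admin\",\n        \"notification.member\": \"griddb-server:10001\",\n        \"container.type\": \"TIME_SERIES\",\n        \"topics\": \"meter_0,meter_1,meter_2,meter_3\",\n    },\n}\n\nbody = json.dumps(sink).encode(\"utf-8\")\n# e.g. with the standard library:\n#   import urllib.request\n#   req = urllib.request.Request(\n#       \"http:\/\/localhost:8083\/connectors\", data=body,\n#       headers={\"Content-Type\": \"application\/json\"})\n#   urllib.request.urlopen(req)<\/code><\/pre>\n<\/div>\n<p>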
The GridDB Kafka Connect jar file, on the other hand, is shared with the container&#8217;s filesystem during startup.<\/p>\n<h4>Docker Compose Instructions<\/h4>\n<p>For GridDB there are no special instructions: we simply pull the image from griddbnet and then set some environment variables:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  griddb-server:\n    image: 'griddbnet\/griddb:5.6.0'\n    container_name: griddb-server\n    expose:\n      - '10001'\n      - '10010'\n      - '10020'\n      - '10040'\n      - '20001'\n      - '41999'\n    environment:\n      NOTIFICATION_MEMBER: 1\n      GRIDDB_CLUSTER_NAME: myCluster<\/code><\/pre>\n<\/div>\n<p>The zookeeper is in a similar boat:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  zookeeper:\n    image: 'confluentinc\/cp-zookeeper:7.3.0'\n    container_name: zookeeper\n    environment:\n      ZOOKEEPER_CLIENT_PORT: 2181\n      ZOOKEEPER_TICK_TIME: 2000<\/code><\/pre>\n<\/div>\n<p>The broker exposes port 9092 so that we can run our Python producer script outside of the context of our Docker Compose network environment (we just point to <code>localhost:9092<\/code>). 
There are also additional environment variables for pointing to the Zookeeper and setting other cluster rules.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  broker:\n    image: 'confluentinc\/cp-kafka:7.3.0'\n    container_name: broker\n    ports:\n      - '9092:9092'\n    depends_on:\n      - zookeeper\n    environment:\n      KAFKA_BROKER_ID: 1\n      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'\n      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT'\n      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT:\/\/broker:9092,PLAINTEXT_INTERNAL:\/\/broker:29092'\n      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1\n      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1\n      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1<\/code><\/pre>\n<\/div>\n<p>You will also notice that the broker, schema registry, and Kafka Connect all &#8220;depend&#8221; on the Zookeeper, which makes clear who is in charge of the entire operation.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  kafka-schema-registry:\n    image: 'confluentinc\/cp-schema-registry:7.3.0'\n    hostname: kafka-schema-registry\n    container_name: kafka-schema-registry\n    ports:\n      - '8082:8082'\n    environment:\n      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'PLAINTEXT:\/\/broker:9092'\n      SCHEMA_REGISTRY_HOST_NAME: kafka-schema-registry\n      SCHEMA_REGISTRY_LISTENERS: 'http:\/\/0.0.0.0:8082'\n    depends_on:\n      - zookeeper<\/code><\/pre>\n<\/div>\n<p>The Kafka Connect image is also pulled from the Confluent Docker Hub and takes many environment variables, but it additionally mounts volumes shared with the host machine so that we can share our GridDB Kafka Connect jar file. And lastly, there is a script at the very bottom of the service which waits until the kafka-connect HTTP endpoints are available. 
Once we get a 200 status code as a response, we can run our script which sends our GridDB-Sink properties file.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  kafka-connect:\n    image: confluentinc\/cp-kafka-connect:latest\n    container_name: kafka-connect\n    ports:\n      - '8083:8083'\n    environment:\n      CONNECT_BOOTSTRAP_SERVERS: 'broker:9092'\n      CONNECT_REST_PORT: 8083\n      CONNECT_GROUP_ID: device\n      CONNECT_CONFIG_STORAGE_TOPIC: device-config\n      CONNECT_OFFSET_STORAGE_TOPIC: device-offsets\n      CONNECT_STATUS_STORAGE_TOPIC: device-status\n      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: true\n      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: true\n      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http:\/\/kafka-schema-registry:8082'\n      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http:\/\/kafka-schema-registry:8082'\n      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect\n      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: '[%d] %p %X{connector.context}%m (%c:%L)%n'\n      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_PLUGIN_PATH: >-\n        \/usr\/share\/java,\/etc\/kafka-connect\/jars\n      CLASSPATH: >-\n        \/usr\/share\/java,\/etc\/kafka-connect\/jars\n    volumes:\n      - '.\/scripts:\/scripts'\n      - '.\/kafka-connect\/connectors:\/etc\/kafka-connect\/jars\/'\n      \n    depends_on:\n      - zookeeper\n      - broker\n      - kafka-schema-registry\n      - griddb-server\n    command:\n      - bash\n      - '-c'\n      - >\n        
\/etc\/confluent\/docker\/run & \n\n        echo \"Waiting for Kafka Connect to start listening on kafka-connect\"\n\n        while [ $$(curl -s -o \/dev\/null -w %{http_code}\n        http:\/\/kafka-connect:8083\/connectors) -eq 000 ] ; do \n          echo -e $$(date) \" Kafka Connect listener HTTP state: \" $$(curl -s -o \/dev\/null -w %{http_code} http:\/\/kafka-connect:8083\/connectors) \" (waiting for 200)\"\n          sleep 5 \n        done\n\n        nc -vz kafka-connect 8083\n\n        echo -e \"\\n--\\n+> Creating Kafka Connect GridDB sink\"\n\n        \/scripts\/create-griddb-sink.sh &&\n        \/scripts\/example-sink.sh\n\n        sleep infinity <\/code><\/pre>\n<\/div>\n<p>This setup gives Kafka explicit instructions: when topics with certain names are received by the broker, their contents should be pushed to the destination described in the properties file, which in this case is our GridDB container.<\/p>\n<h2>Conclusion<\/h2>\n<p>After you run the producer, you should be able to see all of your data inside your GridDB Docker container through the GridDB CLI: <code>$ docker exec -it griddb-server gs_sh<\/code>.<\/p>\n<p>And with that, we have successfully pushed IoT-like sensor data through Kafka to a GridDB Time Series container.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In today&#8217;s article we will be discussing Kafka in conjunction with GridDB, which we have done before: Stream Data with GridDB and Kafka Using GridDB as a source for Kafka with JDBC Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka Udemy course: Create a working IoT Project &#8211; Apache Kafka, Python, GridDB We 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":27911,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-46827","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Using Kafka with GridDB&#039;s Time Series Containers | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"In today&#039;s article we will be discussing Kafka in conjunction with GridDB, which we have done before: Stream Data with GridDB and Kafka Using GridDB as a\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using Kafka with GridDB&#039;s Time Series Containers | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"In today&#039;s article we will be discussing Kafka in conjunction with GridDB, which we have done before: Stream Data with GridDB and Kafka Using GridDB as a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-02-27T08:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" 
content=\"2025-11-13T20:57:15+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.griddb.net\/wp-content\/uploads\/2021\/11\/kafka.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1160\" \/>\n\t<meta property=\"og:image:height\" content=\"653\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Israel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Israel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\"},\"author\":{\"name\":\"Israel\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\"},\"headline\":\"Using Kafka with GridDB&#8217;s Time Series 
Containers\",\"datePublished\":\"2025-02-27T08:00:00+00:00\",\"dateModified\":\"2025-11-13T20:57:15+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\"},\"wordCount\":1671,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\",\"url\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\",\"name\":\"Using Kafka with GridDB's Time Series Containers | GridDB: Open Source Time Series Database for IoT\",\"isPartOf\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"datePublished\":\"2025-02-27T08:00:00+00:00\",\"dateModified\":\"2025-11-13T20:57:15+00:00\",\"description\":\"In today's article we will be discussing Kafka in conjunction with GridDB, which we have done before: Stream Data with GridDB and Kafka Using GridDB as 
a\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/using-kafka-with-griddbs-time-series-containers\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"contentUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"width\":1160,\"height\":653},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website\",\"url\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of SQL\",\"publisher\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},
\"image\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\",\"name\":\"Israel\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g\",\"caption\":\"Israel\"},\"url\":\"https:\/\/www.griddb.net\/en\/author\/israel\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"hre
f":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46827","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/comments?post=46827"}],"version-history":[{"count":1,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46827\/revisions"}],"predecessor-version":[{"id":51485,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46827\/revisions\/51485"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media\/27911"}],"wp:attachment":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media?parent=46827"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/categories?post=46827"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/tags?post=46827"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}