{"id":52091,"date":"2025-04-04T00:00:00","date_gmt":"2025-04-04T07:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/kafka_http\/"},"modified":"2025-04-04T00:00:00","modified_gmt":"2025-04-04T07:00:00","slug":"kafka_http","status":"publish","type":"post","link":"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/","title":{"rendered":"Pushing Data to GridDB Cloud with Kafka HTTP Sink Connector"},"content":{"rendered":"<p>As we have discussed before, Kafka is an invaluable tool when dealing with certain IoT workloads. Kafka can guarantee a robust pipeline of streaming your sensor data into almost anywhere due to its high flexibility and various connectors. And indeed, we have previously written articles about using GridDB&#8217;s official Kafka Source &amp; Sink connectors to stream your data from place A to GridDB and vice versa.<\/p>\n<p>On the heels of GridDB Cloud now being free for most users worldwide, we thought we could again revisit using Kafka with GridDB, but now instead we would like to push our sensor data into the cloud using the Web API. To accomplish this, we needed to find an HTTP Sink Kafka connector and ensure that it could meet our requirements (namely data transformations and being able to change the HTTP method).<\/p>\n<p>Eventually we landed on using Confluent&#8217;s own HTTP Sink connector, as it was the only one we could find which allowed for us to use the <code>PUT<\/code> method when making our HTTP Requests. 
As for transforming the data, Kafka already provides a method of doing this with something they call SMT (Single Message Transform).<\/p>\n<p>Finally, the last challenge we needed to overcome was securely pushing our data through HTTPS, as GridDB Cloud&#8217;s endpoint is protected by SSL.<\/p>\n<h2>Following Along<\/h2>\n<p>All source code for this project is available on our GitHub page.<\/p>\n<p><code>$ git clone https:\/\/github.com\/griddbnet\/Blogs.git --branch kafka_http<\/code><\/p>\n<p>Within that repo you will find the source code, the docker compose file, and the SSL certificates.<\/p>\n<p>As this entire project is dockerized, to run the project yourself, you will simply need Docker installed. From there, you can run the project: <code>docker compose up -d<\/code>. We have already included the <code>.jar<\/code> file in the library dir so you won&#8217;t need to build the custom SMT code to push data to GridDB Cloud.<\/p>\n<h2>Implementation<\/h2>\n<p>To push data to GridDB Cloud via the Web API, you must make an HTTP Request with a data structure that the Web API expects. If you look at the <a href=\"https:\/\/github.com\/griddb\/webapi\/blob\/master\/GridDB_Web_API_Reference.md\">docs<\/a>, you will see that to push data into a container we need to ensure a couple of things: first, we need to make a <code>PUT<\/code> HTTP Request; second, the data must be set up as an array of arrays in the order of the schema. For example:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">[\n  [\"2025-01-16T10:25:00.253Z\", 100.5, \"normal\"],\n  [\"2025-01-16T10:35:00.691Z\", 173.9, \"normal\"],\n  [\"2025-01-16T10:45:00.032Z\", 173.9, null]\n]<\/code><\/pre>\n<\/div>\n<p>To get our Kafka messages to output payloads like this, we will need to write a custom <code>SMT<\/code>. 
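Before wiring Kafka into the pipeline, it can help to confirm this request shape by hand. Below is a minimal sketch: the endpoint URL, user, and password are placeholders (they mirror the sink config shown later in this post), and the live curl call is left commented out so nothing is sent by accident.

```shell
# Sketch only: the PUT request shape the Web API expects.
# GRIDDB_URL, user, and password are placeholders -- swap in your own
# GridDB Cloud endpoint and credentials before running the curl line.
GRIDDB_URL="https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud97/dbs/ZUlQ8/containers/kafka/rows"
PAYLOAD='[["2025-01-16T10:25:00.253Z", 100.5, "normal"],["2025-01-16T10:35:00.691Z", 173.9, "normal"]]'

# Sanity-check the payload locally before sending anything
echo "$PAYLOAD" | python3 -c 'import sys, json; json.load(sys.stdin)' && echo "payload ok"

# The request itself (uncomment to run against a live instance):
# curl -sS -X PUT "$GRIDDB_URL" \
#   -u "user:password" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

Validating the array-of-arrays body locally first makes it much easier to tell a malformed payload apart from an SSL or auth failure later on.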
Here&#8217;s an excellent article on how flexible and useful these can be: <a href=\"https:\/\/www.morling.dev\/blog\/single-message-transforms-swiss-army-knife-of-kafka-connect\/\">Single Message Transformations &#8211; The Swiss Army Knife of Kafka Connect<\/a>.<\/p>\n<p>Once we have the <code>SMT<\/code> finished, we can set up our SSL rules and certs and then make our connectors and topics via Confluent&#8217;s UI or through JSON files.<\/p>\n<h3>Single Message Transformations<\/h3>\n<p>The code to get this working is not very complicated: essentially, we want to take the object structure coming in from a typical Kafka message and transform it into an array of arrays with all of the values parsed out. We will ensure that the index positions match our schema outside of the context of the <code>SMT<\/code>.<\/p>\n<p>As mentioned earlier, the <code>.jar<\/code> file is included within this project so you don&#8217;t need to do anything else, but if you would like to build it yourself or make changes, you can use <code>mvn<\/code> to build it. Here is the core of the Java code (it&#8217;s also available in this repo in the <code>smt<\/code> directory).<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-java\">@Override\n    public R apply(R record) {\n        final Schema schema = operatingSchema(record);\n        \n        if (schema == null) {\n            final Map&lt;String, Object&gt; value = requireMapOrNull(operatingValue(record), PURPOSE);\n            return newRecord(record, null, value == null ? 
null : fieldPath.valueFrom(value));\n        } else {\n            final Struct value = requireStructOrNull(operatingValue(record), PURPOSE);\n            fieldNames = schema.fields(); \n\n            List&lt;List&lt;Object&gt;&gt; nestedArray = new ArrayList&lt;&gt;();\n            List&lt;Object&gt; row = new ArrayList&lt;&gt;();\n            for (Field f : fieldNames) {\n                String fName = f.name();\n                SingleFieldPath fPath = new SingleFieldPath(fName, FieldSyntaxVersion.V2);\n                row.add(fPath.valueFrom(value));\n            }\n            nestedArray.add(row);\n    \n            return newRecord(record, schema, value == null ? null : nestedArray);\n        }\n        \n    }<\/code><\/pre>\n<\/div>\n<p>The main method we will be using is this <code>apply<\/code> function. We extract all of the values from the incoming messages, remove the field names, and make a new array of arrays and return that new array. That&#8217;s it! Of course there&#8217;s more to it, but this is the important bit.<\/p>\n<p>Now that we&#8217;ve got the structure we need, let&#8217;s set up our connectors and SSL information.<\/p>\n<h3>Docker SSL Parameters<\/h3>\n<p>Because GridDB Cloud&#8217;s endpoint is SSL protected, we need to ensure that our Kafka broker and HTTP Sink have the proper SSL Certs in place to securely communicate with the endpoint. 
If we miss any part of the process, the connection will fail with various errors, including the dreaded <code>Handshake failed<\/code>.<\/p>\n<p>Based on the <code>docker-compose<\/code> file I used as the base for this project, we will need to add a number of SSL environment values for our broker and kafka-connect to get SSL working.<\/p>\n<p>Here are some of the values I added to the <code>broker<\/code> in order for it to get SSL working:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,  PLAINTEXT:PLAINTEXT,  PLAINTEXT_HOST:PLAINTEXT,  SSL:SSL'\n      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT:\/\/broker:29092,  PLAINTEXT_HOST:\/\/localhost:9092,  SSL:\/\/broker:9093'\n      KAFKA_SSL_KEYSTORE_FILENAME: kafka.kafka-1.keystore.pkcs12\n      KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka-1_keystore_creds\n      KAFKA_SSL_KEY_CREDENTIALS: kafka-1_sslkey_creds\n      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.client.truststore.jks\n      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka-1_trustore_creds\n      KAFKA_SECURITY_PROTOCOL: 'SSL'\n      KAFKA_SASL_MECHANISM: 'plain'\n      KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: \n      KAFKA_LISTENERS: 'PLAINTEXT:\/\/broker:29092,  CONTROLLER:\/\/broker:29093,  PLAINTEXT_HOST:\/\/0.0.0.0:9092,  SSL:\/\/broker:9093'<\/code><\/pre>\n<\/div>\n<p>On top of adding these values, we also needed to generate these certificate files and copy them to the docker containers using a mounted volume.<\/p>\n<h4>Generating SSL Certificates<\/h4>\n<p>First, let&#8217;s take a look at the <code>.pkcs12<\/code> file, which is the <code>SSL_KEYSTORE_FILE<\/code>. 
This is a file you can generate on your local machine; to do so, I followed a guide that gave me the following instructions:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ openssl req -new -nodes \\\n   -x509 \\\n   -days 365 \\\n   -newkey rsa:2048 \\\n   -keyout ca.key \\\n   -out ca.crt\n\n$ openssl req -new \\\n    -newkey rsa:2048 \\\n    -keyout kafka-1.key \\\n    -out kafka-1.csr \\\n    -nodes\n\n$ openssl x509 -req \\\n    -days 3650 \\\n    -in kafka-1.csr \\\n    -CA ca.crt \\\n    -CAkey ca.key \\\n    -CAcreateserial \\\n    -out kafka-1.crt \\\n    -extensions v3_req\n\n$ openssl pkcs12 -export \\\n    -in kafka-1.crt \\\n    -inkey kafka-1.key \\\n    -chain \\\n    -CAfile ca.pem \\\n    -name kafka-1 \\\n    -out kafka-1.p12 \\\n    -password pass:confluent\n\n$ keytool -importkeystore \\\n    -deststorepass confluent \\\n    -destkeystore kafka.kafka-1.keystore.pkcs12 \\\n    -srckeystore kafka-1.p12 \\\n    -deststoretype PKCS12 \\\n    -srcstoretype PKCS12 \\\n    -noprompt \\\n    -srcstorepass confluent<\/code><\/pre>\n<\/div>\n<p>With that out of the way, we will also need to tell our server that GridDB Cloud is trusted by grabbing its certs and importing them into our broker and connect containers. From the GridDB Cloud web dashboard, if you click on the lock icon in the browser, you can view\/manage the SSL certificates. From that menu, you can download the <code>.pem<\/code> files. Alternatively, you can use the CLI: <code>openssl s_client -showcerts -connect cloud5197.griddb.com:443<\/code>.<\/p>\n<p>With the output, you can save the portions that say <code>BEGIN CERTIFICATE<\/code> to <code>END CERTIFICATE<\/code> into a separate file. 
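That extraction step can also be scripted. The sketch below runs sed over sample text rather than a live connection; against the real server you would pipe in the output of the <code>openssl s_client</code> command above instead. Note it writes <code>ca.pem</code> in the current directory.

```shell
# Minimal sketch: keep only the certificate block from openssl s_client
# style output. SAMPLE_OUTPUT stands in for a live connection here.
SAMPLE_OUTPUT='depth=0 CN = cloud5197.griddb.com
-----BEGIN CERTIFICATE-----
MIIBsample-not-a-real-certificate
-----END CERTIFICATE-----
verify return:1'

# Print only the lines between (and including) the BEGIN/END markers
echo "$SAMPLE_OUTPUT" | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > ca.pem
cat ca.pem
```

With a live endpoint, the same sed filter applied to `openssl s_client -showcerts -connect cloud5197.griddb.com:443 </dev/null` yields the `.pem` file used in the keytool import that follows.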
Armed with this file, you can generate a truststore file to let your server know it&#8217;s a trusted location.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ keytool -import -trustcacerts -alias griddb-cloud-cert -file ca.pem -keystore kafka.client.truststore.jks -storepass confluent -v<\/code><\/pre>\n<\/div>\n<p>Now we have the two key files (<code>kafka.kafka-1.keystore.pkcs12<\/code> &amp;&amp; <code>kafka.client.truststore.jks<\/code>) needed for secure communication with GridDB Cloud &#8212; cool!<\/p>\n<h3>Connector Clients<\/h3>\n<p>This next step is where we actually tell our Kafka cluster what data we want streamed where. In this case, we will make a test topic with a simple schema of just three fields:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">{\n  \"connect.name\": \"net.griddb.webapi.griddb\",\n  \"connect.parameters\": {\n    \"io.confluent.connect.avro.field.doc.data\": \"The string is a unicode character sequence.\",\n    \"io.confluent.connect.avro.field.doc.temp\": \"The double type is a double precision (64-bit) IEEE 754 floating-point number.\",\n    \"io.confluent.connect.avro.field.doc.ts\": \"The int type is a 32-bit signed integer.\",\n    \"io.confluent.connect.avro.record.doc\": \"Sample schema to help you get started.\"\n  },\n  \"doc\": \"Sample schema to help you get started.\",\n  \"fields\": [\n    {\n      \"doc\": \"The int type is a 32-bit signed integer.\",\n      \"name\": \"ts\",\n      \"type\": \"int\"\n    },\n    {\n      \"doc\": \"The double type is a double precision (64-bit) IEEE 754 floating-point number.\",\n      \"name\": \"temp\",\n      \"type\": \"double\"\n    },\n    {\n      \"doc\": \"The string is a unicode character sequence.\",\n      \"name\": \"data\",\n      \"type\": \"double\"\n    }\n  ],\n  \"name\": \"griddb\",\n  \"namespace\": \"net.griddb.webapi\",\n  \"type\": \"record\"\n}<\/code><\/pre>\n<\/div>\n<p>Before we try pushing our data to 
GridDB Cloud, we will need to create our container inside our DB. You can use the Dashboard or simply send an HTTP request using Postman or cURL to create a container matching that schema. I&#8217;m calling mine <code>kafka<\/code>. In this case, I&#8217;m not going to make a Time Series container and will settle for a Collection container for educational purposes.<\/p>\n<p>We will then make a source connector provided by Confluent to generate mock data in the style of that schema.<\/p>\n<p>Once you have it set up, it looks like this in the dashboard:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/topic-messages.png\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/topic-messages.png\" alt=\"\" width=\"3242\" height=\"3278\" class=\"aligncenter size-full wp-image-31437\" srcset=\"\/wp-content\/uploads\/2025\/04\/topic-messages.png 3242w, \/wp-content\/uploads\/2025\/04\/topic-messages-297x300.png 297w, \/wp-content\/uploads\/2025\/04\/topic-messages-1013x1024.png 1013w, \/wp-content\/uploads\/2025\/04\/topic-messages-768x777.png 768w, \/wp-content\/uploads\/2025\/04\/topic-messages-1519x1536.png 1519w, \/wp-content\/uploads\/2025\/04\/topic-messages-2026x2048.png 2026w, \/wp-content\/uploads\/2025\/04\/topic-messages-600x607.png 600w\" sizes=\"(max-width: 3242px) 100vw, 3242px\" \/><\/a><\/p>\n<p>Next, we make a connector for the HTTP Sink which takes that source connector&#8217;s mock data and streams it out to the HTTP endpoint we configure (hint: it&#8217;s GridDB Cloud!). As the data moves from the source to the sink, we will of course apply our <code>SMT<\/code> to change the data into an array of arrays before pushing it to GridDB Cloud. 
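For reference, the container-creation request mentioned above can be sketched as follows. The body shape follows the GridDB Web API reference for Collection containers; the endpoint and credentials are placeholders, and the live curl call is commented out.

```shell
# Hedged sketch of the container-creation body for the Web API,
# matching our three-field schema as a Collection container.
# The endpoint and credentials below are placeholders.
CONTAINER_DEF='{
  "container_name": "kafka",
  "container_type": "COLLECTION",
  "rowkey": false,
  "columns": [
    {"name": "ts",   "type": "INTEGER"},
    {"name": "temp", "type": "DOUBLE"},
    {"name": "data", "type": "DOUBLE"}
  ]
}'

# Validate the body locally before sending it anywhere
echo "$CONTAINER_DEF" | python3 -c 'import sys, json; json.load(sys.stdin)' && echo "definition ok"

# Create the container (uncomment to run against a live instance):
# curl -sS -X POST "https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud97/dbs/ZUlQ8/containers" \
#   -u "user:password" -H "Content-Type: application/json" -d "$CONTAINER_DEF"
```

The column types here mirror the Avro schema above (the <code>data</code> field is a double in that schema); adjust them if you change the schema or opt for a Time Series container instead.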
And if we configured our SSL correctly, we should see our data inside our GridDB Cloud container.<\/p>\n<h4>Connector Client Values and Rules<\/h4>\n<p>To send the connectors to your Kafka cluster, you can either manually enter the values using the Kafka Control Center, which provides a nice UI for editing connectors, or simply take the <code>.json<\/code> files included with this repo and push them using cURL.<\/p>\n<p>Here are the values for the datagen which creates mock data for our GridDB Cloud to ingest:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">{\n  \"name\": \"web_api_datagen\",\n  \"config\": {\n    \"connector.class\": \"io.confluent.kafka.connect.datagen.DatagenConnector\",\n    \"kafka.topic\": \"griddb_test\",\n    \"schema.string\": \"{   \\\"connect.name\\\": \\\"net.griddb.webapi.griddb\\\",   \\\"connect.parameters\\\": {     \\\"io.confluent.connect.avro.field.doc.data\\\": \\\"The string is a unicode character sequence.\\\",     \\\"io.confluent.connect.avro.field.doc.temp\\\": \\\"The double type is a double precision (64-bit) IEEE 754 floating-point number.\\\",     \\\"io.confluent.connect.avro.field.doc.ts\\\": \\\"The int type is a 32-bit signed integer.\\\",     \\\"io.confluent.connect.avro.record.doc\\\": \\\"Sample schema to help you get started.\\\"   },   \\\"doc\\\": \\\"Sample schema to help you get started.\\\",   \\\"fields\\\": [     {       \\\"doc\\\": \\\"The int type is a 32-bit signed integer.\\\",       \\\"name\\\": \\\"ts\\\",       \\\"type\\\": \\\"int\\\"     },     {       \\\"doc\\\": \\\"The double type is a double precision (64-bit) IEEE 754 floating-point number.\\\",       \\\"name\\\": \\\"temp\\\",       \\\"type\\\": \\\"double\\\"     },     {       \\\"doc\\\": \\\"The string is a unicode character sequence.\\\",       \\\"name\\\": \\\"data\\\",       \\\"type\\\": \\\"double\\\"     }   ],   \\\"name\\\": \\\"griddb\\\",   \\\"namespace\\\": \\\"net.griddb.webapi\\\",   \\\"type\\\": \\\"record\\\" }\"\n  }\n}<\/code><\/pre>\n<\/div>\n<p>It is messy, but that&#8217;s because the schema string includes the raw, escaped string of 
the schema I shared earlier.<\/p>\n<p>And here are the values of the HTTP Sink itself:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">{\n  \"name\": \"griddb_web_api_sink\",\n  \"config\": {\n    \"connector.class\": \"io.confluent.connect.http.HttpSinkConnector\",\n    \"transforms\": \"nestedList\",\n    \"topics\": \"griddb_test\",\n    \"transforms.nestedList.type\": \"net.griddb.GridDBWebAPITransform$Value\",\n    \"transforms.nestedList.fields\": \"ts\",\n    \"http.api.url\": \"https:\/\/cloud5197.griddb.com\/griddb\/v2\/gs_clustermfcloud97\/dbs\/ZUlQ8\/containers\/kafka\/rows\",\n    \"request.method\": \"put\",\n    \"headers\": \"Content-Type: application\/json\",\n    \"auth.type\": \"basic\",\n    \"connection.user\": \"user\",\n    \"connection.password\": \"password\",\n    \"https.ssl.key.password\": \"confluent\",\n    \"https.ssl.keystore.key\": \"\",\n    \"https.ssl.keystore.location\": \"\/etc\/kafka\/secrets\/kafka.kafka-1.keystore.pkcs12\",\n    \"https.ssl.keystore.password\": \"confluent\",\n    \"https.ssl.truststore.location\": \"\/etc\/kafka\/secrets\/kafka.client.truststore.jks\",\n    \"https.ssl.truststore.password\": \"confluent\",\n    \"https.ssl.enabled.protocols\": \"\",\n    \"https.ssl.keystore.type\": \"PKCS12\",\n    \"https.ssl.protocol\": \"TLSv1.2\",\n    \"https.ssl.truststore.type\": \"JKS\",\n    \"reporter.result.topic.replication.factor\": \"1\",\n    \"reporter.error.topic.replication.factor\": \"1\",\n    \"reporter.bootstrap.servers\": \"broker:29092\"\n  }\n}<\/code><\/pre>\n<\/div>\n<p>Some important values here: the SSL values and certs, and the URL, which contains the container name (kafka in our case). We also have our basic authentication values in here, as well as our <code>SMT<\/code>. 
All of this information is crucial to ensure that our Kafka cluster streams our mock data to the proper place with zero errors.<\/p>\n<p>You can push these connectors using HTTP Requests:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">#!\/bin\/sh\n\ncurl -s \\\n     -X \"POST\" \"http:\/\/localhost:8083\/connectors\/\" \\\n     -H \"Content-Type: application\/json\" \\\n     -d '{\n    \"name\": \"griddb_web_api_sink\",\n    \"config\": {\n            \"connector.class\": \"io.confluent.connect.http.HttpSinkConnector\",\n            \"transforms\": \"nestedList\",\n            \"topics\": \"griddb_test\",\n            \"transforms.nestedList.type\": \"net.griddb.GridDBWebAPITransform$Value\",\n            \"transforms.nestedList.fields\": \"ts\",\n            \"http.api.url\": \"https:\/\/cloud5197.griddb.com\/griddb\/v2\/gs_clustermfcloud97\/dbs\/ZUlQ8\/containers\/kafka\/rows\",\n            \"request.method\": \"put\",\n            \"headers\": \"Content-Type: application\/json\",\n            \"auth.type\": \"basic\",\n            \"connection.user\": \"user\",\n            \"connection.password\": \"password\",\n            \"https.ssl.key.password\": \"confluent\",\n            \"https.ssl.keystore.key\": \"\",\n            \"https.ssl.keystore.location\": \"\/etc\/kafka\/secrets\/kafka.kafka-1.keystore.pkcs12\",\n            \"https.ssl.keystore.password\": \"confluent\",\n            \"https.ssl.truststore.location\": \"\/etc\/kafka\/secrets\/kafka.client.truststore.jks\",\n            \"https.ssl.truststore.password\": \"confluent\",\n            \"https.ssl.enabled.protocols\": \"\",\n            \"https.ssl.keystore.type\": \"PKCS12\",\n            \"https.ssl.protocol\": \"TLSv1.2\",\n            \"https.ssl.truststore.type\": \"JKS\",\n            \"reporter.result.topic.replication.factor\": \"1\",\n            \"reporter.error.topic.replication.factor\": \"1\",\n            \"reporter.bootstrap.servers\": \"broker:29092\"\n        }\n  
  }'<\/code><\/pre>\n<\/div>\n<p>And then the same thing for the source connector. The main thing to take away from this section is the values you need to enter to successfully push your data from Kafka to GridDB Cloud. For example, you can see in the transforms section that we are using the <code>SMT<\/code> we wrote and built earlier.<\/p>\n<h2>Results<\/h2>\n<p>First, let&#8217;s take a look at our logs to see if our data is going through:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker logs -f connect<\/code><\/pre>\n<\/div>\n<p>Here you should see some sort of output. You can also check your Control Center and ensure that the GridDB Web API Sink doesn&#8217;t have any errors. For me, this is what it looks like:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/no-error-control-center.png\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/no-error-control-center.png\" alt=\"\" width=\"3232\" height=\"1328\" class=\"aligncenter size-full wp-image-31436\" srcset=\"\/wp-content\/uploads\/2025\/04\/no-error-control-center.png 3232w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-300x123.png 300w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-1024x421.png 1024w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-768x316.png 768w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-1536x631.png 1536w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-2048x842.png 2048w, \/wp-content\/uploads\/2025\/04\/no-error-control-center-600x247.png 600w\" sizes=\"(max-width: 3232px) 100vw, 3232px\" \/><\/a><\/p>\n<p>And then of course, let&#8217;s check our GridDB dashboard to ensure our data is being routed to the correct container:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/griddb-query.png\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/griddb-query.png\" alt=\"\" width=\"2574\" 
height=\"3056\" class=\"aligncenter size-full wp-image-31435\" srcset=\"\/wp-content\/uploads\/2025\/04\/griddb-query.png 2574w, \/wp-content\/uploads\/2025\/04\/griddb-query-253x300.png 253w, \/wp-content\/uploads\/2025\/04\/griddb-query-862x1024.png 862w, \/wp-content\/uploads\/2025\/04\/griddb-query-768x912.png 768w, \/wp-content\/uploads\/2025\/04\/griddb-query-1294x1536.png 1294w, \/wp-content\/uploads\/2025\/04\/griddb-query-1725x2048.png 1725w, \/wp-content\/uploads\/2025\/04\/griddb-query-600x712.png 600w\" sizes=\"(max-width: 2574px) 100vw, 2574px\" \/><\/a><\/p>\n<h2>Conclusion<\/h2>\n<p>And with that, we have successfully pushed data from Kafka over to GridDB Cloud. For some next steps, you could try chaining SMTs to convert the mock data TS into timestamps that GridDB can understand and push to a time series container.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>As we have discussed before, Kafka is an invaluable tool when dealing with certain IoT workloads. Kafka can guarantee a robust pipeline of streaming your sensor data into almost anywhere due to its high flexibility and various connectors. 
And indeed, we have previously written articles about using GridDB&#8217;s official Kafka Source &amp; Sink connectors to [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":52092,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-52091","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Pushing Data to GridDB Cloud with Kafka HTTP Sink Connector | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"As we have discussed before, Kafka is an invaluable tool when dealing with certain IoT workloads. Kafka can guarantee a robust pipeline of streaming your\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Pushing Data to GridDB Cloud with Kafka HTTP Sink Connector | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"As we have discussed before, Kafka is an invaluable tool when dealing with certain IoT workloads. 
Kafka can guarantee a robust pipeline of streaming your\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-04T07:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.griddb.net\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_xttp0wxttp0wxttp.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2048\" \/>\n\t<meta property=\"og:image:height\" content=\"2048\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Israel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Israel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"12 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\"},\"author\":{\"name\":\"Israel\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\"},\"headline\":\"Pushing Data to GridDB Cloud with Kafka HTTP Sink Connector\",\"datePublished\":\"2025-04-04T07:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\"},\"wordCount\":1512,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_xttp0wxttp0wxttp.jpg\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\",\"url\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\",\"name\":\"Pushing Data to GridDB Cloud with Kafka HTTP Sink Connector | GridDB: Open Source Time Series Database for IoT\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_xttp0wxttp0wxttp.jpg\",\"datePublished\":\"2025-04-04T07:00:00+00:00\",\"description\":\"As we have discussed before, Kafka is an invaluable tool when dealing with certain IoT workloads. 
Kafka can guarantee a robust pipeline of streaming your\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/blog\/kafka_http\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_xttp0wxttp0wxttp.jpg\",\"contentUrl\":\"\/wp-content\/uploads\/2025\/12\/Gemini_Generated_Image_xttp0wxttp0wxttp.jpg\",\"width\":2048,\"height\":2048},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.griddb.net\/en\/#website\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of SQL\",\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.griddb.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"