{"id":46797,"date":"2024-04-12T00:00:00","date_gmt":"2024-04-12T07:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/"},"modified":"2025-11-13T12:56:56","modified_gmt":"2025-11-13T20:56:56","slug":"using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka","status":"publish","type":"post","link":"https:\/\/www.griddb.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/","title":{"rendered":"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka"},"content":{"rendered":"<p>With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to hook up a generic Kafka JDBC connector with GridDB out of the box. Prior to this release, we could only insert one Kafka message at a time, but now we can batch insert up to 1,000 data points at a time.<\/p>\n<p>In this article, we will walk through setting up Kafka with GridDB. We have covered this topic before, as seen here: <a href=\"https:\/\/docs.griddb.net\/tutorial\/kafka\/#setup-kafka\">Data Ingestion<\/a>. In those docs, we set up GridDB with Kafka but were limited to a batch size of just 1. This time around, we will set the batch size to 1,000.<\/p>\n<p>As a note: we did need to slightly alter the official GridDB Kafka Connector to work as a generic JDBC connector. The repo for that can be found here: https:\/\/github.com\/Imisrael\/jdbc\/tree\/master.<\/p>\n<p>So, to get this project up and running, we will need to install and set up the Kafka broker, kafka-connect, ZooKeeper, the Kafka schema registry, and GridDB. Once those are installed and running, we will also need to use the kafka-connect service to add the generic JDBC library and the GridDB JDBC library. 
Next, we will need to set up our Kafka Sink properties so that our Kafka instances know what kind of data we are handling and where it should go (to GridDB via JDBC). And lastly, we will need to create our Kafka topics, push some simulated data onto them, and verify that the data flows out into GridDB.<\/p>\n<h2>How to Run Project<\/h2>\n<p>If you would like to run this project, you can simply use docker compose to run all of the associated Kafka services; these containers also come bundled with their respective library files, which brings us to our prerequisites.<\/p>\n<h3>Source Code<\/h3>\n<p>The source code can be found on GitHub: https:\/\/github.com\/griddbnet\/Blogs<\/p>\n<p><code>$ git clone https:\/\/github.com\/griddbnet\/Blogs.git --branch jdbc_sink<\/code><\/p>\n<p>You may also need to build your own GridDB JDBC Connector. To do so, clone the repo here: <a href=\"https:\/\/github.com\/Imisrael\/jdbc\/tree\/master\">https:\/\/github.com\/Imisrael\/jdbc\/tree\/master<\/a> and then build the jar from source. 
Once you have that library file, you can include it as part of your build process for this project.<\/p>\n<h3>Prerequisites<\/h3>\n<p>You can of course run this project by manually installing Kafka, ZooKeeper, etc., but this project was built using Docker, so the only true requirement is Docker.<\/p>\n<p>You can download Docker from their website: <a href=\"https:\/\/docs.docker.com\/get-docker\/\">https:\/\/docs.docker.com\/get-docker\/<\/a>.<\/p>\n<h3>Building &amp; Running<\/h3>\n<p>Running this project requires the following steps: build the docker images, start them, and finally push some data into the relevant topics to see it flushed out into your GridDB server (which is also running via Docker container).<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker compose build\n$ docker compose up -d<\/code><\/pre>\n<\/div>\n<p>And then once everything is running (namely the broker), you can run the Python script to create some Kafka topics and push data onto them:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ cd gateway-sim\n$ python3 -m pip install kafka-python\n$ python3 kafka_producer.py<\/code><\/pre>\n<\/div>\n<p>If all goes well, you should be able to see new tables created for you in your Docker GridDB container:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker exec -it griddb-server gs_sh\ngs> searchcontainer<\/code><\/pre>\n<\/div>\n<p>And these tables should be populated with the data created by our Python script.<\/p>\n<h2>Overview<\/h2>\n<p>As explained above, all of these various services are built and run via docker compose; the nice thing about using a single compose file for all of our services is that Docker will automatically place them all on the same shared network. This means that our Kafka broker service can already communicate with ZooKeeper and vice versa. 
This also means that our GridDB server is available to connect to via port <code>20001<\/code> (its SQL port) so that we may flush our Kafka data directly into it.<\/p>\n<p>You can take a look at the <code>docker-compose.yml<\/code> file to see how these various services are started, what images they are pulled from, and what kind of configuration we have set up. Mostly you just need to know that Kafka is doing most of the heavy lifting here. To tell Kafka where to push its data topics, we need to create what is known as a JDBC Sink configuration file. This file contains all of the parameters we wish to employ when setting up our data flow. So next, let&#8217;s take a look at how we create and apply this config file.<\/p>\n<h3>JDBC Sink Config<\/h3>\n<p>Our kafka-connect service is responsible for handling our third-party integrations (JDBC in this case), so we will need to push our config file there and apply it. The service ships with a REST API which allows us to push JSON files onto it; when it receives a JSON config, it applies that config to your Kafka service and then handles all of our data flow.<\/p>\n<p>You can push up a JSON file whenever you&#8217;d like, but we will set it up so that once the kafka-connect service is ready, it will push the JSON file. 
Here&#8217;s a look at the docker compose entry:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">  kafka-connect:\n    image: confluentinc\/cp-kafka-connect:latest\n    container_name: kafka-connect\n    ports:\n      - '8083:8083'\n    environment:\n      CONNECT_BOOTSTRAP_SERVERS: 'broker:9092'\n      CONNECT_REST_PORT: 8083\n      CONNECT_GROUP_ID: device\n      CONNECT_CONFIG_STORAGE_TOPIC: device-config\n      CONNECT_OFFSET_STORAGE_TOPIC: device-offsets\n      CONNECT_STATUS_STORAGE_TOPIC: device-status\n      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter\n      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: true\n      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: true\n      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http:\/\/kafka-schema-registry:8082'\n      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http:\/\/kafka-schema-registry:8082'\n      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect\n      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: '[%d] %p %X{connector.context}%m (%c:%L)%n'\n      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: '1'\n      CONNECT_PLUGIN_PATH: >-\n        \/usr\/share\/java,\/etc\/kafka-connect\/jars,\/usr\/share\/confluent-hub-components,\/usr\/share\/confluent-hub-components\/confluentinc-kafka-connect-jdbc\/lib\n      CLASSPATH: >-\n        \/usr\/share\/java,\/etc\/kafka-connect\/jars,\/usr\/share\/confluent-hub-components,\/usr\/share\/confluent-hub-components\/confluentinc-kafka-connect-jdbc\/lib\n    volumes:\n      - '.\/scripts:\/scripts'\n      - '.\/kafka-connect\/connectors:\/etc\/kafka-connect\/jars\/'\n      \n    depends_on:\n      - 
zookeeper\n      - broker\n      - kafka-schema-registry\n      - griddb-server\n    command:\n      - bash\n      - '-c'\n      - >\n        \/etc\/confluent\/docker\/run & \n\n        echo \"Waiting for Kafka Connect to start listening on kafka-connect\"\n\n        while [ $$(curl -s -o \/dev\/null -w %{http_code}\n        http:\/\/kafka-connect:8083\/connectors) -eq 000 ] ; do \n          echo -e $$(date) \" Kafka Connect listener HTTP state: \" $$(curl -s -o \/dev\/null -w %{http_code} http:\/\/kafka-connect:8083\/connectors) \" (waiting for 200)\"\n          sleep 5 \n        done\n\n        nc -vz kafka-connect 8083\n\n        echo -e \"\\n--\\n+> Creating Kafka Connect GridDB sink\"\n\n        \/scripts\/create-griddb-sink.sh\n\n        sleep infinity     <\/code><\/pre>\n<\/div>\n<p>In the command section of this entry, you can see that we are checking kafka-connect (itself!) and waiting for the service to be ready (a 200 response to our HTTP request). Once it&#8217;s ready, we run a script which sends over a JSON object in the body of an HTTP request. 
Here is what that script looks like:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">#!\/bin\/sh\ncurl -s \\\n     -X \"POST\" \"http:\/\/localhost:8083\/connectors\/\" \\\n     -H \"Content-Type: application\/json\" \\\n     -d '{\n            \"name\": \"test-sink\", \n            \"config\": {\n                \"connector.class\":\"io.confluent.connect.jdbc.JdbcSinkConnector\",\n                \"tasks.max\":\"1\",\n                \"topics.regex\": \"meter.(.*)\",\n                \"table.name.format\": \"kafka_${topic}\",\n                \"dialect.name\": \"PostgreSqlDatabaseDialect\",\n                \"transforms\": \"TimestampConverter\",\n                \"transforms.TimestampConverter.type\": \"org.apache.kafka.connect.transforms.TimestampConverter$Value\",\n                \"transforms.TimestampConverter.format\": \"yyyy-MM-dd hh:mm:ss.SSS\",\n                \"transforms.TimestampConverter.field\": \"timestamp\",\n                \"transforms.TimestampConverter.target.type\": \"string\",\n                \"time.precision.mode\": \"connect\",\n                \"connection.url\":\"jdbc:gs:\/\/griddb-server:20001\/myCluster\/public\",\n                \"connection.user\": \"admin\",\n                \"connection.password\": \"admin\",\n                \"batch.size\": \"1000\",\n                \"auto.create\":\"true\",\n                \"pk.mode\" : \"none\",\n                \"insert.mode\": \"insert\",\n                \"auto.evolve\": \"true\"\n            }\n}'<\/code><\/pre>\n<\/div>\n<p>This is the information that will connect our GridDB server (container) with our running Kafka service through JDBC. Some of these entries are self-explanatory, such as the connection URL, user, and password, so I will go over some of the lesser-known options.<\/p>\n<p>For <code>topics.regex<\/code>, we are telling our Sink connector which topics to subscribe to. 
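<\/p>\n<p>As a quick illustration (the regex and format string below simply mirror the config above), a few lines of Python show how <code>topics.regex<\/code> and <code>table.name.format<\/code> interact. Note that the <code>.<\/code> in <code>meter.(.*)<\/code> matches any single character, which is why a topic named <code>meters<\/code> would also be captured:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import re\n\n# Values copied from the sink config above\ntopics_regex = r\"meter.(.*)\"\ntable_name_format = \"kafka_${topic}\"\n\nfor topic in [\"meter_0\", \"meters\", \"sensor_0\"]:\n    if re.fullmatch(topics_regex, topic):\n        # The connector substitutes the topic name into the format string\n        table = table_name_format.replace(\"${topic}\", topic)\n        print(topic, \"->\", table)\n# meter_0 and meters match; sensor_0 is ignored<\/code><\/pre>\n<\/div>\n<p>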
We will push data onto these topics via other means, and we expect our Sink connector to find that data and push it out to our connection URL. The entries related to <code>transforms<\/code> take the <code>timestamp<\/code> field from the meter topics and convert it into the timestamp format expected by the database.<\/p>\n<p>Once you have pushed this information to kafka-connect, you can make sure it&#8217;s there by querying port 8083:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ curl http:\/\/localhost:8083\/connectors\n[\"test-sink\"]<\/code><\/pre>\n<\/div>\n<h3>Producing Data for Kafka Topics<\/h3>\n<p>We have successfully <code>subscribed<\/code> our Kafka sink connector to any topics which start with <code>meter<\/code>. Now let&#8217;s <code>produce<\/code> some data and send it to those topics. You can do this in any number of ways, but here we will use a simple Python script which creates 10 different topics and pushes timestamped data to all of them. 
Because our JDBC connector is subscribed to those topics, it will detect changes in those topics and eventually push that data into GridDB.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import datetime\nimport json\nimport random\nimport time\n\ndef produce(meterid, usagemodel=None):\n    # p (the KafkaProducer) and register_device() are defined\n    # elsewhere in the full script\n    ts = datetime.datetime.now()-datetime.timedelta(days=100)\n    register_device(meterid)\n\n    base_temp = random.uniform(-10,40)\n    base_kwh = random.uniform(0,2)\n    while True:\n        now = ts.strftime('%Y-%m-%d %H:%M:%S.%f')\n        data = {\n            \"payload\": \n            {\n                'timestamp': now,\n                'kwh': base_kwh+random.uniform(-.2, 2),\n                'temp': base_temp+random.uniform(-5, 5) \n            },\n            \"schema\": \n            {\n                \"fields\": [ \n                    { \"field\": \"timestamp\", \"optional\": False, \"type\": \"string\" },\n                    { \"field\": \"kwh\", \"optional\": False, \"type\": \"double\" }, \n                    { \"field\": \"temp\", \"optional\": False, \"type\": \"double\" } \n                ], \n                \"name\": \"iot\", \"optional\": False, \"type\": \"struct\" \n            }    \n         }\n        ts = ts + datetime.timedelta(minutes=60)\n        if ts > datetime.datetime.now():\n            # caught up to the present: emit one reading per hour from now on\n            time.sleep(3600)\n\n        m = json.dumps(data, indent=4, sort_keys=True, default=str)\n        p.send(\"meter_\"+str(meterid), m.encode('utf-8'))\n        print(\"meter_\"+str(meterid), data['payload'])<\/code><\/pre>\n<\/div>\n<p>This is the function which creates our data and sends it to topics labeled <code>meter_${num}<\/code>. 
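<\/p>\n<p>The snippet references a KafkaProducer <code>p<\/code> and a <code>register_device<\/code> helper that are defined elsewhere in the repo&#8217;s script. As a minimal, self-contained sketch of the important part&#8212;the envelope that the JSON converter expects when schemas are enabled&#8212;the record construction can be isolated like this; <code>make_record<\/code> is a hypothetical helper, and the commented-out send assumes kafka-python and a reachable broker:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import json\n\ndef make_record(timestamp, kwh, temp):\n    # Envelope expected by the JSON converter with schemas enabled:\n    # the payload itself plus an inline schema describing each field\n    return {\n        \"payload\": {\"timestamp\": timestamp, \"kwh\": kwh, \"temp\": temp},\n        \"schema\": {\n            \"fields\": [\n                {\"field\": \"timestamp\", \"optional\": False, \"type\": \"string\"},\n                {\"field\": \"kwh\", \"optional\": False, \"type\": \"double\"},\n                {\"field\": \"temp\", \"optional\": False, \"type\": \"double\"},\n            ],\n            \"name\": \"iot\", \"optional\": False, \"type\": \"struct\",\n        },\n    }\n\nrecord = make_record(\"2024-01-01 00:00:00.000000\", 1.5, 21.0)\nencoded = json.dumps(record).encode(\"utf-8\")\n\n# With a running broker you would then send it, e.g.:\n# from kafka import KafkaProducer\n# p = KafkaProducer(bootstrap_servers=\"broker:9092\")\n# p.send(\"meter_0\", encoded)<\/code><\/pre>\n<\/div>\n<p>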
Our <code>fields<\/code> entry is the schema which is pushed onto GridDB.<\/p>\n<p>Once you run this script, before checking GridDB itself, you can check the topic like so:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker-compose exec broker kafka-console-consumer --bootstrap-server broker:9092 --topic meter_0 --from-beginning<\/code><\/pre>\n<\/div>\n<p>This will show all of your data from the Python script.<\/p>\n<p>And then next, we can of course check our actual GridDB instance:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ docker exec -it griddb-server gs_sh\ngs[public]> searchcontainer\nkafka_meter_0\nkafka_meter_1\nkafka_meter_2\nkafka_meter_3\nkafka_meter_4\nkafka_meter_5\nkafka_meter_6\nkafka_meter_7\nkafka_meter_8\nkafka_meter_9\nkafka_meters\ngs[public]> select * from kafka_meter_0;\n2,400 results. (4 ms)\ngs[public]> get 10\n+-------------------------+--------------------+--------------------+\n| timestamp               | kwh                | temp               |\n+-------------------------+--------------------+--------------------+\n| 2023-11-15 01:43:52.299 | 2.8875713817453637 | 38.9091116816826   |\n| 2023-11-15 02:43:52.299 | 1.8928477563702992 | 37.183344440257784 |\n| 2023-11-15 03:43:52.299 | 1.3057612343055085 | 37.9251109201419   |\n| 2023-11-15 04:43:52.299 | 1.1172883739759085 | 40.43478215590419  |\n| 2023-11-15 05:43:52.299 | 1.6667172633034288 | 36.82843364324471  |\n| 2023-11-15 06:43:52.299 | 2.5131139241648173 | 38.50469053566042  |\n| 2023-11-15 07:43:52.299 | 2.0608077564559095 | 38.62901305523018  |\n| 2023-11-15 08:43:52.299 | 2.9945117256967295 | 39.854084974922834 |\n| 2023-11-15 09:43:52.299 | 1.8693091828037747 | 41.15482986965948  |\n| 2023-11-15 10:43:52.299 | 1.0284230878567477 | 37.05776090626771  |\n+-------------------------+--------------------+--------------------+\nThe 10 results had been acquired.<\/code><\/pre>\n<\/div>\n<h2>Conclusion<\/h2>\n<p>In this article we have gone over how 
to set up Kafka to push data into your GridDB server using just JDBC and Docker.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to hook up a generic Kafka JDBC connector with GridDB out of the box. Prior to this release, we could only insert one Kafka message at a time, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":30063,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-46797","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"With the release of GridDB v5.5, GridDB has added SQL batch inserts. 
This release is great for many reasons, but a very clear benefit is being able to\" \/>\n<meta property=\"og:url\" content=\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-12T07:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-13T20:56:56+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.griddb.net\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1536\" \/>\n\t<meta property=\"og:image:height\" content=\"1536\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Israel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Israel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\"},\"author\":{\"name\":\"Israel\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\"},\"headline\":\"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka\",\"datePublished\":\"2024-04-12T07:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\"},\"wordCount\":1206,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\",\"url\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/
\",\"name\":\"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for IoT\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg\",\"datePublished\":\"2024-04-12T07:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:56+00:00\",\"description\":\"With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg\",\"contentUrl\":\"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg\",\"width\":1536,\"height\":1536},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.griddb.net\/en\/#website\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL\",\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.griddb.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\",\"name\":\"Israel\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g\",\"caption\":\"Israel\"},\"url\":\"https:\/\/www.griddb.net\/en\/author\/israel\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for IoT","description":"With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/","og_locale":"en_US","og_type":"article","og_title":"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for IoT","og_description":"With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to","og_url":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/","og_site_name":"GridDB: Open Source Time Series Database for IoT","article_publisher":"https:\/\/www.facebook.com\/griddbcommunity\/","article_published_time":"2024-04-12T07:00:00+00:00","article_modified_time":"2025-11-13T20:56:56+00:00","og_image":[{"width":1536,"height":1536,"url":"https:\/\/www.griddb.net\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg","type":"image\/jpeg"}],"author":"Israel","twitter_card":"summary_large_image","twitter_creator":"@GridDBCommunity","twitter_site":"@GridDBCommunity","twitter_misc":{"Written by":"Israel","Est. 
reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#article","isPartOf":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/"},"author":{"name":"Israel","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740"},"headline":"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka","datePublished":"2024-04-12T07:00:00+00:00","dateModified":"2025-11-13T20:56:56+00:00","mainEntityOfPage":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/"},"wordCount":1206,"commentCount":0,"publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"image":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg","articleSection":["Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/","url":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/","name":"Using SQL Batch Inserts with GridDB v5.5, JDBC, and Kafka | GridDB: Open Source Time Series Database for 
IoT","isPartOf":{"@id":"https:\/\/www.griddb.net\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage"},"image":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg","datePublished":"2024-04-12T07:00:00+00:00","dateModified":"2025-11-13T20:56:56+00:00","description":"With the release of GridDB v5.5, GridDB has added SQL batch inserts. This release is great for many reasons, but a very clear benefit is being able to","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/blog\/using-sql-batch-inserts-with-griddb-v5-5-jdbc-and-kafka\/#primaryimage","url":"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg","contentUrl":"\/wp-content\/uploads\/2024\/04\/Gemini_Generated_Image_x96tskx96tskx96t.jpg","width":1536,"height":1536},{"@type":"WebSite","@id":"https:\/\/www.griddb.net\/en\/#website","url":"https:\/\/www.griddb.net\/en\/","name":"GridDB: Open Source Time Series Database for IoT","description":"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL","publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.griddb.net\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.griddb.net\/en\/#organization","name":"Fixstars","url":"https:\/\/www.griddb.net\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/","url":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","contentUrl":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","width":200,"height":83,"caption":"Fixstars"},"image":{"@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/griddbcommunity\/","https:\/\/x.com\/GridDBCommunity","https:\/\/www.linkedin.com\/company\/griddb-by-toshiba"]},{"@type":"Person","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740","name":"Israel","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g","caption":"Israel"},"url":"https:\/\/www.griddb.net\/en\/author\/israel\/"}]}},"_links":{"self":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46797","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/us
ers\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/comments?post=46797"}],"version-history":[{"count":1,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46797\/revisions"}],"predecessor-version":[{"id":51459,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/46797\/revisions\/51459"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media\/30063"}],"wp:attachment":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media?parent=46797"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/categories?post=46797"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/tags?post=46797"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}