{"id":52169,"date":"2025-07-02T00:00:00","date_gmt":"2025-07-02T07:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/"},"modified":"2026-03-30T11:41:08","modified_gmt":"2026-03-30T18:41:08","slug":"automated-speech-dubbing-using-gpt-4o-audio-and-node-js","status":"publish","type":"post","link":"https:\/\/www.griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/","title":{"rendered":"Automated Speech Dubbing Using GPT-4o Audio and Node.js"},"content":{"rendered":"<h2>What This Blog is About<\/h2>\n<p>Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often fall short\u2014they\u2019re too slow, prone to errors, and struggle to scale effectively. For instance, human-based translation can introduce subjective inaccuracies, while manual dubbing processes frequently fail to keep pace with real-time demands or large-scale projects. However, advancements in AI have revolutionized audio translation, making it faster and more accurate.<\/p>\n<p>This blog provides a step-by-step guide to building an automated dubbing system. Using GPT-4o Audio for context-aware audio translations, Node.js for data handling, and GridDB for scalable storage, you\u2019ll learn how to process speech, translate it, and deliver dubbed audio instantly. Throughout this guide, the term &#8220;speech&#8221; is used interchangeably with &#8220;audio.&#8221;<\/p>\n<h2>Prerequisites<\/h2>\n<p>You should have access to the <a href=\"https:\/\/platform.openai.com\/docs\/models#gpt-4o-realtime\">GPT-4o Audio<\/a> models. 
Also, you should give the app permission to use the microphone in the browser.<\/p>\n<h2>How to Run the App<\/h2>\n<p>The source code for this project is available in this <a href=\"https:\/\/github.com\/junwatu\/speech-dubbing-griddb\">repository<\/a>. You don&#8217;t need to clone it to run the app, as the working application is already dockerized. However, to run the project, you need <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\">Docker<\/a> installed.<\/p>\n<p>Please note that this app has been tested on ARM machines such as the Apple MacBook M1 and M2. While it is optimized for the ARM architecture, it is possible to run it on non-ARM machines with minor modifications, such as using a different GridDB Docker image for x86 systems.<\/p>\n<h3>1. <code>.env<\/code> Setup<\/h3>\n<p>Create an empty directory, for example, <code>speech-dubbing<\/code>, and change to that directory:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">mkdir speech-dubbing\ncd speech-dubbing<\/code><\/pre>\n<\/div>\n<p>Create a <code>.env<\/code> file with these keys:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">OPENAI_API_KEY=\nGRIDDB_CLUSTER_NAME=myCluster\nGRIDDB_USERNAME=admin\nGRIDDB_PASSWORD=admin\nIP_NOTIFICATION_MEMBER=griddb-server:10001\nVITE_APP_BASE_URL=http:\/\/localhost\nVITE_PORT=3000<\/code><\/pre>\n<\/div>\n<p>To get the <code>OPENAI_API_KEY<\/code>, please read this <a href=\"#openai-api-key\">section<\/a>.<\/p>\n<h3>2. 
Docker Compose Configuration<\/h3>\n<p>Before running the app, create a <code>docker-compose.yml<\/code> file with these configuration settings:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">networks:\n  griddb-net:\n    driver: bridge\n\nservices:\n  griddb-server:\n    image: griddbnet\/griddb:arm-5.5.0\n    container_name: griddb-server\n    environment:\n      - GRIDDB_CLUSTER_NAME=${GRIDDB_CLUSTER_NAME}\n      - GRIDDB_PASSWORD=${GRIDDB_PASSWORD}\n      - GRIDDB_USERNAME=${GRIDDB_USERNAME}\n      - NOTIFICATION_MEMBER=1\n      - IP_NOTIFICATION_MEMBER=${IP_NOTIFICATION_MEMBER}\n    networks:\n      - griddb-net\n    ports:\n      - \"10001:10001\"\n\n  speech-dubber:\n    image: junwatu\/speech-dubber:1.2\n    container_name: speech-dubber-griddb\n    env_file: .env\n    networks:\n      - griddb-net\n    ports:\n      - \"3000:3000\"<\/code><\/pre>\n<\/div>\n<h3>3. Run<\/h3>\n<p>When steps 1 and 2 are finished, run the app with this command:<\/p>\n<div class=\"clipboard\">\n<pre><code>docker-compose up -d<\/code><\/pre>\n<\/div>\n<p>If everything is running, you will see output similar to this:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">[+] Running 3\/3\n \u2714 Network tmp_griddb-net          Created                                                0.0s \n \u2714 Container speech-dubber-griddb  Started                                                0.2s \n \u2714 Container griddb-server         Started                                                0.2s<\/code><\/pre>\n<\/div>\n<h3>4. 
Test the Speech Dubber App<\/h3>\n<p>These are the steps to use the app:<\/p>\n<ol>\n<li><strong>Open the App<\/strong>: Open your browser and navigate to <a href=\"http:\/\/localhost:3000\">http:\/\/localhost:3000<\/a>.<\/li>\n<li><strong>Start Recording<\/strong>: Click the record button.<\/li>\n<li><strong>Allow Microphone Access<\/strong>: When prompted by the browser, click \u201cAllow this time.\u201d<\/li>\n<li><strong>Speak<\/strong>: Record your message in English.<\/li>\n<li><strong>Stop Recording<\/strong>: Click the stop button when done. Wait while the app processes and translates your audio.<\/li>\n<li><strong>Play the Translation<\/strong>: Use the playback controls to listen to the translated Japanese audio.<\/li>\n<\/ol>\n<p>The demo below summarizes all the steps:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/speech-dubber-demo.gif\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/speech-dubber-demo.gif\" alt=\"\" width=\"1816\" height=\"1080\" class=\"aligncenter size-full wp-image-31837\" \/><\/a><\/p>\n<h2>Environment Setup<\/h2>\n<h3><strong>OpenAI API Key<\/strong><\/h3>\n<p>You can create a new OpenAI project or use an existing one, then create an OpenAI API key <a href=\"https:\/\/platform.openai.com\/api-keys\">here<\/a>. You will need to save this key in the <code>.env<\/code> file.<\/p>\n<p>By default, OpenAI restricts these models from public access even if you have a valid key. 
You also need to enable these models in the OpenAI project settings:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models.webp\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models.webp\" alt=\"\" width=\"2254\" height=\"630\" class=\"aligncenter size-full wp-image-31833\" srcset=\"\/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models.webp 2254w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-300x84.webp 300w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-1024x286.webp 1024w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-768x215.webp 768w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-1536x429.webp 1536w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-2048x572.webp 2048w, \/wp-content\/uploads\/2025\/07\/gpt-4o-realtime-audio-models-600x168.webp 600w\" sizes=\"(max-width: 2254px) 100vw, 2254px\" \/><\/a><\/p>\n<h3>Docker<\/h3>\n<p>For easy development and distribution, this project uses a Docker container to &#8220;package&#8221; the application. For easy Docker installation, use the <a href=\"https:\/\/www.docker.com\/products\/docker-desktop\/\">Docker Desktop<\/a> tool.<\/p>\n<h4>GridDB Docker<\/h4>\n<p>This app needs a GridDB server, and it must be running before the app starts. In this project, we will use the GridDB Docker image for ARM machines. 
To test the GridDB on your local machine, you can run these docker commands:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-bash\">docker network create griddb-net\ndocker pull griddbnet\/griddb:arm-5.5.0\ndocker run --name griddb-server \\\n    --network griddb-net \\\n    -e GRIDDB_CLUSTER_NAME=myCluster \\\n    -e GRIDDB_PASSWORD=admin \\\n    -e NOTIFICATION_MEMBER=1 \\\n    -d -t griddbnet\/griddb:arm-5.5.0<\/code><\/pre>\n<\/div>\n<p>With Docker Desktop, you can easily check whether the GridDB container is running.<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/griddb-docker-arm-scaled.webp\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/griddb-docker-arm-scaled.webp\" alt=\"\" width=\"2560\" height=\"1330\" class=\"aligncenter size-full wp-image-31834\" srcset=\"\/wp-content\/uploads\/2025\/07\/griddb-docker-arm-scaled.webp 2560w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-300x156.webp 300w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-1024x532.webp 1024w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-768x399.webp 768w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-1536x798.webp 1536w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-2048x1064.webp 2048w, \/wp-content\/uploads\/2025\/07\/griddb-docker-arm-600x312.webp 600w\" sizes=\"(max-width: 2560px) 100vw, 2560px\" \/><\/a><\/p>\n<p>For more about GridDB Docker on ARM, please check out this <a href=\"https:\/\/griddb.net\/en\/blog\/griddb-on-arm-with-docker\/\">blog<\/a>.<\/p>\n<h3>Development<\/h3>\n<p>If you are a curious developer or want to extend the project, you can clone and examine the <a href=\"https:\/\/github.com\/junwatu\/speech-dubbing-griddb\">project source code<\/a>. 
At a minimum, you must have Node.js, FFmpeg, and GridDB installed on your system.<\/p>\n<h2>System Architecture<\/h2>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/system-arch.webp\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/system-arch.webp\" alt=\"\" width=\"861\" height=\"670\" class=\"aligncenter size-full wp-image-31839\" srcset=\"\/wp-content\/uploads\/2025\/07\/system-arch.webp 861w, \/wp-content\/uploads\/2025\/07\/system-arch-300x233.webp 300w, \/wp-content\/uploads\/2025\/07\/system-arch-768x598.webp 768w, \/wp-content\/uploads\/2025\/07\/system-arch-600x467.webp 600w\" sizes=\"(max-width: 861px) 100vw, 861px\" \/><\/a><\/p>\n<p>The flow of the speech dubbing process is straightforward: the process begins with the user speaking into the browser, which captures the audio. This recorded audio is then sent to the Node.js server, where it undergoes processing. The server calls the GPT-4o Audio model to translate the audio content into another language. Once the audio is translated, the server saves the original and translated audio, along with relevant metadata, to the GridDB database for storage.<\/p>\n<p>Finally, the translated audio is sent back to the browser, where the user can play it through an HTML5 audio player.<\/p>\n<h2>Capturing Speech Input<\/h2>\n<h3>Accessing the Microphone<\/h3>\n<p>To record audio, the first step is to access the user\u2019s microphone. This is achieved using the <code>navigator.mediaDevices.getUserMedia<\/code> API.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const stream = await navigator.mediaDevices.getUserMedia({ audio: true });\n<\/code><\/pre>\n<\/div>\n<p>The code above will prompt the user for permission to access the microphone.<\/p>\n<h3>Recording Audio<\/h3>\n<p>Once microphone access is granted, the <code>MediaRecorder<\/code> API is used to handle the actual recording process. 
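
Browsers differ in which container formats MediaRecorder can actually produce, so it can help to pick a supported MIME type before creating the recorder. The helper below is an illustrative sketch, not project code; the support check is injected so the selection logic can run outside a browser:

```javascript
// Illustrative: choose the first recording MIME type the environment supports.
// `isSupported` is injected for testability; in the browser you would pass
// MediaRecorder.isTypeSupported.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return ''; // empty string lets MediaRecorder fall back to its default
}

// Hypothetical browser usage:
// const mimeType = pickMimeType(['audio/webm', 'audio/mp4', 'audio/wav'],
//                               (t) => MediaRecorder.isTypeSupported(t));
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
```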
The audio stream is passed to <code>MediaRecorder<\/code> to create a recorder instance:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">mediaRecorderRef.current = new MediaRecorder(stream);<\/code><\/pre>\n<\/div>\n<p>As the recording progresses, audio chunks are collected through the <code>ondataavailable<\/code> event:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">mediaRecorderRef.current.ondataavailable = (event: BlobEvent) => {\n    audioChunksRef.current.push(event.data);\n};<\/code><\/pre>\n<\/div>\n<p>When the recording stops (<code>onstop<\/code> event), the chunks are combined into a single audio file (a <code>Blob<\/code>) and made available for upload:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">mediaRecorderRef.current.onstop = () => {\n    const audioBlob = new Blob(audioChunksRef.current, { type: 'audio\/wav' });\n    const audioUrl = URL.createObjectURL(audioBlob);\n    setAudioURL(audioUrl);\n    audioChunksRef.current = [];\n    uploadAudio(audioBlob);\n};<\/code><\/pre>\n<\/div>\n<p>The <code>uploadAudio<\/code> function uploads the audio blob to the Node.js server.<\/p>\n<h2>Node.js Server<\/h2>\n<p>This Node.js server processes audio files by converting them to MP3, translating the audio content using OpenAI, and storing the data in a GridDB database. 
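
The server's pipeline can be summarized as one orchestration function. The helper names echo functions mentioned later in this post (<code>convertToMp3<\/code>, <code>processAudio<\/code>, <code>insertData<\/code>), but this composition is an illustrative sketch with dependencies injected, not the repository's exact code:

```javascript
// Illustrative orchestration of the upload pipeline (not the repo's exact code).
// Dependencies are injected so the flow can be exercised with stubs.
async function handleUpload(wavPath, deps) {
  const mp3Path = await deps.convertToMp3(wavPath);            // WAV -> MP3
  const base64 = deps.readAsBase64(mp3Path);                   // model expects base64 audio
  const result = await deps.processAudio(base64, 'Japanese');  // GPT-4o Audio call
  await deps.saveToGridDB(mp3Path, result);                    // persist audio refs + metadata
  return result;
}
```

Injecting the steps like this also makes the order of operations easy to verify with stub implementations.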
It provides endpoints for uploading audio files, querying data from the database, and serving static files.<\/p>\n<h3>Routes Table<\/h3>\n<p>Here\u2019s a summary of the API endpoints available in this server:<\/p>\n<table>\n<thead>\n<tr>\n<th><strong>Method<\/strong><\/th>\n<th><strong>Endpoint<\/strong><\/th>\n<th><strong>Description<\/strong><\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><code>GET<\/code><\/td>\n<td><code>\/<\/code><\/td>\n<td>Serves the main HTML file (<code>index.html<\/code>).<\/td>\n<\/tr>\n<tr>\n<td><code>POST<\/code><\/td>\n<td><code>\/upload-audio<\/code><\/td>\n<td>Accepts an audio file upload, converts it to MP3, processes it using OpenAI, and saves data to GridDB.<\/td>\n<\/tr>\n<tr>\n<td><code>GET<\/code><\/td>\n<td><code>\/query<\/code><\/td>\n<td>Retrieves all records from the GridDB database.<\/td>\n<\/tr>\n<tr>\n<td><code>GET<\/code><\/td>\n<td><code>\/query\/:id<\/code><\/td>\n<td>Retrieves a specific record by ID from the GridDB database.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>Audio Conversion<\/h2>\n<p>The default recording file format sent by the client is <a href=\"https:\/\/en.wikipedia.org\/wiki\/WAV\">WAV<\/a>. 
However, in the Node.js server, this file is converted to the MP3 format before further processing.<\/p>\n<p>The audio conversion is handled by the <a href=\"https:\/\/github.com\/fluent-ffmpeg\/node-fluent-ffmpeg\">fluent-ffmpeg<\/a> npm package:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const convertToMp3 = () => {\n  return new Promise((resolve, reject) => {\n    ffmpeg(originalFilePath)\n      .toFormat('mp3')\n      .on('error', (err) => {\n        console.error('Conversion error:', err);\n        reject(err);\n      })\n      .on('end', () => {\n        fs.unlinkSync(originalFilePath);\n        resolve(mp3FilePath);\n      })\n      .save(mp3FilePath);\n  });\n};<\/code><\/pre>\n<\/div>\n<p>If you want to develop this project further, you need to install <a href=\"https:\/\/www.ffmpeg.org\/\">ffmpeg<\/a> on your system.<\/p>\n<h2>Speech Dubbing<\/h2>\n<h3>Target Language<\/h3>\n<p>The <code>gpt-4o-audio-preview<\/code> model from OpenAI will translate the recorded audio content into another language.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const audioBuffer = fs.readFileSync(mp3FilePath);<\/code><\/pre>\n<\/div>\n<p>Note that this model requires audio in base64-encoded format, so you have to encode the audio content into base64:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const base64str = Buffer.from(audioBuffer).toString('base64');<\/code><\/pre>\n<\/div>\n<p>The default language for the audio translation is &#8220;Japanese&#8221;. 
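
A lightweight way to make the target language configurable without building a UI is an environment variable. This is an illustrative tweak, not code from the repository; the variable name <code>TARGET_LANGUAGE<\/code> is an assumption:

```javascript
// Illustrative: read the dubbing target language from the environment,
// falling back to this post's default of Japanese.
function targetLanguage(env) {
  return env.TARGET_LANGUAGE || 'Japanese';
}

console.log(targetLanguage(process.env));
```

You would then pass `targetLanguage(process.env)` to `processAudio` instead of a hard-coded string.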
However, you can change it in the source code or add a language-selector UI as a further enhancement.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const language = \"Japanese\";\n\n\/\/ Process audio using OpenAI\nconst result = await processAudio(base64str, language);<\/code><\/pre>\n<\/div>\n<p>The <code>result<\/code> returned by the <code>processAudio<\/code> function is JSON containing data like this:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">{\n    \"language\": \"Japanese\",\n    \"filename\": \"translation-Japanese.mp3\",\n    \"result\": {\n        \"index\": 0,\n        \"message\": {\n            \"role\": \"assistant\",\n            \"content\": null,\n            \"refusal\": null,\n            \"audio\": {\n                \"id\": \"audio_6758f02de0b48190ba109885b931122c\",\n                \"data\": \"base64-encoded_audio\",\n                \"expires_at\": 1733885501,\n                \"transcript\": \"\u3053\u3093\u306b\u3061\u306f\u3002\u4eca\u671d\u306f\u3068\u3066\u3082\u6674\u5929\u3067\u3059\u3002\"\n            }\n        },\n        \"finish_reason\": \"stop\"\n    }\n}<\/code><\/pre>\n<\/div>\n<p>This JSON data is sent to the client, and with React, we can use it to render components, such as the HTML5 audio element, to play the translated audio.<\/p>\n<h2>GPT-4o Audio<\/h2>\n<p>The <a href=\"https:\/\/platform.openai.com\/docs\/models#gpt-4o-realtime\">gpt-4o-audio<\/a> model is capable of <a href=\"https:\/\/platform.openai.com\/docs\/guides\/audio\">generating audio<\/a> and text responses based on audio input. The model response is controlled by the system and user prompts. However, this project only uses the system prompt:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">{\n  role: \"system\",\n  content: `The user will provide an English audio. Dub the complete audio, word for word in ${language}. 
Keep certain words in original language for which a direct translation in ${language} does not exist.`\n},<\/code><\/pre>\n<\/div>\n<p>The response type (text or audio) is set by the <code>modalities<\/code> parameter, and the audio voice is set by the <code>audio<\/code> parameter:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">export async function processAudio(base64Str, language) {\n  try {\n    const response = await openai.chat.completions.create({\n      model: \"gpt-4o-audio-preview\",\n      modalities: [\"text\", \"audio\"],\n      audio: { voice: \"alloy\", format: \"mp3\" },\n      messages: [\n        {\n          role: \"system\",\n          content: `The user will provide an English audio. Dub the complete audio, word for word in ${language}. Keep certain words in original language for which a direct translation in ${language} does not exist.`\n        },\n        {\n          role: \"user\",\n          content: [\n            {\n              type: \"input_audio\",\n              input_audio: {\n                data: base64Str,\n                format: \"mp3\"\n              }\n            }\n          ]\n        }\n      ],\n    });\n\n    return response.choices[0];\n  } catch (error) {\n    throw new Error(`OpenAI audio processing failed: ${error.message}`);\n  }\n}<\/code><\/pre>\n<\/div>\n<h2>Save Audio Data<\/h2>\n<h3>Data Schema<\/h3>\n<p>To save audio data in the GridDB database, we must define the schema columns. The schema includes fields such as <code>id<\/code>, <code>originalAudio<\/code>, <code>targetAudio<\/code>, and <code>targetTranscription<\/code>.<\/p>\n<p>The container name can be arbitrary; however, it is best practice to choose one that reflects the context. 
For this project, the container name is <code>SpeechDubbingContainer<\/code>:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">const containerName = 'SpeechDubbingContainer';\nconst columnInfoList = [\n  ['id', griddb.Type.INTEGER],\n  ['originalAudio', griddb.Type.STRING],\n  ['targetAudio', griddb.Type.STRING],\n  ['targetTranscription', griddb.Type.STRING],\n];\nconst container = await getOrCreateContainer(containerName, columnInfoList);<\/code><\/pre>\n<\/div>\n<p>This table explains the schema:<\/p>\n<table>\n<thead>\n<tr>\n<th><strong>Column Name<\/strong><\/th>\n<th><strong>Type<\/strong><\/th>\n<th><strong>Description<\/strong><\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><code>id<\/code><\/td>\n<td><code>griddb.Type.INTEGER<\/code><\/td>\n<td>A unique identifier for each entry in the container.<\/td>\n<\/tr>\n<tr>\n<td><code>originalAudio<\/code><\/td>\n<td><code>griddb.Type.STRING<\/code><\/td>\n<td>The file path or name of the original audio file that was uploaded and processed.<\/td>\n<\/tr>\n<tr>\n<td><code>targetAudio<\/code><\/td>\n<td><code>griddb.Type.STRING<\/code><\/td>\n<td>The file path or name of the generated audio file containing the translated or dubbed speech.<\/td>\n<\/tr>\n<tr>\n<td><code>targetTranscription<\/code><\/td>\n<td><code>griddb.Type.STRING<\/code><\/td>\n<td>The text transcription of the translated audio, as provided by the speech processing API.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Save Operation<\/h3>\n<p>If the audio translation is successful, the <code>insertData<\/code> function saves the audio data into the database.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-javascript\">try {\n  const container = await getOrCreateContainer(containerName, columnInfoList);\n  await insertData(container, [generateRandomID(), mp3FilePath, targetAudio, result.message.audio.transcript]);\n} catch (error) {\n  
console.error(error);\n}<\/code><\/pre>\n<\/div>\n<p>The GridDB data operation code is located in the <code>griddbOperations.js<\/code> file. This file provides the detailed implementation for inserting data, querying data, and retrieving data by ID in the GridDB database.<\/p>\n<h3>Read Operation<\/h3>\n<p>To read all data or data for a specific ID, you can use code or tools like Postman. For example, you can query all data in the GridDB database using the <code>\/query<\/code> endpoint:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/query-postman-scaled.webp\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/query-postman-scaled.webp\" alt=\"\" width=\"2560\" height=\"1326\" class=\"aligncenter size-full wp-image-31836\" srcset=\"\/wp-content\/uploads\/2025\/07\/query-postman-scaled.webp 2560w, \/wp-content\/uploads\/2025\/07\/query-postman-300x155.webp 300w, \/wp-content\/uploads\/2025\/07\/query-postman-1024x530.webp 1024w, \/wp-content\/uploads\/2025\/07\/query-postman-768x398.webp 768w, \/wp-content\/uploads\/2025\/07\/query-postman-1536x796.webp 1536w, \/wp-content\/uploads\/2025\/07\/query-postman-2048x1061.webp 2048w, \/wp-content\/uploads\/2025\/07\/query-postman-600x311.webp 600w\" sizes=\"(max-width: 2560px) 100vw, 2560px\" \/><\/a><\/p>\n<p>To read a specific record by ID, you can use the <code>\/query\/:id<\/code> endpoint:<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/query-by-id-postman-scaled.webp\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/query-by-id-postman-scaled.webp\" alt=\"\" width=\"2560\" height=\"1326\" class=\"aligncenter size-full wp-image-31835\" srcset=\"\/wp-content\/uploads\/2025\/07\/query-by-id-postman-scaled.webp 2560w, \/wp-content\/uploads\/2025\/07\/query-by-id-postman-300x155.webp 300w, \/wp-content\/uploads\/2025\/07\/query-by-id-postman-1024x530.webp 1024w, 
\/wp-content\/uploads\/2025\/07\/query-by-id-postman-768x398.webp 768w, \/wp-content\/uploads\/2025\/07\/query-by-id-postman-1536x796.webp 1536w, \/wp-content\/uploads\/2025\/07\/query-by-id-postman-2048x1061.webp 2048w, \/wp-content\/uploads\/2025\/07\/query-by-id-postman-600x311.webp 600w\" sizes=\"(max-width: 2560px) 100vw, 2560px\" \/><\/a><\/p>\n<h2>User Interface<\/h2>\n<p>The user interface in this project is built using React. <code>AudioRecorder.tsx<\/code> is a React component for the speech dubbing interface, featuring a header with a title and description, a recording alert, a toggleable recording button, and an audio player for playback when a translated audio URL is available:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-html\">&lt;Card className=\"w-full\"&gt;\n   &lt;CardHeader className='text-center'&gt;\n    &lt;CardTitle&gt;Speech Dubber&lt;\/CardTitle&gt;\n    &lt;CardDescription&gt;Push to dub your voice&lt;\/CardDescription&gt;\n   &lt;\/CardHeader&gt;\n   &lt;CardContent className=\"space-y-4\"&gt;\n    {isRecording && (\n     &lt;Alert variant=\"destructive\"&gt;\n      &lt;AlertDescription&gt;Recording in progress...&lt;\/AlertDescription&gt;\n     &lt;\/Alert&gt;\n )}\n\n    &lt;div className=\"flex justify-center\"&gt;\n     &lt;Button\n      onClick={toggleRecording}\n      variant={isRecording ? \"destructive\" : \"default\"}\n      className=\"w-24 h-24 rounded-full\"\n     &gt;\n      {isRecording ? 
&lt;StopCircle size={36} \/&gt; : &lt;Mic size={36} \/&gt;}\n     &lt;\/Button&gt;\n    &lt;\/div&gt;\n\n    {translatedAudioURL && (\n     &lt;div className=\"space-y-4\"&gt;\n      &lt;audio\n       src={translatedAudioURL}\n       controls\n       className=\"w-full\"\n      \/&gt;\n     &lt;\/div&gt;\n )}\n   &lt;\/CardContent&gt;\n  &lt;\/Card&gt;<\/code><\/pre>\n<\/div>\n<p>This is the screenshot when the translated audio is available:<\/p>\n<figure id=\"attachment_31831\" aria-describedby=\"caption-attachment-31831\" style=\"width: 2560px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/audio-response-scaled.webp\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/07\/audio-response-scaled.webp\" alt=\"\" width=\"2560\" height=\"1195\" class=\"size-full wp-image-31831\" srcset=\"\/wp-content\/uploads\/2025\/07\/audio-response-scaled.webp 2560w, \/wp-content\/uploads\/2025\/07\/audio-response-300x140.webp 300w, \/wp-content\/uploads\/2025\/07\/audio-response-1024x478.webp 1024w, \/wp-content\/uploads\/2025\/07\/audio-response-768x358.webp 768w, \/wp-content\/uploads\/2025\/07\/audio-response-1536x717.webp 1536w, \/wp-content\/uploads\/2025\/07\/audio-response-2048x956.webp 2048w, \/wp-content\/uploads\/2025\/07\/audio-response-600x280.webp 600w\" sizes=\"(max-width: 2560px) 100vw, 2560px\" \/><\/a><figcaption id=\"caption-attachment-31831\" class=\"wp-caption-text\">Screenshot<\/figcaption><\/figure>\n<h2>Further Improvements<\/h2>\n<p>This blog teaches you how to build a simple web application that translates audio from one language to another. However, please note that this is just a prototype. There are several improvements that you can make. 
Here are some suggestions:<\/p>\n<ul>\n<li>Enhance the user interface.<\/li>\n<li>Add a real-time feature.<\/li>\n<li>Include a language selector.<\/li>\n<li>Implement user management.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often fall short\u2014they\u2019re too slow, prone to errors, and struggle to scale effectively. For instance, human-based translation can introduce subjective inaccuracies, while manual dubbing processes frequently fail to keep pace with real-time demands or large-scale [&hellip;]<\/p>\n","protected":false},"author":41,"featured_media":52170,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-52169","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. 
Traditional translation and dubbing methods often\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often\" \/>\n<meta property=\"og:url\" content=\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-02T07:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-03-30T18:41:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.griddb.net\/wp-content\/uploads\/2025\/12\/cover.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1472\" \/>\n\t<meta property=\"og:image:height\" content=\"832\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"griddb-admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"griddb-admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\"},\"author\":{\"name\":\"griddb-admin\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\"},\"headline\":\"Automated Speech Dubbing Using GPT-4o Audio and Node.js\",\"datePublished\":\"2025-07-02T07:00:00+00:00\",\"dateModified\":\"2026-03-30T18:41:08+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\"},\"wordCount\":1481,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/cover.webp\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\",\"url\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\",\"name\":\"Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for 
IoT\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/cover.webp\",\"datePublished\":\"2025-07-02T07:00:00+00:00\",\"dateModified\":\"2026-03-30T18:41:08+00:00\",\"description\":\"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2025\/12\/cover.webp\",\"contentUrl\":\"\/wp-content\/uploads\/2025\/12\/cover.webp\",\"width\":1472,\"height\":832},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.griddb.net\/en\/#website\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL\",\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.griddb.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\",\"name\":\"griddb-admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"caption\":\"griddb-admin\"},\"url\":\"https:\/\/www.griddb.net\/en\/author\/griddb-admin\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for IoT","description":"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/","og_locale":"en_US","og_type":"article","og_title":"Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for IoT","og_description":"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often","og_url":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/","og_site_name":"GridDB: Open Source Time Series Database for IoT","article_publisher":"https:\/\/www.facebook.com\/griddbcommunity\/","article_published_time":"2025-07-02T07:00:00+00:00","article_modified_time":"2026-03-30T18:41:08+00:00","og_image":[{"width":1472,"height":832,"url":"https:\/\/www.griddb.net\/wp-content\/uploads\/2025\/12\/cover.webp","type":"image\/webp"}],"author":"griddb-admin","twitter_card":"summary_large_image","twitter_creator":"@GridDBCommunity","twitter_site":"@GridDBCommunity","twitter_misc":{"Written by":"griddb-admin","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#article","isPartOf":{"@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/"},"author":{"name":"griddb-admin","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233"},"headline":"Automated Speech Dubbing Using GPT-4o Audio and Node.js","datePublished":"2025-07-02T07:00:00+00:00","dateModified":"2026-03-30T18:41:08+00:00","mainEntityOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/"},"wordCount":1481,"commentCount":0,"publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/12\/cover.webp","articleSection":["Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/","url":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/","name":"Automated Speech Dubbing Using GPT-4o Audio and Node.js | GridDB: Open Source Time Series Database for 
IoT","isPartOf":{"@id":"https:\/\/www.griddb.net\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/12\/cover.webp","datePublished":"2025-07-02T07:00:00+00:00","dateModified":"2026-03-30T18:41:08+00:00","description":"What This Blog is About Easy communication across languages is crucial in today\u2019s interconnected world. Traditional translation and dubbing methods often","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/blog\/automated-speech-dubbing-using-gpt-4o-audio-and-node-js\/#primaryimage","url":"\/wp-content\/uploads\/2025\/12\/cover.webp","contentUrl":"\/wp-content\/uploads\/2025\/12\/cover.webp","width":1472,"height":832},{"@type":"WebSite","@id":"https:\/\/www.griddb.net\/en\/#website","url":"https:\/\/www.griddb.net\/en\/","name":"GridDB: Open Source Time Series Database for IoT","description":"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL","publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.griddb.net\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.griddb.net\/en\/#organization","name":"Fixstars","url":"https:\/\/www.griddb.net\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/","url":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","contentUrl":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","width":200,"height":83,"caption":"Fixstars"},"image":{"@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/griddbcommunity\/","https:\/\/x.com\/GridDBCommunity","https:\/\/www.linkedin.com\/company\/griddb-by-toshiba"]},{"@type":"Person","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233","name":"griddb-admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","caption":"griddb-admin"},"url":"https:\/\/www.griddb.net\/en\/author\/griddb-admin\/"}]}},"_links":{"self":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/52169","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/w
p-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/comments?post=52169"}],"version-history":[{"count":3,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/52169\/revisions"}],"predecessor-version":[{"id":55101,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/52169\/revisions\/55101"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media\/52170"}],"wp:attachment":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media?parent=52169"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/categories?post=52169"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/tags?post=52169"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}