
How to stream OpenAI responses to browser in real-time?

December 5, 2025


Server-Sent Events (SSE) push tokens to the browser as the model generates them, with no need to buffer the full response. An AbortController cancels a long generation cleanly, and the official openai SDK ships TypeScript types for the streamed chunk format. Because each chunk is handled and discarded as it arrives, the approach stays memory-efficient even on mobile browsers.

Example:

Code

app.get('/chat-stream', async (req, res) => {
  // SSE headers: keep the connection open and prevent caching/buffering.
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  const stream = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: req.query.prompt }],
    stream: true,
  });
  // Stop the OpenAI generation if the browser disconnects mid-stream.
  req.on('close', () => stream.controller.abort());
  // Forward each chunk as one SSE event as soon as it arrives.
  for await (const chunk of stream) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }
  res.write('data: [DONE]\n\n'); // signal completion to the client
  res.end();
});
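On the browser side, the same stream can be read incrementally with fetch and cancelled through an AbortController signal. A minimal sketch, assuming the `/chat-stream` endpoint above; the `streamChat` helper and the `data: [DONE]` sentinel are illustrative choices, not part of any SDK, while the chunk shape follows OpenAI's streaming format:

```javascript
// Read an SSE response body chunk by chunk and hand each delta token
// to onToken. Passing an AbortSignal lets the caller cancel mid-stream.
async function streamChat(prompt, onToken, signal) {
  const res = await fetch(`/chat-stream?prompt=${encodeURIComponent(prompt)}`, { signal });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const events = buffer.split('\n\n');
    buffer = events.pop(); // keep any partial event for the next read
    for (const event of events) {
      if (!event.startsWith('data: ')) continue;
      const payload = event.slice('data: '.length);
      if (payload === '[DONE]') return; // server finished the generation
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}
```

Typical usage: create a controller, render tokens as they arrive, and call `controller.abort()` from a Stop button to cancel both the fetch and, via the server's `close` handler, the OpenAI generation:

```javascript
const controller = new AbortController();
streamChat('Explain SSE', (t) => { output.textContent += t; }, controller.signal);
// later: controller.abort();
```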
      
