ChatGPT clone with React Suspense and Streaming

This is a short blog post showcasing a solution I developed for building ChatGPT-style interfaces.

Server Side

Let's start by creating a simple server that our client will use to communicate with OpenAI.
First, we initialize the OpenAI client with our API key:

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

Then we create a map to store chat sessions:

const sessions = new Map<string, OpenAI.ChatCompletionMessageParam[]>();

Now we define a request handler that will forward messages to OpenAI and stream the response down the wire:

import express from "express";
import bodyParser from "body-parser";

const app = express();

// Parse body as JSON when Content-Type: application/json
app.use(bodyParser.json());

app.post("/chat", async (req, res) => {
  // 1. Validate input
  // 2. Create session if it doesn't exist
  // 3. Add user message to session
  // 4. Fetch response from OpenAI
  // 5. Stream response to client
  // 6. Add OpenAI response to session
});

app.listen(3000, () => {
  console.log("Listening on port 3000");
});

Let's start by validating the input.
We expect to receive a session ID and a prompt from the client.

// 1. Validate input
const prompt = req.body.prompt;
const sessionId = req.body.sessionId;

if (typeof prompt !== "string" || prompt.length === 0) {
  res.status(400).send("prompt is required");
  return;
}
if (typeof sessionId !== "string" || sessionId.length === 0) {
  res.status(400).send("sessionId is required");
  return;
}

Then, if the session doesn't exist, we create it, and we append the user message to it.

// 2. Create session if it doesn't exist
if (!sessions.has(sessionId)) {
  sessions.set(sessionId, []);
}
// The non-null assertion is safe: the session was just created above
const messages = sessions.get(sessionId)!;

// 3. Add user message to session
messages.push({
  role: "user",
  content: prompt,
});

Now we can fetch the response from OpenAI using the stream option.

// 4. Fetch response from OpenAI
const stream = await openai.chat.completions.create({
  messages,
  stream: true,
  model: "gpt-4",
});

The stream object is an async iterable, so we can use a for await loop to iterate over the incoming chunks. To stream the chunks to the client we simply write to the response object with res.write. Once the stream is finished, we call res.end to close the connection.

// 5. Stream response to client
let response = "";
for await (const chunk of stream) {
  const token = chunk.choices?.[0]?.delta?.content ?? "";
  res.write(token);
  response += token;
}
res.end();

Finally, we add the OpenAI response to the session.

// 6. Add OpenAI response to session
messages.push({
  role: "assistant",
  content: response,
});
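
Before moving to the client, we can sanity-check the endpoint from the command line. A quick test, assuming the server is running locally; the session id is an arbitrary string, and curl's -N flag disables output buffering so tokens print as they arrive:

curl -N -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"sessionId": "test-session", "prompt": "Hello!"}'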

Client Side

Let's now focus on the client side. We will use some React APIs that are currently available in the canary release of React 18. Let's start by preparing our environment.

Update your React version to the latest canary release:

npm install react@canary react-dom@canary
# yarn add react@canary react-dom@canary
# pnpm add react@canary react-dom@canary

Then, reference the canary react types in your tsconfig.json:

{
  "compilerOptions": {
    "types": ["react/canary"]
  }
}

If you prefer to use a declaration file you can use a triple-slash directive instead:

/// <reference types="react/canary" />

Now we can start building our app, beginning with a simple form for sending messages to the server.
The component accepts a callback to send messages to the server and a boolean that indicates whether the server is currently processing a message.

import { useState, useCallback } from "react";
import type { FormEventHandler, ChangeEventHandler } from "react";

export type ChatFormProps = {
  onSendMessage: (message: string) => void; 
  isSending: boolean;
};

export function ChatForm({ onSendMessage, isSending }: ChatFormProps) {
  const [input, setInput] = useState("");

  const handleSubmit = useCallback<FormEventHandler<HTMLFormElement>>(
    (e) => {
      e.preventDefault();
      if (input === "") return;
      onSendMessage(input);
      setInput("");
    },
    [input, onSendMessage],
  );

  const handleInputChange = useCallback<ChangeEventHandler<HTMLInputElement>>(
    (e) => {
      setInput(e.target.value);
    },
    [],
  );

  return (
    <form onSubmit={handleSubmit}>
      <input
        value={input}
        onChange={handleInputChange}
        placeholder="Ask a question"
        required
      />
      <button disabled={isSending}>{isSending ? "Sending..." : "Send"}</button>
    </form>
  );
}

Now, we create a parent component that will handle the communication with the server.

import { useState, useCallback } from "react";
import { ChatForm, type ChatFormProps } from "./ChatForm";

export type Message = {
  role: "user" | "assistant";
  content: string;
}

export function ChatApp() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isSending, setIsSending] = useState(false);

  const handleSendMessage = useCallback<ChatFormProps["onSendMessage"]>(
    async (message) => {
      // We will implement this later
    },
    [],
  );

  return (
    <div>
      <ChatForm onSendMessage={handleSendMessage} isSending={isSending} />
    </div>
  )
}

Before implementing the send message logic, let's define how we want to display the messages.
Let's create a presentational component that renders a single message.

import type { Message } from "./ChatApp";

export type ChatMessageProps = {
  message: Message;
};

export function ChatMessage({ message }: ChatMessageProps) {
  return (
    <p>
      <span>From {message.role}:</span>
      <span>{message.content}</span>
    </p>
  );
}

Now let's define a component that will render a list of messages.

export type ChatLogProps = {
  messages: Message[];
};

export function ChatLog({ messages }: ChatLogProps) {
  return (
    <div role="log">
      {messages.map((message, i) => (
        <ChatMessage key={i} message={message} />
      ))}
    </div>
  );
}

Finally, we can use the ChatLog component in our ChatApp component.

// ...
import { ChatLog } from "./ChatLog";

export function ChatApp() {
  // ...
  return (
    <div>
      <ChatLog messages={messages} />
      <ChatForm onSendMessage={handleSendMessage} isSending={isSending} />
    </div>
  )
}

Now it's time for the fun part. With Suspense we can easily render messages regardless of whether they are coming from the server or from the user. Let's define a MessageRenderer component that receives a message or a promise that resolves to a message.

import { use } from "react";
import type { Message } from "./ChatApp";
import { ChatMessage } from "./ChatMessage";

export type MessageRendererProps = {
  message: Message | Promise<Message>;
};

export function MessageRenderer(props: MessageRendererProps) {
  // use() suspends this component (activating the Suspense boundary) while the promise is pending
  const message =
    props.message instanceof Promise
      ? use(props.message)
      : props.message;

  return <ChatMessage message={message} />;
}

In the ChatLog component we can now use the MessageRenderer component to render messages.

import { Suspense } from "react";
import { MessageRenderer, type MessageRendererProps } from "./MessageRenderer";

export type ChatLogProps = {
  // Now both messages and promises are accepted
  messages: MessageRendererProps["message"][];
};

export function ChatLog({ messages }: ChatLogProps) {
  return (
    <div role="log">
      {messages.map((message, i) => (
        <Suspense key={i} fallback="Loading...">
          <MessageRenderer message={message} />
        </Suspense>
      ))}
    </div>
  );
}

While the message is loading, Suspense will render the fallback component. Once the promise resolves, the message will be rendered instead. To handle errors we need to wrap the Suspense element in an ErrorBoundary component.
I recommend using the react-error-boundary package for this.

npm install react-error-boundary
# yarn add react-error-boundary
# pnpm add react-error-boundary

We can render a fallback UI when an error occurs:

import { ErrorBoundary } from "react-error-boundary";

<ErrorBoundary fallback={<p>Error</p>}>
  <Suspense fallback="Loading...">
    <MessageRenderer message={message} />
  </Suspense>
</ErrorBoundary>

Let's create dedicated components to render the loading and error states.
Since we want to stream the response coming from the server, the Suspense fallback is going to be called StreamingMessage:

import { useState, useEffect } from "react";
import type { Message } from "./ChatApp";
import { MessageRenderer } from "./MessageRenderer";

export type MessageStream = ReadableStream<Uint8Array>;

export type StreamingMessageProps = {
  stream: MessageStream;
};

export function StreamingMessage({ stream }: StreamingMessageProps) {
  const [content, setContent] = useState("");

  useEffect(() => {
    // A stream is locked once a reader is attached; this guard prevents
    // reading it twice (e.g. when effects run twice under StrictMode)
    if (stream.locked) return;
    readMessageStream(stream, (token) => {
      setContent((prev) => prev + token);
    });
  }, [stream]);

  const message: Message = {
    role: "assistant",
    content,
  };

  return <MessageRenderer message={message} />;
}

export async function readMessageStream(
  stream: ReadableStream,
  onNewToken: (token: string) => void = () => {},
) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  const chunks: string[] = [];

  // eslint-disable-next-line no-constant-condition
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value) {
      // { stream: true } keeps multi-byte characters that are split
      // across chunk boundaries from being decoded incorrectly
      const chunk = decoder.decode(value, { stream: true });
      chunks.push(chunk);
      onNewToken(chunk);
    }
  }

  const text = chunks.join("");
  return text;
}

Now we can use the StreamingMessage component in our ChatLog component:

import { StreamingMessage, type MessageStream } from "./StreamingMessage";

export type ChatLogProps = {
  // Both plain messages and promises are accepted
  messages: MessageRendererProps["message"][];
  stream?: MessageStream;
};

// ...
<ErrorBoundary fallback={<p>Error</p>}>
  {/* Render the live stream as the fallback when one exists */}
  <Suspense
    fallback={stream ? <StreamingMessage stream={stream} /> : "Loading..."}
  >
    <MessageRenderer message={message} />
  </Suspense>
</ErrorBoundary>

Now we can extend our ChatApp component to track the message stream and pass it to the ChatLog component.

// ...
import { ChatLog, type ChatLogProps } from "./ChatLog";
import { ChatForm, type ChatFormProps } from "./ChatForm";

export function ChatApp() {
  const [messages, setMessages] = useState<ChatLogProps["messages"]>([]);
  const [isSending, setIsSending] = useState<ChatFormProps["isSending"]>(false);
  const [stream, setStream] = useState<ChatLogProps["stream"]>();

  const handleSendMessage = useCallback<ChatFormProps["onSendMessage"]>(
    async (message) => {
      // We will implement this later
    },
    [],
  );

  return (
    <div>
      <ChatLog messages={messages} stream={stream} />
      <ChatForm onSendMessage={handleSendMessage} isSending={isSending} />
    </div>
  );
}

Finally, here is the complete implementation of the handleSendMessage function:

const handleSendMessage = useCallback(
  (input: string) => {
    const userMessage: Message = {
      role: "user",
      content: input,
    };

    const assistantMessage = fetchMessageStream(input, sessionId)
      .then((stream) => {
        // tee() duplicates the stream: one copy is rendered live by
        // StreamingMessage, the other is buffered into the final message
        const [stream1, stream2] = stream.tee();
        setStream(stream1); // read by ChatLog
        return readMessageStream(stream2);
      })
      .then((text): Message => {
        return {
          role: "assistant",
          content: text,
        };
      })
      // Re-enable the form once the stream settles (success or error)
      .finally(() => setIsSending(false));

    setIsSending(true);
    // The assistant message is a promise; Suspense takes care of it in ChatLog
    setMessages((prevMessages) => [
      ...prevMessages,
      userMessage,
      assistantMessage,
    ]);
  },
  [sessionId],
);
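
One detail worth calling out: the callback reads a sessionId that we never defined. A minimal sketch, assuming we generate one id per ChatApp mount (crypto.randomUUID requires a modern browser in a secure context):

// Inside ChatApp, alongside the other state hooks
const [sessionId] = useState(() => crypto.randomUUID());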


async function fetchMessageStream(prompt: string, sessionId: string) {
  const response = await fetch("/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      prompt,
      sessionId,
    }),
  });
  if (!response.ok || response.body === null) {
    throw new Error("Failed to fetch message stream");
  }
  return response.body satisfies ChatLogProps["stream"];
}
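
For completeness, here is a minimal entry point to mount the app. This is only a sketch, assuming a typical bundler setup with an index.html that contains a #root element:

import { createRoot } from "react-dom/client";
import { ChatApp } from "./ChatApp";

// Assumes <div id="root"></div> exists in index.html
createRoot(document.getElementById("root")!).render(<ChatApp />);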