Configuration

Configure AI integration in your TurboStarter project.

To ensure scalability and avoid security vulnerabilities, AI requests are proxied by our Hono backend. This means you need to set up AI integration on both the client and server side.

Why proxy requests?

We want to avoid exposing API keys directly to the browser, as this could lead to abuse of your key and generate unnecessary costs.

In this section, we'll explore the configuration for both sides to give you a smooth start.

Server-side

On the backend, you need to set up two things: the environment variables that configure your provider and the API endpoint that passes responses to the client. Let's go through both!

Environment variables

You need to set the environment variables that correspond to the AI provider you want to use.

For example, for the OpenAI provider, you would need to set the following environment variables:

OPENAI_API_KEY=<your-openai-api-key>

If you want to use the Anthropic provider instead, you would set these environment variables:

ANTHROPIC_API_KEY=<your-anthropic-api-key>

You can find the list of all available providers in the official documentation, along with the required variables that need to be set to ensure the integration works correctly.
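
The default provider instances read these variables automatically, but if you need more control (for example a custom key source), you can create the instance explicitly. Below is a minimal sketch, assuming the @ai-sdk/openai package and a hypothetical ai.provider.ts file; TurboStarter's actual wiring may differ:

ai.provider.ts
import { createOpenAI } from "@ai-sdk/openai";

// Explicit provider instance - the key is read from the environment on the
// server, so it is never exposed to the browser.
export const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});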

API endpoint

As we're proxying the requests, we need to register an API endpoint that passes the model's responses to the client.

The steps will be the same as we described in the API section. An example implementation could look like this:

ai.router.ts
import { Hono } from "hono";
import { zValidator } from "@hono/zod-validator";
import { z } from "zod";
import { convertToCoreMessages, streamText } from "ai";
import { openai } from "@ai-sdk/openai";

export const aiRouter = new Hono().post(
  "/chat",
  // Validate the incoming chat payload before it reaches the model.
  zValidator(
    "json",
    z.object({
      messages: z.array(
        z.object({
          role: z.enum(["user", "system", "data", "assistant"]),
          content: z.string(),
        }),
      ),
    }),
  ),
  // Stream the model's answer back as soon as it starts generating.
  (c) =>
    streamText({
      model: openai("gpt-4o"),
      messages: convertToCoreMessages(c.req.valid("json").messages),
    }).toDataStreamResponse(),
);

As you can see, we're defining which provider and specific model we want to use here.

We're also using the Streams API, which lets us send the result to the user as soon as the model starts generating it, without waiting for the full response to complete. This gives the user a sense of immediacy and makes the conversation feel more interactive.
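
For the client to reach this endpoint at /api/ai/chat, the router still has to be mounted on the main Hono app. Here is a minimal sketch of that composition, assuming an /api base path; your actual app setup may differ:

import { Hono } from "hono";
import { aiRouter } from "./ai.router";

// Mounts the AI routes under /api/ai, so the chat endpoint becomes /api/ai/chat.
const app = new Hono().basePath("/api").route("/ai", aiRouter);

export default app;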

Client-side

To consume the server response, we can leverage the ready-to-use hooks provided by the Vercel AI SDK, such as the useChat hook:

page.tsx
import { useChat } from "ai/react";
 
const AI = () => {
  const { messages } = useChat({
    api: "/api/ai/chat",
  });
 
  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>{message.content}</div>
      ))}
    </div>
  );
};
 
export default AI;

By leveraging this integration, we can easily manage the state of the AI request and update the UI as soon as the response is ready.
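
The hook also returns helpers for sending new messages. A small sketch, assuming the same /api/ai/chat endpoint as above, that wires them to a form:

import { useChat } from "ai/react";

const AI = () => {
  // input, handleInputChange and handleSubmit manage the message form state
  // and post new messages to the endpoint for us.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/ai/chat",
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <div key={message.id}>{message.content}</div>
      ))}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
};

export default AI;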

TurboStarter ships with a ready-to-use implementation of AI chat, allowing you to see this solution in action. Feel free to reuse or modify it according to your needs.
