AI

Configuration

Configure AI integration in your TurboStarter project.

To ensure scalability and avoid security vulnerabilities, AI requests are proxied by our tRPC backend. This means you need to set up AI integration on both the client and server side.

Why proxy requests?

We want to avoid exposing API keys directly to the browser, as this could lead to abuse of your key and generate unnecessary costs.

In this section, we'll explore the configuration for both sides to give you a smooth start.

Server-side

On the backend, you need to set up two things: environment variables to configure the provider and a tRPC procedure that streams responses to the client. Let's go through both!

Environment variables

You need to set the environment variables that correspond to the AI provider you want to use.

For example, for the OpenAI provider, you would need to set the following environment variable:

OPENAI_API_KEY=<your-openai-api-key>

However, if you want to use the Anthropic provider, you would need to set this environment variable instead:

ANTHROPIC_API_KEY=<your-anthropic-api-key>

You can find the list of all available providers in the official documentation, along with the required variables that need to be set to ensure the integration works correctly.
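Under the hood, the default provider instances read these variables automatically (for example, openai() looks up OPENAI_API_KEY). If you prefer to pass the key explicitly, e.g. from a differently named variable, you can create the provider instance yourself. A minimal sketch using @ai-sdk/openai's createOpenAI helper (the variable name is just an example):

import { createOpenAI } from "@ai-sdk/openai";
 
/* explicit configuration instead of the default OPENAI_API_KEY lookup */
const openai = createOpenAI({
  apiKey: process.env.CUSTOM_OPENAI_API_KEY,
});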

tRPC procedure

As we're proxying the requests, we need to register a tRPC procedure that will be used to pass the responses to the client.

The steps will be the same as we described in the API section. An example implementation could look like this:

ai.router.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { z } from "zod";
 
/* createTRPCRouter and publicProcedure come from your project's tRPC setup */
 
export const aiRouter = createTRPCRouter({
  chat: publicProcedure
    .input(
      z.object({
        prompt: z.string(),
      }),
    )
    .mutation(async function* ({ input }) {
      const result = await streamText({
        /* here you can specify the model you want to use */
        model: openai("gpt-4o"),
        prompt: input.prompt,
      });
 
      for await (const chunk of result.textStream) {
        yield chunk;
      }
    }),
});

As you can see, we're defining which provider and specific model we want to use here.
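For example, switching to Anthropic only requires swapping the provider helper and the model id inside the same procedure (a sketch assuming the @ai-sdk/anthropic package is installed and ANTHROPIC_API_KEY is set; the model id is just an example):

import { anthropic } from "@ai-sdk/anthropic";
 
/* inside the same mutation as above, only the model changes */
const result = await streamText({
  model: anthropic("claude-3-5-sonnet-latest"),
  prompt: input.prompt,
});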

We're also using tRPC Streaming, which allows us to pass the result to the user as soon as the model starts generating it, without needing to wait for the full response to be completed. This gives the user a sense of immediacy and makes the conversation more interactive.
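Keep in mind that the streamed chunks only reach the browser if the tRPC client is configured with a streaming-capable link. If you are wiring the client yourself, here's a minimal sketch assuming tRPC v11 (older versions export the link as unstable_httpBatchStreamLink) and a hypothetical AppRouter import path:

import { httpBatchStreamLink } from "@trpc/client";
import { createTRPCReact } from "@trpc/react-query";
 
import type { AppRouter } from "@repo/api"; /* hypothetical path to your router type */
 
export const api = createTRPCReact<AppRouter>();
 
export const client = api.createClient({
  links: [
    /* keeps the HTTP response open and forwards chunks as the server yields them */
    httpBatchStreamLink({ url: "/api/trpc" }),
  ],
});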

Client-side

To consume the answer from the server, we can follow the same approach as for any other API request:

page.tsx
"use client";
 
import { useState } from "react";
 
import { api } from "~/lib/api/react";
 
const AI = () => {
  const [prompt, setPrompt] = useState("");
  const [answer, setAnswer] = useState("");
  const { mutate, isPending } = api.ai.chat.useMutation({
    onSuccess: async (data) => {
      /* append each streamed chunk to the answer as it arrives */
      for await (const chunk of data) {
        setAnswer((prev) => prev + chunk);
      }
    },
  });
 
  return (
    <div>
      <input value={prompt} onChange={(e) => setPrompt(e.target.value)} />
      <button disabled={isPending} onClick={() => mutate({ prompt })}>
        Ask
      </button>
      <div>{answer}</div>
    </div>
  );
};
 
export default AI;

By leveraging the integration with TanStack Query, we can easily manage the state of the AI request and update the UI as chunks of the answer arrive.
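The same mutation also exposes the rest of TanStack Query's state, so handling errors requires no extra wiring. A short sketch (the component name and prompt are just examples):

"use client";
 
import { api } from "~/lib/api/react";
 
const AIWithErrorHandling = () => {
  const { mutate, error } = api.ai.chat.useMutation();
 
  return (
    <div>
      <button onClick={() => mutate({ prompt: "Hello!" })}>Ask</button>
      {error && <p>Something went wrong: {error.message}</p>}
    </div>
  );
};
 
export default AIWithErrorHandling;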

TurboStarter ships with a ready-to-use implementation of AI chat, so you can see this solution in action. Feel free to reuse it or adapt it to your needs.
