Architecture

A quick overview of the different parts of TurboStarter AI.

TurboStarter AI is powered by several open-source libraries that support the application's core functionality: authentication, persistence, text generation, and more. The following is a brief overview of the architecture.

Application framework

The project uses a monorepo structure powered by Turborepo, enabling efficient code sharing and consistent tooling across different parts of the application. This approach was chosen to maintain a single source of truth for shared code and simplify dependency management.
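
Task orchestration in a Turborepo monorepo is declared in a turbo.json file at the repository root. A minimal sketch follows; the task names and output globs are illustrative, not TurboStarter's actual configuration (note that Turborepo v1 calls the "tasks" key "pipeline"):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "dist/**"]
    },
    "lint": { "dependsOn": ["^build"] },
    "dev": { "cache": false, "persistent": true }
  }
}
```

The `^build` dependency tells Turborepo to build each package's workspace dependencies first, which is what makes shared packages safe to consume across the web and mobile apps.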

Web

Built with Next.js and React, the web application leverages server-side rendering and static site generation for optimal performance and SEO. The UI is styled with Tailwind CSS and shadcn/ui components for rapid development and consistent design. API routes are handled by Hono, chosen for its minimal overhead, edge-runtime compatibility, and excellent TypeScript support.

Web

Learn more about the core web application and its features.

Mobile

The mobile application uses React Native with Expo for cross-platform development. This combination was selected for its ability to share up to 90% of code between platforms while maintaining native performance. The integration with the monorepo allows seamless sharing of business logic and types with the web application.

Mobile

Learn more about the mobile application and its features.

API

The API is built as a dedicated package using Hono, a lightweight web framework optimized for edge computing. This architectural decision provides clear separation between frontend and backend logic, making the codebase more maintainable and easier to test.

The framework's excellent TypeScript support ensures type safety across all endpoints, while its minimal overhead and edge computing capabilities deliver optimal performance.
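
Under the hood, Hono handlers are plain functions over the Web-standard Request/Response types, which is what makes them portable across edge runtimes. A framework-free sketch of that pattern follows; the route names are illustrative, not TurboStarter's actual endpoints:

```typescript
// Sketch of the edge-handler pattern Hono builds on: plain functions
// over Web-standard Request/Response. Routes here are illustrative.
type Handler = (req: Request) => Promise<Response> | Response;

const routes = new Map<string, Handler>([
  ["GET /api/health", () => Response.json({ ok: true })],
  [
    "POST /api/chat",
    async (req) => {
      const body = (await req.json()) as { message?: string };
      if (!body.message) {
        return Response.json({ error: "message required" }, { status: 400 });
      }
      return Response.json({ echo: body.message });
    },
  ],
]);

export async function handle(req: Request): Promise<Response> {
  // Dispatch on method + pathname; unknown routes fall through to 404.
  const key = `${req.method} ${new URL(req.url).pathname}`;
  const handler = routes.get(key);
  return handler ? handler(req) : new Response("Not Found", { status: 404 });
}
```

Because the handler only touches Web-standard APIs, the same code runs unchanged on Node.js, Cloudflare Workers, and other edge runtimes.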

API

Discover the API service in the AI starter and demo apps.

Model providers

TurboStarter AI supports integration with the most popular AI model providers, including OpenAI, Anthropic, Google AI, xAI, and more. The architecture uses the AI SDK to provide a unified interface across different providers, making it simple to experiment with various models.

The platform leverages specialized models for different AI tasks:

  • Text generation models for chat and content creation
  • Structured output models for data extraction and formatting
  • Image generation models for creating visuals
  • Voice synthesis models for audio content
  • Embedding models for semantic search and retrieval

Changing the specific model used in your application requires just a one-line code change, allowing you to quickly adapt to new models or switch providers based on your specific needs. This flexibility ensures your application can easily leverage the latest advancements in AI models without significant refactoring.
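
As a sketch of what that one-line swap looks like in practice: each provider exposes a factory that turns a model ID into a model handle, and the call sites only depend on the handle. The provider names and model IDs below are illustrative stand-ins for a real SDK's factories:

```typescript
// Illustrative provider registry: switching models or providers is a
// one-line change at the call site. IDs are examples only.
type Model = { provider: string; modelId: string };
type ModelFactory = (modelId: string) => Model;

const makeProvider =
  (provider: string): ModelFactory =>
  (modelId) => ({ provider, modelId });

export const openai = makeProvider("openai");
export const anthropic = makeProvider("anthropic");

// The single line that decides which model the app talks to:
export const chatModel = openai("gpt-4o");
// Switching providers later would just be:
// export const chatModel = anthropic("claude-sonnet");

export function describeModel(model: Model): string {
  return `${model.provider}:${model.modelId}`;
}
```

Everything downstream consumes `chatModel`, so no other file changes when the provider does.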

Authentication

The applications use Better Auth for authentication, providing a secure and flexible authentication system. By default, the AI implementation creates an anonymous user session at startup, which is then used for all subsequent queries and interactions with the AI models. This approach maintains user context across sessions while minimizing friction.
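
The anonymous-session flow can be sketched as follows; the field names and in-memory storage are illustrative, not Better Auth's actual API:

```typescript
import { randomUUID } from "node:crypto";

// Sketch of the anonymous-session idea: if the client has no session
// yet, mint a guest session and reuse it on later AI requests.
// Field names are illustrative, not Better Auth's actual schema.
interface Session {
  id: string;
  userId: string;
  anonymous: boolean;
  createdAt: Date;
}

const sessions = new Map<string, Session>();

export function getOrCreateAnonymousSession(sessionId?: string): Session {
  if (sessionId) {
    const existing = sessions.get(sessionId);
    if (existing) return existing; // reuse context across requests
  }
  const session: Session = {
    id: randomUUID(),
    userId: `anon_${randomUUID()}`,
    anonymous: true,
    createdAt: new Date(),
  };
  sessions.set(session.id, session);
  return session;
}
```

A real implementation would persist the session server-side and carry its ID in a cookie, but the shape of the flow is the same.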

For more advanced authentication needs, you can easily extend the authentication flow by leveraging the Core implementation, which supports email/password login, magic links, OAuth providers, and more. This modular design allows you to implement the exact level of authentication security your application requires.

Authentication

Learn more about the authentication system in TurboStarter AI.

Persistence

Persistence in TurboStarter AI refers to the system's ability to store and retrieve data from a database. The application uses PostgreSQL as its primary database to store critical information such as:

  • Chat history and conversations
  • User accounts and preferences
  • Vector embeddings for retrieval-augmented generation

To interact with the database from route handlers and server actions, TurboStarter AI leverages Drizzle ORM, a high-performance TypeScript ORM that provides type-safe database operations. This ensures robust data integrity and simplified query construction throughout the application.
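
An illustrative Drizzle schema for the chat-history tables might look like this; the table and column names are assumptions, not TurboStarter's actual schema:

```typescript
import { pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

// Illustrative schema sketch: names are assumptions, not the
// starter's actual tables.
export const chats = pgTable("chats", {
  id: uuid("id").primaryKey().defaultRandom(),
  userId: text("user_id").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

export const messages = pgTable("messages", {
  id: uuid("id").primaryKey().defaultRandom(),
  chatId: uuid("chat_id")
    .references(() => chats.id)
    .notNull(),
  role: text("role", { enum: ["user", "assistant"] }).notNull(),
  content: text("content").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```

Because the schema is plain TypeScript, queries against it are type-checked end to end: selecting from `messages` yields a typed row, and a typo in a column name fails at compile time.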

One of the key advantages of using Drizzle is its support for multiple database providers, including Neon, Supabase, and PlanetScale. This flexibility allows you to easily switch between database providers based on your specific requirements without modifying your queries or schema definitions, making your application more adaptable to changing infrastructure needs.

Database

Explore the database architecture and persistence layer in TurboStarter AI.

Blob storage

Blob storage is handled through an S3-compatible service, providing scalable and reliable storage for various file types. The system stores user-uploaded images, generated AI content, and document files. This approach ensures efficient file handling and easy integration with different storage providers like AWS S3, Cloudflare R2, or MinIO.
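
One common approach with S3-compatible storage is to namespace object keys by content type and user; a small sketch follows, where the prefixes and key layout are assumptions rather than TurboStarter's actual convention:

```typescript
// Illustrative object-key layout for an S3-compatible bucket.
// Prefixes are assumptions, not the starter's actual layout.
export function uploadKey(
  kind: "uploads" | "generated" | "documents",
  userId: string,
  filename: string,
): string {
  // Namespace by kind and user so listing and cleanup stay cheap,
  // and sanitize the filename for safe use in a key.
  const safe = filename.toLowerCase().replace(/[^a-z0-9._-]/g, "-");
  return `${kind}/${userId}/${Date.now()}-${safe}`;
}
```

Since every S3-compatible provider shares the same key/object model, this layout works unchanged on AWS S3, Cloudflare R2, or MinIO.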

Storage

Learn more about the storage system in TurboStarter AI.

Security

Security is implemented at multiple levels to protect both the application and its users. All API endpoints are protected with rate limiting to prevent abuse and ensure fair usage.
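
A fixed-window limiter captures the idea; the sketch below is illustrative, and TurboStarter's actual limits and algorithm may differ:

```typescript
// Sketch of a fixed-window rate limiter keyed by user ID or IP.
// The limit and window size are illustrative.
interface Window {
  count: number;
  resetAt: number;
}

const windows = new Map<string, Window>();

export function allowRequest(
  key: string, // e.g. user ID or client IP
  limit = 10, // max requests per window
  windowMs = 60_000, // window length in ms
  now = Date.now(),
): boolean {
  const w = windows.get(key);
  if (!w || now >= w.resetAt) {
    // First request in a fresh window: reset the counter.
    windows.set(key, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) return false; // over the limit: reject
  w.count += 1;
  return true;
}
```

In production this state would live in a shared store such as Redis rather than process memory, so that limits hold across edge instances.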

The system uses a credits-based access control system, where each user has a limited number of credits for AI operations, preventing resource exhaustion and enabling monetization options.
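
The credits gate can be sketched as a simple balance check before each AI call; the costs and in-memory storage here are illustrative:

```typescript
// Sketch of a credits-based gate for AI operations.
// Costs and storage are illustrative, not the starter's actual values.
const balances = new Map<string, number>();

export function grantCredits(userId: string, amount: number): void {
  balances.set(userId, (balances.get(userId) ?? 0) + amount);
}

export function spendCredits(userId: string, cost: number): boolean {
  const balance = balances.get(userId) ?? 0;
  if (balance < cost) return false; // reject before calling the model
  balances.set(userId, balance - cost);
  return true;
}
```

Checking the balance before the provider call means an exhausted user never triggers a paid API request, and topping up credits becomes a natural monetization hook.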

All external API calls, including those to AI model providers, are made exclusively from the backend. This ensures that sensitive API keys are never exposed to the client side, significantly reducing the risk of unauthorized access or key theft.

Additionally, we implement industry-standard practices including input validation, proper authentication, and regular security audits of dependencies.

Security

Explore the security measures in place for TurboStarter AI.
