🪼 Full-stack AI Chat template featuring Next.js (Frontend), FastAPI (Backend), and Vercel AI SDK with custom Python streaming


Fast API & AI SDK Template

A (wannabe) production-ready solution for building LLM chat applications with Next.js (Frontend) and FastAPI (Backend), powered by the Vercel AI SDK.
Get Started »

Table of Contents
  1. About The Project
  2. Getting Started
  3. Contributing
  4. License
  5. Contact

About The Project

Building LLM chat wrappers is becoming a common task, but setting up a robust, scalable architecture can be repetitive and tricky. While Next.js is fantastic for frontend development and interacting with AI SDKs, using Node.js for the backend isn't always the preferred choice for Python-native developers or teams leveraging Python's rich data science ecosystem.

This template bridges that gap. It combines the power of Next.js for a responsive, modern frontend with FastAPI for a high-performance Python backend. It handles the complexity of integrating the Vercel AI SDK with a custom Python backend, ensuring seamless streaming and state management without the mental overhead of switching contexts or dealing with cluttered "full-stack" Node.js monorepos.

Key Features:

  • Monorepo Structure: Managed efficiently with Turborepo.
  • Frontend: Next.js 14+ (App Router), Tailwind CSS, Radix UI.
  • Backend: FastAPI, Pydantic, Python 3.11+.
  • AI Integration: Custom streaming implementation using Vercel AI SDK protocols.
  • Developer Experience: Type safety, linting (Biome/Ruff), and hot reloading for both services.
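
The custom streaming integration can be sketched as follows. This is a minimal illustration, assuming the Vercel AI SDK's data-stream text-part format, in which each text chunk is sent as a line of the form `0:<json-encoded string>`; the function names here are hypothetical, not part of this template's actual code:

```python
import json
from typing import Iterator


def encode_text_part(chunk: str) -> str:
    # Assumption: the AI SDK data-stream protocol encodes a text part as a
    # type prefix ("0" for text), a colon, and a JSON-encoded payload,
    # terminated by a newline.
    return f"0:{json.dumps(chunk)}\n"


def stream_protocol(chunks: Iterator[str]) -> Iterator[str]:
    """Yield one protocol line per model text chunk."""
    for chunk in chunks:
        yield encode_text_part(chunk)


# In FastAPI, a generator like this would typically back a
# StreamingResponse, e.g.:
#   return StreamingResponse(stream_protocol(llm_chunks),
#                            media_type="text/plain")
```

On the frontend, the AI SDK's chat hooks consume such a stream incrementally, which is what keeps token-by-token rendering responsive.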

(back to top)

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

Ensure you have the following installed on your system:

  • Node.js (a current LTS release) and pnpm — used to install dependencies and run the dev servers.
  • Python 3.11+ — required by the FastAPI backend.

Installation

  1. Clone the repository

    git clone https://github.com/nomomon/fast-api-ai-sdk.git
    cd fast-api-ai-sdk
  2. Install dependencies. Run this command in the root directory; it installs dependencies for both the frontend and backend.

    pnpm install
  3. Environment Setup

    Create a .env file in the root directory (or separate .env files in backend/ and frontend/).

    Backend (backend/.env):

    OPENAI_API_KEY=sk-...
    # Optional
    GEMINI_API_KEY=...
    CORS_ORIGINS=http://localhost:3000

    Frontend (frontend/.env):

    BASE_BACKEND_URL=http://localhost:8000
  4. Run the application. Start both the frontend and backend development servers using Turbo:

    pnpm dev
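
On the backend side, the environment variables above would typically be read at startup. A minimal stdlib-only sketch (the `Settings` class and `load_settings` helper are illustrative, not the template's actual code):

```python
import os
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Settings:
    # Field names mirror the variables in backend/.env above.
    openai_api_key: str = ""
    gemini_api_key: Optional[str] = None
    cors_origins: List[str] = field(
        default_factory=lambda: ["http://localhost:3000"]
    )


def load_settings() -> Settings:
    """Read configuration from the environment (hypothetical helper)."""
    origins = os.getenv("CORS_ORIGINS", "http://localhost:3000")
    return Settings(
        openai_api_key=os.environ["OPENAI_API_KEY"],  # required
        gemini_api_key=os.getenv("GEMINI_API_KEY"),   # optional
        # CORS_ORIGINS may hold a comma-separated list of origins.
        cors_origins=[o.strip() for o in origins.split(",")],
    )
```

`cors_origins` would then be passed to FastAPI's CORS middleware so the Next.js dev server at `http://localhost:3000` can call the backend.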

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE for more information.

(back to top)

Contact

Mansur Nurmukhambetov - @nomomon

Project Link: https://github.com/nomomon/fast-api-ai-sdk

(back to top)
