Architecture

Libris is a self-hosted book management system. It ingests ebook files, enriches them with metadata from multiple sources, organizes them into a structured library, and serves them via OPDS for e-readers. It syncs reading progress with KOReader via KoSync.

Monorepo Structure

```
libris/
├── apps/web/              # Nuxt 4 frontend (port 3100)
├── services/api/          # Nitro v3 backend (port 3000)
│   ├── server/types/      # Shared TypeScript types + Zod schemas
│   ├── server/lib/metadata/  # Book metadata extraction & API clients
│   └── server/lib/queue/  # BullMQ queue constants
├── tests/e2e/             # Playwright end-to-end tests
├── scripts/               # Build & test scripts
├── .forgejo/workflows/    # CI/CD pipelines
└── docker-compose.test.yml
```

Workspace management: pnpm workspaces, orchestrated by Turborepo.

Tech Stack

| Layer | Technology |
| --- | --- |
| Frontend | Nuxt 4, Vue 3, Nuxt UI v4, Pinia, Tailwind CSS v4 |
| Backend | Nitro v3, h3 |
| Database | PostgreSQL 17, Drizzle ORM |
| Job Queue | BullMQ, Redis 7 |
| File Watching | Chokidar |
| Metadata Sources | Google Books API, Hardcover GraphQL API (both require API keys via Settings) |
| Auth | bcrypt-hashed API keys, httpOnly session cookies (BFF) |
| Testing | Playwright (E2E), Vitest (unit) |
| Toolchain | Turborepo, Vitest, Oxlint, Oxfmt |
| CI/CD | Forgejo Actions |

How the Pieces Connect

```mermaid
graph TB
    subgraph Clients
        Browser["🌐 Browser"]
        EReader["📖 E-Reader\n(KOReader, Calibre)"]
    end

    subgraph Frontend
        BFF["Nuxt BFF\n:3100"]
    end

    subgraph Backend
        API["Nitro API\n:3000"]
        Chokidar["Chokidar\nFile Watcher"]
    end

    subgraph Infrastructure
        PG["PostgreSQL\n(Drizzle ORM)"]
        Redis["Redis\n(BullMQ + Cache)"]
        FS["File System\n(inbox / library)"]
    end

    Browser -->|Session cookie| BFF
    BFF -->|Bearer token| API
    API -->|SSE| Browser
    EReader -->|OPDS| API
    EReader -->|KoSync| API
    Chokidar -->|New file detected| API
    API --> PG
    API --> Redis
    API --> FS
```

1. The browser talks to the Nuxt BFF (server-side routes proxy to the API, injecting auth)
2. The Nuxt BFF holds the session cookie and forwards requests to the Nitro API with Bearer tokens
3. The Nitro API handles all business logic: REST endpoints, job processing, file management
4. Chokidar watches the inbox directory and enqueues jobs when files appear
5. BullMQ workers process jobs: detect → parse → fetch metadata → organize
6. OPDS serves the organized library to e-readers (Calibre, KOReader, etc.)
7. KoSync syncs reading progress between KOReader devices
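
The queue progression behind step 5 can be sketched as an ordered list of stage names, taken from the ingestion pipeline flowchart below (the actual constants exported from `services/api/server/lib/queue/` may be spelled differently):

```typescript
// Illustrative pipeline stage names, mirroring the ingestion flowchart.
const PIPELINE_QUEUES = [
  "BOOK_DETECTED",       // checksum, format detection, DB records, dedup
  "BOOK_PARSE_FILE",     // extract embedded metadata (Dublin Core, XMP, PDF Info)
  "BOOK_FETCH_METADATA", // query Google Books + Hardcover, insert candidates
  "BOOK_ORGANIZE",       // move into /library/Author/Title/, embed metadata
] as const;

type PipelineQueue = (typeof PIPELINE_QUEUES)[number];

// Stage that follows `current`, or null at the end of the pipeline.
// Note: in practice the FETCH_METADATA → ORGANIZE hop only happens once
// the user has approved metadata candidates in the review step.
function nextQueue(current: PipelineQueue): PipelineQueue | null {
  const i = PIPELINE_QUEUES.indexOf(current);
  return PIPELINE_QUEUES[i + 1] ?? null;
}
```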

Backend: Single-Process Monolith

The API server runs everything in one Nitro process. Plugins start in filename order:

| Plugin | Purpose |
| --- | --- |
| `0.env-validate.ts` | Validates required environment variables |
| `0.migrate.ts` | Runs Drizzle migrations before accepting requests |
| `0.test-db.ts` | Sets up in-memory PGlite database (test only) |
| `1.redis.ts` | Connects to Redis, mounts storage drivers (production only) |
| `2.watcher.ts` | Starts Chokidar file watcher on inbox directory |
| `3.workers.ts` | Initializes BullMQ workers for all 6 queues |
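
Because plugins load in lexicographic filename order, the numeric prefixes above are what guarantee that, for example, migrations run before the workers start. A minimal illustration of that ordering, with a plain sort standing in for Nitro's loader:

```typescript
// The plugin filenames from the table above, deliberately shuffled.
// Nitro runs server/plugins/ files in lexicographic order, which a
// plain Array.sort reproduces.
const pluginFiles = [
  "3.workers.ts",
  "1.redis.ts",
  "0.test-db.ts",
  "2.watcher.ts",
  "0.migrate.ts",
  "0.env-validate.ts",
];

const loadOrder = [...pluginFiles].sort();
// loadOrder[0] is "0.env-validate.ts"; loadOrder[5] is "3.workers.ts"
```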

Workers

In addition to the ingestion pipeline queues, scheduled workers handle external sync and maintenance:

| Worker | Schedule | Purpose |
| --- | --- | --- |
| `hardcover-sync` | Daily 4 AM | Syncs reading status and progress to Hardcover for all linked books |
| `progress-history-cleanup` | Daily 3 AM | Cleans up reading progress history older than 1 year |

Scheduled Tasks (Nitro)

Three Nitro scheduled tasks run independently of BullMQ:

| Task | Schedule | Purpose |
| --- | --- | --- |
| `db:cleanup-stale-inbox` | Daily 3 AM | Removes stale inbox entries |
| `db:cleanup-orphaned-files` | Daily 3 AM | Cleans up orphaned files with no book records |
| `jobs:cleanup-completed` | Hourly | Removes completed jobs from queues |

Reading Status

Reading status is derived from KoSync progress data rather than being set manually:

- `unread` — no progress recorded
- `reading` — progress between 0% and 100%
- `finished` — progress at 100%
- `paused` — no progress update for a configurable period
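
Under those rules the derivation can be written as a pure function. This is a sketch only — the parameter names, the default 30-day pause threshold, and the function name are illustrative, not the actual implementation:

```typescript
type ReadingStatus = "unread" | "reading" | "finished" | "paused";

// percent: 0–100 progress reported by KoSync.
// lastUpdate: timestamp of the most recent progress event, or null if none.
// pauseAfterDays: the configurable inactivity threshold (30 here is a guess).
function deriveReadingStatus(
  percent: number,
  lastUpdate: Date | null,
  pauseAfterDays = 30,
  now: Date = new Date(),
): ReadingStatus {
  if (lastUpdate === null || percent === 0) return "unread";
  if (percent >= 100) return "finished";
  const idleDays = (now.getTime() - lastUpdate.getTime()) / 86_400_000;
  return idleDays > pauseAfterDays ? "paused" : "reading";
}
```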

Frontend: BFF Pattern

The Nuxt app uses a Backend-for-Frontend pattern:

- Server-side routes (`/auth/*`) manage httpOnly session cookies
- All `/api/*` requests proxy through Nuxt's server, which injects `Authorization: Bearer` headers
- The browser never sees the raw API key — it only holds a session cookie
- SSR pages fetch data server-side with the session cookie forwarded from the request
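
A minimal sketch of the header-injection step, assuming a server-side session store keyed by the cookie value (the names here are hypothetical; the real routes use Nuxt server utilities):

```typescript
// Hypothetical server-side session record. The browser only ever holds the
// opaque httpOnly cookie that keys into this store.
interface Session {
  // Raw API key kept server-side for proxying; per the tech stack table,
  // the database stores only a bcrypt hash of it.
  apiKey: string;
}

// Headers the BFF attaches when proxying a /api/* request to the Nitro API.
function buildProxyHeaders(session: Session | null): Record<string, string> {
  if (!session) throw new Error("401 Unauthorized: no active session");
  return { Authorization: `Bearer ${session.apiKey}` };
}
```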

Book Ingestion Pipeline

See the API Reference for endpoint details.

```mermaid
flowchart TD
    A["📁 File appears in inbox"] --> B["BOOK_DETECTED"]
    B --> |"Compute checksum, detect format,\ncreate DB records, dedup"| C["BOOK_PARSE_FILE"]
    C --> |"Extract metadata from EPUB/PDF\n(Dublin Core, XMP, PDF Info)"| D["BOOK_FETCH_METADATA"]
    D --> |"Query Google Books + Hardcover in parallel\nInsert candidates, detect duplicates\nSet status → review"| E["👤 User reviews candidates"]
    E --> |"Pick fields from sources, approve"| F["BOOK_ORGANIZE"]
    F --> |"Move to /library/Author/Title/\nDownload cover, embed metadata in EPUB\nCompute MD5, set status → organized"| G["✅ Organized"]
    G -.-> |"Refetch metadata\n(POST /api/library/id/refetch)"| D
    G -.-> |"Re-organize\n(POST /api/library/id/reorganize)"| F
```

All jobs retry 3 times with exponential backoff (1s base).
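
As an illustration of that schedule — doubling from a 1 s base, so the three retries wait roughly 1 s, 2 s, and 4 s (BullMQ's built-in exponential strategy may round differently):

```typescript
// Delay before retry attempt n (1-based), doubling from a 1 s base.
function backoffDelayMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** (attempt - 1);
}
```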