# Architecture
Libris is a self-hosted book management system. It ingests ebook files, enriches them with metadata from multiple sources, organizes them into a structured library, and serves them via OPDS for e-readers. It syncs reading progress with KOReader via KoSync.
## Monorepo Structure

```
libris/
├── apps/web/                 # Nuxt 4 frontend (port 3100)
├── services/api/             # Nitro v3 backend (port 3000)
│   ├── server/types/         # Shared TypeScript types + Zod schemas
│   ├── server/lib/metadata/  # Book metadata extraction & API clients
│   └── server/lib/queue/     # BullMQ queue constants
├── tests/e2e/                # Playwright end-to-end tests
├── scripts/                  # Build & test scripts
├── .forgejo/workflows/       # CI/CD pipelines
└── docker-compose.test.yml
```

Workspace management: pnpm workspaces, orchestrated by Turborepo.
## Tech Stack
| Layer | Technology |
|---|---|
| Frontend | Nuxt 4, Vue 3, Nuxt UI v4, Pinia, Tailwind CSS v4 |
| Backend | Nitro v3, h3 |
| Database | PostgreSQL 17, Drizzle ORM |
| Job Queue | BullMQ, Redis 7 |
| File Watching | Chokidar |
| Metadata Sources | Google Books API, Hardcover GraphQL API (both require API keys, configured in Settings) |
| Auth | bcrypt-hashed API keys, httpOnly session cookies (BFF) |
| Testing | Playwright (E2E), Vitest (unit) |
| Toolchain | Turborepo, Vitest, Oxlint, Oxfmt |
| CI/CD | Forgejo Actions |
## How the Pieces Connect

```mermaid
graph TB
    subgraph Clients
        Browser["🌐 Browser"]
        EReader["📖 E-Reader\n(KOReader, Calibre)"]
    end
    subgraph Frontend
        BFF["Nuxt BFF\n:3100"]
    end
    subgraph Backend
        API["Nitro API\n:3000"]
        Chokidar["Chokidar\nFile Watcher"]
    end
    subgraph Infrastructure
        PG["PostgreSQL\n(Drizzle ORM)"]
        Redis["Redis\n(BullMQ + Cache)"]
        FS["File System\n(inbox / library)"]
    end
    Browser -->|Session cookie| BFF
    BFF -->|Bearer token| API
    API -->|SSE| Browser
    EReader -->|OPDS| API
    EReader -->|KoSync| API
    Chokidar -->|New file detected| API
    API --> PG
    API --> Redis
    API --> FS
```

- Browser talks to the Nuxt BFF (server-side routes proxy to the API, injecting auth)
- Nuxt BFF holds the session cookie and forwards requests to the Nitro API with Bearer tokens
- Nitro API handles all business logic: REST endpoints, job processing, file management
- Chokidar watches the inbox directory and enqueues jobs when files appear
- BullMQ workers process jobs: detect → parse → fetch metadata → organize
- OPDS serves the organized library to e-readers (Calibre, KOReader, etc.)
- KoSync syncs reading progress between KOReader devices
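The watcher-to-queue hand-off above hinges on one decision: does a new file look like an ebook worth ingesting? A minimal sketch of that filter — `SUPPORTED_EXTENSIONS` and `isIngestible` are illustrative names, not the actual implementation:

```typescript
// Hypothetical decision step for the inbox watcher: only files with a
// supported ebook extension should trigger a BOOK_DETECTED job.
const SUPPORTED_EXTENSIONS = new Set([".epub", ".pdf"]); // assumed set

function isIngestible(filePath: string): boolean {
  const dot = filePath.lastIndexOf(".");
  if (dot === -1) return false;
  return SUPPORTED_EXTENSIONS.has(filePath.slice(dot).toLowerCase());
}

// In the real plugin, Chokidar's "add" event would gate on something like:
//   watcher.on("add", (path) => { if (isIngestible(path)) enqueueDetect(path) })
```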
## Backend: Single-Process Monolith

The API server runs everything in one Nitro process. Plugins start in filename order:

| Plugin | Purpose |
|---|---|
| `0.env-validate.ts` | Validates required environment variables |
| `0.migrate.ts` | Runs Drizzle migrations before accepting requests |
| `0.test-db.ts` | Sets up an in-memory PGlite database (test only) |
| `1.redis.ts` | Connects to Redis, mounts storage drivers (production only) |
| `2.watcher.ts` | Starts the Chokidar file watcher on the inbox directory |
| `3.workers.ts` | Initializes BullMQ workers for all 6 queues |
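The numeric prefixes exist because plugins load in filename order; a quick sketch (assuming a plain lexicographic sort, which is what the prefixes are designed for):

```typescript
// Plugin filenames from the table above, deliberately shuffled.
const plugins = [
  "3.workers.ts",
  "1.redis.ts",
  "0.test-db.ts",
  "2.watcher.ts",
  "0.env-validate.ts",
  "0.migrate.ts",
];

// A lexicographic sort recovers the intended startup order: env validation
// and migrations first, then Redis, then the watcher, then the workers.
const startupOrder = [...plugins].sort();
```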
## Workers

In addition to the ingestion pipeline queues, scheduled workers handle external sync and maintenance:

| Worker | Schedule | Purpose |
|---|---|---|
| `hardcover-sync` | Daily, 4 AM | Syncs reading status and progress to Hardcover for all linked books |
| `progress-history-cleanup` | Daily, 3 AM | Deletes reading progress history older than 1 year |
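The one-year retention window for `progress-history-cleanup` reduces to computing a cutoff timestamp; rows older than the cutoff are eligible for deletion. A sketch with an illustrative function name (not the app's actual code):

```typescript
// Hypothetical cutoff computation for the progress-history-cleanup worker:
// progress rows with a timestamp before the returned date get deleted.
function retentionCutoff(now: Date, days = 365): Date {
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
}
```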
## Scheduled Tasks (Nitro)

Three Nitro scheduled tasks run independently of BullMQ:

| Task | Schedule | Purpose |
|---|---|---|
| `db:cleanup-stale-inbox` | Daily, 3 AM | Removes stale inbox entries |
| `db:cleanup-orphaned-files` | Daily, 3 AM | Removes orphaned files with no book records |
| `jobs:cleanup-completed` | Hourly | Removes completed jobs from the queues |
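`jobs:cleanup-completed` amounts to dropping job records in the completed state; in production that state lives in Redis via BullMQ, but the logic can be sketched over an assumed in-memory shape:

```typescript
// Hypothetical job record; field names are illustrative, not BullMQ's schema.
type Job = { id: string; state: "completed" | "failed" | "active" | "waiting" };

// Returns the jobs that survive an hourly cleanup of completed jobs.
// Failed jobs are kept so they can be inspected and retried.
function pruneCompleted(jobs: Job[]): Job[] {
  return jobs.filter((job) => job.state !== "completed");
}
```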
## Reading Status
Reading status is derived from KoSync progress data rather than being set manually:
- unread — no progress recorded
- reading — progress between 0% and 100%
- finished — progress at 100%
- paused — no progress update for a configurable period
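The derivation above can be sketched as a pure function; the names and the 30-day default are illustrative stand-ins (the pause period is configurable in the app):

```typescript
type ReadingStatus = "unread" | "reading" | "finished" | "paused";

// Hypothetical derivation of reading status from KoSync progress data.
// progress is a 0..1 fraction, or null if no sync has ever happened.
function deriveStatus(
  progress: number | null,
  lastUpdate: Date | null,
  now: Date,
  pausedAfterDays = 30, // assumed default for the configurable period
): ReadingStatus {
  if (progress === null || progress === 0) return "unread";
  if (progress >= 1) return "finished";
  const idleMs = lastUpdate ? now.getTime() - lastUpdate.getTime() : Infinity;
  if (idleMs > pausedAfterDays * 24 * 60 * 60 * 1000) return "paused";
  return "reading";
}
```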
## Frontend: BFF Pattern

The Nuxt app uses a Backend-for-Frontend pattern:

- Server-side routes (`/auth/*`) manage httpOnly session cookies
- All `/api/*` requests proxy through Nuxt's server, which injects `Authorization: Bearer` headers
- The browser never sees the raw API key; it only holds a session cookie
- SSR pages fetch data server-side with the session cookie forwarded from the request
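The auth hop at the heart of this pattern can be sketched as a pure function; the session shape and store are assumptions, not the app's actual code:

```typescript
// Hypothetical sketch of the BFF auth hop: the browser presents only a
// session id; the server-side proxy resolves it and injects the Bearer token.
type Session = { apiKey: string };

function buildApiHeaders(
  sessionId: string | undefined,
  sessions: Map<string, Session>, // stand-in for the real session store
): Record<string, string> {
  const session = sessionId ? sessions.get(sessionId) : undefined;
  if (!session) throw new Error("401: missing or invalid session");
  // The raw API key exists only on this server-side hop, never in the browser.
  return { Authorization: `Bearer ${session.apiKey}` };
}
```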
## Book Ingestion Pipeline

See the API Reference for endpoint details.

```mermaid
flowchart TD
    A["📁 File appears in inbox"] --> B["BOOK_DETECTED"]
    B --> |"Compute checksum, detect format,\ncreate DB records, dedup"| C["BOOK_PARSE_FILE"]
    C --> |"Extract metadata from EPUB/PDF\n(Dublin Core, XMP, PDF Info)"| D["BOOK_FETCH_METADATA"]
    D --> |"Query Google Books + Hardcover in parallel\nInsert candidates, detect duplicates\nSet status → review"| E["👤 User reviews candidates"]
    E --> |"Pick fields from sources, approve"| F["BOOK_ORGANIZE"]
    F --> |"Move to /library/Author/Title/\nDownload cover, embed metadata in EPUB\nCompute MD5, set status → organized"| G["✅ Organized"]
    G -.-> |"Refetch metadata\n(POST /api/library/id/refetch)"| D
    G -.-> |"Re-organize\n(POST /api/library/id/reorganize)"| F
```

All jobs retry 3 times with exponential backoff (1s base).
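With a 1s base, the retry policy works out to waits of roughly 1s, 2s, and 4s; a sketch assuming the usual exponential schedule (delay = base * 2^(n-1), the shape BullMQ's built-in "exponential" strategy documents):

```typescript
// Delays, in milliseconds, before each retry attempt under exponential
// backoff with the given base: attempt n waits base * 2^(n-1) ms.
function backoffDelays(attempts: number, baseMs = 1000): number[] {
  return Array.from({ length: attempts }, (_, i) => baseMs * 2 ** i);
}
```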