
Build path

This path is for engineers who already know what they’re doing and just want a working stack now. You have an idea. You want to validate it before you commit a weekend to it. You don’t want to spend three hours wiring Postgres to a backend before you’ve written a line of business logic.

```shell
npm install -g @blissful-infra/cli
blissful-infra start my-app --backend spring-boot --database postgres-redis
```

That’s the whole setup. You now have:

  • A backend at http://localhost:8080 with REST, Kafka, Postgres, and Redis caching wired in
  • A frontend at http://localhost:3000 already calling the backend
  • Grafana, Prometheus, Loki, Tempo running and pre-provisioned (Grafana shows metrics, logs, and traces with click-through correlation)
  • A Jenkins pipeline ready to build and deploy your service
  • A management dashboard at http://localhost:3002

Open the generated project in your editor. Modify the controllers, add your own endpoints, push events through Kafka. Iterate.

Quickstart · start command

| Backend | Best for |
| --- | --- |
| `spring-boot` | Long-running HTTP API, JPA + Postgres, Kafka producer + consumer, mature JVM observability |
| `lambda-python` | Event-driven serverless workloads, learning AWS Lambda locally on LocalStack |
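For the `lambda-python` backend, the unit of work is just a handler function that takes an event and a context. A minimal sketch of that shape (this follows the standard AWS Lambda handler convention, not code generated by the CLI):

```python
import json

def handler(event, context):
    """Echo the JSON request body back, AWS Lambda-style.

    `event` carries the request; `context` carries runtime metadata
    (unused here). The return value mimics an API Gateway response.
    """
    body = event.get("body") or "{}"
    payload = json.loads(body)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"received": payload}),
    }
```

Because the handler is a plain function, you can unit-test it without any AWS machinery, then exercise it against LocalStack once it's wired into the stack.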

Frontend is React + Vite. Other frameworks are deliberately out of scope until they’re real. See the Philosophy page.

```shell
blissful-infra start my-app --backend spring-boot --frontend react-vite
```

The moment you have more than one project running locally, switch to the client model. Each client gets its own isolated stack with separate Kafka, Postgres, and observability, so projects don’t conflict on ports or pollute each other’s data.

```shell
blissful-infra client create idea-one
blissful-infra service add idea-one api --backend spring-boot --frontend react-vite
blissful-infra client up idea-one
```

Client model guide · client command · service command

| Need | Add |
| --- | --- |
| AWS-shaped storage / queues / Lambda | LocalStack at the client level. See the warehouse guide. |
| ML pipeline (Kafka, classifier, ClickHouse, MLflow) | `--plugins ai-pipeline` on `service add` |
| Identity provider | Keycloak at the client level (opt-in via `infrastructure.keycloak: true`) |
| Distributed tracing across services | Tempo is already wired (OTLP backend). Instrument and watch traces inside Grafana, with click-through to logs. |
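Pulled together, a client-level `blissful-infra.yaml` that opts into these pieces might look roughly like this. Only `infrastructure.keycloak: true` and the `ai-pipeline` plugin name come from the table above; the surrounding keys are an illustrative sketch, not a published schema:

```yaml
# Hypothetical sketch of a client-level blissful-infra.yaml.
# Only `infrastructure.keycloak: true` and the plugin name are documented
# here; the other keys are illustrative guesses at the shape.
client: idea-one
infrastructure:
  keycloak: true          # opt-in identity provider
services:
  api:
    backend: spring-boot
    frontend: react-vite
    plugins:
      - ai-pipeline       # mirrors `--plugins ai-pipeline` on `service add`
```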

When the prototype works and you want it on the internet:

```shell
blissful-infra deploy
```

The same blissful-infra.yaml that defines your local stack drives the deploy. Cloudflare Pages and Workers are the default target. Vercel and AWS adapters are in flight.

deploy command

If you hit something you don’t understand (a Kafka consumer-group rebalance, a JPA cascade behavior, a Prometheus histogram quantile), that’s when the Learn path becomes useful. The Build path gets you running; the Learn path explains why each piece looks the way it does.
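As a taste of that last example: a histogram quantile in the pre-provisioned Grafana is computed with Prometheus's `histogram_quantile` function over bucket rates. Assuming Spring Boot's default Micrometer metric name (`http_server_requests_seconds_bucket`), a p95 request-latency query looks like:

```promql
histogram_quantile(
  0.95,
  sum by (le) (rate(http_server_requests_seconds_bucket[5m]))
)
```

Why it's shaped this way (the `rate` window, the `by (le)` aggregation, why the result is an estimate) is exactly the kind of thing the Learn path covers.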