Shipping TanStack Start and Bun to Railway
This site runs on TanStack Start with Bun, backed by a Railway-managed Postgres. Railway's autobuild (Nixpacks) detects Bun projects just fine. What it doesn't sequence cleanly is the particular combination I need — prisma generate at build time, vite build against a TanStack Start plugin, a custom server.ts entry, and prisma migrate deploy on boot. A Dockerfile is simpler than teaching Nixpacks all of that.
The Dockerfile has four stages: two parallel bun installs (one full, one prod-only), a build stage, and a lean runtime. None of them are clever on their own — the value is in the split.
The install stages
```dockerfile
FROM oven/bun:1-alpine AS deps
WORKDIR /app
COPY package.json bun.lock ./
COPY prisma ./prisma/
RUN bun install --frozen-lockfile

FROM oven/bun:1-alpine AS deps-prod
WORKDIR /app
COPY package.json bun.lock ./
COPY prisma ./prisma/
RUN bun install --frozen-lockfile --production
```

Two parallel stages, both cached on the same three files. deps has the full dependency tree — Vite, the TanStack Start plugin, Tailwind, type packages — so the build stage can actually build. deps-prod has only what the project formally declares as a runtime dependency; that's what ends up in the final image.
The saving is less dramatic than you'd hope. @tanstack/react-start declares chunks of the Vite ecosystem as runtime deps, so they come along for the ride even in a production install. What you do drop is the devDependency-only set: @vitejs/plugin-react, Tailwind, the @types/* packages you added yourself, vite-tsconfig-paths. Small net win rather than a big one, but it's the right shape.
--frozen-lockfile is the flag you don't want to skip: it fails the build if bun.lock would change, which catches a lot of "it works on my machine" problems before they reach production.
As long as package.json, bun.lock, and prisma/schema.prisma don't change, Docker reuses both install layers from cache — a CSS tweak or a route change won't trigger a fresh install in either.
The build stage
```dockerfile
FROM oven/bun:1-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV DATABASE_URL="postgresql://dummy:dummy@localhost:5432/dummy"
RUN bunx prisma generate
RUN bun run build
```

The node_modules comes from the full deps stage, so we're not installing again. Source is copied in, Prisma generates, Vite builds.
bunx prisma generate needs a DATABASE_URL that parses as a connection string, but it doesn't connect during generate — it only reads the schema and writes the client. So a dummy URL is enough, and the image never sees a real credential at build time. Railway's production DATABASE_URL gets injected at runtime, where it belongs.
bun run build runs Vite, which produces dist/client (static assets) and dist/server/server.js (the TanStack Start handler).
The runner
```dockerfile
FROM oven/bun:1-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV PORT=3000

COPY --from=builder /app/dist ./dist
COPY --from=builder /app/server.ts ./server.ts
COPY --from=builder /app/package.json ./package.json

COPY --from=builder /app/prisma ./prisma
COPY --from=builder /app/prisma.config.ts ./prisma.config.ts

COPY --from=deps-prod /app/node_modules ./node_modules
COPY --from=builder /app/node_modules/.prisma ./node_modules/.prisma
COPY --from=builder /app/node_modules/@prisma ./node_modules/@prisma

COPY --from=builder /app/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh

EXPOSE 3000
CMD ["./entrypoint.sh"]
```

The runner starts fresh from oven/bun:1-alpine — no build toolchain, no source tree, no dev dependencies. It gets exactly what production runs:

- dist/ — the built output
- server.ts and package.json — the custom Bun server entry point
- prisma/ and prisma.config.ts — schema and config, needed for migrate deploy at startup
- node_modules from deps-prod — runtime packages only
- node_modules/.prisma and @prisma from the builder — the generated Prisma client, layered on top
One subtlety: the order of those last three COPY instructions matters. The prod node_modules goes in first; then the builder's .prisma and @prisma land on top. If you flip them, the generated client gets overwritten by whatever Prisma's install-time postinstall produced in the prod stage — which isn't guaranteed to match the schema you built against. Let the generated client always land last.
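The overwrite behavior is easy to see outside Docker. A toy sketch with plain directories and made-up file names (standing in for the image layers, not the real Prisma output):

```shell
# Toy demonstration of copy ordering: whichever copy runs last wins,
# so the generated client must always land after the prod node_modules.
mkdir -p prod_modules/.prisma builder_modules/.prisma app/node_modules
echo "postinstall stub"  > prod_modules/.prisma/client.js
echo "generated client"  > builder_modules/.prisma/client.js

# Same order as the Dockerfile: prod node_modules first, builder output on top.
cp -R prod_modules/.    app/node_modules/
cp -R builder_modules/. app/node_modules/

cat app/node_modules/.prisma/client.js   # → generated client
```

Flip the two cp lines and the stub wins, which is exactly the failure mode described above.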
Entrypoint: migrate, then start
```sh
#!/bin/sh
set -e

echo "Running database migrations..."
bunx prisma migrate deploy

echo "Starting server..."
exec bun run server.ts
```

migrate deploy is the non-interactive sibling of migrate dev — it applies pending migrations and errors out rather than prompting. Running it in the entrypoint means every Railway deploy runs migrations before traffic flips to the new instance.
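set -e is doing real work here: if migrations fail, the script aborts before the start line runs, and the deploy fails loudly instead of serving against a half-migrated schema. A toy sketch, with `false` standing in for a failing migrate deploy:

```shell
# `set -e` aborts the script at the first failing command,
# so a failed migration step never reaches the start step.
out=$(sh -c 'set -e
echo "running migrations"
false                       # stand-in for a failing `prisma migrate deploy`
echo "starting server"') && status=0 || status=$?

echo "$out"            # only "running migrations" is printed
echo "status=$status"  # non-zero, so the deploy is marked failed
```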
The exec on the last line matters: it replaces the shell process with the Bun process, so the container's PID 1 is Bun. Without exec, Bun runs as a child of the shell, and SIGTERM from Railway on redeploy doesn't reach Bun cleanly: the shell running as PID 1 doesn't forward the signal, so the container sits out the stop grace period and gets force-killed rather than shutting down gracefully.
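The difference is observable with plain sh: exec keeps the PID, a normal child invocation doesn't. A quick sketch:

```shell
# With exec, the second shell replaces the first: same PID before and after.
sh -c 'echo "before exec: $$"; exec sh -c '\''echo "after exec: $$"'\'''

# Without exec, the second shell is a child: a different PID.
sh -c 'echo "parent: $$"; sh -c '\''echo "child: $$"'\'''
```

The first command prints the same PID twice; the second prints two different ones. In the container, that "same PID" is PID 1, which is where Docker (and Railway) delivers SIGTERM.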
Binding to Railway's port
Railway injects PORT into the container's environment at runtime. The server reads it and binds on 0.0.0.0:
```ts
const SERVER_PORT = Number(process.env.PORT ?? 3000)

const server = Bun.serve({
  port: SERVER_PORT,
  hostname: '0.0.0.0',
  // ...
})
```

Bun.serve already defaults to 0.0.0.0, so this is explicit for the reader rather than strictly necessary. I'd rather write the bind address down than depend on defaults holding across runtime versions — binding to loopback inside a container is the class of mistake you only debug once, because it leaves no useful logs.
The Railway side
Two services: the app and a managed Postgres. In the app service's variables, DATABASE_URL is a reference to the Postgres service — Railway's UI has a "Reference Variable" option that expands to ${{Postgres.DATABASE_URL}}. The service name in the expansion has to match whatever you called your Postgres service; if yours is named db, the reference becomes ${{db.DATABASE_URL}}.
Build method: set to "Dockerfile". Start command: leave empty — the CMD from the Dockerfile is already ./entrypoint.sh.
That's the whole recipe. Four stages, a dummy URL, an entrypoint, and a port binding. No Nixpacks, no buildpacks, no custom start command. Docker does the work; Railway just runs the image.