Self‑Host FlowiseAI in 30 Minutes: The Beginner’s Guide to Your Own AI Agent Platform
You can run FlowiseAI on your own laptop or a small server in under 30 minutes. This step‑by‑step guide uses plain English and copy‑paste commands. By the end, you’ll have a secure, persistent, private AI deployment you control: an open‑source AI chatbot builder that powers real apps. If you’d rather skip servers, a note explains when Flowise Cloud is the smarter move.
FlowiseAI is a low‑code, visual builder for AI chatbots, agents, and Retrieval‑Augmented Generation (RAG) workflows. Picture a whiteboard where you drag blocks for a model, a prompt, a document loader, and a vector database, then connect them. Each canvas is a “flow.” You can test instantly, wrap it with an API, and embed it in your product.
If you’re new to this world: “LLM” means large language model (a text AI like GPT‑4). “RAG” lets your AI consult your docs. A “vector database” is a smart index that finds related concepts, not just exact matches. That’s enough to get started; the guide stays beginner‑friendly and explains why each step matters.
Why self‑host FlowiseAI (and when Cloud wins)
Benefits of self‑hosting
Control and privacy: keep flows, logs, and documents on your machine or VPC.
Flexibility: connect to OpenAI, Anthropic (Claude), Google (Gemini), Mistral, and local LLMs.
Cost: start at $0 locally; basic VPS plans run roughly $5–10/month for small projects.
Tradeoffs to consider
You manage updates, uptime, security, and backups. As you scale, CPU/RAM and reliability needs grow.
When Flowise Cloud makes sense
Flowise Cloud provides managed infrastructure with one‑click updates/rollbacks, built‑in storage, logs, datasets, and evaluation tools for grading accuracy and hallucinations. If your team prefers “no‑ops,” Cloud can be cheaper once you factor time, reliability, and tooling. Many users start self‑hosted on Railway/Render or a small VPS, then move to Cloud as requirements grow.
What you need before you start
Hardware
Local laptop: a modern machine with 8 GB RAM is fine for testing. Small VPS: 1 vCPU, 1–2 GB RAM, 10–20 GB disk is enough to begin.
Tools
Docker + Docker Compose (recommended). Install Docker Desktop on macOS/Windows or Docker Engine on Linux. See Docker Desktop and Docker Engine install.
Optional for developers: Node.js 18.15+ or 20+ and npm if you prefer running via npm.
Optional accounts
LLM API keys (OpenAI, Anthropic, Google, Mistral, or any OpenAI‑compatible API). Vector database options include Pinecone, or PostgreSQL with pgvector (Supabase is an easy managed option), or a local store like Chroma. A domain name if you want a custom URL with HTTPS.
The 10‑minute local install with Docker
Docker makes everything reproducible and easy to update. Your flows persist across restarts by mapping one folder.
Create a new folder and save the following as docker-compose.yml inside it:
version: '3.8'
services:
  flowise:
    image: flowiseai/flowise:latest
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - FLOWISE_USERNAME=admin
      - FLOWISE_PASSWORD=change_me_now
      - DATABASE_PATH=/opt/flowise/.flowise
      - APIKEY_PATH=/opt/flowise/.flowise
      - LOG_PATH=/opt/flowise/.flowise/logs
      - SECRETKEY_PATH=/opt/flowise/.flowise
      - BLOB_STORAGE_PATH=/opt/flowise/.flowise/storage
    volumes:
      - flowise_data:/opt/flowise/.flowise
    restart: unless-stopped
volumes:
  flowise_data:
Start it with:
docker compose up -d
Open http://localhost:3000 and create your admin account on first run.
Useful commands and operations:
Restart: docker compose restart
Logs: docker compose logs -f
Stop: docker compose stop
Update: run docker compose stop, then docker compose rm -f, then docker pull flowiseai/flowise, and finally docker compose up -d.
Tip: if you later host publicly, don’t expose port 3000 directly. Bind Flowise to 127.0.0.1:3000 and put a reverse proxy (Nginx or Caddy) in front for HTTPS.
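In the compose file, that means publishing the port on the loopback interface only, so nothing outside the host can reach Flowise directly. The relevant change:
    ports:
      - "127.0.0.1:3000:3000"
Only a reverse proxy on the same machine can then connect; the outside world sees just ports 80 and 443.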
Prefer npm? A quick local run for developers
Requirements
Node 18.15+ or 20+; npm 9+ recommended.
Install and start
Install globally and start with:
npm install -g flowise
npx flowise start
Open http://localhost:3000.
To add a password and persistence, set environment variables before starting. Example for macOS/Linux:
export FLOWISE_USERNAME=admin
export FLOWISE_PASSWORD=change_me_now
export DATABASE_PATH=./.flowise
export APIKEY_PATH=./.flowise
export LOG_PATH=./.flowise/logs
export SECRETKEY_PATH=./.flowise
export BLOB_STORAGE_PATH=./.flowise/storage
npx flowise start
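On Windows, the PowerShell equivalent uses the $env: syntax (same variables, same pattern):
$env:FLOWISE_USERNAME = "admin"
$env:FLOWISE_PASSWORD = "change_me_now"
$env:DATABASE_PATH = ".\.flowise"
npx flowise start
Set the remaining path variables the same way before starting.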
Configure the basics (just once)
Think of Flowise as a studio that needs a few folders to store your work. The environment variables above define where those live and enable login.
FLOWISE_USERNAME and FLOWISE_PASSWORD turn on built‑in authentication—always set these. DATABASE_PATH is where Flowise stores its internal SQLite database. APIKEY_PATH is where saved provider keys are kept (encrypted). LOG_PATH houses logs for troubleshooting. SECRETKEY_PATH stores encryption secrets. BLOB_STORAGE_PATH holds file uploads and processed assets. PORT defaults to 3000.
Backups are simple: regularly back up the entire folder you mapped (the .flowise directory in the examples). That preserves your flows, credentials, and uploads.
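For the Docker setup above, which stores data in the named volume flowise_data, a minimal sketch of a dated backup looks like this (run from whatever directory should hold the archive):
# Archive the Flowise data volume with today's date in the filename
docker run --rm -v flowise_data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/flowise-$(date +%F).tar.gz -C /data .
With the npm setup, a plain tar czf backup.tar.gz .flowise does the same job.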
Note: SQLite is perfect for Flowise’s internal app database. For RAG search, set up a separate vector store such as PostgreSQL with pgvector (e.g., Supabase), Pinecone, or Chroma.
Your first 5‑minute chatbot flow
Log in and create a new chat flow (a blank canvas). Add a ChatOpenAI (or your preferred model) node. Add a System Prompt node with something like “You are a friendly assistant called Eve.” Click Save, open the tester, and say “Hello.” If you see a response, congrats: you’ve just shipped your first piece of LLM app development.
Under the hood, you’ve connected a model to a prompt and enabled a chat UI. This is how AI workflow automation starts: simple blocks composed into useful behaviors.
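Every saved flow also gets its own REST endpoint, which is how you embed it in real apps. A quick smoke test from the terminal (the chatflow ID is a placeholder you copy from the flow’s API dialog; add an Authorization: Bearer header if you created a Flowise API key):
curl http://localhost:3000/api/v1/prediction/<your-chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "Hello"}'
If that returns JSON with your assistant’s reply, the same endpoint will work from any backend or webhook.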
Connect your favorite models (cloud and local)
Flowise works with the big players and with local models. For OpenAI, paste your API key into a ChatOpenAI node and try models like gpt-4o-mini. For Anthropic, add a Claude node with your key. For Google, use Gemini via its node and key. For local models, add an Ollama node and run a local model such as Llama 3.1 for a private deployment without sending data to external APIs.
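If you go the local route, getting Ollama serving a model takes two commands (assuming Ollama is already installed; Flowise’s Ollama node typically points at its default endpoint, http://localhost:11434):
ollama pull llama3.1    # download the model weights
ollama run llama3.1     # optional: sanity-check it in the terminal first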
Add your knowledge with RAG (documents, search, answers)
To make your agent answer from your docs, open Document Stores and create one. Add loaders for PDFs, Docs, CSVs, or URLs, split text into chunks (for example, using a Recursive Character Text Splitter), pick embeddings (OpenAI embeddings or local), and choose a vector store. Popular options are PostgreSQL with pgvector via Supabase (supabase.com), Pinecone (pinecone.io), or Chroma (trychroma.com).
Click Upsert to index your content. Then, back in your flow, connect the document store to your agent and ask questions grounded in your data. The result is a trustworthy, open‑source AI chatbot builder that can actually cite and search your source material.
Go public safely (VPS or managed platforms)
VPS (full control, predictable cost)
Pick Ubuntu 22.04+ with 1 vCPU, 2 GB RAM, 20 GB disk. Install Docker and Compose. Deploy your docker-compose.yml with the same settings you used locally:
mkdir -p /opt/flowise && cd /opt/flowise
nano docker-compose.yml # paste your file
docker compose up -d
Keep Flowise bound to 127.0.0.1:3000 and add HTTPS with a reverse proxy. Caddy offers auto‑TLS; see Caddy docs. Example Caddyfile:
your.flowise.domain {
    reverse_proxy 127.0.0.1:3000
}
Nginx + Certbot is another option. Install Nginx, set proxy_pass http://127.0.0.1:3000, then run certbot --nginx. See Nginx docs and Certbot.
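A minimal sketch of the Nginx server block before Certbot adds TLS (the domain is a placeholder; the upgrade headers keep streaming chat responses working):
server {
    listen 80;
    server_name your.flowise.domain;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
Running certbot --nginx afterwards rewrites this block to serve HTTPS on port 443.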
Managed platforms (Render, Railway)
Click‑to‑deploy templates exist for Flowise. Add FLOWISE_USERNAME and FLOWISE_PASSWORD env vars and mount a persistent volume so you don’t lose flows on restarts. Example Railway paths:
DATABASE_PATH=/opt/railway/.flowise
APIKEY_PATH=/opt/railway/.flowise
LOG_PATH=/opt/railway/.flowise/logs
SECRETKEY_PATH=/opt/railway/.flowise
BLOB_STORAGE_PATH=/opt/railway/.flowise/storage
Budget note: many users self‑host on Railway/Render from about $7–8/month plus a small persistent disk. As you need more CPU/RAM or guaranteed uptime, costs approach Flowise Cloud, which bundles storage, backups, and pro features.
Security essentials you shouldn’t skip
Always set FLOWISE_USERNAME and FLOWISE_PASSWORD. Never expose a bare instance. Put Flowise behind a reverse proxy and only open ports 80 and 443 publicly. Limit SSH to your IP when possible. On Ubuntu, a quick start with UFW:
ufw allow OpenSSH
ufw allow 80,443/tcp
ufw enable
Add rate limiting and a max upload size at the proxy to reduce abuse. Back up your .flowise data folder and your vector database regularly. Keep Docker, your OS, and Flowise updated; small updates often fix big problems.
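At the Nginx layer, both protections take a few directives. A sketch (the 10 requests/second and 20 MB values are illustrative; tune them to your traffic):
# in the http block
limit_req_zone $binary_remote_addr zone=flowise:10m rate=10r/s;

# in the server block
client_max_body_size 20m;
location / {
    limit_req zone=flowise burst=20 nodelay;
    proxy_pass http://127.0.0.1:3000;
}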
Keep it healthy: updates, backups, and troubleshooting
Update Flowise (Docker): run docker compose stop, docker compose rm -f, docker pull flowiseai/flowise, then docker compose up -d.
Update Flowise (npm): npm update -g flowise
To roll back, pin a known‑good tag in your compose file, for example image: flowiseai/flowise:X.Y.Z. Back up the .flowise directory (DATABASE_PATH, APIKEY_PATH, LOG_PATH, SECRETKEY_PATH, BLOB_STORAGE_PATH). For PostgreSQL/pgvector, run pg_dump or use provider snapshots. For Pinecone, keep original source docs since Pinecone is not a primary store for originals.
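A minimal pg_dump sketch, assuming your connection string lives in an environment variable named DATABASE_URL (a placeholder name, not something Flowise sets):
pg_dump "$DATABASE_URL" --format=custom --file="pgvector-$(date +%F).dump"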
Monitor and troubleshoot with docker compose logs -f and the files under your LOG_PATH. Common issues include port conflicts (3000), wrong paths, missing credentials, and low RAM (OOM). If ingestion is heavy, run it off‑hours or on a bigger plan.
Grow when you’re ready
As traffic increases or flows get more complex, run multiple Flowise containers behind Nginx/Caddy or a load balancer. If you rely on in‑memory chat state, pin sessions or externalize state. Use a managed vector DB (Supabase Postgres with pgvector or Pinecone) for speed and durability. Choose larger VPS instances or autoscaling managed platforms for higher concurrency. Separate heavy background ingestion from the live instance to avoid timeouts.
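A sketch of what that looks like in Nginx, assuming two containers published on ports 3000 and 3001 (assumed ports; ip_hash pins each client to one container, which covers in‑memory chat state):
upstream flowise_pool {
    ip_hash;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
Inside the server block from earlier, point proxy_pass at http://flowise_pool instead of a single port.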
What to build next (starter ideas)
A website support assistant powered by your knowledge base (embed the Flowise chat widget). An internal “docs copilot” that answers from PDFs, wikis, and policy manuals. A Slack or WhatsApp assistant via your own webhook and the Flowise REST API. An evaluation loop (built‑in on Flowise Cloud) to grade accuracy, latency, and costs.
Helpful links and where to learn more
FlowiseAI on GitHub (docs and examples): https://github.com/FlowiseAI/Flowise
Docker Desktop: https://www.docker.com/products/docker-desktop/
Docker Engine install: https://docs.docker.com/engine/install/
Caddy docs: https://caddyserver.com/docs/
Nginx docs: https://nginx.org/en/docs/
Certbot by EFF: https://certbot.eff.org/
pgvector extension: https://github.com/pgvector/pgvector
Supabase (managed Postgres + pgvector): https://supabase.com
Pinecone (managed vector database): https://www.pinecone.io
Ready in 30 minutes: your next step
Fast local test: copy the Docker Compose above, run docker compose up -d, and open http://localhost:3000. Add a model key and send your first message. For public deployment: spin up a 1–2 GB RAM VPS, deploy the same compose file, add Nginx or Caddy with HTTPS, and point a domain to it. Prefer managed? Try Railway/Render with a persistent volume or jump to Flowise Cloud for one‑click updates, datasets, evaluation tools, and managed storage/logs.
Don’t wait for perfect. Self‑host FlowiseAI today, ship your first chatbot, and iterate. Set a 30‑minute timer—you’ll have a secure, persistent studio for AI agents and RAG workflows before it rings.
FAQs
What is FlowiseAI and what can I build with it?
FlowiseAI is an open‑source, low‑code tool that lets you build AI chatbots, agents, and RAG workflows with a drag‑and‑drop canvas. Think of it as Lego blocks for LLM apps.
Why should I self-host FlowiseAI instead of using Flowise Cloud?
Self-hosting gives you control and privacy, flexibility to connect to any OpenAI‑compatible or local models, and potentially lower costs. Cloud offers managed infrastructure, updates, and built‑in storage, but at a monthly fee and with fewer customization options.
What do I need to start self-hosting FlowiseAI (hardware and software)?
Hardware: a modern laptop with at least 8 GB RAM or a small VPS (1 vCPU, 1–2 GB RAM). Software: Docker and Docker Compose (recommended), Node.js 18.15+ or 20+ if using npm, Git, and optionally a reverse proxy like Nginx or Caddy.
What are the essential environment variables to configure FlowiseAI?
Essential vars: FLOWISE_USERNAME and FLOWISE_PASSWORD (auth), DATABASE_PATH, APIKEY_PATH, LOG_PATH, SECRETKEY_PATH, BLOB_STORAGE_PATH, and PORT. Optional for production: CORS_ORIGINS, BASE_URL, and rate limiting at the proxy.
How does FlowiseAI store data and stay persistent?
By default Flowise uses SQLite stored at DATABASE_PATH. Keep data persistent by mounting a volume (for example, /opt/flowise/.flowise). For vector search, use PostgreSQL with pgvector or other vector stores.
Can I use PostgreSQL instead of SQLite with FlowiseAI?
FlowiseAI’s internal app DB uses SQLite by default, and that default is recommended for this guide. You can and should use PostgreSQL with pgvector as your vector store for RAG, but that’s separate from Flowise’s internal database.
How do I run FlowiseAI locally (Docker or npm) and start using it?
Docker: create a docker-compose.yml (provided in the guide), then run docker compose up -d and open http://localhost:3000. NPM: install Flowise globally (npm install -g flowise) and start with npx flowise start, then open http://localhost:3000. For persistence with npm, set the environment variables before starting.
How do I deploy FlowiseAI for public access (VPS or managed cloud)?
On a VPS: choose Ubuntu 22.04+, install Docker, prepare a docker-compose.yml, and run it behind a reverse proxy (Nginx or Caddy) with HTTPS. On managed platforms (Render/Railway): use templates, set env vars, and add persistent storage for flows.
What security steps should I follow for public deployments?
Always require a username/password, hide the app behind a reverse proxy with TLS, only open ports 80/443 (restrict SSH), rotate API keys, and back up data regularly. Consider rate limiting and keep software up to date.
What maintenance tasks should I know about (updates, backups, monitoring)?
Update FlowiseAI with docker compose pull and restart, or npm update -g flowise. Roll back by pinning a known image tag. Back up the .flowise directory and any vector stores. Monitor logs and service health, and set up basic uptime checks.