- 1. Base Server Setup
- 2. Let's Encrypt Certs with Certbot (and Cloudflare DNS)
- 3. Building the Dockerized Postgres + pgvector with SSL
- Dockerfile
- docker-compose.yml
- 4. Setting up Cloudflare Tunnel (TCP Mode)
- a) Install cloudflared
- b) Create the Tunnel and DNS Route
- c) Move Tunnel Credentials and Setup Config
- d) Install/Enable Systemd Service
- 5. Connecting from a Client (Mac/anywhere)
- a) Realize That Just Pointing psql Won't Work (If Using Access Policies)
- b) The Fix: Use cloudflared on the Client
- 6. Connecting from Cloudflare Functions via Hyperdrive
Setting Up Postgres for Cloudflare Hyperdrive
This TIL blog used to rely on sqlite-vss for semantic search features. However, with sqlite-vss no longer under active development, it was time for an upgrade. pgvector seemed like the natural successor for handling vector embeddings within Postgres.
Since this site is hosted on Cloudflare Pages, I wanted the search functionality to run serverlessly using Cloudflare Functions. This meant the function needed a way to connect securely and efficiently to a Postgres database. Enter Cloudflare Hyperdrive, designed to accelerate database queries from Workers.
So, the challenge became: set up a small, public-facing Postgres server with pgvector, secure it with a real TLS certificate, expose zero public ports, and make it accessible to Cloudflare Functions via Hyperdrive. With this setup, the search backend stays internal yet is reachable from Cloudflare's network, without opening port 5432 to the wider internet. It's a pretty complex setup for a blog no one else reads, but it seemed like a useful thing to learn.
This post details how I got it all working using Hetzner, Docker, Let's Encrypt, and Cloudflare Tunnels.
1. Base Server Setup
I spun up a fresh Hetzner VM and installed Docker. Instructions per the Docker docs:
sudo apt-get remove docker docker-engine docker.io containerd runc || true
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Docker GPG + repo
sudo install -d -m 0755 /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
| sudo tee /etc/apt/keyrings/docker.asc >/dev/null
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/debian $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Let the current user run Docker
sudo usermod -aG docker $USER && newgrp docker
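A quick smoke test confirms Docker works before moving on:
docker run --rm hello-world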
Project folder:
sudo mkdir -p /opt/pgvec/{certs,data}
sudo chown $USER /opt/pgvec
cd /opt/pgvec
2. Let's Encrypt Certs with Certbot (and Cloudflare DNS)
Since my Postgres would be public-facing (through Cloudflare Tunnel), I wanted a real SSL cert—not just self-signed!
- Install snap + certbot:
Debian doesn't ship with snap by default, so...
sudo apt install -y snapd
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
Then install certbot via snap:
sudo snap install core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/local/bin/certbot
sudo snap install certbot-dns-cloudflare
- Create a Cloudflare API token
Go to https://dash.cloudflare.com/profile/api-tokens and create a token:
- Create Token
- → Edit zone DNS (Use Template)
- → Set permissions, zone resources, etc.
- Store it securely:
sudo mkdir -p /root/.secrets
sudo chmod 700 /root/.secrets
echo "dns_cloudflare_api_token = <YOUR_API_TOKEN>" \
| sudo tee /root/.secrets/cloudflare.ini > /dev/null
sudo chmod 600 /root/.secrets/cloudflare.ini
- Run Certbot to Get a Certificate:
Request a cert using the Cloudflare DNS plugin (swap in your actual domain):
sudo certbot certonly \
--dns-cloudflare \
--dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
-d pgvec.yourdomain.com --preferred-challenges dns-01
If successful, you'll see:
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/pgvec.yourdomain.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/pgvec.yourdomain.com/privkey.pem
- Copy the certs into the project so the Docker build can pick them up (the originals stay under /etc/letsencrypt, which keeps renewal easy):
sudo cp /etc/letsencrypt/live/pgvec.yourdomain.com/{fullchain.pem,privkey.pem} certs/
sudo mv certs/fullchain.pem certs/server.crt
sudo mv certs/privkey.pem certs/server.key
sudo chown $USER certs/*
chmod 600 certs/server.key
Renewal tip: add a systemd timer that runs certbot renew --quiet, then re-copies the certs and rebuilds the container so Postgres picks up the new files.
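One way to wire that up: certbot runs any executable in its deploy-hooks directory after a successful renewal, so a small script there can refresh the copies and rebuild. A sketch (the filename and paths are my choices, matching the layout above):
#!/bin/bash
# /etc/letsencrypt/renewal-hooks/deploy/pgvec.sh  (remember to chmod +x it)
set -euo pipefail
LIVE=/etc/letsencrypt/live/pgvec.yourdomain.com
cp "$LIVE/fullchain.pem" /opt/pgvec/certs/server.crt
cp "$LIVE/privkey.pem" /opt/pgvec/certs/server.key
chmod 600 /opt/pgvec/certs/server.key
# rebuild so the image is baked with the fresh certs
cd /opt/pgvec && docker compose up -d --build pgvec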
3. Building the Dockerized Postgres + pgvector with SSL
Dockerfile
FROM pgvector/pgvector:pg17
# Bake the Let's Encrypt cert/key into the image and lock down permissions
COPY certs/server.* /var/lib/postgresql/
RUN chown postgres:postgres /var/lib/postgresql/server.* \
  && chmod 600 /var/lib/postgresql/server.key
docker-compose.yml
Note: The host port 5557 is used throughout this guide as an example. You can change this to any available port on your host machine. If you change it, remember to update it consistently in the Cloudflare Tunnel configuration and connection commands later.
services:
  pgvec:
    build: .
    container_name: pgvec
    restart: unless-stopped
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: admin # change this!
      POSTGRES_DB: posts
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - '5557:5432' # change 5557 to whichever port you want to expose
    command: >
      postgres
      -c ssl=on
      -c ssl_cert_file=/var/lib/postgresql/server.crt
      -c ssl_key_file=/var/lib/postgresql/server.key
      -c listen_addresses='*'

volumes:
  pgdata:
docker compose up -d --build
Protip: use sudo docker... if your user isn't yet in the docker group. I fixed this with:
sudo usermod -aG docker $USER
newgrp docker
Now the container should be running locally with SSL enabled:
$ docker compose exec pgvec psql -U admin -d posts -c "show ssl;"
ssl
-----
on
And to actually use pgvector:
docker compose exec pgvec psql -U admin -d posts -c "CREATE EXTENSION IF NOT EXISTS vector;"
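To see it working end to end, here's a tiny round-trip you can paste into psql; the items table and 3-dimensional vectors are made up for illustration (real embedding models produce hundreds of dimensions):
CREATE TABLE items (id serial PRIMARY KEY, embedding vector(3));
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');
-- nearest neighbour by L2 distance; pgvector also offers cosine distance (<=>)
SELECT id FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 1;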
4. Setting up Cloudflare Tunnel (TCP Mode)
Cloudflare Tunnel keeps every inbound‑to‑Postgres packet inside Cloudflare's network; only the tunnel connector on your VM ever sees the DB port.
a) Install cloudflared
Debian/Ubuntu:
# Add Cloudflare's apt repository
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" |
sudo tee /etc/apt/sources.list.d/cloudflared.list
# Install
sudo apt update && sudo apt -y install cloudflared
b) Create the Tunnel and DNS Route
cloudflared tunnel login # Opens a browser tab — log in to Cloudflare
cloudflared tunnel create pgvec-tunnel # Note the Tunnel ID printed
# Create a CNAME record pointing your desired hostname to the tunnel
cloudflared tunnel route dns pgvec-tunnel pgvec.yourdomain.com
c) Move Tunnel Credentials and Setup Config
Secure the tunnel credentials file generated in ~/.cloudflared/:
sudo mkdir -p /etc/cloudflared
sudo chmod 750 /etc/cloudflared # Restrict access
sudo mv ~/.cloudflared/*.json /etc/cloudflared/pgvec-tunnel.json # Use the actual filename if different
sudo chown root:cloudflared /etc/cloudflared/pgvec-tunnel.json # cloudflared runs as this user
sudo chmod 600 /etc/cloudflared/pgvec-tunnel.json
Create the configuration file:
sudo tee /etc/cloudflared/config.yml <<EOF
tunnel: <YOUR_TUNNEL_ID>
credentials-file: /etc/cloudflared/pgvec-tunnel.json
ingress:
  - hostname: pgvec.yourdomain.com
    service: tcp://localhost:5557
  - service: http_status:404
EOF
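Optionally, sanity-check the ingress rules before starting the service:
cloudflared tunnel --config /etc/cloudflared/config.yml ingress validate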
Note: Replace <YOUR_TUNNEL_ID> with the actual ID from cloudflared tunnel create ... above.
d) Install/Enable Systemd Service
This makes cloudflared run automatically on boot.
sudo cloudflared service install
sudo systemctl daemon-reload
sudo systemctl enable --now cloudflared
Check the service status and logs:
systemctl status cloudflared
journalctl -fu cloudflared
You should see "Registered tunnel connection ..." in the logs.
5. Connecting from a Client (Mac/anywhere)
a) Realize That Just Pointing psql Won't Work (If Using Access Policies)
If you set up a Cloudflare Access policy for your tunnel's hostname, simply pointing psql at the public hostname won't work:
psql "postgresql://admin:admin@pgvec.yourdomain.com:5557/posts?sslmode=require"
It will hang or time out: Cloudflare Access is blocking the connection, waiting for an authenticated client.
(If you skipped the Access Policy in Cloudflare Zero Trust, the command above will work directly, as the tunnel acts like a simple TCP proxy.)
b) The Fix: Use cloudflared on the Client
To satisfy the Access Policy, install cloudflared locally:
# macOS (Homebrew)
brew install cloudflare/cloudflare/cloudflared
Log in to your Cloudflare account (opens a browser):
cloudflared login
Forward the tunnel's TCP endpoint to a local port (e.g., localhost:5557) and leave this command running in a terminal (hostname and port adjusted to yours):
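cloudflared access tcp --hostname pgvec.yourdomain.com --url localhost:5557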
Now, in another terminal, connect psql to localhost, using the credentials from docker-compose.yml:
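# user/password/db as defined in docker-compose.yml
psql "postgresql://admin:admin@localhost:5557/posts?sslmode=require"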
You now have an end-to-end authenticated and encrypted route! Your local cloudflared handles the Zero Trust authentication, and the connection to Postgres still uses the Let's Encrypt SSL certificate.
The result: a Postgres instance that speaks modern TLS, hides behind Cloudflare's network, and requires zero inbound firewall rules.
6. Connecting from Cloudflare Functions via Hyperdrive
Now for the original goal: letting a Cloudflare Function query the database.
- Create a Hyperdrive Configuration:
In the Cloudflare dashboard, open Hyperdrive and click "+ Create configuration". Choose "Private Database" and fill in the connection details (hostname, port, database name, user, password).
Save the configuration and note the Hyperdrive ID it gives you.
- Bind Hyperdrive to Your Function:
In your wrangler.toml file for your Cloudflare Function project, add a binding:
[[hyperdrive]]
binding = "HYPERDRIVE" # The variable name in your Worker code
id = "<YOUR_HYPERDRIVE_ID>"
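If your Functions project is TypeScript, the Env type also needs to know about the binding; a minimal version, assuming @cloudflare/workers-types is installed (the Hyperdrive interface ships with it):
// env.d.ts
interface Env {
  HYPERDRIVE: Hyperdrive; // must match the `binding` name in wrangler.toml
}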
- Use the Binding in Your Function Code:
- Install a Postgres client library compatible with Cloudflare Workers (e.g., postgres).
- Use the env.HYPERDRIVE.connectionString provided by the binding:
import postgres, { Sql } from 'postgres';

// Helper to get the SQL client
const getSql = (env: Env): Sql => {
  // Use the connection string from the Hyperdrive binding
  // `prepare: false` is often needed in serverless environments
  return postgres(env.HYPERDRIVE.connectionString, { prepare: false });
};

export async function onRequest(
  context: EventContext<Env, string, Record<string, unknown>>
) {
  const url = new URL(context.request.url)
  const q = url.searchParams.get('q')?.trim() ?? ''
  if (!q) return new Response('', { status: 400 })

  const sql = getSql(context.env)
  try {
    type Row = {
      // adjust the field types to match your schema
      slug: string;
      title: string;
      brief: string;
      tags: string;
      published_at: string;
    }
    const rows = await sql<Row[]>`
      SELECT slug, title, brief, tags, published_at
      FROM posts
      WHERE title ILIKE ${`%${q}%`}
    `
    return new Response(JSON.stringify(rows), {
      headers: {
        'content-type': 'application/json; charset=utf-8'
      }
    })
  } finally {
    await sql.end({ timeout: 5 })
  }
}
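Assuming the file lives at functions/api/search.ts in the Pages project (my layout, not necessarily yours), a quick test once deployed:
curl "https://yourblog.pages.dev/api/search?q=postgres"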
Now your Cloudflare Function can securely connect to your Postgres database through the Cloudflare Tunnel, accelerated by Hyperdrive!