Everything you need to know about Swiss-hosted, AI-native databases. From getting started to advanced MCP integration.
Vaultbrix is a Supabase-compatible database platform hosted exclusively in Switzerland. It provides the same APIs (PostgREST, GoTrue, Realtime, Storage) with added AI context features through the Snipara engine. Your existing Supabase client code works unchanged.
Vaultbrix implements the complete Supabase stack:
PostgREST: Auto-generated REST API from your schema
GoTrue: Authentication with JWT, OAuth, and Magic Links
Realtime: WebSocket subscriptions and presence
Storage: S3-compatible object storage with RLS
// This code works identically on Supabase and Vaultbrix
import { createClient } from '@supabase/supabase-js'
// Supabase
const supabase = createClient(
'https://abc123.supabase.co',
'your-anon-key'
)
// Vaultbrix - just change the URL
const vaultbrix = createClient(
'https://abc123.vaultbrix.com',
'your-anon-key'
)
// All operations work the same
const { data, error } = await vaultbrix
.from('users')
.select('*')
.eq('active', true)

Sign up at app.vaultbrix.com, click "New Project," select the Swiss region (CH-GVA-2), and your PostgreSQL database with all Supabase services will be ready within 60 seconds.
Visit app.vaultbrix.com and sign up using Magic Link or GitHub OAuth.
From your dashboard, click the "New Project" button. Enter a project name (lowercase, alphanumeric, hyphens allowed).
Currently available:
CH-GVA-2 (Geneva, Switzerland)
Planned: DE-FRA-1 (Frankfurt), Q2 2026
Create a strong password for your PostgreSQL superuser. Store this securely - it cannot be recovered, only reset.
Within 60 seconds, your project is provisioned with:
# Project URL
https://YOUR_PROJECT_REF.vaultbrix.com
# API Keys (Settings > API)
ANON_KEY=eyJhbGciOiJIUzI1NiIs...
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIs...
# Direct Database Connection
postgresql://postgres:YOUR_PASSWORD@db.YOUR_PROJECT_REF.vaultbrix.com:5432/postgres
# Pooled Connection (recommended for serverless)
postgresql://postgres:YOUR_PASSWORD@db.YOUR_PROJECT_REF.vaultbrix.com:6543/postgres

Use the anon key in client-side code (browsers, mobile apps) - it respects Row Level Security. Use the service role key only in secure backend environments - it bypasses all RLS policies.
| Key | Use In | RLS | Risk Level |
|---|---|---|---|
| anon key | Browsers, mobile apps, public code | Enforced | Safe to expose |
| service role key | Backend servers, cron jobs, admin scripts | Bypassed | Never expose |
// CLIENT-SIDE (React, Next.js pages, mobile)
// Use ANON key - safe to expose
const supabase = createClient(
process.env.NEXT_PUBLIC_VAULTBRIX_URL,
process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY // Public env var
)
// SERVER-SIDE (API routes, cron jobs, admin scripts)
// Use SERVICE ROLE key - never expose
const adminClient = createClient(
process.env.VAULTBRIX_URL,
process.env.VAULTBRIX_SERVICE_ROLE_KEY // Private env var
)

Never use the service role key in client-side code. It grants full database access, bypassing all security policies. If compromised, rotate it immediately in Settings → API Keys.
Install @supabase/supabase-js, then call createClient() with your Vaultbrix URL and anon key. The client is fully compatible - no code changes required beyond the URL.
# npm
npm install @supabase/supabase-js
# pnpm
pnpm add @supabase/supabase-js
# yarn
yarn add @supabase/supabase-js

// lib/vaultbrix.ts
import { createClient } from '@supabase/supabase-js'
const vaultbrixUrl = 'https://YOUR_PROJECT_REF.vaultbrix.com'
const vaultbrixAnonKey = 'YOUR_ANON_KEY'
export const supabase = createClient(vaultbrixUrl, vaultbrixAnonKey)
// Usage
const { data, error } = await supabase
.from('posts')
.select('id, title, author:users(name)')
.order('created_at', { ascending: false })
.limit(10)

# Generate types from your schema
npx supabase gen types typescript \
--project-id YOUR_PROJECT_REF \
--schema public > types/database.ts
// Use typed client
import { createClient } from '@supabase/supabase-js'
import type { Database } from './types/database'
export const supabase = createClient<Database>(
process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!
)
// Now you get full autocomplete
const { data } = await supabase
.from('posts') // ← Autocompletes table names
.select('*')     // ← Knows column types

Use @supabase/ssr for Server Components and Server Actions. Create separate browser and server clients to handle cookies properly. The setup is identical to Supabase - just change the URL.
npm install @supabase/supabase-js @supabase/ssr

# .env.local
NEXT_PUBLIC_VAULTBRIX_URL=https://YOUR_PROJECT_REF.vaultbrix.com
NEXT_PUBLIC_VAULTBRIX_ANON_KEY=eyJhbGciOiJIUzI1NiIs...
VAULTBRIX_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIs...

// lib/vaultbrix/client.ts
import { createBrowserClient } from '@supabase/ssr'
export function createClient() {
return createBrowserClient(
process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!
)
}

// lib/vaultbrix/server.ts
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'
export async function createClient() {
const cookieStore = await cookies()
return createServerClient(
process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!,
{
cookies: {
getAll() {
return cookieStore.getAll()
},
setAll(cookiesToSet) {
try {
cookiesToSet.forEach(({ name, value, options }) =>
cookieStore.set(name, value, options)
)
} catch {
// Called from Server Component - ignore
}
},
},
}
)
}

// app/posts/page.tsx
import { createClient } from '@/lib/vaultbrix/server'
export default async function PostsPage() {
const supabase = await createClient()
const { data: posts } = await supabase
.from('posts')
.select('*')
.order('created_at', { ascending: false })
return (
<ul>
{posts?.map(post => (
<li key={post.id}>{post.title}</li>
))}
</ul>
)
}

GoTrue is the authentication server that handles sign-up, sign-in, OAuth, Magic Links, and JWT issuance. It's identical to Supabase Auth - your existing auth code works unchanged. JWTs contain user claims that integrate directly with PostgreSQL RLS policies.
Magic Link: Passwordless email login via Resend
GitHub OAuth: Sign in with your GitHub account
// Magic Link (passwordless) - recommended
const { error } = await supabase.auth.signInWithOtp({
email: 'user@example.com',
options: {
emailRedirectTo: 'https://yourapp.com/auth/callback'
}
})
// GitHub OAuth
const { error } = await supabase.auth.signInWithOAuth({
provider: 'github',
options: {
redirectTo: 'https://yourapp.com/auth/callback'
}
})
// Get current user
const { data: { user } } = await supabase.auth.getUser()
// Sign out
await supabase.auth.signOut()

All authentication data (user records, sessions, tokens) is stored in Swiss datacenters. Password hashes use bcrypt with a configurable work factor. Session JWTs are signed with keys stored exclusively in Switzerland and are not subject to the US CLOUD Act.
Use supabase.auth.getUser() to get the current authenticated user and supabase.auth.getSession() for the JWT session. Listen to onAuthStateChange for real-time auth events.
// Get current user (validates JWT with server)
const { data: { user }, error } = await supabase.auth.getUser()
if (user) {
console.log('User ID:', user.id)
console.log('Email:', user.email)
console.log('Metadata:', user.user_metadata)
}
// Get session (local, doesn't call server)
const { data: { session } } = await supabase.auth.getSession()
if (session) {
console.log('Access Token:', session.access_token)
console.log('Expires at:', new Date(session.expires_at * 1000))
}
// Listen for auth changes
supabase.auth.onAuthStateChange((event, session) => {
switch (event) {
case 'SIGNED_IN':
console.log('User signed in:', session?.user.email)
break
case 'SIGNED_OUT':
console.log('User signed out')
break
case 'TOKEN_REFRESHED':
console.log('Token refreshed')
break
}
})

// Update user metadata
const { data, error } = await supabase.auth.updateUser({
data: {
full_name: 'Jane Doe',
avatar_url: 'https://example.com/avatar.png'
}
})
// Change password
const { error } = await supabase.auth.updateUser({
password: 'new-secure-password'
})
// Sign out
await supabase.auth.signOut()
// Sign out from all devices (server-side with service role)
await adminClient.auth.admin.signOut(userId, 'global')

Row Level Security is a PostgreSQL feature that restricts which rows a user can access based on policies you define. When enabled, users can only see or modify rows that match their policy conditions - even if they try to bypass your application logic.
RLS policies are SQL expressions evaluated for every row. PostgreSQL checks each row against the policy before returning results or allowing modifications.
SELECT * FROM posts WHERE user_id = 'alice';

-- Without RLS: returns all posts where user_id = 'alice'
-- With RLS: returns only rows where BOTH:
--   1. user_id = 'alice'
--   2. the current user's policy allows access
Vaultbrix enforces RLS on all new tables by default. Tables without policies block all access from the anon key. This prevents accidental data exposure - you must explicitly define who can access what.
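As a quick illustration of this default-deny behavior, consider a hypothetical `notes` table (the table name here is illustrative, not part of the tutorial below):

```sql
-- RLS is enabled, but no policies are defined yet
CREATE TABLE notes (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  body TEXT
);
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;

-- Through the anon key, every SELECT now returns zero rows and every
-- INSERT/UPDATE/DELETE is rejected, until a CREATE POLICY grants access.
```

The service role key still sees everything, which is why it must stay server-side.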
Enable RLS on your table with ALTER TABLE ... ENABLE ROW LEVEL SECURITY, then create policies using CREATE POLICY. Use auth.uid() to reference the current user's ID from their JWT.
-- 1. Create a table
CREATE TABLE posts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES auth.users(id) NOT NULL,
title TEXT NOT NULL,
content TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- 2. Enable RLS (required!)
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
-- 3. Policy: Users can read their own posts
CREATE POLICY "Users can view own posts"
ON posts FOR SELECT
USING (auth.uid() = user_id);
-- 4. Policy: Users can create posts as themselves
CREATE POLICY "Users can create own posts"
ON posts FOR INSERT
WITH CHECK (auth.uid() = user_id);
-- 5. Policy: Users can update their own posts
CREATE POLICY "Users can update own posts"
ON posts FOR UPDATE
USING (auth.uid() = user_id)
WITH CHECK (auth.uid() = user_id);
-- 6. Policy: Users can delete their own posts
CREATE POLICY "Users can delete own posts"
ON posts FOR DELETE
USING (auth.uid() = user_id);

USING: Condition for SELECT, UPDATE (existing rows), DELETE
WITH CHECK: Condition for INSERT, UPDATE (new values)
auth.uid(): Returns the authenticated user's ID from the JWT
auth.jwt(): Returns the full JWT payload for custom claims
Add a tenant_id column to every table, store the tenant in user metadata or JWT claims, and create RLS policies that filter by tenant. Vaultbrix recommends using custom JWT claims set via database triggers for performance.
-- Tenants (organizations)
CREATE TABLE tenants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- User-tenant memberships
CREATE TABLE tenant_members (
tenant_id UUID REFERENCES tenants(id) ON DELETE CASCADE,
user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
role TEXT NOT NULL DEFAULT 'member',
PRIMARY KEY (tenant_id, user_id)
);
-- Tenant-scoped data
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID REFERENCES tenants(id) NOT NULL,
name TEXT NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Enable RLS
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
-- Policy: Users can only see projects in their tenants
CREATE POLICY "Tenant isolation"
ON projects FOR ALL
USING (
tenant_id IN (
SELECT tenant_id FROM tenant_members
WHERE user_id = auth.uid()
)
);

For high-traffic apps, avoid subqueries in RLS policies. Instead, set the current tenant in the JWT claims using a login hook, then reference it with (auth.jwt() ->> 'tenant_id')::uuid.
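A sketch of that JWT-claims variant, assuming a login hook has already placed a `tenant_id` claim in the token (the policy name is illustrative):

```sql
-- Replace the subquery-based policy with a direct claim comparison.
-- PostgreSQL compares against the JWT claim instead of running a
-- per-row subquery over tenant_members.
DROP POLICY IF EXISTS "Tenant isolation" ON projects;

CREATE POLICY "Tenant isolation via JWT claim"
ON projects FOR ALL
USING (tenant_id = (auth.jwt() ->> 'tenant_id')::uuid);
```

The trade-off: the claim is fixed at token issuance, so tenant switches require a token refresh.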
The service role key bypasses all RLS policies - treat it like a root password. Store it in environment variables, never in client code. Use it only in backend code for admin operations, webhooks, and cron jobs. Rotate immediately if exposed.
// lib/vaultbrix-admin.ts (server-only!)
import { createClient } from '@supabase/supabase-js'
export function createAdminClient() {
return createClient(
process.env.VAULTBRIX_URL!,
process.env.VAULTBRIX_SERVICE_ROLE_KEY!
)
}
// app/api/admin/users/[id]/route.ts
import { createAdminClient } from '@/lib/vaultbrix-admin'
import { checkAdminAuth } from '@/lib/auth'
export async function DELETE(req: Request, { params }: { params: { id: string } }) {
// Verify caller is admin FIRST
const isAdmin = await checkAdminAuth(req)
if (!isAdmin) {
return new Response('Unauthorized', { status: 401 })
}
// NOW safe to use service role
const admin = createAdminClient()
await admin.from('profiles').delete().eq('id', params.id)
await admin.auth.admin.deleteUser(params.id)
return new Response('User deleted', { status: 200 })
}

Rotate your service role key immediately if compromised, and on a regular schedule (90 days recommended).
# Via Dashboard
Dashboard → Settings → API Keys → Rotate Service Role Key
# Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/keys/rotate" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"keyType": "service_role"}'
# Old key invalidated within 60 seconds
# Update all backend services immediately

Create SQL migration files locally, then apply them via the Dashboard SQL Editor or direct database connection. Migrations are versioned SQL files tracked in your repository. Test on a database branch first before applying to production.
Vaultbrix uses local SQL files for migrations. Create migration files locally, then apply them via the Dashboard SQL Editor or API.
# Create a migrations folder in your project
mkdir -p migrations
# Create timestamped migration files
touch migrations/20260209_add_posts_table.sql
# Apply via Dashboard
Dashboard → Database → SQL Editor → paste and run
# Or use your Vaultbrix connection string directly
psql $VAULTBRIX_DATABASE_URL -f migrations/20260209_add_posts_table.sql

# Create a new migration
supabase migration new add_posts_table
# This creates: supabase/migrations/20260209123456_add_posts_table.sql
# Edit the file with your SQL:
-- supabase/migrations/20260209123456_add_posts_table.sql
CREATE TABLE posts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES auth.users(id) NOT NULL,
title TEXT NOT NULL,
content TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can CRUD own posts"
ON posts FOR ALL
USING (auth.uid() = user_id);

# Option 1: Dashboard SQL Editor (recommended)
Dashboard → Database → SQL Editor
Paste your migration SQL and click Run
# Option 2: Direct psql connection
psql $VAULTBRIX_DATABASE_URL -f migrations/20260209_add_posts_table.sql
# Option 3: Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/sql" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"query": "CREATE TABLE posts (...)"}'

Vaultbrix uses schema-based multi-tenancy for enterprise security. Your data lives in an isolated schema (tenant_yourproject) where you have full read/write access but limited DDL (ALTER TABLE) permissions. Use the Dashboard SQL Editor for schema changes.
Neon (for comparison):
Database: myapp_db
└── public (your tables)
✅ You own everything
✅ Full superuser access
✅ CREATE DATABASE allowed
✅ prisma db push works

Vaultbrix:
Database: postgres (shared)
├── public (system)
├── auth (GoTrue)
├── storage (S3)
├── tenant_abc (Customer A)
└── tenant_xyz (Your schema)
✅ Full data isolation
✅ RLS enforced
⚠️ Limited DDL access
ERROR: must be owner of table users
Solution: Use the Dashboard SQL Editor for ALTER TABLE operations

ERROR: permission denied to create database
Solution: The Prisma shadow database is blocked. Use prisma db push instead of prisma migrate

Foreign key constraint violated
Solution: Schema drift - CASCADE constraints may need fixing via the Dashboard
-- Dashboard → Database → SQL Editor
-- Runs as service_role with full permissions
ALTER TABLE users ADD COLUMN avatar TEXT;
CREATE INDEX idx_users_email ON users(email);

# Generate migration SQL without applying
npx prisma migrate diff \
--from-schema-datamodel ./prisma/schema.prisma \
--to-migrations ./prisma/migrations \
--script > migration.sql
# Then copy migration.sql content to Dashboard SQL Editor

Contact support for a service_role connection string for CI/CD pipelines.
| Feature | Neon | Vaultbrix |
|---|---|---|
| Schema | public | tenant_* |
| prisma db push | Direct | Via Dashboard |
| prisma migrate | Direct | Generate SQL only |
| CREATE DATABASE | Allowed | Blocked |
| Multi-tenancy | Manual | Built-in |
| RLS | Optional | Enforced |
| SOC 2 / GDPR | Your job | Included |
Recommended: prisma db push for development, Dashboard SQL Editor for production

Database branching creates isolated copies of your schema and data for development, preview deployments, and testing. Branches are fast (copy-on-write), cheap, and disposable. Merge migrations back to production when ready.
# Create a branch via Dashboard
Dashboard → Database → Branches → Create Branch
# Or via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/branches" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{"name": "feature-auth"}'
# Branch gets its own:
# - Full database copy (schema + data)
# - Unique connection string
# - Separate API keys
# Make changes safely on the branch
# Use the branch connection string in your dev environment
# When ready, merge via Dashboard
Dashboard → Branches → feature-auth → Merge to Production
# Clean up via Dashboard or API
Dashboard → Branches → feature-auth → Delete

Database branching is available on the Pro tier and above. Branches are billed hourly while active. Free tier projects can use local development with supabase start.
PITR (Point-in-Time Recovery) lets you restore your database to any second within the retention period. It uses continuous WAL archiving, not periodic snapshots. Daily backups are automatic on all plans; PITR retention varies by tier (7-30 days).
| Type | Frequency | Granularity |
|---|---|---|
| Daily Snapshot | Every 24 hours | Full database |
| PITR (WAL) | Continuous | 1-second precision |
PITR retention by plan tier: daily backups only, 7 days, 14 days, or 30 days.
# Via Dashboard (recommended)
Dashboard → Database → Backups → Point-in-Time Recovery
# Select a restore point from the timeline
# Choose to restore in-place or to a new branch
# Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/restore" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"targetTime": "2026-02-09T14:30:00Z",
    "toBranch": "recovery-test"
  }'  # toBranch is optional: restore to a new branch instead of in-place

All backups are stored in Swiss datacenters (Exoscale SOS, Geneva). WAL archives are encrypted with AES-256 at rest. Backup data never leaves Swiss jurisdiction, ensuring LPD compliance.
Export your Supabase database with pg_dump, create a Vaultbrix project, import with psql, migrate Storage files, and update your environment variables. Your application code requires zero changes - just swap the URL.
# Get your Supabase connection string from Settings > Database
pg_dump "postgresql://postgres:PASSWORD@db.xxxxx.supabase.co:5432/postgres" \
--clean --if-exists \
--exclude-schema=_supabase \
--exclude-schema=supabase_migrations \
> supabase_export.sql

Create a new project at app.vaultbrix.com, select the Swiss region (CH-GVA-2), and note your connection details.
psql "postgresql://postgres:PASSWORD@db.YOUR_PROJECT.vaultbrix.com:5432/postgres" \
< supabase_export.sql

# Before (Supabase)
NEXT_PUBLIC_SUPABASE_URL=https://xxxxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...
# After (Vaultbrix) - same variable names work!
NEXT_PUBLIC_SUPABASE_URL=https://YOUR_PROJECT.vaultbrix.com
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...

MCP (Model Context Protocol) is a standardized way for AI assistants to interact with your database. Vaultbrix's MCP server lets Claude Code, Cursor, and other AI tools understand your schema, run queries, and manage your database - all governed by policies you control.
AI Assistant (Claude/Cursor)
│
▼
┌─────────────────────────────┐
│ Vaultbrix MCP Server │
│ ───────────────────────── │
│ • Schema introspection │
│ • Query execution │
│ • Context compression │
│ • Policy enforcement │
│ • Audit logging │
└─────────────────────────────┘
│
▼
Your Database (RLS enforced)

rlm_context_query: Full documentation/schema query
rlm_ask: Quick query with predictable tokens
rlm_remember: Store learnings and decisions
rlm_recall: Retrieve relevant memories
Agent Context Operations are metered API calls that allow AI agents to query and understand your database schema through the MCP server. Each operation counts against your plan's monthly quota.
Billable operations: rlm_context_query (full schema/doc query), rlm_ask (quick query), rlm_search (pattern search), and rlm_multi_query (batch queries, counts as multiple).

Monthly quotas by plan tier: 50, 2,000, 10,000, 50,000, or unlimited operations.
Persistent Agent Memory lets AI agents store and recall project-specific knowledge across sessions. This includes decisions, conventions, learned patterns, and context that persists between conversations.
Decisions: architectural choices, tech decisions
Conventions: project conventions, patterns
Learnings: lessons learned, best practices
// AI stores a decision
rlm_remember({
type: "decision",
content: "Using Zustand for state management instead of Redux.
Reasoning: simpler API, smaller bundle, sufficient for our needs.",
ttl_days: 90
})
// Later session - AI recalls relevant memories
rlm_recall({ query: "state management" })
// Returns: "Using Zustand for state management..."
// AI now knows to use Zustand without asking again

Stored memories by plan tier: 20, 200, 1,000, or unlimited on the top two tiers.
Add the Vaultbrix MCP server to your IDE's configuration file. For Claude Code, edit ~/.claude/mcp.json. For Cursor, edit .cursor/mcp.json in your project. Restart the IDE to connect.
Edit ~/.claude/mcp.json:
{
"mcpServers": {
"vaultbrix": {
"command": "npx",
"args": [
"-y",
"@vaultbrix/mcp-server",
"--project", "YOUR_PROJECT_SLUG"
]
}
}
}

Create .cursor/mcp.json in your project root:
{
"mcpServers": {
"vaultbrix": {
"command": "npx",
"args": [
"-y",
"@vaultbrix/mcp-server",
"--project", "YOUR_PROJECT_SLUG"
]
}
}
}

After restarting your IDE, ask the AI assistant:
"What tables exist in my database?"
The AI should list your actual tables from Vaultbrix.
All Vaultbrix data is stored exclusively in Swiss datacenters, subject to Swiss Federal Data Protection Act (LPD) and GDPR. Data is not subject to US CLOUD Act. Encryption at rest (AES-256), encryption in transit (TLS 1.3), and audit logging ensure compliance.
Region: CH-GVA-2 (Geneva, Switzerland)
Infrastructure provider: Exoscale (Swiss company)
Encryption at rest: AES-256
Encryption in transit: TLS 1.3
Backup storage: Exoscale SOS (Swiss S3)
Data residency: Swiss territory
GDPR: EU data protection
LPD: Federal Data Protection Act
Certifications: infrastructure provider certified
Vaultbrix supports all GDPR rights through the dashboard and API:
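As an illustration, a data-export request (GDPR right of access) might look like the following - the endpoint path and payload fields here are assumptions for the sketch, not a documented API; consult the API reference for the actual shape:

```
# Hypothetical example - endpoint and fields are illustrative assumptions
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/gdpr/export" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"userId": "USER_UUID"}'
```

Deletion (right to erasure) follows the same pattern via the admin API shown earlier: auth.admin.deleteUser() plus cascading deletes on your own tables.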