Q&A Documentation

Complete Vaultbrix Documentation

Everything you need to know about Swiss-hosted, AI-native databases. From getting started to advanced MCP integration.

Getting Started

What is Vaultbrix and how does it compare to Supabase?

Short Answer

Vaultbrix is a Supabase-compatible database platform hosted exclusively in Switzerland. It provides the same APIs (PostgREST, GoTrue, Realtime, Storage) with added AI context features through the Snipara engine. Your existing Supabase client code works unchanged.

Supabase-Compatible Behavior

Vaultbrix implements the complete Supabase stack:

PostgREST - Auto-generated REST API from your schema
GoTrue - Authentication with JWT, OAuth, Magic Links
Realtime - WebSocket subscriptions and presence
Storage - S3-compatible object storage with RLS

Vaultbrix-Specific Features

Swiss Data Residency - All data stored in Geneva (CH-GVA-2), subject to Swiss LPD, not US CLOUD Act
Snipara Context Engine - 95% schema compression for AI tools, anti-hallucination, persistent memory
MCP Server - Native integration with Claude Code, Cursor, and other AI assistants
Governed AI Access - All AI operations metered, audited, and controllable via policies

Example: Same Code, Different Host

// This code works identically on Supabase and Vaultbrix
import { createClient } from '@supabase/supabase-js'

// Supabase
const supabase = createClient(
  'https://abc123.supabase.co',
  'your-anon-key'
)

// Vaultbrix - just change the URL
const vaultbrix = createClient(
  'https://abc123.vaultbrix.com',
  'your-anon-key'
)

// All operations work the same
const { data, error } = await vaultbrix
  .from('users')
  .select('*')
  .eq('active', true)

How do I create my first Vaultbrix project?

Short Answer

Sign up at app.vaultbrix.com, click "New Project," select the Swiss region (CH-GVA-2), and your PostgreSQL database with all Supabase services will be ready within 60 seconds.

Step-by-Step Process

1. Create Account

Visit app.vaultbrix.com and sign up using Magic Link or GitHub OAuth.

2. Click "New Project"

From your dashboard, click the "New Project" button. Enter a project name (lowercase, alphanumeric, hyphens allowed).

3. Select Region

Currently available:

  • CH-GVA-2 - Geneva, Switzerland (GDPR + LPD compliant)

DE-FRA-1 (Frankfurt) is planned for Q2 2026.

4. Set Database Password

Create a strong password for your PostgreSQL superuser. Store this securely - it cannot be recovered, only reset.

5. Project Ready

Within 60 seconds, your project is provisioned with:

  • PostgreSQL 17 database with pgvector enabled
  • PostgREST API endpoint
  • GoTrue authentication
  • Realtime WebSocket server
  • Storage bucket system
  • Snipara context engine (auto-provisioned)

Your Connection Details

# Project URL
https://YOUR_PROJECT_REF.vaultbrix.com

# API Keys (Settings > API)
ANON_KEY=eyJhbGciOiJIUzI1NiIs...
SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIs...

# Direct Database Connection
postgresql://postgres:YOUR_PASSWORD@db.YOUR_PROJECT_REF.vaultbrix.com:5432/postgres

# Pooled Connection (recommended for serverless)
postgresql://postgres:YOUR_PASSWORD@db.YOUR_PROJECT_REF.vaultbrix.com:6543/postgres

What are the API keys and when do I use each one?

Short Answer

Use the anon key in client-side code (browsers, mobile apps) - it respects Row Level Security. Use the service role key only in secure backend environments - it bypasses all RLS policies.

Key Comparison

Key              | Use In                                    | RLS      | Risk Level
-----------------|-------------------------------------------|----------|---------------
anon key         | Browsers, mobile apps, public code        | Enforced | Safe to expose
service role key | Backend servers, cron jobs, admin scripts | Bypassed | Never expose

Correct Usage

// CLIENT-SIDE (React, Next.js pages, mobile)
// Use ANON key - safe to expose
const supabase = createClient(
  process.env.NEXT_PUBLIC_VAULTBRIX_URL,
  process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY // Public env var
)

// SERVER-SIDE (API routes, cron jobs, admin scripts)
// Use SERVICE ROLE key - never expose
const adminClient = createClient(
  process.env.VAULTBRIX_URL,
  process.env.VAULTBRIX_SERVICE_ROLE_KEY // Private env var
)

Security Warning

Never use the service role key in client-side code. It grants full database access, bypassing all security policies. If compromised, rotate it immediately in Settings → API Keys.
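One way to make this warning hard to violate is a small runtime guard in the module that holds the key. This is an illustrative sketch, not part of any SDK - `assertServerOnly` is a made-up helper:

```typescript
// Illustrative guard: fail fast if privileged code ever runs in a browser.
// Put this at the top of any module that reads the service role key.
function assertServerOnly(moduleName: string): void {
  // Browser bundles define globalThis.window; Node servers do not
  if (typeof (globalThis as any).window !== 'undefined') {
    throw new Error(`${moduleName} must never be imported in client-side code`)
  }
}

assertServerOnly('vaultbrix-admin') // no-op on a server

// Only read the private env var after the guard passes
const serviceRoleKey = process.env.VAULTBRIX_SERVICE_ROLE_KEY
```

Importing such a module from client-side code then fails loudly at runtime instead of silently shipping the key.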

How do I connect supabase-js to my Vaultbrix project?

Short Answer

Install @supabase/supabase-js, then call createClient() with your Vaultbrix URL and anon key. The client is fully compatible - no code changes required beyond the URL.

Installation

# npm
npm install @supabase/supabase-js

# pnpm
pnpm add @supabase/supabase-js

# yarn
yarn add @supabase/supabase-js

Basic Setup

// lib/vaultbrix.ts
import { createClient } from '@supabase/supabase-js'

const vaultbrixUrl = 'https://YOUR_PROJECT_REF.vaultbrix.com'
const vaultbrixAnonKey = 'YOUR_ANON_KEY'

export const supabase = createClient(vaultbrixUrl, vaultbrixAnonKey)

// Usage
const { data, error } = await supabase
  .from('posts')
  .select('id, title, author:users(name)')
  .order('created_at', { ascending: false })
  .limit(10)

TypeScript Support

# Generate types from your schema
npx supabase gen types typescript \
  --project-id YOUR_PROJECT_REF \
  --schema public > types/database.ts

// Use typed client
import { createClient } from '@supabase/supabase-js'
import type { Database } from './types/database'

export const supabase = createClient<Database>(
  process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
  process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!
)

// Now you get full autocomplete
const { data } = await supabase
  .from('posts')  // ← Autocompletes table names
  .select('*')    // ← Knows column types

How do I integrate Vaultbrix with Next.js?

Short Answer

Use @supabase/ssr for Server Components and Server Actions. Create separate browser and server clients to handle cookies properly. The setup is identical to Supabase - just change the URL.

Installation

npm install @supabase/supabase-js @supabase/ssr

Environment Variables

# .env.local
NEXT_PUBLIC_VAULTBRIX_URL=https://YOUR_PROJECT_REF.vaultbrix.com
NEXT_PUBLIC_VAULTBRIX_ANON_KEY=eyJhbGciOiJIUzI1NiIs...
VAULTBRIX_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIs...

Browser Client (Client Components)

// lib/vaultbrix/client.ts
import { createBrowserClient } from '@supabase/ssr'

export function createClient() {
  return createBrowserClient(
    process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
    process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!
  )
}

Server Client (Server Components, Actions)

// lib/vaultbrix/server.ts
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'

export async function createClient() {
  const cookieStore = await cookies()

  return createServerClient(
    process.env.NEXT_PUBLIC_VAULTBRIX_URL!,
    process.env.NEXT_PUBLIC_VAULTBRIX_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return cookieStore.getAll()
        },
        setAll(cookiesToSet) {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            )
          } catch {
            // Called from Server Component - ignore
          }
        },
      },
    }
  )
}

Usage in Server Component

// app/posts/page.tsx
import { createClient } from '@/lib/vaultbrix/server'

export default async function PostsPage() {
  const supabase = await createClient()

  const { data: posts } = await supabase
    .from('posts')
    .select('*')
    .order('created_at', { ascending: false })

  return (
    <ul>
      {posts?.map(post => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}

Authentication & Row Level Security

How does GoTrue authentication work in Vaultbrix?

Short Answer

GoTrue is the authentication server that handles sign-up, sign-in, OAuth, Magic Links, and JWT issuance. It's identical to Supabase Auth - your existing auth code works unchanged. JWTs contain user claims that integrate directly with PostgreSQL RLS policies.

Supported Authentication Methods

Magic Link - Passwordless email login via Resend
GitHub OAuth - Sign in with your GitHub account

Authentication Examples

// Magic Link (passwordless) - recommended
const { error } = await supabase.auth.signInWithOtp({
  email: 'user@example.com',
  options: {
    emailRedirectTo: 'https://yourapp.com/auth/callback'
  }
})

// GitHub OAuth
const { error } = await supabase.auth.signInWithOAuth({
  provider: 'github',
  options: {
    redirectTo: 'https://yourapp.com/auth/callback'
  }
})

// Get current user
const { data: { user } } = await supabase.auth.getUser()

// Sign out
await supabase.auth.signOut()

Vaultbrix-Specific: Swiss Data Storage

All authentication data (user records, sessions, tokens) is stored in Swiss datacenters. Password hashes use bcrypt with configurable work factor. Session JWTs are signed with keys stored exclusively in Switzerland, not subject to US CLOUD Act.

How do I manage users and sessions?

Short Answer

Use supabase.auth.getUser() to get the current authenticated user and supabase.auth.getSession() for the JWT session. Listen to onAuthStateChange for real-time auth events.

Session Management

// Get current user (validates JWT with server)
const { data: { user }, error } = await supabase.auth.getUser()

if (user) {
  console.log('User ID:', user.id)
  console.log('Email:', user.email)
  console.log('Metadata:', user.user_metadata)
}

// Get session (local, doesn't call server)
const { data: { session } } = await supabase.auth.getSession()

if (session) {
  console.log('Access Token:', session.access_token)
  console.log('Expires at:', new Date(session.expires_at * 1000))
}

// Listen for auth changes
supabase.auth.onAuthStateChange((event, session) => {
  switch (event) {
    case 'SIGNED_IN':
      console.log('User signed in:', session?.user.email)
      break
    case 'SIGNED_OUT':
      console.log('User signed out')
      break
    case 'TOKEN_REFRESHED':
      console.log('Token refreshed')
      break
  }
})

User Management

// Update user metadata
const { data, error } = await supabase.auth.updateUser({
  data: {
    full_name: 'Jane Doe',
    avatar_url: 'https://example.com/avatar.png'
  }
})

// Change password
const { error } = await supabase.auth.updateUser({
  password: 'new-secure-password'
})

// Sign out
await supabase.auth.signOut()

// Sign out from all devices (server-side with service role)
await adminClient.auth.admin.signOut(userId, 'global')

What is Row Level Security (RLS)?

Short Answer

Row Level Security is a PostgreSQL feature that restricts which rows a user can access based on policies you define. When enabled, users can only see or modify rows that match their policy conditions - even if they try to bypass your application logic.

How RLS Works

RLS policies are SQL expressions evaluated for every row. PostgreSQL checks each row against the policy before returning results or allowing modifications.

SELECT * FROM posts WHERE user_id = 'alice';

-- Without RLS: Returns all posts where user_id = 'alice'
-- With RLS: Returns only rows where BOTH:
--   1. user_id = 'alice'
--   2. The current user's policy allows access

Why Use RLS?

Defense in depth - Security at the database level, not just application
Can't be bypassed - Even direct SQL access respects policies
Declarative - Security rules are explicit SQL, easy to audit
Required for anon key - Client-side access depends entirely on RLS

Vaultbrix Requirement

Vaultbrix enforces RLS on all new tables by default. Tables without policies block all access from the anon key. This prevents accidental data exposure - you must explicitly define who can access what.
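To make the default-deny behavior concrete, here is a sketch (table and policy names are examples): a new table returns no rows to anon-key clients until a policy explicitly grants access.

```sql
-- New table: with RLS enforced by default, anon-key reads return zero rows
CREATE TABLE articles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title TEXT NOT NULL,
  published BOOLEAN DEFAULT false
);

-- Explicitly allow anonymous reads of published articles only
CREATE POLICY "Public can read published articles"
  ON articles FOR SELECT
  TO anon
  USING (published = true);
```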

How do I write my first RLS policy?

Short Answer

Enable RLS on your table with ALTER TABLE ... ENABLE ROW LEVEL SECURITY, then create policies using CREATE POLICY. Use auth.uid() to reference the current user's ID from their JWT.

Basic User-Owned Data Policy

-- 1. Create a table
CREATE TABLE posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  title TEXT NOT NULL,
  content TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- 2. Enable RLS (required!)
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

-- 3. Policy: Users can read their own posts
CREATE POLICY "Users can view own posts"
  ON posts FOR SELECT
  USING (auth.uid() = user_id);

-- 4. Policy: Users can create posts as themselves
CREATE POLICY "Users can create own posts"
  ON posts FOR INSERT
  WITH CHECK (auth.uid() = user_id);

-- 5. Policy: Users can update their own posts
CREATE POLICY "Users can update own posts"
  ON posts FOR UPDATE
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);

-- 6. Policy: Users can delete their own posts
CREATE POLICY "Users can delete own posts"
  ON posts FOR DELETE
  USING (auth.uid() = user_id);

Key Concepts

USING - Condition for SELECT, UPDATE (existing rows), DELETE
WITH CHECK - Condition for INSERT, UPDATE (new values)
auth.uid() - Returns the authenticated user's ID from the JWT
auth.jwt() - Returns the full JWT payload for custom claims
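For illustration, auth.uid() and auth.jwt() read claims from the decoded payload segment of the access token. A hedged sketch of what that decoding looks like (the token here is fabricated; real tokens are signed and verified server-side):

```typescript
// Decode a JWT's payload segment (no signature verification - illustration only).
// auth.uid() in a policy reads the "sub" claim from this payload.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const payloadB64 = token.split('.')[1]
  // base64url -> base64, then decode to JSON
  const json = Buffer.from(
    payloadB64.replace(/-/g, '+').replace(/_/g, '/'),
    'base64'
  ).toString('utf8')
  return JSON.parse(json)
}

// Fabricated token with the claims a real session token carries
const payload = Buffer.from(
  JSON.stringify({ sub: 'e3b0c442-user-uuid', role: 'authenticated' })
).toString('base64url')
const fakeToken = `header.${payload}.signature`

console.log(decodeJwtPayload(fakeToken).sub) // -> 'e3b0c442-user-uuid'
```

The `sub` claim is what auth.uid() returns, which is why policies like `auth.uid() = user_id` work.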

How do I build multi-tenant SaaS with RLS?

Short Answer

Add a tenant_id column to every table, store the tenant in user metadata or JWT claims, and create RLS policies that filter by tenant. Vaultbrix recommends using custom JWT claims set via database triggers for performance.

Multi-Tenant Schema Pattern

-- Tenants (organizations)
CREATE TABLE tenants (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  slug TEXT UNIQUE NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- User-tenant memberships
CREATE TABLE tenant_members (
  tenant_id UUID REFERENCES tenants(id) ON DELETE CASCADE,
  user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
  role TEXT NOT NULL DEFAULT 'member',
  PRIMARY KEY (tenant_id, user_id)
);

-- Tenant-scoped data
CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID REFERENCES tenants(id) NOT NULL,
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Enable RLS
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

-- Policy: Users can only see projects in their tenants
CREATE POLICY "Tenant isolation"
  ON projects FOR ALL
  USING (
    tenant_id IN (
      SELECT tenant_id FROM tenant_members
      WHERE user_id = auth.uid()
    )
  );

Performance Tip

For high-traffic apps, avoid subqueries in RLS policies. Instead, set the current tenant in the JWT claims using a login hook, then reference it with (auth.jwt() ->> 'tenant_id')::uuid.
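Under that approach, the subquery policy above becomes a plain claim comparison. A sketch, assuming a login hook has already placed tenant_id in the JWT:

```sql
-- Claim-based tenant policy: no per-row subquery
-- (assumes a login hook sets a 'tenant_id' claim in the JWT)
DROP POLICY IF EXISTS "Tenant isolation" ON projects;

CREATE POLICY "Tenant isolation (claim-based)"
  ON projects FOR ALL
  USING (tenant_id = (auth.jwt() ->> 'tenant_id')::uuid);
```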

Operations & Migrations

How do I use service role keys safely?

Short Answer

The service role key bypasses all RLS policies - treat it like a root password. Store it in environment variables, never in client code. Use it only in backend code for admin operations, webhooks, and cron jobs. Rotate immediately if exposed.

Valid Use Cases

Admin dashboards - Need to see all data across users
Background jobs - Operate without user context (cron, queues)
Webhooks - Process events from external systems (Stripe, etc.)
Data migrations - Bulk operations across tables

Never Do This

  • Use in client-side JavaScript (browsers)
  • Embed in mobile apps
  • Commit to Git repositories
  • Share via Slack/email
  • Log the full key value

Secure Usage Pattern

// lib/vaultbrix-admin.ts (server-only!)
import { createClient } from '@supabase/supabase-js'

export function createAdminClient() {
  return createClient(
    process.env.VAULTBRIX_URL!,
    process.env.VAULTBRIX_SERVICE_ROLE_KEY!
  )
}

// app/api/admin/users/[id]/route.ts
import { createAdminClient } from '@/lib/vaultbrix-admin'
import { checkAdminAuth } from '@/lib/auth'

export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  // Verify caller is admin FIRST
  const isAdmin = await checkAdminAuth(req)
  if (!isAdmin) {
    return new Response('Unauthorized', { status: 401 })
  }

  // NOW safe to use service role
  const admin = createAdminClient()
  await admin.from('profiles').delete().eq('id', params.id)
  await admin.auth.admin.deleteUser(params.id)

  return new Response('User deleted', { status: 200 })
}

Key Rotation

Rotate your service role key immediately if compromised, and on a regular schedule (90 days recommended).

# Via Dashboard
Dashboard → Settings → API Keys → Rotate Service Role Key

# Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/keys/rotate" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"keyType": "service_role"}'

# Old key invalidated within 60 seconds
# Update all backend services immediately

How do I run database migrations?

Short Answer

Create SQL migration files locally, then apply them via the Dashboard SQL Editor or direct database connection. Migrations are versioned SQL files tracked in your repository. Test on a database branch first before applying to production.

Migration Approach

Vaultbrix uses local SQL files for migrations. Create migration files locally, then apply them via the Dashboard SQL Editor or API.

# Create a migrations folder in your project
mkdir -p migrations

# Create timestamped migration files
touch migrations/20260209_add_posts_table.sql

# Apply via Dashboard
Dashboard → Database → SQL Editor → paste and run

# Or use your Vaultbrix connection string directly
psql $VAULTBRIX_DATABASE_URL -f migrations/20260209_add_posts_table.sql

Creating Migrations

# Create a new migration
supabase migration new add_posts_table

# This creates: supabase/migrations/20260209123456_add_posts_table.sql
# Edit the file with your SQL:

-- supabase/migrations/20260209123456_add_posts_table.sql
CREATE TABLE posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  title TEXT NOT NULL,
  content TEXT,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

ALTER TABLE posts ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can CRUD own posts"
  ON posts FOR ALL
  USING (auth.uid() = user_id);

Applying Migrations

# Option 1: Dashboard SQL Editor (recommended)
Dashboard → Database → SQL Editor
Paste your migration SQL and click Run

# Option 2: Direct psql connection
psql $VAULTBRIX_DATABASE_URL -f migrations/20260209_add_posts_table.sql

# Option 3: Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/sql" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"query": "CREATE TABLE posts (...)"}'

Best Practices

  • Always test migrations on a branch first
  • Make migrations idempotent where possible
  • Include RLS policies in the same migration as the table
  • Never modify applied migrations - create new ones
  • Back up before major schema changes
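To illustrate the idempotency point, the posts migration above can be written so it is safe to run twice (a sketch, not required syntax):

```sql
-- Idempotent version: safe to re-run if partially applied
CREATE TABLE IF NOT EXISTS posts (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID REFERENCES auth.users(id) NOT NULL,
  title TEXT NOT NULL
);

ALTER TABLE posts ENABLE ROW LEVEL SECURITY;  -- no-op if already enabled

DROP POLICY IF EXISTS "Users can CRUD own posts" ON posts;
CREATE POLICY "Users can CRUD own posts"
  ON posts FOR ALL
  USING (auth.uid() = user_id);
```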

Why can't I use Prisma like on Neon?

Key Difference

Vaultbrix uses schema-based multi-tenancy for enterprise security. Your data lives in an isolated schema (tenant_yourproject) where you have full read/write access but limited DDL (ALTER TABLE) permissions. Use the Dashboard SQL Editor for schema changes.

Architecture Comparison

Neon / Standard PostgreSQL

Database: myapp_db
└── public (your tables)

✅ You own everything
✅ Full superuser access
✅ CREATE DATABASE allowed
✅ prisma db push works

Vaultbrix (Multi-Tenant)

Database: postgres (shared)
├── public (system)
├── auth (GoTrue)
├── storage (S3)
├── tenant_abc (Customer A)
└── tenant_xyz (Your schema)

✅ Full data isolation
✅ RLS enforced
⚠️ Limited DDL access

Common Prisma Errors

ERROR: must be owner of table users

Solution: Use Dashboard SQL Editor for ALTER TABLE operations

ERROR: permission denied to create database

Solution: Prisma shadow database is blocked. Use prisma db push instead of prisma migrate

Foreign key constraint violated

Solution: Schema drift - CASCADE constraints may need fixing via Dashboard

How to Run Schema Changes

Option A: Dashboard SQL Editor (Recommended)

-- Dashboard → Database → SQL Editor
-- Runs as service_role with full permissions

ALTER TABLE users ADD COLUMN avatar TEXT;
CREATE INDEX idx_users_email ON users(email);

Option B: Generate SQL from Prisma

# Generate migration SQL without applying
npx prisma migrate diff \
  --from-schema-datamodel ./prisma/schema.prisma \
  --to-migrations ./prisma/migrations \
  --script > migration.sql

# Then copy migration.sql content to Dashboard SQL Editor

Option C: Request Elevated Access

Contact support for a service_role connection string for CI/CD pipelines.

Feature Comparison

Feature         | Neon     | Vaultbrix
----------------|----------|------------------
Schema          | public   | tenant_*
prisma db push  | Direct   | Via Dashboard
prisma migrate  | Direct   | Generate SQL only
CREATE DATABASE | Allowed  | Blocked
Multi-tenancy   | Manual   | Built-in
RLS             | Optional | Enforced
SOC 2 / GDPR    | Your job | Included

Best Practices for Prisma on Vaultbrix

  • Use prisma db push for development, Dashboard for production
  • Generate SQL migrations locally, apply via Dashboard SQL Editor
  • Test schema changes on a database branch first
  • Keep your Prisma schema as source of truth, but apply changes manually
  • Request elevated CI/CD credentials for automated deployments

How does database branching work?

Short Answer

Database branching creates isolated copies of your schema and data for development, preview deployments, and testing. Branches are fast (copy-on-write), cheap, and disposable. Merge migrations back to production when ready.

Branching Workflow

# Create a branch via Dashboard
Dashboard → Database → Branches → Create Branch

# Or via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/branches" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"name": "feature-auth"}'

# Branch gets its own:
# - Full database copy (schema + data)
# - Unique connection string
# - Separate API keys

# Make changes safely on the branch
# Use the branch connection string in your dev environment

# When ready, merge via Dashboard
Dashboard → Branches → feature-auth → Merge to Production

# Clean up via Dashboard or API
Dashboard → Branches → feature-auth → Delete

Common Use Cases

Feature development - Test schema changes before production
Preview deployments - Each PR gets its own database
CI/CD testing - Run tests against realistic data
Staging environments - Long-lived branch for QA

Availability

Database branching is available on Pro tier and above. Branches are billed hourly while active. Free tier projects can use local development with supabase start.

How do PITR and backups work?

Short Answer

PITR (Point-in-Time Recovery) lets you restore your database to any second within the retention period. It uses continuous WAL archiving, not periodic snapshots. Daily backups are automatic on all plans; PITR retention varies by tier (7-30 days).

Backup Types

Type           | Frequency      | Granularity
---------------|----------------|--------------------
Daily Snapshot | Every 24 hours | Full database
PITR (WAL)     | Continuous     | 1-second precision

PITR Retention by Plan

Free - Daily snapshots only
Starter - 7 days
Pro - 14 days
Business - 30 days

Restoring from Backup

# Via Dashboard (recommended)
Dashboard → Database → Backups → Point-in-Time Recovery

# Select a restore point from the timeline
# Choose to restore in-place or to a new branch

# Via API
curl -X POST "https://api.vaultbrix.com/v1/projects/{id}/restore" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "targetTime": "2026-02-09T14:30:00Z",
    "toBranch": "recovery-test"  // optional: restore to branch
  }'
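The retention rules can be sketched as a small check; the figures come from the table above, and the helper itself is hypothetical, not an SDK function:

```typescript
// PITR retention per plan, in days (Free has daily snapshots only)
const PITR_RETENTION_DAYS: Record<string, number> = {
  free: 0,
  starter: 7,
  pro: 14,
  business: 30,
}

// Can `target` be reached via PITR on this plan, relative to `now`?
function canRestoreTo(plan: string, target: Date, now: Date = new Date()): boolean {
  const days = PITR_RETENTION_DAYS[plan] ?? 0
  if (days === 0) return false // daily snapshots only, no PITR window
  const windowMs = days * 24 * 60 * 60 * 1000
  return target <= now && now.getTime() - target.getTime() <= windowMs
}

const now = new Date('2026-02-09T14:30:00Z')
console.log(canRestoreTo('pro', new Date('2026-02-01T00:00:00Z'), now))     // ~8.6 days back -> true
console.log(canRestoreTo('starter', new Date('2026-02-01T00:00:00Z'), now)) // beyond 7 days -> false
```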

Swiss Data Residency

All backups are stored in Swiss datacenters (Exoscale SOS, Geneva). WAL archives are encrypted with AES-256 at rest. Backup data never leaves Swiss jurisdiction, ensuring LPD compliance.

How do I migrate from Supabase to Vaultbrix?

Short Answer

Export your Supabase database with pg_dump, create a Vaultbrix project, import with psql, migrate Storage files, and update your environment variables. Your application code requires zero changes - just swap the URL.

Migration Steps

1. Export from Supabase

# Get your Supabase connection string from Settings > Database
pg_dump "postgresql://postgres:PASSWORD@db.xxxxx.supabase.co:5432/postgres" \
  --clean --if-exists \
  --exclude-schema=_supabase \
  --exclude-schema=supabase_migrations \
  > supabase_export.sql

2. Create Vaultbrix Project

Create a new project at app.vaultbrix.com, select Swiss region (CH-GVA-2), and note your connection details.

3. Import to Vaultbrix

psql "postgresql://postgres:PASSWORD@db.YOUR_PROJECT.vaultbrix.com:5432/postgres" \
  < supabase_export.sql

4. Update Environment Variables

# Before (Supabase)
NEXT_PUBLIC_SUPABASE_URL=https://xxxxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...

# After (Vaultbrix) - same variable names work!
NEXT_PUBLIC_SUPABASE_URL=https://YOUR_PROJECT.vaultbrix.com
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbG...

What Migrates Automatically

  • Tables, views, functions
  • RLS policies
  • Indexes and constraints
  • User data (auth.users)

What Requires Manual Steps

  • Storage files (manual copy)
  • Edge Functions (redeploy)
  • OAuth providers (reconfigure)

AI Features & Compliance

What is the MCP server and how does it work? (Unique to Vaultbrix)

Short Answer

MCP (Model Context Protocol) is a standardized way for AI assistants to interact with your database. Vaultbrix's MCP server lets Claude Code, Cursor, and other AI tools understand your schema, run queries, and manage your database - all governed by policies you control.

How MCP Works

AI Assistant (Claude/Cursor)
        │
        ▼
┌─────────────────────────────┐
│     Vaultbrix MCP Server    │
│  ─────────────────────────  │
│  • Schema introspection     │
│  • Query execution          │
│  • Context compression      │
│  • Policy enforcement       │
│  • Audit logging            │
└─────────────────────────────┘
        │
        ▼
   Your Database (RLS enforced)

Key Features

95% Context Compression - Snipara compresses 500K tokens to 5-15K
Anti-Hallucination - Source-tagged facts prevent made-up data
RLS Enforcement - AI respects your security policies
Audit Logging - Every AI operation is logged (Business+)

MCP Tools Available to AI

rlm_context_query - Full documentation/schema query
rlm_ask - Quick query with predictable tokens
rlm_remember - Store learnings and decisions
rlm_recall - Retrieve relevant memories

What are Agent Context Operations?

Short Answer

Agent Context Operations are metered API calls that allow AI agents to query and understand your database schema through the MCP server. Each operation counts against your plan's monthly quota.

What Counts as an Operation

rlm_context_query - Full schema/doc query
rlm_ask - Quick query
rlm_search - Pattern search
rlm_multi_query - Batch queries (counts as multiple)
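How calls add up against the quota can be sketched as follows. The weighting is illustrative: the assumption here is that each query in an rlm_multi_query batch counts as one operation.

```typescript
// Tool call -> number of metered operations consumed (assumed weighting)
type ToolCall = { tool: string; batchSize?: number }

function countOperations(calls: ToolCall[]): number {
  return calls.reduce((total, call) => {
    if (call.tool === 'rlm_multi_query') {
      // Batch queries count as multiple - assumed one per batched query
      return total + (call.batchSize ?? 1)
    }
    return total + 1
  }, 0)
}

const used = countOperations([
  { tool: 'rlm_ask' },
  { tool: 'rlm_search' },
  { tool: 'rlm_multi_query', batchSize: 3 },
])
console.log(used) // -> 5
```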

Monthly Quotas by Plan

Free - 50
Starter - 2,000
Pro - 10,000
Business - 50,000
Enterprise - Unlimited

What is Persistent Agent Memory?

Short Answer

Persistent Agent Memory lets AI agents store and recall project-specific knowledge across sessions. This includes decisions, conventions, learned patterns, and context that persists between conversations.

Memory Types

decision - Architectural choices, tech decisions
context - Project conventions, patterns
learning - Lessons learned, best practices

How AI Uses Memory

// AI stores a decision
rlm_remember({
  type: "decision",
  content: "Using Zustand for state management instead of Redux. " +
    "Reasoning: simpler API, smaller bundle, sufficient for our needs.",
  ttl_days: 90
})

// Later session - AI recalls relevant memories
rlm_recall({ query: "state management" })
// Returns: "Using Zustand for state management..."

// AI now knows to use Zustand without asking again

Memory Slots by Plan

Free - 20
Starter - 200
Pro - 1,000
Business - Unlimited
Enterprise - Unlimited

How do I set up MCP for Cursor or Claude Code?

Short Answer

Add the Vaultbrix MCP server to your IDE's configuration file. For Claude Code, edit ~/.claude/mcp.json. For Cursor, edit .cursor/mcp.json in your project. Restart the IDE to connect.

Claude Code Setup

Edit ~/.claude/mcp.json:

{
  "mcpServers": {
    "vaultbrix": {
      "command": "npx",
      "args": [
        "-y",
        "@vaultbrix/mcp-server",
        "--project", "YOUR_PROJECT_SLUG"
      ]
    }
  }
}

Cursor Setup

Create .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "vaultbrix": {
      "command": "npx",
      "args": [
        "-y",
        "@vaultbrix/mcp-server",
        "--project", "YOUR_PROJECT_SLUG"
      ]
    }
  }
}

Verify Connection

After restarting your IDE, ask the AI assistant:

"What tables exist in my database?"

The AI should list your actual tables from Vaultbrix.

How does Swiss data residency and compliance work?

Short Answer

All Vaultbrix data is stored exclusively in Swiss datacenters, subject to Swiss Federal Data Protection Act (LPD) and GDPR. Data is not subject to US CLOUD Act. Encryption at rest (AES-256), encryption in transit (TLS 1.3), and audit logging ensure compliance.

What Swiss Jurisdiction Means

Not subject to US CLOUD Act - US authorities cannot compel data access
Swiss LPD compliance - Strict privacy law with data minimization
GDPR adequate - EU data protection standards met
Political neutrality - Switzerland's longstanding data haven status

Infrastructure Details

Primary Region - CH-GVA-2 (Geneva, Switzerland)
Provider - Exoscale (Swiss company)
Encryption at Rest - AES-256
Encryption in Transit - TLS 1.3
Backup Storage - Exoscale SOS (Swiss S3)
Data Residency - Data never leaves Swiss territory

Compliance Framework

GDPR - EU data protection
Swiss LPD - Federal Data Protection Act
ISO 27001 - Infrastructure provider certified

GDPR Data Subject Rights

Vaultbrix supports all GDPR rights through the dashboard and API:

Right to access (Article 15)
Right to rectification (Article 16)
Right to erasure (Article 17)
Right to portability (Article 20)

Ready to get started?

Create your first Swiss-hosted, AI-native database in under 60 seconds.