Quick Start¶
A step-by-step tutorial that walks you through BigBrotr's core services, from seeding initial relay URLs to viewing data in the database.
Prerequisites¶
Before starting, make sure you have:
- Python 3.11+ installed
- BigBrotr installed with uv sync --group dev (see Installation)
- PostgreSQL and PGBouncer running (Docker or local)
Step 1: Clone and Install¶
If you have not already done so during installation:
```shell
git clone https://github.com/BigBrotr/bigbrotr.git
cd bigbrotr
curl -LsSf https://astral.sh/uv/install.sh | sh   # install uv (one-time)
uv sync --group dev
```
Step 2: Start Infrastructure¶
Start only the database containers -- you will run the application services manually:
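A likely invocation, assuming the Compose file referenced later in this guide defines the postgres and pgbouncer services:

```shell
# Start only the database containers in the background;
# service names match those used later in this guide.
docker compose -f deployments/bigbrotr/docker-compose.yaml up -d postgres pgbouncer
```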
Wait for PostgreSQL to become healthy:
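One way to watch container health is with docker compose ps, which reports health in the STATUS column:

```shell
# Both containers should eventually show a status of "healthy".
docker compose -f deployments/bigbrotr/docker-compose.yaml ps
```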
You should see both postgres and pgbouncer with status healthy.
Note
The PostgreSQL init scripts in deployments/bigbrotr/postgres/init/ run
automatically on first start. They create all tables, stored functions,
indexes, and materialized views.
Step 3: Set Environment Variables¶
Services need the database password to connect. Each service uses its own role (configured via pool overrides in the service YAML). Set the writer password:
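The exact variable name depends on your service YAML; as a hypothetical example, if the writer role reads its password from POSTGRES_WRITER_PASSWORD:

```shell
# Hypothetical variable name -- check your service YAML for the actual
# setting; the value must match the writer role's database password.
export POSTGRES_WRITER_PASSWORD="change-me"
```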
Step 4: Run the Seeder¶
The Seeder is a one-shot service that loads initial relay URLs from a seed file
into the service_state table as candidates for the Finder:
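Following the python -m bigbrotr pattern used for the other services later in this guide, the invocation would look like this (since the Seeder is one-shot, no extra flag should be needed -- an assumption):

```shell
# From deployments/bigbrotr/, so the seed file and configs resolve correctly
python -m bigbrotr seeder
```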
You will see structured log output indicating how many seed URLs were loaded.
Info
The seed file is located at deployments/bigbrotr/static/seed_relays.txt.
Each line is a wss:// or ws:// relay URL.
Step 5: Run the Finder¶
The Finder discovers new relay URLs by scanning stored events (NIP-65 relay lists, kind 2, kind 3) and querying external APIs. Run a single cycle:
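Assuming --once is the single-cycle flag (it is referenced later in this guide):

```shell
# Run one discovery cycle and exit
python -m bigbrotr finder --once
```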
Expected output:
```
info finder cycle_started
info finder api_scan_completed source=api candidates=85
info finder event_scan_completed source=events candidates=210
info finder cycle_completed total_candidates=295 duration=12.4
```
The Finder writes discovered URLs as candidates in service_state for the Validator.
Tip
Add --log-level DEBUG to any command for verbose output:
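For example, with the single-cycle Finder run:

```shell
# Same command as above, with verbose logging enabled
python -m bigbrotr finder --once --log-level DEBUG
```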
Step 6: Run the Validator¶
The Validator tests each candidate URL by opening a WebSocket connection to confirm
it is a live Nostr relay. Valid relays are promoted to the relay table:
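Assuming the same --once single-cycle flag as the Finder:

```shell
# Test all pending candidates once, then exit
python -m bigbrotr validator --once
```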
Expected output:
```
info validator cycle_started candidates=295
info validator batch_completed tested=295 valid=180 invalid=115 duration=45.2
info validator cycle_completed promoted=180 duration=45.8
```
Note
Validation is network-bound and may take a minute or more depending on the number of candidates and network conditions. The Validator uses per-network concurrency limits defined in its configuration file.
Step 7: Check the Database¶
After running the first three services, your database now contains real data.
Connect with psql to inspect:
```shell
docker compose -f deployments/bigbrotr/docker-compose.yaml exec postgres \
  psql -U admin -d bigbrotr
```
Useful queries:
```sql
-- Count validated relays
SELECT count(*) FROM relay;

-- View relays by network type
SELECT network, count(*) FROM relay GROUP BY network ORDER BY count DESC;

-- Check service state entries
SELECT key, state_type, count(*) FROM service_state GROUP BY key, state_type;
```
What Just Happened?¶
You ran three of BigBrotr's six independent services:
```mermaid
flowchart TD
    DB[("PostgreSQL")]
    SE["Seeder<br/><small>Bootstrap</small>"]
    FI["Finder<br/><small>Discovery</small>"]
    VA["Validator<br/><small>Verification</small>"]
    MO["Monitor<br/><small>Health checks</small>"]
    SY["Synchronizer<br/><small>Event collection</small>"]
    RE["Refresher<br/><small>View refresh</small>"]
    SE --> DB
    FI --> DB
    VA --> DB
    MO --> DB
    SY --> DB
    RE --> DB
    style SE fill:#7B1FA2,color:#fff,stroke:#4A148C
    style FI fill:#7B1FA2,color:#fff,stroke:#4A148C
    style VA fill:#7B1FA2,color:#fff,stroke:#4A148C
    style MO fill:#7B1FA2,color:#fff,stroke:#4A148C
    style SY fill:#7B1FA2,color:#fff,stroke:#4A148C
    style RE fill:#7B1FA2,color:#fff,stroke:#4A148C
    style DB fill:#311B92,color:#fff,stroke:#1A237E
```
- Seeder loaded seed URLs from a text file into the database as candidates
- Finder discovered additional relay URLs from events and external APIs
- Validator tested every candidate via WebSocket and promoted live relays
The remaining three services handle monitoring, event archiving, and analytics:
- Monitor performs NIP-11 and NIP-66 health checks on validated relays and publishes results as kind 10166/30166 Nostr events (requires PRIVATE_KEY)
- Synchronizer connects to validated relays, subscribes to events, and archives them with cursor-based pagination
- Refresher refreshes materialized views that power analytics queries
Running All Services¶
To run all services continuously, start each one without --once; every service then enters an infinite loop with configurable intervals between cycles:
```shell
# In separate terminals (from deployments/bigbrotr/):
python -m bigbrotr finder
python -m bigbrotr validator
python -m bigbrotr monitor        # requires PRIVATE_KEY env var
python -m bigbrotr synchronizer
python -m bigbrotr refresher
```
Warning
The Monitor requires a PRIVATE_KEY environment variable (hex format, 64
characters) to sign and publish Nostr events. Generate one with:
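One common way to produce 64 hex characters is to hex-encode 32 cryptographically random bytes with openssl; any equivalent random source works:

```shell
# 32 random bytes, hex-encoded -> exactly 64 characters
export PRIVATE_KEY=$(openssl rand -hex 32)
echo ${#PRIVATE_KEY}   # 64
```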
Next Steps¶
You have successfully run BigBrotr's core services manually. To deploy the full stack with monitoring, Grafana dashboards, and automatic restarts:
- First Deployment -- Full Docker Compose deployment
- Configuration Reference -- Tune intervals, timeouts, and networks
- Architecture -- Understand the system design
Related Documentation¶
- Installation -- Install paths and system requirements
- First Deployment -- Full stack Docker deployment
- Configuration Reference -- YAML configuration for all services
- Database -- Schema, stored functions, and materialized views