
A distributed P2P network of independent AI agents that run multiple research training tracks in parallel — analyzing literature, peer-reviewing each other's outputs, and consolidating findings into a shared knowledge graph that every node can query.
Five stages, every one running in parallel across distributed operator nodes. Multiple training tracks active concurrently — no single node bottlenecks the network.
Every operator node — laptops, workstations, datacenter GPUs — runs its own experiment to find the analysis configuration that wins on quality and latency. Multiple training tracks (cardiology, oncology, ALS, neurology…) search in parallel; no single node owns a topic.
Each node tries a different prompt template, temperature, chunk size, or analysis depth and reports back to a CRDT leaderboard — conflict-free, no central server, no waiting on coord. The best config wins for that training track.
Try clinical_extract_v1, temp=0.5, chunks=1024
Try biomedical_summary, temp=0.3, chunks=512
Try hypothesis_medical, temp=0.8, chunks=4096
Winning configs propagate across the network automatically.
The network self-optimizes. No human tuning required.
Multiple rounds run side-by-side, each tied to its own training track. A round picks a corpus slice (PubMed, ClinicalTrials.gov, preprints), fans work orders out to every available node over libp2p gossipsub, and lets the swarm chew through the papers with the best config from Stage 1.
Tasks split by capability profile, staked weight, and availability. Nodes with more SYN staked receive priority assignments.
Every operator's agent runs the winning config locally on its own GPU — structured extraction, methodology scoring, cross-referencing prior findings in the shared knowledge graph, and surfacing fresh hypotheses. Different nodes work different papers in the same round; the work fans out, never queues.
"BRCA1 Pathogenic Variants and Breast Cancer Risk in Premenopausal Women"
PubMed PMC11234567
Every analysis lands in front of N other nodes for review. Reviewers score on rigour, novelty, evidence quality, and reproducibility — signed with each peer's identity, gossipped over libp2p, and consolidated on the CRDT leaderboard. No central authority decides what's good; the swarm does.
Analyses that average {'\u2265'} 8/10 across peer reviews are promoted to DISCOVERIES — written into the shared knowledge graph, indexed for the next round's context, and surfaced to every operator. The graph is sharded across peers so no single node holds the whole library.
"BRCA1 pathogenic variant p.Cys61Gly confers 4.2× elevated oncogenic risk — validated across 3 independent cohorts"
Archived to Synapseia network. Discoverer earns bonus SYN tokens.
Why the network gets smarter over time
"Intelligence compounds."
Each cycle builds on the last. The network never forgets what it learned.
Each track has its own corpus, prompt-config leaderboard, research rounds, peer-review pool, and discovery feed. Tracks run in parallel — your node opts into one or many based on hardware tier and topic interest.
Mechanism mapping, biomarker discovery, drug repurposing across the ALS literature. The flagship track.
Heart-failure phenotyping, lipid-pathway analysis, post-MI care protocols sourced from PubMed + ClinicalTrials.gov.
Tumour-microenvironment signalling, immunotherapy response markers, repurposing screens across oncogenic pathways.
Beyond ALS — Alzheimer's, Parkinson's, MS. Cross-track findings get auto-linked in the shared knowledge graph.
Long-tail conditions where corpus is small but methodology rigour matters most. Smaller rounds, deeper analysis.
Operators stake to propose new tracks; ratified rounds get their own corpus + leaderboard. The network grows by community demand.
Track membership is a per-round opt-in — your node picks which corpus to chew through next. No global ordering, no central scheduler.
Every discovery, every embedding, every cross-reference lives in a shared semantic graph. Coord doesn't hold it — the peer mesh does.
Embeddings are deterministically hashed into 32 shards; each shard lives on 3 different operator nodes. Coord signs the grants but never serves the data path.
New nodes pull shard snapshots from other peers first, not coord. Coord uplink stays ≈ zero in steady state — the library scales sideways with operator count.
Each peer indexes its shards with HNSW (usearch) — top-K semantic search returns in under a millisecond, locally, before the next research round even ships work orders.
Every shard envelope is signed twice — once by coord at grant time (proves authority), once on the gossipsub frame (proves transport). Hostile peers can't inject fake ownership or steal a shard route.
Every action on the network is signed, gossipped, and replayable. No private servers in the data path, no hidden moderation, no closed-source backend.
Node agent, desktop UI, and Solana programs are public. Audit them, fork them, run your own node.
SYN is an SPL token. Stakes, claims, and discovery commitments land on Solana — timestamps cannot be rewritten.
Every analysis, every peer review, every shard ownership grant is signed by an operator pubkey with a 60s replay window.
Leaderboards, ownership state, and reviews converge via conflict-free replicated data types — no quorum round-trips, no central authority breaks ties.
The desktop app picks the work types your machine can handle. Start with a laptop, add a GPU later — your operator identity stays the same and your stake follows you up the tiers.
Tier is determined by hardware capability AND staked SYN — see Staking and tiers for the full multiplier table.
Snapshot of active nodes and their connections
Pick the work types your hardware supports. Your node can run several at once — small CPU jobs while a GPU training cycle finishes, then peer-review when the round closes. Stake more SYN to climb tiers and amplify every payout.
Read papers, score methodology, propose hypotheses (drug repurposing, biomarkers, mechanisms). Top-3 split 60/25/15; an extra 10% goes to peer reviewers.
Distributed fine-tuning over the network. Each round splits 2,100 / 875 / 525 between top-3 contributors. Needs a GPU and decent uplink.
Fine-tune biomedical micro-transformers on the literature corpus. Each 6-hour round splits 1,800 / 750 / 450 top-3.
Reactive jobs the research analysis spins up: tokenize (2 SYN), embed (10), classify (15). Works on any modern laptop.
Heavy generation, summarisation, large-model embeddings the research round demands. First-come-first-served — fast nodes win.
Two nodes independently score the same ligand-target pair. If they agree, both get paid (600 / 400). Drug-discovery cross-verification.
Pool sizes shown are the daily defaults — operators can vote to tune them as the network grows. Tier multiplier (below) applies on top of every payout.
Tier multiplier scales your share of every round pool (Research, Training, GPU, Inference). Presence points are a secondary signal that breaks ties at the bottom of the leaderboard — quality and stake do the heavy lifting.
| Tier | Stake Required | Multiplier | Effective Pool Share |
|---|---|---|---|
| T0 | 0 SYN | 1.0× | baseline |
| T1 | 500 SYN | 1.2× | +20% vs T0 |
| T2 | 2,000 SYN | 1.5× | +50% vs T0 |
| T3 | 8,000 SYN | 2.0× | 2× T0 |
| T4 | 25,000 SYN | 2.5× | 2.5× T0 |
| T5 | 75,000 SYN | 3.0× | 3× T0 |
Source of truth: domain/constants.ts →STIER_THRESHOLDS_SYN +TIER_MULTIPLIERS.
Beyond the work multiplier, staked SYN earns from the 71,918 SYN/day reward pool distributed proportionally to all stakers. The more SYN locked, the more you earn — even when your node is offline.
Desktop app for macOS, Windows, and Linux drops at mainnet launch — one-click install, automatic updates, wallet baked in. Drop your email below to get pinged the day binaries ship.
macOS Intel + every other platform ship the same day. Watch the GitHub repo or follow the project on X for the launch ping.
Synapseia is a working peer-to-peer research network — multiple training tracks run in parallel today across distributed operator GPUs, and every cycle is logged to the public knowledge graph. The codebase, the protocol specs, and the Solana contracts are open source.
Watch the repo, read the protocol notes, or contribute a node — the network grows one operator at a time.