Training
Per-pillar learning paths, recommended background reading, and the audio companion — a 52-episode podcast graph generated from this documentation.
How to use this site to learn
Each of the four pillars has a recommended path that takes you from "I've never used this before" to "I can build a production graph workload against it":
- Pillar 1 — Semantic storage — Turtle ingest, dictionary encoding, partitions, hexastore. Start here if you're new to RDF in Postgres.
- Pillar 2 — Semantic query (SPARQL 1.1) — the read surface (BGP joins → OPTIONAL → aggregates), then the write surface (UPDATE). Start here if you can ingest but don't yet query in SPARQL.
- Pillar 3 — Materialization (OWL 2 RL) — what the reasoner entails, when materialization pays off, the idempotence contract. Start here when SPARQL queries on raw assertions feel underpowered.
- Pillar 4 — Validation (SHACL Core) — shapes, constraint components, the report-as-data idiom. Start here when you need to gate or audit graph ingestion.
- Operations — observability, install, multi-PG support, SQL composition. Start here if you're stewarding a deployment.
Background reading
Material not specific to pgRDF, but worth a session if you're new to a given pillar:
| Pillar | External resource |
|---|---|
| RDF foundations | RDF 1.1 Primer (W3C) |
| SPARQL | SPARQL 1.1 Query Language (W3C) and Update (W3C) |
| OWL 2 RL | OWL 2 Web Ontology Language — Profiles (W3C) |
| SHACL | Shapes Constraint Language (W3C) |
| Turtle | RDF 1.1 Turtle (W3C) |
| Postgres extension model | PostgreSQL CREATE EXTENSION |
| pgrx (Rust + Postgres) | pgrx project |
Audio companion
A 52-episode podcast graph generated from this site's markdown via Kokoro, the Apache-2.0 TTS model, using engineered phrase libraries rather than verbatim page read-alouds. One podcast per documentation page. Same structure, ear-shaped delivery.
What it is
Each documentation page has a corresponding podcast, structured as a phrase library (TAG / INT / TRN / CTX / DEF / HOW / API / EX / CAV / CMP / ROAD / REF / CLS segments) and a playback order designed for listening, not page-reading. A CREATE TABLE snippet on the page becomes spoken prose in the podcast: "the dictionary table is just three columns: a bigserial primary key called id, a text column called value that's unique, and a small integer called term-type."
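As an illustration of that code-to-prose transformation, here is a hypothetical sketch. The `verbalise` helper and the column descriptions are invented for this example, not the project's actual pipeline code; the point is that DDL structure maps mechanically onto a spoken sentence:

```python
def verbalise(table: str, columns: list[tuple[str, str]]) -> str:
    """Render a table's column list as one spoken sentence.

    Underscores become hyphens so the TTS engine reads "term_type"
    as "term-type", matching the quoted delivery above.
    """
    words = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five"}
    parts = [f"a {ctype} called {name.replace('_', '-')}"
             for name, ctype in columns]
    body = (parts[0] if len(parts) == 1
            else ", ".join(parts[:-1]) + f", and {parts[-1]}")
    count = words.get(len(columns), str(len(columns)))
    return f"the {table} table is just {count} columns: {body}."

spoken = verbalise("dictionary", [
    ("id", "bigserial primary key"),
    ("value", "unique text column"),
    ("term_type", "small integer"),
])
```

Run on the dictionary table described above, this yields "the dictionary table is just three columns: a bigserial primary key called id, a unique text column called value, and a small integer called term-type."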
Generation pipeline
- Source of truth: this site's markdown.
- TTS engine: Kokoro (Apache-2.0, commercial use permitted).
- Default rendering: English, voice `af_heart`, clean post-production chain, 24 kHz OGG @ 96 kbps.
- Catalogue: 52 podcasts × ~30 segments each.
- Re-render: atomic per-segment; the text layer (`graphs/`) is immutable input.
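The per-segment re-render contract can be sketched as follows. This is illustrative only (`needs_render`, the hash scheme, and the file naming are assumptions, not the project's pipeline); the idea is that an immutable text layer lets a content hash decide, segment by segment, what must be re-rendered:

```python
import hashlib
from pathlib import Path

def needs_render(seg_id: str, segment_text: str, audio_dir: Path) -> bool:
    """True when no audio exists for this exact segment text.

    Because the text layer is immutable input, a content hash fully
    identifies a segment: renders are atomic and per-segment, and
    unchanged segments are never touched.
    """
    digest = hashlib.sha256(segment_text.encode("utf-8")).hexdigest()[:16]
    return not (audio_dir / f"{seg_id}-{digest}.ogg").exists()

def plan_rerender(segments: dict[str, str], audio_dir: Path) -> list[str]:
    """List the segment ids whose audio is missing or stale."""
    return [sid for sid, text in segments.items()
            if needs_render(sid, text, audio_dir)]
```

Editing one segment's text changes only that segment's hash, so a re-render touches exactly one OGG file out of the ~1,560 in the catalogue.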
Hosting plan
Audio assets live in their own repository so the documentation repo stays small and clones stay fast. Per GitHub Pages limits, single files are capped at 100 MB and a repository is soft-capped at 1 GB; each per-segment OGG is ~50–300 KB, so the full library fits comfortably. The docs site will reference the audio cross-origin once the catalogue is published.
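A back-of-envelope check of that claim, using the segment counts and sizes stated on this page and the limits as quoted:

```python
# 52 podcasts x ~30 segments, worst case ~300 KB per OGG segment.
segments = 52 * 30                      # 1,560 segments
worst_case_mb = segments * 300 / 1024   # ~457 MB total

# Comfortably inside both stated limits.
assert worst_case_mb < 1024    # under the ~1 GB soft repo cap
assert 300 / 1024 < 100        # any single segment is far below 100 MB
```

Even at the top of the size range the whole catalogue stays under half the soft repo cap, which is why a single dedicated audio repository suffices.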
Status
Coming soon
The audio catalogue is generated and pending publication to its own GitHub Pages site. Per-page "Listen to this episode" callouts will appear on each documentation page once the catalogue ships. This site reserves the integration shape but doesn't ship the audio elements yet.
Licence + attribution
- TTS model: Kokoro — Apache-2.0. Commercial use permitted with attribution.
- Generated audio: Apache-2.0 (matching pgRDF and the docs source).
- Use case: educational documentation companion. Fully within the GitHub Pages acceptable use scope.
See also
- Iconography — the visual vocabulary the site uses to differentiate concepts at a glance.
- The four pillars — quick tour of the capability surface.