How the branches work 🌿
Each node groups clips using real metrics from the core showcase data. We branch first by energy (whistles per minute), then by harmonic spread (frequency range), and finally by temporal flow (coverage). Leaves represent individual recordings with instant raw vs denoised playback.
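The real heuristics live in dolphain/branching.py, but the three-level split described above can be sketched roughly like this. The field names, thresholds, and sample clips here are illustrative assumptions, not the actual schema of branching_data.json:

```python
from collections import defaultdict

# Illustrative clip records (hypothetical fields and units):
# wpm = whistles per minute, freq_range = spread in kHz, coverage = 0..1.
clips = [
    {"id": "clip_a", "wpm": 12.0, "freq_range": 6.5, "coverage": 0.8},
    {"id": "clip_b", "wpm": 3.0,  "freq_range": 2.1, "coverage": 0.3},
    {"id": "clip_c", "wpm": 11.0, "freq_range": 6.8, "coverage": 0.2},
]

def band(value, threshold):
    """Split a metric into a two-way band at a single threshold."""
    return "high" if value >= threshold else "low"

def build_tree(clips, wpm_t=8.0, freq_t=5.0, cov_t=0.5):
    """Nest clips by energy, then harmonic spread, then temporal flow.

    Thresholds are made-up placeholders; the real ones come from the
    core showcase data.
    """
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for clip in clips:
        energy = band(clip["wpm"], wpm_t)          # level 1: energy
        spread = band(clip["freq_range"], freq_t)  # level 2: harmonic spread
        flow = band(clip["coverage"], cov_t)       # level 3: temporal flow
        tree[energy][spread][flow].append(clip["id"])  # leaves = recordings
    return tree

tree = build_tree(clips)
print(tree["high"]["high"]["high"])  # → ['clip_a']
```

Adding a new branch dimension then amounts to one more band() call and one more nesting level.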
Want to remix it? Tweak the heuristics in dolphain/branching.py or add new branch dimensions, then regenerate branching_data.json. This playground is perfect for future hackathon experiments: think seasonal themes, AI narration, or branching quests for marine biologists.