How It Works
Webarmonium transforms real-time web activity into generative music and visuals through a direct correlation system. Three data sources are polled continuously: Wikipedia edits (every 5s), HackerNews posts (every 10s), and GitHub commits (every 60s). Each source feeds metrics into a dynamic normalization engine that tracks historical min/max values rather than using fixed thresholds, so the musical output stays varied as the system adapts to actual data patterns over time.
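A minimal sketch of what such a normalization stage could look like, assuming a simple running min/max per metric; the class and method names (DynamicNormalizer, observe, normalize) are illustrative and not taken from the project:

```typescript
// Sketch: each metric is scaled against the minimum and maximum values
// observed so far, rather than against fixed thresholds.
class DynamicNormalizer {
  private min = Number.POSITIVE_INFINITY;
  private max = Number.NEGATIVE_INFINITY;

  // Record a new raw reading and widen the historical range if needed.
  observe(value: number): void {
    this.min = Math.min(this.min, value);
    this.max = Math.max(this.max, value);
  }

  // Map a raw reading into [0, 1] relative to everything seen so far.
  normalize(value: number): number {
    const range = this.max - this.min;
    if (range <= 0) return 0.5; // no spread yet: return a neutral midpoint
    return (value - this.min) / range;
  }
}

// One normalizer per metric, fed on each poll cycle (5s / 10s / 60s).
const editRate = new DynamicNormalizer();
editRate.observe(42);                   // raw Wikipedia edits in the last window
const scaled = editRate.normalize(42);  // 0..1, adapts as new extremes appear
```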
Monitored parameters include edit rate, edit size, and new article count (Wikipedia); post frequency, average upvotes, and comment counts (HackerNews); and commit frequency, repository creations, and deletions (GitHub), along with the velocity (rate of change) of each metric. These metrics generate virtual gestures that feed into the same composition engine used in collaborative rooms, employing six generative algorithms (cellular automata, fractals, Markov chains, neural patterns, Fibonacci sequences, and chaos theory). The velocity of each metric determines gesture intensity: only sources whose metrics are changing significantly trigger musical events, which prevents sonic overload and mimics natural human interaction patterns.
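A rough sketch of the velocity gating idea, under stated assumptions: the threshold value, the intensity scaling, and the function names (velocity, maybeTriggerGesture, triggerGesture) are hypothetical and only illustrate the "significant change triggers a gesture" behavior described above:

```typescript
// A metric sample after normalization.
interface MetricSample {
  value: number;      // normalized metric value (0..1)
  timestamp: number;  // ms since epoch
}

// Velocity = absolute rate of change between two consecutive samples.
function velocity(prev: MetricSample, curr: MetricSample): number {
  const dt = (curr.timestamp - prev.timestamp) / 1000; // seconds
  return dt > 0 ? Math.abs(curr.value - prev.value) / dt : 0;
}

const VELOCITY_THRESHOLD = 0.05; // hypothetical cutoff, per second

function maybeTriggerGesture(prev: MetricSample, curr: MetricSample): void {
  const v = velocity(prev, curr);
  if (v < VELOCITY_THRESHOLD) return; // quiet sources stay silent
  // Gesture intensity grows with how fast the metric is moving, capped at 1.
  const intensity = Math.min(1, v / (VELOCITY_THRESHOLD * 10));
  triggerGesture(intensity);
}

function triggerGesture(intensity: number): void {
  console.log(`gesture fired with intensity ${intensity.toFixed(2)}`);
}
```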
Spatial and timbral mapping: Wikipedia controls the left region with bass tessitura (65-130Hz), HackerNews the center with tenor range (196-392Hz), and GitHub the right region with soprano frequencies (523-1047Hz). Cursor positions on the canvas directly correlate with metric activity—vertical position maps to pitch within each tessitura, while horizontal movement within each region reflects the source's current velocity. Visual pulses, particle flows, and network connections emerge from the same gesture data that drives the audio, maintaining a unified audio-visual correlation where what you see directly represents what you hear.
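The frequency bands below come from the text; everything else is an assumption for illustration. The region boundaries (left/center/right thirds of the canvas), the linear pitch interpolation (a real implementation might map pitch exponentially), and the names (REGIONS, mapCursor) are hypothetical:

```typescript
type Source = "wikipedia" | "hackernews" | "github";

// Each source owns a horizontal slice of the canvas and a pitch band (tessitura).
const REGIONS: Record<Source, { xStart: number; xEnd: number; fLow: number; fHigh: number }> = {
  wikipedia:  { xStart: 0,     xEnd: 1 / 3, fLow: 65,  fHigh: 130 },  // bass, left
  hackernews: { xStart: 1 / 3, xEnd: 2 / 3, fLow: 196, fHigh: 392 },  // tenor, center
  github:     { xStart: 2 / 3, xEnd: 1,     fLow: 523, fHigh: 1047 }, // soprano, right
};

// pitchNorm and velocityNorm are assumed to be 0..1 values produced by the
// normalization and velocity stages sketched earlier.
function mapCursor(source: Source, pitchNorm: number, velocityNorm: number) {
  const r = REGIONS[source];
  return {
    x: r.xStart + velocityNorm * (r.xEnd - r.xStart),      // horizontal = current velocity
    y: 1 - pitchNorm,                                      // canvas y grows downward; high pitch sits near the top
    frequencyHz: r.fLow + pitchNorm * (r.fHigh - r.fLow),  // pitch within the source's tessitura
  };
}

// Example: a fast-moving GitHub stream lands high in the right-hand soprano band.
console.log(mapCursor("github", 0.8, 0.4));
```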