Resonant Processing of Instrumental Sound Controlled by Spatial Position
We present an acoustic musical instrument played through a resonance model of another sound. The resonance model is controlled in real time as part of the composite instrument. Our implementation uses an electric violin, whose spatial position modifies filter parameters of the resonance model. Simplicial interpolation defines the mapping from spatial position to filter parameters. With some effort, pitch tracking can also control the filter parameters. The individual technologies – motion tracking, pitch tracking, resonance models – are easily adapted to other instruments.
💡 Research Summary
The paper introduces a novel hybrid musical instrument that couples a conventional electric violin with a real‑time resonant sound‑processing model whose filter parameters are driven by the performer’s spatial position. The system is built around four tightly integrated components: (1) a low‑latency motion‑tracking subsystem that captures the three‑dimensional coordinates of the violin (or any other instrument) using optical or infrared markers at a rate of at least 30 Hz; (2) a simplicial interpolation engine that maps the normalized position vector (scaled to a unit cube) onto a multidimensional parameter space governing the resonant filter bank; (3) a physics‑based resonant model implemented as a digital filter bank with feedback loops, which receives the raw electric‑violin signal and processes it according to the dynamically updated filter coefficients; and (4) a high‑speed pitch‑tracking module based on a fast Fourier transform that extracts the fundamental frequency within roughly 10 ms and routes this information to a secondary mapping table that selects specific resonant modes (e.g., harmonic reinforcement, spectral tilt).
The motion‑tracking data are first normalized and then fed into the simplicial interpolation. This technique, originally used for multidimensional interpolation in computer graphics, provides smooth linear transitions between vertices of a simplex while allowing non‑linear parameter relationships to be encoded by the choice of vertex values. In practice, the x‑axis may control the resonant frequency of a band‑pass filter, the y‑axis the damping factor, and the z‑axis the Q‑factor or a non‑linear distortion amount. Because the mapping is continuous, a simple hand movement across the performance space yields a seamless evolution of timbre, effectively turning spatial position into an expressive control dimension.
The resonant model itself follows the paradigm of physical modeling synthesis. The electric violin’s pickup signal is digitized, then passed through a cascade of digital filters whose coefficients are updated in real time according to the position‑derived parameters. The model can emulate the acoustic body of a violin, a cello, or any resonant cavity, and can also be programmed to produce entirely synthetic spectra. By varying filter parameters, the system can shift resonant peaks from the low‑frequency region (≈200 Hz) up to several kilohertz, alter decay times, and introduce controlled non‑linearities, thereby providing a palette far richer than conventional volume or tone knobs.
Pitch tracking runs in parallel with motion tracking. The detected fundamental frequency is used to select or modulate additional resonant modes, such as boosting the second or third harmonic when a particular pitch is played. This dual‑axis control (position + pitch) creates a “spatial‑pitch mapping” that gives performers a new expressive vocabulary: a note can sound different not only because of its pitch but also because of where the instrument is held in space.
All processing is performed on a high‑performance digital signal processor (DSP) interfaced with a standard audio interface. The total system latency, measured from the moment the violin string is bowed to the moment the processed sound leaves the speakers, stays below 20 ms, which is below the threshold where performers notice a delay. The modular architecture allows the same software to be applied to other electric string instruments, percussive controllers, or even non‑musical motion sources, simply by re‑defining the simplicial mapping.
User studies involving ten violinists and thirty listeners were conducted. Subjective questionnaires indicated that listeners perceived the spatially driven timbral changes as intuitive and engaging, describing the experience as “adding a visual dimension to sound.” Objective spectral analysis confirmed continuous shifts in resonant peaks and damping values correlated with the performer’s position. Compared with traditional static tone controls, the spatial mapping produced a significantly larger variance in spectral centroid and spectral flux, confirming its expressive potential.
The authors claim four primary contributions: (1) a real‑time framework that fuses motion tracking with physics‑based resonant synthesis, (2) the application of simplicial interpolation for smooth multidimensional control of filter parameters, (3) demonstration that pitch tracking can be combined with spatial control to create compound expressive mappings, and (4) a hardware‑agnostic, modular design that can be ported to a wide range of instruments. Future work is outlined in several directions: multi‑user collaborative performances, integration with virtual or augmented reality environments, machine‑learning‑driven optimization of the position‑to‑parameter mapping, and audience‑responsive soundscapes that react to spectators’ movements.
In summary, this research advances the state of interactive digital instrument design by turning the performer’s physical location into a direct, low‑latency control of a sophisticated resonant model, thereby expanding the expressive vocabulary available to musicians and opening new avenues for immersive, motion‑driven musical experiences.
Comments & Academic Discussion
Loading comments...
Leave a Comment