USING TACTILE MIX
A Belmont University instructor uses Tactile Mix during the software's development cycle, after a pilot test run prompted a change from cubes to spheres.
Physical Controllers vs. Hand-And-Gesture Tracking: Evaluation of Control Schemes for VR Audio Mixing
Alternative control schemes for manipulating the characteristics of audio signals have been designed and evaluated within the audio research community.
The medium of virtual reality (VR) presents a unique method of sound source visualization: a headset displays a virtual environment in which users can directly control sound sources, with minimal intermediary interference, using a variety of controllers.
To provide insight into the design and evaluation of VR systems for audio mixing, the differences in subject preference between physical controllers and hand-and-gesture detection controls were investigated.
A VR audio mixing interface was iteratively developed to facilitate a subject evaluation of the differences between these two control schemes.
Ten subjects, recruited from a population of audio engineering technology undergraduate students, graduate students, and instructors, participated in a subjective audio mixing task.
The physical controllers outperformed the hand-and-gesture controls in each individual mean score of subject-perceived accuracy, efficiency, and satisfaction, though with mixed statistical significance.
No significant difference in task completion time was found between the two control schemes.
Additionally, the test participants largely preferred the physical controllers over the hand-and-gesture control scheme, with strong indication.
Furthermore, no significant difference was found between more experienced and less experienced users in their general ability to make adjustments.
This study may contribute to the wider field of audio engineering by providing insight into the design and evaluation of alternative audio mixing interfaces, and by further demonstrating the value of VR for visualizing and controlling sound sources in an articulated, convincing digital environment suitable for audio mixing tasks.
(FUN, EASIER TO READ version)
People come up with new ways to mix sound all the time. Just look at the Maschine, or all the Akai controllers you can get at the guitar store.
Researchers have looked at gestural controls for audio mixing before, and others have looked at VR for audio mixing, but usually not together, at least not in this context.
These two lanes of research meet at the “stage metaphor”, where sound sources are laid out on a virtual stage and you mix by moving them around, so they’re basically cousins. If you read the paper, you should understand it fully by around page 14 or so.
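To make the stage metaphor concrete, here's a minimal sketch of the usual mapping: a source's left-right position becomes pan, and its distance from the listener becomes level. The function name, the inverse-distance law, and the clamping values are my own illustrative assumptions, not details taken from the paper.

```python
import math

def stage_to_mix(src_x, src_y, listener_x=0.0, listener_y=0.0):
    """Map a source's 2-D stage position to (pan, gain).

    pan:  -1.0 (hard left) to +1.0 (hard right), from the angle
          between the listener's forward axis (+y) and the source.
    gain: linear amplitude falling off with distance (1/d),
          clamped at unity inside 1 meter.
    """
    dx = src_x - listener_x
    dy = src_y - listener_y
    # Azimuth relative to "straight ahead" (+y); +-90 degrees maps to hard pan.
    azimuth = math.atan2(dx, dy)
    pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
    # Inverse-distance level: a source 2 m away is half the amplitude at 1 m.
    dist = math.hypot(dx, dy)
    gain = 1.0 / max(dist, 1.0)
    return pan, gain
```

So a source directly ahead at 2 m comes back centered at half amplitude, while a source off to the right pans right. Any spatial mixing interface, VR or otherwise, boils down to some mapping like this.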
I couldn’t find a program to test the differences in preference between the physical controllers and the gesture tracking, so I built my own DAW from the ground up to make it happen, which took about a year to a year and a half.
People largely preferred the physical controllers, but ratings were overwhelmingly positive for both schemes.
More and less experienced engineers gave very similar ratings.
Consoles are pretty backwards. Why do you have to push a fader AWAY from you to make something louder? That makes no sense to me, especially since most of these systems are digital. I get into this around section 2.
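And that's kind of the point: in a digital system the fader's direction and taper are just a mapping you choose in software. Here's a minimal sketch of a position-to-gain curve; the linear-in-dB taper and the -60/+10 dB range are assumptions for illustration, not values from any real console or from the paper.

```python
def fader_to_gain_db(pos, min_db=-60.0, max_db=10.0):
    """Map a normalized fader position (0.0 = bottom, 1.0 = top) to dB.

    A simple linear-in-dB taper; real consoles use piecewise tapers
    with extra resolution near unity. Position 0.0 is fully muted.
    """
    if pos <= 0.0:
        return float("-inf")
    return min_db + pos * (max_db - min_db)

def db_to_linear(db):
    """Convert a dB value to a linear amplitude multiplier."""
    return 10.0 ** (db / 20.0)
```

Flipping the fader so that pulling it *toward* you makes things louder is just `fader_to_gain_db(1.0 - pos)`: one line of code, not a hardware constraint.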
See picture below. Music dimension. Pretty cool, huh? I’m just applying some things that WAVE, MotionMix, and LAMI did before me. Those folks are geniuses, though (see paper).