Open-source visual performance system

This software system generates step sequences, sound synthesis and visuals in a single application for real-time performance. It consists of 16 software instruments with a simplified but flexible wavetable synthesis. The data for sequences, wavetables and live visuals is taken from images, which are either captured in real time from the web via Yahoo image search, taken from a webcam, or loaded from locally stored files. Each row of pixels provides the necessary information for a step or a sample.
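As a sketch of the idea (the names here are illustrative, not the project's actual API, which reads pixels through Qt), one pixel row can be turned into wavetable samples by mapping each pixel's brightness to a value in [-1, 1]:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical RGB pixel; the real project loads images via Qt's QImage.
struct Pixel { uint8_t r, g, b; };

// Map one pixel row to audio samples in [-1, 1]: the average of the
// colour channels (the brightness) becomes the sample value.
std::vector<float> rowToWavetable(const std::vector<Pixel>& row) {
    std::vector<float> table;
    table.reserve(row.size());
    for (const Pixel& p : row) {
        float brightness = (p.r + p.g + p.b) / (3.0f * 255.0f); // 0..1
        table.push_back(brightness * 2.0f - 1.0f);              // -1..1
    }
    return table;
}
```

The same row read with a threshold instead of a scale would give gate on/off values for a step sequence.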

This software is created with the following techniques:

  • OpenGL for the graphical user interface and the visuals
  • Trolltech's Qt as glue, for instance for threading, networking, image loading and persistence
  • JACK for real-time audio synthesis
  • ALSA for MIDI to control the system
  • OpenCV for capturing and analysing image data, in a later version

Not only is the system built on open source, it will be released as free open source itself, so it is available to other artists. The sound is rather electronic and, due to the step sequencers, minimal and rhythmic, so it is ideal for IDM or industrial. This application is made first and foremost for the project Notstandskomitee, so it tends to sound harsh.

An important aspect is experimenting with alternative user interaction. Creating music on the computer with the mouse as the primary interface is neither healthy nor inspiring. This system uses the Korg PadKontrol as its interface. Recent Korg controllers can be hacked rather easily: a certain sequence of MIDI sysex data enables a native mode that bypasses the firmware. In this mode, external software can control every LED and display on the device and create a custom interface with visual feedback.
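The actual byte values of the native-mode request are device-specific and not reproduced here. As a hedged sketch, a sysex message is just a byte sequence framed by 0xF0 and 0xF7 around the manufacturer ID and a payload, which the application then writes to the ALSA MIDI port:

```cpp
#include <cstdint>
#include <vector>

// Frame a MIDI sysex message. The payload that switches the PadKontrol
// into native mode is device-specific and omitted here; callers pass
// placeholder bytes, NOT the real sequence.
std::vector<uint8_t> makeSysex(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> msg;
    msg.push_back(0xF0);                                // sysex start
    msg.push_back(0x42);                                // Korg manufacturer ID
    msg.insert(msg.end(), payload.begin(), payload.end());
    msg.push_back(0xF7);                                // sysex end
    return msg;
}
```

Controlling an individual LED works the same way: the software keeps sending such framed messages, so the whole surface of the controller becomes a display driven by the application.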

With that interface the performing musician can choose, manipulate and mute sequences, alter sounds and so on without having to stop the sequencer. The user interface does not work with mouse clicks; an alternative input method using the alphanumeric keyboard will be implemented in case no Korg is available.

The UI is overlaid on the real-time 3D graphics and is also visible to the audience, so they can see what is going on, such as the performer's selection process.

The synthesizer consists of 16 monophonic voices, each with two wavetable oscillators running through a dynamic waveshaper and a multimode filter. The wavetables hold up to 256 waveforms which can be stepped through to animate the sound; the same is possible with the table of the waveshaper. Modulation sources are the sequence parameters, but also simple AR envelopes, which can even be looped.
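A minimal sketch of one such oscillator, assuming a straightforward design rather than the project's actual code: the table holds several single-cycle waveforms, and the active waveform index can be stepped per note or per sample to animate the timbre.

```cpp
#include <cstddef>
#include <vector>

// One wavetable oscillator: 'table' holds single-cycle waveforms (up to
// 256 in the described system); 'waveIndex' selects the active one and
// can be modulated to step through the table.
struct WavetableOsc {
    std::vector<std::vector<float>> table;
    double phase = 0.0;        // position in the cycle, 0..1
    double phaseInc = 0.0;     // frequency / sample rate
    std::size_t waveIndex = 0; // which waveform is currently playing

    float next() {
        const std::vector<float>& wave = table[waveIndex];
        // nearest-sample lookup; a real implementation would interpolate
        std::size_t i =
            static_cast<std::size_t>(phase * wave.size()) % wave.size();
        float out = wave[i];
        phase += phaseInc;
        if (phase >= 1.0) phase -= 1.0;
        return out;
    }
};
```

Two of these per voice, summed and fed through the waveshaper and filter, give the signal chain described above.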

The clock source of the sequencer is taken neither from the MIDI subsystem nor directly from any timers; it is derived from the JACK audio callback, which turns out to be more accurate.
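Deriving the clock from the audio callback comes down to counting samples: at a given tempo each sequencer step lasts a fixed number of frames, and the process callback fires step events whenever the counter crosses that boundary. A sketch of the arithmetic (not the project's actual code, and assuming 16th-note steps):

```cpp
#include <cstdint>

// Frames per sequencer step: one beat lasts 60/bpm seconds, and a
// 16th-note step is a quarter of a beat.
uint32_t framesPerStep(double bpm, uint32_t sampleRate) {
    double secondsPerStep = (60.0 / bpm) / 4.0;
    return static_cast<uint32_t>(secondsPerStep * sampleRate + 0.5);
}

// Advance the clock by one process-callback buffer of 'nframes' frames;
// returns how many step boundaries fall inside that buffer.
uint32_t advanceClock(uint32_t& frameCounter, uint32_t nframes,
                      uint32_t framesPerStep) {
    frameCounter += nframes;
    uint32_t steps = frameCounter / framesPerStep;
    frameCounter %= framesPerStep;
    return steps;
}
```

Because the counter advances in lockstep with the audio stream, the timing is sample-accurate and immune to the scheduling jitter that plagues OS timers and MIDI clock.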

The code can be fetched at http://sourceforge.net/projects/visualsynth/ although there is no formal release with documentation yet.