96 kHz.org
Advanced Audio Recording

Generation of generic MIDI Data with a DSP

This page describes an approach to creating sound patterns from musical rhythms described by mathematical equations. The idea behind this is to define music for long-running applications such as video games without specifying every note in detail, while still obtaining flexible and interesting music with few repetitions. The first project of this kind was done with my DSP Platform.

algorithms - obtaining music rhythms the algorithmic way

The process starts with a random selection of predefined sound and rhythm patterns known from trance and dance music, covering beginnings, middle phrases, solos and endings. For each track in the setup a different starting point is chosen. Modifiers are applied in order to add dynamics and emphasis to the tracks and so support the musical meaning. Patterns are selected with a randomly progressing counter defining the tone number, part and time index.


Let equations do the work

For instance, Amplitude (n,t) = 1 + k * sin ( (w + n/U) * t * (1 + n/W) ) defines a continuously changing amplitude for all of the pattern tracks, depending on the parameters W and U.
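This amplitude formula can be sketched in a few lines of Python. The parameter values k, w, U and W are assumptions chosen only for illustration; the article does not fix them.

```python
import math

# Sketch of Amplitude(n,t) = 1 + k * sin((w + n/U) * t * (1 + n/W)).
# All parameter values below are assumptions for illustration only.
K = 0.5       # modulation depth k
W_BASE = 0.1  # base angular speed w
U = 16.0
W = 32.0

def amplitude(n, t):
    """Continuously changing amplitude for pattern track n at time tick t."""
    return 1.0 + K * math.sin((W_BASE + n / U) * t * (1.0 + n / W))
```

Because the modulation speed depends on the track number n, each track drifts in and out of emphasis at a slightly different rate.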

The qualifier PatternOn (p,n,t) = random() + PatternOn (p,n,t-1) + PatternOn (p,n+1,t-1) + PatternOn (p,n-1,t-1) defines whether a pattern is set to "on" (playing) or not, according to a random number and taking into account the former states of the adjacent channels. This leads to a higher probability for a track to be activated when there is little activity.

The controller "p" selects the pattern method, which is determined by the musical meaning: for example, whether a solo has to be placed or whether an ending should be performed, leading over to a new block of 16, 32 or 64 time ticks.
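A minimal sketch of the PatternOn idea, assuming a small fixed channel count and an activation threshold. The article only gives the sum of the random number and the previous states, so the comparison direction is chosen here to reproduce the described behaviour: low activity on the neighbouring channels makes activation more likely.

```python
import random

N_TRACKS = 8     # assumed channel count
THRESHOLD = 1.5  # assumed activation threshold

def step_pattern_on(prev):
    """Advance the on/off state (0 or 1) of all tracks by one time tick.

    score follows random() plus the states of the track and its two
    neighbours at t-1; a low score (little surrounding activity) makes
    activation more likely, as described in the article."""
    nxt = []
    for n in range(N_TRACKS):
        left = prev[(n - 1) % N_TRACKS]
        right = prev[(n + 1) % N_TRACKS]
        score = random.random() + prev[n] + left + right
        nxt.append(1 if score < THRESHOLD else 0)
    return nxt
```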


algorithms - music tracks from generic algorithms

Patterns are distributed over the whole scene by controlling their successor, as done here with PatternIs (n,t) = PrimaryPattern(random() + k*n). This selects one out of 256 primary patterns for sound or rhythm, according to the number of channels. Depending on the random number, the pattern number either progresses within one measure or does not.
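A sketch of this successor control, assuming random() yields an integer index and k spreads the channels across the pattern table; both assumptions are mine, only the index formula PrimaryPattern(random() + k*n) comes from the text above.

```python
import random

N_PRIMARY = 256  # number of primary patterns

def pattern_is(n, k=7):
    """Select one of the 256 primary patterns for channel n.

    Assumes an integer random source; k (an assumed constant) spreads
    the channels across the pattern table."""
    r = random.randrange(N_PRIMARY)
    return (r + k * n) % N_PRIMARY
```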

As shown above, mask parameters are used to control the note generation in detail and to add variation to the tracks, as here: MaskIs (m,n,t) = Amplitude (PrimaryPattern(m,t), n, t) defines a dynamic mask for any of the tracks in case the pattern is "on", taking into account the point of time and its occurrence in the track.


Adding dynamics to sound tracks

With the modification qualifiers m = f(t,p) and p = f(t,m,k), the sound tracks are synthesized, distinguishing between solo tracks, accompaniment tracks, lead music and supporting tracks for synths. Bass drums and drums are also handled this way. SoundTrack (m,n,t,k) = Patterns (n,t) * Masks (m,k,t) cuts out MIDI notes for any of the tracks according to the particular mask and pattern used.
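The cutting-out step is essentially an element-wise product of a binary pattern row and a dynamic mask row; this sketch assumes both are given as plain lists over the time ticks.

```python
def sound_track(patterns, masks):
    """Gate the mask by the pattern: a note strength is emitted only
    where the pattern is on (non-zero); 0 means no note at that tick."""
    return [p * m for p, m in zip(patterns, masks)]
```

For example, sound_track([1, 0, 1, 1], [0.8, 0.9, 0.5, 1.0]) yields [0.8, 0, 0.5, 1.0], so the second tick produces no note even though the mask there is non-zero.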

algorithms - rhythmical arpeggios from mathematical algorithms


By using special masks with repetitive windows, arpeggio notes can be created from dynamically generated tracks, which are the result of bouncing groups of channels down to single tracks.

For each track the volume (loudness) is defined according to the calculated emphasis, as in Amplitude (p,n,t) = Amplitude (n,t) * (1 + Emphasis (t)), so the musical meaning of a note becomes evident here.
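As a sketch, with a hypothetical Emphasis function accenting every fourth time tick (the article does not define Emphasis itself):

```python
def emphasis(t):
    """Hypothetical emphasis curve: accent on every fourth time tick."""
    return 0.5 if t % 4 == 0 else 0.0

def track_volume(base_amplitude, t):
    """Amplitude(p,n,t) = Amplitude(n,t) * (1 + Emphasis(t))."""
    return base_amplitude * (1.0 + emphasis(t))
```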


Obtaining the finalized MIDI

Finally the MIDI channels are created from the tracks:

MIDINoteValue (n,k,p,t) = SoundTrack (n,t,k) * Amplitude (p,n,t)

MIDINoteTime (n,k,p,t) = (SoundTrack (n,t,k) > 0) * (1 + t * x * random() ), adding human timing characteristics

MIDIControllerVibrato (n,k,p,t) = MIDINoteOn() * MIDINoteTime (n,k,p,t) - t;

MIDIControllerTremolo (n,k,p,t) = MIDINoteOn() * f(t) * Amplitude (n,t);
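The final assembly can be sketched as follows. The jitter factor x and the None convention for silent ticks are assumptions; only the product structure follows the formulas above.

```python
import random

X_JITTER = 0.01  # assumed humanization factor x

def midi_note_value(sound_track_value, amplitude_value):
    """MIDINoteValue = SoundTrack * Amplitude."""
    return sound_track_value * amplitude_value

def midi_note_time(sound_track_value, t):
    """MIDINoteTime: defined only where a note exists, with a small
    random offset that adds human timing characteristics."""
    if sound_track_value <= 0:
        return None  # no note at this tick
    return t * (1.0 + X_JITTER * random.random())
```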

Sound Demos of synthesized music

Listen to a demo sound here: ARPDANCE


Similar Articles

Read an article on how to create MIDI with a DSP-System


© 2001 - Jürgen Schuhmacher