The Musical Instrument Digital Interface (MIDI) was proposed in the early 1980s to solve the communication problem between electronic musical instruments. MIDI does not transmit audio signals but instructions, such as notes and control parameters, which tell a MIDI device what to do and how to do it, for example which note to play and at what volume. These instructions are unified as MIDI messages. Transmission uses asynchronous serial communication at a standard baud rate of 31.25 (1 ± 0.01) kb/s [1]. MIDI synthesis is one of the core technologies involved. There are two main synthesis methods: frequency modulation (FM) synthesis and wavetable synthesis [2]. Compared with sample-based wavetable synthesis, FM requires far less storage; despite certain limitations in timbre and greater implementation difficulty, it remains very popular internationally. At present, the technical level of MIDI research is not high, and FM synthesis is mostly applied as a simple fixed-tone approach, which to a large extent restricts the expressiveness of MIDI music. This paper discusses digital FM synthesis of MIDI music on an FPGA; a synthesis scheme is designed and verified, the expressiveness of the synthesized music is substantially improved, and the desired results are achieved.

2 Music Synthesis

2.1 The FM method

FM music synthesis was first proposed by John Chowning: a periodic modulator signal (the modulating wave) modulates the frequency of another signal (the carrier) [3-4]. The basic FM expression is

y(n) = A(n)·sin(ω_c·n + I(n)·sin(ω_m·n))

where A(n) is the amplitude envelope, I(n) is the modulation-index envelope, ω_c is the carrier angular frequency, which determines the pitch of the note, and ω_m is the angular frequency of the modulating wave.
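As an illustration of the FM expression above, the following Python sketch (not the paper's VHDL implementation) generates one FM tone; the sample rate matches the design's, but the simple linear envelopes for A(n) and I(n) are chosen arbitrarily for the example.

```python
import math

def fm_tone(fc, fm, n_samples, fs=32550, a_peak=1.0, i_peak=5.0):
    """Chowning FM: y(n) = A(n)*sin(wc*n + I(n)*sin(wm*n)).

    A(n) and I(n) are simple linear decays here; in the paper they
    are ADSR envelopes stored as timbre parameters.
    """
    wc = 2 * math.pi * fc / fs  # carrier angular frequency per sample
    wm = 2 * math.pi * fm / fs  # modulator angular frequency per sample
    out = []
    for n in range(n_samples):
        env = a_peak * (1.0 - n / n_samples)   # A(n): linear decay
        idx = i_peak * (1.0 - n / n_samples)   # I(n): decaying modulation index
        out.append(env * math.sin(wc * n + idx * math.sin(wm * n)))
    return out

samples = fm_tone(fc=440.0, fm=220.0, n_samples=1000)
```

As I(n) decays, the sideband energy shrinks and the tone becomes purer, which is what gives FM its characteristic evolving spectrum.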
If the modulating wave is at a sub-audio frequency, one hears a siren-like rise and fall in pitch, as in a police car; when the modulating frequency is raised to about 30 Hz or more, new sideband frequencies become audible as a new timbre.

2.2 Polyphony

Polyphony refers to a synthesizer's ability to sound several independent voices at the same time, i.e. to play chords. Polyphony is generally measured or described by the number of notes or voices: the polyphony count is the number of voices, or the number of notes in a chord. Different MIDI music places different demands on polyphony during playback; the higher the polyphony, the stronger the playback capability.

2.3 The ADSR envelope

The ADSR envelope, shown in Figure 1 [5], is an important component of many synthesizers, samplers and other electronic instruments. Its function is to modulate some aspect of the sound, most often the volume. When an instrument sounds, the volume changes over time, and this change differs from instrument to instrument. ADSR is defined as follows. Attack time: the time from activation of the voice until it reaches full volume. Decay time: the time for the sound to fall from the peak volume to the sustain level. Sustain level: the preset volume held while the note remains pressed. Release time: the time for the sound to die away after the note is released. If different ADSR envelopes are applied to A(n) and I(n) in the FM expression, the spectrum changes over time; combined with an appropriate fc/fm ratio, different timbres are obtained.

3 Design and Implementation

3.1 Overall structure

In this design, both hardware and software resources are used.
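The four ADSR stages defined in section 2.3 can be sketched in software as follows. This is an illustrative Python model (stage lengths in samples, levels normalized to 0..1), not the flip-flop based VHDL generator used in the design.

```python
def adsr(attack, decay, sustain_level, release, hold):
    """Return an ADSR amplitude envelope as a list of samples.

    attack/decay/release are stage lengths in samples; hold is how long
    the note stays at the sustain level before it is released.
    """
    env = []
    for n in range(attack):                 # attack: 0 -> 1
        env.append(n / attack)
    for n in range(decay):                  # decay: 1 -> sustain_level
        env.append(1.0 - (1.0 - sustain_level) * n / decay)
    env.extend([sustain_level] * hold)      # sustain: held while note is on
    for n in range(release):                # release: sustain_level -> 0
        env.append(sustain_level * (1.0 - n / release))
    return env

e = adsr(attack=100, decay=50, sustain_level=0.6, release=200, hold=300)
```

Multiplying such an envelope sample-by-sample into A(n), and a second one into I(n), produces the time-varying spectrum described above.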
Supported by a serial MIDI driver, the sequencer software Sonar 6.0 plays MIDI files on the computer and outputs the MIDI signal through the RS232 port at a baud rate of 38.4 kb/s, as shown in Figure 2. All digital audio synthesis takes place in the FPGA. The digital audio stream is sent through an I2S bus interface to the audio DAC and finally output through an amplifier.

3.2 Logical structure of the synthesis part

To realize 32-voice polyphonic FM synthesis, the logical structure of Figure 3 is used. It requires 64 oscillators, 64 ADSR envelope generators and 64 VCAs. Implementing this structure, with its many repeated units and modules, directly in the FPGA would consume too many resources and cost too much. To reduce the size, time-division multiplexing is used, cutting resource consumption to about 1/64, while viewed macroscopically the voices still run in parallel. The structure implemented in the FPGA therefore contains one DDS (realizing the oscillators) and one ADSR envelope generator.

3.3 Design structure

The main modular structure of the FPGA design is shown in Figure 4; it is described using VHDL [6], LPM and schematic diagrams. The MIDI UART converts the asynchronous serial MIDI signal into MIDI bytes. The message-detection module checks the validity of the MIDI bytes and extracts the predefined MIDI messages; this design uses the note-on and note-off messages (a note-on with velocity 0 is treated as a note-off). The voice allocator is an important module for polyphonic synthesis and one of the key technologies. It dynamically allocates voices: when a new voice is needed but none is free, the note is simply discarded (some notes are lost in this case); if a free voice register is available, the note is registered in it; when a voice is to be turned off, the allocator searches for the note value and rewrites the corresponding flag.
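The allocation policy just described (discard when full, register when a slot is free, clear the flag on note-off) can be modeled in a few lines of Python; the class and field names here are illustrative, not taken from the VHDL source.

```python
class VoiceAllocator:
    """Toy model of the paper's voice allocator: each slot holds a note
    value and an occupancy flag. A new note is dropped when no slot is
    free (note loss); note-off searches for the note and clears its flag."""

    def __init__(self, n_voices=32):
        self.note = [0] * n_voices       # note value per slot
        self.busy = [False] * n_voices   # occupancy flag per slot

    def note_on(self, note):
        for i, b in enumerate(self.busy):
            if not b:
                self.note[i] = note
                self.busy[i] = True
                return i                 # slot that will sound
        return None                      # no free voice: note discarded

    def note_off(self, note):
        for i, b in enumerate(self.busy):
            if b and self.note[i] == note:
                self.busy[i] = False     # voice can enter its release stage
                return i
        return None

va = VoiceAllocator(n_voices=2)
```

With only two voices, a third simultaneous note_on returns None (the note is lost), and a note_off frees its slot for reuse, mirroring the behavior described above.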
The 32 note values and their occupancy flags are stored in registers, and the note value and gate (on/off) value of each slot are output by time-division multiplexing. The note value is converted to a frequency by a lookup table, and the gate is used to trigger the ADSR envelope generator. In the other direction, the envelope generator feeds back the status of each voice, which determines which voices have finished and can be freed (releasing polyphony resources). The note-frequency lookup table, which maps note numbers to frequencies, is implemented with LPM. According to 12-tone equal temperament, the frequency doubles for every octave, so the ratio between adjacent semitones is 2^(1/12). Taking the international standard pitch A (440 Hz) as the reference, whose MIDI note value is 69 (0x45), the frequency corresponding to note value n is

f(n) = 440 × 2^((n − 69)/12) Hz.

The phase accumulator module accumulates the frequency word linearly to obtain the phase within one cycle; together with the sine-table module it forms a DDS. It contains 64 registers storing 64 phases, time-multiplexed to realize FM and polyphony. The phase modulator sums its two inputs and maps the result back into one cycle, realizing FM indirectly. The ADSR generator module, built from a controller and flip-flops, produces 64 envelopes from the input parameters and scales each modulated-carrier sample by the corresponding envelope sample at every instant, forming the amplitude-modulated sinusoidal output. The sine-table memory module stores the sine waveform and maps the phase to an amplitude. As one of the key technologies, the phase accumulator and phase modulator together with the sine table constitute a DDS with phase-modulation capability.
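The note-to-frequency relation and the phase-accumulator DDS with a phase modulator can be sketched in software as follows; this sketch uses a floating-point phase in [0, 1) and math.sin in place of the design's fixed-point accumulator and 16276-point sine table.

```python
import math

def note_freq(n):
    # 12-tone equal temperament referenced to A4 = 440 Hz, MIDI note 69
    return 440.0 * 2.0 ** ((n - 69) / 12.0)

def dds_fm(note, fm_ratio, index, n_samples, fs=32550.0):
    """Two phase accumulators (modulator and carrier) plus a phase
    modulator that adds the scaled modulator output to the carrier
    phase and wraps the sum back into one cycle."""
    fc = note_freq(note)
    fm = fc * fm_ratio
    phase_c = phase_m = 0.0
    out = []
    for _ in range(n_samples):
        mod = math.sin(2 * math.pi * phase_m)          # modulator sample
        p = (phase_c + index * mod / (2 * math.pi)) % 1.0  # sum and wrap
        out.append(math.sin(2 * math.pi * p))          # "sine-table" lookup
        phase_c = (phase_c + fc / fs) % 1.0            # linear accumulation
        phase_m = (phase_m + fm / fs) % 1.0
    return out

y = dds_fm(note=69, fm_ratio=1.0, index=2.0, n_samples=256)
```

Note 69 maps to exactly 440 Hz and note 81, one octave higher, to 880 Hz, as the equal-temperament formula requires.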
In this design, a 4070-point sampling of the sine wave from 0 to π/2 is computed in advance in Matlab; through transformation and mapping it serves as a virtual table for the entire sine cycle, realizing a 16276-point sampling of one sinusoidal cycle while occupying only 1/4 of the storage. The audio sampling rate of the design is about 32550 Hz, so the frequency resolution of the DDS is about 32550/16276 ≈ 2 Hz. The FIFO module provides delay: because the timing generates all 32 modulator sample points first and the 32 carrier sample points afterwards, the modulator samples must be delayed so that corresponding points meet in the phase modulator at the right time; that is, the FIFO aligns the modulator samples with the carrier samples. The mixing-and-limiting module sums the envelope-scaled voice samples and mixes them into the polyphonic output. The 32 samples are 16-bit mono, and their sum may overflow, so limiting is required; a hard-limiter algorithm is used here. The synthesis-parameter controller module configures the parameters. Tables in VHDL code hold the ADSR parameters and the fm/fc ratios of the different timbres. The design has 8 built-in timbres, with two separate storage sub-modules for the A(n) and I(n) parameters; part of the parameter code is as follows:

The digital-effects module adds a simple bass-boost effect to the audio in the FPGA. An IIR low-pass filter was designed in Matlab 7.0; the bass boost is computed in 16-bit fixed point and mixed into the output stage. The I2S bus interface module is responsible for configuring the external DAC. I2S (Inter-IC Sound Bus) [7] is a bus standard developed by Philips for audio data transmission between digital audio devices. The Philips I2S standard specifies not only the hardware interface but also the digital audio data format. The design uses a dedicated CS4334 audio DAC.
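The quarter-cycle table trick and the hard limiter can both be illustrated in Python. The table length here is a small power of two for readability rather than the 4070 points used in the design, and the quadrant mapping is a simple version in which the boundary samples are repeated.

```python
import math

QUARTER = 1024  # stored table covers 0..pi/2 (the design stores 4070 points)
TABLE = [math.sin(math.pi / 2 * i / (QUARTER - 1)) for i in range(QUARTER)]

def sine_lookup(idx):
    """Map an index into the virtual full-cycle table (4*QUARTER points)
    onto the stored quarter cycle, using the symmetry of the sine."""
    idx %= 4 * QUARTER
    quadrant, i = divmod(idx, QUARTER)
    if quadrant == 0:
        return TABLE[i]                  # rising positive lobe
    if quadrant == 1:
        return TABLE[QUARTER - 1 - i]    # falling positive lobe
    if quadrant == 2:
        return -TABLE[i]                 # negative lobe, growing magnitude
    return -TABLE[QUARTER - 1 - i]       # negative lobe, shrinking magnitude

def hard_limit(x, lo=-32768, hi=32767):
    """Hard limiter applied to the mixed sum to keep it in 16-bit range."""
    return max(lo, min(hi, x))
```

The full cycle is never stored; only the first quadrant is, and sign and index reflection recover the other three quadrants, which is where the 1/4 storage saving comes from.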
The synthesized audio has a sampling rate of 32550 Hz, two channels of 16 bits zero-padded to 24 bits. On the I2S bus, MCLK is output at 12.5 MHz, SCLK at 1.5625 MHz, LRCK at 32.55 kHz, and SDATA carries the serial audio data.

3.4 System characteristics

The whole design consumes more than 5000 logic elements of the FPGA (about 90%) and more than 57,000 memory bits (about 60%), at a clock frequency of 50 MHz. Its main characteristics are: two-channel 16-bit output at a 32.55 kHz sampling rate; 32-voice polyphonic digital FM synthesis with real-time MIDI performance; an independent ADSR controller; eight kinds of built-in timbre parameters; and a built-in bass-boost digital effect. Figures 5 and 6 show simulation results for the voice allocator, the mixer and the I2S interface. The message fed to the voice allocator is "note 60 on, channel 0" (channel = 0, note = 60, on-off = 1); the voice starts playing about 80 µs after the message is received, so the synthesis delay is far smaller than the inherent MIDI communication delay of about 1.5 ms. As an example of MIDI synthesis, Figure 7 shows a piece of MIDI music in the piano-roll view of the sequencer software Sonar 6.0, which indicates the on and off times of every note. Figure 8 is the actual waveform of this MIDI music synthesized on the FPGA, recorded through a sound card at a 44.1 kHz sampling rate and 16 bits and displayed in Audition 2.0; the channel shown corresponds to the fragment in Figure 7. Figure 9 is an STFT analysis of the waveform. It can be seen clearly that the synthesized voices are rich in harmonics (mainly below 10 kHz) and that the spectrum changes dynamically as the envelope modulation changes. Moreover, no obvious broadband bright lines appear in the spectrogram, which indicates that the synthesized waveform is phase-continuous and does not sound harsh.
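The clock figures quoted above are mutually consistent, as the following arithmetic check shows. The divider ratios (MCLK = master/4, SCLK = MCLK/8, LRCK = MCLK/384) are inferred from the stated frequencies, not taken from the VHDL source.

```python
MASTER = 50_000_000        # FPGA master clock (Hz)
MCLK = MASTER / 4          # 12.5 MHz master clock to the DAC
SCLK = MCLK / 8            # 1.5625 MHz bit clock
LRCK = MCLK / 384          # left/right (frame) clock, ~32.55 kHz

bits_per_frame = SCLK / LRCK  # SCLK periods per stereo frame
```

The 48 SCLK periods per frame correspond exactly to 2 channels of 24 bits, matching the zero-padding of the 16-bit samples to 24 bits described above.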
Figure 10 shows the attack stage of a voice waveform; the ADSR envelope is clearly visible, showing that the envelope model has been realized successfully.

4 Concluding remarks

Based on modern electro-acoustic techniques, this design realizes all-digital music synthesis on an FPGA using the classic FM music synthesis algorithm and successfully implements 32-voice polyphony. It can imitate a variety of timbres such as the organ, and the synthesized music is expressive. It can be used for accompaniment, ring tones, video and many other occasions, and thus has practical value; it also reflects the advantages of FPGAs in digital IC development and the flexibility of the MIDI system.