The first task of this project was to create a square wave. This proved to be fairly simple, so we expanded the design to generate and select square waves of varying frequencies. While a bit more complicated than the first task, and involving a few redesigns, this still proved easier than we had originally thought. Our approach simply involved dividing the system clock down into multiple much slower clocks and using these as our waves. We tested these preliminary designs by outputting the divided clocks on the GPIO port and using an oscilloscope to verify that everything looked as it should.
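The divider idea above can be sketched in software. The model below assumes the DE2's 50 MHz system clock; the counter toggles an output bit each time it reaches half the target wave's period, which is the same scheme our HDL divider uses (the Python form is purely illustrative).

```python
# Counter-based clock divider modeled in Python. On the DE2 the same idea
# is written in an HDL; the 50 MHz value is the board's oscillator, the
# rest is an illustrative sketch.

CLOCK_HZ = 50_000_000  # DE2 system clock

def divider_ticks(target_hz):
    """Counter limit: toggle the output every half period of the target wave."""
    return CLOCK_HZ // (2 * target_hz)

def square_wave(target_hz, n_clock_cycles):
    """Simulate the divider for n system-clock cycles, yielding the output bit."""
    limit = divider_ticks(target_hz)
    count, out = 0, 0
    for _ in range(n_clock_cycles):
        count += 1
        if count >= limit:
            count = 0
            out ^= 1  # toggle -> square wave at target_hz
        yield out
```

For a 1 kHz wave, for example, the output toggles every 25 000 system-clock cycles.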
The next major undertaking was getting these square waves to output through the audio codec and its digital-to-analog converter (DAC) so that it would be possible to plug a device into the DE2 audio output and hear the generated waves. This was a rather complex process and involved using I2C to initialize the audio codec so that it could interface correctly with the output generated by the FPGA. To help with this task, we first studied the example code from John Loomis (http://www.johnloomis.org/digitallab/audio/audio2/audio2.html), then modified it to fit our specific needs.
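To give a sense of what the I2C initialization involves: the DE2's codec (a WM8731-style part) exposes control registers addressed by 7 bits with 9-bit data, packed into the two bytes that follow the device address on the bus. The sketch below shows only that packing; the register numbers in the test are placeholders, not our actual initialization sequence.

```python
# Illustration of how a WM8731-style codec register write is packed for I2C.
# The codec's control registers pair a 7-bit address with a 9-bit value,
# sent as two bytes after the I2C device address. Any specific register
# numbers/values used with this are placeholders, not our init sequence.

def pack_codec_write(reg_addr, data):
    """Pack a 7-bit register address and 9-bit value into two I2C data bytes."""
    assert 0 <= reg_addr < 128 and 0 <= data < 512
    byte1 = (reg_addr << 1) | ((data >> 8) & 1)  # register address + data MSB
    byte2 = data & 0xFF                          # data low byte
    return byte1, byte2
```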
Once we were able to get our square wave to the audio output, we wanted to see if we could create a system that would play a simple song. Here it was critical to generate square waves at precise frequencies so that we could produce the same pitches as the notes on a piano keyboard. To get these precise pitches, we found a chart (http://en.wikipedia.org/wiki/Piano_key_frequencies) listing all of the piano key frequencies. Once we selected the notes we wanted, we calculated how we would need to divide the clock to achieve them.
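The chart tabulates the standard formula f(n) = 440 · 2^((n−49)/12) Hz for piano key n (key 49 is A4 = 440 Hz). From each frequency the divider value follows directly, again assuming the DE2's 50 MHz clock; this sketch shows the arithmetic we performed for each selected note.

```python
# Piano key frequency and the corresponding clock-divider value.
# 50 MHz is the DE2 system clock; the formula matches the cited chart.

CLOCK_HZ = 50_000_000

def piano_freq(key):
    """Frequency in Hz of piano key number `key` (1..88); key 49 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((key - 49) / 12)

def half_period_count(freq_hz):
    """System-clock counts per half period (toggle interval) of the square wave."""
    return round(CLOCK_HZ / (2 * freq_hz))
```

Middle C (key 40), for instance, is about 261.63 Hz, giving a toggle interval of roughly 95 556 clock counts.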
Initially in this process, we used a very limited number of notes so that our design could focus more on the concept and less on the repetitive details. We found that the introduction to Lady Gaga's Bad Romance uses very few notes and worked perfectly for this purpose. We encoded this piece in our system by creating a quasi state machine which changes between frequency states automatically, driven by a counter that 'ticked' once per eighth note of the song. This allowed us to program each eighth note of the piece individually to be whatever note we needed. One problem we had to overcome was handling the bass notes and the melody sounding at the same time. Initially we simply 'threw away' the bass data and operated only on the melody; however, this proved to be a poor choice, as our generated output was hardly recognizable as Bad Romance. Upon further thought, we decided to use one channel of the audio output to play the melody and the other to play the bass accompaniment. Once this smaller design was operational, we expanded the set of frequencies to cover a full three octaves. These newly added notes gave us many more possibilities for songs we could play. Based upon our design limitations, as well as our goals, we decided to implement both Linus and Lucy from Peanuts and a simple 'run' up and down the three octaves.
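The quasi state machine can be sketched as follows: a counter ticks once per eighth note, and its value indexes into per-channel note tables, with the melody on one audio channel and the bass on the other. The tempo and the note lists below are made up for illustration and are not the actual songs' data.

```python
# Sketch of the eighth-note sequencer: a tick counter indexes per-channel
# note tables. Tempo and note lists are illustrative placeholders.

CLOCK_HZ = 50_000_000
EIGHTH_NOTES_PER_MIN = 280                           # assumed tempo
TICK_LIMIT = CLOCK_HZ * 60 // EIGHTH_NOTES_PER_MIN   # clock cycles per eighth note

REST = 0
melody = [440, 440, REST, 494, 523, REST, 494, 440]  # Hz, one entry per eighth note
bass   = [110, REST, 110, REST, 110, REST, 110, REST]

def notes_at(clock_cycle):
    """Return the (melody, bass) frequencies active at a given clock cycle."""
    step = (clock_cycle // TICK_LIMIT) % len(melody)  # loop back when finished
    return melody[step], bass[step]
```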
After getting both songs working, we wanted to see if it was possible to change the volume of our audio output. Based on what we knew about waves, we determined that changing the amplitude of the generated wave would change the perceived volume of the audio output. We used this idea to add a 'high/low' volume selection switch. This became our final design.
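The amplitude scaling amounts to mapping the 1-bit square wave onto a larger or smaller signed sample swing before it reaches the DAC. The specific amplitude values below are assumptions for illustration, not our design's actual constants.

```python
# Volume as amplitude: the 1-bit square wave becomes a signed DAC sample
# with a large swing for 'loud' and a small one for 'soft'. The amplitude
# constants are illustrative placeholders.

LOUD_AMPLITUDE = 20000   # assumed 16-bit sample swing, 'loud'
SOFT_AMPLITUDE = 2000    # assumed 16-bit sample swing, 'soft'

def sample(wave_bit, loud):
    """Map the square wave bit to a signed DAC sample at the chosen volume."""
    amp = LOUD_AMPLITUDE if loud else SOFT_AMPLITUDE
    return amp if wave_bit else -amp
```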
Our final design consists of a top-level module which calls an I2C module to configure the audio codec, a module which generates all of the tones we needed, and various modules, depending on user input, to play the different song selections. The user controls the entire design with the DE2 switches. The two left-most switches control the song selection and the mute function; when muted, the song continues to progress forward, but the user hears nothing. The assignments for these switches are as follows: 00-Run, 01-Linus and Lucy, 10-Bad Romance, 11-Mute. The right-most switch controls the reset and stop functions: when it is high, the song resets to the beginning and holds there until the switch is set low; while it is low, the song plays as normal and loops back to the beginning when finished. The second switch from the right controls the playback volume: when it is high, the volume is set to 'loud,' and when it is low, the volume is set to 'soft.'
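The switch behavior described above can be summarized as a decoding function from the control switches to the design's state. The function and field names are ours; the song codes match the assignments in the text (00-Run, 01-Linus and Lucy, 10-Bad Romance, 11-Mute).

```python
# Decoding of the DE2 control switches as described in the text. The
# function and dictionary-key names are illustrative; only the song-code
# assignments come from the design.

SONGS = {0b00: "Run", 0b01: "Linus and Lucy", 0b10: "Bad Romance", 0b11: "Mute"}

def control_state(song_bits, volume_sw, reset_sw):
    """Return the control state implied by the current switch settings."""
    song = SONGS[song_bits & 0b11]
    return {
        "song": song,
        "muted": song == "Mute",   # song still advances while muted
        "loud": bool(volume_sw),   # high = 'loud', low = 'soft'
        "reset": bool(reset_sw),   # high = hold at the beginning of the song
    }
```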