Above you can see images of how the computer keyboard is mapped to the piano keys.
Where is the sound?
After a day without making any sounds, only working on the keyboard and the key-triggering logic, I felt I had to start producing sounds. Day 2 was on, and I wanted the synthesizer to be polyphonic. I wrote, tested, edited, tested, looked for examples, copied examples, edited, tested, and so on. It did not work, and stress came along…
Two hours maximum of coding after the seminars was not enough for my brain to come up with a solution, so I put the idea of making it polyphonic on the top shelf. It will be taken down later.
I decided to go with a monophonic synth instead, and by day 3 it was working. Every time I pressed a key, I created a new oscillator, passing the frequency as an argument, and on every release of the key I stopped the oscillator. I wrote logic to prevent retriggering when holding down a key and used envelopes to prevent the “click” noise when stopping sounds.
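A minimal sketch of this approach, with names and envelope times that are my own assumptions rather than the original code: a pure helper converts a semitone offset to a frequency, and `noteOn`/`noteOff` create and release one oscillator with short gain ramps so the sound does not click.

```javascript
// Sketch of a monophonic trigger: one oscillator per key press,
// released with a short gain ramp to avoid the "click" noise.
// All names and envelope times here are illustrative assumptions.

// Pure helper: semitone offset from A4 (440 Hz) to frequency.
function semitoneToFreq(semitonesFromA4) {
  return 440 * Math.pow(2, semitonesFromA4 / 12);
}

let activeOsc = null;
let activeGain = null;

function noteOn(ctx, freq) {
  if (activeOsc) return;                 // ignore key repeat while held
  activeGain = ctx.createGain();
  activeGain.gain.setValueAtTime(0, ctx.currentTime);
  activeGain.gain.linearRampToValueAtTime(0.5, ctx.currentTime + 0.01); // short attack
  activeOsc = ctx.createOscillator();
  activeOsc.frequency.value = freq;
  activeOsc.connect(activeGain).connect(ctx.destination);
  activeOsc.start();
}

function noteOff(ctx) {
  if (!activeOsc) return;
  // a short release envelope prevents a click when the oscillator stops
  activeGain.gain.setTargetAtTime(0, ctx.currentTime, 0.02);
  activeOsc.stop(ctx.currentTime + 0.2);
  activeOsc = null;
  activeGain = null;
}
```

Tracking the oscillator in a variable and returning early in `noteOn` is one simple way to get the "no retriggering while the key is held" behaviour described above.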
Day 4 had come, and I had not gotten as far as I had hoped. Maybe I prioritized wrong: beginning with the keyboard layout, helping others, and getting in over my head with complexity. Still, I ended up with an OK result.
Not being so happy with the result, I decided to use some of my weekend on cleaning up my spaghetti code, optimizing it, and implementing some new functionality. First, I moved away from creating a new oscillator on every keypress and instead connected and disconnected a single oscillator from the output (destination). When working on the filter and a newly added delay effect, I ran into logic errors in the signal chain, so I decided to instead turn the oscillator on and off by setting its gain to 0.5 and 0 respectively.
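The gain-gating idea can be sketched like this (function and variable names are my assumptions): the oscillator is started once and never stopped, and a "gate" gain node is moved between 0.5 (on) and 0 (off).

```javascript
// Sketch of the "always-running oscillator" approach: start the
// oscillator once, then gate its gain between 0.5 (on) and 0 (off).
// Names are illustrative assumptions, not the original code.
function createVoice(ctx) {
  const osc = ctx.createOscillator();
  const gate = ctx.createGain();
  gate.gain.value = 0;                   // silent until a key is pressed
  osc.connect(gate).connect(ctx.destination);
  osc.start();
  return { osc, gate };
}

function setGate(voice, ctx, on) {
  // a small time constant smooths the jump and avoids clicks
  voice.gate.gain.setTargetAtTime(on ? 0.5 : 0, ctx.currentTime, 0.01);
}
```

Because the oscillator keeps running, the rest of the signal chain (filter, delay, and so on) stays connected the whole time, which sidesteps the connect/disconnect logic errors mentioned above.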
I ended up with three effects: a lowpass filter with an adjustable cut-off frequency, a delay with an adjustable delay time, and a tremolo with an adjustable speed. The user can activate and deactivate each effect with a corresponding button and adjust it with a slider.
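A sketch of how such a chain could be wired with standard Web Audio nodes, shown always-on for brevity (the post's bypass buttons are left out, and all names and default values are my assumptions): the lowpass is a `BiquadFilterNode`, the delay a `DelayNode`, and the tremolo a low-frequency oscillator modulating a gain.

```javascript
// Sketch of the three effects wired in series:
// input -> lowpass filter -> delay -> tremolo -> destination.
// Node names and default parameter values are assumptions.
function buildEffects(ctx, input) {
  // Lowpass filter with adjustable cut-off (driven by a slider)
  const filter = ctx.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = 2000;         // cut-off in Hz

  // Delay with adjustable delay time
  const delay = ctx.createDelay(1.0);    // max delay of 1 second
  delay.delayTime.value = 0.3;           // seconds

  // Tremolo: an LFO modulating the gain of an amplitude node
  const tremGain = ctx.createGain();
  const lfo = ctx.createOscillator();
  lfo.frequency.value = 5;               // tremolo speed in Hz
  lfo.connect(tremGain.gain);
  lfo.start();

  input.connect(filter).connect(delay).connect(tremGain).connect(ctx.destination);
  return { filter, delay, tremGain, lfo };
}
```

The sliders mentioned above would then simply write to `filter.frequency.value`, `delay.delayTime.value`, and `lfo.frequency.value`.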
More waves, octaves, and optimization
As you can see in the picture above, I also added a dropdown where you can choose between different instruments! I did this by retrieving more wavetables from Google Chrome Labs, making several JSON files, and running an XMLHttpRequest every time the user chooses an option from the list, then updating the wave of the oscillator. In addition, you can choose which octave you want to play in. This is quite useful, since the keyboard has a limited number of keys in a row.
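These two features could look roughly like this (the URL shape, function names, and JSON structure are assumptions on my part; the Google Chrome Labs wavetables store Fourier coefficients as `real`/`imag` arrays, which map directly onto `createPeriodicWave`):

```javascript
// Sketch of swapping instruments: fetch a wavetable JSON file and
// apply it to the running oscillator. Names and the assumed
// { real, imag } JSON layout are illustrative, not the original code.
function loadWave(ctx, osc, url) {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.responseType = "json";
  xhr.onload = () => {
    const { real, imag } = xhr.response;
    const wave = ctx.createPeriodicWave(new Float32Array(real),
                                        new Float32Array(imag));
    osc.setPeriodicWave(wave);         // update the oscillator's wave
  };
  xhr.send();
}

// Octave selection: shift a base frequency by whole octaves,
// since each octave doubles (or halves) the frequency.
function shiftOctave(freq, octaves) {
  return freq * Math.pow(2, octaves);
}
```

The octave dropdown then only has to store an integer offset that is applied to every key's base frequency.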
I chose to start the audio context, create every audio node, connect them, and set a default configuration with one big button. The button's code runs only on the first press, and the synthesizer will not work before you press it.
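This pattern fits how browsers work: an `AudioContext` may only start after a user gesture, so tying all the setup to the first click of one button is a natural fit. A small sketch of the run-once guard (names are my own):

```javascript
// Sketch of the one-time setup button: the AudioContext and all
// audio nodes are created only on the first click, since browsers
// require a user gesture before audio can start. Names are illustrative.
let initialized = false;
let ctx = null;

function initOnce(createContext, setup) {
  if (initialized) return false;   // every later press is a no-op
  ctx = createContext();
  setup(ctx);                      // create and connect every node,
  initialized = true;              // set the default configuration
  return true;
}

// In the page this would be wired up as something like:
// startButton.addEventListener("click", () =>
//   initOnce(() => new AudioContext(), buildSynth));
```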
Lastly, I have worked a bit with CSS to make it look alright.
Valuable learning and future work
Even though I have previous experience, as mentioned, I have learnt a lot during this week. I was more than a bit rusty on the syntax, the routing, and the Web Audio API functionality. This workshop has definitely helped me grasp these concepts more clearly.
I will continue to explore the libraries we have been shown, which I believe will make life a lot easier :smiley: I want to keep working on how I have routed the effects, add the ability for the user to change the dry/wet ratio, and add reverb. I may use some libraries in the future for faster building, but I still see the absolute need for a foundation in doing DSP with the Web Audio API alone. One last important thing: I need to make it responsive. For now, the GUI (Graphical User Interface) is not scalable, so it would work poorly on a small device.
If you want to see how I have built my application, look at my repository! I have tried to comment the most important lines of code and divided it into sections for easy reading.
Want to try it? Try it here