the Worthless Writeup Library @ szy.lol

FPGA Synth

work started on 2024-05, launched on 2024-06, ended on 2024-07

A very simple 4-channel synthesizer made as an HDL project for uni.


FPGAs are very interesting; the ability to create physical digital circuits on the go is very powerful. However, they are just expensive enough that I’m not entirely convinced they’re worth it for what I’m doing, and just expensive enough that I’m afraid of playing with them on my own (I break stuff a lot). As such, I was excited to get an opportunity to play with one in the HDL class at uni. I am very grateful to the Verilog professor for not only letting us go straight for the hardware (instead of messing about with simulations, like the VHDL group did), but also for providing a wildcard project topic – 21. Student's topic suggestion.

I decided to build a simple synthesizer. The board we were provided – the Nexys 4 DDR – has a simple PWM audio output and a UART input. When working on exploratory projects, I find having tangible results (like sound I can hear) very important – they provide much more satisfaction than seeing something blink the right way, or even worse, just seeing a program print the right thing. For this reason, the VGA port tempted me (and many others) as well, but I had no simple ideas for using it, and the listed project topics were quickly snapped up. Audio is both very physical (you can hear it) and still conceptually simple to generate and process, being a single waveform. On a slow enough circuit, you can plug a speaker in nearly anywhere, and you’ll get at least a tone!

My concept involved four channels: three square waves and a single noise channel. These waveforms are trivial to generate – square waves with a counter and a comparator, noise with an LFSR (a pseudorandom number generator). The resulting waveforms (all single-bit lines) are then modulated with a very fast PWM signal for volume, and mixed by fast multiplexing (quickly switching which channel is being sent out). The pitches and volumes would be controlled via UART, from a tracker-like player on a computer. Based on this design, I drafted a diagram to submit to the professor.

Block diagram of the above design

The player on the computer (playerw.html or piano.html, why is WebSerial a thing??) sends commands via UART. The signal is decoded (uart.v > uart), and the commands are parsed into buffered channel states (main.v > controller). A special command causes the buffered states to be “flushed” all at once to the real state registers, ensuring the channels stay synchronized.
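The flush mechanism is essentially double buffering. Here’s a minimal Python model of the idea – the class and method names are my own for illustration, not the actual Verilog interface:

```python
# Illustrative model of the buffered-state "flush" scheme: writes land in
# shadow registers, and a single flush copies everything to the live
# registers at once, so all channels change in the same instant.

class ChannelState:
    def __init__(self):
        self.volume = 0   # 6 bits
        self.octave = 0   # 3 bits
        self.pitch = 0    # 4 bits (unused by the noise channel)

class Controller:
    def __init__(self, channels=4):
        self.buffered = [ChannelState() for _ in range(channels)]
        self.live = [ChannelState() for _ in range(channels)]

    def set_pitch(self, ch, octave, pitch):
        self.buffered[ch].octave = octave
        self.buffered[ch].pitch = pitch

    def set_volume(self, ch, volume):
        self.buffered[ch].volume = volume

    def flush(self):
        # All channels update together, keeping them synchronized.
        for live, buf in zip(self.live, self.buffered):
            live.volume, live.octave, live.pitch = \
                buf.volume, buf.octave, buf.pitch
```

Until `flush()` arrives, the live registers – and therefore the generators – keep playing the old state, no matter how many partial updates have trickled in over UART.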

The “channel state” consists of a volume (6 bits), an octave (3 bits), and a pitch (4 bits, doesn’t apply to noise). It is then wired (main.v > main) to the generators. The square wave generator first derives a clock-enable signal from the chosen octave (audio.v > oct_prescale), and looks up the number of clock cycles required for the chosen pitch (audio.v > notelut). Because octaves are doublings of frequency, this allows playing any 12TET pitch with just one LUT entry per pitch and simple division-by-two scaling. These signals then drive a counter (audio.v > gen), which toggles its output register, creating the desired square wave.
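The octave/LUT trick can be sketched numerically. This is my own back-of-the-envelope model, not the project’s actual constants – I assume a hypothetical 1 MHz base clock-enable rate and derive the 12 half-period counts from 12TET. The point is that the octave only shifts the enable rate, so octave doubling is exact even though the LUT entries are rounded:

```python
# Behavioral sketch of the one-LUT-per-pitch scheme. All constants below
# are assumptions for illustration; the real design's values may differ.

PRESCALE_HZ = 1_000_000  # assumed clock-enable rate at the highest octave

# Half-period counts for the 12 pitches of the top octave (C..B),
# derived from 12TET starting at C8 = 4186 Hz.
NOTE_LUT = [round(PRESCALE_HZ / (2 * 4186.0 * 2 ** (n / 12)))
            for n in range(12)]

def square_freq(pitch, octave):
    """Square-wave frequency for a pitch/octave pair (octave 7 = highest)."""
    enable_hz = PRESCALE_HZ >> (7 - octave)  # halve the enable per octave down
    return enable_hz / (2 * NOTE_LUT[pitch])
```

Rounding the LUT entries introduces a small (well under 1%) pitch error, but because every octave of a given pitch shares the same entry, octaves stay perfectly in tune with each other.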

The noise channel also generates the octave-based clock enable, and then divides the clock even further (audio.v > noise_prescale, admittedly tuned to taste). This clock is then used to run an LFSR, the topmost bit of which serves as the final noise source (audio.v > noise).
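For reference, a tiny Python model of the LFSR idea – the width, taps, and seed here are a textbook 16-bit maximal-length example, not necessarily what the project uses:

```python
# 16-bit Galois LFSR with a maximal-length polynomial (taps 16, 14, 13, 11).
# The top bit of the state is used as the 1-bit noise output.

def lfsr_step(state):
    """Advance the LFSR by one clock."""
    bit = state & 1
    state >>= 1
    if bit:
        state ^= 0xB400  # feedback polynomial mask
    return state

def noise_bits(n, seed=0xACE1):
    """Generate n noise bits from the LFSR's topmost bit."""
    out, state = [], seed
    for _ in range(n):
        state = lfsr_step(state)
        out.append((state >> 15) & 1)
    return out
```

A maximal-length 16-bit LFSR repeats only every 65535 steps, which – clocked down by the prescalers – is far longer than anything you’d notice by ear.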

The resulting signals are then modulated and mixed (audio.v > mixer). First, a counter is used to create the volume PWM signals as well as the mux selection. The volume signals are ANDed with the waveforms, and additionally routed out as blinkenlights. The modulated channels are then multiplexed, and optionally modulated with an extra 50% PWM to make the signal quieter.
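A behavioral sketch of that mixer stage in Python – how the counter bits are split between the PWM ramp and the mux select is my guess; the real module may slice them differently:

```python
# One counter drives everything: its low bits form the volume PWM ramp,
# its higher bits pick which channel the mux currently routes out.

def mix(waveforms, volumes, counter, vol_bits=6, n_channels=4):
    """waveforms: four 1-bit channel outputs; volumes: 0..63 levels."""
    pwm_phase = counter & ((1 << vol_bits) - 1)  # low bits: PWM ramp
    sel = (counter >> vol_bits) % n_channels     # higher bits: mux select
    # Each waveform is ANDed with its volume PWM, then one channel is muxed.
    modulated = [w & (1 if pwm_phase < v else 0)
                 for w, v in zip(waveforms, volumes)]
    return modulated[sel]
```

Averaged over many counter cycles, each channel contributes in proportion to its volume, so the single-bit output behaves like a four-channel mix once the output filter smooths it.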

The final design uses no multipliers, and the only adders it uses run counters. All mixing and volume control of the audio signals is done via PWM and muxing, relying on the high frequency of the control signals and on the low-pass filtering at the audio output to “smooth out” the resulting artifacts.
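The underlying claim – that a filtered 1-bit PWM stream approaches its duty cycle – is easy to check numerically. Here a crude long average stands in for the board’s analog low-pass filter:

```python
# A 1-bit PWM stream averaged over a window much longer than its period
# converges to the duty cycle, i.e. the intended volume level.

def pwm_stream(duty, levels=64, n=6400):
    """1-bit PWM: high for `duty` out of every `levels` counts."""
    return [1 if (t % levels) < duty else 0 for t in range(n)]

def smoothed(bits):
    """Crude stand-in for the output low-pass filter: a long average."""
    return sum(bits) / len(bits)
```

This is why no multipliers are needed: the “multiplication” of waveform by volume happens for free in the analog domain, as long as the PWM runs far above audible frequencies.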

The JavaScript player accepts tracker-like text pattern files as input, and converts them into the UART commands that play them as sound. You can view all source code here. I kind of regret not taking a screenshot of the final schematic view in Vivado, I think it’d look cool. Here is the final result, playing my crappy arrangement of チルノ’s (Cirno’s) theme from 東方紅魔郷 (Embodiment of Scarlet Devil).

Edit (2024-12-07): Exported schematics

I have exported the final schematics from Vivado. They aren’t that productive to look at, but they do look quite cool. The first drawing is the RTL schematic; it shows a higher-level view of the circuitry. The logic elements are displayed symbolically – for example, logic functions are shown as gates and addition is shown as a ⊕. Click on the following thumbnail to view. Warning: decently large SVG file, requires zooming and might be slow to render.

The next drawing shows the final implementation. At this point all optimizations are done, and the circuit is reduced to its most basic form, consisting entirely of the basic FPGA elements – look-up tables (LUTs) implementing logic circuitry, D flip-flops implementing registers, and carry adders implementing addition. Click on the following thumbnail to open. Warning: very large SVG file, will require zooming and will be very slow to render.