GPU with line primitives on TinyFPGA BX


Hi guys.

I implemented a simple GPU on a TinyFPGA BX, which accepts, over an SPI bus, a list of lines to draw. Each line is transmitted as a pair of screen coordinates: a start point and an end point. It then renders those lines at 640x480@60Hz and displays them via a VGA connector.

Since the TinyFPGA BX lacks enough RAM for a full framebuffer, each scanline is rasterized just prior to display and then discarded to make space for the following scanlines. At no time is a full rendered image in memory.
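To make the scanline trick concrete, here is a minimal Python model of the idea (illustrative only — the real design does this in Verilog just before each scanline is displayed, and the actual hardware algorithm may differ):

```python
def rasterize_scanline(lines, y, width=640):
    """Return one row of pixels for scanline y.

    Simplified sketch: for each line segment that crosses this
    scanline, light the pixel where it intersects. A real line
    rasterizer would also fill runs for shallow lines, which this
    sketch would render as dotted.
    """
    row = [0] * width
    for (x0, y0), (x1, y1) in lines:
        if y0 == y1:  # horizontal line: fill the whole span
            if y == y0:
                for x in range(min(x0, x1), max(x0, x1) + 1):
                    row[x] = 1
        elif min(y0, y1) <= y <= max(y0, y1):
            # x where the segment crosses this scanline
            x = x0 + (x1 - x0) * (y - y0) // (y1 - y0)
            row[x] = 1
    return row

# Each row is computed on demand and can be discarded afterwards,
# so no full 640x480 framebuffer ever exists.
```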

Here it is rendering Suzanne the monkey from Blender:

(That video was recorded with my cell phone. It looks a little better in person.)

I’m driving the GPU with a Raspberry Pi. Here is what my setup looks like:

This is my first FPGA and first Verilog project, so I probably did some silly things there, but I was proud of the result, so I thought I’d share it.

If anyone wants to take a look, I put both the Verilog and Python code on github.


Woooo!! Nooooooooo!!! It’s crazy!!! :slight_smile:
What about adding a “rotozoomer” for textures and mixing it with this line-rendering project? :star_struck:
Wolfenstein 3D? Is it possible? Maybe Doom? :smile:


This is an excellent project! :heart_eyes:

How big is the synthesized design? I’m wondering if a soft CPU would still fit in there to generate draw lists internally.

Very cool!


I bow down to your greatness :bowing_man: :bowing_man: :bowing_man: That is sensational!


The design uses a small fraction of the available logic blocks, but the problem is that all of the RAM blocks are used up double-buffering coordinate lists for up to 1024 lines, along with some temporary storage that is also proportional to the line count. If you wanted to add a CPU, you could drop the maximum number of lines fairly easily, but 1024 lines is already quite limiting; the model in the video above already has more than 1000.
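A rough back-of-envelope for the RAM budget (the coordinate widths here are my assumptions, not the actual register widths in the design):

```python
# The iCE40LP8K on the TinyFPGA BX has 32 block RAMs of 4 Kbit each.
BRAM_BITS = 32 * 4096  # 128 Kbit total

X_BITS, Y_BITS = 10, 10               # enough to address 640x480 (assumed widths)
BITS_PER_LINE = 2 * (X_BITS + Y_BITS)  # (x0, y0, x1, y1)
MAX_LINES = 1024
BUFFERS = 2                            # double-buffered draw lists

line_storage = MAX_LINES * BITS_PER_LINE * BUFFERS
print(f"{line_storage} bits for line lists "
      f"({100 * line_storage / BRAM_BITS:.1f}% of BRAM)")
```

With these assumed widths, the double-buffered lists alone consume well over half of the block RAM before any temporary storage, which matches the observation that the RAM blocks, not the logic, are the bottleneck.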

A more ambitious change would be to implement the matrix multiplication in Verilog, and perform it during VSYNC every frame. If you did this, you could store the untransformed model entirely in flash, and you’d only need a single buffer for the transformed points, cutting the RAM requirements in half. I’m not sure how much more time I will spend on this, but that would be an interesting project for someone.
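For reference, the transform that would move into Verilog looks roughly like this on the Pi side. This is a simplified sketch with made-up parameter values (rotation about one axis plus a perspective divide), not the project's actual Python code:

```python
import math

def transform(points, angle, width=640, height=480, scale=200, z_off=4.0):
    """Rotate a model about the Y axis and project to screen coordinates.

    Hypothetical per-frame transform: this is the step the post above
    suggests performing in Verilog during VSYNC, so only the
    untransformed model (stored in flash) plus one output buffer
    would be needed.
    """
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y, z in points:
        xr, zr = x * c + z * s, -x * s + z * c  # rotate about Y
        zr += z_off                             # push model in front of camera
        sx = int(width / 2 + scale * xr / zr)   # perspective divide
        sy = int(height / 2 - scale * y / zr)
        out.append((sx, sy))
    return out
```

In hardware this would be fixed-point rather than floating-point, and the divide is the expensive part, but a frame's worth of VSYNC time leaves plenty of cycles for a sequential divider.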


I built a little board to run this project using a Raspberry Pi Zero:


Nice! Good to see it working for someone else.

Was it easy enough to get everything up and running? Is there anything that should be added to the setup instructions?


Your instructions were excellent. The only thing I haven’t got working is apio on the Raspberry Pi Zero. I programmed the TinyFPGA on Windows and then connected it to the Raspberry Pi. I just tried to install apio on the RPi but am getting errors:

Installing icestorm package:
Warning: full platform does not match: linux_armv6l
Trying OS name: linux

Error: Got an unrecognized status code '404' when downloaded

There seems to be an issue: -, which looks like it is specific to the Raspberry Pi Zero.


I was a bit surprised that you only support white and not colors. I was going to look into whether colors could be supported.


:thinking: if you could do some back-face culling in the Pi, it might even mean you could claw back some of the RAM used with the extra colour data…
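Something like this could do the culling on the Pi before the draw list is sent. This is a hypothetical helper, not part of the project, and it assumes triangles arrive with consistent winding after projection (which sign means "front" depends on your coordinate conventions):

```python
def is_front_facing(tri):
    """Screen-space back-face test via the 2D cross product.

    tri is three projected (x, y) vertices. The sign of the z
    component of (v1 - v0) x (v2 - v0) gives the winding; here we
    treat negative as front-facing (an assumed convention).
    """
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0) < 0

def cull_edges(triangles):
    """Keep only the edges of front-facing triangles, deduplicated,
    so back faces never reach the FPGA's line list."""
    edges = set()
    for tri in triangles:
        if is_front_facing(tri):
            a, b, c = tri
            for e in ((a, b), (b, c), (c, a)):
                edges.add(tuple(sorted(e)))
    return edges
```

Besides looking cleaner, this roughly halves the number of lines for a closed mesh, which directly attacks the 1024-line limit mentioned earlier in the thread.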

I also started pondering how difficult it might be to rasterize triangles instead of lines… (although the limited palette might not be ideal for complex scenes like the monkey head)…

On an unrelated note, my VGA PMOD with the R2R divider network has arrived… :thinking:


I think I need to make a TinyFPGA BX PMOD adapter. :sweat_smile: Maybe 4 8-bit PMODs and one or two 4-bit PMODs.


I ported this project to the myStorm BlackIce board.


A PMOD adapter would be excellent!


I agree. A PMOD adapter would open up easy access to a lot of interesting peripherals, sensors, etc.