This is meant to be the first in a series of articles about image processing on FPGAs. As usual on the FPGA’er website, the articles will include sample code and test benches.
Let’s define some common terminology. A digital image is a frame. For starters, we will analyze grayscale images (as opposed to color images). The smallest element of a digital image is the pixel. A frame is rectangular: it is composed of lines, and each line is composed of pixels.
So for grayscale frames we need to know three parameters: the number of lines (m), the number of pixels per line (n), and the resolution, or depth, of each pixel. A usual resolution is 8 bits per pixel, meaning that the gray level of each pixel can take one out of 256 values, where 0 is black, 255 is white, and every level in between is gray, from dark gray to light gray.
Other resolutions (12, 14, 16 bits per pixel) are also common.
There are lots of common frame formats, with n x m taking values of 640 x 480, 1800 x 1200, etc. Most values for n and m are multiples of ‘8’, for obvious reasons in our digital binary world.
An uncompressed frame of 1800 x 1200 pixels, with 8 bits per pixel, takes roughly 2 MByte (1800 x 1200 x 1 byte = 2.16 MByte, to be precise).
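The frame-size arithmetic above is easy to check. Here is a minimal sketch in Python (the function name `frame_size_bytes` is mine, not part of any library):

```python
def frame_size_bytes(lines, pixels_per_line, bits_per_pixel):
    """Size of one uncompressed frame, in bytes."""
    return lines * pixels_per_line * bits_per_pixel // 8

# The 1800 x 1200, 8-bit example from the text:
size = frame_size_bytes(1200, 1800, 8)
print(size)        # 2160000 bytes, i.e. about 2.16 MByte
```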
Digital video is a continuous stream of frames. When we talk about digital video, a fourth parameter is introduced: the frame rate, usually measured in frames per second (fps). A digital stream of 1800 x 1200 pixels, with 8 bits per pixel, at 60 fps, has a data rate of about 130 MByte/s. In practice it will take a bit more, since some overhead is needed for line and frame synchronization signals, but this overhead is usually very small.
If we use a parallel bus of 8 bits to send this data, we need to provide a clock of about 130 MHz, which is quite reasonable on FPGAs today.
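We can sketch the data-rate calculation the same way. Assuming one pixel per clock on a parallel bus as wide as the pixel depth (the function names below are mine):

```python
def data_rate_bytes_per_s(lines, pixels_per_line, bits_per_pixel, fps):
    """Raw video data rate, ignoring synchronization overhead."""
    return lines * pixels_per_line * bits_per_pixel * fps // 8

def bus_clock_hz(lines, pixels_per_line, fps):
    """Required clock for a parallel bus carrying one pixel per cycle."""
    return lines * pixels_per_line * fps

# 1800 x 1200, 8 bpp, 60 fps:
rate = data_rate_bytes_per_s(1200, 1800, 8, 60)  # 129600000 B/s, ~130 MByte/s
clk = bus_clock_hz(1200, 1800, 60)               # 129600000 Hz, ~130 MHz
```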
But current cameras usually demand much higher data rates. It is not uncommon to work with frames of several tens of megapixels, resolutions of 12 bits and up, and faster frame rates as well.
A 25 MPixel camera, with a bit resolution of 12 bits and a frame rate of 160 fps, produces 4 Gpixel/s, so a parallel bus carrying one pixel per cycle would need a 4 GHz clock. For the connection between camera and FPGA, this translates into the need for a physical connection other than a parallel bus, usually an LVDS interface or high-speed serial transceivers.
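The same kind of back-of-the-envelope check works for the high-end camera example (plain arithmetic, no FPGA code):

```python
# 25 MPixel sensor, 12 bits per pixel, 160 fps
pixels_per_frame = 25_000_000
bits_per_pixel = 12
fps = 160

pixel_rate = pixels_per_frame * fps         # 4e9 pixels/s -> 4 GHz pixel clock
bit_rate = pixel_rate * bits_per_pixel      # 48e9 bits/s  -> 48 Gbit/s raw

print(pixel_rate, bit_rate)
```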
And internally, inside the FPGA, the pixels to be processed need to be transmitted in parallel, since we don’t want to exceed a few hundred MHz on our internal FPGA clocks.
Luckily, parallelism is one of the strong points of hardware-based processing (FPGA or ASIC). Most of the image processing algorithms we will discuss in further articles will process several pixels in parallel.
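The required degree of parallelism follows directly from the numbers above: divide the pixel rate by the internal clock you are willing to run. A minimal sketch, assuming a hypothetical 250 MHz internal clock (the function name is mine):

```python
import math

def pixels_per_clock(pixel_rate_hz, fpga_clock_hz):
    """How many pixels must be handled on each FPGA clock cycle."""
    return math.ceil(pixel_rate_hz / fpga_clock_hz)

# 4 Gpixel/s camera stream, 250 MHz internal clock:
print(pixels_per_clock(4_000_000_000, 250_000_000))  # 16 pixels per cycle
```

So for the 25 MPixel, 160 fps example, the internal datapath would have to move and process 16 pixels side by side on every clock cycle.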