An interesting aspect of image rendering is the elimination, or at least the attenuation, of image artifacts; that is, visible defects in the image. These defects range from unrealistic lighting effects to awkward-looking reflections, but in a digital pixel-based image none is more obvious than aliasing. Aliasing refers to the problem that a digital image, being built out of discrete elements (here, pixels), can make elements that should be distinct appear identical. In an image, this aliasing artifact is referred to as "jaggies". The jaggies appear mostly in areas where there is a sudden change in light intensity or colour, turning what should be a smooth line into a "jagged" edge.
Antialiasing
Antialiasing, as its name suggests, is the set of methods used to remove or reduce such pixel-related artifacts. Many algorithms can be used to alleviate this problem: supersampling and stochastic (or random) ray sampling are described in Glassner's[1].
Supersampling
The method we have chosen to experiment with is supersampling: it is the simplest, and the most adaptable to our present implementation. The concept is very simple: instead of firing one ray per pixel, which can create a hard border between the edge of a sphere and its background, we fire multiple rays within the viewing window's pixel area. We then average the intensities for that pixel and send the average intensity to the image buffer. The result is a much smoother image.
The ViewPoint class possesses a member method, antialiasing_vectors, that places 4 evenly distributed sample coordinates within the pixel area of the viewing window; a ray is then fired through each of them. This is easily done by adjusting the area's offsets.
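The idea can be sketched as follows. This is a minimal illustration, not the actual ViewPoint implementation: the `Colour` struct, the function names, and the exact offset values (a 2x2 grid at 0.25 and 0.75 within a unit pixel) are assumptions made for the example.

```cpp
#include <array>
#include <utility>

// Hypothetical stand-in for the renderer's pixel-colour type.
struct Colour { double r, g, b; };

// Four evenly distributed sample points inside a unit pixel area:
// a 2x2 grid offset from the pixel's corner, so no sample lands on
// an edge shared with a neighbouring pixel.
std::array<std::pair<double, double>, 4> sample_offsets() {
    return {{{0.25, 0.25}, {0.75, 0.25}, {0.25, 0.75}, {0.75, 0.75}}};
}

// Average the intensities returned by the four rays into the single
// value sent to the image buffer.
Colour average_samples(const std::array<Colour, 4>& samples) {
    Colour sum{0.0, 0.0, 0.0};
    for (const Colour& c : samples) {
        sum.r += c.r;
        sum.g += c.g;
        sum.b += c.b;
    }
    return {sum.r / 4.0, sum.g / 4.0, sum.b / 4.0};
}
```

In a render loop, each offset would be added to the pixel's corner coordinates to build one ray direction, and the four resulting colours would be passed to the averaging step.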
Figure: Supersampling vs single ray sampling. Left: rendered image with antialiasing OFF. Right: same image with antialiasing ON.