Jan 17, 2013
 

We saw in part 3 how to move the camera around a wireframe world.  Now it’s time to move on to proper solid 3D rendering.  For this we need to introduce a few new concepts: basic shading, vertex attributes, textures and the depth buffer.

Solid rendering

Moving from wireframe to solid rendering can be as simple as filling in between the lines.  There are a load of algorithms for efficient scanline rasterisation of triangles, but these days you don’t have to worry about it because your graphics hardware deals with it – simply give it three points and tell it to draw a triangle.  This is a screenshot of a solid-rendered triangle, which is pretty much the most basic thing you can draw in D3D or OpenGL:

That’s not very exciting, but that’s about all you can draw with only positional data.  In case you wonder why I chose a triangle, it’s because a triangle is the simplest type of polygon and all rendering eventually breaks down to drawing triangles (for example a square will be cut in half diagonally to give two triangles).
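To give a feel for what the hardware is doing when it ‘fills in between the lines’, here is a rough software sketch of one common approach: test every pixel in the triangle’s bounding box against three edge functions and keep the ones that land inside.  All the names and numbers here are my own, and real rasterisers are far cleverer about which pixels they test, but the idea is the same.

#include <cstdio>

struct Point { float x, y; };

// Edge function: positive for points on one side of the edge a->b,
// negative on the other, zero exactly on the edge.
float edge(const Point& a, const Point& b, const Point& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

int main()
{
    const int width = 40, height = 20;
    // Triangle vertices in screen space, wound so that points inside
    // give a positive result for all three edge tests.
    Point v0 = { 5, 2 }, v1 = { 35, 8 }, v2 = { 15, 18 };

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            Point p = { x + 0.5f, y + 0.5f };   // sample at the pixel centre
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            putchar(inside ? '#' : '.');        // 'draw' the pixel as a character
        }
        putchar('\n');
    }
    return 0;
}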

Vertex attributes and interpolation

I’ll just clarify a bit of terminology.  A vertex is a single point in space, usually used to define the corners of a shape, so a triangle has three vertices.  An edge is a line between two vertices, so in wireframe drawing we just draw the edges.  A polygon is the whole filled in shape, such as the triangle above.

Vertices don’t just have to contain positional information; they can have other attributes.  One example of a simple attribute is a colour.  If all the vertices are the same colour then the whole polygon could just be drawn in that colour, but if the vertices are different colours then the values can be interpolated between the vertices.  Interpolation simply means to blend smoothly from one value to a different value – for example, exactly halfway between the two vertices the colour will be an even mix of each.  Because triangles have three vertices, a slightly more complex interpolation is actually used that blends smoothly between three values.  Here is an example of the same triangle with coloured vertices:
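As a rough sketch of what that three-way blend looks like in code: each point inside the triangle gets three weights (often called barycentric coordinates), each between 0 and 1 and summing to 1, and the interpolated colour is just the weighted sum of the three vertex colours.  The weights below are made-up values that the rasteriser would normally work out for every pixel.

#include <cstdio>

struct Colour { float r, g, b; };

// Blend three vertex colours using weights w0, w1, w2 (which sum to 1).
Colour interpolate(const Colour& c0, const Colour& c1, const Colour& c2,
                   float w0, float w1, float w2)
{
    return { w0 * c0.r + w1 * c1.r + w2 * c2.r,
             w0 * c0.g + w1 * c1.g + w2 * c2.g,
             w0 * c0.b + w1 * c1.b + w2 * c2.b };
}

int main()
{
    Colour red   = { 1, 0, 0 };
    Colour green = { 0, 1, 0 };
    Colour blue  = { 0, 0, 1 };

    // At the first vertex: weights (1, 0, 0) give pure red.
    Colour a = interpolate(red, green, blue, 1.0f, 0.0f, 0.0f);
    // Halfway along the edge between the first two vertices: an even mix of two.
    Colour b = interpolate(red, green, blue, 0.5f, 0.5f, 0.0f);
    // The centre of the triangle: an even mix of all three.
    Colour c = interpolate(red, green, blue, 1/3.0f, 1/3.0f, 1/3.0f);

    printf("a = (%.2f, %.2f, %.2f)\n", a.r, a.g, a.b);
    printf("b = (%.2f, %.2f, %.2f)\n", b.r, b.g, b.b);
    printf("c = (%.2f, %.2f, %.2f)\n", c.r, c.g, c.b);
    return 0;
}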

Texture mapping

The other common vertex attributes are texture coordinates.  A texture is just a picture, usually stored as an image file (although they can be generated algorithmically at runtime).  Textures can be applied to polygons by ‘stretching the picture’ across the polygon.  You can think of a 2D picture as having coordinates like on a graph – an X coordinate running horizontally, and a Y coordinate running vertically.  These coordinates range from 0 to 1 across the image, and the X and Y coordinates are usually called U and V in the case of textures.

Textures are applied to polygons by specifying a U and V coordinate at each vertex.  These coordinates (together referred to as UVs) are interpolated across the polygon when it is drawn, and instead of directly drawing a colour for each pixel, the coordinates are used to specify a point in the texture to read from.  The colour of the texture at that point is drawn instead.  This has the effect of stretching some part of the texture across the polygon that is being drawn.
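Here is a small sketch of that idea, assuming made-up barycentric weights and a tiny procedural checkerboard in place of a real image file; texture filtering and perspective correction are ignored to keep the lookup itself clear.

#include <cstdio>
#include <algorithm>

const int TEX_SIZE = 8;   // a tiny 8x8 'texture'

// Return the texel value at (u, v), where both coordinates run from 0 to 1.
int sampleTexture(float u, float v)
{
    // Clamp to the valid range, then scale up to texel coordinates.
    u = std::min(std::max(u, 0.0f), 1.0f);
    v = std::min(std::max(v, 0.0f), 1.0f);
    int x = std::min(int(u * TEX_SIZE), TEX_SIZE - 1);
    int y = std::min(int(v * TEX_SIZE), TEX_SIZE - 1);
    return (x + y) & 1;    // checkerboard pattern instead of image data
}

int main()
{
    // UVs at the three vertices of a triangle.
    float u0 = 0.0f, v0 = 0.0f;
    float u1 = 1.0f, v1 = 0.0f;
    float u2 = 0.0f, v2 = 1.0f;

    // Barycentric weights for some point inside the triangle (made up here).
    float w0 = 0.25f, w1 = 0.5f, w2 = 0.25f;

    // Interpolate the UVs exactly as the colours were interpolated above...
    float u = w0 * u0 + w1 * u1 + w2 * u2;
    float v = w0 * v0 + w1 * v1 + w2 * v2;

    // ...then use them to look up the texture instead of drawing a colour directly.
    printf("uv = (%.2f, %.2f), texel = %d\n", u, v, sampleTexture(u, v));
    return 0;
}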

As an example, here is a texture and a screenshot of part of it mapped onto our triangle:


This is just a really quick introduction to the basics of polygon rendering.  There are a lot more clever and interesting things that can be done which I will talk about in later sections.

A question of depth

The problem with rendering solid geometry is that often two pieces of geometry will overlap on the screen, and one will be in front of the other.  The two options for dealing with this are either to make sure that you draw everything in back-to-front order, or to keep track of the depth of each pixel that you’ve rendered so that you don’t draw more distant objects in front of closer ones.

Rendering back to front will give the correct screen output but is more expensive – a lot of rendering work will be done to draw objects that will later be drawn over (called overdraw), and additional work is required to sort the geometry in the first place.  For this reason it is more efficient in almost all cases to use a depth buffer.

Screen buffers

First, let’s talk about how a computer stores data during rendering.  It has a number of buffers (areas of memory) which store information for each pixel on the screen.  It renders into these buffers before copying the contents to the display.  The size of the buffers depends on the rendering resolution and on how much data is stored for each pixel.  The resolution is how many pixels you want to draw – for example, to output a 720p picture you need to render 1280×720 pixels.

The colour buffer stores the colour of each pixel.  For a colour image you need at least one byte of storage (which can represent 256 intensity levels) for each of the three colour channels: red, green and blue.  This gives a total of 256x256x256 = 16.7 million colours, and so each pixel requires three bytes of storage (but due to the way memory is organised there is a fourth spare byte for each pixel, which can be used for other things).  These days a lot of rendering techniques require higher precision, but I’ll be writing more about this later on.
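As a quick sketch of the arithmetic, here is how a 720p colour buffer adds up with each pixel packed into four bytes.  The exact byte order varies between APIs, so the packing below is just one plausible layout.

#include <cstdio>
#include <cstdint>

// Pack red, green and blue (one byte each) into a 32-bit pixel;
// the top byte is the spare one mentioned above.
uint32_t packPixel(uint8_t r, uint8_t g, uint8_t b)
{
    return uint32_t(r) | (uint32_t(g) << 8) | (uint32_t(b) << 16);
}

int main()
{
    const int width = 1280, height = 720;   // 720p resolution
    const int bytesPerPixel = 4;            // three colour channels + one spare byte

    size_t colourBufferBytes = size_t(width) * height * bytesPerPixel;
    printf("colour buffer: %zu bytes (about %.1f MB)\n",
           colourBufferBytes, colourBufferBytes / (1024.0 * 1024.0));

    uint32_t orange = packPixel(255, 128, 0);
    printf("packed orange pixel: 0x%08X\n", (unsigned)orange);
    return 0;
}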

The second type of buffer is the depth buffer.  This is the same resolution as the colour buffer but instead stores the distance of each pixel along the camera’s Z axis.  This is often stored as a 24-bit value (or sometimes a 32-bit floating point number), which gives enough precision to tell apart surfaces that are very close together in depth.  Here is an example, courtesy of Wikipedia, of the colour and depth buffers for a scene – you can see that pixels in the distance have a lighter colour in the depth buffer, meaning that they hold a larger value:

Depth buffer rendering

Using a depth buffer to render is conceptually very simple.  At the beginning of the frame, every pixel in the buffer is reset to the most distant value.  For every pixel of every polygon that is about to be rendered, its depth is compared with the value stored at that position in the depth buffer.  If it’s closer, the pixel is drawn and the depth value updated.  Otherwise, the pixel is skipped.  If a pixel is skipped then no further work has to be done for it – lighting calculations or texture reads, for example.  Therefore it’s more efficient to render objects from front to back, so that objects behind closer ones don’t have to be drawn.
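Here is a small sketch of that per-pixel test using a software depth buffer; the pixel positions, depths and colours are made up purely for illustration.

#include <vector>
#include <cstdio>

const int WIDTH = 1280, HEIGHT = 720;

struct Buffers
{
    std::vector<unsigned> colour;
    std::vector<float>    depth;

    Buffers()
        : colour(WIDTH * HEIGHT, 0),
          depth(WIDTH * HEIGHT, 1.0f)   // reset every pixel to the most distant value
    {
    }
};

// Try to write one pixel; it only succeeds if it is closer than what is already there.
bool writePixel(Buffers& buffers, int x, int y, float depth, unsigned colour)
{
    int index = y * WIDTH + x;
    if (depth >= buffers.depth[index])
        return false;                   // something closer is already drawn: skip it

    buffers.depth[index]  = depth;      // update the stored depth
    buffers.colour[index] = colour;     // and draw the pixel
    return true;
}

int main()
{
    Buffers buffers;

    // A near pixel draws, then a more distant one at the same position is rejected.
    bool first  = writePixel(buffers, 100, 100, 0.25f, 0xFF0000);
    bool second = writePixel(buffers, 100, 100, 0.75f, 0x00FF00);
    printf("near pixel drawn: %d, far pixel drawn: %d\n", first, second);
    return 0;
}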

That’s all that needs to be said about depth buffers for now.  Next time I’ll be talking about lighting.
