This is the fifth in a sequence of educational 3D graphics renderers.

This version adds color to the renderer.

Basic Renderer

A "renderer" is a collection of algorithms that takes as its input a {@link renderer.scene.Scene} data structure and produces as its output a {@link renderer.framebuffer.FrameBuffer} data structure.

{@code
                           Renderer
                       +--------------+
       Scene           |              |         FrameBuffer
       data     ====>  |  Rendering   |  ====>     data
     structure         |  algorithms  |          structure
                       |              |
                       +--------------+
}

A {@link renderer.scene.Scene} data structure contains information that describes a "virtual scene" that we want to take a "picture" of. The renderer is kind of like a digital camera that takes a picture of the scene and stores the picture's data in the FrameBuffer data structure. The FrameBuffer holds the actual pixel information that describes the picture of the scene.

The rendering algorithms can be implemented in hardware (a graphics card or GPU) or in software. In this class we will write a software renderer using the Java programming language.

Our software renderer is made up of four "packages" of Java classes. Each package is contained in its own directory. The name of the directory is the name of the package.

The first package is the collection of input data structures. This is called the {@link renderer.scene} package. The data structure files in the scene package are:

The second package is the output data structure. It is called the {@link renderer.framebuffer} package and contains the files

The third package is a collection of algorithms that manipulate the data structures from the other two packages. This package is called the {@link renderer.pipeline} package. The algorithm files are:

The fourth package is a library of geometric models. This package is called the {@link renderer.models} package. It contains a number of files for geometric shapes such as {@link renderer.models.Sphere}, {@link renderer.models.Cylinder}, {@link renderer.models.Cube}, {@link renderer.models.Cone}, {@link renderer.models.Pyramid}, {@link renderer.models.Tetrahedron}, {@link renderer.models.Dodecahedron}, and mathematical curves and surfaces.

There is also a fifth collection of source files, a collection of client programs that use the renderer. These files are in the top level directory of the renderer.

Here is a brief description of the data structures from the {@link renderer.scene} and {@link renderer.framebuffer} packages.

Scene

A {@link renderer.scene.Scene} object represents a collection of geometric models positioned in three-dimensional space. The models are in front of a {@link renderer.scene.Camera} which is located at the origin and looks down the negative z-axis. Each {@link renderer.scene.Model} object in a {@link renderer.scene.Scene} object represents a distinct geometric shape in the scene. A {@link renderer.scene.Model} object is a list of {@link renderer.scene.Vertex} objects, a list of {@link java.awt.Color} objects, and a list of {@link renderer.scene.LineSegment} objects. Each {@link renderer.scene.LineSegment} object refers to two of the {@link renderer.scene.Model}'s {@link renderer.scene.Vertex} objects and to two of the {@link renderer.scene.Model}'s {@link java.awt.Color} objects. The {@link renderer.scene.Vertex} objects represent points in the camera's coordinate system. The model's line segments represent the geometric object as a "wire-frame", that is, the geometric object is drawn as a collection of "edges". This is a fairly simplistic way of doing 3D graphics and we will improve this in later renderers.

http://en.wikipedia.org/wiki/Wire-frame_model
https://www.google.com/search?q=computer+graphics+wireframe&tbm=isch

Camera

The {@link renderer.scene.Camera} data structure represents a camera located at the origin, looking down the negative z-axis. A {@link renderer.scene.Camera} has associated to it a "view volume" that determines what part of space the camera "sees" when we use the camera to take a picture (that is, when we render a {@link renderer.scene.Scene}).

A camera can "take a picture" two ways, using a perspective projection or a parallel (orthographic) projection. Each way of taking a picture has a different shape for its view volume.

For the perspective projection, the view volume is the infinitely long pyramid formed by extending, through its base, the pyramid with its apex at the origin and its base in the plane {@code z = -1} with edges {@code x = -1}, {@code x = +1}, {@code y = -1}, and {@code y = +1}.

http://math.hws.edu/graphicsbook/c3/projection-frustum.png

For the orthographic projection, the view volume is an infinitely long rectangular cylinder parallel to the z-axis and with sides {@code x = -1}, {@code x = +1}, {@code y = -1}, and {@code y = +1} (an infinite parallelepiped).

http://math.hws.edu/graphicsbook/c3/projection-parallelepiped.png

When the graphics rendering pipeline uses a {@link renderer.scene.Camera} to render a {@link renderer.scene.Scene}, the renderer "sees" only the geometry from the {@link renderer.scene.Scene} that is contained in the {@link renderer.scene.Camera}'s view volume. (Notice that this means the orthographic camera will see geometry that is behind the camera. In fact, the perspective camera also sees geometry that is behind the camera.)

The plane {@code z = -1} is the camera's image plane. The rectangle in the image plane with corners {@code (-1, -1, -1)} and {@code (+1, +1, -1)} is the camera's view rectangle. The view rectangle is like the film in a real camera; it is where the camera's image appears when you take a picture. The contents of the camera's view rectangle are what get rasterized, by the renderer's {@link renderer.pipeline.RasterizeAntialias} pipeline stage, into a {@link renderer.framebuffer.FrameBuffer}'s viewport.

https://webglfundamentals.org/webgl/frustum-diagram.html
https://threejs.org/examples/#webgl_camera
http://math.hws.edu/graphicsbook/demos/c3/transform-equivalence-3d.html

Model, LineSegment and Vertex

A {@link renderer.scene.Model} object represents a distinct geometric object in a {@link renderer.scene.Scene}. A {@link renderer.scene.Model} data structure is mainly a {@link java.util.List} of {@link renderer.scene.Vertex} objects, a {@link java.util.List} of {@link java.awt.Color} objects, and a {@link java.util.List} of {@link renderer.scene.LineSegment} objects.

The {@link renderer.scene.Vertex} objects represent points from the geometric object that we are modeling. In the real world, a geometric object has an infinite number of points. In 3D graphics, we "approximate" a geometric object by listing just enough points to adequately describe the object. For example, in the real world, a rectangle contains an infinite number of points, but it can be adequately modeled by just its four corner points. (Think about a circle. How many points does it take to adequately model a circle? Look at the {@link renderer.models.Circle} model.)

Each {@link renderer.scene.LineSegment} object contains two integers that are the indices of two {@link renderer.scene.Vertex} objects from the {@code renderer.scene.Model}'s vertex list. Each {@link renderer.scene.Vertex} object contains the xyz-coordinates, in the camera coordinate system, for one of the line segment's two endpoints.

We use the {@link renderer.scene.LineSegment} objects to "fill in" some of the space between the model's vertices. For example, while a rectangle can be approximated by its four corner points, those same four points could also represent just two parallel line segments. By using four line segments that connect around the four points, we get a good representation of a rectangle.

If we modeled a circle using just points, we would probably need to draw hundreds of points. But if we connect every two adjacent points with a short line segment, we can get a good model of a circle with just a few dozen points.
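
Here is a rough sketch of how such a wire-frame circle model might be built. This is not the actual {@link renderer.models.Circle} source code; it assumes hypothetical constructors {@code Vertex(x, y, z)} and {@code LineSegment(vIndex0, vIndex1, cIndex0, cIndex1)}, and public {@code vertexList}, {@code colorList}, and {@code lineSegmentList} fields on {@code Model}, so the real API may differ.

{@code
    // A sketch (not the actual renderer.models.Circle source) of building
    // a wire-frame circle from n vertices and n line segments.
    // Hypothetical constructors Vertex(x, y, z) and
    // LineSegment(vIndex0, vIndex1, cIndex0, cIndex1) are assumed.
    final int n = 40;                              // number of points on the circle
    final Model circle = new Model();
    circle.colorList.add(java.awt.Color.white);    // one color, at index 0
    for (int i = 0; i < n; ++i)                    // n points on a unit circle
    {
       final double angle = i * (2.0 * Math.PI / n);
       circle.vertexList.add(new Vertex(Math.cos(angle), Math.sin(angle), -3));
    }
    for (int i = 0; i < n; ++i)                    // connect each point to the next one
    {
       circle.lineSegmentList.add(new LineSegment(i, (i+1) % n, 0, 0));
    }
}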

Each {@link renderer.scene.LineSegment} object also contains two integers that are the indices of two {@link java.awt.Color} objects from the {@code renderer.scene.Model}'s color list, one color for each of the line segment's two vertices. The renderer will use linear interpolation (lerp) to interpolate the color from each endpoint of a line segment to all the points in between. But these "in between" points are not stored in the model, so this interpolation is done by the renderer when it rasterizes the line segment into pixels in the framebuffer.

Our {@link renderer.scene.Model} objects represent geometric objects as a "wire-frame" of line segments, that is, a geometric object is drawn as a collection of "edges". This is a fairly simplistic way of doing 3D graphics and we will improve this in later renderers.

http://en.wikipedia.org/wiki/Wire-frame_model
https://www.google.com/search?q=computer+graphics+wireframe&tbm=isch

Vertex, Color, and Allocating LineSegments

Giving color to vertices forces us to think about how we model geometry using Vertex, Color, and LineSegment objects. Below are several examples. Suppose that we have two line segments that share an endpoint, labeled p1 in this picture.

{@code
       p0 +---------------+ p1
                           \
                            \
                             \
                              \
                               \
                                + p2
}

Consider the following situations.

Suppose we want the horizontal line segment to have color {@code c0} and the slanted line segment to have color {@code c1}, where {@code c0} and {@code c1} can be set and changed independently of each other. Here is one way to use Vertex, Color, and LineSegment objects to model this situation. Here, Vertex {@code v0} represents point {@code p0}, Vertex {@code v1} represents {@code p1}, and Vertex {@code v2} represents {@code p2}.

{@code
   vertexList       colorList         lineSegmentList
    +------+         +------+        +---------------+
  0 |  v0  |       0 |  c0  |      0 | (0, 1) (0, 0) |
    +------+         +------+        +---------------+
  1 |  v1  |       1 |  c1  |      1 | (1, 2) (1, 1) |
    +------+         +------+        +---------------+
  2 |  v2  |
    +------+
}

Notice how, if we change the entries in the color list, each of the two line segments will change its color.

You could also model this situation with the following allocation of Vertex, Color, and LineSegment objects. Here, point p1 is represented by both Vertex {@code v1} and Vertex {@code v2} (so {@code v1.equals(v2)} is {@code true}). Also, {@code c0.equals(c1)} and {@code c2.equals(c3)} must be {@code true}. (This is the model that OpenGL requires, because in OpenGL the vertex list and the color list must have the same length.) Notice how we need to change two colors in the color list if we want to change the color of one of the line segments. Also notice that if we want to move the point {@code p1}, then we must change both vertices {@code v1} and {@code v2}.

{@code
   vertexList       colorList         lineSegmentList
    +------+         +------+        +---------------+
  0 |  v0  |       0 |  c0  |      0 | (0, 1) (0, 1) |
    +------+         +------+        +---------------+
  1 |  v1  |       1 |  c1  |      1 | (2, 3) (2, 3) |
    +------+         +------+        +---------------+
  2 |  v2  |       2 |  c2  |
    +------+         +------+
  3 |  v3  |       3 |  c3  |
    +------+         +------+
}

Suppose we want the point {@code p0} to have color {@code c0}, the point {@code p1} to have color {@code c1}, and the point {@code p2} to have color {@code c2}. Suppose that the line segment from {@code p0} to {@code p1} should be shaded from {@code c0} to {@code c1} and the line segment from {@code p1} to {@code p2} should be shaded from {@code c1} to {@code c2}. And suppose we want the colors {@code c0}, {@code c1}, and {@code c2} to be set and changed independently of each other. Here is one way to allocate Vertex, Color, and LineSegment objects to model this. Notice how, if we change color {@code c1} to color {@code c3}, then the shading of both line segments gets changed.

{@code
   vertexList       colorList         lineSegmentList
    +------+         +------+        +---------------+
  0 |  v0  |       0 |  c0  |      0 | (0, 1) (0, 1) |
    +------+         +------+        +---------------+
  1 |  v1  |       1 |  c1  |      1 | (1, 2) (1, 2) |
    +------+         +------+        +---------------+
  2 |  v2  |       2 |  c2  |
    +------+         +------+
}

Suppose we want the horizontal line segment to have solid color {@code c0} and the slanted line segment to be shaded from {@code c0} to {@code c1}, where {@code c0} and {@code c1} can be changed independently of each other. Here is one way to model this (be sure to compare this with the first model above).

{@code
   vertexList       colorList         lineSegmentList
    +------+         +------+        +---------------+
  0 |  v0  |       0 |  c0  |      0 | (0, 1) (0, 0) |
    +------+         +------+        +---------------+
  1 |  v1  |       1 |  c1  |      1 | (1, 2) (0, 1) |
    +------+         +------+        +---------------+
  2 |  v2  |
    +------+
}

If we change color {@code c0} to {@code c2}, then the horizontal line segment changes its solid color and the vertical line segment changes its shading.

Here is a more complex situation. Suppose we want the two line segments to be able to move away from each other, but the color at (what was) the common point {@code p1} will always be the same in each line segment.

{@code
   vertexList       colorList         lineSegmentList
    +------+         +------+        +---------------+
  0 |  v0  |       0 |  c0  |      0 | (0, 1) (0, 1) |
    +------+         +------+        +---------------+
  1 |  v1  |       1 |  c1  |      1 | (2, 3) (1, 2) |
    +------+         +------+        +---------------+
  2 |  v2  |       2 |  c2  |
    +------+         +------+
  3 |  v3  |
    +------+
}

Initially, {@code v1.equals(v2)} will be {@code true}, but when the two line segments separate, {@code v1} and {@code v2} will no longer be equal. But the color with index 1 is always shared by both line segments, so even if the two line segments move apart, and even if color {@code c1} is changed, the two line segments will always have the same color at what was their common endpoint.

You should create small Java programs that implement each of these situations.
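
For example, the first situation above (each line segment with its own solid color) might be set up with code like the following sketch. It uses the same hypothetical {@code Vertex} and {@code LineSegment} constructors as the circle sketch above, so the real API may differ.

{@code
    // A sketch of the first allocation above: two line segments that share
    // the vertex for p1 but have independent solid colors c0 and c1.
    final Model m = new Model();
    m.vertexList.add(new Vertex(-1.0,  0.5, -3.0));      // v0, index 0, point p0
    m.vertexList.add(new Vertex( 1.0,  0.5, -3.0));      // v1, index 1, point p1
    m.vertexList.add(new Vertex( 1.5, -1.0, -3.0));      // v2, index 2, point p2
    m.colorList.add(java.awt.Color.red);                 // c0, index 0
    m.colorList.add(java.awt.Color.blue);                // c1, index 1
    m.lineSegmentList.add(new LineSegment(0, 1, 0, 0));  // p0 to p1, solid color c0
    m.lineSegmentList.add(new LineSegment(1, 2, 1, 1));  // p1 to p2, solid color c1
}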

Scene Tree Data Structure

When you put all of the above information together, you see that a {@link renderer.scene.Scene} object is the root of a simple tree data structure.

{@code
               Scene
              /     \
             /       \
       Camera         List
                     /     |     \
                    /      |      \
               Model     Model     Model
                        /   |    \
                     /      |        \
                  /         |            \
               /            |                \
             List         List                List
          /      \                        /        \
         /        \                      /          \
    Vertex        Vertex           LineSegment     LineSegment
    /  |  \       /  |  \           /     \          /     \
   /   |   \     /   |   \         /       \        /       \
  x    y    z   x    y    z     int[2]   int[2]  int[2]    int[2]
}

FrameBuffer

A {@link renderer.framebuffer.FrameBuffer} object holds an array of pixel data that represents an image that can be displayed on a computer's screen. For each pixel in the image, the framebuffer's array holds three byte values, one byte that represents the red component of the pixel's color, one byte that represents the green component, and one byte that represents the blue component of the pixel's color. Each of these three bytes is only eight bits in size, so each of the three colors has only 256 shades (but there are 256^3 = 16,777,216 distinct colors). The three bytes of color for each pixel are packed into one Java integer (which has four bytes, so one of the integer's bytes is not used). If a FrameBuffer has dimensions {@code n} rows of pixels by {@code m} columns of pixels, then the FrameBuffer holds {@code n*m} integers. The pixel data is NOT stored as a "two-dimensional" {@code n} by {@code m} array of integers nor is it stored as a "three-dimensional" {@code n} by {@code m} by 3 array of bytes. It is stored as a one-dimensional array of integers of length {@code n*m}. This array is in "row major" form, meaning that the first {@code m} integers in the array are the pixels from the image's first row. The next {@code m} integers are the pixels from the image's second row, etc. Finally, the first row of pixels is the top row of the image when it is displayed on the computer's screen.
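
As a concrete illustration of the arithmetic described above (not the actual FrameBuffer source code), here is how one pixel's three color bytes can be packed into a single Java integer and how a pixel's row-major index can be computed.

{@code
    // Pack one pixel's three color bytes into a single Java int.
    // (The fourth, unused byte is set to 0xFF here; new java.awt.Color(r, g, b).getRGB()
    //  produces the same packing.)
    final int r = 255, g = 128, b = 0;
    final int packedPixel = (0xFF << 24) | (r << 16) | (g << 8) | b;

    // Row-major indexing: with m columns of pixels, the pixel in row y and
    // column x (rows and columns numbered from 0, row 0 at the top of the
    // image) is stored at this index in the one-dimensional pixel array.
    final int m = 640;           // number of columns in the FrameBuffer
    final int x = 10, y = 20;    // the pixel's column and row
    final int index = (y * m) + x;
}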

The FrameBuffer data structure also defines a Viewport which is a rectangular sub-array of the pixel data in the framebuffer. The viewport is the active part of the framebuffer, the part of the framebuffer that the renderer is actually writing into. The viewport has width and height dimensions, {@code w} and {@code h}, with {@code w <= m} and {@code h <= n}. Quite often the viewport will be the whole framebuffer. But the viewport idea makes it easy for us to implement effects like "split screen" (two independent images in the FrameBuffer), or "picture in a picture" (a smaller picture superimposed on a larger picture). In future renderers (starting with renderer 7), another use of a viewport that is smaller than the whole FrameBuffer is when we want the viewport to have the same aspect ratio as the Camera's view rectangle.

https://en.wikipedia.org/wiki/Split_screen_(computer_graphics)
https://en.wikipedia.org/wiki/Picture-in-picture

Renderer

Here is a brief overview of how the renderer algorithms process a Scene data structure to produce a filled in viewport within the FrameBuffer object.

First of all, remember that:

The main job of the renderer is to "draw" in the FrameBuffer's viewport the appropriate pixels for each LineSegment in each Model from the Scene. The "appropriate pixels" are the pixels "seen" by the camera. At its top level, the renderer iterates through the Scene object's list of Model objects, and for each Model object the renderer iterates through the Model object's list of LineSegment objects. When the renderer has drilled down to a LineSegment object, then it can render the line segment into the framebuffer's viewport. So the renderer really renders line segments.

The renderer does its work on a {@link renderer.scene.LineSegment} object in a "pipeline" of stages. This simple renderer has just three pipeline stages: projection of the vertices onto the image plane, clipping of each line segment to the camera's view rectangle, and rasterization (with anti-aliasing) of each clipped line segment into the framebuffer's viewport.

To understand the algorithms used in the "project, clip then rasterize" process, we need to trace through the rendering pipeline what happens to each {@link renderer.scene.Vertex} and {@link renderer.scene.LineSegment} object from a {@link renderer.scene.Model}.

Start with a Model's list of vertices.

{@code
        v0 ...  vn     A Model's list of Vertex objects
         \     /
          \   /
            |
            | camera coordinates (of v0 ... vn)
            |
        +-------+
        |       |
        |   P1  |    Projection (of the vertices)
        |       |
        +-------+
            |
            | image plane coordinates (of v0 ... vn)
            |
           / \
          /   \
         /     \
        |   P2  |   Clipping (of each line segment)
         \     /
          \   /
           \ /
            |
            | image plane coordinates (of the clipped vertices)
            |
           / \
          /   \
         /     \
        |   P3  |   Rasterization and anti-aliasing (of each clipped line segment)
         \     /
          \   /
           \ /
            |
            |  pixels (for each clipped line segment)
            |
           \|/
    FrameBuffer.ViewPort
}

Projection

The {@link renderer.pipeline.Projection} stage takes the model's list of vertices in three dimensional (camera) space and computes the two-dimensional coordinates of where each vertex "projects" onto the camera's image plane (the plane with equation {@code z = -1}). The projection stage takes the vertices inside of the camera's view volume and projects them into the camera's view rectangle (and points outside of the camera's view volume will, of course, project to points outside of the view rectangle).

Let us derive the formulas for the perspective projection transformation (the formulas for the parallel projection transformation are pretty obvious). We will derive the x-coordinate formula; the y-coordinate formula is similar.

Let {@code (x_c, y_c, z_c)} denote a point in the 3-dimensional camera coordinate system. Let {@code (x_p, y_p, -1)} denote the point's perspective projection into the image plane, {@code z = -1}. Here is a "picture" of just the xz-plane from camera space. This picture shows the point {@code (x_c, z_c)} and its projection to the point {@code (x_p, -1)} in the image plane.

{@code

           x                  /
           |                 /
       x_c +                + (x_c, z_c)
           |               /|
           |              / |
           |             /  |
           |            /   |
           |           /    |
           |          /     |
           |         /      |
           |        /       |
       x_p +       +        |
           |      /|        |
           |     / |        |
           |    /  |        |
           |   /   |        |
           |  /    |        |
           | /     |        |
           +-------+--------+------------> -z
        (0,0)     -1       z_c
}

We are looking for a formula that computes {@code x_p} in terms of {@code x_c} and {@code z_c}. There are two similar triangles in this picture that share a vertex at the origin. Using the properties of similar triangles we have the following ratios. (Remember that these are ratios of positive lengths, so we write {@code -z_c}, since {@code z_c} is on the negative z-axis).

{@code
              x_p       x_c
             -----  =  -----
               1       -z_c
}

If we solve this ratio for the unknown, {@code x_p}, we get the projection formula,

{@code
              x_p = -x_c / z_c.
}

The equivalent formula for the y-coordinate is

{@code
              y_p = -y_c / z_c.

}
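
In code, the perspective projection of a single vertex might look like the following sketch (it assumes a {@code Vertex} with public {@code x}, {@code y}, {@code z} fields and a {@code Vertex(x, y, z)} constructor; the actual {@link renderer.pipeline.Projection} stage may be organized differently).

{@code
    // Perspective projection of one vertex onto the image plane z = -1.
    // (Assumes a Vertex v in camera coordinates, with public x, y, z fields
    //  and z < 0, and a Vertex(x, y, z) constructor.)
    final double x_p = -v.x / v.z;                     // x_p = -x_c / z_c
    final double y_p = -v.y / v.z;                     // y_p = -y_c / z_c
    final Vertex projected = new Vertex(x_p, y_p, -1); // the projected vertex
}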

Improved Line Clipping in the Basic Renderer

If a line segment is partly inside and partly outside of the camera's view volume, then we should clip off from the line segment that part of it which is not in the view volume.

If a line segment is entirely outside of the camera's view volume, then we should discard it from any further processing by the renderer.

We have three choices of when we can clip line segments.

{@code
  1. before projection (in camera coordinates),
          clip -> project -> rasterize
  2. after projection (in the view plane),
          project -> clip -> rasterize
  3. during rasterization (in the pixel-plane),
          project -> rasterize -> clip
}

In the first option, we clip line segments in camera space so that they are within the camera's view volume. In the second option we clip projected line segments in the view plane so that they are within the view plane's view rectangle. In the third option, we clip transformed line segments in the pixel-plane so that they are within the pixel-plane's logical viewport.

In the previous renderer we used the third option. We clipped line segments during rasterization by not putting into the framebuffer's viewport any line segment fragment that is outside of the pixel-plane's logical viewport. But this clipping algorithm requires that we compute every fragment of every line segment and then check if it fits in the framebuffer's viewport (that is, whether the fragment is inside of the logical viewport). This could be a big waste of CPU time. If a line segment extends from within the viewport to billions of pixels outside the viewport, then we would be needlessly computing a lot of fragments to discard. Even worse, if no part of the line segment is in the view volume, we would still be rasterizing, pixel by pixel, the whole line segment. As we saw in the demonstration program

     renderer_1\SomethingWrong_1.java

clipping while rasterizing can cause the renderer to become unresponsive while it is "clipping" a line that has billions of fragments outside of the pixel-plane's logical viewport.

The first option, clipping line segments in camera coordinates before projection, is awkward because we need to clip against the perspective view volume differently than against the orthographic view volume. Also, we would need to clip against slanted planes for the perspective projection. Because of this awkwardness, most renderers do not clip before projection.

The best approach is to clip projected line segments in the view plane. We should clip a line segment so that both of its end points are within the view rectangle. If both endpoints of a line segment are within the view rectangle, then all the rasterized fragments of the line segment will be within the pixel-plane's logical viewport.

Clipping Algorithm

The clipping stage algorithm is a simplification of the Liang-Barsky Parametric Line Clipping algorithm.

This algorithm assumes that all Vertex objects have been projected onto the camera's image plane, {@code z = -1}. This algorithm also assumes that the camera's view rectangle in the image plane is

{@code
   -1 <= x <= +1  and
   -1 <= y <= +1.
}

If a projected vertex from a line segment has an {@code x} or {@code y} coordinate with absolute value greater than 1, then that vertex "sticks out" of the view rectangle. This algorithm will clip the line segment so that both of the line segment's vertices are within the view rectangle.

Here is an outline of the clipping algorithm.

Recursively process each line segment, using the following steps.

1) Test if the line segment no longer needs to be clipped, i.e., both of its vertices are within the view rectangle. If this is the case, then return the line segment wrapped in an {@link java.util.Optional} object.

{@code
          x = -1     x = +1
            |          |
            |          |
        ----+----------+----- y = +1
            |     v1   |
            |    /     |
            |   /      |
            |  /       |
            | v0       |
        ----+----------+----- y = -1
            |          |
            |          |
}

2) Test if the line segment should be "trivially rejected". A line segment is "trivially rejected" if it is on the wrong side of any of the four lines that bound the view rectangle (i.e., the four lines {@code x = 1, x = -1, y = 1, y = -1}). If so, then return an empty {@link java.util.Optional} object indicating that the line segment is not to be rasterized into the framebuffer.

Notice that a line like the following one is trivially rejected because it is on the "wrong" side of the line {@code x = 1}.

{@code
                      x=1
                       |            v1
                       |            /
            +----------+           /
            |          |          /
            |          |         /
            |          |        /
            |          |       /
            |          |      /
            +----------+     /
                       |    /
                       |  v0
}

But the following line is NOT trivially rejected because, even though it is completely outside of the view rectangle, this line is not entirely on the wrong side of any one of the four lines {@code x = 1, x = -1, y = 1, or y = -1}. The line below will get clipped at least one time (either on the line {@code x = 1} or the line {@code y = -1}) before it is (recursively) a candidate for "trivial rejection". Notice that the line below could even be clipped twice, first on {@code y = 1}, then on {@code x = 1}, before it can be trivially rejected (by being on the wrong side of {@code y = -1}).

{@code
                      x=1
                       |          v1
                       |         /
            +----------+        /
            |          |       /
            |          |      /
            |          |     /
            |          |    /
            |          |   /
            +----------+  /
                       | /
                       |/
                       /
                      /|
                     / |
                   v0
}

3) If the line segment has been neither accepted nor rejected, then it needs to be clipped. So we test the line segment against each of the four clipping lines, {@code x = 1, x = -1, y = 1, and y = -1}, to determine if the line segment crosses one of those lines. We clip the line segment against the first line which we find that it crosses. Then we recursively clip the resulting clipped line segment. Notice that we only clip against the first clipping line which the segment is found to cross. We do not continue to test against the other clipping lines. This is because it may be the case, after just one clip, that the line segment is now a candidate for trivial accept or reject. So rather than test the line segment against several more clipping lines (which may be useless tests) it is more efficient to recursively clip the line segment, which will then start with the trivial accept or reject tests.

When we clip a line segment against a clipping line, it is always the case that one endpoint of the line segment is on the "right" side of the clipping line and the other endpoint is on the "wrong" side of the clipping line. In the following picture, assume that {@code v0} is on the "wrong" side of the clipping line ({@code x = 1}) and {@code v1} is on the "right" side. So {@code v0} needs to be clipped off the line segment and replaced by a new vertex.

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         \
                         |\
                         | \
                         |  \
                         |   v0
}

Represent points {@code p(t)} on the line segment between {@code v0} and {@code v1} with the following parametric equation.

{@code
       p(t) = (1-t) * v0 + t * v1  with  0 <= t <= 1
}

Notice that this equation parameterizes the line segment starting with {@code v0} at {@code t=0} (on the "wrong side") and ending with {@code v1} at {@code t=1} (on the "right side"). We need to find the value of {@code t} when the line segment crosses the clipping line {@code x = 1}. Let {@code v0 = (x0, y0)} and let {@code v1 = (x1, y1)}. Then the above parametric equation becomes the two component equations

{@code
        x(t) = (1-t) * x0 + t * x1,
        y(t) = (1-t) * y0 + t * y1,  with  0 <= t <= 1.
}

Since the clipping line in this example is {@code x = 1}, we need to solve the equation {@code 1 = x(t)} for {@code t}. So we need to solve

{@code
         1 = (1-t) * x0 + t * x1
}

for {@code t}. Here are a few algebra steps.

{@code
         1 = x0 - t * x0 + t * x1
         1 = x0 + t * (x1 - x0)
         1 - x0 = t * (x1 - x0)
              t = (1 - x0)/(x1 - x0)
}

We get similar equations for {@code t} if we clip against the other clipping lines ({@code x = -1, y = 1}, or {@code y = -1}) and we assume that {@code v0} is on the "wrong side" and {@code v1} is on the "right side".

Let {@code t0} denote the above value for {@code t}. With this value for {@code t}, we can compute the y-coordinate of the new vertex {@code p(t0)} that replaces {@code v0}.

{@code
                        x=1
                   v1    |
                     \   |
                      \  |
                       \ |
                         p(t0)=(1, y(t0))
                         |
                         |
                         |
}

Here is the algebra.

{@code
        y(t0) = (1-t0) * y0 + t0 * y1
              = y0 + t0 * (y1 - y0)
              = y0 + (1 - x0)*((y1 - y0)/(x1 - x0))
}

Finally, the new line segment between {@code v1} and the new vertex {@code p(t0)} is recursively clipped so that it can be checked to see if it should be trivially accepted, trivially rejected, or clipped again.
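
Here is a sketch of the overall shape of this recursive clipping algorithm. This is not the renderer's actual clipping code; only the case where {@code v0} is on the wrong side of the line {@code x = 1} is shown in detail, the new vertex's color is ignored (see the next section), and public {@code x}, {@code y} fields on {@code Vertex} are assumed.

{@code
    // A sketch of the recursive clipping algorithm. Only the clip against
    // the line x = 1 (with v0 on the wrong side) is shown in detail; the
    // other seven cases are analogous. The new vertex's color is handled
    // as described in the next section.
    static java.util.Optional<LineSegment> clip(Model model, LineSegment ls)
    {
       final Vertex v0 = model.vertexList.get(ls.index[0]);
       final Vertex v1 = model.vertexList.get(ls.index[1]);

       // 1) Trivial accept: both vertices are inside the view rectangle.
       if (Math.abs(v0.x) <= 1 && Math.abs(v0.y) <= 1
        && Math.abs(v1.x) <= 1 && Math.abs(v1.y) <= 1)
       {
          return java.util.Optional.of(ls);
       }
       // 2) Trivial reject: both vertices are on the wrong side of one
       //    of the four clipping lines.
       if ((v0.x > 1 && v1.x > 1) || (v0.x < -1 && v1.x < -1)
        || (v0.y > 1 && v1.y > 1) || (v0.y < -1 && v1.y < -1))
       {
          return java.util.Optional.empty();
       }
       // 3) Clip against the first clipping line that the segment crosses,
       //    then recursively clip the resulting (shorter) line segment.
       if (v0.x > 1)
       {
          final double t0   = (1 - v0.x) / (v1.x - v0.x);  // where the segment crosses x = 1
          final double yNew = (1 - t0) * v0.y + t0 * v1.y; // y-coordinate of the new vertex
          model.vertexList.add(new Vertex(1, yNew, -1));   // the new vertex is in the image plane
          ls.index[0] = model.vertexList.size() - 1;       // the segment now uses the new vertex
          return clip(model, ls);
       }
       // ... the cases for v1.x > 1, x = -1, y = 1, and y = -1 go here ...
       throw new UnsupportedOperationException("remaining cases omitted in this sketch");
    }
}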

Color Interpolation in the Clipper

Suppose we have a line segment that extends out of the camera's view rectangle. Here we have a line segment with vertex {@code v1}, with color {@code c1}, on the "right" side of the clipping line {@code x = 1} and vertex {@code v0}, with color {@code c0}, on the "wrong" side of the clipping line.

{@code
                        x=1
                         |
                  v1,c1  |
                     \   |
                      \  |
                       \ |
                        \|
                         \
                         |\
                         | \
                         |  \
                         |   \
                         |  v0,c0
}

Vertex {@code v0} needs to be clipped off and replaced with a new vertex at the line {@code x=1}. We also need to give the new vertex a new color. Since color along the line segment will be linearly interpolated by the rasterizer, the clipping stage should give to the new vertex the same color that the rasterizer would interpolate to the line segment's pixel at {@code x=1}.

Once the clipping algorithm has solved for the value {@code t0} when the x-coordinate of {@code p(t) = 1}, then the clipping algorithm can use the following three lerp equations to calculate the new color, {@code c(t0)}, for the new vertex {@code p(t0)}.

{@code
   r(t0) = (1-t0) * r0 + t0 * r1
   g(t0) = (1-t0) * g0 + t0 * g1
   b(t0) = (1-t0) * b0 + t0 * b1
}
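
In code, once {@code t0} is known, the new vertex's color is just three more lerps. Here is a sketch that uses the {@code getRed}, {@code getGreen}, and {@code getBlue} methods of {@link java.awt.Color}, with {@code c0} and {@code c1} the two endpoint colors.

{@code
    // Lerp the two endpoint colors to get the color for the new vertex p(t0).
    // c0 is the color at v0 (the clipped-off vertex), c1 is the color at v1.
    final int r = (int)Math.round((1 - t0) * c0.getRed()   + t0 * c1.getRed());
    final int g = (int)Math.round((1 - t0) * c0.getGreen() + t0 * c1.getGreen());
    final int b = (int)Math.round((1 - t0) * c0.getBlue()  + t0 * c1.getBlue());
    final java.awt.Color newColor = new java.awt.Color(r, g, b);
}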

Memory Management

We still have one last detail to consider. We need to consider the memory management of Java objects. When we clip a line segment,

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         \
                         |\
                         | \
                         |  \
                         |   v0
}

we compute a new vertex, {@code p(t0)}. In our code, should we mutate the vertex object referred to by {@code v0} so that its coordinates are updated to the values computed in {@code p(t0)}?

{@code
    v0.x = p(t0).x;
    v0.y = p(t0).y;
}

Or should we replace the object referred to by {@code v0} with a new vertex object?

{@code
    v0 = new Vertex( p(t0) );
}

If we create a new vertex object to represent {@code p(t0)}, should that vertex object replace {@code v0} in the model's vertex list,

{@code
    model.vertexList.set(ls.index[0], v0);
}

or should we add the new vertex to the end of the vertex list and then mutate the line segment object to refer to the new vertex?

{@code
    int i = model.vertexList.size();
    model.vertexList.add(v0); // added at the end of the list
    ls.index[0] = i;
}

First of all, we cannot mutate the vertex {@code v0}. Here is why. That vertex may be part of another line segment. In the following picture, {@code v0} is an endpoint of two line segments (this means that each line segment object holds the index into the model's vertex list for the shared vertex, {@code v0}).

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         \
                         |\
                         | \
                         |  \
                    v2-------v0
                         |
                         |
                         |
}

If, while clipping the line segment from {@code v0} to {@code v1}, we mutate {@code v0} to be {@code p(t0)}, then the picture will become like this, which incorrectly clips the line segment from {@code v2} to {@code v0}.

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         v0
                        /|
                       / |
                      /  |
                    v2   |
                         |
                         |
}

So we cannot mutate a vertex when it needs to be clipped off of a line segment.

When we create a new vertex to represent the clipping point {@code p(t0)}, by the same reasoning as above, we cannot use that new vertex to replace the original vertex in the model's vertex list. That would, once again, cause the line segment from {@code v2} to {@code v0} to be incorrectly clipped. So we need to add a new vertex to the model's vertex list and mutate the line segment object to refer to that new vertex (called {@code v3} in the following picture).

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         v3
                         |
                         |
                         |
                    v2-------v0
                         |
                         |
                         |
}

Notice that when the line segment from {@code v2} to {@code v0} finally gets clipped, the algorithm will put a new vertex, call it {@code v4}, in the model's vertex list and leave vertex {@code v0} in the list. No matter how many line segments {@code v0} is shared with, it will be clipped off of each one, but it will continue to remain in the model's vertex list.

{@code
                        x=1
                    v1   |
                      \  |
                       \ |
                        \|
                         v3
                         |
                         |
                         |
                    v2---v4   v0
                         |
                         |
                         |
}

Extreme Clipping Example

We mentioned, while describing the clipping algorithm, that we clip a line segment against the first view rectangle edge which we find that it crosses and then immediately recurse on the resulting clipped line segment. This is because it may be the case, after clipping the line segment, that it becomes a candidate for trivial accept or reject. An obvious question is how many times might we recurse on a line segment before it is finally accepted or rejected? Here is a picture of a line segment, from vertex {@code v0} to vertex {@code v1}, that needs to be clipped four times before we get the final clipped line segment from vertex {@code v4} to vertex {@code v5}. Remember that the clipping algorithm clips first on the edge {@code x = 1}, then on {@code x = -1}, then on {@code y = 1}, and lastly on {@code y = -1}.

{@code
                              v0
                              +
                          |  /
                          | /
                          |/
                          + v2
      x = -1             /|
        |               / |
        |              /  |
     ---+-------------+---+----- y = 1
        |            /v4  |
        |           /     |
        |          /      |
        |         /       |
        |        /        |
        |       /         |
        |      /          |
        |     /           |
        |    /            |
     ---+---+-------------+----- y = -1
        |  /v5            |
        | /               |
        |/                |
        + v3            x = 1
       /|
      / |
     +
    v1

}

When the clipping algorithm is finished, it will have added four new vertices to the vertex list of the model. The clipped line segment in the model will use only two of the six vertices (the two original vertices plus the four new ones) associated with this line segment.

The clipping algorithm does not try to remove vertices like {@code v0}, {@code v1}, {@code v2} and {@code v3} which are no longer being used by this line segment. The vertices {@code v0} and {@code v1} cannot be removed because they might still be used by other line segments in the model. And it is not worth the effort to try and keep track of vertices like {@code v2} and {@code v3} which are created by the clipping algorithm but end up not being used in the final clipped line segment.

Rasterization

The {@link renderer.pipeline.RasterizeAntialias} stage first takes the two-dimensional coordinates of a vertex in the camera's image plane and computes that vertex's location in a "logical pixel plane". This is referred to as the "viewport transformation". The purpose of the logical pixel plane and the viewport transformation is to make the rasterization stage easier to implement.

The camera's image plane contains a view rectangle with edges {@code x = +1, x = -1, y = +1}, and {@code y = -1}. The pixel plane contains a logical viewport rectangle with edges {@code x = 0.5, x = w+0.5, y = 0.5}, and {@code y = h+0.5} (where {@code h} and {@code w} are the height and width of the framebuffer's viewport).

Recall that the role of the camera's view rectangle is to determine what part of a scene is visible to the camera. Vertices inside of the camera's view rectangle should end up as pixels in the framebuffer's viewport. Another way to say this is that we want only that part of each projected line segment contained in the view rectangle to be visible to our renderer and rasterized into the framebuffer's viewport.

Any point inside of the image plane's view rectangle should be transformed to a point inside of the pixel plane's logical viewport. Any vertex outside of the image plane's view rectangle should be transformed to a point outside of the pixel plane's logical viewport.

{@code
                      View Rectangle
                (in the Camera's image plane)

                          y-axis
                            |
                            |       (+1,+1)
                  +---------|---------+
                  |         |         |
                  |         |         |
                  |         |         |
                  |         |         |
                  |         |         |
               -------------+---------------- x-axis
                  |         |         |
                  |         |         |
                  |         |         |
                  |         |         |
                  |         |         |
                  +---------|---------+
              (-1,-1)       |
                            |
                            |

                            ||
                            ||
                            ||  Viewport Transformation
                            ||
                            ||
                            \/

                      Logical Viewport
                                               (w+0.5, h+0.5)
      +----------------------------------------------+
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|   The logical pixels
      | . . . . . . . . . . . . . . . . . . . . . . .|   are the points in the
      | . . . . . . . . . . . . . . . . . . . . . . .|   logical viewport with
      | . . . . . . . . . . . . . . . . . . . . . . .|   integer coordinates.
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      | . . . . . . . . . . . . . . . . . . . . . . .|
      +----------------------------------------------+
 (0.5, 0.5)

                            ||
                            ||
                            ||  Rasterizer
                            ||
                            ||
                            \/

                         Viewport
                    (in the FrameBuffer)
      (0,0)
        +-------------------------------------------+
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|   The physical pixels
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|   are the entries in
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|   the FrameBuffer
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|   array.
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
        +-------------------------------------------+
                                                (w-1,h-1)
}

After the viewport transformation of the two endpoints of a line segment, the rasterization stage will convert the given line segment in the pixel plane into pixels in the framebuffer's viewport. The rasterization stage computes all the pixels in the framebuffer's viewport that are on the line segment connecting the transformed vertices v0 and v1. Any point inside the logical viewport that is on this line segment is rasterized to a pixel inside the framebuffer's viewport. Any point on this line segment that is outside the logical viewport should not be rasterized to a pixel in the framebuffer.

View Rectangle to Logical Viewport Transformation

The view rectangle in the camera's view plane has

{@code
       -1 <= x <= 1,
       -1 <= y <= 1.
}

The logical viewport in the pixel plane has

{@code
       0.5 <= x < w + 0.5,
       0.5 <= y < h + 0.5,
}

where {@code w} and {@code h} are the width and height of the framebuffer's viewport.

We want a transformation (formulas) that sends points from the camera's view rectangle to proportional points in the pixel plane's logical viewport.

The goal of this transformation is to put a logical pixel with integer coordinates at the center of each square physical pixel. The logical pixel with integer coordinates {@code (m, n)} represents the square physical pixel with

{@code
  m - 0.5 <= x < m + 0.5,
  n - 0.5 <= y < n + 0.5.
}

Notice that logical pixels have integer coordinates {@code (m,n)} with

{@code
  1 <= m <= w
  1 <= n <= h.
}

Let us derive the formulas for the viewport transformation (we will derive the x-coordinate formula; the y-coordinate formula is similar).

Let {@code x_p} denote an x-coordinate in the image plane and let {@code x_vp} denote an x-coordinate in the viewport. If a vertex is on the left edge of the view rectangle (with {@code x_p = -1}), then it should be transformed to the left edge of the viewport (with {@code x_vp = 0.5}). And if a vertex is on the right edge of the view rectangle (with {@code x_p = 1}), then it should be transformed to the right edge of the viewport (with {@code x_vp = w + 0.5}). These two facts are all we need to know to find the linear function for the transformation of the x-coordinate.

We need to calculate the slope {@code m} and intercept {@code b} of a linear function

{@code
          x_vp = m * x_p + b
}

that converts image plane coordinates into viewport coordinates. We know, from what we said above about the left and right edges of the view rectangle, that

{@code
           0.5 = (m * -1) + b,
       w + 0.5 = (m *  1) + b.
}

If we add these last two equations together we get

{@code
         w + 1 = 2*b
}
or
{@code
         b = (w + 1)/2.
}

If we use {@code b} to solve for {@code m} we have

{@code
           0.5 = (m * -1) + (w + 1)/2
             1 = -2*m + w + 1
           2*m = w
             m = w/2.
}

So the linear transformation of the x-coordinate is

{@code
       x_vp = (w/2) * x_p + (w+1)/2
            = 0.5 + w/2 * (x_p + 1).
}

The equivalent formula for the y-coordinate is

{@code
       y_vp = 0.5 + h/2 * (y_p + 1).
}
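
In code, the viewport transformation of one projected vertex is just these two formulas (a sketch, where {@code w} and {@code h} are the width and height of the framebuffer's viewport and {@code (x_p, y_p)} is the projected vertex).

{@code
    // Viewport transformation of one projected vertex (x_p, y_p) from the
    // camera's view rectangle into the pixel-plane's logical viewport.
    final double x_vp = 0.5 + (w / 2.0) * (x_p + 1);
    final double y_vp = 0.5 + (h / 2.0) * (y_p + 1);
}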

Rasterizing a LineSegment

Now we want to discuss the precise algorithm for how the rasterizer converts a line segment in the pixel-plane into a specific choice of pixels in the viewport.

Here is a picture of part of a line segment in the pixel-plane with logical pixel x-coordinates between {@code i} and {@code i+3} and with logical pixel y-coordinates between {@code j} and {@code j+6}.

{@code
     +-------+-------+-------+------/+
     |       |       |       |     / |
 j+6 |   .   |   .   |   .   |   ./  |
     |       |       |       |   /   |
     +-------+-------+-------+--/----+
     |       |       |       | /     |
 j+5 |   .   |   .   |   .   |/  .   |
     |       |       |       /       |
     +-------+-------+------/+-------+
     |       |       |     / |       |
 j+4 |   .   |   .   |   ./  |   .   |
     |       |       |   /   |       |
     +-------+-------+--/----+-------+
     |       |       | /     |       |
 j+3 |   .   |   .   |/  .   |   .   |
     |       |       /       |       |
     +-------+------/+-------+-------+
     |       |     / |       |       |
 j+2 |   .   |   ./  |   .   |   .   |
     |       |   /   |       |       |
     +-------+--/----+-------+-------+
     |       | /     |       |       |
 j+1 |   .   |/  .   |   .   |   .   |
     |       /       |       |       |
     +------/+-------+-------+-------+
     |     / |       |       |       |
  j  |   ./  |   .   |   .   |   .   |
     |   /   |       |       |       |
     +--/----+-------+-------+-------+
        i       i+1     i+2     i+3     logical pixel coordinates
}

The rasterizing algorithm can "walk" this line segment along either the x-coordinate axis from {@code i} to {@code i+3} or along the y-coordinate axis from {@code j} to {@code j+6}. In either case, for each logical pixel coordinate along the chosen axis, the algorithm should pick the logical pixel closest to the line segment and turn on the associated physical pixel.

If our line has the equation {@code y = m*x + b}, with slope {@code m} and y-intercept {@code b} (in pixel-plane coordinates), then walking the line along the x-coordinate axis means that for each logical pixel x-coordinate {@code i}, we compute the logical pixel y-coordinate

{@code
     Math.round( m*i + b ).
}

On the other hand, walking the line along the y-coordinate axis means we should use the linear equation {@code x = (y - b)/m} and for each logical pixel y-coordinate {@code j}, we compute the logical pixel x-coordinate

{@code
    Math.round( (j - b)/m ).
}

Let us try this algorithm in the above picture along each of the two logical pixel coordinate axes.

If we rasterize this line segment along the x-coordinate axis, then we need to choose a logical pixel for each {@code x} equal to {@code i, i+1, i+2}, and {@code i+3}. Always choosing the logical pixel (vertically) closest to the line, we get these pixels.

{@code
     +-------+-------+-------+------/+
     |       |       |       |#####/#|
 j+6 |   .   |   .   |   .   |###./##|
     |       |       |       |###/###|
     +-------+-------+-------+--/----+
     |       |       |       | /     |
 j+5 |   .   |   .   |   .   |/  .   |
     |       |       |       /       |
     +-------+-------+------/+-------+
     |       |       |#####/#|       |
 j+4 |   .   |   .   |###./##|   .   |
     |       |       |###/###|       |
     +-------+-------+--/----+-------+
     |       |       | /     |       |
 j+3 |   .   |   .   |/  .   |   .   |
     |       |       /       |       |
     +-------+------/+-------+-------+
     |       |#####/#|       |       |
 j+2 |   .   |###./##|   .   |   .   |
     |       |###/###|       |       |
     +-------+--/----+-------+-------+
     |       | /     |       |       |
 j+1 |   .   |/  .   |   .   |   .   |
     |       /       |       |       |
     +------/+-------+-------+-------+
     |#####/#|       |       |       |
  j  |###./##|   .   |   .   |   .   |
     |###/###|       |       |       |
     +--/----+-------+-------+-------+
        i       i+1     i+2     i+3     logical pixel coordinates
}

Make sure you agree that these are the correctly chosen pixels. Notice that our rasterized line has "holes" in it. This line has slope strictly greater than 1. Every time we move one step to the right, we move more than one step up because the slope is greater than 1, so

{@code
   rise/run > 1,
}

so

{@code
   rise > run,
}

but {@code run = 1}, so we always have {@code rise > 1}, which causes us to skip over a pixel when we round our y-coordinate to the nearest logical pixel.

If we rasterize this line segment along the y-coordinate axis, then we need to choose a logical pixel for each {@code y} equal to {@code j, j+1, j+2, j+3, j+4, j+5} and {@code j+6}. Always choosing the logical pixel (horizontally) closest to the line, we get these pixels.

{@code
     +-------+-------+-------+------/+
     |       |       |       |#####/#|
 j+6 |   .   |   .   |   .   |###./##|
     |       |       |       |###/###|
     +-------+-------+-------+--/----+
     |       |       |       |#/#####|
 j+5 |   .   |   .   |   .   |/##.###|
     |       |       |       /#######|
     +-------+-------+------/+-------+
     |       |       |#####/#|       |
 j+4 |   .   |   .   |###./##|   .   |
     |       |       |###/###|       |
     +-------+-------+--/----+-------+
     |       |       |#/#####|       |
 j+3 |   .   |   .   |/##.###|   .   |
     |       |       /#######|       |
     +-------+------/+-------+-------+
     |       |#####/#|       |       |
 j+2 |   .   |###./##|   .   |   .   |
     |       |###/###|       |       |
     +-------+--/----+-------+-------+
     |       |#/#####|       |       |
 j+1 |   .   |/##.###|   .   |   .   |
     |       /#######|       |       |
     +------/+-------+-------+-------+
     |#####/#|       |       |       |
  j  |###./##|   .   |   .   |   .   |
     |###/###|       |       |       |
     +--/----+-------+-------+-------+
        i       i+1     i+2     i+3     logical pixel coordinates
}

Make sure you agree that these are the correctly chosen pixels. In each row of logical pixels, we should choose the logical pixel that is closest (horizontally) to the line.

We see that while we can rasterize a line in either the x-direction or the y-direction, we should choose the direction based on the slope of the line. Lines with slope between -1 and +1 should be rasterized in the x-direction. Lines with slope less than -1 or greater than +1 should be rasterized in the y-direction.

Here is a pseudo-code summary of the rasterization algorithm. Suppose we are rasterizing a line from logical pixel {@code (x0, y0)} to logical pixel {@code (x1, y1)} (so {@code x0, y0, x1, y1} are all integer values). If the line has slope less than 1, we use the following loop.

{@code
    double y = y0;
    for (int x = x0; x <= x1; x += 1, y += m)
    {
       int x_vp = x - 1;                    // viewport coordinate
       int y_vp = h - (int)Math.round(y);   // viewport coordinate
       vp.setPixelVP(x_vp, y_vp, Color.white);
    }
}

Notice how {@code x} is always incremented by 1 so that it moves from one, integer valued, logical pixel coordinate to the next, integer valued, logical pixel coordinate. On the other hand, the slope {@code m} need not be an integer. As we increment {@code x} by 1, we increment {@code y} by {@code m} (since "over 1, up {@code m}" means {@code slope = m}), so the values of {@code y} need not be integer values, so we need to round each {@code y} value to its nearest logical pixel integer coordinate.

If the line has slope greater than 1, we use the following loop.

{@code
    double x = x0;
    for (int y = y0; y <= y1; y += 1, x += m)
    {
       int x_vp = (int)Math.round(x) - 1;   // viewport coordinate
       int y_vp = h - y;                    // viewport coordinate
       vp.setPixelVP(x_vp, y_vp, Color.white);
    }
}

The above code has a slight simplification in it. When the slope of the line is greater than 1, we recompute the slope as slope in the y-direction with

{@code
    change-in-x / change-in-y
}

so that the slope becomes less than 1.
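
Putting these pieces together, the choice between the two loops might look like the following sketch, where {@code (x0, y0)} and {@code (x1, y1)} are the two logical-pixel endpoints of the line segment.

{@code
    // Choose the rasterization direction based on the line's slope.
    // (The first loop assumes x0 <= x1 and the second assumes y0 <= y1,
    //  so the two endpoints may first need to be swapped.)
    if (Math.abs(y1 - y0) <= Math.abs(x1 - x0))
    {
       final double m = (double)(y1 - y0) / (x1 - x0);  // slope in the x-direction
       // ... use the first loop, stepping x by 1 and y by m ...
    }
    else
    {
       final double m = (double)(x1 - x0) / (y1 - y0);  // "slope" in the y-direction
       // ... use the second loop, stepping y by 1 and x by m ...
    }
}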

Color Interpolation in the Rasterizer

This picture represents a line segment projected into the camera's view rectangle. Each end of the line segment has a color associated to it.

{@code
          x = -1           x = +1
            |                |
        ----+----------------+---- y = +1
            |                |
            |        v1,c1   |
            |         /      |
            |        /       |
            |       /        |
            |      /         |
            |     /          |
            |  v0,c0         |
            |                |
        ----+----------------+---- y = -1
            |                |
}

We want to describe how the rasterizer uses the colors from the two endpoints of the line segment to shade the pixels that represent the line segment.

If {@code c0} and {@code c1} are the same color, then the rasterizer should just give that color to every pixel in the line segment. So the interesting case is when the two colors are not the same. In that case, we want the rasterizer to shift the color from {@code c0} to {@code c1} as the rasterizer moves across the line segment.

We have two ways of writing an equation for the line segment. The line segment can be described by the two-point equation for a line,

{@code
   y(x) = y0 + (y1-y0)/(x1-x0)*(x - x0)  with  x0 <= x <= x1,
}

or by the vector parametric lerp equation,

{@code
   p(t) = (1-t)*v0 + t*v1  with  0 <= t <= 1.
}

We can use either equation to shade pixels on the line segment.

Let {@code (r0, g0, b0)} be the color {@code c0} at {@code v0} and let {@code (r1, g1, b1)} be the color {@code c1} at {@code v1}.

Given a value for {@code x} with {@code x0 <= x <= x1}, the following three linear equations linearly interpolate the three color components to the pixel at {@code (x, y(x))}.

{@code
   r(x) = r0 + (r1-r0)/(x1-x0)*(x - x0)
   g(x) = g0 + (g1-g0)/(x1-x0)*(x - x0)
   b(x) = b0 + (b1-b0)/(x1-x0)*(x - x0)
}

Given a value for {@code t} with {@code 0 <= t <= 1}, the following three lerp equations linearly interpolate the three color components to the pixel at {@code p(t)}.

{@code
   r(t) = (1-t)*r0 + t*r1
   g(t) = (1-t)*g0 + t*g1
   b(t) = (1-t)*b0 + t*b1
}

Notice that the lerp versions of the equations are easier to read and understand. But the rasterizer is written around the two-point equations, so it uses those. We will see below that the clipping algorithm uses the lerp equations.
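
For example, the two-point equations can be folded into the x-direction rasterization loop by stepping each color component along with {@code y}. This is only a sketch; it assumes {@code c0} and {@code c1} are {@code java.awt.Color} objects and that {@code vp}, {@code h}, and the slope {@code m} are as in the earlier loops.

{@code
    // Sketch: shade each pixel using the two-point equations r(x), g(x), b(x).
    final double dr = (c1.getRed()   - c0.getRed())   / (double)(x1 - x0);
    final double dg = (c1.getGreen() - c0.getGreen()) / (double)(x1 - x0);
    final double db = (c1.getBlue()  - c0.getBlue())  / (double)(x1 - x0);
    double y = y0;
    double r = c0.getRed(),  g = c0.getGreen(),  b = c0.getBlue();
    for (int x = x0; x <= x1; x += 1, y += m, r += dr, g += dg, b += db)
    {
       vp.setPixelVP(x - 1,
                     h - (int)Math.round(y),
                     new Color((int)Math.round(r),
                               (int)Math.round(g),
                               (int)Math.round(b)));
    }
}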

Anti-aliasing

The goal of adding an anti-aliasing step to the line rasterizer is to make lines look better. Anti-aliasing tries to smooth out the "jaggies" that are caused when a line being rasterized moves vertically from one horizontal row of pixels to the next row. There is a noticeable jump where the pixels drawn in one row do not line up with the pixels drawn in the next row.

https://en.wikipedia.org/wiki/Jaggies
https://en.wikipedia.org/wiki/Spatial_anti-aliasing
https://commons.wikimedia.org/wiki/File:LineXiaolinWu.gif
https://www.geeksforgeeks.org/anti-aliased-line-xiaolin-wus-algorithm/

Here is a picture of a line segment passing through a 5 by 4 grid of pixels. At the center of each physical pixel is the point that is the logical pixel.

{@code
     +-------+-------+-------+-------+
     |       |       |     / |       |
     |   .   |   .   |   ./  |   .   |
     |       |       |   /   |       |
     +-------+-------+--/----+-------+
     |       |       | /     |       |
     |   .   |   .   |/  .   |   .   |
     |       |       /       |       |
     +-------+------/+-------+-------+
     |       |     / |       |       |
     |   .   |   ./  |   .   |   .   |
     |       |   /   |       |       |
     +-------+--/----+-------+-------+
     |       | /     |       |       |
     |   .   |/  .   |   .   |   .   |
     |       /       |       |       |
     +------/+-------+-------+-------+
     |     / |       |       |       |
     |   ./  |   .   |   .   |   .   |
     |   /   |       |       |       |
     +-------+-------+-------+-------+
}

Here is how this line segment would be rasterized. Notice that there are very distinct jumps where the pixels "move over" from one column to the next.

{@code
     +-------+-------+-------+-------+
     |       |       |#####/#|       |
     |   .   |   .   |###./##|   .   |
     |       |       |###/###|       |
     +-------+-------+--/----+-------+
     |       |       |#/#####|       |
     |   .   |   .   |/##.###|   .   |
     |       |       /#######|       |
     +-------+------/+-------+-------+
     |       |#####/#|       |       |
     |   .   |###./##|   .   |   .   |
     |       |###/###|       |       |
     +-------+--/----+-------+-------+
     |       |#/#####|       |       |
     |   .   |/##.###|   .   |   .   |
     |       /#######|       |       |
     +------/+-------+-------+-------+
     |#####/#|       |       |       |
     |###./##|   .   |   .   |   .   |
     |###/###|       |       |       |
     +-------+-------+-------+-------+
}

Anti-aliasing tries to smooth out those jumps by "spreading" a pixel's intensity over two adjacent pixels.

{@code
     +-------+-------+-------+-------+
     |       |       |#####/#|       |
     |   .   |   .   |###./##|   .   |
     |       |       |###/###|       |
     +-------+-------+--/----+-------+
     |       |\\\\\\\|\/\\\\\|       |
     |   .   |\\\.\\\|/\\.\\\|   .   |
     |       |\\\\\\\/\\\\\\\|       |
     +-------+------/+-------+-------+
     |       |#####/#|       |       |
     |   .   |###./##|   .   |   .   |
     |       |###/###|       |       |
     +-------+--/----+-------+-------+
     |\\\\\\\|\/\\\\\|       |       |
     |\\\.\\\|/\\.\\\|   .   |   .   |
     |\\\\\\\/\\\\\\\|       |       |
     +------/+-------+-------+-------+
     |#####/#|       |       |       |
     |###./##|   .   |   .   |   .   |
     |###/###|       |       |       |
     +-------+-------+-------+-------+
}

Here is how we will "spread" the intensity of a pixel out over two adjacent pixels. Notice that, in each row of logical pixels, the line we are rasterizing passes between two horizontally adjacent logical pixels. In any given row of logical pixels, let {@code p0} and {@code p1} be the logical pixels on the left and right hand sides of the line.

{@code
                 /
     +-------+--/----+-------+-------+
     |  p0   | /     |       |       |
     |   .   |/  .   |   .   |   .   |
     |       /  p1   |       |       |
     +------/+-------+-------+-------+
           /
}

Choose the number {@code t}, with {@code 0 <= t <= 1}, so that the point {@code p(t)} defined by

{@code
      p(t) = (1 - t)*p0 + t*p1    (the lerp formula)
}

is the point where the line we are rasterizing intersects the line segment between {@code p0} and {@code p1}. Now give the pixel {@code p0} the shade of gray (the intensity) given by

{@code
     (r0, g0, b0) = (1-t, 1-t, 1-t)
}

and give the pixel {@code p1} the shade of gray (the intensity) given by

{@code
     (r1, g1, b1) = (t, t, t).
}

(Remember that Java lets us set the color of a pixel using either three floats between 0 and 1, or three ints between 0 and 255. Here, we are using three floats.) Notice that if the point {@code p(t)} is very near to {@code p0} (so {@code t} is near 0), then {@code p0} will be much brighter than {@code p1}, and if {@code p(t)} is near {@code p1} (so {@code t} is near 1), then {@code p1} will be brighter than {@code p0}. If {@code p(t)} is exactly in the middle of {@code p0} and {@code p1} (so {@code t = 0.5}), then the two pixels will be equally bright.

{@code
     +-------+-------+-------+-------+
     |       |       |     / |       |
 j+4 |   .   |   .   |   ./  |   .   |
     |       |       |   /   |       |
     +-------+-------+--/----+-------+
     |       |       | /     |       |
 j+3 |   .   |   .   |/  .   |   .   |
     |       |       /       |       |
     +-------+------/+-------+-------+
     |       |     / |       |       |
 j+2 |   .   |   ./  |   .   |   .   |
     |       |   /   |       |       |
     +-------+--/----+-------+-------+
     |       | /     |       |       |
 j+1 |   .   |/  .   |   .   |   .   |
     |       /       |       |       |
     +------/+-------+-------+-------+
     |     / |       |       |       |
  j  |   ./  |   .   |   .   |   .   |
     |   /   |       |       |       |
     +-------+-------+-------+-------+
        i       i+1     i+2     i+3     logical pixel coordinates
}

The code for doing anti-aliasing does not explicitly use the lerp formula shown above. Since all the logical pixels have integer coordinates, the {@code t} value in the lerp formula {@code (1-t)*p0 + t*p1} is just the fractional part of the x-coordinate of the point where the line crosses a row of logical pixels (that is, the point on the line at that row's integer y-coordinate). For lines with slope less than 1, {@code t} is instead the fractional part of the y-coordinate of the point where the line crosses a column of logical pixels (the point on the line at that column's integer x-coordinate).
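
Here is a sketch of how that fractional part might be used in the inner loop for a line rasterized in the y-direction. Only {@code vp.setPixelVP}, {@code h}, and the reciprocal slope {@code m} are carried over from the loops above; the local names are made up for this sketch, and the gamma correction described in the next section is left out.

{@code
    // Sketch: anti-aliased loop for a steep line (rasterized in the y-direction).
    double x = x0;
    for (int y = y0; y <= y1; y += 1, x += m)
    {
       final int xLeft  = (int)Math.floor(x);  // logical pixel p0, left of the line
       final int xRight = xLeft + 1;           // logical pixel p1, right of the line
       final float t = (float)(x - xLeft);     // fractional part of x

       // Spread the intensity: p0 gets 1-t, p1 gets t.
       vp.setPixelVP(xLeft  - 1, h - y, new Color(1 - t, 1 - t, 1 - t));
       vp.setPixelVP(xRight - 1, h - y, new Color(t, t, t));
    }
}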

Gamma Correction

In the previous renderer, every pixel in the framebuffer is set to either the color white={@code (1.0, 1.0, 1.0)} or the color black={@code (0, 0, 0)}. The idea behind anti-aliasing is to take a white pixel and "spread" its color between two adjacent pixels. So a white pixel with color {@code (1.0, 1.0, 1.0)} gets split into two adjacent pixels with colors {@code (1.0-t, 1.0-t, 1.0-t)} and {@code (t, t, t)}. Since the brightnesses of these two pixels sum to {@code (1.0, 1.0, 1.0)}, you might expect the two pixels together to be as bright (to our eyes) as the single white pixel. But they are not. When we turn on anti-aliasing, the image gets noticeably dimmer. We fix this with something called "gamma correction".

The reason the two adjacent pixels whose brightness sum to one do not seem as bright as a single pixel with brightness one is because the LCD monitor is purposely dimming pixels with brightness less than about 0.5. This is called "gamma expansion". And the LCD monitor does this because a digital camera purposely brightens the pixels with brightness less than about 0.5 (this is called "gamma compression"). So the monitor is undoing what the digital camera did to each pixel.

Since every LCD monitor dims any pixel that is already kind of dim (brightness less than about 0.5), if we want our pixels to look correct on the monitor's display, then we need to do our own "gamma compression" of each pixel before sending the pixel to the monitor. That makes our pixels seem, to the monitor, as if they came from a digital camera.

Gamma compression is also called "gamma encoding". Gamma expansion is also called "gamma decoding". The two (opposite) operations are both referred to as "gamma correction" (each device's operation "corrects" for the other device's operation).

Both gamma compression and gamma expansion are calculated using a "power rule", that is, an exponentiation function, {@code Math.pow(c, gamma)}, where {@code c} is a color value and {@code gamma} is the exponent.

Gamma compression and gamma expansion each have their own exponent, {@code g1} and {@code g2}, and the two exponents must be reciprocals of each other, {@code g1 = 1/g2}. Gamma expansion (in an LCD monitor) uses an exponent larger than 1, and it usually uses the exponent 2.2. So gamma compression (in a digital camera) uses 1/2.2.

If you have a number {@code c}, like a brightness, which is less than 1, and an exponent {@code gamma} which is greater than 1, then

{@code
      Math.pow(c, gamma) < c.
}

For example, think of what the squaring function does to the numbers between 0 and 1. So {@code gamma > 1} takes brightness values less than 1 and makes them smaller (which is how a monitor makes colors dimmer). This is more pronounced for numbers less than 0.5.

If you have a number {@code c} which is less than 1, and an exponent {@code gamma} which is also less than 1, then

{@code
      Math.pow(c, gamma) > c.
}

For example, think of what the square-root function does to the numbers between 0 and 1. So {@code gamma < 1} takes brightness values less than 1 and makes them larger (this is what a digital camera does). This is more pronounced for numbers less than 0.5.
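
For a concrete check of both rules, applying them to the (arbitrarily chosen) brightness value 0.25 gives the following.

{@code
     Math.pow(0.25, 2.2)   =  0.047...   (gamma expansion makes it dimmer)
     Math.pow(0.25, 1/2.2) =  0.532...   (gamma compression makes it brighter)
}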

In the rasterizer, after computing how the brightness {@code (1.0, 1.0, 1.0)} is split between two adjacent pixels as {@code (1-t, 1-t, 1-t)} and {@code (t, t, t)}, the brightness values {@code 1-t} and {@code t} are gamma encoded,

{@code
   Math.pow(t,   1/2.2)
   Math.pow(1-t, 1/2.2)
}

and the two gamma encoded colors are written into the two adjacent pixels in the framebuffer.
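
Continuing the anti-aliasing sketch from the previous section, the gamma encoding step might look like the following (again, only {@code setPixelVP} comes from the earlier code; the local names are illustrative).

{@code
    // Sketch: gamma encode the two intensities before writing them to the framebuffer.
    final double gamma = 1/2.2;
    final float iLeft  = (float)Math.pow(1 - t, gamma);  // encoded intensity for p0
    final float iRight = (float)Math.pow(t,     gamma);  // encoded intensity for p1
    vp.setPixelVP(xLeft  - 1, h - y, new Color(iLeft,  iLeft,  iLeft));
    vp.setPixelVP(xRight - 1, h - y, new Color(iRight, iRight, iRight));
}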

An obvious question is why do digital cameras and LCD monitors each do a calculation that undoes what the other one calculates? The answer is that gamma correction is a clever way for a digital camera to make efficient use of the eight binary digits in a byte.

The human eye is more sensitive to changes in dim light intensities than it is to changes in bright light intensities (this helps us see better in the dark). Light intensities (for each color, red, green, blue) are recorded by a digital camera as 8-bit bytes. So the camera can record 256 different levels of brightness for each color. Since the human eye is more sensitive to changes in dim light than to changes in bright light, the camera should use more of its brightness levels for dim light intensities and fewer levels for the bright light intensities. For example, out of the 256 possible levels, the camera might assign 187 levels to light intensities below 0.5, and the other 69 levels to light intensities above 0.5 (so about 73% of the possible brightness levels are used for the dimmer half of the light intensities and only 27% of the brightness levels are used for the brighter half of the light intensities). And this is exactly what the camera's gamma compression does.

Because the camera's gamma value is less than one, the camera's gamma function,

{@code
     x -> Math.pow(x, gamma),
}

has a steep slope for {@code x} near zero and shallow slope for {@code x} near one (recall the graph of the square root function). So light intensities less than 0.5 get spread apart when they are sent to their respective binary encodings and light intensities greater than 0.5 get squeezed together when they are sent, by the gamma function, to their binary encodings.

A camera's gamma value is usually 1/2.2. If we calculate the camera's gamma function with input 0.5, we get the following.

{@code
     0.5 -> Math.pow(0.5, 1/2.2) = 0.72974
}

Assume that the 256 binary values the camera stores for light intensities represent 256 evenly spaced numbers between 0.0 and 1.0. So the lower half of light intensities between 0.0 and 0.5 will be encoded and stored by the camera as binary values between 00000000 and 10111010, which is 73% of the binary values between {@code 0x00} and {@code 0xFF} (0.72974 * 255 = 186.08 and 186 in binary is 10111010).

So the camera uses about three times more encodings for the dimmer half of light intensities than for the brighter half. This gives the camera far more precision when recording a low light intensity than when recording a bright intensity. And that makes the camera match the human eye's light sensitivity.

https://en.wikipedia.org/wiki/Gamma_correction
https://www.scratchapixel.com/lessons/digital-imaging/digital-images
http://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/