Shadow volumes

The purpose of this project was to provide a straightforward implementation of shadow volumes using the depth-fail approach. The project is divided into the following sections:

- After loading an object, detect duplicate vertices.
- Build the object's edge list while identifying each face associated with an edge.
- Identify the profile edges from the perspective of the light source.
- Create the quadrilaterals defining the shadow volume by extruding the profile edges.
- Create the shadow volume Read more [...]

Path tracing: sphere and triangle texture mapping

I had been planning to refine this project and offer a more detailed writeup on adding sphere and triangle texture mapping to the path tracer project, but for the time being I thought I would offer the code for download. Below are a couple of screen captures from the most recent version of the project. Download this project: path_tracer_texture_mapping.tar.bz2 Read more [...]

Feature detection and tracking with an affine consistency check

The equations for detecting features, tracking them between consecutive frames, and checking for consistency using an affine transformation will be derived below using the inverse compositional approach. We will begin by deriving the equations for tracking, because this will yield some insight into which features would be good to track. The source code for this project is available for download at the end. Below is a video of this project in action. Translation Read more [...]

kd tree construction using the surface area heuristic, stack-based traversal, and the hyperplane separation theorem

In this post we will employ the hyperplane separation theorem and the surface area heuristic for kd tree construction to improve the performance of our path tracer. Previous posts have relied simply on detecting intersections between an axis aligned bounding box and the minimum bounding box of a triangle primitive. By utilizing the hyperplane separation theorem, we can cull additional triangles from a list of potential intersection candidates. From here, we will set out to construct a kd tree Read more [...]

Path tracer: thin lens, texture mapping, Fresnel equations, and smooth shading

A few new features have been added to our path tracer. The depth of field extension has been reworked slightly using the thin lens equation allowing us to specify a focal length and aperture. Fresnel equations have been added to more accurately model the behavior of light at the interface between media of different refractive indices. Textures can be applied to the plane primitive, and normals can be interpolated across the triangle primitive allowing for smooth shading. Below are three renders Read more [...]

Path tracer with triangle primitives and binary space partitioning

UPDATE: The post below was a purely naive attempt at implementing a rudimentary bounding volume hierarchy. A much more efficient implementation using a kd tree is available in this post. We will pick up where we left off in this post and attempt to add triangles to our list of primitives. Once we are able to render triangles, this opens the door to rendering full scale models. However, because models will contain upwards of thousands of triangles, we need to be able to Read more [...]

An Arduino-based networked rover

The purpose of this project was to create an internet-based rover using the combination of a cheap RC vehicle, an Arduino Uno with a Seeed Relay Shield, a Samsung Galaxy S3 in host mode, a workstation, and a PlayStation 3 controller. Our server software will run in the background on the S3 and visual feedback will be provided by the IP Webcam app available on Google Play. The client software will run on the workstation and send the status of the PlayStation 3 controller to the S3. The S3 will in Read more [...]

Path tracer depth of field

This is a small extension to the previous post. We will add a depth of field simulation to our path tracer project. I ran across this algorithm at this site. Below is a render of our path tracer with the depth of field extension. Essentially, we will define the distance to the focal plane and a blur radius. For each primary ray with origin $\mathbf{o}$ and normalized direction $\mathbf{d}$, we find its intersection with the focal plane, $\mathbf{p}$, and jitter the ray origin by an amount, $\boldsymbol{\delta}$, sampled from a disk of the given blur radius. We then define the new ray direction as $\mathbf{d}' = \frac{\mathbf{p} - (\mathbf{o} + \boldsymbol{\delta})}{\lVert \mathbf{p} - (\mathbf{o} + \boldsymbol{\delta}) \rVert}$. Read more [...]

A basic path tracer with CUDA

The path tracer we will create in this project will run on CUDA-enabled GPUs. You will need to install the CUDA Toolkit available from NVIDIA. The device code for this project uses classes and must be compiled with compute capability 2.0. If you are unsure what compute capability your card has, check out this list. Below are two screen captures of this project in action. This path tracer is basic, fairly crude, and inefficient. I'll provide a brief overview of the code before Read more [...]

Bidiagonalization using Householder transformations

The previous post was a discussion on employing Householder transformations to perform a QR decomposition. This post will be short. I've had this code lying around for a while now and thought I would make it available. The process of bidiagonalization using Householder transformations amounts to nothing more than alternating between left and right transformations. The cMatrix::householderBidiagonalization() method: Download the source: Read more [...]

QR decomposition using Householder transformations

It's been a while since my last post. A project I have in the works requires some matrix decompositions, so I thought this would be a good opportunity to get a post out about QR decompositions using Householder transformations. For the moment we will focus on the field of real numbers, though we can extend these concepts to the complex field if necessary. Theorem. A real matrix, $\mathbf{A}$, can be decomposed as $\mathbf{A} = \mathbf{Q}\mathbf{R}$, where $\mathbf{Q}$ Read more [...]

Using Fourier synthesis to generate a fractional Brownian motion surface

In this post we will discuss generating fractal terrain. In the previous post we implemented our own fast Fourier transform in order to simulate an ocean surface. In that post we implemented an unnormalized inverse transform. It seemed logical to employ our FFT object as a means of generating terrain, but in order to do so we will need to add a method to compute the forward transform. This will be a relatively brief post. Most of the foundation has been laid with our ocean simulation, so we Read more [...]

Ocean simulation part two: using the fast Fourier transform

In this post we will analyze the equations for the statistical wave model presented in Tessendorf's paper[1] on simulating ocean water. In the previous post we used the discrete Fourier transform to generate our wave height field. We will proceed with the analysis in order to implement our own fast Fourier transform. With this implementation at our disposal we will be able to achieve interactive frame rates. Below are two screen captures of our result. The first uses a version of the shader Read more [...]

Ocean simulation part one: using the discrete Fourier transform

In this post we will implement the statistical wave model from the equations in Tessendorf's paper[1] on simulating ocean water. We will implement this model using a discrete Fourier transform. In part two we will begin with the same equations but provide a deeper analysis in order to implement our own fast Fourier transform. Using the fast Fourier transform, we will be able to achieve interactive frame rates. Below are two screen captures. The first is a rendering of the surface using some Read more [...]

Tangent space normal mapping with GLSL

In the previous post we discussed lighting and environment mapping, and our evaluation of the lighting contribution was performed in view space. Here we will discuss lighting in tangent space and extend our lighting model to include a normal map. If we apply a texture to a surface, then for every point in the texture we can define a set of vectors that are tangent to that point. Once we transform our light vector, halfway vector, and normal vector into tangent space, we can use the normal map Read more [...]

Lighting and environment mapping with GLSL

In this post we will expand on our skybox project by adding an object to our scene for which we will evaluate lighting contributions and environment mapping. We will first make a quick edit to our Wavefront OBJ loader to utilize OpenGL's Vertex Buffer Object. Once we can render an object we will create a shader program to evaluate the lighting and reflections. Below are a couple of screen grabs of the final result, along with a couple of video captures. Read more [...]

Rendering a skybox using a cube map with OpenGL and GLSL

I realized the use of OpenGL in my previous posts wasn't up to spec. This post will attempt to implement skybox functionality using the more recent specifications. We'll use GLSL to implement a couple of simple shaders. We will create an OpenGL program object to which we will bind our vertex and fragment shaders. A Vertex Buffer Object will be used to store our cube vertices and another to store the indices for the faces. Using a cube map texture will allow our vertices to double as texture coordinates. Read more [...]

A preliminary Wavefront OBJ loader in C++

This post will examine the Wavefront OBJ file format and present a preliminary loader in C++. We will overlook the format's support for materials and focus purely on the geometry. Once our model has been loaded, an OpenGL Display List will be used to render the model. Below is a rendering of a dragon model available at The Stanford 3D Scanning Repository. In the OBJ file used for this render the vertex normals were not present. At run time, normals were evaluated at each face for lighting calculations Read more [...]

A keyboard handler and timer in C++ for the Linux platform

In this post we will construct a simple timer for evaluating the duration of a frame in addition to a keyboard handler. We will use these objects in conjunction to update our position relative to a cube. SDL and OpenGL will be used for rendering. Below is a frame captured from our application. We will query an instance of our timer once for each pass through our application loop. It should yield the duration of the previous frame. The position of objects in the current frame can then Read more [...]

A Linux C++ joystick object

In this post we will implement an object in C++ for accessing the state of an attached joystick. Below is a rendering using SDL and OpenGL of a joystick state. In "/usr/include/linux/joystick.h" we find the following event structure: Once we have opened a device node for reading we will populate this event structure with the data read from the device. We define a structure for holding the state of the joystick. As we parse the event data, the joystick state structure is updated Read more [...]

Adding preliminary TUIO support for multi-touch systems (parsing OSC packets for 2D cursor descriptions)

In this post we will begin to add TUIO (Tangible User Interface Object) support to our project by implementing a server module to parse OSC (Open Sound Control) packets for 2D cursor descriptions. This will allow us to import blob events from a client application. We will use the TUIOdroid application available for android devices to send UDP packets to our server module. The server module will parse these events, and our trackers will import the blob events. This post is intended to provide Read more [...]

A calibration method based on barycentric coordinates for multi-touch systems

In this post we will touch upon the calibration component for multi-touch systems. By the end of this post we will implement a calibration widget that is integrated with our tracker modules, but before we do that we'll discuss the mathematics behind one method for mapping camera space to screen space. Below is a screen capture of our calibration widget awaiting input. Our calibration implementation will divide each quad in the above image into two triangles, an upper left and a lower right Read more [...]

Implementing a multi-touch event system to deliver blob events to registered widgets (and creating a demo photo application with inertia) part 2 of 2

In the previous post we discussed the event queue and the abstract base class for the widgets. Now we will concentrate on creating some widgets that we can use by extending the base class, and we will look at setting up the queue, registering widgets, and calling the processEvents() method in our program's main loop. By the end of this post we should be able to implement the photo application shown in the image below. The first widget we will declare is the cRectangle object. The purpose Read more [...]

Implementing a multi-touch event system to deliver blob events to registered widgets (and creating a demo photo application with inertia) part 1 of 2

In my previous posts we've discussed blob extraction and tracking. Now we'll take it one step further and design an event system to handle blob events and deliver them to registered widgets. In this post we will focus on the event system and the widget base class. In the next post we will extend the widget base class to create a photo application like below. I've drawn up a quick flowchart of what we'll be attempting to implement. Everything in the left column under "Input System" we've Read more [...]

C++ implementation of the Connected Component Labeling method using the Disjoint Set data structure

In the post before last we discussed using cvBlobsLib as a tool for blob extraction. We're going to revisit the extraction theme and look at a C++ implementation of the Connected Component Labeling method, but before we do that we're going to look at an implementation of the Disjoint Set data structure that will provide us with the necessary tool for generating equivalence sets. The Disjoint Set data structure allows us to track elements partitioned into disjoint subsets. Two sets are disjoint Read more [...]

Fiducial detection based on topological region adjacency information with identification by angle information

In my last post we discussed blob extraction and event tracking. We will continue with that project by adding support for two-dimensional fiducial tracking. We will attempt to implement the fiducial detection algorithm used on the Topolo Surface[1]. We will first describe the fiducials and how their properties are encoded in their structure, and we will add a class to our project to support fiducial detection and rendering. When finished we will obtain the following renderings: Below Read more [...]

Detecting blobs with cvBlobsLib and tracking blob events across frames

In my previous post we discussed using OpenCV to prepare images for blob detection. We will build upon that foundation by using cvBlobsLib to process our binary images for blobs. A C++ vector object will store our blobs, and the center points and axis-aligned bounding boxes will be computed for each element in this vector. We will define a class that operates on this vector to track our blobs across frames, converting them to an event type. An event will be one of three types: BLOB_DOWN, BLOB_MOVE, Read more [...]

Using OpenCV to process images for blob detection (with SDL and OpenGL for rendering)

In this post I will discuss how you can capture and process images in preparation for blob detection. A future post will discuss the process of detecting and tracking blobs as well as fiducials, but here we are concerned with extracting clean binary images that will be passed to our detector module. We will use OpenCV's VideoCapture class to extract images from our capture device and then pass these images through a series of filters so that we end up with a binary image like below. We Read more [...]