Adding preliminary TUIO support for multi-touch systems (parsing OSC packets for 2D cursor descriptions)

In this post we will begin to add TUIO (Tangible User Interface Object) support to our project by implementing a server module that parses OSC (Open Sound Control) packets for 2D cursor descriptions. This will allow us to import blob events from a client application. We will use the TUIOdroid application, available for Android devices, to send UDP packets to our server module. The server module will parse the packets, and our trackers will import the resulting blob events. This post is intended to provide Read more [...]
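As a rough sketch of what the 2D-cursor profile involves, the snippet below handles an already-decoded /tuio/2Dcur "set" and "alive" message pair. The TuioCursor struct and handler names are illustrative assumptions rather than the post's actual classes, and the OSC/UDP decoding itself is left out.

```cpp
// Minimal sketch of handling decoded TUIO "/tuio/2Dcur" messages.
// Assumes OSC decoding has already produced the command and a flat
// argument list; names here are illustrative, not the project's code.
#include <iostream>
#include <iterator>
#include <map>
#include <vector>

struct TuioCursor {
    int   sessionId;   // unique id assigned by the TUIO client
    float x, y;        // normalized position in [0,1]
    float vx, vy;      // velocity components
    float accel;       // motion acceleration
};

// Active cursors keyed by session id.
static std::map<int, TuioCursor> g_cursors;

// Handle a "set" command: /tuio/2Dcur set s x y X Y m
void handleSet(int s, float x, float y, float vx, float vy, float m) {
    g_cursors[s] = TuioCursor{ s, x, y, vx, vy, m };
}

// Handle an "alive" command: any cursor not listed has been lifted.
void handleAlive(const std::vector<int>& aliveIds) {
    for (auto it = g_cursors.begin(); it != g_cursors.end(); ) {
        bool stillAlive = false;
        for (int id : aliveIds)
            if (id == it->first) { stillAlive = true; break; }
        it = stillAlive ? std::next(it) : g_cursors.erase(it);
    }
}

int main() {
    // Pretend we received: /tuio/2Dcur set 12 0.25 0.75 0 0 0
    handleSet(12, 0.25f, 0.75f, 0.0f, 0.0f, 0.0f);
    handleAlive({ 12 });   // cursor 12 is still down
    std::cout << g_cursors.size() << " active cursor(s)\n";
}
```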

A calibration method based on barycentric coordinates for multi-touch systems

In this post we will touch upon the calibration component for multi-touch systems. By the end of this post we will implement a calibration widget that is integrated with our tracker modules, but before we do that we'll discuss the mathematics behind one method for mapping camera space to screen space. Below is a screen capture of our calibration widget awaiting input. Our calibration implementation will divide each quad in the above image into two triangles, an upper left and a lower right Read more [...]
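To preview the mathematics, the sketch below maps a camera-space point into screen space by computing its barycentric coordinates in one calibration triangle and reusing them as weights on the matching screen-space triangle. The Vec2 type and function names are illustrative, not the post's actual code.

```cpp
// Sketch of the barycentric mapping idea, assuming each calibration
// triangle stores three camera-space vertices and their matching
// screen-space vertices.
#include <iostream>

struct Vec2 { float x, y; };

// Barycentric coordinates of p with respect to triangle (a, b, c).
static void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                        float& l0, float& l1, float& l2) {
    float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    l0 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
    l1 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
    l2 = 1.0f - l0 - l1;
}

// Map a camera-space point into screen space: compute its barycentric
// coordinates in the camera triangle, then apply them as weights to the
// corresponding screen triangle.
Vec2 cameraToScreen(Vec2 p, const Vec2 cam[3], const Vec2 scr[3]) {
    float l0, l1, l2;
    barycentric(p, cam[0], cam[1], cam[2], l0, l1, l2);
    return Vec2{ l0 * scr[0].x + l1 * scr[1].x + l2 * scr[2].x,
                 l0 * scr[0].y + l1 * scr[1].y + l2 * scr[2].y };
}

int main() {
    Vec2 cam[3] = { {100, 100}, {300, 100}, {100, 300} }; // camera triangle
    Vec2 scr[3] = { {0, 0},     {1920, 0},  {0, 1080}  }; // screen triangle
    Vec2 s = cameraToScreen({200, 200}, cam, scr);
    std::cout << s.x << ", " << s.y << "\n";              // 960, 540
}
```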

Implementing a multi-touch event system to deliver blob events to registered widgets (and creating a demo photo application with inertia) part 2 of 2

In the previous post we discussed the event queue and the abstract base class for the widgets. Now we will concentrate on creating some widgets that we can use by extending the base class, and we will look at setting up the queue, registering widgets, and calling the processEvents() method in our program's main loop. By the end of this post we should be able to implement the photo application shown in the image below. The first widget we will declare is the cRectangle object. The purpose Read more [...]
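A compressed sketch of the pieces this part covers might look like the following: a cRectangle extending a widget base class, a queue that widgets register with, and a processEvents() call driven from the main loop. The cEventQueue interface and BlobEvent fields shown are assumptions based on the description above, not the project's actual declarations.

```cpp
// Rough sketch of extending a widget base class and driving it from the
// main loop; interfaces here are illustrative, not the post's code.
#include <iostream>
#include <memory>
#include <vector>

struct BlobEvent { int id; float x, y; };   // simplified event payload

class cWidgetBase {
public:
    virtual ~cWidgetBase() = default;
    virtual bool contains(float x, float y) const = 0;
    virtual void onBlobDown(const BlobEvent& e) = 0;
};

// A concrete widget: an axis-aligned rectangle that reports hits.
class cRectangle : public cWidgetBase {
public:
    cRectangle(float x, float y, float w, float h) : x_(x), y_(y), w_(w), h_(h) {}
    bool contains(float x, float y) const override {
        return x >= x_ && x <= x_ + w_ && y >= y_ && y <= y_ + h_;
    }
    void onBlobDown(const BlobEvent& e) override {
        std::cout << "rectangle hit by blob " << e.id << "\n";
    }
private:
    float x_, y_, w_, h_;
};

class cEventQueue {
public:
    void registerWidget(std::shared_ptr<cWidgetBase> w) { widgets_.push_back(w); }
    void push(const BlobEvent& e) { pending_.push_back(e); }
    // Deliver every queued event to the widgets whose area it falls in.
    void processEvents() {
        for (const BlobEvent& e : pending_)
            for (auto& w : widgets_)
                if (w->contains(e.x, e.y)) w->onBlobDown(e);
        pending_.clear();
    }
private:
    std::vector<std::shared_ptr<cWidgetBase>> widgets_;
    std::vector<BlobEvent> pending_;
};

int main() {
    cEventQueue queue;
    queue.registerWidget(std::make_shared<cRectangle>(10, 10, 100, 100));
    queue.push({ 1, 50, 50 });   // a blob lands inside the rectangle
    queue.processEvents();       // would normally be called once per frame
}
```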

Implementing a multi-touch event system to deliver blob events to registered widgets (and creating a demo photo application with inertia) part 1 of 2

In my previous posts we've discussed blob extraction and tracking. Now we'll take it one step further and design an event system that packages tracked blobs as events and delivers them to registered widgets. In this post we will focus on the event system and the widget base class. In the next post we will extend the widget base class to create a photo application like the one below. I've drawn up a quick flowchart of what we'll be attempting to implement. Everything in the left column under "Input System" we've Read more [...]
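For a feel of the part-1 pieces, here is a minimal, illustrative sketch of an event type and the abstract widget base class that part 2 derives from; the names are placeholders rather than the post's exact declarations.

```cpp
// Illustrative event type and abstract widget interface (placeholders).
#include <iostream>

enum class BlobEventType { Down, Move, Up };

struct BlobEvent {
    BlobEventType type;
    int   id;        // tracker-assigned blob id
    float x, y;      // position in screen space
};

// Widgets register with the event queue and receive only the events
// whose position falls inside their bounds.
class cWidgetBase {
public:
    virtual ~cWidgetBase() = default;
    virtual bool contains(float x, float y) const = 0;
    virtual void handleEvent(const BlobEvent& e) = 0;
};

int main() {
    BlobEvent e{ BlobEventType::Down, 1, 0.5f, 0.5f };
    std::cout << "event for blob " << e.id << "\n";
}
```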

C++ implementation of the Connected Component Labeling method using the Disjoint Set data structure

In the post before last we discussed using cvBlobsLib as a tool for blob extraction. We're going to revisit the extraction theme and look at a C++ implementation of the Connected Component Labeling method, but before we do that we're going to look at an implementation of the Disjoint Set data structure that will provide us with the necessary tool for generating equivalence sets. The Disjoint Set data structure allows us to track elements partitioned into disjoint subsets. Two sets are disjoint Read more [...]
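For reference, here is a compact union-find sketch with path compression and union by rank, the operations the labeling pass relies on to merge provisional labels into equivalence sets; this is an illustrative version rather than the post's exact class.

```cpp
// Disjoint-set (union-find) sketch with path compression and union by rank.
#include <iostream>
#include <numeric>
#include <utility>
#include <vector>

class DisjointSet {
public:
    explicit DisjointSet(int n) : parent_(n), rank_(n, 0) {
        std::iota(parent_.begin(), parent_.end(), 0);  // each element starts as its own root
    }
    int find(int x) {
        if (parent_[x] != x) parent_[x] = find(parent_[x]);  // path compression
        return parent_[x];
    }
    void unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        if (rank_[a] < rank_[b]) std::swap(a, b);   // attach shorter tree under taller
        parent_[b] = a;
        if (rank_[a] == rank_[b]) ++rank_[a];
    }
private:
    std::vector<int> parent_, rank_;
};

int main() {
    DisjointSet ds(5);
    ds.unite(0, 1);          // labels 0 and 1 touch, so they are equivalent
    ds.unite(3, 4);
    std::cout << (ds.find(0) == ds.find(1)) << "\n";  // 1: same component
    std::cout << (ds.find(1) == ds.find(3)) << "\n";  // 0: different components
}
```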

Fiducial detection based on topological region adjacency information with identification by angle information

In my last post we discussed blob extraction and event tracking. We will continue with that project by adding support for two-dimensional fiducial tracking. We will attempt to implement the fiducial detection algorithm used on the Topolo Surface [1]. We will first describe the fiducials and how their properties are encoded in their structure, and we will add a class to our project to support fiducial detection and rendering. When finished we will obtain the following renderings: Below Read more [...]
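To give a flavor of the topological half of the approach, the toy sketch below checks whether a region's nesting structure (its count of directly enclosed child regions, given as a parent-index array) matches a known signature; in the full algorithm the angular positions of the nested regions then refine the identification. Everything here is illustrative, not the Topolo Surface implementation.

```cpp
// Toy check of a fiducial's topological signature from a parent-index
// hierarchy (similar in spirit to what contour extraction provides).
#include <iostream>
#include <vector>

// For each region, the index of its parent region (-1 for the background).
using Hierarchy = std::vector<int>;

// Count the direct children of region r.
int childCount(const Hierarchy& parent, int r) {
    int n = 0;
    for (int p : parent) if (p == r) ++n;
    return n;
}

// Treat a region as a fiducial candidate if it encloses exactly
// `expected` child regions (an intentionally simplified signature).
bool matchesSignature(const Hierarchy& parent, int r, int expected) {
    return childCount(parent, r) == expected;
}

int main() {
    // Region 0 is the outer blob; regions 1..3 are nested inside it.
    Hierarchy parent = { -1, 0, 0, 0 };
    std::cout << std::boolalpha
              << matchesSignature(parent, 0, 3) << "\n";  // true
}
```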

Detecting blobs with cvBlobsLib and tracking blob events across frames

In my previous post we discussed using OpenCV to prepare images for blob detection. We will build upon that foundation by using cvBlobsLib to process our binary images for blobs. A C++ vector object will store our blobs, and the center points and axis-aligned bounding boxes will be computed for each element in this vector. We will define a class that operates on this vector to track our blobs across frames, converting them into events. An event will be one of three types: BLOB_DOWN, BLOB_MOVE, Read more [...]
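As an illustration of the tracking step, the sketch below matches each frame's blobs to the previous frame's by nearest center point, emitting BLOB_DOWN for new blobs and BLOB_MOVE for matched ones; the excerpt cuts off before naming the third type, so BLOB_UP below is an assumed placeholder for blobs that disappear between frames.

```cpp
// Illustrative nearest-neighbor blob tracking across two frames.
#include <cmath>
#include <iostream>
#include <vector>

struct Blob { float cx, cy; };   // center point of an extracted blob

enum EventType { BLOB_DOWN, BLOB_MOVE, BLOB_UP /* assumed name */ };

struct BlobEvent { EventType type; float x, y; };

// Compare the current frame's blobs to the previous frame's within a
// small distance threshold and emit the corresponding events.
std::vector<BlobEvent> trackFrame(const std::vector<Blob>& prev,
                                  const std::vector<Blob>& curr,
                                  float maxDist = 20.0f) {
    std::vector<BlobEvent> events;
    std::vector<bool> prevMatched(prev.size(), false);
    for (const Blob& b : curr) {
        int best = -1; float bestDist = maxDist;
        for (size_t i = 0; i < prev.size(); ++i) {
            if (prevMatched[i]) continue;
            float d = std::hypot(b.cx - prev[i].cx, b.cy - prev[i].cy);
            if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
        }
        if (best >= 0) { prevMatched[best] = true; events.push_back({ BLOB_MOVE, b.cx, b.cy }); }
        else           { events.push_back({ BLOB_DOWN, b.cx, b.cy }); }
    }
    for (size_t i = 0; i < prev.size(); ++i)
        if (!prevMatched[i]) events.push_back({ BLOB_UP, prev[i].cx, prev[i].cy });
    return events;
}

int main() {
    std::vector<Blob> prev = { {100, 100} };
    std::vector<Blob> curr = { {104, 102}, {300, 300} };
    for (const BlobEvent& e : trackFrame(prev, curr))
        std::cout << e.type << " at " << e.x << ", " << e.y << "\n";
}
```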

Using OpenCV to process images for blob detection (with SDL and OpenGL for rendering)

In this post I will discuss how you can capture and process images in preparation for blob detection.  A future post will discuss the process of detecting and tracking blobs as well as fiducials, but here we are concerned with extracting clean binary images that will be passed to our detector module.  We will use OpenCV's VideoCapture class to extract images from our capture device and then pass these images through a series of filters so that we end up with a binary image like below. We Read more [...]
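Here is a minimal OpenCV sketch of the capture-and-filter idea: grab frames with cv::VideoCapture and reduce them to a binary image. The exact filter chain the post settles on may differ; Gaussian blur followed by Otsu thresholding is just a stand-in here, and the SDL/OpenGL rendering path is omitted.

```cpp
// Capture frames and reduce them to a binary image (illustrative chain).
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);              // default capture device
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, binary;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);       // drop color
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);     // suppress noise
        cv::threshold(gray, binary, 0, 255,
                      cv::THRESH_BINARY | cv::THRESH_OTSU);  // binarize
        cv::imshow("binary", binary);
        if (cv::waitKey(1) == 27) break;                     // Esc to quit
    }
    return 0;
}
```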