Kombucha is a fermented tea which has many purported health benefits. I make and drink it because: it may be a probiotic; it may contain beneficial vitamins; I want to drink green tea regularly; and I enjoy the taste, including the mild “bite” it provides. I usually mix it with something else, either fruit juice or plain green tea. For a more detailed description of kombucha and its history, see this article.

The process of making kombucha involves simply allowing a combination of yeast and bacteria to convert sugar into carbon dioxide and acetic acid (the acid in vinegar). The yeast/bacteria combination is called a SCOBY, which is an acronym for “symbiotic culture of bacteria and yeast”. The sugar used must be an actual sugar, not just a sweetener. Table sugar (sucrose) works best. Other sugars, such as honey, maple syrup, brown rice syrup, molasses or agave syrup, will work; however, they come with additional considerations. First, the timing will be different for each sugar. Also, the sugar must be sterile, so that it doesn’t interfere with the SCOBY. Lastly, the flavor of the kombucha will be different.
For the simplest kombucha, here’s the list of tools and ingredients.

Tools:

  • Container for fermenting, at least three quarts capacity
  • Spoon for stirring
  • Air permeable cloth like cheesecloth
  • Rubber band or string to hold the cloth in place
  • Air-tight container for the new SCOBY and starter kombucha


Ingredients:

  • Green tea
  • Sugar

One of the most important considerations for making kombucha is that everything be kept very clean. You can handle the SCOBY and even put your hands in the kombucha (to retrieve the SCOBY) but your hands must be very clean — free of soaps and perfumes and lotions. The container and the spoon and anything else which will come in contact with or contain the kombucha or the SCOBY must be very clean. Also, since kombucha is rather acidic (around a pH of 3) the container and utensils should be either food grade glass or stainless steel. They most definitely shouldn’t be plastic or reactive metals.

It’s possible, but not trivial, to grow your own SCOBY from a bottle of commercial kombucha tea. It’s easier to buy a SCOBY from your grocer, and best of all to get one from a friend or someone local. The SCOBY should be a milky white color; it should hold together well and shouldn’t have any black spots or fuzzy mold on it. The SCOBY should also come with some kombucha from a previous batch, to keep the SCOBY moist and to provide the acid for starting the new batch of kombucha.

Begin the process by brewing two quarts of your favorite tea (green or black are fine). Pour the tea into the container, cover it with the cloth and allow it to return to room temperature (about 20 C or 70 F). If your container is clear glass, place it so it won’t be in direct sunlight.

Once the tea is at room temperature, stir in two cups of sugar. Ensure that the sugar is completely dissolved.

Separate the SCOBY from its starter kombucha. You can place the SCOBY on a clean plate or in a clean bowl. If you’re dextrous, you can hold the SCOBY, but it will drip. Pour about a cup of the starter kombucha into your tea and sugar mixture; stir it well.

If you know which side of the SCOBY was the top, gently place it on the tea/sugar/starter mixture in the same orientation. Sometimes it will float and sometimes it won’t — it’s not really important. Cover the container with the cloth (I’ve found that a paper towel works best) and secure it with the rubber band or the string. The container must not be airtight, but the cloth or paper must be dense enough to keep out dust and small insects. Place the container somewhere it won’t be disturbed and where it can remain at about 70 degrees F. If the ambient temperature is cooler, the kombucha will require more time to ferment. If the temperature is consistently higher, it will require less time.

After about one week, take a look at the SCOBY. It should have floated to the surface, or a new SCOBY should have formed on the surface. A new SCOBY may be very thin. It should also be milky white and smooth. Carefully look for signs of black or fuzzy mold. If you do find signs of mold, it’s safest to assume that your kombucha was somehow contaminated. All of the contents should be discarded, everything must be washed very thoroughly and the process should be started again.

If everything else looks good, it’s time to taste the kombucha. Use a clean plastic drinking straw or something which will allow you to reach beneath the SCOBY and extract enough of the kombucha to taste. You’re checking three things:

  1. The acidity
  2. The sweetness
  3. The general flavor

It should taste or feel acidic. If it does not, there’s a risk of the kombucha being contaminated by the wrong bacteria. The acidity can be corrected by the addition of a bit of apple cider vinegar.

If it still tastes sweet, that indicates that there’s still sugar to feed the SCOBY. You can stop at this point and have a sweeter kombucha or continue to ferment and have a thicker SCOBY, more probiotics and vitamins, and a “livelier” kombucha.

The flavor is either going to be something you enjoy (or learn to enjoy) or something you’ll want to adjust later by mixing the finished kombucha with something else.

If you want more acidity and less sweetness, let the kombucha ferment longer. The cooler the ambient temperature, the longer the kombucha must ferment. If the acidity and sweetness are acceptable, wash your hands well and clean your new SCOBY container. Remove the SCOBY from the kombucha and place it in the new jar. You may be able to separate the SCOBY into several layers, each of which can subsequently be used for the next batch(es) of kombucha. Place at least one cup of the new kombucha with the new SCOBY and close the container. You must fully cover the new SCOBY with the new kombucha. The new SCOBY(s) can be preserved in the fridge for at least a month. You’ll need to return them to room temperature before you use them in a new batch of kombucha.

Transfer the remaining new batch of kombucha to clean bottles or jars. For safety, these bottles must be able to withstand a bit of internal air pressure, in case the kombucha continues to ferment after being bottled. You can decide to either fill each bottle with just kombucha or to leave room in each bottle for the addition of another flavoring liquid.

It’s now time to enjoy your batch of kombucha, to share it with friends and to even share your SCOBY. It’s not necessary to use an entire SCOBY to make a batch of kombucha.


Covariance matrices are a way of describing the relation between a collection of variables. A single covariance value describes the relation between two variables. They are a tool for estimating the possible error in a numerical value and for predicting a numerical value. One of their several applications is in robotics sensor fusion with “regular” and “extended” Kalman filters. In this article I’ll describe how to interpret a covariance matrix and provide a practical example. I’ll leave the formal mathematical and general definition to someone better at that than me.

Let’s begin with the concept of the “variance” of a numerical value. That’s a measure of how much the value can be expected to vary around its average. For example, if we were to measure the outdoor air temperature with a digital electronic sensor, we may want to know how much spread to expect in that measurement. That expected spread is called the variance and it’s described as a single value. Variance is always non-negative. For a more in-depth description of variance, please see http://en.wikipedia.org/wiki/Variance.
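To make that concrete, the variance of a handful of temperature readings can be computed with Python’s standard library. The readings below are invented for illustration, not real sensor data:

```python
import statistics

# Hypothetical temperature readings in degrees C; these values are
# invented for illustration, not real sensor data.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1]

mean = statistics.mean(readings)

# Population variance: the average of the squared deviations from the mean.
variance = statistics.pvariance(readings)

print(mean)
print(variance)  # a single, non-negative value
```

The tighter the readings cluster around their average, the smaller this single value becomes.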

For the rest of this article I’ll use the terms “value” and “variable” interchangeably. I suppose we could think of a “value” as the current value of a particular “variable”.

Now imagine that there are several properties or conditions or states being measured at the same time and that we’d like to know if there is any relationship between those values. If we could predict in advance how each variable changes, relative to every other variable, that would give us two useful things. First, it would allow us to better identify (and eliminate) outlier values, where one particular value has changed so much that it’s probably not a good measurement. And second, if at one time a measured value was missed, it might be possible to predict what the value should be, based on how all of the other values to which it’s related have changed.

To proceed from a single variance to the idea of covariance, and then to a collection of covariances contained in a matrix, we’ll first need an understanding of covariance itself. Instead of expressing the expected range of change in one variable, a covariance expresses the correlation between a change in one variable and a change in another variable. For a much more in-depth explanation, see http://en.wikipedia.org/wiki/Covariance.

To illustrate, we’ll need a more complicated example. Let’s assume we have a mobile robot which can measure both its current position and its orientation. Since this robot can’t levitate or swim we’ll simplify the position and use only the two dimensional X-Y plane. That means the robot’s current position can be adequately described as a position along the X axis and a position along the Y axis. In other words, the robot’s position can be described as (x, y).

The position describes where the robot is located on the surface, which may be a parking lot or the living room floor or the soccer pitch, but it doesn’t describe in which direction it’s pointed. That information about the current state of the robot is called the orientation and will require one dimension. We’ll call this orientation dimension the yaw. The yaw describes in which direction it’s pointing. It’s worth repeating that this is a simplified way of representing the robot’s position and orientation. A full description would require three position values (x, y and z) and also three orientation values (roll, pitch and yaw). The concepts about to be described will still work with a six-dimensional representation of the robot state (position and orientation). Also, yaw is sometimes identified by the lower case Greek letter theta.

Now that we can describe both the position and the orientation of the robot at any point in time and assume that we can update those descriptions at a reasonably useful and meaningful frequency, we can proceed with the description of a covariance matrix. At each point in time, we’ll be measuring a total of three values: the x and y position and the yaw orientation. We could think of this collection of measurements as a vector with three elements.

We’ll start with two sets of measurements, each of which contains three values. Assume the first measurements were taken at 11:02:03 today and we’ll call that time t1. The second set were taken at 11:02:04 and we’ll call that time t2. We’ll also assume that our measurements are taken once per second. The measurement frequency isn’t as important as consistency in the frequency. Covariance itself doesn’t depend upon time, but the timing will become useful further on in this example.

Covariance is a description of how much change to expect in one variable when some other variable changes by a particular amount and in a particular direction. Using the position and orientation example we’ve started, we’d like to know what to expect of the yaw measurement from time t2 when the change in the y measurement between time t1 and t2 was large in the positive direction. Covariance can tell us to expect a similarly large positive change in yaw when y becomes more positive. It could also predict that yaw would become more negative when y became more positive. Lastly, it could state that there doesn’t appear to be any predictable correlation between a change in yaw and a change in y.

Just in case, let’s try a possibly more intuitive example of a correlation. Our initial example measures the position of the robot, with a corresponding x and y value, every second. Since we have regular position updates; since we know the amount of time between the updates (one second); and since we can calculate the distance between the position at time t1 and the position at time t2, we can now calculate the velocity at time t2. We’ll actually get the speed along the x axis and the speed along the y axis which can be combined into a velocity.
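Under that one-second sampling assumption, the speed along each axis, and the combined speed, fall out of two successive position measurements. The coordinates below are made-up values for illustration:

```python
import math

# Two successive (x, y) positions in meters, one second apart.
# The coordinates are invented for illustration.
x1, y1 = 2.0, 1.0   # position at time t1
x2, y2 = 2.6, 1.8   # position at time t2
dt = 1.0            # seconds between measurements

vx = (x2 - x1) / dt          # speed along the x axis
vy = (y2 - y1) / dt          # speed along the y axis
speed = math.hypot(vx, vy)   # combined speed at time t2

print(vx, vy, speed)
```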

Assume the robot is pointed along the x axis in the positive direction and it’s moving. The regular measurements of the position should show a steadily increasing x value and, at least in a perfect world, an unchanging y value. What would you expect the yaw measurement to be – unchanging or changing? Since the robot is not changing its direction the yaw should not be changing. Put in terms of covariance, a change in the x value with no change in the y value is NOT correlated with a change in the yaw value. On the contrary, if we measured a change in yaw with no directional change in the velocity, we would have to suspect that at least one of those measurements, the yaw or the velocity, is incorrect.

From this basic idea of covariance we can better describe the covariance matrix. The matrix is a convenient way of representing all of the covariance values together. From our robotic example, where we have three values at every time t, we want to be able to state the correlation between one of the three values and all three of the values. You may have expected to compare one value to the other two, so please keep reading.

At time t2, we have a value for x, y and yaw. We want to know how the value of x at time t2 is correlated with the change in x from time t1 to time t2. We then also want to know how the value of x at time t2 is related to the values of y and yaw at time t2. If we repeat this comparison, we’ll have a total of 9 covariances, which means we’ll have a 3×3 covariance matrix associated with a three element vector. More generally, an n value vector will have an n×n covariance matrix. Each of the covariance values in the matrix will represent the covariance between two values in the vector.

The first part of the matrix which we’ll examine more closely is the diagonal values, from (1, 1) to (n, n). Those are the covariances of: x to a change in x; y to a change in y; and yaw to a change in yaw. The rest of the elements of the covariance matrix describe the correlation between a change in one value, x for example, and a different value, y for example. To enumerate all of the elements of the covariance matrix for our example, we’ll use the following:

Vector elements at time t:

1st:  x value

2nd:  y value

3rd:  yaw value

Covariance matrix elements:

1,1  1,2  1,3

2,1  2,2  2,3

3,1  3,2  3,3

where the elements correspond to:

1,1 x to x change covariance

1,2 x to y change covariance

1,3 x to yaw change covariance

2,1 y to x change covariance

2,2 y to y change covariance

2,3 y to yaw change covariance

3,1 yaw to x change covariance

3,2 yaw to y change covariance

3,3 yaw to yaw change covariance

Hopefully, at this point, it’s becoming clearer what the elements of a covariance matrix describe. It may also be apparent that for certain elements no correlation is expected to exist.

It’s important to remember that certain covariance values are meaningful and others don’t provide any directly useful information. A large, positive covariance implies that a large change in the first value, in one direction, will usually correspond with a similarly large change, in the same direction, in the related value. A large negative covariance implies a corresponding large change but in the opposite direction. Smaller covariance values can imply that there either is no correlation between the changes and the values or that the correlation exists but results in a small change.

Over the years, I have installed several submerged float switches in the bilge of my boat. They’re supposed to run the bilge pump when the level of water in the bilge exceeds a certain level and then stop the pump after the water level is lower. They usually work well for the first year or two but inevitably fail. Sometimes they fail simply because they’re constantly submerged in water and develop a very small leak. Other times they probably failed because I left them underwater over the winter and they froze.

To improve upon this I wanted to switch, pun intended, from a submerged mechanical switch to a solid state, adjustable switch. I wanted to remove as much of the switch as possible from the water, hoping to extend its useful life. Of course there are a few commercial switches available but they’re not always customizable. Also, buying a packaged switch wouldn’t teach me anything about the circuitry required or give me intimate knowledge of how such switches work.

Now, giving credit where it’s due, none of the circuits in my implementation are completely my own. I borrowed liberally from others who have gone before me. My design has two components and I used someone else’s circuit designs for both. My “value add” was the combination of the two circuits and the replacement of a fixed resistor with a potentiometer, in order to make the switch adjustable.

Before I dive into the design of the circuits, a bit of background may be useful. The level of water in the bilge of a floating boat is not a stable thing. Most importantly, the level rises and falls, not always predictably. The average level of the water may stay the same over longer periods of time but the water can slosh back and forth. Both of these properties can confuse a simple water level sensing switch.

There is a simple solution to this problem though. If the level of the water is sensed by allowing the water to conduct between two electrical probes, setting the height of those probes above the water is important. In short, the pump must be able to reduce the water level to below the height of the probes. Otherwise, the pump will start as soon as the water level rises to, or sloshes against, the probes. The pump will then run and reduce the water level just enough  to break the circuit between the probes and stop the pump. But then, possibly within a few seconds the water level will rise, or slosh, closing the circuit between the probes and running the pump for a few seconds again. This cycle will likely continue indefinitely.

However, if the probes are well above the lowest possible water level and if the pump can run long enough after the probes are above the water level to reduce the water to that lowest level, then the pump is much less likely to rapidly cycle between off and on. If you’re understandably concerned about the pump stopping when the water is running into the bilge non-stop, have no fear. As long as the probes are submerged, the pump will never stop, at least as long as it has power.
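The probe-and-run-on behavior just described can be sketched as a few lines of logic. The level and timing values below are arbitrary, chosen only for illustration:

```python
# Toy model of the bilge switch behavior: the pump runs whenever the
# probes are submerged, and keeps running for a fixed run-on period
# after the probes come clear. All values are illustrative assumptions.

PROBE_LEVEL = 10.0     # water level (arbitrary units) at which the probes conduct
RUN_ON_SECONDS = 30.0  # how long the pump keeps running after the probes go dry

def pump_is_on(water_level, seconds_since_probes_dry):
    """Return True when the pump should be running."""
    if water_level >= PROBE_LEVEL:
        return True  # probes submerged: keep pumping, no matter what
    return seconds_since_probes_dry < RUN_ON_SECONDS

print(pump_is_on(12.0, 0.0))   # probes submerged: on
print(pump_is_on(9.5, 10.0))   # probes dry, run-on timer active: still on
print(pump_is_on(9.5, 60.0))   # timer expired: off
```

The run-on period is what breaks the rapid on/off cycling: by the time the timer expires, the water is well below the probes.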

Now we can jump into the actual circuits. The first part is the water level sensor. It’s the simpler of the two circuits but possibly the more important part. I started with a circuit from Gary A. Pizl, described at http://www.mhsd.org/model/autopump.htm. If you look closely at Gary’s design, he has the switch connected directly to the pump. His design has the problem of continuously cycling on and off as the bilge water sloshes. To improve upon that design I disconnected the circuit from the pump and instead connected the output of the circuit to a timer circuit, described in the next paragraph.

The initial circuit for the timer was taken from John Hewes’s monostable design at http://www.kpsec.freeuk.com/555timer.htm#monostable . I made a few modifications to it, however. I didn’t need the reset capability, because I could simply cut the power to stop the timer. I also found that it works just fine without pulling pin 4 of the 555 high with the resistor to positive voltage. I didn’t put the small capacitor on pin 5 either. The major change was to the value of resistor R1. Instead of using a fixed resistor, I used a potentiometer. That way I could adjust the length of the timer.
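The length of the monostable pulse follows the standard 555 formula t = 1.1 × R1 × C1, so sweeping the potentiometer that replaces R1 sweeps the pump’s run time. The component values below are assumptions for illustration, not the exact parts on my board:

```python
# Run time of a 555 monostable: t = 1.1 * R1 * C1.
# The capacitor value here is an assumption, not the actual part used.

C1_FARADS = 100e-6  # 100 uF timing capacitor (assumed)

def pulse_seconds(r1_ohms, c1_farads=C1_FARADS):
    """Monostable on-time in seconds for a given timing resistance."""
    return 1.1 * r1_ohms * c1_farads

# Sweep a potentiometer across part of its range to see the effect.
for r1 in (100e3, 470e3, 1e6):
    print(f"R1 = {r1 / 1e3:.0f}k ohms -> pump runs {pulse_seconds(r1):.1f} s")
```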

The assembled timer looks like this:

Assembled board

And the complete schematic is:


After not having used it for a couple of decades, I had to dive back into C++ for a project. I hadn’t used “include” files in a long time and needed to refresh my understanding of how they, and the compiler and linker, worked. To do that, I wrote a quick example to make sure I still understood.

My example defines only one class and then exercises that class in a simple main() function. The class has all of: a constructor; a destructor; a public method; a public variable; and a protected variable. Because I also wanted to use more than one source file and use an include file too, the code is separated into three files.

First, the class is declared in an “include” file. Declaring a class merely states, in a way, what the class can do and how a program which uses it can interact with it. A pure declaration doesn’t actually implement any logic. The actual implementation is called the “definition” or “defining the class”. I defined the methods of my class in a separate file. Lastly, I created a third file which referred to the include file and the class definition file, in order to exercise them.

The include file, test_class.h

#ifndef _TEST_CLASS_H
#define _TEST_CLASS_H

class TestClass {
public:
    TestClass(int protectedVar);
    ~TestClass();

    void testFunc();

    int publicVar;

protected:
    int protectedVar;
};

#endif // _TEST_CLASS_H

Next, the definition of the methods in the class, in TestClass.cc

#include <iostream>
#include "test_class.h"

TestClass::TestClass(int protectedVarArg) {
    std::cout << "TestClass constructor" << std::endl;
    publicVar = 3;
    protectedVar = protectedVarArg;
}

TestClass::~TestClass() {
    std::cout << "TestClass destructor" << std::endl;
}

void TestClass::testFunc() {
    std::cout << "In testFunc()" << std::endl;
    std::cout << " protectedVar " << protectedVar << std::endl;
}

And, finally, the main function, in classTest.cc

#include <iostream>
#include "test_class.h"

int main(int numberOfArguments,
         char* arrayOfArguments[]) {

    TestClass testClass(5);

    std::cout << "testClass.publicVar: " << testClass.publicVar << std::endl;

    testClass.testFunc();

    return 0;
}

On a Linux system using the Gnu C++ suite, these three files can be compiled and linked into a binary executable with:

g++ -o classTest classTest.cc TestClass.cc

A large part of developing robotic systems involves measuring the state of the robot and then executing some action, to change the state of the robot. Because there is usually not an exact correspondence between the action and the desired state and because it’s also usually necessary to measure the new state, some method of control is required. It’s helpful if the control method can take into account all of the current state of the robot, the previous changes to the state and their corresponding effect and the new state. One typical method is called PID control.

PID control is useful when the state of some part of the robot can be measured and represented as a numerical value. The value driving the control is the difference between the actual measurement, called the process variable, and the desired measurement, called the setpoint. The PID control works to bring the difference between the process variable and the setpoint to zero and keep it there. As an example, assume we have a robot which we wish to move at a constant speed. The speed of the robot is based upon the voltage applied to its drive motors. Apply more voltage and it goes faster, apply less voltage and it goes slower. The challenge arises because the change in speed after a change in voltage is not instantaneous and also because the robot is not always travelling over exactly the same surface and with the same load. Sometimes the same voltage will make it go a bit faster and other times a bit slower.

The PID control attempts to reduce the difference between the process variable and the setpoint by calculating a control value and then applying that control value to the process. In our example, the setpoint is the robot speed wanted, the process variable is the actual, current robot speed and the control value is the voltage to apply.

Proportional Gain

The first part of the PID control is the proportional or P adjustment. This is, perhaps, the simplest part of the PID control and responds in proportion to the difference between the process variable and the setpoint. When that difference is large, the P control produces a large control value. When the difference is small, the P control produces a small control value. So, let’s say that the voltage range in this example is from 0 to 12 Volts. Zero Volts causes the robot to stop (eventually) and 12 Volts causes it to travel at its top speed (again, eventually). We can then write an equation to calculate the control value, based on the setpoint and the process variable:

ControlValue = [P x (SetPoint – ProcessVariable) x (MaxVoltage – CurrentVoltage)] + CurrentVoltage

The ProcessVariable is the current speed of the robot. The SetPoint is the desired speed. The CurrentVoltage is assumed to be what’s causing the robot to move at its current speed. P is the proportional gain, from the P in PID Control, and is used to determine just how much voltage change is applied to the current state of the robot, in order to move the ProcessVariable closer to the SetPoint.
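The equation above can be turned directly into a small function. The gain, voltages and speeds below are invented values, and clamping the output to the valid voltage range is my own addition:

```python
# A sketch of the proportional control equation from this article:
# ControlValue = [P x (SetPoint - ProcessVariable) x (MaxVoltage - CurrentVoltage)] + CurrentVoltage
# The gain and example values are illustrative assumptions.

MAX_VOLTAGE = 12.0

def control_value(setpoint, process_variable, current_voltage, p_gain):
    """One proportional adjustment of the drive voltage."""
    error = setpoint - process_variable
    new_voltage = p_gain * error * (MAX_VOLTAGE - current_voltage) + current_voltage
    # Clamp to what the motor driver can accept (my addition).
    return min(max(new_voltage, 0.0), MAX_VOLTAGE)

# Large error -> large adjustment; small error -> small adjustment.
print(control_value(1.0, 0.40, 5.0, 0.5))  # wants 1.0 m/s, moving at 0.40 m/s
print(control_value(1.0, 0.95, 5.0, 0.5))  # almost at the setpoint
```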

At this point, what we have is a way to adjust the speed of our robot, which responds proportionally to the difference between the actual speed and the desired speed. In other words, if the actual speed is close to the desired speed, this control method will only make a small adjustment. When the difference is great, a large adjustment will be made. This would seem to be a good thing, until we consider latency, in terms of the delay between measuring the speed and responding to the speed and in terms of the delay between applying a new voltage and reaching the full effect of that new voltage. Because of these possible latencies, the system being controlled can have a tendency to oscillate, above and below the desired speed, and never actually reach the desired speed.

Fortunately, the PID control method can deal with that and it’s specifically the D part of the method which I’ll describe next.

Derivative gain

to be continued …

I use MySQL 5.1, at work and at home. Often, I would like to have a way of raising an error from an SQL script, so the SQL script could communicate the error back to the calling program. It needs to be a pure SQL solution, which could be used in any of: a function; a stored procedure; or a plain SQL query. Today, I thought of a solution which meets my needs.

Specifically, today’s project was using an ETL tool (PDI from Pentaho). I was creating a mechanism to tell whether all of the input data was ready and up to date. I had already created a couple of MySQL functions which evaluate the state of different parts of the data. Each of those functions returns the string “YES” if all is well and something else if it’s not. The particular version of the ETL tool which I’m using doesn’t provide a way to evaluate the return value from a function, so I couldn’t simply look for the absence of “YES” and treat that as a failure. The ETL tool would only report whether the SQL statement succeeded or returned an error.

What I found to work well is the following:
select if(testingFunction() = "YES", "YES", someSpecialFunction());

Now, this, in and of itself, isn’t terribly fancy. The key is in the details of someSpecialFunction(). For the statement above to raise a MySQL error when testingFunction() returns something other than “YES”, someSpecialFunction() must exist, but it must not be executable by the MySQL user which is running the original query. The special function must exist because the existence of a function is checked at “compile” time; a missing function would make the entire statement raise an error every time. The permission to execute a function, however, isn’t checked until “run” time. Therefore, if testingFunction() returns “YES”, the special function is never evaluated further. But when the testing function returns something other than “YES”, causing the “else” clause of the “if” function to be evaluated, a MySQL error will be raised and passed back to the calling program. The details of the error aren’t important in this context – only that an error was raised.

This solution gives me a way to test the state of the input data before I start processing that data.

This time around, installing Pentaho BI Server version 3.8.0 Community Edition, I started with a Dell PowerEdge R715. 64 GB of RAM, 24 CPU cores at 2.5 GHz and a boatload of RAIDed disk space. For the OS, I installed Debian server 6.0 (squeeze). MySQL is still our standard for RDBMS so I installed version 5.1.49-3-log from the Debian packages. I configured it with InnoDB and MyISAM and located the datadir on the RAID device.

We experimented a bit before installing any software, to confirm that we were using an appropriate filesystem type with appropriate parameter values. We didn’t find a huge performance difference between ext3, ext4 and xfs; ext4 performed slightly better in what we thought were “typical” usage scenarios, so we chose that filesystem. We also set the mount option string “nosuid,nodev,noatime,data=writeback,nobh,barrier=0,nouser_xattr”.

I will try OpenJDK 1.6.0_18 from the Debian packages and start with its default configuration. I will not be surprised if I run into some issues which can only be solved by replacing it with the “official” JDK build from java.com.

Admin server configuration

To configure the Admin Server first, I adjusted three files:


I defined the solution-path, war-path and platform-username to fit my environment. I added default-roles and adjusted the default-server-dir to match my installation.


I adjusted the File parameter and set all of the logging levels to DEBUG.


I changed the password for the admin user. I also switched from an OBF password to an MD5 password.


Enabled SSL.

BI server configuration

To enable the use of LDAP for user authentication, I adjusted three files:




I’ve been using the ROS tf package for a couple of projects recently. As I tried to visualize the coordinate frames and how they would relate to each other, I found myself a bit short on hand and arm joints. I was using the right-hand rule and couldn’t always get my hand into the needed orientation. Because of that, I wanted to create a way to physically visualize the state of the coordinate frame and simultaneously display it with rviz. By creating something like that, I’d get to practice using the tf package and rviz at the same time.

To begin, I needed a physical representation of the coordinate frame. I had some short pieces of wooden dowel in the shop and plenty of small scrap wood blocks. I made a small, nearly cubical block and then used the drill press to drill three orthogonal holes, one for each of the X, Y and Z axes. To better match the rviz representation of the coordinate frame, I painted the dowels red, green and blue.

Next, I needed a way to measure the orientation of my physical coordinate frame. I had a Sparkfun Serial Accelerometer Tri-Axis – Dongle in my collection of parts. I attached the accelerometer to another face of the block, so that the accelerometer’s axes were aligned with the wooden dowels. This is the end result:

Now that I had the physical coordinate frame, I had to create a class to read the data from the accelerometer and a ROS publisher node to take that data and publish it as a transformation. In order to simplify the design I made a few assumptions and design decisions. First, this coordinate frame is only rotated – I don’t report any translations. The second assumption is that there aren’t any intermediate stages of rotation but only increments of 90 degrees.

In the process of implementing the classes, I learned more about transformations. First, all coordinate frames are tied together into a tree structure. The tree must have a frame named “world” at the top and there can’t be any loops in the tree (which would make it not a tree). Each frame in the tree specifies its current position and orientation relative to its parent frame. Each frame can have zero or more children but exactly one parent. The “world” frame is a special case in that it has no parent.

The position of a frame relative to its parent is specified with a translation from the parent frame and then a rotation in the parent frame. The translation specifies where the origin of the child frame is, relative to the origin of the parent frame, and is stated in meters along each of the three axes. The rotation describes how the child is rotated about each of the parent’s axes and is given in radians. I used Euler angles where a rotation specifies not only the amount but also the direction. A positive rotation about the X axis rotates from the positive Y axis toward the positive Z axis. For the Y axis rotation, positive is from the Z to the X axis and for the Z it’s from the X to the Y. Since ROS wants to represent rotations using quaternions, I had to use the tf.transformations.quaternion_from_euler() method in my publisher node.

The class which reads from the accelerometer is Accelerometer.py and the class which acts as the transform publishing node is TfDemo.py. After including these two files in an appropriate ROS package and running them, it’s possible to change the rotation of the physical frame with rviz running and see the result in the rviz display.

Several years ago, I decided to enter a robot in the local Robo-Magellan contest. For those not already familiar, Robo-Magellan is a contest defined by the Seattle Robotics Society. I like to describe the contest to friends as “the DARPA Grand Challenge for hobbyists without a 20 million dollar corporate sponsor”. This has been a long, educational project. Here are some of the details.

Robo-Magellan requires a fully autonomous, mobile rover. I started with a mobile rover which I bought from a friend. It was assembled from salvage components, in a frame made from aluminum angle stock. The drive motors are two power window motors with small lawn mower wheels. The motors are controlled by a pair of MotionMind motor controllers from Solutions Cubed. The motors have shaft encoders and the motor controllers, in addition to driving the motors, include PID control capability and the ability to read the shaft encoders.

For the main logic system, I have an old Dell notebook for which I exchanged a $50 Starbucks card. The display was cracked so I removed it. I used the WiFi interface on the notebook for connectivity during development and I bought a four serial port USB dongle to connect to the sensors and actuators.

I plan to use one of my handheld GPS units for part of the navigation system. Depending on which one, it will either use one of the serial ports or connect directly to the USB. I also have a Sparkfun three axis accelerometer and a Parallax gyroscope. The accelerometer has a serial port and the gyroscope is an SPI device.

Before I learned about the Arduino family of microcontroller packages, I used a bare Atmel AVR ATmega32. I use that AVR to read the gyroscope and to control a pair of Parallax ultrasonic rangefinders. The rangefinders are mounted on the front of the rover, facing forward, and function as the long range collision avoidance system.

Lastly, there is a cheap Logitech webcam mounted on the forward top of the rover. It will be used for a simple color-tracking vision system, to locate and identify the orange traffic cones on the contest field. It connects directly to the USB.

I plan to write the majority of the control software in Python using ROS. The code for the AVR is written in C. I may switch to the Arduino, to simplify development of that part of the software.

In the next article I’ll describe the high-level design of the ROS components.

I accepted a contract to build a remotely controlled, multi-rover planetary exploration simulation for a museum. I decided to use ROS as the communication and control mechanism and to write the software in Python. This article describes the architecture and implementation of the project.

The purpose of the simulation is to allow the visitor to drive the rover around the simulation, looking for artifacts on the planetary surface. There are three artifacts: gypsum, mud and fossils. When the visitor brings the rover within range of these artifacts, the display on the monitor changes to show the visitor additional information about the artifacts.

The design of the simulation has a control console for the visitor. That console includes a large monitor, which displays streaming video from the rover, status information about the rover and some details about what the visitor has “discovered”. The rovers are deployed in a diorama of a planetary surface and can freely roam about that area. They are wheeled rovers using differential steering and, in addition to the HD webcam facing forward, they include fore and aft IR range sensors, battery voltage sensors and an RFID reader. The RFID reader is mounted very low on the front of the rover and is connected via USB.

At the control console, in addition to the monitor, there is a joystick and four push buttons. The joystick is used to drive the rover around the display. One of the buttons is the “start” button and the other three are associated with the artifacts.

The specific hardware components are:

  • CoroWare Corobot classic rover
  • RedBee RFID reader
  • Phidgets motor controller
  • Phidgets 8/8/8 interface board
  • Logitech HD webcam
  • Phidgets voltage sensor
  • Lithium ferrous phosphate batteries
  • Happ UGCI control with daughter board
  • Joystick and four buttons

The software is divided into two parts, one which runs on the control console and another which runs on the rover. The control console is a desktop computer with a wired network connection. It hosts the ROS name server and parameter server. The rovers are mobile Ubuntu computers which are battery-powered and have WiFi network connections. They refer to the control console as the ROS master.

On the rover there are three nodes running. The DriveMotors node is a service which accepts requests to move the rover. The RfidReader node publishes messages each time it comes within range of an RFID tag. The RoverState node publishes periodic messages about the charge level of the rover’s batteries.

On the control console, there are two ROS nodes: the Control node and the DirectionButtons node. The DirectionButtons node is both a Publisher and a Service. It publishes messages each time the state of the joystick or a button changes. The service accepts messages which request a state change for a button lamp. The Control node is a Publisher, a Subscriber and a Service requestor. It subscribes to the RfidReader topic to know when the rover has passed over RFID tags. It subscribes to the DirectionButtons topic to know when the visitor has moved the joystick or pressed a button. It subscribes to the RoverState topic to monitor the charge level on the batteries. It makes requests to the DriveMotors service to control the rover motion and it makes requests to the ButtonLamp service in order to light the buttons.

The basic flow of the system is a loop with a 120 second run session and then a 15 second delay between sessions. The visitor drives the rover around the diorama, looking for the artifacts. When an artifact is found, the matching button on the control console is lit and pressing that button will cause descriptive information to be displayed on the monitor. If, while the rover is being driven, the battery voltage drops below a pre-set threshold, the rover operation is suspended and a message is sent via E-mail to the Operators. The Operators will then replace the rover with a spare, which has fully charged batteries.

In the next article I’ll describe some of the challenges I faced with this project.