Archives for category: Programming

There are several sources of world map data available online today, from the likes of Google Maps, the OpenStreetMap project and Nokia Maps. These systems usually make their maps available as collections of tiles and use the Mercator projection to create the tiles.

In order to request the right tile for a specific set of geographic coordinates, it’s necessary to be able to convert a given latitude and longitude into the column and row for the matching map tile. In addition to the coordinates, it’s also necessary to specify a map zoom level.

Before I jump into the implementation details, it’s necessary to describe a bit of the background. This article refers to the Mercator projection. It works with the latitude and longitude in degrees, represented as signed, floating point numbers. For example, latitudes north of the equator are in the range (0.0, 90.0] and longitudes west of the Prime Meridian are in the range [-180.0, 0.0). Although the procedure I’m about to describe will accept latitude values in excess of about ±85.05, the results are usually meaningless, because the Mercator projection breaks down near the poles of the earth, due to near infinite expansion in the projection.

The tiles in the projection are identified by column and row, with both beginning at zero. There is also always the same number of columns as rows, regardless of the zoom level. Zoom levels typically begin at 0 or 1, the least amount of zoom, and proceed up to around 18, the greatest amount of zoom. The lowest zoom level typically displays the entire map in a single tile, while zoom level 18 displays an area similar to a portion of a US city block. The total number of tiles at a particular zoom level is determined by the formula (2^zoomLevel)^2.

With that out of the way, let’s look at how to actually convert a latitude and longitude to a map tile column and row. Since it’s easy to read, I’ll use Python to write my “pseudo-code”.

First, convert the latitude and longitude from degrees to radians:

import math

lonRad = math.radians(lonDeg)
latRad = math.radians(latDeg)

and then convert the coordinates into the Mercator projection with

columnIndex = lonRad
rowIndex = math.log(math.tan(latRad) + (1.0 / math.cos(latRad)))

At this point, we have a column and row, but for a set of tiles which has its origin in the center of the collection. Usually, the origin of the collection is in the top, left corner. Also, we haven’t accounted for the zoom level yet. Let’s do that next, with

columnNormalized = (1 + (columnIndex / math.pi)) / 2
rowNormalized = (1 - (rowIndex / math.pi)) / 2

tilesPerRow = 2 ** zoomLevel

column = round(columnNormalized * (tilesPerRow - 1))
row = round(rowNormalized * (tilesPerRow - 1))

At this point, column, row and zoomLevel can be used to request the appropriate tile from a map tile service, which will contain the starting latitude and longitude and will be at the specified zoom level.
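Putting those steps together, here’s a sketch of the whole conversion as a single Python function. The name lat_lon_to_tile is my own invention; note also that Python 3’s round() rounds exact halves to the nearest even number, which matters for points that land exactly on a tile boundary.

```python
import math

def lat_lon_to_tile(latDeg, lonDeg, zoomLevel):
    """Convert a latitude/longitude in degrees to a tile column and row,
    following the steps described above."""
    lonRad = math.radians(lonDeg)
    latRad = math.radians(latDeg)

    # Project into Mercator coordinates, origin at the center of the map.
    columnIndex = lonRad
    rowIndex = math.log(math.tan(latRad) + (1.0 / math.cos(latRad)))

    # Shift the origin to the top-left corner and normalize to [0, 1].
    columnNormalized = (1 + (columnIndex / math.pi)) / 2
    rowNormalized = (1 - (rowIndex / math.pi)) / 2

    # Scale to the number of tiles at this zoom level.
    tilesPerRow = 2 ** zoomLevel
    column = round(columnNormalized * (tilesPerRow - 1))
    row = round(rowNormalized * (tilesPerRow - 1))

    return column, row
```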


Covariance matrices are a way of describing the relation between a collection of variables. A single covariance value describes the relation between two variables. They are a tool for estimating the possible error in a numerical value and for predicting a numerical value. One of their several applications is in robotics sensor fusion with “regular” and “extended” Kalman filters. In this article I’ll describe how to interpret a covariance matrix and provide a practical example. I’ll leave the formal mathematical and general definition to someone better at that than me.

Let’s begin with the concept of “variance” of a numerical value. That’s a measure of how much the value can be expected to vary. For example, if we were to measure the outdoor air temperature with a digital electronic sensor, we may want to know how much spread to expect in repeated measurements. That expected spread, formally the average of the squared deviations from the mean, is called the variance and it’s described as a single value. Variance is always non-negative. For a more in-depth description of variance, please see http://en.wikipedia.org/wiki/Variance.

For the rest of this article I’ll use the terms “value” and “variable” interchangeably. I suppose we could think of a “value” as the current value of a particular “variable”.

Now imagine that there are several properties or conditions or states being measured at the same time and that we’d like to know if there is any relationship between those values. If we could predict in advance how each variable changes, relative to every other variable, that would give us two useful things. First, it would allow us to better identify (and eliminate) outlier values, where one particular value has changed so much that it’s probably not a good measurement. And second, if at one time a measured value was missed, it might be possible to predict what the value should be, based on how all of the other values to which it’s related have changed.

To proceed from a single variance to a collection of covariances contained in a matrix, we’ll first need an understanding of covariance itself. Instead of expressing the expected range of change in one variable, a covariance expresses the correlation between a change in one variable and a change in another variable. For a much more in-depth explanation, see http://en.wikipedia.org/wiki/Covariance.

To illustrate, we’ll need a more complicated example. Let’s assume we have a mobile robot which can measure both its current position and its orientation. Since this robot can’t levitate or swim we’ll simplify the position and use only the two dimensional X-Y plane. That means the robot’s current position can be adequately described as a position along the X axis and a position along the Y axis. In other words, the robot’s position can be described as (x, y).

The position describes where the robot is located on the surface, which may be a parking lot or the living room floor or the soccer pitch, but it doesn’t describe in which direction it’s pointed. That information about the current state of the robot is called the orientation and will require one dimension. We’ll call this orientation dimension the yaw. The yaw describes in which direction it’s pointing. It’s worth repeating that this is a simplified way of representing the robot’s position and orientation. A full description would require three position values (x, y and z) and also three orientation values (roll, pitch and yaw). The concepts about to be described will still work with a six-dimensional representation of the robot state (position and orientation). Also, yaw is sometimes identified by the lower case Greek letter theta.

Now that we can describe both the position and the orientation of the robot at any point in time and assume that we can update those descriptions at a reasonably useful and meaningful frequency, we can proceed with the description of a covariance matrix. At each point in time, we’ll be measuring a total of three values: the x and y position and the yaw orientation. We could think of this collection of measurements as a vector with three elements.

We’ll start with two sets of measurements, each of which contains three values. Assume the first measurements were taken at 11:02:03 today and we’ll call that time t1. The second set were taken at 11:02:04 and we’ll call that time t2. We’ll also assume that our measurements are taken once per second. The measurement frequency isn’t as important as consistency in the frequency. Covariance itself doesn’t depend upon time, but the timing will become useful further on in this example.

Covariance is a description of how much change to expect in one variable when some other variable changes by a particular amount and in a particular direction. Using the position and orientation example we’ve started, we’d like to know what to expect of the yaw measurement from time t2 when the change in the y measurement between time t1 and t2 was large in the positive direction. Covariance can tell us to expect a similarly large positive change in yaw when y becomes more positive. It could also predict that yaw would become more negative when y became more positive. Lastly, it could state that there doesn’t appear to be any predictable correlation between a change in yaw and a change in y.

Just in case, let’s try a possibly more intuitive example of a correlation. Our initial example measures the position of the robot, with a corresponding x and y value, every second. Since we have regular position updates; since we know the amount of time between the updates (one second); and since we can calculate the distance between the position at time t1 and the position at time t2, we can now calculate the velocity at time t2. We’ll actually get the speed along the x axis and the speed along the y axis which can be combined into a velocity.
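As a quick sketch of that idea: with position measurements a known interval apart, the component speeds fall out of simple differences. The function name here is mine, invented for illustration.

```python
def velocity_from_positions(p1, p2, dt=1.0):
    """Estimate velocity components from two (x, y) positions
    measured dt seconds apart."""
    x1, y1 = p1
    x2, y2 = p2
    vx = (x2 - x1) / dt  # speed along the x axis
    vy = (y2 - y1) / dt  # speed along the y axis
    return vx, vy
```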

Assume the robot is pointed along the x axis in the positive direction and it’s moving. The regular measurements of the position should show a steadily increasing x value and, at least in a perfect world, an unchanging y value. What would you expect the yaw measurement to be – unchanging or changing? Since the robot is not changing its direction the yaw should not be changing. Put in terms of covariance, a change in the x value with no change in the y value is NOT correlated with a change in the yaw value. On the contrary, if we measured a change in yaw with no directional change in the velocity, we would have to suspect that at least one of those measurements, the yaw or the velocity, is incorrect.

From this basic idea of covariance we can better describe the covariance matrix. The matrix is a convenient way of representing all of the covariance values together. From our robotic example, where we have three values at every time t, we want to be able to state the correlation between one of the three values and all three of the values. You may have expected to compare one value to the other two, so please keep reading.

At time t2, we have a value for x, y and yaw. We want to know how the value of x at time t2 is correlated with the change in x from time t1 to time t2. We then also want to know how the value of x at time t2 is related to the values of y and yaw at time t2. If we repeat this comparison, we’ll have a total of 9 covariances, which means we’ll have a 3×3 covariance matrix associated with a three element vector. More generally, an n value vector will have an n×n covariance matrix. Each of the covariance values in the matrix will represent the covariance between two values in the vector.

The first part of the matrix which we’ll examine more closely is the diagonal values, from (1, 1) to (n, n). Those are the covariances of: x to a change in x; y to a change in y; and yaw to a change in yaw. The rest of the elements of the covariance matrix describe the correlation between a change in one value, x for example, and a different value, y for example. To enumerate all of the elements of the covariance matrix for our example, we’ll use the following:

Vector elements at time t:

1st:  x value

2nd:  y value

3rd:  yaw value

Covariance matrix elements:

1,1  1,2  1,3

2,1  2,2  2,3

3,1  3,2  3,3

where the elements correspond to:

1,1 x to x change covariance

1,2 x to y change covariance

1,3 x to yaw change covariance

2,1 y to x change covariance

2,2 y to y change covariance

2,3 y to yaw change covariance

3,1 yaw to x change covariance

3,2 yaw to y change covariance

3,3 yaw to yaw change covariance

Hopefully, at this point, it’s becoming clearer what the elements of a covariance matrix describe. It should also be apparent that there can be certain elements where no correlation is expected to exist.

It’s important to remember that certain covariance values are meaningful and others don’t provide any directly useful information. A large, positive covariance implies that a large change in the first value, in one direction, will usually correspond with a similarly large change, in the same direction, in the related value. A large negative covariance implies a corresponding large change but in the opposite direction. Smaller covariance values can imply that there either is no correlation between the changes and the values or that the correlation exists but results in a small change.
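To make the matrix concrete, here is a small sketch, in plain Python, of how a sample covariance matrix could be computed from a series of (x, y, yaw) measurement vectors. The function name and the sample data are mine; in practice a library routine such as numpy.cov does the same job.

```python
def covariance_matrix(samples):
    """Compute the n x n sample covariance matrix for a list of
    equal-length measurement vectors (one vector per time step)."""
    n = len(samples[0])
    count = len(samples)
    means = [sum(s[i] for s in samples) / count for i in range(n)]
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            cov[i][j] = sum(
                (s[i] - means[i]) * (s[j] - means[j]) for s in samples
            ) / (count - 1)  # sample (unbiased) covariance
    return cov

# Four hypothetical (x, y, yaw) measurements, one per second:
samples = [(0.0, 0.0, 0.00), (1.0, 0.1, 0.02),
           (2.0, 0.2, 0.04), (3.0, 0.3, 0.06)]
cov = covariance_matrix(samples)  # a 3x3 matrix
```

The diagonal elements are the variances of x, y and yaw, and the matrix is always symmetric, since the covariance of x with y equals the covariance of y with x.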

After not having used it for a couple of decades, I had to dive back into C++ for a project. I hadn’t used “include” files in a long time and needed to refresh my understanding of how they, and the compiler and linker, worked. To do that, I wrote a quick example to make sure I still understood.

My example defines only one class and then exercises that class in a simple main() function. The class has all of: a constructor; a destructor; a public method; a public variable; and a protected variable. Because I also wanted to use more than one source file and use an include file too, the code is separated into three files.

First, the class is declared in an “include” file. Declaring a class merely states, in a way, what the class can do and how a program which uses it can interact with it. A pure declaration doesn’t actually implement any logic. The actual implementation is called the “definition” or “defining the class”. I defined the methods  of my class in a separate file. Lastly, I created a third file which referred to the include file and the class definition file, in order to exercise them.

The include file, test_class.h

#ifndef TEST_CLASS_H
#define TEST_CLASS_H

    class TestClass {
        public:

        TestClass(int protectedVarArg);
        ~TestClass();

        void testFunc();

        int publicVar;

        protected:

        int protectedVar;
    };

#endif // TEST_CLASS_H

Next, the definition of the methods in the class, in TestClass.cc

#include <iostream>
#include "test_class.h"

    TestClass::TestClass(int protectedVarArg) {
        std::cout << "TestClass constructor" << std::endl;
        publicVar = 3;
        protectedVar = protectedVarArg;
    }

    TestClass::~TestClass() {
        std::cout << "TestClass destructor" << std::endl;
    }

    void TestClass::testFunc() {
        std::cout << "In testFunc()" << std::endl;
        std::cout << " protectedVar " << protectedVar << std::endl;
    }

And, finally, the main function, in classTest.cc

#include <iostream>
#include "test_class.h"

int
main(int numberOfArguments,
     char* arrayOfArguments[]) {

    TestClass testClass(5);

    std::cout << "testClass.publicVar: " << testClass.publicVar << std::endl;

    testClass.testFunc();

    return 0;
}

On a Linux system using the Gnu C++ suite, these three files can be compiled and linked into a binary executable with:

g++ -o classTest classTest.cc TestClass.cc

I use MySQL 5.1, at work and at home. Often, I would like to have a way of raising an error from an SQL script, so the SQL script could communicate the error back to the calling program. It needs to be a pure SQL solution, which could be used in any of: a function; a stored procedure; or a plain SQL query. Today, I thought of a solution which meets my needs.

Specifically, today’s project was using an ETL tool (PDI from Pentaho). I was creating a mechanism to tell whether all of the input data was ready and up to date. I had already created a couple of MySQL functions which evaluate the state of different parts of the data. Each of those functions returns the string “YES” if all is well and something else if it’s not. The particular version of the ETL tool which I’m using doesn’t provide a way to evaluate the return value from a function, so I couldn’t simply look for the absence of “YES” and treat that as a failure. The ETL tool would only report whether the SQL statement succeeded or returned an error.

What I found to work well is the following:
select if(testingFunction() = "YES", "YES", someSpecialFunction());

Now, this, in and of itself, isn’t terribly fancy. The key is in the details of someSpecialFunction(). In order for the above to work, in other words, to raise a MySQL error when testingFunction() returns something other than “YES”, someSpecialFunction() must exist. However, it must not be executable by the MySQL user which is running the original query. The special function must exist, because the existence of a function is evaluated at “compile” time; otherwise the entire statement would always raise an error. The permission to execute a function isn’t evaluated until “run” time. Therefore, if testingFunction() returns “YES”, the special function is never evaluated. But, when the testing function returns something other than “YES”, causing the “else” clause of the “if” function to be evaluated, a MySQL error will be raised and passed back to the calling program. The details of the error aren’t important in this context – only that an error was raised.

This solution gives me a way to test the state of the input data before I start processing that data.

I’ve been using the ROS tf package for a couple of projects recently. As I tried to visualize the coordinate frames and how they would relate to each other, I found myself a bit short on hand and arm joints. I was using the right-hand rule and couldn’t always get my hand into the needed orientation. Because of that, I wanted to create a way to physically represent the state of a coordinate frame and simultaneously display it with rviz. By creating something like that, I’d get to practice using the tf package and rviz at the same time.

To begin, I needed a physical representation of the coordinate frame. I had some short pieces of wooden dowel in the shop and plenty of small scrap wood blocks. I made a small, nearly cubical block and then used the drill press to drill three orthogonal holes, one for each of the X, Y and Z axes. To better match the rviz representation of the coordinate frame, I painted the dowels red, green and blue.

Next, I needed a way to measure the orientation of my physical coordinate frame. I had a Sparkfun Serial Accelerometer Tri-Axis – Dongle in my collection of parts. I attached the accelerometer to another face of the block, so that the accelerometer’s axes were aligned with the wooden dowels. This is the end result:

Now that I had the physical coordinate frame, I had to create a class to read the data from the accelerometer and a ROS publisher node to take that data and publish it as a transformation. In order to simplify the design I made a few assumptions and design decisions. First, this coordinate frame is only rotated – I don’t report any translations. The second assumption is that there aren’t any intermediate stages of rotation but only increments of 90 degrees.

In the process of implementing the classes, I learned more about transformations. First, all coordinate frames are tied together into a tree structure. The tree conventionally has a frame named “world” at the top and there can’t be any loops in the tree (which would make it not a tree). Each frame in the tree specifies its current position and orientation relative to its parent frame. Each frame can have zero or more children but must have exactly one parent. The “world” frame is a special case in that it has no parent.

The position of a frame relative to its parent is specified with a translation from the parent frame and then a rotation in the parent frame. The translation specifies where the origin of the child frame is, relative to the origin of the parent frame, and is stated in meters along each of the three axes. The rotation describes how the child is rotated about each of the parent’s axes and is given in radians. I used Euler angles where a rotation specifies not only the amount but also the direction. A positive rotation about the X axis rotates from the positive Y axis toward the positive Z axis. For the Y axis rotation, positive is from the Z to the X axis and for the Z it’s from the X to the Y. Since ROS wants to represent rotations using quaternions, I had to use the tf.transformations.quaternion_from_euler() method in my publisher node.
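The conversion that tf.transformations.quaternion_from_euler performs can be sketched in plain Python. This is a minimal sketch of the standard roll-pitch-yaw formula, assuming tf’s default axis convention (‘sxyz’), and it returns the quaternion in tf’s (x, y, z, w) ordering:

```python
import math

def quaternion_from_euler(roll, pitch, yaw):
    """Convert Euler angles (radians) to a quaternion (x, y, z, w),
    sketching the math behind tf.transformations.quaternion_from_euler."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)

    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return (x, y, z, w)
```

For example, a 90 degree yaw (math.pi / 2 about the Z axis) yields roughly (0, 0, 0.707, 0.707).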

The class which reads from the accelerometer is Accelerometer.py and the class which acts as the transform publishing node is TfDemo.py. Including these two files in an appropriate ROS package, running them and then using rviz, it’s possible to change the rotation of the physical frame and see the result in the rviz display.

Several years ago, I decided to enter a robot in the local Robo-Magellan contest. For those not already familiar, Robo-Magellan is a contest defined by the Seattle Robotics Society. I like to describe the contest to friends as “the DARPA Grand Challenge for hobbyists without a 20 million dollar corporate sponsor”. This has been a long, educational project. Here are some of the details.

Robo-Magellan requires a fully autonomous, mobile rover. I started with a mobile rover which I bought from a friend. It was assembled from salvage components, in a frame made from aluminum angle stock. The drive motors are two power window motors with small lawn mower wheels. The motors are controlled by a pair of MotionMind motor controllers from Solutions Cubed. The motors have shaft encoders and the motor controllers, in addition to driving the motors, include PID control capability and the ability to read the shaft encoders.

For the main logic system, I have an old Dell notebook for which I exchanged a $50 Starbucks card. The display was cracked so I removed it. I used the WiFi interface on the notebook for connectivity during development and I bought a four serial port USB dongle to connect to the sensors and actuators.

I plan to use one of my handheld GPS units for part of the navigation system. Depending on which one, it will either use one of the serial ports or connect directly to the USB. I also have a Sparkfun three axis accelerometer and a Parallax gyroscope. The accelerometer has a serial port and the gyroscope is an SPI device.

Before I learned about the Arduino family of microcontroller packages, I used a bare Atmel AVR ATmega32. I use that AVR to read the gyroscope and to also control a pair of Parallax ultrasonic rangefinders. The rangefinders are mounted on the front of the rover, facing forward, and function as the long range collision avoidance system.

Lastly, there is a cheap Logitech webcam mounted on the forward top of the rover. It will be used for a simple color-tracking vision system, to locate and identify the orange traffic cones on the contest field. It connects directly to the USB.

I plan to write the majority of the control software in Python using ROS. The code for the AVR is written in C. I may switch to the Arduino, to simplify development of that part of the software.

In the next article I’ll describe the high-level design of the ROS components.

I accepted a contract to build a remotely controlled, multi-rover planetary exploration simulation for a museum. I decided to use ROS as the communication and control mechanism and to write the software in Python. This article describes the architecture and implementation of the project.

The purpose of the simulation is to allow the visitor to drive the rover around the simulation, looking for artifacts on the planetary surface. There are three artifacts: gypsum, mud and fossils. When the visitor brings the rover within range of these artifacts, the display on the monitor changes to show the visitor additional information about the artifacts.

The design of the simulation has a control console for the visitor. That console includes a large monitor, which displays streaming video from the rover, status information about the rover and some details about what the visitor has “discovered”. The rovers are deployed in a diorama of a planetary surface and can freely roam about that area. They are wheeled rovers using differential steering and, in addition to the forward-facing HD webcam, they include fore and aft IR range sensors, battery voltage sensors and an RFID reader. The RFID reader is mounted very low on the front of the rover and is connected via USB.

At the control console, in addition to the monitor, there is a joystick and four push buttons. The joystick is used to drive the rover around the display. One of the buttons is the “start” button and the other three are associated with the artifacts.

The specific hardware components are:

  • CoroWare Corobot classic rover
  • RedBee RFID reader
  • Phidgets motor controller
  • Phidgets 8/8/8 interface board
  • Logitech HD webcam
  • Phidgets voltage sensor
  • Lithium ferrous phosphate batteries
  • Happ UGCI control with daughter board
  • Joystick and four buttons

The software is divided into two parts, one which runs on the control console and another which runs on the rover. The control console is a desktop computer with a wired network connection. It hosts the ROS name server and parameter server. The rovers are mobile, battery-powered Ubuntu computers with WiFi network connections. They refer to the control console as the ROS master.

On the rover there are three nodes running. The DriveMotors node is a service which accepts requests to move the rover. The RfidReader node publishes a message each time it comes within range of an RFID tag. The RoverState node publishes periodic messages about the charge level of the rover’s batteries.

On the control console, there are two ROS nodes: the Control node and the DirectionButtons node. The DirectionButtons node is both a Publisher and a Service. It publishes messages each time the state of the joystick or a button changes, and its service accepts messages which request a state change for a button lamp. The Control node is all of a Publisher, a Subscriber and a Service requestor. It subscribes to the RfidReader topic to know when the rover has passed over RFID tags. It subscribes to the DirectionButtons topic to know when the visitor has moved the joystick or pressed a button. It subscribes to the RoverState topic to monitor the charge level of the batteries. It makes requests to the DriveMotors service to control the rover’s motion and to the ButtonLamp service to light the buttons.

The basic flow of the system is a loop with a 120 second run session and a 15 second delay between sessions. The visitor drives the rover around the diorama, looking for the artifacts. When an artifact is found, the matching button on the control console is lit and pressing that button will cause descriptive information to be displayed on the monitor. If, while the rover is being driven, the battery voltage drops below a pre-set threshold, the rover operation is suspended and a message is sent via E-mail to the operators. The operators then replace the rover with a spare, which has fully charged batteries.

In the next article I’ll describe some of the challenges I faced with this project.

I have had a couple of USB temperature sensors in my collection for some time now. They are the PCsensor TEMPer units from Tenx Technology, Inc. The USB Vendor ID is 1130 and the Product ID is 660C. I wanted to use them in my Linux environment, as part of my home monitoring system. I also wanted to interact with them using Python, since that is my preferred language for most of my monitoring and robotics projects. I didn’t look very hard but didn’t find anything that someone else had already written in Python, which worked out well because I wanted to exercise my new-found PyUSB skills. I couldn’t find a datasheet for the devices but I did find a C version on Robert Kavaler’s blog.

My Python class to read the temperature from the TEMPer unit is here.

Off and on, I have been building a complete CRM and telephony solution for a Customer Service call center. I’m using Asterisk and SugarCRM and a whole bunch of SQL and Python to integrate all of the data with everything else. The basic idea of one particular part of the project involves the opening of accounts online. In a perfect world, the customer is able to completely open the account on his own and then fund the account so it can be used. In reality, some customers have trouble with the account opening process and some get distracted. Others are able to open the account but don’t seem to add funds to it in a timely fashion. The company which provides the services associated with these accounts wanted to open more accounts sooner and also get them funded sooner. To do so, they established a small group of employees who would be dedicated to contacting those customers and helping them to both finish the account opening process and then fund the account.

The first step in the newly automated process was to create a list of incomplete account applications and not-yet-funded accounts, which met certain criteria. The criteria had to do with how long ago the customer started the application process and any other business the customer was already doing. On a daily basis the collection of applications and accounts are examined and a list is created with those accounts which meet the criteria. Using the CRM, a “task” is created and assigned to one of the people in the previously mentioned small group dedicated to contacting the customers. Each task identifies the specific account, the current state of the account, the customer associated with the account and a due date for completing the task. This system was put into production and the small group was turned loose on the tasks.

After a few months of using this system, we realized a few weaknesses. First, the process for selecting an account to be included in this system wasn’t explained in detail to the people actually calling the customers. That resulted in a lot of manual effort and review, when it came time at the end of the month for the members of the small group to be paid for what they accomplished. There was almost always a difference between what they expected and what the system calculated. The moral of this part of the story is really nothing more than a communication issue. As is usually the case, there are about three general roles involved in a system like this one. There is the role which defines the business need and objectives, a second role which implements those requirements in software and the third role which actually uses it. That third role wasn’t involved in the project until it came time to use the finished product and the people in that role didn’t have a good understanding of how all of the pieces worked. Had they been included earlier, the system probably would have been designed a bit differently and they would at least have had a much better understanding of how it worked.

The way these first issues were worked around was entirely manual. It was agreed that the results would be reviewed each month and adjustments would be made.

A few more months went by and the second opportunity for improvement came into view. As mentioned earlier, every day more tasks were created and assigned to the group. Every day the group selected a subset of those tasks and did something with them. What we started to see was that the system was creating far more tasks each day than the small group could process. The result was that the small group began to be overwhelmed and was not able to efficiently manage the ever-increasing volume of tasks. After talking directly with the small group I had an idea on how to easily and automatically remove some of the tasks.

Sometimes a task would be created but the customer would effectively complete the task on his own, before the small group got to it. Those tasks are now marked as completed on a daily basis, reducing the number of outstanding tasks. Each task also has a due date, beyond which no one in the small group will receive credit, no matter what happens with the account. On a daily basis those “expired” tasks are also marked as completed. Both of these two automatic actions managed to reduce the number of outstanding tasks by almost 40%. By having significantly fewer tasks to look at and sort through each day, the people in the small group believe that they will now be more efficient in determining whom to call.

This condition occurred because a solution was found for an immediate problem, but not enough thought was given to how the system would be used and how it would react over time. By solving a short-term problem, the solution actually created a different kind of problem. Now that an improvement has been made to eliminate the irrelevant tasks, the small group again believes that it will be more efficient.

The third and last thing that was learned from this project came as a result of the programmer talking directly with the end-users. By working together to find solutions to the first two problems and by actually delivering an improvement, the end-users gained confidence in the programmer and engaged him (me) in further discussions.

What the people in the small group had learned about the system and the process is that, often, they would be calling a customer about his account and then, maybe a few days later, calling the same household but asking for a different customer. The customers, who may be a husband and wife or a parent and child or two business partners, would ask, rightfully so, why they were being called again when someone had called them about essentially the same thing just a few days earlier.

To solve this the group wanted to be able to see all of the relations between all of the tasks. They wanted to know which other tasks were related to each task and which other existing customers were somehow related. After a few prototypes and tests, we settled on a solution which the members of the small group are now using to do just that. This latest enhancement actually accomplishes two things for them: it allows them to treat the related customers as an identified group in a single call and it allows them to identify other customers who shouldn’t be bothered because of already existing relations. The customer gets a better experience and the small group’s time is spent more effectively. I’ll continue to talk to the members of the small group, to see how these latest enhancements are holding up. I also expect that they and I together will think of the next enhancement, to improve the process further.
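One way to compute those clusters of related customers is with a union-find structure over pairwise relations. The article doesn’t describe the actual implementation or data model, so treat this as a sketch under the assumption that relations arrive as pairs (two customers sharing a household, a business, and so on):

```python
# A sketch of clustering related customers, assuming relations arrive
# as pairs. This union-find approach is one possible implementation,
# not necessarily the one the system actually uses.
def group_related(relations):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        root_a, root_b = find(a), find(b)
        if root_a != root_b:
            parent[root_b] = root_a

    for a, b in relations:
        union(a, b)

    # Collect each customer under its group's representative.
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())
```

Given the groups, one call can cover a whole household, and anyone already covered by a related task can be skipped.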

I live in a building with five other units. The building has a single, shared source of hot water, with a 120 gallon reservoir, which supplies all of the hot water for the kitchens, dishwashers, clothes washers, bathrooms, showers and bathtubs in the building. Occupants regularly complain that the water is not hot enough or is simply not heated at all. Being an engineer and an environmentalist (and a bit of a curmudgeon), I decided to look closer at the situation.

The water supply to the building is from the municipal source. It is heated by a gas-fired unit, separate from the reservoir. As mentioned earlier, the reservoir has a 120 gallon capacity. The system also has a pump which keeps heated water circulating through the entire building, regardless of demand. This recirculation pump runs non-stop.

I thought the way to understand the system was to measure the water temperature at various points and to measure the flow of water. I found a water flow meter from Badger Meter to measure the flow. I used 1-Wire temperature sensors to measure the temperature and a LinkUSB device to collect the data from the 1-Wire devices. I used a LabJack U12 to capture the flow data from the meter. All of this data was collected by an old notebook computer which I attached to a sheet of plywood and then hung on the basement wall.

The Badger Meter was used to measure the total volume of water flowing through the water heating system. I installed it on the cold input, instead of the hot output, in order to stay within the operating temperature range of the meter.
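Meters like this typically report flow as a pulse train, which the LabJack counts; converting a pulse count into volume is then a simple scaling. The calibration constant below (10 pulses per gallon) is an assumption for illustration; the real value depends on the specific Badger Meter model and register:

```python
# Assumed calibration: 10 pulses per gallon. The actual value depends
# on the meter model and register and must come from its documentation.
PULSES_PER_GALLON = 10.0

def pulses_to_gallons(pulse_count):
    """Convert a raw pulse count into a volume in gallons."""
    return pulse_count / PULSES_PER_GALLON

def flow_rate_gpm(pulse_count, interval_seconds):
    """Average flow rate in gallons per minute over one sample interval."""
    return pulses_to_gallons(pulse_count) * 60.0 / interval_seconds
```

Counting pulses per fixed interval gives both the total consumption and an average flow rate for that interval.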

The 1-Wire devices were used to measure the temperature of the system at various points. I didn’t want to actually penetrate any of the plumbing, in order to avoid creating leaks. Since all of the plumbing for the system consists of copper tubing, I used heat sink compound and electrical tape to attach the sensors directly to the plumbing. I used a total of six sensors, at the following locations:

  1. the ambient air temperature surrounding the system
  2. the heated water output from the reservoir
  3. the recirculation loop return
  4. the municipal supply
  5. the flow into the heating unit
  6. the flow out of the heating unit and into the reservoir
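Polling sensors like these from Python is straightforward. My setup reads them through the LinkUSB (typically via OWFS); purely as an illustration, this sketch uses the Linux kernel’s w1 sysfs interface instead, where each DS18B20-style sensor appears under `/sys/bus/w1/devices/`. The sensor IDs and name mapping are made up:

```python
import glob

# Made-up sensor IDs mapped to the measurement points listed above.
SENSOR_NAMES = {
    "28-000001234567": "ambient",
    "28-000001234568": "reservoir output",
}

def parse_w1_slave(text):
    """Extract degrees Celsius from a w1_slave file, or None on CRC failure."""
    lines = text.strip().splitlines()
    if not lines or not lines[0].endswith("YES"):
        return None  # bad CRC; discard the reading
    _, _, millidegrees = lines[-1].partition("t=")
    return int(millidegrees) / 1000.0

def read_all():
    """Read every connected temperature sensor once."""
    readings = {}
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        sensor_id = path.split("/")[-2]
        with open(path) as f:
            celsius = parse_w1_slave(f.read())
        if celsius is not None:
            readings[SENSOR_NAMES.get(sensor_id, sensor_id)] = celsius
    return readings
```

With OWFS the reading step differs (each sensor exposes a `temperature` file instead), but the polling loop looks much the same.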

The last and possibly most important piece of this entire system was the software which I used to collect and process all of the data. I wanted to accomplish several things:

  1. understand how the system worked and how the temperatures were related to each other
  2. understand how much water we used on average
  3. send advance notice of an impending hot water shortage
  4. record sudden large increases in consumption
  5. confirm or deny an actual outage
  6. understand the conditions leading up to an outage

The software which collected and processed all of the data was rrdtool and a program which I wrote in Python. It captured the sensor data every 60 seconds, then archived and processed it. If measurements exceeded thresholds, an E-mail message was sent to interested parties.
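The heart of such a 60-second cycle is small: push the readings into rrdtool and send mail when a threshold is crossed. This sketch is not my actual program; the RRD filename, data-source ordering, threshold value and addresses are all assumptions for illustration:

```python
import subprocess
import smtplib
from email.message import EmailMessage

HOT_WATER_MIN_C = 45.0  # assumed alert threshold, not the real one

def below_threshold(readings, key="reservoir output", minimum=HOT_WATER_MIN_C):
    """True when the monitored temperature has fallen below the minimum."""
    value = readings.get(key)
    return value is not None and value < minimum

def record(readings, rrd_path="hotwater.rrd"):
    # rrdtool update <file> N:<v1>:<v2>:...  -- the value order must match
    # the data-source definitions used when the RRD was created.
    values = ":".join(f"{readings[k]:.3f}" for k in sorted(readings))
    subprocess.run(["rrdtool", "update", rrd_path, f"N:{values}"], check=True)

def alert(readings, smtp_host="localhost"):
    # Addresses here are placeholders.
    msg = EmailMessage()
    msg["Subject"] = "Hot water warning"
    msg["From"] = "monitor@example.org"
    msg["To"] = "occupants@example.org"
    msg.set_content(f"Low reservoir temperature: {readings}")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

Calling `record()` each minute builds the history that rrdtool can later graph, while `below_threshold()` decides when `alert()` should fire.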

I’ll describe the specific details of the system in the next article. I was surprised, and a bit disgusted, by what I learned about the usage of hot water in my building.