Estimating object location with a single camera and OpenCV

Problem Statement: Determine the physical (real-world) co-ordinates of an object in a scene using a single camera. Concretely, if a reference object (a red cube) is within the field-of-view of oroboto’s camera, determine its position such that oroboto can drive to it. Recall that by dead reckoning, oroboto is aware (more or less) of its own physical co-ordinates within a two-dimensional local frame of reference.

Terminology

In this post the term physical co-ordinates refers to the (x,y) co-ordinates of something in oroboto’s local co-ordinate system / frame of reference. These co-ordinates are specified in units of centimeters and are relative to the location that oroboto was initially powered on, which is considered (0,0) in the physical co-ordinate system.

The term pixel co-ordinates refers to the (x,y) co-ordinates of a given pixel within an image.

Introduction

If we can isolate a reference object within an image we still only possess its pixel co-ordinates: we need a way to convert those to the physical co-ordinates used by the robot.

In this post I’ll discuss using OpenCV to isolate a reference object and then a technique to estimate its physical co-ordinates using only the image from a single camera. A common computer vision (CV) method of determining distance to an object from a camera lens is to use two cameras and the principle of stereoscopic vision to measure depth. While it’s possible to connect multiple cameras to a single Raspberry Pi I wanted to avoid the extra current draw and bulk. I also thought it would be interesting to see the accuracy I could achieve using a simpler approach.

Reference Object, Prerequisites and Assumptions

This technique estimates the physical co-ordinates (x,y) of a reference object (a red cube) of known physical width and height within oroboto’s local frame of reference.

A prerequisite to using the technique is to measure the relationship between the physical dimensions of the cube and how big it appears, in terms of pixels, at various distances from the camera lens. To do this I simply placed the camera at a fixed position and set down a line of markers on the floor at 10cm intervals.

[Figure: pixel-to-distance calibration setup]

I then took successive images of the cube at each marker and measured how many pixels wide it appeared at each distance. With a table of physical distances (10cm, 20cm, … n cm) and pixel mappings I calculated a regression in Excel to get the line of best fit and arrived at the following pixel-to-distance equation:

[Figure: pixel-to-distance equation]
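A minimal sketch of how such a fit could be produced in code rather than Excel (the calibration table and the inverse-width model below are illustrative placeholders, not my measured values or the actual equation):

import numpy as np

# (distance from lens in cm, apparent cube width in pixels): placeholder calibration data
calibration = [(10, 310), (20, 160), (30, 105), (40, 80), (50, 64)]

distances = np.array([d for d, _ in calibration], dtype=float)
widths = np.array([w for _, w in calibration], dtype=float)

# Apparent width is roughly inversely proportional to distance, so fit distance against 1/width
coeffs = np.polyfit(1.0 / widths, distances, deg=1)

def pixels_to_distance(width_px):
    # Estimated distance (cm) from the camera lens given the cube's apparent width in pixels
    return float(np.polyval(coeffs, 1.0 / width_px))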

If we’re able to isolate the reference object in an image and measure its width in pixels this formula can be used to estimate how far it physically is from the camera lens. This technique makes two major assumptions:

  1. The equation accurately describes the pixel-to-distance relationship across all distances
  2. If you imagine a plane p that runs normal / perpendicular to the ray extending directly outward from the camera lens and that passes through the reference object, the pixel-to-distance relationship holds true at all points on that plane within the camera’s field of view

Neither of these assumptions is true in all cases, which is the primary reason this technique can only estimate position and lacks accuracy.

A final assumption is that the middle of the camera’s field of view aligns exactly with the heading of the robot.

From Pixels to Position

Assume oroboto’s camera has taken an image of the scene and that we’ve used a yet-to-be-described technique to isolate the reference cube within the image. We’re now in possession of the (x,y) pixel co-ordinates of the cube as well as its width and height in pixels.

Using the cube’s width in pixels and the pixel-to-distance equation we can estimate (based on assumption #2 above) the physical distance r to the point mid, which lies at the intersection of the ray extending directly outward from the camera lens and an imaginary plane p running perpendicular to that ray. Note that mid also lies on the vertical line that equally divides the camera’s image.

[Diagram: camera ray, plane p and the point mid]

Given that we know oroboto’s physical co-ordinates C and heading q at the time the image was captured, we can use simple trigonometry to estimate the physical (x,y) co-ordinates of mid:

[Equation: physical co-ordinates of mid]

Again based on assumption #2, as the reference cube has known physical width (in centimeters) it is possible to calculate the relationship cmPerPx between pixels and centimeters along plane p (ie. how many cm a pixel represents along that plane). The distance in pixels between mid (recall this lies along the vertical line equally dividing the camera’s image) and the reference cube can then be used to estimate od, the physical distance along plane p between mid (whose physical (x,y) co-ordinates were established above) and the reference cube.

Taking the ray r as the new base of a right-angled triangle (which doesn’t look that right-angled in my diagram), more trigonometry yields an adjusted angle and distance, and from oroboto’s current position C an estimate of the reference cube’s physical co-ordinates:

[Equation: physical co-ordinates of the reference cube]
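The original equation graphics aren’t reproduced here, so the sketch below reconstructs the geometry just described; pixels_to_distance is the calibration fit sketched earlier, and the image width, cube size and sign conventions are illustrative assumptions:

import math

IMAGE_WIDTH_PX = 640      # assumed horizontal camera resolution
CUBE_WIDTH_CM = 5.0       # assumed physical width of the reference cube

def estimate_cube_position(cx, cy, heading, cube_centre_x_px, cube_width_px):
    # 1. Distance r from the lens to the point mid on plane p (assumption #2)
    r = pixels_to_distance(cube_width_px)

    # 2. Physical co-ordinates of mid: project r along the heading q from position C
    mid_x = cx + r * math.cos(heading)
    mid_y = cy + r * math.sin(heading)

    # 3. cm-per-pixel relationship along plane p, from the cube's known physical width
    cm_per_px = CUBE_WIDTH_CM / cube_width_px

    # 4. od: offset along plane p between mid (the image's vertical centre line) and the cube
    od = (cube_centre_x_px - IMAGE_WIDTH_PX / 2.0) * cm_per_px

    # 5. Right-angled triangle with legs r and od: adjust the angle and distance from C
    #    (the sign of the adjustment depends on your image / world co-ordinate conventions)
    dist = math.hypot(r, od)
    angle = heading - math.atan2(od, r)
    obj_x = cx + dist * math.cos(angle)
    obj_y = cy + dist * math.sin(angle)
    return (mid_x, mid_y), (obj_x, obj_y)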

Isolating the reference cube

The last task is actually locating the red cube within the camera’s image; this is where OpenCV comes in.

In a scene that could contain a multitude of different objects we have two primary “hints” that one of those objects might be ours: its shape and its colour. As we detect and examine objects within the image these will be our primary cues that the object might be the reference cube. Even with these hints we can still be confounded by the fact that the shape, which we expect to be a square, could appear more trapezoidal or skewed if one of the cube’s faces is off-angle to the camera, and that the colour could appear different depending on lighting conditions. We need to add some robustness to our detection techniques to deal with these cases.

Once the image has been captured it is first converted from RGB (actually BGR, as this is OpenCV’s native representation of RGB) to the HSV (Hue, Saturation, Value) colourspace. Representing the image in HSV is one way to gain some resiliency against varied lighting conditions (shadows, ambient lighting etc).

In HSV, each pixel is specified in terms of its hue (its “core” colour, like “red”), saturation (or “chroma”, this defines the brilliance or intensity of the colour: imagine taking pure red paint and incrementally adding white paint to it such that you move through pinks to pure white) and value (this defines the darkness of a colour or amount of light reflected, imagine taking pure red paint and incrementally adding black paint to it such that you move through rust to pure black). This is useful because it separates colour information from luma (brightness) information. Using HSV, in varied lighting we might expect the saturation and value of the cube’s (red) pixels to vary, but their hue should remain relatively constant. When oroboto is powered on the camera is calibrated to the current environment by placing the red cube in front of it and measuring its average hue. This average, along with a wide range of saturations and values, is used when thresholding incoming images to filter out any pixels that don’t have a similar colour to the cube.
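A minimal sketch of that calibration step (the region of interest, hue band and saturation / value ranges are assumptions for illustration, not oroboto’s actual values):

import cv2
import numpy as np

def calibrate_hue(bgr_frame):
    # The red cube is assumed to fill the centre of the calibration frame
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    roi = hsv[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20]      # small central region
    mean_hue = float(np.mean(roi[:, :, 0]))

    # Narrow band of hue around the measured average, wide bands of saturation and value.
    # NB: red sits near OpenCV's hue wrap-around (0 / 179), so a real implementation may
    # need two ranges that are OR'd together.
    lower = np.array([max(mean_hue - 10, 0), 50, 50], dtype=np.uint8)
    upper = np.array([min(mean_hue + 10, 179), 255, 255], dtype=np.uint8)
    return lower, upper

The lower / upper bounds returned here are what would be handed to the thresholding step described next.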

After colourspace conversion, the image is thresholded using OpenCV’s inRange function. This takes every pixel of the image and compares it to a range of HSV values measured during calibration, turning potentially red (ie. cube) pixels white and anything else black. The result is a greyscale image with white pixels wherever red pixels had originally been. This is then smoothed using a Gaussian blur to remove noise (the blur eliminates stray white pixels).

OpenCV offers a range of powerful feature detection algorithms, including findContours, which can extract curves in the image by joining continuous points that have the same colour (which all white pixels in the thresholded image do). We use findContours to extract an external contour (path) around any clusters of white pixels: we hope that one of these clusters resembles a square. For each contour that is found the approxPolyDP function is used to overlay a closed polygon: if that polygon turns out to have 4 sides and an aspect ratio close to 1:1, we’ve found a candidate square. The contours / polygons are iterated to find the best match (the polygon with the aspect ratio closest to 1:1) and, if found, this is considered to be the detection of the red cube.
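Putting those steps together, here’s a minimal sketch of the kind of pipeline described above (the blur kernel, approxPolyDP epsilon and aspect-ratio tolerance are illustrative assumptions, not oroboto’s actual values):

import cv2

def find_red_cube(bgr_frame, lower_hsv, upper_hsv):
    # Returns (x, y, w, h) of the most square-like red blob in pixels, or None
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)        # potentially red pixels -> white
    mask = cv2.GaussianBlur(mask, (9, 9), 0)             # smooth away stray white pixels

    # OpenCV 4.x return signature; 3.x also returns the image as a first element
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    best, best_score = None, None
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
        if len(approx) != 4:
            continue                                     # not a quadrilateral
        x, y, w, h = cv2.boundingRect(approx)
        score = abs(1.0 - w / float(h))                  # 0 == perfect 1:1 aspect ratio
        if score < 0.3 and (best_score is None or score < best_score):
            best, best_score = (x, y, w, h), score
    return best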

Once isolated, the width and height of the cube (in pixels) are known and can be used in the pixel-to-distance equation to start the physical co-ordinate estimation process.

Arduino with Raspberry Pi I2C Slave

I²C (“Inter-Integrated Circuit”, which as a trademark of Philips is more generically implemented by other vendors as “Two Wire Interface” or TWI) is a synchronous serial interface that is often used to connect peripheral devices (think EEPROMs, gyroscopes, accelerometers, wheel encoders etc) to microcontrollers. Electrically, it requires only two “wires” between each device, a clock line (SCL) and a data line (SDA), and these lines can be shared as a common bus between multiple devices, allowing for a multi-master and multi-slave environment. Each slave on the bus has a unique address within a 7-bit address space.

Both the ATmega32U4 (the microcontroller included on the Romi 32U4 control board) and the Raspberry Pi (as functionality included in its Broadcom BCM2835 (or later) SOC), include hardware allowing them to function as either I²C master or slave. This presents an opportunity to allow simple communication between the two. But why would this be useful and how should we choose who gets to be master and who is a slave?

As an 8-bit microcontroller with 32kbyte program memory the ATmega32U4 is clearly limited in its capabilities. There is no room or compute power for more advanced functionality, such as computer vision, that we want oroboto to be capable of. However, if we couple it with a more powerful system, such as the Raspberry Pi, a whole new set of opportunities opens up: CV, communication over WiFi, co-operative robotics. The Romi 32U4 control board that the newer iterations of oroboto are using has a female 40-pin header with pinout and mounting holes that are compatible with the Raspberry Pi HAT (Hardware Attached on Top) specification. It directly connects the ATmega32U4’s SDA and SCL lines to those of the Raspberry Pi (which actually has multiple I²C buses) and also routes power from the Romi’s 6x AA batteries to the Pi. You simply have to slot the Pi onto the top of the control board and you now have a powerful robotics platform: the ATmega32U4 can take care of low level functionality like motor control and anything that is timing sensitive, such as reading wheel encoders, while higher level logic, such as navigation strategy and communication, can be offloaded to the Pi. Even when adding extra hardware, such as the Raspberry Pi camera, I can power the whole lot for around 45 minutes on the 6x AA batteries.

Having the Raspberry Pi powered up and running alone obviously isn’t sufficient for an integrated system: it needs to be able to communicate with the Arduino via I²C. However, getting this working can have its complications. There is a known bug in the Broadcom Serial Controller (BSC) of the Broadcom BCM2835 SOC (the chip that contains the Raspberry Pi’s CPU, RAM, video and other peripherals such as I²C) that corrupts data when I²C slaves (such as the Arduino) implement a technique known as clock stretching to slow down communication on the bus. It would seem that the hardware TWI (I²C) module in AVR microcontrollers does stretch the clock when the AVR is a slave, and this has caused, and continues to cause, trouble for people trying to connect it to a Raspberry Pi because of the Broadcom bug. I should note that I haven’t been able to directly verify whether the Broadcom issue exists in later versions of the BCM283x as used in Raspberry Pi 2 and 3 models.

To work around these issues, the Romi team have created an alternative to the standard Arduino Wire.h I²C library: the PololuRPISlave library, which configures the Arduino as an I²C slave and adds timing hacks in the right places to allow successful communication with a Raspberry Pi master at relatively high speeds. Phew, saved! But… no, oroboto of course has to be more complicated than that.

As previously mentioned, the Romi control board includes the LSM6DS33 IMU (which we need for a gyroscope) and for a while I was also experimenting with a 3-axis magnetometer, the QMC5883L: the readings from both of these units require the Arduino to operate as an I²C master and the sensors as I²C slaves. The pre-canned libraries (which I wasn’t inclined to reimplement) for the LSM6DS33 and QMC5883L configure the Arduino as an I²C master and PololuRPISlave configures it as a slave. Using them together doesn’t work out of the box. While it does seem to be possible to run the Arduino as a mixed master / slave, it was a level of fiddling and patching existing libraries I wanted to avoid if possible. The solution I arrived at was counterintuitive from the perspective of “big is master, little is slave” but works: the Arduino remains the single master on the I²C bus and I configure the Raspberry Pi as an I²C slave (even though in the long run it’s the “command module” of the overall integrated system).

The pigpio package (and its Python bindings) exposes the bsc_i2c() function that allows the Pi to operate as an I²C slave. After installing pigpio on the Raspberry Pi (it starts a daemon that implements the functionality exposed by its bindings) it’s simple to configure a slave address for the Pi and register for an interrupt when a master attempts to communicate with that address:

import pigpio

pi = None
slave_addr = 0x13   # 7-bit address the Pi will answer to as an I2C slave

# pigpio invokes event callbacks with the event id and a tick (microsecond timestamp)
def i2cInterrupt(event, tick):
   global pi
   global slave_addr
   # Returns the BSC status, the number of bytes written by the master and the data itself
   status, bytes_read, data = pi.bsc_i2c(slave_addr)

   if bytes_read:
      doYourThing(data)   # application-specific handling of the received payload

pi = pigpio.pi()
int_handler = pi.event_callback(pigpio.EVENT_BSC, i2cInterrupt)
pi.bsc_i2c(slave_addr)   # configure the BSC peripheral as an I2C slave at slave_addr

Later posts will go into more detail, but the code that runs on oroboto’s Raspberry Pi essentially:

  1. Sets up a UDP server listening for data from botLab (the companion macOS application to send commands and receive telemetry from oroboto)
  2. Sets up the Raspberry Pi as an I²C slave
  3. Is periodically polled by the Arduino (acting as I²C master) to see whether any new commands have arrived from botLab (ie. “navigate to waypoint x,y”)
  4. If a command has been received, sends the command payload back to the Arduino for execution (a minimal sketch of steps 2-4 follows below)
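As an illustration of steps 2-4, a minimal sketch that combines the pigpio slave setup above with a UDP listener (the port, payload handling and single-command “queue” are assumptions, not oroboto’s actual protocol):

import socket
import pigpio

slave_addr = 0x13
latest_cmd = b''                          # most recent command payload received from botLab

pi = pigpio.pi()

def i2cInterrupt(event, tick):
    # When the Arduino master polls us, load the BSC transmit FIFO with the queued command
    status, bytes_read, data = pi.bsc_i2c(slave_addr, latest_cmd)
    if bytes_read:
        pass                              # master wrote something (e.g. telemetry): handle as needed

pi.event_callback(pigpio.EVENT_BSC, i2cInterrupt)
pi.bsc_i2c(slave_addr)                    # configure the Pi as an I2C slave

# UDP server listening for commands from botLab (port number is an assumption)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 4242))
while True:
    payload, addr = sock.recvfrom(64)     # e.g. a packed "navigate to waypoint x,y" structure
    latest_cmd = payload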


Measuring rotation with the LSM6DS33 Inertial Measurement Unit

A differential drive robot’s two wheels are placed horizontally adjacent to each other, as if connected by an axle (their “axis”), and their speed is independently controllable. On most (all?) differential drive robots the axis running through the two wheels also runs through the centre of the bot so that any rotation occurs through its centroid in the x-y plane.

As mentioned in the previous post, there are times, such as when a significant correction in heading is required, when we want the robot to “spin on a dime”: that is, rotate around its centroid without changing its (x,y) position. This can be achieved using odometry. In order to rotate π / 2 rad (90 deg) each wheel must travel a distance of arclen, which is defined by the wheelbase (distance between the wheels) of the robot and the desired rotation angle. So, one implementation of rotation can simply be to rotate both wheels in opposite directions at the same speed and periodically (often) check their odometry readings: once they’ve travelled arclen the robot has rotated the required angle.

[Figure: odometry-based rotation]
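Concretely, when spinning on the spot each wheel follows a circle of radius wheelbase / 2, so:

arclen = (wheelbase / 2) × θ

For example, with an illustrative wheelbase of 14cm (not a measured value for the Romi) and θ = π / 2, each wheel must travel roughly 7cm × 1.571 ≈ 11cm.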

The problem with this approach is as mentioned in the previous post: the voltage/speed response curves of each motor are not identical. The Romi’s motor libraries accept a signed 16-bit integer (capped to -300 to +300) as input, which is converted into a PWM signal to the DRV8838 motor driver, which is ultimately represented as a control voltage to the motor. Instructing both motors to spin at “+100” will result in slightly different actual speeds in each motor. The net result is slip on one wheel (and/or drag on the other) and an invalidation of the assumption that because you’ve measured a distance travelled of d (via odometry) for one wheel, the arclen travelled was really d. In other words, you under- or over-rotate. While this seems theoretical, in practice it quickly manifests itself. Box test failed.

Luckily, the Romi’s control board has a built-in LSM6DS33 inertial measurement unit. This consists of a 3-axis accelerometer and a 3-axis gyroscope. Gyroscopes measure rotational motion (angular velocity), that is, how fast they (or the fixed body they’re attached to, in this case oroboto) are rotating around a given axis. This angular velocity is measured in degrees per second. Just as we can measure (x,y) displacement by integrating translational velocity over time, we can measure angular displacement (rotation) by integrating angular velocity over time.

[Figure: gyro integration]

The course correction mode previously implemented with odometry becomes simpler:

  • At t = 0 assume angle_rotated = 0
  • Set speed of both motors to s
  • Periodically measure (sample) the angular velocity around the z-axis (the axis that runs through the centre of the robot, “pinning” it to the floor): this corresponds to its change in heading. The interval of this periodic check is dt
  • Integrate the angular velocity with respect to dt, yielding angle_rotated (degrees rotated since t = 0)
  • Stop when angle_rotated >= the desired rotation (a minimal sketch of this loop follows below)

This is a simplification; in the code you can see that the LSM6DS33 actually reports angular velocity in “gyrodigits”, raw digits that convert to degrees per second (DPS) via a sensitivity factor (this allows the device to support multiple levels of sensitivity / granularity), and that it’s necessary to calibrate the gyroscope before use. Calibration allows some of the random noise the gyroscope experiences to be removed from each sample.
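A minimal sketch of that loop in Python (illustrative only: oroboto’s actual implementation is Arduino code, and read_gyro_z_dps() / set_motor_speeds() are hypothetical stand-ins for the calibrated gyro reading and motor control):

import time

def read_gyro_z_dps():
    # Hypothetical helper: calibrated (bias-subtracted) angular velocity around z, in deg/s
    return 45.0                              # placeholder value

def set_motor_speeds(left, right):
    # Hypothetical helper: signed motor commands; opposite signs spin the robot in place
    pass

def rotate(desired_deg, speed=50):
    angle_rotated = 0.0                      # degrees rotated since t = 0
    set_motor_speeds(-speed, +speed)
    last = time.monotonic()
    while abs(angle_rotated) < abs(desired_deg):
        time.sleep(0.005)                    # sampling interval dt
        now = time.monotonic()
        dt = now - last
        last = now
        angle_rotated += read_gyro_z_dps() * dt   # integrate angular velocity
    set_motor_speeds(0, 0)                   # stop once the desired rotation is reached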

Each sample of the gyroscope’s angular velocity around an axis contains a random amount of noise (ie. some samples will under-report angular velocity, others will over-report it): continued integration of these samples accumulates that noise and results in drift: the integrated value (ie. the amount of rotation that has occurred since we started measuring) becomes less accurate over time. Calibration helps to remove some of this noise but cannot account for all of it. If using the gyroscope to measure rotation over a long time period, this is a problem. Thankfully, course correction routines, where the robot is trying to correct for a specific amount of heading error, are short-lived and essentially “reset the drift clock” each time they start. At the start of the routine, we are seeking to correct for err degrees of heading error from the current heading, which for the purposes of the correction is considered “0 degrees”. The integration of angular velocity only has to occur until err degrees of rotation has occurred from that initial 0 degree starting point. Because this rotation occurs quickly (the robot can spin 360 degrees in a few seconds, even when moving slowly) there isn’t time for drift to accumulate, and as such the gyroscope is accurate under these conditions, as illustrated in the videos below.


Drift becomes a problem when trying to use the gyroscope to measure rotational change over a longer time period. Consider a pilot who takes off along a runway with known heading (ie. perhaps it runs due north-south) and who has reset his gyroscope to “0 degrees” when facing north up the runway before takeoff. At this time, the gyroscope reading is accurate. After takeoff, his gyroscope is continually integrating angular velocity (in whichever axes it cares about, hopefully all three) with respect to that initial reset on the runway. A few hours into his trip, the gyroscope will have accumulated significant error through drift and will no longer be identifying accurate headings. A similar situation occurs with oroboto when using the gyroscope as an alternative to odometry for measuring change in heading between PID loop iterations. In this case the robot continues integrating gyroscope readings as it transits between waypoints, each time adding or subtracting from its last known heading. Over time, gyroscope drift causes these measurements to become less and less accurate, leading to incorrect heading adjustments for the PID loop. Box test failed.

As a method based on integration, odometry suffers exactly the same drift problems as the gyroscope, but in my testing, over the distances travelled (less than 100 meters), odometry provided better accuracy than the gyroscope method.

One way to correct for drift is to combine the readings of a drift prone sensor (such as a gyroscope or wheel odometry) with the readings of a non-drift prone sensor, such as a magnetometer. This is called sensor fusion, a topic that deserves way more than one post alone.

Before delving into that rabbit hole, however, my next post will focus on interfacing the Romi to a Raspberry Pi I2C slave. This gives oroboto a huge amount of computing power to execute higher level navigational tasks and allows for simple robot-to-robot and robot-to-controller communication over WiFi.

oroboto v2.0

It’s been a long time between posts but I’ve not been idle. I recently discovered the Romi differential-drive chassis from Pololu and have been experimenting with two robots I’ve built on top of it. Unlike the initial iterations of oroboto, which involved a lot of sticky tape and solder bridges to bring everything together on top of the Beaglebone Black (BBB), Romi is a set of neatly interchangeable components including the chassis (with in-built 6xAA battery compartment), 2x 120:1 DC motors, ball caster and wheels. Pololu sell a control board that sits neatly on top of the chassis and provides an ATmega32U4 AVR (Arduino-compatible) microcontroller, DRV8838 H-bridge motor drivers, power distribution circuit and an LSM6DS33 3-axis accelerometer and gyroscope. The motors are easily interfaced with magnetic Hall-effect quadrature encoders to enable odometry and the control board has a header for direct interfacing with a Raspberry Pi. In short, it provides everything the original oroboto had (albeit with a Raspberry Pi rather than a Beaglebone Black) but in a more robust package with less assembly required.

[Photo: the Romi-based orobotos]

Porting the original oroboto code to the Romi was trivial: all of the interface code that poked the BBB’s GPIOs and interrupts got chucked out and replaced with Arduino-specific library calls, but the core PID loop and navigation logic remained. Earlier readers may recall the significant problems I had getting oroboto to “hit its waypoints” while executing a transit. I had hoped the port to Romi would result in immediate performance improvements for two reasons: a) the Romi’s motors are more highly geared (120:1 vs 100:1), which should grant slightly higher rotary encoder resolution; but more importantly, b) everything was running directly on the AVR instead of jumping back and forth between the kernel and user-space on the BBB, which should have resulted in much finer timing and accuracy. The original BBB code relied on a select() on a file descriptor for the user-mode process to learn of GPIO interrupts from the rotary encoder and on a fairly opaque PWM implementation to drive the motors. By the time the feedback loop from input to output was complete there was a lot of context switching and I couldn’t guarantee how the kernel would schedule the oroboto control process. In contrast, on the AVR, everything is running in a single “thread” and timing control is deterministically related only to the running code.

Needless to say, I was disappointed. Well, I wasn’t totally disappointed, it’s always nice after porting a project from one platform to another that it runs at all, but for my “box test” (getting oroboto to transit between 4 waypoints that define the corners of a 1m x 1m square) the accuracy was still fairly miserable. I ruminated on the problem for a few days and realised the obvious: you can’t make pigs fly.

Without giving it due consideration, I think I’d been hoping the PID implementation would magically cause oroboto to spin on a dime once it reached one waypoint and began the transit to the next, but in reality the magnitude of control signal change is often too high to map onto the physical constraints of the motor. Even if it was mappable, such a sudden change in torque can cause slip and inaccurate odometry readings. To be specific, the DC motor speed is varied by controlling its input voltage: this is achieved via a PWM signal sent to the DRV8838 motor driver. Assume the motor’s input voltage range is 0V to 5V and this maps to a (fictitious, I haven’t actually measured min and max RPM) speed range (post gearing) of 0RPM to 50RPM (discounting for the fact the mapping is not linear, which is a further complication).

At any iteration, the PID loop is trying to correct for the heading error (the error between the robot’s current heading and the heading it needs to travel on to reach its target waypoint) by adjusting the speed (RPM) of the left and right motors to create the required amount of “turn” to the left or right. If the robot has reached a waypoint and its next waypoint requires a 90 degree turn to the left (ie. to the adjacent corner of the “box”), the heading error is 90 degrees, which translates into a large positive control signal to the right motor (“turn forwards at 200 RPM!”) and a large negative control signal to the left (“turn backwards at 200 RPM!”). The main problems in this scenario are that the magnitude of the control signal (as measured in RPM) can exceed the physical capabilities of the motor, and that the response curve of each motor (the mapping between input voltage and output RPM) is neither linear nor matched to that of the other motor. While you might hope these differences “come out in the wash”, that wasn’t my experience. The first problem is somewhat akin to the turning circle of a car: you simply can’t turn at an angle greater than that allowed by the physics of the steering mechanism.
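To make that mapping concrete, here’s a minimal sketch of the kind of PID iteration described above (the gains, forward speed and clamping are illustrative assumptions, not oroboto’s tuned values):

MAX_SPEED = 300            # the Romi motor command range is capped to -300..+300

def pid_step(heading_error, integral, last_error, dt, kp=200.0, ki=0.0, kd=10.0):
    # heading_error is in radians, wrapped to [-pi, pi]
    integral += heading_error * dt
    derivative = (heading_error - last_error) / dt
    turn = kp * heading_error + ki * integral + kd * derivative

    forward = 100                            # constant "set forward velocity"
    # A 90 degree error produces a huge turn term; the clamp below is where the physical
    # limits of the motors bite and the requested differential is lost
    left = max(-MAX_SPEED, min(MAX_SPEED, forward - turn))
    right = max(-MAX_SPEED, min(MAX_SPEED, forward + turn))
    return left, right, integral, heading_error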

I decided to simplify the problem and let the differential-drive robot do what it does best: spin. At each iteration of the PID loop, if the heading error exceeds 30 degrees, I break into a “course correction” mode that stops forward transit and (relatively) slowly spins the motors in opposite directions to achieve the amount of “on a dime” turn required to correct the heading error. Once corrected, I let the PID loop take over “straight line” correction again. This made a significant improvement in accuracy during the box test and even allowed the robot to complete several transits of more complicated box patterns with little displacement error. However, the problem of mismatched response curves between motors remained. During this course correction mode, the arc length that must be travelled by one of the motors to achieve the rotation required to correct for the heading error is computed and then the odometry of that wheel is monitored until the arc length has been traversed: in other words, it assumes that if that wheel has travelled a certain distance then the other wheel has travelled exactly the same distance in the opposite direction and thus that the required rotation has occurred. This is a poor assumption: the motors do not have the same response curves and when instructed to spin at the same speed, a margin of error (which differs based on the requested speed due to non-linearities in the motors themselves) is introduced. The net result is that over or under rotation occurs.

While I did spend some time writing motor calibration code to do simple lookup table based left-to-right response curve correction I settled on a different approach: ditch odometry for the rotation measurement and use the onboard gyroscope instead.

The next few posts will discuss using the LSM6DS33 inertial measurement unit (IMU) gyroscope; why sensor fusion between an accelerometer and gyroscope can’t ever give better measurements of yaw; how to interface the AVR to a Raspberry Pi acting as an I2C slave (and why you might want to do that); introducing peer-to-peer communication over WiFi between two robots for co-operative navigation and using AWS Rekognition and OpenCV to give oroboto eyes.

3D Network Visualisation

The lack of robot related posting is due to a minor course deviation into a work-related minefield. Mine sweepers have been deployed and we expect to be back on our original trajectory momentarily. The crew wish to thank you for your patience.

In the interim, I have been writing Interconnect, a 3D network connectivity visualisation tool for Mac OSX and CampSight, an iOS application to locate nearby campsites in Australia and the United States even when Internet connectivity is unavailable.

The next stage for oroboto will be improved ultrasonic ranging capabilities before moving onto navigation with SONAR-derived local area maps.

BeagleBone Black Wifi

While tuning the PID loop and debugging the ultrasonic range finding code (more on that to come) in oroboto I’ve been writing log files to its flash for subsequent analysis using dotVision.

A log line contains the following data:

timestamp x y logtype waypoint

token      data type  description
timestamp  float      time in seconds since the first log entry (intrinsically 0.0)
x          float      x co-ordinate
y          float      y co-ordinate
logtype    string     class of entry (for dotVision colour markup etc)
waypoint   int        whether or not this is a waypoint marker

There were two reasons for recording this data:

  1. Keep track of where the robot thought it was (x,y co-ordinates) at a given time (to allow for comparison to where it really was)
  2. For ultrasonic range finding, keep track of where the robot detected obstacles (to allow for comparison to where they actually are)

While the timestamps are useful, because they allow dotVision to play the log back “in realtime”, I still had to try to remember where the robot actually was at a given time in order to make any meaningful comparisons for #1. This turned out to be a fruitless effort.

Better solution: pipe the data back to dotVision in realtime over a wireless connection.

dotVision (updated on git) now has a UDP server (it uses Grand Central Dispatch to put a dispatch source around the socket and then schedules “dot” additions into the dotLog on the main UI thread for drawing) and the DotLog class on the BBB has a mini UDP client in it.

Which brings up the next question: what’s the ideal physical layer to send this over? The two obvious choices are 802.11(whatever) or Bluetooth. I had a spare D-Link DWA-131 revB USB wifi adapter lying around so I gave that a shot…

DWA-131 revB Power Requirements

First things first, you’re gonna need more power. I had limited success getting the DWA-131 revB running when tethered via USB and no success at all when running off the measly 5x AA cell power supply I had jury rigged. Scratch that, time for a LiPo.

Lithium-based battery chemistry has come a long way and as any RC fan can attest, LiPo (lithium polymer) batteries can deliver a significant amount of current in a small form factor (just be careful charging them, they also make great fireworks).

[Photo: Zippy 2S LiPo battery]

I picked up a few Zippy 2200mAh 2S batteries (2S meaning 2 cells in series, so 7.4V at full charge) and rejigged the robot chassis a bit to house one of them (and we now have an on/off switch, what progress!). This keeps the BBB, wifi adapter and motors happy for at least an hour in the testing I’ve been doing.

“Look ma, no hands!”

Now for the meaty bit. Once this is done the robot can boot the default BBB Angstrom Linux distro and be immediately accessible directly via WiFi. This assumes you’re connecting to an infrastructure mode network using WPA2.

  • Download the firmware for the WiFi chipset used by the DWA-131 (an RTL8192CU) from here and copy it into /lib/firmware/rtlwifi
  • Load the kernel module at boot:
# echo rtl8192cu > /etc/modules-load.d/rtl8192cu.conf
# update-modules
  • Disable connman, it’s a con, man:
# systemctl disable connman.service
  • Define the properties for your wireless network and the NIC you’ll be using (wlan0). Note that, unlike the references below, there is no pre-up command here. This happens in /etc/network/interfaces:
auto wlan0
iface wlan0 inet static
   address X.X.X.X
   netmask mask.mask.mask.mask
   gateway your.gateway.ip
   wireless_mode managed 
   wireless_essid "YOUR_ESSID"
   wpa-driver nl80211
   wpa-conf /etc/wpa_supplicant.conf
   post-down killall -q wpa_supplicant
  • Generate the hash for your WPA pre-shared key using wpa_passphrase: follow the prompts and then look for the psk line; you’ll want to put that in your wpa_supplicant configuration file, /etc/wpa_supplicant.conf:
fast_reauth=0
ap_scan=1
network={
 scan_ssid=1
 proto=WPA2
 key_mgmt=WPA-PSK
 pairwise=TKIP
 group=TKIP
 ssid="YOUR_ESSID"
 psk=long_hex_string_from_wpa_password
}

  • We now need to ensure the DWA-131 firmware is loaded at boot time by configuring the kernel module with the USB device ID of the adapter. Angstrom uses systemd for boot time initialisation scripts so I just wrote a little service that runs when the networking stack comes up. The key here is to ensure it runs before we try to bring the interface up (see wifi.service below); note the Before condition. The contents below go into /lib/systemd/system/wifidriver.service:
[Unit]
Description=wifidriver
Wants=network.target
Before=wifi.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/home/root/wifidriver.sh
[Install]
WantedBy=multi-user.target
  • The /home/root/wifidriver.sh it references is simple:
#!/bin/sh
echo "2001 330D" | tee /sys/bus/usb/drivers/rtl8192cu/new_id
  • Enable the service so it runs at boot:
# systemctl enable wifidriver.service
  • The last step is to get the interface itself to come up (as we disabled connman, which technically was meant to do it on demand but seems to be rather average at that job). This is again done with a simple systemd service. The contents below go into /lib/systemd/system/wifi.service:
[Unit]
Description=wifi
Wants=network.target
Before=network.target
BindsTo=sys-subsystem-net-devices-wlan0.device
After=sys-subsystem-net-devices-wlan0.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/home/root/wifi.sh
ExecStop=/bin/sh -c "ifdown wlan0"

[Install]
WantedBy=multi-user.target
  • The /home/root/wifi.sh it references (the sleep is just to give the driver time to load the firmware and boot the device):
#!/bin/sh
sleep 10
ifup wlan0
  • Finally, enable the service so it runs at boot:
# systemctl enable wifi.service

 

The act of bringing up the interface should cause wpa_supplicant to run and authenticate against your network (it will remain running as a daemon to handle rekeying etc).

Reboot and hopefully you’ll be able to SSH in directly without the USB tether.

References:

  1. Ubuntu 12.04 – Install D-Link DWA-131 rev. B1 wifi adapter
  2. Auto connect to a wireless network (Ångström Linux)

Next research project: low power Bluetooth instead of WiFi. But before then, some notes on ultrasonic range finding.

Go To Goal Strategy Implementation

If you take a look on GitHub there is now an example implementation of the Go to Goal strategy described in the previous post.

  • The bulk of the logic is in the file modules/controller.cpp
  • Additional support libraries have been added, such as:
    • gpio: wrapper to configure BBB GPIOs (used by the odometer)
    • odo: a basic odometry implementation to measure distance travelled by a wheel using optical encoders
    • led: wrapper to configure and interface with the onboard user LEDs
    • logger: simple text-based logging abstraction
    • dotlog: class that can create files to be read by dotVision

I’ve also committed the dotVision project, a simple Objective-C application (provided as an Xcode project) for OSX that takes log files of data from the robot and plots it either on the same timescale as the data was created or in one hit. This is a useful debugging tool to quickly visualise 2D data that the robot is / was processing (future improvements will include socket support to take a feed of data in realtime (ie. via WiFi) to plot progress).

[Screenshot: dotVision plot]

The screenshot shows the dead reckoned position co-ordinates of the Go to Goal strategy module (ie. where it thought it was in its 2D co-ordinate system as it followed the defined waypoints). It would of course be interesting to be able to overlay this with the actual measured position of the robot.

The trace is black while the strategy thinks it is still far from the next waypoint, red as it believes it is approaching the waypoint and yellow when it stops as close as it can to the waypoint.

You can see that while it hits the first and second waypoints easily, the interaction between the PID controller tuning and the simplistic implementation of “set forward velocity” leads to a very loose turning circle as it adjusts its heading for the third waypoint.

Implementation Challenges

There were quite a few issues encountered (and not fully resolved) while implementing the current Go to Goal strategy including:

Knowing when to stop

When the strategy sets out for a waypoint it takes its current (believed) position (in theory the middle point of the robot between the wheels) and heading and works out the new heading required for the robot to be “facing” (directed toward) the next waypoint. It then drives at a set forward velocity until its (believed) heading is as close as possible to the required heading (the “set forward velocity” is modulated by changes to the left and right wheel velocities required to make any turns needed to track the reference heading: this is the job of the PID loop whose reference variable is simply the heading).
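The heading calculation itself is simple geometry; a minimal sketch (variable names are illustrative):

import math

def heading_to_waypoint(x, y, heading, wx, wy):
    # Heading required to "face" the waypoint from the believed position (x, y)
    required = math.atan2(wy - y, wx - x)
    # Heading error wrapped to [-pi, pi] so the controller turns the short way around
    error = math.atan2(math.sin(required - heading), math.cos(required - heading))
    return required, error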

(As a side note this simplistic implementation means that it’s difficult/impossible for the strategy to deal with waypoints that are too close together.)

Consider the very real possibility that due to a combination of PID tuning and physics the robot simply isn’t tracking well to its required path: perhaps it’s at (90,45) and wants to move to (0,45) but never quite manages to hit the straight line between them: at some point it will go past the target waypoint and … probably keep driving off into the distance.

The simple solution implemented in the current strategy is to measure a vector between the current position and the waypoint.

If the vector is noted to be “small enough” (relative to the size of the robot) the strategy believes it has reached the waypoint and stops.

If the vector does not ever get “small enough”, the strategy waits until it sees the size of the vector increasing (on the logic that for the bulk of the transit from waypoint A to waypoint B, other than some initial turning, the vector should shrink). When it notes a prolonged increase (ie. we keep getting further and further from the target) that is significant in the context of the initial vector between the two waypoints, it “gives up” and assumes any further transit is only going to place it further from its goal.
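As a sketch, the “reached or give up” check might look something like this (the reach radius and give-up threshold are illustrative assumptions, not the strategy’s actual values):

import math

def check_progress(x, y, wx, wy, best_dist, initial_dist, reach_radius=5.0, giveup_factor=0.25):
    # Distance (cm) from the believed position to the target waypoint
    dist = math.hypot(wx - x, wy - y)
    if dist <= reach_radius:
        return 'reached', dist
    # Give up if we're now significantly further away than the closest we ever got,
    # relative to the size of the original transit between the two waypoints
    if dist - best_dist > giveup_factor * initial_dist:
        return 'give_up', best_dist
    return 'continue', min(best_dist, dist)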

Determining how often to reset the reference heading between waypoints

Initially the strategy calculated the reference heading just once (as described above) at the time it began a transit from one waypoint to another.

The initial reasoning behind this was that the error from dead reckoning was assumed to be large (ie. once the journey from A to B had started the “believed” position was likely to be increasingly wrong). As such, recalculating the reference based on that dodgy position was just going to cause issues (this is somewhat of a moot point though: if the incremental believed position is wrong, the “final” believed position (ie. where we think we are vs where we actually are when we arrive at the target waypoint) will also be wrong, and hence the next reference heading calculation for the subsequent waypoint will also be wrong).

In reality the dead reckoned position over the distances of the experiment (less than 1m x 1m, so not really useful or significant) was fairly accurate, so continuously recalculating the “new” reference heading during the transit, based on the “believed” position, rather than calculating it just once per transit, seemed to give better outcomes.

Achieving a reasonable balance between PID loop tracking, algorithm and physical limitations

As can be seen in the dotVision plot, the strategy of using a continuous forward velocity causes some issues when large turning circles are required.

The robot itself is capable of turning on a pin: a better strategy could take advantage of the physical characteristics of independent wheel rotation to:

  • use smaller forward velocities when enacting large heading corrections to minimise lateral / translational drift
  • adopt a separate “turn on a dime” strategy when starting out from a given waypoint

Next Steps

Rather than continuing to explore some of the adaptions above to the strategy itself the next steps for this robot will be to use the ultrasonic transducer to begin taking measurements as it moves and build a map of the surroundings.

In future this information can then be used by the “Go to Goal” strategy to avoid obstacles; ideally I’d like to see the robot adopt a new path when an obstacle appears between two of its desired waypoints.