BeagleBone Black Wifi

While tuning the PID loop and debugging the ultrasonic range finding code (more on that to come) in oroboto, I’ve been writing log files to its flash for subsequent analysis using dotVision.

A log line contains the following data:

timestamp x y logtype waypoint

token      data type   description
timestamp  float       time in seconds since the first log entry (intrinsically 0.0)
x          float       x co-ordinate
y          float       y co-ordinate
logtype    string      class of entry (for dotVision colour markup etc)
waypoint   int         whether or not this is a waypoint marker
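
For example, a few hypothetical lines (illustrative values; the logtype strings are just whatever the robot’s code emits):

0.000 0.00 0.00 position 0
0.050 1.25 0.30 position 0
0.100 2.60 0.75 position 0
5.200 45.00 0.00 waypoint 1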

There were two reasons for recording this data:

  1. Keep track of where the robot thought it was (x,y co-ordinates) at a given time (to allow for comparison to where it really was)
  2. For ultrasonic range finding, keep track of where the robot detected obstacles (to allow for comparison to where they actually are)

While the timestamps are useful, because they allow dotVision to play the log back “in realtime”, I still had to try to remember where the robot actually was at each moment in order to make any meaningful comparison for #1. This turned out to be a fruitless effort.

Better solution: pipe the data back to dotVision in realtime over a wireless connection.

dotVision (updated on git) now has a UDP server (it uses Grand Central Dispatch to wrap the socket in a dispatch source and then schedules “dot” additions into the dotLog on the main UI thread for drawing), and the DotLog class on the BBB has gained a mini UDP client.
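
The BBB side really is mini. Here is a sketch of the kind of send it performs (the host, port and exact wire format are illustrative assumptions, not the real protocol):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return 1;

    sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(4444);                        // hypothetical dotVision port
    inet_pton(AF_INET, "192.168.1.10", &dest.sin_addr); // hypothetical Mac running dotVision

    // one "dot": timestamp x y logtype waypoint
    char msg[128];
    int len = snprintf(msg, sizeof(msg), "%.3f %.2f %.2f %s %d",
                       1.234, 45.0, 10.5, "position", 0);
    sendto(fd, msg, len, 0, (sockaddr*)&dest, sizeof(dest));

    close(fd);
    return 0;
}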

Which brings up the next question: what’s the ideal physical layer to send this over? The two obvious choices are 802.11(whatever) or Bluetooth. I had a spare D-Link DWA-131 revB USB wifi adapter lying around, so I gave that a shot…

DWA-131 revB Power Requirements

First things first: you’re gonna need more power. I had limited success getting the DWA-131 revB running when tethered via USB, and no success at all when running off the measly 5x AA cell power supply I had jury-rigged. Scratch that, time for a LiPo.

Lithium-based battery chemistry has come a long way and as any RC fan can attest, LiPo (lithium polymer) batteries can deliver a significant amount of current in a small form factor (just be careful charging them, they also make great fireworks).

[Photo: Zippy 2S LiPo battery]

I picked up a few Zippy 2200mAh 2S batteries (2S meaning 2 cells in series, so 7.4V nominal and about 8.4V at full charge) and rejigged the robot chassis a bit to house one of them (and we now have an on/off switch, what progress!). This keeps the BBB, wifi adapter and motors happy for at least an hour in the testing I’ve been doing.

“Look ma, no hands!”

Now for the meaty bit. Once this is done the robot can boot the default BBB Angstrom Linux distro and be immediately accessible directly via WiFi. This assumes you’re connecting to an infrastructure mode network using WPA2.

  • Download the firmware for the WiFi chipset used by the DWA-131 (an RTL8192CU) from here and copy it into /lib/firmware/rtlwifi
  • Load the kernel module at boot:
# echo rtl8192cu > /etc/modules-load.d/rtl8192cu.conf
# update-modules
  • Disable connman, it’s a con, man:
# systemctl disable connman.service
  • Define the properties for your wireless network and the NIC you’ll be using (wlan0). Note that, unlike in the references, there is no pre-up command here. This happens in /etc/network/interfaces:
auto wlan0
iface wlan0 inet static
   address X.X.X.X
   netmask mask.mask.mask.mask
   gateway your.gateway.ip
   wireless_mode managed 
   wireless_essid "YOUR_ESSID"
   wpa-driver nl80211
   wpa-conf /etc/wpa_supplicant.conf
   post-down killall -q wpa_supplicant
  • Generate the hash for your WPA pre-shared key using wpa_passphrase: follow the prompts, then look for the psk line; that value goes into your wpa_supplicant configuration file, /etc/wpa_supplicant.conf:
fast_reauth=0
ap_scan=1
network={
 scan_ssid=1
 proto=WPA2
 key_mgmt=WPA-PSK
 pairwise=TKIP
 group=TKIP
 ssid="YOUR_ESSID"
 psk=long_hex_string_from_wpa_passphrase
}
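
If you haven’t used wpa_passphrase before, a session looks roughly like this (output abridged; the psk placeholder stands in for the real 64-character hex hash):

# wpa_passphrase YOUR_ESSID
# reading passphrase from stdin
your_wpa2_passphrase
network={
 ssid="YOUR_ESSID"
 #psk="your_wpa2_passphrase"
 psk=<64_hex_chars_output_here>
}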

  • We now need to ensure the DWA-131 firmware is loaded at boot time by configuring the kernel module with the USB device ID of the adapter. Angstrom uses systemd for boot-time initialisation, so I just wrote a little service that runs when the networking stack comes up. The key here is to ensure it runs before we try to bring the interface up (see wifi.service below): note the Before condition. The contents below go into /lib/systemd/system/wifidriver.service:
[Unit]
Description=wifidriver
Wants=network.target
Before=wifi.service
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/home/root/wifidriver.sh
[Install]
WantedBy=multi-user.target
  • The /home/root/wifidriver.sh it references is simple:
#!/bin/sh
echo "2001 330D" | tee /sys/bus/usb/drivers/rtl8192cu/new_id
  • Enable the service so it runs at boot:
# systemctl enable wifidriver.service
  • The last step is to get the interface itself to come up (we disabled connman, which technically was meant to do this on demand but seems to be rather average at that job). This is again done with a simple systemd service. The contents below go into /lib/systemd/system/wifi.service:
[Unit]
Description=wifi
Wants=network.target
Before=network.target
BindsTo=sys-subsystem-net-devices-wlan0.device
After=sys-subsystem-net-devices-wlan0.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/home/root/wifi.sh
ExecStop=/bin/sh -c "ifdown wlan0"

[Install]
WantedBy=multi-user.target
  • The /home/root/wifi.sh it references (the sleep is just to give the driver time to load the firmware and boot the device):
#!/bin/sh
sleep 10
ifup wlan0
  • Finally, enable the service so it runs at boot:
# systemctl enable wifi.service


The act of bringing up the interface should cause wpa_supplicant to run and authenticate against your network (it will remain running as a daemon to handle rekeying etc).

Reboot and hopefully you’ll be able to SSH in directly without the USB tether.

References:

  1. Ubuntu 12.04 – Install D-Link DWA-131 rev. B1 wifi adapter
  2. Auto connect to a wireless network (Ångström Linux)

Next research project: low-power Bluetooth instead of WiFi. But before then, some notes on ultrasonic range finding.

Go To Goal Strategy Implementation

If you take a look on GitHub, there is now an example implementation of the Go to Goal strategy described in the previous post.

  • The bulk of the logic is in the file modules/controller.cpp
  • Additional support libraries have been added, such as:
    • gpio: wrapper to configure BBB GPIOs (used by the odometer)
    • odo: a basic odometry implementation to measure distance travelled by a wheel using optical encoders
    • led: wrapper to configure and interface with the onboard user LEDs
    • logger: simple text-based logging abstraction
    • dotlog: class that can create files to be read by dotVision

I’ve also committed the dotVision project, a simple Objective-C application (provided as an Xcode project) for OS X that takes log files of data from the robot and plots them either on the same timescale as the data was created or in one hit. This is a useful debugging tool for quickly visualising 2D data that the robot is or was processing (future improvements will include socket support to take a feed of data in realtime, ie. via WiFi, and plot progress).

[Screenshot: dotVision plotting the dead reckoned trace]

The screenshot shows the dead reckoned position co-ordinates of the Go to Goal strategy module (ie. where it thought it was in its 2D co-ordinate system as it followed the defined waypoints). It would of course be interesting to be able to overlay this with the actual measured position of the robot.

The trace is black while the strategy thinks it is still far from the next waypoint, red as it believes it is approaching the waypoint and yellow when it stops as close as it can to the waypoint.

You can see that while it hits the first and second waypoints easily, the interaction between the PID controller tuning and the simplistic implementation of “set forward velocity” leads to a very loose turning circle as it adjusts its heading for the third waypoint.

Implementation Challenges

There were quite a few issues encountered (and not fully resolved) while implementing the current Go to Goal strategy, including:

Knowing when to stop

When the strategy sets out for a waypoint it takes its current (believed) position (in theory the middle point of the robot between the wheels) and heading, and works out the new heading required for the robot to be “facing” (directed toward) the next waypoint. It then drives at a set forward velocity until its (believed) heading is as close as possible to the required heading. (The “set forward velocity” is modulated by changes to the left and right wheel velocities required to make any turns needed to track the reference heading: this is the job of the PID loop, whose reference variable is simply the heading.)

(As a side note this simplistic implementation means that it’s difficult/impossible for the strategy to deal with waypoints that are too close together.)

Consider the very real possibility that due to a combination of PID tuning and physics the robot simply isn’t tracking well to its required path: perhaps it’s at (90,45) and wants to move to (0,45) but never quite manages to hit the straight line between them: at some point it will go past the target waypoint and … probably keep driving off into the distance.

The simple solution implemented in the current strategy is to measure a vector between the current position and the waypoint.

If the vector is noted to be “small enough” (relative to the size of the robot) the strategy believes it has reached the waypoint and stops.

If the vector never gets “small enough”, the strategy waits until it sees the size of the vector increasing (on the logic that for the bulk of the transit from waypoint A to waypoint B, other than some initial turning, the vector should shrink). When it notes a prolonged increase (ie. we keep getting further and further from the target) that is significant in the context of the initial vector between the two waypoints, it “gives up” and assumes any further transit is only going to place it further from its goal.
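
In rough C++ terms the check looks something like the sketch below (names and thresholds are illustrative, not the exact ones used in controller.cpp):

#include <cmath>

struct Vec { double x, y; };

static double magnitude(const Vec& v) { return std::sqrt(v.x * v.x + v.y * v.y); }

// Returns true while the transit toward the waypoint should continue.
// bestDistance and growingCount persist across iterations of the control loop.
bool shouldKeepDriving(const Vec& current, const Vec& waypoint,
                       double initialDistance, double& bestDistance, int& growingCount)
{
    Vec toTarget = { waypoint.x - current.x, waypoint.y - current.y };
    double d = magnitude(toTarget);

    const double arrivalThreshold = 5.0; // "small enough" relative to the robot, in cm (illustrative)
    if (d < arrivalThreshold)
        return false; // close enough: we've arrived

    if (d < bestDistance) {
        bestDistance = d;   // still converging on the waypoint
        growingCount = 0;
    } else if (++growingCount > 20 && (d - bestDistance) > 0.25 * initialDistance) {
        return false;       // prolonged, significant divergence: give up
    }

    return true;
}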

Determining how often to reset the reference heading between waypoints

Initially the strategy calculated the reference heading just once (as described above) at the time it began a transit from one waypoint to another.

The initial reasoning behind this was that the error from dead reckoning was assumed to be large (ie. once the journey from A to B had started, the “believed” position was likely to be increasingly wrong). As such, recalculating the reference based on that dodgy position was just going to cause issues. (This is somewhat of a moot point though: if the incremental believed position is wrong, the “final” believed position (ie. where we think we are vs where we actually are when we arrive at the target waypoint) will also be wrong, and hence the next reference heading calculation for the subsequent waypoint will also be wrong.)

In reality the dead reckoned position over the distances of the experiment (less than 1m x 1m, so not really useful or significant) was fairly accurate, so continuously recalculating the “new” reference heading during the transit, based on the “believed” position, rather than just once per transit, seemed to give better outcomes.

Achieving a reasonable balance between PID loop tracking, algorithm and physical limitations

As can be seen in the dotVision plot, the strategy of using a continuous forward velocity causes some issues when large turning circles are required.

The robot itself is capable of turning on a pin: a better strategy could take advantage of the physical characteristics of independent wheel rotation to:

  • use smaller forward velocities when enacting large heading corrections to minimise lateral / translational drift
  • adopt a separate “turn on a dime” strategy when starting out from a given waypoint

Next Steps

Rather than continuing to explore some of the adaptations above to the strategy itself, the next steps for this robot will be to use the ultrasonic transducer to begin taking measurements as it moves and build a map of the surroundings.

In future this information can then be used by the “Go to Goal” strategy to avoid obstacles; ideally I’d like to see the robot adopt a new path when an obstacle appears in between two of its desired waypoints.

Beaglebone Black PID Control and Go to Goal

Goal: Get the robot to travel from one position to another in a known co-ordinate system

To achieve this goal the robot must be able to determine:

  • where it is within its co-ordinate system at a given time
  • how movement affects its position

Dead Reckoning

We can solve the first problem using our odometers, trigonometry and some simplifications about what the robot looks like (the wheelbase, L, of our differential drive robot is the distance between its two wheels) and how it moves. Let’s assume that the robot starts at position (x_t, y_t) with heading θ_t, then travels distance D_C along that heading; the new position (x_t+1, y_t+1) and heading θ_t+1 can be determined as follows:

x_t+1 = x_t + D_C cos(θ_t)
y_t+1 = y_t + D_C sin(θ_t)

θ_t+1 = θ_t + (D_R − D_L) / L

(where D_L and D_R, the per-wheel distances, are introduced below)

The first challenge is that we don’t actually know D_C: the odometers measure the distance travelled by each wheel, D_L and D_R, which, if the robot is turning, will not be equal. That’s not such a big deal though, because D_C must always lie between the arcs of the two wheels, so we can take the average:

D_C = (D_L + D_R) / 2

The next challenge is that the odometers don’t actually measure distance in centimeters or meters; they measure distance in ticks (number of state table transitions). Noting that the Pololu rotary encoders being used generate 48 ticks (let N = 48) per revolution of the wheel, that each wheel has a radius, r, and that one full revolution covers a distance equal to the circumference (2πr) of the wheel (neglecting any slippage), the physical distance travelled by a wheel is:

D = 2πr × (ticks / N)

Now we can do this:

x = 0, y = 0, heading = 0
while (true)
{
   ticksLeft = odometer.getTicks(left)
   ticksRight = odometer.getTicks(right)
   distanceLeft = 2 * pi * r * (ticksLeft / N)
   distanceRight = 2 * pi * r * (ticksRight / N)
   distance = (distanceLeft + distanceRight) / 2
   x += distance * cos(heading)
   y += distance * sin(heading)
   heading += (distanceRight - distanceLeft) / L
   sleep(shortTime)
}

This technique is known as dead reckoning: by adding the distance we think we travelled to our previous location we can infer our new location.

It works, but it is error prone: if for any reason the distance we think we have travelled is not the distance we actually travelled, the position estimate drifts. This can happen for a multitude of reasons, the most common being wheel slippage: the wheel rotates but skids, so the odometer measures rotation, but the wheel does not actually move the same distance over ground.

For the purposes of this experiment, dead reckoning will do: if you’re planning on travelling any real distance, or over uneven terrain, a more accurate positioning system, or ability to reset to known positions, may be required (ie. GPS, known location beacons based on ultrasound, infrared etc).

So far, so good: the robot can now (sort of) determine where it is (assuming a known initial position and heading).

Go to Goal Strategy

The next challenge is to be able to control the robot to go somewhere specific (this is the “go to goal” strategy, ie. “robot, go to x = 10, y = 20!”).

As a starting point, consider the robot as a point-mass, positioned at (x,y), travelling along a straight line with heading θ.

[Figure: a point mass at (x, y) travelling with velocity v along heading θ]

Just as with the dead reckoning, trigonometry can be used to show that its translational (x,y) position will change as a function of its translational velocity, v, (the speed at which it is moving forward along the straight line) and its current heading, θ. This is great news! As a robot all we need to know is our starting position and heading, and then by controlling v and θ we can go anywhere we want.

Unfortunately for us, our robot is not a point-mass, nor do we have “direct” control over its translational velocity or heading. We have a left and a right motor and two wheels with a known radius, r, positioned a known distance, L (the wheelbase) apart. The only variables we can control are the velocities (VL and VR) at which the motors spin.

[Figure: differential drive robot with wheel velocities VL and VR, wheel radius r and wheelbase L]

As a programmer, I don’t want to tell my robot “turn your left wheel at 5cm/s and your right wheel at 10cm/s, then adjust left speed to 3cm/s and right to 12cm/s, keep going for 10 seconds and then stop.” Instead, I’d much prefer to say “travel at a heading of 45 degrees at 10cm/s for 10 seconds”, or better yet, “Go to (x = 10, y = 20), there’s a tasty sandwich there I want you to pick up.”

Luckily, the derivatives in the unicycle model kinematics (our “point mass” model) are the same as those in the differential drive kinematics (our actual model, see also here): both describe how translational position and heading change with respect to time, so we can equate the two models. This allows us to “talk” in terms of translational and angular velocity for the purposes of planning, but then convert to left and right wheel velocities at the last minute.

For the unicycle model:

dx/dt = v cos(θ)
dy/dt = v sin(θ)
dθ/dt = ω

and for the differential drive model:

dx/dt = (r/2)(V_R + V_L) cos(θ)
dy/dt = (r/2)(V_R + V_L) sin(θ)
dθ/dt = (r/L)(V_R − V_L)

Therefore:

V_L = (2v − ωL) / (2r)

V_R = (2v + ωL) / (2r)

where v is the desired translational velocity and ω the desired angular velocity.

Now, in order to go from point A to point B, we can say:

  • What is my current heading (θ)?
  • What heading do I require to get to point B?
  • What is the difference between the two headings (Δθ)?

And then, by altering the velocities of the left and right wheels, we can adjust our heading (and translational velocity) such that the difference between the two is reduced and finally our heading matches the required heading.
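
A sketch of that conversion in C++, along with one detail the prose glosses over: the heading difference should be wrapped into [-π, π] so the robot always turns the short way, and atan2 is a convenient way to do that (function names here are illustrative):

#include <cmath>

// Convert unicycle commands (translational velocity v, angular velocity w)
// into left/right wheel velocities for wheel radius r and wheelbase L,
// per the equalities above.
void uniToDiff(double v, double w, double r, double L, double& vLeft, double& vRight)
{
    vLeft  = (2.0 * v - w * L) / (2.0 * r);
    vRight = (2.0 * v + w * L) / (2.0 * r);
}

// Wrap an angle into [-pi, pi], e.g. for computing the heading difference.
double wrapAngle(double theta)
{
    return std::atan2(std::sin(theta), std::cos(theta));
}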

PID Control

What we now have is a dynamic system (something that changes over time) over which we wish to exert control.

A familiar situation here is cruise control for a car. Driving along toward your summer holiday, you get bored of regulating the speed of the car and set cruise control to 100km/h; call this the reference, r. The cruise control system takes a measurement, y, of the current state, x, of the variable it is tracking (speed) and compares this with the reference to create an error (difference), e = r – y.

[Figure: closed-loop control block diagram: reference r, measurement y, error e = r − y, control signal u]

If the cruise control thinks the car is travelling at 90km/h, e = 10km/h and it applies a control signal, u, in this case, acceleration, to try and reduce the error.

If the cruise control thinks the car is travelling at 110km/h, e = -10km/h, and it applies a control signal, u, in this case deceleration, to try and reduce the error.

(As a side note, the reason we distinguish between x and y is that the measurement, based on our sensors, may not actually equal the true state of the variable. This measurement error represents a known or unknown error introduced by our sensor equipment.)

Our robot is much the same, except that we’re not (right now) interested in controlling its speed; we’re interested in controlling its heading, because we know that by adjusting the heading we can position the robot wherever we like (within reason: we may not be able to make certain turning circles etc). I’ve made a simplification here too: by dropping control of translational velocity and assuming it is constant, we can focus solely on controlling the heading. Control of both variables is beyond the scope of this post.

Just as the control signal for cruise control was the first derivative of the state (acceleration being the first derivative of velocity), the control signal for the robot will be the first derivative of the state we care about: angular velocity (the first derivative of heading/angle). If our error is large (ie. our current heading is really wrong compared to the heading we need to orient ourselves toward point B) then we can apply a large angular velocity (ie. a quick rotation) to ensure we approach our desired heading as quickly as possible.

So, how do we choose the right magnitude of control signal? This is where PID control comes in: PID stands for Proportional, Integral, Derivative.

Let’s go back to the car analogy and say that because we want to get to our reference speed as quickly as possible, we’re going to apply the following rules:

  • If e > 0 (ie. we’re below reference speed), u = umax (max acceleration)
  • If e < 0 (ie. we’re above reference speed), u = -umax (max braking)
  • If e = 0, u = 0 (do nothing)

Now consider what happens if the car is at rest and the reference is 100km/h: the car will accelerate as fast as it can. Similarly, if the car is travelling fast and the reference is set slow, the car will brake as hard as it can. Needless to say, anyone drinking a coffee in the car isn’t going to be happy.

Health and safety aside, you’re quickly going to burn out your actuators, especially when you take real world physics and inertia into account. From standstill, maximum acceleration is going to see the car overshoot the reference velocity, then slam on the brakes, which will undershoot the reference velocity and subsequently rev the accelerator. Because of inertial effects the error will almost always be non-zero, and the actuators will always be working at full action: do that enough and neither will last for long (you’ll probably get car sick too).

Clearly, the control signal should not over-react to small errors: its magnitude must be proportional to the error. If the error is large, our response is also allowed to be large (to hell with onboard coffee drinkers), however if the error is small (ie. we’re only 5km/h off the reference speed), we don’t need to go crazy with the accelerator: just a little bit more will get us there.

If you implement a purely proportional control signal however, you’ll probably notice that while you get close to the reference, you never track it exactly. Small errors accumulate over time that the proportional part of the control signal does not take into account, so we must add an integral part: by integrating the error (with respect to time), we are able to remove more of it.

Whether or not a derivative component is required really depends on the implementation and the physics. It adds fast-rate responsiveness to the control signal: it tracks the derivative of the error over time, so if the error is increasing quickly, the derivative term increases (ie. “quick, this is getting out of control really fast, you need to apply more control!”), while if the error is relatively constant, the derivative term contributes nothing.

Therefore, the control signal (for the robot: angular velocity, or how quickly it is changing its heading) that ends up being applied, consists of P, I and D components:

u(t) = K_p·e(t) + K_i·∫e(t)dt + K_d·de(t)/dt

It is our job to pick Kp, Ki and Kd (the PID coefficients) to ensure the control signal, u can make our state, x, approach the reference r (good tracking), without too much oscillation (good stability) as quickly as possible (optimal).
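
In discrete time, which is how the control loop below actually evaluates it, one step of that calculation might look like this minimal sketch (dt is the measured loop period; names are illustrative):

// A minimal discrete-time PID step.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0;
    double lastError = 0.0;

    double step(double error, double dt)
    {
        integral += error * dt;                       // accumulate the I term
        double derivative = (error - lastError) / dt; // rate of change of the error
        lastError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};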

While there are varied methods for selecting Kp, Ki and Kd, often the most realistic (and eventual) solution is simply trial and error: you adjust them until you get the desired result. You can simulate your model and fiddle with them, but when you put the “rubber to the road” (or the robot on the floor), the simulation often drifts away from reality.

Here are some graphs of the simulated results of the robot’s position while travelling from (0,0) to (50,50) using various values of Kp, Ki and Kd:

[Plot: simulated position and heading over time]

Kp = 0.01, Ki = 0.01, Kd = 0.00

As can be seen, the heading swings about, never quite reaching its desired reference (θ = π/4 rad), and just when it almost gets there, starts swinging away again. Trying a more aggressive proportional coefficient:

[Plot: simulated position and heading over time with the more aggressive coefficients]

Kp = 0.75, Ki = 0.05, Kd = 0.00

With these coefficients the control is stable (the heading does not oscillate around its reference) and fairly optimal (the heading reaches the reference quickly). The actual coefficients implemented in the code are different again, taking into account the influence of the robot’s real-world physics and the manner in which the code is implemented.

The Implementation

If you browse through the code you’ll see there is a Controller class, which is responsible for implementing the go to goal strategy by keeping track of the robot’s current position and fulfilling requests to send it to new positions. The implementation sends the robot from a “known” origin of (0,0) at heading 0 rad (ie. pointing out along the positive x-axis) through several waypoints.

The high level implementation is thus:

referenceHeading = getHeading(currentPosition, desiredPosition, currentHeading)

// keep doing this until we get very close to our desiredPosition
while (vectorToTargetMagnitude > sensitivityLimit)
{
   // how long did we sleep since the end of the last iteration (required for PID)
   dt = getActualTimeSlept()

   // get the total distance travelled according to the odometers
   distanceLeft = odometer.getDistance(left)
   distanceRight = odometer.getDistance(right)
   distanceTotal = (distanceLeft + distanceRight) / 2
   
   // determine how far we've travelled in the last iteration
   distanceLeftDelta = distanceLeft - lastDistanceLeft
   distanceRightDelta = distanceRight - lastDistanceRight
   distanceDelta = distanceTotal - lastDistanceTotal

   // determine new position and heading
   currentX += distanceDelta * cos(headingCurrent)
   currentY += distanceDelta * sin(headingCurrent)
   currentHeading += (distanceRightDelta - distanceLeftDelta) / L

   lastDistanceLeft = distanceLeft
   lastDistanceRight = distanceRight
   lastDistanceTotal = distanceTotal

   // how wrong is our current heading compared to the heading we need?
   headingError = referenceHeading - currentHeading

   // determine the required control signal based on the error
   headingErrorDerivative = (headingError - lastHeadingError) / dt
   headingErrorIntegral += (headingError * dt)
   lastHeadingError = headingError

   // control signal is our required angular velocity
   u = (Kp * headingError) + (Ki * headingErrorIntegral) + (Kd * headingErrorDerivative)

   // convert angular velocity to wheel velocities based on kinematic equalities
   velocityLeft = (2*velocityTranslational - u*L) / (2*r)
   velocityRight = (2*velocityTranslational + u*L) / (2*r)

   // convert these velocities to PWM DCs to drive the motors
   motor.setLeftPWM(convertVelocityToPWM(velocityLeft))
   motor.setRightPWM(convertVelocityToPWM(velocityRight))

   // create dt
   usleep(50000)
}

The Real World

Here is a video of the robot running an implementation of the described approach. It has been programmed to move through the following waypoints (which you can see marked out on the floor in black tape if you look closely):

float waypoints[][2] = {
   {45.0, 0.0},
   {90.0, 45.0},
   {0.0, 45.0},
   {1.0, 1.0}
};

Crazily, after all that effort, it still stinks. The tracking to the first waypoint is good, the second not bad, the third a bit flaky, but by the fourth we’re way off target. This could be down to odometry errors, unrealistic turning circles, a poor PID implementation or who knows what else. Luckily I’m not building a Mars rover.

Next step: mapping the world with SONAR.

Beaglebone Black Odometry

Goal: Measure the distance travelled by a single wheel over time

Unless you’re building a unicycle, measuring the distance travelled by a single wheel seems like a fairly pointless exercise. However, bear with me, because with the wonders of ancient mathematics, kinematics and some (fairly broad) assumptions, this is a fundamental stepping stone to building a robot that can determine its relative position from a known origin using dead reckoning.

At this stage it was time to upgrade from my rickety little “wheels attached to battery case” rig onto a more sturdy platform, so I put in another order with robotgear.com.au:

  • Pololu 5″ Robot Chassis ($18.90)
  • Pololu 42x19mm wheel & encoder set ($45.95)

After a bit of stuffing around tuning up the encoder to work at 3.3V and getting the pulse trains back in the right phasing (instructions here at Pololu), I was ready to go.

Muuuuuuuum! Are we there yet?

Just as your car’s odometer measures the distance you’ve travelled on your summer holiday, odometry for robotics does the same: its purpose is to use sensor data to determine the distance travelled by the robot over time. By combining this information with the physical attributes of the robot itself, relative position can be reckoned (but more on this to come).

A common method of odometry is to measure how many rotations (or partial rotations) a wheel has made. You may have actually done this yourself as a kid: did you ever have to walk around the basketball court at school with a little wheel on the end of a stick? If so, you’ve been a human odometer. If we know the circumference c of the wheel is 1m and the wheel rotates once, then we must have travelled 1m (caveat: this ignores any slippage, which turns out to be a significant problem with wheel-based odometry). Furthermore, as the first derivative (with respect to time) of position is velocity, we can say that if the wheel rotated once in 1 second, it must be rotating at 1m/s (3.6km/h).

Rotary Encoders

So far, so good. But how can we put this into practice with the wheel(s) of a robot? A common option is to use a rotary encoder such as the Pololu encoder I have used here. This is an optical encoder and uses two infrared reflectance sensors that shine infrared light onto the inside of the wheel hub, upon which are arranged 12 “teeth” with gaps between them.

As the wheel rotates around the shaft of the motor, the teeth pass over the reflectance sensors. When a tooth is above a sensor it reflects infrared light and when no tooth is above it doesn’t. The sensor (after some signal cleaning) outputs a pulse train: high, low, high, low that corresponds to tooth, no tooth, tooth, no tooth. The higher the frequency of the pulse train, the faster the wheel is turning.

A single sensor is good: its pulse train allows us to determine how fast the wheel is rotating, but it lacks a critical bit of information: in which direction is the wheel rotating?

This is where the second sensor comes in. The sensors are physically positioned to provide the same pulse train, but 90 degrees out of phase with one another: a quadrature output signal. When the wheel is rotating in direction A (ie. clockwise) the pulse train from sensor A “leads” that of sensor B (because the tooth passes over A first) and when the wheel is rotating in the opposite direction (ie. counter-clockwise) the pulse train from sensor B “leads” that of sensor A (because the tooth passes over sensor B first).

[Figure: quadrature pulse trains from sensors A and B for both rotation directions]

Image Courtesy of EDN

Looking at the quadrature output when sensor A leads sensor B (the top two pulse trains, let’s assume this represents clockwise rotation), it can be seen that for a series of consecutive rising and falling edges, the state of the outputs forms the following state table:

Sensor A   Sensor B
1          0
1          1
0          1
0          0

Also note that this pattern is cyclical: once the final state is reached, it will wrap around to the first state again.

Looking at the quadrature output when sensor B leads sensor A (the lower two pulse trains, let’s assume this is counter-clockwise rotation), it can be seen that for a series of consecutive rising and falling edges, the state of the output forms the following state table:

Sensor A   Sensor B
0          1
1          1
1          0
0          0

Notice that the second is exactly the same state table as the first, just “starting” on a different state and transitioning backwards? If so, please collect the big fluffy bear as your prize. The transitions are exactly the same, they just occur in the… opposite direction (because the wheel is rotating in the opposite direction).

Side note: this state table is a Gray code; the Hamming distance between adjacent states is always 1, ie. only one of the two outputs ever changes its level between adjacent states. If we ever see both outputs change state at the same time then we know there has been an error: there is either noise corrupting the signal or we missed a transition (this latter part is important later on, as we’ll be measuring these transitions using interrupts, which, depending on our hardware, could be missed).

To summarise:

  • Our encoder uses an infrared sensor to detect wheel rotation by measuring reflected light bouncing off teeth in the wheel hub
  • The sensor outputs a pulse train that indicates when a tooth is present / not present
  • By counting the pulses we know how many teeth have passed by the sensor
  • By knowing the circumference of the wheel and the number of teeth we can determine the distance the wheel has rotated
  • By measuring the time it took for N teeth to pass, we can determine the velocity of the wheel’s rotation
  • By adding a second sensor, positioned next to the first, we get two identical pulse trains, 90 degrees of out phase with one another
  • Because a tooth must pass over one of the sensors “first”, we can determine the direction of rotation by examining the phasing of the pulse trains
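
As an aside, a common way to implement that decode in code is a 16-entry lookup table indexed by the previous and current (A, B) levels: forward transitions count +1, backward transitions count −1, and “impossible” double-transitions count 0 (a missed edge). Which direction counts as positive is just a convention; swap the signs if your wiring disagrees. A sketch:

#include <cstdint>

// Indexed by (lastA << 3) | (lastB << 2) | (currentA << 1) | currentB.
// +1 entries follow the sequence 00 -> 01 -> 11 -> 10, -1 entries the reverse,
// and 0 entries are either "no change" or an invalid double-transition.
static const int8_t QDEC[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0,
};

int decodeTransition(int lastA, int lastB, int currentA, int currentB)
{
    int index = (lastA << 3) | (lastB << 2) | (currentA << 1) | currentB;
    return QDEC[index]; // add the result to the odometer tick count
}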

So, if we can get the Beaglebone Black to monitor these pulse trains, we can work out how the wheel is rotating (and as we have two wheels, we’ll end up with two of everything: two encoders, two sets of pulse trains, four individual signals to monitor).

Detecting Interrupts with Beaglebone Black’s GPIOs

As well as the many peripherals (PWMs, ADCs) that can be exposed on the Beaglebone’s external pins, most of them can also act as General Purpose I/Os (GPIOs), capable of acting as inputs (which determine the level of the signal applied to them) or outputs (which generate a signal, ie. to drive an LED or other external device). When configured as inputs they can also generate interrupts on a rising or falling edge of the input signal, and this is the functionality that I used to monitor the encoder signals and thus implement the odometer.

Unfortunately, when I was working with the Beaglebone Black it was still quite fiddly to configure the GPIOs. Recall from earlier posts that the BBB has a lot of internal peripherals whose functionality is multiplexed out onto external pins according to the device tree. In order to change what gets mapped onto which pins, device tree overlays are used to reconfigure the hardware at runtime. So I had to write a DTO (Device Tree Overlay) that configured P9_11, P9_13 and P8_7, P8_8 (there is nothing magical about this choice of pins other than that they can act as GPIOs and aren’t being used for the PWM or ADC in this project) as GPIOs in input mode and load this before running my odometer program. Derek Molloy has a great tutorial about using the GPIOs on the BBB and writing / compiling DTOs here. You’ll want to read it if you haven’t before.

Putting this all together, to implement the odometry I had to:

  • Connect the two outputs from each rotary encoder (one per wheel) to external pins on the BeagleBone Black that can act as GPIOs (P9_11, P9_13, P8_7 and P8_8 in my case, these are protected with 100ohm inline resistors)
  • Write and compile a Device Tree Overlay that multiplexes the external pins to GPIOs and configures their hardware as inputs
  • Write some code that uses the virtual files in sysfs to export control of the GPIOs into userland (a directory of virtual control files is created for each GPIO that is exported, Derek’s tutorial discusses this in detail) and then configure the GPIOs as inputs that generate interrupts on both rising and falling edges
  • Write some code that monitors the state of the encoder outputs by waiting for interrupts on the GPIOs and measures the direction and amount of rotation

The code itself is on github and documented, but I wanted to explain in some more detail how the main odometer loop works and the assumptions that it makes. The code performs odometry for two encoders/wheels, but to simplify the explanation below we assume we’re dealing with just one encoder.

  • It tracks five values for each encoder (wheel):
    • The current state (as measured directly after the interrupt, or assumed based on logic) of each sensor (currentA and currentB)
      • Assumed based on logic is… an assumption. If we trust that the phase difference between the two pulse trains is always 90 degrees, that states can only transition between adjacent codes of the Gray code, and that we never miss an interrupt, then when an interrupt (rising or falling edge) occurs on one sensor we can measure the level of that signal and assume we know the level of the other sensor: because the outputs follow a Gray code, the other sensor cannot have changed since the last interrupt (only one signal can change at a time between adjacent Gray code states, and the changing signal must have been the one experiencing the interrupt). This assumption falls apart if we miss an interrupt for any reason (ie. the processor was busy, something got lost in noise etc). In that case, we may not actually be in the state immediately adjacent to the last one we measured: we could be several/hundred/millions of states further on.
    • The previous state (at the last interrupt) of each sensor (lastA and lastB)
    • A counter that increments by one for each forward transition (“tick“) and decrements by one for each backward transition (“tick“) through the state table (odometer).
      • If odometer == 0 and then the wheel rotates forward/clockwise over the course of five consecutive interrupts/ticks, odometer == 5.
      • The Pololu encoder generates 48 ticks per complete revolution of the wheel. Thus, if the wheel has a radius r = 2cm and thus a circumference c = 12.57cm, and we see 48 consecutive transitions through the table in a single direction, the wheel has turned 12.57cm. If we see 24 consecutive transitions through the table in one direction, and then 24 consecutive transitions in the opposite direction, the wheel has still turned 12.57cm, but if it were travelling in a straight line over ground it would be back at its origin position.
  • Before looping, it opens 2 file handles, one to each GPIO (we need 2 GPIOs per rotary encoder, one to measure each pulse train)
  • The main loop construct is built around a blocking poll() on the file handles. An interrupt on either of the GPIOs (ie. the detection of a rising or falling edge on one of the encoder output pulse trains) causes the poll to return.
  • It then determines which GPIO had the interrupt and reads the state of that GPIO (ie. the current level of the pulse that just experienced the edge). Going back to the assumptions above, it now “knows” the current level of both sensors: currentA and currentB.
  • It XORs the current level of each sensor against the level of the opposite sensor at the last interrupt (lastA and lastB) which allows it to determine whether it is transitioning “forward” or “backward” through the state table, and thus the direction in which the wheel is rotating. The odometer counter is incremented or decremented accordingly. It also performs a basic sanity check to ensure the current states do not violate the gray code (which probably indicates that we’ve missed an interrupt and cannot be certain of our count any more).
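
A condensed sketch of that poll()-based wait for a single GPIO (the GPIO number is illustrative, and the pin is assumed to have already been exported via sysfs with its edge set to “both”):

#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // e.g. echo 30 > /sys/class/gpio/export; echo both > /sys/class/gpio/gpio30/edge
    int fd = open("/sys/class/gpio/gpio30/value", O_RDONLY);
    if (fd < 0) return 1;

    char level;
    read(fd, &level, 1); // consume the initial level so poll() waits for a real edge

    for (;;) {
        struct pollfd pfd = { fd, POLLPRI | POLLERR, 0 };
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI)) {
            lseek(fd, 0, SEEK_SET); // an edge occurred: rewind and re-read the level
            read(fd, &level, 1);
            printf("edge: level is now %c\n", level);
            // ...apply the XOR tests against the other sensor's last state here
        }
    }
}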

To illustrate, we can walk through several ticks from a reset state and see what happens. To refresh, the gray code for the encoder is:

State   Sensor A   Sensor B
0       0          0
1       0          1
2       1          1
3       1          0

tick 0 (power on)

Note: This assumes the initial state of the encoder is 0 (ie. sensor A = 0, sensor B = 0). If it’s not, we just get an initial Gray code violation error (the code could be made more robust by simply measuring the actual starting state).

currentA = 0
currentB = 0
lastA = 0
lastB = 0 
odometer = 0

tick 1 (first interrupt, rising edge on B: state 0 -> state 1)

currentA == 0 (this is assumed, because interrupt occurred on B)
currentB == 1 (this is read now, because interrupt occurred on B)
lastA == 0
lastB == 0

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

NOTE: The above determines whether the transition is from state 0 to state 1 (forwards) or from state 2 to state 1 (backwards). If lastA 
and lastB had been 1 (ie. previous state was state 2) the odometer would have been decremented.

lastA = currentA = 0
lastB = currentB = 1
odometer == 1

tick 2 (second interrupt, rising edge on A: state 1 -> state 2)

currentA == 1 (this is read now, because interrupt occurred on A)
currentB == 1 (this is assumed, because interrupt occurred on A)
lastA == 0
lastB == 1 

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

lastA = currentA = 1
lastB = currentB = 1
odometer == 2

tick 3 (third interrupt, falling edge on B: state 2 -> state 3)

currentA == 1 (this is assumed, because interrupt occurred on B)
currentB == 0 (this is read now, because interrupt occurred on B)
lastA == 1
lastB == 1

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

lastA = currentA = 1
lastB = currentB = 0
odometer == 3

tick 4 (fourth interrupt, direction reversed, rising edge on B: state 3 -> state 2)

currentA == 1 (this is assumed, because interrupt occurred on B)
currentB == 1 (this is read now, because interrupt occurred on B)
lastA == 1
lastB == 0

if (currentA ^ lastB) odometer--;   // true
if (currentB ^ lastA) odometer++;   // false

lastA = currentA = 1
lastB = currentB = 1
odometer == 2


Beaglebone Black Analog to Digital Converter

Goal: Get the robot to drive around the room and avoid obstacles

Once the DRV8833 was hooked up to the two DC micro motors and the BBB’s PWMs, I wrote some simple C++ code to abstract movement (ie. forward(), reverse(), spinLeft(), spinRight() etc). It was now possible to write little programs to move around along a predetermined path. At this point the robot had no idea where it was, but was capable of driving forward for 2 seconds, turning left, doing that some more and making a square for example.

It’s worth noting at this point that although I later took some actual measurements of the mechanical differences between the two motors and how they each uniquely responded to the same PWM duty cycle, during this early phase a DC of 50% more than adequately drove the robot in a “straight” line for small distances (and it was obvious from trial and error that there was a critical minimum DC below which the motors did not generate enough torque to turn the wheels when in contact with the floor).

Before I delved further into location awareness and odometry, I wanted to test out the Maxbotix MaxSonar-EZ1 ultrasonic transducer that I had bought and get some practice with the BeagleBone’s built-in ADC. The aim was to write a simple finite state machine to:

  1. Drive forward while measuring proximity of objects in front of robot using the ultrasonic transducer
  2. When a “safe” distance threshold was breached, slow down
  3. When a “danger” distance threshold was breached, enact collision avoidance (ie. reverse a bit, spin around, drive forward again)
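
In C++ that state machine might be sketched like this (the thresholds and the movement / range-finding helpers are hypothetical stand-ins for the robot’s real abstractions):

// Hypothetical stand-ins for the robot's movement and ADC code.
double readUltrasonicDistanceCm();
void driveForward(double speed);
void reverseABit();
void spinAround();

enum class State { Drive, Slow, Avoid };

void runAvoidanceLoop()
{
    const double safeCm = 60.0, dangerCm = 25.0; // illustrative thresholds
    State state = State::Drive;

    for (;;) {
        double cm = readUltrasonicDistanceCm();

        switch (state) {
            case State::Drive:
                if (cm < safeCm) state = State::Slow;
                else driveForward(1.0);           // full speed
                break;
            case State::Slow:
                if (cm < dangerCm) state = State::Avoid;
                else if (cm > safeCm) state = State::Drive;
                else driveForward(0.5);           // half speed
                break;
            case State::Avoid:
                reverseABit();                    // back off, spin, try again
                spinAround();
                state = State::Drive;
                break;
        }
    }
}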

With the PWMs and the DRV8833, the driving component was all taken care of; the major work was in interfacing with the MaxSonar-EZ1.

MaxSonar-EZ1

For hobby work this is one of the kings of ultrasonic sensing (and the price tag reflects it!). In the past I’ve worked with bare 40kHz transducers and built an amplifier and wave generator for them, along with interfacing them back into timers for distance measurement, but a package like the EZ1 takes care of all of this for you.

The EZ1 has a typical 30cm deadzone, but outside of that has excellent resolution (they claim 1mm), a 5m range (I’ve only tested it up to about 3m in my apartment, but it gives linear readings all the way out to that distance) and supports multiple interfaces including analog voltage output (although this decreases the object detection resolution to 5mm), serial over TTL, serial over RS232 and PWM.

I really haven’t explored all the options this thing has to offer because it was so simple to get up and running, but with the other output options and the ability to start and stop its sensing, it can be used a lot more effectively than I’m currently using it here. The datasheet is easy reading and the Quick Start Guide is useful for playing around with it on the bench.

Using the BBB ADC

The MaxSonar-EZ1 supports multiple output methods but for my testing I wanted to use the analog output so I could play around with the BBB’s ADC. The EZ1 can run on anything from 2.5V – 5.5V DC and the analog output voltage it emits to represent distance obviously scales within 0V and the nominated supply voltage. In my case I am powering it from the 3.3V output on the BBB itself (available on pin 3 of the P9 header).

The BBB has a 7 channel ADC with 12-bit resolution. The most important thing to note is that the ADC inputs are 1.8V logic: do not exceed this or you’ll fry it. Because we’re supplying the EZ1 with 3.3V, its analog output (on pin 5 of the EZ1 board) must be put through a voltage divider so the BBB ADC only sees up to 1.8V.

I’m using a simple 50/50 divider built from two 200ohm 0.1% tolerance resistors; the output of the voltage divider is fed into pin 33 on P9, which corresponds to AIN4 (the 5th ADC input). It’s important to use low-tolerance resistors because we’re trying to measure a voltage that varies between 0 – 1.8V with a 12-bit resolution (4096 possible values). Technically this gives the ADC the ability to differentiate down to the sub-mV level (one quantisation step represents around 0.44mV), so if the resistors skew the real output from the EZ1 you can lose noticeable accuracy.

The ADC also needs a GND reference connected (pin 34 of the P9 header), I simply connect this to pin 2 of the P9 header which is connected to the BBB ground.

As with the PWM, the version of Angstrom my BBB came with includes a handy module, cape-bone-iio, that exposes readings from the ADC as userland virtual files. To enable the ADCs, the virtual cape itself must first be loaded (as per the PWM, the # in the capemgr directory represents a BBB-specific value; your mileage may vary):

# echo cape-bone-iio > /sys/devices/bone_capemgr.#/slots

Each ADC now has its own virtual file that can be read which will return a value between 0 and 1800: the module takes care of converting the 12-bit resolution into an actual mV reading. You’ll see the OCP reference around quite a bit, I believe it stands for “On Chip Peripherals” – all the good stuff that’s baked directly into the AM3359 processor.

# cat /sys/devices/ocp.#/helper.#/AIN4
2049

The MaxSonar-EZ1’s output voltage can be converted into distance using the formula:

Vcc / 512 = mV_per_inch
distance_inches = mv_from_ADC / mv_per_inch

Because I like working in centimeters and the voltage scale the BBB ADC sees is half that of the actual EZ1 Vcc (because of the voltage divider):

distance_cm = (mv_from_ADC * 512 * 2.54) / 1800

Finally, the readings you get from the ADC might bounce around a bit (ie. try putting something solid a fixed distance in front of the ultrasonic sensor and then reading the ADC 5 times in a row, once per second). To get around this I sample the ADC several times (32) with a fixed delay between each sample (1ms: the ADC has a sample time of 125ns so this is plenty), then I order the samples and select the median. This is a really inefficient way of doing median filtering; there is a paper about much better alternatives here.
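
Something like the following sketch (the sysfs path is passed in because the # components are board-specific; this is the naive sort-and-pick-the-middle approach just described, not the cleverer algorithms from the paper):

#include <algorithm>
#include <cstdio>
#include <unistd.h>

// Read the AIN virtual file (e.g. /sys/devices/ocp.#/helper.#/AIN4, with the
// #s filled in for your board) 32 times, 1ms apart, and convert the median
// mV reading to centimeters using the scaling above.
double readDistanceCm(const char* ainPath)
{
    const int SAMPLES = 32;
    int mv[SAMPLES];

    for (int i = 0; i < SAMPLES; i++) {
        FILE* f = fopen(ainPath, "r");
        if (!f || fscanf(f, "%d", &mv[i]) != 1) { if (f) fclose(f); return -1.0; }
        fclose(f);
        usleep(1000); // 1ms between samples
    }

    std::sort(mv, mv + SAMPLES);
    int median = (mv[SAMPLES / 2 - 1] + mv[SAMPLES / 2]) / 2;

    // distance_cm = (mv * 512 * 2.54) / 1800, per the formula above
    return (median * 512.0 * 2.54) / 1800.0;
}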

Hipster Circuits, the builders of the incredibly cool Replicape 3D printer cape for Beaglebone, note on their BBB ADC post that newer kernels have changed the way that the ADC is accessed and that cape-bone-iio no longer is used.

Circuit Diagram

[Circuit diagram: EZ1 analog output fed through the 50/50 divider into AIN4 (P9 pin 33), with the ground reference on P9 pin 34]


Controlling motors with the Beaglebone Black

My first goal was to get the Beaglebone using the DRV8833 to drive the two micro motors forward: that seemed like an achievable start (if that wasn’t possible this was going to be a pretty short-lived hobby). It should be easy, right? Build a little platform, attach the wheels, apply some voltage, watch it go.

No shell for you. One Year. SSH Host Key Corruption

Early attempts at this were thwarted by the fact I couldn’t SSH into my BBB. The Beaglebone runs an SSH daemon called Dropbear and on first boot Dropbear creates the board’s public/private keypair. If this fails (ie. you accidentally power the board off halfway through writing the files to the BBB’s flash) then you can’t SSH into the BBB. I suspect that this could be happening during factory testing rather than the end user’s first boot: the board is powered on long enough to check for basic operation, but then yanked off the production line and boxed. Other people have experienced this issue and there are some proposed solutions floating around but in the long run I just booted up a second BBB I had and used that: I’m still getting around to reflashing the old one.

BeagleBone Black Pulse Width Modulation

Once I had a functioning BBB the first order of business was interfacing it with the DRV8833. This is an awesome little chip from Texas Instruments that packs two H-bridges capable of driving up to 1.2A each into a tiny space: the last time I built an H-bridge it was about the size of a stamp, these are smaller than a fingernail. The Pololu carrier neatly exposes the chip’s pins and adds some additional components required for its basic operation.

The DRV8833 doesn’t need much to work: simply supply DC power (anything from 2.7V – 10.8V) on one side for the motors and four control signals on the other (the input pins are 3V and 5V compatible, which is great, because the BBB has 3.3V I/O). Each motor (called A and B) has two control signals (AIN1/AIN2 and BIN1/BIN2). Applying a HIGH signal to AIN1 makes motor A spin forwards, applying it to AIN2 makes it spin backwards and so on. See the datasheet for the gory details.

Once the motors are connected and some power applied, a simple test at this point is applying a HIGH signal to the each of the input pins (they’re internally pulled low) and checking that the motors spin in the desired direction. For the robot however, we need more control. We have the need for (variable) speed. Enter Pulse Width Modulation.

When applying a constant signal to xIN1 or xIN2 on the DRV8833 you’re telling the H-Bridge to supply steady current to the motor (in one of two polarities depending on which control signal is HIGH). This drives the motor at whatever speed it will run for the given input voltage, full speed ahead.

In order to vary the speed we can pulse width modulate the control signal to the motor. A pulse width modulated signal has a specific frequency but a varying duty cycle. That is to say, the rising edges of each cycle occur the same wavelength apart (constant frequency) but the duration of the cycle for which the signal is HIGH varies: we are modulating the width of the pulse. The longer each pulse stays HIGH, the more time the DRV8833 delivers voltage/current to the motor and the faster it will run.

If your embedded device has any form of digital I/O then it’s possible to write your own PWM routine in software (assuming you have a reasonably stable clock source/timing) but luckily for us, the BBB has a heap of onboard peripherals, including the “Enhanced High Resolution Pulse Width Modulator” (eHRPWM).

We can use this to PWM the control signal to DRV8833’s input pins and control the motor speed. If we apply our control signal to AIN1 we can vary the forward speed of motor A, if we apply it to AIN2 we can vary the reverse speed of motor A.
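
Conceptually, driving one motor with a signed speed then looks something like this sketch (pwmSetDuty() is a hypothetical stand-in for however you drive the eHRPWM, e.g. the sysfs interface discussed below):

#include <algorithm>
#include <cmath>

void pwmSetDuty(int input, double duty); // hypothetical: duty in 0.0 .. 1.0

const int AIN1 = 1, AIN2 = 2; // logical ids for motor A's two control inputs

// speed in -1.0 .. 1.0: PWM the "active" input, hold the other low
void setMotorASpeed(double speed)
{
    double duty = std::min(1.0, std::fabs(speed));
    if (speed >= 0.0) {
        pwmSetDuty(AIN2, 0.0);  // reverse input held low
        pwmSetDuty(AIN1, duty); // forward input PWMed
    } else {
        pwmSetDuty(AIN1, 0.0);
        pwmSetDuty(AIN2, duty);
    }
}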

Device Tree

Before delving into how the eHRPWM can be used however, a brief historical tour of the entire known universe must be undertaken to understand why Device Tree and Device Tree Overlays exist and why they are important when it comes to PWM (and almost any other hardware) on the BBB.

  1. Linus Torvalds starts writing Linux in 1991, initially the kernel is very x86-centric
  2. Linux kernel is ported to a few other CPU architectures such as Alpha and SPARC around 1995
  3. Linux begins being touted as a serious alternative to commercial UNIXes and desktop platforms around 1998-2000
  4. Custom machines/boards built on SoC concepts increase exponentially, people start porting Linux kernel to an increasing number of hardware platforms
  5. Kernel requires lots of board-specific code to describe the specific hardware on a board, many boards need their own kernel image
  6. Everyone gets sick of having to manage so many board-specific differences being introduced into the kernel
  7. Linux kernel introduces support for Device Tree, a data structure that describes the hardware in a system (ie. where to find it, how to interface with it) to the kernel

x86 PCs have been around a long time and despite supporting hardware and software from a huge variety of sources, at their heart there is a lot of standardisation other than just the x86 instruction set. One area of (relative) standardisation is the BIOS and concepts such as ACPI which allow the kernel to learn about and interface with the underlying hardware (ie. decode keystrokes from a keyboard, write a character to a display, discover devices on a bus etc). One of the kernel’s first tasks as it boots the machine is to discover and configure the hardware: how many CPUs are available? How much RAM is there? What peripheral devices are available that it may have device drivers for?

The BIOS and things like ACPI are used to do this on IBM PCs but for custom boards using SoCs which may contain many custom peripherals, there isn’t necessarily a lot of standardisation. One board might have a PWM, one might have some GPIOs, many have both, but almost all of them expose the registers, memory mappings and control interfaces to these devices in different ways. Before Device Tree, custom code had to be written to describe a SoC implementation to the kernel and this was generally compiled into the kernel image itself.

Device Tree provides a standard data structure to describe system hardware to the kernel and better yet, allows this data structure to be passed to the kernel at boot time, rather than compiled into the kernel image. Now it is possible to have one kernel image that via DT support and kernel modules can support many different board configurations, much as many Linux-on-PC users enjoy. You can learn more (yay!) about the rationale behind Device Tree here and here.

Device Tree Overlays and BeagleBone Capes

Device Tree support helped clean up the kernel for SoC board developers, but hardware like the Beaglebone Black takes the requirements one step further.

Although there are many useful peripherals on the BBB, one of its major features is its extensibility via capes, which are hardware boards that fit over the top of the BBB (like a… cape… on a Beagle). A user plugs a cape (or many, they’re stackable) onto the BBB and when the board boots the kernel magically discovers the cape, configures its hardware (ie. an LCD screen) and makes it available to the rest of the OS. (By magically I mean the BBB reads an EEPROM on the cape that describes the cape and then loads the right Device Tree Overlay into the kernel to configure the hardware).

Hang on. Device Tree Overlay? “We have to learn MORE to get the motors running?” I hear you say. Just a little. Look, it’s not much more, the scroller is almost half-way down the page already. A Device Tree Overlay is a fragment of a complete device tree that can be loaded at runtime, after boot. It allows the basic hardware definition (the BBB’s hardware) to be extended (by that of the cape). The Capemgr is some kernel code that was created to allow the right overlay to be dynamically loaded (from disk/flash) based on the capes that are installed.

  1. BBB boots, kernel uses BBB device tree to find all the BBB’s on-board hardware and configure it
  2. BBB probes for EEPROMs on any expansion capes accessible via the P8/P9 expansion headers
  3. Based on data in EEPROM, Capemgr loads the correct DTO to further configure the hardware

It’s sort of like a device driver, or a companion to a device driver if you will: it describes the hardware that a device driver will ultimately control, and like a device driver, it can be loaded dynamically when it’s needed.
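
You can watch the Capemgr’s handiwork from user-space: its slots virtual file lists every overlay that has been loaded, whether triggered by a cape’s EEPROM or requested manually. A quick look (the # in the path varies between boards, as discussed below):

# cat /sys/devices/bone_capemgr.#/slots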

BeagleBone Black Built-In Hardware and Virtual Capes

While expansion capes provide some cool extra functionality, the BBB has a heap of onboard hardware (Ethernet, HDMI etc) with a lot of it built directly into the AM3359 ARM processor. The System Reference Manual is a good place to start to fully understand the capabilities available – pg 53 has a good block diagram of the processor’s features. You probably want to steer clear of the actual AM3359 Technical Reference Manual for the time being: it’s over 4,000 pages – probably not bad if you’re suffering insomnia though.

The problem is that the AM3359 has so many internal peripherals that there aren’t enough pins on the SoC package, let alone expansion header pins (the BBB has two expansion headers, P8 and P9), to give each one a unique physical interface. For example, pin 21 on the P9 header can be used as a GPIO, a PWM output, for the I2C bus, the SPI bus, a UART and more, but obviously not all at the same time.

The processor internally multiplexes the functionality of its various peripherals onto its external pins (many of which are then routed onto the BBB’s P8 and P9 expansion headers for end-user consumption). When we want to use a peripheral (ie. the PWM) we use the Capemgr to load a Device Tree Overlay (DTO) that tells the kernel (and ultimately the processor) to configure the peripheral and also the multiplexer so that the peripheral becomes available on the relevant pin.

There are quite a few DTOs that ship with the default Angstrom Linux distribution on the BBB that configure and expose important peripherals and functions such as HDMI output, the PWM and ADC onto various P8/P9 pins. These are sometimes known as virtual capes: they’re not physical capes that extend the BBB, but virtual ones that reconfigure the on-board hardware in a useful way.
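
To see the menu of virtual capes on offer: on my Angstrom image the compiled overlays ship as .dtbo files in /lib/firmware (other images may keep them elsewhere), and the am33xx_pwm and bone_pwm_P8_13 overlays used below are among them, with version suffixes like -00A0:

# ls /lib/firmware/*.dtbo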

At this stage (in fact, well before now) it’s worth pointing out one of the most useful Beaglebone resources around, the blog and videos of Dr Derek Molloy, Senior Lecturer in the School of Electronic Engineering at Dublin City University. On his pages you’ll find many really informative videos about using the BBB and he maintains two important PDFs: a map of P8 expansion header pin modes and a map of P9 expansion header pin modes. These PDFs can be used to write your own DTOs that configure the BBB’s processor to multiplex the functionality you need out to the right pins. We’ll be doing this a bit later on…

Using the BBB eHRPWM

Luckily, with the version of Angstrom that ships with the BBB (at the moment) you won’t need to write or compile a thing; you can do everything from user-space. There is already a handy kernel module and DTO that allow you to interface easily with the PWMs.

Further information is available here, but to summarise:

  • Tell the Capemgr to load the AM33xx PWM DTO (this enables the PWMs). You interface with the Capemgr by reading and writing its slots virtual file; the exact directory varies between BeagleBones (the # in mine is 9).
# echo am33xx_pwm > /sys/devices/bone_capemgr.#/slots
  • Expose a PWM to a specific expansion header pin. In the example below pin 13 on the P8 header will become a PWM output. From Derek Molloy’s PDFs we can see that P8_13 could also be configured as a GPIO: instead, we’re using it as a PWM.
# echo bone_pwm_P8_13 > /sys/devices/bone_capemgr.#/slots
  • Configure the PWM’s frequency, polarity and duty cycle. The # represents a number that can vary between BBBs (but not, for me, between boots, thankfully). The period and duty are specified in nanoseconds: in the example below the 500000ns period gives a frequency of 2kHz and the 250000ns duty gives a 50% duty cycle. Advice varies on the frequency you should use for motor PWM; this works for me, though some people suggest 20kHz and upwards.
# echo 500000 > /sys/devices/ocp.#/pwm_test_P8_13.#/period
# echo 250000 > /sys/devices/ocp.#/pwm_test_P8_13.#/duty
# echo 0 > /sys/devices/ocp.#/pwm_test_P8_13.#/polarity
  • Connect the expansion header pin (pin 13 on P8 in the above examples) to the DRV8833 input (ie. AIN1 or AIN2)
  • Watch the motor spin (a consolidated sketch of these steps follows below)
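
To avoid retyping those paths on every boot, here’s a minimal sketch that rolls the steps above into one script. It assumes the stock Angstrom image and the pwm_test paths shown earlier; the shell globs stand in for the varying # numbers:

#!/bin/sh
# Enable the PWM subsystem and mux P8_13 out as a PWM
SLOTS=$(echo /sys/devices/bone_capemgr.*/slots)
echo am33xx_pwm     > $SLOTS
echo bone_pwm_P8_13 > $SLOTS
sleep 1   # give the kernel a moment to create the pwm_test files

# 500000ns period = 2kHz, normal polarity
PWM=$(echo /sys/devices/ocp.*/pwm_test_P8_13.*)
echo 500000 > $PWM/period
echo 0      > $PWM/polarity

# Ramp the duty cycle from stopped (0%) to full speed (100%)
for DUTY in 0 125000 250000 375000 500000
do
    echo $DUTY > $PWM/duty
    sleep 1
done

Run it as root. Note that writing an already-loaded overlay to the slots file fails with a write error, so on subsequent runs skip the first two echoes (or check the slots file first).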

Fast and Slow Decay

DC motors are inductive by nature: when current is removed from them their inductance will try to sustain the current and that residual current (called recirculation current) has to have somewhere to flow. When pulse width modulating the H-Bridge inputs, the current is being applied and stopped continually so dealing with the recirculation current is a constant task for the H-Bridge.

The DRV8833 allows for two options when it comes to dealing with the recirculation current: it can allow it to flow back through the diodes that form the H-Bridge itself (called fast decay) or it can short the motor windings (called slow decay). I don’t claim to fully understand why, but from the reading I’ve done, slow decay is generally recommended for DC motors.

The decay mode used is determined by the state of the “other” input signal (ie. if PWM is applied to AIN1, the state of AIN2 will determine whether or not the DRV8833 uses fast or slow decay). See the data sheet for the truth table.
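
As a sketch of what this means in practice (my reading of the data sheet’s PWM control table; verify against your revision):

  • PWM on AIN1, AIN2 held low: forward, fast decay
  • AIN1 held high, PWM on AIN2: forward, slow decay (the motor is driven while AIN2 is low, so the effective duty cycle is inverted; the pwm_test polarity attribute can flip the signal for you)

Swapping the roles of AIN1 and AIN2 gives the same two modes in reverse.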

End Result

I wish I had a video of this first endeavour but unfortunately all I have is a photo I took. The bot here is a bit more advanced than what is described in this post (ie. ignore the ultrasonic transducer) but essentially it is simply the BBB mounted on top of the battery case, the micro motors stuck to the case with double sided tape and a ball caster. Broom broom.

Circuit Diagram and a Note on Power

[Circuit diagram]

The BBB itself draws quite a lot of current, anywhere from 210-460mA on its own without any additional hardware like the motors. For initial work I powered everything off a bench supply (well, a wall-wart I found that did 5V at 700mA or similar) but dragging a cord around isn’t very robot-like, so I quickly put together a small regulator circuit to power it from batteries.

Technically the LM7805 requires about 7.3V of input to regulate 5V at its output (its dropout voltage is a little over 2V). At the moment I’m using 4 NiMH rechargeables (1.2V per cell) plus a 1.5V AA, which gives 6.3V, and it works just fine, even with the motors. Presumably the regulator is operating in dropout and its output sags below a true 5V, but nothing seems to mind. As soon as I upgrade the battery shell the robot sits on I’ll migrate to 6 NiMH cells.


All you really need to know for the moment is that the universe is a lot more complicated than you might think, even if you start from a position of thinking it’s pretty damn complicated in the first place.

It turns out the same applies to robotics. What started six months ago as a simple (in hindsight, ambitious) idea has led me (often blindly) through the winding alleys and backwaters of mathematics, kinematics, electronics and programming.

The Idea

Build a differential steering robot with sufficient sensor capability to generate a map of a room (or, ideally, a single floor house) that can then instruct a smaller “blind” (or less capable) robot how to move about that space (ie. avoiding obstacles). Eventually the more complex robot could become an “advance party” sent into an unknown situation to survey it and then provide detailed navigation information to a swarm of simpler, cheaper actors.

The Execution

I wanted to start playing around with either a Beaglebone Black or a Raspberry Pi and after some research decided the BBB was going to be more flexible (although it turns out that flexibility doesn’t necessarily make things simpler; often it made things more complicated) and I liked the idea of having such a powerful platform to use as a motherboard (with future visions of OpenCV etc). With a 1GHz ARM Cortex-A8, 512MB RAM, 2GB flash and as many peripherals and buses as you could possibly cram on a board, it’s pretty sweet.

I also wanted to home cook as much of it as possible, so rather than using existing BBB capes or fully fledged robot kits I bought lots of individual components to do it myself (some of these are pretty black box already, though: for example the DRV8833, which is amazingly compact for the power it can put out, and the Maxbotix MaxSonar-EZ1, which does a heap of signal cleanup gratis).

My initial BOM looked something like this:

  1. 1x BeagleBone Black ($51.90)
  2. 1x Pololu TI DRV8833 carrier ($7.70)
  3. 2x 100:1 micro metal gearmotor ($33.90)
  4. 1x Pololu ball caster with 1/2″ ball ($4.95)
  5. 1x Battery holder ($2.20)
  6. 1x Maxbotix HRLV-MaxSonar-EZ1 ultrasonic sensor ($38.50)
  7. 1x Pololu 32x7mm wheels with silicone tyres ($6.05)

For a total of AUD$145.20 – it turns out I’m not going to be giving hundreds of these away for Christmas any time soon. Everything except the BBB came from Robot Gear who were super fast on the delivery and really helpful.

I realised that I was going to need more kit in the future, especially in the wheel encoder department, but my initial aim was just to get something up and running that could drive around and not bump into things. Seems simple enough, right?

“Oh… You’re using a Beaglebone!”