Beaglebone Black PID Control and Go to Goal

Goal: Get the robot to travel from one position to another in a known co-ordinate system

To achieve this goal the robot must be able to determine:

  • where it is within its co-ordinate system at a given time
  • how movement affects its position

Dead Reckoning

We can solve the first problem using our odometers, trigonometry and some simplifications about what the robot looks like (the wheelbase, L, of our differential drive robot is the distance between its two wheels) and how it moves. Let’s assume that the robot starts at position (xt, yt) and heading θ, then travels distance DC along that heading; the new position (xt+1, yt+1) and heading can be determined as follows:

xt+1 = xt + DC · cos(θ)
yt+1 = yt + DC · sin(θ)

θt+1 = θ + (DR − DL) / L

The first challenge is that we don’t actually know DC: the odometers measure the distance travelled by each wheel, DL and DR, which, if the robot is turning, will not be equal. That’s not such a big deal though, because DC always lies between the arcs traced by the two wheels – it is simply their average:

DC = (DL + DR) / 2

The next challenge is that the odometers don’t actually measure distance in centimeters or meters, they measure distance in ticks (number of state table transitions). Noting that the Pololu rotary encoders being used generate 48 ticks (let N = 48) per revolution of the wheel, that each wheel has a radius, r, and that one full revolution covers a distance equal to the circumference (2πr) of the wheel (neglecting any slippage), then the physical distance travelled by a wheel is:

D = 2πr · (ticks / N)

Now we can do this:

x = 0, y = 0, heading = 0
while (true)
{
   // ticks accumulated by each wheel since the last iteration
   ticksLeft = odometer.getTicks(left)
   ticksRight = odometer.getTicks(right)
   distanceLeft = 2 * pi * r * (ticksLeft / N)
   distanceRight = 2 * pi * r * (ticksRight / N)
   distance = (distanceLeft + distanceRight) / 2
   x += distance * cos(heading)
   y += distance * sin(heading)
   heading += (distanceRight - distanceLeft) / L
   sleep(shortTime)
}

This technique is known as dead reckoning: by adding the distance we think we travelled to our previous location we can infer our new location.

It works, but it is error prone: if for any reason the distance we think we have travelled is not the distance we actually travelled, that error is folded into our position estimate and stays there. This can happen for a multitude of reasons, the most common being wheel slippage, where the wheel rotates but skids: the odometer measures the rotation, but because of the slippage the wheel does not move the same distance over the ground.

For the purposes of this experiment, dead reckoning will do: if you’re planning on travelling any real distance, or over uneven terrain, a more accurate positioning system, or the ability to reset to known positions, may be required (e.g. GPS, or known-location beacons based on ultrasound, infrared etc).

So far, so good: the robot can now (sort of) determine where it is (assuming a known initial position and heading).

Go to Goal Strategy

The next challenge is to be able to control the robot to go somewhere specific (this is the “go to goal” strategy, ie. “robot, go to x = 10, y = 20!”).

As a starting point, consider the robot as a point-mass, positioned at (x,y), travelling along a straight line with heading θ.

[Diagram: a point-mass robot at (x, y) moving along a straight line with heading θ]

Just as with the dead reckoning, trigonometry can be used to show that its translational (x,y) position will change as a function of its translational velocity, v, (the speed at which it is moving forward along the straight line) and its current heading, θ. This is great news! As a robot all we need to know is our starting position and heading, and then by controlling v and θ we can go anywhere we want.

Unfortunately for us, our robot is not a point-mass, nor do we have “direct” control over its translational velocity or heading. We have a left and a right motor and two wheels with a known radius, r, positioned a known distance, L (the wheelbase) apart. The only variables we can control are the velocities (VL and VR) at which the motors spin.

[Diagram: a differential drive robot with wheel radius r, wheelbase L and wheel velocities VL and VR]

As a programmer, I don’t want to tell my robot “turn your left wheel at 5cm/s and your right wheel at 10cm/s, then adjust left speed to 3cm/s and right to 12cm/s, keep going for 10 seconds and then stop.” Instead, I’d much prefer to say “travel at a heading of 45 degrees at 10cm/s for 10 seconds”, or better yet, “Go to (x = 10, y = 20), there’s a tasty sandwich there I want you to pick up.”

Luckily, by noting that the derivatives in the unicycle model kinematics (our “point mass” model) are the same derivatives as in the differential drive kinematics (our actual model, see also here), that is, they both describe how translational position and heading change with respect to time, we can equate the two models. This allows us to “talk” in terms of translational and angular velocity for the purposes of planning, but then convert them to left and right wheel velocities at the last minute.

Unicycle model:

dx/dt = v · cos(θ)
dy/dt = v · sin(θ)
dθ/dt = ω

and differential drive model:

dx/dt = (r/2) · (VR + VL) · cos(θ)
dy/dt = (r/2) · (VR + VL) · sin(θ)
dθ/dt = (r/L) · (VR − VL)

Therefore, equating v = (r/2)·(VR + VL) and ω = (r/L)·(VR − VL) and solving for the wheel velocities:

VL = (2v − ωL) / (2r)

VR = (2v + ωL) / (2r)
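In code, that last-minute conversion is tiny. Something along these lines does the job (the names here are illustrative, not the exact functions from the repository):

// convert unicycle commands (translational velocity v, angular velocity w) into the
// angular velocities of the left and right wheels, given wheel radius r and wheelbase L
void uniToDiff(float v, float w, float r, float L, float &velocityLeft, float &velocityRight)
{
   velocityLeft  = (2.0f * v - w * L) / (2.0f * r);
   velocityRight = (2.0f * v + w * L) / (2.0f * r);
}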

Now, in order to go from point A, to point B, we can say:

  • What is my current heading (θ)?
  • What heading do I require to get to point B?
  • What is the difference between the two headings (Δθ)?

And then, by altering the velocities of the left and right wheels, we can adjust our heading (and translational velocity) such that the difference between the two is reduced and finally our heading matches the required heading.
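The first two questions come down to trigonometry on the two positions, and the third to a subtraction wrapped back into (−π, π] so the robot always turns the short way round. A minimal sketch (again, illustrative names rather than the repository’s exact functions):

#include <cmath>

// heading required to drive from (x, y) straight towards (goalX, goalY)
float headingToGoal(float x, float y, float goalX, float goalY)
{
   return std::atan2(goalY - y, goalX - x);
}

// difference between the required and current headings, wrapped into (-pi, pi]
float headingError(float required, float current)
{
   return std::atan2(std::sin(required - current), std::cos(required - current));
}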

PID Control

What we now have is a dynamic system (something that changes over time) over which we wish to exert control.

A familiar situation here is cruise control for a car. Driving along toward your summer holiday you get bored of regulating the speed of the car and set cruise control to 100km/h; call this the reference, r. The cruise control system takes a measurement, y, of the current state, x, of the variable it is tracking (speed) and compares this with the reference to create an error (difference), e = r – y.

[Block diagram: the reference r and the measurement y are compared to give the error e = r − y, which the controller turns into the control signal u applied to the system]

If the cruise control thinks the car is travelling at 90km/h, e = 10km/h and it applies a control signal, u, in this case, acceleration, to try and reduce the error.

If the cruise control thinks the car is travelling at 110km/h, e = -10km/h, and it applies a control signal, u, in this case deceleration, to try and reduce the error.

(As a side note, the reason we distinguish between x and y is that the measurement, based on our sensors, may not exactly equal the true state of the variable. This measurement error represents a known or unknown error introduced by our sensor equipment.)

Our robot is much the same, except that we’re not (right now) interested in controlling its speed, we’re interested in controlling its heading, because we know that by adjusting the heading, we can position the robot wherever we like (within reason: we may not be able to make certain turning circles etc). I’ve made a simplification here too: by dropping control of translational velocity and assuming it is constant, we can focus solely on controlling the heading. Control of both variables is beyond the scope of this post.

Just as the control signal for cruise control was the first derivative of the state (acceleration being the first derivative of velocity), the control signal for the robot will be the first derivative of the state we care about: angular velocity (the first derivative of heading/angle). If our error is large (ie. our current heading is really wrong compared to the heading we need to orient ourselves toward point B) then we can apply a large angular velocity (ie. a quick rotation) to ensure we approach our desired heading as quickly as possible.

So, how do we choose the right magnitude of control signal? This is where PID control comes in. PID is an acronym that represents: Proportional, Integral, Derivative.

Let’s go back to the car analogy and say that because we want to get to our reference speed as quickly as possible, we’re going to apply the following rules:

  • If e > 0 (ie. we’re below reference speed), u = umax (max acceleration)
  • If e < 0 (ie. we’re above reference speed), u = -umax (max braking)
  • If e = 0, u = 0 (do nothing)

Now consider what happens if the car is at rest and the reference is 100km/h: the car will accelerate as fast as it can. Similarly, if the car is travelling fast and the reference is set slow, the car will brake as hard as it can. Needless to say, anyone drinking a coffee in the car isn’t going to be happy.

Health and safety aside, you’re quickly going to burn out your actuators, especially when you take real world physics and inertia into account. From standstill, maximum acceleration is going to see the car overshoot the reference velocity, then slam on the brakes, which will undershoot the reference velocity and subsequently rev the accelerator. Because of inertial effects the error will almost always be non-zero, and the actuators will always be working at full action: do that enough and neither will last for long (you’ll probably get car sick too).

Clearly, the control signal should not over-react to small errors: its magnitude must be proportional to the error. If the error is large, our response is also allowed to be large (to hell with onboard coffee drinkers), however if the error is small (ie. we’re only 5km/h off the reference speed), we don’t need to go crazy with the accelerator: just a little bit more will get us there.

If you implement a purely proportional control signal however, you’ll probably notice that while you get close to the reference, you never track it exactly. Small errors accumulate over time that the proportional part of the control signal does not take into account, so we must add an integral part: by integrating the error (with respect to time), we are able to remove more of it.

Whether or not a derivative component is required really depends on the implementation and the physics. It adds fast-rate responsiveness to the control signal (it tracks the derivative of the error over time: if the error is increasing fast, the derivative increases – ie. “quick, this is getting out of control really fast, you need to apply more control!” – whereas if the error is relatively constant, the derivative component is zero).

Therefore, the control signal (for the robot: angular velocity, or how quickly it is changing its heading) that ends up being applied, consists of P, I and D components:

u(t) = Kp·e(t) + Ki·∫ e(t) dt + Kd·de(t)/dt

It is our job to pick Kp, Ki and Kd (the PID coefficients) to ensure the control signal, u can make our state, x, approach the reference r (good tracking), without too much oscillation (good stability) as quickly as possible (optimal).
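In code the whole thing is only a few lines. A minimal sketch (illustrative names; dt is the measured time between updates, just as in the implementation below):

struct PID
{
   float Kp, Ki, Kd;          // the coefficients we have to choose
   float integral  = 0.0f;    // accumulated error (the I term's state)
   float lastError = 0.0f;    // previous error (for the D term)

   // compute the control signal u for the current error and elapsed time dt (seconds)
   float update(float error, float dt)
   {
      integral += error * dt;
      float derivative = (error - lastError) / dt;
      lastError = error;
      return (Kp * error) + (Ki * integral) + (Kd * derivative);
   }
};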

While there are varied methods for selecting Kp, Ki and Kd, often the most realistic (and eventual) solution is simply trial and error: you adjust them until you get the desired result. You can simulate your model and fiddle with them, but when you put the “rubber to the road” (or the robot on the floor), the simulation often drifts away from reality.

Here are some graphs of the simulated results of the robot’s position while travelling from (0,0) to (50,50) using various values of Kp, Ki and Kd:

[Graph: simulated position and heading over time while travelling from (0,0) to (50,50)]

Kp = 0.01, Ki = 0.01, Kd = 0.00

As can be seen, the heading swings about, never quite reaching its desired reference (θ = π/4 rad) and just when it almost gets there, starts swinging away again. Trying with a more aggressive proportional coefficient:

[Graph: simulated position and heading over time with the more aggressive coefficients]

Kp = 0.75, Ki = 0.05, Kd = 0.00

With these coefficients the control is stable (the heading does not oscillate around its reference) and fairly optimal (the heading reaches the reference quickly). The actual coefficients used in the code are different again, to account for the influence of the robot’s real-world physics and the way the code is implemented.

The Implementation

If you browse through the code you’ll see there is a Controller class, which is responsible for implementing the go to goal strategy by keeping track of the robot’s current position and fulfilling requests to send it to new positions. The implementation sends the robot from a “known” origin of (0,0) at heading 0 rad (ie. pointing out along the positive x-axis) through several waypoints.

The high-level implementation is as follows:

referenceHeading = getHeading(currentPosition, desiredPosition, currentHeading)
vectorToTargetMagnitude = distance(currentPosition, desiredPosition)

// keep doing this until we get very close to our desiredPosition
while (vectorToTargetMagnitude > sensitivityLimit)
{
   // how long did we sleep since the end of the last iteration (required for PID)
   dt = getActualTimeSlept()

   // get the total distance travelled according to the odometers
   distanceLeft = odometer.getDistance(left)
   distanceRight = odometer.getDistance(right)
   distanceTotal = (distanceLeft + distanceRight) / 2
   
   // determine how far we've travelled in last iteration
   distanceLeftDelta = distanceLeft - lastDistanceLeft
   distanceRightDelta = distanceRight - lastDistanceRight
   distanceDelta = distanceTotal - lastDistanceTotal

   // determine new position and heading
   currentX += distanceDelta * cos(currentHeading)
   currentY += distanceDelta * sin(currentHeading)
   currentHeading += (distanceRightDelta - distanceLeftDelta) / L

   lastDistanceLeft = distanceLeft
   lastDistanceRight = distanceRight
   lastDistanceTotal = distanceTotal

   // how wrong is our current heading compared to the heading we need?
   headingError = referenceHeading - currentHeading

   // determine the required control signal based on the error
   headingErrorDerivative = (headingError - lastHeadingError) / dt
   headingErrorIntegral += (headingError * dt)
   lastHeadingError = headingError

   // control signal is our required angular velocity
   u = (Kp * headingError) + (Ki * headingErrorIntegral) + (Kd * headingErrorDerivative)

   // convert angular velocity to wheel velocities based on kinematic equalities
   velocityLeft = (2*velocityTranslational - u*L) / (2*r)
   velocityRight = (2*velocityTranslational + u*L) / (2*r)

   // convert these velocities to PWM duty cycles to drive the motors
   motor.setLeftPWM(convertVelocityToPWM(velocityLeft))
   motor.setRightPWM(convertVelocityToPWM(velocityRight))

   // recompute how far we still are from the desired position (for the loop condition)
   vectorToTargetMagnitude = distance(currentPosition, desiredPosition)

   // sleep to create dt
   usleep(50000)
}

The Real World

Here is a video of the robot running an implementation of the described approach. It has been programmed to move through the following waypoints (which you can see marked out on the floor in black tape if you look closely):

float waypoints[][2] = {
   {45.0, 0.0},
   {90.0, 45.0},
   {0.0, 45.0},
   {1.0, 1.0}
};

Crazily, after all that effort, it still stinks. The tracking to the first waypoint is good, the second not bad, the third a bit flakey but by the fourth we’re way off target. This could be down to odometry errors, unrealistic turning circles, a poor PID implementation or who knows what else. Luckily I’m not building a Mars rover.

Next step: mapping the world with SONAR.


Beaglebone Black Odometry

Goal: Measure the distance travelled by a single wheel over time

Unless you’re building a unicycle, measuring the distance travelled by a single wheel seems like a fairly pointless exercise. However, bear with me, because with the wonders of ancient mathematics, kinematics and some (fairly broad) assumptions, this is a fundamental stepping stone to building a robot that can determine its relative position from a known origin using dead reckoning.

At this stage it was time to upgrade from my rickety little “wheels attached to battery case” rig onto a more sturdy platform, so I put in another order with robotgear.com.au:

  • Pololu 5″ Robot Chassis ($18.90)
  • Pololu 42x19mm wheel & encoder set ($45.95)

After a bit of stuffing around tuning the encoder to work at 3.3V and getting the pulse trains back in the right phasing (instructions here at Pololu) I was ready to go.

Muuuuuuuum! Are we there yet?

Just as your car’s odometer measures the distance you’ve travelled on your summer holiday, odometry for robotics does the same: its purpose is to use sensor data to determine the distance travelled by the robot over time. By combining this information with the physical attributes of the robot itself, relative position can be reckoned (but more on this to come).

A common method of odometry is to measure how many rotations (or partial rotations) a wheel has made. You may have actually done this yourself as a kid: did you ever have to walk around the basketball court at school with a little wheel on the end of a stick? If so, you’ve been a human odometer. If we know the circumference c of the wheel is 1m and the wheel rotates once, then we must have travelled 1m (caveat: this ignores any slippage, which turns out to be a significant problem with wheel-based odometry). Furthermore, as the first derivative (with respect to time) of position is velocity, we can say that if the wheel rotated once in 1 second, it must be rotating at 1m/s (3.6km/h).
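As a back-of-the-envelope sketch of that arithmetic (sticking with the made-up 1m circumference from above, not the robot’s actual wheel):

// wheel circumference in metres (the 1m example above)
const float circumference = 1.0f;

// distance covered after a number of full rotations of the wheel
float distanceTravelled(float rotations)
{
   return rotations * circumference;
}

// average velocity (m/s) if those rotations took a given number of seconds,
// e.g. 1 rotation in 1 second = 1 m/s
float averageVelocity(float rotations, float seconds)
{
   return distanceTravelled(rotations) / seconds;
}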

Rotary Encoders

So far, so good. But how can we put this into practice with the wheel(s) of a robot? A common option is to use a rotary encoder such as the Pololu encoder I have used here. This is an optical encoder and uses two infrared reflectance sensors that shine infrared light onto the inside of the wheel hub, upon which are arranged 12 “teeth” with gaps between them.

As the wheel rotates around the shaft of the motor, the teeth pass over the reflectance sensors. When a tooth is above a sensor it reflects infrared light and when no tooth is above it doesn’t. The sensor (after some signal cleaning) outputs a pulse train: high, low, high, low that corresponds to tooth, no tooth, tooth, no tooth. The higher the frequency of the pulse train, the faster the wheel is turning.

A single sensor is good: its pulse train allows us to determine how fast the wheel is rotating, but it lacks a critical bit of information: in which direction is the wheel rotating?

This is where the second sensor comes in. The sensors are physically positioned to provide the same pulse train, but 90 degrees out of phase with one another: a quadrature output signal. When the wheel is rotating in direction A (ie. clockwise) the pulse train from sensor A “leads” that of sensor B (because the tooth passes over A first) and when the wheel is rotating in the opposite direction (ie. counter-clockwise) the pulse train from sensor B “leads” that of sensor A (because the tooth passes over sensor B first).

[Diagram: quadrature output pulse trains – sensor A leading sensor B (top) and sensor B leading sensor A (bottom)]

Image Courtesy of EDN

Looking at the quadrature output when sensor A leads sensor B (the top two pulse trains, let’s assume this represents clockwise rotation), it can be seen that for a series of consecutive rising and falling edges, the state of the outputs forms the following state table:

Sensor A   Sensor B
    1          0
    1          1
    0          1
    0          0

Also note that this pattern is cyclical. Once the final state is reached, it will wrap around to the first state again.

Looking at the quadrature output when sensor B leads sensor A (the lower two pulse trains, let’s assume this is counter-clockwise rotation), it can be seen that for a series of consecutive rising and falling edges, the state of the output forms the following state table:

Sensor A   Sensor B
    0          1
    1          1
    1          0
    0          0

Notice that the second is exactly the same state table as the first, just “starting” on a different state and transitioning backwards? If so, please collect the big fluffy bear as your prize. The transitions are exactly the same, they just occur in the… opposite direction (because the wheel is rotating in the opposite direction).

Side note: this state table is a Gray code: the Hamming distance between adjacent states is always 1, meaning only one of the two outputs ever changes its level between adjacent states. If we ever see both outputs change state at the same time then we know there has been an error: there is either noise corrupting the signal or we missed a transition (this latter part is important later on as we’ll be measuring these transitions using interrupts, which, depending on our hardware, could be missed).

To summarise:

  • Our encoder uses an infrared sensor to detect wheel rotation by measuring reflected light bouncing off teeth in the wheel hub
  • The sensor outputs a pulse train that indicates when a tooth is present / not present
  • By counting the pulses we know how many teeth have passed by the sensor
  • By knowing the circumference of the wheel and the number of teeth we can determine the distance the wheel has rotated
  • By measuring the time it took for N teeth to pass, we can determine the velocity of the wheel’s rotation
  • By adding a second sensor, positioned next to the first, we get two identical pulse trains, 90 degrees out of phase with one another
  • Because a tooth must pass over one of the sensors “first”, we can determine the direction of rotation by examining the phasing of the pulse trains

So, if we can get the Beaglebone Black to monitor these pulse trains, we can work out how the wheel is rotating (and as we have two wheels, we’ll end up with two of everything: two encoders, two sets of pulse trains, four individual signals to monitor).

Detecting Interrupts with Beaglebone Black’s GPIOs

As well as the many peripherals (PWMs, ADCs) that can be exposed on the Beaglebone’s external pins, most of them can also act as General Purpose I/Os (GPIOs), capable of acting as inputs (which read the level of the signal applied to them) or outputs (which generate a signal, ie. to drive an LED or other external device). When configured as inputs they can also generate interrupts on a rising or falling edge of the input signal, and this is the functionality that I used to monitor the encoder signals and thus implement the odometer.

Unfortunately, when I was working with the Beaglebone Black it was still quite fiddly to configure the GPIOs. Recall from earlier posts that the BBB has a lot of internal peripherals whose functionality is multiplexed out onto external pins according to the device tree. In order to change what gets mapped onto which pins, device tree overlays are used to reconfigure the hardware at runtime. So I had to write a DTO (Device Tree Overlay) that configured P9_11, P9_13 and P8_7, P8_8 (there is nothing magical about this choice of pins other than that they can act as GPIOs and aren’t being used for the PWM or ADC in this project) as GPIOs in input mode and load this before running my odometer program. Derek Molloy has a great tutorial about using the GPIOs on the BBB and writing / compiling DTOs here. You’ll want to read it if you haven’t before.
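To make that concrete, here is a minimal sketch of the sysfs mechanics for a single GPIO (error handling omitted; GPIO 30 is how P9_11 usually appears under the default numbering, but check the mapping for your own pins – this is an illustration of the approach, not the code from the repository):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
   // export the GPIO and configure it as an input that interrupts on both edges
   FILE *f;
   f = fopen("/sys/class/gpio/export", "w");           fprintf(f, "30");   fclose(f);
   f = fopen("/sys/class/gpio/gpio30/direction", "w"); fprintf(f, "in");   fclose(f);
   f = fopen("/sys/class/gpio/gpio30/edge", "w");      fprintf(f, "both"); fclose(f);

   int fd = open("/sys/class/gpio/gpio30/value", O_RDONLY);
   char level;
   read(fd, &level, 1);          // dummy read to clear any pending state

   while (1)
   {
      struct pollfd pfd;
      pfd.fd = fd;
      pfd.events = POLLPRI | POLLERR;
      poll(&pfd, 1, -1);         // block until a rising or falling edge occurs

      lseek(fd, 0, SEEK_SET);    // rewind and read the new level ('0' or '1')
      read(fd, &level, 1);
      printf("edge detected, level is now %c\n", level);
   }
   return 0;
}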

Putting this all together, to implement the odometry I had to:

  • Connect the two outputs from each rotary encoder (one per wheel) to external pins on the BeagleBone Black that can act as GPIOs (P9_11, P9_13, P8_7 and P8_8 in my case, these are protected with 100 ohm inline resistors)
  • Write and compile a Device Tree Overlay that multiplexes the external pins to GPIOs and configures their hardware as inputs
  • Write some code that uses the virtual files in sysfs to export control of the GPIOs into userland (a directory of virtual control files is created for each GPIO that is exported, Derek’s tutorial discusses this in detail) and then configure the GPIOs as inputs that generate interrupts on both rising and falling edges
  • Write some code that monitors the state of the encoder outputs by waiting for interrupts on the GPIOs and measures the direction and amount of rotation

The code itself is on github and documented, but I wanted to explain in some more detail how the main odometer loop works and the assumptions that it makes. The code performs odometry for two encoders/wheels, but to simplify the explanation below we assume we’re dealing with just one encoder.

  • It tracks five values for each encoder (wheel):
    • The current state (as measured directly after the interrupt, or assumed based on logic) of each sensor (currentA and currentB)
      • Assumed based on logic is… an assumption. If we trust that the phase difference between the two pulse trains is always 90 degrees, that it is only possible for the states to transition between adjacent codes according to the gray code and that we do not miss any interrupts, then when an interrupt (rising or falling edge) occurs on one sensor, if we measure the level of that signal we can assume we know the level of the other sensor – because it follows a gray code it cannot have changed since the last interrupt (because only one signal can change at a time between adjacent gray code states, and the changing signal must have been the one experiencing the interrupt). This assumption falls apart if we miss an interrupt for any reason (ie. the processor was busy, something got lost in noise etc). In that case, we may not actually be in the state immediately adjacent to the last one we measured: we could be several/hundred/millions of states further on.
    • The previous state (at the last interrupt) of each sensor (lastA and lastB)
    • A counter that increments by one for each forward transition (“tick”) and decrements by one for each backward transition (“tick”) through the state table (odometer).
      • If odometer == 0 and then the wheel rotates forward/clockwise over the course of five consecutive interrupts/ticks, odometer == 5.
      • The Pololu encoder generates 48 ticks per complete revolution of the wheel. Thus, if the wheel has a radius of 2cm and thus a circumference c = 12.57cm, and we see 48 consecutive transitions through the table in a single direction, the wheel has turned 12.57cm. If we see 24 consecutive transitions through the table in one direction, and then 24 consecutive transitions through the table in the opposite direction, the wheel has still turned 12.57cm, but if it were travelling in a straight line over ground it would be back at its origin position.
  • Before looping, it opens 2 file handles, one to each GPIO (we need 2 GPIOs per rotary encoder, one to measure each pulse train)
  • The main loop construct is built around a blocking poll() on the file handles. An interrupt on either of the GPIOs (ie. the detection of a rising or falling edge on one of the encoder output pulse trains) causes the poll to return.
  • It then determines which GPIO had the interrupt and reads the state of that GPIO (ie. the current level of the pulse that just experienced the edge). Going back to the assumptions above, it now “knows” the current level of both sensors: currentA and currentB.
  • It XORs the current level of each sensor against the level of the opposite sensor at the last interrupt (lastA and lastB) which allows it to determine whether it is transitioning “forward” or “backward” through the state table, and thus the direction in which the wheel is rotating. The odometer counter is incremented or decremented accordingly. It also performs a basic sanity check to ensure the current states do not violate the gray code (which probably indicates that we’ve missed an interrupt and cannot be certain of our count any more).
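Condensed into code, the decision made on each interrupt looks roughly like this (a sketch of the idea rather than the exact code; the caller adds the returned delta to the odometer and then copies currentA/currentB into lastA/lastB):

// decide what a single interrupt means for one encoder. currentA/currentB are the levels
// just determined (one read from the GPIO that interrupted, the other assumed unchanged);
// lastA/lastB are the levels at the previous interrupt
int decodeTransition(int currentA, int currentB, int lastA, int lastB, bool &grayCodeError)
{
   // both levels appear to have changed: gray code violation, we have probably missed
   // an interrupt and the count can no longer be trusted
   grayCodeError = (currentA != lastA) && (currentB != lastB);
   if (grayCodeError)
      return 0;

   int delta = 0;
   if (currentA ^ lastB) delta--;   // backward transition through the state table
   if (currentB ^ lastA) delta++;   // forward transition through the state table
   return delta;
}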

To illustrate, we can walk through several ticks from a reset state and see what happens. To refresh, the gray code for the encoder is:

State   Sensor A   Sensor B
  0         0          0
  1         0          1
  2         1          1
  3         1          0

tick 0 (power on)

Note: This assumes the initial state of the encoder is 0 (ie. sensor A = 0, sensor B = 0). If it’s not, we just get an initial gray code violation error; the code could be made more robust by simply measuring the actual starting state.

currentA = 0
currentB = 0
lastA = 0
lastB = 0 
odometer = 0

tick 1 (first interrupt, rising edge on B: state 0 -> state 1)

currentA == 0 (this is assumed, because interrupt occurred on b)
currentB == 1 (this is read now, because interrupt occurred on b)
lastA == 0
lastB == 0

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

NOTE: The above determines whether the transition is from state 0 to state 1 (forwards) or from state 2 to state 1 (backwards). If lastA and lastB had been 1 (ie. previous state was state 2) the odometer would have been decremented.

lastA = currentA = 0
lastB = currentB = 1
odometer == 1

tick 2 (second interrupt, rising edge on A: state 1 -> state 2)

currentA == 1 (this is read now, because interrupt occurred on a)
currentB == 1 (this is assumed, because interrupt occurred on a)
lastA == 0
lastB == 1 

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

lastA = currentA = 1
lastB = currentB = 1
odometer == 2

tick 3 (third interrupt, falling edge on B: state 2 -> state 3)

currentA == 1 (this is assumed, because interrupt occurred on B)
currentB == 0 (this is read now, because interrupt occurred on B)
lastA == 1
lastB == 1

if (currentA ^ lastB) odometer--;   // false
if (currentB ^ lastA) odometer++;   // true

lastA = currentA = 1
lastB = currentB = 0
odometer == 3

tick 4 (fourth interrupt, direction reversed, rising edge on B: state 3 -> state 2)

currentA == 1 (this is assumed, because interrupt occurred on B)
currentB == 1 (this is read now, because interrupt occurred on B)
lastA == 1
lastB == 0

if (currentA ^ lastB) odometer--;   // true
if (currentB ^ lastA) odometer++;   // false

lastA = currentA = 1
lastB = currentB = 1
odometer == 2
