Here’s a board I designed back in 2008. It’s an excessively feature-packed brushless DC motor (BLDC) controller.
- Can drive brushed and brushless (3-phase) DC motors
- 15~30V input voltage
- Single-supply operation (onboard LT3470 SMPS)
- 10+A per-phase current handling (Si4456DY MOSFETs)
- Low and high-side N-Channel MOSFETs (driven by LM5101 gate drivers)
- Per-phase current sensing (sensed via AD8216 difference amplifiers)
- Per-phase voltage sensing
- Sensored (hall-effect or rotary encoder) or sensorless (back-EMF measuring) operation
- Cortex-M3 32-bit ARM CPU (STM32)
- CAN-bus transceiver (MCP2551)
- RS-422 full-duplex transceiver (DS89C21)
…And all of that fits onto a single low-spec (BatchPCB-compatible) 2.5×1.5″ 2-layer PCB.
The top side is pretty well packed with components, so quite a few spilled over to the back side of the PCB. It’s not ideal from a manufacturability viewpoint, but I hadn’t planned on mass-producing these.
The sheer quantity of components was due to my stubborn desire to implement all of the functionality of a dedicated brushless motor driver chip (like the remarkably well-specced Allegro A3930 driver), and then go above-and-beyond with extra features – for example, the per-phase current-sensing was intended to allow for more sophisticated drive waveforms (e.g. sinusoidal current, as used in a vector drive). Ultimately, for my application, most of those extra features would prove irrelevant (spoiler: if I were to design this board again, I’d just use an A3930 and a simple AVR microcontroller).
The application (as is typical of my projects) pertains to robotics – specifically, driving the brushless DC motor that powers the above E-Maxx R/C truck chassis (a future outdoor robotics platform). The choice of brushless vs. brushed and custom controller vs. off-the-shelf was driven by three major points:
- good very-low-speed driveability (which essentially eliminates any off-the-shelf R/C car controller)
- mostly-accurate odometry (the motor needs an encoder – e.g. the integrated hall-effect sensors in a BLDC motor)
- simple/reliable mechanics (being rather more of an electrical engineer than a mechanical engineer, I hoped to avoid having to graft a new encoder on to an existing motor)
Mechanical odometry would become a lower priority if/when I implement SLAM/visual odometry on the robot, but it’s still an excellent thing to have as a reference point (most visual odometry algorithms appreciate having a good input estimate, too). Thus, a standard sensored BLDC motor was selected.
At the heart of the design is a 32-bit ARM Cortex-M3 CPU from ST. For most projects, I would just use a tried-and-true AVR microcontroller – but this one had special requirements (not unlike the reasons that drove me to use dsPICs on Eddie’s motor controller). Controlling a sensored brushless motor directly – without relying on an interface/driver chip (like the aforementioned A3930) – requires generating 6 carefully synchronized PWM outputs: one for the top MOSFET and one for the bottom MOSFET in each of the 3 half-bridges. It was also desirable to be able to insert adjustable dead-time between the switching of the top and bottom MOSFETs.
The STM32F103C used here has several hardware peripherals that make this easier. It has an Advanced Control Timer that is specifically designed to support driving 3-phase motors – it can generate complementary 3-phase PWM outputs (6 total outputs) with programmable dead-time between the complementary channels. Its general-purpose timers have hardware support for decoding both quadrature encoder signals, and hall-effect sensors. It has multiple 12-bit 1-megasample/second ADCs. It has a DMA engine and dedicated triggering between all of the aforementioned peripherals – so it’s possible to significantly reduce interrupt rates by moving a lot of routine control loops into pure hardware.
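On the STM32F103, the programmable dead-time lives in an 8-bit DTG field of TIM1’s BDTR register. As a rough sketch of how firmware might pick that value – covering only the simplest of the four DTG encoding ranges documented in ST’s RM0008 reference manual, and assuming the dead-time clock t_DTS runs at the full 72 MHz timer clock (no clock division) – converting a desired dead-time in nanoseconds to a DTG value could look like this (`deadtime_to_dtg` is a hypothetical helper, not from the board’s firmware):

```c
#include <stdint.h>

/* Convert a desired dead-time (in nanoseconds) into a value for the DTG[7:0]
 * field of TIM1's BDTR register.  This covers only the first encoding range
 * (DTG[7] = 0, dead-time = DTG * t_DTS), which at a 72 MHz timer clock spans
 * 0 to ~1.76 us -- plenty for small MOSFETs like the Si4456DY. */
static uint8_t deadtime_to_dtg(uint32_t ns, uint32_t timer_hz)
{
    /* Dead-time generator clock ticks needed to span 'ns' (rounded up). */
    uint64_t ticks = ((uint64_t)ns * timer_hz + 999999999ULL) / 1000000000ULL;
    if (ticks > 127)            /* beyond the first encoding range */
        ticks = 127;
    return (uint8_t)ticks;      /* DTG[7] = 0 in this range */
}
```

At 72 MHz each tick is ~13.9 ns, so a 500 ns dead-time comes out to DTG = 36.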
An AVR could have been made to work here, but it would have required a lot more firmware fiddling to keep all of the PWM outputs synchronized. Dead-time control would have been tricky, at best. The AVR’s ADCs weren’t fast enough for cycle-by-cycle current limiting (so I would have needed to fall back to a slower/less-sophisticated control loop). The AVR’s timers aren’t capable of decoding quadrature encoder signals in hardware (so I would have needed to generate an interrupt for every single encoder transition). I should note: this board was designed before Atmel’s XMEGA line of high-end AVRs became available – these things are pretty amazing, and have hardware peripherals that address all of the shortcomings I mention here (very comparable to the STM32). For applications that don’t require a lot of compute power, but do require robust peripherals (PWM, ADC/DAC, DMA) – like low-level motion control – the XMEGA is an appealing alternative to the STM32 used here.
Two different communication interfaces are provided to the CPU – one is a plain full-duplex asynchronous serial link (like RS-232, but with a twist: it’s run through a differential transceiver, making it an RS-422 interface), while the other is a CAN (controller area network) link (also differential).
The choice of offering only differential interfaces was driven by a desire for more reliable communication channels. With Eddie, I2C was the bus of choice. It’s a slow/low-voltage/single-ended/open-drain bus that is most often confined to a single PCB. We were using it to connect between a couple different boards in Eddie. Due to how power wires and other high-speed interfaces (e.g. the FireWire cable to Eddie’s camera) were routed, we experienced periodic corruption of communications packets. This caused some sporadic reliability problems, which were later addressed by adding packet checksums and retry mechanisms to the communications protocol (a good practice anyway – one that, sadly, is neither standardized nor typically implemented on many low-level busses). Using a differential interface substantially reduces susceptibility to many potential noise sources in a distributed, many-node system (such as a robot) – e.g. mismatched ground levels or coupling from adjacent wires.
The CAN bus was, for this application, seen as a natural progression from I2C – it retains I2C’s desirable multi-drop/multi-master nature, while addressing many of its shortcomings. The most common physical layer for CAN is differential and highly fault-tolerant (it is, after all, designed for automotive environments). And it defines a much higher-level protocol than I2C, one that includes robust packet checksumming. All of these features, however, come at the cost of increased complexity and reduced device support. Most lower-end microcontrollers (including AVRs) don’t natively support CAN, and the protocol requires more complex firmware to manage.
So, the RS-422 link is present as a simple, lowest-common-denominator alternative to CAN. In a multi-device system, it would be used in a point-to-point fashion, with a dedicated device (a microcontroller with a lot of UARTs, or an FPGA) functioning as a central hub. A higher-level protocol would be constructed on top to support routing and error checking. Since it’s just a serial interface with a simple differential transceiver, virtually any device can be made to work with it. Thus far, all development work has been done with the RS-422 link bridged to a PC’s serial port.
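The checksum-and-frame approach mentioned above can be sketched with a hypothetical minimal frame for the RS-422 link – a sync byte, a payload length, the payload, and a one-byte two’s-complement checksum. The board’s actual protocol isn’t described here, so everything below (`FRAME_SYNC`, `frame_build`, `frame_check`) is illustrative, not the real wire format:

```c
#include <stdint.h>
#include <stddef.h>

#define FRAME_SYNC 0x7E   /* arbitrary sync byte for this example */

/* Build a frame: [sync][len][payload...][checksum].  The checksum is the
 * two's complement of the byte sum of len + payload, so a receiver can
 * verify a frame by summing everything after the sync byte to zero. */
static size_t frame_build(uint8_t *out, const uint8_t *payload, uint8_t len)
{
    uint8_t sum = len;
    out[0] = FRAME_SYNC;
    out[1] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[2 + i] = payload[i];
        sum += payload[i];
    }
    out[2 + len] = (uint8_t)(~sum + 1);
    return (size_t)len + 3;
}

/* Returns 1 if the frame is well-formed and its checksum is consistent. */
static int frame_check(const uint8_t *buf, size_t n)
{
    if (n < 3 || buf[0] != FRAME_SYNC || buf[1] != n - 3)
        return 0;
    uint8_t sum = 0;
    for (size_t i = 1; i < n; i++)
        sum += buf[i];
    return sum == 0;
}
```

A single flipped payload bit makes the byte sum nonzero, so the receiver can discard the frame and request a retry – the same recovery mechanism retrofitted onto Eddie’s I2C protocol.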
The board is designed to be powered from a single unregulated power source. This simplifies power routing within the robot, and (in conjunction with the purely differential communication interfaces) eliminates the potential for a variety of tricky power-related problems (e.g. ground loops) – but it, of course, mandates a more complex power supply on the board.
In addition to the unregulated input – which powers the motor directly – three distinct power rails are needed on the board: 12V to drive all of the power MOSFETs, 5V to power the current-sense amplifiers and differential transceivers, and 3.3V to power the microcontroller. Given the intended wide operating range of the board (upwards of 40V in), a small SMPS was much preferred to a linear solution. A Linear Technology LT3470 integrated buck regulator chip was selected to supply the 12V rail. It integrates the high-side MOSFET and catch diode, thus only requiring a small external inductor and a few other passives. Its 40V absolute maximum input is also what sets the board’s maximum input voltage (the next limit – assuming higher voltage MOSFETs are used – is the AD8216’s 65V max).
Simple linear regulators are used to power the 5V and 3.3V rails. A MIC5236 regulates from 12V down to 5V, while a MIC5256 regulates from 5V down to 3.3V. Both regulators have power-good flags, which are wired so that the microcontroller is held in reset until both of the logic supplies are stable.
The power stage was designed for only modest current capacity, with more of an emphasis on small size, ease of assembly, and low power dissipation. Since I didn’t have access to any form of reflow soldering at the time, I avoided sturdier packages – like D2PAKs, or more exotic manufacturer-specific packages like International Rectifier’s DirectFET series. Instead, I selected some high-end SO-8 packaged devices from Vishay (Si4456DY), with a maximum VDS of 40V and a worst-case RDS-on of only 0.0038 ohms. The SOIC package limits steady-state power dissipation to only a couple hundred milliwatts, and doesn’t offer a lot of thermal mass with which to absorb transient events – so minimizing power dissipation is critical.
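A quick back-of-envelope check shows why the low RDS-on matters. The 10 A and 0.0038 Ω figures are from the design above; the 1/3 conduction duty (each MOSFET conducting during roughly two of six commutation steps) is my own assumption for the averaged case:

```c
/* Back-of-envelope conduction-loss estimate for one MOSFET:
 * P = I^2 * R, scaled by the fraction of time the device conducts.
 * (Switching losses are ignored here.) */
static double fet_conduction_loss(double amps, double rds_on, double duty)
{
    return amps * amps * rds_on * duty;
}
```

At 10 A continuous, P = 10² × 0.0038 = 0.38 W – already above the SOIC’s comfortable budget. Averaged over six-step commutation (~1/3 duty per device), it drops to roughly 130 mW, which fits within the “couple hundred milliwatts” the package can handle.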
Three independent half-bridges of MOSFETs are used to switch each of the 3 phases going to the motor. Each bridge is composed entirely of the aforementioned Vishay N-channel MOSFETs (P-channel MOSFETs weren’t much of an option, since they have roughly 3× the on-resistance for a given die size). Driving N-channel devices on the high side of a bridge is normally a tricky affair, since they need 5~15V applied across their gate and source pins – but the source pin of a high-side MOSFET is tied to the bridge output, so it swings between 0V and the supply voltage (e.g. 30V). For high-voltage bridges (like in a solid-state Tesla coil), this requires specialized hardware (gate-drive transformers or opto-isolators). For low-voltage bridges, however, this is largely a solved problem; there exist a variety of gate-driver chips specifically designed for driving high-side N-channel switches.
I selected some LM5101 half-bridge driver chips from National Semiconductor for this task. They’ll run with up to a 100V bridge supply, and can drive the high-side MOSFET’s gate to well beyond the rail. Their basic operation is pretty simple: when the low-side MOSFET is on (thus connecting the high-side MOSFET’s source pin to ground), a high-side bootstrap capacitor (C4/C20/C31 in the schematic) is charged via a diode from the 12V supply rail (the diode is integrated into the LM5101). When the high-side MOSFET needs to turn on, this (now charged) bootstrap capacitor provides the floating supply for the high-side drive circuit. This works well for most cases, but means that the low-side MOSFET must be turned on periodically to recharge the bootstrap capacitor – making true 100% duty-cycle operation impossible (though ~99% is fine). Some fancier bridge-driver chips (like the A3930) include charge pumps to top off the bootstrap supplies, allowing for true 100% duty cycle.
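In firmware, the bootstrap constraint shows up as a cap on the PWM compare value: the high side must be held off for a sliver of every period so the low side can recharge the capacitor. A hypothetical clamp (the 1% floor is an illustrative figure, not a measured requirement of this board):

```c
#include <stdint.h>

/* Cap a PWM compare value so the low-side MOSFET conducts for at least
 * ~1% of every period, keeping the bootstrap capacitor topped off.
 * 'period' is the timer's auto-reload value (counts per PWM cycle). */
static uint16_t clamp_duty(uint16_t compare, uint16_t period)
{
    uint16_t max = period - period / 100 - 1;  /* ~99% of the period */
    return (compare > max) ? max : compare;
}
```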
Feedback falls into two distinct categories: off-board (e.g. encoder feedback) and on-board (e.g. current sensing). For off-board feedback, a single 6-pin connector is provided. Two pins are used for power and ground, with the power selectable between 3.3V and 5V via a 0-ohm resistor jumper. One pin connects to an ADC input on the CPU, and has load options for a pull-up or pull-down resistor – allowing the board to read an external thermistor (as is found in many brushless motors, for over-temperature protection).
The three remaining pins connect directly into the STM32 CPU’s timer inputs. If pull-ups are needed (as they are with most hall-effect sensors), the STM32’s built-in pull-ups may be enabled. The STM32, while being a 3.3V device, has 5V-tolerant inputs – allowing it to directly interface with 5V sensors. The three timer inputs allow for connecting to the 3 hall-effect sensors on a typical brushless motor, or to a quadrature encoder with an index pulse, or even to an absolute rotary encoder with a PWM output (like US Digital’s MA3 encoder). The STM32’s timers can be configured to decode any of these inputs with minimal firmware overhead.
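For the hall-sensor case, the decode ultimately reduces to a 3-bit lookup. As a sketch only – the state ordering below is one common Gray-code sequence, but the actual mapping depends on the specific motor’s sensor placement and must be determined empirically; `hall_to_step` is a hypothetical helper:

```c
#include <stdint.h>

/* Decode a 3-bit hall-sensor state (H3:H2:H1) into a commutation step 0..5.
 * States 0 and 7 (all sensors low or all high) are invalid and usually
 * indicate a wiring fault; -1 is returned for those.  The sequence
 * 1,3,2,6,4,5 is one common Gray-code ordering -- a real motor may differ. */
static int hall_to_step(uint8_t hall)
{
    static const int8_t table[8] = { -1, 0, 2, 1, 4, 5, 3, -1 };
    return (hall < 8) ? table[hall] : -1;
}
```

The step index would then select which half-bridge drives high, which drives low, and which floats – the classic six-step commutation pattern.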
On-board feedback comprises 6 analog channels: 4 voltages (the 3 motor phases and the unregulated supply rail) and 2 currents (for 2 of the motor phases – the 3rd can be calculated from the other 2). Voltage sensing is straightforward: just simple resistor dividers with some capacitive filtering (incidentally, the only known errata for the board is that the supply-voltage sensor is incorrectly hooked to a non-ADC pin on the CPU; shifting all of the analog lines over by 1 pin would fix this).
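Deriving the third phase current is a one-liner. Assuming a wye-connected motor with no neutral return (the normal case for a BLDC motor), Kirchhoff’s current law forces the three phase currents to sum to zero:

```c
/* With no neutral connection, i_a + i_b + i_c = 0, so the unmeasured
 * third phase current is just the negated sum of the two measured ones. */
static double third_phase_current(double i_a, double i_b)
{
    return -(i_a + i_b);
}
```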
Current sensing is more complicated. Due to my original intent of being able to do vector control on the motor, I wanted to be able to monitor the current in each motor phase directly. It’s possible to get similar data by monitoring other nodes – like the ground paths in each half-bridge, but you wind up making compromises (like not being able to monitor recirculating current that flows exclusively through the high-side devices). At the time, I didn’t want to compromise. Thankfully, Analog Devices made just the device that I needed: the AD8216, which allows for monitoring the voltage across a floating current-sense resistor.
The AD8216 runs from a single 5V supply, and can amplify a small differential voltage (the drop across the current-sense resistor) in the presence of a large common-mode voltage (the phase voltage, which swings between 0V and the supply voltage). It even supports below-the-rail operation, allowing for some ground-bounce during switching events. Its bandwidth is high (3 MHz) – which, in conjunction with the STM32’s 1 Msps ADCs, allows for precisely monitoring current within a single PWM cycle.
Schematic capture and PCB layout were done in EAGLE. The final layout is surprisingly concise.
Layout priority was, naturally, given to the power stage. Input power is distributed to the 3 half-bridges by large unbroken copper fills on the back side of the board. Bridging the two rails is a whole slew of ceramic decoupling capacitors. The MOSFETs are all on the top side, with plenty of vias connecting to the power fills (for both electrical and thermal reasons). Outputs from each of the half-bridges are routed through the current-sense resistors (for 2 of the phases, and a dummy 0-ohm resistor for the 3rd) directly to off-board wires going to the motor (off-board, in-line connectors would be used in the final robot – on-board connectors of suitable current handling capacity would have substantially increased the PCB size).
The MOSFET driver chips are placed near their respective half-bridges. The high-current gate/source drive paths between the driver chips and the MOSFETs are routed adjacent to each other to improve drive waveforms.
Analog current-sense paths were given similar attention. The differential traces from the sense resistors to the amplifier chips were routed (as much as is possible with a 2-layer PCB) to avoid non-common-mode interference (the differential amplifiers would, of course, remove most common-mode noise).
The SMPS needed to have its current-loops minimized, as well – which wasn’t a problem, given how small it was.
The rest of the board had relatively simple routing requirements. Aside from the CPU’s crystal oscillator and per-chip decoupling, the remaining chips and traces could go wherever they fit.
I have some basic firmware written for this board that correctly drives a sensored brushless motor, either with locked-antiphase or sign/magnitude style PWM. Originally, I had intended to use locked-antiphase exclusively, since it allows for regenerative braking. Unfortunately, when designing the board, I hadn’t fully appreciated just how low-inductance most brushless R/C car motors are. The low inductance results in very quick current changes in the phase windings. Since locked-antiphase depends on the winding inductance to smooth out and reduce circulating/ripple current, this was a major problem; at reasonable PWM frequencies, the ripple current was very high – while, at high PWM frequencies, the MOSFET switching losses were unacceptable.
Switching to sign/magnitude resolves most of those issues, but introduces some of its own. Most importantly, the MOSFETs’ body diodes must now come into play during PWM off-cycles. The body diode has much higher losses than the MOSFET channel, resulting in more power dissipation. Synchronous rectification would help with this, but would be tricky to implement in firmware (especially given the rapid current decay rate). Many dedicated motor-controller ICs have integrated hardware for implementing synchronous rectification.
Despite these potential inefficiencies, the board still works fine. Now it’s just waiting on me to finish the rest of the robot, so it actually has something to do!
This board was designed before I standardized my part libraries, so you won’t find the parts in my EAGLE parts library. In any case, the EAGLE schematic and PCB layout can still be downloaded via Mercurial, in my eagle_mtrdrv1 repository (or via a direct ZIP download).