Hardware design and development

The process of creating digital logic is not unlike the embedded software development process you're already familiar with. A description of the hardware's structure and behavior is written in a high-level hardware description language (usually VHDL or Verilog), and that code is then compiled and downloaded prior to execution. Of course, schematic capture is also an option for design entry, but it has become less popular as designs have become more complex and the language-based tools have improved. The overall process of hardware development for programmable logic is described in the paragraphs that follow.

Perhaps the most striking difference between hardware and software design is the way a developer must think about the problem. Software developers tend to think sequentially, even when they are developing a multithreaded application. The lines of source code that they write are always executed in the order in which they appear, at least within a given thread. If there is an operating system, it is used to create the appearance of parallelism, but there is still just one execution engine. During design entry, hardware designers must think, and program, in parallel. All of the input signals are processed in parallel as they travel through a set of execution engines (each one a series of macrocells and interconnections) toward their destination output signals. Therefore, the statements of a hardware description language create structures, all of which are "executed" at the very same time. (Note, however, that the transfer of signals from macrocell to macrocell is usually synchronized to some other signal, such as a clock.)
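To make that idea concrete, here is a minimal Verilog sketch; the module and signal names are invented purely for illustration. The two always blocks are not executed one after the other, as two C statements would be. They describe two separate pieces of hardware, both of which update on every rising edge of the clock.

// Minimal illustrative sketch; names are hypothetical.
module parallel_example (
    input  wire       clk,
    input  wire [7:0] a,
    input  wire [7:0] b,
    output reg  [7:0] sum,
    output reg  [7:0] diff
);

    // This adder...
    always @(posedge clk)
        sum <= a + b;

    // ...and this subtractor are separate pieces of logic,
    // updated in parallel on each clock edge.
    always @(posedge clk)
        diff <= a - b;

endmodule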

Typically, the design entry step is followed by, or interspersed with, periods of functional simulation. That's where a simulator is used to execute the design and confirm that the correct outputs are produced for a given set of test inputs. Although problems with the size or timing of the hardware may still crop up later, the designer can at least be sure that the logic is functionally correct before going on to the next stage of development.
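Functional simulation is usually driven by a testbench written in the same language as the design. The sketch below exercises the hypothetical module from the previous example with a single set of test inputs and checks the outputs; a real testbench would, of course, be far more thorough.

// Illustrative testbench for the parallel_example module above.
module parallel_example_tb;

    reg        clk = 0;
    reg  [7:0] a, b;
    wire [7:0] sum, diff;

    // Instantiate the design under test.
    parallel_example dut (.clk(clk), .a(a), .b(b), .sum(sum), .diff(diff));

    // Generate a free-running clock.
    always #5 clk = ~clk;

    initial begin
        a = 8'd30; b = 8'd12;
        @(posedge clk); #1;   // allow the registered outputs to update
        if (sum !== 8'd42 || diff !== 8'd18)
            $display("FAIL: sum=%d diff=%d", sum, diff);
        else
            $display("PASS");
        $finish;
    end

endmodule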

Compilation only begins after a functionally correct representation of the hardware exists. This hardware compilation consists of two distinct steps. First, an intermediate representation of the hardware design is produced. This step is called synthesis, and the result is a representation called a netlist. The netlist is device-independent: its contents do not depend on the particulars of the FPGA or CPLD, and it is usually stored in a standard format called the Electronic Design Interchange Format (EDIF).
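To give a feel for what a netlist contains, the fragment below is a hand-written structural sketch. The primitive cell names are hypothetical and are modeled behaviorally here only to keep the example self-contained; in a real flow they would come from the device vendor's library, and the synthesis tool would generate the equivalent structure automatically, typically in EDIF rather than Verilog.

// Behavioral stand-ins for two hypothetical library primitives.
module AND2 (input wire a, input wire b, output wire z);
    assign z = a & b;
endmodule

module DFF (input wire clk, input wire d, output reg q);
    always @(posedge clk) q <= d;
endmodule

// What a netlist boils down to: instances of primitive cells
// and the wires that connect them.
module netlist_fragment (
    input  wire clk,
    input  wire x,
    input  wire y,
    output wire q
);
    wire and_out;

    AND2 u1 (.a(x), .b(y), .z(and_out));      // two-input AND cell
    DFF  u2 (.clk(clk), .d(and_out), .q(q));  // flip-flop cell
endmodule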

The second step in the translation process is called place & route. This step involves mapping the logical structures described in the netlist onto actual macrocells, interconnections, and input and output pins. This process is similar to the equivalent step in the development of a printed circuit board, and it may likewise allow for either automatic or manual layout optimizations. The result of the place & route process is a bitstream. This name is used generically, despite the fact that each CPLD or FPGA (or family) has its own, usually proprietary, bitstream format. Suffice it to say that the bitstream is the binary data that must be loaded into the FPGA or CPLD to cause that chip to execute a particular hardware design.

Increasingly, there are also debuggers available that allow at least single-stepping the hardware design as it executes in the programmable logic device. But these only complement a simulation environment that can use some of the information generated during the place & route step to provide gate-level simulation. Obviously, this type of integration of device-specific information into a generic simulator requires a good working relationship between the chip and simulation tool vendors.

Device programming

Once you've created a bitstream for a particular FPGA or CPLD, you'll need to somehow download it to the device. The details of this process are dependent upon the chip's underlying process technology. Programmable logic devices are like non-volatile memories in that there are multiple underlying technologies. In fact, exactly the same set of names is used: PROM (for one-time programmables), EPROM, EEPROM, and Flash.

Just like their memory counterparts, PROM and EPROM-based logic devices can only be programmed with the help of a separate piece of lab equipment called a device programmer. On the other hand, many of the devices based on EEPROM or Flash technology are in-circuit programmable. In other words, the additional circuitry required to perform device (re)programming is built into the FPGA or CPLD silicon itself. This makes it possible to erase and reprogram the device via a JTAG interface or from an on-board embedded processor. (Note, however, that because this additional circuitry takes up space and increases overall chip cost, a few of the programmable logic devices based on EEPROM or Flash still require insertion into a device programmer.)

In addition to non-volatile technologies, there are also programmable logic devices based on SRAM technology. In such cases, the contents of the device are volatile. This has both advantages and disadvantages. The obvious disadvantage is that the internal logic must be reloaded after every system or chip reset. That means you'll need an additional memory chip of some sort in which to hold the bitstream. But it also means that the contents of the logic device can be manipulated on the fly. In fact, you could imagine a scenario in which the actual bitstream is reloaded from a remote source (via a network of some sort?), so that the hardware design could be upgraded as easily as software.

What's it to ya?

Hopefully, you now have a better understanding of this new kind of software that is really hardware in disguise. (Or is it a new kind of hardware that is really software in disguise?) This should give you a better basis for communicating with hardware designers on partitioning issues such as: which functions on your next project should be implemented in dedicated logic, which in programmable logic, and which in software? I've found that there are valid reasons for choosing each of these three implementation techniques, and that you must pay close attention to the requirements of the particular application. As software and hardware continue to simultaneously expand and overlap, we must all broaden our perspectives and be willing to learn new things.
