“Product Development Methodologies and Design For Six Sigma”
Article by Jerry Bellott, MSEE
This paper describes circuit design and development methods used to design and test analog and digital circuits and systems. Several examples are given, typical steps are listed, and many important technological considerations are described.
The GTDigital Industrial Control and Monitoring Circuit with voice prompts is used as an example in addition to several other circuits.
This webpage contains embedded links to other articles on this website.
My seminar “Product Development Methodologies for Success!” describes processes followed by engineering organizations within major corporations (working with marketing, QA groups, and the factory) to help them deliver products that meet the customers’ needs and expectations on time and within budget.
I also offer seminars entitled “Design Using DSP Hardware” (low power single core, multi-core high performance, ASIC’s, GPU’s (data flow processing), and FPGA’s with 1000 plus MAC’s designed using SimuLink) and “System Design and Enterprise Quality Management” (covering waterfall and Agile scheduling methods, milestone deliverables, design tools, and automated tools for project management).
In some cases, marketing requirements, engineering requirements, engineering specifications, design files (e.g. C++ code, HDL files, schematics, and gerber files) and test plans serve as milestone scheduling points. In large and small companies alike, milestones based upon documentation have numerous benefits:
· Dissemination of information within the organization before the final product exists, including external interfaces – permits others to begin their work.
· Facilitation of peer reviews to catch errors early on – saves cost later.
· Valuable company IP for the future – eases hand-off to other employees working on similar products.
Written documentation can be brief in a small company (as short as a few pages about the key items, capturing only the essentials to save time) or more comprehensive in a larger company.
System test cases included in test plans are often designed to be “traceable” to the project goals. That is, the test cases are chosen intentionally to cover specific items listed in the product requirements, specifications, and appropriate quality criteria documentation. This provides a concrete way to evaluate how well a test plan “covers” the product development goals and objectives.
Additional testing, sometimes documented separately from the test cases run in the engineering lab, includes these areas: interface standards certification (e.g. Ethernet), agency compliance standards (e.g. FCC, CE, UL, CSA), and areas to be verified on paper by analysis (e.g. tabulation of MTBF information for components).
Methodologies have the goal of helping a company’s engineers deliver products on time and within budget. They can be repeated and improved.
The Definity PBX was designed at Bell Labs and sold for many years by spin-off Avaya. Definity G2 and follow-on releases used equipment cabinets supporting up to 800 analog and digital phone lines. The equipment cabinets could be connected to a center stage switch to support several thousand phone lines. The control software for the 800-line cabinet ran under the Oryx-Pecos RTOS on the BellMac32, the first 32-bit microprocessor (designed by Bell Labs). Definity G2 used a second core to handle the protocol stack for communication between the 800-line cabinet and the center stage switch. This communication interface used the LAPD protocol and was transferred via the PBX switch fabric and interconnecting trunks (T3). Both the main CPU processor and the inter-processor communication circuits interfaced to a memory circuit I designed for Definity G2. The memory circuit supported error checking in parallel with read operations, correction of single-bit errors with automatic post-writes, main CPU I-cache burst fills, I/O block transfers (address, N*data), and pipelined pre-read and post-write features. The memory circuit had substantial complexity. I was very pleased when memory diagnostic tests passed the first time I powered up the board. After thorough testing, only small changes were needed to finalize the design.
Short schedules are common in industry. From the time the project froze the bus interface specifications in our system, I wrote my memory circuit specifications and then designed the memory schematics described in the previous section. I had a working prototype in the hands of software developers within 9 weeks of beginning the detailed design.
The memory circuit worked well initially in part due to time spent up front to plan and document an interconnecting bus for the numerous circuit boards in the product. Product development methodologies using hierarchical milestones are invaluable to design departments. They help ensure that a logical sequence of steps is followed by an organization designing a new product.
That said, there is no substitute for design discipline, attention to detail, meticulous execution, and well-developed skills.
The specifications to working prototype design interval for my Bell Labs Sceptre 2G cell phone chipset reference design board (used by Motorola while designing Sceptre into the StarTac flip-top lid cell phone) was ten weeks.
My work at Valley Tech designing interfaces to the 64-MAC 1 GHz MathStar part on the IC evaluation board that I co-designed for MathStar was completed on a short schedule. It included resolution of routing issues that increased the speed of the dual DDR2 memory banks by 50% compared to an early prototype made by MathStar that lacked system connectivity. My part of the project also involved designing several interfaces to the MathStar part, including JTAG circuits, a boot PROM, connections to a PMC card, LVDS interfaces to the backplane, and other circuits, as well as writing layout specifications for my circuits.
Product ideas may be originated by either engineers or marketers. Strategic product planners in marketing departments work to understand what customers need and expect. When a product fulfills the goals marketers identify, this helps ensure sales. Engineers may also originate ideas that are either evolutionary or entirely new. When engineering originates ideas for a new product, they are evaluated in light of the understanding of customer needs and expectations that the marketing department works to develop.
It is important for engineers to think creatively and to innovate. This process is fueled by knowledge and continuing education. In particular, it is essential to learn as much as possible about emerging technologies in one’s field that will be implemented in new standards. New IC’s are constantly produced to support new standards and make new competitive options possible.
Business planners, engineering managers, and marketers work to understand the financials of new product development projects on the table for consideration, and finally an executive decision is made about what to produce next.
Hierarchical product development methodologies support “DFSS” (Design for Six Sigma) and other quality goals well because the milestones are completed in a logical order and the milestones are used as gates to the next step. Milestone peer reviews are invaluable for catching mistakes early on, saving expense later.
Figure 1 - Industrial Control Module ICM Concept Diagram
Smart sensor - Can take readings and evaluate data from directly connected sensors.
Remote Control - Can be used to control and monitor remotely located equipment connected via serial port.
20x4 character LCD Display Module or 4 digit display options. Button menu and entry navigation. Voice prompts and audio alerts. Keypad option.
The detailed designer (the person asked to design an analog or digital circuit board, FPGA, embedded software, or other component of a system) may be asked to write product specifications that answer the goals contained in marketing product requirements and/or system requirements. The engineer writing specifications describes in detail what the implementation will be and how it will be done, including all pertinent technical details at the same level of detail as the schematics, for example, that will be produced after planning. Block diagrams are prepared; I/O pins, processor rates, logical functions, key components, and all system interface related facts (power levels, backplane pin assignments and other connectors, operating conditions, etc.) are specified; and any other important planning details, including cost and schedule estimates, are documented as goals of the detailed design.
Before or in parallel with writing a design spec for a circuit board, engineers plan what the design needs to be like to meet the product requirements, cost, and schedule. Parts need to be selected that will perform the desired functions and will be available in the right quantity at an appropriate cost during the manufacturing window. A preliminary paper design is useful for early planning and may include analysis and feasibility studies to resolve any design issues. It may be appropriate to design some circuitry in detail up front to identify issues. Team reviews of the system requirements or interconnection schemes may occur during this time.
Figure 2 - ICM Block Diagram and Schematics
(This circuit is used as an illustration in my seminar “System Design Using DSP’s.”)
Engineers prepare a bill of materials (BOM) as they plan the detailed implementation of a design. Parts which can be purchased or sampled with short lead times are useful for developing prototypes. In the case of unusual parts, engineers typically verify that the parts will be for sale from two or more sources during the lifecycle of product sales and manufacturing. This reduces risk. Engineering departments often maintain a database of commonly used parts which includes an internal part number for a vendor part, listing the part manufacturer, distributors who carry the part, and the characteristics of the part (i.e. ohms, tolerance, package type, size and speed of memory, MTBF, etc.). Engineers re-use the same parts or add to the database, assigning a new sequential part number. By pooling information in this type of database, designers can minimize the number of unique parts their department uses, adding economy and efficiency to the design process. The final BOM for a specific design lists only the parts used in that product.
Figure 3 - Bill of Materials - Internal and External Part Numbers Plus Relevant Part Characteristics
Larger companies often have a library of catalog parts that are kept in stock in their factory. These can be the most economical to use because they are purchased in volume. Often the final step in planning the BOM, after a design is prepared which meets all requirements, is to re-specify equivalent part numbers that are readily available in the factory. Early involvement of factory engineers and procurement specialists can help engineers make the right component choices. Sometimes a quick-turn prototype manufacturer (third party) may be able to use different parts than the company’s own factory, allowing, for example, through-hole parts to be used in a prototype but surface mount versions of the same IC to be used in production.
In addition to CMOS digital and mixed signal parts, low voltage designs might include ADC, DAC, transistor, op amp, filter, power supply, and heat sinking design elements that need to be planned as the specifications are written and before the final CAD schematics are entered. Analog circuits with gain require low impedances at the front end to avoid thermal noise that can deteriorate SNR. A plan for studying the sensitivity of analog circuits to variations in components needs to be prepared. SPICE is a useful tool during the design process.
PCB’s with long, high-speed traces need to be studied and designed using transmission line techniques. Examples are memory bus signals and backplane signals. Transmission lines require proper source and termination impedance to avoid distortions due to reflections. Stubs (small branches in the transmission line) need to be minimized or avoided to prevent impedance discontinuities that can add signal distortion.
Source termination is the theoretically ideal type of termination; other termination methods are sometimes useful to make a design feasible using source and termination IC parts that are desirable for other design reasons. Signal traces on PCB’s can be designed with specified impedances, typically ranging from 75 ohms for a low-speed signal down to 50 ohms, which is often used for high-speed digital signals. PCB manufacturers can manufacture critical nets to spec with tolerances that should be included as necessary in the manufacturing contract. High-speed transmission lines are typically implemented on the outer copper layers as micro-strips because it is easier for manufacturers to produce controlled impedances on these layers.
Cadence OrCad is an excellent tool for schematic capture that I often use. SPICE is integrated with OrCad, making it good for checking circuits and component sensitivity. Siemens’s competing product, PADS, is similar.
Many embedded applications that involve I/O throughput lend themselves to “control plane” and “data plane” architectural partitions. The Control Plane and Data Plane firmware both have real-time constraints that need to be carefully identified.
Means of designing for repeatable real-time performance include:
1. Careful identification of all externally imposed real-time constraints.
2. Identification of all board design internal real-time constraints.
3. Use of an RTOS (e.g. VxWorks or Wind River Linux) for moderate to complex products, which provides real-time delay specifications that cannot be guaranteed by a non-real-time OS.
4. Utilization of processor architectural features with specified, repeatable, and measurable delays, including repeatable specifications for interrupt latency; memory access delays that do not vary unexpectedly (e.g. random changes in memory performance due to DMA’s or other accesses that are not designed to support the real-time aspects of the board’s performance); measurable and repeatable code segment execution times; etc.
5. Use of no cache and/or no Virtual Memory Management (a common DSP solution), or the use of deterministic code (that must always perform faster than a specified maximum) locked in part of I-cache.
6. For high performance applications, dual core architectures designed for use with RTOS’s and capable of booting separately and running in parallel may be used. The Freescale PowerPC 8641D is an example.
7. Code timing measurements through simulation.
8. Lab test equipment verification of hardware/firmware RT performance.
Typical control plane tasks for a board that has a port for data input and data output:
1. Power up diagnostics
2. Initialization of board state
3. Management of board operation directly or under supervision of system host controller board.
4. High level communication (remote host communication such as via PICMG backplane Ethernet) with other local or remote circuit boards.
5. Administration of features, settings, and options using console or remote interface (e.g. SNMP).
6. High level protocol stack functions (e.g. ATM Adaptation Layers, Ethernet TCP/IP, etc.).
7. Activation of input and output interfaces (e.g. fiber and Ethernet for Ethernet in the First Mile, or Ethernet interfaces and fabric interface circuits for LEC or ILEC switch port boards). This may involve complex start-up and control features.
8. Monitoring for errors, especially hardware-detected errors, which may include header and CRC errors, flow control problems, and other hardware faults, as well as status updates in control/status registers of board IC’s and FPGA’s that require control, monitoring, and maintenance during operation.
Typical data plane tasks for a board that has input and output communication ports:
1. Management of board data transfers in real-time under supervision of control plane.
2. Control and Status communication with control plane. Data transfer and messaging with control plane.
3. Protocol Stack functions for lower level protocols. Separation of headers and data. Error detection in frames, e.g. CRC when applicable.
4. Low level error detection.
5. Overflow or dropped frame detection.
6. Priority queuing of traffic types.
7. Management of real-time transfer of data between IC’s, memory, etc., including DMA set-up and control; operation of multiple buffer regions in memory if required; and inter-process communication via semaphores, registers, TCP/IP, or alternately TIPC (sometimes used with dual core devices; a streamlined stack for on-chip messages where reliability is high).
Multitasking allows functions to execute in parallel. The kernel portion of an RTOS controls all tasks being executed. Multitasking is driven by a timer that switches between functions at regular intervals, or by preemption.
A distinguishing characteristic of an RTOS is that it supports applications in which incoming data must be serviced before it is lost. This allows such an RTOS to be used for data communications applications without requiring extensive lost-frame detection and retransmission requests, for example.
RTOS’s have the ability to switch to a function and complete it with precisely repeatable delays from start to finish. Variability in this delay is called “jitter.” Deterministic RTOS’s have “hard” jitter specs and support highly repeatable operation. This maximizes the ability to hit timing targets when receiving incoming data or, perhaps more importantly, when putting out data in time to meet standards for cellular transmission, for example.
The kernel usually runs in “privileged” (or “supervisory”) mode on a processor where present. In an advanced processor, this mode typically utilizes shadow registers that are loaded with kernel software values during privileged-mode execution and “shadowed” (stored temporarily in a duplicate register set) while application code is running. (Because a duplicate register set is used, shadowing requires no time to store each value one at a time.) If the application code has a fault (i.e. crashes), the kernel can still recover and run recovery functions to try to keep the processor running its intended application. The escalation to supervisory/kernel mode occurs by timeout or by one of the processor’s exception faults for the code being run. Kernels usually have reserved interrupts for this purpose that cannot be masked by the user. Some processors also have “bad instruction” hardware exceptions (suggesting possibly bad ROM) that escalate to the kernel if possible.
Sharing of resources (memory or hardware) under an RTOS can be done using one of three common methods: 1) semaphores, 2) masking interrupts, or 3) messaging (e.g. TCP/IP or other faster messaging; TCP is reliable but can be slow). A resource in use is locked so that other software cannot corrupt it. Appropriate solutions avoid deadlock through careful programming; for example, a semaphore test-and-set must not be preemptable.
Application routine design is partitioned logically to accomplish the goals required by the system. Drivers for I/O in an RTOS may include software code and an ISR. Processors designed for use with an RTOS have exact repeatable delays for entering interrupt routines after an interrupt, fault, or exception occurs.
A scheduler routine runs on the kernel and calls functions needed to operate the product using either round robin or preemptive priorities. At a given instant, functions may be idle and may not need to be run.
While implementing system software across the platform, system variables that are updated by the application functions are one means of coordinating tasks that occur when functions are called or queued to be run. In essence, they represent system state variables.
For high performance applications, dual core architectures designed for use with RTOS’s such as Wind River Linux, and capable of booting separately and running individually in parallel as two PowerPC’s, may be used. The Freescale 8641D is an example.
Wind River Linux can also be run on low cost microcontrollers such as the PowerQUICC II Pro 8313.
Other RTOS’s include FreeRTOS (which runs on the Microchip PIC32’s MIPS M4K core, among many others) and Wind River VxWorks. Many processors come with a supported platform for applications. For example, the TI DaVinci multimedia platforms contain a full set of host processor API’s and DSP functions that may be used with their OS solution. The TI DaVinci platform includes a software stack and runs on a TI DSP. It is useful for designing graphics and digital video products.
Automated field self-test software has the goal of identifying which field-replaceable component in a product has failed. When equipment needs to be returned to the factory, more detailed information logged by self-test diagnostics can provide useful information about failure modes.
Diagnostic test software that can be run in the field may include these types:
· Power up self-test diagnostics
· Scheduled diagnostic tests (run periodically to completion. Taking a product out of operation for a few minutes at night for scheduled testing is one way to increase the thoroughness of these tests).
· Diagnostic tests continuously run in the background - Not necessarily as thorough as tests run while the feature software is not using the equipment, but well-written tests can catch many errors quickly so that maintenance can be performed as soon as possible.
Equipment that is connected to the internet can send periodic status reports to the manufacturer about equipment operating in the field.
Automated field testing to the replaceable component level simplifies field maintenance of large systems.
During the design of circuit board architecture, controllability and observability of all circuits by the main CPU is an important goal. This facilitates the ability to test circuits thoroughly. The more controllable and observable the peripheral circuits managed by a CPU, the better the test diagnostic software can check the hardware, and thus greater reliability of a product can be maintained. In some cases, I/O circuits can be given a “loop-back” mode to facilitate self-test. More elaborate testing may involve testing in tandem with the remote end of edge I/O interfaces.
Controllability and observability are accomplished by using appropriate registers for testing, setup, control, and monitoring. Even in real-time environments, control plane controllability and observability that keep up with the I/O sub-systems are required to allow the CPU to diagnose operation of the function being performed.
Verification of new diagnostic tests is sometimes performed by injecting random faults on circuit boards and verifying that the diagnostic tests catch them.
Either the system requirements (for major functions) or the board specification describes the programmable parts to be used in the circuit design. FPGA’s are programmed using VHDL or Verilog. Schematic capture programs such as OrCad include integrated programmable device tools.
FPGA’s have decreased in price and increased in functionality rapidly over the past decade. ASIC solutions are most useful when volume is high and the IP is certain to be fixed (e.g. a part which implements a standard, and whose design has been certified).
FPGA’s use an external PROM to store the part’s programming information. The information is serially shifted into the FPGA at power up. FPGA’s route between circuit elements by using RAM memory bits to control gates that form the interconnection fabric. FPGA’s also use RAM to implement state machines, with state information stored at addresses in the RAM. Logic latches the most recently accessed state variables, and the next state may then depend upon inputs to the IP block whose state machine is implemented in RAM. FPGA design tools allow users to choose gate-and-routing solutions or RAM-based state variables, so the user can take advantage of their insight when selecting the design approach.
Older, smaller programmable devices may use either fuse link programmability or an external PROM for programmability similar to larger FPGA’s.
Practical designs use very high-speed logic only internal to devices. FPGA’s have multiple clock circuits with PLL’s and dividers that allow portions of logic to be operated at high frequencies. Synchronous design practices enhance reliability and must be followed.
Many off the shelf IP packages are available from programmable logic manufacturers and third parties today, making it unnecessary to spend time and expense developing commonly used functions in VHDL or Verilog. For example, one can purchase an MPEG4 decoder and use it directly in one’s design. Some FPGA’s have silicon IP built in and optimized; one example is the PowerPC architecture available as a hard processor core on the same piece of silicon in some Xilinx FPGA’s. ARM cores are also available on Xilinx products. Intel’s Altera FPGA’s are a competitor to Xilinx with a full product line for FPGA design.
Xilinx’s System Generator tool allows one to choose from a large library of IP circuits and interconnect them using internal buses as though designing by selecting the elements in a block diagram. System Generator places all necessary registers in assigned locations for access by your software code if you are using a processor in the design.
MathWorks MatLab library routines can be used to analyze data. MatLab is also a good C code prototyping tool, because it was designed so that MatLab code can be converted to standalone C code.
System level simulation is possible using MathWorks SimuLink. Simulation can include processors (using an emulator), software code (based upon functions in toolbox libraries), and programmable devices (defined using logic schematics or a design language such as VHDL).
SimuLink is capable of hardware/software co-simulation using portions of the solution running on actual hardware such as a processor or FPGA on a board with analog circuits interfaced to them. SimuLink can use Toolbox libraries to generate FPGA VHDL for a variety of common functions, including DSP filters and other DSP functions. SimuLink can automatically generate C code for numerous processors including TI DSP’s to implement desired functions. Efficiency is not as good as hand coded software, but the code is guaranteed to work as it does in system simulation with SimuLink, and time to market can be reduced.
For I/O frequencies with frequency components less than 20 MHz, design is simpler and the number of critical timing situations is minimized.
Higher speed digital signals require careful attention to limit noise on the ground plane, avoid EMI issues, design to limit reflections, avoid cross-talk, and also to meet all appropriate setup and hold times.
In modern digital designs, very high-speed signals are often implemented using low voltage differential signaling (LVDS), a current-mode technique. On a scope, each line of the pair is centered at about 1.2 volts and swings roughly 350 mV. These signals are more immune to noise than single-ended voltage mode signals because the differential current in the pair of signal wires is measured at the receiving end on the terminating IC. This allows bit rates up to and exceeding 1 Gbps to be implemented (with care) in a practical manner. High-speed signals of this type are often used over short runs between IC’s or to the edge of a board or card to interface to other circuits, such as optical fibers. Fibre Channel SAN’s employ bit rates up to 4 Gbps, for example.
For critical high-speed bus signals such as DDR1-3 memory leads, one needs to specify controlled impedance micro-strips in manufacture. Equations are available from manufacturers and on the internet for calculating all required trace dimensions and other specifications given the type of dielectric used in board manufacture. DDR memory trace lengths need to be matched for simultaneous arrival of all bus signals. Routing of high-speed signals needs to use gentle bends (e.g. 45 degree or curved bends rather than 90 degree corners) to avoid impedance discontinuities (the discontinuity is caused by the “EM” field traveling down the path around the signal and encountering a change in direction of current flow). DDR memory controllers typically have adjustable clock skews with about 10 ps granularity, which allows one to make adjustments in firmware after the board is built. An appropriate goal is to design so that the firmware does not need to be changed over the operating specifications of the PCB.
DRAM’s have always been made separately from CMOS processors in order to optimize the fabrication process and yield of each type of design on silicon wafers. A growing trend for high-speed DDR3 memory applications is to use multi-chip packages so that the wiring from a processor IC to a DDR3 memory can be minimized, helping achieve DDR3 clock rates to 1.6 GHz. Micron (a memory manufacturer), Xilinx, and Samsung are among the companies who feature multi-chip packages. A Samsung-manufactured ARM application processor stacked with DDR memory in one package is used in the Apple iPhone 4 to run the user interface; separate DSP’s and other silicon are used for the radio solution.
When studying LVDS signals to verify and characterize the design, the eye pattern needs to be examined by one of several means to determine whether the center of the eye is clear of transitions to support the required low BER (bit error rate). Simulation tools using accurate models can estimate the probable timing of the eye pattern over long periods of time, estimating excursions due to noise and other effects and noting their probability within specified guard band timing. Storage scopes provide another means in the lab to study eye patterns over time while a circuit is operating.
Xilinx tools are capable of estimating BER on high-speed differential I/O signals, including current mode low voltage differential signals. Xilinx tools can make a color plot showing the probability, from hot to cold, of signal transitions in a small time window around the center of a differential signal eye. The tools use circuit and transmission line models. This allows the design to be adjusted so that the center of the eye will have no misaligned timing (the setup and hold can be checked, and the BER within a certain timing window can be computed by simulation). Xilinx offers an advanced I/O circuit block in their newer FPGA’s that allows incoming reflections to travel through a delay line and be subtracted from the signal to clean it up after it has entered the receive buffer at the device boundary with the PCB. This is used as a secondary means of improving signal integrity.
This link contains information about OrCad schematic capture and PSpice Simulation capabilities.
Figure 4 – PSPICE SIMULATION, Single Rail Op Amp Bandpass Mic Input Buffer Circuit
The figure above shows bias conditions for a single rail op amp circuit. This circuit is a useful bandpass filter for buffering an ac signal added (by capacitive coupling, not shown) atop the positive input to the op amp. As the frequency of an input signal (e.g. from an electret microphone) atop the bias voltage at the positive input increases, the circuit becomes a unity gain follower because C1 is lower in impedance at higher frequencies (an op amp behaves as a unity gain follower when the output is tied to the negative input).
Single rail op amp circuits often use the positive input to the op amp for the signal, with feedback to the negative input and a network connected to it. This permits one to avoid using negative voltages. Op amps designed for this purpose allow the output to traverse the full range between 0V and the power supply voltage.
Many op amp circuits using dual power rails (e.g. positive and negative supply voltages) apply the signal to the negative input to the op amp, with the positive input either tied to ground directly or connected to ground through a network. Feedback from the output is connected to the negative input also, and thus the output signal can traverse below 0V into negative voltages. Reference to ground is simple with this type of traditional dual rail op amp circuit design, because no biasing circuits external to the op amp are involved.
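The ideal closed-loop gain of this classic inverting configuration is Vout = -(Rf/Rin)·Vin, with the positive input referenced to ground. A minimal sketch with hypothetical resistor values:

```python
def inverting_gain(r_feedback: float, r_input: float) -> float:
    """Ideal closed-loop gain of the dual-rail inverting op amp stage:
    Vout = -(Rf / Rin) * Vin, positive input referenced to ground."""
    return -(r_feedback / r_input)

# Hypothetical values: Rf = 100 kOhm, Rin = 10 kOhm, Vin = 0.1 V
vout = inverting_gain(100e3, 10e3) * 0.1  # output swings below 0 V
```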
The figure above shows DC bias conditions in RED and power dissipation (Watts) in BLUE. SPICE can plot not only bias conditions, as in the figure above, but also parametric sweeps of input frequency (as from a microphone input signal) versus gain.
See this link for more information about Cadence OrCad, PSpice, and other circuit simulation tools.
Siemens “HyperLynx LineSim” is a useful transmission-line simulation tool for PCB traces and cable design, employed before board layout. Users can specify arbitrary source and termination impedances and as accurate a transmission-line model as desired. The simulator lets users specify the timing and shape of the excitation signal from the driver and check the signal with a simulated scope at any point along the transmission line. Stubs and vias need to be minimized or avoided on high-speed, longer signal paths to keep signal integrity high.
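A quick back-of-the-envelope check that such a simulator automates is the voltage reflection coefficient at a termination, Γ = (ZL − Z0)/(ZL + Z0). A small sketch:

```python
def reflection_coefficient(z_load: float, z0: float) -> float:
    """Voltage reflection coefficient at a termination:
    gamma = (ZL - Z0) / (ZL + Z0)."""
    return (z_load - z0) / (z_load + z0)

gamma_matched = reflection_coefficient(50.0, 50.0)  # matched: no reflection
gamma_open = reflection_coefficient(1e12, 50.0)     # open end: near full reflection
```

A matched termination returns 0, while a stub's open end approaches +1, which is why unterminated stubs degrade the eye.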
Buried vias offer one way to minimize the effect of the via impedance discontinuity. The downside of using buried vias is that they are more expensive.
To design for testability, test point pads can be included that permit controllability and observability of all portions of a circuit, and designers can include test options in programmable devices when used. JTAG scan chains are useful for checking point-to-multipoint connectivity on digital PCBs after manufacture. ATE equipment can test IC operation in addition to wiring, using a bed of nails and a test program. In this approach, a board is held on a rubber board template by air suction while the bed of nails contacts probe points designed in for this purpose. The ATE literally operates portions of the circuit by applying power to the board, injecting signals, and observing responses.
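A toy sketch of the walking-ones interconnect test such fixtures perform follows; the fixture function here is a hypothetical stand-in for real ATE or JTAG access, modeling a board with one solder short:

```python
def connectivity_test(nets, drive_and_read):
    """Walking-ones interconnect test: drive one net high at a time, read
    all nets back, and flag opens (driven net reads low) and shorts
    (another net reads high). drive_and_read(i) models the test fixture."""
    faults = []
    for i, net in enumerate(nets):
        readback = drive_and_read(i)  # list of booleans, one per net
        if not readback[i]:
            faults.append(("open", net))
        for j, level in enumerate(readback):
            if j != i and level:
                faults.append(("short", net, nets[j]))
    return faults

# Hypothetical board model where nets A and B are shorted together:
nets = ["A", "B", "C"]

def fixture(i):
    levels = [False] * len(nets)
    levels[i] = True
    if i in (0, 1):            # the solder short ties A and B together
        levels[0] = levels[1] = True
    return levels

faults = connectivity_test(nets, fixture)  # reports the A-B short twice
```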
Burn-in in a factory thermal chamber exposes and weeds out device infant mortality, so that products ship from the flat, low-failure-rate region of the bathtub reliability curve and fail only at end of life, estimated in terms of years later.
DFSS is the design department’s contribution to an organization’s overall Six Sigma goals. “Six Sigma” refers to pushing defects so far out on the probability distribution of outcomes that they become statistically insignificant.
Common methods include:
a. Estimating likely risks and mitigating them,
b. Following hierarchical methodologies with milestone review gates, and
c. Rigorous compliance with identified requirements that allow the product to meet schedule, cost, customer needs and expectations, plus other important goals that have been identified.
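To make the “statistical insignificance” point concrete, the defect rate at a given sigma level can be computed from the normal distribution. This sketch uses the conventional 1.5-sigma long-term mean shift assumed in Six Sigma accounting:

```python
from statistics import NormalDist

def defects_per_million(sigma_level: float, mean_shift: float = 1.5) -> float:
    """One-sided defect rate (DPMO) for a process whose nearest spec limit
    sits sigma_level standard deviations away, allowing the conventional
    1.5-sigma long-term drift of the process mean."""
    return NormalDist().cdf(-(sigma_level - mean_shift)) * 1e6

dpmo_6 = defects_per_million(6.0)  # the famous ~3.4 defects per million
dpmo_3 = defects_per_million(3.0)  # tens of thousands per million
```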
The finished product is verified for compliance with the design project’s requirements. Sometimes design adjustments are needed to reach full compliance once the product is characterized. (Characterization involves measuring actual specifications, performance, and operation as compared to planned or theoretical results.)
Final data sheets (as opposed to preliminary data sheets) contain a product’s actual specifications, based upon characterization data and the degree of tolerance the manufacturer wishes to publish (given that the circuit meets the specifications). Final data sheets may use product requirements and specifications as a starting point, but those milestone documents are never the basis of the final data sheet, which rests on decisions about manufacturing tolerances and sorting and on actual product and manufacturing process characterization.
Several means are available to estimate the reliability of a circuit. One measure for PCBs is the “FIT rate” (number of failures per 10^9 device-hours). The primary causes of failure are often related to solder junctions and the MTBF of the larger ICs used in the design. Tables are sometimes available from manufacturers and suppliers that permit designers to tabulate total FIT rates due to assembly (e.g. solder joints) and vendor parts.
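A minimal sketch of such a tabulation, using hypothetical FIT values, relies on the series reliability model in which FIT rates of independent parts simply add, with MTBF as the reciprocal of the total failure rate:

```python
def total_fit(component_fits):
    """Series reliability model: FIT rates (failures per 1e9 device-hours)
    of independent components simply add."""
    return sum(component_fits)

def mtbf_hours(fit: float) -> float:
    """MTBF in hours for a given total FIT rate."""
    return 1e9 / fit

# Hypothetical budget: two large ICs plus an allowance for solder joints
fits = [120.0, 85.0, 45.0]
board_fit = total_fit(fits)
board_mtbf = mtbf_hours(board_fit)
```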
One must follow appropriate ground design rules, including separate analog and digital ground regions when required, and appropriate connection of earth ground from boards back to the backplane, and from the backplane ground to the chassis and electrical supply earth ground (the third wire in an AC plug).
EMC tests include conducted and radiated emissions and susceptibility. Electrostatic discharge tests can also come into play for some products. For FCC certification, the tests required depend upon the type of product and where it will be used. FCC rules preventing a product from interfering with another company’s product are among the most stringent. I have achieved FCC, CSA, CE, and other European approvals for projects I have worked on. I have also designed tests using MIL-STD-461E, which follows procedures to accomplish tests similar to the FCC tests.
One typically uses a third-party test house for EMI/EMC, environmental, and shock/vibration tests, though some larger corporations have their own labs with test chambers and special equipment. Typically the high-level test plan is designed by the product design engineers, and the test house operates its RF and other equipment to test the products. Engineers designing the tests need to set up the DUT (device under test), connect all appropriate signals to control and observe the product from outside the test chamber when appropriate, and log (often with timestamps) when errors occur during tests involving frequency and power sweeps. The availability of system diagnostics that can isolate problems to the component level can be invaluable during tests of this type.
The FCC specifies that production hardware and final software releases must be used during required testing. Manufacturers who affix a label of compliance must keep test records on file for FCC audits. When new software releases occur, the RF spectrum emitted by a product often varies significantly. For example, a major software loop can suddenly come into play, resulting in the transmission of an RF pulse that was not there before. This is why retesting is required for many products that require full certification.
When problems occur at specific frequencies in radiated tests, one should either add shielding to the final product or check the length of all PCB traces that can act as antennas at the problem frequencies. Board re-layout can solve the problem.
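One quick check is whether a trace length sits near a quarter wavelength at the problem frequency. A sketch assuming a typical FR-4 effective dielectric constant (an assumed value, not a measurement):

```python
def quarter_wave_length_mm(freq_hz: float, er_eff: float = 4.0) -> float:
    """Trace length that resonates as a quarter-wave antenna at freq_hz,
    for an assumed effective dielectric constant (FR-4 roughly 4)."""
    c = 299_792_458.0                     # speed of light, m/s
    wavelength_m = c / (freq_hz * er_eff ** 0.5)
    return wavelength_m / 4.0 * 1000.0    # result in millimeters

# Hypothetical emission peak at 300 MHz -> suspect traces near 125 mm
l_mm = quarter_wave_length_mm(300e6)
```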
See this link for additional design checks and mitigation measures that can be taken. The link also contains an example EMI/EMC Compliance Test Plan Outline.
Siemens HyperLynx BoardSim can be used to re-simulate after layout and before PCB fabrication, using the Gerber files, models for the circuit traces, and models for the ICs that transmit or receive signals. The Gerber file is imported, the pins of the modeled devices are assigned to the proper connection points for the ICs, and printed-wiring models are selected for the traces of interest. As in HyperLynx LineSim (the pre-layout simulator), signal generation and scope monitoring can then be simulated.
After PCB prototype fabrication, it is wise to have bare board tests run on prototypes before the surface mount and any through hole parts are populated. This is usually performed using automated net testing equipment. It helps ensure PCB integrity before more complex issues are introduced when parts are populated.
Design Verification of prototypes comprises extensive testing to:
1. Debug prototype assembly errors
2. Verify that the design works properly
3. Characterize the actual design as compared to design goals.
After design characterization (by lab measurement under operating-condition specs), if additional design changes are not then made to achieve full compliance with requirements and specifications, the measured design characteristics are used to amend the product specifications and for data sheet purposes.
Design Verification testing includes both formal and informal test steps. The informal steps can be improvised during testing; no speculative area that an engineer wants to check to make sure the design is good should be omitted.
Formal design verification steps include:
1. Hardware unit test
2. System hardware integration test, using firmware diagnostics that are stand-alone and written for the purpose.
3. Software unit test
4. Software integration test, without hardware to the extent possible
5. System hardware and software integration test: Goal: boot the OS, demonstrate ability to communicate with each basic component
6. System hardware and software full product testing: Verify all system hardware and software features.
7. Design qualification – EMI/EMC – done at this point due to possible rework of PCB or software to improve emissions or susceptibility. Note: FCC rules require production grade hardware and official software load when certification is required for a product.
8. Design qualification – Environmental (e.g. temperature, altitude, humidity, salt-air)
9. Design qualification – Shock and Vibration
The hardware integration test firmware diagnostics can be used at any point in time to verify that the hardware is still in good operating condition (not damaged by handling in the lab) while trying to debug system software features.
When a digital scope is used to check a high-speed transmission line, a logic probe header with a minimal footprint needs to be used. Tektronix offers an ultra-compact-footprint probe for monitoring buses comprised of high-speed transmission-line signals (e.g. DDR memory signals). See other comments about BER and measurement of differential signal eyes in the lab.
Transmission lines and high-speed signal traces are appropriate to check in the lab, in addition to the performance and behavior of other parts of a circuit board.
Spectral components of a signal can be measured and analyzed with a storage spectrum analyzer. Newer scopes with spectrum-analysis features can be used to evaluate SNR, THD, and other signal characteristics. Another option is to transfer data from a storage analyzer to a PC for analysis in MatLab to compute values such as SNR.
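As a sketch of the kind of computation MatLab would perform on transferred capture data, here is a minimal SNR calculation shown in Python with synthetic samples standing in for real captures:

```python
import math

def snr_db(signal, noisy):
    """SNR in dB from a clean reference and a noisy capture:
    10*log10(signal power / noise power), with noise taken as the
    sample-by-sample difference."""
    noise = [n - s for s, n in zip(signal, noisy)]
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(e * e for e in noise) / len(noise)
    return 10.0 * math.log10(p_sig / p_noise)

# Synthetic check: unit sine plus an interferer 10,000x smaller in power
n = 1000
sig = [math.sin(2 * math.pi * 7 * k / n) for k in range(n)]
bad = [s + 0.01 * math.sin(2 * math.pi * 31 * k / n + 0.5)
       for k, s in enumerate(sig)]
snr = snr_db(sig, bad)  # close to 40 dB
```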
When sub-components that already exist (off the shelf) are purchased in volume from a third party (called an “OEM” supplier), the buyer still writes a product specification that can be used as the basis of a contract for purchasing the product from the manufacturer in volume. Examples of products in this category include disk drives, keyboards, and mice that will be included in a PC product line. The specifications are often based on existing specifications published by the third-party vendor. They are sometimes customized by agreement with the vendor to make the product more suitable for use in the buyer’s product or system. Production samples undergo “qualification tests” by the buyer to verify their conformance to the agreed upon specifications. QA tests may also be performed at the receiving end in the factory as part of the system manufacturing process.
Design Verification of Prototypes differs from manufacturing testing in that design prototype verification skips no speculative testing and probing (in addition to a complete list of pre-planned tests) and leaves nothing to chance. By contrast, production manufacturing testing assumes that the design is good. Factory testing is designed to verify that the design that has been “cookie cut” in the factory is working properly, and needs to be done as quickly and inexpensively as possible.
Engineering Project Milestones are usually items and deliverables listed on a supervisor’s schedule for an employee’s work (vertical management) and listed on a Project Manager’s milestone schedule chart for a specific project (horizontal management).
Horizontal management augments upward reporting to management by using a project manager to coordinate the activities of people in several groups who are involved with a project. Several project managers might each coordinate members of multiple groups contributing to different product development and manufacturing projects. In large companies, groups may have special areas of expertise, such as audio amplifiers, microcontroller board design, power supply design, embedded software, etc. One or two members from each group might be involved with a project scheduled and tracked by a project manager.
Supervisors nevertheless own the schedules for their group members, while project managers work across the breadth of the organization.
Engineering project milestones are used for scheduling purposes at the supervisory level. The work needed to complete a milestone is often captured in a “task list,” which might be kept independently by the engineer responsible for the milestone or developed and discussed with the supervisor.
See this link for examples of milestone deliverables that are common in industry. The example deliverables are somewhat full-blown for the sake of illustration, and might be appropriate given time and resources. Milestone documentation can be brief, capturing only the essentials when needed.
Engineering project change control tracking databases are typically used by engineering, QA groups, and applications engineers (sometimes with information fed from the field). A “Modification Request” (MR) is opened when a problem has been identified and is assigned to a staff member to reproduce the problem and identify root causes. It is then assigned to a designer to correct the problem, and the correction is scheduled for inclusion in a project release.
Release notes highlight the main improvements of this type for a new version of a product. Changes are sometimes deferred to a later time if they are considered low priority or require further study.
A project change control administrator maintains the database and may hold meetings or meet with employees to check the status of MRs that have been entered. The administrator ensures that the next worker assignments are made and that each MR’s status is updated in the database. Examples of MR statuses in an engineering organization are:
· UNDER INVESTIGATION (engineer assigned)
· DEFERRED (by administrator working with departments)
· FIXED (by engineer; hardware, HDL, or software module and release version)
· TEST (engineer assigned; on pass, proceeds to “CLOSED”, or to “SYSTEM TEST” if a QA group is involved for regression testing; on fail, returns to “UNDER INVESTIGATION”)
· SYSTEM TEST (engineer assigned); optional, for larger engineering organizations using separate QA test engineers.
· CLOSED (by administrator; ready for use in specific release)
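The status list above behaves like a small state machine. Here is a hypothetical sketch of how a tracking database might validate status changes; the transition table is an assumption inferred from the list, not any specific product’s workflow:

```python
# Assumed legal MR status transitions, inferred from the status list above
TRANSITIONS = {
    "UNDER INVESTIGATION": {"FIXED", "DEFERRED"},
    "DEFERRED": {"UNDER INVESTIGATION"},
    "FIXED": {"TEST"},
    "TEST": {"CLOSED", "SYSTEM TEST", "UNDER INVESTIGATION"},
    "SYSTEM TEST": {"CLOSED", "UNDER INVESTIGATION"},
    "CLOSED": set(),
}

def advance(status: str, new_status: str) -> str:
    """Validate a status change before it is written to the MR database."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A typical happy path through the workflow:
s = "UNDER INVESTIGATION"
for step in ("FIXED", "TEST", "CLOSED"):
    s = advance(s, step)
```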
It is said that “two minds are better than one.” Organizations that review milestone gates catch problems early on. This can save a significant amount of money.
A good review method that I have seen used successfully is as follows:
1. Documentation is distributed to reviewers several days in advance. Only a few hours of reviewing are expected, but reviewers need time to work it into their schedules.
2. The meeting announcement is emailed or distributed with the document.
3. Meeting has a fixed length not exceeding two hours.
4. Roles are assigned during the meeting: a scribe takes notes, and a moderator walks through the document, asking for general comments, then comments on each chapter and each page.
5. Discussions are taken off line after the meeting.
6. Comments may be submitted in writing if more than a sentence or two.
7. At the end of the meeting, the review team votes:
“Complete as is,”
“Done with comments incorporated” (this is verified by the moderator to enter “Complete” status after author adds comments), or
“Re-review” (needs more work, addressing comments).
8. A final signoff sheet is distributed with a PDF or hard copy of the final document. The sign-off completes the milestone and includes the supervisor, the project manager, and anyone else the supervisor designates. Some attendees from the review can also be included, but the list needs to be short to save time.
Peer reviews have been found valuable in many organizations because attendees voice criticism more freely among peers.
Supervisors sometimes review documents in paper form separately when peer reviews are held.
System testing is performed on prototypes and on products already in manufacture to test changes. Depending upon the size of the project and the engineering department, system testing is sometimes performed by a separate group. Independent testing has the advantage of using test cases that were not created with knowledge of the product implementation, and this second tier of testing can catch design issues missed during designer testing.
This link contains a design review checklist. The link also contains an example EMI/EMC Compliance Test Plan Outline.
EMI/EMC testing is conducted indoors in test cells (anechoic chambers or TEM cells) to pre-test (or pre-screen) products. “TEM” stands for Transverse Electromagnetic Wave. Screening is done as soon in the product development cycle as feasible. The most troublesome product physical orientations with respect to the transmitting or receiving antenna are identified during screening by the test engineer.
Indoor testing is sometimes followed by field testing in a large open area (often outdoors with the support equipment placed in an underground room below the units under test) for official certification.
Quality Management Tools permit not only requirements traceability and change control tracking, but also assignment of re-usable test scripts to different product deliveries, plus project management features for risk assessment and prioritization of activities.