Sunday, July 29, 2012

IC Design: PLL (phase locked loop) design and simulation using Analog Devices freeware tools.

We have been looking at various RF/microwave design freeware tools recently. One of the tools we looked at closely is the Analog Devices ADIsimPLL tool. It allows the design of PLLs and synthesizers using AD's devices, which come pre-programmed into the software. The tool is interactive, fairly intuitive and user friendly. There are, of course, a few challenges, but considering that one pays nothing for its use, it is well worth the time spent on analyzing and using it. We designed a 1.83 GHz loop using the ADF4360-7 device. The tool allowed us to calculate the various PLL-related component values and provided a quick assessment of the loop's operation, both visually and textually. We would have liked to see some additional small features in the tool, but all in all our assessment is quite positive. For further information on this or on PLL design activity at SPG, please visit our website at http://www.signalpro.biz and use the contact menu item for any further discussions or questions on our experience.
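A tool of this kind automates the basic loop arithmetic for you. As a rough illustration only (not the tool's actual algorithm), the sketch below computes the natural frequency and damping factor of the classic second-order charge-pump PLL approximation with a series R-C loop filter, ignoring the ripple capacitor. All component values and the charge-pump/VCO numbers here are hypothetical.

```python
import math

def pll_loop_params(icp, kvco_hz_per_v, n_div, r_ohm, c_farad):
    """Natural frequency and damping of a second-order charge-pump PLL
    with a series R-C loop filter (textbook approximation, C2 ignored)."""
    kphi = icp / (2 * math.pi)            # phase-detector gain, A/rad
    kvco = 2 * math.pi * kvco_hz_per_v    # VCO gain, rad/s per volt
    wn = math.sqrt(kphi * kvco / (n_div * c_farad))   # rad/s
    zeta = (r_ohm * c_farad / 2) * wn                 # dimensionless
    return wn / (2 * math.pi), zeta       # natural frequency in Hz, damping

# Hypothetical example: 5 mA charge pump, 50 MHz/V VCO,
# N = 1830 (1.83 GHz output from a 1 MHz comparison frequency)
fn, zeta = pll_loop_params(icp=5e-3, kvco_hz_per_v=50e6, n_div=1830,
                           r_ohm=4.7e3, c_farad=2.2e-9)
print(f"fn = {fn/1e3:.1f} kHz, zeta = {zeta:.2f}")
```

In practice one chooses R and C to place the loop bandwidth well below the comparison frequency with adequate damping; the dedicated tool then refines this with the real filter topology and device data.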

Sunday, July 22, 2012

IC Design: Estimating the signal band noise in a delta-sigma modulator

Sigma-delta modulators are popular devices used in a multiplicity of applications. One of the most prolific of these is the A/D converter. A delta-sigma A/D basically consists of a delta-sigma modulator (typically first or second order), followed by a decimation filter. The modulator operates in such a way that it generates a high pass response for the noise in the system. This response is known as the NTF, or noise transfer function, of the modulator. In this way the modulator suppresses noise within the passband while pushing the noise components out of band with a high pass characteristic. A low pass system of decimation filters then removes this out-of-band noise as well. It becomes important, in the practical sense, to estimate the noise in the passband. An expression can be developed to do this for higher order modulators with fairly accurate results. This subject is dealt with in a recent brief paper released by Signal Processing Group Inc. It may be found at http://www.signalpro.biz under "engineer's corner".
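The widely used textbook approximation for an order-L modulator with NTF (1 - z^-1)^L gives an in-band quantization noise power of (delta^2/12) * pi^(2L) / ((2L+1) * OSR^(2L+1)). The exact expression developed in the SPG paper may differ; the sketch below implements only this standard approximation and the resulting peak SQNR for a full-scale sine.

```python
import math

def inband_noise_power(order, osr, delta=2.0):
    """Approximate in-band quantization noise power of an order-L
    delta-sigma modulator with NTF (1 - z^-1)^L at oversampling ratio OSR."""
    e2 = delta**2 / 12                    # white quantizer noise power
    return e2 * math.pi**(2 * order) / ((2 * order + 1) * osr**(2 * order + 1))

def peak_sqnr_db(order, osr, delta=2.0):
    """Peak SQNR in dB for a full-scale sine of amplitude delta/2."""
    sig = (delta / 2)**2 / 2              # sine power
    return 10 * math.log10(sig / inband_noise_power(order, osr, delta))

for L in (1, 2):
    print(f"order {L}, OSR 128: SQNR ~ {peak_sqnr_db(L, 128):.1f} dB")
```

This reproduces the familiar rules of thumb: each doubling of OSR buys about 9 dB for a first-order loop and about 15 dB for a second-order loop.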

Thursday, July 19, 2012

IC Design: RF/Wireless/MMIC freeware

A web survey of RF/wireless/MMIC freeware led to a nice harvest of routines that provide useful tools for those of us who may want to use these types of programs. It is well known that a number of EDA companies sell fairly expensive RF/wireless/MMIC programs. For many designers it may be difficult to buy these because of the cost. For these users, the freeware available on the web might be a partial solution. The freeware programs are not as beautifully formatted, but they appear to be reasonably accurate when compared with results from the more expensive packages. An ongoing interest of ours is to look at these freeware programs and assess their usefulness and price/performance ratio. A useful package distributed free by Agilent is the first on our list. It is called "AppCAD" and may be downloaded at no charge. Apart from the marketing-type information in this package, a number of useful tools are included. It certainly deserves a close look.

Tuesday, July 17, 2012

IC Design: SINAD: What is it and why is it important?

SINAD is a figure of merit, typically for radio receivers or similar devices, though it may also be used in other applications. SINAD compares the signal power with the noise and distortion power of a signal. The specification is usually quoted in an audio sense, i.e., the quantity under consideration is the quality of the received audio. A report on SINAD, its definition and other related parameters is available on the Signal Processing Group Inc. website at http://www.signalpro.biz under "engineer's corner" for interested parties.
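Under the common ADC/receiver convention, SINAD is the ratio of total power (signal plus noise plus distortion) to the unwanted power (noise plus distortion), and it maps to an effective number of bits via ENOB = (SINAD - 1.76)/6.02. A minimal sketch of both relations, with hypothetical power numbers:

```python
import math

def sinad_db(p_signal, p_noise, p_dist):
    """SINAD in dB: total power over noise-plus-distortion power."""
    return 10 * math.log10((p_signal + p_noise + p_dist) / (p_noise + p_dist))

def enob(sinad):
    """Effective number of bits implied by a SINAD figure (ADC convention)."""
    return (sinad - 1.76) / 6.02

# Hypothetical measurement: 1 W of signal, small noise and distortion terms
s = sinad_db(p_signal=1.0, p_noise=1e-5, p_dist=2e-5)
print(f"SINAD = {s:.1f} dB, ENOB = {enob(s):.1f} bits")
```

Note that for audio receivers SINAD is usually measured with a standard test tone (e.g. 1 kHz) and a notch filter, but the power ratio itself is the same idea.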

IC Design: Logarithmic amplifiers (log amps): A very useful component.

Logarithmic amplifiers, or logamps as they are commonly called, are very useful components. They are used in communications, RF and wireless systems, cell phone base stations, audio systems, and power control, to name a few application areas. A typical use in RF/wireless is in the RSSI (received signal strength indicator) circuit. The logamp can be deceiving in its functionality, so a basic description is of help to those who plan to use it. A paper on this component and its basics is available on the Signal Processing Group Inc. website at http://www.signalpro.biz under the "engineer's corner" menu item.
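In the RSSI role, an ideal logamp's output voltage is linear in the input power expressed in dB, characterized by just two numbers: a slope (mV per dB) and an intercept (the input power at which the extrapolated output would cross zero). The slope and intercept values below are hypothetical, chosen only to illustrate the transfer function; a real device's datasheet supplies the actual numbers.

```python
def logamp_output(pin_dbm, slope_mv_per_db=25.0, intercept_dbm=-90.0):
    """Ideal log-amp (RSSI) transfer: output voltage linear in dB input.
    Slope and intercept are hypothetical illustrative values."""
    return slope_mv_per_db * 1e-3 * (pin_dbm - intercept_dbm)  # volts

for p in (-80, -60, -40, -20):
    print(f"Pin = {p:4d} dBm -> Vout = {logamp_output(p):.2f} V")
```

The "deceiving" part in practice is that real logamps only approximate this straight line over a finite dynamic range, with log-conformance error at both ends.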

Thursday, July 12, 2012

IC Design: RF/MMIC power amplifier design considerations

Integrated circuit RF/MMIC power amplifiers are getting more and more popular. The PAs can be standalone or part of a larger device. Multiple technologies exist for the implementation of these circuits, from CMOS to III-V. For the designer, different technologies present different challenges. In a brief paper by the Signal Processing Group Inc. technical team, some of these issues are explored in a cookbook fashion. The paper may be found on the SPG website at http://www.signalpro.biz under "engineer's corner".

Sunday, July 8, 2012

IC Design: First pass analog and RF IC success

For customers of analog and RF/MMIC ASICs, a first pass success is a consummation devoutly to be desired. But what is a "first pass success"? In the strictest sense, a first pass success for an ASIC of any kind means: (i) it works functionally right out of the fab; (ii) it not only works functionally but also meets all the electrical and environmental specifications.

The question then is: Is it possible to develop and fabricate devices which will be first pass successful as per the above definition? I think the answer to this question is quite complicated.

The following set of posts will address this issue.

Anyone who knows about the process of device development, fabrication, testing, packaging and application knows that each of these steps has its own perils. Therefore, to meet the above definition of a first pass success, each of these hurdles must be overcome successfully.

Let us first take a look at the device development phase.

Device development:
During the device development phase, a specification is agreed to after conceptual deliberations and feasibility studies. The integrity of this specification is very important, as this document will be the guide for the rest of the execution. Therefore it is imperative that the specification be as good as it can be, with no TBDs ("to be determined" items). Any TBDs introduce risk.

After this, the entire chip is designed from the top down using various design tools appropriate to the type of device (MATLAB, for example). Once the top level design has been verified on a functional block (or behavioral) basis, the various functional blocks are converted to circuit schematics.

Each block is then designed and simulated using industry standard simulation tools (PSPICE, ADS, Cadence etc.). These simulations are performed for the various specified environmental conditions, such as temperature (industrial range, -40 to 85 degrees C, or military, -55 to 125 degrees C, etc.). Multiple design reviews with all concerned parties are held to make sure that these simulations are accurate and complete. In this way all the functional blocks are designed, simulated and finalized.
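The spirit of this block-level verification can be sketched as a corner sweep: step every process corner and temperature combination and check that a derived parameter stays within its specification limits. The example below uses a toy circuit (an RC filter corner frequency), a hypothetical resistor tempco and skew, and hypothetical spec limits; a real flow would run SPICE at each point instead.

```python
import math

# Hypothetical component values, tempco, corners and spec limits
R_NOM, TC = 10e3, 2000e-6        # 10 kohm nominal, 2000 ppm/degC
C_NOM = 1e-9                     # 1 nF, assumed corner/temp independent here
SPEC_LO, SPEC_HI = 12e3, 20e3    # allowed -3 dB corner frequency, Hz

failures = []
for corner, r_skew in (("slow", 1.15), ("typ", 1.0), ("fast", 0.85)):
    for temp in (-40, 27, 85):   # industrial range plus room temperature
        r = R_NOM * r_skew * (1 + TC * (temp - 27))
        f3db = 1 / (2 * math.pi * r * C_NOM)
        if not SPEC_LO <= f3db <= SPEC_HI:
            failures.append((corner, temp, round(f3db)))
print("failures:", failures)
```

Even this toy example shows the value of sweeping: the nominal point passes comfortably while one corner/temperature combination violates the spec, which is exactly the kind of finding a design review must catch.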

The next stage of the development will interconnect these functional blocks and attempt to simulate the complete chip over all the required operating conditions. This is a key step in the development and in the pursuit of first pass success. This is explained below.

Note that the simulations of the chip are done using electrical and geometrical models provided by the fabrication facility and the packaging facility. It is absolutely essential that a fab and a packager be picked that provide a complete set of these models and certify that the models are up to date and accurate. The reason is simple: if the models are inaccurate, the simulations will also be inaccurate, the device will fail to operate as required, and first pass success will be thwarted!

Assume for the moment that the models provided are accurate. The next question concerns the simulation tools being used and the nature of the device being simulated. A simulator can handle very complicated circuits, but it runs into real problems under certain conditions. For example, convergence of the DC and transient solutions can be a very real hazard. DC convergence problems can occur when there are very high impedance nodes or branches in the circuit. Transient non-convergence can occur when both very long and very short time constants are present. And analog and digital circuits on the same chip are a big problem, because together they are very difficult to simulate.

There are analog simulators and digital simulators, but a true mixed signal simulator is not really available. Analog simulators simulate time point by time point (or frequency point by frequency point) and thus generate a very large number of data points.

Digital simulators generate a true/false data set. If there is significant digital content in the circuit, the data generated by an analog simulator will be very large and swamp the computer's memory, and the simulation time will be so long as to be impractical. A digital simulator, conversely, is incapable of simulating the small analog steps required for precise analog simulation. In general, even a fairly small mixed signal device can play havoc with the simulations, and no one currently knows of a fully practical way of simulating this class of circuits.
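A back-of-envelope calculation shows why brute-force analog simulation of a whole mixed-signal chip is impractical. All the numbers below (simulation time, timestep, node count) are hypothetical but plausible orders of magnitude:

```python
# Why full-chip analog simulation of a mixed-signal device swamps memory:
# estimate the raw waveform data volume for a hypothetical run.
sim_time = 1e-3          # simulate 1 ms of chip operation
timestep = 1e-10         # 100 ps analog timestep for reasonable accuracy
nodes = 50_000           # node count of a modest mixed-signal chip
bytes_per_point = 8      # one double-precision voltage per node

points = sim_time / timestep
raw_bytes = points * nodes * bytes_per_point
print(f"{points:.0e} timepoints, ~{raw_bytes / 1e12:.0f} TB of raw waveform data")
```

Ten million timepoints across tens of thousands of nodes lands in the terabyte range for a single millisecond of operation, before any post-processing, which is the memory-swamping effect described above.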

Having understood this, it is now possible to point to risk number one for the failure of the device on first pass: if the complete chip cannot be 100% simulated, then the probability that the chip will be a first pass success is lower than 100%.

How can one estimate the probability of success quantitatively?

The difference between 100% simulation of the entire chip and the actual depth of simulation achieved represents the risk, from circuit simulation issues alone, that the chip will not meet its specifications on the first pass. Therefore, to avoid this risk, the chip must be 100% simulated. If it cannot be, then one has to accept the risk mentioned above.
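One crude way to make this quantitative is to treat each development stage as an independent hurdle and multiply the per-stage success probabilities, de-rating the simulation stage by the fraction of the chip actually simulated. Every number below is hypothetical; the point is the structure of the estimate, not the values.

```python
# Crude first-pass success model: independent stages, with simulation
# confidence scaled by achieved coverage. All probabilities hypothetical.
stage_prob = {
    "simulation": 0.98,   # confidence if the chip were 100% simulated
    "layout": 0.95,
    "fabrication": 0.97,
    "packaging": 0.98,
}
sim_coverage = 0.85       # fraction of the chip actually simulated
p_unverified = 0.5        # coin-flip assumption for unsimulated portions

p = stage_prob["simulation"] * sim_coverage + (1 - sim_coverage) * p_unverified
for stage in ("layout", "fabrication", "packaging"):
    p *= stage_prob[stage]
print(f"estimated first-pass success probability: {p:.2f}")
```

Even with optimistic per-stage numbers, 85% simulation coverage pulls the overall estimate down toward 80%, which is consistent with the 5%-10% first-pass failure rates quoted later in this post.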

Following the simulation of the chip, layout is done (often starting even before the chip simulation is complete). The layout is the second most critical part of the process in determining first pass success. There is no correct or incorrect way of doing layout in general, except insofar as all the foundry layout rules are obeyed and the layout is LVS compliant (LVS = layout versus schematic verification).

However, for analog and RFIC/MMIC designs layout becomes a very critical activity, since the shape, placement and interconnect type of the layout elements become important to the performance of the chip. Matching of active and passive devices depends on how close these devices are in the layout, and they need to be in the same orientation. For temperature-critical elements, the devices (resistors, capacitors, active devices) may not only have to obey shape and orientation rules but also lie on isothermals on the chip surface. Fringe capacitance can lead to unintended coupling of signals. For a high gain, wideband amplifier, input and output traces placed close together can lead to parasitic oscillation! Ground shielding must be used whenever there is a danger of unintended coupling for reasons of size or electrical performance.

Matching of devices is also important. Common centroid layout (layout of the sections of a passive device, or of a number of matched active devices, around a common center point) has to be used. For reduction of offset, the usual differential pair may have to be split into a quad and cross-connected.
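The idea behind common centroid layout is easy to illustrate: arrange the segments of the two matched devices so that their geometric centroids coincide, which cancels linear process gradients to first order. The sketch below prints the classic cross-coupled quad and a one-dimensional ABBA-style interdigitation, and checks the centroid property; it is a placement-pattern illustration only, not a layout tool.

```python
def cross_coupled_quad():
    """Matched pair split into a 2x2 cross-connected quad: A and B
    share a common center, so linear gradients cancel to first order."""
    return [["A", "B"],
            ["B", "A"]]

def interdigitate(n_pairs):
    """1-D common-centroid interdigitation: ABAB... mirrored to ...BABA,
    e.g. n_pairs=1 gives the classic A B B A pattern."""
    half = ["A", "B"] * n_pairs
    return half + half[::-1]

for row in cross_coupled_quad():
    print(" ".join(row))
print(" ".join(interdigitate(2)))
```

For any n_pairs, the mean position of the A segments equals the mean position of the B segments, which is precisely the common centroid condition along that axis.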

If the device is a radhard device, a number of other techniques have to be used, especially for CMOS-type devices, where threshold shifts with radiation will almost certainly kill the performance or the device.

There are a multitude of layout techniques, beyond the scope of this post, which have to be learned through experience. The point, however, is this: even when all these techniques are used, the layout of the device is more susceptible than the schematic to errors that lead to chip failure (and thereby miss the first pass criterion).

In spite of this a number of devices can indeed be first pass successful. However, in the author's experience these are fairly simple devices where the level of criticality is low. In such devices first pass success can be expected and many times, found.

Thus, there is a finite risk of failure due to layout issues.

The next step in the chip path is the fabrication.

The fabrication of the device is carried out in the foundry selected. Hopefully, the foundry will be a good one, providing precise models and design rules/process rules which will be certified.

The foundry will run its own DRC (design rule checks) on the chip database sent to it for fabrication. If the DRC toolsets at the foundry and at the customer are correlated, no DRC errors will be found at this stage. Usually, however, some errors are found at this juncture, and they must be corrected (or waived) by the customer. In the author's opinion, waiving errors is not a safe option: all DRC errors should be corrected before fabrication is started. If not, this adds another risk that the chip may fail to achieve first pass success.

As the fabrication proceeds, the customer will be given access to the WIP (work in progress) database, and when the fabrication ends the customer will receive the finished wafers and the process control module test results. The wafer test results must be scrutinized for compliance with the upper and lower limits of all parameters.

Again, sometimes the fabricator cannot meet the process limits and may ask for a waiver. This should be considered very seriously, as any parameter out of limits can cause a failure of the device.

The bottom line is this: there should be no DRC errors in the final DRC run by the fabricator, and no waivers requested before or after processing, if we are to eliminate the risk of the fabrication causing a chip failure.

Finally, if all goes well, the wafers become available for a probe test using a test program (or a manual probe test), which is the first evaluation of the device (before packaging). If the simulations are accurate, the layout is correct and the fabrication is done properly, the probe test should yield first pass functional devices.

However, we are not there yet. The device must (usually) be packaged, and the packaged test must yield good devices. As is well known, package parasitics can have severe effects on device performance, especially for a high performance device. This problem can be avoided, of course, by making sure precise package parasitics are available while the device is in the simulation stage. If this is not done, then there is a finite probability of the device failing the packaged test.

Finally, if the device does pass the packaged test, it must be inserted into the board or system it was designed to operate in. Here the device may be subjected to various forms of stress: EMI, RFI, thermal, mechanical, noise, and so on. For the device to pass this test, it should have been designed to operate in the environment it now finds itself in. This is why a great deal of attention must be paid to the deliberations and assessments during the conceptual stage of development. In the author's opinion, the road to first pass success really starts at that point.

Significant attention during the conceptual phase is a good approach and leads to a positive result at the end of the entire process. If this is not done, there is a finite probability that the device will fail at this late stage, which is a really catastrophic event by any standards.

As can be seen, to ensure a first pass success a great many factors must be taken into account and due diligence paid to them. In spite of this, studies have shown that approximately 5%-10% of devices fail to be first pass functional for one reason or another and may have to be re-iterated with a reduced set of masks. This factor should always be taken into consideration when planning a new high performance analog or RF ASIC or MMIC.

For more information and resources on analog and wireless ASIC and module design please visit our website at http://www.signalpro.biz.

IC Design: ASIC success factors based on customer-vendor relationships

ASIC success is something that all customers and ASIC vendors pray for. For the customer the issue is one of his/her credibility and recognition within his/her own organization, and the potential gain or loss of funding. For the ASIC vendor the issue is his/her reputation, revenue and long term prosperity. Whichever side one is on, any information on succeeding with an ASIC product can only come in handy.

Over the past 35 years of involvement with ASICs, first as an employee of large systems companies and latterly (21 years) in a smaller "ASIC-centric" company, I have learnt some lessons that I thought might be useful for others. Hence this post.

The following success factors are in no particular order. Yet all are equally important. If the interested person makes use of these ideas then his/her success with ASICs will be greatly enhanced. That is my earnest hope.

1.0 Underestimation of time and money: The success or failure of an ASIC actually starts right at the beginning, when the vendor is asked for a quotation. Many times the vendor thinks that by underbidding the project he may improve his chances of getting the business. As a matter of fact, this is true in many cases: a low bidder wins. However, the customer may not realize that one of two things is about to happen:

(A) The vendor cannot possibly complete the project in the time and money originally quoted, so he asks for more time or money, or both, in many ingenious ways.
(B) The vendor fails to do a complete job on the ASIC and releases it to fabrication anyway. In this case the probability of failure is very high, since due diligence may not have been done.

Both of these options lead to negative consequences, the most serious being loss of trust by the customer and the death of the close, friendly relationship between the two parties. It is very hard to salvage a project after this has happened.

Therefore, for ASIC success, an accurate estimation of time and money is critical. If the vendor estimates a higher amount, he still has to live by it and quote it. Even so, the wise customer will add at least 20% or more to both these quantities as contingency planning.

2.0 Schedule: This is a corollary to the first factor. Establishment of a reasonable, workable schedule is one of the most important success factors for ASICs, and similar comments hold as above. When schedules become too tight or unworkable, the project can continue for a while in cloud cuckoo land, but the cuckoos will ultimately come home to roost as the time runs out. The result may be (i) an ASIC that is not complete, or (ii) an ASIC done badly to satisfy customer pressure to meet an unworkable schedule, leading to failure at probe or bench test. After this it is very difficult to continue with the project, because it may take a very long time to debug and fix the ASIC. Again, contingency planning should add extra time to keep any unknown factors from completely wrecking the schedule, and hence the ASIC.

3.0 ASIC development agreements: A carefully thought out and written ASIC development and supply agreement is an absolute must for success. Agreements should contain, at a minimum: a clear and detailed SOW for each phase of the project, the review process, Engineering Change Order (ECO) procedures, a program plan as accurate as it can be at this initial stage, the payment schedule and terms, communication procedures between the vendor and customer, and any other legalities (boilerplate). Functional and test specifications may or may not be included. If a separate conceptual/feasibility study was done before the actual ASIC project was started (highly recommended), then both specifications should be part of the agreement.

4.0 Customer expectations: Customer expectations are a very important success factor for an ASIC project. They are really set by the customer's view of the vendor and the information the vendor supplies. It is imperative that realistic (or, I daresay, slightly pessimistic) expectations be the rule. I know that if the expectations are very pessimistic, the customer may decide not to do the project at all. Conversely, if customer expectations are too high, then when reality sets in, the negative feelings may cause untold misery to both parties. I believe that one of the critical jobs a vendor must do is to manage customer expectations in an effective and positive manner.

5.0 Design tools: Let's make sure that the appropriate design tools are available to both the vendor and customer. At a minimum, a reasonable system simulator, a circuit simulator, a layout tool, a debug tool and a documentation tool are essential. Today this is not a problem, since CAD tools are widely available under a variety of license options. A corollary to this is that the vendor must know how to make effective use of the tools. CAD tools have become so complicated and massive that sometimes this very fact causes problems. I think the rule should be to use the simplest and most user friendly tool appropriate to the job. The customer needs some way to review the work being done and to help if any issues arise; therefore the customer should also have access to tools which allow him/her to do his/her bit.

6.0 Fabrication models and design rules: This is very important and has major ramifications. As shown in an earlier post, one of the ways to realise a successful ASIC is through extensive simulation, clean layout and verification of the layout before submission to a fabrication facility. Accurate device models and design rules must be supplied by the fabrication and packaging houses. Any inaccuracies in this data will cause untold problems during and after the fabrication of the ASIC and may render an ASIC completely useless. So let's make sure we pick a good fabricator and packaging house who can supply accurate models.

7.0 Design expertise: The vendor should make sure that the design expertise for a particular type of ASIC exists in the company. Competence is what is required, and competence with the CAD tools is part of the expected competence. If the appropriate design expertise for a certain part of the ASIC does not exist within the company, then ask for help within the larger ASIC vendor or consultant community to fill the gap.

8.0 Full disclosure to the customer: It is essential that the vendor and customer disclose any issues that are troubling them or that may impact the success of the ASIC. Colloquially, let's "be up front" with each other to get the maximum benefit from each other's expertise. Anything that is hidden will eventually be found out, usually at the worst possible time.

9.0 Customer-vendor relationship: A close, cordial, mutually respectful and friendly relationship between the customer and vendor is, in my opinion, one of the most important success factors. Even in times of stress (for whatever reason), a close relationship will help to get over any issues generated by the project. It is so important that I rate it as the number one success factor for ASIC success.

10.0 Communications between the customer and the vendor: Another important success factor is the level of communication between the customer and vendor. Regular reviews and informal or formal conversations between the appropriate members of the customer/vendor teams make all the difference in the world, especially in catching problems in their infancy, before they become big problems. E-mail, video teleconferencing, telephone, etc.: whatever means are most effective should be used often and regularly to maintain a close communication link.

11.0 Use of ECOs (Engineering Change Orders): In some cases, even though due diligence was done, a change may be required by the customer after the project starts. I strongly recommend the use of ECOs to prevent problems due to "mission creep", the biggest problem in some major projects. Obviously it is not easy to eliminate mission creep, but the ECO allows both parties to do the needful in a friendly and professional manner: the change is discussed, a new quote for time and money is generated, and the program plan is amended in such a manner that both customer and vendor are satisfied.

12.0 Multiple iterations for large ASICs: The rate of first pass success for ASICs has risen steadily. However, when the ASIC is complex for any reason (large analog and digital content, different types of simulation conditions, unknown design parameters, etc.), multiple iterations should be factored in right at the beginning, and customer and vendor should clearly understand that this is the case. The number of iterations required varies from project to project. Sometimes the vendor fails to inform the customer of this fact, and thereby fails to manage customer expectations as mentioned above, leading to failure of the project.

13.0 Post fabrication analysis tools: If the ASIC is small and uncomplicated, then the probability is that it will be a first pass success and no other work will be needed post fabrication. However, when the ASIC is complex (as described above), it is very probable that multiple iterations will be required to get it to production status. In order to effectively analyze performance problems or debug the ASIC, some analysis tools and equipment are necessary: an analytical prober with a low capacitance, high impedance probe capable of probing down to 1 micron; access to a Focused Ion Beam (FIB) resource, which is becoming more and more popular; appropriate laboratory test equipment; and a PCB design, layout and fabrication resource. This is of course not a comprehensive list. Some of this equipment may be acquired internally, but expensive items like the FIB tool may be rented as needed.

14.0 Conclusions: The above musings are a result of my experience. The success factors listed above are by no means exhaustive. I would welcome comments from peers on their experiences and permission to add their recommendations to this list. Finally I sincerely hope that this post is useful and leads to better ASICs and great ROI for our customers.

IC Design: Developing a specification for an analog IC

It is true that the quality and success of an analog ASIC generally depend on the specification that is developed for it. A good specification, with requirements clearly defined, may account for more than 80% of the success of not only the ASIC device but also the entire process of development, including the business and technical relationship that develops between the customer and the analog ASIC vendor.

For it is true that the quality of an analog ASIC is defined not only by how well the device meets its specifications, but also by the experience the customer has with the very process of working with the vendor.

It behooves us, then, to at least define some basic ground rules for the generation of specifications. Each analog ASIC is unique and has its own features, but it is usual for certain items to be included in the specification and for certain formats to be followed.

These issues are explored in this post. We hope this post will be of help to those involved in specifying or implementing an analog ASIC.

This post follows the following outline:

Types of specifications required
Suggested format for the specifications
Challenges in building specifications

1.0 Types of specifications required

In general the specification of an analog chip should be in two parts. The first part is the functional specification and the second part is the test specification.

The functional specification contains a comprehensive description of the chip and all the detailed functionality required by the user. It includes the interaction of the chip with the board ( or substrate ) along with all external components.

The test specification contains the test methods, test options, reliability test options, burn-in, and thermal/operational tests. These numbers and descriptions are usually specified at the I/O of the chip, since no other part of the device is accessible to the outside world.

2.0 Suggested Formats

2.1 Functional Specifications:

2.1.1 The cover page should contain the part number of the chip, approvals, revisions and any other high level information.

2.1.2 The next section should provide a clear but brief conceptual level description of the function.

2.1.3 Following the functional description a fairly detailed block diagram with the pin I/O clearly marked should be provided.

2.1.4 A table of pin descriptions should be included which provides clear information on the pin number, the pin name, the pin symbol, whether it is an input or an output, and a succinct description of the function of the pin.

2.1.5 Also include, in tabular format, the absolute maximum ratings for current, voltage, temperature, etc. that the chip may be exposed to in extreme cases.

2.1.6 The next section of the specification should clearly describe the principle of operation, timing, flow charts, relevant technical data, operational characteristics etc. in reasonable detail.

2.1.7 Specify the DC operating conditions of the chip including logic levels, power dissipation, supply currents, operating temperature, supply voltages etc. in this section. Minimum, typical and maximum values are preferred along with the symbols of the parameters being specified and the conditions under which the specification has been made.

2.1.8 Specify the transient operating conditions of the chip, including all delay times, rise and fall times, hold times, setup times, clock frequencies etc. Include timing diagrams if more clarity is required for each parameter. Include symbols for all parameters being specified and the conditions under which the specification is made.

2.1.9 Specify AC operating conditions. Specify gains, noise levels, input and output impedances, input and output analog voltage and current levels, frequencies, analog accuracies and tolerances etc. Include symbols for all the parameters being specified and the conditions under which the specification is made.

2.1.10 In the last section include some typical application circuits and/or applications hints that allow the user/designer to understand the operation of the overall system including the role of external components and any test signals. Also include in this section, the suggested board layout for accurate operation of the device. If possible include a specification of the board material or other substrate being recommended for usage.

2.2 Test Specifications:

2.2.1 The cover page is almost identical to that of the functional specification, with all the nomenclature indicating revisions, dates, initiators, approvals and title.

2.2.2 Provide a block diagram of the test architecture showing all external components and any switching relays, matrices or other auxiliary test structures to be used.

2.2.3 Provide a complete pin I/O description. Note that in many cases the test pin I/O list may be more extensive than the functional pin I/O list, since there may be test pins included on the device. The package for test may or may not have the same number of pins. Provide a clear description of the pins and their functions.

2.2.4 Provide a complete list of tests to be carried out. Name each test with an appropriate name and number. Link a test description to each test number.

2.2.5 Provide detailed device specific test procedures for each of the tests specified in 2.2.4 above including the role of external supplies and other signals and expected results and tolerances for the results.

3.0 Challenges in building Specifications

It is one thing to say that specifications should be provided for a design to be done accurately, and another to actually produce them. This is especially the case if the device is a new one with very little functional or test history behind it.

In most cases no one really knows enough about the device to specify it completely. Typically, the information that needs to go into the specifications is non-existent before the device is designed. This is the first hurdle or challenge faced by those who would specify the device.

Therefore it is common practice to have a "preliminary" specification, which contains a considerable amount of information but also has a lot of "TBDs", i.e. "to be determined" parameters.

The TBDs can only be replaced by hard data after the chip has been designed, and in some cases after the chip has been fabricated and evaluated.

The test specifications are in a similar position. Since the operating parameters may be unknown, the test specification suffers a similar fate and carries TBDs as well.

There is a common practice in test specification development where a number of iterations may be performed on the specification. The first test specification may be "comprehensive". This simply means that there is an overkill of tests included in it. These extra tests provide information for the final test specification after the device is designed and fabricated.

As more and more information becomes available, the "comprehensive" specification is trimmed down, with fewer and fewer tests remaining, until a final test specification can be approved.

IC Design: Sampling rate conversion in digital signal processing.

Multirate processing, also known as sampling rate conversion or interpolation and decimation, is a clever technique in DSP. As analog and mixed signal design engineers we have learned to use this technique in various product designs for our customers. It offers an added degree of freedom in the design of mixed signal integrated circuits that may be of help to other professionals like ourselves.

Multirate processing finds use in signal processing systems where various sub-systems with differing sample or clock rates need to be interfaced together. At other times multirate processing is used to reduce the computational overhead of a system. For example, suppose an algorithm requires k operations per sample. By reducing the sample rate of a signal or system by a factor of M, the arithmetic bandwidth requirement is reduced from k·fs operations per second to k·fs/M operations per second, where fs is the sampling rate and M is the decimation factor.
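The rate reduction described above can be sketched in a few lines of Python. This is a toy illustration only: it uses a crude boxcar (moving-average) filter as the anti-alias stage, where a real design would use a properly designed low-pass FIR, and the signal and rates are made-up values.

```python
def decimate(x, M):
    """Low-pass filter (boxcar of length M), then keep every M-th sample."""
    y = []
    for n in range(len(x)):
        # Boxcar filter: mean of the last (up to) M input samples
        window = x[max(0, n - M + 1): n + 1]
        y.append(sum(window) / len(window))
    # Downsample: retain every M-th filtered sample
    return y[::M]

fs = 48_000                                # original sample rate, Hz (illustrative)
M = 4                                      # decimation factor
x = [float(i % 8) for i in range(32)]      # dummy input signal
y = decimate(x, M)
print(len(x), "samples at", fs, "Hz ->", len(y), "samples at", fs // M, "Hz")
```

After decimation, any per-sample algorithm downstream runs at fs/M, which is exactly the k·fs to k·fs/M savings mentioned above.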

In other applications, resampling a signal at a lower rate allows it to pass through a channel of limited bandwidth. In yet another application, a high accuracy delta-sigma A/D converter can be made with a very high modulation rate at the front end, followed by a decimator (a sample-rate down-converter) to reduce the sampling rate and provide converted samples at or near the Nyquist rate.

Applications for this technique abound, if understood by the practitioner. The challenge is that it is not easy to pick up a book or a paper on DSP and understand Decimation and Interpolation to an intuitive extent. This causes hesitation in usage.

A tutorial paper has been written to aid in further understanding of this fascinating topic. Read it at http://www.signalpro.biz/sampling_rate_conv.pdf.

IC Design: Diode design

This month we got involved in the detailed design of a diode. For most analog and RF ASIC designers, diodes are pretty trivial as far as design is concerned. The reason is that we get the parameters from the foundry and use the scaling for diodes already fully characterized by the foundry. However, when we get a specification like "need a diode with a series resistance of 1 ohm, a capacitance of 0.2 pF, and a clamping voltage of 25 volts at a 5 A current", things get a little more sticky.

Where do we start? I suppose one set of answers is: (1) Start with the substrate. (2) Use Irwin's curves to calculate sheet resistance. (3) Use a manual like the Semiconductor QRM design manual to get an initial design for the required capacitance. This involves extracting (a) the concentration gradient, (b) the built-in voltage, and (c) the zero bias capacitance. Once all of these preliminary parameters are calculated, we use a simulator like Athena or Atlas (or other simulation tools like the Stanford University TCAD set, or SYNOPSYS tools). This is the tricky part, where optimization becomes so important, and it takes a long time!

IC Design: Estimating current source parameters for a current source DAC

An ever present issue in the design of analog circuits is the challenge of estimating power dissipation, size on silicon, cost, etc. We would all like to know these factors as early as possible, and both designers and customers can benefit from this information. These parameters are very dependent on the specifications and therefore on the technology chosen for implementation. As we were pondering this, a customer did, very bluntly, ask for these estimates for a current source "high speed" DAC. As a result we had to go and look at these design factors. Ultimately, the results of that little study turned into a report. We published the report and it proved to be very useful indeed, not only for that particular item but for a broad class of analog devices. The report is available at www.signalpro.biz/pcsdac.htm for interested viewers.

IC Design: Microstrip on silicon design

Microstrip is the preferred style for designing passive circuitry for MMICs, RF and high speed digital circuits. If the substrate is a board or GaAs the task is simpler and the design can be pretty much cookbook. However, if the designer has to do this on a silicon substrate ( just an ordinary one, say for a SiGe process or fine line CMOS) then it becomes complicated. Why?

The reason is that standard silicon substrates are very lossy for high frequency signals, and the design of microstrip (especially the initial hand calculation/engineering judgement type designs) becomes a chore. If one is fortunate enough to have expensive CAD tools that one can use extensively, then it is less of a grind. However, one still has to understand how microstrip behaves on silicon and what one has to do to make the right corrections.

A while ago I wrote an article on this precise subject. It is available on the SPG website under the engineering pages> engineer's corner for interested colleagues. Feedback on this will be greatly appreciated since some of the issues were expounded based on personal observation and experience.

IC Design: The intricacies of CRC encoding.

Recently, we at SPG got involved in high speed data transmission issues and in particular the CRC algorithm. The algorithm itself has been around forever it seems, yet its simplicity is very appealing. Anyone involved in it, or about to get involved in data transmission is probably very familiar with it. In any case I found it very interesting.

The basic scoop on it is as follows: ( Interested readers may view the details on our webpage: www.signalpro.biz>engineering_pages>engineer's corner and look for the detailed article and hardware implementations.)

The CRC procedure can be explained as follows: you have a data message you want to transmit which is k bits long. You use the CRC algorithm to generate another sequence of bits that is n bits long. The latter sequence is called the frame check sequence (FCS). You then transmit both the original k bits of your message and the n-bit FCS, so the total length of your transmitted message becomes k + n bits. These k + n bits, treated as a binary number, should be exactly divisible (in modulo-2 arithmetic) by some predetermined number.

At the receiver, the received k + n bit message is divided by the same predetermined number. If there is no remainder, the message has been received without detected errors. If there is a remainder, the message has errors. It's as simple as that!
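The modulo-2 division described above is easy to code directly as a sketch. The bit-string representation and the 5-bit generator used here (x^4 + x + 1, a textbook CRC-4 example) are illustrative choices, not anything tied to a particular standard; real hardware does the same division with an XOR/shift register.

```python
def crc_fcs(message, poly):
    """Return the n-bit frame check sequence (FCS) for a bit-string message.

    poly is the generator as a bit string of n+1 bits; modulo-2 (XOR)
    long division of message * 2^n by poly leaves the FCS as remainder.
    """
    n = len(poly) - 1
    bits = [int(b) for b in message] + [0] * n       # append n zero bits
    divisor = [int(b) for b in poly]
    for i in range(len(message)):
        if bits[i] == 1:                             # XOR divisor in when leading bit is 1
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return ''.join(str(b) for b in bits[-n:])        # remainder = FCS

def crc_check(received, poly):
    """True if the received frame (message + FCS) leaves a zero remainder."""
    n = len(poly) - 1
    bits = [int(b) for b in received]
    divisor = [int(b) for b in poly]
    for i in range(len(received) - n):
        if bits[i] == 1:
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return not any(bits[-n:])                        # zero remainder => no detected error

msg, poly = "1101011011", "10011"
fcs = crc_fcs(msg, poly)
print("FCS:", fcs, "check:", crc_check(msg + fcs, poly))
```

Flipping any single bit of the transmitted frame makes the division leave a nonzero remainder, which is how the receiver detects the error.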

IC Design: Minority carrier lifetimes in silicon.

As semiconductor designers we grew up with the concept of lifetimes of minority carriers in silicon. Our task was to take the process parameters and design rules from the foundry and fashion a chip. However, once we venture beyond this safe boundary and pit our skills against device design from scratch, a number of issues come up with which we are not too familiar. One such issue came up for me this weekend. I was trying to calculate minority carrier lifetimes for specific conditions. I found out that this is a very difficult thing to do. Minority carrier lifetimes vary quite broadly and are dependent on a number of factors. Among these are Auger recombination, band to band recombination and Shockley-Read-Hall (SRH) recombination.

The lifetime is a strong function of the doping concentration of the silicon. It is easier to use analytical formulas for lifetime calculation when the concentration is high (> 1E17 cm^-3).

High resistivity material is harder to handle analytically. The lifetimes in these materials can be a function of the construction of the crystal(CZ versus FZ). In addition various processing steps can have an impact on the lifetime.

Nevertheless, analytical formulas do exist for estimation of lifetimes. The one that I am now using is: lifetime = 5E-7/(1.0 + 2.0E-17·N), where N is the doping concentration in cm^-3.
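As a quick numerical sketch, the empirical formula above can be evaluated over a range of concentrations. The sample concentrations are arbitrary, and the results are rough order-of-magnitude estimates only.

```python
def srh_lifetime(N):
    """Empirical minority carrier lifetime (s) vs doping N (cm^-3):
    tau = 5e-7 / (1 + 2e-17 * N)."""
    return 5e-7 / (1.0 + 2.0e-17 * N)

# Lightly doped silicon approaches the 0.5 us ceiling; heavy doping
# pulls the lifetime down sharply.
for N in (1e14, 1e16, 1e17, 1e19):
    print(f"N = {N:.0e} cm^-3 -> tau = {srh_lifetime(N):.3e} s")
```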

Roulston has published a curve that also shows the approximate variation of lifetime with concentration. Both of these techniques are just approximations. I compared calculations of the lifetime for various concentrations using the analytical formula with Roulston's curve. The fit became very close as the concentrations increased, but was poor at low concentrations (highly resistive silicon).

My conclusions are that if the need is simply to estimate the lifetime to a rough order of magnitude then by all means one can use the analytical formula given above or Roulston's curve. However, if precise numbers are required then measurements must be made on samples of doped silicon under the conditions of operation. There is no shortcut here for that kind of accuracy!

IC Design: More about diodes!

Need a pn junction diode that has a high reverse breakdown, very low capacitance (so it's fast) and low resistance in the forward direction? If this is the case, then a simple pn junction diode may not provide the answer. The reason is that as the breakdown voltage goes up, the forward resistance goes up and the capacitance goes down; if you need a lot of current, this diode will not provide it. Conversely, as the resistance goes down, the breakdown goes down and the capacitance goes up. So sometimes a simple pn junction diode cannot meet specifications.

Yet if this type of performance is required, either for purposes of high current (read low resistance) or for high frequency applications, then a different type of diode is needed.

This is the P-I-N or N-I-P diode. The I stands for "intrinsic". This diode has a heavily doped p region and n region, just like an ordinary pn diode. However, the resemblance ends there. In a P-I-N diode there is a high resistivity (or "intrinsic") region sandwiched between the heavily doped n and p regions. The inclusion of the intrinsic or high resistivity region imparts some very useful characteristics to this structure. These characteristics are explored heuristically in this post.

Resistance: The resistance of the P-I-N diode is inversely proportional to the forward current through the diode and can be controlled by it. Very flat resistance characteristics can be generated this way. The reason for the low resistance is that the high resistivity region has very few carriers available for recombination, so any injected minority carriers coming from the heavily doped p and n regions do not die quickly but persist for "long" lifetimes in the I region. Thus the higher the current, the more free carriers in the I region and the lower the resistance. In the ultimate limit the forward resistance reaches the contact resistance, which can be made very low.

Capacitance: The pn junction zero bias capacitance in the P-I-N diode is very low ( or relatively low compared to the ordinary pn junction diode). The reason is that the depletion region ( the region that is completely depleted of carriers with increasing reverse bias or zero bias) forms the "insulator" of a parallel plate capacitance. The parallel plates are, of course, the heavily doped p and n regions of the diode. The higher the resistivity of the I region the wider the depletion region and the lower the capacitance. Also the capacitance is very flat over a wide band of high frequencies so matching with other circuits becomes easier. As a result of the low capacitance the P-I-N diode can switch very fast and can be used in high frequency applications.
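The parallel plate picture above can be turned into a quick estimate: C = ε·A/W, treating the fully depleted I region of width W as the dielectric. The area and I-region width below are made-up illustrative values, not a real design.

```python
EPS0 = 8.854e-14      # permittivity of free space, F/cm
EPS_SI = 11.7         # relative permittivity of silicon

def pin_capacitance(area_cm2, i_width_cm):
    """Parallel-plate estimate: C = eps * A / W, with the depleted
    I region of width W acting as the insulator."""
    return EPS0 * EPS_SI * area_cm2 / i_width_cm

# Illustrative geometry: 100 um x 100 um junction, 10 um intrinsic region
C = pin_capacitance(1e-4, 10e-4)
print(f"C = {C * 1e15:.1f} fF")
```

Note how the wide I region (10 um here, versus a fraction of a micron for an ordinary junction's depletion width) is what drives the capacitance down.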

Reverse breakdown voltage: The breakdown voltage is high since the breakdown electric field drops voltage across a wider depletion region. As the depletion region becomes wider and wider with reverse voltage the breakdown increases.

Thus if one wants to reconcile high breakdown with low resistance and low capacitance then a P-I-N diode is a great choice. Both power diodes and RF diodes can be made with this technology.

Some disadvantages in the usage of the P-I-N diode are: (a) Its performance can only be predicted accurately if the lifetime of the minority carriers in the I region is known accurately. There are not a lot of analytical techniques to calculate this, therefore for precise usage, measurements need to be made (see the previous posts). (b) Most circuit simulator programs, such as PSPICE, do not provide a mathematical model (empirical or physics based), so circuit simulation is difficult. (c) The fabrication of the diode is slightly more complex. Most vendors provide the parameters and application notes for their P-I-N diodes, so usage is made fairly easy. However, designing one from scratch can be quite involved because of the above factors.

IC Design: References for diode design

Collecting the right references for the design of diodes became a fairly serious project. However, this was done and a reasonable collection was generated for both simple p-n junctions and pin diodes. Use of these references can make the job a little easier. The reference list is on the website under "engineer's corner" in engineering pages.

IC Design: Clock distribution in mixed signal ICs

In relatively high speed analog and mixed signal IC designs, a challenge is to distribute the clock (usually derived from a clock reference like a PLL) such that clock skew is either eliminated or minimized. In one of our designs, clock distribution was becoming a problem, so we studied it and came up with a solution which is illustrated in this posting and its accompanying article under "Engineering Pages" on the website. For a detailed look at this technique, interested readers may go to www.signalpro.biz and then navigate to "engineering_pages>engineer's corner>clock distribution strategy".

IC Design: An ADPLL for clock generation in a mixed signal IC

Precise clock generation is required in a majority of mixed signal ICs. Generally a PLL of some sort is used. In a prior post the concept of clock distribution was explored. The actual clock was generated by an interesting PLL based on a DCO. There are some advantages to this technique when it comes to providing a clock to an AMS system. Interested readers may go to www.signalpro.biz and then navigate to "Engineering_Pages>Engineer's corner" and look for the ADPLL design... paper.

IC Design: The harmonic balance algorithm

The Harmonic Balance algorithm is now an established technique in CAD programs of various types, especially for RF/MMIC and analog work. We felt we needed to understand the algorithm. This would allow us to be better at using it in simulations and, more importantly, to decide whether or not we wanted to purchase it as part of a CAD tool.

The implementation of these algorithms in a circuit simulator is fairly involved. However, luckily, compared to a couple of decades ago, we as circuit designers do not really need to know the intricacies. What we want to know is at a higher level of abstraction. The expectation is that, if we do this, we can do better at simulation and know when to use it effectively and when not to use it!

As a result of discussions internal to our design and CAD group a better understanding was gained and we decided to write a brief paper on it. This paper is now available on our website at www.signalpro.biz. Interested readers may follow the links www.signalpro.biz>engineering pages>engineer's corner and read the paper if they wish.

IC Design: Relationship between tf, the forward transit time, and ft, the transition frequency, in bipolar transistors

Someone recently asked how the ft of a bipolar is related to tf, the forward transit time of the bipolar. The tf is a model parameter while ft is not. Yet we always talk about the ft of the transistor. The answer to this question can be found in the spg website ( www.signalpro.biz) under engineering pages>engineer's corner for interested parties.

Saturday, July 7, 2012

IC Design: Two useful matching techniques

For maximum transfer of power from a source to a load, the source and load impedances must be conjugate matched. A number of techniques to do this have been developed. This post looks at two fairly simple and very popular ones: the L-section match and the cascaded transmission line match. Simple analytical techniques are used to do this and are described in the paper. The calculations can be done with a simple calculator. In order to access the detailed description, interested readers are directed to our website at www.signalpro.biz. Follow the links in the website to engineering pages>engineer's corner and then select the paper from the list on the page.
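To show how calculator-simple the L-section case is, here is a sketch for purely resistive terminations, matching a small source resistance up to a larger load with a series inductor and a shunt capacitor. The resistances and frequency are made-up example values; the paper on the website covers the general complex-impedance case.

```python
import math

def l_match(r_small, r_large, f0):
    """L-section match of r_small up to r_large at frequency f0 (Hz).

    Series inductor on the low-resistance side, shunt capacitor across
    the high-resistance side. Purely resistive terminations assumed.
    """
    Q = math.sqrt(r_large / r_small - 1.0)   # loaded Q fixed by the ratio
    w = 2.0 * math.pi * f0
    L = Q * r_small / w                      # series reactance Xs = Q * r_small
    C = Q / (r_large * w)                    # shunt reactance Xp = r_large / Q
    return L, C

L, C = l_match(10.0, 50.0, 1e9)              # 10 ohm up to 50 ohm at 1 GHz
print(f"L = {L * 1e9:.2f} nH, C = {C * 1e12:.2f} pF")
```

The single degree of freedom is worth noting: once the two resistances are fixed, Q (and hence the bandwidth) is fixed too, which is one reason cascaded sections are sometimes preferred.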

IC Design: RF/MW ESD complex matching using resonance

An interesting technique that finds extensive use in RF/MW ESD circuits and complex matching circuits is the concept of resonating out reactances. Taking the case of the ESD circuit we find that in the most usual case RF/MW ESD circuits ( as other ESD circuits do) use some form of diodes to protect sensitive inputs on an IC. This of course leads to a parasitic capacitance which causes loading and mismatches. In order to eliminate the effect of this capacitance, at a single frequency an inductor can be used in parallel with the parasitic capacitance. The value of the inductor is chosen to resonate with the parasitic capacitor and therefore at the resonant frequency the pair becomes invisible leaving only the resistive part to be matched or considered. This is a simple technique which finds wide application in a number of critical circuits. Obviously the limitation is the single frequency characteristic. However, with some subtle manipulations it can also be used in wider bandwidth applications.
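The calculation behind resonating out a parasitic is just L = 1/(ω²C). A minimal sketch, with an assumed (purely illustrative) ESD diode capacitance and operating frequency:

```python
import math

def resonant_inductor(c_parasitic, f0):
    """Inductance that resonates with c_parasitic at f0 (Hz),
    making the L-C pair look like an open circuit at resonance."""
    w = 2.0 * math.pi * f0
    return 1.0 / (w * w * c_parasitic)

C = 0.5e-12                       # assumed ESD diode parasitic, 0.5 pF
f0 = 2.4e9                        # assumed operating frequency, Hz
L = resonant_inductor(C, f0)
print(f"L = {L * 1e9:.2f} nH at {f0 / 1e9:.1f} GHz")
```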

IC Design: Thermal modeling and analysis of devices and MCMs

Thermal modeling and analysis of devices and MCMs (modules) has become very important in recent times. In years past, most of the thermal effort was based on the design of devices, and thermal analysis was built into circuit simulators such as SPICE. The rest of the modeling, based on the electrical analogs of heat transfer, was well understood and could be done in a fairly simple way. Today the situation is quite complex. As more and more performance is demanded from semiconductors (individually), from MCMs (multi-chip modules), and indeed from entire products such as cell phones, the heat per unit area is rising towards the 500 W/cm² limit. This is a lot of heat, and it is very difficult to use the old methods to model and understand these problems. Therefore thermal modeling is attacking these electrothermal issues using newer CAD tools based on FEM (finite element methods) and CFD (computational fluid dynamics). A number of new thermal modeling tools have appeared on the market. Some are reasonably priced and others are not. Again, you get what you pay for! Heat flow is ultimately based on the conduction, convection and radiation of heat. Thermal CAD tools model these processes in their own proprietary ways. We have used a couple of these tools and find them fairly (and I mean fairly) complex, so practice is necessary. However, the results obtained are within the 20% error band. The accuracy of the results also depends on the skill of the user! More on this topic as time goes on. It is a fascinating subject.
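The "electrical analog" mentioned above, in its simplest form, treats temperature as voltage, heat flow as current and thermal resistance as electrical resistance, giving Tj = Ta + P·θJA. A minimal sketch with illustrative numbers (the 40 °C/W figure is a made-up example, not any particular package):

```python
def junction_temp(t_ambient, power, theta_ja):
    """Electrical-analog thermal model: Tj = Ta + P * theta_JA.

    t_ambient in deg C, power in W, theta_ja (junction-to-ambient
    thermal resistance) in deg C per W.
    """
    return t_ambient + power * theta_ja

# 2 W dissipated into an assumed 40 C/W package at 25 C ambient -> Tj = 105 C
print(junction_temp(25.0, 2.0, 40.0))
```

This one-resistor model is exactly the "fairly simple" old method the post refers to; the FEM/CFD tools replace the single resistor with a fine spatial mesh.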

IC Design: Noise figure versus input referred noise

If we use the specification for a low noise amplifier, invariably the noise performance is given as a noise figure. However, in a particular system design we calculated the input referred noise voltage that could be a limiting factor for the very first stage LNA. The issue was how to convert from the noise figure of a selected LNA (from Analog Devices, no less) to the input referred noise voltage, to make sure the amplifier was being chosen correctly. Well, here is the conversion, at least in one form.

Note: The noise factor is simply 1 + NA/Ni. Ni is the noise power coming in from a 50 Ohm matched source and is equal to -174 dBm/Hz (pretty standard usage).

The corresponding noise voltage delivered by the matched 50 Ohm source is vni = √(kT·R) = 4.47E-10 Vrms/√Hz. This can then be used to check whether the amplifier will work with a particular noise figure (from the expression 1 + NA/Ni).

Check to see whether NA, the input referred noise power generated by the amplifier itself, converted between voltage and power as needed, is acceptable or not. Remember to use the impedance level of 50 Ohm. Simple?


Example: If NF = 0.8 dB, then 1 + NA/Ni = 10^0.08 ≈ 1.2. We can calculate vna in the same way as vni above.

Here is a note on input noise. It has been found that the -174 dBm/Hz should be modified to -162 dBm/Hz for the rural environment in the US and to -98 dBm/Hz for the urban environment. The -174 dBm/Hz is therefore a theoretical figure used to specify and calculate noise figures and noise factors!

Yes, another thought; we need to make sure that the derivation of the noise factor is elaborated. Here it is:

Noise factor F = SNRi/SNRo, where i stands for input and o stands for output.

So = Si × G (G = gain)
No = G(Ni + NA), where Ni is the noise power from the 50 Ohm source and NA is the noise power generated by the amp, referred to the input.

F = [Si/Ni] / [G·Si/G(Ni + NA)] = (Ni + NA)/Ni = 1 + NA/Ni.
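Putting the pieces above together, a short script can convert a noise figure in dB to input-referred noise voltage densities. The constants (kT at 290 K, a 50 Ohm matched source) follow the post; the function name and return convention are our own illustrative choices.

```python
import math

K_B, T0, R = 1.380649e-23, 290.0, 50.0    # Boltzmann const, standard temp, source R

def nf_to_input_noise(nf_db, r=R):
    """Convert noise figure (dB) to input-referred noise voltage densities.

    Returns (v_total, v_amp) in V/sqrt(Hz), using F = 1 + NA/Ni with
    Ni = kT (-174 dBm/Hz) from the matched source.
    """
    F = 10.0 ** (nf_db / 10.0)            # noise factor (linear)
    ni = K_B * T0                         # available source noise power, W/Hz
    v_total = math.sqrt(F * ni * r)       # source plus amplifier, at the input
    v_amp = math.sqrt((F - 1.0) * ni * r) # amplifier contribution NA alone
    return v_total, v_amp

v_tot, v_amp = nf_to_input_noise(0.8)     # the NF = 0.8 dB example above
print(f"total {v_tot:.3e} V/rtHz, amp-only {v_amp:.3e} V/rtHz")
```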

Also for other items of engineering interest go to our website at www.signalpro.biz.

IC Design: A low power 32.768Khz crystal oscillator

The frequency 32,768 Hz is one of the most popular frequencies for crystal oscillators, as it is used in most timekeeping applications. With the proper interface circuit (PLLs) it can also be used for high frequency synthesizers. The actual quartz is also relatively inexpensive, and this lends itself to cost effective frequency circuits and timekeeping. Of course, temperature control can also be used to generate TCXOs.

In any case, we recently designed, fabricated and thoroughly analyzed a low power crystal oscillator (in conjunction with our sister company). The circuit was first pass functional. The crystal oscillator section draws a mere 200 to 500 nA of current at the rated frequency.

The entire chip consists of a crystal oscillator, a low power analog buffer, a level converter and a digital output buffer capable of driving 100 pF. In addition the device has a means of trimming the frequency using an analog trim as well as a digital fine trim.

The device was evaluated thoroughly and its temperature characteristics measured extensively. Interested parties may contact us through our website at www.signalpro.biz for our experience and these results. All in all a most satisfying experience!

IC Design: FIR Filters

FIR filters are strictly not analog or even mixed signal in nature. They are, in fact, digital circuits. However, it seems that more and more of these filters are being used in mixed signal designs, especially in fine line semiconductor processes, where analog processing is used to convert to digital, and then circuit blocks such as FIR filters, comb filters, multipliers, etc. take on the task of further signal processing within a chip. A perfect example is a sigma delta A/D converter. Here there is a minimum of analog circuitry, followed by significant amounts of digital circuitry. Among these are digital filters (usually sinc filters). A brief note on the practicalities of FIR filter design is presented and can be found in the engineering pages of our website: www.signalpro.biz.
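For readers who want to see the arithmetic, a direct-form FIR is just the convolution sum y[n] = Σ h[k]·x[n-k]. A minimal sketch follows; the 4-tap moving average used as the tap set is an arbitrary illustrative choice, not a recommended filter design.

```python
def fir_filter(x, h):
    """Direct-form FIR: y[n] = sum over k of h[k] * x[n - k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:          # samples before x[0] are taken as zero
                acc += hk * x[n - k]
        y.append(acc)
    return y

h = [0.25, 0.25, 0.25, 0.25]        # 4-tap moving average (a crude low-pass)
x = [1.0] * 8                       # unit step input
print(fir_filter(x, h))
```

The step response ramps up over the filter length and then settles, which is the finite impulse response property that makes these filters unconditionally stable and easy to make linear phase.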

IC Design: The DMOS transistor

We are all familiar with the MOSFET. Some of us are also very familiar with JFETs.
However, there are a number of transistor types that are not so common. One of these is the DMOS transistor, or double diffused MOS transistor. In recent years the DMOS transistor has been used more and more to provide high voltage capability to analog and mixed signal IC designers. It is very popular in the design of MEMS interfaces, where higher voltages are required. Currents are usually not high. DMOS transistors can deliver higher currents but need a larger size; the tradeoff is obvious. The DMOS structure is an interesting one. For further detailed information please go to our website, www.signalpro.biz, and take a look at the DMOS tutorial article in the engineering pages.

IC Design: Bandwidth requirement to pass fast rising digital signals.

How wide must the bandwidth be to pass a fast rising digital pulse so that at the output it can still be recognized as a pulse and detected? A common enough question; however, sometimes the answer is not so obvious. Common wisdom says the minimum 3 dB point of the filter or medium through which the pulse has to transition must be at least 1/(pi*tr), where tr is the risetime and pi is 3.1415 etc. Upon simulation using a simple RC filter, the results are: (a) The rule is correct. (b) The pulse width and period must be such as to accommodate the rise and fall times of the pulse. (c) The bandwidth may be narrower if the detection threshold can be set higher. (d) If the detection threshold is low, then detection errors may occur if the above rules are disobeyed!
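The 1/(pi*tr) rule above is a one-liner to apply. A sketch, with an assumed 1 ns rise time as the example:

```python
import math

def min_bandwidth(tr):
    """Minimum 3 dB bandwidth (Hz) to pass a pulse with rise time tr (s),
    per the 1/(pi * tr) rule quoted above."""
    return 1.0 / (math.pi * tr)

tr = 1e-9    # assumed rise time, 1 ns
print(f"tr = {tr:.0e} s -> BW >= {min_bandwidth(tr) / 1e6:.0f} MHz")
```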

IC Design: The half IF spurious response and the second order intercept point.

An irksome 2nd-order spurious response, called the half-IF (1/2 IF) spurious response, is defined for the mixer indices (m = 2, n = -2) for low-side injection and (m = -2, n = 2) for high-side injection. For low-side injection, the input frequency that creates the half-IF spurious response is located below the desired RF input frequency by an amount fIF/2. For example, suppose the desired RF frequency is 2400 MHz; in combination with an LO frequency of 2200 MHz, the resulting IF frequency is 200 MHz. In this example, an undesired signal at 2300 MHz causes a half-IF spurious product at 200 MHz. For high-side injection, the input frequency that creates the half-IF spurious response is located above the desired RF by fIF/2. Note that high side injection implies that the LO frequency is above the RF frequency, and low side injection implies that the LO frequency is below the RF frequency.
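The arithmetic in the example above is easy to verify: the (m = 2, n = -2) product of the 2300 MHz interferer and the 2200 MHz LO lands exactly on the 200 MHz IF. A quick sketch:

```python
def half_if_spur_freq(f_rf, f_if, low_side=True):
    """Input frequency (same units as f_rf) that produces the half-IF
    spurious response: f_rf - f_if/2 for low-side injection,
    f_rf + f_if/2 for high-side injection."""
    return f_rf - f_if / 2.0 if low_side else f_rf + f_if / 2.0

f_rf, f_lo = 2400.0, 2200.0          # MHz, low-side injection example above
f_if = f_rf - f_lo                   # 200 MHz
spur = half_if_spur_freq(f_rf, f_if)
# the (2, -2) mixing product of the spur input and the LO:
print(spur, "MHz interferer ->", 2 * spur - 2 * f_lo, "MHz product at the IF")
```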

The second order intercept point is used to predict the mixer performance with respect to the half IF spurious response. For further details please see the article under engineer's corner/engineering pages in our website at www.signalpro.biz.

IC Design: A reduced power dissipation clock driver

When designing clock drivers for capacitive loads ( or indeed for any load), using a CMOS inverter type driver, the power dissipation can be large if precautions are not taken to attenuate the direct current that flows from the P or N channel output transistors, when, for a fraction of the drive cycle both the transistors may be momentarily ON.

A simple way to alleviate this problem is to use a non-overlapping clock driver. Such a driver is presented on our website at www.signalpro.biz/engineer's corner. A simple and useful circuit.

IC Design: Load line analysis for RF power amplifiers


The most basic of analyses is the load line analysis for RF power amps (or, for that matter, any power amp). It is true that we all learned this in our formative years. However, it is equally true that we graduated to high performance, complicated CAD programs that do so many things in an invisible manner that we no longer want to know (sometimes) how the tool got to where it got to. A somewhat similar condition is common in digital ASIC design, where the designer no longer needs to know how the logic gate works or what its device level parameters are. He or she simply writes the code that enables the design at a high level of abstraction. A brief exposé of load line analysis is presented in a newly released paper by SPG and may be found at www.signalpro.biz under engineer's corner for interested readers.

IC Design: RF power amplifier design: Load pull analysis


In the design of RF power amplifiers it is useful (and important) to know how the output power of the amplifier is influenced by changes of the load impedance under varying conditions. In order to get an understanding of this, a useful technique is "load pull analysis". It is a (usually) graphical technique that uses the Smith Chart to plot contours of the load impedance for fixed constant powers. It provides valuable information to the engineer/user about the performance of the amplifier, for assessment of the quality of the amplifier, conditions of operation, design fit or various other parameters. A technical article on the technique has been released by Signal Processing Group technical staff and is available for perusal by interested parties at www.signalpro.biz>engineer's corner.

IC Design: USB3 interface IC design: The K28.5 test sequence


In NRZ (non return to zero) signaling, a series of 1's and 0's is used. The probability of occurrence of each digit is 50%. As a result there is a relatively high probability of getting a long series of 0's or 1's in the signal. The spectrum of such a sequence contains low frequency content, and consequently high frequency transmission design can become difficult. In order to alleviate this problem, data encoding or scrambling is used. A typical technique (used in USB3, for example) is 8b/10b encoding. In this case, an 8 bit word is encoded into a 10 bit word. The extra bits are added to make the number of 0's equal to the number of 1's in a given bit interval. Additionally, this encoding can also be used to improve BER (but that is another posting!). For different applications, different types of encoding may be used, as well as test patterns. One of the test patterns (a ubiquitous one) is the K28.5 pattern. This pattern is a composite of a K28.5+ and a K28.5- bit word and can be described as follows: K28.5+ = 1100000101 and K28.5- (the inverse of K28.5+) = 0011111010. The complete pattern is thus 11000001010011111010. In USB3 circuit design, this pattern is encountered often. Please visit our website at www.signalpro.biz and the engineer's corner for other interesting articles on wireline communications.
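The DC balance of the composite pattern above is easy to verify in a couple of lines; the 20-bit sequence contains exactly ten 1's and ten 0's:

```python
K28_5_PLUS = "1100000101"
K28_5_MINUS = "0011111010"          # bitwise inverse of K28.5+
pattern = K28_5_PLUS + K28_5_MINUS  # the composite 20-bit test pattern

ones, zeros = pattern.count("1"), pattern.count("0")
print(pattern, "->", ones, "ones,", zeros, "zeros")  # DC balanced
```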

IC Design: Thermal modeling in IC design


After a number of mishaps in the design of power ICs and modules, with respect to devices blowing up, it was decided that we would go back to first principles, understand thermal effects, and furthermore use thermal modeling to design better, safer and more robust (with respect to thermal operation) devices and modules. In our attempt to do this we came across many different types of information in the literature concerning thermal design, from the very simplistic thermal resistance and power relationships to fairly complicated thermal models. We also came across thermal modeling software information. This was in the year 2008. We took this information and wrote a brief thermal modeling technical note in the hope that we would have fewer incidents of thermally caused destructive events. Another interesting result of this was that we were able to set up thermal models of MCMs and devices that were not yet in existence and study the effects of thermals on these yet-to-be devices. These models were built up in MATLAB/SIMULINK and were still fairly simple. We used commercially available thermal modeling software for more complex models. All this effort did help, and in the end we were able to meet our thermal design goals in a large number of projects. The initial note was released for publication and now resides at www.signalpro.biz >> engineer's corner for those interested in thermal modeling or thermal effects. We acknowledge the contribution made to thermal modeling by a number of authors, both on and off the web.

IC Design: Thermal management in IC design


More and more thermal management is required for current analog devices as power dissipation levels climb. In some devices such as power amplifiers, LED drivers, DC-DC converters and other higher power devices, the problem is so obvious as to sometimes burn one ( and not just metaphorically speaking) in a significant manner. The demonstration of the Dell laptop which burst into flame not so very long ago was a vivid demonstration of what we as design engineers have to live with. It's not just the power devices that need thermal management. A cooler device will run better in many environments, so even lower power devices need thermal management. A PCB with various power level devices mounted on it needs thermal management. In spite of these reasons, thermal management is not as well understood as one would expect. This post is an attempt to bring attention to, and provide some useful information for, thermal management. In a recent report released by Signal Processing Group Inc., to be found at www.signalpro.biz, information and resources can be found by interested parties. Please go to www.signalpro.biz > engineer's corner and access the thermal management articles.

IC Design: Heat sinks in IC design


As power circuit designs and devices proliferate in products such as LED drivers, HID lighting, motor control and electric vehicles, it is becoming important to understand thermal effects in active devices. All active devices dissipate power, and power active devices dissipate lots of power. This power dissipation creates heat which must be removed by some means to prevent excessive heat buildup inside a package or module, which would ultimately lead to destruction of the appliance, circuit or device. One of the ways devices can be made safer, thermally that is, is the use of a passive heat sink. The role of the heat sink in active device thermal management is explored in a recent report released by Signal Processing Group Inc.'s technical staff and may be found at: http://www.signalpro.biz > engineer's corner > heatsink.pdf.
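To make the heat sink's role concrete, here is the standard steady-state thermal-resistance chain in Python; the theta values below are illustrative numbers, not taken from the report:

```python
# First-order steady-state junction temperature with a heat sink, using the
# standard chain Tj = Ta + P * (theta_jc + theta_cs + theta_sa).
# All numbers below are illustrative.

def junction_temp(p_diss_w, t_amb_c, theta_jc, theta_cs, theta_sa):
    """Junction temperature in C for a device mounted on a heat sink."""
    return t_amb_c + p_diss_w * (theta_jc + theta_cs + theta_sa)

# Example: 10 W device, 25 C ambient, 1.5 C/W junction-to-case,
# 0.3 C/W case-to-sink (grease), 4.0 C/W sink-to-ambient.
tj = junction_temp(10.0, 25.0, 1.5, 0.3, 4.0)
print(round(tj, 1))  # 83.0 C
```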

IC Design: Decimation filters for sigma-delta A/D converters


A typical filter used as the pre-decimation filter for an oversampled A/D is the Hogenauer filter, also called the CIC filter. These filters have some advantages which make them particularly suitable for use as decimation filters. In general the output stream from an OSR A/D is a 1 bit high frequency digital signal. The 1 bit signal has to be downconverted in frequency and increased in bit width. This is the fundamental decimation operation. Hogenauer filters offer the following advantages: (1) No multipliers are needed. (2) No storage is needed for filter coefficients. (3) Intermediate storage is reduced by integrating at the high sampling rate and comb filtering at a low rate. (4) The structure of the CIC filter is very uniform, using only two basic building blocks. (5) Little external control or complicated local timing is required. (6) The same design can easily be used for a wide range of rate change factors with the addition of a scaling unit. As a result of these advantages, Hogenauer filters have been used and continue to be used in oversampled systems. A technical report prepared by technical staff at Signal Processing Group Inc. is now available in a series of posts that deal with the Hogenauer filter as well as OSR A/D converters. Since the CIC filter is an important component at the back end of an OSR ADC, understanding its design parameters is essential to the design of the overall OSR ADC. Subsequent posts deal with the details of design for decimation filters. The paper may be found at http://www.signalpro.biz > engineer's corner.
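The integrate-decimate-comb structure described above can be sketched in a few lines of Python ( differential delay M = 1 assumed; this is an illustration, not the report's implementation):

```python
# Minimal N-stage CIC (Hogenauer) decimator: N integrators at the input rate,
# decimation by R, then N comb (differencing) stages at the output rate.
# Differential delay M = 1. Adders only: no multipliers, no coefficients.

def cic_decimate(x, R, N):
    # Integrator section (runs at the high sample rate)
    for _ in range(N):
        acc, y = 0, []
        for v in x:
            acc += v
            y.append(acc)
        x = y
    # Decimation by R
    x = x[::R]
    # Comb section (runs at the low sample rate), differential delay of 1
    for _ in range(N):
        prev, y = 0, []
        for v in x:
            y.append(v - prev)
            prev = v
        x = y
    return x

# A DC input of amplitude 1 settles to the CIC gain (R*M)**N = 8**2 = 64.
out = cic_decimate([1] * 64, R=8, N=2)
print(out[-1])  # 64
```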

IC Design: Random signal generation for SPICE/PSPICE


The SPICE programs we use for circuit simulation do not have a direct way to generate random waveforms; i.e., there is no voltage or current source which can be attached to a circuit node to generate a random signal for analysis. As a result we developed code in MATLAB and C++ to generate a PWL ( piece-wise linear) random waveform of as long a length as required, which can then be attached to a node as a PWL source. Please contact us through our website located at http://www.signalpro.biz for more information about this circuit simulation tool.
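The idea can be sketched as follows; the source name, node names and parameters here are our own choices for illustration, not those of the actual SPG tool:

```python
# Sketch: emit a random PWL (piece-wise linear) waveform as a SPICE source
# line that can be pasted into a netlist. Illustrative only.
import random

def random_pwl(n_points, t_step, v_lo, v_hi, seed=1):
    """Return a SPICE PWL() source string of n_points uniform random voltages."""
    rng = random.Random(seed)   # seeded, so the waveform is reproducible
    pairs = []
    for i in range(n_points):
        t = i * t_step
        v = rng.uniform(v_lo, v_hi)
        pairs.append(f"{t:.9g} {v:.6g}")
    return "Vnoise in 0 PWL(" + " ".join(pairs) + ")"

print(random_pwl(4, 1e-9, -0.1, 0.1))
```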

IC Design: Input impedance of differential stages


Use of the emitter coupled bipolar differential amplifier is prolific. In addition, a good way to stabilize gain and bias is the use of an emitter degeneration resistor. This post simply presents, without proof, what happens to the input impedance of the differential device when degeneration is used. First one has to know the rpi of the bipolar small signal model. This is calculated as Beta0/gm, where Beta0 is the DC current gain of the bipolar. If no degeneration is used, this is the input impedance of the transistor. When a degeneration resistor is used, the impedance rises significantly. The rise in input impedance is (Beta + 1)*Re, where Beta is the current gain at the particular bias point and frequency and Re is the degeneration resistor. Therefore the total input impedance rises to rpi + (Beta + 1)*Re. For other items of interest please visit our website at http://www.signalpro.biz.
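A quick numerical check of these expressions ( bias values are illustrative):

```python
# rpi = Beta0/gm, and with emitter degeneration the input impedance per side
# rises to rpi + (Beta + 1)*Re, as stated above.

def r_pi(beta0, ic, vt=0.02585):
    gm = ic / vt              # transconductance at collector current ic
    return beta0 / gm

def zin_degenerated(beta, re_ohms, rpi):
    return rpi + (beta + 1) * re_ohms

rpi = r_pi(beta0=100, ic=1e-3)                   # Beta0 = 100 at Ic = 1 mA
print(round(rpi, 1))                             # 2585.0 ohms
print(round(zin_degenerated(100, 100.0, rpi), 1))  # 12685.0 ohms with Re = 100
```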

IC Design: PRBS signal power calculations using sinc squared functions


The power in a PRBS NRZ signal is expressed as a sinc squared function of the independent variable x. In order to calculate the power in this signal from 0 to some arbitrary x, a definite integral of the sinc squared function has to be found. This is not an easy task. A search of the web for ready solutions to this problem turned up very few relevant references. Therefore a technique was developed from series expansions of the sinc and sinc squared functions. The accuracy of the estimates found using this technique is completely dependent on the engineer. We found that using just four or five terms in the expansion allowed us to calculate to within accuracies of interest to us. The technical report can be found in the engineer's corner on the SPG website located at http://www.signalpro.biz.
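As a sketch of the series approach ( our own reconstruction, not necessarily the report's exact expansion), one can expand sinc squared using sin^2(x) = (1 - cos 2x)/2 and integrate term by term:

```python
# Series estimate of the integral of sinc^2(x) = (sin x / x)^2 from 0 to X:
#   sinc^2(x) = sum_{m>=0} (-1)^m 2^(2m+1) x^(2m) / (2m+2)!
# integrated term by term. As the post notes, accuracy depends on X and on
# how many terms the engineer keeps.
import math

def sinc2_integral(X, terms=8):
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m * 2 ** (2 * m + 1) * X ** (2 * m + 1)
                  / ((2 * m + 1) * math.factorial(2 * m + 2)))
    return total

# Cross-check against a brute-force Riemann sum:
def sinc2(x):
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

X, steps = 2.0, 100000
num = sum(sinc2(i * X / steps) * X / steps for i in range(steps))
print(round(sinc2_integral(X, terms=12), 4), round(num, 4))
```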

IC Design: The eye diagram


The eye diagram is a very useful and practical tool for analyzing, evaluating, diagnosing and correcting errors in digital communication systems, or indeed any digital/wireless system. The premise is fairly simple. Using the eye diagram, a number of valuable parameters may be extracted at a glance. These parameters play a critical role in the transmission and reception of data. An intuitive understanding of the eye diagram is essential for good design technique and analysis of systems. Simulation of the eye diagram and its measurement can be better understood if one knows the underlying technique of eye diagram construction. A brief exposé of this tool can be found at http://www.signalpro.biz > engineer's corner.
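The underlying construction is simply folding the waveform at the unit interval so successive bits overlay. A minimal sketch with synthetic NRZ data ( all numbers illustrative):

```python
# Fold a sampled waveform at the unit interval (UI) so successive bits
# overlay; each row of `traces` is one overlaid trace of the eye.
import random

def eye_traces(samples, samples_per_ui, uis_per_trace=2):
    span = samples_per_ui * uis_per_trace
    return [samples[i:i + span]
            for i in range(0, len(samples) - span + 1, samples_per_ui)]

# Crude NRZ source: 50 random bits, 16 samples/bit, one-pole low-pass "channel".
rng = random.Random(0)
bits = [rng.randint(0, 1) for _ in range(50)]
alpha, y, wave = 0.35, 0.0, []
for b in bits:
    for _ in range(16):
        y += alpha * (b - y)      # single-pole step response
        wave.append(y)

traces = eye_traces(wave, samples_per_ui=16)
print(len(traces), len(traces[0]))  # 49 traces of 32 samples each
```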

IC Design: Why 50 Ohm?


Has anyone wondered why we use 50 Ohms as the reference resistance in so many of our designs? Why does 50 Ohms seem to be a de facto standard? We normalize to 50 Ohms; we use 50 Ohms in our oscilloscopes; we pick 50 Ohms as a good convenient reference resistor. But how did this happen? Where did this 50 Ohm factor come from? We ran across an explanation which sounds reasonable enough and decided to post it to this blog. Standard coaxial lines in England in the 1930's used a commonly available center conductor which turned out to give 50 Ohms! Others say that for minimum signal attenuation the transmission line characteristic impedance is 77 Ohms, while for maximum power handling it is around 30 Ohms. A good compromise between the two performance parameters is 50 Ohms. So this is how 50 Ohms became a convenient impedance level!?

IC Design: Peaking current source design


Most of us are very familiar with the Widlar current source, which uses a resistor in series with a diode connected bipolar to act as a source for a current. It is probably the most popular current source in existence. However, this source does have its problems, such as variations with resistance and the low input resistance of the bipolar. There is another, lesser known current source, the "peaking" current source, which at times can be used with advantages beyond those offered by the time-honored Widlar source. It is also useful when the supply voltages are low. A white paper on this source is available now courtesy of the Techteam at Signal Processing Group Inc. For interested readers it is located at http://www.signalpro.biz > engineer's corner.
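For the classic peaking source the output device sees Vbe2 = Vbe1 - I1*R, so I2 = I1*exp(-I1*R/Vt), which is maximum ("peaks") at I1 = Vt/R. A small numerical sketch ( component values are our own, not the white paper's):

```python
# Peaking current source transfer: I2 = I1 * exp(-I1*R/Vt), peak at I1 = Vt/R.
import math

VT = 0.02585  # thermal voltage at ~300 K, volts

def i_out(i_in, r_ohms, vt=VT):
    return i_in * math.exp(-i_in * r_ohms / vt)

R = 2585.0                      # chosen so the peak sits at I1 = Vt/R = 10 uA
peak_i1 = VT / R
print(round(peak_i1 * 1e6, 3))                      # 10.0 uA input at the peak
print(i_out(peak_i1, R) > i_out(0.5 * peak_i1, R))  # True
print(i_out(peak_i1, R) > i_out(2.0 * peak_i1, R))  # True
```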

IC Design: Lumped and distributed elements


How does one determine whether to treat a component as a lumped or distributed one? The answer is that if the element size is greater than lambda/20, where lambda is the effective wavelength of the signal associated with the element, then it should be treated as a distributed component or element. This means that for typical discrete designs, the lumped approximations are valid for frequencies in the 500 to 1000 MHz range. For ICs the frequency range is much larger because of the small size of the elements encountered there. This range may be up to 10 GHz. One has to ask, where did the 5% of lambda come from? Like most other things in practical engineering, it is an approximation and a rule of thumb. It should be considered a guideline. A distributed model is usually more accurate for any frequency above DC, but experience says that the 5% guideline is a good transition value. Note: the effective lambda is the lambda in free space divided by the square root of the effective dielectric constant. The effective dielectric constant in homogeneous media is simply the relative permittivity. For non-homogeneous media it is not. Usually, for non-homogeneous systems such as microstrip, the effective dielectric constant is less than the relative permittivity.
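The guideline in executable form ( a sketch; the eps_eff value is an assumed FR-4-like number):

```python
# The lambda/20 guideline: effective wavelength is the free-space wavelength
# divided by sqrt(eps_eff), as the note above states.
import math

C0_CM_PER_S = 3e10  # speed of light, cm/s

def is_lumped(element_size_cm, f_hz, eps_eff):
    lam_eff = C0_CM_PER_S / (f_hz * math.sqrt(eps_eff))
    return element_size_cm < lam_eff / 20

# A 5 mm discrete part on an FR-4-like board (eps_eff ~ 3.2):
print(is_lumped(0.5, 1e9, 3.2))    # True at 1 GHz -> lumped is fine
print(is_lumped(0.5, 10e9, 3.2))   # False at 10 GHz -> treat as distributed
```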

IC Design: Temperature independent resistors in ICdesign


In IC technology all resistor materials have an associated temperature coefficient. Most commonly, resistors are made from polysilicon, diffusions of various kinds, and metal. The most common of these are poly and diffusion resistors. In certain applications a temperature independent resistor may be required. To build one, the designer has to search the technology properties to see if there are resistor materials in the technology that can provide (1) an appropriate sheet resistance and (2) opposite temperature coefficients. Almost all semiconductor technologies provide this. Once the materials are established, a first order temperature independent resistor may be synthesized as shown in a recent report released by Signal Processing Group Inc. This report may be found at:
http://www.signalpro.biz>engineer's corner.
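The first-order idea can be sketched as follows; the resistance and TC numbers are invented for illustration and do not come from the report:

```python
# Put two resistor materials with opposite linear temperature coefficients in
# series so the net first-order TC cancels:
#   R1*tc1 + R2*tc2 = 0  with  R1 + R2 = r_total.

def series_tc_cancel(r_total, tc1_ppm, tc2_ppm):
    """Split r_total into R1 (tc1) + R2 (tc2) with zero net linear TC.
    Requires tc1 and tc2 to have opposite signs."""
    r1 = r_total * tc2_ppm / (tc2_ppm - tc1_ppm)
    return r1, r_total - r1

r1, r2 = series_tc_cancel(10_000.0, tc1_ppm=-800.0, tc2_ppm=+2000.0)
print(round(r1, 1), round(r2, 1))                    # 7142.9 2857.1
print(abs(r1 * -800.0 + r2 * 2000.0) < 1e-6)         # True -> first-order TC cancels
```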

IC Design: An analog front end IC for multiple applications


Signal Processing Group Inc. has released an interesting device ( silicon proven and volume production proven ) for use as a mixed signal controller. At its input is a two/three wire interface ( clock, data and latch pins) which is used to communicate with a micro-controller and memory of the user's choice. The protocol is much like an I2C protocol. The inputs are digital words which drive currents multiplexed into a set of six outputs. These outputs can be used to drive LEDs ( 50 mA each) or other transducers such as pressure sensors, motors, etc. A feedback TIA ( trans-impedance amplifier) is used to capture an analog feedback signal. This feedback signal is converted to a 10 bit digital word ( conversion time is approximately 100 us) and sent via the serial interface to a micro-controller for processing. Looking at these functional blocks, the device appears well suited for feedback control of various micro-systems including automatic lighting control, toys, sensor interfaces, etc. For further information and a detailed datasheet please go to the SPG website at http://www.signalpro.biz and use the link to proven IP.

IC Design: First order filter parameter calculations


A recurring problem in AC filter circuit design is the calculation of attenuation at a particular frequency or, conversely, the calculation of a frequency given the attenuation. Related calculations deal with estimation of time constants and filter parameters such as resistance and capacitance. These calculations play a crucial role in the design of anti-aliasing filters, low pass filters, phase locked loops, etc. A paper published recently by the techteam at Signal Processing Group Inc. documents these calculations and provides examples for interested readers, cookbook fashion. The paper is located at http://www.signalpro.biz > engineer's corner.
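For a single-pole RC low-pass with fc = 1/(2*pi*R*C), the two recurring calculations look like this ( component values are illustrative):

```python
# First-order low-pass, cookbook style: attenuation at a frequency, and the
# frequency that gives a target attenuation.
import math

def atten_db(f, fc):
    """Attenuation (positive dB) of a first-order low-pass at frequency f."""
    return 10 * math.log10(1 + (f / fc) ** 2)

def freq_for_atten(a_db, fc):
    """Frequency at which the filter shows a_db of attenuation."""
    return fc * math.sqrt(10 ** (a_db / 10) - 1)

fc = 1 / (2 * math.pi * 10e3 * 1e-9)   # R = 10k, C = 1 nF -> fc ~ 15.9 kHz
print(round(atten_db(fc, fc), 2))      # 3.01 dB at the corner
print(round(freq_for_atten(20.0, fc))) # ~10x fc for 20 dB
```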

IC Design: Image frequency in RF circuits


The image frequency in RF/wireless receivers is an issue that has to be understood by radio designers and tackled for robust design. The image frequency is a so-called spurious signal which can cause a number of bad effects. Its origin lies in the mixing of multiple frequency signals in the receiver mixer. A paper released recently by Signal Processing Group Inc. describes this effect in simple terms so that an understanding of the effect may be obtained by interested designers. The paper can be accessed in the engineer's corner at http://www.signalpro.biz.
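The basic arithmetic behind the effect: the image sits 2*fIF away from the wanted RF, on the far side of the LO, so both mix down to the same IF. A minimal sketch using the classic FM broadcast numbers:

```python
# Image frequency for a given RF and IF. With high-side injection
# (f_lo = f_rf + f_if), both f_rf and f_rf + 2*f_if land on the same IF.

def image_freq(f_rf, f_if, high_side=True):
    """Image frequency; sits 2*f_if away from the wanted RF."""
    return f_rf + 2 * f_if if high_side else f_rf - 2 * f_if

f_rf, f_if = 100e6, 10.7e6           # classic FM broadcast example
print(image_freq(f_rf, f_if) / 1e6)  # 121.4 MHz
```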

IC Design: The MOS varactor: An introduction


In many IC designs frequency based trimming or control is required. For instance, a filter may need to be trimmed for corner frequencies. A PLL VCO needs to be controlled by changing the frequency based on its feedback signal. An adaptive equalizer needs to shift its pole-zero configuration. These and many other related applications need a device that is voltage controllable and offers a change of reactance. The varactor is a component used frequently to do this. In general, varactors are assumed to be junction type devices where the depletion capacitance can be changed to vary the reactance. In CMOS or BiCMOS processes another type of varactor is available, almost as a byproduct of the MOSFET structure. This is the MOS varactor. It seems that every CMOS process has the capability to produce a MOS varactor. However, although the varactor is available, it may have some limitations of Q and sensitivity. In addition, most CMOS technology vendors do not characterize or optimize their MOS varactors. This is left to those specialized technology vendors who offer high performance or RF type processes. A recent report on the MOS varactor is available as an introduction at http://www.signalpro.biz > engineer's corner for interested parties.

Friday, July 6, 2012

IC Design: Useful identities for CMOS IC design


Powerful simulation programs can be used today to simulate IC circuits to the nth degree. However, the initial design is usually done using some identities for the various DC and AC parameters of the MOSFET. These allow fast hand calculations and a sanity check of the results obtained from the more complex models built into the simulator. A set of these identities can be found on the SPG website under engineer's corner. Interested readers can access these at http://www.signalpro.biz > engineer's corner.
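A couple of the square-law identities in executable form ( long-channel approximations with made-up process numbers; good only for sanity checks, which is the point of the post):

```python
# Square-law MOSFET hand-calculation identities.
import math

def id_sat(kp, w_over_l, vgs, vth):
    """Saturation drain current: Id = (kp/2)(W/L)(Vgs - Vth)^2."""
    return 0.5 * kp * w_over_l * (vgs - vth) ** 2

def gm_from_id(kp, w_over_l, i_d):
    """Transconductance: gm = sqrt(2 * kp * (W/L) * Id)."""
    return math.sqrt(2 * kp * w_over_l * i_d)

kp, wl, vth = 100e-6, 10.0, 0.5       # illustrative process numbers
i_d = id_sat(kp, wl, vgs=0.9, vth=vth)
print(round(i_d * 1e6, 1))            # 80.0 uA
print(round(gm_from_id(kp, wl, i_d) * 1e3, 3))  # 0.4 mS ( = 2*Id/Vov )
```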

IC Design: Useful identities for bipolar IC design


Bipolar design has been popular for a very long time, and the bipolar device is still used today in various forms: in standard bipolar processes, in combination with CMOS in BiCMOS processes, in high current designs, and in high voltage with high current designs. Technology and device vendors keep improving their technologies and processes. Recently, the advent of SiGe technology has also provided a very high performance bipolar device. For the design engineer, a set of identities which provides a way to do simple hand calculations for a bipolar device in a circuit can be useful. Ultimately, the circuit design can be either breadboarded or simulated to evaluate performance; however, hand calculations can be and should be a first step. To facilitate this process the technical team at Signal Processing Group Inc. has recently released a brief paper on some useful bipolar design identities. This is available on the SPG website in the "Engineer's corner". Please visit http://www.signalpro.biz.
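A few of the standard bipolar small-signal identities, runnable for quick sanity checks ( values are illustrative):

```python
# Standard bipolar hand-calculation identities.
import math

VT = 0.02585                       # thermal voltage at ~300 K

def gm(ic, vt=VT):                 # transconductance
    return ic / vt

def r_pi(beta, ic, vt=VT):         # small-signal input resistance, base side
    return beta / gm(ic, vt)

def ic_from_vbe(i_s, vbe, vt=VT):  # exponential (translinear) law
    return i_s * math.exp(vbe / vt)

print(round(gm(1e-3) * 1e3, 2))    # 38.68 mS at 1 mA
print(round(r_pi(100, 1e-3), 1))   # 2585.0 ohms for Beta = 100
```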

IC Design: Note on the reference for bondwire fusing article in engineer's corner


This is a note to confirm a reference quoted in an article in engineer's corner on bondwire fusing current. The complete reference should read, J. Thomas May, Electrical Overstress - Electrostatic Discharge Symposium 1994.

IC Design: De-embedding in high frequency measurements


High frequency measurements for circuits such as MMICs and high speed digital circuits are made using some kind of Vector Network Analyzer ( VNA) or TDR instrument. In most cases the DUT ( device under test) is mounted on a test fixture which typically has an input connector and microstrip and an output connector and microstrip. The measurements are to be made on the characteristics of the DUT alone. To do this, the test fixtures have to be de-embedded. This technique and its basics form the subject of the latest brief paper from the technical team at Signal Processing Group Inc. It can be found at http://www.signalpro.biz in the Engineer's Corner.
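The cascade relation that de-embedding rests on can be sketched with ABCD ( chain) matrices: A_meas = A_in * A_dut * A_out, so A_dut = inv(A_in) * A_meas * inv(A_out). This is the textbook idea only, not the procedure in the SPG paper:

```python
# De-embed a DUT from fixture halves known as 2x2 ABCD (chain) matrices.

def mat_mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_inv(a):
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    return [[a[1][1]/det, -a[0][1]/det], [-a[1][0]/det, a[0][0]/det]]

def de_embed(a_meas, a_in, a_out):
    return mat_mul(mat_mul(mat_inv(a_in), a_meas), mat_inv(a_out))

# Check: embed a known DUT in identical fixture halves, then recover it.
a_dut = [[1.0, 25.0], [0.0, 1.0]]     # series 25-ohm element
a_fix = [[0.9, 5.0], [0.001, 1.2]]    # arbitrary invertible fixture half
a_meas = mat_mul(mat_mul(a_fix, a_dut), a_fix)
print(de_embed(a_meas, a_fix, a_fix)) # recovers a_dut (to rounding)
```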

IC Design: Multi-chip in a package technology


When a designer has a system built from chips with differing voltages, currents, frequencies and special characteristics, it is difficult to integrate the system for cost or size reduction. In this case the usual approach is a motherboard-daughterboard combination ( usually, but not always). Recently it appears that designers are turning to multi-chip in a package technology. In this case a package is used which has die assembled in it in a vertical configuration or a side by side combination. Properly done, this can be a powerful way of getting the job done in a shorter time and at less cost than a difficult integration approach. The design of the multi-chip configuration is the key. Some parameters to be considered seriously are temperature effects, parasitic connections, grounding, and frequency performance. Signal Processing Group Inc. is offering a multi-chip in a package design and assembly service for interested users. The SPG website is located at http://www.signalpro.biz.

IC Design: The wave number β or the phase constant


β is an important quantity used in understanding transmission lines and waveguides. It is not intuitive, so this treatment presents a brief explanation of the quantity in the analysis of transmission lines, waveguides and other wave systems.

Sometimes β is referred to as the phase constant of the line or guide. If the Cartesian coordinate system is used and a coordinate, say “z”, is taken as the direction of wave propagation, then βz measures the instantaneous phase at point z on the line with respect to z = 0.

In addition, the voltage or current on the line is the same at any two points separated in z such that βz differs by a multiple of 2π. Since the shortest distance between points where the voltage or current has the same phase is a wavelength:


βλ = 2π

( replacing z by λ),

β = 2π/λ

_____________________________________________________________

IC Design: Analog IC design: Magnitude and frequency scaling in filters


Not infrequently, filters are designed using a different scale for their component parts than the final requirement. For example, a filter could be designed for a frequency of 1.0, inductors in Henries and capacitors in Farads. The Smith chart uses scaling as a matter of common usage. It is therefore a vital part of the design engineer's repertoire to understand this concept. This post deals with the very basics of scaling in a cookbook fashion for simplicity. Here are the rules: (1) If each inductor and capacitor is multiplied by a quantity 1/alpha, then the network is said to be scaled in frequency by alpha. (2) If every resistance and inductance is multiplied by a quantity beta, and every capacitance is divided by beta, then the network is said to be magnitude scaled by beta.
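Applied to a normalized prototype (R = 1, L = 1, C = 1), the two rules look like this in Python ( target values chosen for illustration):

```python
# Frequency and magnitude scaling of a normalized filter prototype.
import math

def scale(r, l, c, alpha, beta):
    # Rule (1), frequency scaling: L and C each multiplied by 1/alpha.
    l, c = l / alpha, c / alpha
    # Rule (2), magnitude scaling: R and L multiplied by beta, C divided by beta.
    return r * beta, l * beta, c / beta

# Scale a 1 rad/s, 1-ohm prototype to 1 MHz and a 50-ohm level.
r, l, c = scale(1.0, 1.0, 1.0, alpha=2 * math.pi * 1e6, beta=50.0)
print(r)                  # 50.0 ohms
print(round(l * 1e6, 3))  # 7.958 uH
print(round(c * 1e9, 3))  # 3.183 nF
```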

IC Design: RFIC design: Electrical length


Sooner or later, the design engineer working in microwave or high frequency electronics is going to come up against the concept of electrical length. In order to understand this concept, let's work out the following arithmetic:

1.0 The wave number or phase constant = β = 2π/λ

For those unfamiliar with this, we recommend looking up the description of this quantity in the SPG blog at (http://signalpro-ain.blogspot.com/).

2.0 The electrical length is defined by θ = βl where l = physical length

3.0 θ = βl = (l/λ) * 360 degrees

Here λ is the wavelength of the signal in the applicable dielectric ( or sometimes called the guide wavelength).

4.0 For frequencies in GHz this becomes: θ = [360 * f(GHz) * l(cm) * √εeff]/30 degrees


In this case frequency is in GHz and physical length is in centimeters.

For example:

Let frequency be 1 GHz.
Let λ = 0.8 λ(air), i.e. √εeff = 1.25
Let l = 0.1 meters = 10 centimeters

Then :

θ = [360 * 1 * 10 * 1.25]/30 degrees

θ = 150 degrees
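The same arithmetic as a one-line function, reproducing the 150 degree result:

```python
# Electrical length in degrees: theta = 360 * f_GHz * l_cm * sqrt(eps_eff) / 30.

def electrical_length_deg(f_ghz, l_cm, sqrt_eps_eff):
    return 360.0 * f_ghz * l_cm * sqrt_eps_eff / 30.0

print(electrical_length_deg(1.0, 10.0, 1.25))  # 150.0 degrees
```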

IC Design: Microwave filters: Lumped element designs to transmission line equivalents


As frequencies increase, lumped elements no longer satisfy filter requirements for various reasons ( parasitics, accuracy, etc.). At this point the designer may choose to convert the lumped element filter to a distributed element filter. One of the techniques used in the conversion is the replacement of lumped elements with transmission line stubs. This technique is described in a white paper released recently by Signal Processing Group Inc. The paper may be found at http://www.signalpro.biz >> engineer's corner by interested readers.

IC Design: Measuring temperature in ICdesign


Measuring temperature fairly accurately can be done using a number of methods. The sensors that are available to do this are the RTD, the pn junction, the positive temperature coefficient (PTC) thermistor and the negative temperature coefficient (NTC) thermistor. Among these options, the NTC thermistor seems to be used more and more in applications where the temperature rate of change is fast. The advantages of this type of device are: fast reaction time, small size, two wire connection and relatively low cost. The disadvantages are: the temperature versus resistance characteristic is very non-linear, some kind of excitation is required, the temperature range is limited, and the device is subject to self heating and relatively fragile. In spite of the disadvantages, the thermistor is a choice many design engineers are making. The web has a number of good articles that are very helpful in understanding the thermistor: articles from Betatherm, Microchip Technology and National Instruments, to mention a few. The challenge does not lie in understanding the thermistor itself; it is very easy to understand, at least as far as the user's perspective goes. The challenge is in coming up with analog and mixed signal circuitry that interfaces with the thermistor and allows for accurate measurement of the temperature. Signal Processing Group Inc. has developed a number of circuits which can be used with varying accuracies to measure temperature with thermistors. Interested users may contact SPG at http://www.signalpro.biz > contact.
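To illustrate the non-linearity, here is the common two-point "Beta" model of an NTC thermistor ( the Steinhart-Hart equation is the more accurate alternative); the R25 and B values are typical catalog numbers, not tied to any specific SPG circuit:

```python
# NTC thermistor resistance versus temperature, two-point "Beta" model:
#   R(T) = R25 * exp(B * (1/T - 1/T25)), temperatures in kelvin.
import math

def ntc_resistance(t_c, r25=10e3, beta=3950.0):
    """NTC resistance in ohms at temperature t_c (Celsius)."""
    t_k, t25_k = t_c + 273.15, 298.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / t25_k))

# Note how strongly non-linear the characteristic is over a modest range:
for t in (0, 25, 50, 100):
    print(t, round(ntc_resistance(t)))
```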

IC Design: Why are power transfer and power quantities used in high frequency designs?


It is seen that in high frequency circuits, power transfer and power quantities are used. Typically dBm will be a standard unit in use. The question is: why? The answer is found in the relative performance of circuits at high and low frequencies. When frequencies are low, a voltage or current signal applied at an input of a circuit or chip is reproduced quite faithfully in the chip or at the operating terminals of the circuit. The same is true at the outputs. The reason is that parasitic quantities do not play as large a role at low frequencies. The situation is quite different at high or microwave frequencies. At these frequencies the voltage or current signal applied to the input terminal of a device package is not what the active device sees inside the package. The reason is, of course, the parasitics of the circuit. If instead of input current or input voltage we use power delivered to the input port as the signal quantity, this problem goes away, since reactances do not dissipate power. At the output, if the true available power gain of the device is given, we can calculate accurately what to expect, assuming no power is dissipated in the parasitic elements. These reasons are why RF/MMIC circuits are almost always designed with power flow or power transfer considerations.

IC Design: Definition of the Q factor


Definitions of the Q factor


1.0 Unloaded Q: Energy stored in the component/Energy dissipated in the component.

2.0 Loaded Q: Energy stored in the component/Energy dissipated in the component and the external circuit/load.

IC Design: Ferrite beads: Useful circuit components


Ferrite beads are a very low cost and easy way to add high frequency isolation loss in a circuit without power loss at DC and low frequencies. Ferrite beads are most effective at frequencies in excess of 1.0 MHz. When used with an appropriate parallel capacitance, they provide high frequency decoupling and parasitic suppression. A brief paper on ferrite beads has been released by Signal Processing Group Inc. and may be found at http://www.signalpro.biz >> engineer's corner.

IC Design: Equalizer design experience with IC design


About two and a half years ago we started a program for the design and development of wireline equalizers, both fixed and adaptive. Our first designs will be going into fabrication this month. This post is an attempt to document some issues and challenges we faced on this project.

1) Data and models of cables: It was immediately obvious that there is a big hole in the data for cables. Our designs were for 5 GHz and 1.65 GHz, and we found almost no data on the characteristics of cables at these frequencies. After a little research it turned out that we would have to do our own modeling using a TDR, Simulink/MATLAB and a few home grown tools. This is not an inexpensive activity: the boards required as interfaces to the machine cost about $10k apiece, and the TDR is also a very expensive machine. We tried searching the web but found little available data. Manufacturers of the cables do publish data, but it turned out to be the wrong kind of data for our purposes. So cable characteristics are difficult to get.

2) Design tools: The second challenge was that the design tools available for IC design are, in our opinion, not terribly useful when designing equalizers. Long sequences of really high frequency data are needed to check performance. These types of simulations can run extremely slowly, and simulating a complete chip was almost impossible. A combination of SIMULINK and SPICE type simulators ( including Agilent ADS) was used, but in our opinion left quite a bit to be desired. Equalizer designers beware!

3) IC process data: The fabrication houses that we selected ( “world class”) provided very good data on their processes. This data was good for about 80% of the design, but 20% of the design could not be covered by the given data.

4) ESD protection: This is a problem for high frequency equalizer design in particular, and in general a good ESD structure is difficult to do. The issue is this: if we use the characterized ESD cells then we have a challenge because of their parasitics; if we make our own ESD cells then we have no characterization data. This makes ESD a major challenge in these types of devices, remembering that the input lines actually come in from outside. ( Existing TVS devices are woefully inadequate for ESD.)

5) Test: The challenge of testing the equalizers looms large, of course. A combination of standard lab equipment ( expensive) and custom made equipment is perhaps the best approach. Again, the making of the test equipment is a challenge in itself, as we found.

6) Demo boards: A real challenge. We had to go through a number of iterations with both PCB vendors and designs. The first PCB we did gave a clear impedance step at 150 MHz and really caused errors in the measurements. Subsequent designs were great improvements, but we still need more improvement and are working on it.

So the design and development of these wireline equalizers is, in our opinion, not a “walk in the park”. Good luck to all the equalizer designers and many congratulations to the successful ones. You guys have really licked the problems!

IC Design: Super-beta or high current gain transistors


In certain analog ICs it is necessary to have very high input impedance and very low base currents. For such applications, the typical current gains of an integrated npn transistor are not high enough. It is possible to increase the current gain of an npn transistor significantly by improving the base transport efficiency. In this case the base is made very narrow ( a few hundred angstroms or less). The collector to emitter breakdown of a structure like this is relatively low ( 2 V - 3 V) because the collector base depletion layer can punch through the active base region into the emitter. This is the punch-through or "super-beta" transistor. Current gains of 5000 are obtainable using this technique at currents of 20 uA or so with a Vce of around 0.5 V. The fabrication of super-beta transistors in a standard process can be done by using one extra masking step and diffusion. After the base diffusion for the normal npn transistors, a special mask is used to open up the emitter diffusion for the super-beta transistors. At this stage the emitter of the super-beta transistor is only partially diffused. This step is then followed by the masking and n+ diffusion of the standard npn. Owing to the extra diffusion step, the emitter of the super-beta transistor is diffused slightly deeper than that of the normal npn, resulting in a narrower base width.

IC Design: Reverse engineering obsolete ICs


In our work on resurrecting really old and obsolete devices built in bipolar technology, some designed using rubylith techniques, we found an interesting evolutionary trend from the oldest to the merely old. The layout techniques and the basic designs were dictated by the availability, or non-availability, of CAD tools. The earliest designs tend to have the very simplest layouts for the individual devices: simple epi-tub, base and emitter rectangular diffusions, large contact areas of every shape and description, and very broad isolation and device to device spacings, starting at almost 10 mils and coming down to about a mil for the later devices. Devices are laid out almost as one would lay out a PCB using discrete devices. Active devices occupy their own tubs, resistors occupy their own tubs, and there is a general absence of capacitors. For the relatively newer obsolete devices the layout style changes: active devices and resistors sometimes occupy a single tub, with very unique shapes and geometries. As the CAD tools become better, circular geometries become more and more prevalent, and we see lateral pnps and smaller npns with circular emitters. On chip capacitors make their appearance using the emitter diffusion, oxide/nitride and metal sandwiches. The line widths shrink down to sub-mil sizes and device densities per chip increase. Interestingly, bondpad sizes seem to be consistent over a long period of time ( around 100 um X 100 um). Scribe lines also appear to hold on to their widths ( around 100 to 150 um wide). All in all, the art of reverse engineering these devices, including deducing the electrical characteristics from the layout and ancient specifications, forms a most interesting activity for those interested in the art. Interested parties may contact SPG for reverse engineering of obsolete parts via our website at http://www.signalpro.biz