Technical Article

Reliability in Electronics
CONTENTS

1. Introduction
   1.1 Failure Rate
   1.2 Reliability
   1.3 Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF)
   1.4 Service Life (Mission Life, Life)
2. Factors Affecting Reliability
   2.1 Design Factors
   2.2 Complexity
   2.3 Stress
   2.4 Generic (Inherent) Reliability
3. Estimating The Failure Rate
   3.1 Prediction
       3.1.1 Parts Stress Method
       3.1.2 Parts Count Method
   3.2 Assessment
       3.2.1 Confidence Limits
       3.2.2 PRST
   3.3 Observation
4. Prototype Testing
5. Manufacturing Methods
6. Systems Reliability
   (a) More Reliable Components
   (b) Redundancy
7. Comparing Reliabilities

xppower.com

1. Introduction
Most of us are familiar with the concepts of reliability and MTBF at a superficial level, without considering what lies behind the figures quoted and what significance should be attached to them. The subject deserves a deeper understanding, so let us start by taking a closer look at the terminology.
1.1 Failure Rate (λ)
The failure rate is defined as the percentage of units failing per unit time. This varies throughout the life of the equipment and, if λ (lambda) is plotted against time, the characteristic "bathtub" curve is obtained for most electronic equipment (See Figure 1).

Fig 1. Failure Rate vs. Time (the "bathtub" curve, with regions A, B and C)

This curve has three regions:
A: Infant mortality.
B: Useful life.
C: Wear out.

In region "A", poor workmanship and substandard components cause failures. This period is usually a few hundred hours and a "burn in" is sometimes employed to stop these failures occurring in the field. Note that this does not stop the failures occurring, it just ensures that they happen in-house and not on the customer’s premises.
In region "B", λ is approximately constant, and it is only for this region that the following analysis applies.

In region "C", components begin to fail through having reached their end of life, rather than by random failures.
Examples are electrolytic capacitors drying out, fan bearings seizing up, switch mechanisms wearing out etc. Well implemented preventive maintenance can delay the onset of this region.
1.2 Reliability (R(t))
There are a large number of definitions and one will get different answers from statisticians, engineers, mathematicians and so on, an essentially practical definition is: The probability that a piece of equipment operating under specified conditions shall perform satisfactorily for a given period of time.
Probability is involved since it is impossible to predict the behaviour with absolute certainty. The criterion for "satisfactory performance" must be defined as well as the operating conditions such as input, output, temperature, load etc.
1.3 Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF)
Strictly speaking, MTBF applies to equipment that is going to be repaired and returned to service, MTTF to parts that will be thrown away on failing. The MTBF is the inverse of the failure rate.
MTBF = 1/λ ....(1)

Many people, unfortunately, misunderstand MTBF, and tend to assume that the MTBF figure indicates a minimum, guaranteed, time between failures. This assumption is wrong, and for this reason the use of the failure rate rather than the MTBF is highly recommended.
R(t) = e^(-λt) = e^(-t/m) ....(2)

m = t / ln(1/R(t)) ....(3)

Where:
R(t) = Reliability
e = Exponential (2.718)
λ = Failure Rate
m = MTBF

Note that for a constant failure rate, plotting reliability against time "t" gives a negative exponential curve (See Figure 2).
Fig 2. Reliability R(t) plotted against (λt) for a unit with a constant failure rate

When t/m = 1, i.e., after a time "t" numerically equal to the MTBF figure "m":

R(t) = e^(-1) = 0.37 ....(4)

Equation (4) can be interpreted in a number of different ways:
(a) If a large number of units are considered, only 37% of them will survive for as long as the MTBF figure.
(b) For a single unit, the probability that it will work for as long as its MTBF figure, is only 37%.
(c) We can say that the unit will work for as long as its MTBF figure with a 37% Confidence Level.
In order to put these numbers into context, let us consider a power supply with an MTBF of 500,000 hours (a failure rate of 0.2%/1000 hours), or, as the advertising would put it, "an MTBF of 57 years!"
From eq.(2), R(t) for 26,280 hours (3 years) is approximately 0.95, i.e., if such a unit is used 24 hours a day for 3 years, the probability of it surviving that time is 95%. The same calculation for a ten year period will give a R(t) of 84%.
Now let us consider a customer who has 700 such units. Since we can expect, on average, 0.2% of units to fail per 1000 hours, approximately one unit per month will fail on average, since the number of failures per year is:

(0.2/100) x (1/1000) x 700 x 24 x 365 = 12.26
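As a quick check on the arithmetic, the figures in this example can be reproduced with a few lines of Python; the MTBF, unit count and duty are those quoted above:

```python
import math

def reliability(t_hours: float, mtbf_hours: float) -> float:
    """R(t) = e^(-t/m) for a constant failure rate (region B of the bathtub curve)."""
    return math.exp(-t_hours / mtbf_hours)

MTBF = 500_000  # hours, as in the example above

# Probability of one unit surviving 3 years and 10 years of 24 h/day use
r3 = reliability(3 * 365 * 24, MTBF)    # ~0.95
r10 = reliability(10 * 365 * 24, MTBF)  # ~0.84

# Expected failures per year for 700 units at 0.2%/1000 hours
failures_per_year = (0.2 / 100) * (1 / 1000) * 700 * 24 * 365  # ~12.26

print(round(r3, 2), round(r10, 2), round(failures_per_year, 2))
```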

1.4 Service Life (Mission Life, Life)


Note that there is no direct connection or correlation between service life and failure rate. It is perfectly possible to design a very reliable product with a short life. A typical example is a missile: it has to be very, very reliable (MTBF of several million hours), but its service life is only 0.06 hours (4 minutes). 25-year-old humans have an MTBF of about 800 years (FR about 0.1%/year), but not many have a comparable "service life". Just because something has a good MTBF, it does not necessarily have a long service life as well (See Figure 3).

Fig 3. Examples of Service Life vs. MTBF (log-log plot of service life in years against MTBF in hours: missile, toaster, PSU, car, human, transatlantic cable)

2. Factors Affecting Reliability
2.1 Design Factors
The most important factor is good, careful design based on sound experience, resulting in known safety margins. Unfortunately, this does not show up in any predictions, since they assume a perfect design!
It has to be said that a lot of field failures are not due to the classical random failure pattern discussed here, but to shortcomings in the design and in the application of the components, as well as external factors such as occasional voltage surges, etc. These may well be ‘outside specification’, but no one will ever know; all that will be seen is a failed unit. Making the units rugged through careful design and controlled overstress testing is a very important part of making the product reliable.
The failure rate of the equipment depends on three other factors:
• Complexity
• Stress
• Inherent (generic) reliability of the components used
2.2 Complexity
Keep things simple: what isn’t there can’t fail. But be careful, because what isn’t there can also cause a failure! A complicated or difficult specification will invariably result in reduced reliability. This is not due to the shortcomings of the design staff, but to the resultant component count. Every component used will contribute to the equipment’s unreliability.
2.3 Stress
In electronic equipment, the most prominent stresses are temperature, voltage, vibration, and temperature rise due to current. The effect of each of these stresses on each of the components must be considered. In order to achieve good reliability, various derating factors have to be applied to these stress levels. The derating has to be traded off against cost and size implications.
Great care and attention to detail is necessary to reduce thermal stresses as far as possible. The layout has to be such that heat-generating components are kept away from other components and are adequately cooled.

4

xppower.com
Thermal barriers are used where necessary, and adequate ventilation needs to be provided. The importance of these provisions cannot be overstated, since the failure rate of some components will double for a 10 °C increase in temperature. Note that decreasing the size of a unit without increasing its efficiency will make it hotter, and therefore less reliable!
2.4 Generic (Inherent) Reliability
Inherent reliability refers to the fact that film capacitors are more reliable than electrolytic capacitors, wirewrap connections more reliable than soldered ones, fixed resistors more reliable than pots, and so on. Components have to be carefully selected to avoid the types with high generic failure rates. Quite often, there is a cost trade off - more reliable components are usually more expensive.
3. Estimating the Failure Rate
The Failure Rate should be estimated and measured throughout the life of the equipment:
• During design, it is predicted
• During manufacture, it is assessed
• During the service life, it is observed
3.1 Prediction
Predicting the failure rate is done by evaluating each of the factors affecting reliability for each component, and then summing these to get the failure rate of the whole equipment. It is essential that the database used is defined and used consistently. There are three databases in common use: MIL-HDBK-217, HRD5 and Bellcore. These reflect the experiences of the US Navy, British Telecom and Bell Telephone, respectively. Other sources of data are component manufacturers and some large companies like Siemens, Philips, France Telecom or Italtel. Data from these should not be used unless specifically requested by the customer.
In general, predictions assume that:
• The design is perfect, the stresses known, everything is within ratings at all times, so that only random failures occur
• Every failure of every part will cause the equipment to fail.
• The database is valid
These assumptions are wrong. The design is less than perfect, not every failure of every part will cause the equipment to fail, and the database is likely to be at least 15 years out-of-date. However, none of this matters much, if the predictions are used to compare different topologies or approaches rather than to establish an absolute figure for reliability. This is what predictions should be used for.
3.1.1 Parts Stress Method
In this method, each factor affecting reliability for each component is evaluated. Since the average power supply has over 100 components and each component about 7 factors (Typically: stress ratio, generic, temperature, quality, environment, construction, and complexity) this method requires a considerable effort and time. Predictions are usually done in order to compare different approaches or topologies, i.e. when detailed design information is not available and the design itself is still in a fluid state. Under such circumstances, it is hardly worthwhile to spend this effort, and the much simpler and quicker Parts Count Method is used.
3.1.2 Parts Count Method
In this method, all like components are grouped together, and average factors allocated for the group. So, for example, instead of working out all the factors for each of the 15 electrolytic capacitors used, there is only one entry of ‘cap. electr.’ and a quantity of 15. Usually only two factors are allocated: generic and quality. The other factors, including stress levels, are assumed to be at some realistic level and allowed for in the calculation. For this reason, the factors are not interchangeable between the two methods. In general, for power supplies, HRD5 gives the most favourable result, closely followed by Bellcore, with MIL-HDBK-217F the least favourable. This depends on the mix of components in the particular equipment, since one database may be "unfair" on ICs, and another on FETs. Hence the importance of comparing results from like databases only.
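The bookkeeping of a parts count prediction can be sketched as below. The component list, failure rates and factors are illustrative placeholders, NOT values from MIL-HDBK-217, HRD5 or Bellcore; a real prediction must take its generic and quality factors from one database and use it consistently:

```python
# Hypothetical parts-count prediction: group like components, apply average
# generic and quality factors per group, and sum the contributions.
parts = [
    # (group, quantity, generic FR per 10^6 h, quality factor) -- made-up values
    ("cap. electr.", 15, 0.50, 2.0),
    ("cap. film",    30, 0.05, 2.0),
    ("resistor",     80, 0.02, 1.5),
    ("IC",           10, 0.10, 3.0),
    ("FET",           6, 0.30, 2.5),
]

# Every component contributes to the equipment's unreliability
lam = sum(qty * fr * q for _, qty, fr, q in parts)  # failures per 10^6 hours
mtbf = 1e6 / lam                                    # hours

print(f"predicted failure rate: {lam:.2f} per 10^6 h, MTBF = {mtbf:,.0f} h")
```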

3.2 Assessment
This is the most useful and accurate way of predicting the Failure Rate. A number of units are put on "life test" (more correctly described as a Reliability Demonstration Test), usually at an elevated temperature, with the stresses and the environment controlled. Note, however, that it is not always possible to model the real environment accurately in the laboratory.
During life-tests and reliability demonstration tests, it is usual to apply greater stresses than normal, so that we get to the desired result quicker. Great care has to be applied to ensure that the effects of the extra stress are known and proven to be calculable, and that no hidden, additional failure mechanisms are activated by the extra stress. The usual "extra stress" is an increase of temperature, and its effect can be calculated from the Arrhenius equation, as long as the maximum ratings of the device are not exceeded.
Note that the accelerating effect depends on the activation energy that applies for the chemistry of the particular component; a typical value would indicate an Acceleration Factor from 25 °C to 50 °C of approx. 5.25, so be suspicious of results based on 0.7 eV, or even 1 eV.
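The Arrhenius calculation behind these factors can be sketched as follows. The 0.55 eV activation energy used here is inferred from the 5.25 figure quoted above (the source does not state it), and the constant is Boltzmann's constant in eV/K:

```python
import math

K_BOLTZMANN = 8.617e-5  # eV/K

def acceleration_factor(ea_ev: float, t_use_c: float, t_test_c: float) -> float:
    """Arrhenius acceleration factor between a use and a test temperature."""
    t_use = t_use_c + 273.15   # convert to kelvin
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1 / t_use - 1 / t_test))

# ~0.55 eV reproduces the ~5.25 factor quoted above; the higher activation
# energies the text warns about give much larger (optimistic) factors.
for ea in (0.55, 0.7, 1.0):
    print(f"Ea = {ea} eV: AF(25 -> 50 C) = {acceleration_factor(ea, 25, 50):.2f}")
```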
At the beginning of such a test it is sometimes difficult to distinguish between early failures ("infant mortality", region A) and the first failures belonging to the "constant failure rate" region (region B). In such cases, the Cumulative Distribution Function is plotted on Weibull paper. This paper has double logarithmic scaling such that a constant failure rate will result in a straight line at an indicated gradient of 1. Decreasing FR (region A) will give a smaller gradient, increasing FR (wearout, region C) a higher gradient. Both the available time and the number of units on test are limited, and so it is of the utmost importance that the maximum amount of useful information is extracted from a limited amount of data. Statistical methods are used to achieve this.
3.2.1 Confidence limits
What we are attempting to do is to predict the behaviour of the large number of units in the field (called the population) from the behaviour of a small number of randomly selected units (called the sample). This process is called Statistical Inference. The results obtained by such means cannot, of course, be completely accurate, and it is therefore essential to establish the degree of accuracy that applies. This is done by estimating the mean value and defining a band or an interval around this estimated mean that will include the actual, true mean value of the complete population. Such an interval is defined by a Confidence Limit, i.e., if we establish that the failure rate is between 1%/1000 hours and 2%/1000 hours with a Confidence Limit of 90%, this means that we expect 90% of the units in the field to exhibit failure rates between these limits, and the other 10% of units to have a lower or higher failure rate. For a population exhibiting a constant failure rate,
λ = χ²(2r+2),(1-Ø) / (2tN) ....(5)

where:
λ = demonstrated failure rate with a one-sided upper confidence limit of Ø (phi)
t = test time
N = number of units on test
r = number of failures
χ²(2r+2),(1-Ø) = value of the χ² distribution with probability (1 - Ø) of being exceeded in random sampling, where (2r + 2) is the number of degrees of freedom
The constants given by this equation are tabulated below for values of r between 0 and 10, and for values of Ø of 0.6 and 0.9. (These are the usual Confidence Limits used in industry.)

r     Ø = 0.6        Ø = 0.9
0     93 x 10³       230 x 10³
1     200 x 10³      390 x 10³
2     310 x 10³      530 x 10³
3     420 x 10³      670 x 10³
4     530 x 10³      790 x 10³
5     630 x 10³      910 x 10³
6     730 x 10³      1040 x 10³
7     830 x 10³      1160 x 10³
8     930 x 10³      1300 x 10³
9     1040 x 10³     1410 x 10³
10    1140 x 10³     1530 x 10³

To use this table, divide the factor given by the total number of unit-hours to get the failure rate in %/1000 hours. Let us consider the case when we have 50 units on test and one fails after 4 months (2920 hours): t = 2920, N = 50, r = 1.
From the table, we can say with 60% confidence that the failure rate will be less than:

200,000 / (50 x 2920) = 1.37%/1000 hrs

Alternatively, we can say with 90% confidence that the failure rate will be less than:

390,000 / (50 x 2920) = 2.67%/1000 hrs


In the parent population, therefore, we expect 60% of the units to exhibit a failure rate better than 1.37%/1000 hrs (an MTBF of 73,000 hrs.), and therefore 40% of units to have a FR worse than that; or 90% of the units to be better than 2.67%/1000hrs (an MTBF of 37,400 hrs.), and therefore 10% of units to have a FR worse than that (See Figure 4).

Fig 4. Confidence Limit (distribution of failure rates in the population: 60% of units better than 1.37%/kh, 90% better than 2.67%/kh)
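Because the degrees of freedom in equation (5) are always even, the χ² quantile can be computed from the Poisson distribution using nothing but the standard library. This sketch, under that assumption, reproduces the tabulated factors (to within the table's rounding) and the 1.37%/1000 hrs result above:

```python
import math

def poisson_cdf(r: int, m: float) -> float:
    """P(X <= r) for X ~ Poisson(m)."""
    return sum(math.exp(-m) * m**k / math.factorial(k) for k in range(r + 1))

def table_factor(r: int, confidence: float) -> float:
    """Solve P(Poisson(m) <= r) = 1 - confidence by bisection; m equals
    chi-squared(2r+2, confidence)/2. Scaling by 1e5 (x1000 for per-1000-h,
    x100 for percent) gives the 'factor' used in the table."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if poisson_cdf(r, mid) > 1 - confidence:
            lo = mid  # m too small: CDF still above target
        else:
            hi = mid
    return lo * 1e5

# r = 1 at 60% and 90% confidence: ~200e3 and ~390e3, as tabulated
f60 = table_factor(1, 0.6)
f90 = table_factor(1, 0.9)

# The worked example: 50 units, one failure after 2920 hours
fr_60 = f60 / (50 * 2920)  # ~1.37 %/1000 h
print(f60, f90, fr_60)
```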

However, there is a practical problem with this method: although we get valid answers, the time needed to reach an answer depends on the number of failures. Suppose we want to show a FR of 0.5%/1000h at a CL of 60%, and we have 50 units.
We start the test and expect an answer after 23 weeks, if there are no failures. Should we have a failure though, the test time goes out to 48 weeks, or with two failures to 74 weeks! In fact, if we are unlucky, we could test for over a year, only to find, at the end, that we do not meet the required reliability. The test method that we need is one which will give us an answer in a fixed, pre-determined time. Such a method is called the Probability Ratio Sequential Test, or PRST.
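The stretching test times can be reproduced directly from the Ø = 0.6 factors in the table above; taking a week as 168 hours lands within a week of the figures quoted:

```python
# How long must 50 units run to demonstrate 0.5 %/1000 h at 60% confidence?
# factor / (N * t) = FR  =>  t = factor / (N * FR)
factors_60 = [93e3, 200e3, 310e3]  # r = 0, 1, 2 failures (from the table)
N, target_fr = 50, 0.5             # units on test, target FR in %/1000 h

for r, factor in enumerate(factors_60):
    t_hours = factor / (N * target_fr)
    print(f"{r} failures: {t_hours:.0f} h = {t_hours / 168:.0f} weeks")
```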

3.2.2 PRST
Consider what happens if we plot the number of failures against test time in unit-hours. Since the failure rate is constant and the number of units is constant, we expect units to fail, on average, at equal intervals. The resultant graph will be a uniform staircase, with the trend line indicating the failure rate. As we know, the units will fail at random time intervals (not at a uniform interval), however the trend line will still be as described above. So, the trend line is indicative of the final answer, and we shall not get a different answer as the test time increases, just more confidence in that answer. This means that we can draw conclusions early on, by taking some risks, simply by terminating the test at a pre-determined time ("accept"), or at a predetermined number of failures ("reject"). The risk is that the initial few failures are either too few or too many compared to the average, due to the random timing of the failures. The mathematics is complex, but the end result is simple.
Suppose we define the risks as follows:
1. There is a low FR that is acceptable to the producer, and a higher one acceptable to the customer. The ratio of the two values is called the Discrimination Ratio, and is usually 2. The FR and the DR need to be defined.
2. The risk of rejecting a "good" population on the basis of the PRST test has to be defined, and is called the Producer’s Risk.
3. The risk of accepting a "bad" population on the basis of the PRST test has to be defined, and is called the Consumer’s Risk.
4. The producer’s and consumer’s risks are normally equal, and lie between 10% and 40%.


Once these risks are defined, the length of time to an "accept" and "reject" result can be calculated, or looked up in tables such as the ones in MIL-HDBK-781. The test will be run according to this plan, and the product accepted or rejected within a fixed time-frame (See Figure 5).
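The source takes its PRST plans from the MIL-HDBK-781 tables. As a sketch only, the underlying accept/reject boundary lines can be derived from Wald's sequential probability ratio test for an exponential failure process; the producer FR and risks chosen below are illustrative assumptions, not values from the handbook:

```python
import math

def prst_boundaries(lam0: float, disc_ratio: float, alpha: float, beta: float):
    """Wald sequential test for H0: FR = lam0 (producer's) vs
    H1: FR = disc_ratio * lam0 (consumer's), with producer risk alpha and
    consumer risk beta. Returns a function mapping cumulative unit-hours to
    the (accept, reject) failure-count boundary lines of Figure 5."""
    lam1 = disc_ratio * lam0
    log_a = math.log((1 - beta) / alpha)  # reject H0 above this likelihood ratio
    log_b = math.log(beta / (1 - alpha))  # accept H0 below it
    slope = (lam1 - lam0) / math.log(disc_ratio)

    def boundaries(unit_hours: float):
        accept = slope * unit_hours + log_b / math.log(disc_ratio)
        reject = slope * unit_hours + log_a / math.log(disc_ratio)
        return accept, reject

    return boundaries

# Example: producer FR 0.25%/1000 h (2.5e-6 per hour), DR = 2, equal 10% risks
b = prst_boundaries(2.5e-6, 2.0, 0.10, 0.10)
acc, rej = b(1_000_000)  # boundaries after one million unit-hours
print(acc, rej)
```

Accumulated failures below the accept line end the test with an "accept"; failures above the reject line end it with a "reject"; in between, testing continues, so the decision always falls within a bounded time.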

Fig 5. PRST (failures plotted against unit-hours, with ACCEPT and REJECT boundary lines)

3.3 Observation
This is observing the large population itself (as opposed to the small sample during assessment) and is the final proof and measure of the equipment's reliability. There is, normally, no need for Statistical Methods since there is plenty of data available.
The problems during this phase are twofold:
1. The sheer mechanics of actually collecting and collating the data.
2. The uncertainty of the duty, conditions of use and stresses, or abuse, that the units were subjected to.
Great care has to be exercised in drawing conclusions due to the difficulty of distinguishing between true random failures and misuse in the field (accidental or otherwise).

4. Prototype Testing
With all the sophisticated computer analysis, simulation and tolerancing methods available, there is still no substitute for thoroughly testing the maximum number of prototypes. An effort should be made to locate and use components from different batches, especially for critical components. These units must be tested under dynamic conditions: an effective test is to cycle the temperature, the input, and the load independently, so that the units are exercised at both the maximum and minimum of each.
Cpk analysis of the results is used to ensure that the specification parameter margins are adequate. After testing, these units are normally used as the first batch on the reliability demonstration tests.
At least one unit should be subjected to HALT testing, and several to destructive overstress tests to establish the safety margins.
The timing of these tests is critical - it must not be so early in the development phase that the final circuit is radically different, and it must not be so late that production starts before the results are evaluated. A pitfall to watch out for, if changes are proposed as a result of these tests is that the up-dated units must be subject to long term testing themselves.
5. Manufacturing Methods
This is a separate subject in itself, but there are three main factors contributing to unreliability in manufacture:
• Suppliers
• Manual assembly methods
• Tweaking of settings and parameters
Suppliers must be strictly controlled to deliver consistently good devices, with prior warning of any process changes and any other changes.
These days, with modern QA practices and JIT manufacturing methods, this level of reliability is achieved by dealing with a small number of trusted suppliers. Manual assembly is prone to errors and to some random, unintentional abuse of the components by operators. This creates latent defects, which show up later.
Tweaking produces inconsistency and side effects. A good motto is: if it works, leave it alone; if it does not, find the root cause and do not tweak. There must be a root cause for the deviation, and this must be found and eliminated, rather than masked by the tweak. There are well-established TQM and SPC methods to achieve this. Testing and Quality Assurance have a major part to play. Testing must be appropriate to ensure that the units perform well in the application. Cpk analysis ensures that the specification parameter margins are adequate and controlled.

6. Systems Reliability
There are two further methods of increasing system reliability. Firstly, more reliable components. MIL standard or other components of assessed quality could be used, but in industrial and commercial equipment, the expense is not normally justified.
Secondly, redundancy. In a system where one unit can support the load and two units are used in parallel, the system is much more reliable, since it will still work even with one unit failed. Clearly, the probability of two units failing simultaneously is much less than that of one unit failing. Such a system carries a big size and cost penalty (twice as big and twice as much), so normally an N+1 system is used, where N units can support the load but N+1 units are used in parallel, "2+1" or "3+1" being the usual combinations. Supposing the reliability of each unit under the particular conditions is 0.9826 (m=500,000h, t=1 year), the system reliability for an "N+1" system where N = 2 would be 0.9991, an improvement of 20 times. (Nearly 60 times in a 1+1 system.)
However, there are many pitfalls in the system design, such as:
1. N units must be rated to support full load.
2. Any part failing must not make the system fail.
3. If any part fails this must be brought to the operator's notice so that it can be replaced.
4. Changing units must not make the system fail (hot plugging).
It is very difficult and tricky to design the system to satisfy items 2 & 3. For example, the failure of components that do not affect system operation when all units are OK, but would affect operation if there was a fault (such as an isolating diode going short circuit, or a paralleling wire or connector going open circuit), must be signalled as a problem and must be repaired. The circuitry necessary to arrange for all this (isolating diodes, signalling logic, hot plugging components, current sharing, etc.) has its own failure rate, and so degrades the overall system failure rate. In the following illustrations, this is ignored for simplicity, but in a real calculation it must be taken into account. In many applications, the only way to detect such latent faults is to simulate a part failing by shutting it down remotely for a very short time. This circuitry will, of course, increase complexity and decrease reliability further still, as well as being dangerous: a system failure could be caused by the test circuit shutting the system down.

Calculating system reliability involves the use of the binomial expansion, as follows:
(R + Q)^T = R^T + T R^(T-1) Q + [T(T-1)/2!] R^(T-2) Q^2 + [T(T-1)(T-2)/3!] R^(T-3) Q^3 + .... + Q^T ....(6)

where:
T = Total No. of Units
R = Probability of Success
Q = Probability of Failure = (1-R)
The 1st term is the probability that 0 units will fail,
The 2nd term is the probability that 1 unit will fail,
The 3rd term is the probability that 2 units will fail,
The 4th term is the probability that 3 units will fail,
The 5th term is the probability that 4 units will fail, … and so on.
These terms must be summed as appropriate, based on what combination of part failures gives a system failure.
For example, with 4 units of R = 0.8, the probability of failures is:

0 failures : 0.8^4                        = 0.4096
1 failure  : 4 x 0.8^3 x 0.2              = 0.4096
2 failures : (4 x 3/2!) x 0.8^2 x 0.2^2   = 0.1536
3 failures : (4 x 3 x 2/3!) x 0.8 x 0.2^3 = 0.0256
4 failures : 0.2^4                        = 0.0016
So if 1 unit is enough to supply the load, then with 0, 1, 2 or 3 failures the system is still working, hence the system reliability is: 0.4096 + 0.4096 + 0.1536 + 0.0256 = 0.9984
This particular result could have been obtained from special case 2 (any one unit is enough; this would be an "n+3" system):

1 - 0.2^4 = 0.9984

If two units are needed to maintain the system, then only 0, 1 and 2 failures are OK (this would be an "n+2" system):
The system reliability is: 0.4096 + 0.4096 + 0.1536 = 0.9728
If three units are needed to maintain the system, then only 0 and 1 failures are OK:
The system reliability is: 0.4096 + 0.4096 = 0.8192
This particular result could have been obtained from special case 1 ("n+1"): 0.8^4 + 4 x 0.2 x 0.8^3 = 0.8192
Note that the improvement over one unit is only marginal for such a low reliability (0.8); however, this is an effective solution in cases where R > 0.9. If there is no redundancy, the only acceptable case is that of 0 failures: 0.8^4 = 0.4096. This particular case is the same as the series situation (any part failure causes a system failure).
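The summing of terms described above can be wrapped in one small function; as in the text, the failure rate of the sharing and isolation circuitry is ignored:

```python
from math import comb

def k_of_n_reliability(n: int, k: int, r: float) -> float:
    """Probability that at least k of n identical units (each of reliability r)
    survive, i.e. the system tolerates up to n-k unit failures."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

R = 0.8
# The worked example with 4 units:
print(k_of_n_reliability(4, 1, R))  # any one unit enough ("n+3"): 0.9984
print(k_of_n_reliability(4, 2, R))  # two units needed ("n+2"):    0.9728
print(k_of_n_reliability(4, 3, R))  # three units needed ("n+1"):  0.8192
print(k_of_n_reliability(4, 4, R))  # no redundancy (series case): 0.4096

# The "2+1" example from the previous section: each unit R = 0.9826
print(k_of_n_reliability(3, 2, 0.9826))  # ~0.9991
```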
Special case 1: "n+1" redundancy, identical units.
In this case, 0F and 1F will not cause a system failure, and the reliability is given by the sum of the first two terms of the expansion:

R_T = R^T + T Q R^(T-1) ....(7)

Special case 2: redundancy where any one unit is capable of supplying the load:

R_T = 1 - [(1-R_A)(1-R_B)(1-R_C)(1-R_D) ...] ....(8)

Parts in series (any part failure will cause a system failure):

R_T = (R_A)(R_B)(R_C)(R_D) ... ....(9)

Availability
Availability is sometimes mentioned in this context; it is defined as:

Availability = MTBF / (MTBF + MTTR)

where MTTR is the mean time to repair.

For good, reliable systems, Availability tends to be 0.99999……, where the mathematics gets tedious and the number difficult to interpret.
In such cases Unavailability is more meaningful, this being (1- Availability) and usually expressed in minutes/year.
Consider the previous example (m=500,000h, t=1year), and assume that MTTR is 3 hours.
Availability is 0.999 994, and Unavailability is 0.000 006 or 3.15 minutes/year.
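The single-unit availability arithmetic can be sketched as below, using a 365-day year as the text does:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def unavailability_minutes_per_year(mtbf_hours: float, mttr_hours: float) -> float:
    """(1 - Availability) expressed in minutes per 365-day year."""
    return (1 - availability(mtbf_hours, mttr_hours)) * 365 * 24 * 60

a = availability(500_000, 3)                     # ~0.999994
u = unavailability_minutes_per_year(500_000, 3)  # ~3.15 minutes/year
print(a, u)
```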
Now consider the "N+1" system described above (N=2).
Availability will be 0.999 999 694, and Unavailability 0.000 000 306 or 10 seconds/year.
Note, however, that we now have 3 units in the system, so service calls will be 3 times as frequent; in other words, the MTBF for service calls = 500,000/3 = 166,700 hours.
It is an interesting fact that when using redundancy to improve availability, the service calls to repair system failures get much less frequent, but the service calls to repair part failures get more frequent. Since the object of the exercise is to maintain system availability, this is a small price to pay, but the costs of system failure should be weighed against the costs of service maintenance.
In some cases it is possible to either reduce costs or improve system availability further by partitioning, i.e. having different load-groups fed by different power-supply-groups. This is a subject in itself, but as an illustration, the level of redundancy in a typical telephone exchange is as follows:
• Each switching card is powered by 1+1 redundant dc/dc inverters.
• Each card is duplicated in 1+1 redundancy
• Each bay and its supplies are partitioned.
• The AC/DC supplies feeding a bay are 1+1 redundant.
• The power cables and connections are 1+1 redundant.
• There is a battery backup system at the output of the ac/dcs, feeding independent busbars.
• There is a diesel generator system to back up the mains supply.
The usual design criterion is that, since batteries are large, expensive, dangerous and require maintenance, only about 20 minutes of battery backup is provided, which gives enough time for several attempts to start up the diesel generator (an automatic sequence of 10 attempts).
Since there is, on average, a short failure of the mains every week (MTBF of 170 hours (!)), this is a very necessary precaution.
7. Comparing Reliabilities
The real use of reliability predictions is not for establishing an accurate level of reliability, but for comparing different technical approaches, possibly from different manufacturers, on a relative (comparative) basis. Hence the importance of using the same database, environment etc.
When such comparisons are made, always check that all of the following are satisfied, otherwise the comparison is completely meaningless:
• The database must be stated, and must be identical. Comparing a MIL-HDBK-217F prediction with a MIL-HDBK-217E prediction or an
HRD5 prediction is meaningless – there is no correlation.
• The database must be used consistently and exclusively. The result is meaningless if a different database is used for some component.
The justification may be reasonable, but the result is meaningless.
• The external stresses and environment must be stated and must be identical. (Input, load, temperature, etc.) The result is meaningless if all the environmental details are not stated, or are different.
• The units must be FFF interchangeable in the application. If one is rated at 10A and the other at 5A, the comparison is fair, as long as the load is less than 5A. If the ratings are identical, but one needs an external filter and the other does not, then there is no comparison. (Although, it is possible, sometimes, to work out the failure rate of the external filter and add it to the FR of the unit, using the same database, environment and stress.)
• Comparing a predicted reliability figure with the results of a reliability demonstration test (lifetest) is also meaningless. One could argue that the results of the reliability demonstration are more meaningful, but that depends on the details of the test, the environment and the acceleration factors used. All these factors must be identical when comparing two test results, but in any case comparing test results with predictions is a meaningless comparison.
There are no miracles: if we predict 200,000 hours and another manufacturer states 3,000,000 hours for a comparable product, then they must have used either a different database, or a different stress level, or a different environment, etc.

June-08
