Reliability Prediction Methods for Electronic Products


ReliaSoft June 2008

In today’s competitive electronic products market, having higher reliability than competitors is one of the key factors for success. To obtain high product reliability, reliability issues must be considered from the very beginning of the design phase. This leads to the concept of reliability prediction. Historically, this term has been used to denote the process of applying mathematical models and component data to estimate the field reliability of a system before failure data are available for that system. However, the objective of reliability prediction is not limited to predicting whether reliability goals, such as MTBF, can be reached. It can also be used for:

  • Identifying potential design weaknesses.
  • Evaluating the feasibility of a design.
  • Comparing different designs and life-cycle costs.
  • Providing models for system reliability/availability analysis.
  • Establishing goals for reliability tests.
  • Aiding in business decisions such as budget allocation and scheduling.

Once the prototype of a product is available, lab tests can be utilized to obtain more accurate reliability predictions. Accurate prediction of the reliability of electronic products requires knowledge of the components, the design, the manufacturing process and the expected operating conditions. Several different approaches have been developed to achieve the reliability prediction of electronic systems and components. Each approach has its unique advantages and disadvantages. Among these approaches, three main categories are often used within government and industry: empirical (standards based), physics of failure and life testing. In this article, we will provide an overview of all three approaches.

First, we will discuss empirical prediction methods, which are based on the experiences of engineers and on historical data. Several standards, such as MIL-HDBK-217, Bellcore/Telcordia, RDF 2000 and China 299B, are widely used for reliability prediction of electronic products. Next, we will discuss physics of failure methods, which are based on root-cause analysis of failure mechanisms, failure modes and stresses. This approach is based upon an understanding of the physical properties of the materials, operation processes and technologies used in the design. Finally, we will discuss life testing methods, which are used to determine reliability by testing a relatively large number of samples at their specified operation stresses or higher stresses and using statistical models to analyze the data.

Empirical (Standards Based) Prediction Methods

Empirical prediction methods are based on models developed from statistical curve fitting of historical failure data, which may have been collected in the field, in-house or from manufacturers. These methods tend to give good reliability estimates for similar or slightly modified parts. Some parameters in the curve function can be modified by incorporating engineering knowledge. The underlying assumption is that system or equipment failures are caused by component failures, and that component failures are independent of each other. Many different empirical methods have been created for specific applications, and some have gained popularity within industry over the past three decades; examples include MIL-HDBK-217, Bellcore/Telcordia SR-332, RDF 2000 and China 299B. The following sections describe three of the most commonly used methods in more detail.

MIL-HDBK-217 Predictive Method

MIL-HDBK-217 is very well known in both military and commercial industries and is probably the most internationally recognized empirical prediction method. The latest version is MIL-HDBK-217F, which was released in 1991 and has had two revisions: Notice 1 (1992) and Notice 2 (1995).

The MIL-HDBK-217 predictive method consists of two parts: one is known as the parts count method and the other is called the part stress method [1]. The parts count method assumes typical operating conditions of part complexity, ambient temperature, various electrical stresses, operation mode and environment (called reference conditions). The failure rate for a part under the reference conditions is calculated as:
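Based on the definitions that follow, the equipment failure rate under reference conditions is a simple summation over the parts. A reconstruction of this summation (the exact notation in the handbook may differ) is:

\lambda = \sum_{i=1}^{n} \lambda_{ref,i}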

where:

  • λref,i is the failure rate of the ith part under the reference conditions.
  • n is the number of parts.

Since the parts may not operate under the reference conditions, the real operating conditions will result in failure rates that are different from those given by the “parts count” method. Therefore, the part stress method requires the specific part’s complexity, application stresses, environmental factors, etc. (called Pi factors). For example, MIL-HDBK-217 provides many environmental conditions (expressed as πE) ranging from “ground benign” to “cannon launch.” The standard also provides multi-level quality specifications (expressed as πQ). The failure rate for parts under specific operating conditions can be calculated as:
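A reconstruction of this relationship, consistent with the Pi factors defined below (the handbook applies different combinations of factors to different part types), is:

\lambda = \sum_{i=1}^{n} \lambda_{ref,i} \cdot \pi_S \cdot \pi_T \cdot \pi_E \cdot \pi_Q \cdot \pi_A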

where:

  • πS is the stress factor.
  • πT is the temperature factor.
  • πE is the environment factor.
  • πQ is the quality factor.
  • πA is the adjustment factor.

Figure 1 shows an example using the MIL-HDBK-217 method (in ReliaSoft’s Lambda Predict software) to predict the failure rate of a ceramic capacitor. According to the handbook, the failure rate of a commercial ceramic capacitor of 0.00068 µF capacitance with an 80% operating voltage stress, working at a 30°C ambient temperature in a “ground benign” environment, is 0.0216 failures per 10^6 hours. The corresponding MTBF (mean time between failures) or MTTF (mean time to failure) is estimated to be 46,140,368 hours.



Figure 1: MIL-HDBK-217 capacitor failure rate example

Bellcore/Telcordia Predictive Method

Bellcore was a telecommunications research and development company that provided joint R&D and standards setting for AT&T and its co-owners. Because of dissatisfaction with military handbook methods for their commercial products, Bellcore designed its own reliability prediction standard for commercial telecommunication products. In 1997, the company was acquired by Science Applications International Corporation (SAIC) and the company’s name was changed to Telcordia. Telcordia continues to revise and update the standard. The latest two updates are SR-332 Issue 1 (May 2001) and SR-332 Issue 2 (September 2006), both called “Reliability Prediction Procedure for Electronic Equipment.”

The Bellcore/Telcordia standard assumes a serial model for electronic parts and it addresses failure rates at the infant mortality stage and at the steady-state stage with Methods I, II and III [2-3]. Method I is similar to the MIL-HDBK-217F parts count and part stress methods. The standard provides the generic failure rates and three part stress factors: device quality factor (πQ), electrical stress factor (πS) and temperature stress factor (πT). Method II is based on combining Method I predictions with data from laboratory tests performed in accordance with specific SR-332 criteria. Method III is a statistical prediction of failure rate based on field tracking data collected in accordance with specific SR-332 criteria. In Method III, the predicted failure rate is a weighted average of the generic steady-state failure rate and the field failure rate.

Lambda Predict has implemented Methods I and II, and Method III will be added in the next version. Figure 2 shows an example in Lambda Predict using SR-332 Issue 1 to predict the failure rate of the same capacitor in the previous MIL-HDBK-217 example (shown in Figure 1). The failure rate is 9.654 FITs, which is 9.654 failures per 10^9 hours. In order to compare the predicted results from MIL-HDBK-217 and Bellcore SR-332, we must convert the failure rates to the same units: 9.654 FITs is 0.009654 / 10^6 hours, so the MIL-HDBK-217 result of 0.0216 / 10^6 hours is noticeably higher than the Bellcore/Telcordia SR-332 result. There are reasons for this variation. First, MIL-HDBK-217 is a military standard, so it is more conservative than the commercial standard. Second, the underlying methods are different, and MIL-HDBK-217 considers more factors that may affect the failure rate.
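Unit conversions like this are easy to script. The short Python sketch below simply re-derives the numbers quoted in this comparison from the two predicted failure rates (the MTBF reported in Figure 1 is based on the unrounded failure rate, so it differs slightly from the reciprocal of 0.0216 / 10^6 hours):

# Convert the two capacitor predictions to common units and to MTBF.
FIT = 1e-9                                # 1 FIT = 1 failure per 10^9 hours

bellcore_rate_per_hour = 9.654 * FIT      # Bellcore/Telcordia SR-332 prediction
mil_rate_per_hour = 0.0216 / 1e6          # MIL-HDBK-217 prediction

# Express both in failures per 10^6 hours for a direct comparison.
print(bellcore_rate_per_hour * 1e6)       # 0.009654
print(mil_rate_per_hour * 1e6)            # 0.0216

# For a constant failure rate, MTBF is the reciprocal of the failure rate.
print(1.0 / bellcore_rate_per_hour)       # about 1.04e8 hours
print(1.0 / mil_rate_per_hour)            # about 4.63e7 hours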

Figure 2: Bellcore capacitor failure rate example

RDF 2000 Predictive Method

RDF 2000 is a reliability data handbook developed by the French telecommunications industry. This standard provides reliability prediction models for a range of electronic components using cycling profiles and applicable phases as a basis for failure rate calculations [4]. RDF 2000 provides a unique approach to handle mission profiles in the failure rate prediction. Component failure is defined in terms of an empirical expression containing a base failure rate that is multiplied by factors influenced by mission profiles. These mission profiles contain information about how the component failure rate may be affected by operational cycling, ambient temperature variation and/or equipment switch on/off temperature variations. RDF 2000 disregards the wearout period and the infant mortality stage of product life based on the assumption that, for most electronic components, the wearout period is never reached because new products will replace the old ones before the wearout occurs. For components whose wearout period is not very far in the future, the normal life period has to be determined. The infant mortality stage failure rate is caused by a wide range of factors, such as manufacturing processes and material weakness, but can be eliminated by improving the design and production processes (e.g. by performing burn-in).

As an example, the empirical expression for a class I ceramic capacitor is given by:

where:

  • (πt)i is the temperature factor related to the ith junction temperature of the capacitor mission profile.
  • τi is the working time ratio of the capacitor for the ith junction temperature of the mission profile.
  • τon is the total working time ratio of the capacitor, with τon + τoff = 1.
  • (πn)i is the ith influence factor, related to the annual number of thermal cycles seen by the package with amplitude ΔTi.
  • ΔTi is the thermal amplitude variation of the ith mission profile.

Figure 3 shows the implementation of the failure rate prediction using RDF 2000 in Lambda Predict.



Figure 3: RDF 2000 capacitor failure rate example

Discussion of Empirical Methods

Although empirical prediction standards have been used for many years, it is always wise to use them with caution. The advantages and disadvantages of empirical methods have been discussed extensively over the past three decades. A brief summary drawn from publications in industry, the military and academia is presented next [5-9].

Advantages of empirical methods:

  1. Easy to use, with models available for a large number of component types.
  2. Relatively good performance as indicators of inherent reliability.
  3. Provide an approximation of field failure rates.

Disadvantages of empirical methods:

  1. A large part of the data used by the traditional models is out-of-date.
  2. Failure of the components is not always due to component-intrinsic mechanisms but can be caused by the system design.
  3. The reliability prediction models are based on industry-average values of failure rate, which are neither vendor-specific nor device-specific.
  4. It is hard to collect good quality field and manufacturing data, which are needed to define the adjustment factors, such as the Pi factors in MIL-HDBK-217.

Physics of Failure Methods

In contrast to empirical reliability prediction methods, which are based on statistical analysis of historical failure data, physics of failure approaches are based on understanding the failure mechanisms and applying models that describe them to the available data. Several widely used models are discussed next.

Arrhenius Model

One of the earliest and most successful acceleration models predicts how the time-to-failure of a system varies with temperature. This empirically based model is known as the Arrhenius equation. Generally, chemical reactions can be accelerated by increasing the system temperature. Since it is a chemical process, the aging of a capacitor (such as an electrolytic capacitor) is accelerated by increasing the operating temperature. The model takes the following form.
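A common way of writing this life-stress relationship, consistent with the definitions below (the positive exponent reflects the fact that life shortens as temperature increases), is:

L(T) = A \cdot e^{E_a / (k T)}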

where:

  • L(T) is the life characteristic related to temperature.
  • A is the scaling factor.
  • Ea is the activation energy.
  • k is the Boltzmann constant.
  • T is the temperature.
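As an illustration, the Arrhenius relationship is frequently used to compute an acceleration factor between a use temperature and an elevated test temperature. The Python sketch below uses a hypothetical activation energy of 0.7 eV and hypothetical temperatures; the values are not taken from any example in this article:

import math

K_EV = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(ea_ev, t_use_k, t_test_k):
    """Acceleration factor AF = L(T_use) / L(T_test) for the Arrhenius life model."""
    return math.exp((ea_ev / K_EV) * (1.0 / t_use_k - 1.0 / t_test_k))

# Hypothetical case: Ea = 0.7 eV, use at 328 K (55 C), test at 398 K (125 C)
print(arrhenius_acceleration_factor(0.7, 328.0, 398.0))   # roughly 78x acceleration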

Eyring Model

While the Arrhenius model emphasizes the dependency of reactions on temperature, the Eyring model is commonly used for demonstrating the dependency of reactions on stress factors other than temperature, such as mechanical stress, humidity or voltage.

The standard equation for the Eyring model [10] is as follows:
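One commonly quoted form of the generalized Eyring relationship for temperature plus one additional stress, written in terms of the constants defined below, is shown here; the full form in [10] also allows an interaction term between T and S:

L(T, S) = A \cdot T^{\alpha} \cdot e^{B/T} \cdot e^{C S}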

where:

  • L(T, S) is the life characteristic related to temperature and another stress.
  • A, α, B and C are constants.
  • S is a stress factor other than temperature.
  • T is absolute temperature.

According to different physics of failure mechanisms, one more term (i.e., stress) can be either removed or added to the above standard Eyring model. Several models are similar to the standard Eyring model. They are:

Two Temperature/Voltage Model:

Three Stress Model (Temperature-Voltage-Humidity):

Corrosion Model:

Electronic devices whose metallization is aluminum or an aluminum alloy with small percentages of copper and silicon are subject to corrosion failures and can therefore be described with the following model [11]:

where:

  • B0 is an arbitrary scale factor.
  • α is equal to 0.1 to 0.15 per % RH.
  • f(V) is an unknown function of applied voltage, with empirical value of 0.12 to 0.15.

Hot Carrier Injection Model:

Hot carrier injection describes the phenomenon, observed in MOSFETs, by which a carrier gains sufficient energy to be injected into the gate oxide, where it generates interface or bulk oxide defects and degrades MOSFET characteristics such as threshold voltage and transconductance [11].

For n-channel devices, the model is given by:
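A commonly used form of this model, consistent with the parameters defined below (k is the Boltzmann constant and T the absolute temperature, as in the other models above), is:

MTTF = B \cdot (I_{sub})^{-N} \cdot e^{E_a / (k T)}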

where:

  • B is an arbitrary scale factor.
  • Isub is the peak substrate current during stressing.
  • N is equal to a value from 2 to 4, typically 3.
  • Ea is equal to -0.1eV to -0.2eV.

For p-channel devices, the model is given by:
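The commonly used form, again assuming the Boltzmann constant k and absolute temperature T, is:

MTTF = B \cdot (I_{gate})^{-M} \cdot e^{E_a / (k T)}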

where:

  • B is an arbitrary scale factor.
  • Igate is the peak gate current during stressing.
  • M is equal to a value from 2 to 4.
  • Ea is equal to -0.1eV to -0.2eV.

Since electronic products usually have a long period of useful life (i.e., the flat, constant failure rate region of the bathtub curve) and can often be modeled using an exponential distribution, the life characteristics in the above physics of failure models can be replaced by the MTBF (i.e., the life characteristic of the exponential distribution). However, if your products do not exhibit a constant failure rate and therefore cannot be described by an exponential distribution, the life characteristic usually will not be the MTBF. For example, for the Weibull distribution the life characteristic is the scale parameter eta, and for the lognormal distribution it is the log mean.

Black Model for Electromigration

Electromigration is a failure mechanism that results from the transfer of momentum from the electrons, which move in the applied electric field, to the ions that make up the lattice of the interconnect material. The most common failure mode is “conductor open.” As integrated circuit (IC) feature sizes shrink, the resulting increase in current density makes this failure mechanism very important to IC reliability.

At the end of the 1960s, J. R. Black developed an empirical model to estimate the MTTF of a wire, taking electromigration into consideration, which is now generally known as the Black model. The Black model employs external heating and increased current density and is given by:
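A widely used form of the Black model that includes a threshold current density, consistent with the terms defined below, is:

MTTF = A_0 \cdot (J - J_{threshold})^{-N} \cdot e^{E_a / (k T)}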

where:

  • A0 is a constant based on the cross-sectional area of the interconnect.
  • J is the current density.
  • Jthreshold is the threshold current density.
  • Ea is the activation energy.
  • k is the Boltzmann constant.
  • T is the temperature.
  • N is the current density exponent.

The current density (J) and temperature (T) are factors in the design process that affect electromigration. Numerous experiments with different stress conditions have been reported in the literature, with reported values ranging from 2 to 3.3 for N and from 0.5 to 1.1 eV for Ea. In general, the lower the values, the more conservative the estimate.

Coffin-Manson Model

Fatigue failures can occur in electronic devices due to temperature cycling and thermal shock. Permanent damage accumulates each time the device experiences a normal power-up and power-down cycle. These switch cycles can induce cyclical stress that tends to weaken materials and may cause several different types of failures, such as dielectric/thin-film cracking, lifted bonds, solder fatigue, etc. A model known as the (modified) Coffin-Manson model has been used successfully to model crack growth in solder due to repeated temperature cycling as the device is switched on and off. This model takes the form [9]:
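A commonly quoted form of the modified Coffin-Manson (Norris-Landzberg) relationship, consistent with the terms defined below, is:

N_f = A \cdot f^{-\alpha} \cdot \Delta T^{-\beta} \cdot G(T_{max}), \quad G(T_{max}) = e^{E_a / (k T_{max})}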

where:

  • Nf is the number of cycles to failure.
  • A is a coefficient.
  • f is the cycling frequency.
  • ΔT is the temperature range during a cycle.
  • α is the cycling frequency exponent.
  • β is the temperature range exponent.
  • G(Tmax) is an Arrhenius term evaluated at the maximum temperature reached in each cycle.

Three factors are usually considered for testing: maximum temperature (Tmax), temperature range (ΔT) and cycling frequency (f). The activation energy is usually related to certain failure mechanisms and failure modes, and can be determined by correlating thermal cycling test data and the Coffin-Manson model.

Discussion of Physics of Failure Methods

A given electronic component will have multiple failure modes, and the component’s failure rate is equal to the sum of the failure rates of all of its modes (driven by humidity, voltage, temperature, thermal cycling and so on). The system’s failure rate is equal to the sum of the failure rates of the components involved. When using the above models, the model parameters can be determined from the design specifications or operating conditions. If the parameters cannot be determined without conducting a test, the failure data obtained from such a test can be used to estimate the model parameters. Software products such as ReliaSoft’s ALTA can help you analyze the failure data.
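Under this constant failure rate, serial-system assumption, rolling component predictions up to the system level is a simple summation. The Python sketch below uses made-up component failure rates purely for illustration:

# Series-system prediction under the constant failure rate assumption.
# The component failure rates below are made-up values, in failures per 10^6 hours.
component_rates = {
    "ceramic capacitor": 0.0216,
    "film resistor": 0.0050,
    "microcircuit": 0.1200,
}

system_rate = sum(component_rates.values())   # failures per 10^6 hours
system_mtbf = 1e6 / system_rate               # hours, assuming an exponential model

print(system_rate, system_mtbf)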

We will give an example of using ALTA to analyze the Arrhenius model. For this example, the life of an electronic component is considered to be affected by temperature. The component is tested under temperatures of 406, 416 and 426 Kelvin. The usage temperature level is 400 Kelvin. The Arrhenius model and the Weibull distribution are used to analyze the failure data in ALTA. Figure 4 shows the data and calculated parameters. Figure 5 shows the reliability plot and the estimated B10 life at the usage temperature level.



Figure 4: Data and analysis results in ALTA with the Arrhenius-Weibull model



Figure 5: Reliability vs. Time plot and calculated B10 life

From Figure 4, we can see that the estimated activation energy in the Arrhenius model is 0.92. Note that, in ALTA, the Arrhenius model is simplified to a form of:
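This simplified two-parameter form is commonly written as:

L(T) = C \cdot e^{B / T}

so that E_a = B \cdot k (with k ≈ 8.617 × 10^-5 eV/K when E_a is expressed in eV).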

Using this equation, the parameters B and C calculated by ALTA can easily be transformed to the parameters described above for the Arrhenius relationship.

Advantages of physics of failure methods:

  1. Accurate prediction of wearout using known failure mechanisms.
  2. Modeling of potential failure mechanisms based on the physics of failure.
  3. During the design process, the variability of each design parameter can be determined.

Disadvantages of physics of failure methods:

  1. Need detailed component manufacturing information (such as material, process and design data).
  2. Analysis is complex and could be costly to apply.
  3. It is difficult to assess the entire system.

Life Testing Method

As mentioned above, time-to-failure data from life testing may be incorporated into some of the empirical prediction standards (i.e., Bellcore/Telcordia Method II) and may also be necessary to estimate the parameters for some of the physics of failure models. However, in this section of the article, we are using the term life testing method to refer specifically to a third type of approach for predicting the reliability of electronic products. With this method, a test is conducted on a sufficiently large sample of units operating under normal usage conditions. Times-to-failure are recorded and then analyzed with an appropriate statistical distribution in order to estimate reliability metrics such as the B10 life. This type of analysis is often referred to as Life Data Analysis or Weibull Analysis.

ReliaSoft’s Weibull++ software is a tool for conducting life data analysis. As an example, suppose that an IC board is tested in the lab and the failure data are recorded. Figure 6 shows the data entered into Weibull++ and analyzed with the 2-parameter Weibull lifetime distribution while Figure 7 shows the Reliability vs. Time plot and the calculated B10 life for the analysis.
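For a 2-parameter Weibull distribution, the B10 life follows directly from the estimated parameters. The Python sketch below uses made-up values for eta and beta, not the parameters estimated in the Weibull++ example:

import math

def weibull_bx_life(eta, beta, x=0.10):
    """Time by which a fraction x of units is expected to fail, for a
    2-parameter Weibull distribution with R(t) = exp(-(t/eta)**beta)."""
    return eta * (-math.log(1.0 - x)) ** (1.0 / beta)

# Made-up parameters: eta = 5000 hours, beta = 1.8
print(weibull_bx_life(eta=5000.0, beta=1.8))   # B10 life of roughly 1,430 hours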



Figure 6: Data and analysis results in Weibull++ with the Weibull distribution



Figure 7: Reliability vs. Time plot and calculated B10 life for the analysis

Discussion of Life Testing Method

The life testing method can provide more information about the product than the empirical prediction standards. Therefore, the prediction is usually more accurate, given that enough samples are used in the testing.

The life testing method may also be preferred over both the empirical and physics of failure methods when it is necessary to obtain realistic predictions at the system (rather than component) level. This is because the empirical and physics of failure methods calculate the system failure rate based on the predictions for the components (e.g., using the sum of the component failure rates if the system is considered to be a serial configuration). This assumes that there are no interaction failures between the components but, in reality, due to the design or manufacturing, components are not independent. (For example, if the fan is broken in your laptop, the CPU will fail faster because of the high temperature.) Therefore, in order to consider the complexity of the entire system, life tests can be conducted at the system level, treating the system as a “black box,” and the system reliability can be predicted based on the obtained failure data.

Conclusion

In this article, we discussed three approaches for electronic reliability prediction. The empirical (or standards based) methods can be used in the design stage to quickly obtain a rough estimation of product reliability. The physics of failure and life testing methods can be used in both design and production stages. In physics of failure approaches, the model parameters can be determined from design specs or from test data. On the other hand, with the life testing method, since the failure data from your own particular products are obtained, the prediction results usually are more accurate than those from a general standard or model.

References

[1] MIL-HDBK-217F, Reliability Prediction of Electronic Equipment, 1991. Notice 1 (1992) and Notice 2 (1995).

[2] SR-332, Issue 1, Reliability Prediction Procedure for Electronic Equipment, Telcordia, May 2001.

[3] SR-332, Issue 2, Reliability Prediction Procedure for Electronic Equipment, Telcordia, September 2006.

[4] ITEM Software and ReliaSoft Corporation, RS 490 Course Notes: Introduction to Standards Based Reliability Prediction and Lambda Predict, 2006.

[5] B. Foucher, J. Boullie, B. Meslet and D. Das, “A Review of Reliability Prediction Methods for Electronic Devices,” Microelectron. Reliab., vol. 42, no. 8, August 2002, pp. 1155-1162.

[6] M. Pecht, D. Das and A. Ramakrishnan, “The IEEE Standards on Reliability Program and Reliability Prediction Methods for Electronic Equipment,” Microelectron. Reliab., vol. 42, 2002, pp. 1259-1266.

[7] M. Talmor and S. Arueti, “Reliability Prediction: The Turnover Point,” 1997 Proc. Ann. Reliability and Maintainability Symp., 1997, pp. 254-262.

[8] W. Denson, “The History of Reliability Prediction,” IEEE Trans. on Reliability, vol. 47, no. 3-SP, September 1998.

[9] D. Hirschmann, D. Tissen, S. Schroder and R.W. de Doncker, “Reliability Prediction for Inverters in Hybrid Electrical Vehicles,” IEEE Trans. on Power Electronics, vol. 22, no. 6, November 2007, pp. 2511-2517.

[10] NIST Information Technology Library. [Online document] Available HTTP: www.itl.nist.gov

[11] Semiconductor Device Reliability Failure Models. [Online document] Available HTTP: www.sematech.org/docubase/document/3955axfr.pdf


[Editorial Note: In the printed edition of Volume 9, Issue 1, there were two errors that have been corrected in this online version. We apologize for any inconvenience. 1) 9.654 FITs is 9.654 / 10^9 hours (rather than 10^10). 2) In the equations for hot carrier injection models, “Ea is equal to -0.1eV to -0.2eV.”]


How Does Formation Testing Work?

While every effort is put forth before drilling begins to ensure a successful well, not every well hits hydrocarbons at a commercially productive level. Statistically, wildcat (or exploratory) wells have a one in seven chance of discovering oil or gas. While six may be dry holes, one in the group can make a big enough difference to outweigh all the risks.

Once drilling operations have been completed, it is important for drillers, engineers and geologists to determine whether to move on to the next phase: completion for production. Formation tests ascertain if there are enough hydrocarbons to produce from a well, as well as provide important information to design the well completion and production facilities.

Used to establish formation pressure, permeability, and reservoir and formation fluid characteristics, three major methods of formation testing help to reveal the downhole formation: well logging, core sampling and drill stem tests.

Well Logging

A way of retrieving and recording downhole information, well logging involves lowering measurement instruments into the well during or after the drilling process. Used as a journal of what has been encountered during drilling operations, well logs measure the electric, acoustic, radioactive and electromagnetic properties of a downhole formation.

These measurements help to determine the permeability, porosity and reservoir pressure, among other characteristics of the formation, and ultimately the presence of hydrocarbons downhole. Well logging tools can be lowered into the well and raised to retrieve the information, or they can be included in the drillstem and send the information to the surface in real-time.

Well logging is the first step in formation evaluation to determine whether hydrocarbons are present within the well.

Core Samples

Another method of formation testing is performed by obtaining core samples. Here, a small segment of the formation is retrieved from the well and analyzed to determine porosity, permeability and the presence of oil and gas – the capabilities and productivity of the well.

While core samples can be taken throughout the drilling process, core samples are also retrieved after drilling has been completed. In this case, the drill stem is pulled from the well, and the drillbit is removed from the end — replaced by a special coring instrument called a core head. Next, the drillstem is introduced back into the well and the core head retrieves a long cylinder of rock from the bottom of the well.

The core sample is then analyzed and broken to determine the presence of hydrocarbons, the fluid makeup and reservoir qualities. Sometimes oil can be seen in oil staining of the rock fragments within the sample. Also, the sample can be put under an ultraviolet light, and if there is oil and gas, the hydrocarbons will glow.

Drill Stem Tests

Used to provide a more definitive idea of the production capacity of the well, drill stem tests identify the types of fluids within the well, as well as the flowrate of these fluids, formation permeability and reservoir pressure.

Drill stem tests involve connecting a measurement device to the bottom of the drill stem, also in place of the drillbit, and lowering the system into the well, all the way to the formation. The instrument is activated at the bottom of the well, measuring the flow of oil or gas for a specified amount of time, usually an hour.

Drill Stem Test

The testing tool includes a perforated anchor at the bottom that allows fluids to enter the empty pipe. Also, rubber packers expand against the sides of the hole to seal off pressure. A series of valves open and close to control the flow of the hydrocarbons into the empty drill stem. Additionally, the tool contains a pressure-measuring device.

When the tool is opened, the oil and gas enter the drill pipe and are sent through a flowline to the reserve pit on the surface. While oil or gas can reach the surface during the specified testing time, many times hydrocarbons and water simply enter the drill pipe, but do not reach the surface. Nonetheless, the flow, pressure and volumes are recorded.

Important factors in determining the success of the drill stem test and, in turn, the well, include the depth of the tool; duration of the test; time required for hydrocarbons to reach the surface; fluids recovered in the drill pipe; initial and final flow pressures, indicating the increase in flowing capacity of the well; and the shut-in bottom hole pressure, which signifies the maximum reservoir potential.

While drilling a well can be expensive, sometimes completion operations can be even more so. It is important to decide whether a well is commercially productive or whether it is more logical to simply plug the well and move on to another location. Typically, one or more formation tests are performed to determine whether the well is productive.

If formation tests reveal that the well does not have enough hydrocarbons present to complete the well for production, the well is plugged and abandoned. However, if the formation tests prove the well productive, it is moved into the completion phase, which includes running completion strings the length of the well, casing the well and cementing it.


Author:   Scott Nelson
Company:   Harris Corporation
Date: 10/31/2012   Volume: 25-4
Abstract:
Portable electronics, miniaturization, and cost reduction have been key drivers in the dramatic increase in use of bottom termination components (BTC’s) over the past several years. Similar to when the Ball Grid Array (BGA) was introduced into the market, BTC’s have brought with them new challenges and adjustments in design and assembly processes. For high reliability applications, these adjustments must be made with caution to avoid creating the potential for long term reliability issues and latent field defects. While low cost is an attractive feature of most BTC component packages, cost is typically not the driver for the use of BTC’s in low volume, high reliability electronics. Because BTC packages have the die very close to the PCB and because there are no leads extending from the sides of the package, they typically exhibit low parasitic losses due to low resistance and capacitance. Another benefit of using BTC’s on high reliability products is excellent thermal dissipation due to a relatively large component thermal pad that is attached directly to the PCB. This explains why we are seeing, and will continue to see, an increased use of bottom termination components in high reliability Aerospace and Defense applications.

With the rapid implementation of BTC’s in high reliability applications, two key issues have evolved. The first is improper land pattern design and the second is improper solder stencil design. Unfortunately, these two issues are at opposite extremes of the product cycle. When issues arise, this can make it difficult to answer the question, “Is it a design problem or is it a manufacturing problem?” In many cases the answer has been found to be a combination of both.

While there are numerous component packages that make up the BTC family of components, those discussed in this paper will be limited to the following industry packages: Quad Flat No-lead (QFN), Dual Flat No-lead (DFN), Small Outline No-lead (SON), and Micro Lead Frame (MLF). All of these packages are similar in construction, related by common fabrication processes, and therefore share similar design and assembly criteria.

The target audience for this paper is those who are involved in printed circuit board (PCB) design and assembly, specifically but not limited to high reliability electronic circuit card assemblies (CCA’s). The intent of this paper is to provide specific BTC guidance to companies that design and/or assemble high performance electronics for applications such as aerospace and defense.



Vanderbilt University

Engineering Capability Brief

 

Reliability of Electromechanical Systems

 

S. Mahadevan, Professor
Civil and Environmental Engineering, Vanderbilt University

VU Station B 351831, Nashville, TN 37235-1831; (615)-322-3040; fax (615)-322-3365 

email:  sankaran.mahadevan@vanderbilt.edu

 

 

Overview: The reliability of electronic devices has traditionally been estimated using experimental testing and classical statistics. The increasing complexity of modern devices and systems has led to prohibitive testing costs. Therefore, elegant and inexpensive mathematical methods that combine probability theory and optimization have become popular in recent years, drastically reducing testing and development costs.

 

Vanderbilt University is conducting a multi-year research effort, funded by the Sandia National Laboratories, to develop physics-based computational techniques to estimate the time-dependent reliability of electromechanical systems. The problems being addressed are reliability against stress voiding and corrosion in electronic circuits, and solder joint reliability. First-order and second-order analytical methods, as well as advanced simulation techniques are being developed.

 

Example Application: Increasing miniaturization of silicon integrated circuits in recent years has led to the observation of void initiation and growth in aluminum conductor lines of these circuits. The aluminum conductor lines are passivated with a glass layer. The thermal expansion coefficient of glass is an order of magnitude smaller than that of aluminum. The resulting growth of voids in the aluminum interconnects due to heating and cooling is a time-dependent phenomenon. Several types of variables influence this — grain size, conductor length, initial stress, void shape, stress concentration, elastic modulus, operating temperature etc.  All of these variables have uncertainties associated with them.

 

The uncertainties in the variables are modeled through statistical distributions, and combined with a stress voiding model and probabilistic analysis. The possibility of multiple void sites is also considered. The probabilistic growth of the void size with time, and the degradation of the overall aluminum liner reliability with time, are computed using several analytical and simulation techniques.

 

Other applications include corrosion and solder joint reliability modeling. The methods being investigated include first-order and second-order approximations, maximum likelihood estimation, advanced probability integration schemes, importance sampling and Latin hypercube sampling, and genetic algorithms. New methods are under development for problems with multiple, highly nonlinear limit states.

 

Potential Applications: Computational probabilistic methods are increasing in popularity for the reliability estimation of a wide variety of engineering systems, due to the savings in testing and development costs. The methods provide valuable sensitivity information that helps make reliability vs. cost trade-off decisions during design. Probabilistic analysis identifies the important factors affecting reliability; this information can be used for the optimum allocation of testing resources, and to achieve robustness in design.

 


How Do Subsea Trees Work?
Source:  Rigzone October 21, 2013

Used on offshore oil and gas fields, a subsea tree monitors and controls the production of a subsea well. Fixed to the wellhead of a completed well, subsea trees can also manage fluids or gas injected into the well.

Since the 1950s, subsea trees have been topping underwater wellheads to control flow. A design taken from their above-ground cousins, subsea trees are sometimes called xmas trees because the devices can resemble a tree with decorations.

Subsea trees are used in offshore field developments worldwide, from shallow to ultra-deepwaters. The deepest subsea trees have been installed in the waters offshore Brazil and in the US Gulf of Mexico, and many are rated for waters measuring up to 10,000 feet deep.

Types of Subsea Trees

There are various kinds of subsea trees, often rated for a certain water depth, temperature, pressure and expected flow.

The Dual Bore Subsea Tree was the first tree to include an annulus bore for troubleshooting, well servicing and well conversion operations. Although popular, especially in the North Sea, dual bore subsea trees have been improved over the years.

These trees can now be specified with guideline or guideline-less position elements for production or injection well applications.

Standard Configurable Trees (SCTs) are tailored to a company’s specific projects. A general SCT is normally used in shallower waters measuring up to 1,000 meters deep.

High Pressure High Temperature Trees (HPHT) are able to survive in rough environments, such as the North Sea. HPHT trees are designed for pressures up to 16,500 psi and temperatures ranging from -33 C to 175 C.

Other subsea trees include horizontal trees, mudline suspension trees, monobore trees and large bore trees. Companies that manufacture subsea trees include Aker Solutions, Cameron, FMC Technologies and Schlumberger.


by David K. Hurst  |   11:00 AM October 2, 2013

Every organization that aspires to greatness has something to learn from relevant success stories of the past. But how should managers go about unlocking the lessons of those efforts? Many of their consultants advocate an engineering approach:

  1. Find multiple examples of organizations that have coped with equivalent challenges successfully.
  2. Reverse-engineer the reasons for their success, looking for features that they share in common.
  3. Present these shared “success factors” as precepts, rules, and principles that should be implemented by all those who wish to achieve similar levels of success.

This approach sounds great, and the growth of the consultancies pushing it cannot be gainsaid. But it simply doesn’t work. The engineering approach can be described but not practiced.

Start by considering an extreme and high-visibility case. At the outset of the Iraq War, President George W. Bush expressed the hope that Iraq would become a federal democracy and a beacon to all the totalitarian states in the Middle East. The Americans then set about creating facsimiles of various institutions – the critical success factors of their own democracy. But if these were necessary conditions, then clearly they were not sufficient. Iraq is far from a viable democratic system.

Similarly, in the management world, we constantly see the engineering approach being urged and falling short. As just one example, academics W. Chan Kim and Renée Mauborgne examined the emergence of outrageously successful companies like Cirque du Soleil and claim to have discovered the keys. While never claiming that their case organizations, with their idiosyncratic histories and unique contexts, had consciously implemented their “blue ocean” principles, Kim and Mauborgne argued that it was “as if” they had. How else could they have moved their businesses into positions that so thoroughly defied competition?

Unfortunately this approach has done no more for corporate strategic success than it has for nation states. Managers are presented with inspiring stories from the past that they quickly discover cannot be replicated, and with abstract principles that sound incontrovertible yet cannot be implemented. They might, at best, produce facsimiles of certain features of great organizations, or learn to say all the right words about what it will take to succeed. But while they can talk the talk, their organizations can’t walk the walk.

The fundamental problem with the engineering approach is that simple mechanics do not drive outcomes in complex systems. Where causes and effects are constantly subject to dynamic adaptation, as they are in ecosystems, societies, and organizations, conditions cannot be reproduced.

Moreover, we have yet to see an organization succeed by deliberately hewing to some equation for sure success. An example from baseball (or cricket) helps us understand why. Professional fielders in these sports catch most fly balls successfully. From the perspective of a physicist it is “as if” they can calculate the velocity of the ball off the bat, predict its trajectory and run to the spot where it will land. We know that they don’t actually do this; instead they maintain a constant angle of gaze between their eyes and the ball. If the ball rises in their field of view they run away from it; if it’s dropping they run toward it. A constant process of adjustment allows them to be at the right place by the time the ball becomes catchable. They gain this skill through practice and feedback, built upon a platform of native capability powered by high motivation. They improve their performance through deliberate practice and expert coaching. Teaching them physics and how to calculate the trajectories of ballistic objects is not only unnecessary. It can only distract them from the efforts that will truly help them catch more baseballs.

It’s the same with successful companies (and nations); while they all seem to arrive at the right place strategically, they don’t get there by “implementing” any abstract engineering principles. They get there through high levels of motivation (and at the corporate level no one gets up early to maximize shareholder value) and a guided process of trial and error, practice and feedback. Trying to teach them abstract principles derived from other successful companies or nations is not helpful in this effort.

What is needed is an ecological approach to learning from the past, which is rather different from the engineering one:

  1. Study successful organizations to appreciate the rich contexts and processes involved – their histories – but not to distill generic precepts and principles from them.
  2. Focus intensively on the organization at hand to understand the opportunities and challenges – the potential – inherent in the current situation.
  3. Resolve to control the controllable, preempt the undesirable, and exploit the inevitable to produce outcomes that none could have anticipated.

Unfortunately, there are no short cuts to excellence. We should always try to learn what drove the success of other organizations, but never believe our own success can be as simple as borrowing the keys. We must pay attention to the innovation bubbling up in our own organizations, and work to spread it further – not try to transplant what has grown up elsewhere, in very different contexts. Our focus should be on fostering communities of trust and practice, disciplined yet free, from which brilliant strategies can emerge organically through doing and learning. In short, we need to recognize the inherent complexity of organizations and work to cultivate excellence within them, not try to engineer it from without.

 

This post is part of a series of perspectives leading up to the fifth annual Global Drucker Forum in November 2013 in Vienna, Austria. For more on the theme of the event, Managing Complexity, and information on how to attend, see the Forum’s website.



PLC 9-26-13

Programmable Logic Controllers

A Programmable Logic Controller (PLC) is a digital computer used for the automation of electromechanical processes, such as liquid level control, pressure relief or flow control.  The abbreviation “PLC” and the term “Programmable Logic Controller” are registered trademarks of the Allen-Bradley Company.  Unlike general purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact.  Consequently, PLCs are an excellent tool for the extreme conditions often seen in the oil and gas extraction, refining and distribution world.

Programs to control machine operations are typically stored in battery-backed or non-volatile memory.  A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a limited time; otherwise, an adverse condition could result.

Modern PLCs can be programmed in a variety of ways, from the relay-derived ladder logic to programming languages such as specifically adapted dialects of Basic and C.  Another method is State logic, a very high-level programming language designed to program PLCs based on state transition diagrams.

The main difference from other computers is that PLCs are armored for severe conditions (such as dust, moisture, heat and cold) and have the facility for extensive input/output (I/O) arrangements.  These connect the PLC to sensors and actuators.  PLCs read limit switches, analog process variables (such as temperature and pressure) and the positions of complex positioning systems.  On the actuator side, PLCs operate electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids or analog outputs.  The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a computer network that plugs into the PLC.
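To illustrate the read-inputs, execute-logic, write-outputs scan cycle implied above, here is a minimal Python sketch of a PLC-style control loop. The tank-level thresholds and the sensor/actuator stubs are hypothetical; a real PLC would use vendor I/O modules and a hardware watchdog rather than these placeholder functions:

import time

HIGH_LEVEL = 80.0     # hypothetical high level threshold, percent of tank
LOW_LEVEL = 20.0      # hypothetical low level threshold, percent of tank
SCAN_TIME = 0.1       # target scan time in seconds

def read_level_sensor():
    """Placeholder for an analog input read from an I/O module."""
    return 50.0

def set_pump(running):
    """Placeholder for a digital output write driving a pump contactor."""
    pass

pump_on = False
while True:
    start = time.monotonic()

    level = read_level_sensor()    # 1. read inputs
    if level >= HIGH_LEVEL:        # 2. execute control logic
        pump_on = True
    elif level <= LOW_LEVEL:
        pump_on = False
    set_pump(pump_on)              # 3. write outputs

    # 4. hold a fixed scan interval so outputs respond within a bounded time
    elapsed = time.monotonic() - start
    time.sleep(max(0.0, SCAN_TIME - elapsed))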

PLCs may need to interact with people for the purpose of configuration, alarm reporting or everyday control.  A human-machine interface (HMI) is employed for this purpose.  PLCs have built-in communication ports, usually 9-pin RS-232, but optionally EIA-485 or Ethernet, and support protocols such as Modbus, BACnet or DF1.  Most modern PLCs can communicate over a network to some other system, such as a computer running a SCADA system or a web browser.