Unfortunately, highly granular failure rate information for new OEM hardware is generally unavailable beyond published MTBF (Mean Time Between Failures) data. However, it is possible to gain insight into the average reliability of infrastructure equipment through category-specific analysis. For example, failure projections compiled by server manufacturers to qualify for federal/military contracts show that the components most likely to fail are hard drives, fans, and power supplies. Manufacturers apparently expect 1% to 5% of these components to fail during the first three years. Following this infant mortality phase, failure rates stabilize at roughly 1% until the wear-out/end-of-life phase begins, a tipping point that varies widely by product.
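To make the numbers above concrete, the sketch below shows two common back-of-the-envelope calculations: converting a published MTBF into an annualized failure rate, and a simple piecewise approximation of the bathtub curve. The phase boundaries and wear-out slope are illustrative assumptions consistent with the ranges cited here, not vendor data.

```python
import math

HOURS_PER_YEAR = 8766  # average year length in hours

def annualized_failure_rate(mtbf_hours: float) -> float:
    """Convert a published MTBF into an annualized failure rate (AFR),
    assuming a constant failure rate (exponential lifetime model)."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def bathtub_rate(age_years: float) -> float:
    """Illustrative piecewise bathtub curve. The early-life taper from 5%
    to 1% mirrors the ranges cited above; the year-7 wear-out onset and
    its slope are hypothetical placeholders, since that tipping point
    varies widely by product."""
    if age_years < 3:                        # infant mortality phase
        return 0.05 - (0.04 / 3) * age_years
    elif age_years < 7:                      # stable useful-life phase
        return 0.01
    else:                                    # wear-out / end-of-life phase
        return 0.01 + 0.02 * (age_years - 7)

# Even a drive rated at 1,000,000 hours MTBF fails roughly 0.9% of the time per year
print(f"AFR at 1M-hour MTBF: {annualized_failure_rate(1_000_000):.2%}")
print(f"Expected failure rate at 6 months: {bathtub_rate(0.5):.1%}")
```

Note that a headline MTBF of one million hours still implies nearly one failure per hundred units per year, which is why the category-level percentages above are a more useful planning figure than MTBF alone.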
Stakeholders in the procurement and maintenance of IT hardware are often unaware of the 2% to 3% failure rate of factory-fresh gear, so it's understandable that they'd be equally surprised by the roughly 0.05% failure rate of pre-owned hardware. The factor that skews OEM data higher is explained by the Bathtub Curve: by far the largest share of hardware failures occurs during the first 30 days after installation. Whether a unit is shipped D.O.A. (not functioning straight out of the box) or fails catastrophically during the start-up and commissioning phase, "infant mortality" is a quantifiable failure risk in brand new, out-of-the-box network hardware.
Testing Identifies Problems, Avoids Failures
Whether consumer, IT manager, or procurement professional, buyers understandably bring preconceived dependability expectations to any purchase. When a product is "new" (fresh from the manufacturer and unused), it is presumed to be just beginning its useful life and is expected to be trouble-free. Although this assumption of product integrity should hold true for infrastructure hardware, two factors are thought to compromise reliability.
The first is the set of practices governing the sourcing of components. Just three decades ago, the bulk of consumer and business computer hardware was designed and manufactured in Silicon Valley and assembled domestically, mostly in Texas. Today, the vast majority of IT products are built with parts from China, Taiwan, the Philippines, and Indonesia, and assembled in contract manufacturing facilities in Shenzhen, China, where by some estimates close to 90% of the world's electronic devices are manufactured. Many tech industry analysts believe that the worldwide supply chain used by OEMs has introduced quality assurance issues, security vulnerabilities, and standardization challenges.
The second likely culprit is the set of testing protocols in use by virtually all OEM hardware contract manufacturers. For cost-saving and practicality reasons, it is estimated that only a percentage of name-brand routers, switches, servers, and consumer PCs are actually bench tested prior to shipping. While this practice of spot-checking can be useful for discovering QA problems that are statistically significant, it clearly allows anomalies that cause an individual unit to fail to pass through undetected.
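The limitation of spot-checking is straightforward to quantify: if only a fraction of units is bench tested, a defective unit escapes detection whenever it falls outside the sample. The sketch below uses hypothetical sampling and defect rates (actual OEM figures are not published) to illustrate the point.

```python
def untested_defect_rate(sampling_rate: float, defect_rate: float) -> float:
    """Share of shipped units that are both defective and were never
    bench tested, assuming defects occur independently of which units
    happen to be sampled."""
    return defect_rate * (1 - sampling_rate)

# Hypothetical: 10% of units spot-checked, 2% latent defect rate
print(f"{untested_defect_rate(0.10, 0.02):.2%} of shipped units are untested defects")
# -> 1.80%; sampling reveals statistically significant QA drift, but
#    most single-unit anomalies ship without ever seeing a test bench.
```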
Because of the nature of pre-owned hardware, certifying the operational integrity of each component demands rigorous testing methodologies. While most vendors of secondary-market hardware have a strong commitment to in-house bench testing, we believe our 15-point Inspection and Testing protocols are among the most demanding in the industry. All pre-owned switches, routers, servers, and optics undergo intensive scrutiny by our Hardware Operations Engineers, using procedures based on industry-accepted best practices.