|Main Page > Misc > MicroHouse PC Hardware Library Volume I: Hard Drives|
When you shop for a drive, you may notice a feature called the Mean Time Between Failures (MTBF) described in the brochures. MTBF figures usually range from 20,000 hours to 500,000 hours or more. I usually ignore these figures because they are theoretical, statistical values; most drives that boast them have not even been manufactured for that length of time. One year of five-day work weeks with eight-hour days equals 2,080 hours of operation. If you never turn off your system and run it 24 hours per day for 365 days, you operate it 8,760 hours each year; a drive with a 500,000-hour MTBF rating is supposed to last (on average) 57 years before failing! Obviously, that figure cannot be derived from actual statistics, because the particular drive probably has been on the market for less than a year.
Statistically, for the MTBF figures to have real weight, you must run a sample of drives for at least twice the rated figure and record how many fail in that time. To be truly accurate, you would have to wait until all the drives fail, record the operating hours at each failure, and then average the running times of all the test samples to arrive at the average time before a drive failure. For a reported MTBF of 500,000 hours (common today), the test sample would have to run for at least 1 million hours (114 years), yet the drive carries this specification on the day it is introduced.
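The arithmetic behind these claims is easy to check. A minimal sketch in Python, using the chapter's own figures:

```python
# Hours of operation per year under the two usage patterns from the text.
work_hours_per_year = 52 * 5 * 8        # five 8-hour days per week -> 2,080 hours
continuous_hours_per_year = 365 * 24    # 24 hours a day, every day -> 8,760 hours

mtbf = 500_000                          # rated MTBF in hours

# Years a drive would have to last, on average, to live up to its rating.
years_continuous = mtbf / continuous_hours_per_year    # about 57 years

# A statistically meaningful test would run the sample for at least
# twice the rated figure.
test_years = (2 * mtbf) / continuous_hours_per_year    # about 114 years

print(f"2,080 h/yr office use; {years_continuous:.0f} yr to reach MTBF; "
      f"{test_years:.0f} yr to validate it")
```

Even under continuous 24-hour operation, the rated lifetime and the test time both exceed any plausible product cycle, which is the chapter's point.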
The bottom line is that I do not really place much emphasis on MTBF figures. Some of the worst drives that I have used boasted high MTBF figures, and some of the best drives have lower ones. These figures do not necessarily translate to reliability in the field, and that is why I generally place no importance on them.
When you select a hard disk, an important feature to consider is the performance (speed) of the drive. Hard disks come in a wide range of performance capabilities. As is true of many things, one of the best indicators of a drive's relative performance is its price. An old saying from the automobile-racing industry is appropriate here: "Speed costs money. How fast do you want to go?"
You can measure the speed of a disk drive in two ways: average seek time and data-transfer rate.
Average seek time, normally measured in milliseconds (ms), is the average amount of time it takes to move the heads from one cylinder to another cylinder a random distance away. One way to measure this specification is to run many random track-seek operations and then divide the timed results by the number of seeks performed. This method provides an average time for a single seek.
The standard way to measure average seek time used by many drive manufacturers involves measuring the time that it takes the heads to move across one-third of the total cylinders. Average seek time depends only on the drive; the type of interface or controller has little effect on this specification. The rating is a gauge of the capabilities of the head actuator.
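The random-seek averaging described above can be sketched as follows. Here `seek_time_ms` is a hypothetical callable standing in for the actual timing of one head movement, since the text does not specify a measurement mechanism:

```python
import random

def average_seek_ms(seek_time_ms, cylinders, trials=10_000):
    """Estimate average seek time: perform many random track-to-track
    seeks, sum the timed results, and divide by the number of seeks."""
    total = 0.0
    current = random.randrange(cylinders)
    for _ in range(trials):
        target = random.randrange(cylinders)
        # seek_time_ms(distance) stands in for timing one real seek
        total += seek_time_ms(abs(target - current))
        current = target
    return total / trials
```

With a distance-dependent timing model this approximates the random-seek average; the manufacturers' one-third-stroke method instead times a single sweep across one-third of the total cylinders.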
Be wary of benchmarks that claim to measure drive seek performance. Most IDE and SCSI drives use a scheme called sector translation, so commands that tell the drive to move the heads to a specific cylinder do not actually cause the intended physical movement. This situation renders some benchmarks meaningless for those types of drives. SCSI drives also incur additional command overhead, because every command first must be sent to the drive over the SCSI bus. Even though these drives can have the fastest access times, most benchmarks do not account for this command overhead and therefore report poor performance figures for them.
A slightly different measurement, called average access time, involves another element, called latency. Latency is the average time (in milliseconds) that it takes for a sector to be available after the heads have reached a track. On average, this figure is half the time that it takes for the disk to rotate one time, which is 8.33 ms at 3,600 RPM. A drive that spins twice as fast would have half the latency. A measurement of average access time is the sum of the average seek time and latency. This number provides the average amount of time required before a randomly requested sector can be accessed.
Latency is a factor in disk read and write performance. Decreasing the latency increases the speed of access to data or files, accomplished only by spinning the drive platters faster. I have a drive that spins at 4,318 RPM, for a latency of 6.95 ms. Some drives spin at 7,200 RPM or faster, resulting in an even shorter latency time of only 4.17 ms. In addition to increasing performance where real-world access to data is concerned, spinning the platters faster also increases the data-transfer rate after the heads arrive at the desired sectors.
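The latency and access-time figures above follow directly from the spindle speed: half of one full rotation, added to the average seek time. A small sketch using the RPM values quoted in the text:

```python
def latency_ms(rpm):
    """Average rotational latency: half of one full rotation, in ms.
    One rotation takes 60,000 / RPM milliseconds."""
    return (60_000 / rpm) / 2

def average_access_ms(seek_ms, rpm):
    """Average access time = average seek time + rotational latency."""
    return seek_ms + latency_ms(rpm)

for rpm in (3_600, 4_318, 7_200):
    print(f"{rpm:>5} RPM -> latency {latency_ms(rpm):.2f} ms")
```

This reproduces the chapter's figures: 8.33 ms at 3,600 RPM, 6.95 ms at 4,318 RPM, and 4.17 ms at 7,200 RPM.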
The transfer rate probably is more important to overall system performance than any other specification. Transfer rate is the rate at which the drive and controller can send data to the system; it depends primarily on the drive's HDA and secondarily on the controller. Transfer rate used to be bound by the limits of the controller: drives connected to newer, faster controllers often outperformed the same drives connected to older ones. This situation is where the concept of interleaving sectors came from. Interleaving refers to ordering the sectors so that they are not sequential, enabling a slow controller to keep up without missing the next sector.
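Interleaving can be illustrated with a small sector-ordering sketch. The 17-sector track and 3:1 factor below are illustrative values typical of early MFM drives, not figures from the text:

```python
def interleave_map(sectors, factor):
    """Physical layout of logical sectors for a given interleave factor.
    layout[physical_position] = logical sector number; factor 1 is
    plain sequential ordering."""
    layout = [-1] * sectors
    pos = 0
    for logical in range(sectors):
        while layout[pos] != -1:            # skip already-filled slots
            pos = (pos + 1) % sectors
        layout[pos] = logical
        pos = (pos + factor) % sectors      # step ahead by the factor
    return layout

# 1:1 interleave: logical and physical order coincide.
print(interleave_map(17, 1))
# 3:1 interleave: two other sectors pass under the head between
# consecutive logical sectors, giving a slow controller time to catch up.
print(interleave_map(17, 3))
```

With a 3:1 interleave, a controller that needs up to two sector times to process each sector still reads the whole track in three revolutions instead of seventeen.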