Bruce Boyers Fragmentation Blog
Thursday, October 7, 2010
When a company goes to the considerable time and effort of purchasing and implementing a server, it is always for an important purpose: the server will host the company’s web site or handle online sales transactions, or it will serve internally as a database, CRM, file, or email server. In any case, whatever the use, that server will be expected to live up to its name and deliver top-notch service.
Wherever within a corporation that server is put to work, if it slows down in its delivery of data it will impact the company’s bottom line. If a sales prospect visits the web site to browse for a new purchase and pages take too long to load, there’s a good chance the prospect will leave. The same effect occurs when a sales representative, on the phone with a prospect, is kept waiting for product information from the database. Internally, if accounts receivable has to wait too long for invoices to be generated, billings will be late and so will income. Even if email is slow, vital orders or data could be late reaching a recipient, resulting in mistakes or a lack of coordination between departments.
Many enterprises today are attaching disks with capacities of 1 terabyte or larger to servers in order to increase capacity while at the same time lessening the data center footprint. Such a move also simplifies the storage model, with shorter routes to more data. But in the case of terabyte drives, one factor that might be assumed to be already handled must still be taken into account: file fragmentation.
Traditional defragmenters in use at enterprises now adopting terabyte drives were only designed to handle a certain range of storage capacity. At 50 GB, they run just fine. At 100 GB, they begin to strain. At 500 GB, the runtimes are overly long, but they still might get the job done. But up in the 1 and 2 TB range, these “one size fits all” defragmenters cannot cope and will just run on endlessly, never actually defragmenting the drive.
Fortunately, some fragmentation solution developers saw this coming and have now released solutions containing special technology for large disks. These “engines” are designed to handle multi-terabyte capacities and make it possible to fully defragment such drives in a matter of hours. Once defragmented, drives are kept that way, as a majority of fragmentation is prevented on the fly with no impact on users and no required scheduling.
In the case of enterprise servers, speed of service is the key. Ensure that with any server you install, the fragmentation solution selected will stand up to the job and help guarantee that speed.
Tuesday, September 28, 2010
Keeping Systems Up and Running: “If You’re Gonna Drive to Cleveland, Make Sure Your Car Will Make It.”
“If you’re gonna drive to Cleveland, make sure your car will make it.” Such practical advice, or advice very similar, has been handed down by parents to wanderlust-smitten youngsters for many years. Simply translated, it means to make sure you have a working automobile that you know is going to get you safely and comfortably wherever it is you are going.
The same could be said for a company on its journey to financial success and glory, as regards its computer system. Given the mission of the company, the predicted number of employees, and the work that needs to be accomplished on a regular basis, is that system adequate, and is it maintained to run at peak performance so it will get them there?
It starts, of course, with hardware. Regularly analyze the company and make sure there are enough servers, workstations, and hard drives to continuously get the job done. This also goes for peripherals such as printers, network cabling, and all the other material that collectively makes up a system. This could be likened to making sure you have a running car in the first place.
Next up, of course, is the software chosen. It starts with the operating system, but applications are just as important. They should be chosen wisely and tested thoroughly. Are they easy to use, or is there a year-long learning curve for employees? Are they easy to maintain and upgrade? And last but certainly not least, how good is the support? This could be likened to how features are installed in the car: you want the user to be able to easily do things like turn on the lights and operate the turn signals without fumbling about and possibly running off the road.
Another top basic concern is defragmentation. If disks are not consistently defragmented, especially in today’s computing climate of enormous files and high-capacity disk drives, file fragmentation slows down production like dirty oil and bad gasoline will slow down a car, no matter how good the hardware, operating system, or applications. And don’t rely on scheduled defragmentation; scheduling has become near impossible with servers that can never be taken offline, and in between the scheduled runs fragmentation continues to make for performance that runs in fits and starts.
Fragmentation solutions today must be fully automatic, run invisibly in the background, and require no scheduling. Performance is always maximized. Because only otherwise-idle system resources are used, there is never a negative performance impact. Best of all, systems are maintained so that the computer system will help the company really and truly get where it’s going.
If you’re going to drive to Cleveland, make sure your car will make it. And if you’re going to utilize a computer system to raise your company to ultimate success, make sure that system will make it, too!
Monday, September 20, 2010
Don’t Let Fragmentation Add to SAN Complexity
Storage Area Networks (SANs) are a great boon to enterprises everywhere. Because a SAN removes storage traffic from the production network, capacity is freed up to accommodate day-to-day operations—themselves a heavy load. SANs generally implement multiple physical disk drives in some form of fault-tolerant disk striping (RAID) and provide a great benefit to an enterprise: because stored data does not reside directly on any of the network’s servers, server power is utilized for business applications and network capacity is released to the end user.
Connecting a machine to a SAN has always been a bit of a task—it normally has to be performed manually, and with today’s heterogeneous environments there has to be considerable know-how involved in the machine’s interaction with the SAN. It becomes even more complicated, however, with the advent of virtual machines (VMs)—for each VM, a “relationship” must be established with the SAN. Since VMs can now be created and deleted on-the-fly by the users themselves, automated solutions are now appearing that will allow VMs to be automatically connected. Whether this will be a workable solution or not remains to be seen, but obviously something needs to happen to make this operation efficient.
File fragmentation already negatively affects SAN performance, if not fully addressed with an automatic solution. Physical members in a SAN environment are not read or written to directly by an application, but instead are “seen” by an application and even the OS as one single “logical” drive. When an I/O request is processed by the file system, there are a number of attributes that must be checked which cost valuable system time. If an application has to issue multiple "unnecessary" I/O requests, as in the case of fragmentation, not only is the processor kept busier than needed, but once the I/O request has been issued, the RAID hardware and software must process it and determine to which physical member the I/O request must be directed. When files are fragmented into hundreds, thousands or tens of thousands of fragments (not at all uncommon), there are obviously many more extra I/O requests. Performance slows to a crawl.
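To put rough numbers on this, here is a minimal back-of-the-envelope sketch in Python. The workload and fragment counts are hypothetical assumptions, and real file systems coalesce some requests, but the arithmetic shows why heavily fragmented files translate into so much extra work for the RAID layer.

```python
# Back-of-the-envelope estimate of the extra I/O requests fragmentation creates.
# Every figure below is an illustrative assumption, not a measurement.

def io_requests(files_read: int, avg_fragments_per_file: int) -> int:
    """Each fragment of a file generally needs its own I/O request to the logical
    drive, and the RAID layer must route every one of them to a physical member."""
    return files_read * avg_fragments_per_file

files_read_per_hour = 10_000                           # assumed workload
contiguous = io_requests(files_read_per_hour, 1)       # ideal: one request per file
fragmented = io_requests(files_read_per_hour, 250)     # assumed 250 fragments per file

print(f"Contiguous files: {contiguous:,} I/O requests per hour")
print(f"Fragmented files: {fragmented:,} I/O requests per hour")
print(f"Extra requests the RAID layer must route: {fragmented - contiguous:,}")
```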
With all that must be done to keep a SAN up and running and to ensure all machines and applications are connected, IT personnel cannot afford to be chasing down and addressing symptoms of file fragmentation. Especially with the addition of VMs, there is already enough to do. Fragmentation must be constantly addressed so that it is simply eliminated—a task that can only be performed with a fully automatic solution. Such a solution works invisibly, in the background, with no negative impact on system processes and—best of all—no required scheduling by IT personnel.
Don’t let fragmentation add to SAN complexity. Make sure your fragmentation solution allows you to address factors that truly need addressing.
Wednesday, September 15, 2010
Don’t Let Fragmentation Bring You Down from the Cloud
The last couple of years have brought the “next big platform” to the computing world: cloud computing. A true paradigm shift, cloud computing makes it possible for companies to change over from costly company-owned computing resources to performing most needed processes via simple web interfaces through facilities owned and located outside the enterprise.
The actual computing is done by vendors providing infrastructure, platforms and software as services, and is performed using server farms that spawn virtual machines on demand to meet client needs. Several heavy-hitting companies offer full cloud computing services, including Amazon, IBM, Google, Microsoft and Yahoo. As cloud computing gains broader acceptance—which is rapidly occurring—many more providers are certain to arrive on the scene.
While it would seem that a technology as lofty as cloud computing would be far beyond the simple performance problems that have plagued systems since the earliest days, it is unfortunately not true. Yes, file fragmentation is still with us—and is more of a detriment than ever.
A key component of cloud computing is the use of virtual machines. In this environment, a single drive or set of drives supports a number of virtual machines—and data from all of those machines is saved on that same drive or set of drives. File fragmentation, which drastically slows performance on any drive, has an even more profound effect in virtual machines.
A virtual machine issues its own I/O requests, which are relayed to the host system. This means that multiple I/O requests occur for each file request—at least one for the guest system and another for the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), there are multiple I/O requests for each fragment of every file. This scenario is then multiplied by the number of virtual machines resident on any host server, then multiplied again by the number of servers. Performance is drastically slowed—and can even be stopped—for an entire computing cloud.
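As a rough illustration of that multiplication, the short Python sketch below walks through the arithmetic. The fragment counts, VM counts, and host counts are all hypothetical assumptions chosen only to show how quickly the numbers compound.

```python
# Illustrative sketch of I/O amplification in a virtualized (cloud) environment.
# All multipliers are assumptions chosen only to demonstrate the arithmetic.

fragments_per_file = 500   # assumed: a moderately fragmented file
io_layers = 2              # at least one request in the guest, one relayed to the host
vms_per_host = 20          # assumed number of VMs on one host server
hosts_in_cloud = 50        # assumed number of host servers behind the cloud service

requests_per_file = fragments_per_file * io_layers
requests_per_host = requests_per_file * vms_per_host   # if each VM reads one such file
requests_in_cloud = requests_per_host * hosts_in_cloud

print(f"I/O requests to read one fragmented file: {requests_per_file:,}")
print(f"Across {vms_per_host} VMs on one host:    {requests_per_host:,}")
print(f"Across {hosts_in_cloud} host servers:     {requests_in_cloud:,}")
```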
Such advanced technology requires advanced solutions. The only fragmentation solution that can keep the cloud aloft is one that ensures files stored at the virtual environment hardware layer are consistently and automatically kept in an unfragmented state. This method uses only idle resources to actually prevent a majority of fragmentation before it occurs, which means that users are never negatively affected performance-wise and scheduling is never required. The performance and reliability of virtual machines—and thus of the cloud—are constantly maximized.
Don’t let fragmentation bring you down from the cloud. Ensure your cloud computing service provider is employing a fragmentation solution that will truly allow it to fly.
Wednesday, September 8, 2010
Fragmentation Solutions: Invisible versus “In Your Face”
Computing technology has always striven for the “totally automatic.” It certainly wasn’t always so; just look at the level of technical skill it once took simply to operate a computer. The first systems required MIT grads just to turn them on and get answers to equations. Down through the years, they became easier to operate and required less skill, until we finally reached the PC that anyone could run.
The goal of “fully automatic” also applies to the various factors that go into system administration. Except for putting the physical hardware there at a desk, a new user’s desktop can now be completely set up remotely. Network loads and traffic flows can be adjusted automatically. Entire (virtual) servers can be automatically set up and run. And now, finally, the defragmentation chore can be set to run fully automatically, so that pesky file fragmentation won’t bother anyone ever again.
But wait: if you think that claim is being made about low-cost or free fragmentation solutions, think again. They must be scheduled, which means the use of valuable IT hours. It also means there are many times when defragmentation is not occurring, and performance-crippling fragmentation continues to impact company productivity.
There are many other drawbacks to such solutions as well, especially when compared to a state-of-the-art fully automatic solution. Some require 15 to 20 percent free space in order to defragment. Many defragment only files, instead of both files and free space. In many cases, only one instance of the built-in defragmenter can be run at a time.
Additionally, some have no method of reporting on defrag results or even defrag status as they operate, leaving IT personnel in the dark. Some allow no defragmentation of system and metadata files, nor exclusion of any files from defrag. They are generally “one size fits all,” addressing all types of fragmentation and sizes of drives with one defrag method.
A true fully automatic solution requires no scheduling and is always addressing fragmentation invisibly, using only otherwise-idle resources so that there is never a negative performance impact—only the positive one. The automatic solution addresses both files and free space, and only requires 1 percent free space. It tackles drives and partitions simultaneously, instead of one at a time, and also positions frequently used files for faster access. The automatic solution fully reports on defrag status and results.
Today there is even technology for preventing a majority of fragmentation before it even occurs.
The entire point of technology, going all the way back to the origin of computing, is to decrease workload. Only the fully automatic fragmentation solution accomplishes that mandate. Make sure your fragmentation chores are addressed with the invisible background technology available today, actually lowering unnecessary IT tasks and increasing IT efficiency.
Monday, August 30, 2010
Fully Automatic Defrag for the Most Effective SANs
The late comedian George Carlin used to do a routine that defined a home as “a place to put your stuff.” As it unfolded, the bit talked about the increasing accumulation of “stuff” and how eventually one needed to purchase a bigger home because one had “more stuff.”
The amount of data required by enterprises in order to operate could certainly fall into this humorous category. As computing has become more sophisticated, the volume of “stuff” needed to be kept and analyzed has grown dramatically, and so has the problem of efficiently storing and accessing it all. Storage Area Networks (SANs) solved the problem of isolated storage arrays and their accessibility from all applications; these arrays are networked together in such a way that the entire SAN is viewed as a series of “virtual disk drives,” each easily accessible from anywhere. In addition to access, benefits include simplified administration, scalability and flexibility.
There is one crucial factor, however, that can bring SAN efficiency to a crawl if not properly and effectively addressed, and that is file fragmentation. Since the SAN is “seen” by the OS and applications as logical drives, an I/O request processed by the file system has a number of attributes that must be checked, costing valuable system time. Fragmentation causes an application to issue multiple unnecessary I/O requests, keeping the processor busier than needed. Additionally, once an I/O request has been issued, the RAID hardware and software must process it and determine to which physical member the I/O request must be directed. With all the additional I/O requests, performance is greatly affected.
Today’s data centers are usually up 24X7 and are a hotbed of activity even without the added strain of fragmentation. SANs need to be maintained at maximum performance, period; fragmentation must be constantly addressed so that it is simply eliminated. The “traditional” approach of scheduling defrag simply won’t work when there are few time windows in which to schedule maintenance—and in between such times fragmentation continues to build and hamper SAN performance.
The only true solution for SAN fragmentation is one that works fully automatically and invisibly, in the background. Because it utilizes only otherwise-idle resources, it requires no scheduling at all and has no negative impact on system processes. Fragmentation is no longer a problem, and SAN performance and reliability are fully maximized.
A SAN is one of the ultimate solutions for an enterprise to store and easily access their “stuff.” Make sure it is always quickly and reliably accessible by choosing the right fragmentation solution from the start.
Monday, August 23, 2010
Scheduled Defragmentation: Is It Enough?
An argument is now occurring in the defragmentation world: does it take continuous work on a disk to keep it defragmented, or can it be effectively done periodically, scheduled in a specified time window? One might think the answer depends on which defragmentation solution provider you're talking to—but real-world challenges and disk activity can actually shed light on the truth of the matter.
In a laboratory environment, a disk with fragmented files can be defragmented during a specified time window and be shown to have been effectively defragmented. But there are a few key differences between this laboratory environment and the real world—not the least among them the fact that in the real world, disk access and file fragmentation are constant. An ancient maxim tells us that the only constant is change, and this is never more true than as regards the data residing on disk drives. What is occurring between these scheduled defrag runs? Is the disk remaining perpetually defragmented? Of course not. Fragmentation begins right away following the defragmentation run and continues to increase until the next scheduled run. And with today’s technology and constant access, that fragmentation—and its impact on performance—can be significant.
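A toy simulation makes the point. The Python sketch below assumes a fixed rate of new fragmentation per day and a weekly scheduled defrag run; the rate is a made-up figure, but it shows how much fragmentation a disk carries, on average, between runs compared with an approach that keeps the count near zero continuously.

```python
# Toy simulation: fragments accumulating between weekly scheduled defrag runs
# versus being handled continuously. The daily rate is an assumed figure.

NEW_FRAGMENTS_PER_DAY = 5_000
DAYS = 28

scheduled_counts = []
fragments = 0
for day in range(1, DAYS + 1):
    fragments += NEW_FRAGMENTS_PER_DAY   # fragmentation builds every day
    if day % 7 == 0:                     # weekly scheduled run cleans it all up
        fragments = 0
    scheduled_counts.append(fragments)

continuous_counts = [0] * DAYS           # continuous prevention keeps it near zero

print("Average fragments present, scheduled weekly:", sum(scheduled_counts) // DAYS)
print("Average fragments present, continuous:      ", sum(continuous_counts) // DAYS)
```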
In contrast to the scheduled approach, a recent technical breakthrough allows fragmentation to actually be prevented—automatically, transparently, whenever idle system resources are available. This means the solution is far better equipped to keep up with the ever-changing state of a disk drive—in short, it changes as the fragmented state of the files changes. Fragmentation is consistently addressed, and disk performance and reliability are kept at maximum.
Another aspect of the “scheduled” approach is that it is actually outmoded in today’s computing environment. With much of today’s business being globalized, access to many servers is 24X7. So when can defragmentation be scheduled in such a way that it won’t impact users? The answer: it can’t. Perhaps it can be scheduled when the fewest users are accessing a server—but those users are obviously still being affected.
The new breakthrough requires no scheduling; its operations do not impact system performance while it is running and hence do not affect users at all. This is an approach better geared to today’s demanding environment.
In addition, IT staff time is required to analyze an enterprise’s disk drives and schedule defragmentation. With today’s shortage of experienced IT personnel, scheduling defragmentation is hardly a worthy activity.
The scheduled approach to defragmentation may have worked once, when disk activity was far less hectic and there was significant downtime in which defragmentation could take place. But with today's constant access and file fragmentation, it can be easily shown to be an insufficient solution.