Tuesday, September 28, 2010

“If you’re gonna drive to Cleveland, make sure your car will make it.” Such practical advice, or advice very similar, has been handed down by parents to wanderlust-smitten youngsters for many years. Simply translated, it means to make sure you have a working automobile that you know is going to get you safely and comfortably wherever it is you are going.
The same could be said of a company’s computer system on its journey to financial success and glory. Given the company’s mission, the predicted number of employees, and the work that must be accomplished on a regular basis, is that system adequate and maintained to run at peak performance so it will get them there?
It starts, of course, with hardware. Regularly analyze the company’s needs and make sure there are enough servers, workstations, and hard drives to continuously get the job done. The same goes for peripherals such as printers, network cabling, and all the other material that collectively makes up a system. This could be likened to making sure you have a running car in the first place.
Next up, of course, is the software chosen. It starts with the operating system, but applications are just as important. They should be chosen wisely and tested thoroughly. Are they easy to use, or is there a year-long learning curve for employees? Are they easy to maintain and upgrade? And last but certainly not least, how good is the support? This could be likened to how features are installed in the car. You want the driver to be able to turn on the lights and operate the turn signals easily, without fumbling about and possibly running off the road.
Another top basic concern is defragmentation. If disks are not consistently defragmented, especially in today’s computing climate of enormous files and high-capacity disk drives, file fragmentation slows down production the way dirty oil and bad gasoline slow down a car, no matter how good the hardware, operating system, or applications. And don’t rely on scheduled defragmentation; scheduling has become nearly impossible with servers that can never be taken offline, and in between the scheduled runs fragmentation continues to make for performance that runs in fits and starts.
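Fragmentation can at least be spot-checked between those scheduled runs. Here is a minimal sketch (Python, assuming a Windows host and an elevated prompt; it simply wraps the operating system’s built-in defrag utility in analysis-only mode and is not any particular vendor’s tool):

```python
import subprocess

def analyze_volume(volume="C:"):
    """Print Windows' built-in fragmentation analysis for a volume.

    'defrag /A' is read-only: it reports fragmentation levels without
    moving any files. Requires an elevated (Administrator) prompt.
    """
    result = subprocess.run(["defrag", volume, "/A"],
                            capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    analyze_volume("C:")
```

Of course, a manual spot check only tells you how far behind the schedule has fallen; it does nothing to prevent the fragmentation itself.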
Fragmentation solutions today must be fully automatic, run invisibly in the background, and require no scheduling, so that performance is always maximized. Because only otherwise-idle system resources are used, there is never a negative performance impact. Best of all, systems are maintained so that the computer system will assist the company to really and truly get where it’s going.
If you’re going to drive to Cleveland, make sure your car will make it. And if you’re going to utilize a computer system to raise your company to ultimate success, make sure that system will make it, too!
Monday, September 20, 2010
Don’t Let Fragmentation Add to SAN Complexity
Storage Area Networks (SANs) are a great boon to enterprises everywhere. Because a SAN removes storage traffic from the production network, capacity is freed up to accommodate day-to-day operations—themselves a heavy load. SANs generally implement multiple physical disk drives in some form of fault-tolerant disk striping (RAID), and provide a great benefit to an enterprise: because stored data does not reside directly on any of a network's servers, server power is utilized for business applications and network capacity is released to the end user.
Connecting a machine to a SAN has always been a bit of a task—it normally has to be performed manually, and with today’s heterogeneous environments considerable know-how is required for the machine’s interaction with the SAN. It becomes even more complicated, however, with the advent of virtual machines (VMs): for each VM, a “relationship” must be established with the SAN. Since VMs can now be created and deleted on the fly by the users themselves, automated solutions are now appearing that will allow VMs to be connected automatically. Whether this will be a workable solution remains to be seen, but obviously something needs to happen to make this operation efficient.
File fragmentation already negatively affects SAN performance if not fully addressed with an automatic solution. Physical members in a SAN environment are not read or written directly by an application, but instead are “seen” by the application and even the OS as one single “logical” drive. When an I/O request is processed by the file system, a number of attributes must be checked, which costs valuable system time. If an application has to issue multiple “unnecessary” I/O requests, as in the case of fragmentation, not only is the processor kept busier than needed; once each I/O request has been issued, the RAID hardware and software must process it and determine to which physical member it must be directed. When files are fragmented into hundreds, thousands or tens of thousands of fragments (not at all uncommon), there are obviously many more extra I/O requests. Performance slows to a crawl.
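To make that multiplication concrete, here is a minimal back-of-the-envelope sketch (Python; the file and fragment counts are invented for illustration, not measurements from any real SAN):

```python
def io_request_estimate(files, avg_fragments_per_file):
    """Roughly compare I/O requests for contiguous vs. fragmented files.

    A contiguous file can often be read with a single request; a file in
    N fragments needs roughly one request per fragment, and the RAID
    layer must map every one of those requests to a physical member.
    """
    contiguous = files                           # ~1 request per file
    fragmented = files * avg_fragments_per_file  # ~1 request per fragment
    return contiguous, fragmented

ideal, actual = io_request_estimate(files=10_000, avg_fragments_per_file=50)
print(f"Contiguous: ~{ideal:,} requests; fragmented: ~{actual:,} requests")
```

Under those invented numbers, the same workload generates fifty times the I/O requests, every one of which the RAID layer must also translate.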
With all that must be done to keep a SAN up and running and to ensure all machines and applications are connected, IT personnel cannot afford to be chasing down and addressing symptoms of file fragmentation. Especially with the addition of VMs, there is already enough to do. Fragmentation must be constantly addressed so that it is simply eliminated—a task that can only be performed with a fully automatic solution. Such a solution works invisibly, in the background, with no negative impact on system processes and—best of all—no required scheduling by IT personnel.
Don’t let fragmentation add to SAN complexity. Make sure your fragmentation solution allows you to address factors that truly need addressing.
Wednesday, September 15, 2010
Don’t Let Fragmentation Bring You Down from the Cloud
The last couple of years have brought the “next big platform” to the computing world: cloud computing. A true paradigm shift, cloud computing makes it possible for companies to change over from costly company-owned computing resources to performing most needed processes via simple web interfaces through facilities owned and located outside the enterprise.
The actual computing is done by vendors providing infrastructure, platforms and software as services, and is performed using server farms that spawn virtual machines on demand to meet client needs. Several heavy-hitting companies offer full cloud computing services, including Amazon, IBM, Google, Microsoft and Yahoo. As cloud computing gains broader acceptance—which is rapidly occurring—many more providers are certain to arrive on the scene.
While it would seem that a technology as lofty as cloud computing would be far beyond the simple performance problems that have plagued systems since the earliest days, it is unfortunately not true. Yes, file fragmentation is still with us—and is more of a detriment than ever.
A key component of cloud computing is the use of virtual machines. In this environment, a single drive or set of drives is supporting a number of virtual machines—and data from all of those machines is saved on the drive or set of drives. File fragmentation, which drastically slows down performance on any drive, has an even more profound effect in virtual machines.
A virtual machine issues its own I/O requests, which are relayed to the host system. This means that multiple I/O requests occur for each file request—at least one request for the guest system, another for the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), there are multiple I/O requests for each fragment of every file. This scenario is then multiplied by the number of virtual machines resident on any host server, then again by the number of servers. Performance is drastically slowed—and can even be stopped—for an entire computing cloud.
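A rough sketch shows how quickly that multiplication compounds (Python; every count here is invented for the example, not measured from any actual cloud):

```python
def cloud_io_requests(servers, vms_per_server, files_per_vm,
                      fragments_per_file):
    """Rough count of I/O requests to read every file once, cloud-wide.

    Each fragment costs a guest-side request plus at least one host-side
    request, and the total compounds across VMs and servers.
    """
    per_vm = files_per_vm * fragments_per_file * 2  # guest + host request
    return per_vm * vms_per_server * servers

total = cloud_io_requests(servers=20, vms_per_server=10,
                          files_per_vm=1_000, fragments_per_file=100)
print(f"~{total:,} I/O requests")  # ~40,000,000 with these numbers
```

With contiguous files the same read pass would need on the order of 400,000 requests (two per file); fragmentation alone accounts for the hundredfold difference.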
Such advanced technology requires advanced solutions. The only fragmentation solution that can keep the cloud aloft is one that ensures files stored at the virtual environment hardware layer are consistently and automatically in an unfragmented state. This method uses only idle resources and actually prevents a majority of fragmentation before it occurs, which means that users are never negatively affected performance-wise, and scheduling is never required. The performance and reliability of virtual machines—and thus of the cloud—are constantly maximized.
Don’t let fragmentation bring you down from the cloud. Ensure your cloud computing service provider is employing a fragmentation solution that will truly allow it to fly.
Wednesday, September 8, 2010
Fragmentation Solutions: Invisible versus “In Your Face”
Computing technology has always striven for the “totally automatic.” It certainly wasn’t always so; just look at the level of technical skill it once took simply to operate a computer. The first systems required MIT grads just to turn them on and get answers to equations. Down through the years, they became easier to operate and required less skill, until we finally reached the PC that anyone could run.
The goal of “fully automatic” could also be said to apply to all the various factors that go into system administration. Except for physically placing the hardware at a desk, a new user’s desktop can now be completely set up remotely. Network loads and traffic flows can be adjusted automatically. Entire (virtual) servers can be automatically set up and run. And now, finally, the defragmentation chore can be set up to run fully automatically, and pesky file fragmentation won’t bother anyone ever again.
But wait: if you think that claim is being made about low-cost or free fragmentation solutions, think again. They must be scheduled, which means spending valuable IT hours. It also means there are many times when defragmentation is not occurring, and performance-crippling fragmentation continues to impact company productivity.
There are many other drawbacks to such solutions as well, especially when compared to a state-of-the-art fully automatic solution. Some require 15 to 20 percent free space in order to defragment. Many defragment only files, instead of both files and free space. In many cases, only one instance of the built-in defragmenter can be run at a time.
Additionally, some have no method of reporting on defrag results, or even on defrag status as they operate, leaving IT personnel in the dark. Some allow no defragmentation of system and metadata files, nor exclusion of any files from defrag. They are generally “one size fits all,” addressing all types of fragmentation and all sizes of drives with a single defrag method.
A true fully automatic solution requires no scheduling and is always addressing fragmentation invisibly, using only otherwise-idle resources so that there is never a negative performance impact—only the positive one. The automatic solution addresses both files and free space, and only requires 1 percent free space. It tackles drives and partitions simultaneously, instead of one at a time, and also positions frequently used files for faster access. The automatic solution fully reports on defrag status and results.
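To illustrate the “only otherwise-idle resources” idea, here is a minimal sketch of an idle-aware background loop (Python; the 25 percent CPU threshold, the polling interval, and do_defrag_pass() are all invented for illustration, and this is not any vendor’s actual implementation):

```python
import time

import psutil  # third-party: pip install psutil

IDLE_CPU_THRESHOLD = 25.0  # assumed: treat the system as idle below 25% CPU
BACKOFF_SECONDS = 5        # assumed: how long to wait when the system is busy

def do_defrag_pass():
    """Hypothetical placeholder for one small unit of defrag work."""
    pass

def background_worker():
    """Do defrag work only while the system is otherwise idle."""
    while True:
        # cpu_percent(interval=1) samples CPU usage over one second
        if psutil.cpu_percent(interval=1) < IDLE_CPU_THRESHOLD:
            do_defrag_pass()             # system is quiet: do a little work
        else:
            time.sleep(BACKOFF_SECONDS)  # system is busy: back off, stay invisible

if __name__ == "__main__":
    background_worker()
```

The point of the sketch is simply that the worker yields whenever users need the machine, which is what removes both the scheduling burden and the performance penalty.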
Today there is even technology for preventing a majority of fragmentation before it even occurs.
The entire point of technology, going all the way back to the origin of computing, is to decrease workload. Only the fully automatic fragmentation solution accomplishes that mandate. Make sure your fragmentation chores are addressed with the invisible background technology available today, actually lowering unnecessary IT tasks and increasing IT efficiency.