Thursday, October 7, 2010

Server Frag Solutions: It’s All About Speed of Service

When a company goes to the considerable time and expense of purchasing and implementing a server, it is always for an important purpose. The server may host the company’s web site or handle online sales transactions, or serve internally as a database, CRM, file or email server. Whatever the use, that server will be expected to live up to its name and deliver top-notch service.

Wherever within a corporation that server is put to work, if it slows down in its delivery of data it is going to impact the company’s bottom line. If a sales prospect visits the web site to browse for a new purchase and pages take too long to load, there’s a good chance the prospect is going to leave. The same goes for a sales representative who, on the phone with a prospect, has to wait for product information from the database. Internally, if accounts receivable has to wait too long for invoices to be generated, billings will be late and so will income. Even if email is merely slow, vital orders or data can be late reaching a recipient, resulting in mistakes or a lack of coordination between departments.

Many enterprises today are attaching disks with capacities of 1 terabyte or larger to servers in order to increase capacity while at the same time lessening the data center footprint. Such a move also simplifies the storage model, with shorter routes to more data. But in the case of terabyte drives, one other factor must be taken into account, even though its handling is often assumed to be a foregone conclusion: file fragmentation.

Traditional defragmenters in use at enterprises now adopting terabyte drives were only designed to handle a certain range of storage capacity. At 50 GB, they run just fine. At 100 GB, they begin to strain. At 500 GB, the runtimes are overly long, but they still might get the job done. But up in the 1 and 2 TB range, these “one size fits all” defragmenters cannot cope and will just run on endlessly, never actually defragmenting the drive.

Fortunately some fragmentation solution developers had seen this coming, and have now released solutions containing special technology for large disks. These “engines” are designed to handle multi-terabyte capacities and can make it possible to fully defragment such drives in a matter of hours. Once defragmented they are kept that way, as a majority of fragmentation is prevented on-the-fly with no impact on users and no required scheduling.

In the case of enterprise servers, speed of service is the key. Ensure that with any server you install, the fragmentation solution selected will stand up to the job and help guarantee that speed.

Tuesday, September 28, 2010

Keeping Systems Up and Running: “If You’re Gonna Drive to Cleveland, Make Sure Your Car Will Make It.”

“If you’re gonna drive to Cleveland, make sure your car will make it.” Such practical advice, or advice very similar, has been handed down by parents to wanderlust-smitten youngsters for many years. Simply translated, it means to make sure you have a working automobile that you know is going to get you safely and comfortably wherever it is you are going.

The same could be said, as regards its computer system, for a company on its journey to financial success and glory. Given the mission of the company, the predicted number of employees, and the work that needs to be accomplished on a regular basis, is that system adequate, and maintained to run at peak performance, so that it will get them there?

It starts, of course, with hardware. Regularly analyze the company and make sure there are enough servers, workstations, and hard drives to continuously get the job done. This also goes for peripherals such as printers, network cabling, and all the other components that collectively make up a system. This could be likened to making sure you have a running car in the first place.

Next up, of course, is the software chosen. It starts with the operating system, but applications are just as important. They should be chosen wisely and tested thoroughly. Are they easy to use, or is there a year-long learning curve for employees? Are they easy to maintain and upgrade? And last but certainly not least, how good is the support? This could be likened to how the features are laid out in the car. You want the driver to be able to easily turn on the lights and operate the turn signals without fumbling about and possibly running off the road.

Another top basic concern is defragmentation. If disks are not consistently defragmented, especially in today’s computing climate of enormous files and high-capacity disk drives, file fragmentation slows down production just as dirty oil and bad gasoline slow down a car, no matter how good the hardware, operating system, or applications. And don’t rely on scheduled defragmentation; scheduling has become nearly impossible with servers that can never be taken offline, and in between the scheduled runs fragmentation continues to make performance run in fits and starts.

Fragmentation solutions today must be fully automatic, run invisibly in the background, and require no scheduling. Because only otherwise-idle system resources are used, there is never a negative performance impact, and performance is always maximized. Best of all, systems are maintained so that the computer system will assist the company to really and truly get where it’s going.

If you’re going to drive to Cleveland, make sure your car will make it. And if you’re going to utilize a computer system to raise your company to ultimate success, make sure that system will make it, too!

Monday, September 20, 2010

Don’t Let Fragmentation Add to SAN Complexity

Storage Area Networks (SANs) are a great boon to enterprises everywhere. Because a SAN moves storage traffic off the production network, network capacity is freed up to accommodate day-to-day operations—themselves a heavy load. SANs generally implement multiple physical disk drives in some form of fault-tolerant disk striping (RAID), and provide a great benefit to an enterprise: because stored data does not reside directly on any of a network's servers, server power is utilized for business applications and network capacity is released to the end user.

Connecting a machine to a SAN has always been a bit of a task—it normally has to be performed manually, and with today’s heterogeneous environments, considerable know-how is involved in the machine’s interaction with the SAN. It becomes even more complicated, however, with the advent of virtual machines (VMs)—for each VM, a “relationship” must be established with the SAN. Since VMs can now be created and deleted on the fly by the users themselves, solutions are now appearing that connect VMs to the SAN automatically. Whether this will be a workable solution or not remains to be seen, but obviously something needs to happen to make this operation efficient.

File fragmentation already negatively affects SAN performance if not fully addressed with an automatic solution. Physical members in a SAN environment are not read from or written to directly by an application; instead they are “seen” by an application, and even by the OS, as one single “logical” drive. When an I/O request is processed by the file system, there are a number of attributes that must be checked, and checking them costs valuable system time. If an application has to issue multiple "unnecessary" I/O requests, as in the case of fragmentation, not only is the processor kept busier than needed, but once each I/O request has been issued, the RAID hardware and software must process it and determine to which physical member it must be directed. When files are fragmented into hundreds, thousands or tens of thousands of fragments (not at all uncommon), there are obviously many more extra I/O requests. Performance slows to a crawl.
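
To make the effect concrete, here is a minimal, hypothetical sketch in Python; the file size, request size and fragment count are illustrative assumptions, not figures from any particular SAN. It simply counts the I/O requests the RAID layer would have to route in each case.

    # Hypothetical illustration: each fragment generally requires at least one
    # separate I/O request, and the RAID layer must route every request to a
    # physical member.
    def io_requests(file_size_mb, fragments, max_transfer_mb=1):
        # A contiguous file can be read in large sequential transfers.
        contiguous = max(1, file_size_mb // max_transfer_mb)
        # A badly fragmented file needs at least one request per fragment.
        fragmented = max(contiguous, fragments)
        return contiguous, fragmented

    before, after = io_requests(file_size_mb=500, fragments=20_000)
    print(f"contiguous: ~{before} requests, fragmented: ~{after} requests")

Every one of those extra requests must also be checked by the file system and mapped to a physical member, which is where the time goes.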

With all that must be done to keep a SAN up and running and to ensure all machines and applications are connected, IT personnel cannot afford to be chasing down and addressing symptoms of file fragmentation. Especially with the addition of VMs, there is already enough to do. Fragmentation must be constantly addressed so that it is simply eliminated—a task that can only be performed with a fully automatic solution. Such a solution works invisibly, in the background, with no negative impact on system processes and—best of all—no required scheduling by IT personnel.

Don’t let fragmentation add to SAN complexity. Make sure your fragmentation solution allows you to address factors that truly need addressing.

Wednesday, September 15, 2010

Don’t Let Fragmentation Bring You Down from the Cloud

The last couple of years have brought the “next big platform” to the computing world: cloud computing. A true paradigm shift, cloud computing makes it possible for companies to change over from costly company-owned computing resources to performing most needed processes via simple web interfaces through facilities owned and located outside the enterprise.

The actual computing is done by vendors providing infrastructure, platforms and software as services, and is performed using server farms that spawn virtual machines on demand to meet client needs. Several heavy-hitting companies offer full cloud computing services, including Amazon, IBM, Google, Microsoft and Yahoo. As cloud computing gains broader acceptance—which is rapidly occurring—many more providers are certain to arrive on the scene.

While it would seem that a technology as lofty as cloud computing would be far beyond the simple performance problems that have plagued systems since the earliest days, it is unfortunately not true. Yes, file fragmentation is still with us—and is more of a detriment than ever.

A key component of cloud computing is the use of virtual machines. In this environment, a single drive or set of drives is supporting a number of virtual machines—and data from all of those machines is saved on the drive or set of drives. File fragmentation, which drastically slows down performance on any drive, has an even more profound effect in virtual machines.

A virtual machine issues its own I/O requests, which are relayed to the host system. This means that multiple I/O requests occur for each file request—at least one for the guest system and another for the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), there are multiple I/O requests for each fragment of every file. This scenario is then multiplied by the number of virtual machines resident on any host server, and multiplied again by the number of servers. Performance is drastically slowed—and can even be stopped—for an entire computing cloud.
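
As a rough back-of-the-envelope illustration, the multiplication looks something like the Python sketch below; every number in it is a made-up assumption, chosen only to show how quickly the request count compounds.

    # Illustrative numbers only; real environments will vary widely.
    fragments_per_file = 2_000   # a heavily fragmented file
    io_layers = 2                # at least one request in the guest, one in the host
    vms_per_host = 10
    hosts_in_cloud = 50

    requests_per_file = fragments_per_file * io_layers
    cloud_wide = requests_per_file * vms_per_host * hosts_in_cloud
    print(f"Requests to read one such file in one VM: {requests_per_file:,}")
    print(f"If every VM on every host reads a similar file: {cloud_wide:,}")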

Such advanced technology requires advanced solutions. The only fragmentation solution that can keep the cloud aloft is one that ensures files stored at the virtual environment hardware layer are consistently and automatically in an unfragmented state. This method uses only idle resources to actually prevent a majority of fragmentation before it occurs, which means that users are never negatively affected performance-wise, and scheduling is never required. The performance and reliability of virtual machines—and thus of the cloud—are constantly maximized.

Don’t let fragmentation bring you down from the cloud. Ensure your cloud computing service provider is employing a fragmentation solution that will truly allow it to fly.

Wednesday, September 8, 2010

Fragmentation Solutions: Invisible versus “In Your Face”

Computing technology has always striven for the “totally automatic.” It certainly wasn’t always so; just look at the level of technical skill it once took to simply operate a computer. The first systems took MIT grads to simply turn them on and get answers to equations. Down through the years, they became easier to operate and required less skill, until we finally reached the PC that anyone could run.

The goal of “fully automatic” could also be said for all the various factors that go into system administration. Except for putting the physical hardware there at a desk, a new user’s desktop can now be completely set up remotely. Network loads and traffic flows can be adjusted automatically. Entire servers (virtual) can be automatically set up and run. And now, finally, the defragmentation chore can be set up to run fully automatically, and pesky file fragmentation won’t bother anyone ever again.

But wait: if you think that claim is being made about low cost or free fragmentation solutions, think again. They must be scheduled, which means use of valuable IT hours. It also means that there are many times that defragmentation is not occurring, and performance-crippling fragmentation is continuing to impact company productivity.

There are many other drawbacks to such solutions as well, especially when compared to a state-of-the-art fully automatic solution. Some require 15 to 20 percent free space in order to defragment. Many defragment only files, instead of both files and free space. In many cases, only one instance of the built-in utility can be run at a time.

Additionally, some have no method of reporting on defrag results, or even on defrag status as they operate, leaving IT personnel in the dark. Some allow neither defragmentation of system and metadata files nor exclusion of specific files from defrag. They are generally “one size fits all,” addressing all types of fragmentation and all sizes of drives with a single defrag method.

A true fully automatic solution requires no scheduling and is always addressing fragmentation invisibly, using only otherwise-idle resources so that there is never a negative performance impact—only the positive one. The automatic solution addresses both files and free space, and only requires 1 percent free space. It tackles drives and partitions simultaneously, instead of one at a time, and also positions frequently used files for faster access. The automatic solution fully reports on defrag status and results.

Today there is even technology for preventing a majority of fragmentation before it even occurs.

The entire point of technology, going all the way back to the origin of computing, is to decrease workload. Only the fully automatic fragmentation solution accomplishes that mandate. Make sure your fragmentation chores are addressed with the invisible background technology available today, actually lowering unnecessary IT tasks and increasing IT efficiency.

Monday, August 30, 2010

Fully Automatic Defrag for the Most Effective SANs

The late comedian George Carlin used to do a routine that defined a home as “a place to put your stuff.” As it unfolded, the bit talked about the increasing accumulation of “stuff” and how eventually one needed to purchase a bigger home because one had “more stuff.”

The amount of data required by enterprises in order to operate could certainly fall into this humorous category. As computing has become more sophisticated, the volume of “stuff” needed to be kept and analyzed has grown dramatically, and so has the problem of efficiently storing and accessing it all. Storage Area Networks (SANs) solved the problem of isolated storage arrays and their accessibility from all applications; these arrays are networked together in such a way that the entire SAN is viewed as a series of “virtual disk drives,” each easily accessible from anywhere. In addition to access, benefits include simplified administration, scalability and flexibility.

There is one crucial factor, however, that can bring SAN efficiency to a crawl if not properly and effectively addressed: file fragmentation. Since the SAN is “seen” by the OS and applications as logical drives, an I/O request processed by the file system has a number of attributes that must be checked, costing valuable system time. Fragmentation causes an application to issue multiple unnecessary I/O requests, keeping the processor busier than needed. Additionally, once an I/O request has been issued, the RAID hardware and software must process it and determine to which physical member the I/O request must be directed. With all the additional I/O requests, performance is greatly affected.
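
The per-request routing work is easy to picture with a minimal sketch, shown below in Python under the simplifying assumption of plain RAID 0-style striping; the stripe size and member count are arbitrary examples. The point is that this lookup, plus the file-system attribute checks, is repeated for every extra request that fragmentation generates.

    # Minimal sketch of logical-to-physical routing under simple striping.
    # Stripe size and member count are arbitrary assumptions for illustration.
    STRIPE_KB = 64
    MEMBERS = 4

    def physical_member(logical_offset_kb):
        """Return which physical member would service an I/O at this offset."""
        stripe_index = logical_offset_kb // STRIPE_KB
        return stripe_index % MEMBERS

    # A fragmented file scatters its requests across many logical offsets,
    # so this routing step runs once per fragment rather than once per file.
    for offset_kb in (0, 70, 300, 4_096, 9_000):
        print(f"offset {offset_kb} KB -> member {physical_member(offset_kb)}")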

Today’s data centers are usually up 24X7, and are a terrific hotbed of activity even without the added strain of fragmentation. SANs need to be maintained at maximum performance, period; fragmentation must be constantly addressed so that it is simply eliminated. The “traditional” approach of scheduling defrag simply won’t work when there are few time windows in which to schedule maintenance—and in between such times fragmentation continues to build and hamper SAN performance.

The only true solution for SAN fragmentation is one that works fully automatically and invisibly, in the background. Because it utilizes only otherwise-idle resources, it requires no scheduling at all and has no negative impact on system processes. Fragmentation is no longer a problem, and SAN performance and reliability are fully maximized.

A SAN is one of the ultimate solutions for an enterprise to store and easily access their “stuff.” Make sure it is always quickly and reliably accessible by choosing the right fragmentation solution from the start.

Monday, August 23, 2010

Scheduled Defragmentation: Is It Enough?

An argument is now occurring in the defragmentation world: does it take continuous work on a disk to keep it defragmented, or can it be effectively done periodically, scheduled in a specified time window? One might think the answer depends on which defragmentation solution provider you're talking to—but real-world challenges and disk activity can actually shed light on the truth of the matter.

In a laboratory environment, a disk with fragmented files can be defragmented during a specified time and be shown to have been effectively defragmented. But this laboratory environment differs from the real world in a few key ways—not least the fact that in the real world, disk access and file fragmentation are constant. An old adage tells us that the only constant is change, and this is never more true than with the data residing on disk drives. What is occurring between these scheduled defrag runs? Is the disk remaining perpetually defragmented? Of course not. Fragmentation begins right away following the defragmentation run and continues to increase until the next scheduled run. And with today's technology and with constant access, that fragmentation—and its impact on performance—can be significant.
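
A toy model makes the point; the rates below are entirely made-up assumptions, but they show how a disk spends most of the interval between scheduled runs in a fragmented state, while continuous prevention keeps the count low throughout.

    # Toy model with made-up rates: new fragments accumulate daily, a weekly
    # scheduled run clears them, while continuous prevention stops most of
    # them from ever forming.
    NEW_FRAGMENTS_PER_DAY = 5_000
    PREVENTION_RATE = 0.9          # assumed share prevented on the fly

    scheduled = continuous = 0
    for day in range(1, 15):
        scheduled += NEW_FRAGMENTS_PER_DAY
        continuous += int(NEW_FRAGMENTS_PER_DAY * (1 - PREVENTION_RATE))
        if day % 7 == 0:
            scheduled = 0          # the weekly defrag run wipes the slate clean
        print(f"day {day:2}: scheduled={scheduled:6,}  continuous={continuous:6,}")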

In contrast to the scheduled approach, a recent technical breakthrough allows fragmentation to actually be prevented—automatically, transparently, whenever idle system resources are available. This means that the solution is far better equipped to keep up with the ever-changing state of a disk drive—in short, it adapts as the fragmented state of the files changes. Fragmentation is consistently addressed, and disk performance and reliability are kept at maximum.

Another aspect of the "scheduled" approach is that it is actually outmoded in today's computing environment. With much of today's business being globalized, access to many servers is 24X7. So when can defragmentation be scheduled in such a way that it won’t impact users? The answer: it can’t. Perhaps it can be scheduled when the least number of users are accessing a server—but users are obviously still being affected.

The new breakthrough requires no scheduling, as its operations do not impact system performance while it is running and hence do not affect users at all. This is an approach better geared to today’s demanding environment.

In addition, IT staff time is required to analyze an enterprise’s disk drives and schedule defragmentation. With today’s shortage of experienced IT personnel, scheduling defragmentation is hardly a worthwhile use of their time.

The scheduled approach to defragmentation may have worked once, when disk activity was far less hectic and there was significant downtime in which defragmentation could take place. But with today's constant access and file fragmentation, it can be easily shown to be an insufficient solution.

Tuesday, August 17, 2010

The Right Fragmentation Solution Means True ROI

Return-on-investment (ROI) is a very important term to businesses. For any dollar spent, they want it proven—and then demonstrated—that more than a dollar will be returned for that expenditure. Especially in an economic climate like today’s, an investment with an even or a negative return is extremely unpopular. An example might be a piece of equipment costing tens of thousands of dollars that requires a thousand dollars a month to maintain, another five thousand to staff and operate, and appears to add little to nothing of value to the final product. Another (sometimes less obvious) example is the Vice President who generates reams of inter-office memos, ties up otherwise-productive employees in lengthy meetings and contributes less than nothing to the forward motion of the company's goals.

Another very common—but often unobserved—item with no or minus ROI is the traditional method of addressing fragmentation known as scheduled defragmentation. On the surface it may sound fine: defragmentation can be scheduled to occur during off hours so that computer files can be maintained in a fragmentation-free state and computer access and performance can be maximized. But when you start examining the actual ROI of scheduled defragmentation, the proposition starts to unravel.

First, when exactly can defragmentation be scheduled so it won't interfere with users or other production? Many of today's servers operate 24X7 and cannot be interrupted or taken down for defragmentation. Second, what vital tasks are not being completed while valuable IT hours are being invested in analyzing and scheduling defragmentation?

While both the above are important points and negatively affect ROI, it's the third point that's the real kicker: is the solution actually and fully defragmenting drives? If you apply a simple bit of scrutiny, you will find the answer to be a resounding "no!" In between the scheduled runs, fragmentation is continuing to build and impact performance, and in some cases the defragmenter isn't addressing fragmentation at all. The basic ROI of a fragmentation solution is the fact that it eliminates fragmentation and increases performance all across an enterprise; if it isn't doing that, there is no ROI and it belongs on the scrap heap with that useless machine or being ushered out the back door with that superfluous Vice President.
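
In plain arithmetic, ROI is simply the return relative to the cost. The Python sketch below illustrates the calculation with hypothetical placeholder figures; none of them are vendor numbers or measured results.

    # Hypothetical placeholder figures, for illustration of the formula only.
    solution_cost = 5_000        # assumed annual cost of the defrag solution
    it_hours_saved = 100         # assumed hours not spent scheduling/analyzing
    hourly_rate = 75             # assumed loaded IT hourly rate
    productivity_gain = 10_000   # assumed value of faster systems company-wide

    annual_return = it_hours_saved * hourly_rate + productivity_gain
    roi = (annual_return - solution_cost) / solution_cost   # (return - cost) / cost
    print(f"ROI: {roi:.0%}")

If the solution is not actually eliminating fragmentation, the return side of that equation collapses to zero, which is the point made above.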

The only solution that truly addresses today's fragmentation is fully automatic. This solution operates invisibly, in the background, utilizing otherwise-idle resources so that fragmentation is consistently eliminated. No scheduling is ever required. With such a solution comes the real ROI expected: for a relatively small investment, computer performance and reliability are actually maximized along with employee productivity. There are few investments that can be made with that kind of solid, dependable return.

Tuesday, August 10, 2010

The Right Defrag Solution: More Cost-Effective Than a Free Solution

A utility or function that is "free" should always be examined carefully. For example, an aspiring musician may go shopping for a keyboard and find one for a decent price that has a built-in "free" drum machine. On the surface, it sounds like a great deal; they can have rhythm accompaniment as they play, like having their own little band. But when they get the keyboard home and actually put this "free" drum machine to work, they find that it is very limited in the sounds it will produce and in fact makes their creations sound a bit cheesy. To make it sound halfway good, the drum sounds have to be run through a separate amplifier. But then they find that they can't even route the drum machine's output out of the keyboard separately. After countless wasted hours and endless tweaking, the musician realizes that it would have taken far less money and time—and would have produced a better result—to have just invested in a professional drum machine in the first place.

Moving over into the world of computers, one finds a similar scenario: that of a free defragmenter. Again, on the surface it sounds ideal. There is no up-front cost, and it will take care of that performance-crippling fragmentation problem so IT can take their attention off of it and move on to other pressing matters.

But when the time comes to actually get the defragmentation work done, a realization will dawn upon the system administrator about how “free” this defrag solution isn’t. First, it has to be scheduled, often on systems that must remain up and running 24X7. Many IT hours can be wasted in trying to fit in such schedules. Users cannot be on the system when the free defrag is running, causing more lost time and, worse, lost income. And in between these scheduled runs, fragmentation continues to build and impact performance and reliability, which lowers the cost-effectiveness of the entire enterprise.

Then even more problems arise. IT personnel can never tell if a disk is fully defragged because there is no progress chart in the UI. Only one instance of the defragger can be run at a time. Only local drives can be defragmented.

The only true defrag solution today is one that does not require scheduling and addresses fragmentation consistently in the background. Because only idle system resources are used, there is never a negative impact on users. Such a solution means that IT can remove their attention from fragmentation at the moment this solution is installed—performance and reliability are constantly maximized. When comparing such a solution to the free defragmenter, the actual and staggering costs of the "free" utility become readily apparent.

Wednesday, August 4, 2010

When "Free" Actually Costs More

This might have happened to you: You’re off on a resort vacation that you’ve looked forward to for months, in some tropical paradise. You’re dreamily exploring the resort on the first day, and you come across a person who offers a free champagne brunch for you and your spouse. You think, great! A free high-end meal! When you get there, though, you realize how free it isn’t. The food is great—but you have to sit there for two hours listening to a sales pitch on condo timesharing. And for the rest of your vacation, some salesman is pursuing you all over the resort, popping up every time you turn around, trying to close you on buying into his "great plan." That "free brunch" was anything but—it put a serious damper on your dream vacation, which you paid substantial money to take.

In the world of computing, the same could be said for a free or inexpensive defrag utility. Yes, it costs little to nothing at the outset—but fragmentation is a serious performance problem, and soon you’re going to have to run that utility. The first problem you’re going to have is that it needs to be scheduled. Most corporate systems need to be up and running constantly, so finding a time window in which to schedule defragmentation is a major problem. And the time that the system is offline is the first major cost of the free or inexpensive utility.

When you finally can schedule and run it, you find that it runs, and runs, and runs, consuming system resources the whole while, and never seems to actually complete a defrag job. Add up the number of hours that IT staff have spent trying to get useful work out of this utility, and you’ll find its second major cost.

There are third-party solutions to fragmentation that are far more efficient. In fact, technology has now evolved to the point that a majority of fragmentation can actually be prevented before it even occurs—completely automatically, and with no impact on system resources. The I/O resources required to defragment files after they have already been fragmented are saved, and peak performance is constantly maintained.

Now compare the price of the free or "low-cost" utility to that of the third-party solution. There is an initial cost for the third-party solution—but once it is installed and running, fragmentation is basically a thing of the past. The net result: the bargain utility actually costs far more.

Wednesday, July 28, 2010

Fragmentation and the Virtual Revolution

It hasn’t been all that long since virtual servers came on the scene, allowing entire machines to operate as software applications. This technology meant that companies hanging onto only partially used hardware servers—and sweating over the expensive space and energy they were taking up—could consolidate them as virtual servers and fully utilize machines. Virtualization has certainly meant a revolution for data centers.

But as one might expect, the revolution has not stopped with servers. Now companies are looking at how they might virtualize the scores of desktops scattered throughout an enterprise, again reducing hardware and energy consumption and simplifying management as well. The PC monitors remain with users, but become very similar to terminals connected to virtual PCs that could be located anywhere, including a company’s main data center.

It might seem that since a machine is virtual, it would not suffer from a traditional issue such as fragmentation. After all, if a machine exists purely as data in memory, how can file fragmentation be a problem?

The answer is that the data being utilized by a virtual machine is still being saved on a hard drive. A single drive or set of drives is supporting a number of virtual machines—called “guest systems”—and data from all of those machines is saved on the drive or set of drives on what is called the “host system.” File fragmentation, which drastically slows down performance on any drive, has an even worse effect in virtual server environments. Note that fragmentation will occur no matter what is being “virtualized”—servers, PCs or even in the future, networks.

Since a virtual machine issues its own I/O requests, which are relayed to the host system, multiple I/O requests occur for each file request—minimally, one for the guest system and another for the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), multiple I/O requests are generated for each fragment of every file. This is multiplied by the number of virtual machines resident on any host server, and doing the math, it can be easily seen that the result is seriously degraded performance.
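
A minimal sketch of that two-layer relay appears below in Python. It is purely conceptual; the fragment counts and the per-layer factor are assumptions for illustration, and fragmentation can exist both in the guest's file system and in the host's copy of the virtual disk file.

    # Conceptual sketch only; the numbers are illustrative assumptions.
    def total_requests(guest_fragments, host_ios_per_guest_io=1):
        guest_ios = guest_fragments                     # one request per fragment in the guest
        host_ios = guest_ios * host_ios_per_guest_io    # each is relayed to the host
        return guest_ios + host_ios

    # One file split into 1,500 fragments in the guest, with the virtual disk
    # itself moderately fragmented on the host (2 host I/Os per guest I/O):
    print(total_requests(guest_fragments=1_500, host_ios_per_guest_io=2))  # 4,500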

Defrag is obviously crucial for virtual machines—but it must be the right defrag technology. A fully automatic defrag solution means that files stored at the hardware layer are consistently and automatically defragmented, and fragmentation is never an issue at all. Only idle resources are utilized to defragment, which means that users never experience a negative performance impact, and scheduling is never required. Virtual machine performance and reliability are constantly maximized.

Virtualization means the sky is the limit for enterprises today. Don’t let fragmentation keep you tied to the ground.

Tuesday, July 20, 2010

Yes, It’s Free—but Will It Do the Job?

If you ask anyone who writes advertising copy for a living, you’ll find that the word most responded to in promotional literature of any kind is the word “free.” Not surprising—who doesn’t want something useful without paying anything? The problem is that the old adage, “If it sounds too good to be true, it probably is,” usually applies.

One version of this is the “free trial.” Some trialware is full-featured and only has a time limit. In other cases, however, it is feature-hobbled and when you really need it to do the job, it won’t.

Another is simply “free software” which has been common throughout the history of the web. When you look at the functionality, however, you may find many things missing. An example would be word processing. A free word processor might be Notepad, found on all PCs. It’s fine if you’re simply writing text and don’t care about formatting, fonts or symbols, let alone spelling and grammar checking, or the ability to embed graphics and photos. For that functionality you would need to do what many do: pay for and turn to Microsoft Word.

A free virus checker would be another example. If it’s not trialware (which is probably going to have limited functionality anyway), then it most likely is not updated with all recent virus signatures and it may or may not protect your computer. For that, you’d need a professional robust application along with a subscription that always keeps it up to date and keeps your computer safe from malware attack.

The bottom line: in responding to any offer of something free, check the functionality. In most cases, you’ll find that it won’t do the job you need it to do.

In terms of functionality, a great case in point would be a free defragmenter. In looking over the features, you would most likely find that it either must be run manually or—at best—it needs to be scheduled. Most sites cannot afford to take a system down for maintenance to defragment its hard drives, as most must remain up and running pretty near constantly. The end result is that the enterprise’s computers will continue to suffer the performance-crippling effects of fragmentation, simply because defrag can be run so seldom.

The second problem would be the utility’s ability to actually defragment your drives. Does it have the needed technology to truly do the job? Many do not, especially with today’s much larger drives, enormous file sizes and sheer number of files. A defrag utility not up to the task will simply grind endlessly and never fully defragment. Once again, a company is saddled with fragmentation’s effects—despite the appearance of a “solution.”

The lesson to be learned is that “free” does not always mean “effective.” In fact, far from it. Before you decide a free utility is best, look closely at the features.

Wednesday, July 14, 2010

Maximum Optimization of Virtual Environments

Virtualization has revolutionized data centers in many ways. It has made it possible to create servers within minutes for specific functions—even by users. It has brought us the ability to substantially conserve on hardware resources by running multiple servers within the same hardware platform. It has made it possible to greatly reduce the physical footprints of server farms.

Interestingly, however, there is a “missing link” in the optimization of virtual resources that, if not addressed, can drastically affect virtual performance.

First, there is the issue of file fragmentation. All hard drives suffer from this malady, but virtual systems actually suffer twice as much; there is fragmentation at both the host and guest levels. Additionally, when 4 to 5 servers are consolidated onto one machine, a single storage device is forced to work overtime due to the 4 to 5 times increase in I/O traffic. The result is heavy processing bottlenecks.

Second, multiple virtual machines are sharing mutual system resources—an activity that can become a drain on performance.

Third, when virtual hard disks are set to dynamically grow, they do not then shrink when users or applications remove data. This is a costly waste of space that could otherwise be allocated to other virtual systems.

For the first problem—fragmentation—there are, of course, defragmenters that can be implemented. The advanced technology of virtualization, however, requires a more advanced solution. Technology will soon be available that actually prevents a majority of fragmentation at both the guest and host levels, since fragmentation occurs in both places. This makes fragmentation a thing of the past for virtual systems, allowing their innate performance potential to be realized.

The solution to the second problem—competition for shared resources—would be the synchronization of the complex and ongoing activity between the host and multiple guest operating systems. In addition to solving file fragmentation, this would mean additional performance optimization for the entire virtual platform.

The third problem, the “bloating” of virtual hard drives, can be solved with tools that allow system personnel to monitor wasted resources and compact virtual disks when required. This facility would allow IT personnel to efficiently allocate virtual storage resources.

Taken together, these problems add up to an overall issue in virtual machine optimization. Solved together, they allow maximum performance and reliability for high-traffic virtual environments to be fully realized. When implementing virtualization—as most enterprises are today—it is wise to take these issues, and their solutions, fully into account.