
SERVER CONSOLIDATION

Term paper Report

A term paper report submitted in partial fulfillment of the requirement of Bachelor of Technology

In Information Technology

Submitted By

Name of Student: Rajat Kaul

Enrollment No.:A50105311003

Submitted to

Name: Anupma Sehrawat

Department of Computer Science & Information technology

Amity School of Engineering and Technology

AMITY UNIVERSITY HARYANA

OCTOBER, 2012

CONTENTS

List of Figures
Certificate
Abstract
1. Introduction to Server Consolidation
1.1 Types of Consolidation
1.1.1 Logical Consolidation
1.1.2 Physical Consolidation
1.1.3 Rationalized Consolidation
1.2 Advantages and Disadvantages
1.2(a) Advantages of Server Consolidation
1.2(b) Disadvantages of Consolidation
1.2.3 Improved Service
1.3 Technologies for Rationalized Consolidation
1.3.1 Workload Management
1.3.2 Partitioning
1.3.2(a) Implementation of Partitioning in Various Organisations
1.4 Use of Hardware Virtualization for Server Consolidation
1.4.1 Explanation of Hardware Virtualisation
1.4.2 Advantages of Hardware Virtualisation
1.5 Scalable Server Consolidation
1.6 Future of Server Consolidation
References


LIST OF FIGURES

1.1 Server consolidation
1.2 Types of consolidation
1.3 Results of Fortune 1000 companies survey
1.4 Difference between workload management and partitioning
1.5 Workload virtualization
1.6 Examples of shares and limits
1.7 Multiprocessor support in Windows 2000
1.8 Three types of partitioning
1.9 Hardware partitioning
1.10 Blade servers
1.11 Hardware virtualization architecture
1.12 Virtualization
1.13 Benefits of virtualization
1.14 Comparison of virtual machine software between mainframe and PC
1.15 VMware Workstation architecture


CERTIFICATE

This is to certify that the work contained in the term paper report titled “SERVER CONSOLIDATION” by RAJAT KAUL (A50105311003) in partial fulfillment of the course work requirement of the B.Tech. Program in the Department of Computer Science & Information Technology has been carried out under my guidance and supervision.

Date:
Place: Amity University Haryana

Anupma Sehrawat
Deptt. CSE/IT

Amity University Haryana


ABSTRACT

Server Consolidation is often defined to mean running many differing types of applications, such as email, financials, database, and file serving, on a single very large server. However, many of the same benefits of consolidating to a single server, or very few servers, can be achieved through other consolidation approaches. Due to this, the definition of server consolidation is broad and has evolved to encompass much more than running everything possible on as few servers as possible.

The most important part of a server consolidation initiative is the planning and analysis of end-user requirements, application requirements, and the server environment. Only by having a good understanding of the requirements and challenges of any given infrastructure can a beneficial server consolidation program be proposed, planned, and implemented.

In order to fully understand server consolidation in today’s Information Technology (IT) environment it is necessary to break it down into three different approaches: logical, physical, and workload. Studying existing servers and how they are used determines what type of server consolidation to pursue, and to what degree. Most implementations of server consolidation will actually be a mixture of the three approaches to achieve the maximum benefit.

Logical consolidation is adopting the same management and control processes across all servers. This approach to consolidation is relatively easy to achieve and will provide benefits quickly. Physical consolidation involves the geographical location of servers and attempts to keep these locations to a minimum. Workload consolidation is the actual reduction of the number of servers by moving from multiple smaller servers to fewer larger servers.

By approaching server consolidation with the right method for the right environment, great benefits are achievable. It is important to analyze the entire server environment before beginning any server consolidation program. In a real-world server consolidation implementation it is likely that all three approaches can be used to some degree. In particular, logical consolidation is always a good first step to ensure that a clear picture of the server resources is available before continuing with physical and workload consolidation steps.


1. INTRODUCTION TO SERVER CONSOLIDATION

Server Consolidation is the combining of various units into more efficient and stable larger units. When applied to an IT department, consolidation specifically translates into improved cost efficiency from higher utilization of resources, standardization and improved manageability of the IT environment, and (more recently) a focus on a “green” IT environment through reduced energy consumption. One of the important components in the IT environment is the database.

Databases tend to be very widespread across the enterprise, because they are very efficient for handling relational storage of data and are designed to be platforms for a wide variety of applications. Because databases form the foundation of so many business systems, IT departments can easily lose control of the number of databases that need to be maintained, because each group may simply create its own database to solve a specific problem it may be experiencing. This leads to a proliferation of databases and machines running database instances, also known as database sprawl. Databases are therefore one of the prime candidates for consolidation. When consolidating database applications, consider the following three potential strategies: using a single physical machine to host multiple virtual machines (VMs) running data management software, using a single machine to host multiple database server instances, and using a single database server instance to host multiple databases.


Figure 1.1 Server consolidation

Server consolidation describes a variety of ways of reducing the capital and operating expenses associated with running servers. There is no commonly agreed definition of server consolidation; instead it is used as an umbrella term for this collection of approaches.

Gartner Research divides consolidation projects into three different types, with progressively greater operational savings, return on investment and end-user benefits, but also progressively greater risks. Of 518 respondents in a Gartner Group research study, six percent had conducted a server consolidation project, 61% were currently conducting one, and 28% were planning to do so in the immediate future. The main reasons why companies undertake server consolidation are to simplify management by reducing complexity and eliminating “server sprawl”; to reduce costs, particularly staff costs but also hardware, software and facilities costs; and to improve service. Data on the ROI of server consolidation projects is hard to come by, but anecdotal evidence from big companies indicates that typical savings run into millions of dollars. A server consolidation project may also provide the opportunity to improve scalability and resilience (including disaster recovery) and to consolidate storage.

Although consolidation can substantially increase the efficient use of server resources, it may also result in complex configurations of data, applications, and servers that can be confusing for the average user to contend with. To alleviate this problem, server virtualization may be used to mask the details of server resources from users while optimizing resource sharing. Another approach to server consolidation is the use of blade servers to maximize the efficient use of space.

Blade Servers

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Whereas a standard rack-mount server can function with (at least) a power cord and network cable, blade servers have many components removed to save space and minimize power consumption, while still having all the functional components needed to be considered a computer. A blade enclosure, which can hold multiple blade servers, provides services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system (also the name of a proprietary solution from Hewlett-Packard). Different blade providers have differing principles regarding what to include in the blade itself and in the blade system altogether.

Blade servers offer an alternative approach to hardware partitioning. A blade server comprises several thin server modules that sit side by side in a chassis. The chassis provides high-speed I/O capabilities for all the modules and so reduces the amount of cabling required in the data center. Blade servers are also supplied with management software that simplifies a number of server administration tasks. Each manufacturer’s blade system is proprietary, but the major vendors are expected to launch blades for each architecture that they sell so, for example, it may be possible to run IBM xSeries, pSeries, iSeries and zSeries servers together in a single chassis.

Partitioning can occur at three different levels within a server: the hardware, logical and software levels. In hardware partitioning, each partition has one or more processors and a block of memory, but all the partitions share, to some extent, the I/O components. Hardware partitions are electrically isolated from each other, so a fault in one partition does not affect any of the others. On UNIX servers from major vendors, such as IBM, HP and Sun, the hardware can be partitioned dynamically (i.e. without stopping the operating systems), but Windows and Linux do not yet support this feature.

1.1 TYPES OF CONSOLIDATION

Figure 1.2 Types of consolidation

Gartner research divides consolidation projects into three categories:

1.1.1 LOGICAL CONSOLIDATION

The easiest type of consolidation to achieve is logical consolidation. Implementing systems management across all servers and using common administration and setup processes can help save a great deal of time. And it can help save resources too, as fewer system administrators are needed to manage the same number of servers. An enterprise-wide implementation of systems management provides great benefits.


In addition to being able to better manage all servers, a systems management package also offers the advantage of providing a clear inventory of all systems. This allows the organization/enterprise to have a real-time picture of how many systems of what type are currently deployed.

Logically, all of the servers are consolidated around a set of management tools and processes. These can be scaled to include how applications are set up and deployed, depending on the environment. By streamlining the number of processes used in the setup of servers and applications, their management becomes much easier to document and maintain. In one such implementation, engineers were then able to reserve time on servers for upcoming projects and management was able to plan more effectively for new server purchases.

1.1.2 PHYSICAL CONSOLIDATION

The physical consolidation approach is to have all servers located in a single location, or as few locations as possible. A reduction in the number of data centers can lead to a reduction in costs. Administration typically becomes easier, because administrators are centralized, leading to a more efficient use of their time during maintenance of existing servers and during setup of new servers. By using server racks to house many servers in a small amount of space, much less real estate is required. And upgrades to server applications can become much faster due to the centralized location of servers. Physical consolidation also allows for the use of server clustering technology. Clustering servers together helps improve application availability to users, because if one server in a cluster has a failure, the system is designed so that the application will continue to be available from the other server or servers in the cluster.

Storage Area Networks (SANs) provide a highly available, high-performance storage backend to multiple servers. Although it is possible to create Storage Area Networks where the storage and servers are geographically separate, the cost can be significantly reduced if the servers and storage are in the same building or campus. SANs are designed to offer the advantages of lower cost per MB, higher scalability, and better fault tolerance than storage that is directly attached to servers. Using a SAN is an example of storage consolidation. In order to further extend the benefits of server consolidation it makes sense to consolidate storage as well.

In addition, new blade server technology allows for many more servers to be installed into a smaller space. A large computer education organization used a physical consolidation approach when it recently implemented a single data center to provide the test systems for all of its classes held worldwide. Students connect to the servers over the Internet and run their lab exercises on real servers. Instructors no longer have to spend a day of prep time to get the lab environment set up, as it is set up and maintained at a centralized data center.

Software upgrades have become much easier, as all systems are upgraded as soon as the software is available and has been tested at the data center. This allows students to get education on the latest software releases much faster than if individual field servers had to be upgraded. Physical consolidation allows organizations to focus their highly skilled personnel on higher-value tasks, which gives the IT staff the opportunity to add more value to the business.

1.1.3 RATIONALIZED CONSOLIDATION

Consolidating workload means using fewer, larger servers to replace what was being accomplished by a large number of smaller servers. When reducing the number of servers there are two distinct paths that can be followed. The easier and more common approach is to continue to dedicate a server to a specific application, but to use fewer servers by taking advantage of multiprocessor systems using the most recent technology. The other, and more complex, path is to take disparate applications and put them on the same server. An example would be using a single server to provide file-print and email services. This type of consolidation can be very powerful in helping to reduce costs, but it is also the most difficult to plan and implement.

A workload consolidation requires the most planning and analysis of the existing server infrastructure, because it is a change in the workload that the servers will be running. In the logical and physical approaches to consolidation there is no change in the number of servers or the number of users they are supporting. The actual workload that each server is running does not change in these approaches, so there is a low risk of failure due to improper server sizing. In workload consolidation a greater number of users will be accessing each server. In addition, if different applications are consolidated onto a single server then the differences in those application requirements must be considered when sizing the server.

Although workload consolidation is the most difficult to plan and implement, it can also provide the greatest benefits. “Pursuing the ultimate consolidation goal of a single system image for all distributed server applications can yield great rewards when balanced with the expense, but the potential for a poor or failed implementation is far greater.”

Administration and management are easier because there are fewer servers. This means fewer systems upon which to install software updates and patches. Fewer servers also result in lower software licensing costs for both operating systems and applications. Workload consolidation can also result in a much more efficient use of resources. Replacing under-utilized one- or two-processor servers with a single four- or eight-processor server that will be utilized at a higher level is more efficient. It is easy to see the benefits of workload consolidation in a simple example of file and print servers. Instead of having ten file and print servers, with each one dedicated to a department and only 10 to 15 percent utilized, they are replaced with one or two large file and print servers that are shared by the same ten departments. Software licensing costs are greatly reduced. In addition, backup of all user data can now be accomplished much more easily by attaching a tape library directly to the file server.
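As a rough illustration of the sizing arithmetic behind this file-and-print example, the following Python sketch estimates the load on a consolidated server; the utilization figure and the relative capacity of the new server are assumptions for illustration, not measured data.

# Rough sizing sketch for the file-and-print example above. The utilization
# figure and the relative capacity of the new server are illustrative
# assumptions, not measurements from a real environment.

departmental_servers = 10
avg_utilization = 0.12        # each old server runs at roughly 10-15% load
relative_capacity = 4         # assume the new server has about 4x the capacity of one old box

total_load = departmental_servers * avg_utilization        # load in "old server" units
consolidated_utilization = total_load / relative_capacity  # share of the new server's capacity

print(f"Combined load: {total_load:.1f} old-server equivalents")
print(f"Estimated utilization of the consolidated server: {consolidated_utilization:.0%}")
# -> about 30%, which still leaves headroom for peaks and growth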


Organizations may reap significant benefits from logical and physical consolidation, but these are essentially non-technological approaches. The main focus here is therefore on rationalized consolidation, which is used to include:

• Replacing a number of small servers with fewer larger servers. For example, a company with separate file and print servers for each department might decide to consolidate them into two large file and print servers which all the departments share. Dell calls this workload consolidation and Microsoft and others call it physical consolidation (note that this meaning of the term physical consolidation is different from Gartner’s);

• Replacing existing servers with higher-density configurations. For example, a collection of servers housed in PC towers could be replaced with 1U-high rack-mounted equivalents, or a collection of 1U rack-mounted servers could be replaced with blade servers;

• Combining several instances of the same application on a single server. For example, four 4-processor SQL servers could be consolidated on a single 16-processor server or, if the original SQL servers are under-utilised, perhaps a single 8-processor server;

• Combining several different applications on a single server. For example, a company may have several applications that are only required by a small number of users, making it difficult to justify the cost of maintaining a separate server for each. In some cases it is possible to combine applications that run on different operating systems (e.g. Windows and Linux) on the same server.

A rationalized consolidation project often involves consolidating storage as well. For example, if a number of servers are brought together on one site, a NAS (Network Attached Storage) device based on RAID (Redundant Array of Inexpensive Disks) technology may be used to reduce the risk of data loss for all applications. A larger consolidation project may involve the installation of a high-performance, high-availability storage back end based on SAN (Storage Area Network) technology.

Finally, server consolidation projects can focus on existing applications (backward consolidation), new applications (forward consolidation) or both. When planning the introduction of any new application it is a good idea to consider the opportunity for forward consolidation and for rationalizing the new application with other applications.

1.2(a) ADVANTAGES OF SERVER CONSOLIDATION

In a survey of Fortune 1000 companies by Forrester Research, the top three benefits of server consolidation cited by respondents were simplified management, lower costs and improved service. Let’s look at each of those in turn.


Figure 1.3 Results of Fortune 1000 companies survey

Simplified management

In the late 1990s many enterprises expanded their IT infrastructure rapidly. Often this led to “server sprawl” – a huge proliferation of print servers, file servers, email servers, development servers and test servers throughout the organization. Server sprawl creates major challenges for IT management, particularly with regard to maintaining security, performing regular back-ups and keeping the software up to date. Implementing consistent management tools and processes across the organization, standardizing on fewer server platforms, reducing the number of servers and concentrating them in fewer locations all help to reduce complexity. This may result in the need for fewer support staff or, alternatively, allow more projects to be undertaken and service levels to be improved without the need for additional staff. For global organizations, standardization may also permit support to be provided on a 24x7 basis using worldwide resources.

Consolidation also provides an opportunity to address management issues such as resilience and scalability, storage consolidation and disaster recovery (which may be impossible if servers are dispersed throughout the organization). The introduction of fewer, more powerful servers can provide increased headroom for growth through pooling of excess capacity. Reducing the number of servers should also result in a simpler network structure that is easier to manage.

Lower costs

In the 1970s and early 1980s, when enterprises ran centralized mainframe data centers, a common rule of thumb was that 80% of costs were capital and 20% were operating. Today studies by Gartner and others indicate that the ratio is more like 30% capital and 70% operating. Server consolidation can reduce costs in a number of ways.

Staff costs

Support staff are often recruited on the basis of how many servers an organisation has (with Windows NT, one member of staff per twenty servers is often used as a benchmark). Most server consolidation projects aim to reduce costs by freeing staff from mundane server maintenance tasks. Gartner suggests that more than 70% of the potential savings from a typical project will come from reduced staffing requirements, but they caution that this is usually the hardest area in which to pre-quantify savings, especially since displaced support staff often move to new posts elsewhere in the same information services organization.
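As a quick illustration of that staffing rule of thumb, the sketch below applies the one-member-of-staff-per-twenty-servers benchmark to hypothetical before-and-after server counts; the counts themselves are placeholders, not figures from any survey.

import math

# Rule of thumb cited above: roughly one support person per twenty Windows NT
# servers. The before/after server counts are hypothetical.
SERVERS_PER_STAFF = 20
servers_before = 300
servers_after = 80

staff_before = math.ceil(servers_before / SERVERS_PER_STAFF)   # 15
staff_after = math.ceil(servers_after / SERVERS_PER_STAFF)     # 4

print(f"Indicative staffing: {staff_before} before vs {staff_after} after consolidation")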

Hardware costs

Consolidation can reduce costs through better server utilization, by reducing the total requirement for storage and by enabling the use of more cost-effective back-up/restore mechanisms (including in-house disaster recovery). Centralized purchasing may also enable better discounts to be negotiated with hardware suppliers. On the other hand, remember that many Intel-based platforms, including 1U rack-mounted servers, are already very cost-effective because of cut-throat competition. Moving from a generic Intel-based platform to a single-source platform, even one that is much higher powered, may increase hardware costs.

Software costs

Consolidation may also reduce the total number of licenses needed, while standardizing on fewer applications may allow better deals to be negotiated with suppliers. With many (but not all) applications the incremental cost of software licenses decreases as the number of users increases.

Facilities costs

Server consolidation can reduce the amount of floor space needed for data centers. This is a particular benefit if one or more existing locations are full. Bear in mind, however, that greater power density and cooling capacity are required – a 42U cabinet filled with low-power blade servers might consume three times as much power, and hence generate three times as much heat, as the same cabinet filled with 1U rack-mounted servers.

1.2(b) DISADVANTAGES OF CONSOLIDATION

Perhaps the most cited disadvantage of server consolidation is the single-point-of-failure issue – failure of a single consolidated component will have a greater impact than failure of one of several redundant components. This problem can be mitigated by having the appropriate level of redundancy and a complete disaster recovery plan.

Other disadvantages include the following:

• Sophisticated management skills are required. With several different applications running on a single server, more sophisticated management skills and processes are necessary, such as change control to ensure that a change in one application does not negatively impact another, and capacity planning to make sure all applications have sufficient resources. As applications share a common server, it can also be harder to plan downtime for maintenance.

• Cost allocation is more complicated. If consolidation results in multiple applications for different departments within an organization running on the same server, new chargeback processes may be required to allocate computing costs to individual departments or users.

• Users will feel a lack of control. Departments or IT customers that previously had their own server may object to the loss of control and flexibility created by server consolidation.

• Infrastructure may need upgrading. If servers are centralized to one location, then infrastructure, such as the network, may need upgrading to ensure sufficient bandwidth and sufficient reliability for the necessary traffic. Additionally, the OS on the servers may need to be upgraded to realize the benefits of consolidation.

1.2.3 IMPROVED SERVICE


Improved service should be a natural consequence of simplified management and the deployment of more resilient server, storage and back-up technology. It should also be possible to deploy new applications more quickly if they do not have to be loaded onto a large number of physically dispersed servers, while platform standardization should reduce the number of unforeseen delays which occur during deployment. Application development and customization may also be speeded up through server consolidation.

Finally, it is quite common to improve service levels by consolidating help desk and network support activities as part of a server consolidation project, although they can equally well be tackled separately.

PROFITS GAINED BY IMPLEMENTATION OF SERVER CONSOLIDATION

Battery Ventures recently carried out research on the server consolidation market. They found little empirical data of hard ROI savings but uncovered some interesting anecdotal examples of savings, including:

• A Fortune 500 company has 300 servers in its IT organization handling development, test and QA functions. It believes that through rationalized consolidation of these servers it can cut this number in half, saving $5-10 million over the next five years;

• Another Fortune 500 company has 175 single-processor Intel servers handling print functions. Through rationalized consolidation it believes it can reduce the number of servers to about 40, which it thinks will save over $5 million in capital and operating expenses over the next few years;

• A Fortune 50 company believes that one division can eliminate 600 development/test/QA servers from its server farm, saving $2.4 million a year in operating expense;

• A multi-billion-dollar software firm is consolidating 300 pre-sales servers down to 80 and believes it can save millions of dollars per year.

1.3 TECHNOLOGIES FOR RATIONALIZED SERVER CONSOLIDATION

The definition of rationalized consolidation given at the start of this paper said that it usually involves workload management and partitioning. Like server consolidation, these are rather loose terms which different vendors use in different ways. Here are the meanings that we give to them:

• Workload management describes a set of techniques which enable different applications to run together on a single instance of an operating system. The techniques aim to balance the resource demands that each of the applications places on the system so that all of them can co-exist. Note that, rather confusingly, some vendors refer to workload management as resource partitioning or soft partitioning.


• Partitioning involves the division of a server, which might ordinarily run a single instance of an operating system, into several smaller systems, each of which can run its own copy of an operating system. Note that all the copies run simultaneously – this is not the same as partitioning a PC hard disk so that you can select which operating system runs when the machine boots up.

In short, workload management allows several applications to run on one operating system, while partitioning allows several operating systems to run on one machine. Strictly speaking, the techniques are not mutually exclusive, but partitioning is commonly carried out because badly-behaved applications will not co-exist on a single operating system, i.e. when workload management cannot deliver. The problem commonly manifests itself as one application hogging all the processor cycles, or repeatedly causing the operating system to crash and thus bringing down the other applications.

1.3.1 WORKLOAD MANAGEMENT

The following sections discuss the workload management and partitioning solutions offered by a number of vendors. We focus particularly on solutions for Windows, UNIX and Linux, since these operating systems are the most common targets for server consolidation projects.

Workload management techniques fall into two categories:

1. Processor binding. In a server which contains several processors running the same instance of the operating system (i.e. a Symmetric MultiProcessing or SMP environment), applications can be forced to run on a particular processor or subset of processors. This is a relatively crude technique with limited granularity, since each application must be allocated a whole number of processors. It can be effective where the processing requirements of the applications are well understood, but it can also lead to low processor utilization.

2. Software-based resource allocation. In this technique software is used to allocate resources such as processing power, memory and I/O bandwidth to applications and users on a priority basis. Some implementations allow the priorities to be set in terms of service level objectives. This approach is more sophisticated and provides much greater granularity and a more dynamic response to changing workloads.

Figure 1.4 Difference between workload management and partitioning

Workload management for UNIX

Figure 1.5 Workload virtualization

Figure 1.6 Examples of shares and limits

There is no standard implementation of workload management for UNIX, so different vendors have implemented their own solutions. IBM, HP and Sun all provide software-based resource allocation as well as processor binding on their UNIX ranges.

IBM’s product for its pSeries of servers running AIX (IBM’s implementation of UNIX) is called AIX Workload Manager (WLM) and is supplied as part of the AIX operating system. WLM allows the system administrator to create different classes of service for jobs and to specify attributes for those classes. Jobs are automatically placed in a class according to the user, the user’s group and the application. For example, when the user Tom from the group Engineers runs the CAD application, he might be placed in a class called Development. The administrator can allocate Development to one of ten tiers, which determines the relative priority of the class (e.g. classes in Tier 0 have priority over classes in Tier 1, which in turn have priority over classes in Tier 2, and so on). Within each tier the administrator can allocate processor, memory and I/O bandwidth resources by means of limits and shares.

For example, Development might be allocated 40 processor shares. If all of the active classes in its tier have 100 shares between them, then Development will be given 40% of the processor time. However, if another class with 100 shares becomes active, Development will then only be given 20% of the processor time. The system administrator can also set limits to ensure, for example, that Development is given a minimum of 25% and a maximum of 50% of the processor time. If limits are used they take precedence over shares. From AIX version 5 onwards, WLM also provides processor binding.
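The share-and-limit arithmetic described for AIX WLM can be illustrated with a short sketch. This is not IBM's algorithm, just the proportional calculation from the example above with the optional minimum/maximum limits applied afterwards; the class names and numbers follow the Development example.

# Illustration of the share-and-limit arithmetic described for AIX WLM above.
# Not IBM's implementation: just the proportional calculation from the text,
# with optional minimum/maximum limits applied afterwards (limits win over shares).

def cpu_percentage(class_name, active_shares, minimum=None, maximum=None):
    """Return the CPU share (0-100) a class receives within its tier."""
    pct = 100.0 * active_shares[class_name] / sum(active_shares.values())
    if minimum is not None:
        pct = max(pct, minimum)
    if maximum is not None:
        pct = min(pct, maximum)
    return pct

# Development holds 40 of the 100 shares active in its tier -> 40%.
print(cpu_percentage("Development", {"Development": 40, "Others": 60}))
# A new class with 100 shares becomes active -> Development drops to 20%.
print(cpu_percentage("Development", {"Development": 40, "Others": 60, "Batch": 100}))
# With a 25% minimum limit, Development is lifted back to 25%.
print(cpu_percentage("Development", {"Development": 40, "Others": 60, "Batch": 100},
                     minimum=25, maximum=50))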

HP’s mechanism for binding applications to processors is called Processor Sets (Psets) and is available on all multi-processor models in the HP 9000 server series running HP-UX 11i, the latest version of HP’s UNIX implementation. It allows dynamic reconfiguration of the processors in a Pset and dynamic binding of applications to a Pset or a specific processor within a Pset.

For software-based resource allocation, HP provides two packages called Workload Manager (WLM, not to be confused with IBM’s product of the same name) and Process Resource Manager (PRM). PRM reserves processing power, memory and disk bandwidth within a partition for up to 64 separate applications. WLM allows the administrator to set service level objectives (e.g. response time for transactions, completion time for batch jobs) for different applications in priority order. WLM then adjusts the PRM processor settings dynamically as the workload varies to try to achieve the service level objectives.

Sun’s workload management functionality is built into its Solaris operating system. Sun has offered processor binding and software-based resource allocation for some time but, with the launch of Solaris 9, Sun integrates them into a new system called Solaris Containers. So far the only component of Containers that has been released is Solaris Resource Manager, and this currently only supports processor management.

Sun says it will introduce physical memory, swap space and I/O bandwidth management to Resource Manager shortly. The full implementation of Containers is intended to provide fault and security isolation between different applications running on the same operating system. Sun claims that Containers will thus provide many of the benefits of virtual machines (see section on Software partitioning below) with lower overhead.


Workload management for Windows and Linux

Depending on the edition (see table below), Windows 2000 will support up to 32 processors. Microsoft allows processes (instances of executing applications) to be bound to particular processors using a feature called Processor Affinity, which can be assigned in advance or set using the Windows Task Manager while the application is running. Microsoft also allows processes to be given different priority levels.

Figure 1.7 Multiprocessor support in Windows 2000
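For readers who want to experiment with processor binding, the following sketch uses the cross-platform psutil library to pin a running process to a subset of CPUs, analogous to the Processor Affinity feature described above; the process ID and CPU numbers are placeholders.

# Sketch: binding a process to specific CPUs with the psutil library, analogous
# to the Processor Affinity feature described above. The PID and CPU numbers
# are placeholders; run this against a process you own.
import psutil

pid = 4242                                        # hypothetical process ID
proc = psutil.Process(pid)

print("current affinity:", proc.cpu_affinity())   # e.g. [0, 1, 2, 3]
proc.cpu_affinity([0, 1])                         # restrict the process to CPUs 0 and 1
print("new affinity:", proc.cpu_affinity())

# Priorities can be adjusted in a similar way, e.g. proc.nice(10) on Unix-like systems.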

IBM developed a more sophisticated workload management tool called Process Control which was provided to Microsoft for inclusion in Windows 2000 Datacenter Server but is only available from IBM for the Windows 2000 Server and Advanced Server products. Process Control integrates the standard Microsoft features and adds control over real and virtual memory consumption, limits on the number of copies of an application that can run concurrently, and processor time limits (so an application doesn’t get stuck in a loop and endlessly eat processor cycles). Microsoft itself plans to introduce a greatly enhanced version of Process Control called Windows System Resource Manager with the higher-end versions of Windows .NET Server 2003.

HP has an alternative product called Workload Management Pack (WMP) which works on all versions of Windows Server and on other vendors’ hardware. WMP allows the administrator to create so-called Resource Partitions containing processors and memory and to allocate processes to the Resource Partitions. A dynamic rules engine then allows the resources within a partition to be altered depending on utilization or time of day.

Finally, a company called Aurema, whose previous product forms the basis of Solaris Resource Manager, has developed a new product called Active Resource Management Technology (ARMTech). ARMTech implements the concepts of shares and limits (see the discussion of IBM’s AIX WLM above) for processors and memory (but not I/O bandwidth) on servers running any version of Windows 2000 Server. Microsoft has licensed ARMTech for distribution with Windows 2000 Datacenter Server and it can be purchased for use with Windows 2000 Server and Advanced Server. Aurema has also developed a version of ARMTech for Linux, but it is not being actively marketed at present.

1.3.2 PARTITIONING

IBM mainframes have supported partitioning techniques for over 30 years, so that multiple operating systems of different flavors could run simultaneously on the same system. Over time, partitioning technology has trickled down onto smaller systems and it is now available for PCs. Partitioning can occur at three different levels within the server:

1. Hardware (or physical) partitioning is a technique which can only be applied to servers with multiple processors. Each partition has one or more processors and a block of memory dedicated to it, but the partitions share, to some degree, the disk and the I/O components of the server. Hardware partitions are electrically isolated from each other, so a fault in one partition does not affect any of the others. In most cases, the allocation of resources (i.e. memory, processors and I/O paths) to hardware partitions can only be altered when the operating systems are offline;

2. Logical partitioning uses a layer of hardware microcode or firmware (and sometimes software as well) to enable a single processor to run more than one partition. The way in which resources are allocated to logical partitions can usually be altered without stopping the operating systems. The microcode or firmware is platform-specific and logical partitioning is only available on high-end servers;

3. Software partitioning achieves the same effect as logical partitioning using Virtual Machine (VM) software rather than microcode or firmware. The VM software acts as the master operating system, supporting the operating systems used by the applications as guests. Software partitioning is usually quite easy to implement, but it tends to create more overhead on the system than the other two techniques – typically absorbing between 10% and 20% of the processor power.

1.3.2(a) IMPLEMENTATION OF PARTITIONING IN VARIOUS ORGANISATIONS

Hardware partitioning


Figure 1.8 Three types of partitioning

IBM supports three types of hardware partitioning on higher-end models within the Intel-based xSeries range of servers. IBM uses a four-processor node as its basic building block. Hardware partitions must coincide with node boundaries, so two or more nodes may act as a single partition but a single node may not be subdivided. Fixed partitioning is carried out while the system is powered off, and involves the cabling together (or uncabling) of two or more physical nodes to modify the partitioning; after recabling, the operating system must be restarted. Static partitioning requires the nodes being modified to be taken offline so that they can be accessed using systems management software, but the remaining nodes are unaffected. Dynamic partitioning allows changes to be made without stopping the operating system. However, while this technique is supported by the hardware, it is not yet supported by Windows or Linux, so it is of little practical benefit today.

HP offers two forms of hardware partitioning on its latest Superdome architecture and on other HP 9000 servers designed to run HP-UX. HP 9000 servers will work together in a configuration known as a Hyperplex. Hardware partitions within the Hyperplex consist of one or more nodes (i.e. servers). This form of partitioning is comparable to IBM’s fixed partitioning. On Superdome and higher-end HP 9000 servers a technology called nPartitions is also provided. These servers are built out of cells containing up to four processors, memory and optional I/O resources, and a Superdome server can comprise up to 16 cells. With nPartitions, a hardware partition consists of one or more cells. Cells are moved between nPartitions using the systems management interface; the affected partitions need to be taken offline. HP’s nPartitions is comparable to IBM’s static partitioning.

Unisys provides static partitioning on its ES7000 “Windows mainframe”, an enterprise server designed to run Windows 2000 Datacenter Server. The processor building block in the ES7000 is known as a sub-pod and consists of four Intel processors, a third-level cache and memory. A single ES7000 can contain a maximum of eight sub-pods. A static partition can comprise any number of sub-pods, but sub-pods cannot be split between partitions. In order to move resources between static partitions, the affected partitions need to be taken offline and subsequently rebooted. Unisys also supports a feature that the company calls soft partitioning but which is, in reality, a workload management feature enabling the processors within a static partition to be assigned to different applications. ES7000 soft partitioning makes use of the processor affinity feature in Windows 2000 Datacenter Server. Like IBM’s xSeries, the ES7000 will also permit dynamic partitioning once this is supported by the operating system.

Sun’s hardware partitioning technology is called Dynamic System Domains (DSDs) and is available on Sun Fire “Midframe” and high-end servers and the Sun Enterprise 10000 server. As the name suggests, DSDs deliver dynamic hardware partitioning today, but only with Sun’s Solaris operating system. The building block for DSDs is Sun’s Uniboard processor and memory board, which is configured with two or four Sun UltraSPARC processors. There are, however, additional constraints on the number of DSDs that a particular server will support. For example, the Sun Fire 6800 server will take up to six Uniboards but the number of DSDs is limited to four.


Figure 1.9 Hardware partitioning

Blade servers offer an alternative approach to hardware partitioning. At present, they cannot do anything as sophisticated as the systems described above, but the technology is evolving rapidly and more sophisticated techniques are likely to emerge. The first blade servers were launched by start-ups like RLX Technologies in late 2000 and consist of open, single-board computers complete with memory and often one or two hard disk drives that slot into a chassis. The chassis provides high-speed I/O capabilities that are shared between the blades. Blade servers were originally promoted for their high packing density (RLX can fit 24 blades in a 3U chassis) and low power consumption (the first RLX blades used Transmeta Crusoe processors for this reason), but it turned out that customers were more interested in the servers’ management features and the reduction in cabling. Consequently, more recent offerings from mainstream vendors like HP, IBM and Dell use standard Intel processors and have lower packing densities but more robust mechanical designs (i.e. the blades are enclosed in metal cases). Each vendor’s blade system is proprietary, which locks customers in and, it could be argued, allows vendors to obtain higher margins than they could on standard Intel-based rack-mounted servers.

Figure 1.10 Blade servers

The management systems for blade servers allow each blade to be stopped, started and rebooted from a central location. Some systems have a hard disk on each blade so that the system image is tightly linked to that blade, but others allow the use of shared disk space so that any system image can be run on any blade. This gives system administrators great flexibility – a server can change its role from email server to web server in a matter of minutes, and if one blade fails, the management system can restart the application on another blade. At present blade servers only support Windows and Linux operating systems, but Sun has announced a blade server that will run Solaris and HP is expected to introduce HP-UX support on its higher-end pClass system.

Logical partitioning

Intel’s architecture does not readily support logical partitioning, so this technology is only available from vendors like IBM and HP who have servers based on their own chip technology. IBM’s product is called LPAR (simply an abbreviation of Logical PARtitioning) and is available on the company’s iSeries (formerly AS/400) and pSeries (formerly RS/6000). On the iSeries, the base partition must run IBM’s OS/400 operating system but the other partitions can run OS/400 or Linux. Processors can be shared between partitions (including the base partition) but I/O paths cannot. The latest version of LPAR for the iSeries is dynamic, i.e. it allows resources such as processor, memory and interactive performance to be moved between partitions without taking their operating systems offline. On the pSeries, LPAR is less sophisticated: processors cannot be shared between partitions and, although dynamic LPAR is available in partitions running new versions of AIX, Linux partitions must be taken offline to perform logical partitioning on them.

HP’s equivalent of LPAR is called vPars (short for virtual partitions) and is available on its medium to high-end HP-UX (i.e. UNIX) servers. In reality, vPars is a technology that straddles the boundary between logical partitioning and software partitioning, since the partitions are created and managed by virtual partition monitor software. Processors and I/O paths cannot be shared between vPars, but the technology is dynamic so resources can be moved without having to reboot the partitions affected.

Software partitioning

Software partitioning technology was originally developed in the mid 1960s as a way of allowing many users to share the power and resources of a mainframe computer without affecting each other. The first commercial implementation was the VM operating system for IBM mainframes, which was the result of research projects carried out at MIT in Cambridge. VM has proved to be remarkably durable – 35 years on, the latest incarnation, z/VM, is still widely used on IBM zSeries mainframes. In the last few years researchers have examined how software partitioning techniques can be applied to PC operating systems. This has resulted in commercial products for Intel-based servers from companies like VMware and Connectix.

In software partitioning, the application or guest operating system executes a set of instructions that are categorized as privileged or non-privileged. Privileged instructions are those that could affect users of other virtual machines, e.g. instructions that attempt to alter the contents of a hardware register. The virtual machine monitor (VMM) allows non-privileged instructions to be executed directly on the hardware, but it intercepts privileged instructions and executes them itself, or emulates the results and returns them to the virtual machine that issued them.

1.4 USE OF HARDWARE VIRTUALISATION FOR SERVER CONSOLIDATION


1.4.1 Explanation of hardware virtualisation

Computer hardware virtualization is the virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead showing another, abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but nowadays the terms "hypervisor" or "virtual machine monitor" are preferred. The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently been called "platform virtualization" or "server virtualization".

Server virtualization is a technology for partitioning one physical server into multiple virtual servers. Each of these virtual servers can run its own operating system and applications, and perform as if it were an individual server. This makes it possible, for example, to complete development using various operating systems on one physical server or to consolidate servers used by multiple business divisions.

Figure 1.11 Hardware virtualization architecture

Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than access to the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device's native capabilities, depending on the hardware access policy implemented by the virtualization host.
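As a concrete illustration of a host managing its guests, the sketch below uses the libvirt Python bindings to list the virtual machines on a virtualization host; the connection URI assumes a local QEMU/KVM hypervisor, which is an assumption rather than something prescribed here.

# Sketch: listing the guest virtual machines on a virtualization host with the
# libvirt Python bindings. The URI below assumes a local QEMU/KVM hypervisor;
# adjust it for your own environment.
import libvirt

conn = libvirt.open("qemu:///system")       # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        status = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():<20} {status:<8} {vcpus} vCPU(s), {max_mem_kib // 1024} MiB")
finally:
    conn.close()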


Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in the reduced performance of the virtual machine compared to running natively on the physical machine.

1.4.2 Advantages of hardware virtualisation

In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as CPU. Although the hardware is consolidated, typically the operating systems are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. The large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation.

Figure 1.12 Virtualization

Figure 1.13 Benefits of virtualization

Reduce the number of servers

Partitioning and isolation, the characteristics of server virtualization, enable simple and safe server consolidation. Through consolidation, the number of physical servers can be greatly reduced. This alone brings benefits such as reduced floor space, power consumption and air conditioning costs. However, it is essential to note that even though the number of physical servers is greatly reduced, the number of virtual servers to be managed does not change. Therefore, when virtualizing servers, installation of operation management tools for efficient server management is recommended.

Reduce TCO

Server consolidation with virtualization reduces the costs of hardware, maintenance, power, and air conditioning. In addition, it lowers the Total Cost of Ownership (TCO) by increasing the efficiency of server resources and operational changes, as well as through virtualization-specific features. As a result of today’s improved server CPU performance, a few servers have high resource-usage rates but most are often underutilized. Virtualization can eliminate such ineffective use of CPU resources and optimize resources throughout the server environment. Furthermore, because servers previously managed by each business division's staff can be centrally managed by a single administrator, operation management costs can be greatly reduced.

Improve availability and business continuity

One beneficial feature of virtualized servers not available in physical server environments is live migration. With live migration, virtual servers can be migrated to another physical server for tasks such as performing maintenance on the physical servers without shutting them down, so there is no impact on the end user. Another great advantage of virtualization technology is that its encapsulation and hardware-independence features enhance availability and business continuity.

Increase efficiency for development and test environments

At system development sites, servers are often used inefficiently. When different physical servers are used by each business division's development team, the number of servers can easily increase. Conversely, when physical servers are shared by teams, reconfiguring development and test environments can be time- and labor-consuming.

Such issues can be resolved by using server virtualization to simultaneously run various operating system environments on one physical server, thereby enabling concurrent development and testing of multiple environments. In addition, because development and test environments can be encapsulated and saved, reconfiguration is extremely simple.

Consolidating servers can also have the added benefit of reducing energy consumption. A typical server runs at 425 W and VMware estimates an average server consolidation ratio of 10:1.
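Turning those two figures into a back-of-envelope energy estimate might look like the following sketch; the fleet size and electricity price are assumed placeholders, not figures from the text.

# Back-of-envelope energy estimate using the figures quoted above (425 W per
# server, 10:1 consolidation ratio). The fleet size and electricity price are
# assumed placeholders.
servers_before = 100
consolidation_ratio = 10
watts_per_server = 425
price_per_kwh = 0.10                        # assumed price in USD

def annual_kwh(server_count):
    return server_count * watts_per_server * 24 * 365 / 1000

saved_kwh = annual_kwh(servers_before) - annual_kwh(servers_before // consolidation_ratio)
print(f"Energy saved: {saved_kwh:,.0f} kWh/year (about ${saved_kwh * price_per_kwh:,.0f}/year)")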


A virtual machine can be more easily controlled and inspected from outside than a physical one, and its configuration is more flexible. This is very useful in kernel development and for teaching operating system courses.

A new virtual machine can be provisioned as needed without the need for an up-front hardware purchase.

A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to his laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of breaking down the OS on the laptop.

Because of this easy relocation, virtual machines can be used in disaster recovery scenarios.

However, when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance, which depends heavily on the workload imposed on the system by the other VMs, unless proper techniques are used for temporal isolation among virtual machines.

Processors that are designed to be virtualizable, such as those on mainframes, can operate in distinct privileged and non-privileged (also called user) modes. The VMM runs in privileged mode and the virtual machines run in user mode. The processor only allows privileged instructions to be executed in privileged mode, so if a virtual machine issues a privileged instruction it is automatically trapped and control of the processor is passed back to the VMM.

Unfortunately, Intel’s x86 architecture is not fully virtualizable, so a VMM running directly on PC hardware would not be able to trap all the privileged instructions. VMware and Connectix solve this issue by running their VMMs on a host operating system. Connectix products use various versions of Windows as the host operating system. VMware uses Windows or Linux as the host. In the case of VMware’s top-of-the-range ESX Server, the package contains the VMM embedded in a Linux kernel (where it executes with less overhead) so there is no need to install a separate host operating system.

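The trap-and-emulate idea described above can be sketched as a toy simulation: privileged instructions issued by a guest are intercepted and emulated by the monitor, while non-privileged instructions run directly. The instruction names and the dispatch function are purely illustrative and model the control flow only, not real processor behaviour.

# Toy model of trap-and-emulate: the monitor (VMM) intercepts privileged
# instructions issued by a guest and emulates them, while non-privileged
# instructions run directly. This models control flow only, not a real CPU.

PRIVILEGED = {"write_control_register", "disable_interrupts", "io_access"}

def run_directly(instruction):
    return f"hardware executed {instruction}"

def emulate(instruction, guest):
    return f"VMM emulated {instruction} for guest '{guest}'"

def dispatch(instruction, guest):
    if instruction in PRIVILEGED:       # trap: control passes back to the VMM
        return emulate(instruction, guest)
    return run_directly(instruction)    # safe instruction, no trap needed

for instr in ["add", "disable_interrupts", "load", "io_access"]:
    print(dispatch(instr, "vm1"))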


Figure 1.14 Comparison of virtual machine software between mainframe and personal computer

Using a host operating system to run a VMM on a PC solves another problem – how to provide drivers for the huge variety of hardware and I/O devices that are available for the PC platform. The virtual machine provides the guest operating system with a series of virtual interfaces for common devices. The VMM takes data from the virtual interface and sends it to the real interface at the appropriate time using the host operating system’s hardware driver. One difference between VMware and Connectix is that VMware virtual interfaces are based on generic virtual hardware while Connectix emulates real hardware.

There is no reason why the virtual and real interfaces need be the same. This means that the virtual machine environment can be constant and independent of the underlying hardware, which brings two benefits. Firstly, system images (comprising operating systems and applications) can simply be copied between VMs, rather than requiring installation. Secondly, the virtual machine environment can emulate peripherals for obsolete operating systems, such as IBM’s OS/2, enabling OS/2 to run on modern hardware even though there are no OS/2 drivers available for the hardware.

The following is a more detailed explanation of virtual machine software for Intel platforms. The description is based on VMware Workstation, but the other products work in similar ways. When Workstation is installed it creates three components – the VMX driver, the VMM and the VMware Application. The VMX driver is installed within the host operating system, thereby obtaining the high privilege levels that are permitted to drivers and that the VMM requires. When the VMware Application is executed it uses the VMX driver to load the VMM into the memory used by privileged applications, such as the operating system. The host operating system is aware of the VMX driver and the VMware Application but it is ignorant of the VMM. The system now contains two “worlds” – the host world and the VMM world. When the guest operating systems are running purely computational programs, the VMM world communicates directly with the processor and memory. When an I/O function (such as disk access) needs to be performed, the VMM intercepts the request and switches to the host world. The VMware Application then carries out the request using standard host operating system calls and returns the data to the guest operating system through the VMX driver, the VMware Application and the VMM.


Figure 1.15 VMware Workstation architecture

Each time a world switch is performed, all the user and processor state information needs to be stored, so performance is lower than it would be with a real system or a mainframe VMM. How much lower depends on the I/O intensity of the application and the utilization of the server. Workstation's VMM tries to minimize the loss of performance by analyzing the I/O requests, determining which are actually moving data and which are simply checking the status of I/O ports, and emulating the latter. Performance is also affected by the fact that the VMware Application and guest operating systems run as applications under the host operating system and therefore run the risk of being swapped out of memory at inconvenient times.

Despite these limitations, virtual machine software for Intel-based servers is proving to be a valuable method of server consolidation. The major benefits are:

Low cost. The software itself is inexpensive and commodity hardware can be used (or existing Intel-based server hardware re-used).

Wide range of guest operating systems. Support is available for virtually every operating system that can run on a PC, including Windows server software from NT 3.1 onwards, Linux, FreeBSD, Novell, OS/2 and Solaris.

Quick and easy to install. No recabling is required, and existing applications and operating systems are not touched.

Dynamic resource management. Once installed, virtual machine software permits dynamic reconfiguration of processors and memory to virtual machines with a fine level of granularity, as sketched below.
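As a rough illustration of that last point, the sketch below models a host whose memory can be re-divided among guests on the fly. The class and method names (HostPool, set_memory_mb) are hypothetical and do not correspond to any particular product's management interface.

```python
# Hypothetical sketch of fine-grained resource reconfiguration; real products
# expose this through their own management consoles and APIs.

class GuestConfig:
    def __init__(self, name, vcpus, memory_mb):
        self.name, self.vcpus, self.memory_mb = name, vcpus, memory_mb

class HostPool:
    """Tracks how much of one host's physical memory is handed to guests."""
    def __init__(self, total_memory_mb):
        self.total_memory_mb = total_memory_mb
        self.guests = []

    def set_memory_mb(self, guest, new_mb):
        # Grow or shrink one guest without disturbing the others, as long as
        # the host has enough physical memory to cover the new total.
        other_mem = sum(g.memory_mb for g in self.guests if g is not guest)
        if other_mem + new_mb > self.total_memory_mb:
            raise ValueError("not enough physical memory on this host")
        guest.memory_mb = new_mb

host = HostPool(total_memory_mb=16384)
web = GuestConfig("web", vcpus=2, memory_mb=2048)
db = GuestConfig("db", vcpus=4, memory_mb=8192)
host.guests.extend([web, db])
host.set_memory_mb(db, 10240)     # give the database guest more memory in place
print(db.memory_mb)               # 10240
```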


1.5 SCALABLE SERVER CONSOLIDATION

Converting a hundred physical servers into a hundred virtual machines running on a dozen or so physical hosts will reduce your facilities costs, and may reduce management costs as well if the project includes logical or physical consolidation, but you will still have a hundred operating systems to manage separately. Similarly, replacing a hundred physical servers with a hundred blades in a single rack will reduce the amount of floor space you need and enable you to use the blade vendor's tools to simplify management, but it won't alter the fundamental fact that there are still a hundred servers to be looked after.

To maximize the ongoing benefits of server consolidation you really need to consider data center automation tools as part of the project, to address day-to-day management issues and also to help deliver wider benefits such as more effective disaster recovery, improved system availability and higher peak load capacity. The key benefit of virtual machines and, to a lesser extent, blade servers is that they separate the operating system from the underlying hardware. The hardware then becomes a resource pool that can be assigned to particular applications as needed. A new application draws resources from the pool, and retiring an obsolete application returns resources to the pool. Having a hardware pool makes it much easier to tackle issues like system availability and peak load capacity. The pool approach can also be extended to include a disaster recovery centre, where the available machines become another pool of resources that can be assigned to the transplanted applications.

If you only have a small number of physical servers running, say, 10-15 virtual machines, it is quite feasible to manage the entire system using the tools provided by the virtual machine software vendor, combined with a few custom-designed scripts to copy virtual machines from one physical server to another, and Microsoft's SysPrep tool to customize duplicated Windows images. Experience shows that this manual approach frequently breaks down if the initial small-scale system is successful and a decision is taken to expand the deployment. Adding more host and guest machines, especially if some of them are sited remotely, significantly increases the management effort.

What is needed is a single management system that permits monitoring and control of the physical servers, the host operating system and the virtual machines, i.e. logical consolidation of the virtual machine environment. If, however, the consolidated servers belong to different divisions, it is often necessary to leave server control with the divisional IT team. At the same time it is highly desirable to manage the underlying hardware, backup, spare capacity and disaster recovery centrally. This implies that the management system needs to offer some form of role-based access control that regulates access according to the user's role. This is no different from the user management systems in mainframes, which have traditionally permitted sharing between departments.

Frequent system configuration changes may also strain a manual system of control to breaking point.
Many training, QA, support and development situations require several system changes a day, and it may be necessary to co-ordinate the changes across hundreds of machines at a time. At this point automation tools become essential. Training, QA, support and development teams would often like to archive thousands of system images so that a particular image can be built once and reused many times. This requires a sophisticated image cataloguing system. But image cataloguing need not be limited to individual servers. By storing both the images and the interconnection information for a network of servers it is possible to restore a completely preconfigured system in minutes.
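A sketch of what such an image catalogue might look like follows. Everything here (the field names, the JSON layout, the idea of a "restore plan") is invented for illustration; the essential point is that both the per-server images and the wiring between servers are stored together, so a whole environment can be recreated on demand.

```python
# Hypothetical image catalogue: stores individual disk images plus the
# interconnection information for a group of servers, so a complete
# preconfigured network can be checked out again later.

import json

class ImageCatalogue:
    def __init__(self):
        self.images = {}    # image_id -> path to the stored disk image
        self.systems = {}   # system_name -> which image each VM boots, plus wiring

    def store_image(self, image_id, path):
        self.images[image_id] = path

    def store_system(self, name, vm_images, links):
        """vm_images: image used by each VM; links: virtual network wiring."""
        self.systems[name] = {"vms": vm_images, "links": links}

    def restore_system(self, name):
        system = self.systems[name]
        plan = {vm: self.images[img] for vm, img in system["vms"].items()}
        # A real tool would now copy the images to hosts and recreate the
        # virtual networks; here we just return the restore plan.
        return json.dumps({"boot": plan, "wire": system["links"]}, indent=2)

cat = ImageCatalogue()
cat.store_image("win2k-sp4-base", "/images/win2k-sp4-base.img")
cat.store_image("sql-training", "/images/sql-training.img")
cat.store_system("training-lab",
                 {"client1": "win2k-sp4-base", "dbserver": "sql-training"},
                 [("client1", "dbserver", "vlan10")])
print(cat.restore_system("training-lab"))
```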


This idea of storing entire systems can be applied to disaster recovery, so at regular intervals the key line-of-business servers are backed up to tape. Recreating the entire data center in a disaster recovery facility then becomes a matter of loading virtual machine software onto the disaster recovery servers and copying the images from the tape in order to start the virtual machines.

Data center automation tools can also help to improve system reliability and peak load capability. System reliability is a key issue because consolidation increases the business impact of a failure in a host server or a blade rack. Instead of affecting a single system, a dozen or so systems may need to be recovered simultaneously. Data center automation tools can track the health of guest and host systems and automatically move the guest systems to another host if the primary host crashes. They can also provide information on available capacity system-wide, so it can be assigned when and where required.
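The failover behaviour described here can be sketched in a few lines. The host names, capacity figures and the "count the guests" notion of capacity are all invented for illustration; a real data center automation tool would weigh CPU, memory and storage, and would restart the guests from shared or replicated images.

```python
# Sketch of automatic guest migration: when a host fails, its guests are
# restarted on whichever surviving host still has spare capacity.

hosts = {
    "host-a": {"healthy": True, "capacity": 12, "guests": ["erp", "mail", "web1"]},
    "host-b": {"healthy": True, "capacity": 12, "guests": ["web2"]},
}

def fail_over(failed_name):
    failed = hosts[failed_name]
    failed["healthy"] = False
    orphans, failed["guests"] = failed["guests"], []
    for guest in orphans:
        # Pick any healthy host with free capacity and restart the guest there.
        target = next(name for name, h in hosts.items()
                      if h["healthy"] and len(h["guests"]) < h["capacity"])
        hosts[target]["guests"].append(guest)
        print(f"restarted {guest} on {target}")

fail_over("host-a")
print(hosts["host-b"]["guests"])   # ['web2', 'erp', 'mail', 'web1']
```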

1.6 FUTURE OF SERVER CONSOLIDATION

64-bit processors from Intel and AMD

Intel's Itanium and AMD's Opteron are designed to compete head to head with Sun's 64-bit SPARC processors. Moving to a larger word size increases the amount of memory that a single processor can address and allows for a more powerful instruction set. There are 64-bit versions of both Microsoft Windows .NET Server and Linux.

While the Intel Itanium processor is not backwards compatible with the current 32-bit Intel processors, the AMD offering is compatible and therefore offers an easier migration path. At present, however, AMD has a tiny share of the market for processors used in servers, and major server vendors have yet to announce any forthcoming Opteron models.

The move by Intel and AMD to 64-bit processors will have two effects on server consolidation. Firstly, it will make the virtual machine approach more attractive by enabling more guest machines per host and allowing all guest machines to have access to a greater peak load capacity. Secondly, servers using these processors will take market share away from Sun, and companies will have less reason to operate a mixed computing environment. Once again this will make server consolidation easier.

Autonomic computing

The term autonomic computing is intended to present an analogy with the human body's autonomic nervous system - the system which controls essential processes like blood flow and breathing without requiring any conscious recognition or effort. IBM describes an autonomic computing system as one that possesses eight key elements:

It must “know itself” and comprise components that also possess a system identity.

It must configure and reconfigure itself under varying and unpredictable conditions.

It must always look for ways to optimize its workings.

It must be able to recover from routine and extraordinary events that may cause some of its parts to malfunction.


It must be an expert in self-protection.

It must know its environment and the context surrounding its activity, and act accordingly.

It must function in a heterogeneous world and implement open standards.

It must anticipate the optimized resources needed while keeping its complexity hidden.

Clearly all of this is some way off, though IBM and other companies are investing real money in autonomic computing research. Nevertheless, better tools to supplement the current manual approach to server and software management represent a small step along the road to autonomic computing. Such tools have been available for some time, but companies have, by and large, not deployed them because there has not been strong pressure to improve data center efficiency. The current recession has changed that, and faced with a hiring freeze, IT directors are turning to data center automation tools to make their current teams much more efficient.

Multi-core chip offerings

The future of multi-core processors: Multi-core processors are among the key technologies that enable server consolidation.

Intel launches dual-core Xeon processor: The world's largest chip maker announced the release of its first dual-core processor, which will feature hyper-threading and run at 2.8 GHz.

Power-saving technologies in the data center: With data centers exceeding their power capacity, many are looking at new technologies to keep energy demand from spiraling out of control. IT pros are exploring virtualization, multi-core chips and DC-powered equipment to fight the problem.

Protecting virtual servers gets smarter: Backing up VMware virtual machines has been a clunky process to date, but a handful of new products coming in February 2006 were slated to address this issue.

Consolidation sparks mainframe revival: In February of last year, two deals with German firms, one for the implementation of 20 z990s, reflected how the economic demand to consolidate servers was driving sales of IBM's premier mainframe.
