STORAGE AREA NETWORK

Competitive Brief: Cisco vs. Brocade Director Architecture in FICON Environments

Reviews the Brocade 48000 Director architecture and refutes erroneous claims from Cisco about its architecture and performance; includes a brief tutorial on FICON performance measurement and metrics.

CONTENTS

INTRODUCTION

BROCADE 48000 ARCHITECTURE AND ASICs
   Brocade Uses Shared Memory Architecture
   Cisco Chose Crossbar Architecture
   Local Switching in Brocade Architecture
   Performance Impact of CP Failure Modes

MAINFRAME I/O AND STORAGE PERFORMANCE
   Examples
   Channel Performance

FICON PERFORMANCE TECHNOLOGY
   Channel Path Activity Report
   FICON Director Activity Report

CONCLUSION

REFERENCES


INTRODUCTION

Cisco has made a number of claims about the Brocade 48000 Director architecture and how it may impact performance in a FICON environment during corner-case failure conditions. These claims have little or no technical validity and can be classified as marketing FUD (“Fear, Uncertainty, and Doubt”). Their intent appears to be to deflect attention from Cisco’s own architectural shortcomings.

This paper reviews the Brocade 48000 architecture, refutes the erroneous claims, and provides a brief tutorial on FICON performance measurement. It includes the results from an independent performance study for reference. The intention is to give customers enough information about Brocade director characteristics to reach technically valid conclusions about the Cisco claims. It includes the following sections:

• Brocade 48000 Architecture and ASICs

• Mainframe I/O and Storage Performance

• FICON Performance Technology

• Conclusion

• References

BROCADE 48000 ARCHITECTURE AND ASICs

All directors need internal connectivity between line cards or port modules. Directors have port blades with chips that provide outward-facing connectivity and separate chips, usually on centralized blades, that provide bandwidth between blades. There are different ways to construct these chips; the two most popular current approaches are shared memory and crossbar technology.

High-speed Ethernet and Fibre Channel (FC) switches and directors use shared memory designs to achieve the highest performance. Shared memory switches are most often built using customized Application Specific Integrated Circuits (ASICs), which allow them to support advanced features in hardware rather than relying on slower, less optimal approaches. Crossbars are typically used to lower development costs, but at the sacrifice of performance and features.

Brocade Uses Shared Memory Architecture

The Brocade 48000 features an internal fabric of purpose-built Fibre Channel ASICs capable of switching at 256 Gbit/sec per chip (512 Gbit/sec full duplex). The 48000 is powered by a matrix of these "Condor" ASICs. Each chip has thirty-two 4 Gbit/sec Fibre Channel ports. The ASICs' ports can be combined into virtual interfaces, or "trunks," of up to 32 Gbit/sec each. It is also possible to balance I/O between multiple trunk groups to create a "pipe" of up to 256 Gbit/sec between different directors in a fabric. The shared memory architecture also keeps the Brocade 48000 free from Head-of-Line Blocking (HoLB), a performance issue that has traditionally plagued crossbar architectures.

The Condor ASIC allows the Brocade 48000 to perform as any of the following:

• Non-blocking, 1:1 subscribed, 128-port, 4 Gbit/sec FICON director

• Non-blocking, 16:8 subscribed, 256-port, 4 Gbit/sec FICON director

• Non-blocking, 24:8 subscribed, 384-port, 4 Gbit/sec FICON director


The backend, inter-ASIC connections on the 48000 use the same protocol frame format as the frontend chip ports, enabling the backend ports to avoid latency due to protocol conversion. Crossbar switches, in contrast, convert frames to a proprietary backend protocol, and then convert them back into Fibre Channel before they leave the switch. This double-protocol conversion is inherently inefficient.

It is important to note that the internal ASIC connections in a Brocade 48000 are not E_Ports connecting an internal network of switches. The entire director is a single domain and a single hop in a Fibre Channel network. When a port blade is removed, a fabric reconfiguration is not sent across the network. Backend switching ASIC connections use the same frame format as frontend ports, but because they are contained within a single switch, there is no need to run any of the higher layer (service) FC protocols across these connections.

When a frame enters the Condor ASIC, the address is immediately read from the header, which enables routing decisions to be made even before the whole frame has been received. This allows the ASICs to perform cut-through routing: a frame can begin transmission out of the correct destination port on the ASIC even before the initiating device has finished transmitting it. Only Brocade offers a FICON director that can make these types of decisions at the port level, enabling local switching (described below) and the ability to deliver 1 Tbit/sec of bandwidth on a 256-port 48000 switching FICON. Local ASIC latency is 0.8 microseconds and blade-to-blade latency is 2.4 microseconds, the lowest latency in the industry. As a result, the Brocade 48000 has the lowest delay and highest performance of any Fibre Channel director product in the industry.
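To make the latency difference concrete, the following minimal Python sketch compares store-and-forward switching, which must buffer an entire frame before forwarding it, with the cut-through approach described above. The 0.8-microsecond local switching figure comes from this paper; the frame size and effective data rate are round-number assumptions for illustration only, not vendor specifications.

```python
# Illustrative only: store-and-forward vs. cut-through port-to-port latency.
# The 0.8 us local-switching latency comes from this paper; frame size and
# effective data rate are round-number assumptions.

FRAME_BYTES = 2148                   # approx. max Fibre Channel frame size
LINK_BYTES_PER_SEC = 400e6           # ~4 Gbit/sec effective data rate

serialization_us = FRAME_BYTES / LINK_BYTES_PER_SEC * 1e6

store_and_forward_us = serialization_us + 0.8   # buffer whole frame, then switch
cut_through_us = 0.8                            # route as soon as header arrives

print(f"frame serialization:      {serialization_us:.1f} us")
print(f"store-and-forward switch: {store_and_forward_us:.1f} us")
print(f"cut-through switch:       {cut_through_us:.1f} us")
```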

Cisco Chose Crossbar Architecture

Cisco chose to use the older crossbar architecture for the MDS 9513 series directors in order to reduce their development costs, and this impacts performance. Cisco often refers to its MDS 9513 as a “528-port FICON director.”

FACT CHECK: From the point of view of performance, each of the MDS 9513 line cards is limited to delivering 48 Gbit/sec of full duplex bandwidth to the midplane and crossbar. A maximum of 12 ports per line card can operate at 4 Gbit/sec. Regardless of whether the Cisco 12-, 24-, or 48-port line cards are used, a maximum of 132 ports (oversubscribed or not) can switch simultaneously at 4 Gbit/sec on a Cisco MDS 9513 director.

In contrast to the Cisco MDS 9513, each blade in the Brocade 48000 Director can deliver 64 Gbit/sec of non-blocking bandwidth to other slots (33 percent more than the MDS 9513 can deliver between slots). For customers who take even partial advantage of local switching (below), the 48000 can deliver full-speed, full-duplex 4 Gbit/sec performance on all 384 ports at the same time. This is approximately three times the performance of the Cisco director.
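The slot-bandwidth comparison reduces to simple arithmetic, sketched below in Python using only the figures quoted in this paper; the 11-payload-slot count is implied by the 132-port limit at 12 ports per card.

```python
# Sanity check of the per-slot figures quoted above (all from this paper).

MDS9513_SLOT_GBIT = 48    # Gbit/sec per line card to the crossbar
BROCADE_SLOT_GBIT = 64    # Gbit/sec per 48000 blade to other slots
PORT_SPEED_GBIT = 4

advantage = (BROCADE_SLOT_GBIT - MDS9513_SLOT_GBIT) / MDS9513_SLOT_GBIT
print(f"Brocade per-slot advantage: {advantage:.0%}")          # ~33%

# Ports per MDS 9513 line card that can run at full 4 Gbit/sec line rate:
print(MDS9513_SLOT_GBIT // PORT_SPEED_GBIT, "ports per card")  # 12
print(11 * 12, "ports chassis-wide")   # 132 across 11 payload slots (implied)
```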

In conventional LAN and WAN networks, the network is composed of multiple switches and routers wired in a mesh topology. With multiple links connecting groups of switches and routers and routing protocols to determine optimum paths through the network, the network can withstand an outage of an individual link or switch and still deliver data from source to destination. This network-centric approach assumes that all connected end devices are peers and the role of the network is simply to provide any-to-any connectivity between peer devices.


Local Switching in Brocade Architecture

In addition to having more backplane bandwidth per slot, Brocade can deliver 4 Gbit/sec bandwidth per port even on an oversubscribed blade through a process called “local switching.”

In the Brocade 48000, each port blade ASIC exposes some ports for user connectivity, while its other ports connect to the backplane. If the destination port is on the same ASIC as the source, the director can switch the traffic without it ever leaving the blade. On the Brocade 16- and 32-port blades, local switching is performed within 16-port groups. On the 48-port blade, traffic can be localized in 24-port groups.

Even when a blade is oversubscribed, localized traffic does not use the oversubscribed backplane resource. Because local switching connections consume no backplane bandwidth, local flows do not count against the subscription ratio and cannot be impacted by traffic from other devices; removing them from the ratio also improves performance for the remaining, non-localized traffic.

Regardless of the number of devices communicating over the backplane, locally switched devices are guaranteed 4 Gbit/sec bandwidth. This enables every port on a Brocade 48000 high-density blade to communicate at a full 4 Gbit/sec speed with port-to-port latency of just 800 nanoseconds, about 25 times faster than the MDS 9513. This is an important feature for high-density/high-performance mainframe FICON environments, because it allows oversubscribed blades to achieve full non-congested line rate performance. The MDS 9513 director from Cisco does not allow local switching—traffic must traverse the backplane/crossbar even when traveling to a neighboring port on a port card—a characteristic that ultimately degrades performance.

This means that a 9513 has a maximum of 528 Gbit/sec of chassis bandwidth, versus the 48000's 1 Tbit/sec at 256 ports for FICON and 1.5 Tbit/sec at 384 ports for Open Systems. The dramatic difference is the result of Cisco's limited crossbar architecture, while Brocade uses a shared memory design.

For example, on a Brocade 48000, traffic from a FICON Express 4 Channel Path Identifier (CHPID) on a System z9 can ingress on a node port and egress to its Direct Access Storage Device (DASD) array out another port in the same 16-port group on a port card. This traffic would not cross the backplane, would not congest under any conditions, and would move from source to destination port in 800 nanoseconds. Contrast that with the Cisco MDS 9513 architecture, which has no similar capability due to its crossbar design. Even between neighboring ports, a Cisco director must use valuable slot and crossbar bandwidth and switches frames more than an order of magnitude more slowly.
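The port-group rule behind this example can be expressed in a few lines. The sketch below uses a simple blade-relative port numbering as an assumption purely for illustration; it is not the actual 48000 port map.

```python
# A minimal sketch of the local-switching test described above: traffic can
# be switched locally when source and destination fall in the same ASIC
# port group (16 ports on the 16- and 32-port blades, 24 on the 48-port
# blade). The blade-relative port numbering is a simplifying assumption.

def is_local(src_port: int, dst_port: int, group_size: int = 16) -> bool:
    """True if both ports share a local-switching port group."""
    return src_port // group_size == dst_port // group_size

# CHPID on port 3, DASD array on port 12 of the same 32-port blade:
print(is_local(3, 12))    # True  -> 800 ns, never touches the backplane
print(is_local(3, 20))    # False -> must cross the backplane
```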

FACT CHECK: Cisco’s claim that taking advantage of local switching involves difficult planning is without merit. Many mainframe customers carefully plan their CHPID connectivity using tools such as the IBM CHPID mapping tool and welcome exchanging planning time for improved performance. Cisco’s concerns about planning and change control are understandable, considering their history in the IP networking space. Fortunately, these are not issues for mainframe environments.

Figure 1 is an illustration of the Brocade 48000 32-port blade design. Figure 2 shows how the blade positions in a Brocade 48000 are connected to each other in a 256-port configuration.


Figure 1. Brocade 48000 32-port blade design

Figure 2. Overview of a 256-port configuration


Performance Impact of CP Failure Modes

Any type of failure on the Brocade 48000 director, whether of a Control Processor (CP) or a core ASIC, is extremely rare. According to reliability statistics from Brocade OEM partners and customers, Brocade 48000 CPs have a Mean Time Between Replacement (MTBR) of 337,000 hours (more than 38 years) based on real-world field performance. However, even in the extremely rare occurrence of a failure, the Brocade 48000 is designed for fast and easy CP replacement.

The Brocade 48000 has two CP blades, each of which contains a CPU complex and a group of ASICs. The ASICs provide the core switching capacity between port groups for traffic switched over the backplane (non-locally switched). The CP functions are active-passive (hot standby) redundant while the switching functions of the core ASICs are active-active. The CP that has the active processor is known as the “active CP blade,” but both active and standby CPs have active core ASIC elements. The ASICs and CPU blocks are separated in both hardware and software except for a common DC power source. Figure 3 illustrates the Brocade 48000 CP blade design.

Figure 3. Brocade 48000 CP blade design

If the processor section of the active CP blade fails, only the management plane is affected: the core ASICs are functionally separate and continue switching frames without interruption. It is possible for a control processor block to fail completely while the core ASICs continue to operate without performance degradation. A CP failure has no effect on the data plane; the standby CP automatically takes over and the switch continues to operate without dropping any data frames. Only during the short service procedure in which the CP is physically replaced is there a temporary degradation of backplane bandwidth.

In most real-world cases, even during the short service procedure, application performance is not degraded. For example, the procedure does not affect locally switched traffic, and as long as the traffic that must cross the backplane is less than the remaining system-wide capacity, no congestion occurs. Given the very high Mean Time Between Failures (MTBF)/MTBR of the blade and the fact that the outage can and should be scheduled during a time favorable to host operations, this characteristic does not have a noticeable effect in a real-world FICON environment.

If a core element ASIC failure occurs, the potential impact to overall system bandwidth is straightforward. If half of the core elements go offline due to a hardware failure, half of the aggregate backplane switching capacity would be unavailable until the condition is corrected. A Brocade 48000 Director with just one core element can still provide 256 Gbit/sec (512 Gbit/sec full duplex) of backplane switching bandwidth, or 32 Gbit/sec (64 Gbit/sec full duplex) to every director slot.

NOTE: A core element failure has no impact on local switching bandwidth. In best-case scenarios, the Cisco MDS 9513 has only 48 Gbit/sec (96 Gbit/sec full duplex) of bandwidth per slot. In contrast, if only one quarter of the traffic on a 32-port Brocade blade were localized, that blade could provide 48 Gbit/sec even in its most degraded mode of operation. In that scenario, worst-case Brocade performance during a failure would be as good as best-case performance for a Cisco director.
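The arithmetic behind this NOTE can be sketched as follows. The model is a sketch under stated assumptions: the blade is offered 64 Gbit/sec of traffic (its normal backplane allotment), localized flows bypass the backplane entirely, and one failed core element leaves 32 Gbit/sec per slot.

```python
# Worked version of the NOTE's arithmetic, under the assumptions above.

def delivered_gbit(offered: float, local_fraction: float,
                   backplane_gbit: float) -> float:
    local = offered * local_fraction                       # bypasses backplane
    crossing = min(offered * (1 - local_fraction), backplane_gbit)
    return local + crossing

# One quarter of traffic localized, degraded backplane (one core element):
print(delivered_gbit(64, 0.25, 32))   # 48.0 -> matches the 9513's best case/slot
```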

Data flows would not necessarily be congested in the Brocade 48000 with one core element failure. The worst case is that data flows might become congested, but this would require that the director already be running at 50 percent of backplane capacity on a sustained basis. On systems with typical mainframe I/O patterns, the aggregate usage of the available backplane would not even reach 50 percent. In such environments there would be no impact, even if the problem persisted for an extended time period. Very few Brocade FICON environments (if any) have all ports running at 4 Gbit/sec with a 100 percent load on all data flows, and at the same time use no local switching for any data flows. The scenario used by Cisco is simply not realistic, even by conservative analysis methods.

FACT CHECK: When the actual MDS 9513 architecture, its traffic flows, and its frame-processing behavior are compared with those of the Brocade 48000, it becomes clear that the 48000 architecture addresses performance far more effectively.

MAINFRAME I/O AND STORAGE PERFORMANCE

While IP network performance may be determined by bandwidth alone, in a mainframe storage environment, bandwidth is not a reliable measure of performance.

IBM has employed three primary schemas of mainframe connectivity over the 43 years since the introduction of the S/360. These schemas are parallel channels, ESCON, and FICON. Although the data rate increased significantly from the first 1.2 MB/sec gray cables to the 4.5 MB/sec blue cables on the 308x series of processors, all of the parallel channel implementations were restricted to a maximum distance of 400 feet. While they could be "daisy-chained," parallel channels were dedicated point-to-point, that is, direct-attached, connections. The capability to do storage networking via switched connectivity did not arrive until ESCON was introduced in 1990.

Installations that included ESCON directors typically had some variation of the configurations illustrated in Figures 4 and 5. These types of configurations are called "4-way" or "8-way" pathing. Logical Control Units (LCUs) on a DASD array are accessed via multiple paths through multiple ESCON directors from the host. This was done as part of the planning process to guarantee high availability and performance in the event of issues with cabling, channel cards, ESCON directors, or anything else in the path from IOS to platter on the DASD array. The vast majority of mainframe installations continue to plan and architect their FICON environments in the same fashion today.

Examples

Using the 4-way pathing example, suppose a CP on one of the Brocade 48000 Directors failed. If there were no local switching, the total bandwidth going to a given LCU would be reduced by at most 1/8 (or 12.5 percent). Similarly, if 8-way pathing were used, the total bandwidth going to a given LCU would be reduced by at most 1/16 (or 6.25 percent). Assuming that the installation made use of the technology it paid for and used the Brocade 48000 local switching capability, those CP failure bandwidth reduction figures would be further reduced by 50 percent or more. (For example, the 6.25 percent reduction would become a 3.12 percent reduction or less.) And that assumes that the mainframe had been pushing the 48000s at or near their limits in the first place, a scenario essentially unheard of in the real world. If the mainframe were driving the 48000s at 90 percent of their capacity, a "reduction" of 3 percent, or even 6 percent, would produce exactly 0 percent reduction in application performance.
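The pathing arithmetic follows directly. Below is a minimal sketch, assuming each path runs through a different director and that a CP failure halves only the affected director's backplane bandwidth; it is an illustration, not a sizing tool.

```python
# Worked version of the pathing arithmetic above, under the stated assumptions.

def lcu_bandwidth_loss(n_paths: int, local_fraction: float = 0.0) -> float:
    path_share = 1.0 / n_paths        # share of LCU bandwidth on the hit path
    backplane_loss = 0.5              # one of two core elements offline
    return path_share * backplane_loss * (1.0 - local_fraction)

print(f"{lcu_bandwidth_loss(4):.2%}")                      # 12.50%
print(f"{lcu_bandwidth_loss(8):.2%}")                      # 6.25%
print(f"{lcu_bandwidth_loss(8, local_fraction=0.5):.2%}")  # 3.12% or less
```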

Figure 4. Four-way pathing design


Figure 5. Eight-way pathing design

Channel Performance

Even more damaging to Cisco’s argument is the fact that network bandwidth alone is rarely the best metric for determining application performance in mainframe environments. Otherwise, installations that migrated from ESCON to FICON should have seen much greater improvements in response and/or service times.

Take the example of ESCON and FICON Express 4, using theoretical maximum bandwidth numbers for ease of math. FICON Express 4 at 400 MB/sec has 20 times the bandwidth of ESCON at 20 MB/sec. Statistical sampling of ESCON environments during 2004-2005 showed response times in well-tuned DASD environments in the neighborhood of 3 milliseconds. Now move forward to the year 2007. Based on Cisco's bandwidth and performance theory, moving from ESCON to FICON Express 4 should reduce response time by a factor of 20, since the bandwidth improved by a factor of 20. In other words, you would see DASD response times of 0.15 milliseconds once the migration to FICON was complete.

The facts do not support this theory. Of course, since ESCON is a different protocol from FICON with different characteristics, a bandwidth comparison such as the one described above is not, strictly speaking, an accurate comparison. But a cleaner comparison can be made. Consider installations that had FICON Express 2 and bought one or more new System z9 EC servers with FICON Express 4 channel cards, new DASD arrays, and FICON directors capable of running 4 Gbit/sec. Did they see their DASD response times cut by 50 percent because they doubled the bandwidth of their FICON environment? No, they did not.

In the next section, the basics of DASD performance metrics in a FICON environment are reviewed. (For more detail, consult the references listed at the end of this paper.) This is followed by a discussion of the results of a study correlating factors to response/service time spikes.


FICON PERFORMANCE TECHNOLOGY

PEND (pending) time is defined as the time between the acceptance of a Start Subchannel (SSCH) command by the channel subsystem and the receipt of the initial status (CMR) from the storage subsystem. This is the time from the issue of the I/O by z/OS until it is accepted by the DASD subsystem. PEND time includes one round trip over the link, as well as the internal response time inherent in the DASD subsystem (the time required to present initial status for the prefix Channel Command Word (CCW)).

CONN (Connect) time is defined as the time it takes to transfer data to be read or written plus the propagation time from the subsystem to the channel. CONN time has three major components:

• Data transfer time: the time for the actual payload to be transferred at the ESCON or FICON channel speed.

• Command transfer time: the protocol time required to transfer the CCWs and have them processed by the DASD subsystem.

• Elongation: queuing time that may occur when multiple data transfers (I/Os) are running on a single FICON channel. Recall that unlike an ESCON channel, which is capable of only a single I/O operation at a time, a FICON channel is capable of multiple concurrent operations sharing the channel's bandwidth. This sharing implies that when many operations are running simultaneously, each individual I/O gets only a portion of the available bandwidth.

DISC (Disconnect) time is the wait time after the I/O has been accepted by the DASD subsystem and before the actual data transfer can be started (reads) or acknowledged (writes). There are three primary causes of Disconnect time:

• Copying information to a secondary controller for synchronous copy (SRDF, PPRC, and TrueCopy).

• Waiting for an internal resource to be available (hot spots in the back-end of the DASD array).

• Cache read misses.

IOSQ (I/O Supervisor Queue time) is defined as the time that z/OS has to wait for a logical volume to be available. IOSQ delays typically occur whenever z/OS already has an active I/O on the DASD subsystem for a particular logical volume. Use of parallel access volumes (PAVs) or HyperPAVs can significantly reduce IOSQ time.

Response time = IOSQ + PEND + DISC + CONN

Response time is the metric that a mainframe installation’s performance analysts, capacity planners, and management will be most concerned about; and this is what Service Level Agreements (SLAs) are based on. No single component of response time is meaningful in isolation; it is the combined metric that is relevant to application performance, and thus to end user experience of the system.
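As an illustration of the decomposition, the sketch below totals the four components for one hypothetical I/O; the component values are invented for the example, not measurements.

```python
# Illustration of: response time = IOSQ + PEND + DISC + CONN.
# Component values (milliseconds) are invented for the example.

components_ms = {
    "IOSQ": 0.1,   # waiting for the volume (reduced by PAVs/HyperPAVs)
    "PEND": 0.2,   # SSCH accepted until initial status (CMR) received
    "DISC": 1.5,   # e.g., a cache read miss waiting on the back end
    "CONN": 0.7,   # data and command transfer, plus any elongation
}

response_ms = sum(components_ms.values())
print(f"response time = {response_ms:.1f} ms")
for name, ms in components_ms.items():
    print(f"  {name}: {ms / response_ms:.0%} of total")
```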


Channel Path Activity Report

Two primary RMF reports are of interest for FICON. The first of these is the RMF 73 record known as the “Channel Path Activity” report. See Figure 6 below for a sample of this report.

Figure 6. Channel Path Activity report

For a FICON channel, three subfields are listed under UTILIZATION (%):

• PART denotes the FICON processor utilization due to this Logical Partition (LPAR).

• TOTAL denotes the FICON processor utilization for the sum of all LPARS.

• BUS denotes the FICON channel card’s internal bus utilization for the sum of all LPARs. This is the measured utilization of the PCI bus on the channel card over which all data is transferred.

PART and TOTAL are sometimes referred to as "channel busy," and BUS as "bus busy." Channel busy is an indication of the measured utilization of the microprocessor (that is, the channel). The FICON processor is busy for channel program processing, which includes the processing of all individual CCWs contained in the channel program, as well as some setup activity at the beginning of the channel program and cleanup at the end. The FICON bus is busy for the actual transfer of command and data frames from the FICON channel to the FICON channel adapter card. This card on the host is what is connected via the FICON link to the director or device control unit.

In their March 2004 paper on FICON channel path metrics, Dr. Pat Artis and Robert Ross performed an extensive correlation analysis study to explore the relationship between channel busy and I/O rate, as well as the relationship between bus busy and channel MB/sec. The study concluded that there is a strong relationship between I/O rate and channel busy, and between bus busy and channel MB/sec; the correlation coefficient for each was above 0.93 (1.0 is the maximum), indicating a very strong positive correlation. Also, the small block sizes (4 KB) typical of DASD Online Transaction Processing (OLTP) workloads drive the channel utilization/channel busy metric higher, while the large block sizes typical of tape or batch jobs drive the bus busy metric higher.
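For readers who want to reproduce this kind of analysis against their own RMF data, the sketch below shows the general shape of such a correlation check; it runs on synthetic data for illustration, whereas the study itself used measured channel records.

```python
# The general shape of the Artis/Ross correlation check, on synthetic data.

import numpy as np

rng = np.random.default_rng(seed=0)
io_rate = rng.uniform(200, 1800, size=100)              # SSCHs/sec per channel
channel_busy = 0.04 * io_rate + rng.normal(0, 3, 100)   # mostly rate-driven

r = np.corrcoef(io_rate, channel_busy)[0, 1]
print(f"correlation coefficient: {r:.2f}")   # near 1.0, as the study found
```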

In her January 2007 IBM white paper on IBM System z9 I/O and FICON Express4 channel performance, Cathy Cronin states that “small block 4k bytes/IO typical of an OLTP (on-line transaction processing) workload would not be expected to see any perceptible difference based on link speeds alone.” Cronin goes on to say that “Since changing the link speed has no effect on the number of small block I/Os (4k bytes per I/O) that can be processed, the maximum number of I/Os per second that was measured on a FICON Express4 channel running an I/O driver benchmark with a 4k bytes/IO workload is approximately 13000, which is the same as what was measured with a FICON Express 2 channel.”

Artis and Ross, in the paper cited earlier, took their study a step further. A puzzling phenomenon was occurring in some mainframe installations: several of them were seeing substantial jumps in service time just after the per-channel I/O rate exceeded 1750 SSCHs/sec. Service time was employed in their study rather than response time since: a) the experiments were designed to preclude device queue delays (IOSQ time), and b) FICON channels are not directly intended to address IOSQ time issues. After extensive analysis of the data, including further correlation study, Artis and Ross concluded that the channel busy and bus busy metrics did not have a significant correlation (defined as a correlation coefficient >0.90) to the increases in service time being measured. The relationship proved much more complex: the increases in service time were directly correlated to open exchanges. The authors went on to derive a formula for calculating the number of open exchanges (a quantity not currently counted or calculated in RMF).

An open exchange is an I/O exchange that is active between the channel and the control unit. It includes I/Os that are cache hits, which begin transferring data back to the channel immediately. It also includes cache misses, which can experience delays of 5 to 10 milliseconds before data can begin transferring back across the link. While more traffic (more I/Os) flowing through the FICON network increases the open exchange count, so does a low to moderate cache hit ratio. IBM has increased the open exchange limit on the System z9 from 32 to 64, but does not anticipate this being an issue: most real production workloads do not approach the previous limit of 32. The primary situation in which customers might exceed this limit involves running I/O driver programs to evaluate new systems.
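This paper does not reproduce the Artis and Ross formula, but a Little's-Law-style estimate captures the general idea: the average number of open exchanges equals the I/O arrival rate multiplied by the time each exchange remains open. The sketch below is an assumption about the general form, not the authors' exact derivation.

```python
# Little's-Law-style estimate of concurrent open exchanges (an assumption
# about the general form, not the published Artis/Ross formula).

def estimated_open_exchanges(ssch_per_sec: float, pend_ms: float,
                             disc_ms: float, conn_ms: float) -> float:
    open_time_sec = (pend_ms + disc_ms + conn_ms) / 1000.0
    return ssch_per_sec * open_time_sec

# 1750 SSCHs/sec with 2 ms of PEND + DISC + CONN per I/O:
print(f"{estimated_open_exchanges(1750, 0.2, 1.1, 0.7):.1f} open exchanges")
```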

FICON Director Activity Report

The second RMF report of significant interest in FICON environments is the RMF 74-7 record, more commonly known as the “FICON Director Activity Report.” See Figure 7 below for an example of this report.

Figure 7. FICON Director Activity report

This report shows per-port bandwidth, in both read and write MB/sec. If you were to look at many examples of this report for a variety of workloads, from both large and small environments, one common theme would emerge: even though FICON has evolved to speeds of 4 Gbit/sec, typical maximum real-world production bandwidth on a FICON Express 4 link is less than 100 MB/sec. IBM advises planning for 50 percent or less utilization.
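A trivial check of RMF 74-7 style readings against that guideline might look like the following sketch; the port names and throughput figures are invented for illustration.

```python
# Flag FICON links whose measured throughput exceeds the 50 percent
# planning guideline above. Port names and MB/sec readings are invented.

LINK_CAPACITY_MBS = 400.0   # FICON Express 4 theoretical maximum

ports_mbs = {"port_04": 62.0, "port_05": 95.0, "port_0C": 240.0}

for name, mbs in ports_mbs.items():
    util = mbs / LINK_CAPACITY_MBS
    flag = "  <-- exceeds 50% guideline" if util > 0.5 else ""
    print(f"{name}: {mbs:6.1f} MB/sec ({util:.0%}){flag}")
```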


CONCLUSION

Mainframe I/O performance is about much more than bandwidth and oversubscription. This paper summarizes key points that Cisco ignores in its marketing collateral and presentations. Even if the Cisco claims about bandwidth and its impact on performance had merit, the planning methodology that mainframe professionals have used since the ESCON era ensures that failures have minimal impact. A properly designed Brocade FICON environment would make use of local switching, a capability Cisco cannot provide to customers.

Even with a CP failure, a Brocade 48000 with even minimal use of local switching matches or exceeds the performance of a Cisco MDS 9513 FICON director. The local switching capabilities of the Brocade 48000 are unique and allow it to be the lowest-latency/highest-performing FICON director on the market.

REFERENCES

• Artis, H. Pat and Ross, Robert. Understanding FICON Channel Path Metrics. Performance Associates. 2003.

• Artis, H. Pat and Houtekamer, Gilbert. MVS I/O Subsystems. McGraw-Hill. 1993.

• Beretvas, Thomas. FICON Channel Performance. SHARE Proceedings. August 2006.

• Cassier, Pierre and Korhonen, Raimo. Effective zSeries Performance Monitoring Using Resource Measurement Facility. IBM Redbooks. April 2005.

• Cisco Systems. A Day in the Life of a Fibre Channel Frame: Cisco MDS 9000 Family Switch Architecture. 2006.

• Cronin, Cathy. FICON and FICON Express Channel Performance, Version 1.0. IBM Corporation. February 2002.

• Cronin, Cathy and Basener, Richard. FICON and FICON Express Channel Performance, Version 2.0. IBM Corporation. November 2003.

• Cronin, Cathy. IBM System z9 I/O and FICON Express4 Channel Performance. IBM Corporation. January 2007.

• White, Bill and Neville, Iain. FICON Implementation Guide. IBM Redbooks. January 2006.

© 2007 Brocade Communications Systems, Inc. All Rights Reserved. 07/07 GA-CB-018-01

Brocade, the Brocade B-weave logo, Fabric OS, File Lifecycle Manager, MyView, Secure Fabric OS, SilkWorm, and StorageX are registered trademarks and the Brocade B-wing symbol and Tapestry are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. FICON is a registered trademark of IBM Corporation in the U.S. and other countries. All other brands, products, or service names are or may be trademarks or service marks of, and are used to identify, products or services of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.
