docu58231
Best Practices
VPLEX™ and RecoverPoint™ Interoperability Implementation Planning and Best Practices
Abstract
This technical note is intended for Dell EMC field personnel, partners, and customers who will be configuring, installing, and supporting VPLEX with
RecoverPoint. An understanding of these technical notes requires a good
working knowledge of SAN technologies, LAN/WAN technologies, block storage,
VPLEX and RecoverPoint concepts and all the components that bring the
solution together.
June 2019
Revisions
Date Description
June 2019 Version 2
Acknowledgments
This paper was produced by the following:
Author: VPLEX CSE Team
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
This document may contain certain words that are not consistent with Dell's current language guidelines. Dell plans to update the document over subsequent future releases to revise these words accordingly.
This document may contain language from third party content that is not under Dell's control and is not consistent with Dell's current guidelines for Dell's own content. When such third party content is updated by the relevant third parties, this document will be revised accordingly.
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its
subsidiaries. Other trademarks may be trademarks of their respective owners. [3/16/2021] [Best Practices] [docu58231]
Table of contents
Revisions
Acknowledgments
Table of contents
Executive summary
1 RecoverPoint and VPLEX Installation Prerequisites
  1.1 Supported Topologies
    1.1.1 VPLEX Local
    1.1.2 VPLEX Metro
    1.1.3 MetroPoint
  1.2 VPLEX Configuration Supported by RecoverPoint
  1.3 Multiple Clusters Sharing a Single Splitter
  1.4 VAAI Support
  1.5 Splitter Scalability
  1.6 Fake Size
2 RecoverPoint and VPLEX Installation Best Practices
  2.1 VPLEX/RecoverPoint installation and integration
    2.1.1 Prerequisites
  2.2 Importing certificates into RecoverPoint and VPLEX
  2.3 Adding RecoverPoint Cluster to VPLEX
    2.3.1 Add the RecoverPoint cluster to VPLEX
    2.3.2 Registering RecoverPoint
    2.3.3 Creating RecoverPoint storage view
  2.4 Registering VPLEX Credentials in RecoverPoint
  2.5 Add the VPLEX Splitter
  2.6 Creating RecoverPoint Consistency Groups
  2.7 Creating VPLEX consistency groups
    2.7.1 Consistency groups for replication volumes
    2.7.2 VPLEX consistency group for journals and repository
    2.7.3 Enable RecoverPoint for each Consistency Group
    2.7.4 RecoverPoint and Host Storage Views
  2.8 RecoverPoint Cluster Management
    2.8.1 Limitations
3 RecoverPoint and VPLEX Connectivity Best Practices
  3.1 RPA Initiators
  3.2 Cabling
  3.3 Zoning
    3.3.1 VPLEX-to-storage zoning
    3.3.2 RPA-to-VPLEX front-end zoning
    3.3.3 RPA-to-VPLEX back-end zoning
    3.3.4 Validating RecoverPoint in VPLEX
4 RecoverPoint and VPLEX Best Practices
  4.1 VPLEX Scalability Limits
    4.1.1 MetroPoint Support
    4.1.2 Splitter interoperability
    4.1.3 RecoverPoint Journal
    4.1.4 Splitter to RPA Multipathing
    4.1.5 MetroPoint
  4.2 Storage Best Practices
  4.3 Host Connectivity Best Practices
  4.4 LAN Connectivity Best Practices
  4.5 Performance Best Practices
    4.5.1 IOPS and Throughput
    4.5.2 Added Response Time
    4.5.3 Deployment Considerations
  4.6 VPLEX Splitter Best Practices
  4.7 RecoverPoint Group Policies
    4.7.1 Primary RPA
    4.7.2 Priority
    4.7.3 Preferred Cluster
    4.7.4 Distribute group writes across multiple RPAs
    4.7.5 External Management
  4.8 RecoverPoint Link Policies
    4.8.1 Asynchronous
    4.8.2 Snap-based Replication
    4.8.3 Synchronous
    4.8.4 Dynamic by Latency
    4.8.5 Dynamic by Throughput
    4.8.6 RPO
    4.8.7 Compression
    4.8.8 Enable Deduplication
    4.8.9 Snapshot Granularity
    4.8.10 Test a Copy
5 RecoverPoint and VPLEX Failure Scenarios
  5.1 MetroPoint Disaster Scenarios
    5.1.1 VPLEX Fracture
  5.2 VPLEX Metro Link Failure
Executive summary
There are three primary use cases for VPLEX in the modern data center:
1. Providing continuous availability for mission-critical applications
2. Enabling data mobility for workload balancing
3. Accelerating storage array migration during technology refresh
Let’s address each use case in more detail.
Application Availability: VPLEX offers two deployment topologies, VPLEX Local and VPLEX Metro. VPLEX
Local creates local mirrors across arrays. VPLEX Metro creates remote mirrors of the data and gives
applications seamless access to the mirrored data during planned or unplanned downtime. Whether the
disruption is caused by a natural disaster such as a storm, a power outage, human error, or an unexpected
hardware failure, VPLEX keeps the business up by providing access to the data.
Workload Mobility (increases business agility): VPLEX enables you to move data from one array to another, or
from one storage tier to another. All of this movement happens non-disruptively while the application continues
servicing requests. This creates an agile, flexible infrastructure in which you can place data for the most
efficient resource utilization.
Storage Array Migration: When you perform a technology refresh to replace older storage arrays with newer or
all-flash (AF) arrays, VPLEX again migrates all the data non-disruptively while the application is online and
business operations run normally. VPLEX shortens the technology refresh process and speeds your time to
value, letting you plan a “just in time” purchase when the business need for data migration arises.
While VPLEX provides continuous availability, it is limited to VPLEX Local, which operates within a data center,
or VPLEX Metro, which spans data centers at synchronous distance. Additionally, VPLEX Metro provides only
straight replication; it has no rollback or snapshot capability within the product itself and relies on array-based
replication for that level of data protection. The challenge comes when restoring data from a copy back to the
source volume: VPLEX is not integrated with array copy or replication solutions outside its control. The benefits
VPLEX provides to applications are also lost on array-based copy/replication technologies; for example, array
replication must be reestablished and resynchronized after an array technology refresh.
RecoverPoint offers a unique integrated solution with VPLEX that overcomes these array-based
copy/replication shortcomings. The VPLEX and RecoverPoint engineering teams made a major joint effort to
integrate the two products into unique and powerful solutions for today’s modern data centers.

RecoverPoint and VPLEX each offer the other benefits that neither product can provide on its own. Because
the RecoverPoint configuration resides above VPLEX, VPLEX extends to RecoverPoint the same benefits it
provides to applications. In turn, RecoverPoint provides data protection and rollback capabilities to the VPLEX
data volumes for both VPLEX Local and VPLEX Metro; in the Metro case, protection uses a unique
RecoverPoint configuration called MetroPoint.
1 RecoverPoint and VPLEX Installation Prerequisites
1.1 Supported Topologies
1.1.1 VPLEX Local
RecoverPoint Local Replication: RecoverPoint supports point-in-time protection at a single site.
RecoverPoint Remote Replication: RecoverPoint supports remote replication and point-in-time recovery.
Multiple remote copies are supported.
RecoverPoint Local and Remote Replication: RecoverPoint supports replication to remote clusters and
point-in-time recovery from the local cluster. Multiple replication remote copies are supported.
1.1.2 VPLEX Metro
RecoverPoint Local Replication: Metro provides active-active access to data at two sites that are within
synchronous replication distance. RecoverPoint supports a local copy with point-in-time recovery.
RecoverPoint Remote Replication: Metro provides active-active access to data at two sites that are within
synchronous replication distance. RecoverPoint supports remote copies and point-in-time recovery. Multiple
remote copies are supported.
RecoverPoint Local and Remote Replication: Metro provides active-active access to data at two sites that
are within synchronous replication distance. RecoverPoint supports a local copy, remote copies and point-in-
time recovery. Multiple remote copies are supported.
1.1.3 MetroPoint
RecoverPoint 4.1 introduces the MetroPoint topology, which has the ability to protect a distributed (DR1)
VPLEX virtual volume at both VPLEX Metro sites. A RecoverPoint cluster is attached to each VPLEX Metro
site. One RecoverPoint cluster is the active production copy and the other RecoverPoint cluster is the standby
production copy. The active production copy replicates data to the remote cluster. The standby production
cluster does exactly the same thing as the active production cluster except that it does not replicate to the
remote cluster. It does, however, maintain an up-to-date journal so that the standby production cluster can
instantly become the active production cluster and continue data protection without disruption.
In addition, the MetroPoint topology supports a local copy at each side of the distributed volume. The two
local copies are independent of each other. The standby production copy is only standby relative to the link to
the remote copy. It actively replicates to its local copy, if present.
1.2 VPLEX Configuration Supported by RecoverPoint
RecoverPoint 3.5 and later support VPLEX Local and VPLEX Metro configurations. RecoverPoint can replicate
virtual volumes in VPLEX Metro consistency groups if they are local to the site where RecoverPoint is
attached, or distributed virtual volumes with a local extent.
1.3 Multiple Clusters Sharing a Single Splitter
VPLEX supports multiple RPA clusters sharing one VPLEX splitter. For the number of RPA clusters supported
per splitter, refer to the “Scalability” section of the release notes for your version of RecoverPoint.
1.4 VAAI Support
vStorage API for Array Integration (VAAI) speeds up certain VMware operations by offloading them to array
hardware. The following table shows the VAAI commands supported on the VPLEX storage virtualization
platform.
The VAAI commands and RecoverPoint support levels for VAAI commands are described in Dell EMC
RecoverPoint Replicating VMware Technical Notes.
If a VAAI command is rejected, it is not necessary to disable it. As soon as the command is rejected, VMware
reverts to legacy behavior without risk to data or performance.
Table 1. VAAI Commands Supported with the VPLEX Splitter

VAAI Command                  | Support                                  | Result
Hardware-accelerated Locking  | Supported in GeoSynchrony 5.1 and later  | Handled in the VPLEX front end; resulting read-writes passed to the underlying array
Block Zeroing                 | Supported in GeoSynchrony 5.2 and later  | Expanded in the VPLEX front end; resulting writes passed to the underlying array
Full Copy                     | Rejected                                 | Reverts to legacy behavior
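As noted above, a rejected VAAI primitive simply causes VMware to revert to legacy behavior. A minimal sketch of that fallback pattern is shown below; the function names are hypothetical illustrations, not the actual ESXi data-mover API.

```python
# Illustrative sketch of VAAI fallback: if the storage platform (here,
# the VPLEX splitter) rejects a hardware-offload primitive such as Full
# Copy, the host falls back to the legacy software data mover. All names
# are made up for illustration.

def copy_with_vaai(primitive_supported, hw_copy, sw_copy, src, dst):
    """Try the hardware offload; on rejection, revert to legacy behavior."""
    if primitive_supported:
        return hw_copy(src, dst)   # offloaded to the array
    return sw_copy(src, dst)       # legacy host-side copy, no data risk

hw = lambda s, d: ("hardware", s, d)
sw = lambda s, d: ("software", s, d)

# Full Copy is rejected by the VPLEX splitter, so the software path runs.
assert copy_with_vaai(False, hw, sw, "lunA", "lunB")[0] == "software"
# A supported primitive (for example, Block Zeroing) is offloaded.
assert copy_with_vaai(True, hw, sw, "lunA", "lunB")[0] == "hardware"
```

This is why disabling a rejected VAAI command is unnecessary: the fallback happens automatically per operation.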
1.5 Splitter Scalability
The VPLEX splitter supports multiple RPA clusters attached to one splitter. For the number of clusters
supported per splitter, and for the number of LUNs that can be attached to a VPLEX splitter, refer to the
RecoverPoint release notes for your version of RecoverPoint.
1.6 Fake Size
The RecoverPoint “fake size” feature, added in RecoverPoint 5.2, allows the splitter to change the apparent
size of a volume. This is extremely helpful when you must replicate your production volumes to larger replica
volumes.

If a RecoverPoint failover operation is performed in which the roles of the production and replica volumes are
swapped, the fake size is applied to the newly assigned production volume (“repl_vol”).

When the VPLEX splitter attaches to a volume, it indicates what fake size (if any) needs to be applied to the
volume. This is provided in the call to the splitter’s register-volume protection context. Once configured, all
directors apply the smallest requested fake size (in the unexpected case of two splitters requesting different
sizes).
When the splitter disconnects from a volume, the same state call is made. Once splitters on all directors have
disconnected, the fake-size will be lifted and the volume will be exposed at full size on the front-end.
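The attach/detach behavior described above can be sketched as follows. This is an illustrative model only; the class and method names are hypothetical and do not reflect the real VPLEX splitter implementation.

```python
# Sketch of the fake-size resolution rule: while splitters are attached,
# directors apply the smallest requested fake size; once all splitters
# have disconnected, the volume is exposed at its full size again.

class VolumeExposure:
    """Tracks fake-size requests from attached splitters for one volume."""

    def __init__(self, full_size_blocks):
        self.full_size = full_size_blocks
        self.requests = {}  # splitter id -> requested fake size

    def attach(self, splitter_id, fake_size=None):
        # A splitter may request a fake size when it attaches.
        if fake_size is not None:
            self.requests[splitter_id] = fake_size

    def detach(self, splitter_id):
        # The same state call is made on disconnect.
        self.requests.pop(splitter_id, None)

    def apparent_size(self):
        # Smallest outstanding request wins; no requests means full size.
        if not self.requests:
            return self.full_size
        return min(self.requests.values())

vol = VolumeExposure(full_size_blocks=2097152)      # 1 TB of 512 B blocks
vol.attach("splitter-A", fake_size=1048576)
vol.attach("splitter-B", fake_size=1572864)
assert vol.apparent_size() == 1048576               # smallest request wins
vol.detach("splitter-A")
vol.detach("splitter-B")
assert vol.apparent_size() == 2097152               # fake size lifted
```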
2 RecoverPoint and VPLEX Installation Best Practices
2.1 VPLEX/RecoverPoint installation and integration
2.1.1 Prerequisites
• VPLEX 5.1.0 or later
• RecoverPoint 3.5 or later
• All RecoverPoint appliances must be Gen4 or later
• For RecoverPoint management information to be available at both sites of a VPLEX Metro
installation, the following are required:
- IP connectivity between the Metro sites
- RecoverPoint cluster added to the VPLEX CLI at the non-RecoverPoint site of the VPLEX Metro
In some situations, it may not be possible to add the RecoverPoint cluster to the VPLEX CLI at
the non-RecoverPoint site due to limitations on network connections.
2.2 Importing certificates into RecoverPoint and VPLEX
The best practice is to use certificates to avoid user configuration errors. To do so, the RecoverPoint
certificate is imported into VPLEX and the VPLEX certificate is imported into RecoverPoint. Without the
certificates, VPLEX and RecoverPoint cannot obtain management information about volume types. With the
certificates, VPLEX and RecoverPoint will not display unsupported options in their GUI so that the user will
not select them inadvertently. For instance, distributed volumes will not appear in the RecoverPoint Add
Journal Volume dialog box because distributed volumes are not supported for journals.
2.3 Adding RecoverPoint Cluster to VPLEX
2.3.1 Add the RecoverPoint cluster to VPLEX
Best practice is to zone every RPA port to both VPLEX front-end and back-end ports.

Note: The system checks that all 4 ports on the RPA are zoned to both VPLEX front-end and back-end
ports (dual-channel mode). When RPAs are zoned to VPLEX using single-channel mode (2 RPA ports
zoned to VPLEX front-end ports, and 2 RPA ports zoned to VPLEX back-end ports), this command issues
a warning. Administrators who have purposely configured single-channel mode can safely ignore the
warning.
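The dual-channel check described in the note can be sketched as a small classification function. The data structures and names below are illustrative; they do not represent the actual VPLEX zoning validation code.

```python
# Sketch of the zoning-mode check: dual-channel means every RPA port is
# zoned to both VPLEX front-end and back-end ports; single-channel means
# the ports are split between front end and back end.

def zoning_mode(rpa_ports, fe_zoned, be_zoned):
    """Return 'dual-channel', 'single-channel', or 'invalid'."""
    ports, fe, be = set(rpa_ports), set(fe_zoned), set(be_zoned)
    if all(p in fe and p in be for p in ports):
        return "dual-channel"        # best practice, no warning
    if fe | be == ports and fe.isdisjoint(be):
        return "single-channel"      # supported, but the command warns
    return "invalid"

ports = ["rpa0:0", "rpa0:1", "rpa0:2", "rpa0:3"]
# All 4 ports zoned to both FE and BE: dual-channel mode.
assert zoning_mode(ports, ports, ports) == "dual-channel"
# 2 ports to FE, 2 ports to BE: single-channel mode (warning issued).
assert zoning_mode(ports, ports[:2], ports[2:]) == "single-channel"
```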
2.3.2 Registering RecoverPoint
Register RecoverPoint Fibre Channel initiator ports with VPLEX:
• Log in to the VPLEX GUI (https://<ManagementServerIP>)
• Select Provision Storage > <cluster name> > Initiators
• In the Initiator Name field, select an unregistered RecoverPoint initiator. The port WWN will be
preceded by the prefix UNREGISTERED. (Example: UNREGISTERED-0x5001248061931a3b)
• Click Register and assign a meaningful name to the port
• Set Host Type = RecoverPoint. Repeat until all RecoverPoint unregistered ports are registered
• The RecoverPoint repository and journals must be local to the VPLEX site protected by
RecoverPoint, and not distributed. If necessary, the RecoverPoint repository and journals may be on
the physical array, without VPLEX virtualization, or on a different array. The best practice, however, is
to have the RecoverPoint repository, journals, and replication volumes on VPLEX
• Create a VPLEX local (not distributed) virtual volume for the RecoverPoint repository. Make sure the
RecoverPoint repository is at least 3 GB
• When setting up a MetroPoint topology, a RecoverPoint repository is required at both Metro sites
2.3.3 Creating RecoverPoint storage view
In the VPLEX GUI, create a storage view for the RecoverPoint cluster:
• From the drop-down menu in the VPLEX GUI, select Provision Storage > <cluster name> > Storage
Views > Create Storage View
• Use the Create Storage View wizard to create the VPLEX Storage View that will be used for the
RecoverPoint RPAs. Name it RecoverPoint System or other appropriate name to indicate that it
contains RecoverPoint volumes
• Add to the RecoverPoint System storage view:
- All RecoverPoint initiators
- All VPLEX front-end ports used by the RecoverPoint cluster
- VPLEX Virtual Volumes to be used for the RecoverPoint repository, journal volumes and
replication volumes
- Make sure that repository is exposed only to RecoverPoint and not to any host server
• The following volumes must not be added to the RecoverPoint storage view:
- Remote volumes
- Volumes that are already in a different RecoverPoint storage view
- Volumes in VPLEX consistency groups whose members are in a different RecoverPoint storage
view
A RecoverPoint cluster may take up to two minutes to register changes to VPLEX consistency groups. Wait
two minutes after making the following changes before creating or changing a RecoverPoint consistency
group:
• Adding virtual volumes to or removing them from a VPLEX consistency group
• Enabling or disabling the recoverpoint-enabled property of a VPLEX consistency group
• Changing the detach rule of a VPLEX consistency group
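The two-minute settle rule above can be enforced with a simple guard: record when a VPLEX consistency group was last changed, and refuse to touch the corresponding RecoverPoint group until the window has passed. This is a minimal sketch; the class and method names are illustrative, not part of either product's API.

```python
# Sketch of the two-minute settle rule: after a VPLEX consistency-group
# change, wait before creating or changing a RecoverPoint consistency
# group, since the RecoverPoint cluster may take up to two minutes to
# register the change.

import time

SETTLE_SECONDS = 120  # up to two minutes for RecoverPoint to catch up

class SettleGuard:
    def __init__(self):
        self.last_vplex_change = None

    def vplex_cg_changed(self, now=None):
        # Call this after adding/removing volumes, toggling
        # recoverpoint-enabled, or changing the detach rule.
        self.last_vplex_change = time.monotonic() if now is None else now

    def ready_for_rp_change(self, now=None):
        if self.last_vplex_change is None:
            return True
        now = time.monotonic() if now is None else now
        return (now - self.last_vplex_change) >= SETTLE_SECONDS

guard = SettleGuard()
guard.vplex_cg_changed(now=0)
assert not guard.ready_for_rp_change(now=60)   # still inside the window
assert guard.ready_for_rp_change(now=120)      # safe to proceed
```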
2.4 Registering VPLEX Credentials in RecoverPoint
2.5 Add the VPLEX Splitter
• In the GUI, register the VPLEX storage in RecoverPoint:
- Select the RPA Clusters tab > select Storage and click the Add button
- Choose “Register any storage of type” and select “VPLEX”
- Enter the Login Credentials
- Select the method of importing the VPLEX certificate. Press OK
2.6 Creating RecoverPoint Consistency Groups
The best practice is to have exactly the same replication volumes in the RecoverPoint consistency groups as
in the corresponding VPLEX consistency groups. For example:
• VPLEX consistency group contains production volumes A, B, C and D; when adding A to a
RecoverPoint consistency group, the system will propose to add B, C and D to the same consistency
group. The best practice is to accept
• The VPLEX splitter supports LUNs of different sizes in one replication set when using RecoverPoint
4.0 and later with GeoSynchrony 5.2 or later. Neither earlier versions of RecoverPoint nor earlier
versions of GeoSynchrony support the fake size feature. In consequence, if a copy volume is
attached to the VPLEX splitter that does not support fake size, it must be the exact same size as the
production volume, regardless of which splitter the production volume is attached to
Note: The replication volumes will be attached to the VPLEX splitter automatically.
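A parity check between the two consistency groups captures the best practice above. The sketch below is illustrative; the function and volume names are made up, and a real configuration would pull membership from the VPLEX and RecoverPoint management interfaces.

```python
# Sketch of the consistency-group parity rule: the RecoverPoint group
# should contain exactly the replication volumes of the corresponding
# VPLEX consistency group.

def cg_mismatch(vplex_cg, rp_cg):
    """Return (missing from RP group, extra in RP group); both empty means parity."""
    vplex, rp = set(vplex_cg), set(rp_cg)
    return vplex - rp, rp - vplex

# VPLEX CG holds A, B, C, D; only A was added to the RecoverPoint CG,
# so the system proposes adding B, C, and D (and the best practice is
# to accept).
missing_in_rp, extra_in_rp = cg_mismatch(
    vplex_cg=["A", "B", "C", "D"],
    rp_cg=["A"],
)
assert missing_in_rp == {"B", "C", "D"}
assert extra_in_rp == set()
```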
2.7 Creating VPLEX consistency groups
Use VPLEX virtual volumes to create RecoverPoint replication and journal volumes (journals must be local
virtual volumes, minimum size 5 GB).
2.7.1 Consistency groups for replication volumes
• Local virtual volumes and distributed virtual volumes should be in separate consistency groups
• If local copy volumes reside on the same cluster as production volumes, they must be in a separate
VPLEX consistency group from the production volumes
2.7.2 VPLEX consistency group for journals and repository
• Production and local copy journals may be in the same VPLEX consistency group
• Add the repository to the consistency group for journals
2.7.3 Enable RecoverPoint for each Consistency Group
• Click on the Consistency Group name
• Check the RecoverPoint-enabled checkbox
2.7.4 RecoverPoint and Host Storage Views
• Add RecoverPoint replication volumes and journal volumes to the RecoverPoint storage view
• To allow host access to RecoverPoint production or copy volumes, add replication volumes to the
host’s storage view. Make sure that journal volumes are exposed only to RecoverPoint and not to any
host server
2.8 RecoverPoint Cluster Management
When protecting both VPLEX Metro clusters with RecoverPoint, the best practice is to add all RecoverPoint
clusters protecting a VPLEX volume to the VPLEX management server at both VPLEX Metro clusters. Doing
so allows VPLEX to manage the volumes from either management server.
2.8.1 Limitations
• When the VPLEX splitter is uncontrollable (that is, it is not Fibre-dead but RecoverPoint cannot
communicate with it), RecoverPoint replication is not available
• VPLEX Metro remote exported volumes cannot be replicated by RecoverPoint
• RecoverPoint cannot replicate writes if the Metro link is down and RecoverPoint is on the losing site.
This should not occur, but if it does, replication is disabled until the VPLEX splitter reports that the
protected volume has been fully recovered. This limitation does not apply to the MetroPoint topology
• The RecoverPoint repository and journal volumes must be a VPLEX Local volume (not distributed). If
necessary, the repository or journals may be presented directly to RecoverPoint from the physical
array without VPLEX virtualization. If there is a compelling need, the repository may be on a different
array. The best practice is to use virtualized VPLEX volumes for the repository and journals
• RecoverPoint cannot enforce the selection of a supported repository volume type because the
repository must be selected before the RecoverPoint system is running and able to retrieve volume
information from VPLEX. The user is therefore responsible for selecting a supported repository volume
• The following features are not available for VPLEX volumes protected by RecoverPoint:
- VPLEX virtual volume expansion
- RecoverPoint virtual image access (use logged access instead)
- Device migrations between two VPLEX clusters are not supported if one leg of the device is
replicated by RecoverPoint
• A VPLEX cluster supports up to 8192 splitter sessions. One splitter session is required for each LUN
that is used either as a production or as a copy. As a result, if production and a copy reside on the
same VPLEX cluster (continuous local replication), one session is required for the production LUN
and another for the local copy LUN. If the VPLEX splitter is shared among multiple RecoverPoint
clusters, the sessions of each cluster is counted towards the maximum number of splitter sessions
per VPLEX cluster
• VPLEX supports a maximum of 8000 distributed devices (total number of distributed virtual devices)
plus top-level (not child of another device) local devices with global visibility
• VPLEX supports a maximum of 8000 local devices with global visibility
• When more than one VPLEX system is connected to the same RP installation (for instance,
production, continuous local replication, and continuous remote replication), only one VPLEX system
can be upgraded at a time. An upgrade of a second VPLEX system will fail until the upgrade of the
first VPLEX system has completed
• If replicating from VPLEX MetroPoint to VPLEX MetroPoint, the remote distributed volumes must be
fractured
• When using MetroPoint topology with VMware Site Recovery Manager (that is, replicating virtual
machines), the following best practices are highly recommended:
- Use ESXi 5.5 or later
- In the Storage Replication Adapter for RecoverPoint, configure a placeholder on a DR1 volume that is
zoned to ESX servers at both Metro clusters. All hosts that have access to production on either
side of the Metro must be able to see the placeholder
- As in all Site Recovery Manager Implementations, a placeholder is also required on the remote
copy
• Disable Disk.AutoremoveOnPDL on the ESX server. Disabling is difficult on ESX 5.1; hence the
recommendation to upgrade to 5.5. Unless Disk.AutoremoveOnPDL is disabled, the host
automatically removes the Permanent Device Lost (PDL) device and all paths to the device if no open
connections to the device exist, or after the last connection closes. If the device returns from the PDL
condition, the host can discover it but treats it as a new device. Data consistency for virtual machines
on the recovered device is not guaranteed
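The splitter-session accounting described in the limitations above can be sketched as follows. This is an illustrative helper only, not a RecoverPoint tool; the function names are hypothetical.

```python
# Sketch: count splitter sessions against the 8192-per-VPLEX-cluster limit.
# One session is consumed per production LUN and per copy LUN that resides
# on the cluster (illustrative arithmetic based on the limits listed above).
MAX_SESSIONS_PER_VPLEX_CLUSTER = 8192

def splitter_sessions(production_luns: int, local_copy_luns: int) -> int:
    """Sessions consumed on one VPLEX cluster by production and local copy LUNs."""
    return production_luns + local_copy_luns

def within_limit(production_luns: int, local_copy_luns: int) -> bool:
    """True if the configuration stays within the splitter-session limit."""
    return (splitter_sessions(production_luns, local_copy_luns)
            <= MAX_SESSIONS_PER_VPLEX_CLUSTER)

# Continuous local replication of 3000 LUNs (production plus local copy on
# the same cluster) consumes 6000 sessions, comfortably within the limit.
```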
Note: For the latest and most current information be sure to check the VPLEX Simple Support Matrix,
VPLEX Product Guide and VPLEX Release Notes.
3 RecoverPoint and VPLEX Connectivity Best Practices
3.1 RPA Initiators
RPA Fibre Channel ports serve as both initiators and targets. VPLEX supports only RecoverPoint Gen4 and
later RPAs. It is therefore not necessary to separate RPA Fibre Channel ports into initiator and target zones.
The best practice is to use the RPA Fibre Channel ports as both initiators and targets. In this configuration,
maximum performance and redundancy and optimal use of resources are achieved. If, because of Initiator-
Target LUN (ITL) limitations or other non-RecoverPoint considerations, you need to zone RPA Fibre Channel
ports in either the initiator zone or the target zone, but not both, there will be only minor differences in
performance and availability. However, initiator-target separation is not supported at all in the following cases:
• When mixing different splitter types in the same cluster
• When using remote replication over Fibre Channel
• When using distributed consistency groups
3.2 Cabling
• As for any other array-based splitter, every RecoverPoint appliance needs to be cabled to two fabrics.
Single-fabric configurations are not supported
• Each RecoverPoint appliance should have at least two physical connections to the front-end fabric
switch
• The best practice is that each VPLEX director should have at least one front-end connection to each
fabric
• The best practice is that each VPLEX director have at least one physical connection to each of the
back-end fabric switches
3.3 Zoning
• Dell EMC supports port WWN zoning only
• RecoverPoint WWNs can be recognized by their 50:01:24:8x prefix
3.3.1 VPLEX-to-storage zoning
Zone physical arrays to the VPLEX back-end.
• Each director must have redundant I/O paths to every back-end storage array
• Each storage array must have redundant controllers; the best practice is that at least two ports of
each controller be connected to the back-end fabric
• Each VPLEX director supports a maximum of 4 paths per storage volume
3.3.2 RPA-to-VPLEX front-end zoning
In each fabric, create one zone that includes all RecoverPoint Fibre Channel ports and all VPLEX front-end
ports involved in RecoverPoint replication. Both VPLEX and RecoverPoint must have at least one Fibre
Channel port per fabric for each VPLEX director.
3.3.3 RPA-to-VPLEX back-end zoning
• Create a zone for each VPLEX back-end port
• Add all RecoverPoint FC ports to the zone of each VPLEX back-end port
• VPLEX back-end and front-end ports should never be in the same zone
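The back-end zoning rule above (one zone per VPLEX back-end port, each containing all RPA FC ports) can be sketched programmatically. This is an illustrative helper with hypothetical WWNs, not a switch-vendor tool.

```python
def backend_zones(vplex_be_ports, rpa_fc_ports):
    """Build one zone per VPLEX back-end port; every zone contains that
    port plus all RecoverPoint appliance FC ports (rule from section 3.3.3)."""
    return {
        f"rp_{port.replace(':', '')}": [port] + list(rpa_fc_ports)
        for port in vplex_be_ports
    }

# Hypothetical WWNs for illustration only.
vplex_be = ["50:00:14:42:60:37:cd:10", "50:00:14:42:60:37:cd:11"]
rpa_fc = ["50:01:24:80:00:a1:00:01", "50:01:24:80:00:a1:00:02"]

zones = backend_zones(vplex_be, rpa_fc)
# Two zones, each holding one VPLEX BE port plus both RPA ports.
```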
3.3.4 Validating RecoverPoint in VPLEX
• In the VPLEX GUI, select Consistency Groups and click the Check RecoverPoint Alignment button
near the bottom of the display
• Start replication
Note: For details about RecoverPoint replication, refer to the RecoverPoint Administrator’s Guide.
Virtual Access and Virtual Access with Roll are currently not supported for VPLEX virtual volumes.
4 RecoverPoint and VPLEX Best Practices
4.1 VPLEX Scalability Limits
4.1.1 MetroPoint Support
RecoverPoint 4.1 introduces MetroPoint, a disaster recovery topology for VPLEX Metro. The two active-active
clusters of VPLEX Metro are both protected by RecoverPoint. The MetroPoint topology can maintain up to
five copies of data, including one remote copy (for continuous disaster recovery) and one local copy at each
VPLEX site (for continuous data protection).
4.1.2 Splitter interoperability
• RecoverPoint/CL and RecoverPoint/EX support the mixing of VNX/CLARiiON, Symmetrix, and VPLEX
splitters within the same RecoverPoint system, across different RPA clusters
• Each volume can be attached to only one splitter type
• A consistency group copy can span both Symmetrix and VNX/CLARiiON splitters but not VPLEX
splitters
• A volume can only be included in one consistency group across all RPA clusters in the case of multi-
cluster (sharable) splitters
4.1.3 RecoverPoint Journal
To replicate a production write, while maintaining the undo data that is needed if you want to roll back the
target copy image, five-phase distribution mode is applied. This mode produces five I/Os at the target copy.
Of these, two I/Os are directed to the replication volumes and three I/Os are directed to the journal. Thus, the
throughput requirement of the journal at the target copy is three times that of the production and 1.5 times that
of the replication volumes. For that reason, it is very important to configure the journal correctly.
Misconfiguration may result in a decrease in sustained throughput, an increase in journal lag, and high loads.
Journal I/Os are typically large and sequential as opposed to the target copy I/Os that depend on application
write I/O patterns which may be random. For performance reasons, the I/O chunk size that RecoverPoint
sends depends on the array type including:
• VNX – 512 KB
• VMAX – 256 KB
• VPLEX – 256 KB for reads and 128 KB for writes
- Starting with GeoSynchrony 5.2, the write size is 1 MB
Example: The application generates throughput of 50 MB/s and 6400 write IOPS. What is the required
performance of the journal at a remote copy? The throughput requirement of the journal would be (50
MB/s x 3) = 150 MB/s. The average I/O size in this example is (50 MB/s / 6,400 IOPS) = 8 KB. Thus, many
I/Os (16-64, depending on the array type) would be aggregated into a single I/O to the journal. The
IOPS requirement from the journal would be between (150 MB/s / 512 KB) = 300 and (150 MB/s / 128 KB)
= 1200.
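The sizing arithmetic of the worked example above can be sketched as follows. This is illustrative only; the function and its return keys are hypothetical, and the chunk size depends on the array type as listed above.

```python
# Sketch of the journal sizing arithmetic for five-phase distribution:
# journal throughput is 3x production throughput, and journal I/Os are
# aggregated up to the array-dependent chunk size.
def journal_requirements(prod_mb_s: float, prod_write_iops: int,
                         chunk_kb: int) -> dict:
    journal_mb_s = prod_mb_s * 3                  # 3 of the 5 I/Os hit the journal
    avg_io_kb = prod_mb_s * 1024 / prod_write_iops
    aggregation = chunk_kb / avg_io_kb            # host writes per journal I/O
    journal_iops = journal_mb_s * 1024 / chunk_kb
    return {"journal_mb_s": journal_mb_s,
            "writes_aggregated_per_io": aggregation,
            "journal_iops": journal_iops}

# Worked example from the text: 50 MB/s, 6400 write IOPS, 128 KB write chunks
# (the VPLEX write size) -> 150 MB/s, 16 writes per journal I/O, 1200 IOPS.
req = journal_requirements(50, 6400, 128)
```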
Production journals, as opposed to copy journals, do not have strict performance requirements since they are
used for writing only small amounts of metadata during replication. In the case of failover, however, these
production journals become copy journals that have a major effect on performance. This should be considered
when configuring the system.
4.1.4 Splitter to RPA Multipathing
In VNX, starting with R32 MR1 SP1 and R33, and in VPLEX, starting with VPLEX 5.4, the splitter performs
load balancing of the I/Os to the RPA through different paths. Hence, there is an advantage to having as
many paths as possible between the storage array and the RPA. Adding paths helps to reduce the load on
the RPA ports and increases concurrency between the splitter and the RPA.
4.1.5 MetroPoint
The MetroPoint solution allows full RecoverPoint protection of the VPLEX Metro configuration, maintaining
replication even when one Metro site is down.
I/O flow during MetroPoint replication is as follows:
• The VPLEX splitter is installed on all VPLEX directors on all sites. The splitter is located beneath the
VPLEX cache. When a host sends a write I/O to a VPLEX volume, the I/O is intercepted by the splitter
on both of the Metro sites. Each splitter that receives the I/O sends it to the RPA that is connected
and runs the consistency group that protects this volume. Only when it is acknowledged by the RPA
is it sent to the backend storage array. After the I/O to the backend storage array on both Metro sites
is complete, the host is acknowledged
• In this flow, two RPAs receive the I/O, one RPA on each side of the Metro. Only the RPA that runs
the active production replicates the I/O to the remote copy. The RPA that runs the standby production
will only mark the regions of the I/O as dirty, as if the group is in a paused state.
Note: For additional information about MetroPoint, see the EMC RecoverPoint Deploying with VPLEX
Technical Notes and the EMC RecoverPoint 4.1 Administrator’s Guide.
4.2 Storage Best Practices
• Dual fabric designs for fabric redundancy and HA should be implemented to avoid a single point of
failure
• Each RecoverPoint appliance must physically connect to both fabrics
• Zoning should consist of a set of zones with a single initiator and up to 16 targets each
• Avoid port speed issues between the fabric and RecoverPoint by using dedicated port speeds while
taking care not to over-subscribe SAN switches
• Know how RecoverPoint handles I/Os:
- Host sends write to storage
- Splitter intercepts write
- RPA acknowledges the write
- Splitter sends write to storage
- Storage acknowledges write
- Splitter acknowledges write
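The ordering of the write-splitting steps above can be sketched as a simple sequence. This is illustrative pseudocode of the ordering only, not splitter internals; the function name is hypothetical.

```python
# Sketch: the write-splitting sequence. The key invariant is that the
# splitter does not forward the write to back-end storage until the RPA
# has acknowledged it.
def split_write() -> list:
    events = []
    events.append("host sends write to storage")
    events.append("splitter intercepts write")
    events.append("RPA acknowledges the write")
    events.append("splitter sends write to storage")
    events.append("storage acknowledges write")
    events.append("splitter acknowledges write")
    return events
```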
4.3 Host Connectivity Best Practices
• Dual fabric designs are considered best practice
• Each host is required to have 2 physical connections to each fabric
• Each host is required to have 2 paths to each storage array on each fabric
(Example: Fabric-A = SPa0/SPb0 and Fabric-B = SPa1/SPb1)
• A minimum of 4 paths are required for NDU.
• Observe Director CPU utilization and schedule NDU for non-peak times
4.4 LAN Connectivity Best Practices
• Supports both IPv4 and IPv6 addressing for the Management Server
• Management Server is configured for Auto-Negotiate (1 Gbps)
• VPN connectivity between Management Servers requires a routable, pingable connection between
each cluster
• Network QoS must be able to handle file transfers during NDU procedures
• The following Firewall ports must be open:
- Internet Key Exchange (IKE): UDP port 500
- NAT Traversal in IKE (IPsec NAT-T): UDP port 4500
- Encapsulating Security Payload (ESP): IP protocol number 50
- Authentication Header (AH): IP protocol number 51
- Secure Shell (SSH) and Secure Copy (SCP): TCP port 22
For more detailed security information, please consult the following product documentation for VPLEX and
RecoverPoint:
• VPLEX Security Configuration Guide
• VPLEX Release Notes
• RecoverPoint Security Configuration Guide
• RecoverPoint Release Notes
4.5 Performance Best Practices
The following sections present the indicators used to measure performance for MetroPoint replication and the
factors that affect those indicators.
4.5.1 IOPS and Throughput
Performance tests indicate that MetroPoint maximum IOPS and throughput for sync and async are only 2%-
4% lower than RP replication performance of only one side of the VPLEX Metro (that is, without a standby
production copy). The only observed exception is a 10% degradation in the IOPS test of sync replication.
4.5.2 Added Response Time
Because the I/O needs to be sent by two splitters to two RPAs before it is sent to the backend
storage array, RecoverPoint added response time could be expected to be higher than in a regular replication
flow. Since, however, this is done in parallel on both splitters, it does not increase the response time.
In async replication, RecoverPoint added response time is 20% higher than VPLEX replication in a non-
MetroPoint configuration. In sync replication they are equal, since in sync replication the splitter at the active
production receives an ACK from the RPA only when the I/O reaches the remote site. In most cases by that
time the splitter at the standby production has already received an ACK from the RPA, since it is only marking
the data and not replicating it.
4.5.3 Deployment Considerations
As in regular replication, load balancing of the consistency groups over RPAs can greatly affect RecoverPoint
overall performance. Hence, in MetroPoint, it is advisable to balance the active and standby copies of the
consistency groups between the two production RecoverPoint clusters.
Example:
There are four MetroPoint CGs with a throughput of 50 MB/s each. All of the RPA clusters have 2 RPAs each.
How should the CGs be configured to balance the load?
Put two CGs on each RPA. On each RPA, define one of the CGs as active on one of the production
RP clusters and the other CG as active on the second production RP cluster.
In this way, each RPA at production will need to handle an incoming throughput of 100 MB/s but replicate only
50 MB/s and each RPA at the remote site will need to distribute 100 MB/s.
Example:
Two MetroPoint CGs have a throughput of 50 MB/s each. All RPA clusters have 2 RPAs each. How should the
CGs be configured to balance the load?
Put one CG on each RPA. Unless you have WAN restrictions between one of the production sites and the
remote site, performance-wise it does not matter which copy is active and which is standby.
It would be incorrect to put the two CGs on RPA 1 but define one of the CGs as active on one production site
and the other CG as active on the second production site, since in that configuration, RPA 1 on the remote site
will need to distribute 100 MB/s while RPA 2 will be idle.
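The balancing rule worked through in the examples above can be sketched as follows. This is an illustrative helper only, not a RecoverPoint feature; the assignment scheme (round-robin over RPAs, alternating the active production cluster) is an assumption drawn from the examples.

```python
# Sketch: assign MetroPoint CGs round-robin to RPAs, alternating which
# production cluster hosts the active copy so replication load is split.
def balance(cgs, num_rpas=2, num_prod_clusters=2):
    plan = []
    for i, cg in enumerate(cgs):
        plan.append({"cg": cg,
                     "rpa": i % num_rpas + 1,
                     "active_cluster": (i // num_rpas) % num_prod_clusters + 1})
    return plan

def replicated_load(plan, throughput_mb_s):
    """MB/s each (RPA, production cluster) pair must replicate to the remote site."""
    load = {}
    for entry in plan:
        key = (entry["rpa"], entry["active_cluster"])
        load[key] = load.get(key, 0) + throughput_mb_s
    return load

# First example above: four CGs at 50 MB/s each. Each RPA ends up
# replicating 50 MB/s per production cluster rather than 100 MB/s from one.
plan = balance(["cg1", "cg2", "cg3", "cg4"])
load = replicated_load(plan, 50)
```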
4.6 VPLEX Splitter Best Practices
In VPLEX splitter environments:
• The VPLEX consistency group name will only be displayed in the VPLEX Group column of the
volume list if you “Attach volumes to the splitter” for the VPLEX splitter (including certificate)
• After you register your VPLEX storage, you will not be able to select VPLEX volumes and non-VPLEX
volumes in the same consistency group copy. It is recommended that all volumes in a VPLEX
consistency group be configured in a single consistency group copy, and all volumes in a non-VPLEX
consistency group be configured in another consistency group copy
• A VPLEX MetroPoint consistency group can contain a maximum of one remote copy and from zero to
two local copies. If there is no remote copy, there must be two local copies, one on each side of the
VPLEX Metro
• To turn a non-MetroPoint group into a MetroPoint group, when the production volume is a VPLEX
distributed volume and there is no more than one remote copy, you can “Add a copy”
• To turn a MetroPoint group into a non-MetroPoint group, you can select the group’s standby
production copy in the “Manage Protection” screen and “Remove a copy”
• In a VPLEX MetroPoint consistency group, the Application Source Copy parameter in the “Group
policies” tab cannot be changed unless:
- RecoverPoint is replicating correctly
- In vCenter, neither SRM Test nor a Recovery Plan is running
• After changing the Application Source Copy parameter, a Rescan must be run in vCenter;
otherwise SRM will fail
Note: When creating a MetroPoint group, select the MetroPoint group check box. When selected, only
VPLEX distributed volumes are displayed in the volume list.
4.7 RecoverPoint Group Policies
4.7.1 Primary RPA
The primary RPA is the RPA that you prefer to use to replicate the consistency group. When the primary RPA is
not available, the consistency group will switch to another RPA in the RPA cluster. Whether data will transfer
when replication is switched to another RPA depends on the value of the transfer_by_non_preferred
parameter of the config_group_policy CLI command.
Note: Best practice is to ensure that groups that replicate in “Synchronous replication mode” are set
to use different RPAs than groups that replicate in “Asynchronous replication mode”. Mixing between
the two may result in low I/O rates for the synchronous groups. It is also recommended that dynamic
sync and purely synchronous consistency groups reside on different RPAs whenever possible.
4.7.2 Priority
Default = Normal
Only relevant for remote replication over the WAN or Fibre Channel when two or more consistency groups are
using the same Primary RPA.
Select the priority assigned to this consistency group. The priority determines the amount of bandwidth
allocated to this consistency group in relation to all other consistency groups using the same Primary RPA.
Possible values are: Idle, Low, Normal, High, and Critical.
In asynchronous replication, groups with a priority of Critical are provided ten times the priority of normal
groups. Groups with a priority of High are provided three times the priority of normal groups. Groups with a
priority of Low are provided 50% of the priority of normal groups. Groups with a priority of Idle are provided
1% of the priority of normal groups.
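The relative weights above (Critical 10x, High 3x, Normal 1x, Low 50%, Idle 1%) can be sketched as a share calculation. This is illustrative arithmetic only; the group names and helper are hypothetical, not a RecoverPoint API.

```python
# Sketch: relative bandwidth shares for async-replicating groups that
# share a primary RPA, using the priority weights listed above.
PRIORITY_WEIGHT = {"Idle": 0.01, "Low": 0.5, "Normal": 1.0,
                   "High": 3.0, "Critical": 10.0}

def bandwidth_share(groups):
    """groups: {group_name: priority}. Returns each group's fraction of bandwidth."""
    total = sum(PRIORITY_WEIGHT[p] for p in groups.values())
    return {name: PRIORITY_WEIGHT[p] / total for name, p in groups.items()}

# A Critical group sharing an RPA with a Normal and a Low group receives
# 10 / 11.5, roughly 87% of the available bandwidth.
share = bandwidth_share({"erp": "Critical", "files": "Normal", "test": "Low"})
```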
4.7.3 Preferred Cluster
Note: Only relevant when MetroPoint group is selected.
Default = Follow VPLEX bias rules. Sets the preferred RPA cluster of the MetroPoint group to the active or
standby production or to follow the VPLEX bias rules.
4.7.4 Distribute group writes across multiple RPAs
Default = disabled
Note: Both enabling and disabling this setting causes the journal of all copies in the consistency
group to be lost.
“Distribute consistency groups” across multiple RPAs to significantly increase the maximum available RPA
throughput, allowing for a much larger group. For throughput performance statistics (during
synchronous and asynchronous replication) and feature limitations, see the EMC RecoverPoint Performance
Guide.
When enabled, a minimum of one and a maximum of three secondary RPAs can be selected.
Note: Before changing this setting, ensure all preferred RPAs (both primary and secondary) are
connected by Fibre Channel and can see each other in the SAN and read “What should I know before
setting a group as distributed?”
4.7.5 External Management
Default = none
Possible values are None, RPCE, SRM, and REE.
• None: Using an external management application (such as RP/CE, SRM, or REE) is disabled
• RPCE: To enable support for RecoverPoint/CE (Microsoft Failover Clustering)
When Managed by = External Application, hosts in a Microsoft Cluster can automatically fail over from
one site to the other. RecoverPoint assures that the application data is in the identical state at the original
site and the failover site, so that the failover is transparent to the application.
• SRM: To enable support for VMware Site Recovery Manager. This option is only valid if a
RecoverPoint Storage Replication Adapter for VMware Site Recovery Manager is installed on the
vCenter Servers. For more information, refer to the Dell EMC RecoverPoint Adapter for VMware Site
Recovery Manager Release Notes
When Managed by = External Application, Site Recovery Manager manages the group and can perform
automatic failover and test failover from one cluster to another.
• REE: To enable support for Dell EMC Replication Enabler for Microsoft Exchange Server. This option
is valid only if Dell EMC Replication Enabler for Microsoft Exchange Server is installed on all
Microsoft Exchange Mailbox Servers in the Data Availability Group (DAG). For more information
about Replication Enabler, refer to Dell EMC Replication Enabler for Microsoft Exchange Server
Installation and Configuration Guide and RecoverPoint Replicating Microsoft Exchange Server
Technical Notes
When Managed by = External Application, Replication Enabler for Exchange manages the group and can
perform failover from one cluster to another.
When any value other than “None” is selected, “Managed By” is enabled and you have the following options:
• RecoverPoint: Check this option for planned or unplanned maintenance of the RecoverPoint system.
When activated, external application support is disabled and user-initiated RecoverPoint capabilities,
such as image access, image testing, changing policies, and creating bookmarks, are available
• External Application: When activated, the specified external application manages RecoverPoint. All
RecoverPoint user-initiated capabilities are disabled. The user cannot access images, change
policies, or change volumes. Bookmarks cannot be created in the RecoverPoint GUI but they can be
created using the RecoverPoint command-line interface bookmark commands
In all cases, Recovery Copy specifies which copy the external application should fail over to, in case it
initiates a failover.
4.8 RecoverPoint Link Policies
4.8.1 Asynchronous
Default = enabled
When enabled, RecoverPoint replicates the consistency group data in “Asynchronous replication mode”.
4.8.2 Snap-based Replication
Default = disabled
When enabled, RecoverPoint replicates the consistency group data in “Snap-based replication mode”.
• None: Disables “Snap-based replication mode”
• Periodic: Sets the interval between snaps to any value between 1 and 1440 minutes, as illustrated in
“How does periodic snap-based replication work?” When set to “Periodic snap-based replication”, an
interval should be defined. The default interval is 10 minutes; however, best practice is to set the
interval to 15 minutes or more to ensure optimal performance
• On Highload: Uses “Asynchronous replication mode” as the default replication mode and
dynamically switches to “Snap-based replication mode” upon high load. When high load is over,
dynamically switches back to “Asynchronous replication mode”
Note: Read “What should I know before enabling snap-based replication?” for the complete list of limitations.
4.8.3 Synchronous
Default = disabled
When enabled, RecoverPoint replicates the consistency group data in “Synchronous replication mode”.
4.8.4 Dynamic by Latency
Default = disabled
Only relevant for synchronous replication mode.
When enabled, RecoverPoint uses “Dynamic sync mode” and alternates between synchronous and
asynchronous replication modes, as necessary, according to latency conditions (the number of milliseconds
or microseconds between the time the data is written to the local RPA and the time that it is written to the
RPA or journal at the remote site).
• Start async replication above: When the specified limit is reached, RecoverPoint automatically starts
replicating in “Asynchronous replication mode”
• Resume sync replication below: When the specified limit is reached, RecoverPoint goes back to
replication in “Synchronous replication mode”
4.8.5 Dynamic by Throughput
Default = disabled
Only relevant for synchronous replication mode.
When enabled, RecoverPoint uses “Dynamic sync mode” and alternates between synchronous and
asynchronous replication modes, as necessary, according to throughput conditions (the total writes that reach
the local RPA, per copy, in KB/s).
• Start async replication above: When the specified limit is reached, RecoverPoint automatically starts
replicating in “Asynchronous replication mode”
• Resume sync replication below: When the specified limit is reached, RecoverPoint goes back to
replicating in “Synchronous replication mode”
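The two "Dynamic sync mode" settings above describe a threshold pair with hysteresis: replication switches to async above the upper limit and returns to sync only below the lower limit. A minimal sketch of that state machine, assuming a latency trigger (the same logic applies to the throughput trigger):

```python
# Sketch: Dynamic sync mode as a two-threshold state machine. The gap
# between the "start async" and "resume sync" limits prevents rapid
# flapping between modes when the metric hovers near a single threshold.
def next_mode(current_mode: str, latency_ms: float,
              start_async_above: float, resume_sync_below: float) -> str:
    if current_mode == "sync" and latency_ms > start_async_above:
        return "async"          # latency exceeded the upper limit
    if current_mode == "async" and latency_ms < resume_sync_below:
        return "sync"           # latency dropped below the lower limit
    return current_mode         # otherwise, stay in the current mode
```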
4.8.6 RPO
Default = 25 seconds
This setting defines the required lag of each link in a consistency group and is set manually in MB, GB, writes,
seconds, minutes, or hours.
In RecoverPoint, RPO starts being measured when a write made by the production host reaches the local
RPA and stops being measured when the write reaches either the target RPA or the target journal (depending
on the value of the transfer_by_non_preferred parameter of the config_group_policy CLI command).
Note: When the value of the regulate_application parameter of the config_link_policy CLI command is set to
no (default is no), the specified RPO is not guaranteed. RecoverPoint will try its best to replicate within the
specified RPO without affecting host performance.
4.8.7 Compression
Default = Low
To compress data before transferring it to a remote RPA cluster. Can reduce transfer time significantly.
Note: Only relevant for asynchronous remote replication. Both the enabling and disabling of compression
causes a short pause in transfer and a short initialization. Compression decreases transfer time but increases
the source RPA’s CPU utilization.
• High: Yields the highest bandwidth reduction ratio and requires the most RPA resources
• Medium: Yields an average bandwidth reduction ratio and requires an average amount of RPA
resources
• Low: Yields the lowest bandwidth reduction ratio and requires the least RPA resources
• None: Compression is disabled and requires no RPA resources
4.8.8 Enable Deduplication
Default = disabled
To eliminate repetitive data before transferring the data to a remote RPA cluster. Can reduce transfer time
significantly.
Note: Only relevant for asynchronous remote replication. Compression must be enabled before
“Deduplication” can be enabled. Both the enabling and disabling of deduplication causes a short pause in
transfer and a short initialization. Deduplication decreases transfer time but increases the source RPA’s CPU
utilization.
4.8.9 Snapshot Granularity
Default = fixed (per second)
• Fixed (per write): To create a snapshot for every write operation over a specific (local or remote) link
• Fixed (per second): To create a snapshot per second over a specific (local or remote) link
• Dynamic: To have the system determine the snapshot granularity of a specific (local or remote) link
according to available resources
Note: When you “Distribute consistency groups”, the snapshot granularity of all links in the consistency group
can be no finer than one second.
Note: In VPLEX splitter environments, the VPLEX consistency group name will only be displayed in the
VPLEX Group column of the volume list if you “Attach volumes to the splitter” for the VPLEX splitter (including
certificate). After you enter VPLEX credentials, you will not be able to select VPLEX volumes and non-VPLEX
volumes in the same consistency group copy. It is recommended that all volumes in a VPLEX consistency
group be configured in a single consistency group copy and all volumes in a non-VPLEX consistency group
be configured in another consistency group copy.
4.8.10 Test a Copy
The VPLEX splitter does NOT currently support the Virtual Access and “Roll to Image” options.
5 RecoverPoint and VPLEX Failure Scenarios
5.1 MetroPoint Disaster Scenarios
The MetroPoint topology is specifically designed to provide protection in the following disaster scenarios:
5.1.1 VPLEX Fracture
• A fracture occurs when the communication link between the two VPLEX Metro sites is down. When a
fracture occurs between the two VPLEX sites, RecoverPoint replication will continue through the
VPLEX winner site. The preferred RecoverPoint cluster can be chosen manually in RecoverPoint.
Alternatively, the preferred RecoverPoint cluster can be set to follow the VPLEX detach rules for the
VPLEX consistency group
If the preferred RecoverPoint cluster is not at the VPLEX winning site, a production switchover will occur.
During the fracture, RecoverPoint at the VPLEX winning site will continue to replicate to the remote site.
The losing site will not receive any I/Os during the fracture and will not maintain an up-to-date journal. If
there is a local copy on the losing site, it will not be updated during fracture. If there is a local copy at the
winning site, it will continue to be updated as usual. Once communication between the two VPLEX
clusters is restored, the two VPLEX clusters synchronize their data according to the I/Os that occurred
during the fracture. RecoverPoint replication continues during this synchronizing process. When the
VPLEX clusters are fully synchronized, if there is a local copy at the losing site, RecoverPoint
synchronizes it with the winning site and then resumes replicating to it.
• VPLEX site with RecoverPoint active production fails to write to storage or its production storage fails
When the VPLEX site with RecoverPoint active production fails to write to storage or the production
storage fails, it will notify RecoverPoint that a production switchover is needed. The standby production
cluster will then become the active production cluster and will continue the replication to the remote
cluster. When the failed VPLEX site resumes writing to storage, if it is set as the preferred site, there will
be a production switchback and RecoverPoint at that cluster will resume replicating to the remote cluster.
• RecoverPoint active production cluster fails to write to remote cluster
When the RecoverPoint active production fails to write to the remote cluster, a production switchover
occurs. The standby production copy will become the active production copy and will continue replicating
to the remote cluster. When communication between the previously active production RecoverPoint
cluster and the remote RecoverPoint cluster resumes, if it is the preferred site, there will be a production
switchback and RecoverPoint at that cluster will resume replicating to the remote cluster.
5.2 VPLEX Metro Link Failure If the Metro link between VPLEX sites fails, VPLEX elects a single site for each VPLEX consistency group to
be the winner; the winning site will continue to receive all host I/Os. VPLEX consistency groups must be
attached to the RecoverPoint splitter at the preferred site. RecoverPoint 3.5 and 4.0 can only protect a
distributed device at one VPLEX Metro site. VPLEX will only allow a virtual volume to be protected by
RecoverPoint if the virtual volume is in a VPLEX consistency group that is RecoverPoint enabled and it is the
preferred site.
If the preferred site (site with the RecoverPoint splitter) fails, VPLEX Witness can override the site preference
rule (override the pre-defined winner). At that point, host I/Os will continue at the losing site but new writes will
not be protected by RecoverPoint until the failed site comes back online and is updated. RecoverPoint access
to the volumes will be disabled until the failed site is repaired and updated.
In RecoverPoint 4.1, VPLEX MetroPoint topology can protect a distributed volume at both VPLEX Metro sites.
In the MetroPoint topology, when one site fails, the other site will automatically become the active site. Writes
will then be protected by RecoverPoint at that site.