A brief history of SPRACE network
• March 2004: SPRACE cluster started operating
– Main server connected to a shared low-end L2 switch inside the USP Physics Institute
• Nov. 2004: SC’04 bandwidth challenge => data transfer record
– 1 + 1 Gbps (in+out): data transfer record between the North and South Hemispheres
– 3 servers at the USP Physics Institute, 3 servers at the NAP of Brazil
• June 2005: direct connection to ANSP
– KyaTera fiber connection
– Cisco Catalyst 3750 donated by Caltech
– Iqara Telecom dark fiber available from the USP Computer Center to the NAP of Brazil
• Feb. to Oct. 2006: SPRACE connected to USP core routers
– CTBC Telecom took over Iqara => fibers changed; still a 1 Gbps link
• Nov. 2006: MRV CWDM system installed
– Direct connection to ANSP routers reestablished
• March 2009: MRV CWDM replaced by Padtec DWDM
– This replacement paved the way to 10 Gbps
• Aug. 2009: SPRACE cluster moved from USP to the UNESP datacenter
• Nov. 2009: SC’09 bandwidth challenge => new data transfer record
– 8.5 + 8.5 Gbps (in+out, disk-to-disk) from the new datacenter
– Switch w/ 10G ports loaned by Cisco; GridUnesp storage system (Sun Thumpers)
12/21/14 SPRACE Workshop 2014 2
A brief history of SPRACE network II
SPRACE network as it was in 2005-2006 …
SPRACE/NCC network as it is nowadays
In 2009 we moved to Unesp: the network infrastructure evolved accordingly
SPRACE/NCC network: the next generation
2014: Dell S6000 switch + 100G lambda (client side: 2x 40G + 2x 10G ports)
Setup for stress-test during SC’14
Ampath: International Exchange Point in Miami
The São Paulo - Miami path has 4x 10G links: 2x ANSP and 2x RNP
Setup for SC’14
Ampath engineers configured 3 VLANs for SP and 3 VLANs for RJ
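As an illustration of the kind of provisioning this involves (the VLAN IDs, names, and interface below are hypothetical, not the ones Ampath actually used), an IOS-style sketch carving three VLANs for the São Paulo circuits on a trunk port might look like:

```
! Hypothetical sketch: three VLANs for the SP circuits
vlan 3001
 name SP-SC14-circuit-1
vlan 3002
 name SP-SC14-circuit-2
vlan 3003
 name SP-SC14-circuit-3
!
interface TenGigabitEthernet1/1
 description Trunk toward Sao Paulo (ANSP path)
 switchport mode trunk
 switchport trunk allowed vlan 3001-3003
```

A matching set of three VLANs would be defined the same way for the RJ circuits.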
SC’14: ROADM channel monitoring system
Developed by Beraldo Leal (NCC)
Next challenge: 100G Demonstration
• OpenWave 100G Network Testbed
– Ampath/AmLight NSF award for leveraging U.S.-Latin America connectivity
– Main goal is the deployment of an experimental 100G alien wave between the US and Brazil
– Challenging engineering experiment on a highly constrained operational undersea cable system (three submarine segments => 9800 km)
– OpenWave project partners: NSF, FIU (via the AmLight project), FAPESP (via the ANSP project), RNP, Padtec, Latin American Nautilus (LANautilus), Florida LambdaRail (FLR), Internet2
• OpenWave will need to be stress-tested with real data
– SPRACE/NCC will be part of the team responsible for designing and running this experiment
SPRACE network and the LHCONE
• LHCOPN (Optical Private Network)
– Tier0 <-> Tier1s and Tier1s <-> Tier1s
– Connects CERN and the 12 national Tier1s
– In place and stable since the WLCG inauguration
• LHCONE (Open Network Environment)
– Tier1s <-> Tier2s and Tier2s <-> Tier2s
– A collaborative effort among R&E network providers
– Based on Open Exchange points
– Traffic separation: no clash with other data transfers, resources allocated for and funded by the HEP community
Why LHCONE?
• In the original design of data movement between tiers (MONARC project), each T2 was supposed to transfer data only from its corresponding T1
• Recently the experiments decided to change their computing models, assuming that each T2 could have access to data stored in every T1 and also in any T2
• Main advantage: better management of the LHC traffic on the national and international paths
• Multipoint Connection Service: L3 VPN network based on routers and VRFs => already in production
• Point-to-Point Service: scheduled circuits on demand, SDN => early stages (R&D)
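To make the VRF-based traffic separation concrete: a VRF gives the LHC traffic a routing table of its own on each participating router, isolated from the commodity Internet tables. A hypothetical IOS-style fragment (the VRF name, route distinguisher, addresses, and interface are illustrative, not taken from any actual LHCONE configuration) might look like:

```
! Hypothetical sketch: dedicated VRF for LHCONE traffic
vrf definition LHCONE
 rd 65000:100
 address-family ipv4
 exit-address-family
!
interface TenGigabitEthernet2/1
 description Link into the LHCONE L3 VPN
 vrf forwarding LHCONE
 ip address 192.0.2.1 255.255.255.252
```

Routes learned on that interface land only in the LHCONE table, so HEP transfers never compete with, or leak into, general-purpose traffic.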
Planning for the future: SC’15 & beyond
• We need to start SC’15 discussions far in advance
• SDN should probably be the central subject
• SC’15 will take place in Austin, Dell’s “backyard” (Dell is the largest company in the Greater Austin area) => new Dell switches with lots of 100G ports
• The interaction with the Unicamp team has been quite positive, with real gains for both sides, so we would like to keep interacting
• There is a chance that Caltech, Unesp, and Fermilab will share a booth next year; we should explore that possibility as much as possible
• We should keep the interaction with Padtec alive; our partnership could be reinforced by leveraging R&D proposals of common interest
SWOT Analysis: SPRACE/NCC Network
Strengths
• Triple redundancy (Megatelecom, KyaTera, Metrosampa)
• Grown in house from the very beginning
• Dedicated network, not tied to the University commodity network
• Datacenter infrastructure
• Directly connected to the router tied to the international links
• Strong partnership with the network provider (ANSP) and vendors
Weaknesses
• Short on manpower
• Lack of spare parts
• Limited budget
• Physical space for more people
• KyaTera network is extremely unstable; Metrosampa also not so reliable
• Lack of a signed agreement with Telefonica
• Datacenter not yet ready for 40G
Opportunities
• Contact with emerging technologies
• Interaction with Caltech, Padtec, Unicamp, and others
• LHCONE R&D opportunities
• 100G demo and availability of the link
• Booth at SC’15 to showcase our work
Threats
• Dependence on projects administered by 3rd parties
• Dependence on warranty extensions and technical support (very expensive)
• Competition among vendors
• “Abduction” of experts as they become proficient and highly productive