
Project Deliverable IST - 6th FP

Contract N° 507295

MUSE_DA2.2_V02.doc 1/193 PUBLIC

D A2.2 - Network architecture and functional specifications for the multi-service access and edge

François Fredricx Alcatel Research & Innovation [email protected]

Identifier Deliverable D A2.2

Class Report

Version 02

Version Date 19th January 2005

Distribution Public

Responsible Partner Alcatel

Filename WPA2_0038_V02_DA2.2.doc


DOCUMENT INFORMATION

Project ref. No. IST-6thFP-507295

Project acronym MUSE

Project full title Multi-Service Access Everywhere

Security (distribution level) Public

Contractual delivery date 31st October 2004

Actual delivery date 19th January 2005

Deliverable number D A2.2

Deliverable name Network architecture and functional specifications for the multi-service access and edge.

Type Report

Status & version V02

Number of pages 193

WP / TF contributing WPA2

WP / TF responsible François Fredricx - ALC B

Main contributors See list p.3.

Editor(s) François Fredricx (ALC) Acknowledgement to Christofer Flinta (EAB) and Rainer Stademann (SIE) as co-editors of MA2.7

EU Project Officer Pertti Jauhiainen

Keywords Architecture, Data Plane, QoS, Auto-Configuration, Multicast, Security, IP routing and forwarding, IPv6

Abstract (for dissemination) Network architecture solutions and specifications for generic mechanisms, full model of Ethernet-based network, data plane model of IP-based networks.


LIST OF CONTRIBUTORS

Note: the content of this deliverable is based on the previous Milestones MA2.3, MA2.5 and MA2.7. The names of all authors of these Milestones have also been included in the following list.

François Fredricx (ALC), Jeanne de Jaeger (ALC), Ali Rezaki (ALC), Christèle Bouchat (ALC), Erwin Six (ALC), Lieve Bos (ALC), Sven Ooghe (ALC), Peter Domschitz (ALC), Peter Adams (BT), Les Humphrey (BT), Dave Thorne (BT), Csaba Lukovszki (BUTE), Sandro Krauß (DT), Thomas Monath (DT), Christofer Flinta (EABS), Hans Mickelsson (EABS), Anders Eriksson (EABS), Zere Ghebretensaé (EABS), Annikki Welin (EABS), Romain Vinel (FT), Frédéric Jounay (FT), Jean-Philippe Luc (FT), Michel Borgne (FT), Gilbert Le Houreou (FT), Michel Herve (FT), Andreas Foglar (IFX), Stefan Wavering (IFX), Tim Stevens (IME), Koert Vlaeminck (IME), Govinda Rajan (LU NL), Miroslav Zivkovic (LU NL), Philippe Hervé (LU NL), Antonio Gamelas (PTI), Teresa Almeida (PTI), Vitor Pinto (PTI), Vitor Simoes Ribeiro (PTI), Francisco Fontes (PTI), Manuel Fernandes (PTI), Vitor Marques (PTI), Enrique Areizaga Sanchez (ROB), Rainer Stademann (SIE), Johannes Bergmann (SIE), Thomas Gremmer (SIE), Norbert Boll (SIE), Alun Foster (STM), Pascal Moniot (STM), Sylvie Danton (THO N), Hervé Le Bihan (THO N), Manuel Sanchez Yangüela (TID), Gabriel Moreno (TID), Antonio Elizondo (TID), Jan de Nijs (TNO), Rob Kooij (TNO), Pieter Nooren (TNO), Arnoud van Neerbos (TNO), Nils Bjorkman (TS)


DOCUMENT HISTORY

Version | Date            | Comments and actions                                                                                                  | Status
V01     | 17 January 2005 | DA2.2 based on MA2.7, including review comments on MA2.7; new introduction paragraph 1.1; new chapter 5 on conclusions |
V02     | 19 January 2005 | More descriptive section 1.1; clarifications in section 2.2.3; final corrections                                       | Final


TABLE OF CONTENTS

DOCUMENT INFORMATION ............................................................................................................2

DOCUMENT HISTORY............................................................................................................................5

TABLE OF CONTENTS...........................................................................................................................6

LIST OF FIGURES AND TABLES ..........................................................................................................8

ABBREVIATIONS..................................................................................................................................12

REFERENCES .......................................................................................................................................18

EXECUTIVE SUMMARY .......... 21
Introduction .......... 21
Generic architecture considerations .......... 22
Ethernet Network Model .......... 23
IP Network Model .......... 24
What's next? .......... 26

1 INTRODUCTION .......... 27
1.1 Scope of the deliverable .......... 27
1.1.1 Context .......... 27
1.1.2 Scope .......... 29
1.1.3 Content .......... 30
1.2 Positioning of DA2.2 .......... 30
1.3 Focus on Multi-service .......... 32

2 GENERIC ASPECTS .......... 33
2.1 Positioning of the Access and Aggregation network .......... 33
2.1.1 Terminology and logical model .......... 33
2.1.2 Connectivity models .......... 35
2.1.3 Reference Control Architecture .......... 37
2.2 Model of Residential Gateway .......... 41
2.2.1 Definitions .......... 41
2.2.2 Access Gateway – general boundary assumptions .......... 42
2.2.3 The Residential Gateway Architecture Model .......... 46
2.3 General Connectivity .......... 52
2.3.1 Business models in an Access Network .......... 52
2.3.2 Peer-to-peer traffic .......... 60
2.3.3 Multicast and Multipoint Delivery .......... 63
2.4 QoS Architectures .......... 70
2.4.1 QoS architecture principles .......... 71
2.4.2 Traffic classes in the network .......... 79
2.4.3 3GPP/IMS-based architecture .......... 81
2.5 AAA Architectures .......... 88
2.5.1 Auto-configuration in a multi-provider environment .......... 88
2.5.2 Control Plane options .......... 91
2.5.3 One Step Configuration .......... 95
2.5.4 IMS Model adaptation .......... 99
2.5.5 Open Issues .......... 107

3 ETHERNET NETWORK MODEL .......... 108
3.1 Connectivity in the Ethernet Network Model .......... 108
3.1.1 Overview of Ethernet network model .......... 108
3.1.2 Using MPLS .......... 123
3.1.3 Providing end-end connectivity .......... 126
3.1.4 Summary Ethernet Network Model connectivity .......... 140
3.2 AAA architectures .......... 143
3.2.1 Control Plane .......... 143


3.2.2 Open Issues .......... 144
3.3 QoS architectures .......... 144
3.3.1 Mapping of the 3GPP/IMS architecture to the Ethernet model .......... 144
3.4 Security .......... 147
3.4.1 Scope .......... 147
3.4.2 Generalities .......... 147
3.4.3 Security threats & mechanisms with IPoPPPoE traffic .......... 147
3.4.4 Security threats & mechanisms with IPoE traffic .......... 148
3.4.5 Overview of security mechanisms .......... 149

4 IP NETWORK MODEL .......... 150
4.1 Overview .......... 150
4.1.1 Network Scenario .......... 150
4.1.2 IP Network Model Characteristics .......... 151
4.1.3 Use Cases .......... 152
4.2 Use Cases For PPPoE Handling .......... 153
4.2.1 PPP Use Case 1: L2 switching of PPPoE traffic .......... 153
4.2.2 PPP Use Case 2: PPPoE relay of PPP traffic .......... 155
4.2.3 PPP Use Case 3: LAC / PTA in the IP Forwarder .......... 156
4.3 Use Cases For IPoE Handling .......... 158
4.3.1 NAP provides IP transport service .......... 158
4.3.2 NAP provides routed IP service for application wholesale .......... 167
4.3.3 NAP provides routed IP service for IP wholesale to third-party NSPs .......... 172
4.4 Use Cases For IPv6 .......... 173
4.4.1 Access network assumptions for the IPv6 use cases .......... 173
4.4.2 IPv6 address structure .......... 173
4.4.3 Allocation Efficiency Considerations .......... 174
4.4.4 Static Addressing Schemes .......... 177
4.4.5 Dynamic Addressing .......... 180
4.4.6 Integration of dynamic and static addressing .......... 189
4.4.7 Access network routing issue .......... 190
4.5 Topics for further consideration .......... 190

5 CONCLUSIONS .......... 191
5.1 Generic mechanisms .......... 191
5.2 Ethernet-based network model .......... 192
5.3 IP-based network model .......... 193


LIST OF FIGURES AND TABLES

Figure 1-1: Relation between the different milestones and deliverables for the MUSE network architecture ..........31

Figure 2-1. Reference Network based on the DSL-Forum reference service provider interconnection model. ....................................................................................................33

Figure 2-2. In the Ethernet Transport scenario there is Ethernet connectivity between the CPN and the Access Edge Node. In an extended scenario Ethernet frames may also be tunneled further up to the NSP Edge Nodes...................................................................35

Figure 2-3. In the IP Transport scenario there is Ethernet connectivity only in the First Mile link, and optionally in the aggregation part or regional part of the network.....................36

Figure 2-4. Reference control architecture – top view............................................................38

Figure 2-5. Business roles according to MUSE [1].................................................................39

Figure 2-6. The MUSE business roles mapped onto the reference control architecture........41

Figure 2-7 : Bridged RGW model ...........................................................................................49

Figure 2-8: Routed RGW model without NAPT......................................................................50

Figure 2-9: Routed RGW with NAPT......................................................................................51

Figure 2-10: Unified model of a hybrid Residential Gateway configuration............................52

Figure 2-11 : Full wholesaling with PPP to third-party ISPs/NSPs.........................................55

Figure 2-12 : Business model (b) ...........................................................................................56

Figure 2-13 : Business model (c) ...........................................................................................57

Figure 2-14 : Business model (d) ...........................................................................................58

Figure 2-15 : Business model (e) ...........................................................................................59

Figure 2-16. Upstream tunnelling. All traffic is routed by destination address. Upstream traffic is tunnelled up to the Edge Node. Downstream traffic is not tunneled............................62

Figure 2-17 : Service provider oriented model .......................................................................71

Figure 2-18 : Application signalling based model with policy push ........................................72

Figure 2-19. Centralised resource management based on pre-provisioned QoS pipes between access and edge nodes....................................................................................74

Figure 2-20. Centralised resource management based on the capacity of the links of the Ethernet aggregation network. ........................................................................................75

Figure 2-21. Example of resource reservation using signalling..............................................77

Figure 2-22: Components of QoS control in network provider’s domain................................78

Figure 2-23: Main elements of 3GPP/IMS-based architecture for QoS and resource control.........................................................................................................................................83

Figure 2-24 : Set-up of end-to-end session in the application signalling based model. .........83

Figure 2-25. Generalization of architecture for roaming, based on 3GPP IMS roaming. .......85

Figure 2-26. Set up of QoS connection in 3GPP/IMS implementation of service provider oriented model. ...............................................................................................................86


Figure 2-27: Mapping of IMS-based architecture to DA1.1 business roles ..........87

Figure 2-28: Relationship between service session and PPPoE session. .............................89

Figure 2-29: Option #1 for DHCP and RADIUS interaction....................................................92

Figure 2-30: Option #2 for DHCP and RADIUS interaction....................................................93

Figure 2-31: One step configuration and AAA process, DHCP server in AN .........................96

Figure 2-32 : One step configuration and AAA process, DHCP relay agent in AN ................98

Figure 2-33: 3GPP IMS Architecture....................................................................................100

Figure 2-34: 3GPP2 IMS Architecture..................................................................................101

Figure 2-35: IMS architecture with ACS. ..............................................................................103

Figure 2-36: IMS AAA architecture with NAP PDF and Radius. ..........................................105

Figure 2-37: IMS AAA with NAP PDF and public Gq interface. ...........................................106

Figure 2-38: 3GPP IMS architecture with direct link between AAA and AF. ........................107

Figure 3-1 : Functional basis of Ethernet network model .....................................................108

Figure 3-2 : Intelligent bridging (residential users) ...............................................................109

Figure 3-3 : Cross-connecting (residential users) ................................................................110

Figure 3-4 : Business users in the Ethernet Network Model ................................................111

Figure 3-5 : Illustration of MAC FF .......................................................................................119

Figure 3-6 : MPLS for L2 VPN business services ................................................................123

Figure 3-7 : MPLS for L3 VPN service .................................................................................124

Figure 3-8 : MPLS Encapsulation in cross-connect mode ...................................................125

Figure 3-9 : MPLS Encapsulation in bridging mode .............................................................125

Figure 3-10 : Possible failures in the NAP............................................................................127

Figure 3-11: Peer-peer in intelligent bridging mode and cross-connect mode.....................134

Figure 3-12: Multicast server connected to the aggregation network...................................137

Figure 3-13 : Multicast server connected via the ASP .........................................................138

Figure 3-14: IGMP functionalities in the Ethernet NW model...............................................139

Figure 3-15: Connectivity in the Ethernet NW model (Intelligent bridging model) ................142

Figure 3-16: One step configuration and AAA process for a L2 network model. .................144

Figure 3-17: IMS in a cross-connected Ethernet model ..........146

Figure 3-18: IMS in a bridged Ethernet model ..........146

Figure 4-1: Network Scenario for the IP Models ..................................................................150

Figure 4-2: IP network model characteristics .......................................................................151

Figure 4-3 : L2 switching of IPoPPPoE traffic, combined with IPoE traffic...........................153

Figure 4-4: PPPoE relay.......................................................................................................155

Figure 4-5 : IPoPPPoE traffic handled in IP forwarder (LAC/PTA), combined with IPoE traffic......................................................................................................................................156


Figure 4-6: Basic scenario with NAP providing IP and PPP transport services ...................158

Figure 4-7: Data plane example for IP transport service......................................................159

Figure 4-8: Session and service aware IP forwarding (network to user forwarding) ............162

Figure 4-9: 802.x based service selection with RADIUS proxy chaining..............................163

Figure 4-10: DHCP based session binding ..........................................................................165

Figure 4-11: An ARP proxy in the IP forwarder always replies with its own MAC address ..166

Figure 4-12: Most applicable business scenario for routed IP in the NAP network..............167

Figure 4-13: VLAN Aggregation ...........................................................................................169

Figure 4-14: IPv6 address ....................................................................................................173

Figure 4-15: division of free bits ...........................................................................................173

Figure 4-16: HD-ratio for an increasing number of ISPs when only static address allocation is deployed. Note that the number of ISPs does not influence the HD-ratio in case of dynamic address allocation...........................................................................................176

Figure 4-17: ISP prefix propagation .....................................................................................177

Figure 4-18: Basic hierarchical model ..................................................................................177

Figure 4-19: Redundancy.....................................................................................................178

Figure 4-20: NAP ER interconnection ..................................................................................179

Figure 4-21: NAP proprietary addressing.............................................................................180

Figure 4-22: Hierarchical aggregation points ..........181

Figure 4-23: Address delegation entities ..........184

Figure 4-24: Address delegation policies ..........185

Figure 4-25: IPv6 prefix delegation architecture...................................................................187

Figure 4-26: Dynamic/static addressing integration .............................................................189

Table 2-1: Relevant DSL-Forum recommendations...............................................................42

Table 2-2: Configuration of RGW parameters by providers ...................................................51

Table 2-3 : Considered business models ..............................................................................54

Table 2-4: Comparison of different multimedia content adaptation techniques .....................68

Table 2-5: Multicast capabilities per application.....................................................................69

Table 2-6: Proposed traffic classes ........................................................................................80

Table 3-1 : Summary of the basic use of VLANs in Ethernet network model.......................122

Table 3-2 : Possible failures and required updates ..............................................................129

Table 3-3: Summary direct peer-peer ..................................................................................134

Table 3-4: Summary peer-peer via EN.................................................................................136

Table 3-5: IGMPv2 messages..............................................................................................137

Table 3-6 : Security measures for Ethernet Network Model.................................................149


Table 4-1: Binding of IP sessions to service connections in the IP forwarder......................160

Table 4-2: Session and service aware IP forwarding (user to network forwarding) .............161

Table 4-3: Session and service aware IP forwarding (network to user forwarding) .............162

Table 4-4: address utilisation figures for distribution of /48 IPv6 prefixes ............................176


ABBREVIATIONS 3GPP 3rd Generation Partnership Project

AAA Authentication, Authorisation & Accounting

AAL ATM Adaptation Layer

ABR Available Bit Rate

ABT ATM Block Transfer

ACS Auto-Configuration Server

ADSL Asymmetric Digital Subscriber Line

AF Assured Forwarding

ALG Application Layer Gateway

AM Access Multiplexer

APON ATM Passive Optical Network

ARP Address Resolution Protocol

ASP Application Service Provider

ASR Aggregation/Switching/Routing

ATC ATM Transfer Capability

ATM Asynchronous Transfer Mode

BAS Broadband Access Server

BER Bit Error Rate

B-NT Broadband Network Termination

BoD Broadband on Demand

BRAS Broadband Remote Access Server

BROL Basic Recursive Operating Language

BPDU Bridge Protocol Data Units

CAC Call Admission Control

CATV Cable TV

CBR Constant Bit Rate

CDN Content Distribution Network

CHAP Challenge Handshake Authentication Protocol

CIDR Classless InterDomain Routing

CLEC Competitive Local Exchange Carrier

CLP Cell Loss Priority

CO Central Office

COPS Common Open Policy Service

CoS Class of Service

CPE Customer Premises Equipment


CPN Customer Premises Network

CRC Cyclic Redundancy Check

CSP Corporate Service Providers

CST Common Spanning Tree

C-VLAN Customer Virtual Local Area Network

CWDM Coarse Wavelength Division Multiplexing

DBA Dynamic Bandwidth Assignment

DF Default Forwarding

DiffServ Differentiated Services

DHCP Dynamic Host Configuration Protocol

DMT Discrete Multi-Tone modulation

DNS Domain Name System

DoS Denial of Service

DP Distribution Point

DSCP Differentiated Services Code Point

DSL Digital Subscriber Line

DSLAM DSL Access Multiplexer

DSM Dynamic Spectrum Management

EAP Extensible Authentication Protocol

EF Expedited Forwarding

EFM Ethernet in the First Mile

EPG Electronic Program Guide

EPON Ethernet Passive Optical Network

ER Edge Router

EUT End-User Terminal

EVC Ethernet Virtual Connection

FDB Forwarding Data Base

FEC Forward Error Correction

FOO Sample name for absolutely anything ("whatever"). See RFC3092

FPD Functional Processing Device

FQDN Fully Qualified Domain Name

FTTB Fibre To The Building/Business

FTTCab Fibre To The Cabinet

FTTEx Fibre To The Exchange

FTTH Fibre to the Home

FTTO Fibre To The Office

FWA Fixed Wireless Access

GARP Generic Attribute Registration Protocol


GFR Guaranteed Frame Rate

GMRP GARP Multicast Registration Protocol

GPON Gigabit-capable Passive Optical Network

HSI High Speed Internet

HSS (ETSI TISPAN) Home Subscriber Server

IAD Integrated Access Devices

IANA Internet Assigned Numbers Authority

IFG Interframe Gap

IGMP Internet Group Management Protocol

ILEC Incumbent Local Exchange Carrier

ILMI Interim Local Management Interface

IMS IP Multimedia Subsystem

IntServ Integrated Services

IP Internet Protocol

IPCP IP Control Protocol

IPDV IP packet Delay Variation

IPER IP packet Error Ratio

IPG Interpacket Gap

IPLR IP packet loss ratio

IPTD IP packet Transfer Delay

IPv4 Internet Protocol version 4

IPv6 Internet Protocol version 6

ISDL ISDN Digital Subscriber Line

ISP Internet Service Provider

LAC L2TP Access Concentrator

LACP Link Aggregation Control Protocol

LCP Link Control Protocol

LER Label Edge Router

LEx Local eXchange

LMI Local Management Interface

LNS L2TP Network Server

LSP Label Switched Path

LSR Label Switching Router

MAC Media Access Control

MAC DA Media Access Control Destination Address

MAC SA Media Access Control Source Address

MAC FF MAC Forced Forwarding

MBGP Multicast Border Gateway Protocol


MDF Main Distribution Frame

MEF Metro Ethernet Forum

MIDCOM MIDdlebox COMmunications

MITM Man-in-the-Middle

MLDv2 Multicast Listener Discovery version 2 protocol

MPLS Multi-protocol Label Switching

MSDP Multicast Source Discovery Protocol

MSDSL Multirate Symmetric Digital Subscriber Line

MST Multiple Spanning Tree

MSTI Multiple Spanning Tree Instance

MSTP Multiple Spanning Tree Protocol

NAI Network Access Identifier

NAP Network Access Provider

NAPT Network Address and Port Translator

NAS Network Access Server

NAT Network Address Translator

NGN Next Generation Network

NRCS Network Resource Control Servers

NSP Network Service Provider

NT Network Termination

NT2L / NT2W NT2 functional block in RGW : NT2-LAN part, NT2-WAN part

NV-RAM Non-Volatile Random Access Memory

OSGi Open Services Gateway Initiative

OAM Operations, Administration & Maintenance

OLT Optical Line Terminator

ONT Optical Network Terminator

ONU Optical Network Unit

PADI PPPoE Active Discovery Initiation

PADO PPPoE Active Discovery Offer

PAP Password Authentication Protocol

PBX Private Branch eXchange

PDH Plesiochronous Digital Hierarchy

PDU Protocol Data Unit

PDV Packet Delay Variation

PHY Physical Layer

PIM-DM Protocol Independent Multicast-Dense Mode

PIM-SM Protocol Independent Multicast-Sparse Mode

PIM-SSM Protocol Independent Multicast Single Source Multicast


PLOAM Physical Layer Operation, Administration and Maintenance

PON Passive Optical Network

PoP Point of Presence

PPP Point-to-Point Protocol

PPPoA PPP over ATM

PPPoE PPP over Ethernet

PPV Pay Per View

PSTN Public Switched Telephone Network

PTA PPP Termination and Aggregation

P-t-MP Point to Multi Point

P-t-P Point to Point

PVC Permanent Virtual Connection

QoS Quality of Service

RACS (ETSI TISPAN) Resource and Admission Control Subsystem

RADIUS Remote Authentication Dial In User Service

RADSL Rate-Adaptive Asymmetric Digital Subscriber Line

RE-ADSL Reach Extended Asymmetric Digital Subscriber Line

RFC Request for Comments

RGW Residential Gateway

RIR Regional Internet Registry

RNP Regional Network Provider

RP Rendezvous Point

RPF Reverse Path Forwarding

RPR Resilient Packet Ring

RSTP Rapid Spanning Tree Protocol

SAR Segmentation and Reassembly

SDH Synchronous Digital Hierarchy

SDSL Symmetric Digital Subscriber Line

SFM Source Filtered Multicast

SHDSL Single-pair high-speed digital subscriber line

SIP Session Initiation Protocol

SLA Service Level Agreement

SLS Service Level Specification

SNMP Simple Network Management Protocol

SP (MUSE terminology) Sub Project

SPOF Single Point of Failure

SSM Single Source Multicast

SSP Service Selection Portal (website)


STB Set Top Box

STP Spanning Tree Protocol

S-VLAN Service Virtual Local Area Network

T-CONT Traffic Container

TCP Transmission Control Protocol

TDM Time Division Multiplexing

TDMA Time Division Multiple Access

ToIP Telephony over IP

ToS Type of Service

TVoIP TV over IP

UBR Unspecified Bit Rate

UDP User Datagram Protocol

UNI User to Network Interface

URL Uniform Resource Locator

VBR Variable Bit Rate

VC Virtual Channel

VCC Virtual Channel Connection

VCI Virtual Channel Identifier

VDSL Very high speed Digital Subscriber Line

VID VLAN-ID

VLAN Virtual Local Area Network

VoATM Voice over ATM

VoD Video on Demand

VoDSL Voice-over Digital Subscriber Line

VoIP Voice over IP

VP Virtual Path

VPI Virtual Path Identifier

VR Virtual Router

WAN Wide Area Network

WFQ Weighted Fair Queuing

WP (MUSE terminology) Work Package

WRR Weighted Round Robin

xDSL xDSL refers to different variations of DSL, such as ADSL, HDSL, and RADSL


REFERENCES [1] MUSE deliverable DA1.1, "Towards multi-service business models"

[2] MUSE deliverable DA1.2, "Network requirements for multi-service access"

[3] MUSE deliverable DA2.1, “From Reference Applications to Layer 2 and Layer 3 Services”

[4] MUSE milestone MA2.3, "Network Architecture: high-level description for individual architectural issues"

[5] MUSE milestone MA2.5, "Network architecture and functional specifications for the multi-service access and edge – Step 1: shortlist of options and feature groups prioritisation "

[6] MUSE milestone MA2.7, "Network architecture: detailed solution of individual architectural issues for more urgent features group "

[7] MUSE deliverable DTF1.2, "Overview of QoS principles in access networks"

[8] Overall description of Public Ethernet Architecture, WPC1_1_1_2v08

[9] IETF RFC 2327: "SDP: Session Description Protocol"

[10] IETF RFC 2373, “IP Version 6 Addressing Architecture”. R. Hinden, S. Deering. July 1998.

[11] IETF RFC 2460, “Internet Protocol, Version 6 (IPv6) Specification”. S. Deering, R. Hinden. December 1998.

[12] IETF RFC 2473, “Generic Packet Tunneling in IPv6 Specification”. A. Conta, S. Deering. December 1998.

[13] IETF RFC 2516, “A Method for Transmitting PPP Over Ethernet (PPPoE)”. L. Mamakos, K. Lidl, J. Evarts, D. Carrel, D. Simone, R. Wheeler. February 1999.

[14] IETF RFC 2543: "SIP: Session Initiation Protocol"

[15] IETF RFC 2547, "BGP/MPLS VPNs", March 1999.

[16] IETF RFC 2607, “Proxy Chaining and Policy Implementation in Roaming”. B. Aboba, J. Vollbrecht. June 1999.

[17] IETF RFC 2748: "The COPS (Common Open Policy Service) Protocol"

[18] IETF RFC 2753: "A Framework for Policy-based Admission Control"

[19] IETF RFC 3021, “Using 31-Bit Prefixes on IPv4 Point-to-Point Links”. A. Retana, R. White, V. Fuller, D. McPherson. December 2000.


[20] IETF RFC 3069, “VLAN Aggregation for Efficient IP Address Allocation”. D. McPherson, B. Dykes. February 2001.

[21] IETF RFC 3084: "COPS Usage for Policy Provisioning (COPS-PR)"

[22] IETF RFC 3090, “DNS Security Extension Clarification on Zone Status”. E. Lewis. March 2001.

[23] IETF RFC 3162, “RADIUS and IPv6”. B. Aboba, G. Zorn, D. Mitton. August 2001.

[24] IETF RFC 3194, “The Host-Density Ratio for Address Assignment Efficiency: An Update on the H ratio”. A. Durand, C. Huitema. November 2001.

[25] IETF RFC 3580, "IEEE 802.1X Remote Authentication Dial In User Service (RADIUS) Usage Guidelines", September 2003.

[26] IETF RFC 3633, “IPv6 Prefix Options for Dynamic Host Configuration Protocol (DHCP) version 6”. O. Troan, R. Droms. December 2003.

[27] IETF RFC 3769, “Requirements for IPv6 Prefix Delegation”. S. Miyakawa, R. Droms. June 2004.

[28] DSL Forum PD-022, "Ethernet Network architecture", May 2004

[29] DSL Forum WT-101 v4, "Migration to Ethernet Based DSL Aggregation", November 2004

[30] DSL Forum TR-025, "Core Network Architecture Recommendations for Access to Legacy Data Networks over ADSL", September 1999

[31] DSL Forum TR-042, "ATM Transport over ADSL Recommendation", August 2001

[32] DSL Forum TR-058, "Multi-Service Architecture & Framework Requirements", September 2003

[33] DSL Forum TR-059, “DSL Evolution - Architecture Requirements for the Support of QoS-Enabled IP Services”, September 2003.

[34] DSL Forum TR-094, "Multi-Service Delivery Framework for Home Networks", August 2004.

[35] www.3gpp.org

[36] News release ETSI TISPAN, at http://www.etsi.org/pressroom/Previous/2004/2004_06_tispan_3gpp.htm

[37] 3GPP TS 23.228 V6.6.0 (2004-06), Technical Specification 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; IP Multimedia Subsystem (IMS); Stage 2 (Release 6)


[38] 3GPP TS 23.060 V6.5.0 (2004-06), Technical Specification 3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; General Packet Radio Service (GPRS); Service description; Stage 2 (Release 6)

[39] 3GPP TS 29.207 V6.1.0 (2004-09), 3rd Generation Partnership Project; Technical Specification Group Core Network; Policy control over Go interface (Release 6)

[40] 3GPP TS 29.209 V1.0.0 (2004-5), Technical Specification 3rd Generation Partnership Project; Technical Specification Group Core Network; Policy control over Gq interface (Release 6)

[41] 3GPP TS 23.107: "Quality of Service (QoS) concept and architecture"

[42] 3GPP TS 23.002: "Network architecture"

[43] 3GPP TS 23.221: "Architecture requirements"

[44] 3GPP TS 29.208: "End-to-end Quality of Service (QoS) signalling flows"

[45] 3GPP TS 29.060: "General Packet Radio Service (GPRS); GPRS Tunnelling Protocol (GTP) across the Gn and Gp interface"

[46] 3GPP TS 22.105, "UMTS: Service and service capabilities", October 2001.

[47] ITU-T G.1010 "End-user multimedia QoS categories", November 2001

[48] Monath et al., “Business Role Model for Broadband Access”, BB Europe, Brugge, December 8-10, 2004

[49] Cortese et al., "CADENUS: Creation and deployment of end-user services in premium IP networks", IEEE Communications Magazine, January 2003.

[50] J.W. Roberts, "Internet Traffic, QoS and Pricing", to appear in Proceedings of the IEEE, 2004.


EXECUTIVE SUMMARY

Introduction

Access networks are undergoing major evolutions, and sheer hunger for bandwidth is not the only driving force. New applications and associated connectivity modes are emerging, each with its own quality-of-service requirements and functional mechanisms. The access architecture must support these services in an efficient and cost-effective manner. Defining such an access architecture is a central goal of WPA2 in MUSE, in line with the overall mission statement "to research and develop a future low-cost, full-service access and edge network, which enables the ubiquitous delivery of broadband services to every European citizen."

A service-centric access and edge network architecture, both in design and in operation, is the key to achieving this goal. Its success depends on meeting the expectations of end-users, operators and service providers alike. End-users expect free choice of services and providers, a good quality of experience, and cost-effectiveness. Operators demand means of control in terms of security and service-dependent quality guarantees, appropriate accounting, flexibility in service deployment and support, and cost-effectiveness in deployment and operations. Finally, service providers look for ways to deliver more value-generating services, not only in terms of content (e.g. broadcast TV) but also in terms of connectivity (e.g. location-based services).

MUSE addresses these network requirements while also introducing Ethernet and IPv4/IPv6 (Internet Protocol) technologies in the access network. The move towards packet-based, connectionless technologies (Ethernet, IP) in the access and aggregation network is only one of many evolutions; others include the connectivity models (multicast, peer-to-peer), the choice of auto-configuration protocol, the definition of dedicated AAA and QoS architectures, and the roles of and relationships between access and service providers. Real-world deployments will of course introduce these evolutions in phases; the migration aspects, starting from the current situation, are covered in MA2.6 and in SP B. The present document, however, aims to define a network architecture that integrates all of these evolutions (towards service awareness, towards Ethernet/IP, and so on).

DA2.2 is the first deliverable towards this definition of the MUSE network architecture. It summarises the studies performed in WP A2 during the first year of the project, which were also documented in the intermediate milestones MA2.3, MA2.5 and MA2.7.

The scope of DA2.2 is structured along two lines: generic considerations and specific network models. On the one hand, the generic considerations cover broad issues that are independent of the choice of network model. The study concentrates on the fundamental features that must be supported in a multi-service, multi-provider, multi-technology environment. These considerations include a reference interface model, the business roles (wholesale models) and their implications, the interworking with the user's residential gateway, the Quality of Service (QoS) architecture, the AAA architecture, and the new types of connectivity (peer-to-peer and multicast). They are described in section 2 of this document. On the other hand, these considerations have been elaborated for two specific network models: the Ethernet-based network model (section 3) and the IP-based network model (section 4). These models were identified at the start of the project as the two main modes of operation for the access and aggregation architecture. Put simply, the first is based on Layer 2 connectivity in the access and aggregation network, whereas the second is based on Layer 3 connectivity (both IPv4 and IPv6 being addressed). Both network models imply a managed access and edge network, Ethernet-based and IP-based respectively. Each model has its own variants and options, which were first identified, then analysed and assessed, and finally narrowed down where possible.

Generic architecture considerations

First, the access and aggregation network (also known as the access and edge network, or access and metro network) is positioned in terms of terminology. A basic reference architecture describes the main provider networks and their main nodes, and a first sketch of the data plane and control plane interfaces is drawn. The descriptions refer as much as possible to existing DSL Forum material, aiming for alignment. The two main connectivity models, Ethernet and IP, are also briefly introduced.

An adequate model of the Residential Gateway (RGW) is required for the end-to-end story in terms of QoS, connectivity and auto-configuration. A model for the RGW is therefore presented, based on the ongoing work in TF3. It is assumed that the RGW is either bridged or routed, or a hybrid of both. Routed gateways in IPv4 are assumed to incorporate NAPT, whereas NAT has been ruled out for IPv6. The aim of the model is to use the functional blocks of the CPE (the set of devices present at the customer premises) to define the interaction with the network, in particular the parameters that must be addressable from the network side (at L1, L2, L3, L4+). Note, however, that MUSE does not define the LAN-side technologies of the home network.
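
As a minimal illustration of the routed-with-NAPT assumption above, the following Python sketch (illustrative only; the class, addresses and port numbers are inventions for this example, not part of the deliverable) shows how a NAPT function rewrites the private source address and port of upstream flows to the single public address of the RGW WAN interface. With globally unique IPv6 prefixes delegated to the home network, no such translation is assumed, in line with NAT being ruled out for IPv6.

    # Minimal NAPT sketch (illustrative only): maps (private_ip, private_port)
    # to (public_ip, public_port) for upstream flows and reverses the mapping
    # for downstream traffic. Real RGWs also track protocol, timeouts, ALGs, etc.
    import itertools

    class Napt:
        def __init__(self, public_ip: str, first_port: int = 20000):
            self.public_ip = public_ip
            self._ports = itertools.count(first_port)
            self._out = {}   # (priv_ip, priv_port) -> public_port
            self._in = {}    # public_port -> (priv_ip, priv_port)

        def upstream(self, priv_ip: str, priv_port: int):
            key = (priv_ip, priv_port)
            if key not in self._out:
                pub_port = next(self._ports)
                self._out[key] = pub_port
                self._in[pub_port] = key
            return self.public_ip, self._out[key]

        def downstream(self, pub_port: int):
            return self._in.get(pub_port)  # None if no binding exists

    napt = Napt("203.0.113.10")
    print(napt.upstream("192.168.1.20", 5060))   # ('203.0.113.10', 20000)
    print(napt.downstream(20000))                # ('192.168.1.20', 5060)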

General connectivity is of course the basis of the architecture: it is about correctly forwarding packets between a server and one or more hosts, or between hosts themselves, based on Layer 2 and/or Layer 3 address information. Before diving into the technology, it is worthwhile to review the different possibilities of connectivity wholesaling and retailing that a Network Access Provider (NAP) can offer its customers. This leads to several business models describing the possible roles of the providers (NAP, Network Service Provider (NSP), Internet Service Provider (ISP), and Application Service Provider (ASP)); at the same time the new roles of Packager and Connectivity Provider, as defined in [1], are introduced. Four business models are retained for residential users, plus one based on Layer 2 (Ethernet) wholesaling for business users.

The more technical considerations review the options for peer-to-peer connectivity: connecting at Layer 2 versus Layer 3, and connecting locally (as close as possible to the users) versus forcing this traffic to an edge node. The conclusion is that while business users require L2 peer-to-peer connectivity (e.g. for L2 VPNs), there is no such requirement for residential users, who will therefore be connected at L3. Multicast also poses specific choices and requirements as a connectivity model. A high-level review of the underlying concepts, related to relevant multicast applications, is given in section 2, and a more detailed technical implementation (for the Ethernet network model) is given in section 3.

Another topic of research is a QoS architecture that allows the QoS guarantees prescribed by SLSs to be implemented, while also enabling flexible and scalable QoS adjustments following individual service demands (based on requests). The solution also aims at ease of management for all players in the service chain. The general principles and basic options for such an architecture are reviewed. They are then worked out for the application-signalling method with policy pull, following and extending the 3GPP IP Multimedia Subsystem (IMS) approach. This approach is based on linking a network resource management platform (one per involved network) with the service platform (contacted by the user at session start). The network provider interprets the application signalling messages and thus knows the resources required for the session, which are then negotiated with the resource management platform. The resource management platform keeps a view of the available resources, receives resource requests, deduces the flow path that will be taken, checks for availability, and finally grants or refuses the flow establishment by controlling the entry nodes. Resources are reserved once a request has been accepted and released when the session has finished. When multiple domains are involved, the platforms must also coordinate their reservations to ensure end-to-end QoS. The resource management itself can make use of pre-provisioned (and controlled) QoS pipes, or alternatively request and reserve QoS by means of network signalling. The principles are illustrated by an end-to-end session set-up.
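
The admission decision described above can be sketched as simple per-link bandwidth accounting over pre-provisioned QoS pipes. The sketch below is illustrative only; the resource model, link names and API are assumptions, not the MUSE specification: a resource manager deduces the links a flow will cross, checks remaining capacity, and grants or refuses the request.

    # Illustrative admission-control sketch for pre-provisioned QoS pipes.
    # Link names, capacities and the path-lookup logic are assumptions.
    class ResourceManager:
        def __init__(self, link_capacity_mbps: dict):
            self.capacity = dict(link_capacity_mbps)   # remaining capacity per link
            self.sessions = {}                         # session_id -> (path, mbps)

        def path_for(self, access_node: str, edge_node: str):
            # In a real system the path is deduced from topology; here it is trivial.
            return [f"{access_node}-aggregation", f"aggregation-{edge_node}"]

        def request(self, session_id: str, access_node: str, edge_node: str, mbps: float) -> bool:
            path = self.path_for(access_node, edge_node)
            if all(self.capacity[link] >= mbps for link in path):
                for link in path:
                    self.capacity[link] -= mbps        # reserve on every link of the path
                self.sessions[session_id] = (path, mbps)
                return True                            # grant: entry nodes are then configured
            return False                               # refuse: insufficient resources

        def release(self, session_id: str):
            path, mbps = self.sessions.pop(session_id)
            for link in path:
                self.capacity[link] += mbps            # give the bandwidth back

    rm = ResourceManager({"AN1-aggregation": 100.0, "aggregation-EN1": 100.0})
    print(rm.request("voip-42", "AN1", "EN1", 0.1))    # True: reservation accepted
    rm.release("voip-42")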

The QoS study also refers to DTF1.2 [7] for a detailed explanation of traffic classes.

Authentication of the end-user can take place at multiple levels, namely by the NAP and/or the NSP, before access to the network is granted. After authentication, auto-configuration is the process by which the Customer Premises Equipment (CPE) autonomously obtains configuration information from the network and its service providers. End-users should not have to take part in the CPE configuration process, since residential users generally lack the technical expertise. Auto-configuration must provide all the information necessary to automatically create layer 2 connections and layer 3 flows. There are two main protocols for auto-configuration. The Point-to-Point Protocol (PPP) is currently well established, but is not suited for local peer-to-peer traffic, QoS or multicast. The Dynamic Host Configuration Protocol (DHCP) is emerging for new applications and has the advantage of decoupling the data and control planes, but lacks some of the inherent features and established architectures of PPP.

Both authentication and auto-configuration must work in a multi-provider and multi-service environment. The different Authentication, Authorisation & Accounting (AAA) platforms and the auto-configuration servers must interact in order to correlate the authentication records with the associated IP addresses. A suitable AAA architecture is needed to achieve this, aiming at feature parity between DHCP-based auto-configuration and current PPP-based auto-configuration. Useful mechanisms used in the solutions include 802.1X authentication, DHCP options added at the CPE and at the DHCP relay, and the inclusion of a Remote Authentication Dial In User Service (RADIUS) client in DHCP servers. Several tracks are reviewed, in particular a solution based on a single-step approach.
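
A rough sketch of the single-step idea follows. The option number refers to the standard DHCP relay agent information option (option 82); the message flow, data structures and field names below are simplified assumptions of our own, not the MUSE specification. The access node, acting as DHCP relay, inserts a line identifier into the DHCP request, and an AAA look-up keyed on that identifier is performed before the DHCP server hands out an address, so that the authentication record and the allocated IP address are correlated in one step.

    # Illustrative one-step configuration sketch: a DHCP relay in the access node
    # inserts the relay agent information option (option 82) carrying the line id,
    # and the DHCP server only answers after a successful RADIUS-style lookup.
    # The data structures and the authenticate() backend are assumptions.

    SUBSCRIBERS = {"AN1/slot2/port7": {"user": "alice@nsp1", "ip": "10.1.2.7"}}

    def relay_add_option82(discover: dict, access_node: str, line: str) -> dict:
        msg = dict(discover)
        msg["option82"] = {"circuit_id": f"{access_node}/{line}"}   # physical line identity
        return msg

    def authenticate(circuit_id: str):
        # Placeholder for a RADIUS Access-Request keyed on the line identity.
        return SUBSCRIBERS.get(circuit_id)

    def dhcp_server_handle(discover: dict):
        record = authenticate(discover["option82"]["circuit_id"])
        if record is None:
            return None                                  # unknown line: no offer
        # Offer the address tied to the authenticated subscriber, so AAA record
        # and IP address are correlated in a single step.
        return {"yiaddr": record["ip"], "user": record["user"]}

    msg = relay_add_option82({"msg_type": "DHCPDISCOVER"}, "AN1", "slot2/port7")
    print(dhcp_server_handle(msg))   # {'yiaddr': '10.1.2.7', 'user': 'alice@nsp1'}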

Finally, the link is made between the AAA architecture and the IMS architecture, and possible adaptations of the IMS model are listed.

Ethernet Network Model

The first model is based on Layer 2 connectivity from the RGW (or the user's terminal, if the RGW is bridged) to the Edge Node (EN). The Access Node (AN) performs connectivity, subscriber management, accounting and security functions; it is an (enhanced) Ethernet switch. The aggregation network carries traffic between ANs and ENs and is involved in multicast replication; it is composed of plain Ethernet switches. The EN is responsible for providing connectivity to the relevant ISP/NSP/ASP and for implementing accounting and security features. The EN must ensure Ethernet connectivity, at least on the aggregation network side, and further handles the traffic at Layer 3 (except for L2 wholesale).


Connectivity throughout the access and aggregation network is based on Ethernet principles.

Depending on the use of Virtual Local Area Network (VLAN) tags, there are two possible connectivity modes. In the first option, called "intelligent bridging", connectivity in the AN is based on MAC (Medium Access Control) addresses, as in an ordinary Ethernet switch, with additional intelligence for security, traffic management and accounting. The VLANs in the aggregation network are used to further separate the aggregated traffic coming from the different ANs; a typical use is to allocate one VLAN per AN-EN pair. In the second option, connectivity at the AN is no longer based purely on MAC addresses but on VLAN-IDs, by associating one (or more) individual 802.1Q VLAN-ID with every end-user (i.e. with every line aggregated in the AN). This is called "cross-connecting"; it uses VLAN stacking in the aggregation network to overcome the scalability limit of a single 802.1Q VLAN tag. Both options have their pros and cons; the intelligent bridging mode has been selected for residential users because of its lower complexity and its compatibility with existing edge nodes. Business users are a special case, requiring another sort of cross-connecting, this time based on S-VLANs. Residential and business users can be combined in the same network (and on the same platform if required).
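
To make the difference between the two modes concrete, the following sketch (illustrative only; the tag values, numbering schemes and helper names are assumptions, not taken from this deliverable) shows how an upstream Ethernet frame might be tagged in each mode: one VLAN per AN-EN pair in intelligent bridging, versus a per-user C-VLAN stacked under a per-AN S-VLAN in cross-connect mode.

    # Illustrative VLAN tagging sketch for the two Ethernet connectivity modes.
    # All identifiers and numbering schemes below are assumptions for clarity.

    def tag_intelligent_bridging(frame: dict, an_id: int, en_id: int) -> dict:
        # One VLAN per AN-EN pair: forwarding in the AN stays MAC-based,
        # the VLAN only separates traffic aggregated from different ANs.
        frame = dict(frame)
        frame["s_vlan"] = 1000 + an_id * 10 + en_id      # e.g. AN3 to EN2 -> VLAN 1032
        return frame

    def tag_cross_connect(frame: dict, user_line: int, an_id: int) -> dict:
        # One 802.1Q C-VLAN per user line, stacked under an S-VLAN per AN, to
        # work around the 4094-VLAN limit of a single tag in the aggregation network.
        frame = dict(frame)
        frame["c_vlan"] = user_line                      # identifies the subscriber line
        frame["s_vlan"] = 2000 + an_id                   # identifies the access node
        return frame

    upstream = {"src_mac": "00:11:22:33:44:55", "payload": b"..."}
    print(tag_intelligent_bridging(upstream, an_id=3, en_id=2))
    print(tag_cross_connect(upstream, user_line=57, an_id=3))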

The data plane requirements for basic connectivity are analysed in detail for both modes. Each node must hold connectivity parameters that have to be set, and in several cases updated. The IP subnetting of the end-users in the NAP can follow different schemes, but has an impact on the requirements for non-PPP-based peer-to-peer communication. In line with the decision to switch peer-to-peer traffic at Layer 3, all peer-to-peer traffic in the Ethernet network model must be forced up to the edge nodes to be switched there. This can be achieved by a customer separation method based on MAC forced forwarding, or on ARP filtering in the AN combined with an ARP agent in the EN. The basic use of VLANs can be extended with additional meanings, which are listed for completeness. Using VLANs for business users can lead to scalability problems, which can be alleviated by introducing MPLS in the aggregation network; MPLS remains compatible with residential traffic. Multicast is another connectivity mode that is analysed in detail, and an efficient solution is presented based on IGMP, the established multicast protocol, for flow replication and multicast tree building.
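
The customer separation idea can be sketched as follows. This is an illustrative approximation with invented addresses, not the exact MAC forced forwarding procedure specified later in this document: the access node answers every ARP request from a subscriber with the MAC address of the edge node, so that all upstream frames, including traffic destined for a neighbouring subscriber, are forced to the EN and switched there at Layer 3.

    # Illustrative sketch of ARP-based customer separation (MAC forced forwarding).
    # Addresses and the frame representation are invented for the example.

    EDGE_NODE_MAC = "00:ee:ee:ee:ee:01"

    def an_handle_arp_request(request: dict) -> dict:
        # Whatever IP the subscriber asks for (the default gateway or a peer in
        # the same subnet), the AN replies with the edge node's MAC address, so
        # one subscriber never learns another subscriber's MAC.
        return {
            "op": "arp-reply",
            "target_ip": request["requested_ip"],
            "target_mac": EDGE_NODE_MAC,
        }

    def an_forward_upstream(frame: dict) -> str:
        # Frames carrying the EN MAC are forwarded on the network-side port;
        # direct user-to-user forwarding in the AN is thereby prevented.
        return "uplink-to-EN" if frame["dst_mac"] == EDGE_NODE_MAC else "drop"

    reply = an_handle_arp_request({"op": "arp-request", "requested_ip": "10.1.2.9"})
    print(reply["target_mac"])                                   # 00:ee:ee:ee:ee:01
    print(an_forward_upstream({"dst_mac": EDGE_NODE_MAC}))       # uplink-to-EN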

On top of providing basic connectivity, the network should also provide a level of security against malicious users or malfunctions. The different types of security threats are classified for convenience, and an overview is given of specific threats together with the appropriate security mechanisms, both for IPoPPPoE and for IPoE traffic.

Finally, the basic concepts of the Ethernet model are recapitulated in a short summary.

The one-step configuration and AAA process is further elaborated for the specific case of the Ethernet network model, and some concepts of the IMS-based architecture are also expressed in terms of the Ethernet network model.

IP Network Model

IP awareness and functions can be brought closer to the end-user by introducing aggregation nodes that act as Layer 3 routers or forwarders in the aggregation network. Traffic flows can then be processed at the IP level for QoS, security (there is now a clear Layer 2 separation between the aggregated users and the rest of the aggregation network) and multicast purposes. Another advantage is that peer-to-peer traffic can be routed at L3 in that node, which is more efficient than routing it via the EN as in the previous model. Note that this node can be based on IPv4, on IPv6, or on a combination of both.

IPoE traffic is forwarded according to the service policies in the aggregation node, but the traffic handling can be different for IPoPPPoE. Therefore the different use cases for IPoE and IPoPPPoE traffic have been defined and analysed (for IPv4). A single AN can then freely combine an IPoE use case with an IPoPPPoE use case.

In the L2 forwarding use case, IPoPPPoE traffic is forwarded at Layer 2 in the AN and sent transparently across the aggregation network to the EN (the BRAS, Broadband Remote Access Server). The aggregation network must then be Layer 2, and this traffic is not handled at Layer 3 in the AN.

A variant is to terminate this traffic at Layer 2 in the AN by including a PPPoE relay function: forwarding is then based on PPPoE session IDs, with the AN acting as PPPoE originator (upstream) and destination (downstream) instead of the end-user.
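
As an illustration of that relay behaviour (a simplification with invented session identifiers and frame fields, not the PPPoE relay specification itself), the AN can be thought of as keeping a per-line mapping between the user-side PPPoE session and the network-side session it maintains towards the BRAS, rewriting the session ID as frames cross it.

    # Illustrative PPPoE relay sketch: the AN swaps the user-side session ID for
    # the network-side session ID it holds towards the BRAS, and vice versa.
    # Session numbers and the frame layout are invented for the example.

    class PppoeRelay:
        def __init__(self):
            self.up = {}     # (line, user_session_id) -> network_session_id
            self.down = {}   # network_session_id -> (line, user_session_id)
            self._next = 1

        def bind(self, line: str, user_session_id: int) -> int:
            net_id = self._next
            self._next += 1
            self.up[(line, user_session_id)] = net_id
            self.down[net_id] = (line, user_session_id)
            return net_id

        def relay_upstream(self, line: str, frame: dict) -> dict:
            frame = dict(frame)
            frame["session_id"] = self.up[(line, frame["session_id"])]
            return frame                      # sent towards the BRAS

        def relay_downstream(self, frame: dict):
            line, user_id = self.down[frame["session_id"]]
            frame = dict(frame)
            frame["session_id"] = user_id
            return line, frame                # sent out on the subscriber line

    relay = PppoeRelay()
    relay.bind("port7", user_session_id=12)
    print(relay.relay_upstream("port7", {"session_id": 12, "payload": b"..."}))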

A different use case is to handle PPP completely in the AN, either by means of PPP termination and further routing at layer 3, or by means of L2TP tunneling to the EN (BRAS or L2TP tunneling switch). The aggregation network can be layer 2 or layer 3, or a mixture of both (e.g. Ethernet islands).

The handling of IPoE traffic can also be categorised in use cases. A lightweight approach is to perform Layer 2 termination at the AN without the need for routing protocols between AN and EN. In this case the AN acts more like an IP forwarder, linking IP sessions with IP service connections in the NAP (connectivity between AN and EN).

The AN can also be a full IP router. Care must be taken not to waste IP addresses (given the scarce IPv4 address space); the AN must therefore be able to put the aggregated users in the same subnet and allocate them the same default gateway IP address. Note that the location of the first IP-aware node (seen from the user's side) can be chosen in the first aggregation node (e.g. a DSLAM (DSL Access Multiplexer) or fibre terminator), in the second one (e.g. a layer 3 hub DSLAM terminating small layer 2 remotes), or even further up (e.g. a node in the CO aggregating multiple small-size DSLAMs).
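The address-saving idea above can be illustrated with a short sketch (the subnet and numbering are assumed values, not from the deliverable): the AN hands out host addresses from one shared subnet and advertises a single default gateway address to all aggregated users.

import ipaddress

SHARED_SUBNET = ipaddress.ip_network("10.1.0.0/24")   # assumed per-AN subnet
HOSTS = list(SHARED_SUBNET.hosts())
DEFAULT_GW = HOSTS[0]                                 # 10.1.0.1, held by the AN itself

def allocate(line_id: int) -> dict:
    """Give line 'line_id' (starting at 1) an address from the shared subnet."""
    return {"line": line_id,
            "ip": str(HOSTS[line_id]),                # per-user host address
            "gateway": str(DEFAULT_GW),               # identical for every user
            "netmask": str(SHARED_SUBNET.netmask)}

if __name__ == "__main__":
    print(allocate(1))
    print(allocate(2))    # same gateway and netmask, different host address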

In the case of application wholesale (single NSP, NAP controlling the IP address allocation in its network), the AN will not require dynamic routing exchanges with the ENs. However in the case of IP wholesale to multiple third-party NSPs, the routing requirements become more complex and must be investigated. This is for further study.

The introduction of IPv6 presents opportunities and inevitably also new technical challenges. The first and most basic aspect to be analysed for IPv6 is the addressing. It is proposed to include a NAP topology field in the IPv6 addressing structure. An allocation efficiency study quantifies the (in)efficiency of the static addressing method. Two types of static addressing schemes are presented: one based on introducing a strict NAP topological hierarchy in the prefix (from the ISP down to the subscriber), another based on NAP-proprietary addressing of its different nodes. Dynamic addressing schemes are also analysed, based on dynamic prefix delegation with different policies and levels of dynamicity. Finally, a possible method is shown to integrate the dynamic prefix delegation mechanism with the static addressing schemes defined above.
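As a purely illustrative sketch of the static addressing idea (the field widths and prefix are assumptions, not the scheme defined in the deliverable), the snippet below embeds a NAP topology field, here an access-node identifier and a line identifier, between the ISP prefix and the subscriber bits.

import ipaddress

ISP_PREFIX = ipaddress.ip_network("2001:db8::/32")    # documentation prefix, assumed

def subscriber_prefix(an_id: int, line_id: int) -> ipaddress.IPv6Network:
    """Build a /56 subscriber prefix as: 32-bit ISP prefix | 12-bit AN id | 12-bit line id."""
    assert an_id < 2**12 and line_id < 2**12
    topology_field = (an_id << 12) | line_id          # 24-bit NAP topology field
    base = int(ISP_PREFIX.network_address)
    address = base | (topology_field << (128 - 32 - 24))   # place the field right after /32
    return ipaddress.IPv6Network((address, 56))

if __name__ == "__main__":
    print(subscriber_prefix(an_id=5, line_id=123))    # e.g. 2001:db8:50:7b00::/56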


What's next? This deliverable reflects the status of the network architecture studies and solutions after the first year of the project. They have been disseminated to the other subprojects and task forces. The basic mechanisms and features for a multi-service, multi-provider, multi-protocol access architecture have been worked out, and the Ethernet network model is stable and well elaborated. However, some models and aspects still require further consolidation.

In the second year, attention will be paid both to the continuation of the current models (especially the IP network model with its routing requirements and IPv6, the AAA architecture, the IMS architecture, the interaction with the CPE, and the definition of open interfaces) and to extending the features with e.g. nomadism and service enablers.

A final overview of the MUSE architecture will be presented in DA2.4 at the end of phase I of MUSE.


1 INTRODUCTION

1.1 Scope of the deliverable

1.1.1 Context

Broadband access has become widespread over the past few years, spurred amongst others by the vast deployment of DSL networks. But traditional access networks have been built primarily to deliver high-speed Internet access (HSI), and are now facing the demand (both from consumers and from providers) for support of triple-play services. Not only is bandwidth impacted by the new applications; more fundamentally, they introduce new connectivity modes and service-specific expectations in terms of quality of experience, and they open new roles for the different players (providers). It is clear that a service-centric access and edge network architecture, both in terms of design and operation, is the key to supporting this in an efficient and cost-effective manner.

This document addresses the changes needed in the access and aggregation network from an architectural and functional point of view. It forms a reference within MUSE for a fully evolved access and aggregation network. Of course in real-world situations a phased approach can be taken to the introduction of these evolutions.

Considering the present situation, current broadband access networks can be characterised by several key features. First, they typically use ATM switching as the aggregation technology. One (or multiple) ATM PVC provides Layer 2 connectivity between the subscriber modem and the Broadband Remote Access Server (BRAS). The PPP protocol is almost exclusively used (for residential users); the data format is IPoPPPoA or IPoPPPoEoA. Secondly, the subscriber and service intelligence is centralized in the network, e.g. in BRASes where PPP is terminated for subscriber management. Thirdly, the main service offered is HSI (with telephony services being out-of-band), with some limited cases of video distribution.

This results in most cases in all subscribers being connected with best effort connections to a single or one of several Internet Service Providers (ISPs) via a single edge in the aggregation network.

What would a more user-centric interpretation of broadband access look like? As it is all about user experience, the user would want to

- select a multimedia application from a triple-play service bundle with associated quality (multi-service),

- that is offered by an appropriate provider (multi-provider),

- that is ubiquitously available (reaching most users and allowing nomadism),

- with a specific subscription (accounting, new business roles for providers).

The operator is responsible for providing secure network connections between end-user terminals in the home network and edge nodes (or other end-users) in a multi-provider environment. Just keeping pace with the associated bandwidth increase will not be sufficient. From an operator's perspective, several drivers are encouraging network evolution:

- Bandwidth is becoming a commodity, and the resulting price erosion has advanced the introduction of new services in broadband access. Content delivery and value-added service enablers become vital for the access provider, who is facing the largest investments in the infrastructure. As the applications commonly use IP, this involves bringing IP awareness closer to the end-users.

- Fierce competition also puts heavy pressure on prices and consequently pushes operators towards a CAPEX- and OPEX-optimized network design. For the inevitable bandwidth upgrade in the network, Ethernet technologies are considered to offer a cost advantage over ATM technologies. Furthermore, packet-based technologies allow a more straightforward way to deliver broadcast channels in a bandwidth-efficient (and hence cost-efficient) way by using multicasting techniques.

- The multiple services have their own requirements and characteristics. Supporting multiple services by means of multiple specialized edges has several advantages:

- It is a clean solution supporting evolution from existing high speed Internet networks to the delivery of multiple services by allowing the parallel and cost-efficient introduction of new services or the capacity upgrade of existing services.

- It allows traffic separation at the edge. High availability traffic (such as PSTN replacement VoIP) does not share nodes such as a BRAS with best-effort internet traffic. By separating traffic onto different nodes engineered to different standards, risk is diversified, and the node can be built for an optimal trade-off between functionality and complexity for the set of services it carries.

- Each Edge Node can enforce the security policy which is relevant to the set of services it carries. For example, a Voice Gateway will in general filter out everything that does not look like SIP or H.323 traffic.

The current broadband access networks need to evolve in order to meet these expectations and allow new or additional revenues for the different players (providers). Several functional service enablers (for QoS, accounting, security and auto-configuration) must be introduced, taking also into account their integration with the deployment of packet-based technologies (Ethernet, IPv4, IPv6). This is the scope of DA2.2.

It is important to compare this declared objective with the current situation (Jan 2005) in standardisation.

- ATM technology and ATM-based networks offering best-effort IP connectivity to residential users are well standardised (DSL-Forum TR-25 [30], TR-42 [31]) and widely deployed. A recent standard, DSL-F TR-59 [33], has extended the scope to multiple services via a single edge. It describes an ATM-based network architecture with the following characteristics: single edge (1 PVC between CPE and a single Edge Node (BRAS)), primary support for DiffServ QoS-enabled unicast IP services, no use of L2 QoS in the access and aggregation network, QoS mechanisms at the BRAS in downstream (hierarchical scheduling) and upstream (packet discard), and a focus on PPP (2 PPP sessions, one for HSI, one for new services).

- Ethernet is a well-established technology readily used in enterprise networks and in the Metro Ethernet networks of carriers. Ethernet as a technology is quite extensively addressed in standardisation (IEEE). Ethernet-based Metro networks are also covered (MEF, ITU-T), but Ethernet-based access and aggregation networks for residential services appeared in 2004 as a new standardisation topic. One main ongoing document is DSL-F WT-101 "Migration to Ethernet-based DSL aggregation" [29]. Its current network architecture scope covers Ethernet on the DSLAM uplink, L2 QoS-awareness in the access and aggregation network, multicasting, and a single edge with a second one for video.

It appears that a complete framework does not exist yet in standardisation for an access and edge network architecture that combines all of the following:

- multiple services

- multiple edges (application gateways or IP QoS enforcement points)

- multiple data formats (IPoPPPoE and IPoE), with corresponding authentication and autoconfiguration phases for DHCP-based services

- per-session QoS awareness with negotiation and enforcement

- efficient multicast replication

- IP awareness closer to the user (at some aggregation point)

- Ethernet-based

- Support of both residential and business users.

1.1.2 Scope

The scope of DA2.2 can be summarized by reviewing the evolutions and their drivers:

• Service-driven evolutions

- Higher requirements on bandwidth (especially for video).

- Support of DHCP as auto-configuration protocol for some applications (VoIP, video). As DHCP has no authentication phase like PPP has, extra mechanisms must be foreseen in the AAA architecture for authentication and user record correlation.

- "Voice" (voice over IP and videotelephony over IP) and "Video" (streaming) require service-specific QoS in terms of loss, jitter, and delay. The QoS architecture must guarantee per-session QoS by negociation and enforcement steps.

- Separation of the multiple services in the network. A cost-efficient way is to foresee multiple (specialized) edges in the aggregation network, e.g. a BRAS for HSI and an edge router for video services.

- Some applications rely on new connectivity modes for efficient data delivery (e.g. gaming via peer-peer, tele-teaching via multicasting). The access and aggregation network must support such connectivity modes in a secure way.

• User-driven evolutions

- Nomadism, the ability to retrieve a service environment at different places of the network. Note that this aspect has not yet been elaborated much in this deliverable.

- Combining multiple simultaneous services, each possibly from a different service provider. Increasing the "plug-and-play" feel by shifting the configuration burden from the user to the provider. This has direct impact on the control (e.g. auto-configuration) and management (e.g. installation of a firewall) of the customer premises equipment.

• Technology-driven evolutions

- The introduction of IPv6 brings new opportunities (e.g. addressing schemes, stateless auto-configuration). They must be supported by the network and coexist with installed base of IPv4 equipment and services.

• Cost-driven evolutions


- Ethernet as low-cost technology for BW upgrades.

- Bandwidth-efficient support of multicast streams in the network.

• Business-driven evolutions

- New roles for the players (e.g. the emergence of application wholesale offered by the NAP). The different business models imply differences in where PPP is terminated and in how IP addresses are allocated.

1.1.3 Content

The document first describes the generic functional mechanisms in Chapter 2. It summarises the topics of the architecture that are independent of the transport layer: Reference models, General connectivity, Model of residential gateway, QoS and AAA.

These principles are then mapped onto two concrete network models, which propose two different levels of IP awareness for the connectivity handling in the network.

The first is based on layer 2 (Ethernet) connectivity between the home and the aggregation edge and is described in Chapter 3. The current situation is also based on layer 2 connectivity (via ATM), but the proposed network model has fundamental differences: it is packet-based and incorporates functional mechanisms (QoS, security, AAA, multicast and peer-peer connectivity) for meeting the expectations described above.

The second network model provides layer 3 (IP) connectivity in the aggregation network, as described in Chapter 4. Having IP awareness closer to the end-users opens possibilities for better scalability (layer 2 separation between access and aggregation), more efficient forwarding (local peer-peer), and additional IP features (QoS, security).

Finally, Chapter 5 reviews the conclusions that can be drawn.

1.2 Positioning of DA2.2

This document is the first deliverable of the architecture work within MUSE. It is the result of the first year's efforts according to the current planning. As shown in Figure 1-1, DA2.2 is part of a larger suite of reports and deliverables that will eventually lead to the MUSE architecture deliverable DA2.4 at the end of phase 1.

The architecture work also relies on input from other work packages, especially WPA1 (business models and services) and WPA3 (techno-economics). The result also relies to a great extent on interaction with the other sub-projects and task forces.


[Figure 1-1 diagram: MA2.5 (network architecture step 1), DA2.2 (network architecture step 2) and DA2.4 (network architecture step 3) are shown along a time axis, fed by research on individual issues in MA2.3 and MA2.7, with input from WPA.1, WPA.3, DA2.1 and the SPs & TFs.]

Figure 1-1. Relation between the different milestones and deliverables for the MUSE Network architecture.

The architectural work has progressed stepwise, with a different scope for each of the milestones MA2.3, MA2.5 and MA2.7 and the deliverable DA2.2.

Milestone MA2.3 contains a detailed set of considerations for multi-service network architectures, as well as a reference terminology. The considerations cover a wide range of topics on Data plane, QoS architecture, Auto-configuration, Multicasting, Physical Layer infrastructures, Service Enablers and Security.

Two basic network models are introduced in MA2.3:

- Ethernet-based network model, with access node (AN) and the aggregation network being L2.

- IP-based network model, with access node (AN) being IPv4 / IPv6 and aggregation network either L2 or L3.

Milestone MA2.5 focuses on the Ethernet-based network model, building on the results of MA2.3. Since MA2.3 does not offer a complete solution, and does not present complete alignment between the separate topics, MA2.5 further works out issues regarding the Ethernet-based network model, and aligns different topics. It also investigates generic issues such as the relationship with the residential gateway and business models.

Milestone MA2.7 refines the work further. Solutions for general aspects are presented. The Ethernet model and the IP model are also further developed. The scope for the IP model in MA2.7 is general connectivity, while other IP related topics are planned for milestones next year.

The DA2.2 deliverable is essentially a condensed version of MA2.7. The results from the previous milestones have been compiled and summarised into three distinct parts: General aspects, Ethernet solutions and IP solutions.


1.3 Focus on Multi-service

MUSE focuses on access to multiple services. This means that

- a user can get multiple services, each service with an associated CoS

- a customer can receive L3 services (e.g. Internet access or an IP VPN service, see [14]) and L2 services (e.g. L2VPN or PWE3-type services)

- a customer can simultaneously receive multiple services over the same physical access line (service multiplexing).

- a user can be connected to one or to multiple edge nodes of the NAP, depending on the considered services

- a user can be connected to one or multiple NSPs (and hence can receive multiple IP addresses) depending on the considered services

- a user can be nomadic, connecting at different entry points to the NAP and retrieving his/her profile and connectivity.

A distinction is made between "users" and "customers". Customers buy a service from a service provider, while users use a service. In the residential case, "user" usually equals "customer". In the business case, however, the customer is usually a company and a user is an employee of the company, located e.g. at a specific site.


2 GENERIC ASPECTS

2.1 Positioning of the Access and Aggregation network

2.1.1 Terminology and logical model

The following figure illustrates a high-level logical model in which the key elements and logical networks have been defined with the DSL-Forum TR-058 terminology in mind. TR-058 is a good starting framework for a MUSE reference architecture.

However, some modifications are necessary in order to fully reflect the multi-service architecture of MUSE. Hence a change of the definitions might be necessary once business and service roles evolve together with the architecture.

Note that the roles of Packager, Loop Provider and Connectivity Provider are not mapped onto networks or elements in the architecture.

[Figure 2-1 diagram: End-User Terminal and Residential Gateway in the CPN, First Mile with optional Remote Unit, Access Node, Aggregation Network and Access Edge Node form the "Access Network in MUSE" (NAP domain); the Regional Network (RNP) and Service Network with Service Edge Nodes connect to the NSP, ISP and ASP domains.]

Figure 2-1. Reference Network based on the DSL-Forum reference service provider interconnection model.

The following high-level definitions of each of the logical networks, elements and business roles are valid for the architecture.

Networks

Customer Premises Network – Residential network connecting the residential gateway with the different devices at the customer premises. The CPN can be a hybrid of different technologies (WLAN, phone line wiring, Ethernet cabling, etc.) and is controlled by the user.


Access Network – Provides the connectivity between the end-users and the service providers. It is owned and operated by the Network Access Provider.

First Mile part of the Access Network – The physical link connection between the NT and the Access Node; it can be a DSL, cat5, fibre or wireless drop. It may contain optional Remote Units in the field (first aggregation point).

Aggregation part of the Access Network – This is the link layer part of the access network. It aggregates traffic from the first mile towards the regional network.

Regional Network - Optional network. When present, provides connectivity between the Access Network and the Service networks. It is owned and operated by the Regional Network Provider.

Service Network - The Service Network encompasses a number of service provider networks and nodes, each offering one or more services. These services are envisioned to be mainly IP based. It can be run by a Network Service Provider (NSP), an Internet Service Provider (ISP) or an Application Service Provider (ASP).

Network elements

Residential Gateway – Element of transfer between the access network and the residential network, comprising a number of controlling and management functions.

Access Node – This is the point where the first mile ends. The access node may be located in the central office or in a remote location. An access node aggregates traffic from the first mile and may optionally have a certain level of service awareness.

Access Edge Node – This node terminates the access network and shall always be placed between the access network and the regional network (or between the access network and the service network in case there is no separate regional network).

Service Edge Node – The point of transfer between the Regional Network (or the Access Network in case there is no regional network) and the Service Network (NSP, ISP or ASP). This is the entry point to various services, e.g. Internet, Telephony, Video etc.

Business roles

The reference model reflects different business roles, each one covering a specific part of the model. The main roles are shown in Figure 2-1:

Network Access Provider (NAP) (in DA1.1 also called Access Network Provider (ANP))

– aggregates and forwards/routes traffic between the end-user and the NSP/ASP corresponding to the specific application

– may also take on the NSP role (IP address management)

Regional Network Provider (RNP) – provider owning the Regional Broadband Network.

Network Service Provider (NSP) – provides (IP) addressing and connectivity to an IP network (ASP or internet) for end-users. If only internet access is offered, the NSP is an ISP.

Application Service Provider (ASP) - trusts NAP (and NSP) networks for access to users and configuration of their IP addresses. Keeps profile (authentication, QoS) of its users. Offers applications for these users via dedicated application servers.


In addition to these, there are further business roles in the MUSE perspective, e.g. Customer, Packager, Connectivity Provider and Content Provider. All business roles are described extensively in the MUSE deliverable DA1.1.

2.1.2 Connectivity models

2.1.2.1 Ethernet model

The Ethernet-based architecture requires end-to-end Ethernet connectivity between the CPN and the Edge Node. This implies that all nodes between the CPN and the Edge Node must be Ethernet aware, i.e. able to base their forwarding on the information in the Ethernet frames. It also implies that Ethernet traffic can normally be seen as point-to-point connections between the CPN and the Edge Node. In an extended version, the architecture should also support peer-to-peer connections between separate CPNs via the Aggregation network, without having to go through the Edge Node. Direct peer-to-peer support in the Access Network has, however, implications on security, QoS and accounting, and must therefore be investigated further.

[Figure 2-2 diagram: bridged and routed (IPv4/IPv6) RGWs connect via the AN to an Ethernet (MPLS) aggregation network of 802.1ad / S-VLAN-aware or 802.1Q switches in the NAP, terminating at Access ENs (BRAS or Edge Router); behind the optional RNP, Service ENs with IP termination (IPv4 or IPv6) lead to the NSP/ISP and ASP networks.]

Figure 2-2. In the Ethernet Transport scenario there is Ethernet connectivity between the CPN and the Access Edge Node. In an extended scenario Ethernet frames may also be tunneled further up to the NSP Edge Nodes.

Each service binding for a network service is associated with a pair of Ethernet destination and source addresses, which means that there is one service binding for upstream traffic and one service binding for downstream traffic. The SLS for the upstream and downstream service bindings can be asymmetric, e.g. the maximum bandwidth for upstream and downstream traffic to each CPN can be different. The service binding is the basis for a network service, e.g. an Internet service, a Local IP service or a Corporate Intranet service, where IP addresses will be allocated to the Ethernet addresses in the service binding. In the case where the RGW is a routed gateway, the network service binding is set up between the external interface of the routed RGW and the NSP edge router; otherwise there will be a network service binding between each host at the CPN and the NSP edge router.

The network service binding will become the basis for creating application service bindings, by using the pair of IP addresses to set up an application service, e.g. a TV channel or a telephony service, see also 3.1.1.1 (“Service separation”).
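The binding hierarchy described above can be summarised in a small data-structure sketch (field names are assumptions for illustration): a directional network service binding is keyed on an Ethernet address pair and carries its own SLS, and an application service binding is built on the resulting IP address pair.

from dataclasses import dataclass

@dataclass
class NetworkServiceBinding:
    src_mac: str
    dst_mac: str
    direction: str            # "upstream" or "downstream"
    max_bandwidth_kbps: int   # the SLS may differ per direction (asymmetric)

@dataclass
class ApplicationServiceBinding:
    src_ip: str
    dst_ip: str
    application: str          # e.g. a TV channel or a telephony service

if __name__ == "__main__":
    up = NetworkServiceBinding("rgw-mac", "edge-mac", "upstream", 1024)
    down = NetworkServiceBinding("edge-mac", "rgw-mac", "downstream", 8192)
    app = ApplicationServiceBinding("192.0.2.10", "198.51.100.5", "IPTV channel")
    print(up, down, app, sep="\n")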


Forwarding of Ethernet frames is based on the address fields in the Ethernet header or optionally on higher layer information. According to the Ethernet standard a Destination Address may specify either a unicast address, destined for a single station, or a multicast address, destined for a group of stations. A Destination Address of all 1 bits refers to all stations on the LAN and is called a broadcast address.
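For illustration, the destination-address distinction made above can be expressed in a few lines of Python: the broadcast address is all ones, a multicast address is flagged by the group (I/G) bit in the first octet, and everything else is unicast.

def classify_dst_mac(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    if all(octet == 0xFF for octet in octets):
        return "broadcast"    # all stations on the LAN
    if octets[0] & 0x01:
        return "multicast"    # destined for a group of stations
    return "unicast"          # destined for a single station

if __name__ == "__main__":
    for mac in ("ff:ff:ff:ff:ff:ff", "01:00:5e:01:02:03", "00:1a:2b:3c:4d:5e"):
        print(mac, "->", classify_dst_mac(mac))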

The Ethernet frames encapsulate higher layer protocols, e.g. IP over Ethernet or IP over PPP over Ethernet. The IP layer can be IPv4 or IPv6.

Ethernet switching in the Aggregation Network can be full-mesh or restricted. In a full-mesh scenario traffic can be switched locally between two Access Nodes or even within an Access Node. Restricted switching implies that there is no Ethernet connectivity between all nodes. The basic scenario for restricted switching is that Ethernet traffic is switched between an Access Node and an Edge Node only. In this case all traffic between two CPNs always goes through the Edge Node. There is also a scenario where Ethernet traffic is switched directly between two Access Nodes without going through the Edge Node, but still controlled by the Access Network. An example of this may be a LAN-LAN interconnect service for business users or a residential peer-peer application for private users. Note that MPLS can optionally be used in the aggregation network to improve the scalability of business connections.

2.1.2.2 IP model

The main characteristic of the data plane of the IPv4/IPv6 network model is the separation of the access and aggregation parts at Layer 2 by the Access Node, which now acts as a Layer 3 forwarder.

[Figure 2-3 diagram: bridged and routed (IPv4/IPv6) RGWs connect via the CPE first mile (optionally Ethernet for IPoPPPoE) to an AN acting as IP router/forwarder for IPoE (IPv4/IPv6); the aggregation network is Ethernet or IP (optionally MPLS) towards the Access ENs (Router, BRAS or Edge Router, Ethernet switch), the optional RNP, and the NSP/ISP and ASP Service ENs with IP termination.]

Figure 2-3. In the IP Transport scenario there is Ethernet connectivity only in the First Mile link, and optionally in the aggregation part or regional part of the network.


One major benefit is that the management of the aggregation part at Layer 2 is much simplified compared to the Ethernet network model. Indeed, as the user segregation is now implemented at IP level, it is only necessary to transport the frames at layer 2 between the Access Node and the first other IP point (this could be a router inside the aggregation network or a BRAS at the edge), without the need for user segregation at Layer 2. At the same time this avoids the potential Layer 2 security and scalability issues of the previous network model.

In a general IP model the transport between the CPN and the Access Node can be any layer-2 technology. However, it is assumed that the main link technology will be Ethernet. In the aggregation part of the Access Network, there can be two flavours:

• Pure Layer 2 aggregation network. This case consists of a pure Layer 2 network (Ethernet switches) connecting the Access Nodes with the different edges. The only (optional) Layer 3 interpretation in the intermediate nodes is IGMP snooping for multicasting.

• Layer 3 aggregation network. In the second case the aggregation network is in fact a Layer 3 network and consists of routers.

The IP model can also be extended to a network layout where there may be one or several Ethernet switches between the CPN and the first IP node in the aggregation part. The aggregation part can also have a different layout higher up in the network, where some nodes are IP routers and some nodes are Ethernet switches. Parts of the aggregation network can e.g. be Ethernet islands surrounded by IP nodes. The main characteristic of the IP model is thus that the connectivity between the CPN and the Access Edge Node is at the IP layer and not at layer 2, meaning that there must be at least one IP node between each CPN and the Access Edge Node.

2.1.3 Reference Control Architecture

This section describes the current status of the control and data plane work in SPC1.1, sub-milestone MC1.1.7 "Policy Control Framework", with particular emphasis on the control plane part. The work is still on-going, so this is a snapshot of the model from November 2004.

2.1.3.1 Reference Control Architecture

The reference control architecture used in MUSE SPC is derived from DSLF work, primarily from TR-058 [30] and the additions and modifications proposed in PD-22 [28]; see Figure 2-4. The control architecture in this document is further elaborated in order to reflect a multi-service, multi-provider access network.


[Figure 2-4 diagram: the user at the Customer Premises (HGW/RGW) reaches the L2 network Access Function and Service Access Function; the Service Access Management (Packager) and Access Network Control (Connectivity Provider / Network Access Provider) are connected through the interfaces A, A10control, M', M, Uctrl and R.]

Figure 2-4. Reference control architecture – top view.

The model above consists of certain management and control functions as well as a couple of defined interfaces. Below, all of these are described in a generic fashion; details are left for later in the document.

SAM (Service Access Management) – This function has the responsibility to enable the services that the end user will access. Hence, it has knowledge of the available services and of the users in the access network. The SAM works independently of the access network technology.

ANC (Access Network Control) – The ANC has the knowledge of how services should be requested and accessed, and of which resources are involved in establishing a service connection. The ANC operates directly on the physical access network nodes.

The interfaces defined can be described as follows:

A – This is the interface handling the interaction between the end-user and the SAM. Typically it is used for end-user service subscription, service activation and service selection.

A10control – This is the interface between the application service provider (ASP) and the SAM. It enables the ASP to request capabilities (bandwidth, QoS, etc.) from the SAM.

M’ – This is the access technology independent interface between the SAM and the ANC. It hides the network implementation details from the SAM. Typically it is used for real-time network control when setting up, taking down and managing service connections.

M – This is the interface between the different network elements and the element managers. It is used for fault management, configuration and policy enforcement.

Ucontrol - This is the control interface between the Connectivity Provider and the CPN. It is used for remote CPE management.


R – This interface handles the packager-to-packager interactions and is of particular interest in cases where the user is in a visited network but needs to be authorized by his home network. Typically this is the case in roaming between access service providers.
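The split of responsibilities between the SAM and the ANC can be sketched as follows (method names are illustrative assumptions, not defined interfaces): the SAM deals only with services and users and delegates all technology-specific actions to the ANC over M', which in turn would act on the network elements via M.

class AccessNetworkControl:
    """ANC: knows the network resources and operates on the physical access nodes."""
    def setup_service_connection(self, user: str, service: str, qos: dict) -> str:
        # would program the AN and aggregation nodes via the M interface
        return f"connection for {user}/{service} with {qos}"

class ServiceAccessManagement:
    """SAM: knows services and users, independent of the access technology."""
    def __init__(self, anc: AccessNetworkControl):
        self.anc = anc                                # the M' interface
    def activate_service(self, user: str, service: str, qos: dict) -> str:
        # triggered over the A interface (end-user) or A10control (ASP)
        return self.anc.setup_service_connection(user, service, qos)

if __name__ == "__main__":
    sam = ServiceAccessManagement(AccessNetworkControl())
    print(sam.activate_service("user-42", "VoIP", {"bandwidth_kbps": 256}))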

2.1.3.2 Mapping to MUSE business model

The business models developed in MUSE differ from the models used in e.g. DSLF TR-058 in the sense that some new role descriptions are introduced. The reason is to allow an open access network architecture with the possibility to support multiple services as well as a multitude of providers. A description of the business roles can be found in [1]; Figure 2-5 depicts them and describes the relations between them.

Two major changes were made with respect to the DSLForum model. The first change is that the DSLForum NSP role was split into two separate roles, namely the Packager role and a new NSP role with a smaller set of responsibilities than the DSLForum NSP role. By introducing the Packager role it becomes possible to set apart certain tasks that are specific to the customer relationship and that have less to do with controlling the network. User authentication and authorization can be one of the tasks for this role.

The second change involves the introduction of another new business role, the Connectivity Provider. This role can be regarded as responsible for providing the connectivity in the Access Network. More details about the specific tasks of each role are given in following sections.

[Figure 2-5 diagram: the Customer (consumer) receives applications delivered with an assured QoS from the Packager, who combines the Network Provider roles (Access Network Provider, Regional Network Provider, Connectivity Provider, Network Service Provider) with the Application Service / Content Provider roles (Application Service Provider, Content Provider).]

Figure 2-5. Business roles according to MUSE [1].


2.1.3.3 Packager

The Packager role has a central place in the business role model. He combines access network functionality from Access Network Providers (ANPs) on the one hand with core network (Internet, corporate networks) functionality from one or more NSPs and/or application services from one or more ASPs on the other hand, and offers this as a package to the Customer [48].

In general the Packager is technology agnostic. All the technology-related aspects of the contract with the Customer are put as requirements to the technology-specific Connectivity Provider. As such the Packager hides all technology from the end user, while at the same time being the single point of contact for the Customer. The Packager may also interface with other Packagers in order to provide "nomadism" services. A customer may be able to connect to a "foreign" network if a service agreement exists between the corresponding Packagers, enabling them to exchange billing information, user profiles and network requirements. It is the responsibility of the home Packager to put requirements on the "foreign" Packager in order to stay in line with its initial contract with the Customer.

Since the Packager has a close relationship with the Customer and other provider roles, this role may also include the management of the residential gateway for the Customer. A management system based on DSLForum standard TR-069 could be a proper tool for the Packager, particularly for more complex service and network settings.

To conclude, it is assumed that there will not be more than one Packager per Customer per CPE. Otherwise, the Packager role is considered to be divided among multiple actors. The Packager role allows Customers to have connections with multiple NSPs/ASPs while at the same time QoS can be assured. He also takes care that the Customer cannot subscribe to more services than his CPE or network connection supports. As the Packager is the main point of contact for the end user, it is the entity best suited for customer authentication and authorisation. Finally, the Packager can also be the one who collects billing information for the various services and sends an integral bill to the Customer.

2.1.3.4 Connectivity Provider

The Connectivity Provider is overall responsible for providing end-to-end connectivity between the CPE and the NSP or ASP network, guaranteeing the agreed QoS and security characteristics. The Connectivity Provider has SLAs with the Access Network Provider and the Regional Network Provider regarding the required network resources. The Connectivity Provider can perform the assignment of IP addresses to the CPE on behalf of the NSP or ASP. Further, the Connectivity Provider may assemble billing information from network services and provide this to the Packager [48].

In general, there will not be more than one Connectivity Provider per CPE, since otherwise it will be hard to control the total amount of bandwidth that a Customer may use. In practice, the connectivity provider role is often combined with the Access Network Provider role, the Regional Network Provider role (this is the case in TR-058) or Network Service Provider role, depending on the wholesale scenario. In future scenarios that are in the scope of MUSE, it is assumed that IP functionality moves further to the edge of the access network in the direction of the Customer. In that situation it is likely that the same actor fulfills the role of the ANP and Connectivity Provider.


In conclusion, the CP is mainly responsible for implementing the requests of the Packager. As such it should be able to check the resources used inside the network and take the appropriate measures (by putting requirements to the ANP and RNP) to provide the corresponding technology-specific service bindings. In general, while the Packager role is technology agnostic, hiding all technology-related aspects from the customer (it is mainly the entity that has SLA agreements with the end user), the CP is technology specific and responsible for the implementation of these requests.

2.1.3.5 Mapping MUSE business roles onto the reference control architecture

The following figure maps the business roles defined in the MUSE project to a reference control architecture that uses the Cadenus model described in [49].

[Figure 2-6 diagram: the Cadenus-style mapping adds an Access Mediator, Service Mediator and Resource Mediator to the SAM and ANC/Network Controller; the user at the Customer Premises (HGW) reaches services over the L2 network, with the Packager, Connectivity Provider and Network Access Provider roles linked through the interfaces A, A10AM2, A10RM2, M', M, Uctrl and R.]

Figure 2-6. The MUSE business roles mapped onto the reference control architecture.

2.2 Model of Residential Gateway

2.2.1 Definitions

The Residential Gateway function can be subdivided into two major groups, namely the SERVICE GATEWAY and the ACCESS GATEWAY:

- The SERVICE GATEWAY functionality must hide the specificity of a network of devices in the home network which are potentially terminals of a particular service. The functionality offered by the service gateway resides in the OSI layers 4 and above, and as such will generally not reside in a specific device (specifically, it will not necessarily reside in the network access device). The Service Gateway architecture and functions will not be described in detail: It is sufficient to be clear about the term “gateway” to avoid confusion about the physical implementation in a real service-delivery environment.


- The ACCESS GATEWAY refers to a specific adapter device that hides the complexity of the external (access) network from the Residential Network, allowing connectivity to external services as if they are part of the local network. Additionally, it hides the complexity of the home network from the access network, allowing a simplified and uniform model of the customer premises equipment to be presented to service deliverers. The access gateway therefore concerns itself with connectivity issues, i.e. the OSI layers 1, 2 and 3, as well as the management, control and diagnostic features associated with those layers within the context of delivery of services from outside the home.

2.2.2 Access Gateway – general boundary assumptions

Using the definition of the access gateway given above, the following simplifying assumptions are used when deriving a synthetic yet useful general architecture of the access gateway function set. The MUSE project partners have chosen to closely follow the work done by the DSL Forum. The generic models and reference points proposed by this forum will be taken up and refined. Though defined in principle for DSL-based access networks, the general principles can be applied to other access technologies. A non-exclusive list of relevant documents is given here as reference:

Specification number | Date of validity | Description | Type of specification
TR-044 | December 2001 | Auto configuration IP | Auto configuration
TR-046 | February 2002 | Auto-configuration services | Auto configuration
TR-058 | September 2003 | Multiservice framework | Framework (future)
TR-059 | September 2003 | QoS | QoS
TR-062 | November 2003 | Auto config ATM | Auto configuration
TR-064 | May 2004 | CPE LAN configuration | LAN configuration
TR-067 | May 2004 | ADSL IOP | DSL physical specification
TR-068 | May 2004 | Router for retail | Router features
TR-069 | May 2004 | CPE WAN configuration | WAN configuration
TR-094 | August 2004 | Multi-service framework for Home Networks | WAN/LAN interfacing

Table 2-1: Relevant DSL-Forum recommendations

Overview of relevant topics:

TR-044: Auto configuration for basic internet services; configuration of connection type, IP address, DHCP server.


TR-046: Auto configuration architecture and services; defines a framework for complex services setup.

TR-058: Multi-services architecture & framework requirements; defines a framework for future services deployment.

TR-059: DSL evolution, architecture requirements for the support of QoS-enabled IP services; defines QoS for multiple services delivery, based on DiffServ (IETF RFC 2474/2597/3246).

TR-062: Auto configuration for the connection between the DSL broadband Network Termination and the network using ATM; defines automatic configuration of ATM parameters.

TR-064: LAN-side DSL CPE configuration; CPE discovery and configuration procedure through XML/SOAP messaging over secured communication, with a device control protocol compliant to UPnP 1.0.

TR-067: ADSL interoperability test plan.

TR-068: Base requirements for an ADSL modem with routing; defines basic requirements for a router, aiming at consistent features for the retail market; refers to TR-044, TR-059 and TR-064.

TR-069: CPE WAN management protocol

TR-094: Multi-Service Delivery Framework for Home Networks

2.2.2.1 Access network physical layer

The access gateway architecture model will, as far as is possible or relevant, be independent of the physical medium used to provide services to the residence. The model will be compatible with future and legacy access network technologies and practices. For practical purposes, only broadband technologies will be considered, starting with ADSL, having an optical "local loop" connection as a future target, but addressing various DSL and fibre technologies in between. Wireless access and cable access, though in principle also compatible with the access gateway model proposed, will not be specifically addressed. A difficulty, however, is that the access network physical layer implementation plays a part in the end-to-end QoS for an application, mostly due to the impact it has on the layer 2 performance. The following characteristics of 1st mile technologies influence QoS:

- Transmission bandwidth. Limited transmission bandwidth not only inhibits certain applications, but also leads to increased packet jitter. This in turn may disable other applications. For example, a 128 kb/s ADSL uplink carrying general Ethernet packets and VoIP-specific packets in one single ATM VC has a packet jitter of 94 ms (see the short calculation after this list). This value is due to the transmission delay of a 1500-byte Ethernet packet, and may often be perceived as unacceptable for the VoIP application.


- Transmission delay and jitter. For 1st mile interfaces, delay is generally very low due to the short physical distance. Additional delay may result from bit error correction means such as the interleave matrix for ADSL. The jitter value leads to additional end-to-end delay. The jitter can be reduced by, for example, separating best-effort service packets and real-time service packets using the interleaved and fast channels respectively. (It has been shown that this can be done with one single ATM VC.) A disadvantage of this approach is that both the fast and the interleaved paths must be assigned a fixed amount of bandwidth, which cannot then be used by the other path. In the example above, the Ethernet packets cannot make use of the fast path bandwidth, even if there are no real-time packets being transmitted. The document "Proposal for simplified QoS for GSB" discusses the relationship between transmission bandwidth and allowed jitter. Below 10 Mb/s bandwidth, special measures have to be taken to keep the jitter of real-time packets below given limits. In wireless systems a further complication arises due to varying transmission bandwidth.

- Bit error rate. 1st mile technologies with a high bit error rate need correction methods that use additional bandwidth and/or create additional delay. This must especially be considered for wireless interfaces, which have varying transmission quality.
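The 94 ms jitter figure quoted for the transmission bandwidth example can be reproduced with a one-line calculation: it is the serialisation time of a maximum-size (1500 byte) Ethernet frame on a 128 kb/s uplink, which bounds how long a small real-time packet may have to wait behind it.

FRAME_BYTES = 1500          # maximum-size Ethernet frame from the example
UPLINK_BPS = 128_000        # 128 kb/s ADSL uplink

jitter_seconds = FRAME_BYTES * 8 / UPLINK_BPS
print(f"worst-case jitter = {jitter_seconds * 1000:.0f} ms")   # prints 94 ms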

There are also several other QoS-related parameters, for which the reader is referred to the MUSE deliverable DA1.2 [2] where this is discussed in detail. Suffice to note that the access gateway must be aware of the specific limitations of the layer 1 technology of the access network it is connected to, though it cannot in general influence them in any way. Beyond creative use of the features it offers, these form a fundamental limit to the services that can be offered over a specific physical layer technology.

2.2.2.2 Access network link layer (layer 2)

In addition to providing the mandatory interface to the access network at the physical layer, the access gateway function will play an important role in handling the layer 2 functions. The architecture proposed will resolve the convergence issues which can be encountered when migrating the “legacy” ATM-based access network (the prevailing practice in today’s broadband deployment model) towards future, Ethernet-based standards. The use of extensions to the Ethernet protocol, such as VLANs, and their interaction with the Residential Network will also be resolved in the access gateway.

Layer 2 architecture considerations can play an important part in management and allocation of the QoS parameters of the network with regard to service delivery, despite the tendency to view all services to be delivered using the Internet Protocol (i.e. Layer 3). In addition, a model of the access gateway which totally isolates the layer 2 functions of the Residential Network from the external access network, which would be an ideal case, is already being questioned, due to the early deployment of layer-2 switch-based services to the home. This not only raises additional legacy management issues in future networks, it also highlights the technical complexity and performance requirements of the access gateway devices.


The Point-to-Point Protocol (PPP) is a layer 2 technology introduced to simplify implementation of AAA functions (Authentication, Authorization and Accounting), used to allow individual customers to securely access services, and be billed for their use. This encapsulation standard is very popular and is likely to persist until other standards are introduced. It is however rather inefficient (IP is packaged over PPP, over Ethernet, over ATM: a stacking of layer 2 handling with generally little added value once consumption of a given service is under way). Because of this inefficiency, other standards are under review as a future replacement. In order to allow the consumption of a service anywhere in the home, the PPP function or its future replacement will also be terminated in the access gateway.

2.2.2.3 IP and the access network (layer 3)

The access gateway may also be called upon to manage the interface between the Residential Network and the access network at the IP layer – in other words, it will typically offer layer 3 routing functionality within the home network, isolated from the external network by means of firewalling and NAT functionality. These functions will require specific management and diagnostic schemes to be implemented to hide the complexity from the inexperienced user yet provide the experienced user with the flexibility to manage his own HAN. These management functions will also allow a synthetic and regular model of the service delivery path to be presented to the service providers, independently of the specific configuration of the terminal equipment inside the HAN.

As a consequence, the access gateway functions will include the mapping of IP traffic in the Residential Network to the external network, so it will have an important role in the management of the QoS parameters associated with a particular service. While QoS management standards are already prevalent in the IP world, their practical implementations will require the access gateway to map these standards both towards the external network at the IP level and towards the layer 2 implementations.

2.2.2.4 Access gateway and the Security architecture

Being the main interface point between home and external networks, and also being the point at which most lower-layer services will be terminated and managed, the Access Gateway will incorporate a number of security related features.

At the Link layer (layer 2), the use of PPP to manage basic AAA functionality has already been mentioned. Migration of PPP towards more efficient mechanisms for initialising the Layer 3 functions (e.g. IP address allocation, for which the DHCP protocol, suitably extended, is a possible candidate) must also keep this in mind, offering a suitable alternative with equivalent security at the AAA level.

For Layer 3 (IP), the access gateway is the focal point for firewall functions. However, it can be anticipated that many services may require their own specific firewalling functionality. Presently, IP-based applications which have difficulty in working with firewalls (the "Netmeeting" application is a well-known example, as are many gaming programmes which need to share details of IP address allocation) require the installation of "Application Layer Gateway" (ALG) functions inside the router. In the future, when multiple and diverse services are envisioned to traverse the access gateway, such a solution has limited prospects.


Instead, security architectures built around a two-step firewalling arrangement can be proposed. The basic firewall feature set, offering security services for all applications, forms an integral part of the access gateway functionality, while specific security functions inside the Residential Network take care of the particular needs of given applications or services.

2.2.2.5 The Access gateway and the Residential Network

Ideally, the Access Gateway functions will be fully agnostic to the precise configuration of the residential network. The nature of the technology used (physical and link layers only in this context) must be freely chosen by the owner of the network and its equipment in the targeted free-market model.

In simple cases, the home network will be very limited in scope, possibly offering one high-bandwidth service to, for example, a television set for video services, and one "best-effort" service for general web browsing using existing network adapters (Ethernet, or possibly wireless LAN technology). However, it is the ambition of MUSE to offer solutions for the provision of many value-added services of diverse types to a wide range of terminals inside the home. In this case, more sophisticated home network configurations must be considered, and it is here that the role of the access gateway in adaptively managing the available functionality towards the time-varying (QoS) requirements imposed by the specific services being consumed becomes critical.

Though the development of specific Residential Network technologies which meet these diverse requirements falls outside the scope of MUSE, the access gateway architecture described here must be capable of supporting such advanced network management techniques.

2.2.3 The Residential Gateway Architecture Model

CPE behaviour and the technical requirements to be expected from the network point of view are captured in a model of the Residential Gateway (RGW), taking into account the RGW itself, the characteristics of the Residential Network and the terminal(s) themselves.

Four application cases are considered:

• Residential Gateway (access gateway part) is bridged (Layer-2 transparent)

• Residential Gateway (access gateway part) is routed, with NAPT

• Residential Gateway (access gateway part) is routed, without NAPT

• Residential Gateway (access gateway part) is hybrid

The Residential Gateway should also be able to work in a hybrid mode, bridged for some connections and routed for others. Although the fully routed model offers the most flexibility of connection (both inside the CPN and between terminals and service providers), the hybrid mode should be considered as well.

The RGW plays a crucial role in most considered issues for the network model. It interacts with the access network and service providers on one side and with devices and users in the Customer Premises Network (CPN) on the other. A comprehensive model for the RGW in particular and the CPE in general (the set of devices at customer premises) is indispensable for a correct description of basic connectivity and its auto-configuration.


The model for the CPE has to take several aspects into account:

• In alignment with DSL-Forum terminology, the CPE can be addressed (managed, configured) from the "WAN side" (= from access network side) and from the "LAN side" (= from residential network side).

• The "CPE" is composed of the multiple functional boxes that can be grouped into physical boxes in multiple ways. It can also be interpreted in a generic way as the "device" that is visible from access network side, depending on the considered layer. This is used in the rest of the document, except for the cases where it is more specifically the RGW part (its access gateway part) of the CPE that is dealt with.

• Management from the WAN side : Depending on whether the RGW is bridged or routed, different functional “boxes” become visible to and configurable by the network :

- In the case of a bridged RGW, the RGW will be managed by the NAP and be transparent for the service providers who can directly access and configure the IP point in the CPN behind the RGW.

- In the routed RGW case without NAPT, the IP point in the CPN behind the RGW becomes directly visible to and configurable from the WAN side (by access provider and service providers). When the RGW implements NAPT, only the RGW is visible and directly configurable by NAP and service providers for layers 2 and 3 (and possibly L4, for further study).

- The NAP can configure up to L3. An ACS can configure L4+ service specific parameters, and can also reconfigure L3 (and L2?) parameters that were previously set. Note that the NAP can own an ACS.

• Three levels of mapping for the model :

- mapping to layers L1-L2-L3-L4+

- mapping to functional boxes

- mapping to management by service provider or by user.

Therefore the proposed model is based on the following principles :

• Breakdown in NT1-NT2-NT3:

- Please note: NT(x) does NOT equal "implements OSI layer L(x)" or "implements OSI layers L1 to L(x)"! Depending on whether the data plane, control plane or management plane is considered, NT(x) can be active up to different layers.

- NT1 corresponds to the B-NT (DSL-F TR94 [34]), terminating L1 from the network side. It also terminates L2 in the data plane from the network side if ATM is used on the first mile (interworking ATM in first mile - Ethernet in CPN).

- NT2 corresponds either to

- the RGW bridge (L2 forwarding). Seen from the network side at L2, it is transparent in the data plane (if Ethernet is used on the first mile) and visible in the control and management plane.

Project Deliverable IST - 6th FP

Contract N° 507295

MUSE_DA2.2_V02.doc 48/193 PUBLIC

- the RGW router (L3 forwarding). Seen from the network side, it terminates L2 in the data plane and is visible at L2 in the control and management plane. In the data plane at L3, it is non-transparent if the routed RGW applies NAPT, transparent if without NAPT. In the control and management plane it is visible at L3+.

- NT3 corresponds to the L3+ termination in the data plane. It is reachable transparently at L3 from the network side with a bridged RGW and with a routed RGW without NAPT. Seen from the network side it is visible at L4+ in the control and management plane.

- Finally, the terminal is also mentioned. It is considered here as the interface to the user and is not directly involved in configuration. If the physical terminal is a PC, NT3 will be physically part of the terminal; if the physical terminal is a "simple" voice/video terminal (e.g. a TV set), NT3 is either physically integrated in the RGW or located in a separate device such as a set-top box (an FPD in DSL-F TR94 [34] terminology).

• The DSL-Forum approach (TR-64 and TR-69) distinguishes a user-accessible part and a network-accessible part of the CPE. In a similar way, NT2 has been split into a LAN side and a WAN side.

• Service providers can configure NT1, NT2, NT3

• Interaction between NTs :

- NT2 WAN can read configuration information from NT1

- NT3 can read configuration information from NT2

- NT3 can (re)configure NT2 LAN at application level. This can be triggered by configuration of the NT3 by an ACS.

- User can configure parts of NT3 and NT2 LAN at application level

With these definitions we can reconsider the RGW from a functional point of view :

• Access Gateway = NT1 + NT2 with NT2 either bridged or routed or hybrid

• optional Service Gateway = NT3 when it is not part of the terminal

• RGW = Access Gateway (when NT3 is included in the terminal(s))

• RGW = Access Gateway + Service Gateway (when NT3 is separate from the terminal(s))

As the bridged and routed models are initially difficult to capture in a single schematic, both are described separately.

2.2.3.1 Model for bridged RGW

In this case the MAC and IP addresses of all terminals will be visible to the access network; it is therefore the responsibility of the providers to address/configure all the visible elements of the network (i.e. the terminals themselves).

Configuration of the parameters on NT1 is the responsibility of the NAP. In the case of ATM in the first mile, the NAP must configure the layer 2 connection in NT1.

In the case where Ethernet is used in the first mile, NT1 is transparent at layer 2 and NT2 is visible at layer 2; NT2 can then be configured by the NAP.

Neither NT1 nor NT2 needs layer 3+ configuration, so layer 3+ configuration will have to be made in NT3, i.e. in FPDs (devices like STBs, etc.) and/or in the end-user terminals. The NSP will have to configure several parameters such as the IP address, default gateway, and DHCP and DNS server addresses, among others. All parameters referring to the service will also be configured on these devices, by the ASP. Such parameters include QoS settings (layer 3 parameters) and the video/audio service codecs to be used (layer 4+ parameters).

[Figure: data-plane view (as seen from the network side) of the bridged RGW. The data flow runs from one or multiple SP(s) in the network through NT1 (L1 termination; L2 termination if ATM in the first mile, L2 transparent if Ethernet), NT2 WAN and NT2 LAN (L2 forwarding), NT3 (L3+ termination) to the terminal. Legend: NAP manages L1/L2, NSP manages L3, ASP/ACS manages L4+; configuration from the ACS, L3 configuration, application configuration (firewall, web filtering, ...) by the user, and "B reads configuration of A" relationships are indicated.]

Figure 2-7: Bridged RGW model

2.2.3.2 Model for routed RGW

The L1 parameters of NT1 and the L2 parameters of NT1 and NT2 are configured in the same way as in the previous case by the NAP.

• Without NAPT

For the L3 parameters, when the NT2 block is routed and there is no NAPT, NT2 now requires configuration from the NSP. First of all, the NSP is responsible for configuring an IP address on NT2 and for configuring its routing table.

Since NT3 blocks will also receive a (public or private) IP address from the network, the NT2 block will have to act as a DHCP relay. The configuration of the parameters related to this function will also be provided by the NSP.

NT3 will receive its IP address (via the DHCP relay in NT2) and its routing table parameters from the NSP. The ASP will then configure all service-related parameters, such as QoS parameters, in the NT3 block. Since NT2 now has routing capabilities, the ASP may also need to access this block to configure QoS parameters in it.
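As an illustration of the DHCP relay role that NT2 plays in this case, the following minimal Python sketch shows the upstream half of a relay: a broadcast DHCP request received on the LAN side is forwarded as unicast to the NSP's DHCP server, with the relay's LAN-side address written into the giaddr field so that the server can select the right pool. It is a sketch only (a real relay also forwards the downstream replies), and the addresses used are assumptions, not MUSE specifications.

import socket

LAN_BIND = ("0.0.0.0", 67)            # DHCP server port on the LAN side of NT2
NSP_DHCP_SERVER = ("192.0.2.10", 67)  # hypothetical NSP DHCP server reachable via the WAN side
RELAY_LAN_ADDR = "192.168.1.1"        # hypothetical LAN-side address of NT2

def relay_upstream():
    lan = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    lan.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lan.bind(LAN_BIND)
    wan = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        msg, _ = lan.recvfrom(1500)            # broadcast DHCPDISCOVER / DHCPREQUEST from NT3
        if len(msg) < 240:                     # shorter than a BOOTP header: ignore
            continue
        giaddr = socket.inet_aton(RELAY_LAN_ADDR)
        relayed = msg[:24] + giaddr + msg[28:] # giaddr occupies bytes 24..27 of the BOOTP header
        wan.sendto(relayed, NSP_DHCP_SERVER)   # unicast towards the NSP's server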


[Figure: data-plane view (as seen from the network side) of the routed RGW without NAPT. Blocks: network with one or multiple SP(s), NT1 (L1 termination; L2 termination if ATM in the first mile, L2 transparent if Ethernet), NT2 WAN / NT2 LAN (L3 forwarding, L3 transparent; L2 termination if Ethernet in the first mile), NT3 (L3+ termination), terminal. Legend: NAP manages L1/L2, NSP manages L3, ASP/ACS manages L4+; configuration from the ACS, L3 configuration, application configuration (firewall, web filtering, ...) by the user, and "B reads configuration of A" relationships are indicated.]

Figure 2-8: Routed RGW model without NAPT

• With NAPT

When the routed NT2 also performs NAPT, the NT3 block is no longer visible to the providers.

The NSP accesses the NT2 block to perform the same configurations at L3 that were made in the previous situation.

NT3 now receives a private IP address provided by a DHCP server on NT2. An additional configuration that the NSP can make in the NT2 block is to specify which pool of addresses the DHCP server should use. On the other hand, NT3 still needs to be configured with service-specific parameters. This implies crossing the NAPT mechanism, which may not be possible. Instead, a service provider may have to provide an "installation wizard" or similar application to be executed on a specific terminal inside the home network. A similar approach can be used for updating the L3 configuration parameters, should this be required at any time.
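A minimal Python sketch of why NAPT hides NT3 from the WAN side: only mappings created by outbound traffic exist in the translation table, so an unsolicited WAN-side configuration attempt towards NT3 finds no entry. The addresses, port range and helper names below are illustrative assumptions, not part of the architecture.

import itertools

PUBLIC_IP = "203.0.113.7"              # hypothetical WAN address of NT2
_ports = itertools.count(40000)        # pool of public source ports

_out, _in = {}, {}                     # outbound and inbound translation tables

def translate_outbound(priv_ip, priv_port, proto):
    """Rewrite the source of an outbound packet; create the mapping on first use."""
    key = (priv_ip, priv_port, proto)
    if key not in _out:
        pub_port = next(_ports)
        _out[key] = pub_port
        _in[(pub_port, proto)] = (priv_ip, priv_port)
    return PUBLIC_IP, _out[key]

def translate_inbound(pub_port, proto):
    """Return the private endpoint for an inbound packet, or None if no mapping exists,
    which is exactly the case for an unsolicited WAN-side attempt to configure NT3."""
    return _in.get((pub_port, proto))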


[Figure: data-plane view (as seen from the network side) of the routed RGW with NAPT. Blocks: network with one or multiple SP(s), NT1 (L1 termination; L2 termination if ATM in the first mile, L2 transparent if Ethernet), NT2 WAN / NT2 LAN (L3 forwarding with NAPT, L3 non-transparent; L2 termination if Ethernet in the first mile), NT3 (L3+ termination), terminal. Legend: NAP manages L1/L2, NSP manages L3, ASP/ACS manages L4+; configuration from the ACS, L3 configuration, application configuration (firewall, web filtering, ...) by the user, and "B reads configuration of A" relationships are indicated.]

Figure 2-9: Routed RGW with NAPT

2.2.3.3 Summary of NT configuration by NAP / NSP / ASP

The previous considerations are summarised in the following table:

RGW type              NT1    NT2              NT3
Bridged               NAP    NAP, NSP         NSP, ASP
Routed without NAPT   NAP    NAP, NSP, ASP    NSP, ASP
Routed with NAPT      NAP    NAP, NSP, ASP    -

Table 2-2: Configuration of RGW parameters by providers

Please note that this overview reflects the current view, but that the precise role of the different providers still has to be consolidated.
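If useful, the mapping of Table 2-2 can be encoded directly as a lookup. The Python sketch below is purely illustrative (the dictionary and function names are ours, not part of any specification) and simply restates the table above.

CONFIG_RESPONSIBILITY = {
    "bridged":             {"NT1": ["NAP"], "NT2": ["NAP", "NSP"],        "NT3": ["NSP", "ASP"]},
    "routed_without_napt": {"NT1": ["NAP"], "NT2": ["NAP", "NSP", "ASP"], "NT3": ["NSP", "ASP"]},
    "routed_with_napt":    {"NT1": ["NAP"], "NT2": ["NAP", "NSP", "ASP"], "NT3": []},  # NT3 not reachable
}

def may_configure(provider, rgw_type, nt_block):
    """True if the given provider may configure the given NT block for this RGW type."""
    return provider in CONFIG_RESPONSIBILITY[rgw_type].get(nt_block, [])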

2.2.3.4 The Hybrid Residential Gateway

As discussed in the above chapters, real-world applications may require a mix of bridged and routed gateway functionality. A model of such a gateway can be constructed by taking the relevant parts of the previous models and integrating them into specific configurations on an as-needed basis. However, such an approach may not be useful for proposing a future-safe generic architecture which can support future access network technologies and at the same time offer a clear migration route towards them. The diagram below is a simplified model that can form the basis of such a “unified residential gateway” architecture.


[Figure: unified hybrid RGW model. One or multiple SP(s) connect via NT1 with an L2TC (layer 2 convergence) function to a conditional L2 forwarding function between NT2 WAN and NT2 LAN; frames not switched at L2 pass through an L3 routing/NAPT function; NT3 and the terminal sit behind NT2 LAN. NAP manages L1/L2, NSP manages L3, ASP/ACS manages L4+, and the user configures at application level. The typical Access Gateway functional boundary is indicated.]

Figure 2-10: Unified model of a hybrid Residential Gateway configuration

In this model, differing layer 2 technologies which may exist between the home and the access network are resolved by the layer 2 convergence function. For example, the mapping of Ethernet frames onto ATM VCs is performed here. In the case of Ethernet in the access network, this function becomes null (reverting to one of the specific models above) or may be used to terminate specific diagnostic functions at layer 2 (e.g. loopback). The configuration of the L2TC function is managed by the NAP.

The problem of mixing bridged and routed models is addressed by means of a “conditional L2 forwarding” function. Here, frames traversing the access gateway are inspected based on their MAC addresses. Frames identified as belonging to layer-2-switched services are transferred directly between the NT2L and NT2W functions; all other frames are directed to the L3 router / NAPT function, where, in addition to modification of the L2 MAC address (e.g. for NAPT), L3 addresses are managed as appropriate for the routing function.
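As an illustration of the conditional L2 forwarding decision just described, the following Python sketch classifies upstream frames by their source MAC address; classifying on the source MAC, the MAC set and the helper functions bridge_to_wan()/hand_to_router() are illustrative assumptions, not part of the MUSE specification.

L2_SERVICE_MACS = {"00:11:22:33:44:55"}   # e.g. a set-top box attached to an L2-switched video service

def classify_upstream_frame(frame: bytes) -> str:
    """Return 'bridge' or 'route' for a frame arriving on the NT2 LAN side."""
    src_mac = ":".join(f"{b:02x}" for b in frame[6:12])   # source MAC, bytes 6..11 of the Ethernet frame
    return "bridge" if src_mac in L2_SERVICE_MACS else "route"

def forward_upstream(frame: bytes, bridge_to_wan, hand_to_router):
    if classify_upstream_frame(frame) == "bridge":
        bridge_to_wan(frame)      # transparent L2 path: NT2 LAN -> NT2 WAN
    else:
        hand_to_router(frame)     # L3 forwarding (and NAPT) applied in the router function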

In all other respects, this model is identical to those described above. The diagram shows a typical boundary of the Residential Gateway functional unit. Given the cost considerations of incremental service deployment, this boundary would represent the minimum configuration which should be considered for future-safe extension of the services delivered, based on the boundary assumptions listed in the introduction.

2.3 General Connectivity

2.3.1 Business models in an Access Network

The connectivity between the residential networks and the ISPs / NSPs / ASPs is related to the different business models. It also depends on which Access Network model is used, the Ethernet model or the IP model. The impacts on connectivity of the different combinations of business models in an Ethernet-based Access Network are described below. Since migration aspects are considered, both PPP and DHCP mechanisms are assumed to be used in the Access Network.

Typical current deployments in the access network consist of a single-edge, PPP-based architecture supporting non-differentiated (best effort) services. However, the business roles between NAP-NSP-ASP can be more varied and more complex than this.


A first business consideration is the relationship between NAP and NSP. Both could be separate entities (the NAP does wholesaling to third-party NSPs). Alternatively, both could have a closer relationship based on trust and common stakeholders. Such an associated relationship can also exist between a NAP and ASPs.

Two consequent questions are then: 1) who assigns the IP addresses to the users, and 2) what related business (type of wholesale) is in place between the NAP and the NSP.

There are three types of wholesale between a NAP and a third-party NSP, namely PPP wholesale, IP wholesale, and Layer 2 wholesale. Additionally there is the special case of the NAP playing NSP and providing connectivity for associated NSP/ASPs for specific applications, i.e. application wholesale or retailing.

- In PPP wholesale, the NAP exchanges IPoPPP packets with the NSP, which then terminates the PPP sessions. The allocation of IP addresses is fully managed by the NSP.

- The situation is different with IP wholesale, because the NAP terminates the PPP sessions in a BRAS and exchanges IP packets with the NSP. When PPP is not used, the IP traffic is exchanged between the NAP's EN (router) and the NSP. In both cases the address allocation is done via the NAP but can be controlled by the NSP.

- With Layer 2 wholesale, the main point is the exchange of L2 frames between the NAP (which doesn't interpret the higher layers) and the NSP. The NAP doesn't participate in Layer 3 processes (addressing, among others). The EN is a layer 2 switch. This wholesale model is typically needed for business customers but of no interest for residential users. Note that in this deliverable we address Ethernet as layer 2, not legacy protocols like ATM or FR.

- Finally, the same player can act both as NAP and NSP, meaning the NAP assigns IP addresses to its retail users and provides application wholesale to associated NSP/ASPs for specific applications. This can be based on PPP (terminated at the NAP BRAS) or DHCP. Note that the applications offered in this way can either be specific (e.g. a video service) or cover all relevant applications (i.e. HSI + video + gaming + ...).

These options have basic technical impacts:

- Assignment of IP@ to the users => impact on auto-configuration (incl. authentication)

- Which auto-config protocol, PPP (de-facto tunnels) or DHCP (really connectionless) =>

  - impact on how to provide full wholesale to a specific NSP, and how to handle peer-peer

  - impact on multicasting

The different options can be combined in different typical cases of business models, as shown in the following table. Impact on multicast is not shown in the table.

Note that multiple business models may coexist in the same NAP network, e.g. an installed base of users relying on PPP for all their services including wholesale applications (case (a)) can be combined with new users being based on DHCP for wholesaling of all services (case (c)) and business users on a layer 2 VPN (case (e)).


Auto-configuration / Wholesale / IP@ allocation by:

(a) PPP: all services; DHCP: none
    Wholesale: (1) PPP whs, (2) IP whs, (3) application wholesale
    IP@ allocation by: (1) NSP, (2) NAP on behalf of NSP, (3) same player is NAP and NSP

(b) PPP: all services; DHCP: specific applications
    Wholesale: PPP: (1) PPP whs, (2) IP whs; DHCP: (1) IP whs, (2) application whs
    IP@ allocation by: PPP: (1) NSP, (2) NAP on behalf of NSP; DHCP: (1) NAP on behalf of NSP, (2) same player is NAP and NSP

(c) PPP: all services; DHCP: wholesaling (incl. HSI)
    Wholesale: PPP: (1) PPP whs, (2) IP whs; DHCP: (1) IP whs, (2) application whs
    IP@ allocation by: PPP: (1) NSP, (2) NAP on behalf of NSP; DHCP: (1) NAP on behalf of NSP, (2) same player is NAP and NSP

(d) PPP: none; DHCP: all services
    Wholesale: (1) IP whs, (2) application whs
    IP@ allocation by: (1) NAP-NSP and other NSPs (=> multiple DHCP servers), (2) same player is NAP and NSP

(e) PPP or DHCP (no difference)
    Wholesale: L2 wholesale for transparent Ethernet transport
    IP@ allocation by: NSPs

Table 2-3: Considered business models

The different business models are described in more detail below, identifying the technical impacts on:

• IP@ allocation

• Peer-peer connectivity

• Multicasting.


2.3.1.1 (a) : Full wholesaling with PPP to third-party ISPs/NSPs + retailing with PPP to associated ISP/NSP

[Figure: the NAP, containing a BRAS, connects users to third-party ISP/NSP 1 and NSP 2 (with ASP 1, ASP 2) and to the associated ISP/NSP 0 (with ASP 3); all services (HSI, applications) are carried over PPP.]

Figure 2-11: Full wholesaling with PPP to third-party ISPs/NSPs

This is a common scenario where all traffic is IPoPPPoE and the NAP offers either PPP wholesale to third-party NSP/ISPs (e.g. primarily for HSI), or IP wholesale to the associated NSP/ISP.

2.3.1.1.1 Separate entities NAP - ISP/NSP1/NSP2

• In the case of PPP wholesale, the IP addresses are allocated by PPP server in ISPs/NSPs.

• In the case of IP wholesale, the IP addresses are allocated by the PPP server in the BRAS of the NAP, on behalf of the NSP.

• Full wholesale via PPP implies that all peer-peer connections flow via the edge node; no direct connection is possible.

• No multicasted transport in the NAP: multicast streams are duplicated to each user in the BRAS.

2.3.1.1.2 Associated entities NAP-ISP/NSP0

• PPP is terminated at the NAP, and the same player combines NAP and NSP 0.

• PPP implies that all peer-peer connections flow via the edge node; no direct connection is possible.

• No multicasted transport in the NAP: multicast streams are duplicated to each user in the BRAS.


2.3.1.2 (b) : Full wholesaling with PPP to third-party ISPs/NSPs + specific application wholesale (retailing) to associated NSP and ASPs

[Figure: as in business model (a), the NAP's BRAS provides PPP-based wholesale of all services (HSI, applications) towards third-party ISP/NSP 1 and NSP 2 (with ASP 1, ASP 2); in addition, an edge router / network service gateway and a NAP-owned DHCP server carry some applications towards the associated NSP 0 (with ASP 3, ASP 4) and ASP 5.]

Figure 2-12: Business model (b)

The situation is identical to (a) for wholesaling, and there is now an additional application wholesale between the NAP and associated NSP (or ASP). In this relationship the NSP 0 (or ASP 5) trusts the NAP for assigning the IP addresses. Specific applications (but not HSI) are offered by NSP0 or ASP 5 as IPoE traffic (DHCP is used as auto-configuration).

2.3.1.2.1 Separate entities NAP - ISP/NSP1/NSP2

Idem to (a)

2.3.1.2.2 Associated entities NAP - NSP0 and NAP - ASP5

• IP addresses are allocated by the NAP's own DHCP server on behalf of NSP 0, or by the NAP's own DHCP server playing NSP for ASP 5.

• Direct peer-peer connections in the NAP (without going to the edge node) are possible between users whose IP@ is handled by the NAP. Peer-peer connections via the edge node are also possible. The NAP can choose the subnet allocation as a function of the offered peer-peer connectivity (direct or via the edge node); see the cases of communication between users of the same NSP.

• Multicasting in the NAP via an L2 distribution tree is possible from the edge router or the network service gateway.

2.3.1.2.3 Peer-peer between users on NSP 0 and users on other NSPs

• This peer-peer traffic requires a transition from IPoE to IPoPPPoE, and hence must flow between edge nodes.


2.3.1.3 (c) : Full wholesaling with PPP to third-party ISPs/NSPs + full application wholesaling (incl HSI) (retailing) to associated NSP and ASPs

[Figure: as in (b), the NAP's BRAS provides PPP-based wholesale of all services towards third-party ISP/NSP 1 and NSP 2 (with ASP 1, ASP 2), while an edge router / network service gateway and a NAP-owned DHCP server carry all services (HSI and applications) towards the associated ISP/NSP 0 (with ASP 3, ASP 4) and ASP 5.]

Figure 2-13: Business model (c)

This scenario combines the situation in (a) for wholesaling to third-party NSPs with full application wholesaling (HSI + specific applications) (retailing) between the NAP and the associated ISP/NSP0 or ASP5, with DHCP as auto-configuration.

2.3.1.3.1 Separate entities NAP - NSP/ISP

Idem to (a)

2.3.1.3.2 Associated entities NAP - NSP/ISP and NAP - ASP

Idem to (b)

2.3.1.3.3 Peer-peer between users on NSP 0 and users on other NSPs

Idem to (b)


2.3.1.4 (d) : Full wholesaling to separate ISPs/NSPs via DHCP + full application wholesaling (incl HSI) to associated NSP/ASPs via DHCP

[Figure: there is no BRAS; an edge router / network service gateway in the NAP carries HSI and applications as IPoE towards third-party ISP/NSP 1 and NSP 2 (with ASP 1, ASP 2, using the ISPs'/NSPs' DHCP servers) and towards the associated ISP/NSP 0 (with ASP 3, ASP 4) and ASP 5 (using the NAP's own DHCP server).]

Figure 2-14: Business model (d)

In this situation there is no PPP connectivity any more. It allows full wholesaling of IPoE traffic (with DHCP as auto-configuration) for HSI and specific applications between the NAP and ISP/NSP1/NSP2 (IP wholesaling), between the NAP and ISP/NSP0 (application wholesaling (retailing)), and between the NAP and ASP5 (application wholesaling (retailing)).

2.3.1.4.1 Separate entities NAP - NSP/ISP

• IP addresses are allocated by multiple DHCP servers (one per ISP/NSP). For those ISPs/NSPs the user’s authentication phase is handled by the NAP or possibly proxied to the ISP/NSP.

• Full wholesaling implies that all traffic is to be sent to the correct NSP. For the NAP, the IP@ allocation becomes difficult to co-ordinate between the different DHCP servers. It would be far simpler if the NAP took care of IP@ allocation for all users (this would also allow the NAP to control the subnetting).

• Multicasting in the NAP via an L2 distribution tree is possible from the edge router or the network service gateway.

2.3.1.4.2 Associated entities NAP - NSP/ISP and NAP - ASP

Idem to (b)

2.3.1.4.3 Peer-peer between users on NSP 0 and users on other NSPs

• It is acceptable for this peer-peer connectivity to remain in the NAP domain (higher efficiency) if the NSP can retain a sufficient level of accounting and control.


2.3.1.5 (e) : L2 wholesaling for transparent Ethernet transport

[Figure: the NAP contains only an edge switch; any services are transported transparently at layer 2 towards ISP/NSP 1 and NSP 2 (with ASP 1, ASP 2) and towards NSP 0 (with ASP 3, ASP 4) and ASP 5.]

Figure 2-15: Business model (e)

There is no layer 3 forwarding in the NAP (Ethernet provider), and all customer Ethernet traffic is sent as transparently as possible from the NAP to ISP/NSP1/NSP2 and NSP0. The purpose is to provide Ethernet service connectivity to business end-users, either to provide just IP transport to an NSP/ISP (see also (f)) or to offer MEF-type services (L2 VPNs and Ethernet virtual lines).

2.3.1.5.1 Any NSP/ISP

• The addresses are always allocated by the NSP itself (never by the NAP). The NSP can use PPP(oE) or DHCP; this makes no difference for the NAP. Note that if PPP is used, this is not the same as PPP wholesale (a, b, c), where there is a PPP endpoint in the NAP (the BRAS).

• Pure L2 wholesaling implies that all traffic is to be sent from the NAP to the correct NSP at L2. Traffic between NSPs should be completely separated in the NAP, i.e. at L2 in this case. If required, the NAP could use MPLS tunnels or VLAN-based NSP differentiation in the NAP (1 VLAN per NSP, VAN-concept); a sketch of such per-NSP VLAN tagging follows this list. Note that the latter would require the user to generate an NSP-specific VLAN on its RGW towards the access network, which requires that the user be configured with the correct VLAN from the start. This could still be useful for PPP: although PPPoE is already a tunnelling technique, the use of VLANs allows for bulk (i.e. per-NSP) accounting at the AN or edge switch in the NAP. With IPoPPPoE, all traffic in any case needs to be connected via the PPP endpoint, which is outside the NAP. With IPoE, all peer-peer traffic can be forced at L2 to outside the NAP.

• With IPoPPPoE there is no multicast transport in the NAP; multicast streams must be duplicated to each user at the edge of the NSP. With IPoE, multicasting in the NAP via an L2 distribution tree is possible from the edge switch onwards.
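As mentioned above for VLAN-based NSP differentiation, the following Python sketch illustrates how an NSP-specific 802.1Q tag could be inserted into upstream frames at the RGW or Access Node; the NSP-to-VLAN mapping, priority value and frame handling are illustrative assumptions, not MUSE choices.

import struct

NSP_VLAN_MAP = {"NSP1": 101, "NSP2": 102, "NSP0": 100}   # hypothetical VLAN plan, one VLAN per NSP

def tag_frame(frame: bytes, nsp: str, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100) after the destination and source MAC addresses."""
    vid = NSP_VLAN_MAP[nsp]
    tci = (priority << 13) | (vid & 0x0FFF)               # PCP | DEI=0 | VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]                  # dst MAC (6) + src MAC (6) + tag + rest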

2.3.1.6 Conclusion

The business models (a), (b), (c) and (d) are deemed relevant for residential users. The business model (e) is indispensable for business users only, and will not be elaborated further in the network architecture studies.


2.3.2 Peer-to-peer traffic

Historically, Access Networks have concentrated upstream traffic from customer terminals up to a central node (BRAS). This is partly done for technical reasons, since ATM PVCs and PPP links are terminated in the central office of the telecom operator's network, but traffic concentration is also desired by operators in order to control flows and be able to charge for the traffic.

However, a MUSE Access Network will use transport technologies based on Ethernet and IP that allow traffic to take shorter paths through the network. This implies that technology limitations will no longer be an argument for concentrating traffic at a central node. The decision for an operator to concentrate traffic will now depend mainly on other factors, i.e. the need for central traffic control versus the cost of bandwidth.

Several operators are concerned that peer-to-peer traffic will be expensive if traffic is first forced up to a central node and then back down to a CPE in the same Access Network. There are estimates that already at a network size of around 5000 users there will be a substantial extra cost for bandwidth. With growing peer-to-peer usage, this critical network size will decrease even further.

Actually there are two factors to be taken into account for peer-peer connectivity. The first is whether this connectivity can be allowed (i.e. accounted and controlled) at Layer 2 or whether it is required to be controlled at Layer 3. The second is whether transport efficiency considerations dominate (i.e. switching as close as possible to the end-users), or whether it is preferred to concentrate the control and accounting of peer-peer traffic in single nodes (i.e. via the EN). Depending on the network model, the combination of both factors determines the point where peer-peer is to be connected, namely locally (as close as possible to the users, i.e. in the AN or at an AS) or via the EN.

The decision on direct peer-to-peer traffic versus all traffic via the edge node has some technical implications, as described below.

2.3.2.1 Direct peering in Ethernet Access Networks

Ethernet traffic can be handled as point-to-point connections between the CPE and the Edge Node. In an extended version, however, the architecture should also support peer-to-peer connections between separate residential networks via the aggregation network, without having to go through the Edge Node. Direct peer-to-peer support in the Access Network has implications for security, QoS and accounting: if QoS and accounting are to be possible for direct peering traffic, they must be supported by the Access Nodes and Ethernet switches in the aggregation part of the network.

Security is an important issue. Ethernet was originally designed for LAN networking. Among the features are:

• All Ethernet frames can be seen (and potentially read) by anyone at the same segment (collision domain).

• All MAC addresses can be reached within the same LAN (broadcast domain).

These are acceptable and often desired features at home or office networks, but not acceptable for a public access network based on Ethernet. In a public access network there must be the option to block any layer 2 connectivity between different residential networks. Therefore it is necessary to implement some separation mechanisms for different aspects of the connectivity:


For unicast or multicast traffic directly between end-users (peer-to-peer), this implies that there will be direct Ethernet connectivity between these users, thus breaking the customer separation requirement. Therefore, direct peering must first be granted by the users involved.

An example of a technical solution is to interconnect the peering residential networks via a VLAN to create a direct LAN-to-LAN connection. However, the applicability of this solution depends on the overall Ethernet access solution: if VLANs are already used in other ways, it may not be possible to use this approach. Please note that this solution can also be applied to create a business L2 VPN service.

If there is a desire to force all Ethernet traffic up to the Edge Node, there are several solutions, e.g. VLAN stacking and MAC Forced Forwarding, described in detail in the Ethernet data plane chapter below.

2.3.2.2 Direct peering in IP Access Networks

In a normally routed IP network all forwarding is based on the destination IP address. All IP addresses that are announced into the network are therefore reachable from any other IP address in the network. This means that there is always Customer separation on Layer 2 in a normally routed network. All Customer-to-Customer traffic can go the shortest path through the network, without going via the Edge Node.

This means that there are no layer 2 security issues when doing direct peering in an IP access network. On the other hand, all the IP security issues already encountered on the Internet remain.

For QoS and accounting to be possible for direct peering traffic, the situation is the same as in the Ethernet Access Network model, i.e. they must be supported by functionalities in the Access Nodes and IP routers in the aggregation part of the network.

There are also some issues concerning addressing:

• Do we allow direct Layer 3 routing between NSPs before the edge node? The MUSE operators are in favour of allowing this, but it is only possible when NSPs use separate subnets. It would of course also require sufficient accounting at the place of routing for NSP-NSP billing. This is for further study.

• Could we perform local peer-peer routing in case of overlapping subnets? This would bring additional complexity in the network (separate VR domains in all IP nodes). Moreover, what would happen if a subscriber is connected to two NSPs simultaneously but accidentally gets assigned the same IP address by both, or if two users of different NSPs receive the same private IP address? The current working assumption is that there will be only one entity assigning private IP addresses to the end-users in a NAP's network, namely the NAP itself.

If direct routing between Customers is not desirable, all Customer-originated IP traffic has to be forwarded up to the Edge Node, via some forcing mechanism and then back to the destination Customer. There are mainly two ways for this:

Source-based routing.

This is a special case of policy-based routing, where the forwarding decision of the IP packets is not made by the normal forwarding table, but by some rules defined outside of the forwarding table. (Note that this might be an inefficient way of doing forwarding, if it is not possible to perform it in hardware.) The high level rules can be:

• All Customer-originated packets have to be forwarded up to the Edge Node using the source address for routing.


• Packets originating from outside the Access Network or from the Edge Node have to be routed by the destination address.

These rules are, however, not sufficient to solve the Customer separation within the Access Network, since packets between two Customers in the Access Network will only be forwarded up to the Edge Node: as they are always routed by the source IP address, they will never be forwarded towards the destination IP address. A solution is to let the Edge Node encapsulate Customer-to-Customer traffic into an IP tunnel from the Edge Node down to the Access Node of the receiving Customer. The tunnel can now be routed correctly, since the source address of the tunnel will be the Edge Node, and packets from the Edge Node are routed by their destination address. However, if this source-based forwarding is not implemented in hardware, it will probably be too inefficient to be useful.

This approach is mentioned here for completeness but is not considered as promising.

Upstream tunneling.

All Customer-originated IP traffic is encapsulated by the Access Node in IP-tunnels, e.g. IP/IP, GRE or GPT tunnels. All tunnels are terminated in the Edge Node. The Edge Node will be able to inspect the traffic before routing it down to a Customer in the Access Network or out of the Access Network, see Figure 2-16. Downstream traffic from the Edge Node is not encapsulated. All traffic is therefore routed by destination address.

Since all aggregation IP nodes are doing normal routing and tunnel endpoints are only at the Access Nodes and Edge Nodes this is the preferred solution for IP traffic forced up to the Edge Node.

[Figure: two CPNs attached to Access Nodes and connected via an Edge Node; upstream packets are encapsulated with an outer source/destination address pair towards the Edge Node, while downstream packets carry only the original source/destination addresses.]

Figure 2-16. Upstream tunnelling. All traffic is routed by destination address. Upstream traffic is tunnelled up to the Edge Node. Downstream traffic is not tunneled.
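As an illustration of the upstream tunnelling approach of Figure 2-16, the following Python sketch encapsulates a customer packet in an outer IP header addressed to the Edge Node. It assumes IPv4, raw-socket privileges and IP-in-IP (protocol number 4) as the tunnel type; the addresses are illustrative and the checksum handling is left to the host stack, so this is a sketch, not a complete Access Node implementation.

import socket, struct

ACCESS_NODE_IP = "10.0.0.2"   # hypothetical tunnel source (Access Node)
EDGE_NODE_IP = "10.0.0.1"     # hypothetical tunnel endpoint (Edge Node)

def ipv4_header(src: str, dst: str, payload_len: int, proto: int = 4) -> bytes:
    """Build a minimal outer IPv4 header; proto=4 means IP-in-IP encapsulation."""
    ver_ihl, tos, total_len = (4 << 4) | 5, 0, 20 + payload_len
    ident, flags_frag, ttl = 0, 0, 64
    return struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len, ident,
                       flags_frag, ttl, proto, 0,          # checksum left at 0, filled by the stack
                       socket.inet_aton(src), socket.inet_aton(dst))

def tunnel_upstream(customer_packet: bytes, raw_sock: socket.socket) -> None:
    """Encapsulate a customer IP packet and send it towards the Edge Node.
    raw_sock is assumed to be socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)."""
    outer = ipv4_header(ACCESS_NODE_IP, EDGE_NODE_IP, len(customer_packet))
    raw_sock.sendto(outer + customer_packet, (EDGE_NODE_IP, 0))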

2.3.2.3 Position on peer-peer connectivity

The distinction must be made between business users and residential users.

Based on discussions within the consortium and feedback from the operators, the following positions have been decided:

• Business users require peer-peer connectivity at layer 2 (Layer 2 VPNs); this requires control and optionally separate accounting.


• For residential users there is no requirement for peer-peer connectivity at Layer 2 (i.e. locally in the Ethernet network model)

• For residential users it is required that peer-peer connectivity be possible at Layer 3 (and hence blocked at Layer 2). In the Ethernet network model this means forcing the traffic via the EN. In the IP network model, it means either local connection (in the case of IP routing, see 4.3.2, 4.3.3) or connection via the EN (in the case of IP forwarding, see 4.3.1).

2.3.3 Multicast and Multipoint Delivery

2.3.3.1 Rationale for multicast

Broadcast and multicast services are part of the revenue-generating triple-play service offer. Due to the existence of competing, highly efficient, inherently broadcast networks (such as satellite), the solution for multicasting should be as efficient as possible in terms of network use and hence dimensioning costs, and should exploit the inherent interactivity offered by the access and aggregation network (uplinks). Another point of attention is to aim for constant quality for the end-users, in order to avoid help-desk overload due to variable quality.

2.3.3.2 On connectivity

Connectivity is defined in DA1.2 [2], from the application-level point of view, as the ability to interact (locally or remotely) with other “humans” or entities (e.g. servers, machines, sensors) to send or retrieve information (including voice, messaging, email, video and other types of electronic content).

Each application imposes its own connectivity constraints, which may rely on the network capabilities and which must be taken into account when designing the network architecture. According to the number of endpoints and the direction in which information flows, the following groups can be identified:

• 1:1 When only two ends (one transmitter and one receiver) are involved in the communication process. They are also known as Unicast applications.

• 1:N When only one transmitter and several receivers are involved in the communication process. These applications would be used for Multicast/Broadcast purposes.

• N:1 When several transmitters and only one receiver are involved in the communication process. These applications are also known as Narrowcast applications.

• N:M When several transmitters and several receivers are involved in the communication process. VPN, multi-party video or audio-conferences and gaming would be inside this group.

These different connectivity requirements can be fulfilled by the network or by the application itself. If multicast facilities are provided by the network, bandwidth can be saved because information is replicated only where necessary, making multicast an efficient alternative to multiple unicast transmissions. Unlike unicast, where the same information is carried over the network multiple times, with multicast a single copy of the information is sent to a group address and reaches all recipients who want to receive it (together forming a multicast group). However, as multicast facilities are not widely deployed, current Internet applications do not use specific network mechanisms for delivering information to multiple recipients, and information is therefore replicated as many times as there are recipients.


2.3.3.2.1 Necessary conditions for multicasting

If an application wants to take advantage of multicast mechanisms, three conditions are necessary:

• Group learning method

End users belonging to the access network need a method to learn which multicast sessions are available.

• Group access method

End users need a method to send a request to join a multicast group.

• ANW multicast method

The access network needs methods to convey multicast flows efficiently to the end users.

2.3.3.3 Application-specific requirements on multicast functionalities

In order to derive the facilities that the network must provide, it is necessary to further analyse the different requirements of the applications.

The obvious requirement is that the application involves connectivity amongst multiple endpoints and that the pieces of information being transmitted are expected to arrive at around the same time. For example, content offered by a popular web server may be accessed by many users, but at different times; multicast mechanisms are therefore not useful for avoiding the replication of information that is repeatedly sent to different users. To prevent this kind of useless replication, other solutions based on content caching have appeared, for instance content distribution networks (CDNs). Basically, the idea consists of moving content closer to the end user.

A pragmatic requirement is that the application should not have strict data integrity requirements, that is to say, that the application is not elastic. The reason is that elastic applications require reliable transport protocols such as TCP, which manage retransmissions and adapt the transmission rate as a function of the network state and the available resources. Reliable and responsive multicast would therefore require managing the state of the different flows separately. Although it is possible to envisage multicast protocols that work with TCP (e.g. TCP-XMO, M-TCP, M/TCP, PRMP, SCE, ...), the complexity is so high that it is difficult to justify the adoption of explicit mechanisms and protocols to do this. The problem is even worse on the Internet, as it is not only necessary to define such mechanisms, but they must also be widely adopted and deployed in the network.

Therefore, a preliminary requirement for an application to use multicast mechanisms is that it transports inelastic flows to multiple recipients. Another requirement is that some of the paths to the final recipients are partially shared, so that enough bandwidth is saved by not duplicating the information unnecessarily. This criterion is the most difficult to evaluate: in the case of multi-conferences involving only a few partners, for instance, it may be difficult to justify the use of multicast capabilities. Amongst the different applications identified in DA1.2, the following are clear candidates for using multicast facilities:

• Broadcast TV/Radio


The broadcasting of TV/radio channels is the typical example of an application that would benefit from multicast capabilities, as it normally involves a huge number of recipients (a popular TV channel can be watched at the same time by several million people). The EPG is associated with the broadcasting of TV/radio channels, as it is used for obtaining information about the available channels offered by the Service Provider to its subscribers. This information will be transmitted using a specific multicast group known to all subscribers, which they can join whenever they want in order to see the programme information in real time.

Finally, although the main method for sending a participation request is for the end user itself to use IGMP to discover and join a given multicast flow, and given that broadcast servers will usually be located at the NAP/NSP/ASP premises, a different joining method could be envisaged (it remains to be checked whether this is possible with current IGMP implementations), such that normal users would not have to be aware of the multicast capabilities of the network. In this case, the multicast join could be performed by means of a proxy server. The interaction between the user and the proxy server could take place, for instance, through a web service provided by the service provider, or via specific signalling protocols (in the latter case it would be easier for the user to perform the multicast join request himself/herself).

• Near Video on Demand (NVoD)

Video on Demand is not suggested as a multicast application, as different users expect the same video to arrive at different times (they have ordered it at different times). NVoD, however, is a trade-off between video broadcasting and video on demand: the user can ask for a video whenever he wants, but the video is not served instantaneously; instead it starts at regular intervals (as if different channels with the same video were scheduled at different times), and hence multicast can be used for transporting these video flows. The associated use of a PVR will allow the user to manage the video programme in a more flexible way.

• Interactive TV (ITV)

Interactive TV is very similar to TV broadcasting. The difference is that the user can interact with the TV and ask for special content (for instance, different views of the same scene, different languages, comments by the film director, additional background information on programme content, quiz participation by answering a series of multiple-choice questions in real time using the remote control, links with real-time editorial information and so on). This interaction normally involves a unicast feedback channel to the ITV server, and the possibility of selecting amongst different multicast broadcast channels (views and languages would normally be transmitted via multicast). A specific requirement for ITV applications is that these different multicast flows (views and languages) must be well synchronised.

• Video/Audio Multi-conference

Multi-conference involves more than two recipients and does not require stringent data integrity, so it is a candidate for using multicast capabilities. However, current multi-conference systems are implemented using a central unit (bridge or MCU) that receives unicast flows from all the recipients and sends each of them either a different flow with a proper mix of the received flows (that is, to avoid echoes, a given recipient receives a flow composed of all the transmitted flows except its own) or a common flow with the mix of all the received flows (in this case, echo cancellers are needed at each recipient). In this case, multicast capabilities could be used to reach the different recipients from this central unit in a more efficient way. Another possibility consists of not using a central unit, that is to say, using a peer-to-peer approach, in which multicast could be used by every recipient to reach the rest of them. The benefit would mainly be not to waste resources in the access network, and especially in the first mile. The drawback is that users must be able to use the multicast capabilities of the access network, and the complexity of the solution is greater than in the centralised case. Finally, a special requirement of multi-conference applications is that the synchronisation of flows must be very tight, which is more complicated to manage in the peer-to-peer approach.

• Multiplayer Gaming

Multiplayer gaming requirements with regard to multicast are rather similar to those of multi-conferencing. Multiplayer gaming can be realised in a centralised way, using a common platform to which all players are connected, or on a peer-to-peer basis, exchanging the necessary information directly amongst the players. The centralised approach requires players to send unicast flows to the central platform, whereas the platform can use multicast capabilities to reach the players. In the peer-to-peer approach, the players themselves could use the multicast capabilities of the access network to minimise the consumption of resources. A special requirement of multiplayer gaming is that the synchronisation of players must be very tight, even more so than in multi-conference applications.

• Tele-learning

Several types of tele-learning can be envisaged. The first, basic e-learning, consists mainly of accessing different learning material via web browsing or via streaming (as with VoD), with minimal interaction with the instructor via e-mail or instant messaging; this does not need multicast capabilities. A second type, broadcast learning, consists of accessing specific pre-recorded learning video channels or even scheduled live speeches, again with minimal interaction with the instructor (via e-mail or instant messaging); this approach is basically the same as TV/radio broadcasting. A third approach is tele-teaching, which consists of an interactive session amongst multiple tele-students and a tele-instructor. This approach therefore has requirements similar to those of multi-conference applications implemented in a centralised way, where the central unit is at the instructor's premises.

2.3.3.4 Multicast complementary mechanisms for efficiency improvement

2.3.3.4.1 Content Distribution Networks

As previously stated, CDNs can be used for moving content closer to the end user, which can reduce the need for multicast capabilities. A common example is the distribution of broadcast video, where video head-ends are duplicated at NAP/NSP premises, thus avoiding the use of multicast capabilities outside the NAP/NSP premises and reducing the end-to-end delay. A balanced combination of CDNs and multicast capabilities is thus a trade-off between resource usage and complexity.


2.3.3.4.2 Multimedia Content Adaptation

The range of multimedia devices expected to access multimedia content, such as laptops, pocket PCs, PDAs, smart phones and TV browsers, is increasing, and network heterogeneity is therefore expected to grow. These diverse devices have different, and even adaptive (depending on the available resources), capabilities for receiving, processing and displaying multimedia content. Considering in addition the diversity of user interests and their different perceptions of quality, it is quite difficult for a Service Provider to adapt multimedia content to the different needs and capabilities of all kinds of devices, access networks and users.

It therefore seems logical to design a system able to provide some kind of universal access to multimedia content by adapting it to these different needs. This is even more important when designing a network that provides nomadic services.

Most content adaptation techniques aim at producing the optimal version of the content according to context information, such as device capabilities, network characteristics and user preferences, by adapting the bit-rate of the multimedia flow.

Some of these adaptation techniques are:

Simulcast is a technique that uses multiple versions of the stream, encoded at different bit-rates and hence delivering different quality levels. This technique is easy to implement but requires a lot of bandwidth, which is expensive; the number of stream versions is therefore limited in order to avoid too much redundancy. The server switches to the stream version that best matches the client's capacity. Some streaming protocols support dynamic switching among multiple streams; especially in this case, the application must be able to ask for different versions of the stream.

For some applications, simulcast can be a good solution. However, if the available bandwidth is not sufficient, the multiple multicast streams can disturb the network.

Trans-rating inside the network could offer an alternative solution. With trans-rating, only one multicast stream is sent to the AN or EN (BRAS). This node, with trans-rating capabilities, processes the stream (for instance by spatial-domain or temporal-domain processing) to adapt it to the required rate. The main advantage of this solution is that it saves bandwidth. The drawback is that the processing needed in the trans-rating nodes is computationally intensive, and hence expensive. If the rate is to be changed on the fly, the end-user application has to implement feedback mechanisms.

The purpose of the MUSE trans-rating study is to use this technique either inside the network, at the transition between the distribution network and the access network, or inside the access node or the BRAS. Typically, this innovative way of using trans-rating can significantly increase the audience of a TV over xDSL service and save bandwidth.

Scalable (layered) simulcast adaptation has been proposed as a solution to the bandwidth redundancy caused by simulcast. This approach is based on information decomposition: rate adaptation is achieved by adding or dropping enhancement layers that are transmitted according to the network conditions. The flow carrying base-layer data can be labelled with high priority, whereas flows carrying enhancement-layer data are labelled with progressively lower priorities. In case of congestion, packets containing data from an enhancement layer are dropped before those containing base-layer data. In this case, synchronisation amongst the flows is necessary, which can be a hurdle. Moreover, the application must be specifically designed to be able to request more or fewer enhancement layers.


An alternative to this scalable multicast approach would be to send a single multicast flow in which packets are marked according to the importance of the information they carry: packets transporting base-layer information are labelled with high priority, whereas packets with enhancement information are labelled with lower priority. The main drawback is that QoS mechanisms are required in the network to properly manage the packet priorities. Moreover, these mechanisms do not avoid some situations in which resources are wasted in the access network: if all the terminals connected to a given multicast flow can only handle the base layer, and the AN drops all low-priority packets, resources have been wasted in the Access Network by forwarding these useless low-priority packets down to the AN.

Another drawback of these two techniques is that they do not offer as much flexibility as the trans-rating approach.
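As an illustration of the per-packet priority-marking alternative described above, the following Python sketch marks base-layer and enhancement-layer packets with different DSCP values before sending them to a multicast group. The DSCP values, group address and port are illustrative assumptions, not MUSE choices; the sketch only shows the marking, not the video layering itself.

import socket

GROUP, PORT = "239.1.2.3", 5004        # hypothetical multicast group for the stream
DSCP_BASE, DSCP_ENH = 0x2E, 0x0A       # e.g. EF for the base layer, AF11 for enhancement data

def send_layered_packet(sock: socket.socket, payload: bytes, is_base_layer: bool):
    """Mark the packet according to the layer it carries, then send it to the group."""
    dscp = DSCP_BASE if is_base_layer else DSCP_ENH
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)   # DSCP in the upper 6 bits of TOS
    sock.sendto(payload, (GROUP, PORT))

# Usage: one socket, with the marking switched per packet depending on the layer.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_layered_packet(sock, base_layer_chunk, True)
# send_layered_packet(sock, enhancement_chunk, False)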

Simulcast
  Advantages: simple.
  Drawbacks: information redundancy; limited rate adaptation; the application must be aware if on-the-fly bit-rate change is desired.

Trans-rating
  Advantages: very flexible.
  Drawbacks: expensive; complex; feedback needed; the application must be aware if on-the-fly bit-rate change is desired.

Scalable simulcast
  Advantages: very efficient.
  Drawbacks: synchronisation needed; not so flexible; the application must be aware if on-the-fly bit-rate change is desired.

Alternative (single flow with priority marking)
  Advantages: simple; application independence.
  Drawbacks: waste of unused information.

Table 2-4: Comparison of different multimedia content adaptation techniques

2.3.3.4.3 Synchronisation requirements

Synchronisation can be achieved by the application itself or by delegating it to the network. In the first case, the application must use sufficiently long play-out buffers, which can be a serious hurdle for real-time applications. Therefore, at least for real-time multicast/simulcast applications, it seems more convenient for the network to assure relative synchronisation by guaranteeing certain maximum end-to-end delays and a given delay variation amongst the different flows. This requires specific QoS mechanisms in the network. Some kind of fair queuing seems appropriate for dealing with different associated multicast flows (as, for instance, with ITV multicast flows), whereas priority mechanisms, both for scheduling and for dropping, seem more appropriate for managing enhancement-information packets.

2.3.3.5 Application of multicast capabilities

Based on the previous sections, the following table sums up the main conclusions for the different target applications that have been studied. For each application, it is stated:


• Whether the end user is required to be able to act as a multicast server. If this is not required, solutions can be envisaged that avoid the user having to be multicast-aware (by using a proxy node that manages the multicast joins and reports).

• Whether loose or tight synchronisation mechanisms are required.

• Whether content distribution techniques can be applied in order to diminish the necessity of multicast mechanisms

• Whether content adaptation techniques can be used for adapting to the user's capabilities and preferences. It has been considered that multi-conference applications do not benefit from these mechanisms, as the terminals (or gateways) normally agree on the codecs and protocols to be used in the communication.

Application                            End user is        Synchronisation   Content         Content
                                       multicast server                     distribution    adaptation

TV/Radio Broadcasting                  Not required       Loose             Possible        Possible
Near Video on Demand                   Not required       Loose             Possible        Possible
Interactive TV                         Not required       Tight             Possible        Possible
Video/Audio Multi-conference (cent.)   Not required       Tight             Not possible    Not required
Video/Audio Multi-conference (p2p)     Required           Tight             Not possible    Not required
Gaming (cent.)                         Not required       Very tight        Not possible    Not possible
Gaming (p2p)                           Required           Very tight        Not possible    Not possible
Broadcast Learning                     Not required       Loose             Possible        Possible
Tele-teaching                          Not required       Tight             Not possible    Not required

Table 2-5: Multicast capabilities per application

In this way, three different stages for multicast mechanisms deployment can be defined.

In the first stage, common and well-known multicast techniques can easily be applied for the following applications: TV/radio broadcasting, NVoD and broadcast learning. The gains are well justified, as huge numbers of users are expected to access these services at the same time.

In the second stage, QoS mechanisms have to be available in the access network to guarantee the synchronisation constraints of the following applications: ITV, video/audio multi-conference (centralised approach), gaming (centralised approach) and tele-teaching. The gain would clearly be justified in the case of ITV, gaming and possibly tele-teaching. Multi-conference applications do not usually involve a very large number of users, so multicast might not be especially needed there.


In the third stage, the end user must be able to actively use the multicast capabilities provided by the access network. In this case, clear resource savings can be obtained for the following applications: video/audio multi-conference (p2p approach) and gaming (p2p approach). However, further business studies would have to be done: if multicast capabilities are offered for free, users would clearly use them, but if the user must pay for them, he/she would have no clear incentive to use a p2p version of an application that the NSP also offers in a centralised way.

It is also worth noting that a new paradigm is appearing that could be called p2pcast (or viralcast1). Here the p2p paradigm is used in an optimised way to avoid some of the unnecessary packet replication: a given user sends a copy of every packet destined for a distribution or multicast list only to a portion of the members of that list; these members then forward the packet they have just received to a different portion, and so on. In this way something is achieved that is more efficient than unicast (in terms of bandwidth waste, though not in terms of delay), but less efficient than multicast.
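As an illustration of this p2pcast fan-out idea, the following Python sketch delivers a packet to a member list by transmitting it only to the head of each portion, each head then forwarding to its own portion in the same way (simulated here by the recursive call). The portion size and the send() helper are illustrative assumptions.

def p2pcast(packet: bytes, members: list, send, fanout: int = 3):
    """Deliver `packet` to all `members` using recursive fan-out of width `fanout`."""
    if not members:
        return
    # Split the members into `fanout` roughly equal portions; the current node
    # transmits once per portion, and the first member of each portion continues.
    chunk = max(1, (len(members) + fanout - 1) // fanout)
    for i in range(0, len(members), chunk):
        portion = members[i:i + chunk]
        head, rest = portion[0], portion[1:]
        send(head, packet)            # one transmission per portion from this node
        p2pcast(packet, rest, send)   # `head` would forward to `rest` in the same way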

Moreover, it is also necessary to consider that CDNs can be used to bring the content closer and that multimedia content can be adapted in order to obtain a more efficient and flexible multimedia delivery. Further work on the analysis of the identified content adaptation methods and on the proper balance between these and multicast capabilities could be useful.

2.3.3.6 Multicast network requirement: IP multicast

As described above, multicast protocols are becoming more and more relevant in the Access Network, as a growing number of existing applications (broadcast TV) and future ones (tele-learning, interactive games, …) use these facilities and advantages. ATM multicast methods work in a connected, transmitter (source) based mode. Unlike IP multicast, where the recipient can itself create its connection to a multicast group and therefore participates in the construction of the multicast tree, in ATM multicast it is the transmitter (multicast source) that is responsible for creating the tree, taking into account the received recipient requests. Moreover, the IP layer is considered a significant, if not essential, layer because of the existing Internet.

That is why, whatever model the Access Network is based on, the multicast protocol must be IP based. The specific protocol requirements are described in Paragraph 3 for the Ethernet MUSE model and in Paragraph 4 for the IP MUSE model.

2.4 QoS Architectures

This chapter presents principles that define how QoS can be requested from a network, and how that network can be configured to provide the requested QoS.

QoS can be requested from the network in several ways, leading to several service models that are presented in the first part of this chapter. The second part presents how the network can manage the use of its resources efficiently. Finally, the third part focuses on the structure of the service enabler that will set up an IP flow with QoS.

1 This term is suggested because of the analogy with a viral attack. The organism (the operator) cannot do much to prevent this kind of behaviour, and the virus multiplies without the organism's consent. Moreover, if the organism devises a method to counter it, the virus mutates (another way of doing the same thing is found).


2.4.1 QoS architecture principles

2.4.1.1 How QoS is requested from the network

The QoS functional model architecture is based on the principle of separation between service control and transport network, as widely adopted for the Next Generation Network (NGN). QoS can be requested from the network in several ways, leading to four service models.

The first is the “user oriented model”, where the user separately requests the service from the service provider and the resources for that service from the network provider.

In this model, the A party performs a service request to the service provider. The service provider indicates in its response the QoS characteristics that are necessary for the application, but at that point network resources have not yet been reserved. The user then requests from the network the setup of a connection with these QoS characteristics. The most common way for a user terminal to request QoS from the network is user-network signalling. If the QoS request covers several network domains, the QoS request can be forwarded from one network to another, most likely with network-network signalling.

In the “service provider oriented model” represented in Figure 2-17, the user performs a service request to the service provider (1). Based on the needs of the application, the service provider requests QoS from the network provider (2). The service provider accepts the service request and sends a positive response to the user only when the network has successfully set up the resources for the application.

Figure 2-17 : Service provider oriented model

For a data path that covers several network providers, this model has two variants: either the service provider sends the QoS request to one network provider, which forwards it to the other network providers (3), or the service provider sends a QoS request to each network provider involved in the path.

The last two models are based on the idea that the network operator is able to analyse the application signalling exchanged between the user and the service provider. Of course, this is only valid if the application signalling is based on a standardized protocol (such as SIP, H.323 or RTSP). We refer to these last two models as “application signalling based models”. These principles are used in 3GPP specifications [37].


Figure 2-18 : Application signalling based model with policy push

In the “application signalling based model with policy push” shown in Figure 2-18, the user performs a service request to the service provider (1). The A and B parties (or their terminals on their behalf) negotiate the QoS requirements through application level signalling. The application signalling is intercepted and analysed by the network provider, which uses the information contained in the signalling to determine the QoS required by the application. The network providers check whether the QoS characteristics negotiated by the end users can be delivered by the respective access networks (2). This can be reproduced by every network operator on the data path, avoiding the use of inter-network exchanges to control resources. If an access network cannot deliver the requested QoS, the network provider can modify the QoS characteristics. In this case, it is up to the end users to decide if they accept or reject the modified QoS characteristics. After successful end-to-end negotiation, the network provider authorizes the use of the resources for the session (3). The resources in the network equipment are configured immediately after sending the authorization of QoS resources to the user (4).

The application signalling based model with policy pull is similar to the previous one in the way the application signalling is treated by the network. After successful end-to-end negotiation, the network provider authorizes the use of the resources for the session by sending an authorization token to the end-user terminal. The authorization token correlates the application level signalling and the transport layer signalling. Through transport layer signalling, the A party’s CPE requests from the network the setup of the IP media flow for the session. The signalling includes the authorization token received in the previous step. Using the authorization token, the edge router checks with the network resource control whether the requested IP media flow is indeed authorized. The network resource control confirms the authorization, and the edge node accepts the flow.

It was decided in the MUSE project to continue the studies on 2 out of these 4 service models: the service provider oriented model and the application signalling based model with policy push.


The user-oriented model, which implies too much complexity on the user's side, was removed. The policy-pull option (3GPP model) of the application signalling-based model (IMS model) was also removed, as the correlation between application and bearer signalling was considered not to be needed in fixed networks. The question was raised whether such a token concept could be used for another purpose, namely identifying the "session" across different provider domains, but as the token only has local significance this is not possible.

2.4.1.2 How the network manages its resources to provide QoS.

We consider that the aggregation network is based on Ethernet L2 switches. These switches do not need to be L3-aware for QoS; the goal is to leverage the available L2 QoS mechanisms in order to offer and manage IP service guarantees.

The resource control is based on pre-provisioned resources in the aggregation part of the NAP. The resources are organised by pre-configured “QoS pipes” running in the aggregation network. For each pipe, a certain bandwidth is reserved, and the network is configured such that the traffic entering the pipe will be delivered with a pre-defined QoS level, at least as long as the reserved bandwidth is not exceeded. Resource management is based on the following principles:

1. Building a view of the network’s static resources: The resources of the NAP are controlled by a centralised platform that has a view of all the resources of the network. In order to have this view, the network resource control platform should acquire information from the various network element managers (manager of access nodes, manager of aggregation nodes, manager of edge nodes).

2. Building a view of the allocation of IP addresses: In order to identify the path taken by an IP flow (in other words: route alignment), the network resource control platform should rely on the auto-configuration process of the end-user of the flow. The information needed to deduce route alignment consists of the user’s IP address, the user’s MAC address, the selected NSP, the originating access node and the line to which the user is attached (a small sketch of such a binding table is given after this list).

3. Building a view of the use of resources : for each link of the network, a view of all granted requests using that link is maintained.

4. Controlling the admission of new IP flow per individual request : The network resource platform acts as mediator between the business parties NAP and NSP/ASP. It receives new QoS requests from NSP/ASP. Based on the real time view of the use of resources in the network, it is able to decide for every new individual QoS request if it can be accepted or not.

5. Controlling the network elements (access and edge nodes) in order to configure the policing and shaping functions of those devices, either at the pipe level or at the IP flow level.
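As an illustration of principle 2, the Python sketch below (class and field names are invented for readability, not taken from a MUSE specification) shows the kind of binding table the resource control platform could keep, filled from the auto-configuration process and queried to locate a user's access node and line.

    # Minimal sketch of the "route alignment" view: bindings learned from the
    # auto-configuration process (e.g. DHCP/RADIUS) let a centralised resource
    # control map a user IP address onto the access node and line its flows use.

    from dataclasses import dataclass

    @dataclass
    class UserBinding:
        ip_address: str
        mac_address: str
        selected_nsp: str
        access_node: str
        access_line: str

    class RouteAlignmentView:
        def __init__(self):
            self._by_ip = {}

        def learn(self, binding: UserBinding):
            """Called when auto-configuration reports a new address allocation."""
            self._by_ip[binding.ip_address] = binding

        def locate(self, ip_address: str) -> UserBinding:
            """Return the binding for an IP address, so the path of its flows
            (access node, line, and hence aggregation links) can be deduced."""
            return self._by_ip[ip_address]

    if __name__ == "__main__":
        view = RouteAlignmentView()
        view.learn(UserBinding("10.0.1.23", "00:11:22:33:44:55", "NSP-A", "AN-3", "dsl-port-17"))
        print(view.locate("10.0.1.23"))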

A first approach, represented in Figure 2-19, consists of defining pre-provisioned pipes as VLANs that are set up between access node and edge node. Each VLAN pipe {access node – edge node} would be identified by a VLAN ID. Multiple VLAN IDs could be used for a single pair {access node – edge node} in order to differentiate in terms of service or QoS, but a more scalable alternative is to define a unique VLAN ID per pair {access node – edge node}. The p-bit would then indicate the QoS.


Figure 2-19. Centralised resource management based on pre-provisioned QoS pipes between access and edge nodes.

Bandwidth reservation per QoS pipe (defined by VLAN-ID + p-bit) can be achieved by shaping the traffic entering the aggregation network per VLAN ID and QoS class. Packets which exceed the Committed Information rate (CIR) are marked as best effort.
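The per-pipe marking rule can be illustrated with a simple token-bucket meter (a sketch only; rates and class labels are arbitrary examples, not MUSE-specified values): packets within the committed rate keep the pipe's QoS marking, packets above it are demoted to best effort.

    import time

    class CirMarker:
        """Illustrative token-bucket meter for one QoS pipe (VLAN ID + p-bit):
        packets within the Committed Information Rate keep their class, packets
        exceeding it are re-marked as best effort."""

        def __init__(self, cir_bps: float, burst_bytes: float):
            self.rate = cir_bps / 8.0          # token rate in bytes per second
            self.bucket = burst_bytes          # current tokens (bytes)
            self.burst = burst_bytes
            self.last = time.monotonic()

        def mark(self, packet_len: int) -> str:
            now = time.monotonic()
            self.bucket = min(self.burst, self.bucket + (now - self.last) * self.rate)
            self.last = now
            if packet_len <= self.bucket:
                self.bucket -= packet_len
                return "in-profile"            # keeps the pipe's p-bit marking
            return "best-effort"               # exceeds CIR: demoted

    if __name__ == "__main__":
        meter = CirMarker(cir_bps=2_000_000, burst_bytes=10_000)
        print([meter.mark(1500) for _ in range(10)])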

VLAN pipes are pre-provisioned with certain engineered resource budgets. These budgets can be adapted by a central traffic engineering function on a time scale of hours, or perhaps minutes, depending on the network load and state. When the capacity of a VLAN pipe needs to be adapted, the network resource control function interacts with the network elements (access nodes, edge nodes) in order to modify the traffic shaping of the pipe. The admission control scheme is based on these budgets.

The special case of point to multipoint streams with replication in the aggregation network must be highlighted. In order to reserve resources for these flows, the VLAN-pipe approach requires dedicated pre-provisioned point to multipoint VLANs over the aggregation network, with dedicated resources associated.

For the management of resources inside the VLAN, one can distinguish two cases, depending on whether the resources of the VLAN pipe are shared between the IP flows of several NSP/ASPs, or each VLAN pipe carries the traffic of a single NSP/ASP.

• If we consider a wholesale model where SLAs specify the bandwidth reserved to each access node for the individual ASP/NSPs (in other words, where the NSP/ASP has “bought” the pipes), then resource sharing between different pipes is not possible, except for best effort flows. Admission control would just count the resources reserved per VLAN pipe, and there would be no need for an interaction with the network equipment for every new IP flow, since the VLAN pipe is policed globally by the network operator.

• However, if the wholesale model allows the network operator to share the VLAN pipe between several NSP/ASPs, then every flow accepted by the admission control would probably require an interaction with network elements (AN, EN) in order to activate a policing function on the IP flows that enter the VLAN.

A second approach, represented in Figure 2-20, consists in defining the pre-provisioned pipes as the set of Ethernet links that constitute the aggregation network. The network resource control has a view of:


• the structure of the aggregation network : nodes (Ethernet switches) and links between those nodes, with the capacity of each link.

• the way VLANs are set up over the Ethernet aggregation network (route of each VLAN).

Figure 2-20. Centralised resource management based on the capacity of the links of the Ethernet aggregation network.

This approach uses the same connectivity mechanisms as previously, since VLANs can be set up between access nodes and edge nodes. However, from a resource control view, bandwidth resources are no longer associated with each VLAN, but with the physical link. This approach is well adapted to the case of an SLA where the NSP/ASP simply requests from the NAP the setup of IP flows with QoS for a certain duration. The NAP receives such requests from several NSP/ASPs and manages the shared resources of its network in order to satisfy all requests with a limited number of rejects.

When a new QoS request is received, the network resource control determines the path that the new IP flow will follow over the aggregation network, i.e. the list of links that the IP flow will use between EN and AN.

For each link of the path, it checks whether the capacity required for the new flow is available on that link. The new flow is accepted only if capacity can be reserved on all the links followed by the path. If one link of the path does not have sufficient bandwidth available, the QoS request is rejected.

When deciding if a new IP flow can be accepted on a link, the admission control takes into account the type of QoS request for the flow and the current load of the link. Real time flows will be accepted up to a certain percentage of the link load, while guaranteed flows without latency constraints will be accepted up to a higher load of the link.
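The following Python sketch illustrates this per-link admission check (names and the two class thresholds are illustrative assumptions, not MUSE-specified values): capacity and reservations are tracked per aggregation link, and a flow is admitted only if every link on its access-node-to-edge-node path stays under the threshold of the requested QoS class.

    # Illustrative per-link admission control for the second approach: resources
    # are tracked per Ethernet link of the aggregation network, and a new flow is
    # accepted only if every link on its AN-EN path can carry it.

    THRESHOLDS = {"real_time": 0.5, "guaranteed": 0.8}   # max fraction of link capacity (example values)

    class LinkAdmissionControl:
        def __init__(self, link_capacity_mbps: dict):
            self.capacity = dict(link_capacity_mbps)      # link id -> capacity (Mbps)
            self.reserved = {link: 0.0 for link in link_capacity_mbps}

        def admit(self, path: list, bandwidth_mbps: float, qos_class: str) -> bool:
            limit = THRESHOLDS[qos_class]
            # Check every link of the AN-EN path first ...
            for link in path:
                if self.reserved[link] + bandwidth_mbps > limit * self.capacity[link]:
                    return False                          # one link short of capacity: reject
            # ... and only then reserve on all of them.
            for link in path:
                self.reserved[link] += bandwidth_mbps
            return True

        def release(self, path: list, bandwidth_mbps: float):
            for link in path:
                self.reserved[link] -= bandwidth_mbps

    if __name__ == "__main__":
        cac = LinkAdmissionControl({"AN3-SW1": 1000, "SW1-EN1": 10000})
        print(cac.admit(["AN3-SW1", "SW1-EN1"], 400, "real_time"))   # accepted
        print(cac.admit(["AN3-SW1", "SW1-EN1"], 200, "real_time"))   # rejected: AN3-SW1 over 50%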

When the new flow is accepted, the network equipment (AN and EN) is controlled in order to open the policing gate for the new flow and to let the traffic go through the aggregation network.


This second approach has significant advantages. Statistical multiplexing is more efficient, since resources are managed as a whole and not divided into smaller parts. This is an important point for a network operator. In the VLAN-pipes case, there is a probability of rejecting flows due to exhaustion of resources in the VLAN, even if there is bandwidth remaining in another VLAN of the same link. Here, there is no such risk of misuse of resources, since the global capacity of each Ethernet link can easily be used by all the IP flows that need it, independently of their VLAN. This approach also saves operational costs, since there is no need for regular resizing of the VLAN pipes (the number of VLAN pipes can be very large for an operator having several thousand ANs, each connected to several ENs). Finally, it can be adapted to the reservation of point to multipoint IP flows.

This way of managing resources requires a more complex modelling of the aggregation network than in the former case, where bandwidth was associated with VLANs. Whether such a resource management principle can sustain thousands of requests per second is an important question. However, the limiting factor probably lies more in the capacity of the network equipment (access nodes, edge nodes) to be controlled in a highly dynamic way than in the complexity of the admission control algorithm.

A third approach to resource reservation consists in using network signalling instead of a centralized resource manager.

Signalling refers to the communication from one node to the next to reserve, for a certain flow, the resources attached to the node so that aspects such as QoS, bandwidth, etc. are met. In this method, each node in the path of the required flow reserves the necessary resources for each link until the end-to-end path resources are reserved. If the necessary resources are not available on one link, the node rejects the request. Resource management is thus distributed over all the nodes. In the most typical form of signalling, the first node in the path of the required flow reserves resources and signals the second node to reserve, the second node then signals the third node, and so on until the last node of the path is reached. The last node in the path usually signals to the initiating node that the path for the flow has been set up successfully. If the required resources are not available in any node, that node rejects the request and indicates to the previous node that the reservation has failed. The previous node then releases the resources and signals the failure to the node earlier in the path, until the initiating node receives the failure. A new attempt can then be made over another path, or at another point in time, for the flow. Protocols and standards that can be used for signalling include (G)MPLS, PNNI (using ATM signalling), RSVP, NSIS, etc.
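A protocol-agnostic sketch of this hop-by-hop behaviour is given below (it is not an RSVP, PNNI or NSIS implementation, only the reserve-and-roll-back logic described above; names and capacities are invented).

    # Hop-by-hop resource signalling sketch: each node on the path tries to
    # reserve locally and passes the request on; on failure the reservation is
    # torn down back towards the initiator.

    class Node:
        def __init__(self, name: str, free_mbps: float):
            self.name, self.free = name, free_mbps

        def reserve(self, mbps: float) -> bool:
            if mbps <= self.free:
                self.free -= mbps
                return True
            return False

        def release(self, mbps: float):
            self.free += mbps

    def signal_path(path: list, mbps: float) -> bool:
        """Reserve hop by hop; roll back already-reserved hops if any node refuses."""
        reserved = []
        for node in path:                         # RG -> Access Node -> switches -> Edge Node
            if node.reserve(mbps):
                reserved.append(node)
            else:
                for done in reversed(reserved):   # failure: release and report back
                    done.release(mbps)
                return False
        return True                               # last node confirms success to the initiator

    if __name__ == "__main__":
        path = [Node("RGW", 20), Node("AccessNode", 100), Node("Switch", 50), Node("EdgeNode", 1000)]
        print(signal_path(path, 10))   # True
        print(signal_path(path, 15))   # False: the RGW has only 10 Mbit/s left, nothing stays reserved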

There are two main scenarios for the signalling domain in Access Networks, depending on the role of the user equipment:

• In the first scenario, the signalling domain includes the user equipment. Signalling is generated directly by the user equipment, which requests QoS parameters from the next node on the path towards the access network, usually the residential gateway. The residential gateway then reserves the resources in the home network and on the interface to the access network, and signals the next node, the Access Node (e.g. a DSLAM). The Access Node reserves the necessary resources and signals the next node in the Access Network, and so on until the Edge Node is reached. Along the path, each node takes the decision whether to accept or to reject the request.


Figure 2-21. Example of resource reservation using signalling

• In the second scenario, the signalling domain does not include the home network. This reduces the complexity of the signalling in the Access Network. The principle is shown in Figure 2-21. First, the customer requests a service through application signalling (1), and the application requests the associated resources from the network (2). In order to reserve those resources in the network, we imagine that there is an action on equipment at the boundary of the aggregation network (i.e. the Access Node), and that this action triggers the sending of a signalling message that will set up the connection across the network with appropriate resources.

2.4.1.3 Structure of the service enabler that will set up an IP flow with QoS.

The structure of network control in an access network is shown in Figure 2-22. Only the resource control portion within the access network is shown; the resource control interfaces between the access network and the core network are out of scope here, although many solutions are being proposed by IMS/TISPAN.

The components of the access network within the MUSE domain consist of the following types of equipment: access nodes, access switches and edge nodes. We assume that each type of equipment has its own equipment controller mechanism for provisioning and controlling the resources.

The network management system is the centralized system which provisions the VLANs over the access network, between access node and edge node. It interfaces with the different equipment controllers for the appropriate configuration of each piece of equipment in the chain.

The resource mediation system has the responsibility to decide whether new IP flows can be accepted, based on the availability of resources in the access network. For this, the resource mediation has a database of resources that contains information given by the network management: description of VLANs, nodes, links and bandwidth per pipe. The resource mediation must also acquire a view of the allocation of IP addresses, in order to determine the path taken by an IP flow. This information is given by the auto-configuration servers, i.e. RADIUS or DHCP servers, which can indicate to the resource mediation on which line an IP address is used.


When new IP flows are accepted, the corresponding resources are decreased in the resource mediation database, and a request is sent to the nodes (access node and edge node) through the equipment controllers to initiate policing for the new flow. The flow policing mechanism could be based on source & destination IP addresses, port numbers & protocol id.

Figure 2-22: Components of QoS control in network provider’s domain.

The resource mediation offers an interface through which applications will be able to request resources from the network provider. This interface is intended to be simple, since it hides the network complexity from the services, and it must be usable by all types of services: data, conversational and multimedia services. This interface carries the parameters suitable for QoS control. All these parameters are denoted by the term “Service Level Specification” (SLS).

An SLS request can be defined using the following parameters:

• Scope of the SLS: Source and destination IP address of the endpoints within which the SLS takes effect.

• Flow ID: uniquely identifies the flow. It can be composed of one or more source/destination ports and/or a protocol identifier.

• Precedence: when it is not possible to satisfy all SLSs, those with lower precedence should be dropped in favour of those with higher precedence.

• Conformance parameters: the traffic description that the flow must conform to.

Through this interface, an application will be able to reserve network resources, delete a reservation, modify a reservation or locate a client based on the IP address.
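As an illustration of such an SLS request (the field names and values below are chosen for readability and are not taken from a MUSE interface definition), the four parameter groups could be represented as follows:

    # Illustrative representation of an SLS request on the resource mediation
    # interface; field names and values are examples, not normative.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FlowId:
        source_ports: List[int] = field(default_factory=list)
        destination_ports: List[int] = field(default_factory=list)
        protocol: str = "udp"

    @dataclass
    class Conformance:
        peak_rate_kbps: int
        mean_rate_kbps: int
        max_burst_bytes: int

    @dataclass
    class SlsRequest:
        source_ip: str            # scope: endpoints between which the SLS takes effect
        destination_ip: str
        flow_id: FlowId           # uniquely identifies the flow
        precedence: int           # lower-precedence SLSs are dropped first
        conformance: Conformance  # traffic description the flow must conform to

    request = SlsRequest(
        source_ip="10.0.1.23",
        destination_ip="192.0.2.10",
        flow_id=FlowId(source_ports=[5004], destination_ports=[5004], protocol="udp"),
        precedence=3,
        conformance=Conformance(peak_rate_kbps=2500, mean_rate_kbps=2000, max_burst_bytes=8000),
    )
    print(request)

The same structure could accompany the reserve, modify and delete operations mentioned above.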


2.4.2 Traffic classes in the network

2.4.2.1 Introduction

The approach chosen in MUSE for the definition of QoS classes is to provide well defined QoS classes which cover the relevant parameter ranges (delay, jitter, throughput), and to let the applications adapt to these classes.

Nevertheless, this “network as master” approach must take into account, where possible, the current needs and characteristics of today’s applications. That is to say, in order to efficiently provide a good, or at least acceptable, level of QoS to every application or service the end-user is using, it seems reasonable to group these services into different classes, so that their management remains under control. It also seems advisable to keep the number of classes low, so that management and network mechanisms can remain as simple as possible.

2.4.2.2 Current approaches

On the one hand, according to annex B of the 3GPP TS 22.105 standard [46], from a user’s perspective performance should be expressed by parameters that: focus on user-perceivable effects rather than their causes within the network; are independent of the network’s internal design; take into account all aspects of the service, from the user’s point of view, that can be objectively measured at the service access point; and can be assured to a user by the service provider(s).

On the other hand, the ITU Recommendation G.1010 [47] suggests the following classification depicted in the figure below, taking into account the target performance requirements in terms of delay, jitter and packet loss.

These two solutions can be considered as similar, except for some target performance parameters, since one applies to the mobile world (3GPP TS 22.105), while the other (ITU G.1010) does not. Both of them clearly distinguish between data and voice applications.

Furthermore, some authors address the need for a Network Control class. Besides, other approaches advocate an architecture with no classes of service, that is, one in which only a premium class would exist. This might be possible by combining admission control and fair queuing in what has been called a “cross-protect” router. This might be a very desirable result, since even managing two classes of service can be very complicated. Nevertheless, these ideas for flow-aware networking are not implementable in the short term, since routers do not yet allow per-flow admission control or scheduling. Further work will have to be done on this topic, especially on pricing mechanisms and scalability problems [49].

2.4.2.3 Traffic classes approach

The natural way to group telecommunication services is as a function of the traffic profile and the requirements of the traffic they generate. The main differentiators are:


• Elasticity level (elastic, inelastic): Elasticity level refers to the degree to which the original shape of the traffic can be modified. Not all applications have the same elasticity level. Normally, communication services aspire to preserve both data and temporal integrity. In order to establish the elasticity level of a given service/application, it is useful to assess which of the two integrities is more restrictive. We can thus distinguish between elastic and inelastic applications (or the traffic generated by those applications) depending on which of these two integrities is more relevant. If data integrity is more relevant (i.e. error intolerance), we have elastic traffic; if temporal integrity is the priority and there is greater error tolerance, we have inelastic traffic.

• Interactivity level (interactive, non-interactive): Interactivity level helps to emphasize (or not) the other, less relevant integrity. For instance, elastic traffic with a high interactivity level would be generated by an application where data integrity comes first, but where temporal integrity is also very relevant (for instance, e-commerce, web traffic, etc.).

• Service availability (standard, high): Availability is a very important consideration and of course must be used as an attribute to identify the different traffic classes. Indeed, in core networks (which are not in the scope of MUSE) it is one of the most important considerations to take into account.

Other requirements can generally be adapted to the current (and evolving) technological and economic constraints; further subdivision of traffic classes would be more artificial and dependent on the state of the art.

For purposes of compatibility with both ITU and 3GPP recommendations, it is highly recommended that both the elasticity and the interactivity differentiators be implemented, with the following matching :

Traffic class               | Terminology proposed in MUSE | 3GPP           | ITU
Elastic   - Non-Interactive | Best effort                  | Background     | Non-critical
Elastic   - Interactive     | Transactional                | Interactive    | Responsive
Inelastic - Non-Interactive | Streaming                    | Streaming      | Timely
Inelastic - Interactive     | Real Time                    | Conversational | Interactive

Table 2-6: Proposed traffic classes
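The matching of Table 2-6 can be captured in a simple lookup, shown below as a Python sketch (the tuple keys are the two differentiators; the class names are those of the table above).

    # Lookup of the class matching from Table 2-6:
    # (elasticity, interactivity) -> (MUSE term, 3GPP class, ITU-T G.1010 category)

    TRAFFIC_CLASSES = {
        ("elastic",   "non-interactive"): ("Best effort",   "Background",     "Non-critical"),
        ("elastic",   "interactive"):     ("Transactional", "Interactive",    "Responsive"),
        ("inelastic", "non-interactive"): ("Streaming",     "Streaming",      "Timely"),
        ("inelastic", "interactive"):     ("Real Time",     "Conversational", "Interactive"),
    }

    def classify(elasticity: str, interactivity: str):
        return TRAFFIC_CLASSES[(elasticity, interactivity)]

    if __name__ == "__main__":
        muse, tgpp, itu = classify("inelastic", "interactive")
        print(f"MUSE: {muse}, 3GPP: {tgpp}, ITU: {itu}")   # e.g. conversational voice/video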


2.4.3 3GPP/IMS-based architecture

2.4.3.1 Introduction

2.4.3.1.1 Rationale for using IMS in the MUSE access network

This section presents a QoS control architecture for the MUSE access network that is inspired by the QoS approaches and methods used in 3GPP mobile networks [35] and the IP Multimedia Subsystem (IMS) in particular. IMS has been developed by 3GPP for IP-based mobile networks. Obviously, there are fundamental differences between fixed and mobile access networks, so it is not possible to make a simple one-to-one mapping of IMS functions to the MUSE network. However, wherever possible, the elements and concepts of IMS are reused, as the use of IMS in fixed networks promises two important benefits:

• Fixed-mobile convergence. With IMS in both fixed and mobile networks, opportunities for fixed-mobile convergence immediately become apparent at different levels, such as

o Infrastructure: Sharing of network components and service enablers between fixed and mobile networks.

o Services: Offering one and the same service over fixed and mobile.

• Nomadicity support. IMS, being from the mobile world, inherently supports roaming and mobility.

In June 2004, ETSI and 3GPP jointly organized a workshop to identify potential relationships and to discuss common activities on IP Multimedia CN Subsystem (IMS) between 3rd Generation mobile standardization and the next generation fixed network standards makers [36].

2.4.3.2 Main principles of 3GPP/IMS-based architecture

The principles on which the QoS control architecture is based are:

• A central view is kept of all network resources in the access network.

• A central view is kept of all resources that are currently in use.

• Requests for network resources are accepted or denied individually and on request. Requests are only accepted if the network resources available at the time of the request are sufficient to meet the requested QoS.

• Requests for network resources can be made by end users and by application service providers.

• Resources are reserved after a request has been accepted and released after the session has finished.

• Requests, acceptance and reservation of network resources can be handled independently for the upstream and downstream directions. For example, the upstream and downstream directions can have different delays and packet loss.

The above principles apply to the QoS control within one access network. For the end user, only end-to-end QoS is relevant. Therefore, an additional principle is added that provides the link between QoS control in the access network and end-to-end QoS control:

• The networks involved in an end-to-end communication exchange information on the available resources among themselves and with the end users.


2.4.3.3 Description of the 3GPP/IMS-based architecture

Figure 2-23 shows the relevant elements in the MUSE reference network architecture to which IMS functions have been added. The IMS functions are based on the 3GPP specifications [37] and [38]. The Go and Gq interfaces ([39],[40]) are particularly relevant for QoS and resource control.

QoS control and resource reservation in the access network are performed by a combination of network elements: intelligent access and edge nodes in the transport layer, controlled by Policy Decision Functions (PDFs). The PDF2 maintains the overview of available resources in the network and contains the intelligence needed for Call Admission Control (CAC). The PDF controls the access and edge nodes in the transport layer over the Go interface. The access and edge nodes actually open and close the IP flows. All services and applications use this QoS control and resource reservation mechanism.

The service logic is located above the PDF in so-called Application Functions (AFs). The architecture allows for many types of AFs. A requirement on the AFs is that they must be able to communicate their QoS needs to the PDF over the Gq-interface.

The precise interaction between the end user’s CPE, the AF and the PDF depends on the service model (see section 2.4.1.1). The 3GPP IMS architecture can be used to implement all service models in that section. Depending on the service model, the type of AF and the interactions of the AF with the CPE will be different. Although the focus of MUSE is on elements and functions below the Gq interface, it is necessary to analyze the impact of different service models on QoS resource control. This is done in sections 2.4.3.4 to 2.4.3.6.

The configuration of services supported by the CPE and a number of other tasks are performed by the Automatic Configuration Server (ACS).

2 In the ETSI TISPAN architecture, the PDF is called Resource and Admission Control Subsystem (RACS). The RACS is still under development, with similar, but not necessarily identical, functions as the PDF from 3GPP IMS.


Figure 2-23: Main elements of 3GPP/IMS-based architecture for QoS and resource control.

2.4.3.4 Application signalling based model

In the 3GPP/IMS based implementation of this service model, the end user sends application level signalling that includes the requested QoS to an Application Service Provider (ASP). The service logic of the ASP is located in a Call and Session Control Function (CSCF)3. The CSCF interprets the application level signalling exchanged by the end users and determines QoS resources being requested. The CSCF passes the request on to the PDF over the Gq interface. The CSCF also communicates the decision by the PDF to the end users through the application level signalling. The QoS control mechanism uses a dedicated, pre-provisioned signalling channel to transfer the service and QoS requests from the end users and ASPs. 2.4.3.4.1 Set up of end-to-end session

The functions of the various network elements are illustrated in Figure 2-24 through the setup of a communication session between two end users A and B, connected to separate access networks. Before the end-to-end session can be set up, the CPE and transport network need to be configured. This configuration is described in [7].

Figure 2-24 : Set-up of end-to-end session in the application signalling based model.

In the example, end user A initiates the setup of the session. The communication session could, for example, include a videoconferencing service. The description focuses on the events in the access of user A. The events in the access of user B are similar.

1. End-to-end negotiation of the characteristics of the session, including the QoS characteristics such as bandwidth and delay. The end users A and B (or their terminals on their behalf) negotiate the QoS requirements through application level signalling. The PDFs are also involved in this: they check if the QoS characteristics negotiated by the end users can be delivered by the respective access networks. This check is based on three types of information:

3 In the ETSI TISPAN architecture, the CSCF and a number of other functions are grouped together under the name “IP Multimedia Subsystem”.


• View of all the (preconfigured) network resources

• View of the resources already in use for other sessions

• User profile and subscription information

If an access network cannot deliver the requested QoS, the involved PDF can modify the QoS characteristics. In this case, it is up to the end users to decide if they accept or reject the modified QoS characteristics.

2. Authorization of QoS resources. After successful end-to-end negotiation, the PDFs authorize the use of the resources for the session. There are two authorization mechanisms, depending on the mechanism used to set up the IP media flow in the next step.

Method A. Pre-provisioned bitpipes for IP user data flows

The PDF immediately reserves the resources in the pre-provisioned pipe.

Method B. Signalling for user data flows

After successful end-to-end negotiation, the PDF authorizes the use of the resources for the session by sending an authorization token4 to the end user’s terminal.

3. Establishment of IP media flow. Depending on the architecture of the transport layer, the actual IP flows for the user data are established through one of the following two methods:

Method A. Pre-provisioned bitpipes for IP user data flows

User A’s CPE seizes the reserved resources. The access and/or edge node recognizes the CPE (and its right to seize the resources) based on transport layer parameters. These parameters depend on the transport network type that is present. This method is appropriate if there is no bearer layer signalling in the transport layer. It is worked out for an Ethernet transport network in section 3.3.1.

Method B. Signalling for user data flows

Step 1. IP media flow request. Through signalling, user A’s CPE requests the setup of the IP media flows for the session from the intelligent edge node. The signalling includes the authorization token received in step 2.

Step 2. IP media flow check. Using the authorization token, the Access Node and/or Edge Node check with the PDF if the requested IP media flow is indeed authorized.

Step 3. IP media flow confirmation. The PDF confirms the authorization and the intelligent router accepts the IP media flow request.

4 In 3GPP-IMS the GGSN (comparable with the EN in the MUSE architecture) uses an authorization token to correlate the new data stream with the authorized request from the authorization phase. In 3GPP-IMS the token is also used for the establishment of a new bearer layer between the terminal and the GGSN. Since the data stream in the MUSE architecture is not conveyed over a new bearer layer, the token is not needed to fulfil this function. Instead, other parameters can be used by the EN to correlate the data stream with the signalling request from the authorization phase, such as source and destination IP address and port numbers. Using a token based on already existing parameters instead of a new, separate token makes the process in the access network less complex.


This method is appropriate if IP flows in the transport layer are set up through bearer-layer signalling. In this case, there is a need for correlation of application level signalling and bearer layer signalling.

2.4.3.4.2 Introduction of roaming and nomadism

The implementation outlined in the previous section can be extended to include nomadic terminals and services. As in 3GPP IMS [37], the CSCF would be split in three parts, see Figure 2-25:

o the Proxy CSCF (P-CSCF) in the visited network, which interfaces with the PDF in the visited network over the Gq interface.

o the Interrogating CSCF (I-CSCF) in the home network

o the Serving CSCF (S-CSCF) in the home network.

The resources in the visited access network are controlled by the PDF and P-CSCF in that network. The service control is performed by the S-CSCF in the home network. The S-CSCF is reached through the I-CSCF, which performs a rather straightforward relay function.

Figure 2-25. Generalization of architecture for roaming, based on 3GPP IMS roaming.

2.4.3.5 User oriented model

From the QoS resource control point of view, a 3GPP/IMS based implementation of this model is the same as the one in the previous section for the application signalling based model. The separate exchange for the service request has no impact on QoS resource control.


2.4.3.6 Service provider oriented model

In the service provider oriented model, the end user CPE only makes a service request. The CPE does not explicitly request QoS as is the case in the user oriented model and the application signalling based model. Instead, the ASP derives the QoS request from the service request. It is assumed that the service provider oriented model is used in combination with pre-provisioned bitpipes in the transport network, as in method A from section 2.4.3.4.1.

Figure 2-26. Set up of QoS connection in 3GPP/IMS implementation of service provider oriented model.

Figure 2-26 illustrates the set up of a connection with QoS in a 3GPP/IMS based implementation of the service provider oriented model. There are basically three steps:

1. Service request. The end user requests a service from the ASP.

2. QoS resource determination and reservation. The ASP derives the desired QoS from the service request and checks the availability of the corresponding QoS resources with the PDF. If the resources are available, the PDF immediately reserves the resources in the pre-provisioned pipe.

3. QoS resource seizure. User A’s CPE seizes the reserved resources. The access and/or edge node recognizes the CPE (and its right to seize the resources) based on transport layer parameters. These parameters depend on the transport network type that is present.

2.4.3.7 Mapping to business roles from D A1.1

Figure 2-27 shows the mapping of the network elements to the business roles defined in DA1.1.


• The Connectivity Provider (CP) is responsible for end-to-end connectivity between the CPE and the Network Service Provider (NSP) and/or Application Service Provider (ASP) network. The CP manages the Edge Node, using the Policy Decision Function (PDF) to control connections that demand a certain QoS.

• The Packager has a central role at a higher level, by making SLAs with the other roles. The packager is also the contact point for the Customer and is therefore the logical role to operate the Automatic Configuration Server (ACS) that is used to configure the higher layer services in the CPE.

• The control of the QoS resources in the NAP transport network is performed by the PDF in the CP domain. Since the “ownership” and the control of the resources are intimately related, it is assumed that the NAP and CP roles are fulfilled by one business entity. Therefore, the Go interface can be an intra-operator interface: it does not cross the boundary between two business entities.

• The Gq links the service logic to the QoS resource control. Since multiple business entities need to be able to offer services to the end user across one access network, the Gq interface needs to be an inter-operator interface: it needs to cross boundaries between different business entities.

Figure 2-27: Mapping of IMS-based architecture to DA1.1 business roles.

2.4.3.8 Services supported by 3GPP/IMS-based architecture

The services that can be supported by the architecture outlined in the previous sections are determined by the capabilities of the transport network and the intelligence built into the application servers. The 3GPP/IMS based architecture can be used in combination with many variants:


• Through the Gq interface, different Application Functions (AFs) can be connected to the QoS resource control. These AFs can be SIP based (as is the case for the CSCF in IMS) but they can also be based on other protocols or concepts. The QoS and resource control elements do not introduce limitations in this respect.

• The 3GPP/IMS-based resource control mechanisms work with transport networks that have bearer-layer signalling (as is the case in 3GPP radio access networks) and with transport networks without bearer-layer signalling.

2.4.3.9 Next steps

The overall goal is to define a stable MUSE IMS architecture that includes all the issues currently defined in 3GPP Release 6. The MUSE IMS architecture will be described in terms of adaptations of the 3GPP IMS architecture. For each 3GPP component, it needs to be decided whether it can be re-used “as is” or whether adaptations are needed.

2.5 AAA Architectures

2.5.1 Auto-configuration in a multi-provider environment

2.5.1.1 Introduction

Both PPP and DHCP are described in detail in other documents [4], so only a brief summary is shown here. The purpose here is to further work out the case of DHCP and its interaction with RADIUS servers for the AAA (Authentication, Authorization and Accounting).

In an All-IP/All-Ethernet scenario, which is the scope of MUSE, DHCP can be considered as the “natural” solution, while the legacy scenario in the case of xDSL access is PPPoE, with ATM over the xDSL link.

The scope of MUSE comprises a multi-service and multi-provider scenario, so the end-user can subscribe to multiple services, each with a different ISP/ASP. ISPs and ASPs are located “behind” the Access Nodes and Edge Nodes.

In the legacy scenario, multiple PVCs, one PVC per service, are not a feasible solution due to the configuration and provisioning complexity. A solution is needed that allows access to multiple services, provided by different ASPs, through only one PVC.

Similarly, in a MUSE scenario, multiple VLANs, one per service, are also a complex solution. A solution is needed that permits access to multiple services offered by different ASPs using only one VLAN (for the same reason as for ATM).

In the case of PPPoE, AAA is performed by means of RADIUS. The dialogue between the RADIUS server and the Edge Node (or the Access Node, if it is IP aware) permits:

• Session accounting.

• Configuration of IP filters and firewalls per end-user.

• Configuration of QoS profiles per end-user.

• VPN selection.


Nowadays, Service Selection Portals interact with RADIUS servers and permit the creation of multiple service sessions with different Service Providers over a single PPPoE (or IPoE) end-user session, with a single IP address:

• Each service session generates its own accounting records, so each service provider can charge the end-user for its service.

• Specific IP filters and QoS profiles can be applied for each service session.

The next figure shows the way the BRAS links the end-user PPPoE session with the different sessions associated with the services invoked by the same end-user.

Figure 2-28: Relationship between service session and PPPoE session.

It would be advisable for both mechanisms, PPP and DHCP, to offer the same capabilities from the end-user and ASP perspectives. For that reason it will be necessary to combine DHCP (for L3 auto-configuration, instead of PPP) with RADIUS, and to find a way to create multiple service sessions in the non-PPP scenario.

Another option included in this document is the adaptation of 3GPP’s IMS architecture to the MUSE network. Nevertheless, some adaptations must be made in order to completely fit the MUSE objectives.


2.5.1.2 Authentication mechanisms

First of all, the Access Node (AN) can be an IEEE 802.1X authenticator, since IEEE 802.1X is the mechanism through which an Ethernet switch (i.e. the Access Node in the scope of MUSE) can authenticate the identity of the end-user before providing layer 2 network access, using RADIUS as the Authentication Service as specified in RFC 3580.

IEEE 802.1X was originally intended for switched Ethernet networks, but during the standardization process it has been adapted for use in shared-media Ethernet environments, such as WLAN or cable broadband access systems. To circumvent port "piggybacking" in shared-media Ethernet environments, the Access Node will have to be able to partition a single physical access port (e.g. an xDSL port in an Access Node) into multiple distinct logical ports. This allows each device (e.g. an end-user PC) on the segment to be mapped to its own logical port, with each logical port authenticated independently. So the IEEE 802.1X model can be used in both cases: routed and bridged residential gateways.
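The logical-port bookkeeping can be pictured with the toy model below (this is not an IEEE 802.1X implementation and the names are invented): authentication state is kept per (physical port, MAC address) pair, so each device behind a shared-medium port is authorized independently.

    # Toy model of logical ports on a shared-medium access port: state is kept
    # per (physical port, MAC address) instead of per physical port, so each
    # device is authorized independently. Only the bookkeeping idea is shown.

    class LogicalPortTable:
        def __init__(self):
            self._state = {}   # (physical_port, mac) -> "authorized" | "blocked"

        def radius_result(self, physical_port: str, mac: str, access_accept: bool):
            """Record the outcome of the RADIUS exchange for one device."""
            self._state[(physical_port, mac)] = "authorized" if access_accept else "blocked"

        def may_forward(self, physical_port: str, mac: str) -> bool:
            """Only traffic from devices whose logical port is authorized passes."""
            return self._state.get((physical_port, mac)) == "authorized"

    if __name__ == "__main__":
        table = LogicalPortTable()
        table.radius_result("xdsl-7", "00:11:22:33:44:55", access_accept=True)
        table.radius_result("xdsl-7", "66:77:88:99:aa:bb", access_accept=False)
        print(table.may_forward("xdsl-7", "00:11:22:33:44:55"))  # True
        print(table.may_forward("xdsl-7", "66:77:88:99:aa:bb"))  # False: piggybacking blocked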

The Access Node provides the end-user’s profile to a RADIUS server, which sends either an Access-Accept or an Access-Reject in response to the Access-Request. Based on the reply of the RADIUS server, the Access Node then allows or denies access to the requesting device. This mechanism can be used in either a PPP or a non-PPP scenario, so it is a good choice for a smooth migration.

2.5.1.3 User and Service auto-configuration

Several options involving DHCP/PPP and RADIUS were proposed in MA2.3 [4]. For L3 auto-configuration in an evolving scenario, it seems reasonable to use DHCP for IP address assignment, but this solution has other problems. For instance, linking L2 (access) and L3 (IP) information is not easy, and it is an important part of the auto-configuration process. Some solutions involve RADIUS database sharing, or even DHCP modifications to make this task possible.

DHCP is the classical solution in a LAN/MAN environment, and PPP/RADIUS the classical solution for a dial-in environment. The latter option is the one that has been deployed by many carriers.

If carriers try to sell new broadband services and do not want to follow the flat-fee billing model, DHCP may not be enough, because it does not allow the edge node behaviour to be changed (all the information is for the end-user client and is not processed by the edge node). DHCP is good enough for IP address provisioning (L3 auto-configuration), but it has to be combined with other solutions for accounting, multi-provider scenarios and edge node behaviour for dynamic service subscription.

A combination of DHCP (IP address assignment) and RADIUS (AAA and dynamic service subscription), or alternatively a new protocol or an external platform that controls edge nodes, would be a good solution. RADIUS is one of the most widely deployed protocols for AAA, and it is continuously evolving to include new capabilities such as IP filtering and IP QoS. Some vendors are therefore now developing interworking mechanisms between the BRAS’s DHCP server and the BRAS’s RADIUS client for AAA purposes.

The general idea is to make DHCP work with RADIUS in order to have the same functionality as in the current PPP/RADIUS model. This will facilitate a later migration from the PPP scenario to the DHCP scenario.

Although different options can be proposed for DHCP and RADIUS integration, all of them involve problems.


For Layer 3+ (service) auto-configuration an Auto-Configuration Server (ACS) is used, although other options will be presented in this document, including an ACS in an IMS-like architecture. This Layer 3+ auto-configuration must also be connected with Service Selection Portals, so a solution to this must be provided.

2.5.2 Control Plane options

The problem is illustrated by three different approaches towards a solution (which were also discussed in MA2.5 [5]). Each has shortcomings, but they clarify the technical mechanisms for interaction between DHCP and RADIUS entities.

2.5.2.1 Option #1

The Access Node is an IEEE 802.1X authenticator and also includes RADIUS client and DHCP relay agent features. When the end-user tries to gain access, his/her profile is checked in a RADIUS server located in the NAP network. This RADIUS server can, however, be a proxy RADIUS that forwards the Access-Request message to the RADIUS server of the respective NSP. Using this scheme, the NSPs’ RADIUS servers can collect all the information related to their respective end-users, and there is no need to correlate the NSP’s RADIUS records with the NAP’s RADIUS records.

Once access has been allowed, L3 auto-configuration will be performed using DHCP. There are two possibilities:

• The DHCP server is located in the NAP network. In this case, the DHCP server will manage different IP address pools, one pool per NSP, and the NAP’s DHCP server will have to assign an IP address from the pool of the respective NSP.

• The DHCP server is located in the NSP network, at the Edge Node or in a different platform. In this case, the Access Node DHCP relay agent has to redirect the DHCP request to a specific DHCP server, depending on the NSP. How can this redirection be performed? Would it be done using different VLANs in each Access Node, one per NSP connected to the NAP? This option makes provisioning more difficult, and so it seems to be a complex choice. Perhaps a better solution is to have the IP address of the DHCP server sent by the RADIUS server to the RADIUS client in the Access Node. The RADIUS client informs the DHCP proxy of the DHCP server address, and the DHCP proxy enforces the use of the correct DHCP server in the DHCP message exchange. This implies using a RADIUS vendor-specific attribute (#26) that includes the DHCP server IP address (a wire-format sketch is given below).
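As an illustration, the sketch below builds such a vendor-specific attribute at the RFC 2865 wire-format level; the vendor ID and the sub-attribute number used here are placeholders, not assigned values.

    # Sketch of a RADIUS Vendor-Specific Attribute (type 26, RFC 2865 layout)
    # carrying a DHCP server address. Vendor ID and sub-attribute number are
    # placeholders chosen for illustration only.

    import socket
    import struct

    def vendor_specific_attribute(vendor_id: int, vendor_type: int, value: bytes) -> bytes:
        """Build attribute 26: Type | Length | Vendor-Id | Vendor-Type | Vendor-Len | Value."""
        sub = struct.pack("!BB", vendor_type, 2 + len(value)) + value
        body = struct.pack("!I", vendor_id) + sub
        return struct.pack("!BB", 26, 2 + len(body)) + body

    if __name__ == "__main__":
        dhcp_server = socket.inet_aton("192.0.2.53")   # example DHCP server address
        attr = vendor_specific_attribute(vendor_id=99999, vendor_type=1, value=dhcp_server)
        print(attr.hex())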

The main problem with this option is how to learn the IP address assigned to the end-user. As the IP address is assigned by the DHCP server, the DHCP server itself would have to include a RADIUS client that sends a message with the assigned IP address (attribute #8, Framed-IP-Address). How can the RADIUS server link this information with the L2 access records?

The RADIUS client in the Access Node will have to include the NAS-Port (RFC 2865) and NAS-IP-Address (RFC 2865) RADIUS attributes in the Access-Request messages in order to provide enough information to identify the physical port to which the end-user is connected. This requirement is necessary to control end-user nomadism. Another option could be the insertion of DHCP option #82 by the Access Node.


Figure 2-29: Option #1 for DHCP and RADIUS interaction.

2.5.2.2 Option #2

It is similar to option #1. The Access Node is also an IEEE 802.1X authenticator and also includes RADIUS client and DHCP relay agent features. When an end-user tries to gain access, his/her profile is checked in a RADIUS server located either in the NAP network or in the NSP network (in the latter case, the NAP's RADIUS proxy forwards the request to the NSP's RADIUS server).

Once access has been allowed, L3 auto-configuration will be performed using DHCP. In this case, the DHCP server is located in the NSP network. The Access Node DHCP relay agent has to redirect the DHCP request to a specific DHCP server, depending on the NSP. Again, the same problem explained in the previous option appears: how can this redirection be performed? Would it be done using different VLANs in each Access Node, one per NSP connected to the NAP?

And there are two new problems:

• It is necessary to define the interaction between the Edge Node DHCP server (or relay agent) and the Edge Node RADIUS client. The DHCPREQUEST received by the DHCP server must be translated by the RADIUS client into an Access-Request message. This interaction would also have to include the way to report the IP address assigned by the DHCP server.


• In a multi-provider scenario where the NAP and NSP do not have a relationship as associates, how can both carriers correlate their respective RADIUS records? One possibility is a unique transaction ID generated by the RADIUS server and sent to the proxy/RADIUS client in the Access Node. This ID can then be included in the DHCP message(s) and used later to correlate the DHCP and RADIUS transactions. This, again, implies using a Vendor-Specific RADIUS attribute and possibly modifying the DHCP protocol.

And again, it is necessary to include additional information to control nomadism:

• Access Node RADIUS client can include NAS-Port and NAS-IP-Address RADIUS attributes in the Access-Request messages to control nomadism.

• Access Node DHCP relay agent can include DHCP Option #82.

Figure 2-30: Option #2 for DHCP and RADIUS interaction.

2.5.2.3 Option #3

The Access Node must implement the "RADIUS Attributes Sub-option for the DHCP Relay Agent Information Option" (IETF draft-ietf-dhc-agentopt-radius-08.txt, which expires on February 16th 2005).

The RADIUS Attributes sub-option for the DHCP Relay Agent Information option provides a way in which an Access Node can pass attributes obtained from the NAP's RADIUS server to the NSP's DHCP server.


At the successful conclusion of IEEE 802.1X authentication, the RADIUS Access-Accept message received by the Access Node includes attributes for service authorizations. The Access Node can store these attributes locally and, when it forwards DHCP messages to the NSP's DHCP server, can send them in a RADIUS Attributes sub-option.

The RADIUS Attributes sub-option is another sub-option of the Relay Agent Information option (IETF RFC 3046, option #82). These sub-options can be sent using option #82 in the DHCPREQUEST messages forwarded by the Access Node DHCP relay agent to the DHCP server (an encoding sketch follows).
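A minimal, assumption-based sketch of the encoding: RADIUS attributes received in the Access-Accept are packed as TLVs inside the RADIUS Attributes sub-option, which is itself carried in option 82 (RFC 3046). The sub-option code 7 is the value later standardised for this sub-option; the draft cited above may use a different code, and the example attribute values are invented.

```python
import struct

RELAY_AGENT_INFO = 82        # DHCP option code (RFC 3046)
RADIUS_ATTRS_SUBOPT = 7      # assumed sub-option code for the RADIUS Attributes sub-option

def radius_attr(attr_type: int, value: bytes) -> bytes:
    # RADIUS attribute TLV: type (1 byte), length incl. header (1 byte), value
    return struct.pack("!BB", attr_type, len(value) + 2) + value

def build_option82(radius_attrs: bytes) -> bytes:
    # Wrap the RADIUS attributes in the sub-option, then in option 82 itself
    subopt = struct.pack("!BB", RADIUS_ATTRS_SUBOPT, len(radius_attrs)) + radius_attrs
    return struct.pack("!BB", RELAY_AGENT_INFO, len(subopt)) + subopt

# Example: forward User-Name (#1) and Class (#25) obtained at 802.1X time
payload = radius_attr(1, b"[email protected]") + radius_attr(25, b"gold-profile")
option82 = build_option82(payload)
```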

The shortcoming of this solution is that only a few RADIUS attributes are considered:

• Attribute #1: User-Name (RFC 2865).

• Attribute #6: Service-Type (RFC 2865).

• Attribute #25: Class (RFC 2865).

• Attribute #26: Vendor-Specific (RFC 2865).

• Attribute #27: Session-Timeout (RFC 2865).

• Attribute #88: Framed-Pool (RFC 2869).

• Attribute #100: Framed-IPv6-Pool (RFC 3162).

None of these attributes, excluding perhaps Vendor-Specific, provides enough information about the IP address assigned to the end-user, nomadism control or service accounting.

In fact, the goal would be for the DHCP server to provide the AAA server with all the information the latter needs for service session accounting. The mentioned draft, however, specifies a way to send information provided by the AAA server to the DHCP server, i.e. the opposite direction.

2.5.2.4 Conclusions

All the options include elements that can be used to build the ideal solution:

• Elements from option #1 which have to be taken into account:

• The use of a RADIUS proxy server as a way to route the Access-Request messages sent by the BRASs located at the edge of an IP backbone to the suitable RADIUS server is common practice.

• Option #1 with the DHCP server located in the NAP domain is possible. The only requirement is that the different NSPs allow the NAP's DHCP server to manage their own IP address pools.

• Option #1 with a DHCP server located in each NSP connected to one NAP (either in the Edge Node or in a different platform) involves the same problem pointed out in options #2 and #3.

• Element from option #2 which has to be taken into account:

• The DHCP server has to include a RADIUS client.

• Element from option #3 that has to be considered:


• The Access Node's DHCP relay agent has to implement the IETF draft "RADIUS Attributes Sub-option for the DHCP Relay Agent Information Option". The DHCP server (in the Edge Node or in a different platform) will receive RADIUS attributes from the DHCP relay agent (i.e. the Access Node), such as the User-Name attribute. The RADIUS client of the DHCP server will append these attributes to its requests to the RADIUS server, and so the RADIUS server will be able to link L2 access records with L3 auto-configuration records.

So far, the analysis has looked at the way DHCP and RADIUS can be combined for L3 auto-configuration in an Ethernet scenario (the scope of MUSE), considering a multi-provider environment (NAP and NSP owned by different companies that are not associates) and nomadism. But this solution has problems (as mentioned before), such as the distribution of the customer's IP address into the NAP database (connecting Layer 2 information with Layer 3 configuration); in a multi-provider scenario with nomadism this is not a trivial problem. The "one-step configuration mechanism" defined in the next section tries to resolve these problems.

2.5.3 One Step Configuration

As described, the problem of linking Layer 2 and Layer 3 information arises from their separate configuration processes: Layer 2 is configured first and then Layer 3 starts. This proposal performs the full configuration in a single step, as depicted in Figure 2-31.

2.5.3.1 Solution with DHCP server in the AN

As in the previous options, the Access Node must be an 802.1X authenticator and must include a RADIUS client, but it must also act as the DHCP server (or DHCP relay agent, see further). First, the 802.1X process takes place, and the Access Node sends a RADIUS Access-Request message. This request is directed to a NAP RADIUS server, which is a proxy server that forwards the request to the next RADIUS server (which can be another proxy as well). The NAP AAA proxy server redirects Access-Request messages depending on the domain included in the end-user identity (customer@domain), as sketched below. The request contains the NAS-IP-Address and NAS-Port attributes, so the customer's physical access point is precisely located and nomadism can be supported. The NAS-IP-Address will also play an important role in Layer 3 configuration.
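The following is a minimal, illustrative sketch (not from the deliverable) of how a NAP AAA proxy could route an Access-Request to the proper NSP RADIUS server based on the realm part of the User-Name, while carrying NAS-IP-Address and NAS-Port for line identification. The server table and addresses are invented.

```python
NSP_SERVERS = {                       # example realm-to-server mapping
    "nsp-a.example": "192.0.2.10",
    "nsp-b.example": "192.0.2.20",
}

def route_access_request(user_name: str, nas_ip: str, nas_port: int):
    """Return (next-hop RADIUS server, attributes to forward)."""
    realm = user_name.split("@", 1)[1] if "@" in user_name else None
    server = NSP_SERVERS.get(realm)
    if server is None:
        raise ValueError("unknown realm, reject the request")
    attributes = {
        "User-Name": user_name,       # attribute #1
        "NAS-IP-Address": nas_ip,     # attribute #4, identifies the Access Node
        "NAS-Port": nas_port,         # attribute #5, identifies the user line
    }
    return server, attributes
```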


Figure 2-31: One step configuration and AAA process, DHCP server in AN

This information is forwarded between the RADIUS Proxies, so all network providers involved (NAP and NSP) have the same information (this is optional, from one proxy to another this information can be restricted). Finally the NSP RADIUS Server will reply with an Access-Accept message, which should also include at least these attributes:

• Framed-IP-Address (Attribute #8): customer’s IP address.

• Framed-IP-Netmask (Attribute #9): customer’s subnet mask.

• Framed-Route (Attribute #22): customer’s default IP gateway.

In the case of the Ethernet network model, the one-step AAA proposal has to include a mechanism that makes it possible to provide the VLAN IDs to the Access Node. IETF RFC 3580 provides a way to pass the VLAN ID to the Access Node by means of RADIUS. So, for the Ethernet network model, additional attributes would have to be included (a minimal sketch follows the list):

• Tunnel-Type (attribute #64): VLAN.

• Tunnel-Medium-Type (attribute #65):802.

• Tunnel-Private-Group-Id (attribute #81): customer VLAN ID.
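A minimal sketch of the VLAN-related attribute values an Access-Accept would carry per RFC 3580; the numeric values 13 (VLAN) and 6 (IEEE-802) come from that RFC, while the VLAN ID itself is only an example.

```python
vlan_assignment = {
    "Tunnel-Type": 13,                 # attribute #64, value "VLAN"
    "Tunnel-Medium-Type": 6,           # attribute #65, value "IEEE-802"
    "Tunnel-Private-Group-Id": "123",  # attribute #81, customer VLAN ID as a string
}
```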


As seen, the NSP RADIUS server puts the information necessary to configure the customer's Layer 3 in the same message. This information may be collected from a local DHCP server or by any other means, such as IP address pools defined locally in the NSP AAA (RADIUS) server. As the NSP knows the NAS-IP-Address, it is possible to serve a geographically assigned IP address, reducing routing table complexity.

Once the Access Node AAA client has received the Access-Accept message, the Access Node DHCP server will map "on the fly" the content of the RADIUS attributes onto the corresponding DHCP fields and options (a minimal mapping sketch follows).
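The sketch below is assumption-based and only illustrates the "DHCP replier" idea: the RADIUS attributes cached from the Access-Accept are mapped onto the fields of the DHCP reply. The extraction of the gateway from Framed-Route assumes a simple "prefix gateway" string format, and the lease time is invented.

```python
def radius_to_dhcp(attrs: dict) -> dict:
    """attrs holds 'Framed-IP-Address', 'Framed-IP-Netmask' and 'Framed-Route'."""
    route_fields = attrs.get("Framed-Route", "").split()
    gateway = route_fields[1] if len(route_fields) >= 2 else None
    return {
        "yiaddr": attrs["Framed-IP-Address"],        # 'your IP address' field of the DHCPOFFER/ACK
        "option_1_subnet_mask": attrs["Framed-IP-Netmask"],
        "option_3_router": gateway,                  # default gateway derived from Framed-Route
        "option_51_lease_time": 3600,                # illustrative lease, not from the deliverable
    }

# Example usage
print(radius_to_dhcp({
    "Framed-IP-Address": "10.5.1.23",
    "Framed-IP-Netmask": "255.255.255.0",
    "Framed-Route": "0.0.0.0/0 10.5.1.1",
}))
```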

This requires the Access Node (NAS-IP) physical location to be known by the NSP, which may not be desirable. Another possible solution is to distribute the NSP IP pools to the NAP RADIUS servers, so that the NAP can assign the customer's IP address/mask without revealing the location of the Access Nodes, although the NAP is not responsible for Layer 3 connectivity. In that case, the NAP RADIUS proxy server will inform the NSP RADIUS server about the end-user IP address by means of the Accounting-Request Start message, which must include attribute #8.

This information arrives at the Access Node, which completes the L2 access. The customer then starts DHCP in order to obtain the L3 configuration, and the Access Node acts as a DHCP server, although it only replies with the information collected during the RADIUS process (attributes #8, #9 and #22), so it is a "DHCP replier" rather than a full DHCP server.

When the user finishes and the connection is released, the accounting message also acts as a DHCP release, and the RADIUS server that provided the IP address can release the session.

In this scheme, Layer 2 and 3 are solved in only one step, so the information is directly linked, without needing synchronization of RADIUS databases:

• Both NSP AAA server and NAP AAA proxy server share the same AAA records.

• But all the information about the end-user profile is controlled by the NSP. The end-user database belongs to the NSP.

• This model permits a multi-provider scenario: the same NAP can provide access to different NSPs, and one NSP can be accessed through different NAPs.

• The NSP AAA server “knows” the IP address assigned to the end-user. This information is necessary for different purposes:

o End-user session troubleshooting.

o For dynamic service provisioning: end-user IP address knowledge is necessary for a correct QoS and IP filter and firewall configuration in both Access Node (case of IP network model) and Edge Node.

• The NSP AAA server obtains information about the Access Node and the line through which the end-user accesses the network, and it can use this information to allow or deny access (nomadism control).

• All the AAA is performed in a single step.

2.5.3.2 Solution with DHCP relay agent in AN

Alternatively, the Access Node could be a DHCP relay agent in order to reduce the processing load on the Access Node. The end-user's IP address must appear in accounting records, and these records must be correlated with the end-user authentication records (i.e. via the same Acct-Session-Id attribute).


This goal can be achieved if the AAA server sends the attribute "Framed-IP-Address = 255.255.255.254" (meaning an IP address assigned by the Access Node, instead of 255.255.255.255 or an explicit IP address assigned by the AAA server) to the AAA client in the AN. The AN will interpret this attribute as an indication that, when it receives a DHCP request from this end-user, the DHCP relay in the AN has to ask the NSP DHCP server for the IP address. When the AN receives the IP address from the DHCP server, it will include this IP address in Accounting-Requests, in the "Framed-IP-Address" attribute.

But in a multi-provider environment there are multiple NSPs, and each NSP will have its own DHCP server. Two options can be considered for the NSP DHCP server IP address (a combined sketch follows the list):

- Dynamic provisioning by means of RADIUS, using a Vendor-Specific Attribute (VSA). However, this option would also increase the AN processing load.

- Static provisioning (pre-provisioning): the AN is configured with one DHCP server IP address per NSP.
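The following is an illustrative sketch, not part of the deliverable, of the AN-side decision after the Access-Accept: an explicit Framed-IP-Address is answered locally ("DHCP replier"), while the special value 255.255.255.254 tells the AN to relay to the NSP's DHCP server, found here in a statically pre-provisioned per-NSP table. Realms and addresses are invented.

```python
NSP_DHCP_SERVERS = {                 # static provisioning: one server per NSP (example values)
    "nsp-a.example": "192.0.2.53",
    "nsp-b.example": "198.51.100.53",
}

def handle_dhcp_request(framed_ip: str, nsp_realm: str):
    if framed_ip == "255.255.255.254":
        # AN acts as a relay agent towards the NSP's DHCP server; the address
        # it obtains is later reported in the Accounting-Request (Framed-IP-Address).
        return ("relay", NSP_DHCP_SERVERS[nsp_realm])
    # AN acts as a local "DHCP replier" with the address assigned via RADIUS.
    return ("reply-locally", framed_ip)
```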

Figure 2-32 : One step configuration and AAA process, DHCP relay agent in AN

2.5.3.3 Possible Issues

There are some problems that must be taken into account:


• As mentioned before, the location of the RADIUS server responsible for IP assignment has not been decided. If it is located in the NSP, it must know the physical location of the NAP Access Nodes; if it is located in the NAP network, it must administer IP pools although it is not responsible for Layer 3 connectivity.

• The NAP RADIUS server knows all the necessary information about customers, but another entity should be defined in order to interact with the Access Node and Edge Node (possibly using COPS).

• Although the approach was defined with DHCP as the Layer 3 configuration protocol, it would work in the same way with PPP instead. This mechanism may provide a smooth migration path between these two worlds.

• This mechanism, combined with the IMS-like architecture defined in the next section, has to provide an architecture that fits all MUSE requirements (including migration proposals). This remains to be checked.

2.5.4 IMS Model adaptation

Although the IMS model is intended for UMTS (mobile) networks, some of its ideas and objectives are common to the MUSE project. Adapting the IMS architecture may be sufficient to resolve some of the issues raised above. There are two IMS models, one driven by ETSI (3GPP) and another by ANSI (3GPP2). Both of them would allow the AAA architecture proposed for MUSE to be included in an IMS scenario. The IMS model involves AAA and an HSS for user profiles. In the 3GPP model (Figure 2-33), the initial AAA stage is performed via RADIUS (from the Access Node, via the NAP proxy, to the NSP server). Later, the NSP's AAA server begins the configuration process (once the client has been allowed access) via the HSS, which contains user-specific profiles (Diameter). These profiles are used by the PDF to download (via COPS) specific filters and policies to the Access Node and the Edge Node, which in IMS terminology are known as PEPs (Policy Enforcement Points).


Figure 2-33: 3GPP IMS Architecture

In this way IMS functionality is integrated in the MUSE network, using this architecture to perform all the tasks necessary to enable services when a user logs in. This should also be combined with the RADIUS/DHCP interaction explained before (or, even better, with the one-step configuration process) to create a full login process, including AAA, network node configuration and customer configuration.

In this scheme, some open issues arise, and they will be treated in 2.5.5.

The 3GPP2 scheme (shown in Figure 2-34) may fit better into the MUSE scope, although not all interfaces shown in the figure (marked with numbers) have been specified yet.


Figure 2-34: 3GPP2 IMS Architecture

In this architecture, the AAA process is done in the same way: the Access Node's RADIUS client starts the process by communicating with the NAP AAA proxy server, which in turn communicates with the NSP AAA server. This server then talks to the HSS (profile database) and the PDF (Policy Decision Function) in order to start policy distribution. The PDF, possibly using COPS, can create the proper filters in the Access Node and Edge Node. In this way, the user profile is distributed to all involved elements, enabling the service.

Again, this should be combined with DHCP (IP assignment) in order to complete the configuration process.

2.5.4.1 Multi-ASP

It is now necessary to consider that end-users can select different services from different ASPs, and the way in which service accounting records can be obtained for billing purposes.

In a legacy scenario (PPPoE) this seems an "easy" task (see Figure 2-28). When an end-user selects a service, it is necessary to configure a specific bitrate, QoS profile and access permission to a fixed set of IP addresses (the IP addresses of the service servers). Service Selection Portals (SSP) include a web server which allows end-users to select different services, and they work jointly with the AAA server and the BRASs. So, when a service is activated, the specific QoS profile is configured in the BRAS and applied to the respective PPP "pipe". This process generates service session accounting records that are collected by the AAA (RADIUS) server. The same scheme is being applied in the case of IPoE, so the same approach could be applied when DHCP is used for L3 auto-configuration.


Different vendors are now providing service selection portal solutions, and all of them follow the scheme described in the previous paragraph. But there is no standard approach, so each vendor puts forward its own solution. Another open issue is that RADIUS does not include an attribute to link all the service session accounting records associated with the same PPPoE session. One possible solution is to define a new attribute or to reuse an existing one: attribute #50 (Acct-Multi-Session-Id) is not intended specifically for this purpose (it is intended to link related multi-link PPP sessions) but can also be used to identify a specific sub-session inside a PPP session (a sketch of this reuse follows). Another solution could be an extensive dialogue between the BRAS and the RADIUS server, so that every sub-session gets another ID that is linked in the RADIUS server.
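A minimal, non-standardised sketch of the attribute reuse described above: each per-service accounting record gets its own Acct-Session-Id (#44), while Acct-Multi-Session-Id (#50) carries the parent session identifier shared by all sub-sessions. Identifier formats and the Class value are invented for illustration.

```python
import uuid

def accounting_start(parent_session_id: str, service_name: str) -> dict:
    return {
        "Acct-Status-Type": "Start",
        "Acct-Session-Id": uuid.uuid4().hex,         # #44, unique per service sub-session
        "Acct-Multi-Session-Id": parent_session_id,  # #50, shared by all sub-sessions
        "Class": service_name,                       # e.g. "VoD-premium", illustrative
    }
```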

In any case it is necessary to define a standard model for service selection portals. This standard would have to include:

• Elements involved in the Service Selection Portal architecture:

• Web server.

• Policy Decision Function (PDF)?

• ...?

• Interfaces and protocols:

• Web server/PDF.

• Access Node/PDF.

• Edge Node/PDF.

• RADIUS Server/PDF.

This standard would have to be in accordance with an IMS architecture. In this document there are QoS proposals based on the IMS model. In this model there are two blocks that are specifically related to accounting and policing:

• HSS (Home Subscriber Server). This block performs the following tasks:

• End-user database, which contains:

• End-user profiles: identification, IP address, access permissions, services subscribed to by each end-user.

• End-user location.

• It uses the Diameter protocol (RFC 3588 and RFC 3589).

• AAA block to collect accounting records for billing. This block uses the RADIUS or Diameter protocol.

Diameter is the protocol proposed by the 3GPP for the HSS and AAA in the IMS architecture (release 5). Diameter is backward compatible with RADIUS (the RADIUS Command Codes are the first 256 Diameter Command Codes). This eases a seamless migration from RADIUS to Diameter.

In this way, in an NGN (Next Generation Network) context, a close interaction between the DHCP server and the RADIUS server (Diameter in the future), by means of including a RADIUS client in the DHCP server, will be a must if a MUSE solution is to be compatible with an IMS architecture.


Whether or not the Service Selection Portal is used, the Auto-Configuration Server (ACS) defined in previous documents (MA2.5 [5]) can play a key role. In this architecture, the ACS can be attached to the HSS in order to populate the customer profile database with L3+ information, which can be used later by the PDF to configure the PEPs.

Figure 2-35: IMS architecture with ACS.

The interfaces between the ACS and the other entities are not defined yet, although some of the previous work done on the ACS can be reused here.

2.5.4.2 IP assignment

One of the biggest problems arises from the assignment of a specific IP address to a specific customer. If a "random" address is chosen, the routing tables will grow dramatically, so another solution must be found.


In this IMS-like architecture, combined with the one-step configuration process, this problem is solved thanks to the NAS-IP-Address RADIUS attribute. This attribute specifies the IP address of the Access Node serving that customer, so the customer's geographical area is known if this information is shared with the NSP. In this way, the NSP can choose an appropriate IP address from the specific pool for that area (a minimal pool-selection sketch follows). In fact, in an L2 wholesale model, IP addresses from the same subnet (geographical area) can be assigned to customers in different NAPs, since the packet is routed inside the NSP network towards that area and then forwarded into the proper NAP according to profile information (recorded during the AAA process); the IP address is not necessarily used to forward the packet into the NAP.
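An illustrative sketch of the geographical pool selection in the NSP AAA server: the NAS-IP-Address received in the Access-Request identifies the Access Node, and thus the area, so an address is taken from that area's pool. The prefixes and pools below are invented example values.

```python
import ipaddress

AREA_POOLS = {
    ipaddress.ip_network("10.10.0.0/16"): iter(ipaddress.ip_network("100.64.0.0/20").hosts()),
    ipaddress.ip_network("10.20.0.0/16"): iter(ipaddress.ip_network("100.64.16.0/20").hosts()),
}

def assign_address(nas_ip_address: str) -> str:
    """Pick the next free address from the pool of the area the AN belongs to."""
    nas = ipaddress.ip_address(nas_ip_address)
    for an_prefix, pool in AREA_POOLS.items():
        if nas in an_prefix:
            return str(next(pool))    # returned to the AN as Framed-IP-Address (#8)
    raise LookupError("unknown Access Node area")
```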

Other options are possible. For instance, if a VLAN per NSP is assigned in the NAP, the NSP can use a per-VLAN approach to assign pools, instead of the geographical criterion. In the same way, if one VLAN is chosen per Access Node, the NSP can assign one specific pool to each Access Node.

As seen, this architecture is very flexible and can resolve IP assignment in many ways. Combined with the one-step configuration process, and whatever method is chosen to assign IP addresses, the NAP will always know the customer's IP address, since it is returned via RADIUS/Diameter during AAA.

2.5.4.3 IMS Architecture issues

The IMS architecture is designed specifically for telephony (VoIP) so it must be adapted to the MUSE context.

The 3GPP2 interfaces are still not defined, although Diameter could be used as the communication protocol between all entities. Moreover, some of the defined entities could be integrated into the same node, removing the need for the corresponding interface.

Some applications must be controlled at the IP layer, so the PDF must know the customer's IP address. This is the reason to connect the PDF with the AAA and the HSS: if the one-step configuration process is performed, the AAA server will know the customer's assigned IP address. Possibly another entity should be involved.

An Auto-Configuration Server should be included in this architecture, either as a separate node or integrated in another entity. It could be attached to the HSS, so that the ACS is in charge of downloading L3+ profiles into the HSS database, making this information accessible to the AAA and the PDF. The ACS must communicate with several ASPs and update all new information in the customer's profile. Next, with RADIUS/Diameter, this information can be propagated to the NAP (or NAPs) and the PDF can configure the network nodes (via COPS). This way, all customer information is located in the HSS but is easily distributed to any other involved provider.

Combining the one step configuration process with IMS-like architecture should be sufficient for MUSE.

As seen, the PDF (located in the NSP) has to configure the PEPs, including the Access Node, which is located in the NAP. Although a public interface can be defined, or some kind of contract can be arranged between NSP and NAP, another solution is possible. Instead of accessing the Access Node directly, the NSP PDF can communicate with another PDF located in the NAP network, for instance using Diameter, in order to download specific information so that the NAP PDF can configure (using COPS) its own PEP (the Access Node).


Figure 2-36: IMS AAA architecture with NAP PDF and Radius.

This way, the only thing exchanged between the two providers is information, so a contract between them should be easier to arrange.

Many alternatives to this proposal exist. For instance, according to the 3GPP model, a public Gq interface (between the AF and the PDF) can be defined, so that the AF can act on both PDFs (NAP and NSP), and each PDF can then act (via COPS) on its own PEP (Access Node and Edge Node).


Figure 2-37: IMS AAA with NAP PDF and public Gq interface.

Another possible issue arises from the service selection portal. In the 3GPP architecture, that function can be performed by the AF, but this presents one problem: the AF is not directly connected to the AAA server (it is connected via the HSS). Another interface could be defined between these two entities, as in the 3GPP2 model, where a new, as yet undefined, interface (#23) is created between the PDF and the AAA.


Figure 2-38: 3GPP IMS architecture with direct link between AAA and AF.

2.5.5 Open Issues

As mentioned, the evolution from PPP to DHCP seems logical, as several IP services are based on DHCP. DHCP therefore has to work with RADIUS in the same way PPP does, and this is not an easy task.

For instance, nomadism makes it difficult to use DHCP without any changes, and including a "line-ID" attribute in the DHCP messages is a possible solution. This problem can also be solved in the one-step model by means of RADIUS, which provides, through standard attributes, all the information necessary for end-user line identification and nomadism control.


3 ETHERNET NETWORK MODEL

3.1 Connectivity in the Ethernet Network Model

3.1.1 Overview of Ethernet network model

3.1.1.1 Basic mechanisms

Ethernet Network in access & aggregation

Whereas most current access networks are ATM-based, there is a trend to evolve to packet-based connection-less technologies (Ethernet, IP) in access & aggregation. Multiple migration scenarios are possible for introducing Ethernet and IP-awareness into the access and aggregation network. This is investigated in a separate MUSE milestone and deliverable.

The Ethernet network model presented here can be considered as the network that has completely migrated to Ethernet. This means that there is Ethernet-based connectivity at layer 2 from the subscriber up to the edge, for traffic in either IPoE or IPoPPPoE format. The AN provides connectivity, subscriber management, accounting and security features; it is an (enhanced) Ethernet switch. The aggregation network carries traffic between ANs and ENs and is involved in multicast replication (see further). It is composed of plain Ethernet switches (aggregation switches or AS). The EN is responsible for providing connectivity to the relevant ISP/NSP/ASP and for implementing accounting and security features. The EN must ensure Ethernet connectivity, at least at the aggregation network side, and further handles the traffic at Layer 3 (except for L2 wholesale).

The migration scenarios should take the relevant aspects of this model into account, depending on the scenario. This model is also the basis for newly deployed Ethernet-based networks.

Figure 3-1 : Functional basis of Ethernet network model

Connectivity throughout the access and aggregation network is based on Ethernet principles. Depending on the use of VLANs, there are two possible options.

In the first option (Figure 3-2), the connectivity in the AN is based on the MAC addresses, as in an ordinary Ethernet switch. However, the aggregation network is a special case compared to a plain Ethernet LAN, because the AN is an asymmetric node, aggregating multiple end-users on one side and connecting to a large WAN on the other side. There is a clear difference in upstream and downstream traffic behaviour. Therefore the AN will have additional intelligence for security, traffic management and accounting. This is called Intelligent Bridging. The VLANs in the aggregation network can be used to further separate the aggregated traffic from the different ANs. A typical use would be to allocate one VLAN per AN-EN pair. The AN acts as an 802.1Q bridge (single 802.1Q tag) or an 802.1ad provider bridge (single S-VLAN tag). It is recommended to use the S-VLAN tag in order to be able to seamlessly support business users (see further).

Figure 3-2 : Intelligent bridging (residential users)

In the other option, the connectivity at the AN is no longer based purely on MAC addresses but on VLAN-IDs, namely by associating one individual VLAN-ID with every end-user (i.e. with every line aggregated in the AN). The AN then behaves as a cross-connect, switching ports via the VLAN-IDs for connectivity in the downstream direction, and switching upstream traffic to the uplink. The obvious limitation of 4094 users in the network can be circumvented by applying VLAN stacking in the aggregation network, grouping customer VLAN-IDs from the same AN inside a predefined outer VLAN tag. The AN then acts as an IEEE 802.1ad provider bridge (double tag, S-VLAN + C-VLAN).


Figure 3-3 : Cross-connecting (residential users)

Note that business users also have to be considered. These users generally will generate 802.1Q-tagged Ethernet frames, and expect them to be transported transparently across the network to one or multiple other business locations (L2 VPN). In order to support this service, the access network should have pre-provisioned S-VLANs configured (one per business customer) to connect ANs mutually and with a long-haul transport provider. In the ANs, the upstream frames are transparently sent and tagged with the corresponding S-VLAN (based on the line), and the downstream frames are transparently sent (after stripping the S-VLAN tag) to the line corresponding to the S-VID. This approach can be combined with the bridged model and with the cross-connect model. In both cases some S-VLANs must be reserved in the network for business users only.

Note that MPLS can be used for improving the scalability, as explained in 3.1.2.


Figure 3-4 : Business users in the Ethernet Network Model

Connectivity has to be investigated from several angles, and for both options.

Customer separation

Obviously any traffic needs to flow only between the intended end-points. In other words, the traffic from different customers must be kept separated. Whereas in a classic LAN all Ethernet frames can be seen by anyone on the same segment and all MAC addresses can be reached, these features are not acceptable for a public access network based on Ethernet. In a public access network there must be the option to block any layer 2 visibility and connectivity between different residential networks.

Therefore it is necessary to implement some separation mechanisms for different aspects of the connectivity :

• Unicast traffic should only be visible to the intended end-points. There should obviously never be more than one residential network connected to each collision domain in the access network. This is ensured by the bridging functionality at the Access Node. A further requirement can be to block direct traffic between users at layer 2. This requires a separation of end-users at layer 2 in the whole network.

• Broadcast domain: the broadcast domain must be limited between different residential networks, and broadcasts towards the network must be controlled. This means that the access node must implement broadcast filters.

• Multicast traffic. Downstream-multicast traffic should be allowed. Upstream-multicast traffic is for further investigation.


• Peer-to-peer traffic. For unicast or multicast traffic directly between end-users (peer-to-peer), the traffic can either flow locally via the first common switching node (AN or AS), or it can be forced to flow via the Edge Node. The applicability of each option is investigated (see further). Besides what is needed to allow peer-peer, it must also be possible to block local peer-peer traffic, if required by the operator.

Examples of mechanisms for limiting the broadcast domain are:

• Tunneling (e.g. L2TP, GRE, Ethernet over ATM PVC, PPPoE). All Ethernet traffic is forwarded in a tunnel between the Access Node and the Edge Node. The tunnel thus prevents Ethernet traffic from going directly between Access Nodes. Note that this is automatically the case for all IPoPPPoE traffic.

• One VLAN per residential network. When each subscriber has its own VLAN it means that no Ethernet frames, including broadcast frames, can be transmitted between the residential networks. Because of the inherent limitation of 4k subscribers, a double VLAN tag is needed, leading to the cross-connect model.

• One VLAN per AN-EN pair. This limits the broadcast domain between an AN to a single EN, the other ANs and ENs will not receive broadcasts originating from the users on this AN (or originating from the AN itself).

• MAC address filters. In principle a filter in the Access Node could be configured to forward only those Ethernet frames that have a destination address of a known Edge Node. However this rule cannot be strictly applied as the EN must be able to broadcast ARP messages down to all or parts of the residential networks, and the residential networks must be able to broadcast ARP messages to edge nodes (e.g. during DHCP auto-configuration). This means that the broadcast domain will be asymmetric, i.e. the Edge Node can send broadcast messages to all residential networks, but each residential network can only send broadcast messages to the Edge Node. Note that peer-peer traffic brings its own set of broadcasts that must be allowed and controlled.

Mechanisms for enforcing mutual customer separation (= forcing peer-peer via the edge)

• VLANs: with one VLAN per {AN, EN} pair and traffic filtering at the AN to its associated VLAN-IDs, users on different ANs are blocked from establishing direct L2 connectivity.

• MAC Forced Forwarding (MAC FF): all end-user upstream traffic is forced to the edge node, preventing users from establishing direct L2 connectivity.

Customer traceability

The Provider may need to keep track of user traffic in order to treat traffic to or from different Customers in different ways. There are two main needs:

• SLS fulfilment. The Provider may perform charging, policing, shaping and other Customer specific treatment.

• Security. The Provider must be able to warn or stop Customers using spoofed addresses or misbehaving in other ways.

In order to be sure that the treated or monitored Customer traffic is sent or received by the correct Customer, the Provider must have some mechanism that can trace (i.e. identify) the Customer by using the monitored traffic information.

Examples of mechanisms for enforcing customer traceability are:

• Tunnel end point identifier. Since the tunnel is controlled by the operator, its end-points also define the last mile port identity, which is associated with a Customer identity.

• Line-ID at auto-configuration. If all incorrect source addresses are dropped at the Access Node by an anti-spoofing filter (MAC address or IP address), the remaining Ethernet frames / IP packets are considered correct. Therefore the source address of these packets can be used for Customer identification. In order to associate the physical line from which a user connects to the network with the IP address(es) allocated to that user, a DHCP relay agent or PPPoE relay agent can include the line-ID in the auto-configuration process (a minimal encoding sketch follows).
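A minimal, assumption-based sketch of the DHCP variant: the AN relay inserts the line-ID in the Agent Circuit ID sub-option (code 1) of option 82 (RFC 3046), so that the allocated IP address can later be associated with the physical line. The line-ID format is an example only.

```python
import struct

def option82_with_line_id(line_id: str) -> bytes:
    circuit_id = line_id.encode()                                  # e.g. "AN7 eth 1/1/3:0.35"
    subopt = struct.pack("!BB", 1, len(circuit_id)) + circuit_id   # sub-option 1 = Agent Circuit ID
    return struct.pack("!BB", 82, len(subopt)) + subopt            # option 82 TLV
```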

Service distinction

Each Customer must be able to choose from different Application Services and different Network Services delivered by the Service Providers. The network must allow application services (offered by NSP/ASP) to distinguish themselves as perceived by a user, in terms of IP addressing, quality of service and accessibility. The network must also allow different users to experience different qualities of experience depending on their subscription profile.

• Application Service distinction

The Application Services are separated at the IP layer, by using different IP addresses and subnets (clients and servers). Each NSP/ASP can offer different properties of the connectivity to the Application Service.

One scenario is when the Application Service is offered using local (private) IP addresses. In this case the application is only accessible to the customers that belong to the NSP which controls this private IP address space. Customers of other NSPs cannot get access to this Application Service. A broadcast TV service could be an example of this scenario. Another scenario is an Application Service that is offered from a global (public) IP address. This IP address is controlled by a certain ISP and reachable from all end users on the Internet. The Application Service will thus be accessible on the Internet, but customers of this ISP may get better QoS guarantees than customers of other ISPs. An example could be a VoD service where the customers within the access network get high quality due to the use of QoS bits in the network, while the customers of other ISPs only get best-effort quality through the Internet backbones.

• Network Service distinction by NAP

For certain undifferentiated services, like plain HSI, the NAP can offer different basic connection qualities (e.g. a guaranteed bit rate) to different users depending on a user profile. The NAP must then treat the user's traffic according to the profile to which he/she subscribed. This can include upstream policing at the AN and priority indication by means of the p-bits of the VLAN tag in the aggregation network.

• Network Service distinction by NSP/ASP

The user of differentiated application services, like VoIP, can also have a profile at the Multi-Media service provider (NSP/ASP), e.g. stipulating the codecs that can be used and the quality of the connection. The user must then be able to negotiate and receive an associated network service quality in the NAP to get access to that application service, depending on his/her profile at the NSP/ASP. Each NSP/ASP must also be able to set up and manage the properties of the service bindings in collaboration with the NAP. This is elaborated in the IMS-model.

NSP/ASP separation


If different NSP/ASPs use overlapping private IP address subnets, the different NSPs must be separated at layer 2 in the IP nodes of the access network so that each NSP can manage its own IP address space. The EN is the place where such traffic must be separated in order to select the corresponding VR instance. The (routed) RGW could be the other place to foresee VRs, in case one RGW should be able to connect to different overlapping subnets simultaneously. However the assumption is that this will not occur.

One way to achieve separation is to let each NSP have access to its own "logical access network" in the NAP. Such separation adds extra complexity to the network and potentially to the RGW. On the other hand, a beneficial side-effect of such separation is that the NSP/ASPs can be segregated in the NAP in terms of quality of service, in other words an NAP could offer different levels of QoS per NSP (e.g. different priorities in case of link failure). The logical access networks can be implemented as virtual access networks (VAN) in the physical network.

Examples of network separating mechanisms at layer 2 are:

• Tunnels between RGW and EN, e.g. PPPoE. Each tunnel can be associated with a separate NSP/ASP. This implies that each NSP will control a set of tunnels.

• Different VLAN-IDs per NSP/ASP. The VLAN standard provides much of what is expected from a VAN mechanism, e.g. no MAC addresses or IP addresses belonging to other VLANs could be reached within the VLAN. But this implies that the RGW must be able to generate an NSP-specific VLAN ID.

Another way to separate the traffic at layer 2 at the EN is to assign a separate IP address and MAC address to each VR.

Scalability

Scalability in an Ethernet network depends on multiple factors. The two modes, Intelligent Bridging and Cross-Connecting, each have their own pros and cons.

• Ability to segregate traffic between many different users. Each mode not only has certain scalability characteristics, but also has other characteristics such as the need (or not) to ensure MAC address unicity in the network.

• The S-VLAN tag in the aggregation network can also be used for separating traffic to individual Access Nodes and/or individual service edges, and optionally for other distinctions (see 3.1.1.4). The maximum is still limited by 4094. Note that business services (e.g. VPNs) also use VLAN-tags. When business users are to be supported it makes sense to consider MPLS in the aggregation network.

• Next to segregation, the Ethernet spanning tree protocol will also experience scalability issues in large networks when it comes to achieving the expected performance. The restoration time is determined by the maximum time needed to distribute topology-change information to all nodes in the network. This maximum time depends on multiple factors such as the number of nodes, the interconnection complexity, and the number of MAC addresses to be flushed from the tables.

• Finally, although the protocols allow certain intrinsic scales, practical network elements (Ethernet switches, BRAS) will impose a practical limit due to their limited capacity in terms of user and session handling. This must be taken into account in the network planning.


• One way to increase the scalability is to split the access/aggregation network into so-called Ethernet islands. Each island can use the full range of VLAN IDs, and the islands are connected together via the IP edge (IP router or BAS), which separates them at L2 level. Each island also limits the number of MAC addresses to be treated. Another benefit is the limitation of broadcast domains.

Robustness

In a large Access Network based on Ethernet, there must be some redundancy mechanisms in case of link or node failures. It is normally not economically meaningful to cover link failures in the first mile, so the requirement is to protect against single points of failure (SPOFs) affecting a certain number of residential networks, by means of redundant links or nodes.

Redundant links can be set up using different schemes (1+1, 1:1, m:n). Protection mechanisms can be provided at different transport layers, e.g. Ethernet over MPLS over SONET. Each layer may include methods for detecting failures and restoring service without the support of lower or higher layers. This does not preclude a lower layer from informing a higher layer that it has detected a failure.

For the Ethernet layer several protection mechanisms can be used :

• Link protection : Link Aggregation Control Protocol (LACP), Spanning Tree Protocol (RSTP and optionally MSTP), Resilient Packet Ring (RPR).

• Node protection : RSTP (optionally MSTP), RPR, Ethernet Automatic Protection Switching (EAPS)

Topology aspects

Different physical and logical topologies are possible in the access part and the aggregation part of the network. The physical topology reflects the physical wiring and connection possibilities. The logical topology reflects the different paths that packets will follow on top of the physical topology, and it can be of a different nature from the physical topology. Any type of physical topology can carry both the point-point and point-multipoint types of connection.

In Ethernet networks the logical basis is set by the spanning tree. Once the spanning tree is in force, a logical topology (point-point or point-multipoint) can be enforced with VLANs, or MAC address learning (unicast), or multicast MAC address configuration (multicast), or flooding (unicast during learning, broadcast) over the appropriate ports.

• Spanning Tree

For Ethernet networks, the spanning tree mechanism is used to prevent loops in the network. The spanning tree protocol (STP) runs between all switches of the network. STP also reconfigures automatically in case of failure and therefore provides a network automatic protection mechanism.

The RSTP protocol (IEEE 802.1w) provides significant improvements in the speed of spanning tree convergence for bridged networks (to a range of 100 ms to 2 s) for failures that involve point-to-point links. Since the Metro networks built today are designed with point-to-point links, RSTP provides significant improvement in their convergence characteristics.


MSTP allows frames assigned to different VLANs to follow separate paths, each based on an independent Multiple Spanning Tree Instance (MSTI), within Multiple Spanning Tree (MST) Regions composed of LANs and/or MST Bridges. MSTP is typically used in large networks to reduce the impact of a reconfiguration to only one region.

• First mile and optional remote units

The subscriber lines can be connected point-to-point (DSL) or in a tree (PON, fixed wireless).

Remote units (e.g. FTTCab) introduce a supplementary aggregation stage in the topology. The remote units themselves can be connected to a hub multiplexer by means of point-point (e.g. optical), tree (e.g. PON), or ring links. The point-point links are the easiest to plan (dedicated bandwidth per link), but consume a lot of fibre (connections). The tree and ring links imply a sharing of the common bandwidth amongst the different remote units. In the case of PONs each ONU can have a similar configuration. In the case of rings, the planning is more complex because the amount of traffic that a node receives depends on its position in the ring. QoS mechanisms then become important for managing the bandwidth. On the other hand rings are suited for offering resiliency.

The choice of the network topology will also depend on the layout of the fibre plant: a star topology requires the longest fibre plant with potentially fewer splices, while a ring saves on fibre but requires frequent splicing.

• Aggregation part

In the simplest case the Access Node is directly connected to the (possibly multiple) service edge(s) by means of point-point links. In larger networks the aggregation network is composed of a ringed or meshed ensemble of Access Nodes, switches and service edges. In the case of ring structure the edge nodes (Access Nodes, service edges) are either part of the ring itself, or are connected in point-point to an Add-Drop Multiplexer on the ring. The advantage of a ring is the resilient behaviour (the amount of fibre connections is less relevant at the aggregation side). Given the expected size of an aggregation network, a meshed version doesn’t have to be fully meshed.

Finally, the aggregation network could be divided into so-called Ethernet islands, being connected by IP nodes. In this case the single L2 network has been partitioned in isolated L2 networks with L3 nodes in between.

Nomadism

Nomadism is the ability for a person who moves around (without moving his terminal, as in the mobile world) to recover his service environment whatever his point of connection to the network.

A nomadic application/service can be of any type (internet access, telephony, voice/video), and it will be nomadic in the sense that it can be used at various locations by a certain user, and it gives this user his "service environment" at each of these locations.

A nomad user may move around. Nomadism is different from pure mobility in the sense that there is no session continuity. But nomadism may include roaming in the sense of the ability to use any one of multiple Internet service providers (ISPs), typically when being connected via a visited network, while maintaining a formal, customer-vendor relationship with only one.

The service environment of a user is defined as the service agreements contracted with the NAP, NSP and ASP respectively for each application suite.

The impact of nomadism will be investigated further in future deliverables.


3.1.1.2 Intelligent bridging mode

Customer separation

Separation and switching are performed on the basis of the source and destination MAC addresses. If no VLANs are used, this implies that the whole access and aggregation network is flat, based on MAC learning, and shared by all MAC addresses. One direct consequence is that MAC address unicity must be guaranteed over the whole network. Another disadvantage is that the whole network is prone to broadcast storms or security attacks.

Therefore it is recommended to add a VLAN at the border of the aggregation network (both user side and NSP/ISP side) in order to distinguish to which edge node traffic is bound or from which it is coming, or even to distinguish a pair {Access Node, Edge Node}. MAC learning is performed per VLAN in the Ethernet switches. The VLAN is stripped before being sent (downstream) on the user line. It is recommended to use the S-VLAN (IEEE 802.1ad) in order to align with the cross-connect mode (see further). In networks consisting of 802.1Q bridges, the same function can be performed with the 802.1Q tag; however this is not future-proof.

The advantages are that broadcast domains are limited, the MAC address uniqueness region is reduced to the VLAN domain, the Access Node can filter unwanted traffic based on the VLAN of incoming downstream frames, and edge devices can be prevented from communicating directly at L2. These advantages are best exploited by assigning a VLAN per {AN, EN} pair.

In order to prevent direct layer-2 connectivity between user devices (if required), two solutions exist. The correct handling of peer-peer traffic via the edge node is described in §3.1.3.4.

• Combination of ARP filter, upstream filter and S-VLANs.

Several hosts located at different premises can belong to the same IP subnet. Consequently, if a host wishes to communicate with another host connected to a different home network, an ARP request is issued to obtain that host's corresponding MAC address.

In order to prevent local intra-AN connectivity, the AN should only allow ARP requests to known IP edges and discard all other requests. As smart users could still try to establish connectivity by directly using someone else's MAC@ and IP@, the AN should also drop any traffic to unknown MAC destination addresses (not corresponding to known IP edges).

In order to prevent local inter-AN connectivity (via an AS), the AN should again filter ARP requests (either drop or send only to EN, where EN responds with its own MAC@) and for the smart users should put the ANs in separate VLANs, thereby impeding layer 2 communication between ANs.

• MAC Forced Forwarding.

All end-user upstream traffic can be forced to the edge node using a scheme called MAC forced forwarding (MAC FF). The scheme is an alternative to the use of VLANs for traffic separation for unicast traffic. In order to apply broadcast and multicast traffic separation, the method can be combined with the use of VLANs.

The basic property of MAC FF is that the Access Node ensures that upstream traffic is always sent to the Edge Node, even if the IP traffic goes between customer hosts located in the same IP subnet. The solution has three major aspects (a sketch of the ARP agent and upstream filter follows this list):


- Obtaining the IP and MAC addresses of the Edge Node, either by pre-provisioning or by snooping on DHCP replies sent to the host and using ARP to find the corresponding MAC address. An access network may contain multiple edge nodes, and different hosts may be assigned different edge nodes. This implies that the access node must register the assigned edge node address on a per-host basis.

- ARP agent in AN. As mentioned before, if a host wishes to communicate with another host connected to a different home network, an ARP request is issued to obtain that host's corresponding MAC address. This ARP request is intercepted by the access node's ARP agent and responded to with an ARP reply, indicating the edge node MAC address as the requested layer-2 destination. Note that this is not the same behaviour as the "proxy ARP" mechanism described in [RFC1009]. In this way, the ARP table of the requesting host will register the edge node MAC address as the layer-2 destination for any host within that IP subnet. An exception to this rule is made when a host issues an ARP request for the MAC address of another host located within the same home network. If the access node recognises this host as already allocated on the same line, the access node simply discards the ARP request, making the assumption that the host within the home network will respond with an ARP reply. Since the ARP functionality is not used in IPv6, MAC FF is not applicable in IPv6 networks.

- Filtering Upstream Traffic. Since the access node's ARP agent will always reply with the MAC address of the edge node, the requesting host will never learn the MAC addresses of hosts located at other premises. However, malicious customers or malfunctioning hosts may still try to send traffic using other destination MAC addresses. The filtering rules implemented in the access node force it to discard all upstream frames whose destination MAC address differs from that of the edge node. In addition, Ethernet broadcast and multicast frames originating on a subscriber line and destined for other subscriber lines (peer-peer) are forwarded to the edge node.
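A minimal, illustrative sketch (not from the deliverable) of the two data-plane rules described above: the ARP agent always answers with the Edge Node's MAC address, and the upstream filter only lets frames destined to that MAC (or broadcast/multicast hairpinned via the EN) pass. The addresses and decision labels are invented.

```python
from typing import Optional

edge_node = {"ip": "10.0.0.1", "mac": "00:11:22:33:44:55"}   # learned by DHCP snooping/ARP or pre-provisioned

def arp_agent(request_target_ip: str, same_line_hosts: set) -> Optional[str]:
    """Return the MAC to put in the ARP reply, or None to stay silent."""
    if request_target_ip in same_line_hosts:
        return None                  # a host on the same home network answers itself
    return edge_node["mac"]          # everything else resolves to the Edge Node

def upstream_filter(dst_mac: str, is_broadcast_or_multicast: bool) -> str:
    if is_broadcast_or_multicast or dst_mac == edge_node["mac"]:
        return "forward-to-edge"     # peer-peer traffic is hairpinned via the EN
    return "discard"                 # spoofed / direct L2 destinations are dropped
```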

Note that the features of VLANs (e.g. a VLAN per service and per service provider) can be combined with the concept of MAC FF to create a powerful traffic separation mechanism. In this solution, shown in Figure 3-5, VLANs are used for separating the traffic that belongs to different services within a service provider, e.g. a network service provider's Internet, voice and video traffic can be allocated to different VLANs, while MAC FF can be used for user traffic separation within these VLANs. The combination can be used to provide Internet, Video, VoIP and other services. Other services, such as transparent LAN-to-LAN services, can be built by employing service VLANs alone, without MAC FF.


Figure 3-5 : Illustration of MAC FF

Figure 3-5 shows three VLANs. Within two of the VLANs MAC FF is used for customer traffic separation, while the third VLAN is used for a transparent LAN-to-LAN service.

Customer traceability

Assuming the AN performs anti-spoofing checks, the customer can be traced at layer 2 by means of the MAC address of his/her terminals (bridged RGW) or of the RGW itself (routed RGW). When MAC address uniqueness is only guaranteed per VLAN, the VLAN-ID must also be taken into account.

Alternatively and more likely, the customer can be traced at layer 3, by means of its allocated IP address(es).

Scalability

The number of MAC addresses that must be supported lies between the total number of users (routed modems) and the total number of terminals connected at the users' premises (bridged modems). This can run well into multiple tens of thousands for one aggregation network with tens to hundreds of ANs. The total supported number of users and service edges is scalable, as either 4094 edges are supported, or 4094 {Access Node, Edge Node} combinations.

3.1.1.3 Cross-connect mode

Customer separation

An alternative method for user segregation is to use one C-VLAN per user (line) and to apply cross-connecting. At the Access Node (or the first aggregation point in the access network, which could be a remote unit), a separate C-VLAN ID per user line is added on upstream frames. At the Access Node and the Ethernet switches the traffic is then handled according to 802.1D and 802.1Q (independent VLAN learning). Note that strictly speaking the MAC addresses are still learnt but no longer really used for switching, as every frame is sent to an egress port corresponding to its C-VLAN.

The advantage is that MAC address uniqueness is no longer an issue. However, the immediate limitation is the number of users, linked to the limit of 4094 VLAN IDs, which is clearly too low for an access environment. Therefore VLAN stacking is required.
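As a simple illustration of the cross-connect principle (a minimal sketch; all identifiers are hypothetical), the AN can be modelled as keeping a one-to-one mapping between user lines and C-VIDs, so that forwarding is based on the VLAN tag rather than on learned MAC addresses:

# Sketch of per-line C-VLAN cross-connecting at the Access Node (hypothetical names).
class CrossConnectAN:
    MAX_VID = 4094                     # usable 802.1Q VLAN IDs (1..4094)

    def __init__(self):
        self.cvid_of_line = {}         # user line -> C-VID
        self.line_of_cvid = {}         # C-VID -> user line

    def provision_line(self, line_id):
        cvid = len(self.cvid_of_line) + 1
        if cvid > self.MAX_VID:
            raise RuntimeError("more lines than C-VIDs: VLAN stacking is needed")
        self.cvid_of_line[line_id] = cvid
        self.line_of_cvid[cvid] = line_id
        return cvid

    def upstream(self, line_id, frame):
        # Tag the frame with the line's C-VID; MACs are learned but not used for forwarding.
        return {"c_vid": self.cvid_of_line[line_id], "payload": frame}

    def downstream(self, tagged_frame):
        # Forward purely on the C-VID towards the corresponding user line.
        return self.line_of_cvid[tagged_frame["c_vid"]]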

Customer traceability

At layer 2, a user (subscriber line) can be traced based on the combination {C-VID, S-VID}. Note that in the case of protection switching in the network, the S-VID could change and the tracing parameters should be updated.

Alternatively and more likely, the customer can be traced at layer 3, by means of its allocated IP address(es), when anti-spoofing is in place at the ANs.

Scalability

The total number of user lines in the network can reach 4094 x 4094 ≈ 16.8 M lines, which is scalable.


3.1.1.4 Use of VLANs

Basic use

As described in the previous paragraphs, the use of VLANs can be different for the intelligent bridging mode and for the cross-connect mode, but some general rules apply to both. Also, VLANs used between the RGW and the AN can be different and can have a different meaning from the VLANs used inside the aggregation network (between AN and EN).

• VLAN between RGW and AN

When the frames are just untagged, no QoS or service info can be sent at L2.

If the upstream frames are priority-tagged, the RGW can send QoS information at layer 2. The RGW must be configured by the NAP with the allowed p-bit settings and with the mapping of applications onto the corresponding p-bit settings. This raises the issue of potential abuse of priority by the end-user. The AN must then act at a per-user level and perform either p-selective policing (according to the QoS profile of the user) or p-aware accounting in order to control the traffic from the user.
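The following minimal sketch illustrates what such p-selective policing could look like (the profile format and the simple byte-budget counter are assumptions; a real implementation would typically use token buckets per class):

# Sketch of p-selective policing at the AN (hypothetical profile format).
def police_frame(user_profile, p_bit, frame_bytes, counters):
    """Return 'forward', 'remark' or 'drop' for an upstream priority-tagged frame."""
    if p_bit not in user_profile["allowed_p"]:          # e.g. allowed_p = {0, 3, 5}
        return "remark"                                 # downgrade to best effort (p = 0)
    counters[p_bit] = counters.get(p_bit, 0) + frame_bytes
    if counters[p_bit] > user_profile["byte_budget"][p_bit]:
        return "drop"                                   # user exceeds the contracted volume for this class
    return "forward"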

By adding an 802.1Q tag to the upstream and downstream frames, the p-bits can be used in the same way, and the VLAN-ID can carry additional information. One possibility for the VLAN-ID is to indicate the service provider, in order to have layer 2 separation of NSPs (see 0). But this involves extra complexity (the RGW must be configured by the NAP with the allowed C-VIDs and with the corresponding mapping, and the AN must translate the C-VID into a specific S-VID) and decreases scalability in the cross-connect model.

The other use of one (or multiple) C-VID(s) per customer is the cross-connect model. For residential users the operator is free to apply cross-connecting or not. But business customers will generate C-VLAN tagged frames and expect them to be delivered (almost) transparently at another point of the network. The C-VID must remain unchanged and the frame must be transported over the aggregation network. The business users can be dealt with in different ways, all requiring cross-connecting at the AN:

- Single separate S-VLAN for all business users

All ANs with business users are part of this S-VLAN. Hence an agreement is needed between NAP and customer to freeze which VIDs the customer may use. With a single S-VLAN the scalability is the same as for an 802.1Q Metro Ethernet network. It is neither flexible nor scalable, because there are at most 4094 business connections for all business customers together.

- One separate S-VLAN per L2 VPN

The direct advantage is that every VPN can use any VIDs, but on the other hand every VPN consumes one S-VLAN. So this is not very scalable, because at most 4094 business customers (VPNs) can be supported.

- Combining with MPLS

Every VPN can use any VIDs, and each AN can use up to 4094 S-VLANs if needed. This is the most scalable solution allowing up to 4094 business customers (VPNs) per AN.

Note that it is not recommended to allow S-VID tags generated by the user nor to forward S-VLAN tags to the user, for the following reasons :

- The S-VID can change after a protection switch-over, which would then require reconfiguration of the RGW with the new value.


- Security : the NAP does not want to expose network settings to its users; with C-VLANs the AN can hide the S-VLAN settings and can filter C-VLANs between the first mile and the aggregation network.

- 4094 C-VIDs per user or customer are more than sufficient.

• VLAN(s) between AN and EN

It is not recommended to use untagged frames because the aggregation network would become a single flat Ethernet network, with a fair share of security issues.

In the intelligent bridging mode it is therefore recommended to use single-tagged frames in the aggregation network. Supposing S-VLANs are used, the choice of the S-VID is as follows:

- 1 S-VID per AN

If the ANs filter ingress traffic based on their S-VID, the traffic can be separated per AN (traffic for other ANs is blocked) and direct inter-AN peer-peer traffic is impeded (even if ARP were blocked at the AN, users could still try to communicate directly once they know each other's IP@ and MAC@). It is however an incomplete scheme, as there is no separation or differentiation (QoS) between ENs (all ENs would accept all S-VIDs).

- 1 S-VID per AN-EN pair

Besides the capabilities of the previous scheme, this scheme also allows traffic separation between ENs (with S-VID filtering at the ENs), and (QoS) differentiation for traffic to and from different ENs. It uses NxM S-VLANs (N ANs, M ENs); a small allocation sketch is given after Table 3-1.

Note also that this allows one subnet to be assigned per AN (per S-VID). Other subnetting schemes require more attention.

- 1 S-VID per (group of ANs)-EN pair (including the case of 1 S-VID per EN)

Compared to 1 S-VID per AN-EN pair, fewer S-VLANs are consumed, but the level of security is lower: in downstream, the AN cannot be sure that the source was an EN; it could be another AN. Also, if direct peer-peer traffic is to be blocked, the AN must now filter on the destination MAC@ of upstream traffic (only allowing traffic to known ENs).

Note that this allows one subnet to be assigned per group of ANs; other subnetting schemes require more attention.

Note: stacked VLANs (S+C) must be supported in bridging mode for business users.

In the cross-connecting mode we must use VLAN stacking. Again there are some possible choices for the S-VID :

- 1 S-VID per AN

The most straightforward choice is 1 S-VID per AN to segregate traffic between ANs, but it does not allow separation or differentiation (QoS) between ENs.

- 1 S-VID per AN-EN pair

Now traffic can be separated between ANs and ENs. But a maximum of 4094 users can be connected per AN (assuming all users must be able to connect to all ENs).

- Multiple S-VIDs per AN-EN pair, if high # users per AN

A variant of the previous scheme, to allow more than 4094 users per AN. However, it requires more configuration effort and can lead to a less efficient consumption of S-VIDs.


- 1 S-VID per (group of AN)-EN pair, if low # users per AN

Another variant of the previous scheme, allowing more efficient use of S-VIDs. But the configuration effort reaches another level, as co-ordination is needed between ANs to guarantee C-VID uniqueness per S-VLAN.

VLAN between RGW and AN

- Intelligent bridging mode : untagged / priority-tagged by the RGW / 802.1Q-tagged by business users
- Cross-connecting mode : untagged / priority-tagged by the RGW / 802.1Q-tagged (by the AN)

VLAN(s) between AN and EN

- Intelligent bridging mode : single 802.1ad (S-VLAN) tag added by the AN; 1 S-VID per AN-EN pair, or 1 S-VID per (group or all ANs)-EN
- Cross-connecting mode : dual 802.1ad (S-VLAN + C-VLAN) tags added by the AN; 1 S-VID per AN-EN pair, 1 S-VID per (group or all ANs)-EN if low # users per AN, or multiple S-VIDs per AN-EN if high # users per AN

Table 3-1 : Summary of the basic use of VLANs in Ethernet network model
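As a back-of-the-envelope check of the "1 S-VID per AN-EN pair" scheme referred to above, the following sketch (a hypothetical helper, not part of any specification) allocates S-VIDs for N ANs and M ENs and verifies the limit of 4094 usable VIDs:

# Sketch: S-VID planning for the "1 S-VID per AN-EN pair" scheme (hypothetical helper).
def allocate_svids(num_ans, num_ens, first_svid=2):
    """Return {(an, en): s_vid}; fails if N x M pairs do not fit in the VID space."""
    needed = num_ans * num_ens
    if first_svid + needed - 1 > 4094:
        raise ValueError(f"{needed} S-VLANs needed, does not fit in the 4094 VID space")
    mapping, vid = {}, first_svid
    for an in range(num_ans):
        for en in range(num_ens):
            mapping[(an, en)] = vid
            vid += 1
    return mapping

# Example: 200 ANs and 4 ENs consume 800 S-VLANs and still fit comfortably.
plan = allocate_svids(200, 4)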

Optional uses

On top of these main uses, additional meanings can be associated with the VLANs in the aggregation network, allowing more control of the traffic. The downside is an increase in operational and management complexity, and a reduced scalability (limited by the 4094 possible values for the outer VLAN tag).

• Upstream / Downstream

• per NSP

• per CoS, for implementing pre-provisioned bit pipes (see IMS model)

• 1 for working path, 1 for protection path

Guidelines

Before S-VLANs can be used for the above purposes, they must be configured in the NAP network.

• Configuration in AN and EN

By filtering S-VLANs at ingress at ANs and ENs, the traffic can be separated and the nodes can be protected against attacks and broadcasts.

• Configuration in AS

In a first option, all VLANs can be configured on all ports in the aggregation network, which is the simplest situation when spanning tree updates must occur. However the VLANs can then no longer be used for separate QoS pipes.

Alternatively, the VLANs could be configured following a particular scheme to form QoS "pipes" in the aggregation network. This requires configuration effort. Spanning tree updates also become more complex, as the topology must ensure that there are two alternate paths per S-VLAN.


3.1.2 Using MPLS

3.1.2.1 Introduction

Deploying MPLS in an aggregation network is required especially in Metro networks for the support of business services. This solves the inherent scalability restrictions of S-VLAN implementations (provider bridges, 802.1ad). A plain Carrier Ethernet network can only support about 4k different L2 service instances (i.e. VPN connections) using S-VLANs. For most networks this is definitely not sufficient if business services are to be supported.

MPLS solves this scalability issue by using MPLS LSPs per service and per business customer instead of S-VLANs. MPLS also allows support of circuit emulation services based on the Martini drafts. Considerable work on pseudo wire emulation and Layer 2 VPN services is available from the pwe3 and l2vpn working groups of the IETF. The main motivation for MPLS is the support of business users, but it is also compatible with traffic to and from residential users.

An advanced (IP-based) MPLS control plane helps to reduce the administrative overhead of end-to-end service provisioning and eases traffic engineering. Connection-oriented MPLS seems to be the more natural approach for realising the cross-connect modes in the data plane, as opposed to connectionless Ethernet.

On the other hand, the heavier control plane increases complexity, and MPLS-enabled equipment is usually more expensive than plain Ethernet equipment.

The purpose of this chapter is to show the principles of interworking Ethernet-based DSL and Metro access network areas with an MPLS-based Metro core network.

3.1.2.2 MPLS interworking in the case of L2 VPN business services offered by the NAP

Figure 3-6 : MPLS for L2 VPN business services

Figure 3-6 shows an example frame flow between two locations of a business customer 1 which are interconnected by an Ethernet service of the NAP via the MPLS core. In this example the business customer requires transparent transport of VLAN tags (C-VLAN) between the UNIs. Service multiplexing is not used. On the left hand side the customer location is interconnected to an access node, while on the right hand side the customer location is directly interconnected to the LER node.


The access node adds/strips a service-specific S-VLAN to/from the Ethernet frames when forwarding frames between the customer network and the NAP network. The left-hand LER forwards Ethernet frames between S-VLANs and VC-LSPs in a one-to-one fashion following the Martini encapsulation. The LER strips the S-VLAN tag from the Ethernet frames arriving from the access node before forwarding them on the associated VC-LSP. In the other direction the LER adds the respective S-VLAN tag associated with the VC-LSP, e.g. 105. For directly connected customer locations the right-hand LER just forwards Ethernet frames between physical ports and VC-LSPs in a one-to-one fashion. Multiple VC-LSPs are encapsulated into one Tunnel-LSP within the MPLS network.

If the customer location on the right-hand side is also connected via an access node, the right-hand LER also has to add/strip a service-delimiting S-VLAN (e.g. 211) to/from the Ethernet frames of the metro access area. Which of the two operations applies can be derived by the LER, e.g. from an attribute administered at the physical downlink port.

In the case that two customer locations are connected to the same access node, the AN will directly bridge the frames as an 802.1ad provider bridge. In the case that two customer locations are connected to different ANs but to the same next-level aggregation node, the latter node can also operate as a provider bridge and the LER function of the node is not involved.
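The one-to-one mapping performed by the LERs in Figure 3-6 can be summarised by the following sketch (labels and identifiers are purely illustrative and do not represent the actual Martini encoding):

# Sketch of the LER forwarding step for L2 VPN business traffic (illustrative values only).

# Left-hand LER: service-delimiting S-VLAN <-> pseudo wire (VC-LSP inside a tunnel LSP).
SVLAN_TO_PW = {105: {"tunnel_label": 4001, "vc_label": 30105}}
PW_TO_SVLAN = {30105: 105}

def to_core(s_vid, customer_frame):
    """Strip the S-VLAN tag and push the VC and tunnel labels (Martini-style stacking)."""
    pw = SVLAN_TO_PW[s_vid]
    return {"labels": [pw["tunnel_label"], pw["vc_label"]], "payload": customer_frame}

def from_core(labelled_packet):
    """Pop the labels and re-add the S-VLAN tag associated with the VC-LSP."""
    vc_label = labelled_packet["labels"][-1]
    return {"s_vid": PW_TO_SVLAN[vc_label], "payload": labelled_packet["payload"]}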

3.1.2.3 MPLS interworking in the case of access to L3 VPN service providers

Figure 3-7 : MPLS for L3 VPN service

Figure 3-7 shows an example network scenario in which the NAP provides an access service to an IP VPN service provider. The IP VPN service provider can either be the NAP itself or a third-party network service provider. Today, IP VPNs are usually implemented following RFC 2547bis. The encapsulation can be done in the same way as discussed above for the case of L2 VPN services.

3.1.2.4 MPLS interworking in the case of residential traffic

In addition to business services, residential DSL services must be supported by the same NAP network. In this case S-VLAN tagging must also be used for residential service delimiting. Two different options can be distinguished:

a. residential traffic is additionally tagged with a C-VLAN for residential customer segregation (cross-connect mode);


b. residential traffic is not tagged with a C-VLAN and forced forwarding is used for customer segregation within an access node (bridged mode).

Figure 3-8 : MPLS Encapsulation in cross-connect mode

Figure 3-8 shows the encapsulation for case a). The same encapsulation principles are applied as discussed in 3.1.2.2 for business traffic. Therefore residential as well as business traffic can coexist in the same network.

For each residential service, one S-VLAN is reserved for service delimiting purposes in each metro access area. C-VLANs are used for residential customer segregation at the same access node. LER A maps frames between (S-VLAN, downlink port) and (T-LSP, VC-LSP) in a one-to-one manner. Multiple VC-LSPs, each carrying traffic of a single residential service from one access node, can be transported in the same tunnel LSP. LER B maps the T- and VC-LSP to a physical port and an S-VLAN tag which identifies the service and the access node. Note that, as of today, no inter-domain MPLS has been specified.

In this scenario the S-VLAN has local significance only and is stripped/added in the LER. The NSP uses the triple (physical port, S-VLAN, C-VLAN) to identify the service and the access port. Note that it may be possible to re-use S-VLANs between different physical ports at the LER.

Since an LSR in the Metro core is aware neither of the VC-LSPs nor of the Ethernet frames, this architecture scales well.

Figure 3-9 : MPLS Encapsulation in bridging mode


Figure 3-9 shows the encapsulation for the case in which residential traffic uses the bridged mode. In this case there is no C-VLAN for residential customer segregation. Each access node forwards all residential traffic to the LER irrespective of the destination MAC. S-VLAN tagging and mapping to VC-LSPs is done as in the preceding "cross-connect" case. Since there is no C-VLAN, the NSP may evaluate DHCP option 82 for identifying the access port, if this option is supported by the NAP.
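The following sketch illustrates how an AN could build the DHCP option 82 TLV (Relay Agent Information, RFC 3046); the exact content of the Agent Circuit ID sub-option is operator-defined, so the identifier format below is only an assumption:

# Sketch: building DHCP option 82 (Relay Agent Information, RFC 3046) in the AN.
def build_option82(access_node_id: str, slot: int, port: int) -> bytes:
    circuit_id = f"{access_node_id} eth {slot}/{port}".encode()   # hypothetical line identifier
    sub_option = bytes([1, len(circuit_id)]) + circuit_id         # sub-option 1 = Agent Circuit ID
    return bytes([82, len(sub_option)]) + sub_option              # option 82 TLV

# The NSP can later map the received circuit id back to the originating access port.
print(build_option82("AN-0017", 1, 48).hex())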

3.1.3 Providing end-end connectivity

3.1.3.1 Connectivity parameters

Nature of connectivity parameters

Once an end-user CPE has successfully been auto-configured, it will start communicating with either a server in the network (client-server connectivity) or with another user (user-user or peer-peer connectivity). In order to ensure basic connectivity between the end-points, the different network elements and the CPE must store connectivity parameters at layer 2 and layer 3, and use them for the forwarding of data frames. The purpose of this subchapter is to identify these needed parameters, together with the method to configure them.

The considered cases are unicast client-server, multicast client-server, and unicast peer-peer. The "chain" of considered nodes is : CPE (Terminal + RGW) - AN - AS - EN. The ACS, DHCP servers, PPP servers, AAA servers are needed for setting these connectivity parameters but are not part of the data path.

As the multi-service aspect is important, we identify which (set of) parameters must be set per "NSP" and per service type. Note that "NSP" means the entity offering IP @s to the user, so this does not take a position on the business model handled; "NSP" could be unique and equal to the NAP, or one of multiple third-party ISPs.

Updates of the connectivity parameters

The parameters not only need to be set but also sometimes updated. Indeed, a user can ask for a new service, requiring the use of a new IP@ for his CPE. Or a link somewhere in the aggregation network can break, initiating an update of the spanning tree. These are just examples; in general there are two types of events that trigger an update of connectivity parameters: normal events and disruptive events.

• Normal events

As listed here, there are multiple occasions of normal behaviour requiring some updates, each with a specific trigger for the update :

- New hosts, nodes and servers: normal learning process (e.g. new host is connected => FDB is updated in AN, EN and all AS’s by normal flooding and learning).

- Ageing of tables (e.g. ARP table): table refreshes the entry after ageing (e.g. by sending ARP request)

- Time-out of settings (e.g. IP@ via DHCP, multicast group membership via IGMP): timer expiration triggers specific protocol messages when needed (e.g. dynamic allocation in DHCP, when lease period of a user is over the IP@ is freed and can be allocated to another user).

- New location of nomadic user : new access authentication phase (see security).


- New service requested by a user : the CPE updates its routing table; if PPP => a new PPP(oE) session is set up; if DHCP => via a DHCP INFORM / DHCP ACK exchange with the DHCP server, without the need to obtain a new IP@.

- Operator-driven : via management platform (e.g. for node maintenance)

• Disruptive events

But everything does not always run smoothly in the access network. Failures can occur in links and in nodes, and they also have to be accounted for in connectivity updates.

- Node failure. Some nodes are made resilient either by duplication of internal boards or by duplication of the node. When a failure happens, a protection switch-over is initiated from working entity to protection entity (equipment either inside the node or a separate node on its own). The L2-L3 parameters must then be updated in the other nodes. Note that the protection entity could have a different <MAC@, IP@>, or just a different MAC@, or the same <MAC@, IP@> as the working entity.

- Link failure in the network. The links at the edge (AN - aggregation network, aggregation network - EN, EN - NSP network) can (should) be made redundant. The overall aggregation network topology should also be tolerant to link failure (meshed, protected ring). When link failure occurs, a protection switch-over is initiated to another path. The spanning tree and forwarding tables in the nodes must then be updated.

• Overview of possible failures between user and NSP

Figure 3-10 depicts all possible node and link failures. It considers several protection points. First, there is the possibility of dual homing of the CPE (for business users), leading to an active LT(0) and a protection LT(1) at the AN. The uplink of the AN should likewise be protected by means of an active NT(0) and a protection NT(1). In the examples it is assumed that they are each connected to a different first switch in the aggregation network (AS1 and AS1' respectively). The Ethernet aggregation network has a meshed or ring structure and contains a spanning tree. The spanning tree starts at the blade of the EN (root bridge) and stops at the NTs of each AN. For protection, the EN should have a working and a protection blade at the aggregation network side, and a working and a protection blade at the NSP side. Alternatively, the whole EN can be doubled in the network.

Note that another type of link failure has no impact on the connectivity parameters ; when a link between two nodes is physically doubled and both are managed by LACP, the switch-over doesn't change the logical port information as seen by the higher layers.


Figure 3-10 : Possible failures in the NAP

• Requirements resulting from updates


The update of the Ethernet spanning tree is one of the major events at failure. Depending on the place of the failure w.r.t. the current tree topology, the update can be very local or on the contrary quite dramatic (change of root bridge in the extreme case).

Requirement on S-VLAN provisioning : The provisioning of the network should ensure that there are S-VLANs for each back-up part, so that the back-up path is directly available and no VLAN reconfiguration is needed after a failure: an S-VLAN between NT(0) - EN(0), and an S-VLAN' between NT(1) - EN(1). Note that S-VID and S-VID' could be the same or could be different (decision by the operator). A direct consequence is that the S-VID can change due to a protection switch-over.

Requirement on the EN at switch-over from NT(0) to NT(1) : the EN must retrieve the S-VID' for downstream traffic after a switch-over, either by ageing of the ARP table (if no L2-L3 learning from upstream traffic), or by L2-L3 interpretation of upstream traffic (traffic from the user will arrive in the new S-VLAN), or directly triggered by the management platform.

Requirement on the CPE at switch-over from EN(0) to EN(1) : when there is a protection switch-over from EN(0) to EN(1), the L2 and/or L3 address of EN(1) could differ from that of EN(0). In this case the CPE should update its routing table, because it now has to send its traffic to EN(1). The required update could be the <MAC @, IP @> of the EN, or just the MAC @ (EN(0) and EN(1) keeping the same IP @). The issue is that the CPE is not aware of this switch-over from EN(0) to EN(1) at layer 2 or layer 3. If the connection is IPoPPPoE, the connection is broken and must be started up again. If the connection is IPoE, there is also no way that the CPE can on its own detect a connection failure to the EN at layer 2 or layer 3. Therefore the most elegant solution (if economically viable) is to avoid any impact on the CPE by supporting VRRP at the EN: EN(0) and EN(1) then share a common virtual <MAC@, IP@> and work in master-slave mode via VRRP, as illustrated in the sketch below.
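A minimal illustration of the effect on the CPE (all addresses are hypothetical example values):

# Sketch: effect of EN redundancy on the CPE routing table (hypothetical values).
cpe_routing_table = {"0.0.0.0/0": {"gw_ip": "10.0.0.1", "gw_mac": "00:11:22:33:44:55"}}

# Without VRRP, a switch-over from EN(0) to EN(1) changes the next-hop addresses,
# and the CPE has no layer 2 or layer 3 trigger to learn this, so traffic is lost.
en1_addresses = {"gw_ip": "10.0.0.2", "gw_mac": "00:11:22:33:44:66"}

# With VRRP, EN(0) and EN(1) share the virtual pair below in master/backup mode,
# so the entry in the CPE routing table stays valid across the switch-over.
virtual_gateway = {"gw_ip": "10.0.0.1", "gw_mac": "00:00:5e:00:01:01"}  # VRRP virtual MAC for VRID 1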

Failure : CPE

Update actions :
* AN : MAC@ of the CPE disappears from the FDB via ageing; the IGMP snooping table is updated if needed when there are no more joins or no response to queries.
* AS and EN : update of FDB.
* DHCP server : release of IP@ after the lease period. PPP server : release of IP@ after time-out of non-activity.

Failure : AN LT(0) / downlink CPE - AN

Update actions :
If dual homing :
* CPE : protection switch-over to the other interface.
* AN : switch-over from LT(0) to LT(1), update of the IGMP snooping table if needed (if new/aged multicast group per LT).
If no dual homing : see CPE failure.

Failure : AN NT(0) / uplink AN - AS1 / (board on) AS1

Update actions :
* AN : switch-over from NT(0) to NT(1), change of S-VLAN to S-VLAN', initiate spanning tree reconfiguration.
* AS1' : participate in spanning tree reconfiguration (port state updates + send BPDUs), update FDB via flooding and learning. See description of spanning tree.
* Other AS's : participate in spanning tree reconfiguration (port state updates + send BPDUs), update FDB via flooding and learning.

Failure : (board on) intermediate AS / link between intermediate AS's

Update actions :
* AS whose root port is down : trigger spanning tree reconfiguration.
* AN : participate in spanning tree reconfiguration (update port state + send BPDUs), update FDB via flooding and learning.
* Other AS's : participate in spanning tree reconfiguration (update port state + send BPDUs), update FDB via flooding and learning. Note 1.

Failure : (board on) AS2 / link AS2 - EN / EN ingress board

Update actions :
* CPE : IPoPPPoE => connection down, auto-config must be restarted by the user; IPoE => the CPE needs to know the new IP@-MAC@ of the EN. Note 2.
* AN : entry of the EN in the ARP table is refreshed after ageing (if same IP@ but new MAC@).
* AS whose root port is down (AS2 or AS2') : initiate spanning tree reconfiguration.
* AS's : participate in spanning tree reconfiguration (port state update + send BPDUs), update FDB via flooding and learning.
* EN : switch to the other interface (both for FDB and VR connections).

Failure : EN egress board / uplink EN - NSP

Update actions :
* No repercussion on the NAP.
* EN at NSP side : switch to the other interface via an L3 protocol (e.g. IS-IS).

Table 3-2 : Possible failures and required updates

Note 1 : possible change of S-VID

Note 2 : see previous description on update requirement at CPE.

3.1.3.2 IP address distribution and L2 network topology

Before going into more detail on the different types of connectivity and the required parameters, it is useful to review the possible choices of IP subnetting and their link with the layer 2 topology of the network. This is primarily relevant for peer-peer communication.

Every user must receive one or multiple IP addresses, which can be private or public addresses. In the case of public IP addresses, the ISP or NSP has reserved a certain IP address space from IANA that can then be distributed freely (free choice of subnetting rules) to its subscribers. For private IP addresses, the NSP or NAP is in principle free to choose the IP address space, and again free in the choice of subnetting to its subscribers.

It is the network topology at layer 2 that will influence the subnetting choices that can be made. These choices in turn have an impact on the number of subnets needed (the pool of public IP subnets to be reserved) and on the capabilities for peer-peer connectivity (i.e. how two peers can get in contact when their addresses belong to the same or to different subnets).

As the users are grouped per AN in the topology, the main subnetting choices (per NSP) are:

• 1 subnet per user.


This is not recommended in terms of address waste (up to 75%, due to the network and broadcast addresses and the separate IP address at the EN interface) and in terms of the number of subnets to manage. Therefore this option is not considered further.

• Multiple subnets per AN

If the IP address space is scarce (public IPv4 addresses), it can be beneficial to assign several subnets (of different sizes) per AN, depending on the number of users, in order to limit the number of unused (wasted) addresses. However, this requires management of IP address pools and planning in order to accommodate increases in address demand per AN.

• 1 subnet per AN

This is a logical choice, albeit with lower address usage efficiency than the previous case. The subnet should be chosen large enough to allow for possible growth in address demand per AN.

• 1 subnet per group (or all) of ANs

In terms of management complexity, the simplest solution is to centralise the distribution of IP addresses from a single (or several) subnet to all users in the NAP.

Allocating subnets per AN requires that, at auto-configuration, the server recognises the AN to which the user is connected and selects the subnet accordingly. In DHCP, the giaddr set by the DHCP relay in the AN can be used for this.
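A minimal sketch of the server side of this mechanism (the pool layout and addresses below are hypothetical examples):

# Sketch: DHCP server side of "1 subnet per AN", selecting the pool from giaddr.
import ipaddress

# giaddr set by the DHCP relay in the AN -> subnet reserved for that AN
POOL_PER_GIADDR = {
    "10.1.0.1": ipaddress.ip_network("192.0.2.0/25"),    # AN 1
    "10.1.0.2": ipaddress.ip_network("192.0.2.128/25"),  # AN 2
}

def select_address(giaddr: str, in_use: set) -> str:
    """Pick a free host address from the subnet associated with the relaying AN."""
    subnet = POOL_PER_GIADDR[giaddr]
    for host in subnet.hosts():
        if str(host) not in in_use:
            return str(host)
    raise RuntimeError("address pool exhausted for this AN")

print(select_address("10.1.0.1", in_use={"192.0.2.1"}))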

Please note that CIDR is assumed to be supported in the edges and NSP/ISP/ASP IP networks.

3.1.3.3 Unicast client - server connectivity

Terminal

The terminal is visible from the access network side at L2 and L3 when the RGW is bridged, and only at L3 when the RGW is routed without NAPT. With a routed RGW with NAPT, the terminal is not reachable at L3 from the network.

The terminal parameters are

- IP @ (per NSP), set by auto-configuration
- Routing table (per user's IP @), set by auto-configuration or user configuration
- ARP table (per user's IP @), set via the ARP mechanism

Bridged RGW

The RGW is transparent at layer 3 and only contains PHY parameters (set by ILMI or similar) and a bridging table (FDB, set by the MAC address learning mechanism).

Routed RGW incl. NAPT

On top of the previous parameters, the RGW also contains

- its own IP@ (per NSP, set by auto-config)
- a routing table (entries per user's IP@, set by auto-config)
- a NAPT conversion table
- an ARP table (per user's IP @)


- parameters for the local DHCP server on the RGW.

Access Node

The AN is a layer 2 device (for forwarding) in intelligent bridging or cross-connecting mode, based on a 802.1Q (bridging mode) or 802.1ad bridge (bridging mode, cross-connect mode) functionality. There is also some IP awareness, for unicast mainly for anti-spoofing filters.

The parameters and the corresponding configuration methods can be summarised as follows:

- PHY parameters per line : management by operator
- Bridging table, user side {MAC@s, port, C-VLAN} per line : self-learning (MAC@) + management (C-VLANs)
- Bridging table, network side {MAC@, port, S-VLAN} of the EN and of servers (Radius, DNS, Application) : management only (for security)
- If VLANs to the RGW, optional mapping table VLAN user side <-> VLAN network side : by management
- S-VLANs for business users : management
- ARP table (ARP relay functionality in the AN) : IP@ and MAC@ of users via ARP; IP@ and MAC@ of ENs via management and ARP
- If DHCP relay, IP@ of DHCP servers : management

Aggregation Switch

The AS doesn't need to have L3 awareness (for unicast). It is a 802.1Q (bridging mode) or S-VLAN-aware bridge (bridging mode and cross-connect mode) and contains mainly a bridging table (set by self-learning and management).

EN in IPoPPPoE case

The EN (BRAS) can either terminate the PPP sessions (IP wholesale) or tunnel them to the corresponding NSP (PPP wholesale). The EN is then respectively a router with PPP server or a LAC. It also requires 802.1Q (bridging mode) or 802.1ad (bridging mode, cross-connect mode) functionality at the aggregation network side.

- Table (note 1) {IP@(CPE), L2 parameters of the CPE (MAC@, S-VLAN, C-VLAN), PPP session} : learning from the PPPoE and PPP sessions (snooping)
- Table {subnet mask, NSP} (note 2) : management by operator
- (IP wholesale) routing tables of the VRs (per NSP) (note 3) : management + routing protocols with NSPs
- (PPP wholesale) L2TP parameters (per NSP) : management
- Bridging table {MAC@s, port, S-VLAN} : self-learning (MAC@s) + management (for S-VLANs)
- ARP table, MAC@s of users and of IP@s in the routing tables of the VRs (per NSP) (note 3) : ARP mechanism

Note : for simplicity the term CPE is used here for the device visible from the aggregation network side

Note 1 : in downstream, AN (and hence S-VLAN) is selected by IP Destination Address

Note 2 : in upstream, the NSP is selected by the subnet of IP Source Address (if non-overlapping). If overlapping subnets, requires L2 parameter.

Note 3 : if multiple VRs are needed.

Edge Node in IPoE case

The difference with IPoPPPoE is that the EN no longer requires PPP or L2TP handling functionality.

- Table (note 1) {IP@(CPE), L2 parameters of the CPE (MAC@, S-VLAN, C-VLAN)} : ARP, or contacting the DHCP server, or others...
- Table {subnet mask, NSP} (note 2) : operator config
- (IP wholesale) routing tables of the VRs (per NSP) (note 3) : operator config + routing protocols with NSPs
- Bridging table {MAC@s, port, S-VLAN} : self-learning (MAC@s) + management (for S-VLANs)
- ARP table, MAC@s of users and of IP@s in the routing tables of the VRs (per NSP) (note 3) : ARP mechanism

Note : for simplicity the term CPE is used here for the device visible from the aggregation network side

3.1.3.4 Unicast user-user connectivity (peer-peer)

Why is peer-peer special ?

The connectivity parameters in the nodes are the same as for client-server traffic. However, peer-peer traffic cannot be offered without extra considerations and mechanisms. Moreover, there is a basic choice between allowing peer-peer locally (at layer 2) or only via the edge (at layer 3), and the requirement to be able to block local peer-peer at layer 2. The issues are also related to the subnetting choice for the users in the NAP.

Three orthogonal situations must be considered for peer-peer connectivity:


- Connection between users belonging to same subnet / connection between users of different subnets

- Connection between users on same AN (intra-AN, users A-B on the figures) / connection between users on different ANs (inter-AN, users A-C on the figures)

- Direct connection (i.e. at L2, without involving EN) / connection via EN (forced forwarding / tunneling to EN).

These considerations are mostly relevant for IPoE because with IPoPPPoE all peer-peer traffic is handled at the EN anyway due to the termination of PPP tunnels there. The questions addressed in this subchapter are :

- How can direct connections be supported at layer 2?

- How can connections via the EN be supported? Implementation concerns the content of the routing table of the user and the possible ARP mechanisms.

In all cases the connectivity depends on general switching/routing behaviour in the AN and the EN. The general behaviour depends on the subnet of both users, and the logical LAN on which they are attached (in the case of the Ethernet Network Model the "LAN" is the part of the access and aggregation network whose reach is limited by the interpretation of VLANs at AN and EN).

• Direct connections in AN and via aggregation network

- Users on the same subnet and same LAN can be directly switched (at layer 2) by means of an entry in the user’s routing table to the LAN, and ARP mechanism.

- Users on different subnets but on the same LAN require their routing table to contain entries for the other subnets to the same LAN and ARP requests to be launched for these other subnets.

- Users on different LANs cannot be directly connected at layer 2; they need a router. An alternative is to foresee a separate VLAN common to all ANs, thereby acting as a common LAN.

• Connections via EN

- Users on the same IP subnet and same LAN (as seen by the EN) but forced to the EN will have to be redirected by the EN back onto the same interface. Normally, this would result in an ICMP redirect message [RFC 792] being sent to the originating host, and in the dropping of further packets (after one or a few routing attempts) to the same destination. The calling user cannot solve this at layer 2. To prevent this, the ICMP redirect function must be disabled on the aggregation network interfaces of the Edge Node. In addition, even when ICMP redirects are disabled, the router should not drop the frames but route them back onto the same interface (a behaviour sketched after this list). This behaviour is not described in standard RFCs and could be router-proprietary. However, it is expected that most routers can be configured for such behaviour.

- Users on the same subnet but on different LANs and forced to the EN can be routed at the EN with an ARP proxy functionality at the EN [RFC 1027]. More specifically, for users on the same subnet but in different VLANs, it is possible to aggregate these different VLANs at the EN and group their users in the same broadcast domain, give them the same default gateway and enable communication between them via an ARP proxy variant. This solution exists for single 802.1Q VLANs [RFC3069] and could be applied to S-VLANs.

- Users on different subnets and on the same LAN. The traffic could be routed at the EN, but the EN can generate an ICMP redirect (to be checked whether this can be avoided), which would block further traffic (to be checked).


- Users on different subnets and different LANs and forced to the EN can be routed at the EN, when there is an entry in the user's routing table that forwards everything to the EN by default.
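The EN behaviour required in the same-interface cases above can be summarised by the following sketch (an assumption about configurable router behaviour, not a description of any specific product):

# Sketch of the EN behaviour required for peer-peer via the edge (assumed, router-specific).
def route_peer_to_peer(ingress_if, dst_ip, routing_table, icmp_redirects_enabled):
    egress_if = routing_table[dst_ip]          # lookup simplified to a direct mapping
    if egress_if == ingress_if:
        if icmp_redirects_enabled:
            # Standard behaviour: advise the sender of a "better" next hop and eventually
            # stop forwarding -- this breaks peer-peer traffic forced via the EN.
            return "send ICMP redirect"
        # Required behaviour: hairpin the packet back out of the same interface.
        return f"forward out {egress_if}"
    return f"forward out {egress_if}"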

Putting these considerations into practice, together with the subnetting considerations in 3.1.3.2, leads to the following conclusions for the intelligent bridging and cross-connect modes. The format considered is IPoE, since with PPP all traffic is automatically sent to the EN.


Figure 3-11: Peer-peer in intelligent bridging mode and cross-connect mode

Allowing peer-peer locally at layer 2

Only the bridged model is suited for direct peer-peer communication. Direct peer-peer communication is impossible with the cross-connect modes (single or double VLAN), as the connections are VLAN-based and no longer based just on the MAC@.

In the bridged model, communication between different subnets requires setting these subnets individually in the user's routing table (otherwise the traffic would flow to the default gateway, i.e. an EN). It is recommended to limit the number of subnets in the network to keep this manageable.

Bridged mode, intra-NSP, 1 subnet per AN. Extra requirements:
* at the auto-config request, the DHCP server must 1) associate the subnet in function of the AN, 2) allocate an IP@ from that subnet, 3) deduce the other subnets, 4) set the other subnets in the user's routing table;
* ARP relay in the AN.

Bridged mode, intra-NSP, 1 subnet per many ANs. Extra requirements:
* if >1 subnet over the aggregation network, same requirement for the DHCP server as in the 1 subnet/AN case;
* ARP relay in the AN.

Bridged mode, inter-NSP. Extra requirements:
* the subnets of users of other NSPs must be known in advance; this requires co-ordination of subnets by the NAP (which must know the subnets and set the routing tables accordingly);
* ARP relay in the AN.

Table 3-3: Summary direct peer-peer

Blocking peer-peer locally at layer 2

In case local peer-peer traffic is not allowed in the network, some extra measures must be taken in the intelligent bridging mode (in the cross-connect mode the local traffic is automatically blocked).


• Blocking intra-AN traffic, by not forwarding user-originated ARP requests to the other users (preventing a user from learning the IP@ of another user), and by blocking traffic to unknown MAC destinations (preventing a user from communicating directly if he knows the IP@ and MAC@ of another user via other means) (see §0).

• Blocking inter-AN traffic, by filtering user-originated requests so that only known IP nodes can be reached (preventing a user from learning the IP@ of another user), and by putting the ANs in different VLANs (preventing a user from communicating directly if he knows the IP@ and MAC@ of another user via other means).

Allowing peer-peer via the EN at layer 3

Both the bridged and the cross-connected models can allow peer-peer via the EN, although this again implies some extra mechanisms and complexity.

In the bridged mode, the connection requires the EN to behave correctly even if source and destination are on the same LAN and the same subnet. Otherwise tunneling could be used, but this is not a practical solution given the large number of tunnels that would have to be managed at the EN (one per user).

In the cross-connect mode, the situation is the same, but an alternative is that the EN interprets the stacked VLAN as a separate logical LAN. In that case all users are automatically seen as being on different LANs.

Bridged mode, intra-NSP, 1 subnet per AN.
Extra requirement : at the auto-config request, the DHCP server must 1) associate the subnet in function of the AN, 2) allocate an IP@ from that subnet.
Requirement on EN : if the EN cannot cope with the same subnet on the same interface, tunneling is required for A-B, but this is a large amount of tunnels => the EN must be configured to route correctly.

Bridged mode, intra-NSP, 1 subnet per many ANs.
Extra requirement : if >1 subnet over the aggregation network, same requirement for the DHCP server as in the 1 subnet/AN case.
Requirement on EN : if the EN cannot cope with the same subnet on the same interface, tunneling is required for A-B, but this is a large amount of tunnels => the EN must be configured to route correctly.

Bridged mode, inter-NSP.
Extra requirement : if >1 subnet over the aggregation network, same requirement for the DHCP server as in the 1 subnet/AN case.
Requirement on EN : the EN must be able to handle different subnets on the same LAN (case A-B).

Cross-connect mode, intra-NSP, 1 subnet per AN.
Extra requirement : at the auto-config request, the DHCP server must 1) associate the subnet in function of the AN, 2) allocate an IP@ from that subnet.
Requirement on EN : if the EN cannot cope with the same subnet on the same interface, tunneling is required for A-B, but this is a large amount of tunnels => the EN must be configured to route correctly, OR it must be able to interpret {S-VID, C-VID} as a separate logical LAN.

Cross-connect mode, intra-NSP, 1 subnet per many ANs.
Extra requirement : if >1 subnet over the aggregation network, same requirement for the DHCP server as in the 1 subnet/AN case.
Requirement on EN : if the EN cannot cope with the same subnet on the same interface, tunneling is required for A-B, but this is a large amount of tunnels => the EN must be configured to route correctly, OR it must be able to interpret {S-VID, C-VID} as a separate logical LAN.

Cross-connect mode, inter-NSP.
Extra requirement : if >1 subnet over the aggregation network, same requirement for the DHCP server as in the 1 subnet/AN case.
Requirement on EN : the EN must be able to handle different subnets on the same LAN (case A-B), OR be able to interpret {S-VID, C-VID} as a separate logical LAN.

Table 3-4: Summary peer-peer via EN

Summary

The different options for peer-peer connectivity have been reviewed in the consortium, see 2.3.2. Based on the feedback gathered on operators' preferences, it was decided to favour peer-peer connectivity at layer 3. For the Ethernet Network model this implies being able to block local peer-peer traffic inside and between ANs, and to force all peer-peer traffic via the EN.

Note that point-point connections for business users can also be considered as "peer-peer" in nature, but their implementation is different. They rely on statically provisioned connectivity pipes at L2 (S-VLANs), and are typically charged a flat fee. Therefore there is no need to force this traffic to an EN. Multipoint-multipoint connections for business users can be considered in the same way.

3.1.3.5 Multicast server-client connectivity

Introduction

If IPoPPPoE is the data format, PPP requires per-user stream replication at the edge (separate PPP sessions). Hence PPP is not suited for efficient transport of multicast.

On the other hand, with IPoE as data format it is possible to have stream replication in the NAP, closer to the users. The distribution of multicast flows uses one dedicated S-VLAN per EN (as source of the multicast traffic) for all its multicast streams, and all ANs accept this VLAN on their uplink port. Per multicast stream (hence per EN acting as source of the stream), an L2 distribution tree is built between the EN and the related ANs by IGMP snooping in the intermediate nodes and IGMP termination in the EN. Replication of a single stream viewed by multiple users on an AN takes place in the AN, also based on IGMP snooping (or proxying).

The EN retrieves multicast streams from servers in the NSP / ASP network via IP multicast routing protocols such as PIM.

Multicast Protocols

The most familiar multicast protocols are the IGMPv2 and IGMPv3 protocols (layer 2) and the PIM (SM or DM) protocols (layer 3). IGMP is the protocol used to manage the multicast flows at layer 2, by building multicast trees in the layer 2 aggregation network and inside the AN. Table 3-5 presents the main IGMP messages.


IGMP V2 Membership Specific Query
- Comment : Specific Query for a multicast flow
- MAC src @ : Querier MAC@
- MAC Dest @ : Multicast group MAC@
- IP src @ : Querier IP@
- IP Dest @ : Multicast group IP@
- IGMP parameters : IGMP version 2; Multicast address = multicast group IP@; Type = Membership Query; Max Response Time = 1 sec

IGMP V2 Leave Group
- Comment : Leave a multicast flow
- MAC src @ : Client or proxy MAC@
- MAC Dest @ : "all routers" group MAC@ 01:00:5e:00:00:02
- IP src @ : Client or proxy IP@
- IP Dest @ : 224.0.0.2 ("all routers")
- IGMP parameters : IGMP version 2; Multicast address = multicast group IP@; Type = Leave Group

IGMP V2 Membership General Query
- Comment : General Query for all multicast flows
- MAC src @ : Querier MAC@
- MAC Dest @ : "all systems" group MAC@ 01:00:5e:00:00:01
- IP src @ : Querier IP@
- IP Dest @ : 224.0.0.1 ("all systems")
- IGMP parameters : IGMP version 2; Multicast address = 0.0.0.0; Type = Membership Query; Max Response Time = 1 sec

IGMP V2 Membership Report
- Comment : Join a multicast flow, or Query response
- MAC src @ : Client or proxy MAC@ (hardware intrinsic)
- MAC Dest @ : Multicast group MAC@ (Ethernet/IP @ association)
- IP src @ : Client or proxy IP@ (IP@ delivered by the NSP via DHCP auto-configuration)
- IP Dest @ : Multicast group IP@ (selected by the client among the pool proposed by the NSP or ASP)
- IGMP parameters : IGMP version 2; Multicast address = multicast group IP@; Type = Membership Report

Table 3-5: IGMPv2 messages
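As a concrete illustration of the message format behind Table 3-5, the following sketch builds the 8-byte IGMPv2 message body (type, max response time, checksum, group address) as defined in RFC 2236; the group address used is just an example:

# Sketch: building the 8-byte IGMPv2 message body summarised in Table 3-5 (RFC 2236).
import socket
import struct

TYPE_MEMBERSHIP_QUERY = 0x11
TYPE_MEMBERSHIP_REPORT_V2 = 0x16   # join / query response
TYPE_LEAVE_GROUP = 0x17

def checksum(data: bytes) -> int:
    """Standard 16-bit one's-complement Internet checksum."""
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv2_message(msg_type: int, group: str, max_resp_time: int = 0) -> bytes:
    body = struct.pack("!BBH4s", msg_type, max_resp_time, 0, socket.inet_aton(group))
    return struct.pack("!BBH4s", msg_type, max_resp_time, checksum(body),
                       socket.inet_aton(group))

# Membership Report for group 239.1.1.1 (sent to the group address itself, see Table 3-5).
print(igmpv2_message(TYPE_MEMBERSHIP_REPORT_V2, "239.1.1.1").hex())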

Multicast Options In the Ethernet Model

Two cases must be studied: one in which the multicast server is directly connected to the aggregation network, and another in which the multicast server is positioned in the NSP/ASP domain and connected to the NAP(s).

• Multicast server directly connected to the aggregation network

In this case, the IGMP termination can be positioned directly at the Access node level, at the aggregation switch level or at the multicast server level.


Figure 3-12: Multicast server connected to the aggregation network

If the IGMP termination is performed at the Access Node level, then no multicast protocol is necessary in the aggregation network, and all the multicast flows arrive at this stage. With regard to broadcast TV (BBTV), this solution requires the aggregation network to be dimensioned appropriately. Indeed, the choice of the AN uplink interface will depend on the number of channels in the bundle. Moreover, there will be more than one video service provider.

The IGMP termination can also be managed at the aggregation switch level or at the Multicast server level.


IGMP termination can also be supported anywhere from the AN up to the switch connected to the video headend. To optimise the aggregation network, the latter case is preferred. The most efficient solution would be to provide the IGMP termination directly at the multicast server connected to the access node, and to provide IGMP snooping/proxying in the access (remote) node for local replication of multicast streams. This solution makes it possible to deliver only the required multicast streams at the access network level. However, if we consider a ring, the spanning tree protocol (STP) or an equivalent will be applied, and in case of a failure while video flows are being broadcast, the flows set up by an IGMP join are lost. To restore the broadcast, an end-user must request again the channel previously watched, in order to update the IGMP tables. In the IP network model this issue is solved by using an IP multicast routing protocol (PIM for instance), which performs better in terms of restoration time.

From a scalability point of view, taking into account the number of video PoPs (Points of Presence) and the use of multicast servers reachable via Internet access, it is more valuable and scalable to assume that the multicast servers are positioned at the NSP/ASP level.

• Multicast server connected to NSP/ASP

In this case, the IGMP termination can be located in the aggregation switch or at the edge node level, with the PIM protocol (SM or DM) positioned in the NSP/ASP network.

For instance, if we consider a centralised video headend, we have to take into account the equipment existing and available in the networks. The more suitable approach to serve several video PoPs (of varying size) is to use the IP NSP networks to transport the multicast flows. This avoids concentrating the traffic destined for all the ANs on a single aggregation network. Moreover, starting from a centralised video headend connected to the NSP network, each aggregation network corresponding to a video PoP can have the same data plane configuration. In other words, if we want to support one VLAN per [AN:EN] pair, this approach is more interesting in terms of VLAN scalability.


Figure 3-13 : Multicast server connected via the ASP

As in the previous case, IGMP-based replication at the AN favours scalability. Concerning the IGMP termination itself, the positioning of the IGMP termination at the POP level (multicast server) seems to be more scalable:

- It allows bandwidth in the aggregation network to be optimised for the benefit of other upcoming services such as Internet access or video conferencing.

- Sometimes fewer customers are connected to an AN than the total number of channels offered in the bundles (small cities).


- more and more video bundles will appear

- often the same channels are asked for.

Terminating IGMP in the Edge Node instead of the AN could have an impact on the channel change time. It would be valuable to add to this part any available measurements from different configurations.

If IGMP is terminated as described before at the switch level, a special video VPN in the NSP network could transport the multicast streams. The flows are then transported in the aggregation network at layer 2 up to the switch.

Multicast Features required in the Ethernet access model

The figure below presents the multicast features required for such a model, also taking into account the case of a remote unit. In that case, the multicast mechanisms initially envisioned in the AN must be implemented in the remote unit.


Figure 3-14: IGMP functionalities in the Ethernet NW model

• IGMP snooping : upon receipt of an IGMP message, the equipment checks in its table whether members are already associated with the specified multicast group. This feature allows replication at layer 2.

• IGMP proxy : the proxy function replaces the source MAC@ indicated in a specific received message (for instance IGMP) by the source MAC@ of the receiving equipment.

• DHCP option 82 : allows the location of the equipment asking for an IP@ to be known (an equivalent of the CLID in ATM); see the data plane chapter.

To illustrate how the multicast tree is built in the Ethernet model, the example of broadcast TV can be described.

When a first end-user wants to connect to a TV channel, the first step will be :

1) successful boot-up of the set-top box (STB) and the RG using DHCP,

2) collection of the association table between the TV channels and the IP multicast addresses.


The IGMP request from this first user is terminated at the Edge Node level, where the PIM protocol is used at the NSP/ASP level to fetch this multicast stream if it was not yet available. The AN and the intervening switches between Access Node and Edge Node snoop this request and will forward this multicast stream on the corresponding ports. The tree towards this Access Node is thus built.

When a second end-user on the same Access Node wants also to visualize this channel, then the IGMP snooping process and stream replication is performed at the Access Node level.

When a third end-user, on another AN, is selecting the same channel, the IGMP snooping process is used at the aggregation switch level, and the replication stream is performed at this level. The multicast tree is now also built towards this other AN.
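The walkthrough above can be summarised in a small illustrative sketch of the group-membership table kept by a snooping node (AN or AS). This is a minimal Python model, not taken from the deliverable; class, method and port names are hypothetical.

    # Minimal sketch of IGMP-snooping-based replication in an AN or aggregation
    # switch: the node keeps {multicast group -> set of downstream ports} and
    # replicates a downstream multicast frame only onto ports with members.
    # Names are illustrative, not taken from the deliverable.

    class IgmpSnoopingNode:
        def __init__(self, uplink_port):
            self.uplink = uplink_port
            self.members = {}          # group IP -> set of downstream ports

        def on_igmp_report(self, port, group):
            """IGMP membership report snooped on a user-facing port."""
            first_in_group = group not in self.members
            self.members.setdefault(group, set()).add(port)
            # Only the first report per group needs to travel further upstream;
            # later joins are served by local replication (as in the walkthrough).
            if first_in_group:
                self.forward_upstream(port, group)

        def on_igmp_leave(self, port, group):
            ports = self.members.get(group, set())
            ports.discard(port)
            if not ports:
                self.members.pop(group, None)   # prune this branch of the tree

        def replicate_downstream(self, group, frame):
            """Called for a multicast frame arriving on the uplink."""
            return {p: frame for p in self.members.get(group, set())}

        def forward_upstream(self, port, group):
            print(f"join for {group} (first member on port {port}) sent towards {self.uplink}")


    # Example following the broadcast-TV walkthrough above:
    an = IgmpSnoopingNode(uplink_port="uplink-to-AS")
    an.on_igmp_report("line-1", "232.1.1.1")   # first user: join goes upstream
    an.on_igmp_report("line-2", "232.1.1.1")   # second user: replicated locally
    print(an.replicate_downstream("232.1.1.1", b"video-frame"))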

Multicast Features required in the nodes

• Terminal

The terminal needs to know the multicast group IP @ associated with the channel.

• Bridged RGW

The RGW can also contain an IGMP snooping functionality to recognise on which physical LAN port it should send a given multicast channel.

• Routed RGW incl. NAPT

The same remark applies; in addition, an IGMP proxy functionality is required to hide the private IP @ of the terminal in the residential network from the access network.

• AN

The AN contains IGMP snooping or proxy functionality for the replication of multicast streams. If snooping, it must keep a table with {user L2 parameters, user line, active multicast groups}. If proxying, it must keep the correct {S-VLAN, port} for upstream IGMP messages (for building the L2 duplication tree in the aggregation network). Additionally, the AN may perform authentication of the requesting user (via RADIUS).

• AS

In order to participate in building the multicast tree, the AS must perform IGMP snooping (keeping track of multicast S-VID, multicast MAC@ and associated port).

• EN with IpoE

The EN is the termination point for IGMP, so it should keep a table with {user IP @, requested multicast IP @, port (= S-VLAN)} for keeping track of the requests, and a table {requested multicast IP @, NSP} for requesting multicast streams from the NSP.

3.1.4 Summary

Ethernet Network Model connectivity

Two different connectivity modes are possible in the Ethernet network model: the intelligent bridging mode and the cross-connect mode.

The intelligent bridging mode in the aggregation network has several advantages for residential users:

• Simpler provisioning of VLANs: a single tag to be configured in the aggregation network (AN, EN, AS).


• The EN only has to interpret a single tag.

• Local peer-peer traffic at layer 2 is possible if allowed (for IPoE). Note that local peer-peer traffic can also be blocked if required.

• Local peer-peer traffic via EN is possible if required.

However, the cross-connect model could be advantageous in the specific situation of an AN aggregating several subtending nodes (e.g. VDSL remote units) which each serve a limited number of users. Using the cross-connect model locally between the AN and the subtending nodes can be an efficient way of segregating users (note that the AN works in bridge mode at the aggregation network side).

Business users cannot be handled in a pure bridging mode (L2 VPNs) and may carry their own VLANs in their traffic. Traffic from business users should be cross-connected transparently in the AN based on S-VLAN tags, and if more than 4094 business VPNs are to be supported it is recommended to use MPLS in the aggregation network (from the AS on).

Business users can thus be accommodated by an AN working in both modes: intelligent bridging for residential lines, cross-connect for business lines.

Recommendations for the intelligent bridging model:

• Use a VLAN per (EN,AN) in order to limit the broadcast domain per AN

• Use S-VLANs to be future-proof and to allow stacking with the VLANs generated by business users.

• If local peer-peer is to be offered, the number of subnets in the NAP should be limited as they must be configured in the CPEs.

• For multicasting traffic, it is recommended to apply stream replication in the AN by means of IGMP snooping or proxy, and to build corresponding multicast trees in the aggregation network by means of IGMP snooping in the AS. It is recommended that IGMP be terminated at the EN, the EN connecting to the multicast servers.

Requirements for correct operation of the intelligent bridging model:

• If redundancy is needed, foresee working and protection VLANs between AN and EN via provisioning.

• If redundancy of the EN is applied, it is recommended that the ENs support VRRP in order not to impact the CPEs at switch-over.

• If peer-peer traffic is to be routed via the EN, the AN and EN should have the needed mechanisms to force the traffic to the EN (ARP filter in AN + EN replies, or MAC FF, see §3.1.1.2), and the EN should be able to route traffic back on the same interface (logical LAN) even if both users are in the same subnet.


• It is assumed that a CPE can freely choose a provider that may assign it a private IP address, but only one at a time. If multiple NSPs/ASPs use overlapping private IP address spaces, there must be layer 2 separation of the traffic in the network: the EN must have dedicated VRs, and the CPE must indicate at layer 2 to which provider its traffic belongs (under this assumption the CPE itself does not need multiple VRs). However, unless contrary evidence is shown, the working assumption is that in realistic scenarios only one entity (the NAP) will assign private IP addresses.

• MAC address uniqueness: all MAC addresses must be unique per S-VLAN. This must be enforced by the network, and it is recommended to foresee NAP-based authentication including the MAC address.

VLAN options for the intelligent bridging model:

• Using S-VLANs for pre-provisioning bit pipes in the aggregation network, see the IMS description.

• Using priority-tagged VLANs between CPE and AN for upstream QoS classification (a frame-tagging sketch is given after this list).

• Using 802.1Q-tagged VLANs between CPE and AN is optional and for further study.
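As an illustration of the tagging options above, the following sketch (not part of the deliverable) packs the Ethernet tag headers with Python's struct module: an outer S-tag (TPID 0x88A8) as provisioned per {AN,EN} in the aggregation network, and an inner priority-tagged C-tag (TPID 0x8100, VID 0) as used between CPE and AN for upstream QoS classification. The S-VID and priority values are invented for the example.

    import struct

    # Sketch of 802.1ad stacking: outer S-tag (TPID 0x88A8) carrying the S-VID
    # provisioned per {AN,EN}, inner 802.1Q tag (TPID 0x8100) that is only
    # priority-tagged (VID 0) as recommended between CPE and AN.
    # This is an illustration, not an implementation from the deliverable.

    S_TAG_TPID = 0x88A8
    C_TAG_TPID = 0x8100

    def vlan_tag(tpid, pcp, vid, dei=0):
        """Return the 4-byte tag: TPID + (PCP | DEI | VID)."""
        tci = (pcp & 0x7) << 13 | (dei & 0x1) << 12 | (vid & 0xFFF)
        return struct.pack("!HH", tpid, tci)

    # Outer tag added by the AN: S-VID 101 identifies the {AN,EN} pipe, PCP 5
    # carries the QoS class; inner tag as sent by the CPE: priority 5, VID 0.
    s_tag = vlan_tag(S_TAG_TPID, pcp=5, vid=101)
    c_tag = vlan_tag(C_TAG_TPID, pcp=5, vid=0)

    print(s_tag.hex(), c_tag.hex())   # 88a8a065 8100a000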

Figure 3-15: Connectivity in the Ethernet NW model (intelligent bridging model). The figure shows RGWs (bridged, routed with NAPT, routed without NAPT, each with IGMP snooping) sending untagged or priority-tagged traffic (optionally 802.1Q-tagged, 1 C-VID per line) to the AN (802.1ad bridge with MAC FF or ARP agent, ARP filter, IGMP snooping/proxy) or via a remote unit (C-VLANs 1, 2, ...); an Ethernet (MPLS) aggregation network with S-VLAN-aware bridging ASes (IGMP snooping, S-VLAN tagged with 1 S-VID per {AN,EN}, optionally MPLS from the first AS); and ENs: a router for IPoE (VRs, IGMP termination) and a BRAS for IPoPPPoE (LAC or PTA, VRs, IGMP termination if PTA, RFC 3069/RFC 1027/ARP, L2TP) towards ISP/NSPs, the ASP, a corporate NW and CPNs over S-VLANs 1-4.


3.2 AAA architectures

This part focuses on the case of an Ethernet network, where DHCP is the logical choice instead of (legacy) PPP. As seen, DHCP must provide the same functionality as PPP provides now, so RADIUS/Diameter integration is a must, besides any other modification needed.

3.2.1 Control Plane

Three options were discussed in the MA2.5 document and also in the previous chapter. The final proposal is shown in Figure 2-31, but some points need to be fixed for the L2 network model.

In an L2 network model, VLANs would have to be used for L2 wholesale and L2 VPN services, so it is necessary to include the VLAN ID in the AAA and auto-configuration process.

This can be done following the model proposed by the IETF in RFC 3580 [25]; that model has been included in Figure 3-16, which is basically the same as Figure 2-31. In addition, the following standard RADIUS attributes must be included in the Access-Accept messages:

• Tunnel-Type (attribute #64), with the value “VLAN”.

• Tunnel-Medium-Type (attribute #65), with value “802”.

• Tunnel-Private-Group-ID (attribute #81), with the VLAN ID.

The VLAN ID can be obtained by the NSP AAA Server taking into account:

• NAS-IP (and NAS-Identifier?) attribute. These attributes can be used to identify the AN and the NAP domain.

• NSP’s EN that receives/forwards IP traffic to the end-user.

• End-user profile.

In case VLAN stacking were necessary, there would be a problem because in IETF RFC 3580 [25] the Tunnel-Private-Group-ID attribute carries a single VLAN ID, so only 4094 VLANs would be available. There would then be two options:

• An amendment of this RFC in order to include a Q-in-Q solution.

• Use of a RADIUS VSA to provide the second VLAN ID.

However, in the intelligent bridging mode only a single VLAN tag is required in the aggregation network.
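For illustration, the RFC 3580 attribute triple listed above could be encoded as follows. The attribute numbers and values are the standard ones (Tunnel-Type 64 = 13 for "VLAN", Tunnel-Medium-Type 65 = 6 for "802", Tunnel-Private-Group-ID 81 carrying the VLAN ID as a string); the hand-rolled encoding is only a sketch and does not come from the deliverable or from any particular RADIUS implementation.

    import struct

    # Sketch: encode the RFC 3580 tunnel attributes returned by the NSP AAA
    # server to assign a VLAN. Each RADIUS attribute is TLV-encoded as
    # (type, length, value); the numeric values below are the standard ones.

    TUNNEL_TYPE, TUNNEL_MEDIUM_TYPE, TUNNEL_PRIVATE_GROUP_ID = 64, 65, 81
    TUNNEL_TYPE_VLAN = 13          # "VLAN"
    TUNNEL_MEDIUM_802 = 6          # "802" (all IEEE 802 media)

    def attr(attr_type, value: bytes) -> bytes:
        return struct.pack("!BB", attr_type, len(value) + 2) + value

    def vlan_assignment(vlan_id: int) -> bytes:
        return (
            attr(TUNNEL_TYPE, struct.pack("!I", TUNNEL_TYPE_VLAN))
            + attr(TUNNEL_MEDIUM_TYPE, struct.pack("!I", TUNNEL_MEDIUM_802))
            + attr(TUNNEL_PRIVATE_GROUP_ID, str(vlan_id).encode())
        )

    print(vlan_assignment(2011).hex())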


Figure 3-16: One step configuration and AAA process for a L2 network model.

3.2.2 Open Issues

As seen, the evolution from PPP to DHCP seems logical, as future scenarios will be IP-based. DHCP therefore has to work with RADIUS in the same way PPP does, and that is not an easy task.

For instance, nomadism makes it difficult to use DHCP without any changes; including a “line-ID” attribute in the DHCP message is a possible solution. In the one-step model this problem can be solved by means of RADIUS, which provides through standard attributes all the information necessary for end-user line identification and nomadism control.
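The "line-ID" idea corresponds to the DHCP relay agent information option (option 82) already mentioned for the data plane: the AN, acting as relay, inserts a circuit-id identifying the physical line. A minimal sketch of building that option is given below; sub-option 1 (Agent Circuit ID) and sub-option 2 (Agent Remote ID) come from RFC 3046, while the "AN/slot/port" format of the circuit-id string is only an example.

    # Sketch: build the DHCP relay agent information option (option 82) that an
    # AN could append to an upstream DHCPDISCOVER/REQUEST to identify the line.
    # Sub-option 1 = Agent Circuit ID, sub-option 2 = Agent Remote ID (RFC 3046);
    # the circuit-id/remote-id contents are invented for illustration.

    def sub_option(code: int, value: bytes) -> bytes:
        return bytes([code, len(value)]) + value

    def option82(circuit_id: str, remote_id: str) -> bytes:
        payload = sub_option(1, circuit_id.encode()) + sub_option(2, remote_id.encode())
        return bytes([82, len(payload)]) + payload

    # Equivalent of the ATM CLID: identify AN, slot and port of the requesting user.
    print(option82("AN17 eth 1/2/3", "subscriber-0042").hex())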

3.3 QoS architectures

3.3.1 Mapping of the 3GPP/IMS architecture to the Ethernet model

The use of 3GPP/IMS in the access network allows the operator to do session-based billing and to dynamically allocate bandwidth with the required QoS when it is demanded by the customer or service provider, see section 2.4.3.

When IMS is used in the Ethernet model, QoS is achieved by controlling the customer’s access to a pre-provisioned pipe. This is method A from 2.4.3.4.1. The access control is performed by the combination of PDF, access node and edge node, see Figure 2-24. To have full control of the QoS, all traffic to and from a user should pass through the pre-provisioned pipe.


Using 802.1Q in Ethernet, pre-provisioned pipes can be defined in three ways:

- based on priority bits;

- based on C-VLAN id’s;

- based on both priority bits and C-VLANs.

For example, C-VLAN tagging can be used to differentiate between services from different providers or which need different QoS. Priority bits can be used in addition to C-VLAN tagging in order to create a finer QoS differentiation if desired.
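A pre-provisioned pipe can thus be selected from the 802.1Q header alone. The small sketch below shows such a classification on C-VID and priority bits; the pipe names and mappings are invented for illustration only.

    # Sketch: classify upstream frames into pre-provisioned pipes based on the
    # C-VLAN ID and/or the 802.1Q priority bits (PCP). The pipe names and the
    # example mappings are invented for illustration.

    PIPE_TABLE = {
        # (C-VID, PCP) -> pipe; None acts as a wildcard for the priority bits
        (10, 5):    "voice-pipe-ASP",     # provider A, high priority
        (10, None): "video-pipe-ASP",     # provider A, remaining traffic
        (20, None): "hsi-pipe-NSP",       # provider B, best effort
    }

    def select_pipe(c_vid: int, pcp: int) -> str:
        return PIPE_TABLE.get((c_vid, pcp)) or PIPE_TABLE.get((c_vid, None), "default-pipe")

    print(select_pipe(10, 5))   # voice-pipe-ASP
    print(select_pipe(10, 0))   # video-pipe-ASP
    print(select_pipe(99, 0))   # default-pipe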

Session Admission Control

In the Ethernet model it is proposed to use pre-provisioned pipes to implement QoS in the transport layer (see Figure 2-19). This is an implementation of model A from section 2.4.3.4.1. Since there is no bearer layer signalling, there is also no need for the authorization token that is used in pure 3GPP IMS to correlate application level signalling and bearer layer signalling. Nevertheless, in order for the EN and AN to control the data stream, the data stream has to be identified and correlated with the service request previously made by the CPE in the authorization phase. The Application Functions, which handle the service requests, and the PDF/RACS, which is responsible for the resource reservation in the EN (see also Figure 3-17 and Figure 3-18), are located behind the EN. This means that the CPE that has requested a service is identified based on layer 3+ information. The EN can use this information to identify the new data stream, but the AN cannot, because in the Ethernet model it was assumed that the AN is not IP aware. Having session admission control functionality in the AN based on L2 information will be rather complex, since L3 identifiers have to be translated into L2 identifiers. This could be done by the EN, which then provides this L2 information to the AN over the Go interface.

An extra complication is that at layer 2 it is not possible to distinguish between multiple sessions of the same CPE, which is possible at layer 3.

If L2 information is used to identify the CPE, the following parameters may be used:

• L2 parameters in the cross connect model (see Figure 3-17)

- C-VLAN tag (possibly in combination with priority bits)

- MAC address

- Physical port

• L2 parameters in the bridged model (see Figure 3-18)

- MAC address

- Physical port


If local P2P traffic is not transported to the EN or the NSP network but runs directly over the AN, the AN must support session admission control. If all traffic is transported to the EN, it may be considered not to control sessions in the AN, probably making the AN cheaper.
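The L3-to-L2 translation mentioned above (the EN providing L2 information to the AN over Go) could look roughly as follows. The deliverable leaves the mechanism open, so this is purely an illustrative sketch: the table contents, the gate format and the push function are assumptions.

    # Sketch: the EN resolves an authorised flow (identified at layer 3+) into
    # the L2 identifiers the AN can act on, and pushes a "gate" to the AN over
    # Go. Table contents, gate format and the push function are hypothetical.

    ARP_TABLE = {"10.1.1.42": "00:11:22:33:44:55"}               # user IP -> MAC
    SESSION_TABLE = {"00:11:22:33:44:55": {"physical_port": 7,   # MAC -> L2 context
                                           "c_vlan": 1001}}

    def build_an_gate(authorised_flow: dict) -> dict:
        """Translate the PDF/RACS decision (L3) into an L2 gate for the AN."""
        mac = ARP_TABLE[authorised_flow["user_ip"]]
        l2 = SESSION_TABLE[mac]
        return {
            "mac": mac,
            "port": l2["physical_port"],
            "c_vlan": l2.get("c_vlan"),            # only usable in the cross-connect model
            "pcp": authorised_flow["priority"],    # per CPE only: sessions are not separable at L2
            "bandwidth_kbps": authorised_flow["bandwidth_kbps"],
        }

    def push_over_go(an_id: str, gate: dict):
        print(f"Go -> {an_id}: install gate {gate}")

    flow = {"user_ip": "10.1.1.42", "priority": 5, "bandwidth_kbps": 4000}
    push_over_go("AN-17", build_an_gate(flow))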

Figure 3-17: IMS in a cross-connected Ethernet model. The figure shows end user A (telephone, computer, modem) connected over a user VLAN through the Access Node, regional network and Edge Node to the NSP/ASP network; the PDF/RACS is connected via Gq to the application functions and via Go (signalling interfaces) to the Edge Node and the Access Node, with the data stream carried over the physical connections.

Bridged model

The data stream can be identified based on port (in the Access Node only) or MAC address (in both Access Node and Edge Node) in combination with the priority bits.

Figure 3-18: IMS in a bridged Ethernet model. As in Figure 3-17, but with a service VLAN instead of a user VLAN between Access Node and Edge Node; the PDF/RACS is connected via Gq to the application functions and via Go to the Edge Node.

Signalling channel

Signalling traffic may or may not be conveyed in the same pipe as the data traffic. It is important to keep the delay within certain limits, and packet loss should be avoided at all times.


3.4 Security

3.4.1 Scope

Security issues are related to threats posed by malevolent users to network elements, to the network(s) as a whole, and to other users.

Threats will be posed in any network model, however the level and propagation breadth of the threats depend on the model considered. The purpose of this chapter is to provide a reference list of security mechanisms that must be considered in the Ethernet network model. This includes secure authentication of the user as a requirement. The relationship between authentication and auto-configuration is elaborated in the chapter on auto-configuration.

3.4.2 Generalities

We can classify the security mechanisms into three classes:

• “basic” : needed for the protection of the Network Elements (especially the AN).

• “broad” : needed for the protection of the access network and infrastructure

• “extra” : needed for the protection of other users and of other networks (ASPs)

Of the many types of attack, the two most relevant ones are DoS and masquerading :

- DoS (incl. service degradation) on network, on network elements and on other users

- Masquerading of Network elements and of other users in order to divert traffic or gain illegal access

Security functions are to be put as close as possible to the source, i.e. in the AN. In the Ethernet network model the access and aggregation networks are connected at layer 2. It is important for the AN to include extensive layer 2 security mechanisms. Moreover the AN can also inspect layer 3 parameters (IGMP snooping with IPoE, IP addresses of the connected users in auto-config messages). So the AN should also include some layer 3-aware security mechanisms for IPoE traffic.

Secure authentication is a general requirement to authenticate a user before granting him/her access to a network or a service. Such authentication is needed prior to the broad and extra protection measures that check for wrong user parameters :

- Access authentication is needed prior to checking if a user can be connected to a particular line

- User authentication is needed prior to checking if a user (MAC @) uses a correct IP @.

We have to make a distinction between IPoPPPoE traffic and IPoE traffic, as the formats and control messages are different.

3.4.3 Security threats & mechanisms with IPoPPPoE traffic

Traffic generated by the CPE when using PPP as the auto-configuration protocol follows an allowed data format, and several protocols are detected by the AN:

- data plane : traffic is IPoPPPoE, with Ethernet being untagged or priority tagged or single tagged (802.1Q).


- control plane : PPP, 802.1x, ARP (Note : ARP requests are only to be sent in downstream)

- management plane : PHY OAM (ILMI, E-LMI), Ethernet OAM (802.3ah, 802.1AB, 802.1ag)

The threats can be summarized as follows:

• DoS on data plane : the user sends bogus traffic to the network, overwhelming the AN or the network. This traffic can use valid or invalid MAC @s and send unicast or broadcast traffic. With PPPoE the destination MAC@ can be checked with a list of allowed values.

• DoS on control plane : the user sends bogus control messages to network elements, disrupting their normal behaviour and overwhelming the network. This can be an overload of allowed protocol messages (e.g. broadcasts) or the generation of non-allowed messages.

• Masquerading : the user modifies his identity, either by sending data packets with spoofed MAC @ or IP @, or via unsolicited ARP responses with false information.

3.4.4 Security threats & mechanisms with IPoE traffic

Traffic generated by the CPE when using DHCP follows an allowed format, and the following protocols are detected by the AN:

- Ethernet : untagged - priority tagged - single tagged

- data plane : all traffic IPoE (distinction IPv4 / IPv6 could be made depending on expected IP version)

- control plane : DHCP, 802.1x, ARP, IGMP

- management plane : PHY OAM (ILMI, E-LMI), Ethernet OAM (802.3ah, 802.1AB, 802.1ag)

The threats themselves can be summarized as follows:

• DoS on data plane : the user sends bogus traffic to the network, overwhelming the AN or the network. This traffic can use valid or invalid MAC @s and send unicast or broadcast traffic.

• DoS on control plane : the user sends bogus control messages to network elements, disrupting their normal behaviour and overwhelming the network. This can be an overload of allowed protocol messages (e.g. broadcasts) or the generation of non-allowed messages.

• Masquerading : the user modifies his identity, either by sending data packets with spoofed MAC @ or IP @, or via unsolicited ARP responses with false information.


3.4.5 Overview of security mechanisms

The security mechanisms that can be used to address the different threats listed above are summarized in Table 3-6.

Basic (protection of the network elements, especially the AN) - generic mechanisms:

- Check on upstream traffic
- Control of the number of source MAC @s per line
- Protected provisioning of the L2 addresses of the EN
- Control of the number of protocol broadcasts per second
- Static entries in the AN for the EN addresses
- Filtering of non-allowed protocols on Ethertype + destination MAC @
- Separate secure management access on the AN

Broad (protection of the access network and infrastructure):

- Check of the destination MAC @ (PPP-specific: check against a list of allowed MAC D@; DHCP-specific: only possible if there is no local peer-peer traffic)
- Filtering of data multicasts/broadcasts (PPP-specific: block data multicasts and broadcasts; DHCP-specific: filter data multicasts via IGMP snooping, block broadcasts)
- Limit on the number of protocol broadcasts per second
- Filtering of non-allowed protocols on Ethertype + MAC D@
- Anti-spoofing filter L2-L3 (PPP-specific: snoop PPPoE and PPP messages; DHCP-specific: snoop DHCP messages, static entries in the AN for the EN addresses). Note: must be preceded by secure user authentication.
- Anti-spoofing filter L2-line: block duplicate MAC @s. Note: must be preceded by secure access authentication, which first ensures user access via valid lines.

Extra (protection of other users and of other networks):

- Anti-spoofing filter L2-L3: block spoofed packets, filter ARP replies (PPP-specific: snoop PPPoE and PPP messages; DHCP-specific: snoop DHCP messages, static entries in the AN for the EN addresses). Note: must be preceded by secure user authentication.

Table 3-6: Security measures for the Ethernet Network Model
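As an illustration of the DHCP-specific anti-spoofing filters in Table 3-6, the sketch below (not from the deliverable; data structures and names are invented) shows how an AN could derive a {line, MAC @, IP @} binding table from snooped DHCP ACKs and drop upstream frames that do not match it.

    # Sketch of the L2-line / L2-L3 anti-spoofing filters of Table 3-6 for IPoE:
    # bindings are learned by DHCP snooping and enforced on upstream traffic.
    # Data structures and function names are illustrative only.

    class AntiSpoofingFilter:
        def __init__(self):
            self.bindings = {}                 # line -> {mac: ip}

        def on_dhcp_ack(self, line, mac, ip):
            """DHCP snooping: remember which MAC/IP was assigned on which line."""
            for other_line, entries in self.bindings.items():
                if mac in entries and other_line != line:
                    return False               # duplicate MAC @ on another line: reject
            self.bindings.setdefault(line, {})[mac] = ip
            return True

        def check_upstream(self, line, src_mac, src_ip):
            """Drop frames whose source MAC/IP do not match the snooped binding."""
            return self.bindings.get(line, {}).get(src_mac) == src_ip


    f = AntiSpoofingFilter()
    f.on_dhcp_ack("line-7", "00:11:22:33:44:55", "10.0.0.42")
    print(f.check_upstream("line-7", "00:11:22:33:44:55", "10.0.0.42"))  # True
    print(f.check_upstream("line-7", "00:11:22:33:44:55", "10.0.0.99"))  # False (spoofed IP @)
    print(f.check_upstream("line-8", "00:11:22:33:44:55", "10.0.0.42"))  # False (wrong line)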


4 IP NETWORK MODEL

4.1 Overview

4.1.1 Network Scenario

Figure 4-1: Network Scenario for the IP Models. The figure shows CPNs with bridged or routed (IPv4/IPv6) RGWs connected to the NAP AN, which contains an optional Ethernet part for IPoPPPoE and an IP router/forwarder for IPoE (IPv4/IPv6); an Ethernet or IP (optionally MPLS) aggregation network with Ethernet switches (S-VLAN aware or 802.1Q) and routers (IPv4/IPv6), an access EN and a service EN; an optional RNP; and a BRAS or Edge Router with IP termination (IPv4 or IPv6) towards the NSP/ISPs and the ASP.

Figure 4-1 shows the general network scenario. The aggregation network between the AM and the NSP edge node can be either an Ethernet or a routed IP network or a combination of both. Also the aggregation network may deploy MPLS, either over an Ethernet transport or another transport like PoS. In any case the NSP edge node will be an IP forwarder.

Note that even if a routed IP network is used in the aggregation, the NAP can still provide Layer 2 services between the AM and the reference point A10, e.g. using L2TP or MPLS.


4.1.2 IP Network Model Characteristics

Figure 4-2: IP network model characteristics

The IP awareness and functions can be brought closer to the end-user by having aggregation nodes acting as layer 3 forwarders in the aggregation network. Traffic flows can then be processed at IP level for QoS, security (there is now a clear separation at layer 2 between the aggregated users and the rest of the aggregation network), multicast criteria and other service policies. Peer-peer traffic can be routed at that node, which is more efficient than via an EN as in a pure layer 2 model. Note that the IP forwarder can be based on IPv4, or IPv6, or a combination of both.

An IP network model is characterized by IP forwarders which are deployed in the access and aggregation network and which completely terminate Layer 2 between the user-side and network-side ports, while the IP traffic is forwarded between the ports. In a multi-edge network, IP forwarding decisions cannot be derived from the IP destination address alone (classical IP routing) but must also take into account additional information such as the selected service provider and the selected service. This is because the same IP host may be reachable via alternative network service providers and with different service profiles, but always using the same IP address.


An IP forwarder can be either the access multiplexer itself, or it can be a second or third level aggregation node. In the latter case the access nodes operate at layer 2, and the cross-connect mode (point-to-point C-VLAN) is advantageously used to connect remote residential networks at Layer 2 with the IP forwarders. The C-VLAN may either identify the access line or a single VLAN being used for service multiplexing on the access line. In any case ambiguities of C-VLANs must be avoided, by appropriate provisioning and auto-configuration or by C-VLAN translation in the access multiplexer.

Due to the C-VLAN scalability restrictions, IP forwarders will usually reside on the Access Multiplexer. However it may make sense to use the cross-connect mode for collecting traffic from small remote units.

The IP forwarders terminate Layer 2 with respect to all residential IPoE traffic. This means that the Layer 2 payload is completely re-encapsulated, using the MAC address(es) of the IP forwarder as source address. Neither are MAC addresses from the residential networks propagated further into the aggregation network, nor are MAC addresses from the aggregation network propagated to the residential networks.

Two main options exist for the NAP to forward the IP traffic to the peering points with the NSP/ASP.

• The NAP transports the IP packets over an L2 service between the IP forwarders and the Edge Node. In this case the IP forwarder is not required to operate as an IP router; in particular, no routing protocols are run between the IP forwarders and the Edge Node, i.e. the border of the IP network stays at the Edge Node. If the peering point to an NSP is located at the Edge Node, IP packets can be directly forwarded from the L2 service to the NSP. If the peering point is higher up in the network hierarchy, either an L2VPN or an L3VPN service can be used to transport or route the IP packets to the peering point with the selected NSP/ASP.

• The NAP routes the IP traffic between the IP forwarders and the peering points. In this case the IP network is extended down to the IP forwarders, i.e. the border of the IP network is now located at the IP forwarders, and IP traffic between the IP forwarders and the peering points is routed. The population of the routing tables in the IP forwarders and the related ENs is for further study, especially in the case of IP wholesale to multiple NSPs (e.g. use of IP routing protocols).

In parallel to the handling of IPoE traffic it may be necessary to support PPPoE traffic in the access and aggregation network. The different options are discussed in more detail further on through some typical use cases.

4.1.3 Use Cases

Many options exist for IP network models, e.g.


• models with rather complete IP control plane in the IP forwarder, including full IGP, LAC, L2TP and PTA support

• models with a reduced IP control plane in the IP forwarder, e.g. only static routing or policy based forwarding, no IGP, LAC, L2TP or PTA

• Models with dynamic on demand service selection or with statically configured services

• models with different IP tunnelling options from the IP forwarder to the NSP EN, e.g. VLAN, MPLS, L2TP, IPv6 tunnelling ([9] RFC 2473)

If a NAP is required to support MEF type Business Services, there may be the need to implement in parallel an Ethernet Network Model for supporting these Layer 2 services. Also the PPPoE based services to a remote BRAS can be supported through a hybrid “Ethernet and IP” network model.

Altogether there exists a rather large solution space, which cannot be covered in this document comprehensively. It is therefore the approach of this document to describe a smaller number of “use cases” covering the main basic networking options.

Since a network can handle PPPoE and IPoE traffic rather independently, separate use cases are introduced for both traffic types.

4.2 Use Cases For PPPoE Handling

4.2.1 PPP Use Case 1: L2 switching of PPPoE traffic

The most straightforward way to deal with (IPo)PPPoE traffic is to keep the PPP termination point at the same place as in the Ethernet Network Model. This implies that the AN should switch this traffic transparently at layer 2 towards the EN, where it will then either be terminated (IP wholesale) or tunnelled towards the NSP (PPP wholesale via L2TP).

Figure 4-3: L2 switching of IPoPPPoE traffic, combined with IPoE traffic. The figure shows IPoPPPoE traffic switched over the Ethernet access and Ethernet/MPLS aggregation network to a BRAS EN, and IPoE (IPv4 or IPv6) traffic to an EN, towards NSP/ISP 0, NSP/ISP 1, NSP/ISP 2 and the ASP.


4.2.1.1 Supported wholesale/retail models

By merely switching the IPoPPPoE traffic transparently at the AN, the same wholesale and retail models can be supported as in the Ethernet NW model:

• PPP wholesale to 3rd party NSP/ISP for HSI + other services

• IP wholesale to 3rd party NSP/ISP for HSI + other services

• IP connection for retail users to the associated NSP/ISP (NSP/ISP 0 on the figure) for HSI + other services

Note that a possible migration path from PPP to DHCP could be to start with the existing PPP connectivity for wholesale users and retail users (existing users having only HSI), then give new retail users connectivity via DHCP for HSI, and gradually switch existing retail users from PPP to DHCP. The advantage is that the installed ENs (BRASes) can be kept for wholesale traffic only (typically lower volume than retail traffic), without the need to extend their capacity for a growing number of retail users.

4.2.1.2 Data plane requirements

• Client-server connection

The AN first has to recognise the PPPoE payload by the Ethertypes 0x8863 (discovery) and 0x8864 (session). These packets must be switched at layer 2, by MAC-based forwarding and applying appropriate VLAN tagging. This is similar to the situation of the intelligent bridged Ethernet network model.

The aggregation network must consist of L2 aggregation switches (at least 802.1Q bridges, preferably S-VLAN-aware bridges) in order to allow PPP transport to the ENs. For residential users, a single VLAN tag is used per {AN,EN} or per {AN,EN,service type}. For business users a single VLAN tag is used per VPN, possibly combined with MPLS.

The EN has the BRAS functionalities of PPPoE server, PPP server and LAC. The EN also has IP service classification (based on DSCP) and L3 routing, probably with multiple VR instances. It runs an EGP routing protocol towards the RNP or NSP/ISP.

• Peer-peer connection

The only way to perform peer-peer is via the EN (IP wholesale), or even deeper via the NSP network (PPP wholesale).


4.2.2 PPP Use Case 2: PPPoE relay of PPP traffic

Figure 4-4: PPPoE relay

One important advantage of an IP network model over a switched network scenario is the increased security against Ethernet attacks like MAC address flooding, ARP attacks, MAC address spoofing, etc. This advantage is largely lost if PPPoE traffic is switched at layer 2 as described in the use case above. PPPoE relay is an alternative which avoids this disadvantage by allowing Layer 2 termination also for PPPoE traffic, while still forwarding the PPPoE payload between a PPPoE client and a remote PPPoE server. The basic principle is shown by the example in Figure 4-4. In this example Layer 2 is terminated by the access multiplexer also for PPPoE traffic. It is assumed that the access multiplexer has a user side MAC address M2 and a network side MAC address M3. Other realizations may also use the same MAC address M2=M3 on both sides. With respect to the PPPoE client the PPPoE relay agent behaves like a PPPoE server; with respect to the PPPoE server it behaves like a PPPoE client. If the PPPoE client wants to set up a PPPoE session it broadcasts a PADI frame on the local LAN segment. The frame is intercepted by the AN, which selects a locally unique relay-session-id and inserts it as a PPPoE tag into another PADI message which it broadcasts into the L2 network segment. This PADI frame uses the MAC address M3 as source MAC address. The relay-session-id is needed for session identification, since there may be multiple parallel PPPoE sessions in the discovery stage at the network side and PPPoE session IDs are not assigned until the PADS frame. The relay-session-id tag is defined in RFC 2516 ([13]).
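The discovery-stage behaviour just described can be sketched as follows. The PADI code and the Relay-Session-Id tag type 0x0110 come from RFC 2516; the relay's data structures and helper names are invented for illustration and are not from the deliverable.

    import struct

    # Sketch of the PPPoE relay agent's handling of an upstream PADI (RFC 2516):
    # pick a locally unique Relay-Session-Id, append it as tag 0x0110 and re-send
    # the PADI with the relay's network-side MAC (M3) as source. Illustrative only.

    RELAY_SESSION_ID_TAG = 0x0110
    PADI_CODE = 0x09

    def add_relay_session_id(padi_payload: bytes, relay_sid: int) -> bytes:
        ver_type, code, session_id, length = struct.unpack("!BBHH", padi_payload[:6])
        assert code == PADI_CODE and session_id == 0
        tag = struct.pack("!HHH", RELAY_SESSION_ID_TAG, 2, relay_sid)
        tags = padi_payload[6:6 + length] + tag
        return struct.pack("!BBHH", ver_type, code, 0, len(tags)) + tags

    class PppoeRelay:
        def __init__(self, network_mac):
            self.network_mac = network_mac      # M3 in Figure 4-4
            self.pending = {}                   # relay_sid -> client MAC
            self.next_sid = 1

        def on_padi(self, client_mac, padi_payload):
            sid = self.next_sid
            self.next_sid += 1
            self.pending[sid] = client_mac      # remember the user-side endpoint
            relayed = add_relay_session_id(padi_payload, sid)
            # broadcast on the network segment with the relay's own source MAC
            return ("ff:ff:ff:ff:ff:ff", self.network_mac, relayed)

    relay = PppoeRelay(network_mac="00:aa:bb:cc:dd:03")
    empty_padi = struct.pack("!BBHH", 0x11, PADI_CODE, 0, 0)   # PADI with no tags
    print(relay.on_padi("00:11:22:33:44:01", empty_padi))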


The PPPoE relay agent keeps state of the PPPoE sessions on the user and the network sides and, for established PPPoE sessions, installs a forwarding rule in the data plane which forwards the PPPoE payload based on the assigned PPPoE session IDs and the MAC addresses. Additionally the PPPoE relay agent may also assign the PPPoE session on the network side to a service or service provider specific S-VLAN, i.e. an Ethernet service connection. This can be done by evaluating PPPoE service and/or AC-name tags. Thus NSP-specific service level agreements can be implemented by the NAP also for PPPoE traffic.

4.2.3 PPP Use Case 3: LAC / PTA in the IP Forwarder

4.2.3.1 Principle

A more radical approach to IPoPPPoE traffic is to fully process the PPP sessions at the IP forwarder, either by L2TP tunnelling towards an EN (IP FW is then a LAC, for PPP wholesaling), or by terminating and aggregating the PPP sessions (IP FW then performs PTA, for IP connectivity). There is a full separation at L2 between users and aggregation network for this traffic.

Figure 4-5: IPoPPPoE traffic handled in the IP forwarder (LAC/PTA), combined with IPoE traffic. The figure shows optional L2TP tunnels over the Ethernet / IP / MPLS aggregation network towards an optional EN (L2TS or BRAS), and IPv4 or IPv6 traffic towards the EN, NSP/ISP 0, NSP/ISP 1, NSP/ISP 2 and the ASP.

4.2.3.2 Supported wholesale/retail models

The same models can be supported:

• Tunnelled PPP (L2TP) : PPP wholesale to 3rd party NSP/ISP for HSI + other services

• Terminated PPP : IP wholesale to 3rd party NSP/ISP for HSI + other services

• Terminated PPP : IP connectivity for retail users to associated NSP/ISP for HSI + other services.

4.2.3.3 Data plane requirements


• Client-server connections

The AN recognises upstream PPPoE traffic by the Ethertype and terminates the PPPoE sessions (PPPoE server). There are then two options:

• The AN first performs the LCP phase of the PPP sessions. The PPP is then encapsulated and tunnelled to the EN via L2TP with session aggregation per tunnel. The AN is a LAC.

• The AN fully terminates the PPP session and sends the IP packets to the IP forwarder. Both IPoPPPoE and IPoE traffic can then be handled in the same way (see section 4.3). The AN performs PTA.

Either way, the AN now has to perform some BRAS functionalities, which is a heavier load on the platform than in use cases 1 and 2.

The aggregation network can consist of L2 switches, or L3 routers, or a mixture of both. MPLS can optionally be used for coping with business users.

The EN can be an ER when the AN performs PTA, or it can be optional in the case of L2TP tunnelling, acting as a L2TP Tunnel Switch (L2TS) for aggregating tunnels from different ANs, or a BRAS. The EN also has IP service classification (based on DSCP) and L3 routing, probably with multiple VR instances. It runs an EGP routing protocol towards the RNP or NSP/ISP. It may need an IGP protocol towards the ANs (acting now as IP forwarding point for the IPoPPPoE traffic).

• Peer-peer connection

Local peer-peer traffic is now possible in the case of PTA, and can be controlled at L3 at the AN.

Peer-peer traffic via the EN is also possible, but when L2TP is not used another way of tunnelling is required; for simplicity it is then recommended to have an L2 aggregation network between the ANs and the ENs.


4.3 Use Cases For IPoE Handling

4.3.1 NAP provides IP transport service

4.3.1.1 Basic Network Scenario

Figure 4-6: Basic scenario with NAP providing IP and PPP transport services

Figure 4-6 shows the basic network scenario for the switched IP transport case. Here the NAP transports IP packets from the IP forwarder, in the above example the access multiplexer, to the appropriate ISP/NSP/ASP. To do so, the NAP installs appropriate service connections between the IP forwarders and the peering points to the service providers.

The concept of a service connection was introduced in DA2.1 ([1]). A service connection is administratively set up by the NAP between one or more access multiplexers and the peering points to an NSP or ASP. Service connections may be implemented e.g. with connectionless VLAN means or by connection-oriented means like MPLS LSPs or IP transport tunnels.

A service connection represents a dedicated network resource by which the NAP can implement an SLA for a service provider. E.g. the NAP can provide point to multipoint service connections for an ISP with an average bandwidth of 0.4 Mbps per subscriber port on the DSLAM uplinks. In another example the NAP provides for a video ASP a multicast capable service connection which guarantees delay and loss limits for a certain amount of subscriber traffic. IP forwarders must be able to bind IP sessions of users to the appropriate service connections.


4.3.1.2 Influence on the business/wholesale model

The main advantage of this use case, as opposed to the routed IP use case, is that it allows for a strict separation between the NAP role and the network service provider role. This is because IP traffic is not routed based on IP addresses within the NAP network, so there is no need to run routing protocols between the NAP and the NSP networks. Even un-coordinated IP addressing between the networks can be supported: the ISP/NSP can independently assign IP subnets to the logical interfaces of the service connections at the peering points. These subnets do not have to be coordinated, neither with the NAP nor with any other NSP/ISP.

Based on the IP transport model the NAP can on the one hand sell the basic broadband access line service to the end customers, and on the other hand offer IP transport wholesale with guaranteed SLAs to service providers. On this basis the service providers sell their specific IP services and applications to their service customers.

Note also that in this model a service user does not necessarily need a business relationship with the NAP, which is especially useful for supporting nomadic users in wireless access networks.

4.3.1.3 Data plane for IP transport service

Figure 4-7: Data plane example for IP transport service

Using the simple example of Figure 4-7 this section will explain the data plane for the current use case. In this example the NAP has provided two IP service connections for supporting two IP services from different network service providers with appropriate SLAs.


In the figure’s snapshot CPN a is using the IP service from NSP 1, CPN b is using the IP service from NSP 2 and CPN c is simultaneously using both services. For this purpose CPN c has two different VLANs in the CPN and is using service multiplexing on the access line to the NAP with C-VLANs C1001 and C1002. There are also two virtual router instances in the CPN, which in this example are assumed to have two different MAC addresses M3 and M1. Doing service multiplexing by C-VLAN is not a MUST for the described IP network option. If overlapping IP address spaces in the CPN can be excluded, another option is to use the source IP address for service multiplexing on the access line. Different source MAC addresses could also be used for this purpose.

CPG routers get the IP addresses Ia1, Ib1, Ic1, Ic2 and the appropriate default gateway addresses Ir1 and Ir2 via DHCP auto-configuration.

The Ethernet layer of the access lines is terminated in the IP forwarders. In the example the upper AM has a MAC address M5 on the user side which is used by the CPNs as destination MAC address for all upstream traffic. This requires an ARP proxy functionality at the AN (see 4.3.1.7). On the other hand all downstream traffic from the AM to the CPNs has M5 as source MAC address.

On the network side the NAP has provided two IP service connections. In this example it is assumed that the service connections are provided through two service VLANs, S2011 and S2022. The service connections are terminated in the upper AM with MAC address M6.

On the edge nodes the service connections are terminated with MAC addresses M7 and M8, respectively.

Table 4-1: Binding of IP sessions to service connections in the IP forwarder

The IP forwarder binds IP sessions on the CPN side to the network side IP service connections based on the information shown in Table 4-1.

IP sessions are characterized by the MAC address and associated IP address in the CPN, and optionally also by a C-VLAN in case service multiplexing is used. Optionally, additional data like DSCP code points, .1p bits or UDP ports may be used to identify sub-flows within a single IP session.


In this example IP service connections are characterized by the service VLAN and the MAC address of the edge node. Optional data may include DSCP or .1p code points or specific service profiles to apply.

Table 4-2: Session and service aware IP forwarding (user to network forwarding)

If a frame arrives from a user at the IP forwarder, the forwarder looks up a matching session entry from the data depicted with yellow background in Table 4-2. If no match is found the frame is silently discarded, thus preventing e.g. IP address spoofing. If a match is found and the frame is untagged, the appropriate S-VLAN of the service connection is added; if the frame is tagged, the C-VLAN tag is replaced by the appropriate S-VLAN tag. The MAC addresses are replaced with the MAC addresses of the service connection, M6 and M7 (or M8).

If peer-to-peer traffic is to be routed locally by the IP forwarder, the IP forwarder must additionally check whether the destination IP address matches a local session IP address bound to the same service connection. If a session entry is found the IP packet can be forwarded locally to the appropriate destination session. If overlapping IP addresses across multiple service connections can be excluded, local forwarding of peer-to-peer traffic across different service connections is also possible. Otherwise forwarding must only occur between sessions bound to the same service connection.


Table 4-3: Session and service aware IP forwarding (network to user forwarding)

If a frame arrives from the network at the IP forwarder, the IP forwarder looks up a matching entry based on the S-VLAN and the destination IP address, which must match that of the user (fields with yellow background in Table 4-3). If a match is found the S-VLAN tag is removed and a C-VLAN tag is added if required for the user’s session. The MAC addresses of the service connection are replaced by the MAC addresses of the session.

Note that these data plane forwarding rules also support overlapping IP address spaces between IP services on different service connections.
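The forwarding behaviour of Tables 4-2 and 4-3 can be summarised in the following sketch. It is illustrative only: the table layout and field names are assumptions based on the description above, and the MAC/VLAN/IP placeholders follow the example of Figure 4-7.

    # Sketch of session- and service-aware forwarding in the IP forwarder, for
    # the example of Figure 4-7 (CPN c, C-VLANs C1001/C1002 bound to S2011/S2022).
    # The dictionaries stand in for Tables 4-1/4-2/4-3; names are illustrative.

    SESSIONS = [   # user side: {MAC, IP, optional C-VLAN} -> service connection
        {"mac": "M3", "ip": "Ic1", "c_vlan": 1001, "s_vlan": 2011, "edge_mac": "M7"},
        {"mac": "M1", "ip": "Ic2", "c_vlan": 1002, "s_vlan": 2022, "edge_mac": "M8"},
    ]
    FORWARDER_USER_MAC, FORWARDER_NET_MAC = "M5", "M6"

    def upstream(frame):
        """User -> network: match the session, else silently discard (anti-spoofing)."""
        for s in SESSIONS:
            if (frame["src_mac"], frame["src_ip"]) == (s["mac"], s["ip"]) and \
               frame.get("c_vlan") in (None, s["c_vlan"]):
                return {**frame, "c_vlan": None, "s_vlan": s["s_vlan"],
                        "src_mac": FORWARDER_NET_MAC, "dst_mac": s["edge_mac"]}
        return None   # no match: drop

    def downstream(frame):
        """Network -> user: match on S-VLAN + destination IP, restore the C-VLAN."""
        for s in SESSIONS:
            if frame["s_vlan"] == s["s_vlan"] and frame["dst_ip"] == s["ip"]:
                return {**frame, "s_vlan": None, "c_vlan": s["c_vlan"],
                        "src_mac": FORWARDER_USER_MAC, "dst_mac": s["mac"]}
        return None

    print(upstream({"src_mac": "M3", "src_ip": "Ic1", "c_vlan": 1001,
                    "dst_ip": "Ir1", "dst_mac": FORWARDER_USER_MAC}))
    print(downstream({"s_vlan": 2011, "dst_ip": "Ic1",
                      "src_mac": "M7", "dst_mac": FORWARDER_NET_MAC}))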

4.3.1.4 Statically provisioned service selection

Figure 4-8: Statically provisioned service selection


A straightforward way of service selection is static provisioning, as shown in Figure 4-8. If an end customer subscribes to a specific service, the NAP assigns the appropriate service connection to the access port. If service multiplexing is used, different service connections can be assigned to different C-VLANs of the CPN.

Besides the lack of dynamic service selection, the main disadvantage of this approach is that subscription data must be administered in the distributed IP forwarders.

4.3.1.5 Dynamic 802.1x based service selection with RADIUS proxy chaining

Figure 4-9: 802.1x based service selection with RADIUS proxy chaining

A much more flexible approach to service selection is based on 802.1x and RADIUS proxy chaining as depicted in Figure 4-9. This example refers to CPN c of the network scenario of Figure 4-7.

Since CPN c uses service multiplexing, CPN c first selects a C-VLAN (C1001) which is so far unused on this access line. As an 802.1x supplicant, CPN c sends an EAPOL-Start message with the C-VLAN tag C1001 to the IP forwarder. The IP forwarder acts as 802.1x authenticator and handles the C-VLAN C1001 on access port c as a logical port which must be authenticated. Note that C-VLAN tags have per-access-port significance and can therefore be re-used on each access port, avoiding additional provisioning effort.


After the EAP Request is received, the client selects the service (or service package) and service provider by using a fully qualified domain name like “TriplePlay.ProviderXYZ.net” and sends it with its user credentials to the IP forwarder. Service request and user credentials are forwarded via RADIUS and RADIUS proxy chaining ([14]) to the RADIUS server of the selected service provider, which acts as the 802.1x authentication server. The RADIUS server checks the credentials and sends a response with a service profile S1 (RADIUS attributes) back through the RADIUS proxy chain to the RADIUS client in the IP forwarder. A RADIUS proxy in the NAP network will collect the session and accounting data.

After successful authentication the IP forwarder binds the session to the selected service by setting up the appropriate forwarding rules in the data plane. In this example C-VLAN C1001 on access port c is bound to service VLAN S2011. In addition to the basic binding information, subscriber- and/or access-specific service profiles, e.g. policers, can be installed. Note that so far no IP address is assigned to the session.
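This binding step can be sketched as follows (illustrative only): the realm of the network access identifier selects the next RADIUS hop in the proxy chain, and a returned service profile is turned into the {access port, C-VLAN} to {S-VLAN, policer} binding described above. The proxy table, profile fields and function names are assumptions, and no real RADIUS library is used.

    # Sketch of the 802.1x/RADIUS-driven binding in the IP forwarder: the realm
    # of the NAI selects the proxy target, and a successful Access-Accept
    # installs the data plane binding for the (port, C-VLAN) logical port.
    # All names and values are illustrative.

    REALM_TO_PROXY = {"ProviderXYZ.net": "radius-proxy.nap.example"}   # assumed config

    def next_radius_hop(nai: str) -> str:
        realm = nai.split("@", 1)[1]                    # e.g. "TriplePlay.ProviderXYZ.net"
        domain = ".".join(realm.split(".")[-2:])        # strip the service prefix
        return REALM_TO_PROXY[domain]

    BINDINGS = {}   # (access_port, c_vlan) -> {s_vlan, policer, ip}

    def on_access_accept(access_port, c_vlan, service_profile):
        """Install the forwarding rule once the provider's AAA server accepts."""
        BINDINGS[(access_port, c_vlan)] = {
            "s_vlan": service_profile["s_vlan"],         # e.g. S2011
            "policer_kbps": service_profile["policer_kbps"],
            "ip": None,                                  # assigned later via DHCP
        }

    print(next_radius_hop("alice@TriplePlay.ProviderXYZ.net"))
    on_access_accept(access_port="c", c_vlan=1001,
                     service_profile={"s_vlan": 2011, "policer_kbps": 6000})
    print(BINDINGS)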

There are several important advantages of this 802.1x based service selection and authentication approach:

• Services can be selected dynamically by the CPN and service profiles can be installed on-demand in the IP forwarders

• New services can be introduced with minimal administrative effort, because service profiles can be centrally provided and downloaded per session into the IP forwarders, which enforce the service policies.

• The architecture is open to include policy decision functions.

• All service subscriber data is stored centrally in the AAA databases of the service providers. There is no need to store subscriber specific data in the IP forwarder.

• Nomadic users can be supported based on RADIUS proxy chaining

• Inter NAP-NSP accounting and service provisioning is based on a RADIUS centric control plane which is currently available in many provider networks.


4.3.1.6 DHCP based session binding

Figure 4-10: DHCP based session binding

After authentication and service selection, the CPG router instance uses DHCP to retrieve auto-configuration data, including the IP address/subnet, from the DHCP server of the NSP (see Figure 4-10). A DHCP relay agent in the IP forwarder adds the IP address/subnet and the lease time to the session data. If the DHCP auto-configuration was successful, the IP forwarder activates the forwarding rule in the data plane.

If the DHCP session is released or if the DHCP lease time expires, the IP forwarder will remove the forwarding rule from the data plane and the session terminates.

Note that the DHCP proxy and the RADIUS client require an IP control interface in the IP forwarder; this interface is not shown in Figure 4-9 and Figure 4-10.

4.3.1.7 ARP proxy/relay

An ARP proxy or relay function is needed in the IP forwarder since the Ethernet layer is terminated. This situation is similar to the MAC forced forwarding option of an Ethernet based network model.

An easy solution is to implement a simple ARP proxy in the IP forwarder which always answers ARP requests from the user side and the network side with its own MAC address (see Figure 4-11).


Figure 4-11: An ARP proxy in the IP forwarder always replies with its own MAC address

This requires that the MAC address of the default gateway of each service connection is either administered or retrieved by the IP forwarder itself, by issuing an ARP request to the default gateway when the session is set up.

Note that this simple solution cannot cope with the situation where multiple default gateways are auto-configured per session, e.g. for redundancy purposes.
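A sketch of this simple ARP proxy could look as follows (one default gateway per service connection assumed, as noted above; the data structures and MAC placeholders follow the example of Figure 4-7 and are otherwise illustrative).

    # Sketch of the simple ARP proxy in the IP forwarder: every ARP request, from
    # the user side or the network side, is answered with the forwarder's own MAC
    # address for the respective side. Illustrative only.

    class SimpleArpProxy:
        def __init__(self, user_side_mac, network_side_mac):
            self.macs = {"user": user_side_mac, "net": network_side_mac}

        def on_arp_request(self, side, sender_ip, target_ip):
            """Reply on behalf of the real target with the forwarder's MAC."""
            return {"op": "reply", "target_ip": target_ip,
                    "target_mac": self.macs[side], "to_ip": sender_ip}

    proxy = SimpleArpProxy(user_side_mac="M5", network_side_mac="M6")
    # CPN c asks for its default gateway Ir1: it gets M5, so all upstream traffic
    # is sent to the IP forwarder (as described in 4.3.1.3).
    print(proxy.on_arp_request("user", sender_ip="Ic1", target_ip="Ir1"))
    # The edge node asks for the user address Ic1: it gets M6.
    print(proxy.on_arp_request("net", sender_ip="Ir1", target_ip="Ic1"))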


4.3.2 NAP provides routed IP service for application wholesale

Figure 4-12: Most applicable business scenario for routed IP in the NAP network. The figure shows the NAP with an Edge Router, a BRAS, a service gateway and a NAP DHCP server, connected to NSP 0 (all services: HSI, applications), NSP 1, NSP 2, an ISP (some applications) and ASPs 1 to 5.

This use case is applicable for business models and network scenarios in which the NAP not only provides access services but is simultaneously also an NSP, or has a close business relationship with the NSP (NSP0 in Figure 4-12 above). As such, NAP/NSP0 offer an IP network service to the end customers. This can be used for specific applications, e.g. video applications, but optionally also for high speed internet. This business scenario was identified in MA2.5 as scenarios (b) and (c). It covers the realistic case in which NAP/NSP0 are developing their broadband access and aggregation networks into an IP network platform for application wholesale to third-party ASPs.

This use case avoids the need to support different host address schemes of multiple NSPs within the same routed IP network.

Note, however, that in the case of Figure 4-12 there is no “Equal Access” for NSP1 and NSP2 when compared to NSP0.

4.3.2.1 IPv4 / IPv6 subnetting

In an access network with routed IP there is no end-to-end layer-2 connectivity between the CPN and the Edge Node. The CPN will therefore be connected via layer-2 to the nearest IP forwarder in the network. This implies that the IP address scheme will be spread out to all the IP forwarders that have direct layer-2 connectivity to the CPNs.

4.3.2.1.1 Avoiding waste of IP addresses

If each CPN is directly connected via standard Ethernet to a port of a standard router, each first-mile link will be seen as a separate LAN (IP subnet). Even if the CPN only requires a single IP address, the first-mile link must still be an IP subnet, with at least two IP addresses issued: the CPE address and the default gateway address.


In IPv4 two more addresses per subnet are also needed: the network address and the directed broadcast address. This means that each IPv4 subnet requires at least 4 IP addresses, i.e. at least a /30 subnet. Where only one IP address is needed per CPN, up to 75% of the address space is wasted. If two IP addresses are needed for the CPN, a /29 subnet must be issued, thus using 8 IP addresses.

Since IPv4 addresses are scarce this is not acceptable.

Therefore more efficient options are identified in the following sections.

Note that these alternative solutions for IPv4 address waste will not be in conflict with the preferred solution for IPv6 deployment. Thus, either VLAN aggregation or LAN aggregation can be used depending on where the first IP forwarder is located. The decision of where to locate the first IP forwarder can then be based on other criteria.

4.3.2.1.2 Non-optimal solution: Using 31-bit prefixes

An alternative solution for IPv4 is to use a modified IP access node, where 31-bit prefixes following RFC 3021 ([17]) are used.

A subnet for a port to the CPN can be redefined to have IP addresses only for the individual network interfaces belonging to the LAN; the network and broadcast addresses can thus be excluded. In a CPN with only one host or a routed RGW, only two IP addresses will then be needed, reducing the IPv4 address waste to 50%. The subnet will be a /31 for each port. This procedure has already been proposed for general point-to-point links and is described in RFC 3021 ([17]). However, this solution cannot be used when more than one IP address is allocated per CPN (e.g. in the case of a bridged RGW). It will thus only be usable when the RGW is a router, since the router only needs one address.

4.3.2.1.3 Complete solution

In order to completely avoid the address waste, the IP aggregation node should be able to assign the same subnet to multiple aggregated users, and present them with the same default gateway (the same IP address at the aggregation node side for these multiple aggregated users). There are different ways to implement such behaviour without impact on either the users or the rest of the access and aggregation network. The choice of method is a matter of implementation and hence beyond the scope of the network architecture study in this document.

• VLAN aggregation solution (RFC 3069 [19])

One solution is to let the first IP forwarder, seen from the CPN perspective, be higher up in the network. The CPNs can be separated by an Ethernet solution in the small part of the access network below the IP forwarder, using e.g. PPPoE, MFF or C-VLANs as described in MA2.5 (see Figure 4-13). The IP forwarder will then have one subnet available for a larger number of CPNs, which makes it possible to use the IP addresses more efficiently.


For the VLAN case the IP forwarder should use the VLAN aggregation mechanism described in the informational RFC 3069 ([20]), where several C-VLANs, called sub-VLANs, can be gathered under one big VLAN, called the super-VLAN. The C-VLANs are allocated per port in the first Ethernet aggregation point, so the CPEs themselves are not required to generate these C-VLANs. This super-VLAN can be treated as one big subnet by the IP forwarder. Since the aggregation needed in the Ethernet domain to increase address efficiency is not large, the VLAN maximum of 4096 will not be an issue. As an example, it is sufficient to aggregate 100 CPNs to get down to around 3% waste of IP addresses.
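The waste figures quoted in this section (75% for a /30 per CPN, 50% for an RFC 3021 /31, around 3% when roughly 100 CPNs share one super-VLAN subnet) follow from counting the overhead addresses per subnet, as the small calculation below illustrates; it assumes one address per CPN and a subnet sized to the number of CPNs plus the overhead.

    # Overhead addresses per IPv4 subnet: network address, directed broadcast and
    # default gateway (3 in the normal case, only the gateway for an RFC 3021 /31).
    # Waste = overhead / total addresses, for a subnet that just fits N CPNs
    # (one address per CPN assumed).

    def waste(cpns: int, overhead: int = 3) -> float:
        return overhead / (cpns + overhead)

    print(f"/30 per CPN            : {waste(1):.0%}")              # 75%
    print(f"/31 per CPN (RFC 3021) : {waste(1, overhead=1):.0%}")  # 50%
    print(f"super-VLAN, 100 CPNs   : {waste(100):.1%}")            # ~2.9%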

This solution would fit well for a small site, where a small number of Ethernet access nodes are aggregated to a site router. The site router is connected to the routers of the other sites.

At larger sites there may be another step of Ethernet aggregation, where a number of Ethernet access nodes are aggregated to an Ethernet aggregation node. This aggregation node is in turn aggregated to an IP forwarder. This IP forwarder can be a site router, or, in the case of a very large site, there will be several IP forwarder nodes at the site connected to a large site router.

Please note that an Ethernet aggregation switch functionality and the router functionality can be implemented in a single node, which then becomes the IP aggregation node (bottom case in the following figure). In that case the first aggregation node can become the IP node.

Figure 4-13: VLAN Aggregation. The figure shows CPEs connected via C-VLANs through Ethernet Access Nodes and Ethernet Aggregation Nodes to an IP Forwarder; in the bottom case the IP Forwarder is itself the first aggregation node.


• LAN aggregation solution

In analogy to VLAN aggregation ([21], RFC 3069), another variant for aggregating the ports of the IP access node, called LAN aggregation, is proposed.

The aggregated ports are now considered to belong to one big subnet, called the super LAN. All CPNs in this super LAN have the same default gateway, which implies that each corresponding port on the router must have the same IP address. The default gateway address can therefore be reused, thus reducing the waste of IP addresses further. LAN aggregation can be used independently of the net and broadcast address removal described earlier. All CPNs belonging to a super LAN have the same super LAN subnet mask, which implies that hosts of different CPNs think they are on the same LAN and may ARP for each other’s MAC address. In order for the hosts to communicate, an ARP proxy should be used in the IP forwarder. If required, LAN aggregation behaviour in a router could be standardized (in an informational RFC similar to RFC 3069) in order for the behaviour of such IP access nodes to be referenced and the interworking with routed RGWs to be covered (e.g. there must be no damage caused if an external host sends IP packets to the default gateway address of a super LAN, e.g. a requirement on the router to be able to handle the same IP address on several ports, etc.).

Another possibility is to require the IP forwarder and the RGW to behave as in the PPP case, where there is a point-to-point link between the host and the router. In this case the RGW should always send the uplink traffic to the MAC address of the IP forwarder interface. This option needs standardization of new behavior on IPoE forwarding.

4.3.2.1.4 IPv6 solution

Waste of IP addresses due to one LAN per CPN is probably no issue for the IPv6 address space, since there are lots of IPv6 addresses. In the case that each CPN gets a huge subnet from start, each host in the CPN will be able to use the 64 bits of the IP address devoted for MAC address. This means that using one IP address per LAN for default gateway will not make any difference to the overall efficiency of address usage.

This means that for an IPv6 Access Network there is no need to have Ethernet access nodes as the first part of the access network, as described above for an IPv4 network. The preferred solution, considering address space, is thus to have an IPv6 access node. However, there may other arguments, e.g. be cost / performance considerations, for having Ethernet aggregation before the first IP forwarder.

If the first IPv6 node is located higher up in the network, Ethernet-based mechanisms can be used to separate user traffic. However, MFF and VLAN aggregation are currently only specified for IPv4 and therefore cannot be used for IPv6 until further standardization is done. Instead, one VLAN per CPN can be used, without VLAN aggregation.

4.3.2.1.5 IPv4 / IPv6 coexistence

If IPv4 and IPv6 are to be used in parallel in the IP access network, all IP forwarders must have dual stacks and dual routing mechanisms. This means that each IP forwarder can forward both IPv4 and IPv6 packets on the same links.

Using the proposed address aggregation mechanisms for IPv4, the location of the first IPv4/IPv6 node can now be chosen based on other arguments than address waste. There are two options for the location of the first IP node in the aggregation network. Each option will require different solutions for IPv4 and IPv6:

• Access Node as first IP node:

o IPv4: Use LAN aggregation (or PtP forwarding)

o IPv6: Use normal IP configuration = one LAN per CPN

• Node higher up as first IP node:

o IPv4: Use MFF, or one C-VLAN per CPN with VLAN aggregation

o IPv6: Use one C-VLAN per CPN (without VLAN aggregation)

4.3.2.2 Service multiplexing

Users in a CPN may simultaneously use multiple applications/services, e.g. video streaming requiring low packet loss in addition to best effort high speed internet (HSI) browsing.

4.3.2.2.1 With L2 service segregation in the CPN

This option segregates services/applications in the CPN at Layer 2 using VLAN tagging. Therefore multiple independent IP address spaces can be supported in the CPN. For example, a PPPoE connection for HSI can coexist with an IPoE/DHCP session for video applications. Service selection/multiplexing can be done as described in 4.3.1.

4.3.2.2.2 Without L2 service segregation in the CPN

In this case multiple applications use the same LAN in the CPN (no separation by VLANs). Nevertheless the RGW and the NAP network must apply application specific policies (e.g. with respect to QoS) to the different application flows. In this case two mechanisms for service segregation have been identified (compare also TR-059 [28]):

• DSCP code points: in this case the RGW and the NAP network apply appropriate policies based on the DSCP code point (a small illustrative mapping is sketched after this list).

• Routing in the RGW with auto-configuration by RIPv2: in this case upstream traffic to a certain application is identified in the RGW by one or more specific IP address prefix(es). RIPv2 is used by the NAP network to auto-configure the RGW with these prefixes. The RGW forwards the traffic to the appropriate L2 connection (PVC or C-VLAN) on the access line.
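As an illustration of the first mechanism, the sketch below shows a hypothetical DSCP-to-treatment mapping such as an RGW or NAP node could apply; the class names and the choice of values are assumptions made for this example, not something prescribed by this deliverable (46 and 34 are the standard EF and AF41 code points).

```python
# Hypothetical DSCP -> treatment mapping (assumed for illustration only).
DSCP_POLICY = {
    46: "expedited forwarding (e.g. voice)",
    34: "assured forwarding (e.g. video streaming)",
    0:  "best effort (e.g. HSI browsing)",
}

def treatment_for(dscp: int) -> str:
    """Return the treatment a node would apply to a packet carrying this DSCP value."""
    return DSCP_POLICY.get(dscp, DSCP_POLICY[0])

print(treatment_for(46))   # expedited forwarding (e.g. voice)
```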

4.3.2.3 Routing

A prerequisite for this use case is that either NAP and NSP are the same organizational entity or have a close trust relationship with one another. Therefore NAP and NSP networks can be in the same Autonomous System (AS), in which case IGP routing can be used.

In some cases large IP networks are structured into multiple ASes, in which case an EGP such as BGP-4 may be required between the NAP and NSP networks.

In any case the IP addresses are assigned by the NAP network based on a fixed subnetting scheme. These subnets or the summarized subnets are propagated by IGP/EGP into the NSP network. On the other hand the NSP network propagates summarized global routes into the NAP network.

4.3.3 NAP provides routed IP service for IP wholesale to third-party NSPs

In this situation multiple third-party NSPs are served by the NAP, which means that several autonomous systems coexist in the NAP. The requirements and methods for the population of the routing tables in the IP nodes are for further study.

4.4 Use Cases For IPv6

4.4.1 Access network assumptions for the IPv6 use cases

For the IPv6 use cases described in this section, it is assumed that IPv6 nodes (routers) are deployed in the access and aggregation network between the subscriber residential gateway (RGW) and the aggregation network edge nodes. Throughout the discussion presented below, the access node (AN) is considered to be the first IPv6 node in the access network (from the subscriber's point of view). As such, a new subnet originates at each AN LT. It is clear that additional layer-2 aggregation can be deployed between the subscriber RGW and the first IPv6 node, if this is considered beneficial from a technical or economic point of view.

In this version of the document, the possible advantages IPv6 could bring in conjunction with other access and aggregation architectures, such as PPP(oATM) or Ethernet aggregation (VLAN, etc.), are not investigated and are considered future work.

4.4.2 IPv6 address structure

An IPv6 address consists of 128 bits, equally divided between the routing prefix and the interface identifier ([9] RFC 2373, [11] RFC 2460). The routing prefix of a global unicast address is a globally unique prefix identifying the subnet the interface belongs to, while the last 64 bits of the IPv6 address represent the – also globally unique – EUI-64 identifier of the network interface. Global unicast provider prefixes are assigned by the Internet Assigned Numbers Authority (IANA) through the Regional Internet Registries (RIR). At this moment, RIRs (e.g. RIPE for Europe) assign /32 prefixes to ISPs [RIPE267], allowing ISPs to structure their topology with the remaining 32 bits of the prefix (Figure 4-14). Some recent RIPE allocations indicate, however, that RIRs are willing to distribute shorter prefixes (e.g. /20), i.e. larger address blocks.
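As a small aside, the sketch below illustrates how an EUI-64-based interface identifier is commonly derived from a 48-bit MAC address (inserting FF:FE in the middle and flipping the universal/local bit); this is standard IPv6 practice rather than anything specific to this deliverable, and the MAC address used is an arbitrary example.

```python
def mac_to_eui64_interface_id(mac: str) -> str:
    """Derive the 64-bit IPv6 interface identifier from a 48-bit MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                                   # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert FF:FE in the middle
    return ":".join("%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2))

print(mac_to_eui64_interface_id("00:11:22:33:44:55"))   # 0211:22ff:fe33:4455
```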

Figure 4-14: IPv6 address

Figure 4-15: division of free bits

As depicted in Figure 4-15, hierarchical addressing can create artificial subdivisions of the IPv6 prefix, including:

1. NAP selection bits: only necessary when an ISP operates with different NAPs within the same country, otherwise these bits can be omitted;

2. Some bits representing the NAP access network topology, necessary to enable efficient routing in the access network;

3. Some bits representing the subscriber network topology: can be omitted if no end-user subnetting is allowed.

In this scenario, it is assumed that NAPs are responsible for the completion of the bits representing the access network topology. This implies that the NAP has to configure its DSLAMs and edge routers for each ISP it is servicing, in order to advertise the correct prefixes to the subscribers. The number of bits reserved for representing the subscriber network topology may also be ISP dependent.

In practice, subscribers, NAPs and ISPs will have to agree on the number of bits reserved for each part. An ISP offering a large subnetting area to its subscribers cannot service as many customers as an ISP offering a single /64 prefix to each end-user.

Within IETF RFC 3769 ([27]) the following statement is made for subscriber prefix lengths: “The prefix delegation mechanism should allow for delegation of prefixes of lengths between /48 and /64, inclusively. Other lengths should also be supported.”

The current tendency is to distribute /48 prefixes to customers when it is expected they will need further subnetting. When it is known that exactly one subnet is needed, a /64 prefix is provided. If only one address is needed, a full 128-bit IPv6 address might be assigned.
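To make the subdivision concrete, the sketch below builds a subscriber /48 from a /32 ISP allocation using a purely hypothetical split of the 16 intermediate bits (4 bits for NAP selection, 12 bits for the NAP access network topology); the prefix 2001:db8::/32 is the IPv6 documentation prefix, used only as a stand-in for an RIR allocation.

```python
import ipaddress

# Hypothetical split of the 16 bits between /32 and /48:
# 4 bits NAP selection + 12 bits NAP access-network topology.
ISP_PREFIX = ipaddress.ip_network("2001:db8::/32")   # documentation prefix as stand-in

def subscriber_prefix(nap_id: int, topo_id: int) -> ipaddress.IPv6Network:
    """Return the /48 delegated to the subscriber identified by (nap_id, topo_id)."""
    assert 0 <= nap_id < 2 ** 4 and 0 <= topo_id < 2 ** 12
    offset = (nap_id << 12) | topo_id                 # value of the 16 topology bits
    base = int(ISP_PREFIX.network_address) + (offset << (128 - 48))
    return ipaddress.IPv6Network((base, 48))

print(subscriber_prefix(nap_id=3, topo_id=0x00a))     # 2001:db8:300a::/48
```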

4.4.3 Allocation Efficiency Considerations

For this section, we assume that each subscriber receives at least one IPv6 subnet (i.e. a /64 prefix); individual assignment of 128-bit addresses is not considered. This is in accordance with the current IPv6 address assignment policies.

In order to avoid excessive renumbering due to future growth and network topology restructuring, some margins have to be provided, causing address allocation inefficiency. This is not a new problem, as every large network (e.g. telephone networks, the current IPv4 network) sacrifices some address space to implement a manageable hierarchy. In RFC 3194 ([24]), a measure of address allocation efficiency, the host density ratio (HD-ratio), is introduced as follows:

HD = \frac{\log(\text{number of allocated addresses})}{\log(\text{maximum number of available addresses})}

Equation 4-1: HD definition

Equation 4-1 yields a real number between 0 and 1 for any number of allocatable addresses >1, and any number of allocated objects ≥1 and ≤ the number of allocatable addresses.

Some case studies presented in RFC 3194 ( [24]) demonstrate that an HD-ratio of 80% indicates a good tradeoff between address allocation efficiency and network manageability. An HD-ratio smaller than 80% implies an inefficient address allocation policy, while networks with an HD-ratio larger than 80% quickly become unmanageable, with 87% being the practical upper limit. (Currently, the IPv4 Internet has an HD-ratio of 91% and NAT (Network Address Translation) functionality is required to maintain its manageability.)

From Equation 4-1, it is straightforward to calculate the practical number of bits in the IPv6 prefix necessary to represent the NAP access network topology, given the number of subscribers and a desired HD-ratio:

\#bits = \left\lceil \frac{\log(\#subscribers)}{\log(2) \cdot HD} \right\rceil

Equation 4-2

where \lceil x \rceil denotes the smallest integer larger than or equal to x.

The following table, partly copied from RIPE Document 267, provides absolute address utilisation figures for the distribution of /48 IPv6 prefixes, corresponding to an HD-ratio of 0.8. From the table, we can derive that the practical number of subscribers that can be supported by a /32 prefix is limited to 7132, provided that each subscriber receives a /48 prefix. In practice, this means providers will need larger address blocks from the RIRs, or otherwise providers will have to assign smaller subnets to subscribers (e.g. /64 prefixes).

Prefix length   48 - Prefix length   Total /48 subnets   Threshold (HD = 0.8)
48              0                    1                   1
47              1                    2                   2
46              2                    4                   3
45              3                    8                   5
44              4                    16                  9
43              5                    32                  16
42              6                    64                  28
41              7                    128                 49
40              8                    256                 84
39              9                    512                 147
38              10                   1024                256
37              11                   2048                446
36              12                   4096                776
35              13                   8192                1351
34              14                   16384               2353
33              15                   32768               4096
32              16                   65536               7132
31              17                   131072              12417
30              18                   262144              21619
29              19                   524288              37641
28              20                   1048576             65536
27              21                   2097152             114105
26              22                   4194304             198668
25              23                   8388608             345901
24              24                   16777216            602249
23              25                   33554432            1048576
22              26                   67108864            1825677
21              27                   134217728           3178688
20              28                   268435456           5534417
19              29                   536870912           9635980
18              30                   1073741824          16777216
17              31                   2147483648          29210830

Table 4-4: address utilisation figures for distribution of /48 IPv6 prefixes
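A minimal sketch of how the thresholds in Table 4-4 and Equation 4-2 relate, assuming the RIPE-267 convention of rounding the HD-based threshold to the nearest integer:

```python
import math

def hd_threshold(prefix_len: int, hd: float = 0.8) -> int:
    """Threshold number of /48 assignments for an allocation of the given length (Table 4-4)."""
    total_48_subnets = 2 ** (48 - prefix_len)
    return round(total_48_subnets ** hd)        # rounding as in RIPE-267

def topology_bits_needed(subscribers: int, hd: float = 0.8) -> int:
    """Equation 4-2: bits needed to number the subscribers at the target HD-ratio."""
    return math.ceil(math.log(subscribers) / (math.log(2) * hd))

print(hd_threshold(32))             # 7132, as in the /32 row of Table 4-4
print(topology_bits_needed(7131))   # 16 -> /48 customers still fit within a /32 allocation
```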

When multiple ISPs are active within the same access network (which is a common situation), each ISP propagates its own prefix through the access network. When static addressing is deployed, each ISP is assumed to be able to address the entire access network, regardless of the number of customers it serves. It is clear that this limitation reduces the HD-ratio of the access network as the number of ISPs increases, as depicted in Figure 4-16. This allocation inefficiency can be avoided by using dynamic addressing.

Figure 4-16: HD-ratio for an increasing number of ISPs when only static address allocation is deployed. Note that the number of ISPs does not influence the HD-ratio in case of dynamic address allocation.

4.4.4 Static Addressing Schemes

The next subsections discuss two different ways of employing the NAP topology field introduced in Section 4.4.2. In the first scheme, ISP prefixes are propagated to each NAP IP node, thereby creating a prefix hierarchy from the ISP up to the subscriber. The second scheme assumes that NAPs number their access nodes independently of the ISP prefixes. As illustrated in Figure 4-17, it is straightforward for multiple ISPs to distribute their prefixes in the same access network, because each IPv6 interface can have multiple addresses.

Figure 4-17: ISP prefix propagation

4.4.4.1 Hierarchical model (+variations)

Figure 4-18 presents the most straightforward layer-3 architecture for the access network, assuming access multiplexers are IP-aware on all UNIs and NNIs. The root-level nodes represent the different ISPs' edge routers, while the intermediate nodes are NAP edge routers. Because of the very simple architecture, routing can be done very efficiently by aggregating prefixes from adjacent access multiplexers into one larger prefix. The disadvantages of this strictly hierarchical model are that there are no redundancy provisions (robustness in case of e.g. edge router failure) and that NAP-local traffic between two access multiplexers connected to different NAP edge routers has to pass through an ISP edge router, possibly limiting future scalability.

Figure 4-18: Basic hierarchical model

As illustrated in Figure 4-19, NAP edge router redundancy can be obtained by attaching two or more edge routers to the same layer-2 network, where each router announces the same subnet prefixes. Using IPv6, access multiplexers do not need to run a dynamic routing protocol (e.g. Open Shortest Path First) to cope with edge router failures: IP datagrams can be forwarded based on IPv6 anycast features. While redundancy issues are solved, this mechanism does not provide a way to balance the load between the edge routers. One edge router processes the complete load, while the others act as hot-standby machines, ready to take over when a failure occurs. Running dynamic routing protocols on the access multiplexers could provide load-balancing facilities (e.g. by forwarding traffic from/to different ISPs to different NAP edge routers).

Note that this strategy does not impact the number of bits necessary for NAP subnetting, because no new subnets are created.

Figure 4-19: Redundancy

Thus far, possible locality of traffic (i.e. data sent and received within the same access network) has not been taken into account. Data exchanged between two users connected to the same access multiplexer or NAP edge router can be processed without ISP involvement, but there is no direct path from one NAP edge router to another. While such a path is not strictly necessary from a functional point of view, it could avoid bottlenecks in the access network due to the increasing peer-to-peer traffic volume.

The layer-3 architecture presented in Figure 4-20 uses some of the NAP address space to enable a flexible NAP edge router interconnection. By sacrificing the address space of one access multiplexer, enough subnets become available to support interconnection links between NAP edge routers.

Because the strict hierarchy is no longer maintained, edge routers obviously need a dynamic routing protocol to take full advantage of the enriched architecture, and the number of routing entries in the NAP edge routers will be substantially larger (because prefix aggregation is not always possible).

Figure 4-20: NAP ER interconnection

4.4.4.2 NAP proprietary addressing

If access and aggregation nodes receive their IP addresses independently of the ISPs offering Internet access to subscribers, subscriber address assignment can proceed in a very flexible and efficient way. The bits in the IPv6 address reserved for the NAP topology can simply identify the DSLAM and port the user is connected to, thereby reducing the access network topology overhead in the ISPs' address space. Consequently, ISPs can address more subscribers with this addressing scheme than with the hierarchical model, for the same global prefix length. Another advantage of this scheme is that NAPs can hide their internal topology from the outside world in a natural way. The address allocation inefficiency due to multiple ISPs offering connectivity to the same access network, as presented in Section 4.4.3, remains present, however.

For the internal addressing of NAP access nodes, one might consider using IPv6 link-local or site-local addresses, as there is no direct need for these nodes to be publicly accessible. In practice this is not possible, however, because routers have to be able to notify the originator of a datagram in some situations (e.g. IPv6 path MTU discovery). This means access nodes need global unicast addresses and a gateway to the Internet (which can be physically or logically separated from the gateways to the ISPs). As depicted in Figure 4-21, this gateway should only be used for traffic originating from, or directly addressed to (if possible at all), one of the access nodes (mainly control traffic). Non-local data originating from subscriber equipment or addressed to a subscriber is routed through the ISP gateways, as indicated in the figure.

A major drawback of this flexible solution is the need for dynamic routing functionality in all layer-3 access nodes, including the access multiplexers, while basic routing functionality is sufficient at the DSLAM when applying the hierarchical addressing scheme.

Figure 4-21: NAP proprietary addressing

4.4.5 Dynamic Addressing

Dynamic addressing can overcome the address allocation inefficiencies inherently present in static allocation mechanisms. Most of the topology considerations introduced above remain valid, however. Prior to discussing prefix delegation mechanisms in IPv6 routed access networks (Section 4.4.5.2), a novel address space management algorithm is presented in Section 4.4.5.1.

4.4.5.1 Address Space Management for IPv6 Dynamic Prefix Delegation

Basically, an ISP receives IPv6 address space, represented by a /32 prefix, to manage the needs of its own network, including its network topology and its customer requirements. In the following we consider that customers can request a subnet, represented by a prefix. Using this subnet the customer can describe its own network topology, while the management of this subnet is handled by the customer itself. On the network side, the access network must also have address bits to describe the NAP and ISP/NSP topology.

When IPv6 is used for provider separation, a customer connecting to a certain provider (NAP and/or NSP) in the access network must have its own IPv6 address from that provider's address space. In a multi-service, multi-provider environment, a customer can be connected simultaneously to more than one provider in an arbitrary way, having address space from each of the connected providers. A packet originating from a customer network belongs to the provider whose address space contains the packet's source address. The management of multiple addresses on the customer side implies additional effort for the customer, but on the other hand it is a simple and natural way of doing provider separation. The addresses can be used for NSP selection only, but could be extended to support NAP selection as well. In the latter case the NAPs must have contiguous address space representing their own subnetwork.

4.4.5.1.1 Customer prefix requests

According to the prefix delegation requirements [27], a customer can request prefixes from /64 to /48, which corresponds to at most 16 bits of subnetting. Besides, the addressing mechanism must support both long- and short-term address delegations. Basically, this dynamic model supports only short-term assigned addresses, but with the appropriate use of address timings, long-term associations can also be managed by the access network.

4.4.5.1.2 Hierarchical Addressing

For the investigation of dynamic addressing a fully hierarchical model is considered, since the management of provider selection and the routing in the access network can be handled more easily if the addressing follows strict rules in the network. In the following we outline some considerations on the selection of hierarchical address aggregation points.

4.4.5.1.3 Hierarchical Aggregation Points in the Access Network

A hierarchical aggregation point with its own contiguous address space represents all of the underlying lower-level addresses. The selection of these hierarchical points has an influence on routing efficiency and on the servicing architecture (if we use IPv6 addresses for provider separation). On the other hand, we must consider the waste of address space. At each hierarchical aggregation point, the aggregated address space will be represented with at least the maximum number of bits of the underlying lower-level address spaces. Consider, for example, the situation where only two subnetworks are connected to a hierarchical aggregation point, one with a /48 and one with a /64 prefix. Then the hierarchically aggregated address space must be represented with at least a /47 prefix, where nearly half of the address space is unused. The selection of these points always depends on the actual access network topology and the service architecture.

[Figure shows possible hierarchical aggregation points HG, AM, EN/NAP, NSP, Sub-ISP and ISP, spanning the NAP topology and the ISP topology.]

Figure 4-22 Hierarchical aggregation points

Figure 4-22 depicts the possible hierarchical aggregation points. While the topological aspects of the home network are out of the scope of MUSE, only one hierarchical point is assumed in the customer home network.

Home Network: At the customer side, there can be one hierarchical aggregation point in the home network representing the user's prefix requests. If a customer requests a prefix between /64 and /48, one IPv6-aware node must represent this address space. This node could be the RHG or an IPv6 router connected to the BHG. In Figure 4-22, HG represents this point of aggregation. As a DHCP client this node sends requests towards the access network, and it behaves as a DHCP server for the CPEs in the home network.

If the customer requests only one IPv6 address, it will be served from the address space of the next hierarchical aggregation point in the access network (e.g. the AM). Basically, the /64 requests could be served from a separate IPv6 address space reserved for this type of request. Another solution is that the next aggregation point has no separate address space for serving /64 requests, but serves them from the beginning of its own address space. In this case the address uniqueness among the customers must be assured with an additional mechanism (e.g. with DAD, Duplicate Address Detection).

Access Multiplexer (AM): For efficient routing, the AM should be a hierarchical aggregation point. The main differentiating factor is the routing table size in the different scenarios. Let us consider two scenarios with peer-to-peer traffic, implemented by routing among AMs without involving the EN. In the first scenario the AM is not a hierarchical point. For the sake of simplicity, let #ISP denote the number of ISPs connected to the access network, #EN the number of ENs, #AM the number of AMs, and #HG the number of customers connected to an AM. In the first case the routing table size N at an AM is approximately:

N = \#ISP \cdot \#EN \cdot (1 + \#HG + \#HG \cdot \#AM)

Equation 4-3

In the second scenario the AM is considered a hierarchical aggregation point. Here, all HGs belonging to the same AM are represented by only one routing entry:

N = \#ISP \cdot \#EN \cdot (1 + \#HG + \#AM)

Equation 4-4

If we consider numerous HGs per AM, the difference between the routing table sizes can be significant. We must emphasize that this difference only occurs when peer-to-peer routing is enabled. With centralized communication there is no significant difference between the routing table sizes.
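A small sketch comparing the two scenarios, based on Equations 4-3 and 4-4 as reconstructed above; all the numbers are hypothetical and only serve to show the order-of-magnitude difference when peer-to-peer routing is enabled.

```python
def am_routing_table_size(n_isp: int, n_en: int, n_hg: int, n_am: int,
                          am_is_aggregation_point: bool) -> int:
    """Routing table size at an AM following the reconstructed Equations 4-3 / 4-4."""
    remote = n_am if am_is_aggregation_point else n_hg * n_am
    return n_isp * n_en * (1 + n_hg + remote)

# Hypothetical access network: 3 ISPs, 2 ENs, 500 HGs per AM, 40 AMs.
print(am_routing_table_size(3, 2, 500, 40, am_is_aggregation_point=False))  # 123006
print(am_routing_table_size(3, 2, 500, 40, am_is_aggregation_point=True))   # 3246
```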

Edge Node: For practical and efficient routing it is useful if the EN is a hierarchical aggregation point.

Network Access Provider (NAP): If the servicing architecture allows a NAP to have more than one EN, a logical hierarchical aggregation point can also be represented by the NAP. In this situation a specific node could take on the aggregation and delegation behaviour, which could also be implemented virtually at a physical EN. If such a NAP-level point exists in the network logically independent of the EN functionality, the ENs would not have to aggregate requests themselves.

Internet Service Provider (ISP): This is the highest level of the hierarchy, and it must be considered a hierarchical aggregation point. Further points of hierarchical aggregation can exist within the ISP topology as well.

4.4.5.1.4 Address Request Aggregation

Each hierarchical aggregation point sends an aggregated address request to the node above it, indicating the number of bits representing the size of the address space needed to cover the incoming requests from the lower-level aggregation nodes. Whenever a node sends a request to an upper node in the hierarchy, aggregation is needed; all requests arriving at a hierarchical aggregation node are therefore already aggregated values. The aggregated request at a node can be calculated from the inbound requests of the lower nodes as follows:

If N denotes the number of lower entities requesting a prefix from the hierarchical node and n_i is the size, in bits, of the prefix requested by the i-th lower node, then the prefix length needed by the upper-level node to fulfil the address requests is

n = \left\lceil \log_2 \left( \sum_{i=1}^{N} 2^{n_i} \right) \right\rceil

Equation 4-5

in bits, where \lceil \cdot \rceil denotes the ceiling function, i.e. the smallest integer larger than or equal to its argument.

Additionally, an aggregation point can reserve a separate address space of n_0 bits to serve requests for /64 prefixes, and a branch of addresses represented by n_spare bits to give this node delegation flexibility. If address reservation is considered, the aggregated address space is represented by

n = \left\lceil \log_2 \left( \sum_{i=1}^{N} 2^{n_i} + 2^{n_0} + 2^{n_{spare}} \right) \right\rceil

Equation 4-6

in bits.
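A minimal sketch of Equations 4-5 and 4-6, assuming the bit convention used above (a /48 request corresponds to n_i = 16 bits, a /64 request to n_i = 0 bits); it reproduces the /48 + /64 → /47 example from Section 4.4.5.1.3.

```python
import math

def aggregated_bits(child_bits):
    """Equation 4-5: bits needed at the upper node to cover the child requests."""
    return math.ceil(math.log2(sum(2 ** n for n in child_bits)))

def aggregated_bits_with_reservation(child_bits, n0, n_spare):
    """Equation 4-6: as above, plus a reserved /64 pool (n0 bits) and spare space (n_spare bits)."""
    return math.ceil(math.log2(sum(2 ** n for n in child_bits) + 2 ** n0 + 2 ** n_spare))

# One /48 child (16 bits) and one /64 child (0 bits):
bits = aggregated_bits([16, 0])
print(bits, "->", f"/{64 - bits}")                       # 17 -> /47
print(aggregated_bits_with_reservation([16, 0], 8, 8))   # still 17: small reservations fit in the branch
```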

4.4.5.1.5 Address Delegation

Address delegation is the answer to the request sent by the requesting node. The requesting node acts as a client and the delegating node as a server. Assume that the requesting node sends a renewal message while it is using an address space characterized by its size (n_x(t-1)) and the beginning of the address space (a_x(t-1)). The requesting node expresses its request by the requested size in bits, n, and the beginning of the address space, a_x(t-1). See Figure 4-23.

[Figure shows the requesting node, holding (a_x(t-1), n_x(t-1)), sending an address request (a_x(t-1), n) to the delegating node, which answers with an address delegation (a(t), n(t)).]

Figure 4-23 Address delegation entities

When the client sends the aggregated request (a_x(t-1), n), it can receive five answers from the delegating node in the delegation phase (classified in the sketch after this list):

i.) acceptance: The requesting node receives the address space as requested. (n(t) <= nx(t) = nx(t-1), ax(t) = ax(t-1))

ii.) expansion: The requesting node receives a larger address space than the current one, including the current address space. (n(t) <= nx(t) > nx(t-1), ax(t) = ax(t-1))

iii.) retraction: The requesting node receives a smaller address space than the current one, but it still meets its requirement. The retraction does not cause renumbering on this side. (n(t) <= nx(t) < nx(t-1), ax(t) = ax(t-1))

iv.) renumbering: The delegating node serves the request, but the delegated address space does not match the current address space of the requesting node. (nx(t) >= n(t), ax(t) != ax(t-1))

v.) denial: The delegating node cannot serve the request from its address space. (nx(t) = -1)
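The sketch below classifies a delegating node's answer according to the five cases above; n values are address-space sizes in bits and a values are the starting addresses, as in the text. The extra "insufficient" branch is an assumption added for inputs the list above does not cover.

```python
def classify_answer(n_requested: int, a_prev: int, n_prev: int,
                    a_new: int, n_new: int) -> str:
    """Classify a delegation answer per Section 4.4.5.1.5."""
    if n_new == -1:
        return "denial"                        # v.) request cannot be served
    if a_new != a_prev:
        return "renumbering"                   # iv.) served, but from a different block
    if n_new < n_requested:
        return "insufficient"                  # not in the list above; assumption for completeness
    if n_new == n_prev:
        return "acceptance"                    # i.)
    return "expansion" if n_new > n_prev else "retraction"   # ii.) / iii.)

print(classify_answer(n_requested=16, a_prev=0x1000, n_prev=16,
                      a_new=0x1000, n_new=18))   # expansion
```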

The delegation of the available address pool at a hierarchical aggregation point happens in proportion to the subsequent requests. Depending on the address delegation policies, the hierarchical point can delegate addresses in different ways.

4.4.5.1.6 Address Delegation Policies

When a hierarchical node delegates addresses, it can choose from different address delegation policies. Considering how the requested sizes are fulfilled, two different approaches can be used:

i.) Sparse delegation: In this case the delegating node delegates address space of exactly the requested size, which may lead to sparse address utilization, as depicted in Figure 4-24.

ii.) Complete delegation: In the other case the delegating node gives an address space completed to a full branch to the requesting node. This is the second case in Figure 4-24.

The second aspect is address reservation.

iii.) Address reservation: At the delegating node, address reservation can be used to provide delegation flexibility. When a delegating node has reserved a complete branch of addresses, new incoming requests can be served immediately at that node, without sending aggregated requests to the upper-level node and waiting for the delegation. Address reservation can be used most efficiently at the first hierarchical point of the access network (typically the AM).

iv.) No address reservation: There is no reserved address space to allow immediate delegation. This is applicable at hierarchical points far from the requesting end points, typically the EN and the ISP.

[Figure shows three examples of delegating an n-bit request: policies i.) and iv.), ii.) and iv.), and ii.) and iii.).]

Figure 4-24 Address delegation policies

4.4.5.1.7 Delegated Address Dynamicity

When a client receives an address space as the answer to its request, there are timings associated with this address space. Two intervals must be considered:

i.) Renewal time (t_r): within the time t_r after the address pool has been received, the client has to renew its request; the answer from the server can again be any of i.)-v.) (see Section 4.4.5.1.5). If the requesting client uses this address pool to serve address requests arriving at it, these requests can be served from the pool during t_r, while the pool is active.

ii.) Lease time (t_l): when an address space is associated with a node, the node can use it for communication as long as t_l has not expired. The requesting node can use the address space passively during this time.

In the multi-levelled hierarchical addressing architecture, we consider the same t_r and t_l values at a given level. In the following, index i represents the level in the hierarchy: index 1 represents the lowest level (customer site) and N represents the highest level (ISP site).

In the following, the relation below is assumed to hold:

t_{l,i} = t_{l,i-1} + t_{r,i}

Equation 4-7

This equation expresses that a hierarchical aggregation point has to keep its address space as long as any of the underlying nodes is using it. The passive address space must not expire (t_{l,i}) while the delegated addresses have not expired (t_{l,i-1}), and since the delegations happen asynchronously, the delegating node has to wait for all renewal messages (t_{r,i}).
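A minimal numeric sketch of Equation 4-7, with purely hypothetical timings (in hours): the lease time at each level is the lease time of the level below plus that level's renewal time.

```python
def lease_times(t_l1: float, renewal_times_above: list) -> list:
    """Equation 4-7: t_l(i) = t_l(i-1) + t_r(i), starting from the customer level."""
    leases = [t_l1]
    for t_r in renewal_times_above:
        leases.append(leases[-1] + t_r)
    return leases

# Hypothetical: customer lease of 24 h, renewal times of 1, 2, 4 and 8 h at the levels above.
print(lease_times(24, [1, 2, 4, 8]))   # [24, 25, 27, 31, 39]
```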

4.4.5.1.7.1 Address space utilization

It is easy to see that each delegating node must have at least one active address pool, which can be kept for a time t_r; after that it must send a renewal message to the delegating node one layer above. At the highest point of the hierarchy, the ISP decides whether to start renumbering in the access network, optimally according to the arriving requests, or whether efficient addressing can be managed using the original address distribution.

It is straightforward to see that if the ISP renumbers frequently compared to the lease times in the access network, the address utilization can drop towards zero. If we calculate with the smallest time after which renumbering cannot happen again (t_renum), the address utilization, which is the fraction of the summed active address pools in the access network compared to the total address space, can be formulated as follows:

U_{address} = \frac{t_{r,1}}{t_{l,7}} \cdot \frac{t_{renum}}{t_{r,1}} = \frac{t_{renum}}{t_{l,7}} = \frac{t_{renum}}{t_{l,1} + \sum_{i=2}^{7} t_{r,i}} = \frac{t_{renum}}{(t_{l,1} - t_{r,1}) + \sum_{i=1}^{7} t_{r,i}}

Equation 4-8

where t_{r,1}/t_{l,7} is the natural address utilization, since the customer actively uses its address space for t_{r,1} but the addresses cannot be reused again for t_{l,7}. The factor t_{renum}/t_{r,1} comes from the limited renumbering time.

The lowest utilization we can reach while using renumbering to obtain an optimal address distribution is ½: one half of the address space is always active and the other half is used for renumbering. In this case the smallest time between renumberings is:

t_{renum} = \frac{1}{2} \left( \sum_{i} t_{r,i} + (t_{l,1} - t_{r,1}) \right)

Equation 4-9

where (t_{l,1} - t_{r,1}) is the application tolerance, i.e. the time during which the customer can still use its address space after having received denial-type answers to a renewal message. By setting this value correctly, the access network can adapt to the users' requests.
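A short numeric sketch following the reconstructed Equation 4-9; the timings are purely illustrative assumptions (in hours) and only show how the renumbering interval is obtained from the per-level renewal times and the application tolerance.

```python
# Hypothetical per-level renewal times t_r (levels 1..7) and customer lease time t_l1, in hours.
t_r = [1, 1, 2, 4, 8, 12, 24]
t_l1 = 2

# Reconstructed Equation 4-9: smallest renumbering interval.
t_renum = 0.5 * (sum(t_r) + (t_l1 - t_r[0]))
print(t_renum)   # 26.5
```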

4.4.5.2 Subscriber Dynamic Prefix Delegation Mechanism

The prefix delegation (PD) option provides a mechanism for IPv6 prefix delegation using DHCPv6 ([26] RFC 3633). The main purpose of this mechanism is to allow the delegation of an IPv6 prefix from a delegating router to a requesting router, across an administrative boundary, where the delegating router does not know the network topology behind the requesting router. In the MUSE architecture this mechanism becomes particularly useful if the RGW is routed and the provider wants to address the CPN with a prefix between /64 (no subnetting in the CPN, when using stateless auto-configuration) and /48 (the largest block advisable for a single site). The prefix delegation process allows the hosts inside the CPN to use stateless auto-configuration and allows the CPN to be addressed with at least one /64 prefix. The mechanism is most suitable for the delegation of prefixes with a long lifespan, although renumbering is also easily done.

4.4.5.2.1 Description

As mentioned in Section 4.4.1, the access multiplexer is assumed to be an IPv6 router on each port (LT). Furthermore, for the sake of clarity, it is assumed throughout the remainder of this section that subscribers receive /48 prefixes. Note, however, that the same mechanism can be used to delegate prefixes of arbitrary length.

[Figure shows a PC and STB in the CPN behind the RGW, connected over the AM to the NAP and further to the NSP/ISP and ASPs; DHCPv6/ND and PD through DHCPv6 run between the RGW and the AM, and AAA signalling runs between the AM and the RADIUS server of the ISP.]

Figure 4-25: IPv6 prefix delegation architecture

The description presented below illustrates one possible solution of how this prefix delegation can work in the MUSE architecture. Other solutions are possible.

1. A PPP session is established between the RGW and the AM. The RGW authenticates itself with a username/password during PPP session establishment. Alternatively Ethernet could be used, with the known authentication problems.

2. The user name sent to the AM in the PPP session specifies the provider the RGW wants to connect to. The AM then contacts the AAA server of that provider (in our example the RADIUS server of the ISP). If authentication succeeds, the RADIUS server returns to the AM a /64 IPv6 prefix (and other configuration information). This prefix is then announced to the RGW through a normal router advertisement message. With this RA the RGW is able to configure the IPv6 address of its WAN interface.

3. From now on there is IP connectivity between the RGW and the AM. The RGW can now send a DHCPv6 SOLICIT message to discover DHCPv6 servers for prefix delegation (with the IA_PD option – RFC 3633 ([26])). The AM can act as a DHCPv6 server and answers with a DHCPv6 ADVERTISE message.

4. In the following step the RGW sends a DHCPv6 REQUEST message to the AM to ask for a /48 prefix. At this point the AM can ask the AAA server of the ISP for a /48 prefix to address the CPN (if RADIUS is used, this can be done with the IPv6 Prefix option – RFC 3162 ([23])), or it can address the CPN with a /48 prefix from its local prefix pool.

5. After obtaining a /48 prefix for the RGW, the AM sends the prefix to the RGW in a DHCPv6 REPLY message. This message can contain other useful information (DNS server list, etc.).

6. When the RGW gets the /48 prefix it can derive /64 prefixes from it and assign them to its LAN interfaces (see the sketch after this list).

7. On the LAN interfaces the RGW now starts to send router advertisement messages. Terminals on those links configure their IPv6 addresses through stateless address auto-configuration.

8. If the router advertisement messages are sent with the O bit set, the terminals in the CPN know that other configuration information (such as the DNS server address) can be retrieved from a DHCPv6 server. The RGW can act as a DHCPv6 server itself, or the information can be retrieved by the terminals from an external DHCPv6 server.
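A minimal sketch of step 6 (as referenced above): deriving /64 LAN prefixes from the delegated /48, using the Python ipaddress module and the documentation prefix 2001:db8:1234::/48 as a stand-in for the delegated prefix.

```python
import ipaddress
from itertools import islice

# Stand-in for the /48 delegated to the RGW via DHCPv6 PD.
delegated = ipaddress.ip_network("2001:db8:1234::/48")

# Take the first three /64s for the RGW's LAN interfaces (65536 are available in total).
for lan_prefix in islice(delegated.subnets(new_prefix=64), 3):
    print(lan_prefix)
# 2001:db8:1234::/64
# 2001:db8:1234:1::/64
# 2001:db8:1234:2::/64
```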

4.4.5.2.2 RGW requirements

• First of all the RGW must be a router.

• PPP must be enabled on the WAN interface.

• DHCPv6 SOLICIT messages must be sent on its WAN interface. This same interface must also be configured to accept router advertisements from the AM, to configure its own IPv6 address through stateless auto-configuration.

• When the RGW gets the /48 prefix, it needs 16 more bits to build the /64 prefixes for its LAN interfaces. These additional 16 bits can be configured by default in the RGW.

• Optionally, the RGW may act as a DHCPv6 server for providing additional configuration information to the terminals in the CPN.

4.4.5.2.3 AM requirements

• Must act as a PPP server to the RGW.

• Must act as an AAA client for authenticating the RGW in the provider’s AAA server.

• Must act as a DHCPv6 server with prefix delegation option enabled, so that the RGW can get a /48 prefix, and other configuration information.

• Router advertisement messages must be sent over the PPP connection, so that the RGW is able to get a /64 prefix to configure its WAN IPv6 address.

4.4.6 Integration of dynamic and static addressing

The prefix delegation mechanism described above can easily be integrated with one of the static addressing schemes described before. The following figure gives a general idea of a possible integration of the two addressing schemes.

[Figure shows dynamic addressing between the CPN/RGW and the AM (DHCPv6/ND and PD through DHCPv6, with a DHCPv6 PD server in the AM) and static addressing in the rest of the access network, with ISP 1 and ISP 2 prefix propagation towards the AM and AAA signalling towards the RADIUS servers of ISP 1 and ISP 2.]

Figure 4-26: Dynamic/static addressing integration

In this integrated scheme the two ISP prefixes are propagated until they reach the AM. The AM subnets those prefixes and in that way fills the DHCPv6 PD pools of prefixes. In our scenario two ISPs coexist, so two separate prefix pools are necessary, one for each ISP. When the RGW asks the AM for prefix delegation, the AM contacts the AAA server in the ISP network only to get permission to delegate a prefix to the RGW; the prefix itself is not provided directly by the AAA message that the AM receives from the ISP, but is instead taken from the previously filled prefix pool of the corresponding ISP. This integration of the two addressing schemes makes it possible to maintain the addressing hierarchy in the access network and to increase addressing efficiency.

Although in the figure above static addressing ends at the AM, it can end at a higher point in the access network. The main issue in this integration is therefore to decide at which point of the access network dynamic addressing begins and static addressing ends: for example, the more ISPs are active in the access network, the more address space is wasted when static addressing is deployed too deep into the access network (as depicted in Figure 4-16). On the other hand, if in a future scenario the NAP distributes the IP addresses, there will be only one prefix distributed in the access network and static addressing might be deployed all the way to the AM. For this reason further research on this subject is necessary.

4.4.7 Access network routing issue

Traffic destined for a foreign access network needs to be routed through an ISP edge router. Because IP routing tends to follow the shortest path, it is possible that a NAP edge router decides to forward data from one ISP's subscriber to a competitor's ISP edge router. Of course, ISPs will not route data that does not generate any revenue.

Three solutions were identified to cope with this issue:

1. Policy-based routing: access network nodes select the correct ISP edge router based on the source address.

2. IPv6 routing header: an IPv6 routing header ([11] RFC 2460) can be attached to each IP datagram destined to another NAP.

3. IPv6-in-IPv6 tunnelling: a NAP can set up IPv6 tunnels in the access network, forcing non-local traffic to the ISP edge.

4.5 Topics for further consideration

In this deliverable the MUSE IP network model for IPv4 and IPv6 has been elaborated. Some scenarios and use cases are already very well developed, but due to the large number of possible solutions the work on other topics will continue towards the next project milestones. The following list contains some of these topics for future work:

• A questionnaire was elaborated to gain additional insight into the different operator views on the IPoE access and aggregation network model. The questionnaire has already been sent to the operators within the MUSE project. The answers will allow prioritizing the individual solutions and thus concentrating the effort within the project.

• The scenario "NAP provides routed IP service for IP wholesale to independent NSPs" was already discussed by the team working on MA2.7. For the upstream data flow, solutions based on virtual routers, source-based routing or IP tunnels were considered, but these possible solutions need further elaboration to reach a consensus.

• Mechanisms for assigning an IP address to the user of a service were already elaborated for this milestone, but some questions remain open, e.g.: how to deal with overlapping IP addresses provided by different ISPs/ASPs, should this occur?

• During the work on the IPv6 chapter for this milestone, new ideas on the usage of IPv6 addresses for access networks came up. These ideas also need further consideration in the following milestones.

5 CONCLUSIONS

This deliverable addresses the different evolutions of the broadband access and aggregation network in order to meet expectations both from the users and from the operators and providers.

It presents the required functional mechanisms to achieve this, which are then applied in two concrete network models. These are to be considered reference models for fully-evolved networks, which can be reached via a phased approach, depending on the operator's priorities. The mechanisms and models described in this deliverable serve as reference and guidelines for the specific implementations in the other MUSE sub-projects.

5.1 Generic mechanisms

The access and aggregation network has been positioned in terms of terminology and basic reference architecture (describing the provider networks and their main nodes), and a first sketch of the definition of interfaces at the data plane and control plane has also been drawn. It aligns with existing material in the DSL-Forum.

An adequate model for the Residential Gateway (RGW) is required for the end-to-end story in terms of QoS, connectivity and auto-configuration. Therefore the RGW model as defined in TF3 was used. It is assumed that the RGW is either bridged or routed, or a hybrid of both. Routed modems in IPv4 are assumed to incorporate NAPT, whereas NAT has been ruled out for IPv6.

The general connectivity is of course the basis for the architecture. The different possibilities for connectivity wholesaling and retailing that a Network Access Provider (NAP) can offer to its customers have been reviewed, identifying the network architectural implications of the different possible business models and roles as described in DA1.1. Four business models are retained for residential users, plus one based on Layer 2 (Ethernet) wholesaling for business users. Technical considerations on connectivity have tackled the stakes of peer-peer connectivity: connecting at Layer 2 versus at Layer 3, and connecting locally (as close as possible to the users) versus forcing this traffic to an edge node. The conclusion is that while business users require Layer 2 peer-peer connectivity (e.g. for L2 VPN), there is no such requirement for residential users, who will then be connected at Layer 3. Multicasting also poses specific choices and requirements as a connectivity model. A high-level review of several underlying concepts and protocols has been articulated with relevant multicast applications.

The QoS architecture must allow implementing the QoS guarantees prescribed by SLSs, while also enabling flexible and scalable QoS adjustments following individual service demands (based on requests). The general principles and basic options for such an architecture have been reviewed. They have been worked out for application signalling with the policy pull method, following and extending the 3GPP IP Multimedia Subsystem (IMS) approach. Given the lack of a mature protocol for dynamically requesting QoS by means of network signalling, the approach of pre-provisioned QoS pipes will be the further focus of attention. The QoS part also presents a flexible scheme for combining traffic classes.

Authentication of the end-user must take place at multiple levels (by the NAP and/or the NSP) and in a multi-provider and multi-service environment. After authentication, auto-configuration of the CPE must be performed at different levels (L1, L2, L3+) by the network and service providers. There must be an interaction between the different Authentication, Authorization & Accounting (AAA) platforms and the auto-configuration servers. Solutions for a AAA architecture have been developed, aiming at feature parity between DHCP and PPP protocols, combining mechanisms such as 802.1x authentication, DHCP options to be added at the CPE and DHCP relay, and a RADIUS client in DHCP servers. In particular a solution based on a single-step approach has been identified. Finally, the link was made between the AAA architecture and IMS architecture, where possible IMS model adaptations have been listed.

5.2 Ethernet-based network model

After the first year, there is a stable and mature definition of this model, which is based on layer 2 connectivity from the CPE to the Edge Node (EN). Connectivity throughout the access and aggregation network is based on Ethernet principles.

Depending on the use of Virtual Local Area Network tags (VLAN tags), two possible connectivity modes have been developed. In a first option, called "Intelligent Bridging", the connectivity in the AN is based on the MAC (Medium Access Control) addresses, as in an ordinary Ethernet switch, with additional intelligence for security, traffic management and accounting. The VLANs in the aggregation network are used to further separate the aggregated traffic from the different ANs. In the second option, the connectivity at the AN is no longer based purely on MAC addresses but on VLAN-IDs, namely by associating one (or more) individual 802.1Q VLAN-IDs to every end-user (i.e. to every line aggregated in the AN). This is called "Cross-connecting", using VLAN stacking in the aggregation network to overcome the scalability problem of a single 802.1Q VLAN. Both options have their pros and cons, and the intelligent bridging mode has been selected for residential users for its lower complexity and compatibility with existing edge nodes. Business users are a special case, requiring another sort of cross-connecting, this time based on S-VLANs. Residential and business users can be combined in the same network (and on the same platform if required).

The data plane requirements for allowing basic connectivity have been analysed in detail for both modes. A comprehensive set of issues has been solved, such as user separation (by MAC forced forwarding or ARP filtering) for, among others, peer-peer via the edge node, the setting and updating of connectivity parameters in the network elements, the impact of the IP subnetting scheme of the end-users in the NAP on the requirements for non-PPP based peer-peer communication, the recommended implementation of multicast tree build-up and replication based on IGMP, the possible and recommended use of VLANs and optionally MPLS for residential and business users, and recommended security mechanisms for IPoPPPoE and IPoE. The one-step configuration and AAA process is further elaborated for the specific case of the Ethernet network model. Also, some concepts of the IMS-based architecture are expressed in terms of the Ethernet network model.

5.3 IP-based network model

Handling the traffic at layer 3 (IP forwarding or routing) closer to the end-user has several advantages. It makes it possible to separate the end-users and the aggregation network at layer 2, which benefits scalability and security. It also allows implementing full IP QoS in the access and aggregation network. With IP awareness closer to the users, it also becomes possible to distribute service enablers in the access nodes, and to perform local peer-peer connectivity without looping the traffic back to an edge node.

The foundations for the IP-based model have been laid by analysing the different options for terminating the traffic at layer 2 at an aggregation point in the network. There the traffic flows can then be processed at IP level for connectivity (including peer-peer), QoS, security and multicast criteria. This node can be based on IPv4, on IPv6, or on a combination of both.

IPoE traffic is forwarded according to the service policies in the aggregation node, but the traffic handling can be different for IPoPPPoE. Therefore the different use cases for IPoPPPoE and IPoE traffic have been defined and analysed. The use cases respectively depend on the level of the PPP(oE) and IP address processing. A single AN can then freely combine an IPoE use case with an IPoPPPoE use case.

For IPv4, several possibilities have been identified for avoiding IP address waste (by allowing the same default gateway to be assigned to all users of an AN). This allows a free choice of the location of the IP-aware aggregation point. In the case of application wholesale the AN will not require dynamic routing exchanges with the ENs. However, in the case of IP wholesale to multiple third-party NSPs, the routing requirements become more complex; this will be investigated in the second year.

The introduction of IPv6 has been tackled by starting with the most basic aspect, namely addressing. It is proposed to include a NAP topology field in the IPv6 addressing structure. Two types of static addressing schemes have been presented, one based on introducing a strict NAP topological hierarchy in the prefix, another based on NAP-proprietary addressing of its different nodes. Dynamic addressing schemes are also analysed, using dynamic prefix delegation following certain policies and dynamicity. Finally, a possible method has been shown to integrate the dynamic prefix delegation mechanism with static addressing schemes.