Next-Generation Network Services
Robert Wood
Cisco Press, 800 East 96th Street, Indianapolis, IN 46240 USA
Next-Generation Network Services
Robert Wood

Copyright © 2006 Cisco Systems, Inc.

Published by:
Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.

Printed in the United States of America 1 2 3 4 5 6 7 8 9 0

First Printing November 2005

Library of Congress Cataloging-in-Publication Number: 2003107970

ISBN: 1587051591
Warning and Disclaimer

This book is designed to provide information about Cisco next-generation network services. Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied. The information is provided on an “as is” basis. The author, Cisco Press, and Cisco Systems, Inc., shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the disks or programs that might accompany it. The opinions expressed in this book belong to the author and are not necessarily those of Cisco Systems, Inc.
Trademark Acknowledgments

All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
Corporate and Government Sales

Cisco Press offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales. For more information please contact:

U.S. Corporate and Government Sales
1-800-382-3419
[email protected]

For sales outside the U.S. please contact:

International Sales
[email protected]
Feedback Information

At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community. Readers’ feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us by e-mail at [email protected]. Please make sure to include the book’s title and ISBN in your message. We greatly appreciate your assistance.
Publisher: John Wait
Editor-in-Chief: John Kane
Executive Editor: Brett Bartow
Cisco Representative: Anthony Wolfenden
Cisco Press Program Manager: Nannette Noble
Production Manager: Patrick Kanouse
Development Editor: Dayna Isley
Project Editor: Marc Fowler
Copy Editor: Emily Rader
Technical Editors: Russ Esmacher, Mike Lee, Andy Schutz
Editorial Assistant: Raina Han
Team Coordinator: Tammi Barnett
Book/Cover Designer: Louisa Adair
Composition: Octal Publishing, Inc.
Indexer: Julie Bess
About the Author

Robert Wood has 25 years of experience with the design, engineering, marketing, and technology leverage of enterprise and service provider internetworks. As a senior network architect for TEKsystems, Robert provides design, engineering, and consulting to BellSouth, Qwest, and EDS on a managed service provider Multiprotocol Label Switching (MPLS) network for the State of Tennessee and Tennessee educational institutions. Formerly, Robert was a vice president/network manager and lead internetwork architect of a large regional bank, and a network systems engineer and communications specialist with IBM marketing for large enterprise Systems Network Architecture (SNA) and IP networks. Robert holds the Cisco certifications of CCNP, CCDP, CCNA, and CCDA.
About the Technical Reviewers

Russ Esmacher is a senior product manager with Cisco Systems, responsible for dense wavelength division multiplexing (DWDM), synchronous optical network (SONET)/synchronous digital hierarchy (SDH), and optical cross-connect platforms (specific focus is the ONS 15454 multiservice transport platform [MSTP] DWDM and ONS 15600 Multiservice Switching Platform [MSSP]). He has more than ten years of telecommunications experience in DWDM transmission, SONET/SDH, and optical fiber and component design. In prior experience with Corning, Russ was an optical fiber engineer supporting the development and manufacture of many G.652 and G.655 optical fibers. He holds bachelor of science and master of science degrees from Clemson University’s Center for Optical Materials Science and Engineering Technology (COMSET).

Mike Lee, CCIE No. 7148, is currently working in the Service Provider Systems Engineering group at Cisco Systems. His focus areas are L2VPN technologies such as Any Transport over MPLS (AToM) and Layer 2 Tunneling Protocol version 3 (L2TPv3), quality of service (QoS), and MPLS. Mike currently holds numerous industry certifications and is a triple CCIE in Routing and Switching, Service Provider, and Security. Mike has been in the networking industry for more than ten years, starting with his time in the U.S. Army, and has been working for Cisco since 2000.

Andy Schutz has been with Cisco for almost five years, acting as a Technical Marketing Engineer (TME) in a number of different groups. Andy was one of the original TMEs on the Cisco 10000 ESR platform after beginning as a TME for the Cisco IP digital subscriber line access multiplexer (DSLAM). Andy has also served as the lead TME for broadband aggregation and related technologies for Cisco. Andy obtained his CCIE in the service provider track with a digital subscriber line (DSL) focus shortly after coming to Cisco. Prior to working for Cisco, Andy worked at a CLEC, providing DSL service, and earlier at Sprint. Andy enjoys spending time with his wife, Teresa, and his two daughters, Lauren and Elizabeth.
Dedications

To my wife, Leesa, who is the love of my dreams; thanks for your unfailing devotion, encouragement, and the sacrifice of time and togetherness during this effort. We’ve endured and met another one of life’s challenges together.

To our children, Barrett, Beau, and Jenna; thanks for your understanding. We’re proud of your achievements.

To my parents, Glenn and Nancy Wood; you gave me love, direction, motivation, and a pursuit for excellence. Thanks to both of you and to the rest of my family for your prayers.

And above all, to Jesus Christ, my Lord and Savior; You prepared me, answered me, and brought me to this opportunity—most of all You saw me through it! I can do all things through Him who strengthens me. Philippians 4:13
Acknowledgments

I would like to acknowledge several people for their contribution to the delivery of this work.

Many thanks to the Cisco Press team: Executive Editor Brett Bartow, who coupled his vision for the book with mine. Brett’s patience, guidance, and directional beacons on this project helped me to understand why he is so successful in publishing. Thanks to Development Editor Dayna Isley for applying the professional polish to get from author draft to final copy. Thanks as well to the rest of the team for their specialized talents in getting this work into print.

Thanks to Wendell Odom for your friendship and faith. You are professional grade.

Thanks to the technical reviewers, Russ Esmacher, Mike Lee, and Andy Schutz of Cisco Systems, for your technical feedback and contextual suggestions. As the book’s first comprehensive readers, you have made it materially stronger through your collective recommendations.

Thanks also to Cisco Systems for your product, industry, and internetworking leadership: substance worth writing about.
This Book Is Safari Enabled

The Safari® Enabled icon on the cover of your favorite technology book means the book is available through Safari Bookshelf. When you buy this book, you get free access to the online edition for 45 days. Safari Bookshelf is an electronic reference library that lets you easily search thousands of technical books, find code samples, download chapters, and access technical information whenever and wherever you need it.

To gain 45-day Safari Enabled access to this book:

• Go to http://www.ciscopress.com/safarienabled
• Enter the ISBN of this book (shown on the back cover, above the bar code)
• Log in or Sign up (site membership is required to register your book)
• Enter the coupon code 1TF4-EH4E-1X7S-KUE1-9M3X

If you have difficulty registering on Safari Bookshelf or accessing the online edition, please e-mail [email protected].
Contents at a Glance

Introduction xix
Chapter 1 Communicating in the New Era 3
Chapter 2 IP Networks 35
Chapter 3 Multiservice Networks 103
Chapter 4 Virtual Private Networks 161
Chapter 5 Optical Networking Technologies 227
Chapter 6 Metropolitan Optical Networks 307
Chapter 7 Long-Haul Optical Networks 397
Chapter 8 Wireline Networks 457
Chapter 9 Wireless Networks 523
Index 577
Contents

Introduction xix

Chapter 1 Communicating in the New Era 3
    New Era of Networking 5
    The Fences Are Down 8
    Technological Winners 11
        IP Everywhere 12
        Optical Anywhere 14
        Wireless Through the Air 17
    Building Blocks for Next-Generation Networks 20
        IP Networks 21
        Multiservice Networks 21
        VPNs 22
        Optical Networks 23
        Wireline Networks 23
        Wireless Networks 24
    Using Next-Generation Network Services 25
        Network Infrastructure Convergence 26
        Services Convergence 28
        From Technology Push to Service Pull 29
    Chapter Summary 30
    End Notes 31
    Resources Used in This Chapter 32

Chapter 2 IP Networks 35
    IP Past, Present, and Future 36
        IP Influence and Confluence 36
        IP Version 4 38
        IP Version 6 40
    IP Network Convergence 44
    Local IP Networks: LANs 44
        LAN Technologies 46
        Ethernet—From Zero to 10 Gigabits in 30 Years 48
        IP Routing 50
        LAN Switching 55
    Long IP Networks: WANs 62
        WAN Bandwidth 63
        Wide Area Changes 64
        Wide Area Technologies and Topologies 65
    Mobile IP Networks 69
        Wireless IP LANs 73
        Mobility Networks 82
    Global IP Networks 87
        Global Capacity 88
        Globally Resilient IP 89
        The Internet—A Network of Networks 90
    Beyond IP 92
    Technology Brief—IP Networks 93
        Technology Viewpoint 93
        Technology at a Glance 95
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 96
    End Notes 100
    References Used in This Chapter 101

Chapter 3 Multiservice Networks 103
    The Origins of Multiservice ATM 104
    Next-Generation Multiservice Networks 107
        Next-Generation Multiservice ATM Switching 108
        Cisco Next-Generation Multiservice Switches 110
    Multiprotocol Label Switching Networks 114
        Frame-Based MPLS 115
        Cell-Based MPLS 118
        MPLS Services 121
        MPLS Benefits for Service Providers 123
        MPLS Example Benefits for Large Enterprises 124
    Cisco Next-Generation Multiservice Routers 125
        Cisco CRS-1 Carrier Routing System 126
        Cisco IOS XR Software 132
        Cisco XR 12000/12000 Series Routers 133
    Multiservice Core and Edge Switching 138
        Multiservice Provisioning Platform (MSPP) 140
        Cisco ONS 15454 E Series Ethernet Data Card 142
        Multiservice Switching Platforms (MSSP) 144
    Technology Brief—Multiservice Networks 148
        Technology Viewpoint 148
        Technology at a Glance 150
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 157
    End Notes 158
    References Used in This Chapter 158

Chapter 4 Virtual Private Networks 161
    Frame Relay/ATM VPNs: Where We’ve Been 161
    IP VPNs: Where We’re Going 163
    IP Security (IPSec) 165
        IPSec Protocols for Data Integrity 166
        IPSec Data-Forwarding Modes 168
        Summarizing IPSec Technologies 170
    Access VPNs 171
        IPSec VPNs for Remote Access 172
        Secure Socket Layer (SSL) VPN for Remote Access 177
        Wireless Remote-Access VPNs 179
        MPLS VPNs for Remote Access 182
    Intranet VPNs 186
        IPSec Site-to-Site VPNs 186
        Additional Intranet IPSec VPN Designs 188
        MPLS Layer 3 VPNs 190
        MPLS Layer 2 VPNs 194
        Layer 2 Tunneling Protocol version 3 (L2TPv3) VPNs 202
        Multicast VPNs (MVPNs) 205
    Extranet VPNs 211
    Multiservice VPNs over IPSec 213
    VPNs: Build or Buy? 216
        Enterprise-Managed VPNs 216
        Provider-Managed VPNs 217
    Technology Brief—Virtual Private Networks 218
        Technology Viewpoint 218
        Technology at a Glance 221
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 223
    End Notes 224
    References Used in This Chapter 224
    Recommended Reading 225

Chapter 5 Optical Networking Technologies 227
    Light—Where Color Is King 227
    Understanding Optical Components 228
        Light and Lambdas 229
        Electromagnetic Spectrum 230
        Light Emitters 232
        Optical Fiber 233
        Light Receivers 238
    Understanding Optical Light Propagation 239
    Optical Networks—Over the Rainbow 241
        WDM 242
        DWDM 244
        CWDM 254
    Understanding SONET/SDH 257
        SONET/SDH Origins and Benefits 258
        SONET and SDH Hierarchy 259
        Packet over SONET/SDH 263
        SONET/SDH Challenges with Data 266
    Understanding RPR and DPT 266
        RPR/802.17 Architecture 268
        DPT Using SRP Architecture 271
        RPR and DPT Benefits 274
    Optical Ethernet 274
        Gigabit Ethernet and 10GE over Optical Networks 275
        Ethernet over Next-Generation SONET/SDH 280
        Ethernet over RPR/DPT 284
        Ethernet Directly over Optical Fiber 285
    Optical Transport Network (ITU-T G.709 OTN) 292
    IP over Optical 294
        Unified Control Plane 295
    Technology Brief—Optical Networks 297
        Technology Viewpoint 297
        Technology at a Glance 300
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 302
    End Notes 303
    References 304

Chapter 6 Metropolitan Optical Networks 307
    Business Drivers for Metropolitan Optical Networks 308
    Functional Infrastructure 309
        Metro Access 311
        Metro Edge 317
        Metro Core 321
        Service POP 327
        Metro Regional 330
    Metro SONET/SDH 331
        Virtual Concatenation (VCAT) 332
        Generic Framing Procedure (GFP) 333
        Link Capacity Adjustment Scheme (LCAS) 333
        Moving Packets over Metro SONET/SDH 335
    Metro IP 340
        Resilient Packet Ring (RPR): Packet Power for the Metro 341
        Dynamic Packet Transport (DPT): The Cisco RPR Solution 346
        IP/MPLS in the Metro 348
    Metro DWDM 349
        Drivers for Metro DWDM 349
        Metro DWDM Technology 351
        Metro DWDM Design Considerations 356
        Metro CWDM 359
        Metro DWDM-Enabled Services 360
    Metro Ethernet 361
        Ethernet—from LAN to MAN 362
        Metro Ethernet Services 363
        Comparing Metro Ethernet Services 367
        Taking Metro Ethernet to the Market 367
        Service Orienting Metro Ethernet 371
    Metro MSPP, MSSP, and MSTP 372
        MSPP 372
        MSSP 372
        Multiservice Transport Platform (MSTP) 373
    Metro Storage Networking 377
        Fibre Channel 377
        Enterprise Systems Connection (ESCON) 380
        Fiber Connection (FICON) 381
    Technology Brief—Metropolitan Optical Networks 383
        Technology Viewpoint 383
        Technology at a Glance 386
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 392
    End Notes 393
    References Used in This Chapter 393

Chapter 7 Long-Haul Optical Networks 397
    Understanding Long-Haul Optical Networks 397
        Networks of Nodes 400
        Cisco Long-Haul Technologies 402
        Long-Haul DWDM 410
    Extended Long-Haul Optical Networks 429
        Advanced Fibers 430
        Use of the L Band 430
        Raman Amplification 430
        Forward Error Correction (FEC) 431
        Modulation Formats 431
    Ultra Long-Haul Optical Networks 432
        Highly Accurate Lasers 433
        Dispersion Management 433
        Amplification 434
        OXC Architectures 434
        Data Modulation 435
    Submarine Long-Haul Optical Networks 435
        Submarine Network Fiber Types 436
        Submarine Fiber Amplifiers 437
    Optical Cross-Connects (OXCs) 438
        Optical to Electrical to Optical (OEO) 439
        Optical to Optical to Optical (OOO) 440
        Hybrid OOO and OEO Technologies 444
    Technology Brief—Long-Haul Optical Networks 444
        Technology Viewpoint 445
        Technology at a Glance 447
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 452
    End Notes 454
    References Used in This Chapter 454

Chapter 8 Wireline Networks 457
    Narrowband—Squeezing Voice and Data 458
        Residential Loop for Analog Transmission 459
        Going Digital with PCM and TDM 460
        Narrowband Aggregation for DS1 and E1 461
        ISDN 464
        Frame Relay 467
        Narrowband Aggregation Layer and Digital Loop Carriers 472
    Broadband—Pushing Technology to the Edge 474
        DSL 475
        DSLAM Broadband Aggregation Layer 490
        Cable 493
        Ethernet to the Masses 502
    Technology Brief—Wireline Networks 509
        Technology Viewpoint 510
        Technology at a Glance 513
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 515
    End Notes 519
    References Used in This Chapter 520
    Recommended Reading 520

Chapter 9 Wireless Networks 523
    Cellular Mobility Basics 523
        Analog Cellular Access Technology 524
        Digital Cellular Access Technologies 529
        Cellular Standards 534
        Generation Upon Generation 541
        Mobile Data Overlay 544
        Mobile Radio Frequency Spectrum 549
        Navigating the Mobile Spectrum 551
    Wireless LANs 552
        802.11 Physical Layer (PHY) Techniques 553
        Orthogonal Frequency Division Multiplexing (OFDM) 555
        802.11—11 Mbps and Beyond 555
        802.16 559
    Wireless Personal Area Networks 560
    Wireless Optics 562
    Fixed Wireless 563
    Satellite Wireless 566
    Technology Brief—Wireless Networks 566
        Technology Viewpoint 567
        Technology at a Glance 568
        Business Drivers, Success Factors, Technology Application, and Service Value at a Glance 570
    End Notes 573
    References Used in This Chapter 573

Index 577
Icons Used in This Book

The following icons represent the network devices and connections shown in the figures throughout this book: Communication Server, PC, PC with Software, Terminal, File Server, Sun Workstation, Macintosh, Access Server, Cisco Works Workstation, ATM Switch, ISDN/Frame Relay Switch, Token Ring, Printer, Laptop, Web Server, IBM Mainframe, Front End Processor, Cluster Controller, Catalyst Switch, Multilayer Switch, Gateway, Router, Bridge, Hub, DSU/CSU, FDDI, VPN Concentrator, ADM, Router with Firewall, Firewall, Network Cloud, Access Point, Optical Services Router, Optical Cross-connect, Optical Transport, Line: Ethernet, Line: Serial, Line: Switched Serial, and Frame Relay Virtual Circuit.
Command Syntax Conventions

The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:

• Boldface indicates commands and keywords that are entered literally as shown.
• Italics indicate arguments for which you supply actual values.
• Vertical bars (|) separate alternative, mutually exclusive elements.
• Square brackets ([ ]) indicate optional elements.
• Braces ({ }) indicate a required choice.
• Braces within brackets ([{ }]) indicate a required choice within an optional element.
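
For example, under these conventions the syntax of a standard IP access list entry might be documented along the following lines (shown here purely to illustrate the notation, not as a complete command reference entry):

    access-list access-list-number {permit | deny} source [source-wildcard]

In this illustration, access-list, permit, and deny are keywords entered exactly as shown; access-list-number, source, and source-wildcard are arguments for which you supply values; the braces indicate that either permit or deny must be chosen; and the square brackets indicate that the source wildcard mask is optional.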
Introduction

The 21st century marketscape of the telecommunications industry is undergoing substantial change, more so for service providers than for any other users of communication technology and services. A global technology infusion of IP, optical, and wireless mobility has presented new opportunities for the service delivery of data, voice, and video for communications and computing in real time—anytime, and anywhere. These and other technology innovations are creating a dynamic shift in the number of service substitutions now available across all provider segments. The future of telecommunications has already been inexorably changed, as next-generation network services are capable of reaching markets and customers worldwide. The world has indeed become internetworking-centric and communications service dependent.

Next-generation network services are the collective conduit through which to meet the needs of a technology-enabled culture. More specifically, next-generation network services are the inventive optimization of technology and service platforms to meet a new era of IP-centric networking requirements and customer opportunity. Service is the emphasis, as IP has become a prolific communications portal through which to deliver interactive solutions that improve business execution, tie the individual consumer into commerce, and extend market reach by removing the last barriers of time and distance.

Also evident is a fundamental change in network and systems architectures from vertical silos based on low-layer proprietary systems to horizontal architectures based on higher-layer, open standards such as IP. This is the essential distinction: next-generation network services transcend the physical layer at Layer 1 and move upscale into Layers 2, 3, and beyond. Services are decoupled from transport as a result of IP-based any-to-any networking. Higher-margin services are now easily layered on any type of transport.

With IP everywhere, many service providers are wrestling with how to get to a converged IP infrastructure and also how to migrate traditional systems in order to extend their advantages in the new architecture. The challenge becomes how to combine standard building blocks in new ways to create a network and service that can be differentiated from the competition. Time to market and speed of innovation become important factors when every competitor has access to the same raw materials.

Deriving service value from provider technology is now a critical skill. Technology-based providers should intensely “service-orient” their offerings, while positioning their solutions appropriately in advance of the customer’s value distinction—becoming experts at the customer’s business. To put it another way, a properly executed transition from technology push to service pull is in order. No longer just a communications prop, provider technology has moved center stage. A convergence of networks, services, and providers is occurring.

This book increases your knowledge of the expanse of new provider technology and services. The understanding of service-centric, next-generation network technology is paramount, because internetworking innovation is an enabler of convergence, and convergence is a launch pad for services—communications and computing services that increase customer value by magnitudes and enhance that value year after year. Therefore, next-generation network services are more than connectivity, communication, and collaboration. They are about technology-leveraged, service-centric platforms combined with a service-valued mindset for the purpose of engaging customers on an immersive, interactive level—not only solving their challenges but also anticipating their future dreams regarding business and personal communications.
Purpose of This Book

Today’s service provider segmentation is so wide, and communication technology options so deep, that until now it has been difficult to achieve a contextual equilibrium, at least one that you can hold in one hand. This book consolidates a diverse amount of provider background, networking technology, next-generation services, and even marketing considerations, as applicable to the service provider, large enterprise, or anyone else with an interest in digital communications.

As such, the approach has been an expansive cast of provider technology and service coverage at an introductory to intermediate depth, arming you with the technology talking points and business advantages necessary to understand, research, strategize, evaluate, propose, justify, sell, and consult regarding next-generation network services. With this information, you should be able to

• Understand the dynamics of the new-era service provider market
• Apply service-differentiating techniques to strategic business planning
• Recommend the advantages of service provider solutions
• Select and justify technology and products that leverage your value proposition
• Prepare for marketing opportunities and customer presentations

So, this book is more of a who, what, where, when, and why—the business, functional, technical, and educational backdrop that prefaces any implementation. The question posed by many, of where a particular technology fits, is addressed, and the book is also useful for expanding your knowledge of the overall service provider field of play. This book provides a window to a new era of communication, a door to expanding service value through technology leverage, and a walk through some of the service-oriented technology options now available from Cisco Systems. The intent is to inform, educate, and, most of all, stimulate ideas for new opportunities tomorrow.
Who Should Read This Book?

Both the service provider and information technology fields require a high degree of technical marketing in order to move communication and service innovations into revenue-producing markets. Technical marketing is a “trusted relationship” style of sales model. The sales representative establishes the “relationship,” and the technical marketing professional/engineer supplies the “trust.” This book can help you with both.

Service providers are heavily leveraged technology organizations with the highest concentration of networking-oriented individuals in any sector. As such, this book can benefit a broad audience such as networking visionaries, architects, consultants, product developers, product marketers, project managers, network engineers, presales/systems engineers, sales representatives, and executive management in service provider companies, many of whom are technical marketers, individual contributors, decision makers, and advocates for purchasing or leveraging strategic networking technology.

Large enterprises and aspiring businesses are codependent on network technology for creating innovations and enhancing customer service. This book helps enterprise networking strategists understand network technology options and service provider capabilities.
Although this isn’t a “how to configure it” guide, engineers should read this book, because it exposes them to a comprehensive view of the technology market in which they work. Venture capitalists and technology analysts can also benefit from this book as a broad technical overview of the telecommunications sector.
How This Book Is Organized

The opening chapter, “Communicating in the New Era,” explores the service provider opportunity in the new era, discussing what has changed and why a service-centric focus is the new ascendancy. This is essential information for any provider, enterprise, or network professional. The remaining chapters introduce an extensive portfolio of network technology, with each chapter also discussing the market advantages and service value of the technology, as well as some of the applicable Cisco products and solutions. The text is oriented to an intermediate level of experience but employs both beginner and advanced perspectives as necessary.

Chapters 2 through 9 focus on various provider technology topics. The outline of each chapter is purposeful to put provider technology in the appropriate context, and it’s evident that some technologies are cross-functional. IP networks, multiservice networks, Virtual Private Networks (VPNs), optical networking technologies, metropolitan optical networks, long-haul optical networks, wireline networks, and wireless networks are prime topics, each developing into a comprehensive review of relative subtopics. Chapters 2 through 9 each conclude with a technology brief that you can use as a quick reference for key facts and business motivations related to the particular topic at hand.

This book is flexible as a selective chapter read; a periodic, topical reference; or a cover-to-cover, comprehensive study. If you decide to read all of the chapters, reading them in sequence is recommended. Chapters 1 through 9 cover the following topics:

• Chapter 1, “Communicating in the New Era”—This chapter introduces a new era of networking influenced by the pervasiveness of IP; the land run of competition; the technical prowess of IP, optical, and wireless mobility; the impact of behavioral change; and the persuasion of the Internet economy.

• Chapter 2, “IP Networks”—This chapter covers the rise of IP networks into local (LANs), long (WANs), mobile (wireless IP), and global networking applications. The essential message of IP networks is that IP is today’s dynamo of network convergence and service creation, extending productivity benefits, service variety, and innovation into the start of the 21st century.

• Chapter 3, “Multiservice Networks”—This chapter introduces multiservice network architecture as a next-generation network infrastructure that is essential to delivering service variety and capitalizing on IP services. Purpose-built networks evolve to service prolific networks in the process. This chapter covers next-generation Asynchronous Transfer Mode (ATM), IP/MPLS, Multiservice Provisioning Platform (MSPP), and MSSP platforms.

• Chapter 4, “Virtual Private Networks”—VPNs provide a strategic market position through which to harvest new revenues. This chapter starts with where we’ve been and where we’re going with the service pull of IP VPNs. VPN technology is explored through the major topics of access, intranet, and extranet VPNs. Covered here are IPSec, Secure Socket Layer (SSL), wireless, site-to-site, multicast, multiservice, and Layer 2 and Layer 3 MPLS VPNs, as well as Virtual Private LAN Service (VPLS).

• Chapter 5, “Optical Networking Technologies”—Optical fiber is the physical layer medium of choice. As a result, optical networking is the ascendant Layer 1 technology on which to build the new era of networks. This chapter is the first of three core optical networking chapters, serving as an excellent introduction to optical technology components and optical features including SONET/SDH, Resilient Packet Ring (RPR), dense wavelength division multiplexing (DWDM), coarse wavelength division multiplexing (CWDM), optical Ethernet, and IP over optical.

• Chapter 6, “Metropolitan Optical Networks”—Metropolitan optical networks are reaching farther to accommodate the broadband communication needs of the sprawling urbanization of people and business. This has impact on the functional infrastructure of the metropolitan area network, and one model of a next-generation metro infrastructure is introduced. In addition, metro-specific network technology is covered here, including metro SONET/SDH, IP, DWDM, Ethernet, reconfigurable optical add/drop multiplexers (ROADMs), CWDM, metro MSPP/MSSP/MSTP platforms, and metro storage networking.

• Chapter 7, “Long-Haul Optical Networks”—Long-haul optical networks are at the core of global information exchange. This chapter covers long-haul optical network topics such as the Cisco ONS 15454 MSTP, long-reach DWDM considerations, extended long-haul and ultra long-haul optical networks, submarine optical networks, and optical cross-connects.

• Chapter 8, “Wireline Networks”—This chapter covers the fundamentals of wireline networks and is an examination of the latest access layer technologies and services. As such, this subject is related to metropolitan networks, where wireline networks are deployed. Topics here include narrowband, ISDN, Frame Relay, Digital Loop Carrier, broadband xDSL and cable, and Ethernet in residential applications.

• Chapter 9, “Wireless Networks”—Wireless networks now cover the spectrum from cellular phones, to wireless Ethernet PCs and handhelds, to fixed wireless and satellite wireless services. This chapter starts with a review of mobility basics and the digital access technologies of time division multiple access (TDMA), code division multiple access (CDMA), and Orthogonal Frequency Division Multiplexing (OFDM). Cellular standards are reviewed, along with the data overlay technologies of HSCSD, General Packet Radio Service (GPRS), Enhanced Data rates for GSM Evolution (EDGE), CDMA2000 1x and 1xEV-DO, and wideband CDMA (WCDMA). Wireless LAN (Wi-Fi) technology is covered extensively, along with fixed and satellite wireless.
This chapter covers the following topics:

• The New Era of Networking
• The Fences Are Down
• Technological Winners
• Building Blocks for Next-Generation Networks
• Next-Generation Network Services
CHAPTER 1

Communicating in the New Era

The communications industry is experiencing historic change. A superconfluence of Internet technologies, deregulation, fresh competition, and consumer behavioral change is blurring traditional service boundaries between communication providers and consumers. The bandwidth limitations in the last mile are disappearing at an incredible rate due to discoveries in optical switching, advancements in wireless, and extensions in wireline technologies. Rapid progress in the miniaturization of semiconductor memories, embedded systems-on-chips, minivolt power plants, and dynamic logic is intercepting a bandwidth wormhole, fulfilling the promise of a personal device through which you can individually hold the World Wide Web in one hand. With so many ideas converging on opportunity, we are truly communicating in a new era.

In this new era, service providers are wrestling for mind and market share as they restructure their networks to attract an increasingly diverse set of clients. Once-protected markets are now riddled with competition, and yesterday’s leisurely pace has become a race for both survival and profit. Value propositions are in flux as service differentiation moves to the storefront and as the storefront globalizes its geographic reach.

New technologies and new players have added to the choices you have for business and personal communication. These technologies allow service providers to build and operate networks, providing local, long, global, and mobile voice, data, video, and Internet services to businesses and consumers. The surging demand for enhanced data and voice services has the attention of these providers, who are morphing their business plans to satisfy the appetite of bandwidth consumers in exchange for dollars. In an open playing field, service providers must learn to differentiate both their technology and customer service in order to migrate up the revenue stream, and in many cases, even to survive.
Several shining stars have emerged as technological must-haves:
• Internet Protocol (IP)—The fundamental accessibility of IP during the infancy of the Internet and its rapid emergence as the enterprise’s network protocol of choice positioned IP on both sides of service provider networks. An early game of “IP keepaway” rapidly ported into a free-for-all, daily course of dodge ball with both old and new providers juggling and throwing more and ever faster IP services at the market. With physical network transport commoditizing, IP-based service offerings become the absolute minimum entry fee to play in the new communications era.

• Optical communications—Optical technologies, often relegated to unseen backbone passageways, have made a quantum leap in capacity, reach, capability, and economics. With an era of data-pervasive traffic eclipsing that of voice traffic, bigger and faster communications pipes now spring from the only medium capacious enough to support the oncoming data avalanche. With an optical network at the epicenter of the Internet, there are endless possibilities for yet-to-be-developed applications and services. Optical fiber already has or will soon surround a neighborhood near you. The optical superhighway is seeing clear to your street tomorrow, and eventually, to your very doorstep.

• Wireless mobility—Where automobiles brought each of us individual mobility, wireless communications is personalizing our reachability. A feverish desire to be completely untethered from anything that impacts productivity, flexibility, and even recreation has stoked the fires of wireless possibilities. The appeal of the wireless World Wide Web is becoming paramount to the technology-enabled user. Knowledge is necessary to discern and pursue opportunities. With wireless, why go to the library? The worldwide library, that is, the Internet, comes to you, any time and any place.
While communications technologies in and of themselves will inherently find acceptance by improving some important aspect of the way we learn, live, work, and play, much of their success is based on standardization, pervasiveness, and affordability: ingredients that cut a short path to the commodity market. Properly differentiating technology-based offerings becomes essential. Today, it is no longer valid to compare one’s apple with another’s orange. It’s all fruit, and shelf life is decreasing. With everything in similar food groups, time to market and nutritional value become defining properties. While the pace of technological advancement is often hallowed, it is service value that desperately needs a 10x improvement. In the coming years, it is rather the pursuit and packaging of definitive services that promises the greatest of communications opportunities.

Historically, service providers have been, at their core, technology companies. Business drivers begat new technology, which became new product, to which were stapled market penetration forecasts, then delivered via sales quotas. The new offering is wrapped with the traditional customer relationship and customer service stories, then broadcast into the market with term pricing and early adopter incentives. Enormous resources and time are expended in closing, booking, and build-out of the offering. This process defines
technology push, and for decades has metered out gradual improvements in innovation and value because, frankly, it was the only show in town.

Presently, service providers are still primarily technology- or product-oriented companies. The aforementioned changes in technology and legislation are at work, intent on leveling the playing field through increased competition and consumer choice, so distinguishable significance must come from elsewhere. In the new era, product models must move beyond 20th-century service fundamentals and into service-valued innovations—the best of offerings delivered in advance of customer needs that place intense emphasis on customer value distinction and advantageously define the provider. By doing so, marketable products with new-era service value can pull customers. To put it another way, a properly implemented transition to service pull is in order. Technology push must become service pull.

Envision for a moment a future promised land that uniquely blends service-centric technology and service-differentiating techniques into a solution greater than the sum of its parts. This could lead to services that save or even recover lost time, solutions that propel productivity, extend flexibility, accelerate innovation, and create time value. To an extent, this future already exists. The legacy of technology push is giving way to service pull. For both service providers and enterprises, a service-value orientation of next-generation networks is becoming the crux of a new ascendancy.

The new era of networking can be recognized as a global sphere of convergence, the confluence of separate entities coming together to connect, collaborate, create, carry, collect, and commune. Indeed, several threads of convergence from businesses, services, technology, and consumers are now intersecting along multiple vectors, providing breakaway opportunities for the swift, the talented, and the brave. The road to definitive next-generation network services is paved with both success and peril, yet the rewards are greater than ever for those willing to plan, execute, and succeed as we communicate in the new era.

In this chapter, you learn about several identifiers of the new era of networking, how competition is leveling fences, and the technological stars of the new communications age, and you get an introduction to the building blocks of next-generation provider networks and services, which this book is predominantly about.
New Era of Networking

Several technical events occurred to usher in the new era of communications networking, including

• The technological refinement of the electromagnetic spectrum
• The rising power of computing
• The ascendancy of IP
• The dominance of data
• The service pull of the Internet
The most notable communications technology event of the 20th century is the realization that the electromagnetic spectrum, a large field of propagating waves from the shortest gamma wavelengths to the longest wavelengths of AM radio, is essentially infinite in regard to the carriage of information. Previous musings that the spectrum was limited to wireless form only were all but archived in the closing chapter of the recent millennium. The electromagnetic spectrum is indeed medium neutral. It can use copper-laden power, phone and coaxial cable plant (electrons), air-bridged radio frequencies (microwaves), and glass-based fiber (photons). Both wavelengths and frequencies are key defining properties of the electromagnetic spectrum, from utility line pulses, to light beams, to cosmic rays. The spectrum is wide open for use and reuse, and perhaps we’re not yet even aware of the extent of its boundaries.

Fiber optics has become the most powerful user of the electromagnetic spectrum, particularly the infrared portion of the field. By integrating, overlapping, and perhaps multiplying spectra across all available mediums, usable communications bandwidth (and, therefore, information-bearing capability) appears to be virtually without limit as we launch into a new century. From megawatts to milliwatts, from bits to petabits, from kilometers to nanometers, the superior harnessing of electric and magnetic waves is hyperextending the once-stoic barriers of bandwidth to the benefit of both mind and matter.

The computer age was built on the power of two. The one and the zero, represented and replicated in a transistor’s ability to switch itself on or off as binary equivalents of yes and no, has for the last 50-plus years been increasingly numerated on both sides of the decimal point. The mathematical power of two is awesome. It has created thousands of protocols, hundreds of thousands of usable software programs, millions of computers, terabytes of digitally storable information, and quad zillions of accurate calculations—all resulting in a multibillion-dollar industry. The “soft” power behind the hardware is the ability to arrange the ones and zeros in variable sequences of code, in slices of time, in matrices of instructions that create software programs, protocols, packets, and petabytes of storage.

The sharing of stored information is at the heart of any business opportunity and transaction. The content of millions of computer islands of information, as data-unique as a strand of DNA, increases its value once copied beyond its embedded storage media. The opportunistic need for intercommunication gave rise to packet networking to reduce the barriers of time and distance between information sharing. Networking became more efficient as intelligence migrated from the heart of the computer into the data-ways between. Central to that intelligence is the worldwide adoption of the Internet Protocol, often layered with Transmission Control Protocol to form TCP/IP. With most of the cocoons shed from the proprietary communications protocols, the pervasiveness and affordability of IP converged along with communication technology advancement, the sum of which rendered the CPU peripheral and the network central. IP was the defining catalyst that repositioned intelligence at the epicenter of transmitters and receivers and became the global communications language of the new networking era.
Also note that the rapid adoption of Internet access services by businesses and consumers has reversed the 80/20 percentage ratio of voice traffic to data. In a voice-driven world, demand was steady and predictable, and a highly structured network architecture made perfect sense. In the early 1990s, the appearance of enterprise client/server applications brought new requirements for increased data bandwidth, networking protocols, and upgrades to communications infrastructure. This rise of data bandwidth gathered momentum with which to swing the 80/20 rule into reverse. Service providers benefited from a fresh demand for data growth, fertilizing their Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) infrastructures and plowing investments into Asynchronous Transfer Mode (ATM) and IP overlay platforms, paving the way for the urbanization of the Internet.

Much like taking a startup company public, a significant usability enhancement quickly made the Internet a household name. The cryptic “guidebook” of Internet searching was simplified with the World Wide Web’s uniform resource locator (URL) interface, through use of an easy-to-remember “www.url.com” convention. Upon that, point-and-click navigation was added through the linkage of Hypertext Markup Language (HTML) and a PC software-based Internet browser. Quickly, there became a million places to go. Web browsing was layered with useful information content and garnished with color, cartoons, and software cookies. Baking this concoction on high bandwidth created a data-rich recipe for success—a visually tempting data buffet, continuously forked by millions of ravenous personal computers that heretofore were isolated and bored with mere productivity applications. The data portions were sizable, from small bites to large gulps. The newfound ease of use and content richness powered Internet usage exponentially through the 1990s. A voracious appetite for Internet data is now dominant over voice.

Was the initial attraction of the Internet merely an appeal to human nature, which delights in window browsing, taste sampling, globetrotting, and fresh adventure? Perhaps at first it was. Years later, we continue to feast on the services, now so easily accessible. The Internet is a post office, a library, a bank, a brokerage, a pharmacy, a travel agency, a school, a grocery store, a market, a flower shop, a bookstore, a mall, and an auto dealer. It’s also a phone book, a television, a radio, a newspaper, a magazine, a catalog, a map, a weather forecast, a filing cabinet, a utility, a digital neighborhood, and a community. It’s even a mailman, a banker, a doctor, an investment broker, a real estate agent, a teacher, a car salesman, a ticket broker, and a delivery boy. Businesses use the Internet as a living brochure, a sales and pricing catalog, an advertiser, an order taker, a distributor, a sales and customer service channel, and a global street address. These embedded services save time and, therefore, pull users into digital commerce and entertainment. The Internet is one definitive example of service pull.

Perhaps the remaining signpost of the new era of networking was further deregulation of the United States national telecommunications network infrastructure in 1996. This historic action fueled competition (a few thousand new service providers), brought an influx of venture capital, and relaxed restrictions in offered services. Service providers were placed into a multicompetitive environment and have been restructuring their networks to offer new and additional services to an increasingly diverse set of clients.
Consider that the expanding communications spectrum, optical technology advances, personal computers, ubiquitous IP, mobility, the Internet, deregulation and competition, and monetary wealth and investment are all significant, pervasive, and widely available. Most opportunistically, they converged in short order. Their confluence has blurred lines of demarcation between service providers, enterprises, and consumers; between network access technologies; and between voice, data, and video transport, providing both market discontinuities and market opportunities. These mutually gainful pursuits created a fresh inflection point from the long lineage of legacy networking. Taken individually, all are worthy of mention. Considered collectively, their synergies are intersecting, redefining, and sustaining opportunity—ushering in a new era of networking.
The Fences Are Down

On February 8, 1996, President Bill Clinton signed into law the Telecommunications Reform Act of 1996. This Act was a 128-page bill designed to restructure the entire telecommunications industry. Actually, he signed it twice—once with a presidential pen, and then again on computer screen with an electronic stylus, instantly posting the new legislation on the Internet. Lily Tomlin, who attended the historic signing via Internet video, jokingly predicted the following morning’s headline as “Bill signs Bill!”

Congress had been trying to pass such a law for nearly a decade: a new law to transform the legal framework that historically regulated the telephone, cable TV, and broadcast industries. It would promote the advancement of a modern telecommunications and information infrastructure. It would create jobs and help connect every child in a school classroom to the information superhighway. Consumers would see lower prices, better quality, and more variety of choice in telephony, data, and video services.

Of course, the Federal Communications Commission (FCC) still had to write detailed rules of engagement, not in its previous role as a monopolistic supervisor but rather as a judge of competition. The creation of simplified, clear, and fair rules would make competition real and not rhetoric. While much of that work was six months or more away, the disruptive impact had already begun in the minds of America’s boardrooms and on Wall Street, because suddenly local could go long, long could go local, cable could play back dial tone, telephony could talk video, fixed-wireline could untether, wireless could tether up, and broadcasters could expand their markets. In effect, all were invited to a new, open market opportunity with liberty and Internet for all. In a historical, electronic moment, the fences were down.

In the mid- to late 1990s, dot-coms were popping up everywhere on the Internet and on the Initial Public Offering (IPO) radar. Internet access was exploding from businesses and consumers, as web servers, HTML, and personal computer browser software created needed transparency, colorful graphical user interface (GUI), and autonavigation of the Internet. Service providers of all kinds were caught in the middle, as the challenge of “connect the dots with the dollars” became the new objective of nearly every incumbent, and the golden
fleece of every new telecommunications entrant. Soaring on the effervescence of the current stock market, bandwidth angels sent seemingly free “pennies from heaven” in search of business plans for advanced telecommunications services that would carry wave after endless wave of Internet surfers across a new, global ocean of information offerings.

According to the Telecommunications Industry Association’s annual market forecasts, by 1997, the overall telecommunications market grew by more than 11 percent, generating revenues of $406 billion. Services represented about 74 percent of the 1997 total with equipment comprising the remaining 26 percent.1 By the end of 1998, the market had grown to $467 billion.2 By the turn of the century, the market was $518 billion, growing at a pace more than twice as fast as the U.S. economy. Competitive Local Exchange Carriers (CLECs) had grown to 158 operators from only 20 in 1996. With 24 million fiber miles laid in 1999, CLECs had also grown their fiber share to 22 percent from 10 percent in 1996.3 It appeared that local competition was taking root as many new entrants joined the bandwidth, services, and subscriber race by creating new wireline, wireless, and optical networks of their own. Telecommunications was now valued at about one sixth of the U.S. gross domestic product (GDP) at the divide of the centuries.

Not to be silenced, cable system operators invested nearly $20 billion in system upgrades in the three short years post-Act, and by 1999 touted 1.2 million cable modem subscribers. The Incumbent Local Exchange Carriers (ILECs) made significant mergers and began aggressive Digital Subscriber Line (DSL) build-outs, totaling about 250,000 DSL subscribers that same year. Wireless communications spending had increased from $27 billion in 1996 to $45 billion in 1999.4 By the Christmas season of 2001, the overall market reached revenues of $663 billion. Double-digit growth was tallied in specialized services such as DSL and cable, and high-speed Internet access gained a whopping 78 percentage points. But this was provisioned mostly across premillennium capital infrastructure, as spending on network equipment and facilities lost 13 percentage points that same year.5 A sharp falloff in equipment spending was barely offset by a welcome surge in wireless services, eking out only a 3.5 percentage gain to a total U.S. telecommunications revenue mark of $681 billion by January 1, 2002.6

Beginning in late 2000, sharply lower growth in carrier capital expenditures (CAPEX) and operating expenditures (OPEX) began to deteriorate return-to-capital ratios, spooking investors who responded with investment reduction or abandonment. Many of the new CLECs, ISPs, and dot-coms were highly leveraged and overextended, still needing substantial amounts of debt and equity capital to maintain operating expenses. With access to that nurturing capital all but dried up, many a storybook tale ended suddenly in Chapter 11 bankruptcy. Other carriers attempted to use “creative” accounting to mask the growing revenue problem. As allegations reached the public tabloids, investors were left with absolute uncertainty regarding actual telecom market growth and earnings.

All of these negative influences were converging about the time that the overall U.S. economy ran out of both steam and track. The U.S. stock market’s NASDAQ index deflated in April 2000, about 20 percent in less than two weeks, almost instantaneously wiping out
$1.2 trillion in market value and net worth. As a major contributor to GDP, the communications industry would not be spared. Three years later, the index was down 71 percent from that historic year-2000 high, with its aggregate market value reduced to $2 trillion of a once-lofty $6 trillion high. As an example of downstream effects, in 1999 Corning had increased its optical fiber manufacturing capacity by 50 percent and again by another 50 percent in 2000. By year-end 2002, Corning had decreased optical fiber manufacturing capacity by 80 percent. In short, telecommunication’s stellar boom went super bust, and like a ribbon of dominoes, it reverberated throughout the industry and its food chain.

Hindsight has always been most revealing, and many now agree that several issues appeared to change postderegulation excitement to turn-of-the-century disappointment:
• Competitive carrier business models suffered from “fundamentals” fractures, creating a massive waste of investor capital.

• The new regulatory models, severely underexecuted, fell short of stimulating true competition for advanced services.

• Incumbents and competitors alike overreached and overinvested, as a predicted exponential rise in bandwidth demand came up short of a few exponents.

• A substantial number of dot-coms spouted like geysers, then sputtered and fumed from a sudden shortage of visitors.

• The money bubble gained too much altitude and then burst, deflating values and net worth and disintegrating options for quick debt retirement.
The rest of the story lends itself to overarching, significant causes of the telecom meltdown. While the Telecommunications Reform Act of 1996 intended more unilateral deregulation and less FCC micromanagement, the reality appears to be more of a forced “competition” or reregulation in which no one could make any money. The topics of “unbundling,” lingering antitrust decrees, and merger orders were still on the FCC agenda and still burdensome to cable and telephony companies, stifling investment in last-mile broadband for America.

While the explosive, new telecom competition was highly leveraging itself with new broadband networks, the U.S. economy began a slide into deflation, forcing debtors to pay back loans in dollars 30 to 40 percent more expensive than the ones they borrowed, resulting in significant bankruptcy proceedings.7 Many service providers, though positioned to survive the telecommunications downturn, carried a significant debt load from the infrastructure capacity build-outs of 1998–2000, causing them to be slow to reinvest again until excess capacity could be absorbed. With a comet-like 150-percent-plus explosion in telecom debt during the boom years, chased by a deflationary tail of up to 40 percent dollar value, a shot at a stationary orbit rapidly decayed until the fireball hit something.

The explosive growth of telecom during the late 1990s also gave way to falling prices. While many carriers had factored price declines into their business financial plans, no one expected prices to fall so fast. Deflation appears at the surface to be dangerous in that falling prices often lead to falling wages, making it difficult for companies and households to pay
back debts. This is the traditional thought about deflation, and since it hasn’t been in the economic headlines for decades, much of the thinking about its contemporary causes and effects is still ongoing. But another possibility worth considering also contributes pressure to deflation. The economy has seen steady improvements in productivity over the last several years, brought on by the absorption and execution of technology and process benefits, and increased productivity typically shows up as more profit in a company’s bottom line. With many companies making similar gains within and across industries, however, it’s possible that productivity is allowing companies to lower prices while still maintaining profits and wages. Lower prices expand markets and can result in temporary competitive differentiation. Today, the fences are still down. Some of the rubble remains, so much so that the frontrunners of the “telecommunications land run” often stumble and trip over the residue. Yet competition has increased in all segments; service variety is better than ever; technology has no shortage of futurists and tinkerers; and communications policy should wobble its way through continuous improvements. Market opportunities still exist, but not in the context of the late 1990s. The negative posture of the industry, born in the early 2000s, will turn out to be temporary, with lessons learned layering new wisdom upon old fundamentals. Common sense will return to common practice. The telecom sprinter will catch his breath and retrain for the developing endurance race ahead. As the initial gold rush becomes a long and grinding journey for some, it’s clear that the communication landscape will witness significant change, both nationally and internationally. Competition is no longer pressed by coastal boundaries: the 1997 passage of the World Trade Organization’s Agreement on Basic Telecommunications Services extends competition opportunities to about 90 percent of the globe. That’s a lot of free-roaming territory. With the seemingly galactic Internet very much alive and growing in usage year after year, IP, optical, and wireless will continue as technology propellers for the next generation of services growth opportunities that are still there, somewhere.
Technological Winners Communications technology is everywhere and anywhere that you look today. The hunger and thirst for increasing productivity, saving time and labor, enhancing recreation, extending lifespans, and most interestingly, profiting from a technological winner are at the heart of every innovator. Often, new technology is successful on its own merit when nothing like it has existed before. Sometimes a new technology is a reassembly or unique packaging of existing technologies much like a technical version of Scrabble. When a technology can be enhanced, yielding a 10x improvement in price, performance, or time value over the technical roots from which it sprang, it has an excellent chance of widespread adoption.
Many times, a new technology is the missing link in a chain, suddenly bonding with other technologies or services to form a new breakthrough solution. Most telecommunications service providers are still technology-based, and clear technological winners will remain at the foundation of new offerings. IP, optical, and wireless technologies contain inherent service values that help providers service-orient their offerings, deriving even better service from provider technology.
IP Everywhere During the 1970s and 1980s, the most abundant form of networking protocols and networking revenues centered around the IBM protocols. Both asynchronous and synchronous data transmission were used with the most popular becoming the BiSync and Synchronous Data Link Control (SDLC) protocols within the IBM Systems Network Architecture (SNA). IBM SNA was an umbrella of integrated software platforms that formed a powerful and reliable data network communications system, which became the primary networking thread among large enterprises. At that time, IBM’s SNA was the most pervasive data communications protocol in American and international business. On a particular late 1980s day in Tulsa, Oklahoma, an IBM customer’s meeting was in process where one of the customer’s Chicago-based employees was speaking evangelistically about a new communication device that connected this to that, bridged front to back, and seemingly translated circles into squares. The customer was talking about a Cisco MGS series router. As he expounded further, this router required no raised floor or glass rooms, could be configured on-the-fly, and was multilingual with IP, Internetwork Packet Exchange (IPX), and AppleTalk as primary language skills. Those IBMers in attendance that day silently puffed up inside with immediate jealousy, not only at hearing about a serious, non-IBM communications competitor, but also at the revelation that there was an extensive technology and computing world beyond IBM’s meadow where they were born and bred. During the 1990s, as it would turn out, Cisco developed a strategy for supporting the IBM SNA protocols via TCP/IP and acquired many of the SNA architects and designers from IBM Raleigh’s communications division stronghold; and TCP/IP began its run as the new networking protocol of choice in business. Today, the Internet Protocol, or its more colloquial reference of IP, is everywhere. The Internet Protocol suite, as it is commonly referred to in official standards documents, became the protocol engine of choice for networks worldwide because of IP’s ability to be implemented on disparate computer systems. By allowing these diverse computers and their networks to interoperate with each other, information sharing was nimble and quick using the simple, yet powerful capabilities of IP. Many benefits led to IP becoming the de facto standard of networks and computer communications around the world. IP is inherently connectionless and distributed, reducing restrictions on network design, adding reliability through seamless flow across multiple communication pathways, and providing low overhead.
IP is a scalable and extensible protocol suite, bringing flexibility and investment protection, which are key requirements of designers and decision makers. For example, you can extend the protocol’s default, connectionless, best-effort orientation by combining IP with a Layer 4 protocol such as TCP. This layering or stacking of TCP with IP, or TCP/IP, adds connection-oriented, reliable data transport capabilities to IP communication with ancillary flow, congestion, and duplicate data-suppression controls. This allows IP to be a suitable alternative to many computer manufacturers’ proprietary network protocols, typically designed for and supported only on the manufacturer’s computer platforms. Perhaps the most appealing benefit is the openness, mutual development, and control that the Internet Protocol suite enjoys. With all application and networking developers having access to the same information regarding the IP protocol structure, research and development efforts become collaborative and self-perpetuating. From a grass roots beginning, the open nature, flexibility, and affordability of IP led to its pervasiveness. The pervasiveness and distributed architecture of IP across multiple computer platforms positioned IP as the unifying protocol of choice for enterprises and the connected Internet. The service-oriented nature of the connected Internet provided service pull, which rapidly led to IP’s ubiquity. Because of these enablers and the Internet, IP is now everywhere. According to the Gartner Group, IP grew so fast that by the end of 2001 more than 98 percent of all corporations were using it as part of their networking architecture.8 So rapid was the growth that a January 2000 Internet Domain Survey by the Internet Software Consortium (http://www.isc.org) counted over 109.5 million hosts using the current version 4 of IP, or IPv4.9 With a 32-bit addressing scheme that can support up to 4.3 billion IP hosts, it would seem that plenty of space remains; but with the popularity of consumer PC Internet use and the desire for all handheld devices and cellular phones to be Internet addressable, IPv4 addresses will one day border on exhaustion. Globally, the IPv4 address shortage is even more evident. Some individual U.S. universities have more registered IPv4 address space than all of China. Many countries have been lagging technologically and economically and as a result have come up short on publicly registered IPv4 addresses. Once again, as a testament to mutual cooperation and the extensibility of the IP protocol suite, an IETF standard (RFC 2460 and others) for a next generation of IP known as IPv6 is available. IPv6 increases the address scheme from 32 to 128 bits, ensuring the availability of IP addresses into the next few decades. In addition, IPv6 improves networking efficiency through prefix routing, better traffic distinction, built-in security, and coexistence and compatibility with IPv4. Many service providers have already applied for and received registered IPv6 address space. The initial implementation of IPv6 is much more prevalent outside of the United States due to the aforementioned shortage of IPv4 addresses and the rapid uptake of mobile teleputers. IPv6 most likely will grow from different regional networks and over time spread both nationally and globally. In addition, IPv6 is a streamlined addressing architecture that better supports mobile IP.
With wireless mobility devices requiring IP intelligence, the need for unique IP addresses, as well as seamless IP roaming across networks, is paramount to IP mobility.
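To put the jump from 32-bit to 128-bit addressing in perspective, the short Python sketch below compares the two address spaces using only the standard-library ipaddress module. The 2001:db8::/32 range is the IPv6 documentation prefix; the /64 shown is an arbitrary illustration, not an allocation from any registry.

import ipaddress

# Total address space: 2^32 addresses for IPv4 versus 2^128 for IPv6.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")                      # 4,294,967,296 (about 4.3 billion)
print(f"IPv6 addresses: {ipv6_total:.3e}")                     # roughly 3.4e+38
print(f"IPv6-to-IPv4 ratio: {ipv6_total // ipv4_total:.3e}")

# Even a single /64 prefix, the size commonly assigned to one subnet or device link,
# dwarfs the entire IPv4 Internet.
subnet = ipaddress.ip_network("2001:db8:abcd:12::/64")
print(f"Interface addresses in one /64: {subnet.num_addresses:.3e}")   # about 1.8e+19

At that scale, giving every handheld device, cellular phone, and roaming host its own globally unique address stops being an exercise in rationing.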
As IP increasingly moves into mobility devices such as pocket PCs and cellular phones, IP mobility support will allow a mobile device to maintain the same IP address, known as its home address, wherever it attaches to a network. This is conceptually similar to the way your cell phone works today when you are traveling or roaming beyond the reach of your wireless provider’s cellular network. So far in the new century, IP is still rapidly growing despite an economically soft start. According to the Internet Software Consortium’s (http://www.isc.org) January 2005 Internet Domain Survey, the number of DNS-advertised IP hosts accessible via the Internet reached approximately 317.6 million, compared with approximately 14.3 million hosts recorded in January 1996.10 These numbers don’t include the IP host addresses that are resident in enterprise and service provider networks or in consumer PCs that don’t advertise to a domain name server, and don’t even include the hosts that use private IP addressing space. This is evidence that growth in both numbers of IP hosts and usage of IP-based applications continues to accelerate. IP is the dominant Layer 3 networking protocol among local, long, mobile, and global internetworks. The emergence of IP networking has decoupled network services from their dependence on transmission media at the physical layer. IP is capable of leveling the telecommunications playing field. Regulation is an exercise in wealth distribution, but IP provides anyone the same tools with which to craft new opportunities. Those who champion expertise in IP networking will dominate. IP will increasingly be used as internetworking finds applications beyond today’s networks into tomorrow’s household, automobile, personal communications, and health monitoring devices. Much like the Internet, the IP acronym is rapidly approaching name recognition at the consumer and household level. Because of the Internet, IP technology has successfully integrated into a service-valued product model that effectively pulls users and their transactions. Thanks to the Internet, IP is vaulting the continents as an internationally global, perhaps one day universal, communications language. IP, unlike any other protocol, is uniquely positioned as the central theme in the new era of networking.
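A quick back-of-the-envelope calculation, using only the two survey figures quoted above, shows just how steep that nine-year climb in advertised hosts was; the sketch below simply computes the implied compound annual growth rate:

# ISC Internet Domain Survey host counts cited above: January 1996 versus January 2005.
hosts_1996 = 14.3e6
hosts_2005 = 317.6e6
years = 2005 - 1996

growth_multiple = hosts_2005 / hosts_1996
cagr = growth_multiple ** (1 / years) - 1
print(f"Growth multiple over {years} years: {growth_multiple:.1f}x")   # about 22x
print(f"Implied compound annual growth rate: {cagr:.1%}")              # roughly 41% per year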
Optical Anywhere Light travels through glass. Otherwise, you wouldn’t have any windows in your home, and skyscrapers might be wrapped in concrete skins. Light is also abundant, thanks to the sun and stars, luminous gases, and spark-generating electricity. Communicating with light has primitive beginnings. Glass, which is easily made from sand-born silica, flint, spar, and other silicious materials, is also highly abundant. The geologic glass, obsidian, was first used thousands of years ago to form weapons and jewelry. Man-made glass objects date back into the Mesopotamian region, as early as about 1700 BC. The Romans made glass in 1 AD and spurred rapid development and expansion of the art in the Mediterranean region. Therefore, glass making has been around awhile.
Of the many types of glass, optical glass made its 1590 AD debut in early glass telescopes in the Netherlands. Edison used glass for his invention of the light bulb and its first public demonstration on December 31, 1879. Glass was used for train lights shortly after. Window glass and glass television tubes were used extensively in the 1900s. Optical glass is also the basis of focusing and pulling an image’s attributes into a camera body to excite photographic film. As a kid, you might recall a spyglass with a 90-degree bend that you could use to “spy” around the corner of a building. It used an angled mirror at the bend to reflect the light from end to end. While that was a reflection of light through the air inside the spyglass, the same reflective properties work for light that is propagating through optical glass or optical fiber. If you had a cylindrical seven-foot-tall hallway of pure glass wrapped in a flexible mirrorlike cladding and then took a flashlight and beamed it at the end of the glass hallway, the beam would travel inside of the glass to one edge of the glass hallway, reflect off the mirrored cladding, and continue bouncing back and forth until it reached the other end— almost as visibly bright as if the flashlight itself had passed completely through. Now, in secret agent fashion, if you switch the flashlight on and off in random patterns such as Morse code, someone on the other end can decipher this timed sequence of light and dark flashes, checking it against a Morse decoder ring to understand what is being communicated through light pulses. This rudimentary description of communicating with light in glass is the elementary principle behind the use of fiber optics. For the purposes of telecommunications, optical-grade glass is reduced to long, thin strands of extremely pure glass, known as optical fiber. The glass strand is so thin that it takes on the flexible properties of a human hair. The light that is used to pass through an optical fiber is nonvisible light, from the infrared portions of the electromagnetic spectrum. The infrared light’s wavelength is scientifically measured in nanometers. Frequency is another property of light waves, and for the infrared portion of the spectrum, the frequency is measured in TeraHertz. The light is generated and focused through very small lasers to concentrate the light before it enters one end of an optical fiber. Pushing photons through optical fiber is a combination of technologies that improves many-fold over the traditional excitation of electrons through copper-wire cable. Also with optical, the raw materials are more abundant and manufacturing improvements are making it ever cheaper, competing with copper cables of equivalent length. With optical, the diameters are smaller, the information-carrying capacity is higher, there’s less interference and signal loss, fewer errors, less power expended, and much lighter handling weight. By improving speed, capacity, and clarity, fiber optics provides service-improving values that are useful in many industries and superior for use in communications. In 1970, the world’s first low-loss, silica optical fiber for communications was created at Corning Inc., in Corning, New York. Corning was a high technology manufacturer of glass, and many of the company’s achievements in the field, even as early as the 1934 invention of fused silica, became part of the success story of producing optical fiber that was suitable for long-distance communication with low loss of photonic energy. 
By 1978, Corning had
perfected the process of creating single-mode fiber in volume. Today, there are multiple types and grades of fiber, often referred to as application-specific fiber, each specially designed for a particular deployment such as long haul, metropolitan, undersea, or access and premise. By the early 1980s, the beginnings of telecom deregulation prompted new entrants to make the first commercial use of optical fiber in backbone sections of their newtechnology networks to alleviate burgeoning, traffic choke points. These new competitors advertised the technical advantages of their optical backbone networks with an appeal to improved clarity and capacity. Like a rock thrown into a pond, optical fiber deployment has steadily followed these bandwidth choke points, in ripple-like effect, from the center of these national networks ever closer to the communicating end user. Today, the choke points have been pushed into the last few miles of consumer access. While cable TV and telephony companies have been at work extending the performance of their copper plants, the probability is very high that fiber optics to the business, the desk, and home will become one of the shining stars of broadband opportunity over the next few decades. Erbium-doped fiber amplifiers in the 1990s created another optical leap forward. Not only did this technological advance increase the distance that light could travel before needing optical-to-electrical-to-optical (OEO) regeneration, but it also unleashed the optical fiber to use multiple wavelengths, or lambdas, of infrared light. It’s doubtless you’ve seen how light refracts through a prism to form several colors of light. This principle is at the basis of optical wavelength division multiplexing (WDM) and allows a single optical fiber to carry more colors or wavelengths of communication per fiber, currently up to about 640 distinct lambdas or channels per strand. More are on the way. This, in effect, has multiplied the information-bearing capacity of a fiber strand by hundreds of times over. It creates a price/ performance improvement of sizable proportions that helps future-proof a fiber network. It also makes available an opportunity to lease or purchase bandwidth by the lambda instead of by the whole fiber strand. With fiber span distances stretching ever further between OEO regenerations, new deployments significantly reduce capital costs and operating expenses, shortening the return on investment timeframe and improving the profitability of the network over the system’s life. Fiberless optics technology is emerging as another usage of optics through air that is wellsuited for high data rates in urban high-rise developments. Using optical and holographic technology with self-focusing, small aperture dishes, wireless optics provide a quick and cost-effective way to connect downtown buildings without cutting the streets or floors for cable passage. Optical switching and routing is fanning the flames of optical advancement by seeking to remove the electrical-to-optical tax that is paid where optical regeneration requires conversion to electrical in order to be traffic-switched or regenerated. Based on wavelength manipulation technologies from prisms to bubbles to minimirrors to waveguides, this developing field has already produced commercial products that combine, split, and redirect lambdas to create optical cross-connects and add/drop multiplexers (ADMs). The
equipment does this without converting the bit stream to electrical until it’s handed off to the last mile. With such an abundance of lambdas and optical switching in your grasp, you might indeed circle back to the concept of circuit-switched networks (voice networks), only this time where they are optically, or rather, lambda–switched. Optics as a technology for communications is white hot. In fact, optics is increasingly being complemented with IP to reduce complexity and streamline offerings with familiar technologies. In a few short years, there’ll be optical communication available anywhere that information is generated or consumed. With more than 300 million kilometers of optical fiber deployed worldwide, lambda switching at the meet points and free-space (through the air) optics filling the gaps, optical is at the heart of the fibersphere and belongs on the short list of technological winners.
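The wavelength and lambda figures above reduce to simple arithmetic. The sketch below converts a wavelength in the common 1550 nm long-haul window to its optical frequency and estimates aggregate per-strand capacity; the 10 Gbps per-lambda line rate is an assumption typical of systems of that era, not a figure from the text, and real deployments vary.

# Frequency of an infrared carrier: f = c / wavelength.
SPEED_OF_LIGHT = 2.998e8                  # meters per second
wavelength_m = 1550e-9                    # 1550 nm, a common long-haul DWDM window (assumed)
frequency_thz = SPEED_OF_LIGHT / wavelength_m / 1e12
print(f"1550 nm corresponds to about {frequency_thz:.1f} THz")               # ~193.4 THz

# Aggregate WDM capacity of one strand: number of lambdas x per-lambda line rate.
lambdas_per_strand = 640                  # upper figure cited in the text
gbps_per_lambda = 10                      # assumed per-channel rate for illustration
total_tbps = lambdas_per_strand * gbps_per_lambda / 1000
print(f"Illustrative aggregate capacity: {total_tbps:.1f} Tbps per strand")  # 6.4 Tbps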
Wireless Through the Air By all accounts, wireless communication is a winner. In a little over a decade and a half, mobile phone penetration has surpassed 50 percent of the U.S. population. This has happened faster than the rate of adoption of many significant technologies. Worldwide, the adoption rate has been even higher with over 300 million cellular phones counted in a year 2000 audit, with many of these sporting digital capabilities. That’s about 30 percent more than all the personal computers sold up until the drop of the millennium ball. That’s a lot of people who want to stay in touch while they get away. As PCs and cellular phones collide and converge, broadband mobility will increasingly get you out of the office or out of the house, with no sacrifice of productivity or entertainment. Where fiber optics keeps you bandwidth-happy indoors, wireless can carry you cheerfully outdoors, enjoying equal fidelity with the wireless World Wide Web in one hand and a fishing pole in the other. We desire to communicate in both places. According to the Cellular Telecommunications & Internet Association website (http:// www.ctia.org), the summer of 2003 started with 146,610,088 U.S. wireless subscribers, an increase of over 70 million since 1997. By the end of the first quarter of 2005, U.S. wireless subscribers totaled over 180,000,000.11 The global wireless mobile market was estimated at 2 billion subscribers at year-end 2005. According to a 2005 Telecommunications Market Review and Forecast, an annual study published by the Telecommunications Industry Association, wireless communications spending is expected to increase to $212.5 billion by 2008.12 All of these numbers suggest that wireless is enjoying a compound growth rate of approximately 9 to 10 percent per year. Much of this growth is being driven by wireless Internet access for cell phones along with camera, color, and multimedia, wireless Ethernet local area networks in the enterprise (private Wi-Fi), and a robust growth of public Wi-Fi (802.11x) access points popping up in cafés, coffee shops, airports, hotels, even Mount Everest.
In fact, Wi-Fi is experiencing accelerated growth. With Wi-Fi spreading its spectrum in the Industrial, Scientific, and Medical (ISM) band, which is a largely unregulated frequency band for those who transmit at less than 1 watt, it will creep rapidly in a bottom-up advance. With Wi-Fi access points scattering beyond enterprises into retail establishments and households, the meshed bandwidth potentially improves with more users. Providing computing portability, Wi-Fi will compete with wired LANs everywhere but will complement mobile wireless technologies like code division multiple access (CDMA) and fiber. The leading competing standards for wireless mobility communications include the following technologies:
• Global system for mobile communications (GSM)—The original European digital cellular standard, based on TDMA, used throughout Europe and much of the United States. GSM is migrating to a variant of CDMA called WCDMA.
• Time division multiple access (TDMA)—A wireless digital transmission method that multiplexes multiple wireless signals via distinct preallocated time slots onto a selected frequency channel.
• CDMA—A spread-spectrum digital communications transmission method that identifies each separate wireless transmission with a unique coded identifier, deemed more bandwidth efficient than TDMA.
Other wireless technologies include
• General Packet Radio Service (GPRS)—A standardized wireless packet-switched data service, an extension of GSM to support data. Generally considered a second-and-a-half-generation (2.5G) data service using TDMA.
• Personal Communications Services (PCS)—Digital wireless communications services based in the 2 GHz frequency range.
• Enhanced Data rates for the GSM Evolution (EDGE)—A third-generation (3G) wireless data standard for GSM, using TDMA.
• Variants of CDMA such as CDMA2000 1X—A Qualcomm-developed technology supporting both wireless voice and data within a standard CDMA channel.
• Wideband CDMA (WCDMA)—Essentially a non-Qualcomm version of CDMA standardized as a 3G overlay for GSM heritage mobile systems, targeted at higher data speeds than EDGE.
• High Data Rate Technology CDMA 1x EV-DO—A data-optimized version of Qualcomm’s CDMA2000 1X targeting wireless data rates over 2 Mbps.
Multiple generations of these mobility networks called 2G, 2.5G, 3G, and beyond, provide bit-rate improvements and creature comforts with each new wireless generation. This list is necessary to introduce the complexity with which mobility systems operate. All of these protocols and platforms are looking to carve a technical edge and increase market share among wireless providers. For world travelers who must maintain personal,
seamless reachability, however, the surplus of incompatible protocols and platforms continues to talk over each other in a Tower of Babel. Today, a single provider builds a wireless network footprint, sells to its customers the specific cellular phones that work with its protocol and assigned range of frequencies, and then provides the service. This is a proprietary business model that tends to be vertically integrated. When you think about it, numerous wireless networks invisibly (except for the babbling antenna towers) overlay your coverage area, and this is the case the world over. The boom years of wireless technology would fill a technology digest with multiple acronyms due to competing radio technologies, access standards, data standards, network standards, technology generations, and usability specifications. The demand overruns the methodology of efficient use of the wireless spectrum and the global integration of networks. Today’s wireless, then, is both untethered and tethered. It’s untethered to the extent that it allows mobility, but it’s functionally tethered to a vertically integrated wireless provider. As the wireless market matures, it will likely consolidate and integrate into more of a horizontal industry. Much like the wire-line telephony industry, cell phones will migrate up the bits-per-second scale, as low-bit-rate voice communication gives way to high-bit-rate wireless data applications, such as mobile Internet, mobile messaging, mobile banking, mobile voting, mobile books and magazines, mobile music, mobile pictures and video, mobile health monitoring, and all the essentials that we like to carry with us wherever we go. Someday, the ultimate mobile phone might integrate and incorporate at least the top two or three technology standards, code-hopping across these disparate protocols as necessary, perhaps even bouncing off satellite services to fill in the global gaps in coverage. Another opportunity for the ultimate mobile phone or mobile PC might be an evolution of today’s fixed-chip designs into reconfigurable chips that use adaptive computing techniques. These techniques might allow mobile phones and PCs to seek out the most suitable radio frequency and wirelessly and automatically connect. For customers and consumers, updating could become as simple as downloading the latest reconfigurable chip firmware. That could involve mobile devices that work anywhere in the world, with less resistance to hardware obsolescence. With continued advancement in wireless computing capabilities, the ultimate mobile phone will include key features of PC laptops so that your productivity remains powerful and seamless as you’re on the move. Yes, there’s a lot of growth still ahead for additional wireless subscribers. And there’s certainly a plethora of opportunity ahead for better integration of wireless networks and feature portability as you hop from coverage area to coverage area. Success awaits the smart provider who maps technology advantages and user requirements into value-responsive, application-specific services that the customer will buy.
In summary, IP, optical, and wireless are the primary technological winners in the new era of networking. The pervasiveness of IP networking, the speed of optical networking, and the anywhere flexibility of wireless all save time. That’s why these technologies rise above others: their contribution to time value is recognizable and lucrative. More than ever, saving time generates money.
Building Blocks for Next-Generation Networks The fundamental building blocks of next-generation networks, applications, and services start with IP. At Layer 3, IP is the networking messenger between data computing applications, IP telephony conversations, and IP video sessions. The success of IP has been beneficial to another ancillary building block: the rise of Ethernet technology at Layer 2. From its early beginnings in the 1970s, Ethernet has withstood all Layer 2 competitors, defeating the technology push of all deterministic Layer 2 challengers with the pull of Ethernet’s simplicity, adaptability, and interoperability with all Layer 1 mediums. Where IP is the Layer 3 packaging, Ethernet is the Layer 2 conveyor belt that leads to the digital versions of mail bags (wireline), photonic locomotives (optical), and stealth jet planes (wireless), all at Layer 1. IP, Ethernet, optical, and wireless are the must-have networking layers, in essence, the new-era building blocks with which to construct and enhance networks that are flexible, fast, and service rich. Using these technologies, providers are adapting their networks toward architectures that better support data, voice, and video convergence, providing a variety of access interfaces to deal with customer choice and augmenting their options to offer next-generation broadband services that find success with customers. The provider’s network heritage, customer base, and capitalized assets often determine the centrifuge from which tactical and strategic network infrastructure investments are launched. The chapters that follow discuss service-valued technologies within the context of general classifications or types of provider networks. For the purposes of this book, these encompass
• IP networks
• Multiservice networks
• Virtual Private Networks (VPNs)
• Optical networks (including metropolitan and long haul)
• Wireline networks
• Wireless networks
Each of these chapters provides a useful framework in which to examine applicable provider technology, the position of the technology within the framework, and the service opportunities therein. Providers might use one classification of network exclusively, or they might
incorporate several network classifications depending upon their business plan, relative strengths, customer needs, and market opportunities. Any, many, or all network types are often in play for next-generation service providers.
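As a concrete, if simplified, illustration of the layering described at the start of this section (IP as the Layer 3 packaging, Ethernet as the Layer 2 conveyor), the Python sketch below hand-assembles a minimal IPv4 header and wraps it in an Ethernet II frame using only the standard library. The addresses come from the documentation ranges, the IP checksum is left at zero, and nothing is transmitted; it is a teaching sketch, not a packet builder for production use.

import socket
import struct

def ipv4_header(src: str, dst: str, payload: bytes, proto: int = 6) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options) in front of a payload."""
    version_ihl = (4 << 4) | 5                        # IPv4, header length = 5 x 32-bit words
    total_length = 20 + len(payload)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,                 # version/IHL, ToS, total length
        0, 0,                                         # identification, flags/fragment offset
        64, proto, 0,                                 # TTL, protocol (6 = TCP), checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst),
    )
    return header + payload

def ethernet_frame(dst_mac: bytes, src_mac: bytes, packet: bytes) -> bytes:
    """Wrap a Layer 3 packet in an Ethernet II frame (EtherType 0x0800 = IPv4)."""
    return struct.pack("!6s6sH", dst_mac, src_mac, 0x0800) + packet

payload = b"application data"
packet = ipv4_header("192.0.2.10", "198.51.100.20", payload)   # Layer 3: the IP packaging
frame = ethernet_frame(b"\xff" * 6, b"\x02" * 6, packet)       # Layer 2: the Ethernet conveyor
print(len(payload), len(packet), len(frame))                    # 16 36 50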
IP Networks Unlike any other Layer 3 protocol, IP is uniquely positioned as the central theme in the new era of networking. It is the unique and crucial point of convergence for networks, services, and applications. All types of service providers now use IP networks. IP networks allow providers to interface directly with the type of networks most familiar to their customers. As a Layer 3 protocol, IP networks stitch together various purpose-built networks and are the fundamental access layer to the Internet. IP is the most desired networking interface for advanced applications, because it can reach the largest customer markets. Also, IP networks are moving center stage into carrier-class networking. Data, voice, video, and Internet data must come together. By standardizing various types of data—formerly associated with entirely separate technologies—IP provides a powerful solution. A converged IP network creates the foundation for greater collaboration, opening new ways to work and interact, simplifying network management, and reducing operating costs. Converged networks are fueling the development of an array of dynamic applications, such as e-learning, unified messaging, and integrated call center and customer support systems. IP became the networking convergence engine of the late 20th century. IP is the dynamo of network convergence and service creation, extending productivity benefits, service variety, and innovation into the start of the 21st century. From local to long, from mobile to global, IP is unifying the convergence of networks while facilitating the purposeful and appropriate combination of data. You learn more about IP networks in Chapter 2, “IP Networks.”
Multiservice Networks Multiservice networks generally spring from the metropolitan network providers. In the metropolitan areas, service variety is a critical success factor to stratify and tailor narrowband, wideband, and broadband communications services to the unique requirements of businesses and consumers. With metro networks historically built for Layer 1 physical transport services, the demand for and the profitability of Layer 3 IP services is transforming these provider networks to offer multiple services and multiple interface types. In next-generation taxonomy, multiservice networks are infrastructures that provide not only a robust mix of communications interfaces, but also a portfolio of Layer 1, Layer 2, and Layer 3 IP services in platforms at the edge of metropolitan networks. Multiservice can also encompass IP/Multiprotocol Label Switching (MPLS) networks that are
increasingly used as a common network infrastructure in the core of provider networks. IP/MPLS networks provide the best features of routing and switching to enable transparency and convergence of network equipment, protocols, and customer services. Multiservice networks are often found in metropolitan network providers of voice, data, video broadcast, and interactive services. Multiservice networks are next-generation networks that help to bridge circuit-based offerings to packet-based services. You learn more about multiservice networks in Chapter 3, “Multiservice Networks.”
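The “label” that gives Multiprotocol Label Switching its name is nothing more exotic than a 32-bit shim header pushed between the Layer 2 frame and the IP packet. As a minimal sketch, assuming only the field layout defined in RFC 3032 (20-bit label, 3-bit EXP/traffic class, bottom-of-stack bit, 8-bit TTL), one label stack entry can be packed like this:

import struct

def mpls_shim(label: int, tc: int = 0, bottom_of_stack: bool = True, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry (RFC 3032 field layout)."""
    entry = ((label & 0xFFFFF) << 12) | ((tc & 0x7) << 9) | (int(bottom_of_stack) << 8) | (ttl & 0xFF)
    return struct.pack("!I", entry)

# Example: label value 100 at the bottom of the stack, as a router might push in front of an IP packet.
print(mpls_shim(100).hex())   # 00064140 -> label 100, TC 0, S 1, TTL 64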
VPNs VPNs are logically partitioned, private data networks deployed over a shared, public network infrastructure. VPNs are implemented with a wide range of technologies and can be self-implemented or managed by a service provider. VPNs allow end customers to realize the cost advantages of a shared network, while enjoying exceptional security, quality of service, extensibility, reliability, and manageability, just as they do in their own private networks. VPNs are all about IP accessibility. Large enterprise IT networks are all about private reachability. Quickly delivering service, decision data, and new product into the hands of the worker, partner, or customer is all about profitability. Businesses are going where their customers are, extending the essential inputs and outputs of customer information, homesteading new frontiers of distribution, and sustaining business application computing at any hour from any time zone. The pursuit of customer centricity can take the whole organization with it, occasionally in a physical sense yet frequently in the virtual sense. Organizations that can replicate themselves quickly and virtually on the customer’s doorstep will succeed in this pursuit. VPNs become the computing and networking backbones of these virtualized organizations. For providers, VPNs are a service foundation. VPN solutions can apply to any network layer of the Open System Interconnection (OSI) protocol stack, and that is essential to their appeal. Providers can build or enhance their networks to offer any or all VPN types:
• Access
• Intranet
• Extranet
Existing VPN services can be enhanced, while new VPN services are fashioned to exploit the service pull of IP networks. From access to extranet, from local to international, and from wired to wireless, providers are building on their VPN foundations, crafting new types of VPN offerings with which to engage their customers. The service foundation of today’s VPNs not only augments the
architecture of a provider’s VPN framework but also provides a strategic market position through which to harvest new revenues. IP is the communications facilitator for the internetworking of virtual organizations. Globally and universally extensible across the Internet, IP is everywhere. The Internet keeps IP traveling faster and ever farther. VPNs keep it secure. Chapter 4, “Virtual Private Networks,” describes VPNs in more detail.
Optical Networks The era of data-pervasive traffic has arrived. To build bigger and faster communications pipes to transport the growing avalanche of data-oriented services, providers are constructing an optical superhighway. Optical networks have emerged as a relatively new science within telecommunications networking. By communicating with photonic light, optical networks are super-analog locomotives best suited for the carriage of digital data transmission. Optical networks provide an ultra-broadband capaciousness, with no theoretical speed limits, distance bounds, or spanwidth shortage. Extremely power efficient, lightweight, and virtually error-free, next-generation optical networks are moving from the core of national long-haul networks and metropolitan networks, even beyond the metro edge to reach into the last few miles of the customer domain. While the industry is stretching the global reach of fiber, it’s also widening the spectrum spanwidth and throughput of each individual fiber through wavelength division multiplexing (WDM). The capacity expansion of WDM, borne in the long-haul optical arena, is exploitable for metropolitan area optical network requirements requiring scalable bandwidth and differentiated services. WDM sends many colors of infrared light down a single fiber thread, drastically multiplying available bandwidth by every distinct wavelength, often referred to as a lambda. The basic physical premise of WDM is that the different optical wavelengths don’t interfere with each other, allowing them to cohabit the same fiber core. Providers are now pushing optical fiber networks ever closer to the customer interface. Over time, optical networks will become the most pervasive medium for all types of provider networks and is the Layer 1 broadband medium of choice. You learn more about optical networks in Chapter 5, “Optical Networking Technologies.” Chapter 6, “Metropolitan Optical Networks,” describes metropolitan optical networks, and Chapter 7, “Long-Haul Optical Networks,” describes long-haul optical networks.
Wireline Networks For well over 120 years, residential wireline has been a narrowband domain. Today, postderegulation, dial tone is a gateway to a very complex communications world. The business markets have been the growth and innovation engines of the wireline providers, especially
for the incumbent local exchange carriers with solid experience and a quality reputation. This market has supported the innovation of many options for business broadband. Digital Subscriber Line (DSL) has kept the traditional wireline industry alive in residential broadband. Providing a technique to carry voice and data services over the same twisted pair of wire, DSL has allowed ILECs to participate in the residential and small business broadband game. Multiple-speed varieties of DSL, referred to colloquially as xDSL, are drawing significant attention from implementers and service providers, because they hold promise to deliver high-bandwidth data rates to dispersed locations with relatively small changes to the existing Telco copper infrastructure. By accommodating multiple services on the same wire facility, it allows wireline providers an incremental fee opportunity for broadband-based data services and a platform to move into IP-based video. United States cable providers, turning over their technology for the purpose of interactive TV, found themselves in a position to lead the residential broadband market. IP technology has opened up new opportunities for cable operators. IP-based services command premium fees and thus improve profit margins. Cable operators build and upgrade their hybrid fiber coaxial (HFC) networks, offering high-speed Internet access to the residential market. With the business market representing a small percentage of the cable providers’ overall business today, these operators are seeking a triple play—to follow the delivery of video and data with voice. Currently, the wireline service providers intend to lead the speed race, staying out in advance of the value distinction and the hopefully insatiable communication desires of the hundreds of millions of residential and small business customers in North America and abroad. Leading with speed would essentially release the current bandwidth bottleneck between the nation’s businesses and the future computing and entertainment needs of a technologyenabled population. Following that with innovative, service-valued solutions and super customer service will solidify their influence. Collectively, the wireline providers want to leverage their capitalized infrastructures as the quintessential medium of choice for broadband services. They will no doubt do what they can to quantize, optimize, and publicize the merits of their precious metal: the underground of copper cages that scooter rambunctious little electrons back and forth. In the new era of wireline, they will compete not only amongst themselves, but also with service substitutions such as optical fiber, fixed and mobile wireless, and satellite—all trying to follow the IP and lambda switching paradigms to rainbows of revenue. You learn more about wireline networks in Chapter 8, “Wireline Networks.”
Wireless Networks One of the primary undercurrents of the new era of networking has been the unrelenting rise of wireless communications traffic. Wireless networks now cover the spectrum from cellular phones, to wireless Ethernet teleputers, to fixed and satellite wireless services.
Today, wireless mobility with data services and wireless local area network (WLAN) services are the primary technologies that are fueling double-digit growth year after year. These have the opportunity to converge, coupling a mobile LAN and a mobile WAN with blended integration, yielding advantages to both pedestrian wireless and vehicular mobility. Applications such as wireless Internet access, text messaging, digital image transfer, wireless gaming, wireless video, and continued wireless substitution for fixed wireline local and long distance are the key drivers of wireless services. The pursuit of an allinclusive, wireless personal device through which we can work, view, communicate, and entertain remains the essential impeller of wireless innovation. Mobile productivity is the necessity that is providing the thrust. Wireless networks are flourishing due to rapid growth in subscriber demand for functionally improved units and the latest mobility features. Wireless mobility is relatively in its infancy by telecommunications standards, approaching 30 years at best. The wireless LAN is not quite ten years old. The key to maintaining wireless growth is the rapid technological advancement of adaptable digital handsets and networks, and the development of open standards–based mobile applications. Less understood, but also key to sustaining this growth, is the efficient allocation and use of wireless spectrum for ever-present coverage. While the overall wireless market creates complexity, options are increasing for wireless providers to craft personable, mobile services that customers desire. Wireless manufacturers and wireless providers are preparing for increased mobile data usage in the years to come, as mobility and computing as well as voice and data come together to provide a seamless, untethered, spectrum-efficient, and robust mobility experience. With all of the wireless opportunities available, innovation and creativity will be the daily regimen of researchers and developers. A wireless handheld is unquestionably the ultimate device through which to deliver personal services. As more services are integrated, many of the upscale handhelds will become less and less disposable commodities and throw-aways of technological advance. Wireless will be best defined as the pursuit not only of customer loyalty, but also of partnering and relationships to form an ecosystem of mobile services that the customer will use. You learn more about wireless networks in Chapter 9, “Wireless Networks.”
Using Next-Generation Network Services Next-generation network services leap from a service-valued emphasis made possible by the appropriate application of next-generation network technology along with process optimization and cultural shift. Innovative technology, process, and a company’s culture each play a part in the delivery of a new-era communication service. Some examples of next-generation network services are
• Internet-access services
• VPN services at both Layer 2 and Layer 3
• Ethernet as metro area and wide area LAN extensions
• IP services including Layer 3 data routing, IP voice, and IP video
• Optical wavelength services
• Content, database, and application delivery services
• Storage and security services
• Managed network services
Notable of the preceding list is that many of these services reside at networking Layers 2 and 3, and also at Layers 4–7 for hosted applications. This is the essential distinction: nextgeneration network services transcend the physical layer at Layer 1, traditionally considered the heart of the provider transport model, and move upscale into Layers 2, 3, and beyond. Services are decoupled from transport as a result of IP-based any-to-any networking. Higher-margin services are now easily layered upon any type of transport. These types of services enhance the functionality, reach, and manageability of a network. Almost without question, these services are all broadband in nature. The burgeoning financial and information affluence of the age makes it a challenge to discriminate and sift only the best opportunities each day, and, therefore, time has become the most precious of all resources. The value of the customer’s time is becoming paramount, worthy of holding in high regard. The recognition of the customer’s time value should be the essence of any new service offering’s research, development, and justification efforts. Service is more than technology; it is, in fact, a unique blend of technology, process, and culture. Much like the development of a new medicine, this trio of elements must undergo multiple mixtures and strains of formula to achieve the targeted result and be widely effective. The measurement of service value will be increasingly calculated as a success ratio with the amount of time saved as the most important factor. The customer’s time is sovereign, and to the customer, service is king. Providers that are engaging in next-generation network services are doing so through the recognition and tailored exploitation of convergence trends. Seeking to rapidly market an expanded catalog of services, they are converging their technology platforms and network infrastructure as well as exploring selective convergence of various communications services.
Network Infrastructure Convergence Technological innovation is an important enabler of convergence. With today’s supervelocity of technology innovation, the traditional telecommunications market leaders face tough decisions as promising new technologies converge to form the next stage of industry evolution, and with it, pervasive competition. Infrastructure convergence for providers is about
network convergence, primarily the migration of many product-specific, purpose-built platforms toward packet-based network structures such as IP/MPLS. Provider networks have been tasked with supporting many network transport services such as wireline voice, private line time-division multiplexing (TDM) data, SONET/SDH, Frame Relay, ATM, wireless mobility, private IP with Internet access, and metropolitan Ethernet services. This represents perhaps from six to eight distinct network overlays, each with their own operations, administration, management, and provisioning (OAM&P) platforms. Even the packet-switched architectures of private IP, Internet access, and metro Ethernet are typically deployed as network overlays. This approach scales both complexity and cost and severely inhibits service integration. Infrastructure convergence seeks to consolidate all types of Layer 1 and Layer 2 services around and onto a common Layer 3, packet-switched core network for the delivery of integrated IP data, IP voice, and IP video services. Customers are demanding the integration of these services at their desktops, laptops, PDAs, and phones. It is now imperative to maintain that integration throughout the service provider’s regional, national, or global network. To enable infrastructure and network convergence, highly capable, highly reliable, easily manageable routing, switching, and optical platforms are needed to push service interface variety and selective intelligence to the edge of networks and interface that variety into a simplified core network with IP/MPLS packet infrastructures. This is the rock in the pond approach—dropping carrier-class IP/MPLS technology and platforms into the middle of provider networks, causing ripples of service-prolific intelligence toward the edges, embodied in the multiservice edge. Even purpose-built network platforms can be migrated toward the access layers until these technologies can be fully depreciated and retired. The benefits of a converged network infrastructure allow
• Simplified, single-protocol IP/MPLS core with multiservice edge
• Service richness, leverage, and speed to revenue
• Any Layer 2 and Layer 3 service, anywhere
• High-margin service convergence
• On-demand provisioning
• Scalable capacity for customer and revenue growth
• Reduced operational expense and complexity
IP is strong enough and IP-based routing and switching platforms are now reliable enough to execute a convergence directive of the provider’s network infrastructure. Converged IP networks seamlessly blend various technologies to create new business tools, leading to new applications, processes, and services that wouldn’t be possible with discrete networks. By handling all forms of electronic communication within a single packet-based IP infrastructure, benefits can include reduced capital and operational expenditures and
unique and exponentially better levels of customer service and user experiences. That’s why IP-based applications will continue to crash out of the enterprise and into the service provider space. It is natural to converge on IP’s efficiency. It is sensible to converge on IP’s usability. It is prudent to converge on IP’s cost savings. And for many, converged infrastructures are paramount to improving market opportunities for new IP services.
Services Convergence Next-generation network services are the new drivers of industry profitability. With any and all communication providers possessing the ability to participate in voice, data, and video telecommunications, service substitutions are relentlessly proliferating. Customer markets are widely fragmenting, sometimes into customer sets of one, and new service providers and new service offerings are pursuing them. Today, for a small to medium business customer, you potentially have up to two dozen providers from which to select options for local communications. With each provider bundling together multiple services and further segmenting within customer sets, the number of options reaches into the multiple dozens. If you add to that competition from cable providers, wireless providers, long-distance providers, local utilities providers, and so on, the possibilities boggle the mind. For example, wireline companies, particularly those with positions in wireless providers, are exploring the bundling and sharing of wireline minutes with wireless minutes, taking separate services and merging them together from a usability and billing perspective. Integrating wireline and wireless further, companies are offering single voice mail and wireless to wireline forwarding. Seeking to add broadcast quality video to their offerings, many telephony providers are partnering with direct broadcast satellite companies while they develop IP television. The result of these explorations and adventures represents a mathematically large number of business and consumer choices. As such, communications options and service substitutions are multiplying at a frenetic pace. Service convergence is also occurring through business convergence. Some technology companies and provider segments have already merged, such as Sprint with SprintPCS and Nextel, SBC with AT&T, and so on. Traditional wireline providers are using wireless divisions or ownerships to access high-growth markets and, in turn, finance new strategic builds. Cable operators are using convergence to add voice to their video and Internet data services. Some can consider acquisitions or mergers with service-based content companies to find mutual synergies and to place a service “spin” on their product offerings. Others might partner and multipartner to gain breadth and scale. Virtually all of the telecommunication titans are developing multiple services and then converging on the ones that show opportunity and promise. Communication services for end-user devices converge around the individual person, their automobile, and their home. Mobility applications are enjoying tremendous growth, feeding the furnace of wireless offerings with the convection of open, IP-based protocols. Increasingly used for both business and personal connections, IP services link these
different user spaces together. Providers are using convergence to deliver voice, video, and data via Layer 2 VPNs, Layer 3 VPNs, Ethernet, storage, and Internet into a services amalgamation that meets the needs of businesses and consumers. Services convergence provides multimedia services anywhere, and does it seamlessly over any access and device. Data is unquestionably the predominant driver of value. Swirling around data are other forms of technology, such as IP voice and IP video. Traditional voice and video have been there all along, but their creators and keepers were expectant that the rest of the industry would be content to surround their favorite technology, hopeful that their bread and butter would forever be the center of attention. The any-to-any computing paradigm of IP-based data has now become the crown prince around which the others seek favor. Proprietary networks naturally converge to open ones. Video has gone IP. Voice has gone IP, and voice over IP (VoIP) technology is finally enabling technology, business, and services convergence—the multiplexing of voice, video, and data onto a single IP infrastructure. Is it the best yet that it can be? No. Is it getting there? Yes, and as it does, it will be better than we ever imagined. Already available are cellular phones that switch to Wi-Fi phones using VoIP when you’re at home. This crucial, network convergence of a traditionally disparate infrastructure is leading to a converged services revival that accompanies us, no matter where we are or where we go. For today, telecommunications services encompass broadband Internet access; e-commerce; multimedia of various forms including digital music, digital video, digital voice, digital books, and digital photography; interactive gaming and entertainment; home networking and automation; personal security monitoring; medical conferencing; online education; teleworking; and so on. These services are available in devices large and small, both fixed and mobile. Consider that you could use a cell phone to remotely link to a home-based TiVo unit, and set it up to record a must-have video broadcast, verifying the first few minutes of the broadcast on your cell phone screen. This service convergence example and many others are available today. Tomorrow, communication services will ascend to the edge of fantastic.
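To make the voice-over-IP multiplexing above a little more concrete, the sketch below estimates the IP-layer bandwidth of a single voice call. It assumes a G.711 codec at 64 kbps with 20 ms packetization and standard RTP/UDP/IPv4 headers; these are common textbook assumptions rather than figures from this chapter, and compressed codecs or header compression change the numbers considerably.

# One G.711 voice call carried over RTP/UDP/IPv4 (assumed parameters).
codec_rate_bps = 64_000            # G.711 payload rate
packetization_s = 0.020            # 20 ms of audio per packet
payload_bytes = codec_rate_bps / 8 * packetization_s           # 160 bytes of voice per packet
header_bytes = 20 + 8 + 12                                      # IPv4 + UDP + RTP headers

packets_per_second = 1 / packetization_s                        # 50 packets per second
ip_bandwidth_kbps = (payload_bytes + header_bytes) * 8 * packets_per_second / 1000
print(f"Packets per second: {packets_per_second:.0f}")
print(f"IP-layer bandwidth per call: {ip_bandwidth_kbps:.0f} kbps")   # 80 kbps, versus 64 kbps of raw voice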
From Technology Push to Service Pull

Today, large businesses and enterprises have several years of IP networking experience and are using the open standards of IP networking to innovate successful network solutions. As these applications gain company acceptance and usage, customers need providers who can help them geographically scale. These sophisticated customers are looking for someone to deliver complex networking solutions and comprehensive packages with all the parameters that they need. They need to extend IP, Ethernet, and storage, and they are challenged with rapidly scaling their large IP networks while keeping them secure. They find value in service offerings that integrate and interface well with their own system architecture.

With today’s renewed emphasis on business fundamentals and financial return, large businesses and enterprises are looking to service providers to supply information up front regarding how investments in a service provider’s advanced network services can pay the return on
investment (ROI). They want to know if they can trust the service provider with their networking “jewels.” Whether customers can trust providers with their networking jewels is both a question and the crux of opportunity.

Many providers are relatively new to IP-based networking. If you consider that Cisco Systems, Inc. recently celebrated 20 years as a company and that deployment of IP-based routers began in earnest at the enterprise level, many providers are relative newcomers. Some providers, arguably, built services on IP networks in the 1980s, but most have only moved IP-based services to the mainstream since the mid-1990s, staking claims as a result of the Internet “gold rush.” Entering IP services as initial Internet service providers (ISPs) or as new divisions within established incumbents, many providers were challenged with building IP skills and propagating them throughout their respective companies. With the help of their heritage of technology orientation and their technology suppliers, providers have rapidly climbed the IP learning curve, and most are moving into advanced IP services that are on par with, and often ahead of, those of their enterprise customers.

New greenfield providers with strong financial positions are rather intriguing to enterprises, perceived as innovators in their respective fields of specialty. The larger, established providers are examined with a bit more scrutiny as the legacy of technology push lingers on many balance sheets, and a track record of slow-to-market innovations overshadows their respective reputations with customers. But that was yesterday. In the new era of networking, providers must be able to demonstrate 21st century networking awareness, operations, management and reporting skills, and above all, service-valued market urgency. Service substitutions abound, and in today’s hotly contested environment, the innovative emphasis changes from the traditional moderation of risk to the mitigation of missing whole service and market opportunities. Next-generation network services and strategies are increasingly launched knowing that some adjustments will need to be made on the fly—in real time.

The transformation from yesterday’s technology push organizations into tomorrow’s service pull innovators is perhaps the most important fundamental for new-era service providers. Next-generation network technologies should be properly blended with methods of service differentiation to increase revenue through innovation and to create distinctive value in customer service. IP-centric technologies can help, astute convergence of wireline, wireless, and optical networks and services can accelerate, and customer centricity and satisfaction might sustain; but the rapid internalization and continuous execution of service pull craftsmanship is at the heart of 21st century ascendancy.
Chapter Summary

The last ten or so years have been the most dynamic ever in the past 120 years of telecommunications and service provider history. A new, urgent demand for data has created an information gigawake, leaving behind voice’s one hundred-plus-year supremacy. IP-based,
data-centric designs have become essential. With a storm surge of technological advances, service substitutions are mounting a tidal wave of business and consumer communications options. With the Internet as the centrifuge of a communications and digital economy, revenue opportunities are swirling around broadband, IP, optical, and wireless-enabled applications. The future of telecommunications has already been inexorably changed, as new networking services are capable of reaching markets and customers worldwide. With the fences down, competition stampedes from anywhere and from anyone. Each fundamental type of provider network is facing convergence of voice, video, and data on its principal architecture. The lines of demarcation are blurring as service providers are increasingly offering total voice, data, video, and Internet communications solutions that include service provider owned and managed equipment services, web hosting, content delivery, storage and application services. With the walls breached, many service providers are considering the benefits of consolidating and partnering with each other in an attempt to
• Augment end-to-end solution offerings
• Amplify their share of the customer wallet
• Expand their markets
• Differentiate their service at the storefront
• Globalize the storefront’s geographic reach
• Engage customers with next-generation network services
Bottom line, service providers must do the hard but rewarding work of optimizing their technology, processes, and culture in order to transition from technology push to service pull, targeting a 10x improvement. Service-valued, next-generation networks are becoming the new directive. New advances with internetworking have arrived to overcome the defining scarcity of communications bandwidth and to unleash an abundance of global knowledge, intrinsic information, and the availability of time. You cannot ignore the fascinating technology and application advances in IP, optical, wireline, and wireless that are both fueling and feeding off of the global, seemingly galactic Internet. The Internet itself is an excellent example of service pull, using the pervasiveness of IP to become the premier communications utility. We’ve imagined even more. IP-centric, next-generation networks and services are redefining communications opportunity for all of those willing to plan, execute, and succeed in the new era of networking.
End Notes

1. Telecommunications Industry Association. 1998. Press Release, “1998 Market Review and Forecast.” P.A. Release 98-09/2.11.98. http://www.tiaonline.org/media/press_releases/1998/98-09.cfm
2. Telecommunications Industry Association. 1999. Press Release, “1999 Market Review and Forecast.” P.A. Release 99-18/02.24.99. http://www.tiaonline.org/media/press_releases/1999/99-18.cfm
3, 4. Telecommunications Industry Association. 2000. Press Release, “2000 Market Review and Forecast.” P.A. Release 00-13/02.08.00. http://www.tiaonline.org/media/press_releases/2000/00-13.cfm
5. Telecommunications Industry Association. 2002. Press Release, “2002 Market Review and Forecast.” P.A. Release 02-33/03.21.02. http://www.tiaonline.org/media/press_releases/index.cfm?parelease=02-33
6. Telecommunications Industry Association. 2003. Press Release, “2003 Market Review and Forecast.” P.A. Release 03-14/02.25.03. http://www.tiaonline.org/media/press_releases/index.cfm?parelease=03-14
7. Gilder, George. 2002. Telecosm: The World After Bandwidth Abundance. Simon & Schuster.
8. Gartner Group. http://www.gartner.com/Init.
9, 10. Internet Software Consortium. ISC Internet Domain Survey. http://www.isc.org/index.pl?/ops/ds/
11. CTIA. http://www.CTIA.org.
12. Telecommunications Industry Association. 2005. Press Release, “2005 Market Review and Forecast.” P.A. Release 05-05/02.10.05. http://www.tiaonline.org/media/press_releases/index.cfm?parelease=05-05
Resources Used in This Chapter

Cisco Systems, Inc., at http://www.cisco.com
TeleGeography free resources at http://www.telegeography.com/resources/index.php
How Stuff Works at http://www.howstuffworks.com
Federal Communications Commission at http://www.fcc.gov
Corning at http://www.corning.com
This chapter covers the following topics:
• IP Past, Present, and Future
• IP Network Convergence
• Local IP Networks: LANs
• Long IP Networks: WANs
• Mobile IP Networks
• Global IP Networks
• Beyond IP
CHAPTER 2

IP Networks

Your digital consciousness. Once upon a time, around the 1960s, you could turn it off as you left work at the end of the day. Lumbering mainframes consumed so much power—a scarce resource of the time—that a common exit procedure was to power down after the work cycle, saving not only valuable kilowatts, but also the longevity of vacuum tubes and other precious electronic life.

Prior to 1960, information was customarily stored in analog form—photographic papers and phonotubes, audiotapes and vinyl platters, videotapes and millimeter movie projectors, 24-track music recordings, and so on. To use any of these items required the application of power, followed by an acrobatic thread and load, or otherwise positioning of a pickup stylus to retrieve text, sight, and sound from these conventional forms of analog storage.

Although many large corporations of the 1960s had computers and computer networks that punched and pulsed, printed and indexed and spun information in decidedly digital form, it was seemingly about 1969 when someone left the office, turning the lights out but forgetting about their computer. The darkness of night presented a suitable contrast to the blinking lights of instruction register readouts, processor cache contents, and magnetic core memories. A quick check of the serial line solicited a response from the other end, another computer that was apparently burning the midnight oil. For the next four years, an interconnected computer collective stayed up all night, every night, growing to about 2,000 data points by 1973.

In 1974, a new digital network protocol by the name of Internet Protocol, or IP, was born and introduced to the central processors of the collective, quickly gaining favor as the language of choice for sharing and swapping essential information from each of the cataloged, computer hives of semiconductor storage. The ease of IP networking increased the core group of hosts in the collective, determined to operate around the clock so as not to miss new, developing information. New information became more important than saving power. The computing didn’t quit. The sharing didn’t stop. The transmitting didn’t terminate. The receiving didn’t rest. A low-level digital consciousness, tired of sleeping, began to seep across America and soon across the sea to England.

The early Advanced Research Projects Agency Network (ARPANET) became the Internet, a network of networks fit and joined together using IP. In short order IP became the accredited surgeon, stitching together a fabric of digital matter in which to store and retrieve
the creativity, ingenuity, and imagination of our minds. Instructing the interconnected computers to keep watch and vigil over our newfound digital consciousness, we delightedly enjoyed the best sleep in years. Through these humble beginnings, IP networks grew from defense, research, and academia to commercial and enterprise. Local, long, mobile, and global IP networks are now universally pervasive and increasingly inhabitants of homes. Today, people seek to stay connected with the Internet as they move and revolve around a World Wide Web of knowledge and opportunity that is pulsing on the backbones of IP networks. Indeed, IP networks are the beating hearts of our developing, digital consciousness.

This chapter introduces IP networks. Many excellent references about IP networking exist, so the purpose of this chapter is to express those benefits, features, and products that appeal most to service providers. The chapter also highlights the service orientation for IP networks, because today’s opportunities for local, long, mobile, and global networking are certain to be more customer-oriented than in the past.
IP Past, Present, and Future

IP networks are built on the fundamental cinder blocks of the Internet protocols. The Internet protocols make up the world’s most popular open-system protocol suite, because you can use them to communicate across any set of interconnected networks and disparate computer systems. The Internet protocols are equally suited for LAN (local) and WAN (remote) communications. This allows for a convergence of networks, which has been the defining strength of IP networking. IP is one member of the multilayer suite and is a Layer 3 network protocol containing the addressing structure and control information necessary to allow IP packets to be routed from an origin host to a destination host. Figure 2-1 depicts the Internet Protocol Suite as compared to the Open System Interconnection (OSI) Reference Model.

This section introduces the influence of IP on network evolution and the two versions of IP: IP version 4 (IPv4) and IP version 6 (IPv6).
IP Influence and Confluence

The power and influence of IP lies in its grass roots approach. Unlike proprietary network protocols that were self-created for the purposes of technology push and market share, IP began as a way to perform technology-share, pooling talent and innovation with significantly less R & D investment than the leading computer manufacturers. Distributed freely as an open systems networking protocol, the ensuing collaborations created a groundswell of innovation that made IP networking good enough to integrate and converge disparate computer networks.
Figure 2-1 The Internet Protocol Suite and OSI Reference Model (Source: Cisco Systems, Inc.)
[Figure: the OSI Reference Model layers (Application, Presentation, Session, Transport, Network, Data Link, Physical) mapped against the Internet Protocol Suite: NFS, FTP, Telnet, SMTP, and SNMP at the upper layers; XDR and RPC; TCP and UDP at the transport layer; IP with its routing protocols and ICMP at the network layer; ARP and RARP at the data link layer; the physical layer is not specified.]
IP grew up in the defense, research, and academic environments and became the de facto network protocol of choice for “piecing together” the communication networks of minicomputers. As computing became more affordable and decentralized, computer and communication budgets migrated from the divisional level to the departmental level of many enterprises. As the pace of business competitiveness increased, the demand for new application computing solutions quickly outgrew the central mainframe application development factories, leaving business units with little recourse but to build them on their own. In a position to self-determine computing solutions for departmental needs, business managers were empowered to make purchases of personal computer products and services. By connecting PCs into local area networks (LANs), departments enabled the sharing of files, documents, and internal correspondence. What was previously point-to-point communication from dumb terminals to intelligent mainframes became augmented with an overlay of any-to-any networking between intelligent peers, leveraging collaboration, ingenuity, and innovation. LANs, departmental applications, and client/server computing were born.

It was at this important juncture that IP, and for that matter Ethernet, moved beyond research, defense, and academia and into the “green space” of commercial markets. IP and the multiprotocol router became the fundamental enabler for interconnecting departmental LANs. LANs were functional and affordable, and using IP to integrate them was smart investment protection. LANs, bolstered by the rapidly improving multiprotocol router, spread through the enterprise. For the first time, central IT managers understood the intrinsic benefits of router-enabled IP networking. When Cisco Systems, Inc., further introduced Systems Network
Architecture (SNA) protocol over IP networking, this signaled an inflection point for IP networking. With both higher-speed IP networks and lower-speed SNA multidrop networks working in parallel in most enterprises, it was time to seriously consider the merits of operating both. The influence of IP was powerful, and the Cisco SNA over IP capabilities gave IP a disruptive march up market on IBM’s SNA data transport business. The separate networks came together in a networking confluence, with IP as the dominant transport protocol. The cost savings, time savings, convergence options, and innovation engine of IP networks cannot be ignored. IP is a system-level enabler, a core technology, and foundation on which you can build many other systems. Designed and implemented as a low-cost and extremely efficient communications vehicle, IP is the collaborative standard, making IP the most widespread network protocol suite in use in the world. Although early versions of IP began in the 1970s, IPv4 is the most popular version, enjoying worldwide use. A more recent standard is IPv6, which is an extension to IPv4’s capabilities. Of the two, IPv4 is the most pervasive to date.
IP Version 4

Version 4 of the Internet Protocol addressing architecture was considered a particularly ambitious endeavor, because it targeted a globally unique addressing architecture. IPv4 reached the Internet in the late 1970s and was standardized in RFC 791 in 1981. With addresses specified at 32 bits, IPv4 allows for a theoretical 4.3 billion distinctive addresses in order to serve its purpose as a unifying address structure for a growing semipublic network utility. The addressing structure is divided into classes, primarily distinguished by the number of bits allocated to each of the network/host delineations. The IPv4 addressing structure is presented as the following:
• Class A space—An 8/24-bit structure with up to 127 networks, each with up to 16,777,216 host identities.
• Class B space—A 16/16-bit structure with up to 16,384 networks, each with up to 65,536 host identities.
• Class C space—A 24/8-bit structure with up to 2,097,152 networks, each with up to 256 host identities.
• Class D and E space—An 8/24-bit structure with up to 30 networks between them, half allocated for multicast and half held in reserve, representing about one eighth of the overall IPv4 address space.
Another way to consider the IPv4 addressing structure is through the following percentage of overall pool allocation:
• Class A equals 50 percent of total IPv4 space and uses networks 1.0.0.0–126.0.0.0.
• Class B equals 25 percent of total IPv4 space and uses networks 128.0.0.0–191.254.0.0.
• Class C equals 12.5 percent of total IPv4 space and uses networks 192.0.1.0–223.255.254.0.
• Classes D and E equal the remaining 12.5 percent of total IPv4 space, using networks 224.0.0.0–239.255.255.255 and 240.0.0.0–254.255.255.255.
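To make the classful boundaries above concrete, here is a minimal Python sketch (not drawn from any Cisco tool or from the book) that classifies an address by its first octet; the function name and sample addresses are purely illustrative.

```python
# Minimal sketch of the classful IPv4 rules described above; the function
# name and structure are illustrative, not part of any vendor API.
def ipv4_class(address: str) -> str:
    """Return the classful category implied by the first octet."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "Class A (8-bit network / 24-bit host)"
    if first_octet < 192:
        return "Class B (16-bit network / 16-bit host)"
    if first_octet < 224:
        return "Class C (24-bit network / 8-bit host)"
    if first_octet < 240:
        return "Class D (multicast)"
    return "Class E (reserved)"

if __name__ == "__main__":
    for sample in ("10.1.2.3", "172.16.5.9", "192.168.1.1", "224.0.0.5"):
        print(sample, "->", ipv4_class(sample))
```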
With these allocations as a backdrop, it is easy to understand how publicly routable IP address space is in short supply. The first 127 Class A networks are allocated to 127 theoretical businesses, leaving essentially 37.5 percent of the IP address space to be allocated to the rest of the world’s corporations and businesses. Most of the Class B space was quickly allocated, leaving the last 12.5 percent, the Class C networks, as the “crumbs” of IP public addressing. Other methods were then needed to subdivide remaining address space, allocate private address space, and so on. The terms subnetworking, subnetting, and subnet are variable-length subnet mask (VLSM) terminology that are interchangeably used to denote an organization’s further breakdown of an assigned classful network number into multiple subnets to apply to their computing structure. Even with subnetting, the resulting address efficiency of this scheme is less than optimal in practical implementations, because building structures, subnet mask allocation, and IP addressable hosts are not physically deployed in a manner in which you can maximize the number of hosts per IP subnetwork. Due to the limitation of 32 bits and the use of the dotted decimal system to simplify address administration, the resulting practical usability of IPv4 addresses falls far below 4.3 billion hosts to estimates of about 250 million usable addresses. IPv4 address exhaustion has been predicted for many years, and fortunately the Internet community has postponed this watershed event through the use of address conservation techniques such as the following:
• Classless interdomain routing (CIDR)—The aggregation of classful networks that are represented with a higher-level classless prefix, for efficient summarization and smaller Internet route processing. CIDR is used within the Internet.
• Variable-length subnet masking (VLSM)—The ability for an organization to take an assigned classful network and optimize its addressing structure to its specific network and host needs. VLSM is used within an organization.
• RFC 1918 private addressing—Used for organizations that cannot obtain public IP network space, or desire internal IP addressing security from the Internet, or want flexibility in their choice of internal IP class addressing scheme (A, B, or C). Private addressing can be freely chosen by an organization as network 10.0.0.0/8 (Class A), network 172.16.0.0/12 (Class B), and 192.168.0.0/16 (Class C). These network addresses aren’t publicly routable on the Internet.
• Network address translation (NAT)—The ability to translate private IP internetwork addressing (RFC 1918 addresses) into public IP internetwork addresses that are routable through the global Internet. Private addressing is represented as network 10.0.0.0/8 (Class A), network 172.16.0.0/12 (Class B), and 192.168.0.0/16 (Class C). This allows many private IP addresses within an organization to use very few public IP addresses to reach and route through the global Internet.
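The following short Python sketch, using only the standard ipaddress module, illustrates the ideas behind these conservation techniques: RFC 1918 private space, VLSM-style subdivision of an assigned block, and CIDR-style summarization. The prefixes and the subnet plan are invented for illustration and are not drawn from the book.

```python
# A small sketch of RFC 1918, VLSM, and CIDR using Python's ipaddress module.
import ipaddress

# The three RFC 1918 private blocks are recognized directly by the library.
for block in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    print(block, "private:", ipaddress.ip_network(block).is_private)

# VLSM: carve one assigned /24 into unequal subnets for different LANs.
assigned = ipaddress.ip_network("192.168.10.0/24")
big_lan, remainder = assigned.subnets(prefixlen_diff=1)     # two /25 halves
small_lans = list(remainder.subnets(new_prefix=27))         # four /27 subnets
print("Large LAN :", big_lan, "-", big_lan.num_addresses, "addresses")
for lan in small_lans:
    print("Small LAN :", lan, "-", lan.num_addresses, "addresses")

# CIDR: four contiguous /24s summarize into a single /22 advertisement.
routes = [ipaddress.ip_network(f"172.20.{n}.0/24") for n in range(4)]
print("CIDR summary:", list(ipaddress.collapse_addresses(routes)))
```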
Using these techniques, the longevity of IPv4, in the absence of significant bursts of address consumption, is anticipated to last beyond the year 2010 and potentially as far as 2031 or even 2037. Current estimates say a little over two billion IPv4 addresses are unallocated or held in reserve and are a suitable buffer that provides a confidence of probability in the above estimates.1 One example of a significant burst of address consumption is a breakaway use of IP addresses for mobile applications such as handheld phones, personal digital assistants, pocket computers, automotive mobility and telemetry, and so on. To address the long-term requirements for globally unique IP addressing, the IETF designed the 128-bit IPv6 addressing structure, which you learn about in the next section.
IP Version 6

In the next few years, the rising number of portable devices, home area networks, peer-to-peer applications, networked automobiles, and military applications will place enormous demand on IP addressing requirements, Internet-wide. Also consider that many of the telephones in the world, both fixed and mobile, might one day require globally significant IP addresses. The need for scalability and, most of all, global reachability is considered vital to the future of internetworking.

IPv6 increases the IP address scheme from 32 bits to 128 bits, ensuring the availability of IP addresses into the next few decades, perhaps beyond our lifetime. The 128-bit addressing structure of IPv6 provides a great number of addresses and subnets, specifically 3.4 × 10^38 end points. That’s 340,282,366,920,938,463,463,374,607,431,768,211,456 to be exact. This is enough to provide every user with multiple, global IP addresses. Using the first 64 bits to represent the network identifier and the second 64 bits for the host identifier, IPv6 removes the concept of an address class system such as IPv4’s A, B, C, D, and E classes.

Figure 2-2 shows a comparison of the IPv4 and IPv6 headers. What is most notable at this point of comparison is the difference in bits that are allocated to the source and destination address fields of IPv4 (32 bits each) compared to the source and destination address fields of IPv6 (128 bits each).
Figure 2-2 Comparing IPv4 and IPv6 Headers (Source: Cisco Systems, Inc.)
[Figure: the IPv4 header fields (Version, IHL, Type of Service, Total Length, Identification, Flags, Fragment Offset, Time to Live, Protocol, Header Checksum, 32-bit Source Address, 32-bit Destination Address, Options, Padding) shown alongside the IPv6 header fields (Version, 4 bits; Traffic Class, 8 bits; Flow Label, 20 bits; Payload Length, 16 bits; Next Header, 8 bits; Hop Limit, 8 bits; 128-bit Source Address; 128-bit Destination Address), with a legend marking fields kept from IPv4, fields not kept in IPv6, fields whose name and position changed, and fields new in IPv6.]
IPv6 reintroduces end-to-end security and quality of service (QoS) features that are not always available through a NAT-based network. Peer-to-peer applications don’t work well through NAT, so IPv6 is of immediate benefit. In addition to meeting the worldwide demand of globally unique IP addresses, IPv6 also improves networking efficiency through the following:
• Larger address space for global scalability
• Embedded security with mandatory IP Security (IPSec) implementation
• Enhanced support for Mobile IP and mobile computing devices
• Autoconfiguration, duplicate address detection, and plug-and-play support
• Increased number of multicast addresses
• Simplified packet headers for efficient routing and packet processing, taking advantage of 64-bit computer architectures
• Hierarchical network architecture and policies for routing efficiency and deeper route aggregation
• Coexistence and compatibility with IPv4 using features such as IP dual stack
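As a rough illustration of the scale and notation involved, the sketch below uses Python's ipaddress module with the RFC 3849 documentation prefix (2001:db8::/32); the addresses shown are examples only and are not taken from the book.

```python
# A brief sketch of the IPv6 notions above using Python's ipaddress module.
import ipaddress

print("IPv6 address space:", 2 ** 128, "addresses")

# A typical 64-bit network prefix / 64-bit interface identifier split.
lan = ipaddress.ip_network("2001:db8:acad:1::/64")
print("Prefix:", lan, "holds", lan.num_addresses, "interface IDs")

addr = ipaddress.ip_address("2001:db8:acad:1::1")
print("Compressed:", addr.compressed)
print("Exploded:  ", addr.exploded)

# Dual stack in practice is simply a host holding both address families.
dual_stack = [ipaddress.ip_address("192.0.2.10"), addr]
print("Dual stack:", [f"IPv{a.version} {a}" for a in dual_stack])
```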
The 6Bone, created under the IETF’s IPng (next-generation) project, was the first Internet-wide IPv6 virtual network. The 6Bone, layered on the existing IPv4 Internet backbone, has been used for initial IPv6 protocol testing, validation, and IPv6 application development. Having served its usefulness, the 6Bone is phasing out. In Europe, the 6NET Internet research project is being deployed to test IPv6 in realistic network conditions. A similar national research project in the Netherlands, known as SURFNet5, is underway to deploy and test IPv6. Many such service providers have already applied for and received registered IPv6 address space.

IPv6 is used in production networks. A shortage of global IPv4 addressing space in late-developing regions such as Asia Pacific has accelerated the adoption of IPv6 address assignments and usage. For example, in Japan, network equipment must support IPv6 capabilities today.

Cisco Systems has completed a multiphase development effort to integrate IPv6 functionality into the Cisco IOS software. This effort began with the first commercial release of IPv6 functionality in May 2001 with Cisco IOS Software Release 12.2T. Table 2-1 is a simple chronology of the IPv6 introduction into the Cisco IOS.
Table 2-1 Cisco IOS IPv6 Releases

Cisco IOS Software Release | First Customer Ship | IPv6 Feature Applicability
12.2T | May 2001 | Technology development
12.0S | November 2001 | Cisco 12000 Series. Service provider infrastructure
12.2S | January 2003 | Service provider and enterprise, Layer 3 switches
12.3 | May 2003 | Mainline release for general production
12.3B | August 2003 | Broadband access
12.3T | October 2003 | Technology development
Adding IPv6 support is not as simple as adding just the IPv6 protocol and the IPv6 addressing architecture to the IOS software. There are at least 80 distinct features of IPv4
that must be adapted to IPv6 functionality and deployed into production-ready IOS software. A few of the IPv6 features include the following:
• Internet Control Message Protocol version 6 (ICMPv6)
• Dynamic Host Configuration Protocol version 6 (DHCPv6)
• Remote Authentication Dial-In User Service (RADIUS) Internet Protocol version 6 (IPv6) AAA
• Simple Network Management Protocol (SNMP) over IPv6
Services such as the following must adapt to be IPv6 compatible:
• Multicast
• Tunneling
• Data link layer protocols
• Quality of service
• Switching services (CEFv6)
• Mobility services
IPv6-capable routing protocols such as the following are also essential to create robust IPv6 routing environments:
• RIPv6
• ISIS for IPv6
• OSPF for IPv6
• EIGRP for IPv6
• Multiprotocol BGP extensions for IPv6
Security features such as an IPv6 firewall are also required. Just as IPv4 is often considered the umbrella terminology for a collection of various Internet protocols and services, IPv6 must replicate the same functionalities while maintaining backward compatibility with IPv4 protocols. Some IPv6 features are also being ported into hardware acceleration technology. Cisco has continued delivering these and additional IPv6 features throughout 2005 and beyond. To maintain currency with IPv6 development within Cisco products, you can find additional information at http://www.cisco.com/ipv6.

The anticipated rollout of wireless IP data services is considered the primary driver of IPv6. The overall market adoption of IPv6 will be determined by the architecture’s ability to accommodate Internet growth, new mobility applications, and new IP services.
IP Network Convergence

Voice, video, and Internet data must come together. By standardizing various types of data—formerly associated with entirely separate technologies—IP provides a powerful solution. A converged IP network creates the foundation for greater collaboration, opening new ways to work and interact, simplifying network management, and reducing capital and operating costs. Converged networks are fueling the development of an array of dynamic applications, such as e-learning, unified messaging, and integrated call center and customer support systems.

Getting data into the hands of users and decision makers is essential for key decision making and prompt customer service. Extending the benefits of computing, data on an individual PC, laptop, or handheld is just as important to the company as is data on a centralized mainframe. With data and computing distributed across an enterprise and its users, any lack of bandwidth for data transport creates electronic gulfs between effective computing and desired productivity curves. Data is further diverged across multiple computing platforms, storage types, and network facilities. The ability to connect all data points together into a high-speed computing backbone is the job of the only protocol capable of achieving companywide data convergence. IP networking is the convergence protocol of choice.

IP is the dynamo of network convergence and service creation, extending productivity benefits, service variety, and innovation into the start of the 21st century. From local to long, from mobile to global, IP is unifying the convergence of networks while facilitating the purposeful and appropriate combination of data.
Local IP Networks: LANs

Traditionally, service providers have had very little involvement with local IP networks. As mentioned previously, IP reached the commercial market en masse in the early 1990s, primarily from a departmental, grass roots approach, through sections of the enterprise that didn’t normally transact business through provider relationships. Local IP networks were separated from providers via wide area network (WAN) points of demarcation. Providers with voice switching and data transport business models were relatively immature in IP technology, having very little to offer if, indeed, an IP market opportunity did exist. Today, and likely for the next few years, local IP networks will largely remain under the purview of company IT organizations.

Yet a potential shift is occurring. As companies and state and local governments increasingly reevaluate core and context business approaches, many will seek to out-task WANs and eventually LANs. Many companies are requesting Layer 2 and Layer 3 Virtual Private Network (VPN) services (covered in Chapter 4, “Virtual Private Networks”), a potential precursor to future opportunities in provider-managed local IP networks. Many providers that are pushing into the managed services arena through
provider-owned, Layer 3 customer premise equipment (CPE) are getting their first glimpse of local IP networks, as the provider-owned CPE is now the point of demarcation between the provider and the customer’s LANs. Additionally, the residential market is perhaps the initial opportunity for provider-managed local IP networks, as consumers network their homes and residences with a number of IP devices and LAN switches.

Small to medium businesses, operating in a more competitive climate than ever, need complex internetworking solutions to compete and expand. These developing organizations are well-versed in core versus context strategies and will largely look to providers for value-enhancing solutions and managed services. For providers, understanding local IP networks is increasingly important, because these networks support the areas that harbor the application context of an organization. Understanding customer applications and communication needs will allow providers to better craft value-distinctive network services that companies and consumers will buy.

Local IP networks matured beyond proprietary technologies within enterprise departments to embrace more open standards, leading to their contributions to mission-critical computing for the organization. Key catalysts were the price/performance of Ethernet, the convergence of IP routing, and the advent of LAN switching.
From Proprietary to Open and from Green Space to Mission Critical

The protocol soup of LANs was marshaled and integrated through the use of IP and multiprotocol routers. Any-to-any computing with LANs became a highly scalable, suitable substitution for proprietary PC networking solutions. While intra- and interoffice communications were the first to benefit from local IP networks, many companies such as Microsoft, Sun, Oracle, Sybase, and others actively engaged in defining the application development software and database tools required to drive IP-based applications. In an effort to maintain the open systems theme, these high-end, client/server–based tools provided open, flexible architectures to bring systems application development and delivery to IP infrastructure. These efforts enabled the creation of flexible, robust, value-added IP network environments capable of running mission-critical environments. These capabilities enabled enterprises to run strategic data applications, first in LANs and campuses, and then across metropolitan enterprise networks to the long IP networks and global IP networks beyond.
LANs are the gateways to distributed data. LANs are made up of physical media, LAN protocols, network operating systems (NOSs) and the PCs, servers, printers, and other computing devices that they are designed to interconnect. Despite their initial absence of IP support, LANs from Novell, Apple, and 3Com were some of the first to find success in enterprise departments. Later, Microsoft would leverage its
personal computer office productivity applications into server-based versions and database systems, joining the list of LAN NOSs. Physical LAN technology decisions were originally an afterthought, as departments were really purchasing PC productivity solutions—that is, products that were focused on the specific applications and communications messaging needs of the business. PCs were capable of sending data at high speeds, and LAN technology was the best solution available to couple PCs with their higher-speed sources and repositories of data. Many disparate LAN topologies were budgeted, procured, and installed department by department, location by location. When departmental computing needs changed from plan and build to operate and maintain, department managers quickly tired of LANs, PCs, printers, and server administration and maintenance, seeking relief by letting central information technology (IT) departments assimilate and provide those services. A natural progression to multidepartmental computing extended the benefits of communications sharing and productivity. To achieve those gains, the introduction of the multiprotocol router addressed multidepartmental connectivity, often carrying these disparate LAN protocols within IP—the open, connectionless protocol that added intelligence, reliability, and scalability to LAN internetworking. With the recentralization of networking decisions within IT departments, company LANs were interconnected via multiprotocol routers within buildings and campuses, and extended to WANs to leverage productivity enterprise-wide. LANs are a basic type of network structure that connect computers in a home, single office, department, or building. The appearance of local area networking followed the personal computer and personal productivity wave first throughout enterprises and then into small business. Personal computer productivity found strength in numbers by swarming around shared printers; file, print, and application servers; and communications gateways. By organizing PCs, servers, and printers into LANs using a hub-and-spoke wiring approach, information could be shared and leveraged department-wide.
LAN Technologies

Early LAN physical technologies included 10BASE5 and 10BASE2 coaxial bus-type Ethernet, Arcnet, Token Ring, and Token Bus, to name a few. When 10BASE-T Ethernet technology reached the market, many of Ethernet’s early scalability issues began to dissipate. 10BASE-T brought centrally wired, shared Ethernet hubs, allowing for the use of category 3 wiring rather than expensive coaxial cable. This also permitted a structured wiring design that would eventually facilitate connection of different LANs through backbone LAN technologies such as Fiber Distributed Data Interface (FDDI), Layer 2 bridges and, eventually, Layer 3 multiprotocol routers.

Of the LAN technologies, both Ethernet and Token Ring became the primary physical layer LAN technologies for enterprises. IBM had chosen Token Ring as its LAN technology and introduced both 4 Mbps and 16 Mbps shared-network versions. Many enterprises were
IBM-centric, allowing Token Ring the early lead in LAN technology. When 10 Mbps shared Ethernet was enhanced to 10 Mbps switched Ethernet, Ethernet and Token Ring were operating on a fairly even par. When 100 Mbps switched Fast Ethernet entered the market, Ethernet seized market share and even today continues a relentless outpacing of LAN technology competitors.

While Token Ring and Ethernet are examples of Layer 1 LAN technologies, LANs also require networking protocols at Layer 2. Because early PC networking purchases were justified primarily around a departmental application solution, specific LAN technologies and protocols were just part of the underlying connectivity solution. Many LANs became islands of information, such as Novell with the Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) protocols and Apple with AppleTalk protocols, in addition to LAN systems that used Ethernet, Xerox, and DECnet protocols. Notably, each of these proprietary protocols was optimized for its purposes but developed within manufacturer and market silos; they were not designed to easily integrate or communicate with the other. In addition, the continued use of high-speed LAN media was assumed, so there was little emphasis on conserving signaling bandwidth overhead and acknowledgment timers, leaving each unique protocol with rather “chatty” characteristics during operation. Even more detrimental was that each protocol’s addressing scheme was locally significant. As such, each NOS’s native protocol had very limited scalability within the larger campus environment as well as beyond. They would ultimately depend on the IP multiprotocol router to achieve such expansions.

LANs aren’t productive based solely on hardware technology and Layer 2 protocols. These enable LAN connectivity for PCs, but additional software was needed to provide LAN communication. Software technology such as Novell Netware, Banyan Vines, LAN Manager, OS2, and Microsoft Server were important overlays and represent many of the NOSs. Originally, many of these NOSs used specific and often proprietary communication protocols to enable file, print, and application sharing with client PCs. Multiprotocol routers were then required to integrate the various communication protocols of the NOSs, as many different varieties could be found within a single organization. Figure 2-3 shows a picture of the Cisco Advanced Gateway Server (AGS), a multiprotocol router introduced in 1989 by Cisco Systems.

Many client/server solutions were often built over an underlying NOS. For example, Groupwise is layered upon the Novell Netware LAN NOS, and Microsoft Outlook is most often spread upon the Microsoft Server suite of solutions. Most of the popular NOSs today, such as Microsoft Server, Novell Netware, Unix, Linux, and others, can use the Transmission Control Protocol (TCP)/IP protocol stack at the LAN level. Ascending to this popular Layer 3 protocol is a must-have for organizations seeking to structure, integrate, scale, and efficiently manage their LAN environments and business-critical NOSs.
Figure 2-3 The Cisco Advanced Gateway Server (Source: Cisco Systems)
[Figure: photograph of the AGS multiprotocol router.]
LANs and their NOSs brought reconvergence to wandering data, which had for many years diverged and replicated onto standalone PCs, servers, and localized LANs and minicomputers. Both then and now, routing and switching vendors, such as Cisco Systems, design and deliver effective solutions to aggregate, integrate, and operate LAN technology with scalability, manageability, and user transparency. LANs enable organizations to contribute and distribute important data while allowing organizations to become network centric once again. Of all the solutions in the local networking space, Ethernet, IP routing, and LAN switching are significant enablers for today’s local IP networks.
Ethernet—From Zero to 10 Gigabits in 30 Years

Born in 1973, Ethernet is a simple, probabilistic network technology that continues to beat the odds of its deterministic competitors. Outpacing Token Ring, then FDDI, and a better price/performer than Asynchronous Transfer Mode (ATM), Ethernet is a death-defying technology that, according to its inventor Bob Metcalfe, “works in practice but not in theory.” Ethernet comprises both Layer 1 and Layer 2 of the OSI model, with Layer 1 flexibility to use different forms of copper and fiber interfaces. Ethernet’s collision detection capability can react to congestion in a network, allowing retransmission to occur transparently to user applications. The introduction of Layer 2 switching to Ethernet topologies largely mitigated the opportunities for collisions, allowing Ethernet to scale campus-wide using switching and routing technologies. Because Ethernet is unsophisticated, it does not fall prey to technology overshoot, keeping price points the lowest of all network interface technologies. The combination of Ethernet, LAN switching, and TCP/IP yields an internetworking solution that creates its own type of service pull.
Ethernet is still being installed at a pace exceeding tens of millions of ports a year. The simplicity, volume, and physical medium adaptability of Ethernet contribute to the descending cost curve and the ascending adoptability of Ethernet. Outpacing its rivals, Ethernet has moved from LAN technology to metropolitan area network (MAN) technology. People are taking Ethernet home with them. Ethernet is the physical media for not only data but new era voice and video as well. Every new installation of a Voice over IP (VoIP) phone or IP video connection uses an Ethernet port. Within the enterprise and the home, Ethernet networks are soaring for users of broadband internetworking connections worldwide. Media transparent, Ethernet is riding optical fiber, surfing LAN switching capability, rocketing on computer gigahertz improvement, and breaking speed and distance barriers to 10 Gbps and potentially beyond. Using primarily optical fiber, Ethernet will continue into the WAN space.

Early Ethernet began at about 2.9 Mbps. The subsequent advance to 10 Mbps and the attributes of low cost, simplicity, and good enough reliability helped Ethernet keep pace with LAN port shipments and flourish in local IP networks. When Ethernet LAN technology moved from shared-hub ports to dedicated switch ports, the price/performance of Ethernet represented a 10-fold improvement. A 10x price/performance of anything has long been heralded as a catalyst for rapid adoption. Indeed, it was the advent of Ethernet switching technology at 10 Mbps that was super concussive to many other LAN media technologies. When Ethernet advanced to 100 Mbps switched (another 10-fold improvement), the battle of the LANs was decided in favor of Ethernet, and networks applauded. Through the benefits of multimode fiber, single-mode fiber, and continuing advancement in Ethernet technology, Gigabit Ethernet is now a familiar tenant in enterprises and service provider metropolitan offerings. 10 Gigabit Ethernet is the current champ of the Layer 1/Layer 2 speed race.

Even IBM, the undisputed king of mainframe technology with its high-speed, byte-oriented channel paths, has Gigabit Ethernet adapter technology under the covers for open systems connectivity. Through the amalgamation of optical fiber, Gigabit Ethernet, and IP, the IBM OSA Express Adapter aggregates Layer 1 and Layer 2 technologies, feeding the mainframe operating system and IP subsystem—Multiple Virtual Storage TCP/IP (MVS TCP/IP)—at Layer 3 with a stunning combination that renders both of IBM’s networking children—Token Ring and SNA—obsolete. Technology push has once again succumbed to service pull.

True to its name, Ethernet is equally suited to travel through air or “ether” to form the basis of wireless LANs. Mobile, cellular systems already operate with wireless protocols similar to Ethernet. Mobile teleputers will, no doubt, follow the Ethernet/Internet success model. Ethernet is well-standardized, pervasive, and affordable. These are enviable attributes that are fundamental to its continued success. Granted, Ethernet has also been fortunate—propelled by increases in computer power at the edges of the network, through lower bit-error-rate
optical transport, and through LAN switching technologies. But overall, the phenomenal success of Ethernet comes because it follows the essential nature of business—build it for a dime, sell it for a dollar, and make it habit-forming.

Figure 2-4 shows the chronology of Ethernet speed advances, including the year a particular standard was ratified and the resulting media specifications. Beginning with the Fast Ethernet (100 Mbps) standard in 1995, the figure illustrates how Ethernet technology is advancing at 10 times performance every three to four years.
Figure 2-4 Ethernet Speed Advances
[Figure: timeline of Ethernet standards plotted against the years 1975 through 2005: Ethernet (IEEE 802.3; 10BASE5, 10BASE2, 10BASE-F, 10BASE-T) at 10 Mbps; Fast Ethernet (IEEE 802.3u; 100BASE-FX, 100BASE-TX) at 100 Mbps, ratified 1995; Gigabit Ethernet (IEEE 802.3z; 1000BASE-X, 1000BASE-T) at 1 Gbps, ratified 1998; and 10 Gigabit Ethernet (IEEE 802.3ae; 10GBASE-R, 10GBASE-X, 10GBASE-W) at 10 Gbps, ratified 2002.]
IP Routing

IP routing is a catalyst for networking innovation. While it is common to use IP to communicate only within the same LAN, it is often essential and more powerful to allow IP to communicate between computers on other LANs, WANs, and the Internet. When you leverage IP to communicate locally, at length, and globally, you allow digital data and applications to be shared far beyond their origin. It is this ability to communicate almost anywhere in a relative instant that allows IP-based applications to remove the barriers of time and distance. Because of IP routing and the Internet, the world is a smaller place.

IP routing intelligence most often comes through periodic enhancements to IP multiprotocol router software and services. While the basic feature and functional set of the Internet protocols remain standardized and managed through open systems approaches, IP router software enhancements find ways to improve IP routing performance, packet redirection, QoS queuing controls, and security mechanisms to adapt IP communications to nearly any
networking challenge. This ability to adapt is the primary appeal of IP-based routing and IP-based product solutions. IP, in conjunction with IP-based multiprotocol routers, is adaptable to almost any network design topology, physical technology, data type, and innovative product philosophy.
Routing IP Packets

In essence, an electronic IP packet has a place for a source IP address and a place for a destination IP address. Because the IP packet contains information about where it’s been and where it’s going, it can be routed through IP’s destination-based routing model. Like a letter in the mail, or a telephone number, a publicly addressed IP packet can ride interconnected LANs, WANs, or the Internet to arrive at its intended destination computer and application. Interconnected IP-based routers, named for the IP routing protocol functions they perform, are aware of destination IP networks at large and, as such, serve as an electronic road map as well as traffic directors to get data to where it’s going.
NOTE
Public IP addresses are registered and assigned by the Internet Assigned Numbers Authority (IANA). A public IP address is globally unique and routable between organizations over the Internet and other service provider network infrastructures. Private IP addresses are not registered and are defined by RFC 1918. Private IP addresses can be used by anyone, but they aren’t routable beyond the organization; for example, they are not routable through the Internet.
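The destination-based model described above can be illustrated with a toy longest-prefix-match lookup. The following Python sketch is a simplification (real routers build these tables with the routing protocols discussed later and perform the lookup in hardware), and all prefixes, next hops, and interface names are invented for the example.

```python
# A toy illustration of destination-based routing: a forwarding table keyed
# by prefixes, with longest-prefix-match selection.
import ipaddress

FORWARDING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"),   "Serial0",       "10.255.255.1"),
    (ipaddress.ip_network("10.20.0.0/16"), "FastEthernet0", "10.20.0.2"),
    (ipaddress.ip_network("0.0.0.0/0"),    "Serial1",       "203.0.113.1"),  # default route
]

def lookup(destination: str):
    """Pick the matching entry with the longest (most specific) prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [entry for entry in FORWARDING_TABLE if dest in entry[0]]
    return max(matches, key=lambda entry: entry[0].prefixlen)

for packet_dest in ("10.20.1.7", "10.99.1.1", "198.51.100.5"):
    prefix, interface, next_hop = lookup(packet_dest)
    print(f"{packet_dest:>13} -> {prefix} via {next_hop} out {interface}")
```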
Participating at Layer 3 of the OSI model (which allows interlayer modularity), the IP network layer is rather independent of the physical transmission medium below; so any Layer 1 broadband, baseband, or wireless technology can be used. That’s why an IP router generally supports a variety of different interfaces. A multiprotocol router can direct and send an IP packet (as well as non-IP) over the following:
• Ethernet
• Token Ring
• ATM circuit
• Frame Relay circuit
• SONET optical channel
• Serial line
• Digital Subscriber Line (DSL)
• Cable modem
• Wireless channel
The power of IP routing is the ability to use almost any physical interface with almost any logical protocol to create networks of networks. A router is a packet “redirector” that is used to create an internetwork. Routers forward packets based on network addresses. Routers can summarize network address reachability information, and with effective routing protocols, can allow internetworks to scale to extremely large sizes. IP packet routing is accomplished using one or more routing protocols, such as the following:
• Routing Information Protocol (RIP) versions 1 and 2
• Interior Gateway Routing Protocol (IGRP)
• Enhanced Interior Gateway Routing Protocol (EIGRP)
• Border Gateway Protocol (BGP)
• Open Shortest Path First (OSPF)
• Intermediate System to Intermediate System (IS-IS)
• On-Demand Routing (ODR)
There are at least eight significant IP routing protocols, each with particular attributes that favor local, long, mobile, or global networks. As service providers and enterprises grow their networks into all of these theaters, the possibility of using multiple routing protocols increases as well. Perpetuated through a common, open-system, collaborative approach to technology enhancement, IP routing, in general, uses central features of the Internet protocols to network computers of all shapes, sizes, and network languages. IP provides both locally and globally significant addressing, error recovery features for reliability, traffic flow control, and application multitasking. These features of IP, discussed next, combine with IP routing to make internetworks smarter, faster, and flexible enough for year-after-year investments.
Globally Significant Addressing

IP addressing is the international post office of the Internet. By using a pair of public IP addresses from any of IP’s assigned Class A, B, or C address ranges, you can move information all over the world. In fact, the primary appeal of IP networks is their ability to reach out to, or be reached from, anywhere else on or above the globe. As mentioned earlier in the section “IP Version 4,” IPv4 is capable of 4.3 billion theoretical addresses, of which practical use is much less. Enhancing and maintaining global addressing significance in the world depends on the 128-bit address structure of IPv6 to extend and accommodate the unique addressing requirements of the future.
Error Recovery (Reliability)

When paired with IP, TCP provides for reliable data transfer, building on the routing benefits of IP. TCP exchanges both sequencing and acknowledgement numbers, within the TCP header, between two communicating devices to keep order and rank of packets sent and received. If a packet is lost, the receiver sends back an acknowledgement to the source with the starting sequence number of the packet it expected to receive. TCP facilitates this reliability on behalf of the upper-layer application, signaling to the sending side which data must be retransmitted to maintain the proper sequence of data delivery. This reliable transmission mechanism is one of the most noteworthy reasons that TCP/IP networking is good enough for transporting mission-critical data.

That said, it is not realistic to consider TCP/IP, in and of itself, as incapable of dropping any packets, as would be required for VoIP applications. By design, IP networks drop packets during congestion, and TCP recognizes these lost packets and retransmits them. VoIP requires additional flow-control and protection features to achieve reliable transmission over IP.
Flow Control Using Windowing

One of the advantages of using TCP with IP is that you gain a TCP-based flow-control mechanism for data transfer. Taking advantage of the sequence and acknowledgement fields in the TCP header, along with another field known as the window field, TCP can adjust the rate of data transfer toward the optimum for whatever link bandwidth is present. By using a sliding-window approach, TCP sends a starting amount of data and requests an acknowledgement from the receiver of the data. If the receiver collected the sent data correctly, it sends the acknowledgement along with an increment in the window field, requesting that the transmitting station send more packets between acknowledgments. In effect, this allows TCP/IP-based stations to continue to increase data transmission and throughput until an error is encountered with respect to the receiver.

Continuously keeping track of the sequence field, the receiver knows if a packet or packets have been lost in the transmission. In those cases, the receiver sends to the transmitter the sequence number of the lost packet(s) and adjusts the window value, signaling the transmitting station to slow down. This inherent TCP feature acts as a self-regulating data transmission speed control, protecting buffer space in routing devices and end stations. Figure 2-5 shows the layout of the 12 fields within the TCP header, including the sequence, acknowledgement, and window fields just discussed.
Figure 2-5 TCP Packet Format
[Figure: the TCP header fields: Source Port, Destination Port, Sequence Number, Acknowledgment Number, Data Offset, Reserved, Flags, Window, Checksum, Urgent Pointer, Options (+ Padding), and Data (variable).]
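As a rough illustration of the sliding-window behavior just described, the following toy Python simulation advances a cumulative acknowledgment over a window of segments and re-requests a lost one. It is a teaching sketch, not the actual TCP state machine; the segment counts, window size, and loss pattern are arbitrary.

```python
# Toy simulation of cumulative acknowledgments and a fixed sending window.
def simulate(total_segments, window, lost):
    next_expected = 0                       # receiver's cumulative ACK point
    while next_expected < total_segments:
        # Sender: transmit every segment allowed by the current window.
        in_flight = range(next_expected, min(next_expected + window, total_segments))
        delivered = [seq for seq in in_flight if seq not in lost]
        print(f"sent {list(in_flight)}, delivered {delivered}")

        # Receiver: advance the cumulative ACK over contiguous segments only.
        while next_expected in delivered:
            next_expected += 1
        print(f"  receiver ACKs next expected segment = {next_expected}")

        lost.discard(next_expected)         # assume the retransmission succeeds

simulate(total_segments=8, window=3, lost={4})
```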
Application Multiplexing

The TCP/IP and User Datagram Protocol (UDP)/IP protocols are capable of multiplexing transmission between multiple applications on an IP-capable host computer. UDP is another Layer 4 protocol of the IP suite that is designed for connectionless transport. This means that UDP has no inherent reliability, error recovery, or packet sequencing, instead depending on higher-layer protocols or applications to provide this function if needed. Because of UDP’s simplicity, UDP headers are smaller and require less overhead for transmission of applications that don’t require network-level reliability.

As Layer 4 protocols, both TCP and UDP use the concept of port numbers to uniquely identify different data streams and deliver them to the particular application within the computer. For example, your PC might be accessing the Internet through a web browser, while also using e-mail from a corporate mail server. Inbound data from both of these remote applications is addressed to your PC, using the same IP address number of your PC and using TCP in the applications’ packet headers. However, a unique Layer 4 port number is used to isolate and differentiate each application’s data packets. In this way, the TCP/IP or UDP/IP protocol knows which computer application that inbound data is destined for and delivers one data stream to the Internet browser while multiplexing the other data stream to your e-mail application window. This is similar to having multiple post office mailboxes in front of your house, with each mailbox dedicated to a different member of your family. (The combination of your IP address, the protocol number—TCP=6 or UDP=17—and the Layer 4 port number is referred to as a socket.)
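A small Python sketch of this Layer 4 demultiplexing idea follows: inbound data is keyed on the local IP address, protocol number, and port (the socket) and handed to the owning application. The table entries, addresses, and port numbers are invented for illustration.

```python
# Toy demultiplexing by (local IP, protocol number, local port) -- the socket.
SOCKET_TABLE = {
    ("192.0.2.10", 6,  49152): "web browser session (TCP)",
    ("192.0.2.10", 6,  49153): "e-mail client session (TCP)",
    ("192.0.2.10", 17, 5060):  "VoIP signaling (UDP)",
}

def demultiplex(local_ip, protocol, local_port):
    """Deliver an inbound segment to the owning application, if any."""
    return SOCKET_TABLE.get((local_ip, protocol, local_port), "no listener: discard")

print(demultiplex("192.0.2.10", 6, 49152))   # web browser session (TCP)
print(demultiplex("192.0.2.10", 17, 5060))   # VoIP signaling (UDP)
print(demultiplex("192.0.2.10", 6, 50000))   # no listener: discard
```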
Application multiplexing is one of the more scalable features of open systems protocols like the Internet protocols. New applications are continuously designed to use unique or dynamic port numbers while still using the fundamental services of TCP/IP and UDP/IP. In this way, hundreds upon hundreds of application sessions can be supported using the same networking protocol. Doing many things at once is an important advantage of the Internet protocols. IP-based routers can last a lifetime, with respect to technology lifecycles. As long as routers are architected with a modular approach, allowing for occasional upgrades in CPU capacity, memory capacity, and interface port density, an IP-based router can be a revenue-generating product for well beyond ten years. By building your technology platforms around the Internet protocols, you can continue to derive value and services from IP-based hardware and software. Even if a router should be overshadowed by a higher-speed, higher-density, or enhanced function replacement, you can redeploy an IP router further out into the network periphery or dedicate it to a specific IP function. As long as the hardware continues to support the IP protocol and feature set, you can repurpose it in the ever-expanding IP networks of service providers and enterprises.
LAN Switching LAN switching extends the performance of LAN-based topologies. LAN switching enables higher-bandwidth speeds to connected devices, helps mitigate collision domains with technologies such as Ethernet, and simplifies troubleshooting, problem isolation, and operational management. LAN switching began at Layer 2 but has matured into Layer 3 and Layer 4 switching abilities to address any organization’s scalability needs.
Layer 2 Switching When you take a simple Layer 2 Ethernet switch and connect two PC hosts via Ethernet cards, both hosts can send information to each other with very little delay or latency. The latency experienced is generally related to the switching speed of the Ethernet switch, which takes the first packet, looks up the destination media access control (MAC) address, determines the Ethernet port of the destination, and switches the data packet from the source port to the destination port. This happens very fast because the traffic is local, and the Ethernet switch's processor is appropriately designed to hardware-accelerate this type of switching function. Given the 100-meter distance limitation of Fast Ethernet over copper, a local Ethernet switch domain would serve roughly a 330-foot radius (about a 660-foot diameter) with the wiring closet at the center point. Figure 2-6 shows a diagram of a simple Layer 2 LAN switch domain, which operates at the data link layer of the OSI model (Layer 2).
Figure 2-6 A Layer 2 LAN Switch. The figure positions the LAN switch at the data link layer of the OSI reference model, below the network, transport, session, presentation, and application layers. Source: Cisco Systems, Inc.
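The table lookup at the heart of a Layer 2 switch can be sketched in a few lines of Python. This is a simplified software model of learn-and-forward behavior only; a real switch performs it in hardware and adds aging timers and per-VLAN tables, and the MAC addresses and port numbers here are hypothetical.

# Simplified Layer 2 learn-and-forward logic.

class Layer2Switch:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}                 # MAC address -> port

    def frame_in(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn where the source is attached
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # forward out one port
        return [p for p in self.ports if p != in_port]  # unknown destination: flood

sw = Layer2Switch(num_ports=4)
print(sw.frame_in(0, "aa:aa", "bb:bb"))   # flooded: [1, 2, 3]
print(sw.frame_in(1, "bb:bb", "aa:aa"))   # learned: [0]
print(sw.frame_in(0, "aa:aa", "bb:bb"))   # now switched directly: [1]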
When local, independent LANs must scale to larger building or campus-based networks, the number of Ethernet hosts aggregating at a distribution point becomes quite large. This can exhaust the available number of hosts in the LAN's IP subnetwork range and accumulate both user and broadcast traffic to the point that data throughput performance is compromised. Segmenting the Ethernet network becomes necessary. Segmenting the network into virtual LANs (VLANs) is a common approach. This isolates the LAN segments from each other, helping to reduce the size of the broadcast domain (and, in the case of Ethernet, the collision domain) and helping with IP subnet management. However, most organizations require any-to-any communications, so interconnection of Layer 2 VLANs is still required. Layer 2 bridging can interconnect Ethernet segments and VLANs, yet this continues to tie everyone into the same Layer 2 broadcast domain. Layer 3 routing was the initial approach to interconnect Layer 2 LANs and VLANs, providing any-to-any connectivity while also providing scalability.
Layer 3 Routing for Layer 2 Scalability Network designers first used multiprotocol routers at distribution and core aggregation points primarily to reduce the size of Layer 2 broadcast domains, and secondarily to separate IP networks (for more host connections), with the added benefit of providing more control and services such as QoS, security, and accounting. These advanced services require the intelligence of a routing processor in a multiprotocol router. As multiprotocol routers route packets between different LANs or different IP networks, the routing processor becomes the middleman or intersection for all host-to-host conversations
that traverse the router’s position in the network. Because routing decisions are made at Layer 3, the Route Processor is involved for every packet in the session from the start of the conversation until the conversation is finished. Interrupting a multitasking Route Processor can add a few milliseconds of delay, so routing processors were further optimized with hardware-based forwarding designs to speed up Layer 3 routing between Layer 2 LAN domains. Figure 2-7 shows a traditional router and hub campus design. In this design example, Layer 3 routers are used to segment Layer 2 LAN switch domains, isolating Layer 2 broadcasts within the segments, while Layer 3 routing maintains any-to-any connectivity within the multibuilding campus. Figure 2-7
Traditional Router and Hub Campus Design. Workgroup servers and hubs in Buildings A and B connect through Layer 3 routers that are dual-homed to an FDDI dual-ring backbone; Layer 2 switches form a Layer 2 LAN backbone for the enterprise servers. Source: Cisco Systems, Inc.
Routing processors are normally designed as general-purpose, software-controlled microprocessors to easily expand functionality and to allow new feature implementation via software programming. General-purpose routers perform both routing and switching at a basic level, but the architecture of the router determines the performance of the switching function. The more custom hardware and memory cache can be distributed closer to the router’s hardware interfaces, the better the options for improving switching performance.
Multilayer LAN Switching For all the scalability, internetworking, and management benefits gained by inserting routers into buildings and campus LANs, these capabilities come at the cost of increased packet latency (albeit a few milliseconds) between two hosts communicating through the router, as compared to a pair of hosts communicating through a local LAN switch. An example of this might be a PC-based IP client workstation on the 14th floor communicating with an IP-based application server in the basement or across campus, exceeding general wiring distance specifications. In such cases, you need to blend the features of Layer 3 routing control with the benefits of Layer 2 switching performance. LAN switching, also called multilayer switching, was developed to meet these performance-conscious requirements. This is where product manufacturers can derive differentiation, to the extent that they can bring together the best features of routing and switching. To demonstrate the benefits of LAN switching, consider an analogy. Suppose you are invited to dinner at a home you have never visited, and you are given street directions to locate the house within the town or city. By following the street directions from your house, you can drive via a linkage of connecting streets, pausing to check street signposts to make sure that you're following the directions. Once you arrive at the house, your trip is complete. Now, suppose you're invited to dinner again the next evening, and when the time arrives, you drive via the same connecting streets from your home to theirs, only this time you don't need to check the street signs, because you have mentally "stored" the path and recognize that you are on the right route. By not pausing to check street signs, your trip ideally takes less time in transit, reducing the latency between your house and theirs. You can extend this analogy even further by supposing that, as you are invited to dinner at the same house a number of times, you discover a shortcut that reduces the total number of streets you must travel, and this reduces your drive time even further. Comparing this analogy to packets, the Route Processor in a typical router acts like a map with street signs, directing packets to the proper network interfaces based on the IP address. Once this is completed for the first packet, the directions from source to destination are known, and this information is useful for any subsequent data packets between the same two hosts. A router that is multilayer switching enabled can look up the information at Layer 3 to properly route the first packet, and then send the results of that determination to a hardware-based switching processor cache or memory that provides the shortcut or bridge between the two interfaces for all subsequent packets between the two hosts for the duration of the conversation. This allows packets two to n to benefit from the hardware-optimized switching capability of the router's architecture without burdening the general-purpose, software-controlled Route Processor at Layer 3. It is called multilayer switching because Layer 3 is involved for the first packet's route lookup, and Layer 2 is involved for the remaining packets delivered during the conversation.
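The "route once, switch the rest" behavior just described can be sketched as a flow cache. The first packet of a conversation consults a (slow, software) routing decision, and the result is stored against a flow key so that subsequent packets bypass the Route Processor. The flow key, interface names, and routing rule below are hypothetical simplifications, not the internals of any particular Cisco platform.

# "Route once, switch the rest": the first packet consults the routing
# function; later packets of the same conversation hit the flow cache.

flow_cache = {}    # (src_ip, dst_ip, protocol, src_port, dst_port) -> egress interface

def route_lookup(dst_ip):
    # Stand-in for the Route Processor's Layer 3 decision (hypothetical rule).
    return "GigabitEthernet0/1" if dst_ip.startswith("10.2.") else "GigabitEthernet0/2"

def forward(src_ip, dst_ip, protocol, src_port, dst_port):
    key = (src_ip, dst_ip, protocol, src_port, dst_port)
    if key not in flow_cache:                  # first packet: Layer 3 lookup
        flow_cache[key] = route_lookup(dst_ip)
    return flow_cache[key]                     # packets two to n: cached shortcut

print(forward("10.1.1.5", "10.2.2.9", 6, 49152, 25))  # routed, then cached
print(forward("10.1.1.5", "10.2.2.9", 6, 49152, 25))  # served from the cache

Because the flow key includes the Layer 4 ports, the same structure also hints at the Layer 4 switching discussed next, where two conversations between the same pair of hosts can be treated differently by QoS.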
In this way, multilayer switching allows for higher-performance throughput of LAN traffic, enhancing the performance of conversations within buildings and campuses that are connected and reachable by LAN-switching technology. To perform switching on a more granular basis, many devices also implement Layer 4 switching. At Layer 4, you can distinguish between two different conversations by looking at the Layer 4 port number. If one conversation involves a simple mail transfer (a TCP/IP flow) and the other involves a VoIP conversation (typically a UDP/IP flow), the switch, unable to distinguish these dissimilarities at Layer 3, can examine them at Layer 4, allowing any configured QoS mechanisms to apply a higher QoS to the VoIP conversation than to the Simple Mail Transfer Protocol (SMTP) conversation. Cisco switches perform multilayer switching at both Layer 3 and Layer 4. At Layer 3, most of the product family will cache traffic flows based on IP addresses. At Layer 4, the traffic flows are cached based on source and destination addresses, in addition to source and destination application ports. Layer 3 switching is hardware-based, accelerated routing. Layer 4 switching is also hardware based but considers the application in its switching decisions. Cisco multilayer switching products are designed for both Layer 3 and Layer 4 switching in hardware to provide equivalent levels of performance. The traditional type of Layer 3 switching design uses a route-cache model to maintain a fast lookup table for subsequent packet flows based on an efficient route-cache match. The cache entries are periodically aged out to keep the route-cache current and to immediately invalidate routes that are no longer usable if the network topology changes. This demand-caching approach—maintaining a very fast access subset of the routing topology information—is optimized for scenarios where the majority of the traffic flows are associated with a subset of destinations. Figure 2-8 shows a typical multilayer switching design and also introduces a three-tier, hierarchical network topology design using the following layers:
• Access layer—The access layer is made up of Layer 2 switches that connect to user devices via Ethernet.
• Distribution layer—The distribution layer is made up of multilayer switches, incorporating Layer 2, Layer 3, and Layer 4 routing and switching capabilities, connected between the access layer and the core layer via Fast Ethernet or Gigabit Ethernet.
• Core layer—The core layer is the campus backbone, usually high-speed Gigabit Ethernet for maximum traffic aggregation and performance.
Figure 2-8 Typical Multilayer Campus Design. Layer 2 switches serve the access layer in each building (north, west, and south); multilayer switches form the distribution layer and the Gigabit Ethernet core layer; and a server distribution block connects Fast EtherChannel-attached enterprise servers. Source: Cisco Systems, Inc.
The server module shown in Figure 2-8 is another access layer, usually in a secure data center, that is connected via a multilayer switching distribution layer to reach the core layer backbone and the Ethernet devices.
Optimizing Multilayer LAN Switching The Internet, with its constantly changing routing information, can make route-caching inefficient. As the Internet routing topology changes, the route-cache would age out entries and attempt to cache new ones, producing an effect called cache churn. This type of Internet traffic profile required a new switching model to eliminate the increasing cache maintenance resulting from growing numbers of dispersed destinations and dynamic network changes. Cisco Express Forwarding (CEF) avoids the potential overhead of continuous cache churn by eliminating the need for a route-cache. Through use of a forwarding information base (FIB), which mirrors the routing table, and a separate adjacency table, CEF is more topology driven than traffic driven. As a result, CEF switching performance is largely independent of and unaffected by network size or dynamics. For large, complex networks with dynamic traffic patterns, this type of switching design offers benefits in terms of better performance, greater scalability, increased network resilience, and functionality.
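The FIB-plus-adjacency-table split can be illustrated with a small conceptual model. This is not Cisco's CEF implementation: the prefixes, next hops, and rewrite strings are hypothetical, and a production FIB uses purpose-built lookup structures rather than the linear longest-prefix scan shown here.

import ipaddress

# Conceptual model of a FIB that mirrors the routing table, plus a separate
# adjacency table holding next-hop interface and rewrite information.

fib = {
    ipaddress.ip_network("10.0.0.0/8"):   "next-hop-A",
    ipaddress.ip_network("10.20.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"):    "next-hop-C",
}

adjacency = {
    "next-hop-A": ("GigabitEthernet0/1", "rewrite to MAC 0000.0c12.3401"),
    "next-hop-B": ("GigabitEthernet0/2", "rewrite to MAC 0000.0c12.3402"),
    "next-hop-C": ("Serial0/0",          "rewrite to HDLC encapsulation"),
}

def fib_switch(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    # Longest-prefix match against the FIB: topology driven, no per-flow cache.
    best = max((net for net in fib if addr in net), key=lambda net: net.prefixlen)
    return adjacency[fib[best]]

print(fib_switch("10.20.5.1"))   # matches 10.20.0.0/16 via next-hop-B
print(fib_switch("192.0.2.7"))   # falls through to the default route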
An extension of CEF called distributed CEF (dCEF) distributes this capability to the router or switch line card level. CEF and dCEF are key technology enhancements that can multiply routing and switching performance in routers and multilayer switches. LAN networks can also be optimized through topology design. An enterprise campus network can be broken down into small, medium, and large locations. In most instances, large campus locations will have a three-tier design with a wiring closet component (Ethernet access layer), a distribution layer, and a core layer. Small campus locations will likely have a two-tier design with a wiring closet component (Ethernet access layer) and a backbone core (collapsed core and distribution layers). Medium-sized campus network designs will sometimes use a three-tier implementation or a two-tier implementation, depending on the number of ports, service requirements, manageability, performance, and availability levels required. The practice of centralizing important data servers and the increase of intranet- and Internet-based traffic within the application profiles of organizations have flipped the 80/20 rule of switched-to-routed traffic. Now, when only 20 percent of the traffic stays locally switched on the LAN, the other 80 percent of the traffic must travel to another LAN, requiring the services of a Layer 3 router or multilayer switch. This shift in traffic ratio puts a greater burden on the routing technology of a network backbone. With Layer 3 routing examining the data packets in greater depth, the added load can create performance bottlenecks. Designing products and networks to perform Layer 3 and Layer 4 switching as quickly as Layer 2 is the resulting goal. The performance of multilayer switching matches the requirements of the new 20/80 traffic model. Cisco continues to optimize LAN switches and multilayer LAN switches with CEF, dCEF, and hardware acceleration as appropriate. The Catalyst line of 3500, 4500, and 6500 series devices is an example. These products can blend Layer 2 LAN switching, Layer 3 routing, and multilayer LAN switching for a flexible approach to campus network design. LAN and router products that use effective, application-specific integrated circuit (ASIC) hardware-based multilayer switching and hierarchical topology designs are the tools you use to address high-performance local IP network requirements.
Table 2-2 highlights local IP network services.

Table 2-2 Local IP Services

Service Category: Data/IP
  Service Types: File sharing; print sharing; application sharing; messaging (e-mail, I-chat); database access; desktop integration/productivity; data backup; intranet; Internet
  Technology Options: Ethernet, Fast Ethernet, and Gigabit Ethernet; Token Ring; FDDI and ATM; IP routing; LAN switching
  Design Options: Wired LANs; wireless LANs; departmental; campus; metro

Service Category: Voice/Video
  Service Types: IP telephony; unified messaging; IP communications; IP video
  Technology Options: Powered Ethernet; VoIP; VoATM; IETF SIP; IETF MGCP/Megaco; ITU H.323; ITU H.248
  Design Options: Wired LANs; wireless LANs; departmental; campus; metro
Long IP Networks: WANs The pendulum swing from primarily local traffic to principally nonlocal traffic is one of the driving factors for long IP networks called WANs. The distribution of data applications, data storage, and data servers among and beyond the enterprise create requirements for IP networks that scale beyond the building, the campus, and regularly beyond the metropolitan area. WANs refer to business networks that must cross public thoroughfares and use public service provider facilities to do so. WANs can be citywide, statewide, regional, and national. Some are global in scope. These IP networks are used to integrate, congregate, and facilitate data communications and, increasingly, voice communications across the expanse of the enterprise. A company’s voice telephone network was one of the first to facilitate coordination of business from boardroom to branch, in effect comprising a wide area voice network. Mainframe
data networks were the next carriers of company information, using narrowband multidrop asynchronous, bisynchronous, and synchronous communication over telephone circuits outfitted with data modems and line conditioning. These data networks were local, regional, national, and global in coverage and could be classified as purpose-built WANs. Facsimile machines became another form of long-distance communication, combining image scanning, digitization, and modulation over the public switched telephone network (PSTN). The distribution points and retail storefronts of the business usually define the coverage area of a WAN. Headquarters’ information must propagate to regional offices, warehouses, branch offices, storefronts, and suppliers on an immediate basis. A WAN must touch and interconnect all of these elements of the business. WANs reduce and eliminate the barriers of time and distance in your business in many ways, including the following:
• Assimilate and distribute internal company information
• Communicate sales information
• Take and book customer orders
• Provide command and control of supply-chain and distribution functions
• Provide customer-service touch points with the business
For companies that primarily sell digital information—whether it be software, music, video, or digital books—a WAN is often the heart of the distribution channel. When the customer markets grow beyond city and regional areas, organizations often use the Internet to extend the reach of their WANs and their geographic storefronts. This section describes a few drivers of long IP networks such as wide area bandwidth, changes in wide area regulatory policy and architecture, and wide area technologies and topologies including Frame Relay, VPNs, and metro Ethernet.
WAN Bandwidth WAN facilities from public carriers are traditionally priced on a distance basis and are expensive compared to LAN bandwidth services. Customers are forced to design their WANs as a compromise between bandwidth and expense. Furthermore, the fiscal cost-to-bandwidth ratio drives customers to optimize the traffic load so that bandwidth upgrades are postponed as long as possible. This also increases a customer's tendency to distribute computing and to replicate and disperse data across the WAN. There is not enough bandwidth to otherwise centralize, and if there were, it wouldn't be affordable. (This is a marketing problem.) The expense of bandwidth remains a driver of network convergence for long IP networks. Where two or more purpose-built networks have been deployed over the company's computing horizon, the need to converge these networks into one higher-speed IP network is operationally essential.
Wide Area Changes WANs of one form or another have been used for a little over 30 years. Relative to the discussion of long IP networks, WANs began a shift to heavy IP usage in the early 1990s, with broadband WAN terminology emerging at the end of the 1990s. In addition to bandwidth increases fostered by continued innovation, WANs are affected by changes in regulatory policy, architectural directions, and the continued introduction of new WAN technology options.
Regulatory Policy Changes The WAN market has been the bread and butter of data revenues for the traditional service provider. Leased lines and Frame Relay networks are common components and building blocks of a flat-rate business model based largely on bandwidth transport and relatively little on added services. For years, local, regional, and national providers of these solutions remained segmented by regulatory decree. From an enterprise view, regulatory segmentation defined the environment in which communication services were purchased, and from the service provider perspective, this segmentation kept competition at bay while protecting individual provider markets. Over the last few years, significant changes have occurred that are impacting the industry and affecting the service provider value chain and business model. The changes in regulatory policy have enabled more competition and service substitution, in effect driving a regulated services industry toward a commodity services industry. This has impacted the competitive structure of carriers, who now find themselves transforming their business models toward a more customer-centric orientation. This is necessary because product differentiation is becoming less distinctive.
Architectural Changes In a competitive environment, revenue generation remains paramount, yet sales are more difficult to close because more discriminating customers have many available options. Flat-rate bandwidth offerings also remain, but value must be demonstrated beyond mere transport functionality. With customers placing value distinction in IP-based services, providers must find ways to incorporate IP value into their product wares. It is this search for IP value that leads providers to a fundamental change in system architectures—architectures based on open standards rather than vertical, purpose-built networks. An open standards approach is a horizontal, end-to-end model that yields a product-and-services methodology based on standard building blocks. Because these IP-based products and services are modular in nature, providers can combine them in different ways to create unique networks and services.
Wide Area Technologies and Topologies Yesterday's WAN technologies were built with a mixture of X.25 packet switching, switched 56 kbps, synchronous multidrop, and leased-line DS1 and DS3, along with early ISDN services. These were generally deployed in a hub-and-spoke fashion at the access layer, with a typical mesh of higher-speed leased lines connecting the distribution hubs to the core of the network. These technology designs served proprietary WAN protocols very well, because they were generally optimized to operate lean and mean with minimal overhead over expensive network facilities. Follow-on network technologies such as Frame Relay, ISDN, and ATM would introduce the multiplexing and transport of multiple data types and the integration of multiple traffic types, such as voice, video, and data. These technologies found opportunities in the wide area markets. More contemporary IP networks desire as much bandwidth as their local area IP counterparts, because these long IP networks are really extensions of LAN-based protocols and applications. Even Ethernet, a technology bred in the local area, has moved quickly into the metropolitan market, a precursor to Ethernet's use as a WAN technology. LAN-based applications use IP as a launch pad to soar beyond the campus into citywide, statewide, regional, and national IP networks. These IP-based applications placed higher demands on required wide area bandwidth. The practice of deploying private-line, DS1-based hub-and-spoke IP networks is very common given existing provider options. The bursty nature of IP networks was a new wide area traffic phenomenon that tended to cause IT managers to overprovision bandwidth links in order to deliver on service assurances. But throwing expense at the unknown will only get you through a budget cycle or two. Soon there was a rising challenge to the long-term affordability of such deployments, leading enterprises to insist on ways to optimize bandwidth expense for bursty, high-value IP networks. Frame Relay became the 1990s technology of choice to meet these requirements, while VPNs and metro Ethernet became options in the 2000s.
Frame Relay Frame Relay is the primary mover for long IP networks. Both domestic and international carriers consider Frame Relay the leading product in their data portfolios. Frame Relay is a fast, packet-switched data communications service that is ideally suited to the requirements of bursty IP and non-IP traffic for WAN transport. Offered by domestic and international service providers, Frame Relay supports various data frame sizes at granular data rates from 56 Kbps to 155 Mbps. Frame Relay essentially moved carriers beyond the physical pipe layer and into the Layer 2 networking business.
The compelling value for both service providers and end customers is that Frame Relay service is fast, protocol independent, and private; and it reduces both capital and operating costs compared to a point-to-point mesh of physical private-line circuits. Frame Relay allows organizations to reduce costs because all Frame Relay subscribing customers essentially share the overall capacity of a provider's Frame Relay network infrastructure on an as-needed basis. The provider anticipates that its customers will burst traffic into the backbone at slightly different instants in time, allowing the provider to "oversubscribe" the number of customers carried on its Frame Relay infrastructure. Additionally, Frame Relay brings quicker bandwidth provisioning flexibility to providers and customers, reducing the time for most speed upgrades from weeks to days. Frame Relay also helps mitigate distance sensitivity for geographic access links. Billed as a flat-rate service, it isn't usage sensitive or necessarily distance sensitive, because the provider has prebuilt the network backbone over the provider's complete area of coverage. This reduces month-to-month operating costs compared to distance-sensitive, often overprovisioned, leased T1 services. Frame Relay assists with lowering capital expenditures by reducing the required physical interface ports on central site and hub-based router equipment. For example, using a central-site, high-speed physical interface configured for the Frame Relay protocol, you can communicate with multiple end sites via permanent virtual circuits at Layer 2 that aggregate and funnel through the single Layer 1 physical interface of the customer's router. Figure 2-9 shows this simple concept in the typical Frame Relay hub-and-spoke design. Figure 2-9
Frame Relay Hub-and-Spoke Design. A single physical connection at the hub router carries multiple logical connections, one per spoke, across the Frame Relay cloud to 56-kbps spoke sites. Source: Cisco Systems, Inc.
Frame Relay was initially defined as an ISDN frame mode service and is essentially a subset of the High-Level Data Link Control (HDLC) protocol. To make the service fast, Frame Relay puts responsibility for any data errors back onto the customer equipment, which is a perfect match for the error-checking and retransmission capabilities of the
TCP/IP protocol. If a burst of traffic exceeds the negotiated bandwidth guarantee, the packet can be dropped, and TCP will be responsible for retransmission of the discarded data. In fact, Frame Relay enjoys its wide success based on the requirements to connect enterprise LANs through WANs via the TCP/IP protocol. Frame Relay networks, through the concept of permanent virtual circuits (PVCs), scale much better than their leased-line counterparts. Frame Relay infrastructures were purpose-built by providers to address the surging LAN internetworking data opportunity. For service providers, Frame Relay was a doorway to Layer 2 solutions. By building out networks of frame switches, service providers were creating Layer 2 infrastructures that allowed them to stretch beyond the dumb pipe model into a build once, sell many times model. Frame Relay became a point of service differentiation for providers as variations in port speeds and committed information rates (CIRs) gave providers options with which to distinguish and better "tune" their service offerings to the bandwidth and availability needs of the customer. For the first time, service providers were able to engage their customers in discussions of advanced networking solutions that carried application context. This added value to the customer relationship and strengthened business partnering. Advancing to Layer 2 solution selling also positioned providers to develop managed services for any customers desiring to outsource. Today, many service providers are pursuing an Internet Protocol/Multiprotocol Label Switching (IP/MPLS) infrastructure as the latest convergence mechanism, folding ATM and Frame Relay networks onto the edges of the IP/MPLS core. This movement is nearly industry-wide as providers are looking to add flexibility, scalability, and IP intelligence into their infrastructures. Providers are justifying the IP/MPLS directive based on both operational network convergence and revenue opportunity in Layer 2 and Layer 3 VPNs. More information on IP/MPLS is provided in Chapter 3, "Multiservice Networks."
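The committed information rate behavior just described can be sketched as a simple per-PVC check: traffic within the CIR over a measurement interval is delivered, and frames that burst beyond it are marked eligible for discard, with TCP at the endpoints recovering anything the network actually drops. The rate, interval, and marking logic below are simplified illustrations, not a provider's actual policing algorithm.

# Simplified per-PVC committed information rate (CIR) check.
# Rates and interval are illustrative only.

CIR_BPS = 64_000          # committed information rate for this PVC, bits per second
INTERVAL_S = 1.0          # measurement interval in seconds
committed_bits = CIR_BPS * INTERVAL_S

def classify(frame_sizes_bits):
    """Return (delivered, discard_eligible) frame lists for one interval."""
    used, delivered, discard_eligible = 0, [], []
    for bits in frame_sizes_bits:
        if used + bits <= committed_bits:
            used += bits
            delivered.append(bits)
        else:
            discard_eligible.append(bits)   # burst above CIR: may be dropped
    return delivered, discard_eligible

# A burst of 12 frames of 8,000 bits each (96,000 bits) against a 64,000-bit CIR.
ok, de = classify([8_000] * 12)
print(len(ok), "frames within CIR,", len(de), "frames marked discard eligible")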
VPNs The appeal of VPNs is the ability for the customer to realize the cost advantages of a shared network while enjoying the same security, QoS, reliability, and manageability as they do in their own private IP networks. The primary goal is to extend IP connectivity beyond the enterprise WAN in a cost-effective manner. VPNs can be used for mobile teleworkers, business-to-business connections, and for WAN extensions. VPNs are an effective WAN solution that come in many forms, such as customer-provided VPNs and provider-managed VPNs. To a large extent, customer-provided VPNs are used to leverage Internet bandwidth connections, in effect removing large portions of circuit expense by substituting the Internet for the long-distance, mileage-sensitive segments of these connections. Provider-based VPNs offer from limited to comprehensive manageability and a flexible number of design options that encompass both Layer 2 VPNs and Layer 3 VPNs.
VPNs look promising as growth options for WANs. Products such as Virtual Private LAN Service (VPLS) and MPLS VPNs are fresh approaches to WAN connectivity. VPLS is essentially a multipoint Ethernet service over an IP-based MPLS network. MPLS VPNs can offer both Layer 2 and Layer 3 VPNs. Examples of MPLS Layer 2 VPNs include the following:
• Ethernet over MPLS (EoMPLS)
• ATM over MPLS (AoMPLS)
• Frame Relay over MPLS (FRoMPLS)
MPLS Layer 3 VPNs allow providers to offer Layer 3 IP-based VPNs and IP services. VPNs simplify site-to-site and multisite LAN connectivity for businesses and give service providers innovative platforms from which to launch new sources of product revenue. You learn more about these VPNs in Chapter 4.
Metro Ethernet Ethernet has been moving from local IP networks to long IP networks. Most of the early action has been in the metropolitan areas, where metro Ethernet is increasingly an IP-based WAN access-link option. Many providers are positioning edge platforms with varieties of Ethernet support in their next-generation networks. The appeal of using Ethernet in the metropolitan WAN is the low cost and the future scalability of the interface. The lifetime cost curve of a T1 interface being replaced with a T3 interface, and then superseded by an OC3 interface, is steep compared to that of a single Ethernet interface, which can scale from 1.5 Mbps to 1000 Mbps (1 Gbps) without a truck roll or equipment change. Ethernet interfaces are packet based and provisioned quickly, with more granular speed options enabled through software configuration. Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet interfaces are all finding success today in metropolitan networks and over long-haul optical fiber for very long IP networks. You can find specific coverage of metro Ethernet in Chapter 6, "Metropolitan Optical Networks." Many other technologies meet unique requirements for bandwidth selection in long IP networks. Features such as Multilink Point-to-Point Protocol (PPP), Packet over SONET, ISDN, and others are commonly used to match the right technology and bandwidth requirements with a justifiable cost. Often, technology access options are restricted to those that are within the product set of the provider. For example, a customer might prefer a 10 Gigabit Ethernet connection, yet the provider might be limited to only Packet over SONET/SDH (POS) interfaces due to a large investment in and dependency on SONET/SDH infrastructure. Next-generation network services are about increasing the variety of services and interfaces that are available to customers. WAN technologies are expanding beyond the popularity of
Frame Relay and SONET/SDH services into VPNs and Ethernet services, both wired and wireless. Service providers and enterprises will continue to grow their markets, perpetuating the demand for effective options to enhance and expand long IP networks. Table 2-3 depicts long IP services for data and voice.

Table 2-3 Long IP Services

Service Category: Data/IP
  Service Types: WAN business data; network convergence; distributed computing; messaging (e-mail, I-chat); application sharing; database access; client/server; intranet; Internet
  Technology Options: Serial data, HDLC, PPP; Frame Relay, SMDS; ISDN; ATM; Ethernet
  Design Options: Point to point; point to multipoint; hub and spoke; mesh

Service Category: Voice/Video
  Service Types: WAN business voice; H.323 video; IP telephony; unified messaging; IP communications
  Technology Options: TDM voice; Powered Ethernet; IP telephony; VoFR; VoATM; IETF SIP; IETF MGCP/Megaco; ITU H.248; ITU H.323
  Design Options: Point to point; point to multipoint; hub and spoke; mesh
Mobile IP Networks Time, opportunity, and money remain the fundamental business requirements and primary drivers of networking technology. Today’s company networks are now strategic business assets that must be planned, leveraged, and, most of all, successful to deliver business value that is measurable in customer revenue and satisfaction. The continual quest for productivity has reached beyond network wall jacks and into the airwaves, hoping to hear IP conversations on unseen networks—digital conversations invisibly streaking through the “ether.” And they are there: air links of digital information propagating through the atmosphere seeking those who would tune in to their frequency, available for whoever possesses the access key.
Mobile networking allows more time in a day for professional knowledge workers to contribute to productivity. According to industry studies, professionals are actually at their desks just 30 percent of the time. They spend about 49 hours a month in meetings.2 That's why we use voicemail and, to a larger extent, e-mail—attempting to achieve at least a marginal productivity gain by communicating with others asynchronously in time. Mobile networks extend user access to applications, content, and the communication tools necessary to make decisions and complete project tasks. Adding mobility to IP networks brings mobile computing closer to real-time applications for communicating with individuals on more time-synchronous terms. Mobile IP networks combine wireless and IP technology to create anytime, anywhere connections to the Internet and enterprise networks. Mobile computing requires a portable computer, such as a laptop or pocket PC. Portable computing coupled with Mobile IP communications allows knowledge workers to be productive whenever and wherever it makes sense, contributing to the freshness and synchronization of information. Whether in a campus environment or distant mobile location, high-speed, secure wireless technology enables users to be constantly connected—even as they move between wireless cells or in and out of wired LAN environments. Mobile network operators and their customers are eager for new capabilities, ranging from image transfers and text messages to web surfing and video streaming, all delivered over cell phones or other portable computing devices. These capabilities are easy to build and then extend via IP. To assist wireless service providers with the migration to IP-based networking that is truly mobile, the IETF developed a standard for taking IP into motion, publishing this capability as the Mobile IP standard in IETF RFC 2002. The Mobile IP standard allows wireless users of the technology to "roam" their IP address across multiple wireless networks, similar to the way a cellular phone number is portable across the globe. The ability to maintain the IP address when roaming across different wireless networks and coverage areas allows users to stay connected and to maintain application connectivity throughout. There are likely to be gaps in Mobile IP coverage for a while, yet there is natural incentive for providers to deploy the standard within their networks. IP applications are high-margin services, and supporting them is a good revenue opportunity. Mobile IP eliminates the stop-and-start approach to IP connectivity that is otherwise required with network location changes, thus enabling users to maintain the same IP address, regardless of their point of attachment to the network. This is the real power of Mobile IP: a user could be in a VoIP conversation or video streaming media conference and not lose connectivity while en route to his or her destination. Despite a user's movement between different networks, connectivity at the different points is achieved seamlessly without user intervention. Roaming from a wired network to a wireless LAN or WAN is also done with ease. Mobile IP provides ubiquitous connectivity for users, whether they are within their enterprise networks or away from home.
Mobile IP has the following three components, as shown in Figure 2-10:
• Mobile node—The mobile node is a device such as a cell phone, PDA, or laptop whose software enables network-roaming capabilities via Mobile IP functionality.
• Home agent—The home agent is a router on the home network serving as the anchor point for communication with the mobile node; it tunnels packets from a device on the Internet, called a correspondent node, to the roaming mobile node. A tunnel is established between the home agent and a reachable point for the mobile node in the foreign network.
• Foreign agent—The foreign agent is a router that can function as the point of attachment for the mobile node when it roams to a foreign network, delivering packets from the home agent to the mobile node.
Figure 2-10 Mobile IP Components and Relationships. A mobile node at home on its home network with its home agent, and a mobile node visiting a foreign network through a foreign agent, are interconnected across the Internet. Source: Cisco Systems, Inc.
When a Mobile IP node roams to a new network, that remote network’s foreign agent assigns a care-of address (CoA) as the termination point of the tunnel toward the mobile node that is visiting the foreign network. The home agent maintains an association between the home IP address of the mobile node and its CoA, which is the current location of the mobile node on the foreign network. This association is maintained for all Mobile IP nodes in a mobility binding table at the home network. In the example that follows, a Mobile IP node in transit has been assigned a home IP address of 1.1.1.7. This node is roaming from one foreign network to another foreign network that assigns the Mobile IP node a CoA of 10.31.2.1 (the IP address of the current foreign agent).
This association is registered as a new data path via an IP tunnel to the home agent, which updates the mobility binding table for 1.1.1.7 to the current CoA point of attachment (now 10.31.2.1). Every other network transit point between the foreign agent and the home agent is transparent to the Mobile IP tunnel that is in session. This way, Mobile IP support is implemented only in home networks and on the edges (in radio-access networks, or RANs) of wireless networks. This example of Mobile IP roaming is depicted in Figure 2-11.
Figure 2-11 Roaming with Mobile IP (MN—mobile node; FA—foreign agent; HA—home agent). The home agent's mobility binding table maps each mobile node to its current care-of address (1.1.1.3 to 10.31.1.1, 1.1.1.7 and 1.1.1.8 to 10.31.2.1, and 1.1.1.5 to 10.31.3.1). As MN 1.1.1.7 roams between foreign agents, the old data path is replaced by a new data path through the home agent; the movement is transparent to all other devices, and no change is propagated to correspondents. Source: Cisco Systems, Inc.
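A hedged sketch of the home agent's bookkeeping in this roaming example follows: a registration from a foreign agent updates the mobility binding table, and traffic addressed to the mobile node's home address is then tunneled to the new care-of address. The data structures and function names are illustrative only (the previous care-of address for 1.1.1.7 is assumed for the example), and this is not the RFC 2002 message format.

# Conceptual home-agent behavior: update the binding for a mobile node's home
# address to its current care-of address (CoA) and tunnel traffic accordingly.

mobility_binding_table = {
    "1.1.1.3": "10.31.1.1",
    "1.1.1.5": "10.31.3.1",
    "1.1.1.7": "10.31.1.1",   # assumed previous CoA for the roaming node
}

def register(home_address, care_of_address):
    """A registration from a foreign agent updates the binding table."""
    mobility_binding_table[home_address] = care_of_address

def tunnel(destination_home_address):
    """Packets for the home address are encapsulated toward the current CoA."""
    coa = mobility_binding_table.get(destination_home_address)
    return f"encapsulate and forward to {coa}" if coa else "deliver locally"

register("1.1.1.7", "10.31.2.1")   # node 1.1.1.7 roams to the foreign agent at 10.31.2.1
print(tunnel("1.1.1.7"))           # traffic now follows the new data path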
The mobile wireless segment of the telecom sector is one of the highest growth areas of the industry. Operators are asking Cisco innovators to assist with the transition from circuit architectures to IP-based packet systems to deliver their new mobile data services. Anywhere, anytime access is the goal. Creating mobility options for high-value knowledge workers can have a positive impact on productivity, as well as company and customer responsiveness, and can bring about a higher utilization of company core assets. Beyond the benefits of the Mobile IP standard, Mobile IP networks encompass mobile computing and mobile teleputing. Mobile IP networks consist of wireless IP LANs for data and voice—both public and private—and wireless broadband data for mobile telephony.
Wireless IP LANs Wireless LANs (WLANs) maximize the effectiveness, freshness, and reach of IP applications. WLANs began in the enterprise in the late 1990s. Early adopters used this movable LAN technology to solve business problems, streamline processes, and enable the mobility of their workforce within the campus. Expanding the productivity zone of knowledge workers was the primary justification for deploying the technology. Though wireless LANs began in private user space, the technology is also well applied to public access areas.
Private WLANs Private WLANs have been deployed for several years in private enterprise and small and medium business markets. Meeting rooms, lobbies, and other common areas of business campuses are excellent spots in which to install WLAN access points. In fact, over 50 percent of United States enterprises now use WLAN technology. Service providers are offering private WLAN services, primarily to small- and medium-sized businesses and consumers, who are generally short on the IT skills necessary to plan, implement, and operate these environments. Service providers and operators with large managed service opportunities might propose and offer WLAN management services, in effect beginning entry into the LAN space of enterprises desiring such services. For IP networks, wireless LAN technology is an extension of a wired LAN. Wireless LAN technology pushes back barriers, giving IP applications the mobility necessary to broaden the reach and time utilization of mobile professionals.
Wireless LAN Standards The radio-based WLAN standards—802.11b, 802.11a, and 802.11g—are moving mobility into the mainstream. The standards have helped to mature the technology and lower the overall cost of wireless solutions. Over one half of all new laptop computers, key enablers for untethered connectivity, now include integrated support for one of these wireless standards. Corporations and businesses are large purchasers of laptop computers in order to increase worker productivity and responsiveness. For them, WLANs extend these benefits even further by broadening accessibility to network files and applications. Wireless technology is also being extended beyond laptops and into printers, private branch exchange (PBX)-based phones, PDAs, and Pocket PCs, and embedded in other devices that benefit from mobility. The attractive price points have also moved WLANs into households in the consumer market. Today, more than a half dozen significant wireless technology vendors concentrate on the consumer market. It is now very common to detect wireless access points in small, rural towns of America and abroad.
The benefits of mobile data technology will naturally follow the appeal of mobile telephony. Multiphone families are largely multi-PC families these days, due to the affordability of PCs and the adoption of PC computing by all members of the household—workers, home managers, and school-aged children. Multiple computing devices present a connectivity issue within the home as every PC or printer needing to be connected must be cabled to the broadband modem for Internet access. These are in-home LAN connections, and installing such wiring in homes is difficult on a postconstruction basis. The appeal of wireless LANs in the home is the ease and mobility of connecting multiple devices. As such, wireless LAN technology is a natural for homes, a technology that you can take with you when you move.
Wireless LAN Capacity Wireless LAN technology, specifically the Institute of Electrical and Electronics Engineers (IEEE) 802.11x standards of WLANs, operates at up to 11 Mbps per client (802.11b) or up to 54 Mbps per client (802.11g and a). Many WLAN equipment vendors are also using techniques to push single-user data rates to 108 Mbps. WLANs use the concept of access-point technology. To reach these maximum rates, there is generally just one user at a time, and you must be in close proximity to the WLAN access point. Just as a wired Ethernet switch has a throughput capacity by design, a WLAN access point has an aggregate throughput capacity. The wireless transmission rate will operate at less than the maximum data rate allowed, depending on distance from the WLAN access point and on the concurrent number of wireless clients in use. The more concurrent clients or users there are sharing an access point, the lower each individual's data rate. Table 2-4 provides a useful perspective for approximate network capacity, throughput, and channel comparisons of 802.11 wireless radio technology. As an example, an 802.11x WLAN access point has a maximum capacity that ranges from about 18 Mbps for 802.11b radios to up to 66 Mbps for the 54 Mbps 802.11g radio standard. The 802.11a standard, also a 54 Mbps technology that runs in the 5 GHz spectrum, allows more than three channels—from 12 up to 24 radio channels—giving 802.11a radios an aggregate of about 300 Mbps of throughput capacity at 12 channels and 600 Mbps at 24 channels. As a result, the 802.11a radios are more expensive and best fit the needs of larger corporations that can justify them across a sizable number of employees—workers who can benefit from the freedom to compute. The mass adoption of the 802.11a standard, due to its design for the 5 GHz spectrum, might well be tempered as 802.11a technology isn't backward compatible with the 802.11b/g standards.
Table 2-4 Approximations for 802.11b, 802.11g, and 802.11a Networks

IEEE Standard | Maximum Data Rate per Single Client (Mbps) | Throughput per Client (Mbps) | Number of Concurrent Wireless Client Channels | Access Point Total Capacity (Mbps)
802.11b | 11 | 6 | 3 | 18
802.11g (with 802.11b clients) | 54 | 8 | 3 | 24
802.11g | 54 | 22 | 3 | 66
802.11a | 54 | 25 | 12 | 300
802.11a | 54 | 25 | 24 | 600
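As a rough arithmetic illustration of the sharing effect described earlier, the sketch below estimates per-client throughput by capping each client at its approximate maximum and dividing the access point's aggregate capacity (values approximating Table 2-4) among concurrent clients. Real throughput also depends on distance, interference, and protocol overhead, so these numbers are ballpark estimates only.

# Ballpark per-client throughput when concurrent clients share one access point.
# Capacities approximate Table 2-4; real results vary with distance and interference.

per_client_max_mbps = {"802.11b": 6, "802.11g": 22, "802.11a": 25}
ap_capacity_mbps    = {"802.11b": 18, "802.11g": 66, "802.11a": 300}   # 802.11a at 12 channels

def estimate_mbps(standard, concurrent_clients):
    # One client cannot exceed its own maximum, and all clients together
    # cannot exceed the access point's aggregate capacity.
    return min(per_client_max_mbps[standard],
               ap_capacity_mbps[standard] / concurrent_clients)

for clients in (1, 3, 10, 30):
    print(clients, "clients on 802.11g ->",
          round(estimate_mbps("802.11g", clients), 1), "Mbps each (approx.)")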
Refer to Chapter 9, “Wireless Networks,” where wireless capacity is covered in more detail.
Wireless Digital-Access Technologies There are many radio-transmission and digital-access technologies at work across the WLAN standards. The primary types are as follows:
• Diffused Infrared—Used for several years in laptop-based technology (optical mice and keyboards), diffused infrared is a short-distance, low-power, low-bandwidth technique for wireless transmission, usually in the 1 to 2 Mbps range. Using a wide beam and reflected light, this technology doesn't require line of sight. It is limited to about 4 Mbps with a range of 9 to 15 meters.
• Frequency Hopping Spread Spectrum (FHSS)—A technique used in the 2.4 GHz Industrial, Scientific, and Medical (ISM) band. By frequency hopping at a rate known to both the transmitter and receiver, you can achieve wireless data transmission while limiting interference with other devices in the crowded ISM band of frequencies. Generally limited to 2 Mbps data rates, FHSS is primarily an intrabuilding radio technology.
• Direct Sequence Spread Spectrum (DSSS)—This physical transmission layer got its start as a secure transmission technology. The DSSS radio signal is modulated on a radio channel with a unique code known only to the communicating parties. Essentially kin to code division multiple access (CDMA), the technology operates well in the 2.4 GHz range, providing wireless data rates up to 11 Mbps. It is very common because it is used in the 802.11b radio standard.
• Orthogonal Frequency Division Multiplexing (OFDM)—A digital-access technology that uses advanced frequency modulation and multiplexing techniques. The WLAN radios with speeds greater than 11 Mbps use variants of the OFDM digital-access technique to do so.
Figure 2-12 shows the various WLAN radio speeds, technologies, and timeframes.
Figure 2-12 Wireless LAN Radio Speeds and Technology. Network radio speeds progressed from proprietary radios and the 1999 ratification of IEEE 802.11a/b through 2003: 802.11b (2.4 GHz, DSSS, up to 11 Mbps), 802.11a (5 GHz, OFDM, up to 54 Mbps), and 802.11g (2.4 GHz, OFDM/DSSS, up to 54 Mbps).
Wireless LAN Security As with any IT infrastructure, security must be properly planned and deployed to keep wireless traffic secure. While security might be a discretionary feature for wired LANs, it is often mandatory for wireless LANs. That is because it is much easier and more covert to "sniff the air" for wireless data packets than it is to tap a specific wire within a wired LAN environment. Wireless data can often be detected and copied beyond an organization's facilities and physical security measures. Proper security designs and a defense-in-depth approach are important considerations for wireless LANs. Some of the wireless LAN security techniques include the following:
• Wired Equivalent Privacy (WEP)
• Lightweight Extensible Authentication Protocol (LEAP)
• 802.1x
• Extensible Authentication Protocol (EAP)
• Protected Extensible Authentication Protocol (PEAP)
• IP Security (IPSec)
The 1999 IEEE 802.11 wireless LAN standards include a low-level security feature called Wired Equivalent Privacy (WEP). WEP was never intended to provide highly secure, impenetrable encryption for the wireless air link. WEP uses the RC4 encryption algorithm with keys of up to 128 bits, but that alone is not sufficient to ensure air link security. With WEP, the encryption key can be recovered by a determined attacker who intercepts a significant number of
encrypted packets: the more packets that are available, the more easily and quickly cracking software can derive the WEP key. WEP essentially employs a fixed security key for all users of an access point on each and every session throughout the day. Reconfiguring the WEP key is administratively prohibitive for both the WLAN access point and all the wireless clients. Stronger encryption measures, authentication options, and manageability are required. Cisco introduced a proprietary version of wireless encryption technology called Lightweight Extensible Authentication Protocol (LEAP). LEAP makes use of the same WEP-based, 128-bit RC4 cipher mechanism; however, LEAP enhances WEP security in a couple of different ways. First, LEAP automatically and randomly changes the WEP key per user as well as per session. As a result, it is difficult to intercept a significant number of packets containing the same encrypted key, increasing the difficulty of cracking the cipher. Even if a per-user, per-session WEP key were determined, the key would change during the next user session. Second, LEAP adds usage of the Remote Authentication Dial-In User Service (RADIUS), requiring wireless users to authenticate via the username and password factors stored in the RADIUS database. Wireless clients that fail authentication cannot complete wireless session setup and are, therefore, dropped from wireless access. In addition, a RADIUS timeout feature can be used to automatically send an in-session wireless client a new WEP key at periodic intervals, enhancing security for users who stay online wirelessly for long durations. These capabilities of LEAP help to strengthen wireless air link encryption. Both WEP and LEAP are considered Layer 2 security. In 2001, the IEEE introduced the 802.1x standard for stronger wireless security. 802.1x added port-based access control and the Extensible Authentication Protocol (EAP) for authentication between wireless users and an authentication server such as a RADIUS server. The standard even provides a method for WEP key or other key distribution and management that can include per-session keying for increased security. The 802.1x standard also applies equally to wired Ethernet LANs. You might also encounter Extensible Authentication Protocol-Transport Layer Security (EAP-TLS), defined by RFC 2716, which is often used in certificate-based security environments. EAP-TLS is an extension of the PPP authentication method used for PPP connections. Another protocol, Protected EAP (PEAP), was developed by Microsoft, RSA Security, and Cisco Systems, Inc. PEAP adds encryption and integrity to the initial negotiation and authentication requests of the EAP protocol. Tunneled Transport Layer Security (TTLS) is yet another wireless protocol seeking to ease certificate management. For a defense-in-depth approach, you can add Layer 3 security to wireless clients, such as an IPSec VPN client. IPSec is a proven, highly secure encryption method for VPNs, and the use of IPSec VPNs at Layer 3 over WEP-encrypted Layer 2 wireless sessions is considered secure. Using an IPSec VPN provides security beyond the WLAN access point all the way across any wireline backhaul networks, or even the Internet, to the employee's home organization. Designs can include tokens, intrusion detection systems, and firewalls, which
are technologies used to enhance security for nonwireless computing clients. Figure 2-13 shows an example of wireless LAN security using several of the mentioned techniques.
Figure 2-13 A Highly Secure Wireless Network. MAC- and/or user-authenticated clients associate with an access point using WEP, LEAP, and 802.1x; an IPSec VPN, a firewall, and a RADIUS server protect the corporate network as part of a complete security policy. Source: Cisco Systems, Inc.
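The per-user, per-session rekeying idea behind LEAP and the RADIUS timeout feature can be illustrated with a deliberately simplified sketch: each session starts with its own random key, and an in-session timeout replaces the key at intervals, limiting how long any captured key remains useful. This is a conceptual illustration only, with hypothetical names and intervals; it is not Cisco's LEAP, the 802.1x key machinery, or a production key-management design.

import secrets

# Conceptual per-user, per-session rekeying. Illustration only.

class WirelessSession:
    REKEY_INTERVAL_S = 600                   # e.g., rotate the key every 10 minutes

    def __init__(self, username):
        self.username = username
        self.key = self._new_key()           # unique key per user, per session

    @staticmethod
    def _new_key():
        return secrets.token_hex(16)         # 128-bit random session key

    def on_rekey_timeout(self):
        # A captured key is only useful until the next rotation.
        self.key = self._new_key()

session = WirelessSession("alice")
previous_key = session.key
session.on_rekey_timeout()
print("key rotated:", previous_key != session.key)   # True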
Significant gains in wireless security have occurred in recent years, including the delivery of a new industry-standard encryption algorithm called the Advanced Encryption Standard (AES). AES works within IPSec. Along with IPSec and VPN technology, several effective security techniques are available to extend the privacy of corporate data and voice networks into the wireless space. Chapter 9 contains more detail about 802.11 WLAN technology.
Public Wireless LANs Personal computer laptops are responsible for carrying 802.11b wireless LAN technology out of the enterprise and into the public domain. Mass production of 802.11b standard chipsets improved the affordability of the technology, which became an integrated standard feature in business and consumer-level laptops and gained mass consumer acceptance. Public WLANs and home-based WLANs are the result of that acceptance.
During the rapid growth of consumer demand for Internet access, numerous service providers emerged with public WLANs (PWLANs). Known as wireless Internet service providers (ISPs), these companies provide wireless Internet access at public locations by taking advantage of the unlicensed 2.4 GHz wireless spectrum of 802.11b and 802.11g wireless LANs. PWLAN services have become very popular because they offer high-bandwidth Internet access at select locations where users gather for short-term periods. These are generally public access areas including cafés and coffee shops, hotels, airports, and convention halls, to name just a few. Extensions of PWLAN technology to in-flight airplanes and trains are also increasing. Both the industry and the media generally refer to public WLAN areas as hotspots. The PWLAN market is still in its early stages of development. PWLAN deployment currently leads in Europe, followed by Asia Pacific and then North America, although North America has the largest density of WLAN-enabled laptops at present. Many service providers and operators are making plans for PWLAN services. They are found in different stages: some are deploying wireless service, some are conducting trials, and others are still monitoring the market, waiting for either technology maturity or a revenue forecast trigger. Cities and municipalities are also examining and deploying PWLANs as a way to improve productivity, first for downtown-area workers and then for constituents. PWLANs are likely to be a mixture of "for free" and "for fee" access. The desire to combine a WLAN hotspot fee contract with a mobility contract is still very much in flux. PWLANs, like WLANs, are generally classified as portability technology, distinguished from mobility technology. PWLANs tend to bridge the gap between fixed networking services, such as an Ethernet wall jack at work, and mobility networking services, such as cellular broadband data using CDMA or Global System for Mobile Communications (GSM) data features. PWLANs at 11 Mbps and 54 Mbps also fill in the speed gap between fixed networking (100 Mbps to 1 Gbps) and mobility networking (80 Kbps to 2.4 Mbps). PWLANs must also deal with concerns of user segmentation, security, user roaming between PWLAN networks, billing, and competition from many traditional public networking service offerings. PWLANs are being deployed across all segments of service providers. Some are seizing market opportunity through service differentiation, some are complementing existing wireless services for bundling opportunities and coverage expansion, and others are deploying in response to competitive positioning. Many other technologies such as WiMAX, wireless mesh, and mobility data all have the potential to complement or corner the market for profitable PWLANs. WiMAX seeks to deliver higher bandwidth (up to 70 Mbps of shared throughput) at a greater range (up to 31 miles) than Wi-Fi 802.11 technology. Wireless mesh is a relatively new twist on WLAN technology, using additional dedicated wireless channels between access points (wireless backhaul) rather than using a wired uplink back to a nearby Ethernet switch. Mobile cellular phones are increasing their data speeds and approaching broadband rates.
According to an IDC forecast, worldwide PWLAN hotspots are reaching 136,000 installations in 2005 and are forecasted to approach 250,000 in 2008.3 Based on such rapid growth, coverage is filling in quickly. Near the end of this decade, the ability to seamlessly roam wirelessly across WLANs will make mobility networks and PWLANs a reality in major metropolitan coverage areas.

Cisco has assembled a PWLAN solution using carrier-class platforms, and many deployments are already in service worldwide. Figure 2-14 shows a diagram of the Cisco PWLAN architecture.

Figure 2-14 Cisco PWLAN Architecture Overview (diagram showing Cisco access points and Access Zone Routers in Wi-Fi zones, a backhaul network to the public WLAN operator's SSG and SESM with location/provider branding, the Cisco CNS Access Registrar and Cisco ITP MAP gateway performing 802.1x/EAP-SIM authentication against the mobile operator's HLR/AuC over the SS7 network, a 7600 with CSG for billing/prepaid partners, GPRS/CDMA mobile data integration, managed guest access, premium services, corporate VPNs, and the Internet. Source: Cisco Systems, Inc.)
The components and features of the Cisco PWLAN architecture include the following:
• Access points—The Cisco PWLAN solution uses Cisco 1100, 1200, and 1300 Series access points.

• Access Zone Router (AZR)—Originally based on the Cisco 1700 platform, with solution features now available for the Cisco 2600 and Cisco 3700 platforms, the AZR provides connectivity, client address management, security services, and routing across a WAN from each access point to an operator's point of presence (POP) or data center.

• Access control and service enablement—Access control is based on the Cisco IOS Service Selection Gateway (SSG) technology that is now available across a broad range of platforms, including the Cisco 2651XM Router, Cisco 2691 Router, Cisco 3725 Router, Cisco 3745 Router, Cisco 7200 Series, and Cisco 7301 Router. Together with the Cisco CNS Subscriber Edge Services Manager (SESM), the Cisco SSG provides subscriber authentication, service selection, service connection, and accounting capabilities to subscribers of Internet and intranet services.

• Captive portal and branding server—The Cisco CNS SESM works with the Cisco SSG to provide complete control over the subscriber experience, supporting customization and personalization based on device, client, location, service, and other criteria to offer higher value to end users and maximize service and advertising revenue.

• Access Policy Server—The Cisco CNS Access Registrar is a RADIUS-compliant access policy server used to support web and 802.1x/EAP user authentication. When used in conjunction with the Cisco IP Transfer Point (ITP) Mobile Application Part (MAP) gateway, Cisco CNS Access Registrar performs home location register (HLR) proxy services in support of EAP-subscriber identity module (SIM) authentication for mobile operator networks. Cisco CNS Access Registrar provides carrier-class performance and scalability, as well as the extensibility required for integration with evolving service management systems.

• Mobile operator Signaling System 7 (SS7) interconnect—The Cisco ITP is a product for transporting SS7 traffic over IP (SS7oIP) networks. When deployed in a mobile operator's PWLAN network, the Cisco ITP acts as a gateway by taking SIM authentication credentials from 802.1x/EAP-SIM and formatting them into standard SS7 MAP messages for routing to the operator's HLR/AuC (Authentication Center). (A simplified sketch of this authentication exchange follows this list.)

• Network management—Cisco provides a feature-rich element management system combined with a scalable service management layer for robust fault, configuration, and performance capabilities of the PWLAN solution. This includes the CiscoWorks Wireless LAN Solution Engine (WLSE), CiscoWorks LAN Management Solution (LMS), Cisco Distributed Administration Tool (DAT), Cisco Signaling Gateway Manager (SGM), Cisco Information Center (Cisco Info Center), Cisco Networking Services Configuration Engine, and Cisco CNS Performance Engine (CNS-PE).4
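The interaction among the access point, the RADIUS/EAP-SIM policy server, and the mobile operator's HLR can be hard to picture from the component list alone. The following Python sketch is a deliberately simplified model of that flow: the class and function names are hypothetical, the GSM A3/A8 algorithms are replaced by a placeholder XOR, and the real exchange involves additional EAP round trips, key derivation, and MAP signaling details that are omitted here.

from dataclasses import dataclass
import secrets

@dataclass
class GsmTriplet:
    """One GSM authentication vector: challenge, expected response, cipher key."""
    rand: bytes
    sres: bytes
    kc: bytes

class Hlr:
    """Stands in for the operator's HLR/AuC, reached over SS7 MAP via the ITP."""
    def __init__(self, sim_keys):
        self.sim_keys = sim_keys  # IMSI -> secret Ki; Ki never leaves the HLR or SIM

    def send_authentication_info(self, imsi):
        # A real HLR/AuC runs the GSM A3/A8 algorithms over Ki; XOR is a stand-in.
        rand = secrets.token_bytes(16)
        ki = self.sim_keys[imsi]
        sres = bytes(a ^ b for a, b in zip(rand[:4], ki[:4]))
        kc = bytes(a ^ b for a, b in zip(rand[8:16], ki[8:16]))
        return GsmTriplet(rand, sres, kc)

class AccessPolicyServer:
    """Stands in for the RADIUS/EAP-SIM policy server plus the ITP MAP gateway."""
    def __init__(self, hlr):
        self.hlr = hlr

    def authenticate(self, imsi, sim_response_fn):
        triplet = self.hlr.send_authentication_info(imsi)  # MAP request over SS7oIP
        sres_from_client = sim_response_fn(triplet.rand)   # EAP-SIM challenge to the laptop
        return sres_from_client == triplet.sres            # grant or deny WLAN access

# Usage: the laptop's SIM computes its response from the same secret Ki.
ki = secrets.token_bytes(16)
hlr = Hlr({"262011234567890": ki})
aaa = AccessPolicyServer(hlr)
sim_card = lambda rand: bytes(a ^ b for a, b in zip(rand[:4], ki[:4]))
print("access granted:", aaa.authenticate("262011234567890", sim_card))

The essential point the sketch captures is that the subscriber's SIM secret never crosses the hotspot; only a challenge and its expected response do, which is what allows a Wi-Fi venue to reuse the mobile operator's existing subscriber database for authentication and billing.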
Mobility Networks

In 2004, mobile cellular networks reached a ten-year milestone—a boom period that increased wireless phone penetration rates to greater than 50 percent worldwide. The mobile telephony industry is preparing for an expected two billion wireless users by the year 2010. Mobile users are eager to replicate their fixed-location computing experiences into their mobile endeavors. Early mobile packet data applications such as short messaging service and wireless Internet access have enjoyed great success. As mobile data becomes a larger percentage of overall mobile minutes, the anticipated uptake of data-enabled mobile handsets, PDAs, Pocket PCs, and other handheld devices will demand more interactive and higher-bandwidth data solutions. In fact, mobile data services at 384 Kbps and beyond will open up enormous possibilities for exciting applications that will imitate the fixed-location data experience. This will necessitate more IP intelligence and greater data integration into the mobile network infrastructure in order to deliver a superior mobile Internet.

IP is quickly becoming vital within wireless mobility networks. As the worldwide protocol of choice within and beyond the Internets, IP is the universal gateway to advanced mobile data applications. Mobile operators will increasingly look to IP packet-based core networks to facilitate sophisticated wireless handset services based on IP data transport. Much like wireless LAN security, mobility-based IP services will need IP VPN solutions that protect company and personal data while "in flight." One of many vendors supplying solutions to this space, Cisco Systems has developed IP-based solutions that apply to several areas of a mobile network infrastructure, such as

• Packet gateways running on router platforms
• SS7 signaling over IP solutions
• IP RAN transport
• Integrating complementary WLAN 802.11 technology
• Packet-based VoIP
• IP and MPLS core networks
Service providers might use any or all of these to migrate their mobility networks to an IP-capable delivery platform.
Packet Gateways Running on Router Platforms

In mobility networks such as digital cellular, the RAN is the portion of the operator's network that interfaces via radio signals with the user's mobile phone or handheld device. The RAN usually operates using one of the well-known cellular technologies, such as GSM or CDMA.
Packet gateways running on IP router platforms lower the cost of mobile data connectivity. By pushing IP-based packet gateways to the edge of the RAN, you can apply IP services through a consistent user interface independent of the RAN technology in use. This allows a mobile operator to mix and match radio access technologies as appropriate for their customer set and business model. For example, operators can easily integrate unlicensed spectrum technologies, such as 802.11 WLANs, along with their CDMA or GSM mobility offerings.
SS7 Signaling over IP Solutions

SS7oIP solutions dramatically lower the cost of signaling networks. The worldwide SS7 network, designed primarily for basic call control, is already passing over one billion short message service (SMS) messages per day, and improved features such as user authentication and mobile number portability will continue to apply stress to this signaling infrastructure. IP transfer points running on IP-optimized platforms can help operators choose the appropriate technology for signaling requirements, in essence providing a gateway between SS7 and IP networks. By migrating signaling traffic to IP networks, operators can increase their flexibility and lower their signaling transport costs.
IP RAN Transport

IP RAN transport lowers the expense of backhaul. Backhaul of data from cell sites typically accounts for 20 to 30 percent of the operating expenditure budget. By using open, standards-based IP platforms and network solutions, operators can potentially lower their outlay on transport while facilitating a progressive migration to IP-based networks that enable more service value.
Integrating Complementary WLAN 802.11 Technology

Integrating mobility networks with WLAN 802.11 networks can provide complementary wireless coverage. As mobile professionals move from location to location, they can use CDMA or GSM with General Packet Radio Service (GPRS) or Enhanced Data rates for GSM Evolution (EDGE) and others, along with 802.11 WLANs, to stay in constant touch with data applications. The higher-bandwidth performance of WLAN networks is especially desirable when users are semimobile, such as in a hotel or convention center. By using IP-based products to integrate these types of radio-access networks, you can shield the underlying complexity of radio access from the user.

Key to ease of use are features such as Mobile IP. When you consider that many mobility users of IP applications might use IP VPNs, the movement of the user across different radio-access technologies can break the IP VPN session, causing users to reinitiate the IP VPN each time they "roam" across different RAN technologies. The use of Mobile IP allows a user's IP address and resultant IP VPN session to roam across different radio-access
technologies. The goal is seamless roaming across a number of mobility and Wi-Fi coverage types. This allows the application of a consistent set of user features as mobile professionals move about, while increasing the total time that the professional can use the operator’s network services. The unique application of Mobile IP, which runs in a Cisco mobile access router, allows an IP network to be in motion without users having to necessarily start and stop their IP connectivity as they travel. This can also facilitate identity- and location-based services.
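To make that roaming behavior concrete, the short sketch below models, in a heavily simplified form with hypothetical names and addresses, the core idea of Mobile IP: a home agent keeps a binding from the user's stable home address to whatever care-of address the current radio-access network has assigned, so sessions keyed to the home address (such as an IPsec VPN) survive a move between, say, GPRS and 802.11 coverage.

class HomeAgent:
    """Toy model of a Mobile IP home agent: it maps a stable home address to the
    mobile node's current care-of address and tunnels traffic toward it."""

    def __init__(self):
        self.bindings = {}  # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        # The mobile node re-registers each time it attaches to a new network.
        self.bindings[home_addr] = care_of_addr

    def deliver(self, packet):
        care_of = self.bindings[packet["dst"]]
        # Real Mobile IP uses IP-in-IP or GRE encapsulation toward the care-of address.
        return {"tunnel_to": care_of, "inner": packet}

ha = HomeAgent()

# The user attaches via a GPRS RAN; peers only ever see the home address 10.1.1.5.
ha.register("10.1.1.5", "172.16.40.9")
print(ha.deliver({"dst": "10.1.1.5", "payload": "vpn traffic"}))

# The user walks into an 802.11 hotspot; only the care-of address changes,
# so an IPsec VPN session bound to 10.1.1.5 does not have to restart.
ha.register("10.1.1.5", "192.168.7.23")
print(ha.deliver({"dst": "10.1.1.5", "payload": "vpn traffic"}))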
Packet-Based VoIP

Operators can apply packet-based VoIP to mobile networks to accomplish convergence of voice and data anywhere. With packet voice technology in both wired and wireless phones, operators can either build afresh, or migrate networks from a circuit-switched platform to a packet-switched platform, reducing costs through convergence at the IP services layer. The same cost benefits that are driving packet voice technology within enterprises and businesses can be applied to mobile network infrastructures.
IP and MPLS at the Core of Mobility Networks

IP and MPLS networks help to lower overall capital expenditures (CapEx) and operating expenditures (OpEx) through the benefits of convergence and the simplicity of integration. IP/MPLS-based networks allow the core integration of disparate network technology while enabling an abundance of IP-based services. As mobile handsets become IP addressable, features such as IPv6 forwarding, multicast, security, and QoS become essential. Operators are applying strategic investments in IP/MPLS network technology in order to achieve these benefits and position for new IP service demand at the edge of the RAN. A first step toward increasing IP service value is to converge the backbone transport technology to an IP- or IP/MPLS-based network, as this can contribute to both OpEx improvement and revenue generation from new IP services.
Cisco Mobile Exchange Framework

The next step is to drive IP benefits closer to the edge of remote-access networks. Cisco has developed a mobility architecture called the Cisco Mobile Exchange Framework. With business users and consumers all desiring access to mobile VPNs and the mobile Internet, the variety of devices and RAN methods makes technology integration extremely challenging, especially when trying to deliver a seamless, content-rich experience and a single monthly bill to the user. The Cisco Mobile Exchange Framework is a set of components and services that can be implemented in provider wireless mobility networks to deliver IP-based services to mobility users.
In Figure 2-15, mobile users are located at the radio edge, desiring to use cellphones, PDAs, and wireless 802.11 PCs to access their company intranets, the public Internet, or value-added services from ISPs, application service providers (ASPs), or mobile virtual network operators (MVNOs). With such a variety of user devices and technologies at the radio edge, a number of components and services are available within the Mobile Exchange Framework to accomplish these IP data session experiences. In the aggregation section, a number of standard network protocols are used to interface a provider's RAN or RANs with the appropriate packet gateways. These gateways connect GSM/GPRS users (gateway GPRS support node [GGSN]), CDMA users (Packet Data Serving Node, or PDSN), or Wi-Fi users (802.11) into the Cisco Mobile Services portion of the framework to access service gateways, billing gateways, and security agents. Over a dozen components, services, and Cisco routers with Mobile Exchange software are available to help providers craft their mobility networks to connect mobile users with IP-based applications.

Figure 2-15 The Cisco Mobile Exchange Framework (diagram spanning the radio edge, aggregation, Cisco service edge, and core IP sections; 2/2.5/3G BSC/PCF, SGSN, and WLAN access connects through GGSN, PDSN, and 802.11 gateways, using protocols such as MIP, GRE, IPsec, L2TP, MPLS, GTP, and SS7, into Cisco Mobile Services including service gateways, accounting and charging, identity, SSG, CSG, Home Agent, VPN, firewall, load balancing, CNS Access Registrar, and management, and onward to intranets, VPNs, the Internet, and ISP/ASP/MVNO partners. Source: Cisco Systems, Inc.)
By applying the Cisco Mobile Exchange Framework, mobile operators can integrate multiple types of radio-access networks, provide consistency in user authentication, streamline signaling transport and backhaul operations, and extend IP value all the way to the mobile handset.

Over the next ten years, wireless LANs in both private and public space, wireless mobility networks, and IP mobility features such as Mobile IP will be fundamental enablers to increasing IP-based service value in mobile networks. As mobile professionals, commuters, teleworkers, and consumers ascend the productivity and entertainment curves, they will persist in demanding tailored, IP-based access while in motion. Mobile networks that contain high-value IP services will provide operators with the flexibility to package a variety of value-added services with which to target specific customer segments. Mobile services subscribers will continue to distinguish value among different provider offerings. The ability to uniquely and rapidly intersect customer needs with service-oriented mobility solutions is an essential competitive skill, best leveraged on the basic tenets of simplicity, reliability, and a clear value proposition. Mobile networking solutions built on IP are the electronic fuel that is propelling mobility into the mainstream.

Table 2-5 shows available Mobile IP network services.
Table 2-5 Mobile IP Network Services

Service Category: Wireless Data/IP
Service Types: Mobile data; Messaging (e-mail, I-chat); IP VPNs; Intranet; Internet; Telecommuting; Remote telemetry; IP telephony; Wireless campus voice; Surveillance
Technology Options: Wireless LANs, 802.11x; Public wireless LANs, 802.11x (Wi-Fi) and 802.16 (WiMAX); Mobile IP

Service Category: Mobility
Service Types: Wireless mobility; E-mail; Mobile Internet; Short Message Service (SMS); Multimedia Messaging Services (MMS); Paging services; Televoting
Technology Options: AMPS; FDMA; TDMA; CDMA; GSM/GPRS; CDMA2000 1xRTT; CDMA 1xEV-DO; CDMA2000; W-CDMA
Global IP Networks

IP networks are global. The nanometric flashes of light pulsing through optical fiber create a photonic Morse code of sorts for IP packets from Asia and Australia to North America, Latin America, and Europe. Using the building blocks of the Internet Protocol, IP routers, switches, and optical fiber, globally minded providers are using IP around the world.

Global IP networks are often purpose-built primarily as a "carrier's carrier." Many providers wholesale international capacity to telecommunications carriers that have multinational interests or multinational customers—in essence, providing for carriers to access a global network backbone. Some of these carrier's carrier providers also sell to the retail markets to supplement their business model or to expand into other market segments such as data, local, and long-distance voice services, hosting services, and so on.

Many new global IP network providers appeared during the telecommunications boom of the 1990s, using abundant investment capital to build greenfield networks leveraging new breakthroughs in optical price/performance and IP equipment. Many of these executed a classic business model, the enterprise archetype, to globalize IP-based services, in effect using service pull opportunity on a wide-reaching scale. Built for IP, these networks can be used for worldwide LANs and WANs, Internet transport and hosting, and scores of other IP-based services. Ascending to Layer 3, these new global providers are riding optical lambdas and improved price/performance curves to take IP around the world.

While the Internets, both the original Internet and Internet2, are truly global in reach, there remains a superstratified layer of ringed and meshed, purpose-built global IP networks that crisscross the sphere like vapor trails in a bright blue sky. These global networks provide carrier and business class availability, guaranteed QoS, and wide-reaching IP services the world over. As companies go national and then international, they look to global IP network providers to help them do so.
When networks are global in nature, any downtime of network equipment or network links can have a serious impact on multinational and international connectivity, as these ultra long-haul conduits usually aggregate the most customer connections of any network. In addition to adequate bandwidth capacity, global network designs often include high-availability operations and alternate network facilities to meet service and availability guarantees. Brief perspectives on global bandwidth capacity, global network resiliency, and the Internets (original Internet and Internet2) are discussed next.
Global Capacity

Global IP networks are usually defined by tens of gigabits per second (Gbps) of capacity. Collectively, some international network routes reach into the terabits per second (Tbps) of capacity. For example, according to TeleGeography's (a division of PriMetrica, Inc.) Global Internet Geography 2005 Executive Summary, transatlantic Internet bandwidth capacity was projected at about 1000 Gbps (1 Tbps), and transpacific capacity was projected at about 400 Gbps.5 (See Figure 2-16.)

Figure 2-16 Transoceanic Internet Traffic and Capacity, 2003-2007 (two panels, Trans-Atlantic and Trans-Pacific, charting Internet bandwidth, peak traffic, and average traffic in Gbps for the years 2003 through 2007. Source: TeleGeography Research, a division of PriMetrica)
With long-haul and intercontinental fiber priced at a premium, bandwidth capaciousness is vitally important to minimize operational cost per bit and to forestall any capital expense necessary to increase fiber strand facilities. Wavelength division multiplexing (WDM) is a principal technology for increasing the data carrying capabilities of a fiber strand, and many of today’s global networks are employing the technology for that purpose. Chapter 5, “Optical Networking Technologies,” covers WDM.
Many individual global IP networks are still confined by SONET- and SDH-driven optical links between large network POPs and between continents. SONET and SDH data rates are increasingly expressed in gigabits per second (Gbps). For example, OC-48/STM-16 represents about 2.5 Gbps of capacity, while OC-192/STM-64 represents about a 10-Gbps bit rate. If a provider advertises backbone capacity of 80 Gbps, it could be using multiple strands of optical fiber at a 10-Gbps (OC-192/STM-64) bit rate per strand or, if using WDM, multiple lambdas at 10 Gbps per lambda within the same strand(s) of optical fiber.

The next step above a 10-Gbps bit rate for SONET is 40 Gbps, and a few IP routers and optical products are now capable of 40 Gbps per interface card slot (for example, the Cisco CRS-1). This fourfold increase in capacity extends a fiber strand's bandwidth life accordingly. But WDM and dense wavelength division multiplexing (DWDM) can multiply the capacity of a fiber strand by the hundreds. As such, a common approach is to multiply lambdas or wavelengths to increase concurrent connectivity, not necessarily to increase the bandwidth bit rate per lambda.

IP over optical technology has the potential to hollow out the SONET and ATM multiplexers and transmitters of yesterday. By using IP over Gigabit Ethernet on the edges and IP over optical through the core, complex layers of hardware and software are avoided, communication is optimized, management is streamlined, and costs are lowered. These pursuits are the holy grail of the new era of networking and will likely be advanced first and foremost in the global IP networking theater.
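The capacity arithmetic above is easy to reproduce. The following sketch uses the standard SONET building block (an STS-1 is 51.84 Mbps, so an OC-n line rate is n times 51.84 Mbps) to show how an advertised 80-Gbps backbone might be assembled either from parallel OC-192 strands or from 10-Gbps lambdas on a DWDM system; the strand and lambda counts are illustrative only, not a description of any particular provider's build:

# SONET/SDH line rates scale linearly from the 51.84-Mbps STS-1 building block.
def oc_rate_gbps(n):
    return n * 51.84 / 1000  # OC-n line rate in Gbps

print(f"OC-48/STM-16   ~ {oc_rate_gbps(48):.2f} Gbps")    # about 2.5 Gbps
print(f"OC-192/STM-64  ~ {oc_rate_gbps(192):.2f} Gbps")   # about 10 Gbps
print(f"OC-768/STM-256 ~ {oc_rate_gbps(768):.2f} Gbps")   # about 40 Gbps

# Two illustrative ways a provider might assemble an advertised 80-Gbps backbone:
strands = 8
print(f"{strands} fiber strands x OC-192: ~{strands * oc_rate_gbps(192):.0f} Gbps")

lambdas = 8
print(f"1 fiber strand, {lambdas} DWDM lambdas x 10 Gbps: {lambdas * 10} Gbps")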
Globally Resilient IP

Many companies operate in the global theater. Worldwide financial services and manufacturing firms, as well as international service providers, require the highest levels of availability and resiliency in their networks. When international stock markets are considered, every subsecond counts for an international financial trade. It is easy to see why some customers demand network connections from their global providers that do not fail.

Used for decades and relatively mature, circuit-switched networking equipment is a highly fault-tolerant architecture. There was no reason that IP networks could not achieve the same levels of resiliency in order to enhance the deployment of carrier-grade global IP networks. Taking a holistic approach beyond mere hardware platforms and device-level redundancy, Cisco developed additional software capabilities and routing features that could be used to architect an end-to-end, carrier-class IP network. Announced in 2002, this product development effort was referred to as Globally Resilient IP (GRIP). Since then, GRIP has become one of the focal elements within an overall Cisco emphasis on high availability for IP networks. GRIP is implemented within the Cisco IOS software on Cisco routing and switching platforms.

When considering high availability in IP networks, you must look at several networking layers in addition to Layer 1 physical link redundancy: Layer 2 media
connection information, Layer 3 routing protocols, and Layer 4 and higher services, all of which are involved in a comprehensive approach to high availability. Layer 2 considerations include the ability to statefully switch and converge traffic to a different Layer 2 MAC address connection instantaneously. Layer 3 contributions include routing protocols that converge almost instantly, can perform graceful restart, and can continue to forward packets via Layer 3 routing information while transparently switching over to a surviving Route Processor upon hardware processor failure, or to a redundant network component in the event of a power failure. Layer 4 involves protocols such as TCP and UDP that are used to establish IPsec VPNs and multicast data traffic. The Cisco GRIP features cover all of these functional areas.

GRIP addresses resiliency as a network-wide endeavor, at both the device level and across the network as a whole, building high availability and recovery into multiple IP features. Service providers and operators of large IP networks can increase their operational resiliency not only in the core backbone but also at the network edge, traditionally the primary single point of failure for network services. GRIP features provide overall benefits in the following areas:
• Network-edge resiliency
• Network-wide resiliency
• Service provider core and enterprise backbone resiliency
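The graceful-restart behavior described above rests on one division of labor: the forwarding table keeps moving packets while the control plane recovers on a redundant route processor. The toy model below illustrates only that idea; the class, method, and interface names are invented for illustration and are not a description of the actual Cisco NSF/SSO implementation.

class ToyRouter:
    """Conceptual model of nonstop forwarding: the FIB (data plane) keeps
    forwarding during a control-plane restart, and is refreshed only after the
    routing protocols have gracefully re-converged."""

    def __init__(self, routes):
        self.rib = dict(routes)  # control plane: routes learned from neighbors
        self.fib = dict(routes)  # data plane: forwarding table programmed in hardware

    def forward(self, destination):
        # Packet forwarding consults only the FIB, so it survives an RP failure.
        return self.fib.get(destination, "drop")

    def route_processor_failover(self):
        # The standby route processor takes over; the RIB must be relearned from
        # neighbors, but the FIB is deliberately left intact so traffic keeps flowing.
        self.rib.clear()

    def graceful_restart_complete(self, relearned_routes):
        # Neighbors kept advertising routes during the restart; once the control
        # plane has re-converged, both tables are refreshed.
        self.rib = dict(relearned_routes)
        self.fib = dict(relearned_routes)

router = ToyRouter({"10.0.0.0/8": "port-1", "192.168.1.0/24": "port-2"})
router.route_processor_failover()
print(router.forward("10.0.0.0/8"))       # still "port-1": no interruption to forwarding
router.graceful_restart_complete({"10.0.0.0/8": "port-1"})
print(router.forward("192.168.1.0/24"))   # "drop": this route was withdrawn on reconvergence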
The Internet—A Network of Networks

The Internet is the world-facing embodiment of global IP. The Internet and IP are symbiotic—codependent on one another to breathe and grow, expand, and show the world a vast library of resources and services. The Internet is humanity's library, and IP is the library card. As a global network of networks, the Internet enables communication, collaboration, and cooperation among diverse communities and cultures. Using this network of networks, IP vaults the continents to become a universal communications language, a digital porthole through which to view the Internet—the terrestrial distiller of a large part of the e-commerce and communication of the human race.

The United States is essentially the global hub of the Internet. With international backbone connections from Western Europe to the eastern United States, and with bandwidth pools from the western United States across to Asia Pacific, the United States is a critical transfer and switching point for these international bandwidth poles. There are about 32 significant
providers of Internet backbone services both nationally and internationally across about 60 countries that make up the largest percentage of usage. There are over 300 Internet backbone providers worldwide. Figure 2-17 shows a diagram of major international Internet routes.

Figure 2-17 Map of Major International Internet Routes (Source: TeleGeography Research, a division of PriMetrica)
The Internet is truly a megaglobal network of networks. The fundamental design of the Internet and the essential building blocks of the Internet protocols provide scale. That is the intrinsic beauty of IP. Whether networks are local, long, mobile, or global, IP scales. From the IP layer up, this mass assemblage of networks is rendered transparent to the users and contributors of content and communiqué. Although there are skills required to engineer and operate such transparency, those efforts are leveraged arithmetically with every new router, switch, Ethernet cable, and optical fiber installed and connected to the Internet. In doing so, the networking efforts of standards bodies, providers, businesses, and individuals tend and grow a global internetwork of networks to facilitate the needs of the many.
Table 2-6 lists the services that global IP provides.

Table 2-6 Global IP Network Services

Service Category: Data/IP Services
Service Types: IP VPN; ATM; Frame Relay; IP transit; Internet; Internet peering; Managed hosting; Application; Virtual servers

Service Category: Voice Services
Service Types: VoIP termination; Carrier outbound; Calling card; CIC transport; 1+ dial; Toll-free; International; Wireless roaming

Service Category: Capacity Services
Service Types: Collocation; Private line; Wavelength; Wireless; Metro services

Service Category: Access Services
Service Types: Local access; Integrated T1; Integrated T3

Target customers (all categories): Internet service providers; Cable operators; Wireless operators; Service providers; Multinational enterprises; Enterprises; Government; Medium businesses
Beyond IP

IP might well be extensible beyond our respective lifetimes. An interesting question is whether IP technologies will carry us another hundred years, much as analog telephony served communication needs for the same. Considering that the year 2000 (Y2K) date issue was shortsightedness of 20th century computer manufacturers and programmers, there is hopefully some caution against the assumption that IP is totally endless in scope and capability. While IP is generally thought of as long-term, long is a relative duration.

In the near term, IP will expand and stratify. It is likely that some of the intelligence of IP will become further embedded in silicon and perhaps even nanotechnology—or maybe integrated in some form with optical fiber—layering intelligence upon capaciousness.

Having been around since 1974, IP might well carry IT needs into 2074. IP, unlike any other protocol, is uniquely positioned as the central theme in the new era of networking. With a
worldwide contingent of engineering recruits, volunteers, innovators, and tinkerers, IP has become the "flux capacitor" of communication networks, handling however many terabits, petabytes, gigaflops, or "jigawatts" that you can throw at it. For the foreseeable future, IP is in vast abundance—the only scarcity is what can be done with all of it. Relatively undiscovered, the time to ponder what's beyond IP is likely several years, perhaps even decades away.
Technology Brief—IP Networks

This section provides a brief study on IP networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding IP networks.

• Technology at a Glance—Uses figures and tables to show IP network fundamentals at a glance.

• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts shown in the figures in this section to see how business drivers are driven through technology selection, product selection, and application deployment in order to provide solution delivery. Additionally, business drivers can be appended with critical success factors, and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint

Time, opportunity, and money remain the fundamental business requirements and primary drivers of IP networking technology. Today's company networks are now strategic business assets that must be planned, leveraged, and, most of all, successful to deliver business value that is measurable in customer revenue and satisfaction.

Lack of data bandwidth drives the divergence of data. Divergence of data creates differences in technology, which are the fundamental enablers, carriers, and harborers of data. Lack of data bandwidth mandates a distributed approach to business computing. Multiple data formats beget multiple data networks. The scarcity of bandwidth justifies the need for packet data networking, in order to afford a distributed computing architecture. You can only diverge data so far, pushing the boundaries of diminishing returns, before you have to converge and reassimilate data and technology to derive further advantages. IP networking stitches data and technology back together.
Companies around the world are doing this with an open-system IP framework. The Internet Protocols are equally suited for LAN (local) and WAN (long) communications, allowing companies to converge networks. That convergence has been one of the defining strengths of IP networking.

Over the last few years, significant changes have occurred that are impacting the industry and affecting the service provider value chain and business model. The changes in regulatory policy have enabled more competition and service substitutions, in an attempt to drive a regulated services industry toward a commodity services trade. This has impacted the competitive structure of providers, who now find themselves transforming their business models deeper into customer-centric orientation. This is necessary because product differentiation is becoming less distinctive. In a competitive environment, revenue generation remains paramount yet more difficult to close due to more discriminating customers with many available options. Flat-rate bandwidth offerings also remain, but value recognition must be marked beyond mere transport functionality.

With customers placing value distinction in IP-based services, providers must find ways to incorporate IP value into their product wares. It is this search for IP value that leads providers to a fundamental change in system architectures—architectures based on open standards rather than vertical, proprietary, purpose-built networks. An open-standards approach is a horizontal end-to-end model that yields a product-and-services methodology based on standard building blocks. Modular in nature, these IP-based products and services can be combined in different ways to create unique networks and services.

The cost and time savings, convergence options, and the innovation engine of IP networks cannot be ignored. IP is a system-level enabler, a core technology and foundation on which many other systems can be built. Designed from the ground up and implemented as a low-cost and extremely efficient communications vehicle, IP is now the most widespread network protocol suite in use in the world. Everyone can benefit from IP, and anyone can be beat by it.

Data, voice, video, and Internet data must come together. By standardizing various types of data—formerly associated with entirely separate technologies—IP provides a powerful solution. A converged IP network creates the foundation for greater collaboration, opening new ways to work and interact, simplifying network management, and reducing capital and operating costs for all. Converged networks are fueling the development of an array of dynamic applications such as e-learning, unified messaging, and integrated call-center and customer-support systems.

As service substitutions continue to proliferate, subscribers of IP services will increasingly distinguish value among different provider offerings. The ability to uniquely and rapidly intersect customer needs with service-oriented solutions is an essential competitive skill, best leveraged on the basic tenets of simplicity, reliability, and a clear value proposition.
The networking convergence engine of the late 20th century was IP. IP is today's dynamo of network convergence and service creation, extending productivity benefits, service variety, and innovation into the start of the 21st century. IP is unifying networks while facilitating the purposeful and appropriate combination of data. The fundamental design of the Internet and the essential building blocks of the Internet Protocols provide scale. That is the intrinsic beauty of IP. Whether networks are local, long, mobile, or global, IP scales.

Today, we seek to stay connected to the Internets, to maintain periodic touch as we move and revolve around a World Wide Web of knowledge and opportunity—pulsing on the backbones of IP networks. Indeed, IP networks are the beating hearts of our developing, digital consciousness. IP has become the new ascendancy in communications, handling nearly any internetworking task you can throw at it. Globally and universally extensible, IP is in vast abundance. The only scarcity is what we can do with all of it.
Technology at a Glance

Table 2-7 summarizes IP network technologies.

Table 2-7 IP Network Technologies

Bandwidth Speed Range
Local IP: Hundreds of megabits to several gigabits
Long IP: Kilobits to many megabits, one or more gigabits
Mobile IP: WLANs, 11 megabits to 108 megabits; mobility, kilobits to > 2 megabits
Global IP: Tens to hundreds of gigabits

Seed Technology
Local IP: Internet Protocols; Ethernet; Fast Ethernet; Gigabit Ethernet; 10 Gigabit Ethernet; Token Ring; FDDI; ATM; GRIP
Long IP: Internet Protocols; TDM T1/E1 to SONET/SDH OC-12/STM-4; Frame Relay; ATM; Metro Ethernet, 100 Mbps and 1 Gbps; GRIP
Mobile IP: Internet Protocols; 802.11x; Mobile IP; FDMA/TDMA; GSM/GPRS; CDMA2000; CDMA 1xRTT; CDMA 1xEV-DO; W-CDMA
Global IP: Internet Protocols; SONET/SDH from OC-48/STM-16 to OC-768/STM-256; 10 Gigabit Ethernet; Ethernet over SONET/SDH; Ethernet over RPR; Ethernet over MPLS; GRIP

Range
Local IP: Short to medium
Long IP: Short to long
Mobile IP: Short to medium
Global IP: Long to ultralong

Application
Local IP: LANs; Campus networks; Intranet; IP SANs; IP telephony; Business data; Distributed computing; Internet access; Home networking
Long IP: WANs; Business data; Video; IP telephony; IP VPN; Business voice; Disaster recovery
Mobile IP: Mobile LANs, WLANs, and PWLANs; Mobile telephony; Mobile data; Internet access; Home networking; Telecommuting
Global IP: International voice and data; Business data; IP transit; Internet access; Internet peering; IP telephony; Managed hosting; Application; Virtual servers; Carrier's carrier
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance

Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer's value distinction. The charts shown in the following figures list typical customer business drivers for the subject classification of networks. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider's critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers.
Figure 2-18 charts the business drivers for local IP networks.

Figure 2-18 Local IP Networks (chart mapping business drivers and critical success factors through technology, the Cisco product lineup and key differentiators, and applications to solution delivery, service value, and industry players for local IP networks)

Figure 2-19 charts the business drivers for long IP networks.

Figure 2-19 Long IP Networks (chart with the same structure, applied to long IP networks)

Figure 2-20 charts the business drivers for Mobile IP networks.

Figure 2-20 Mobile IP Networks (chart with the same structure, applied to Mobile IP networks)

Figure 2-21 charts the business drivers for global IP networks.

Figure 2-21 Global IP Networks (chart with the same structure, applied to global IP networks)
End Notes

1. Cisco Systems, Inc. The Internet Protocol Journal, Volume 6, Number 4, December 2003.

2. Cisco Systems, Inc. "Cisco SMB Class Mobility Solutions." http://www.cisco.com/en/US/netsol/ns339/ns395/ns176/ns314/netbr09186a0080201f22.html

3. Bakhshi, Shiv K., Evelien Wiggers, and Tim Crowley. "IDC Worldwide Hotspot 2004–2008 Forecast and Analysis: Still Spotty, but Gaining Salience," Study No. 32697, December 2004.

4. Cisco Systems, Inc. "Public Wireless LAN for Service Providers Solutions Overview." http://www.cisco.com/en/US/netsol/ns341/ns396/ns177/ns436/netbr09186a00801f9f3d.html

5. TeleGeography Research. "Global Internet Geography 2005 Executive Summary." http://www.telegeography.com/ee/free_resources/gig2005_exec_sum-01.php
References Used in This Chapter

Gilder, George. Telecosm: The World after Bandwidth Abundance. Simon & Schuster, 2002.

Cisco Systems, Inc. "IPv6 At-A-Glance." http://www.cisco.com/warp/public/732/Tech/ipv6/docs/ipv6-cheat-sheet.pdf

Cisco Systems, Inc. "Convergence to IP/MPLS Infrastructure – The Service Provider View." http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns301/ns208/net_value_proposition0900aecd800c895a.html. (Must be a registered Cisco.com user.)

Cisco Systems, Inc. "Mike Volpi Discusses Cisco Router Technology Advances." http://newsroom.cisco.com/dlls/ts_121003.html

Boyles and Hucaby. Cisco CCNP Switching Exam Certification Guide. Cisco Press, 2000.

Cisco Systems, Inc. "Cisco Express Forwarding." Cisco.com. http://www.cisco.com/warp/customer/cc/pd/iosw/iore/tech/cef_wp.htm. (Must be a registered Cisco.com user.)

Cisco Systems, Inc. "Solutions for Mobile Network Operators." http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns177/networking_solutions_white_paper09186a00801fc7fa.shtml. (Must be a registered Cisco.com user.)

Cisco Systems, Inc. "Extending the Enterprise: A Bottom-line Look at Extending Network Access to Mobile Workers." http://www.cisco.com/application/pdf/en/us/guest/netsol/ns176/c714/ccmigration_09186a008011536f.pdf

Haviland, George, and Cisco Systems, Inc. "Designing High-Performance Campus Intranets with Multilayer Switching." http://www.cisco.com/en/US/partner/netsol/ns340/ns394/ns147/ns17/networking_solutions_white_paper09186a00800a4883.shtml. (Must be a registered Cisco.com user.)
This chapter covers the following topics:

• The Origins of Multiservice ATM
• Next-Generation Multiservice Networks
• Multiprotocol Label Switching Networks
• Cisco Next-Generation Multiservice Routers
• Multiservice Core and Edge Switching
CHAPTER 3

Multiservice Networks

Multiservice networks provide more than one distinct communications service type over the same physical infrastructure. Multiservice implies not only the existence of multiple traffic types within the network, but also the ability of a single network to support all of these applications without compromising quality of service (QoS) for any of them.

You find multiservice networks primarily in the domain of established service providers that are in the long-term business of providing wireline or wireless communication-networking solutions year after year. Characteristically, multiservice networks have a large local or long-distance voice constituency and are traditionally Asynchronous Transfer Mode (ATM) Layer 2-switched in the core with overlays of Layer 2 data and video solutions, such as circuit emulation, Frame Relay, Ethernet, Virtual Private Network (VPN), and other billed services. The initial definition for multiservice networks was a converged ATM and Frame Relay network supporting data in addition to circuit-based voice communications. Recently, next-generation multiservice networks have emerged, adding Ethernet, Layer 3 Internet Protocol (IP), VPNs, and Multiprotocol Label Switching (MPLS) services to the mix. IP and, perhaps more specifically, IP/MPLS core networks are taking center stage as multiservice networks are converging on Layer 2, Layer 3, and higher-layer services.

Many provider networks were built piecemeal—a voice network here, a Frame Relay network there, and an ATM network everywhere as a next-generation voice transporter and converged platform for multiple services. The demand explosion of Internet access in the 1990s sent many providers and operators scrambling to overlay IP capabilities, often creating another distinct infrastructure to operate and manage. Neither approach used the current investment to its best advantage. This type of response to customer requirements perpetuates purpose-built networks.

Purpose-built networks are not solely a negative venture. These networks do serve their purpose; however, their architectures often overserve their intended market, lack sufficient modularity and extensibility, and, thus, become too costly to operate in parallel over the long term. Multiple parallel networks can spawn duplicate and triplicate resources to provision, manage, and maintain. Examples are resource expansion through additional parts sparing, inimitable provisioning and management interfaces, and bandages to the billing systems. Often a new network infrastructure produces an entirely new division of the company, replicating several operational and business functions in its wake.
The new era of networking is based on increasing opportunity through service pull, rather than through a particular technology push requiring its own purpose-built network infrastructure. Positioning networks to support the service pull of IP while operationally converging multiple streams of voice, video, and IP-integrated data is the new direction of multiservice network architecture. In the face of competitive pressures and service substitution, not only are next-generation multiservice networks a fresh direction, they are an imperative passage through which to optimize investment and expense.

In this chapter, you learn why the industry initially converged around ATM; about next-generation multiservice network architectures that include Cisco multiservice ATM platforms, IP/MPLS routing and switching platforms, and multiservice provisioning platforms; and about multiservice applications that converge data, voice, and video.
The Origins of Multiservice ATM

In the early 1980s, the International Telecommunication Union Telecommunication Standardization sector (ITU-T) and other standards organizations, such as the ATM Forum, established a series of recommendations for the networking techniques required to implement an intelligent fiber-based network to solve public switched telephone network (PSTN) limitations of interoperability and internetwork timing and carry new services such as digital voice and data. The network was termed the Broadband Integrated Services Digital Network (B-ISDN). Several underlying standards were developed to meet the specifications of B-ISDN, including synchronous optical network (SONET) and Synchronous Digital Hierarchy (SDH) as the data transmission and multiplexing standards and ATM as the switching standard.

By the mid-1990s, the specifications for the ATM standard were available for manufacturers. Providers began to build out ATM core networks on which to migrate the PSTN and other private voice networks. Partly justified by this consolidation of the voice infrastructure, the ATM core was positioned as a meeting point and backbone carrier for the voice network products and the Frame Relay data networks. ATM networks were also seen as enablers of the growing demand for multimedia services.

Designed from the ground up to provide multiple classes of service, ATM was purpose-built for simultaneous transport of circuit voice, circuit-based video, and synchronous data. ATM was not initially designed for IP transport but rather was designed as a multipurpose, multiservice, QoS-aware communications platform. It was primarily intended for converging large voice networks, H.320 video networks, and large quantities of leased-line, synchronous, data-based services. ATM theory was heralded as the ultimate answer to potentially millions of PC-to-PC, personal videoconferencing opportunities. It was anticipated that its fixed, cell-based structure would be easily adaptable to any type of data service, and, indeed, adaptation layers were designed into ATM for transport of IP and for LAN emulation.
In essence, ATM was part of a new PSTN, a new centrally intelligent, deterministic pyramid of power that was expected to ride the multimedia craze to mass acceptance. As such, many service providers who needed a core upgrade during the 1990s chose ATM as a convergence platform and launch pad for future services.

ATM is a system built on intelligence in switches and networks. In contrast, IP-based products are built on intelligence in the core and intelligence distributed to the edges of networks, primarily in customer edge computers that summon or send data at their master's will. In fact, it is the bursty, variable, free-roaming data characteristics of IP that effectively cripple the efficiency of ATM for IP data transport. Running IP packets through the ATM Adaptation Layers (AALs) creates a hefty overhead referred to as the ATM cell tax. For example, an IP packet of approximately 250 bytes must be chopped and diced into several 48-byte payloads (plus a 5-byte ATM header per cell, for 53 total bytes per cell), and the last cell must be padded to fill out the full data payload, the padding becoming extra overhead. A 250-byte IP packet using an AAL5 Subnetwork Access Protocol (SNAP) header, trailer, and padding swells to 288 bytes, with a resulting cost of about 15.2 percent overhead per packet. The shorter the length of an IP packet, the larger the percentage of overhead. TCP/IP packet sizes are all over the map, with many data packets, especially acknowledgements, shorter than 100 bytes. Using ATM Inverse Multiplexing over ATM to bond T1 circuits into a larger bandwidth pool in ATM networks imposes significant additional overhead. Adding it all up, the total fixed and variable cell tax can significantly erode the efficiency of IP transport over ATM.

Back in the late 1990s, when IP networks were coming on very strong, ATM products for enterprises cost about twice as much as Ethernet-based products, cost twice as much to maintain, and were intensive to configure and operate due to the ATM addressing structure and virtual circuit mesh dependencies. ATM was just too expensive to purchase and maintain (more tax) to extend to the desktop, where it could converge voice, video, and data.

ATM initially entered the WAN picture as the potential winner for multiple services of data, video, and voice. As with any new technology, the industry pundits overhyped the technology as the answer to every networking challenge within the provider, enterprise, and consumer markets. As IP networks continued to grow, and voice and video solutions were adapted to use IP over Fast and Gigabit Ethernet optical fiber spans, the relevance of ATM as a universal convergence technology waned. Due to ATM's complexity of provisioning, its high cost of interfaces, and its inherent overhead, ATM gravitated to the niche bearers of complex skill sets, such as in service provider core networks, in large enterprise multiservice cores, and as occasional backbone infrastructure in LAN switching networks. ATM has also been a well-established core technology for traditional tandem voice operators and as backhaul for wireless network carriers. Much like ISDN before it, the technology push of ATM found a few vertical markets but only along paths of least resistance.
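The cell tax arithmetic quoted earlier for the 250-byte packet is easy to verify. The following sketch assumes the common AAL5 LLC/SNAP encapsulation (an 8-byte LLC/SNAP header plus the 8-byte AAL5 CPCS trailer) and pads the result up to a whole number of 48-byte cell payloads; it reproduces the 15.2 percent figure and shows how much worse the tax gets for short packets:

import math

def aal5_snap_cells(ip_packet_bytes):
    """ATM cells and payload bytes needed to carry one IP packet over AAL5 with
    LLC/SNAP encapsulation (8-byte header) plus the 8-byte AAL5 CPCS trailer."""
    cpcs_pdu = ip_packet_bytes + 8 + 8           # packet + LLC/SNAP + AAL5 trailer
    cells = math.ceil(cpcs_pdu / 48)             # pad up to whole 48-byte cell payloads
    payload_bytes = cells * 48
    overhead_pct = 100 * (payload_bytes - ip_packet_bytes) / ip_packet_bytes
    return cells, payload_bytes, overhead_pct

for size in (64, 100, 250, 1500):
    cells, carried, pct = aal5_snap_cells(size)
    print(f"{size:5d}-byte IP packet -> {cells:2d} cells, {carried:4d} payload bytes, "
          f"{pct:5.1f}% overhead")

These percentages count only the padded payload expansion, matching the 288-byte example in the text; the 5-byte header on every 53-byte cell adds further overhead on the wire.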
From a global network perspective, the ascendancy of IP traffic has served ATM notice. According to IDC, worldwide sales of ATM switches were down 21 percent in 2002, another 12 percent in 2003, and nearly 6 percent through 2004. Further, IDC forecasts the ATM switch market to decline at roughly 8 percent per year during the 2006 to 2009 timeframe.1

With the Digital Subscriber Line (DSL) deployments by the Incumbent Local Exchange Carriers (ILECs), ATM networks moved into the service provider edge, extending their usefulness as broadband aggregation for the consumer markets. DSL has been an important anchor for ATM justification, bridging consumer computing to the Internet, but even there, DSL technology is signaling a shift to Ethernet and IP. The DSL Forum has presented one architecture that would aggregate DSL traffic at the IP layer using IP precedence for QoS rather than at the ATM layer. In Asia, many DSL providers already use Ethernet and IP as the aggregation layer for DSL networks, benefiting from the lower cost per bit for regional aggregation and transport.

Soon, ATM switching will likely be pushed out of the core of provider networks by MPLS networks that are better adapted to serve as scalable IP communications platforms. In fact, many providers have already converged their Frame Relay and ATM networks onto an MPLS core to reduce operational expenditures and strategically position capital expenditures for higher-margin, IP-based services. ATM will settle in as a niche, edge service and eventually move into legacy support. However, for providers that still have justifiable ATM requirements, there remains hope by applying next-generation multiservice architecture to ATM networks, which you learn about in the next section. Because providers cannot recklessly abandon their multiyear technology investments and installed customer service base, gradual migration to next-generation multiservice solutions is a key requirement. However, the bandwidth and services explosion within the metropolitan area, from 64 Kbps voice traffic to 10 Gigabit Ethernet traffic, is accelerating the service provider response to meet and collect on the opportunity.

Figure 3-1 shows a representative timeline of multiservice metropolitan bandwidth requirements. Through the 1980s and into the 1990s, bandwidth growth was relatively linear, because 64 Kbps circuits (digital signal zero, or DS0), DS1s (1.5 Mbps), and DS3s (45 Mbps) were able to address customer growth with Frame Relay and ATM services. The Internet and distributed computing rush of the late 1990s fueled customer requirements for Gigabit Ethernet services, accelerating into requirements for multigigabit services, higher-level SONET/SDH services, and storage services moving forward. The bandwidth growth opportunity of the last ten years is most evident in the metropolitan areas where multiservice networks are used.
Figure 3-1  Primary Metropolitan Traffic Timeline (primary metro traffic moving from centralized to distributed: voice circuits and modem traffic on DS0s in the 1980s; Frame Relay and ATM growth on DS1s and DS3s in the 1990s; Gigabit Ethernet and storage in 2000–2004; multi-gigabit services, STS/VC4-n up to OC-192/STM-64, from 2005 on) Source: Cisco Systems, Inc.
Next-Generation Multiservice Networks

Traditional multiservice networks focus on Layer 2 Frame Relay and ATM services, using a common ATM backbone to consolidate traffic. This generation of ATM switches was easily extended to support DSL and cable broadband build-outs. In contrast, next-generation multiservice networks provide carrier-grade, Layer 3 awareness, such as IP and MPLS, in addition to traditional Layer 2 services. These next-generation multiservice networks can take the form of ATM-, blended IP+ATM-, IP/MPLS-, or SONET/SDH-based networks in order to deliver multiple traffic services over the same physical infrastructure.

Even with the existence of next-generation technology architectures, most providers are not in a position to turn over their core technology in wholesale fashion. Provider technology is often on up-to-decade-long depreciation schedules, and functional life must often parallel this horizon, even if equipment is repurposed and repositioned in the network. Then there is the customer-facing issue of technology service support and migration. Though you might wish to sunset a particular technology, the customer is not often in support of your timetable. This requires a measured technology migration supporting heritage services along with the latest service features. Next-generation technology versions are often the result, allowing new networking innovations to overlap established network architectures.

The topics of next-generation multiservice switching, Cisco next-generation multiservice ATM switches, and MPLS support on Cisco ATM switches are discussed next.
Next-Generation Multiservice ATM Switching

Next-generation multiservice ATM switching is often defined by a common transmission and switching infrastructure that can natively provide multiple services in such a manner that neither service type interferes with the other. This independence between different services requires a separation of the control and switching planes in multiservice equipment. The control plane acts as the brain, apportioning resources, making routing decisions, and providing signaling, while the switching plane acts as the muscle machine, forwarding data from source to destination.

Separation of the control and switching planes makes it possible to partition the resources of the switching platform to perform multiple services in a native fashion. In much the same way that you can logically partition an IBM mainframe processor into multiple production operating systems, apportioning CPU cycles, memory, storage, and input/output channels to individual logical partitions (LPARs), you can resource partition next-generation multiservice switches to accomplish the same concept of creating multiple logical network services.

Resource partitioning in many of the next-generation multiservice switches is accomplished through a virtual switch interface within the control and switching planes. Through a function such as the virtual switch interface, you can have multiple service controllers, each sharing the control plane resources to manage the switching plane, which is the switch fabric that forwards data between a source port and a destination port. Within the Cisco MGX line of multiservice switches, the Virtual Switch Interface (VSI) allows for an ATM Private Network to Network Interface (PNNI) controller to act as a virtual control plane for ATM services, an MPLS controller to act as a virtual control plane for IP or ATM services, and a Media Gateway Control Protocol (MGCP) controller to act as a virtual control plane for voice services. Each type of controller, through Cisco VSI, directs the assigned resources and interfaces of the physical ATM switch that have been partitioned within its domain of control. You can run all three controllers and, therefore, multiple services in the same physical ATM switch.

If partitioned on a switch, each of these service types is integrated natively and not running as a technology overlay. For example, when running MPLS over an ATM switching fabric, all the network switches run an IP routing protocol and an MPLS label distribution protocol (LDP), which is in contrast to running IP as an overlay via classic ATM permanent virtual circuits (PVCs). Every switch in the MPLS-enabled multiservice network is aware of the multiple services that it provides. The multiple controller capability can allow for a migration from classic ATM switching to MPLS within the same physical architecture.

Figure 3-2 shows the conceptual representation of the Cisco Virtual Switch Architecture. The virtual switch architecture is a Switch Control Interface (SCI) developed by Cisco Systems, Inc., and implemented in the Cisco MGX product line of multiservice switching platforms. The virtual switch works across the control and switching planes; the switching plane essentially performs the traffic-forwarding function. While the control plane and the
switching plane represent the workhorse functions of the multiservice switch, within the Cisco design there are also an adaptation plane, a management plane, and an application plane that complete the multiservice system architecture. An example of a requirement for the adaptation plane would be support for Frame Relay services, with the adaptation plane facilitating Frame Relay to ATM service interworking. A management plane is required for overall switch control, configuration, and monitoring.
Figure 3-2  Cisco Virtual Switch Architecture (application, management, and control planes, with PNNI, MPLS, and MGCP controllers driving the virtual switch control function; a forwarding plane with the switching fabric and virtual switch function; and an adaptation plane with logical ports for IP, VoIP, VoATM, TDM, Frame Relay, and ATM) Source: Cisco Systems, Inc.
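To make the idea of partitioned control planes concrete, here is a small illustrative model, written for this discussion rather than taken from any Cisco software, in which a switch's ports are carved into partitions and each partition is owned by exactly one controller type (PNNI, MPLS, or MGCP). The class names and port counts are invented.

from dataclasses import dataclass, field

@dataclass
class Partition:
    controller: str                      # "PNNI", "MPLS", or "MGCP"
    ports: set = field(default_factory=set)

class MultiserviceSwitch:
    """Toy model: one switching plane, several independent virtual control planes."""
    def __init__(self, port_count):
        self.free_ports = set(range(port_count))
        self.partitions = {}

    def add_controller(self, name, ports):
        ports = set(ports)
        if not ports <= self.free_ports:          # a port belongs to exactly one controller
            raise ValueError("ports already assigned to another controller")
        self.free_ports -= ports
        self.partitions[name] = Partition(name, ports)

    def controller_for(self, port):
        for partition in self.partitions.values():
            if port in partition.ports:
                return partition.controller
        return None

switch = MultiserviceSwitch(port_count=16)
switch.add_controller("PNNI", range(0, 8))     # classic ATM services
switch.add_controller("MPLS", range(8, 14))    # IP/MPLS services
switch.add_controller("MGCP", range(14, 16))   # packet voice services
print(switch.controller_for(9))                # MPLS

Because each controller touches only its own partition, one control plane can be restarted or upgraded without disturbing the others, which is the property the advantages list below highlights.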
The advantages of next-generation multiservice switching are as follows:

• Multiple service types of ATM, voice, MPLS, and IP are supported on the same physical infrastructure, allowing the provider to leverage both circuit-based and packet-based revenue streams.
• Control plane independence allows you to upgrade or maintain one controller type independently, without interrupting service for other controllers.
• You have the ability to choose and implement a control plane that is best suited to the application requirements.
• The separation of the control and switching planes allows the vendor to develop functional enhancements independently of each other.
• The cost-effective approach of adding MPLS to ATM switch infrastructure allows for the migration to MPLS as a common control plane.
Using next-generation multiservice ATM architectures, providers can maintain existing services such as circuit-based voice and circuit-based video, while migrating to and implementing new packet-based network services such as packet voice, Layer 2 and Layer 3 VPNs, MPLS, and MPLS traffic engineering features. Many providers will maintain ATM infrastructures and might need to bridge from a traditional ATM platform to a next-generation multiservice ATM platform.

As an example, Figure 3-3 shows the concept of migrating a Layer 2, full-mesh PVC network to a next-generation multiservice ATM network that uses MPLS rather than discrete PVCs. By adding a Route Processor Module (RPM) to the MGX 8800s in the figure, this next-generation multiservice ATM platform can support Layer 3 IP protocols and use MPLS to get the best benefits of both routing and switching.
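One practical reason the migration in Figure 3-3 matters is provisioning scale. A classic Layer 2 full mesh needs a PVC for every pair of sites, while an MPLS-based network needs only one provider-edge attachment per site; the quick comparison below uses invented site counts purely for illustration.

def full_mesh_pvcs(sites):
    """Layer 2 full mesh: one PVC for every pair of sites."""
    return sites * (sites - 1) // 2

for n in (10, 50, 100):
    print(f"{n} sites: {full_mesh_pvcs(n)} PVCs in a full mesh, versus {n} PE attachments with MPLS")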
Cisco Next-Generation Multiservice Switches

Using next-generation multiservice network architecture, Cisco offers several solutions that support today’s revenue-generating services while accelerating the delivery of new high-value IP-based services. By combining Layer 3 IP and Layer 2 ATM in a straightforward and flexible manner, providers can establish networks that support existing and emerging services. This provides carrier-class data communication solutions that free providers from the economic and technical risks of managing complex multiservice networks. Cisco implements next-generation multiservice capabilities in the following products:

• Cisco BPX 8600 Series Switches
• Cisco MGX 8250 Series Switches
• Cisco MGX 8800 Series Switches
• Cisco MGX 8900 Series Switches
• Cisco IGX 8400 Series Switches
The next sections describe and compare these Cisco switches.
Cisco BPX 8600 Series Switches

The Cisco BPX 8600 Series Multiservice Switches are IP+ATM platforms providing ATM-based broadband services and integrating Cisco IOS to support MPLS and deliver IP services. The heart of the system is a 19.2 Gbps cross-point switching fabric capable of switching up to two million cells per second, in a multislot chassis. The chassis employs a midplane design, allowing front cards to be adapted to a variety of back cards that provide Layer 1 interface connections such as T3/E3, OC-3/STM-1, and OC-12/STM-4 (622 Mbps). The largest BPX node has a modular, multishelf architecture that scales up to 16,000 DS1s. With heritage from the Cisco acquisition of StrataCom, the BPX switches are often deployed as carrier-class core switches or broadband edge switches in voice, Frame Relay, ATM, wireless, and MPLS provider networks, where OC-12 core links can supply appropriate capacity.
Figure 3-3  Network Migration from Layer 2 to Next-Generation Multiservice ATM Networks (a Layer 2 network of PVCs across Cisco MGX 8800s and BPX 8620s migrating to a multiservice MPLS network of Cisco MGX 8800s with RPM and BPX 8650s) Source: Cisco Systems, Inc.
Cisco MGX 8250 Edge Concentrator Switch

The Cisco MGX 8250 Edge Concentrator is a multiservice switch used primarily at the service provider edge supporting narrowband services at 1.2 Gbps of switching capacity. Supporting T1/E1 to OC-12c/STM-4, Ethernet, and Fast Ethernet, this switch family is very flexible for providing ATM edge concentration and even MPLS edge concentration where cost-effectiveness is the primary requirement.

Switches deployed at the edge of networks need a good balance between port density and cost. The 8250 has 32 card slots for good capacity. A general target for this platform is a maximum capacity of 192 T1s, which would aggregate to 296 Mbps of bandwidth, well under the OC-12/STM-4 uplink capability for this 8250. That leaves bandwidth headroom within the OC-12’s 622 Mbps of capacity to also support several Ethernet and a few Fast Ethernet ports. All port cards support hot insert and removal, allowing the provider to add card and port density incrementally in response to demand.
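The headroom figures are easy to check; the Fast Ethernet and Ethernet port counts in this quick calculation are illustrative assumptions, not a documented configuration.

T1_MBPS = 1.544
OC12_MBPS = 622.08

t1_aggregate = 192 * T1_MBPS             # roughly the 296 Mbps quoted in the text
headroom = OC12_MBPS - t1_aggregate      # remaining capacity on the OC-12/STM-4 uplink
print(round(t1_aggregate, 1), "Mbps of T1 traffic,", round(headroom, 1), "Mbps of headroom")

# An illustrative mix that still fits: 2 Fast Ethernet ports plus 8 Ethernet ports
assert headroom > 2 * 100 + 8 * 10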
Cisco MGX 8800 Series Switches

The Cisco MGX 8800 Series Multiservice Switches provide significant flexibility at the service provider edge. The Cisco MGX 8800 family is a narrowband aggregation switch with broadband trunking up to OC-48 (2.5 Gbps). The MGX 8800’s cross-point switching fabric options operate at either 1.2 Gbps (PXM-1) or up to 45 Gbps (PXM-45) of nonblocking switching. The aforementioned virtual switch architecture allows for multiple control planes via individual controller cards such as PXM-1E for PNNI services, an RPM-PR controller for IP/MPLS services, and a VISM-PR card for packet voice services using MGCP, packet cable Trunking Gateway Control Protocol (TGCP), H.323 video, and Session Initiation Protocol (SIP).

The 8800 series supports narrowband services of T1/E1 ATM, n * T1/E1 inverse multiplexing over ATM (IMA), Frame Relay, high-speed Frame Relay, Systems Network Architecture (SNA), circuit emulation, ATM user network interface (UNI) 3.0/3.1, and switched multimegabit data service (SMDS). These are useful for integrating services such as IP VPNs, Voice over IP (VoIP) and ATM, PPP aggregation, managed intranets, premium Internet services, and IP Fax Relay. Supporting 100 percent redundancy and automatic protection switching (APS), the 8800 series is often deployed as an MPLS multiservice ATM switch on the edges of ATM-based provider networks.
Cisco MGX 8900 Series Switches

The Cisco 8900 Series Multiservice Switch, specifically the 8950, is a high-end multiservice broadband switch designed to scale multiservice networks to OC-192c/STM-64. Supporting a range of broadband services from T3/E3 to OC-192c/STM-64, the MGX 8950 supports the aggregation of broadband services, scaling of MPLS VPNs, and network convergence. With up to 180 Gbps of redundant switching capacity or 240 Gbps nonredundant, the MGX 8950 is a superdensity broadband switch supporting up to 768 T3/E3s, 576 OC-3c/STM-1s, 192 OC-12c/STM-4s, 48 OC-48c/STM-16s, or up to 12 OC-192c/STM-64s in flexible combinations. This switch is specifically architected with a 60 Gbps switch fabric module (XM-60), of which four can be installed to meet the demands and service levels of 10 Gbps ATM-based traffic at the card interface level. The modularity of the XM-60 module allows a provider to incrementally scale switching capacity as needed, starting with one and growing to four per MGX 8950 chassis.
Cisco IGX 8400 Series Switches

Cisco also has a family of multiservice switches that are designed for large enterprises with ATM requirements or for service providers with low cost of ownership requirements. The IGX 8400 series of multiservice WAN switches supports line speeds of 64 Kbps up to OC-3c/STM-1 with a 1.2 Gbps nonblocking switching fabric. MPLS is also supported on this IP+ATM switch family. The IGX 8400 represents the lowest cost per port of any ATM switch on the market.
Comparing Cisco Next-Generation ATM Multiservice Switches

In summary, the complete family of Cisco multiservice switches supports switching speeds from 1.2 Gbps to 240 Gbps; line speeds from DS0 to OC-192c/STM-64, including Fast Ethernet; and ATM edge concentration, PNNI routing, MPLS routing, and packet voice control functions. Modular and standards-compliant, these products are used to build today’s next-generation multiservice ATM networks. Figure 3-4 shows the relative positioning of Cisco next-generation ATM multiservice switches.
Figure 3-4  Cisco Next-Generation ATM Multiservice Switches (relative positioning of the IGX 8400, MGX 8250, MGX 8830, MGX 8850, BPX 8600, and MGX 8950 by line speed, from DS1/E1 to OC-192/STM-64, and by ATM switching capacity, from 1.2 Gbps to 180 Gbps)
Multiprotocol Label Switching Networks

Demand for Internet bandwidth continues to soar, and this has shifted the majority of traffic toward IP. To keep up with all traffic requirements, service providers not only look to scale performance on their core routing platforms, but also to rise above commodity pricing by delivering intelligent services. Ascending to IP at Layer 3 is necessary to prospect for new high-value services with which to capture and grow the customer base. New Layer 3 IP service opportunities are liberating, yet there is also the desire to maintain the performance and traffic management control of Layer 2 switching. The ability to integrate Layer 3 and Layer 2 network services into a combined architecture that is easier to manage than traditional separate network overlays is also a critical success factor for providers. These essential requirements lead you to MPLS, an actionable technology that facilitates network and services convergence.

MPLS is a key driver for next-generation multiservice provider networks, and it makes an excellent technology bridge. By dropping MPLS capability into the core layer of a network, you can reduce the complexity of Layer 2 redundancy design while adding new Layer 3 service opportunities. Multiple technologies and services can be carried across the MPLS core using traffic engineering or Layer 3 VPN capabilities. MPLS capability can be combined with ATM, letting ATM become Layer 3 IP-aware to simplify
provisioning and management. Because of these attributes, MPLS has momentum as a unifying, common core network, as it more easily consolidates separate purpose-built networks for voice, Frame Relay, ATM, IP, and Ethernet than any methodology that has come before. In doing so, it portends significant cost savings in both provider capital expenditures (CapEx) and operational expenditures (OpEx). MPLS is an Internet Engineering Task Force (IETF) standard that evolved from an earlier Cisco tag switching effort. MPLS is a method of accelerating the performance and management control of traditional IP routing networks by combining switching functionality that collectively and cooperatively swaps labels to move a packet from a source to a destination. In a sense, MPLS allows the connectionless nature of IP to operate in a more connected and manageable way. An MPLS network is a collection of label switch routers (LSRs). MPLS can be implemented on IP-based routers (frame-based MPLS) as well as adapted to ATM switches (cell-based MPLS). The following sections discuss MPLS components, terminology, functionality, and services relative to frame-based and cell-based MPLS.
Frame-Based MPLS

Frame-based MPLS is used for a pure IP routing platform—that is, a router that doesn’t have an ATM switching fabric. When moving data through a frame-based MPLS network, the data is managed at the frame level (variable-length frames) rather than at a fixed length such as in cell-based ATM. It is worthwhile to understand that a Layer 3 router is also capable of Layer 2 switching.
Frame-Based MPLS Components and Terminology

Understanding frame-based MPLS terminology can be challenging at first, so the following review is offered:
• Label switch router (LSR)—The LSR provides the core function of MPLS label switching. The LSR is equipped with both Layer 3 routing and Layer 2 switching characteristics. The LSR functions as an MPLS Provider (P) node in an MPLS network.
• Edge label switch router (eLSR)—The eLSR provides the edge function of MPLS label switching. The eLSR is where the label is first applied when traffic is directed toward the core of the MPLS network or last referenced when traffic is directed toward the customer. The eLSR functions as an MPLS Provider Edge (PE) node in an MPLS network. The eLSRs are functional PEs that send traffic to P nodes to traverse the MPLS core, and they also send traffic to the customer interface known in MPLS terminology as the Customer Edge (CE). The eLSRs use IP routing toward the customer interface and “label swapping” toward the MPLS core. The term label edge router (LER) is also used interchangeably with eLSR.
It is also helpful to understand common terms used to describe MPLS label switching. Table 3-1 shows these terminology comparisons.

Table 3-1  MPLS Label Switching Terminology

MPLS LSR Function | Performs | Also Referred to As | MPLS Functional Use | MPLS Network Position
Ingress eLSR | IP prefix lookup for label imposition | Label pushing | Provider Edge (PE) | Service provider edge
LSR | Label switching | Label swapping | Provider (P) | Service provider core
Penultimate LSR (last LSR before egress eLSR) | Label disposition (label removal) | Label popping, a.k.a. penultimate hop popping | Provider (P) | Service provider core
Egress eLSR | IP prefix lookup for outbound interface | Routing | Provider Edge (PE) to Customer Edge (CE) link | Service provider edge to customer premises
It’s important to understand that an eLSR device provides both ingress eLSR and egress eLSR functions. This is bidirectional traffic movement and is analogous to source (ingress eLSR) and destination (egress eLSR).
Frame-Based MPLS Functionality

MPLS fuses the intelligence of routing with the performance of switching. MPLS is a packet switching methodology that makes connectionless networks like IP operate in a more connection-oriented way. By decoupling the routing and the switching control planes, MPLS provides highly scalable routing and optimal use of resources. MPLS removes Layer 3 IP header inspection through core routers, allowing label switching (at Layer 2) to reduce overhead and latency.

With MPLS label switching, packets arriving from a customer network connection are assigned labels before they transit the MPLS network. The MPLS labels are first imposed at the edge (eLSR) of the MPLS network, used by the core LSRs, and then removed at the far edge (destination eLSR) of the destination path. The use of labels facilitates faster switching through the core of the MPLS network and avoids routing complexity on core devices. MPLS labels are assigned to packets based on groupings, or forwarding equivalency classes (FECs), at the ingress eLSR. A FEC is a group of packets that are forwarded in the same manner, over the same path and with the same forwarding treatment. The MPLS label is imposed between Layer 2 and Layer 3
headers in a frame-based packet environment, or in the Layer 2 virtual path identifier/virtual channel identifier (VPI/VCI) field in cell-based networks like ATM.

The following example presumes the use of frame-based MPLS in the routing of an IP packet. Customer site “A” sources an IP packet destined for customer site “B”; the packet reaches the service provider’s eLSR, which performs the ingress eLSR (PE) function. The ingress eLSR examines the Layer 3 IP header of the incoming packet, summarizes succinct information, and assigns an appropriate MPLS label that identifies the specific requirements of the packet and the egress eLSR (PE). The MPLS label is imposed or, more specifically, “shimmed” between the Layer 2 and Layer 3 headers of the current IP packet. Prior to the first packet being routed, the core LSRs (P nodes) have already predetermined their connectivity to each other and have shared label information via an LDP. The core LSRs can, therefore, perform simple Layer 2 label swapping and then switch the ingress eLSR’s labeled packet to the next LSR along the label-switched path, helping the ingress eLSR get the packet to the egress eLSR. The last core LSR (penultimate hop P node) prior to the target egress eLSR removes the MPLS label, as label swapping has served its usefulness in getting the packet to the proper egress eLSR. The egress eLSR is now responsible for examining the Customer A-sourced Layer 3 IP header once again, searching its IP routing table for the destination port of customer site B, and routing the Customer A packet to the Customer B destination output interface. Figure 3-5 shows the concept of frame-based MPLS label switching.
Figure 3-5  Frame-Based MPLS Label Switching (a packet from customer site A receives label imposition at the ingress LSR, label swapping across the core LSRs, label popping at the penultimate hop, and normal IP routing at the egress LSR toward customer site B; the MPLS label is imposed between the Layer 2 and Layer 3 headers) Source: Cisco Systems, Inc.
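The push, swap, pop, and route sequence can be captured in a few lines of Python. Everything in the sketch is invented for illustration: the prefix, the label values, and the per-node tables that, in a live network, an LDP would populate.

FEC_TO_LABEL = {"10.2.0.0/16": 69}              # ingress PE: FEC -> outgoing label (push)
LFIB = {                                        # per-LSR label forwarding tables
    "P1": {69: ("P2", 56)},                     # swap 69 -> 56, forward toward P2
    "P2": {56: ("egress-PE", None)},            # penultimate hop pops the label
}
EGRESS_ROUTES = {"10.2.0.0/16": "interface-to-customer-B"}

def forward(dest_prefix):
    label = FEC_TO_LABEL[dest_prefix]           # label imposition at the ingress eLSR
    node = "P1"
    while label is not None:
        node, label = LFIB[node][label]         # label swap, or pop at the penultimate hop
    return EGRESS_ROUTES[dest_prefix]           # egress eLSR routes on the IP header again

print(forward("10.2.0.0/16"))                   # interface-to-customer-B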
Cell-Based MPLS

Adding MPLS functionality to ATM switches allows service providers with ATM requirements to more easily deploy Layer 3, high-value IP feature capabilities, supporting MPLS VPNs, MPLS traffic engineering, packet voice services, and additional Layer 3 managed offerings. This is the ultimate definition of next-generation multiservice networks—networks that are capable of supporting circuit-based Layer 2 and packet-based Layer 2 and Layer 3 services on the same physical network infrastructure. By leveraging the benefits of the Cisco IP+ATM multiservice architecture with MPLS, operators are migrating from basic transport providers to service-oriented providers.

MPLS on ATM switches must use the Layer 2 ATM header, specifically the VPI/VCI field of the ATM header. Since this is pure ATM, all signaling and data forwarding is accomplished with 53-byte ATM cells. Therefore, MPLS implementations on the ATM platforms are referred to as cell-based MPLS. Non-ATM platforms such as pure IP-based routers also use MPLS, but that implementation uses frame headers and is referred to as frame-based MPLS, as you learned in the previous section. In the discussion that follows, cell-based MPLS is presumed.
Cell-Based MPLS ATM Components

Implementing MPLS capability on the Cisco Multiservice ATM Switches requires the addition of the Cisco IOS software to the ATM switching platforms. This is accomplished through either external routers such as the Cisco 7200 or via a co-controller card (essentially a router in a card form factor) resident in the ATM switch. To understand the various MPLS implementation approaches, you first need to familiarize yourself with the following MPLS terminology:
• Label switch controller (LSC)—The central control function of an MPLS application in an ATM multiservice network. The LSC contains the following:
  — IP routing protocols and routing tables
  — The LDP function
  — The master control functions of the virtual switch interface
• MPLS ATM label switch router (LSR)—Created by combining the LSC with an ATM switch. In MPLS networks, the LSR can support the function of core switching nodes, referred to as the MPLS Provider (P) node, or function as an eLSR to form an MPLS Provider Edge (PE) node. As an example, the BPX 8620 ATM Multiservice Switch is paired with a Cisco 7200 router acting as the MPLS LSC, and this combination forms an MPLS ATM LSR. The ATM switch provides the Layer 2 switching function, while the 7200 LSC provides the Layer 3 awareness, routing, and switching control. This combination of the Cisco 7200 LSC and the BPX 8620 is given a model number of BPX 8650.
• Co-controller card—For MPLS on ATM, this is a router-on-a-card called an RPM. The RPM-PR is essentially a Cisco 7200 Network Processing Engine 400 (NPE-400), and the higher-performance RPM-XF is based on the Cisco PXF adaptive processing architecture. Either style of RPM can be used based on performance requirements. Both Layer 3 RPMs are implemented in a card-based form factor that integrates into the Cisco MGX 8800 and MGX 8900 Series multiservice ATM switches. Since the RPM has a control function that complements the base ATM switch controller card (PXM), the RPM is generically referred to as a co-controller card. With MPLS configured on the RPM, these ATM switches become MPLS ATM LSRs.
• Universal Router Module (URM)—This is an onboard Layer 3 Route Processor controller card that is platform-specific terminology for the Cisco IGX 8400 ATM switch. The URM allows the IGX 8400 to participate as an MPLS ATM LSR.
Cell-Based MPLS ATM LSR and eLSR Functionality

Using the background terminology information from Table 3-1, it is worthwhile to briefly describe the MPLS ATM LSR and eLSR functionality, examining how they cooperate to move a packet from customer site “A” to customer site “B” (a unidirectional example). The example is similar in all respects to the frame-based MPLS example, with the exception of the particular header field that is used to carry the MPLS labels, and the fact that fixed-length ATM cells are used between the eLSRs.

Customer site A sources a packet destined for customer site B; the packet reaches the service provider’s eLSR (here an ATM eLSR), which performs the ingress eLSR function. The ingress eLSR examines the Layer 3 IP header of the incoming packet, summarizes succinct information, and assigns an MPLS label that identifies the egress eLSR. The MPLS label is imposed and placed within the ATM VPI/VCI field of the ATM Layer 2 header. This MPLS label allows IP packets to be label-switched as ATM cells through the core ATM LSRs (P nodes) of the MPLS network without further examination of the IP header until the cells reach the egress eLSR (which reassembles the cells back into packets prior to delivery to customer site B). The core ATM LSRs have already predetermined their connectivity to each other and have shared label information via an LDP. The core ATM LSRs can, therefore, perform simple Layer 2 label swapping within the ATM VPI/VCI field of the cells carrying the ingress eLSR’s labeled packet, switching the labeled cells to the next P node along the label-switched path and helping the ingress eLSR get the sourced packet to the egress eLSR. The last core ATM LSR (penultimate hop P node) prior to the target egress eLSR removes the MPLS label, as label swapping has served its usefulness in getting the cells to the proper egress eLSR. The egress eLSR is now responsible for reassembling all cells belonging to the original packet, for examining the Customer A-sourced Layer 3 IP header once again, searching its IP routing table for the destination port of customer site B, and routing the Customer A
packet to the Customer B destination output interface. Figure 3-6 shows the concept of cell-based MPLS label switching.
Figure 3-6  Cell-Based MPLS Label Switching (a packet from customer site A is labeled at the ingress LSR, carried as labeled cells with label swapping across the core LSRs, has its label popped at the penultimate hop, and is reassembled and routed at the egress LSR toward customer site B; the MPLS label is imposed in the Layer 2 VPI/VCI field) Source: Cisco Systems, Inc.
One of the caveats of cell-based MPLS is that the use of the fixed-length VPI/VCI field within the ATM Layer 2 header imposes some restrictions on the number of MPLS labels that can be stacked within the field. This can limit certain functionality, such as advanced features within MPLS Traffic Engineering that depend on multiple MPLS labels. It is worthwhile to consult Cisco support for those features, hardware components, and software levels that are supported by cell-based MPLS platforms.
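The contrast with frame-based MPLS is easy to see in the encoding. A frame-based LSR can simply stack additional 4-byte shim entries between the Layer 2 and Layer 3 headers, whereas the fixed-size VPI/VCI field offers no such room. The sketch below encodes a two-entry label stack using the standard RFC 3032 shim layout; the label values themselves are invented.

def shim_entry(label, exp=0, s=0, ttl=64):
    """One 32-bit MPLS shim entry (RFC 3032): 20-bit label, 3-bit EXP, 1-bit S, 8-bit TTL."""
    assert 0 <= label < 2 ** 20
    return ((label << 12) | (exp << 9) | (s << 8) | ttl).to_bytes(4, "big")

def label_stack(labels, ttl=64):
    """Only the last entry sets the bottom-of-stack (S) bit."""
    return b"".join(shim_entry(label, s=int(i == len(labels) - 1), ttl=ttl)
                    for i, label in enumerate(labels))

# A TE tunnel label stacked on top of a service label: two entries, 8 bytes of shim
print(label_stack([1001, 69]).hex())    # 003e904000045140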
Implementing Cell-Based MPLS on Cisco ATM Multiservice Switches

You can use any of the Cisco switches mentioned earlier to perform the function of an eLSR (PE). The BPX 8600 series uses the external Cisco 7200 router in combination to become an MPLS ATM eLSR. The MGX 8800 and 8900 switches use the onboard RPM-PR or RPM-XF co-controller cards for the eLSR function, and the IGX 8400 uses the URM card for the eLSR functionality. All platforms except for the MGX 8250 can also be configured as core LSRs (P nodes). Table 3-2 shows a summary of these MPLS realizations.

Utilizing MPLS, the Cisco next-generation multiservice ATM infrastructure combines the unique transport aggregation features of ATM with the power and flexibility of IP services.
Table 3-2  MPLS LSR and eLSR Implementation Summary

Cisco Switch Series | MPLS ATM LSR (P) | MPLS ATM eLSR (PE)
BPX 8600 | With external Cisco 7200 | With external Cisco 7200
MGX 8250 | Not applicable | Internal RPM-PR cards
MGX 8850 | Internal RPM-PR (up to 350,000 packets per second) or RPM-XF (up to 2 million plus packets per second; requires PXM-45) | Internal RPM-PR or RPM-XF
MGX 8950 | Internal RPM-PR or RPM-XF | Internal RPM-PR or RPM-XF
IGX 8400 | Internal URM or external Cisco 7200 | Internal URM or external Cisco 7200
Functionally, both frame-based and cell-based MPLS eLSRs support Layer 3 routing toward the customer, Layer 3 routing between other eLSRs, and Layer 2 label switching toward the provider core, while the core LSRs provide Layer 2 label switching through the core. You could draw the analogy that an MPLS label is a tunnel of sorts, invisibly shuttling packets or cells across the network core. The core LSRs, therefore, don’t participate in customer routing awareness, reducing the size and complexity of their software-based routing and forwarding tables. This blend of the best features of Layer 3 routing with Layer 2 switching allows MPLS core networks to scale very large, switch very fast, and converge Layer 2 and Layer 3 network services into a next-generation multiservice network.

In summary, both frame-based and cell-based MPLS provide great control on the edges of the network by performing routing based on destination and source addresses, and then by switching, not routing, in the core of the network. MPLS eliminates routing’s hop-by-hop packet processing overhead and facilitates explicit route computation on the edge. MPLS adds connection-oriented, path-switching capabilities and provides premium service-level capabilities such as differentiated levels of QoS, bandwidth optimization, and traffic engineering.
MPLS Services

MPLS provides both Layer 2 and Layer 3 services, ranging from Ethernet transport and Layer 2 VPNs to IP VPNs. Ethernet is migrating from LANs to WANs but needs service-level agreement (SLA) capabilities such as QoS, traffic engineering, reliability, and scalability at Layer 2. For example, the ability to run Ethernet over MPLS (EoMPLS) improves the economics of Ethernet-based service deployment and provides an optimal Layer 2 VPN solution in the metropolitan area. Ethernet is a broadcast technology, and simply extending Ethernet over classic Layer 2 networks merely extended all of these broadcasts, limiting scalability of
such a service. EoMPLS can incorporate some Layer 3 routing features to enhance Ethernet scalability. MPLS is also access technology independent and easily supports a direct interface to Ethernet without the Ethernet over SONET/SDH mapping required by many traditional Layer 2 networks. Using a Cisco technology called Virtual Private LAN Service (VPLS), an MPLS network can now support a Layer 2 Ethernet multipoint network.

Additional MPLS Layer 2 services include Any Transport over MPLS (AToM). At Layer 2, AToM provides point-to-point and like-to-like connectivity between broadband access media types. AToM can support Frame Relay over MPLS (FRoMPLS), ATM over MPLS (ATMoMPLS), PPP over MPLS (PPPoMPLS), and Layer 2 virtual leased-line services. This feature allows providers to migrate to a common MPLS core and still offer traditional Layer 2, Frame Relay, and ATM services with an MPLS-based network. Both VPLS and AToM are discussed further in Chapter 4, “Virtual Private Networks.”

MPLS Traffic Engineering (MPLS TE) is another MPLS Layer 2 service that allows network managers to more automatically direct traffic over underutilized bandwidth trunks, often forestalling costly bandwidth upgrades until they’re absolutely needed. Since IP routing always uses shortest path algorithms, longer paths connecting the same source and destination networks would generally go unused. MPLS TE simplifies the optimization of core backbone bandwidth, replacing the need to manually configure explicit routes in every device along a routing path. It should be noted that MPLS TE works for frame-based and cell-based MPLS networks; however, in cell-based networks, there are some limitations to the MPLS TE feature set. For example, MPLS TE Fast Reroute (FRR) isn’t supported, because FRR requires additional labels: it stacks multiple labels, and the fixed-length VPI/VCI field that carries the 20-bit label in cell-mode MPLS cannot be expanded to accommodate them. More traditional forms of ATM PVC traffic engineering remain options even in a cell-based ATM MPLS network.

MPLS also supports VPNs at Layer 3. Essentially a private intranet, Layer 3 MPLS VPNs support any-to-any, full-mesh communication among all the customer sites without the need to build a full-mesh Layer 2 PVC network, as would be required in a classic ATM network. MPLS VPNs can use and overlap public IP or private IP address space since each VPN uses its own IP routing table instance, known as a VPN routing and forwarding (VRF) table. MPLS structures Layer 3 protocols more creatively and effectively on Layer 2 networks. MPLS VPNs are covered in more detail in Chapter 4. For other MPLS information, there are a number of additional MPLS features discussed at the Cisco website (www.cisco.com), as well as books from Cisco Press dedicated specifically to MPLS networks.
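The point about shortest-path routing leaving longer paths idle is easy to illustrate. The toy topology and link costs below are invented; the IGP would use only the lowest-cost path, and an MPLS TE tunnel is what lets an operator pin selected traffic onto the alternate one.

import itertools

links = {("A", "B"): 10, ("B", "C"): 10,     # preferred A-to-C path: A-B-C, cost 20
         ("A", "D"): 15, ("D", "C"): 15}     # alternate A-to-C path: A-D-C, cost 30

def paths(src, dst):
    """Enumerate loop-free paths over the link set, with their additive costs."""
    nodes = {n for link in links for n in link}
    for hop_count in range(len(nodes)):
        for middle in itertools.permutations(nodes - {src, dst}, hop_count):
            hops = (src, *middle, dst)
            if all((a, b) in links or (b, a) in links for a, b in zip(hops, hops[1:])):
                cost = sum(links.get((a, b), links.get((b, a)))
                           for a, b in zip(hops, hops[1:]))
                yield hops, cost

for hops, cost in sorted(paths("A", "C"), key=lambda item: item[1]):
    print(" -> ".join(hops), "cost", cost)
# Destination-based IP routing sends everything over A -> B -> C (cost 20);
# an MPLS TE tunnel can steer chosen traffic onto the idle A -> D -> C path.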
MPLS Benefits for Service Providers

For service providers, MPLS is a build once, sell many times model. MPLS helps reduce costs for service providers while offering new revenue services at the network layer. Compared to traditional ATM transport, IP routers and technologies are getting faster, sporting less protocol overhead, and costing less to maintain. Within the carrier space, MPLS is one of the few IP technologies capable of contributing to both the top and bottom line, and for this reason, it is gaining popularity with carriers of all sizes and services.

With MPLS, service providers can build one core infrastructure and then use features such as MPLS VPNs to layer or stack different customers with a variety of routing protocols and IP addressing structures into separate WANs. In a sense, these are virtual WANs (VWANs), operating at Layer 3, which means that the IP routing tables are maintained in the service provider’s MPLS network. In addition to Layer 3 IP services, MPLS also offers Layer 2 VPN services and other traffic engineering features. For example, service providers can structure distinct services, such as VoIP services, into a unique VPN that can be shared among customers, or create a VPN for migration to IPv6. In addition, ATM and Frame Relay networks can be layered on the MPLS core using MPLS Layer 2 features while maintaining SLAs in the process. The flexibility of MPLS is why service providers are specifying MPLS as a critical requirement for their next-generation networks.

Figure 3-7 shows the concept of an MPLS service provider network with MPLS VPNs. The LSRs (P nodes) are not shown, because they are rather transparent in this example. The eLSRs are labeled as PEs 1, 2, and 3 and maintain individual VPN customer routing (VRFs) for VPNs 10 and 15. Border Gateway Protocol (BGP) is used as the PE-to-PE routing protocol to share customer routing information for any-to-any reachability. For example, the VPN 10 routes on PE-1 are advertised via BGP to the same VPN 10 VRF that exists on PEs 2 and 3. This allows all Company A locations to reach each other. The VRF for VPN 10 on PE-1 (as well as the other PEs) is a separate VRF from the VRF allocated to VPN 15, an entirely different customer. This demonstrates the build once, sell many times model of MPLS VPN services.
Figure 3-7  MPLS Core Network with MPLS VPNs (PE-1, PE-2, and PE-3 surround the service provider MPLS core and exchange VPN routes via BGP; VPN 10 connects Company A CE sites in Seattle, New York City, and Chicago, and VPN 15 connects Company B CE sites in San Francisco, London, and Berlin) Source: Cisco Systems, Inc.
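Because each VPN gets its own VRF, overlapping private address space on different customers never collides on the PE. The minimal model below is illustrative only; the VPN names and prefixes are invented, and real PEs distinguish overlapping routes with route distinguishers carried in BGP.

class PERouter:
    """Toy PE with one routing table (VRF) per customer VPN."""
    def __init__(self, name):
        self.name = name
        self.vrfs = {}                       # VRF name -> {prefix: next hop}

    def add_vrf_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, {})[prefix] = next_hop

    def lookup(self, vrf, prefix):
        return self.vrfs[vrf].get(prefix)

pe1 = PERouter("PE-1")
pe1.add_vrf_route("VPN10", "10.1.0.0/16", "CE Company A Seattle")
pe1.add_vrf_route("VPN15", "10.1.0.0/16", "CE Company B San Francisco")  # same prefix, no clash

print(pe1.lookup("VPN10", "10.1.0.0/16"))    # CE Company A Seattle
print(pe1.lookup("VPN15", "10.1.0.0/16"))    # CE Company B San Francisco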
MPLS Example Benefits for Large Enterprises

For a large enterprise, MPLS can provide logical WANs and VPNs, secure VPNs, and support for mixed public and private IP addressing; it can facilitate network mergers and migrations; and it can offer numerous design possibilities. For example, a large enterprise that needs to migrate its network to a different core routing protocol could consider using MPLS: one MPLS VPN could run a large Enhanced Interior Gateway Routing Protocol (EIGRP) customer network while a second MPLS VPN could run Open Shortest Path First (OSPF) routing. These two MPLS VPNs can be configured to import and export certain routes to each other, maintaining any-to-any connectivity between both during the migration. In this way, migration of networks from the EIGRP VPN to the OSPF VPN could occur in stages, while access to shared common services could be maintained. Another example is an enterprise that elects to use separate MPLS VPNs to migrate from IPv4 addressing to IPv6.
Table 3-3 introduces a general application of MPLS technology.

Table 3-3  MPLS Technology Application

MPLS Characteristics | MPLS Features and Solutions
Requirements | Consolidated packet-based core; migrate Layer 2 customers to a consolidated core; migrate Layer 2 services to Layer 3 services; multiservice provisioning platforms; transfer of complex routing tasks by enterprises to service providers; rapid IP service creation; ease of accounting and billing
Technology options | RFC 3031, “Multiprotocol Label Switching Architecture”; MPLS Layer 3 VPNs (IETF 2547bis); MPLS TE; Any Transport over MPLS (AToM)
Design options | Frame-based MPLS (IP); cell-based MPLS (IP+ATM)
MPLS services | Layer 2 VPN services; Layer 3 VPN services; VPLS; QoS; traffic engineering
Cisco Next-Generation Multiservice Routers

For next-generation multiservice networks, routing platforms born and bred on the service pull of IP networking have the advantage. The greatest customer demand is for IP services, and networks built on IP are naturally multiservice-capable, given their ability to converge data, VoIP, and video over IP. IP routing architecture has reached the hallowed five 9s of availability status, and representative platforms are faster, more scalable, and more service-rich than any networking technology that has come before. Innovations such as MPLS have created the flexibility to combine both conventional and contemporary networking approaches, achieving more customer-service granularity in the process. The combination of distributed processing architectures, IP and hardware acceleration in programmable silicon, virtualization architecture, and continuous system software operations now delivers high-end, service provider IP routing platforms that are continuously available, flexible, affordable, and secure.
For high-end service provider multiservice routing, the notable products are the Cisco CRS-1 Carrier Routing System, the Cisco IOS XR Software, and the Cisco XR 12000/12000 Series Routers.
Cisco CRS-1 Carrier Routing System

Once you turn on the Cisco CRS-1 Carrier Routing System, you might never turn it off. Unlike many routers that have preceded the CRS-1 design, the CRS-1 is scalable and simple, continuous and adaptable, flexible and high performance. None of these individual characteristics of the CRS-1 compromises another, leading to new achievements in nondisruptive scalability, availability, and flexibility of the system. Using the CRS-1 Carrier Routing System, providers can visualize one network with many services and limitless possibilities.

Using a new Cisco IOS XR Software operating system that is also modular and distributed, the CRS-1 is the first carrier IP routing platform that can support thousands of interfaces and millions of IP routes, using a pay-as-you-grow architectural strategy. The CRS-1 blends some of the best of computing, routing, and programmable semiconductor and software architectures for a new, high-end routing system that you can use for decade-plus lifecycles.

With the CRS-1’s concurrent scalability, availability, and performance, you can consolidate service provider point-of-presence (POP) designs, collapsing core, peering, and aggregation layers inside the covers of one system. Previous routing platforms had limitations in the number of peers, interfaces, or processing cycles, leading to network POP designs that layered functionality based on the performance constraints of the routing platforms. With the CRS-1, these limitations are removed—hardware works in concert with software for extensible convergence of network infrastructure and services. The CRS-1 represents the next-generation IP network core and is the foundation for IP/MPLS provider core consolidation.
CRS-1 Hardware Design

The CRS-1 hardware system design uses the primary elements of line card and fabric card shelves. Each type of shelf occupies the dimensional footprint of a standard telecommunications rack.
Line Card Shelf

Line card shelves support the Route Processors, integrated fabric cards, and the line card slots, each of which is capable of 40 Gbps performance. Known collectively as a Line Card Chassis, the shelf comes in either an 8-slot version or a 16-slot version.
NOTE  Cisco uses shelf as marketing terminology and the term chassis as engineering terminology; both terms are interchangeable.
Two Route Processors are installed per chassis, one active and one in hot standby. The Route Processors have their own dedicated slots and don’t subtract from the 8 or 16 potential line card slots of either chassis. Each Line Card Chassis contains up to 8 fabric cards in the rear of the chassis to support the Benes switching fabric in single-shelf system configurations. Each line card is composed of rear-facing Interface Modules and front-facing Modular Services Cards connected together via a midplane design.

The Line Card Chassis is where the route processing, forwarding, and control-plane intelligence of the system resides. Within each Line Card Chassis are 2 Route Processors, up to 16 Interface Modules paired with up to 16 Modular Services Cards, and 8 fabric cards. Redundant fan trays, power supplies, and cable management complete the distinctive elements within the Line Card Chassis.

Each Route Processor is made up of a symmetrical multiprocessing architecture based on a Dual PowerPC CPU complex with at least 4 GB of DRAM, 2 GB of Flash memory, and a 40 GB micro hard drive. One of the Route Processors operates in active mode with the other in hot standby. The Route Processors, along with system software, can provide nonstop forwarding (NSF) and stateful switchover (SSO) functions without losing packets. Another plus of the CRS-1 architecture is that any Route Processor can control any line card slot, on any Line Card Chassis in a multishelf system. Using features of the Cisco IOS XR Software operating system, Route Processors and line cards can be combined across the system chassis to create logical routers within the physical CRS-1 overall system. Any time that supplementary processing power is needed, the architecture supports the addition of distributed Route Processors, providing two additional Dual PowerPC CPU complexes with their associated DRAM, Flash, and hard drive.

To create a line card, a combination of an Interface Module and a Modular Services Card is used. The Interface Modules, also referred to as Physical Layer Interface Modules (PLIMs), contain the physical interface ports and hardware interface-specific logic. Interface Modules for the CRS-1 exist for OC-768c/STM-256c, OC-192c/STM-64c, OC-48c/STM-16c, and 10 Gigabit Ethernet. The Interface Modules, installed in the rear card cage of the Line Card Chassis, connect through the midplane to Modular Services Cards in the front card cage of the chassis. The Cisco Modular Services Cards are made up of a pair of Cisco Silicon Packet Processors (SPPs), each of which is an array of 188 programmable Reduced Instruction Set Computer (RISC) processors. These SPPs are deployed two per Modular Services Card, with one for the input direction and one for output packet processing. The SPP is another key innovation, as the SPP architecture achieves 40 Gbps line rates with multiple services, offering new features through in-service software upgrades to the SPP. The Interface Module and the
Modular Services Card work together as a pair to form a complete line card slot. The Modular Services Card interfaces with the fabric cards, using the switching fabric to reach other line cards or the Route Processor memory.
Fabric Chassis

The Fabric Chassis is used to extend the CRS-1 into a CRS-1 Multishelf System. Up to 8 Fabric Chassis can interconnect as many as 72 Line Card Chassis to create the maximum CRS-1 Multishelf System. The Fabric Chassis is used as a massively scalable stage 2 of the three-stage Benes switching fabric in a multishelf system configuration.

A switching fabric is a switch backplane, and many of the Cisco products use various types of switching fabrics to move packets between ingress interfaces and Route Processor memory and out to egress interfaces. For example, a crossbar fabric is a popular fabric used in many Cisco products, such as the 12000 series and the 7600 series. For hundreds or even thousands of interface ports, a crossbar switching mechanism becomes too expensive and its scheduling mechanisms too complex. Therefore, the CRS-1 implements a three-stage, dynamically self-routed, Benes topology cell-switching fabric. This fabric is a multistage buffered switching fabric that represents the lowest-cost N x N cell-switching matrix that avoids internal blocking. The use of a backpressure mechanism within the fabric limits the use of expensive off-chip buffer memory, instead making use of virtual output queues in front of the input stage. Packets are converted to cells, and these cells are used for balanced load distribution through the switch fabric. The cells are multipath routed between stages 1 and 2 and again between stages 2 and 3 to assist with the overall goal of a nonblocking switching architecture. The cells exit stage 3 into their destination line card slots, where the Modular Services Cards reassemble these cells into the proper order, forming properly sequenced packets.

The Benes topology switching fabric is implemented in integrated fabric cards for single-shelf systems and additionally implemented as standalone Fabric Chassis in a multishelf system configuration. Each standalone Fabric Chassis can contain up to 24 fabric cards for stage 2 operation. A CRS-1 Single-Shelf System uses integrated fabric cards within the Line Card Chassis that include all three stages within the card. In a CRS-1 Multishelf System, from one to eight CRS-1 Fabric Chassis are used to form stage 2 of the switching fabric, with stage 1 operating on the fabric card of the ingress line card shelf and stage 3 operating on the egress line card shelf across the fabric.

Figure 3-8 shows a conceptual diagram of the CRS-1 switching fabric. Physically, the Cisco CRS-1 fabric is divided into eight planes over which packets are divided into fixed-length cells and then evenly distributed. Within the planes, the three fabric stages—S1, S2, and S3—dynamically route cells to their destination slots, where the Modular Services Cards reassemble cells in the proper order to form properly sequenced packets.
Figure 3-8  One Plane of the Eight-Plane Cisco CRS-1 Switching Fabric (cells traverse stages S1, S2, and S3 within each plane) Source: Cisco Systems, Inc.
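The cell-spray-and-resequence behavior described above can be mimicked in a few lines. The sketch is purely conceptual: the 64-byte cell size, the round-robin distribution rule, and the shuffled delivery order are invented stand-ins, not the actual CRS-1 cell format or fabric scheduling.

import random

PLANES = 8
CELL = 64    # illustrative fixed cell size, not the real CRS-1 cell format

def spray(packet: bytes):
    """Cut a packet into sequence-numbered cells and distribute them across the planes."""
    cells = [(seq, packet[i:i + CELL])
             for seq, i in enumerate(range(0, len(packet), CELL))]
    by_plane = {plane: [] for plane in range(PLANES)}
    for seq, payload in cells:
        by_plane[seq % PLANES].append((seq, payload))    # even distribution over planes
    return by_plane

def reassemble(by_plane):
    """Egress side: merge per-plane arrivals and restore the original cell sequence."""
    arrived = [cell for plane_cells in by_plane.values() for cell in plane_cells]
    random.shuffle(arrived)                  # planes may deliver out of order
    return b"".join(payload for _, payload in sorted(arrived))

packet = bytes(range(256)) * 4               # a 1024-byte stand-in packet
assert reassemble(spray(packet)) == packet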
Together the Route Processors, fabric cards, Interface Modules, and Modular Services Cards work with the IOS XR operating system to create a routing architecture that is scalable from 640 Gbps to 92 Tbps (terabits per second) of performance. These capacities are accomplished through various configurations of a CRS-1 Multishelf System or a CRS-1 Single-Shelf System. The overall CRS-1 architectural design is conceptualized in Figure 3-9.
Cisco CRS-1 Multishelf System

The Cisco CRS-1 Multishelf Systems are constructed using a combination of Line Card Chassis and Fabric Chassis. Up to 72 Line Card Chassis can be interconnected with 8 Fabric Chassis to create a multishelf system with as many as 1,152 line card slots, each capable of 40 Gbps, yielding approximately 92 Tbps (full duplex) of aggregate performance capacity. Cisco CRS-1 Multishelf Systems can start with as few as 2 Line Card Chassis and 1 Fabric Chassis and grow as demand occurs.
Figure 3-9  Cisco CRS-1 Hardware Architecture (line cards pair an Interface Module with a Modular Services Card containing Cisco SPPs, connected through the midplane to the multistage S1/S2/S3 switch fabric, with Route Processors attached) Source: Cisco Systems, Inc.
Within a multishelf system, any Route Processor can control any line card on any Line Card Chassis in the system. For example, a Route Processor in Line Card Chassis number 1 can be configured to control a line card in Line Card Chassis number 72 using the Fabric Chassis as an internal connectivity path. Route Processors and distributed Route Processors are responsible for distributing control plane functions and processing for separation, performance, or logical routing needs. Using a Cisco CRS-1 Multishelf System, providers can achieve the following configurations:
• 2 to 72 Line Card Chassis
• 1 to 8 Fabric Chassis
• Switching capacity from 640 Gbps to 92 Tbps (full duplex)
• Support for up to 1,152 line cards at 40 Gbps each:
  — 1,152 OC-768c/STM-256c POS ports
  — 4,608 OC-192c/STM-64c POS/DPT ports
  — 9,216 10 Gigabit Ethernet ports
  — 18,432 OC-48c/STM-16c POS/DPT ports
Cisco CRS-1 16-Slot Single-Shelf System

The CRS-1 Single-Shelf Systems come as either a 16-slot or an 8-slot Line Card Chassis. Single-shelf systems use integrated Switch Fabric Cards (SFCs), installed in the rear card cage of the Line Card Chassis rather than using a standalone Fabric Chassis. In a single-shelf system configuration, the integrated SFCs perform all three stages of the Benes topology switching fabric operation. Using a Cisco CRS-1 16-Slot Single-Shelf System, providers can achieve the following configurations:

• 16-slot Line Card Chassis with integrated fabric cards
• Switching capacity to 1.28 Tbps (full duplex)
• Support for up to 16 line cards at 40 Gbps each:
  — 16 OC-768c/STM-256c POS ports
  — 64 OC-192c/STM-64c POS/DPT ports
  — 128 10 Gigabit Ethernet ports
  — 256 OC-48c/STM-16c POS/DPT ports
Cisco CRS-1 8-Slot Single-Shelf System

The CRS-1 Single-Shelf Systems also come in an 8-slot Line Card Chassis. The 8-slot Line Card Chassis is one half as tall as a 16-slot Line Card Chassis. As previously mentioned, single-shelf systems use the integrated SFCs, installed in the rear card cage of the Line Card Chassis, performing all three stages of the Benes topology switching fabric operation. Using a Cisco CRS-1 8-Slot Single-Shelf System, providers can achieve the following configurations:
• 8-slot Line Card Chassis with integrated fabric cards
• Switching capacity to 640 Gbps (full duplex)
• Support for up to 8 line cards at 40 Gbps each
  — 8 OC-768c/STM-256c POS ports
  — 32 OC-192c/STM-64c POS/DPT ports
  — 64 10 Gigabit Ethernet ports
  — 128 OC-48c/STM-16c POS/DPT ports
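The capacities quoted for these three configurations follow directly from the per-slot arithmetic: each line card slot is rated at 40 Gbps, full-duplex figures double that rate, and interface counts scale with the slot count. The Python sketch below simply reproduces the numbers from the lists above under those assumptions; the per-slot port multipliers are derived from the listed totals, and it is a back-of-the-envelope check, not a Cisco sizing tool.

```python
SLOT_RATE_GBPS = 40                      # each CRS-1 line card slot is rated at 40 Gbps
# Interface counts per 40 Gbps slot, derived from the lists above
# (for example, 4,608 OC-192 ports / 1,152 slots = 4 per slot).
PORTS_PER_SLOT = {
    "OC-768c/STM-256c POS": 1,
    "OC-192c/STM-64c POS/DPT": 4,
    "10 Gigabit Ethernet": 8,
    "OC-48c/STM-16c POS/DPT": 16,
}

def capacity(slots: int) -> None:
    full_duplex_gbps = slots * SLOT_RATE_GBPS * 2      # transmit + receive
    print(f"{slots:>5} slots -> {full_duplex_gbps:>6} Gbps full duplex "
          f"({full_duplex_gbps / 1000:g} Tbps)")
    for port, per_slot in PORTS_PER_SLOT.items():
        print(f"        up to {slots * per_slot:>6} x {port}")

capacity(8)        # CRS-1 8-slot single-shelf system: 640 Gbps
capacity(16)       # CRS-1 16-slot single-shelf system: 1.28 Tbps
capacity(72 * 16)  # maximum multishelf system, 1,152 slots: roughly 92 Tbps
```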
Cisco IOS XR Software

The Cisco IOS XR Software is likely to be one of the most important technology innovations of this decade. Benefiting from over 20 years of IOS development and experience, the Cisco IOS XR answers the following questions:
• “Why can’t a router platform be divided into separate physical and logical partitions, as the computer industry has done with mainframes for many years?” Now it can.
• “Why can’t a router’s control plane be separated so that software images can be individually managed, restarted, and upgraded without risk to other partitions?” With IOS XR, now it can.
• “When will a router support five nines of reliability?” With IOS XR in use, now it does.
IOS XR answers these questions and more with massive scalability; a high-performance, distributed-processing, multi-CPU-optimized architecture; and continuous system operation. With IOS XR in a CRS-1 Multishelf System, distributed processing intelligence can take full advantage of hardware interface densities and symmetric multiprocessing power, scaling up to 92 Tbps per multishelf system.

IOS XR is built on a QNX microkernel operating system with memory protection that places strict logical boundaries around subsystems to ensure independence, isolation, and optimization. Only the essential operating functions reside in the kernel, strengthening this key element of the overall software system. Because processes and subsystems can be distributed anywhere across CRS-1 hardware resources, IOS XR can dedicate processing, protected memory, and control functions to those resources—creating not only logical routers, but resource-allocated physical routers as well.

This leads to the ability to partition operations such that a production routing system and a development routing system can reside on the same physical system. A provider can therefore market to a sophisticated customer both a production networking service for mission-critical applications and a development networking partition where new features can be developed and tested without impacting those mission-critical applications. Alternatively, a provider can run multiple MPLS administrative domains on the same physical system, each with attributes and software characterized as a leading-edge, edge, or lagging-edge type of network service, applying more granularity to customer risk and choice. The separation architecture of IOS XR, blended with the hardware platforms, provides flexibility in IP network design for providers.

With IOS XR, multiple partitions can mean multiple software versions running on the same physical system chassis. IOS software levels are distributed in a modular fashion, allowing for software patches and bug fixes in one partition without affecting others. This enables an in-service upgrade approach, as each partition process can be restarted without affecting the other running systems and their respective routing topologies.
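The restartable-process model is easiest to picture with a small operating-system analogy. The hedged Python sketch below uses ordinary multiprocessing, not IOS XR code, and the subsystem names are placeholders: it starts a few independent worker processes, kills and restarts one of them, and shows that the others keep running untouched, which is the same separation property that lets an IOS XR partition be patched or restarted without disturbing its neighbors.

```python
import time
from multiprocessing import Process

def worker(name: str) -> None:
    """Stand-in for an isolated routing subsystem; it just stays alive doing periodic work."""
    while True:
        time.sleep(0.2)

def start(name: str) -> Process:
    p = Process(target=worker, args=(name,), name=name, daemon=True)
    p.start()
    return p

if __name__ == "__main__":
    # Three independent "subsystems", each in its own protected process.
    procs = {name: start(name) for name in ("bgp", "ospf", "snmp")}
    time.sleep(0.5)

    # Restart only the "bgp" process; the others are never touched.
    procs["bgp"].terminate()
    procs["bgp"].join()
    procs["bgp"] = start("bgp")
    time.sleep(0.5)

    for name, p in procs.items():
        print(f"{name}: alive={p.is_alive()}, pid={p.pid}")
```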
In today’s networks, security and reliability go hand in hand. Perhaps one of the greatest benefits of the IOS XR’s isolatable architecture is the ability to resist malicious attacks, such as TCP/IP-based denial of service and distributed denial of service threats. Even if a TCP/IP subsystem were compromised, that subsystem runs outside of the IOS XR system kernel, so the kernel and the other protected subsystem processes would continue to operate. The Cisco IOS XR Software architecture is conceptualized in Figure 3-10.

Figure 3-10  Cisco IOS XR Software Architecture
(The figure shows IOS XR subsystems arranged across the control, data, and management planes on a distributed OS infrastructure: a BGP speaker with route selection and a process manager, BGP, IS-IS, OSPF, RIP, PIM, IGMP, LPTS, FIB, QoS, ACL, PRI, L2 drivers, CLI, XML, alarm, NetFlow, SNMP, and SSH. Source: Cisco Systems, Inc.)
The Cisco IOS XR Software assists with making the latest high-end routing systems more scalable, flexible, reliable, and secure. The Cisco IOS XR Software is perhaps the prime catalyst for next-generation IP/MPLS networks that can now operate on a worldwide scale. For a full listing of features and functions, examine the various Cisco CRS-1 and IOS XR information found at http://www.cisco.com/go/crs.
Cisco XR 12000/12000 Series Routers

The Cisco XR 12000 Series Routers are so named because they combine the innovative features of the Cisco IOS XR Software with the superior heritage of the Cisco 12000 Series routing platforms. The Cisco XR 12000/12000 Series Routers are optimally positioned for
the next-generation core and edge of provider networks, with a strength in multiservice edge consolidation. The XR 12000s are optimized to run the Cisco IOS XR Software, while the 12000s are the original 12000 series running the Cisco IOS Software.

Using the Cisco IOS XR Software with the distributed architecture of the XR 12000, the XR 12000 routers achieve both logical and physical routing functionality that can operate independently within a single XR 12000 chassis. A private MPLS VPN service, for example, could be kept completely isolated from a public Internet service, not only for security but also operationally: an anomaly affecting the public Internet service might require that service to be restarted within the router, yet this action wouldn’t affect the private MPLS VPN service running as a separate process.

Four primary elements make up the XR 12000 architecture:
• General Route Processor
• Switch fabric
• Intelligent line cards
• Operating software
XR 12000/12000 Architecture

All generic routers use a general Route Processor to provide control plane, data plane, and management plane functions. As line speeds and densities increase, this Route Processor must keep up with the data forwarding rate while simultaneously maintaining control and management functions. At higher line rates, centralized processor architectures encounter timing sensitivities that constrain parallel feature processing. Distributed processing architectures, as in the XR 12000/12000 series, remove these constraints and leverage multiprocessing for aggregate switching performance gains.

The XR 12000/12000 routers can be equipped with a premium routing processor known as the Performance Route Processor 2 (PRP-2). The PRP-2 supports more than one million route prefixes and 256,000 multicast groups. It helps the 12000 routers reach up to 1.2 Tbps of aggregate switching performance in conjunction with an appropriate quantity and speed of intelligent line cards.

In addition to the Cisco IOS XR Software benefits, the distribution of multiple processors within the XR 12000 chassis allows for an extension and separation of the control plane across multiple service instances. This provides control and management plane independence, helping facilitate logical and physical independence. These distributed processors are manifested in IP Services Engines (ISEs), with a particular ISE personalization representing the central intelligence of each line card. ISEs are Layer 3-forwarding, CEF-enabled packet processors built with programmable application-specific integrated circuits (ASICs) and optimized memory matrices.

The primary benefit of the ISE technology is the ability to run parallel IP feature processing at the network edge—at line rate. The programmability of the ISEs is key to investment
protection, as new features can be added without a hardware upgrade. ISEs are architected for 2.5 Gbps, 10 Gbps, and 40 Gbps operation and are often optimized toward core or edge functions. The ISEs have progressed through various technology enhancements over the past several years and are classified by engine type, as follows:
• ISE engine 0—Known internally as the OC-12/BMA, this original ISE engine 0 uses an R5K CPU. Most features are implemented in software. An example of an ISE engine 0 is the 4-port OC-3 ATM line card. QoS features are rather limited.
• ISE engine 1—Known internally as the Salsa/BMA48, this engine was improved using a new ASIC (Salsa), allowing IP lookup to be performed in hardware. An example of an ISE engine 1 is the 2-port OC-12 Dynamic Packet Transport (DPT) line card. QoS features are rather limited.
• ISE engine 2—Known internally as the Perf48, this engine added new ASICs to perform hardware lookup for IP/MPLS switching. On-card packet memory was increased to 256 MB or 512 MB. New hardware-based class of service features were added, such as weighted random early detection (WRED) and Modified Deficit Round Robin (MDRR). An example of an ISE engine 2 is the 3-port Gigabit Ethernet line card.
• ISE engine 3—Internally referred to as the Edge engine, engine 3 is a completely rearchitected Layer 3 engine. Engine 3 accommodates an OC-48 worth of bandwidth and integrates additional ASICs to improve QoS and access control list (ACL) features that can be performed at line rate. An example of an ISE engine 3 is the 1-port OC-48 POS ISE line card. There is also an engine 3 version of the 4-port OC-3 ATM card mentioned earlier.
• ISE engine 4—Referred to as the Backbone 192 engine, this engine is optimized and accelerated to support an OC-192 line rate. An example of an ISE engine 4 is the 1-port OC-192 POS line card.
• ISE engine 5—Optimized for 10 Gbps line rates with full feature sets, including multicast replication. An example of an ISE engine 5 is the SIP-600 SPA Interface Processor-600 line card.
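For quick reference, the engine classifications above can be collapsed into a small lookup structure. The Python mapping below only restates information from the preceding list (internal name, rough optimization point, and the example line card given for each engine); treat it as a study aid rather than an authoritative compatibility matrix.

```python
# Summary of the ISE engine types described above (illustrative study aid only).
ISE_ENGINES = {
    0: ("OC-12/BMA", "original engine, most features in software", "4-port OC-3 ATM"),
    1: ("Salsa/BMA48", "hardware IP lookup via the Salsa ASIC", "2-port OC-12 DPT"),
    2: ("Perf48", "hardware IP/MPLS lookup, WRED and MDRR CoS", "3-port Gigabit Ethernet"),
    3: ("Edge engine", "OC-48 bandwidth, line-rate QoS and ACLs", "1-port OC-48 POS ISE"),
    4: ("Backbone 192 engine", "optimized for OC-192 line rate", "1-port OC-192 POS"),
    5: ("engine 5", "10 Gbps line rate, full features incl. multicast replication",
        "SIP-600 SPA Interface Processor-600"),
}

def describe(engine: int) -> str:
    name, focus, example = ISE_ENGINES[engine]
    return f"ISE engine {engine} ({name}): {focus}; example card: {example}"

for engine in sorted(ISE_ENGINES):
    print(describe(engine))
```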
Depending on an ISE’s functional legacy, an ISE might not be supported by new features in Cisco IOS software or the Cisco IOS XR Software. It is always wise to consult Cisco support tools to determine hardware platform, ISE engine type, and software feature compatibility when designing with these components. The XR 12000/12000 multigigabit switch fabric works in combination with a passive chassis backplane, interconnecting all router components within an XR 12000/12000 router chassis. The active switching fabric is resident on pluggable cards known as SFCs and clock scheduler cards (CSCs), and these SFCs/CSCs are installed in a lower card shelf that interconnects with the XR 12000 backplane. This allows the SFCs/CSCs to be field upgraded easily. For example, changing a router to support 40 Gbps per line card slot from
a 10 Gbps per line card slot can be accomplished by replacing the SFCs/CSCs with versions that can clock and switch 40 Gbps-enabled ISE line cards and the PRP-2. This allows an XR 12000/12000 router to grow to as much as 1.28 Tbps of aggregate switching capacity. Another performance-enhancing feature of the XR 12000 switch fabric is that any IP multicast packet replication (for example, IP video) is now performed by the switch fabric itself, rather than burdening the general Route Processor (PRP-2). The Cisco XR 12000 Series Routers are capable of running the Cisco IOS XR Software previously described. This software extends continuous system operation, performance scalability, and logical and physical virtualization features to the XR 12000 series routing platforms.
Cisco XR 12000/12000 Capacities

The Cisco XR 12000/12000 Series Routers span a scalable range of capacity from 30 Gbps to 1,280 Gbps (1.28 Tbps). Multiservice routers are commonly categorized by card slot quantity, throughput capacity per slot, and aggregate switching fabric capacity (full duplex, or bidirectional). You can determine all three from the Cisco model number without referencing any documentation.

The model number convention defines the first two digits (12XXX) as the 12000 series family of routers. An XR-capable chassis is prefixed with an XR (XR-12XXX). The third digit of the 12000 model number represents the line rate capacity per card slot, where XX0XX equals 2.5 Gbps (5 Gbps full duplex [FDX]), XX4XX equals 10 Gbps (20 Gbps FDX), and XX8XX equals 40 Gbps (80 Gbps FDX). The fourth and fifth digits of the 12000 model number define the total number of chassis card slots, where 12X04 equals four card slots, 12X06 equals six card slots, 12X10 equals 10 card slots, and 12X16 equals a 16-card slot router chassis.

To determine the gross-effective aggregate switching capacity of a particular model, you multiply the line rate per card slot by the number of card slots, but this is where it can get confusing. Vendor literature often discusses line rate capabilities using the industry-familiar line rates of 2.5 Gbps (OC-48/STM-16), 10 Gbps (OC-192/STM-64), and 40 Gbps (OC-768/STM-256) services. On closer inspection, that line rate is used in a total aggregate capacity calculation for the router, but it is doubled to reflect full-duplex operation. Often forgotten is that a 10 Gbps line rate is capable of that speed bidirectionally, in the transmit and receive directions simultaneously. The theoretical total capacity therefore becomes the full-duplex line rate (for example, 10 Gbps becomes 20 Gbps FDX) times the number of card slots.

Continuing with the Cisco model number convention, you can examine the third digit to determine the full-duplex line rate per card slot (for example, 4 = 10 Gbps half duplex [HDX] = 20 Gbps FDX) and multiply it by the number of total card slots indicated by the
fourth and fifth digits of the model number. A model number of 12410 works out to 20 Gbps x 10 card slots = 200 Gbps of total aggregate switching capacity for the 12410 platform. A model 12816 works out to 80 Gbps x 16 slots = 1,280 Gbps, or 1.28 Tbps. This is gross-effective switching capacity; the actual net-effective capacity depends on the number of general-purpose processors (for example, PRP-2) configured for the system, as these subtract from the available card slots in most of the systems. Figure 3-11 shows the relative positioning of the Cisco XR 12000/12000 Series Routers based on gross-effective capacities. As the figure shows, most models have a growth path for executing a pay-as-you-grow strategy.

Figure 3-11  Cisco XR 12000/12000 Series Router Capacities
(The figure plots line rate per card slot (2.5, 10, and 40 Gbps) against aggregate switching capacity for the Cisco XR 12000 and 12000 Series models 12404, 12006, 12010, 12016, 12406, 12410, 12416, 12810, and 12816, spanning roughly 30 Gbps to 1,280 Gbps (1.28 Tbps).)
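Because the decoding rules above are purely mechanical, they are easy to capture in a few lines of code. The Python sketch below decodes a 12000-series model number into its per-slot line rate, slot count, and gross-effective aggregate capacity using only the conventions described in this section; it is an illustration of the convention, not a Cisco tool, and it does not account for slots consumed by Route Processors.

```python
# Full-duplex line rate per slot implied by the third digit of the model number.
FDX_RATE_PER_SLOT_GBPS = {"0": 5, "4": 20, "8": 80}   # 2.5/10/40 Gbps half duplex, doubled

def decode_12000_model(model: str):
    """Decode a Cisco 12000-series model number (e.g., 'XR-12410' or '12816')."""
    digits = model.upper().removeprefix("XR-")
    if len(digits) != 5 or not digits.startswith("12"):
        raise ValueError(f"not a 12000-series model number: {model}")
    fdx_per_slot = FDX_RATE_PER_SLOT_GBPS[digits[2]]   # third digit -> per-slot rate
    slots = int(digits[3:])                            # fourth and fifth digits -> slot count
    return fdx_per_slot, slots, fdx_per_slot * slots   # gross-effective aggregate capacity

for model in ("12404", "12406", "XR-12410", "12416", "12810", "XR-12816"):
    per_slot, slots, total = decode_12000_model(model)
    print(f"{model}: {slots} slots x {per_slot} Gbps FDX = {total} Gbps aggregate")
```

Running the sketch reproduces the examples in the text: the 12410 works out to 200 Gbps and the 12816 to 1,280 Gbps of gross-effective capacity.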
The XR 12000/12000 series router product line includes additional features worthy of mention. The routers use the Cisco I-Flex design, which is implemented as intelligent, programmable interface processors with modular port adapters. This design combines both shared port adapters (SPAs) with SPA interface processors (SIPs) to improve line card slot economics and service density. The SIPs use the IP Services Engine (ISE) technology and are packaged into a SIP-400 or SIP-600 line card for the 12000 platform. The SIP-600 supports 10 Gbps per slot with two single- or double-height SPAs, and the SIP-400 supports 2.5 Gbps per slot and up to four single-height SPAs. A number of different SPAs are available to connect high-speed interfaces. The combination of the SPAs/SIPs creates interface flexibility, portability, and density for the XR 12000/12000 router platforms.
The platforms have enhanced fabrics that now support Building Integrated Timing Supply (BITS) and single-router Automatic Protection Switching (SR APS). BITS allows for centralized timing distribution for multiservice edge applications, particularly where the 12000 is used to aggregate traffic from ATM access networks. These ATM networks have relied on BITS, and the feature is essential to allow migration of ATM access networks onto XR 12000/12000-based IP/MPLS core networks. The SR APS feature enables true APS through the 12000 system platforms. Adding APS to the fabric, together with backpressure support in the fabric scheduler, eliminates timing slips when switching between active and standby cards by leveraging the fabric mirroring function and locking the timing to BITS. The fabric’s backpressure support also keeps the routers from dropping packets if an active card is removed.
Multiservice Core and Edge Switching

Networking traffic continues to accelerate at the metro edge and aggregate into the metro core, from large enterprises driving Ethernet requirements into metropolitan area networks (MANs) to rising waves of broadband from small and medium businesses and consumers. In fact, the Ethernet opportunity within the service provider space is wide open, and providers of all types are counting on Ethernet services as a large part of their portfolio growth. While there is a demand shift from circuit to packet traffic within the MAN, the vast installed base of SONET/SDH service functionality precludes a forklift upgrade of metropolitan provider technology, instead requiring an evolutionary migration path to packet-based services from a SONET/SDH heritage.

Multiservice Provisioning Platforms (MSPPs) combine the functions and services of different network elements into a single device. For a few more years, voice traffic is predicted to remain the cash cow of provider revenues, making time-division multiplexing (TDM) switching support an important requirement. The MSPP market is defined as new-generation provider equipment with SONET/SDH add/drop multiplexer (ADM) functionality and TDM and packet functionality, particularly Ethernet, deployed at the metro multiservice edge or core. Multiservice Switching Platforms (MSSPs) are optimized for metropolitan core aggregation requirements, typically consolidating multiple discrete SONET ADMs and broadband digital access cross-connect systems (DACSs) while providing core switching services for multiple MSPP deployments. Eliminating platforms, no matter how reliable, reduces the single points of failure in the overall network architecture. MSPPs and MSSPs integrate multiple device functions to allow consolidation of platforms while introducing new technology for services innovation.

MSPPs and MSSPs entered the market at the beginning of a long telecom winter in 2000. However, their inherent value proposition has weathered the fiscal storms and frozen budgets, finding favor first with emerging network providers and then moving into the
incumbent provider regions. Providing flexible access services with an optical view toward the network’s center, multiservice provisioning and switching network elements are landing on the customer-facing edges of today’s new optical networks.

Figure 3-12 shows the typical positioning of the Cisco ONS 15454 MSPP and the ONS 15600 MSSP within the MAN architecture. The ONS 15454 MSPP is often deployed at the edge of metropolitan provider networks based on SONET/SDH rings. The MSPP provides customer-facing communication services and connects back to the service provider core via optical-based SONET/SDH rings or laterals. The ONS 15600 MSSP provides for broadband aggregation and switching of multiple MSPP rings aggregating into the core of provider networks. The MSSP often facilitates metropolitan connection to long-haul and extended long-haul (LH/ELH) networks.

Figure 3-12  MSPP and MSSP Metropolitan Application
(The figure shows ONS 15454 MSPPs on metro edge rings feeding ONS 15600 MSSPs on a metro core ring, which in turn connects to the LH/ELH network. Source: Cisco Systems, Inc.)
The next sections describe both platforms in more detail.
Multiservice Provisioning Platform (MSPP)

The market for MSPPs emerged in 2000, starting the century strong with network edge technology turnover and service positioning. This market was seeded by technology pioneered by start-up Cerent, which was acquired by Cisco in 1999. One year later, the MSPP market gathered $1 billion in revenue on a worldwide basis.

The primary appeal of MSPPs is to consolidate long-established SONET/SDH ADMs in the multiservice metro, while incorporating Layer 2 and new Layer 3 IP capabilities with packet interfaces for Ethernet, Fast Ethernet, and Gigabit Ethernet opportunities. Many MSPPs contain additional support for multiservice interfaces and dense wavelength division multiplexing (DWDM) to optimize the use of high-value metropolitan optical fiber. Deployed as a springboard for the rapid provisioning of multiple services, the intrinsic value of these new-generation platforms is to build a bridge from circuit-based transport to packet-based services. MSPPs help providers execute that strategy while maintaining established services with TDM switching support and SONET/SDH capabilities. Entering the market near the end of many legacy SONET/SDH ADM depreciation schedules, the MSPPs inherit a sizable portion of their justification from reduced power, space, and maintenance requirements. In doing so, MSPPs help with continued optimization of operating budgets while representing strategic capital investments for new high-value service opportunity.

It is difficult to discuss SONET/SDH without a reference to the bandwidth speeds and terminology used by these worldwide standards. Table 3-4 shows a comparison of SONET/SDH transmission rates.
Table 3-4  Comparison of SONET/SDH Transmission Rates (Digital Hierarchy for United States SONET [GR.253] and European SDH [G.691])

Line Rate (Mbps)       Payload Rate (Mbps)   SONET Electrical Signal   SONET Optical Carrier (OC) Level   SDH Equivalent Transport
51.84                  50.112                STS-1                     OC-1                               STM-0
155.520                150.336               STS-3                     OC-3                               STM-1
622.080                601.344               STS-12                    OC-12                              STM-4
2,488.32               2,405.376             STS-48                    OC-48                              STM-16
9,953.28 (10 Gbps)     9,621.504             STS-192                   OC-192                             STM-64
39,813.12 (40 Gbps)    38,486.016            STS-768                   OC-768                             STM-256
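The rates in Table 3-4 are all multiples of the base STS-1/OC-1 signal: an OC-N line rate is N x 51.84 Mbps, the corresponding payload rate is N x 50.112 Mbps, and STM-M in SDH lines up with OC-3M. The short Python check below regenerates the table's numbers from that relationship; it is an arithmetic illustration only.

```python
STS1_LINE_MBPS = 51.84       # base SONET line rate
STS1_PAYLOAD_MBPS = 50.112   # base SONET payload rate

def sonet_rates(oc_level: int):
    """Return (line rate, payload rate) in Mbps for OC-N/STS-N."""
    return oc_level * STS1_LINE_MBPS, oc_level * STS1_PAYLOAD_MBPS

for n in (1, 3, 12, 48, 192, 768):
    line, payload = sonet_rates(n)
    stm = f"STM-{n // 3}" if n >= 3 else "STM-0"
    print(f"OC-{n:<4} {stm:<8} line {line:>10.2f} Mbps   payload {payload:>10.3f} Mbps")
```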
Many MSPP devices support optical trunk rates from OC-3/STM-1 and OC-12/STM-4 to OC-48/STM-16 and OC-192/STM-64. This provides flexibility in using the MSPP for metropolitan edge access services (trunk rates of OC-3/STM-1 and OC-12/STM-4) and even for metropolitan core applications when MSPPs include support for OC-48/STM-16 and OC-192/STM-64 speed optical interfaces. A small percentage of MSPPs are used in long-haul applications, particularly when the platform includes reasonable numbers of optical interfaces at OC-48/STM-16 and OC-192/STM-64.

In the MSPP market, the primary Cisco offering is the ONS 15454 SONET/SDH-based MSPP, supporting DS1/E1 to OC-192/STM-64, TDM switching, switched 10/100/1000 line-rate Ethernet, DWDM, and other features in a compact chassis. Combining STS-1/VC-3/VC-4 and VT1.5/VC-12 bandwidth management, packet switching, cell transport, and 3/1 and 3/3 transmux functionality, the ONS 15454 reduces the need for established digital cross-connect elements at the customer-facing central offices. The ONS 15454 MSPP supports TDM, ATM, video, IP, Layer 2, and Layer 3 capabilities across OC-3 to OC-192 unidirectional path-switched rings (UPSRs); two- or four-fiber bidirectional line-switched rings (BLSRs); and linear, unprotected, and path-protected mesh network (PPMN) optical topologies.

Figure 3-13 shows the concept of service delivery on the ONS 15454 MSPP. This diagram shows a conceptual chassis layout of the Cisco ONS 15454 MSPP using the cross-connect, timing control, and SONET/SDH OC-48/STM-16 trunk cards. Also shown is an ML series Ethernet card for the provisioning of Gigabit Ethernet for Transparent LAN Services (TLS). The figure also depicts how these different services can be aggregated via STS bandwidth increments, effectively packing multiple services within the OC-48/STM-16 optical uplink.

With Ethernet connectivity services in high demand at the metro edge, the ONS 15454 MSPP delivers a very strong Ethernet portfolio. The ONS 15454 uses multiple series of data cards to support Ethernet, Fast Ethernet, and Gigabit Ethernet over SONET/SDH. These card types are the E series, G series, ML series, and CE series Ethernet data cards. Ethernet over SONET/SDH services can be combined within 15454 Ethernet cards via STS scaling in a variety of increments, depending on the type of Ethernet card used. Table 3-5 shows typical STS values and their respective aggregate line rates.
Table 3-5  STS Bandwidth Scaling

STS Bandwidth Increment   Effective Line Rate (Mbps)
STS-6c                    311.04
STS-9c                    466.56
STS-12c                   622.08
STS-18c                   933.12
STS-24c                   1,244.16
STS-36c                   1,866.24
Figure 3-13  Service Delivery on the Cisco ONS 15454 MSPP

(The figure shows an ONS 15454 with ML-series, TCC2, XC-10G, and OC-48 cards packing services into the OC-48 uplink: an STS-12 increment for Ethernet SLAs, an OC-3 TDM service, and an STS-24 increment for a line-rate Gigabit Ethernet private line feeding TLS and Internet access through a Catalyst switch. Network-side notes call out use of the SONET infrastructure, sub-50 ms SONET recovery, SP management with TL1/OSMINE, per-port/per-service rate limiting, QoS/CoS through traffic classification, and security benefits. Source: Cisco Systems, Inc.)
Cisco ONS 15454 E Series Ethernet Data Card

The E series data cards support 2.4 Gbps of switching access to the TDM backplane, interfacing at STS rates up to STS-12. These cards support 10 Mbps Ethernet, 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet (limited to 622 Mbps) using STS bandwidth scaling at increments of STS-1c, STS-3c, STS-6c, and STS-12c. These cards are useful for setting up point-to-point Ethernet private lines, which don’t need Spanning Tree Protocol (STP) support.
Cisco ONS 15454 G Series Ethernet Data Card

The G series data cards are higher-density Gigabit Ethernet cards, supporting access to the ONS 15454’s TDM backplane at rates up to STS-48/VC-x-y. STS/VC bandwidth scaling is available for the real concatenation (RCAT) standard in selectable increments of STS-1, STS-3c, STS-12c, and STS-24c. The extended concatenation (ECAT) standard is supported with increments of STS-6c, STS-9c, and STS-24c. The G series cards yield higher performance, with aggregate access rates of four times the E series cards. All Ethernet frames are simply mapped into SONET/SDH payloads, so there are fewer design constraints and
ultra-low latency. The cards also support Gigabit EtherChannel and the IEEE 802.3ad link aggregation standard, so multigigabit Ethernet links can be created to scale capacity and provide link redundancy. The G series cards are targeted at the point-to-point Ethernet private line market, where speeds beyond 1 Gbps are desired services.
Cisco ONS 15454 ML Series Ethernet Data Card

With the ML series data cards, you can create any point-to-point or multipoint Ethernet service using the Layer 2 or Layer 3 control planes or via the software provisioning tools. These cards are used primarily for Fast Ethernet and Gigabit Ethernet support. Multiple levels of priority are available for class of service awareness, as is the ability to guarantee sustained and peak bandwidths. These cards access the TDM backplane at an aggregate level of 2.4 Gbps. The ML series Ethernet ports can be software provisioned from 50 Mbps to the port’s full line rate in STS-1, STS-3c, STS-6c, STS-9c, STS-12c, and STS-24c increments. Bandwidth guarantees can be established down to 1 Mbps.

ML series cards take advantage of features within Cisco IOS software, sharing a common code base with Cisco enterprise routers. The ML series includes two virtual Packet over SONET/SDH ports, which support Generic Framing Procedure (GFP) and virtual concatenation (VCAT) with software-based Link Capacity Adjustment Scheme (SW-LCAS). EoMPLS is supported as a Layer 2 bridging function. Virtual LANs (VLANs) can be created using the IEEE 802.1Q VLAN encapsulation standard, which can tag up to 4096 separate VLANs; the cards additionally support the IEEE 802.1Q tunneling standard (Q-in-Q) and Layer 2 protocol tunneling. Layer 2 Ethernet VPNs are best supported via the 802.1Q tunneling standard, which uses this double-tagging hierarchy to preserve provider VLANs by tunneling all of a customer’s 802.1Q-tagged VLANs within a single provider 802.1Q VLAN instance. For Layer 2 VPN delivery across multiple SONET/SDH rings, a combination of IEEE 802.1Q tunneling in the access layer and EoMPLS across the core is a recommended design practice. All of these features allow for strong Ethernet rate shaping functionality at the edge with highly reliable SONET/SDH protection.
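The double-tagging idea behind 802.1Q tunneling can be sketched with plain data structures. The hedged Python example below is a conceptual model, not the ML card's implementation, and the function and tag names are illustrative: each customer frame keeps its own C-VLAN tag while the provider pushes a single outer S-VLAN tag at the network edge and pops it again at the far edge, so overlapping customer VLAN IDs never collide inside the provider network.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    payload: str
    vlan_tags: list = field(default_factory=list)   # outermost tag first

def provider_ingress(frame: Frame, provider_vlan: int) -> Frame:
    """Push one provider (outer) tag; the customer's 802.1Q tags are carried untouched."""
    frame.vlan_tags.insert(0, ("S-TAG", provider_vlan))
    return frame

def provider_egress(frame: Frame) -> Frame:
    """Pop the provider tag, handing the frame back with its original customer tags."""
    assert frame.vlan_tags and frame.vlan_tags[0][0] == "S-TAG"
    frame.vlan_tags.pop(0)
    return frame

# Two customers reuse the same C-VLAN ID 100; each gets its own provider VLAN instance.
cust_a = Frame("customer A traffic", [("C-TAG", 100)])
cust_b = Frame("customer B traffic", [("C-TAG", 100)])

a_in_core = provider_ingress(cust_a, provider_vlan=2001)
b_in_core = provider_ingress(cust_b, provider_vlan=2002)
print("in core:", a_in_core.vlan_tags, b_in_core.vlan_tags)
print("delivered:", provider_egress(a_in_core).vlan_tags, provider_egress(b_in_core).vlan_tags)
```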
Cisco ONS 15454 CE Series Ethernet Data Card

The CE series card is named for “Carrier Ethernet.” This card is designed for optimum delivery of carrier-based, private-line Ethernet services, leveraging enhanced capabilities over SONET/SDH MSPP networks. Specifically, this card supports eight ports of 10/100BASE-T RJ-45 Ethernet. What is key is that the CE series card supports Packet over SONET/SDH virtual interfaces, supports GFP, and can use high-order VCAT and LCAS for optimum bandwidth efficiency over SONET/SDH and in-service bandwidth capacity adjustments. Typical Ethernet features and IEEE 802.1p class of service (CoS) marking are supported.
The card has a maximum aggregate capacity of 600 Mbps, yielding a low oversubscription ratio if all eight ports are provisioned for full 100BASE-T operation. Each port can be configured from 1.5 Mbps to 100 Mbps, leveraging the capabilities of low-order and high-order VCAT. Each port forms a virtual concatenation group (VCG) using contiguous concatenation (CCAT) or VCAT, and port traffic from these eight Ethernet interfaces is mapped into the virtual Packet over SONET (PoS) interfaces via either GFP or High-Level Data Link Control (HDLC) framing. Each port forms a one-to-one relationship, as each port-based VCG is identifiable within the resulting SONET/SDH circuit that is created upstream of the ONS 15454 MSPP. Because each VCG is identifiable, LCAS can then be used to dynamically adjust individual port bandwidth capacity on the fly, in real time. A customer can order 1.5 Mbps Ethernet service and then grow to 100 Mbps capacity in appropriate increments on an in-service basis. This facilitates a key differentiator for providers looking to craft dynamic provisioning of Ethernet-based services.
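The in-service growth described above comes down to adding or removing VCAT members in a port's virtual concatenation group. The rough Python sketch below estimates how many low-order members a requested Ethernet rate needs, assuming roughly 1.5 Mbps of usable capacity per member (the same 1.5 Mbps granularity the text uses as its starting increment); actual member types, payload rates, and LCAS behavior on the CE card may differ.

```python
import math

MEMBER_MBPS = 1.5          # assumed usable capacity per low-order VCAT member (illustrative)
PORT_MAX_MBPS = 100        # CE series ports run 10/100BASE-T

def members_needed(requested_mbps: float) -> int:
    """Estimate the number of VCAT members for a requested Ethernet service rate."""
    if not 0 < requested_mbps <= PORT_MAX_MBPS:
        raise ValueError("requested rate must be between 0 and 100 Mbps")
    return math.ceil(requested_mbps / MEMBER_MBPS)

# A customer starts at 1.5 Mbps and later grows the same port in service via LCAS.
for rate in (1.5, 10, 45, 100):
    print(f"{rate:>5} Mbps service -> about {members_needed(rate)} members in the VCG")
```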
Multiservice Switching Platforms (MSSP)

The MSSP is a natural follow-on to the success of the MSPP. The MSSP is a new-generation SONET/SDH, metro-optimized switching platform that switches higher-bandwidth traffic from MSPP edge to edge or from edge to core, allowing metro networks to scale efficiently. When you consider that edge MSPPs increase bandwidth aggregation from typical OC-3/STM-1 and OC-12/STM-4 bulk traffic to new levels of OC-48/STM-16 and OC-192/STM-64, the bandwidth bottleneck can move from the metropolitan edge to the metropolitan core. The increased bandwidth shifts the management focus from DS0s and T1s to SONET STS or SDH VC-4 levels. As this bandwidth is delivered toward the network core, efficient scaling is needed, particularly for large metropolitan areas.

The MSSP serves that need by aggregating high-bandwidth MSPP edge rings onto the provider’s interoffice ring. Its high-density design and small footprint position the MSSP to replace multiple, often stacked, high-density SONET ADMs and broadband digital cross-connects (BBDXCs) that are used to groom access rings to interoffice rings. This allows a reduction in network element platforms and single points of failure within central offices of the MAN architecture. Figure 3-14 shows this concept of not only consolidating equipment and functionality within the central office but also adding Layer 2 switching capability using the Cisco MSSP and MSPP architecture.
Figure 3-14  SONET/SDH Network Element Consolidation Using Cisco MSSP and MSPP

(The figure contrasts a typical central office BBDXC application, with stacked SONET ADMs and TDM switching only, against an MSSP + MSPP solution in which a single MSSP terminates the OC-48/OC-192 rings and provides both TDM and Layer 2 switching. Source: Cisco Systems, Inc.)
The MSSP is a true multiservice platform that leverages a provider’s investment in SONET or SDH optical infrastructure. Supporting a wide variety of network topologies makes the MSSP adaptable to any optical architecture. In SONET networks, the Cisco MSSP supports UPSRs, as stated by Telcordia’s GR-1400, and two-fiber and four-fiber BLSRs and 1+1 automatic protection switching (APS), as stated by Telcordia’s GR-1230. In SDH networks, the Cisco MSSP supports subnetwork connection protection (SNCP) rings, multiplex section shared protection ring (MS-SPRing), and SDH multiplex section protection (MSP) topologies as defined by International Telecommunication Union (ITU) recommendations. Additionally, the Cisco MSSP supports the PPMN. A PPMN topology allows for optical spans to be upgraded incrementally to higher bandwidth as traffic requirements dictate, rather than upgrading a complete UPSR span all at once with traditional topology designs. Leveraging the MSSP’s integrated DWDM capability keeps the number of discrete network elements small. DWDM is a critical requirement in the MAN as new lambda-based services become necessary to address the number of discrete service requirements of customers, while also extending the capacity and life of a provider’s metropolitan fiber plant. The MSSP also incorporates MSPP functions, which is necessary to perform the following tasks:
• Connect and switch TDM voice to Class 5 TDM voice switches
• Switch ATM cells to ATM switches
• Switch packets to IP routers
All of these devices are typically found in a provider’s service point of presence (POP). By including support for Gigabit Ethernet in the MSSP, the platform can perform MSPP functions at this service POP level, reducing or eliminating the need for a discrete MSPP platform in that portion of the provider’s network. This capability also strengthens integration between MSPP-to-MSSP-to-MSPP services, as MSPP edge traffic passes through the metro core, often destined for other edge MSPPs.

The lead Cisco product in the MSSP market is the ONS 15600 MSSP. The ONS 15600 is optimized for metro MSPP aggregation deployments and typically displaces established SONET ADMs and BBDXCs at service POPs. It also competes well against many of the next-generation optical cross-connects, which are more optimized for the long-haul core environment than the metro and also lack the SONET MSPP integration and long-reach optics capabilities required in the metro.

The heart of the ONS 15600 is a fully redundant 320 Gbps switch fabric with a three-stage pseudo-Clos architecture in a 25 x 23.6 x 23.6 inch shelf. Line card slots are architected for 160 Gbps access to the switch fabric, and current line card densities use 25 percent of that capacity at up to 40 Gbps per line card with less than 25 millisecond protection switching. The use of the Any Service Any Port (ASAP) line card allows the ONS 15600 to be very flexible in supporting SONET/SDH optical interfaces of OC-3/STM-1, OC-12/STM-4, and OC-48/STM-16, plus Gigabit Ethernet, including the use of multirate small form-factor pluggable (SFP) optics that can be software provisioned in service to change a selected port’s interface from OC-3/STM-1 to OC-12/STM-4, OC-48/STM-16, or Gigabit Ethernet. The 160 Gbps-per-slot architecture positions the ONS 15600 for upgrades to OC-768/STM-256 capabilities and integrates support beyond Gigabit Ethernet to 10 Gigabit Ethernet and DWDM interfaces. The ONS 15600 offers industry-leading port densities per line card, accommodating up to
• 128 OC-3/STM-1 ports (using an ASAP line card)
• 128 OC-12/STM-4 ports (using an ASAP line card)
• 128 OC-48/STM-16 ports (using an ASAP line card)
• 32 OC-192/STM-64 ports
• 128 Gigabit Ethernet ports (using an ASAP line card)
per ONS 15600, depending on the line card mixture.
Three ONS 15600 shelves can be mounted in a standard seven-foot rack, a typical de facto measure of port and switching capacity, allowing for up to 960 Gbps of switching fabric with up to 384 OC-48/STM-16 or up to 96 OC-192/STM-64 ports per rack. The ONS 15600 has a 20-year serviceability lifetime, extending the life of its components by derating their power consumption by 50 percent.

Figure 3-15 depicts the positioning of Cisco multiservice switching ATM and SONET/SDH platforms relative to optical capabilities and switching capacity shown earlier in Figure 3-4.
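The per-rack figures quoted above are straightforward multiples of a single shelf. The small Python check below reproduces them from the single-shelf numbers given in this section (three shelves per seven-foot rack, 320 Gbps of fabric, and up to 128 OC-48 or 32 OC-192 ports per shelf); it is a sanity check of the arithmetic, nothing more.

```python
SHELVES_PER_RACK = 3          # three ONS 15600 shelves fit a standard seven-foot rack
FABRIC_GBPS_PER_SHELF = 320
OC48_PER_SHELF = 128          # using ASAP line cards
OC192_PER_SHELF = 32

print("fabric per rack:", SHELVES_PER_RACK * FABRIC_GBPS_PER_SHELF, "Gbps")   # 960 Gbps
print("OC-48/STM-16 per rack:", SHELVES_PER_RACK * OC48_PER_SHELF)            # 384 ports
print("OC-192/STM-64 per rack:", SHELVES_PER_RACK * OC192_PER_SHELF)          # 96 ports
```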
Figure 3-15  Cisco Multiservice Platforms

(The figure positions the Cisco multiservice platforms by optical interface, from DS1/E1 and DS3/E3 through OC-3/STM-1, OC-12/STM-4, OC-48/STM-16, and OC-192/STM-64, against switching capacity from 1.2 Gbps to 960 Gbps: IGX 8400, MGX 8830, MGX 8250, BPX 8600, MGX 8850, and MGX 8950 on the ATM side, and ONS 15302, ONS 15305, ONS 15310, ONS 15327, ONS 15454, and ONS 15600 on the SONET/SDH side.)
Figure 3-16 shows the typical positioning of Cisco multiservice platforms within the MAN architecture.

Figure 3-16  Cisco Multiservice Platform Positioning
(The figure maps access technologies (Ethernet, optical, DSL, cable, DS1/DS3, OC-n/STM-n, and wireless) through the metro edge (ONS 15302, 15305, 15310, and 15327 MSPPs; the ONS 15454 MSPP/MSTP; MGX 8230/8250; MGX 8850; XR 12000/12000), the metro core (ONS 15600 MSSP, MSTP, CRS-1, XR 12000/12000, MGX 8950, BPX 8600), and the service POP (CRS-1, XR 12000, 12000, 10000, 7600, 6500, MGX 8850) toward long-haul/extended long-haul networks.)
Technology Brief—Multiservice Networks

This section provides a brief study on multiservice networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding multiservice networks.
• Technology at a Glance—Uses figures and tables to show multiservice networking fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment in order to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint

Multiservice networks are chiefly found in the domain of established service providers that are in the long-standing business of providing traditional voice, TDM leased lines, Frame Relay, ATM, and, more recently, IP communication-networking solutions. Multiservice networks provide more than one distinct communications service type over a common physical infrastructure. Multiservice implies not only the existence of distinct services within the network, but also the ability of a common network infrastructure to support all of these communication applications natively without compromising QoS for any of them.

The initial definition of a multiservice network was a converged ATM and Frame Relay network supporting these data services in addition to circuit-switched voice. Recently, next-generation multiservice networks have emerged, adding Ethernet, Layer 3 IP, VPNs, Internet, and MPLS services to the mix. These next-generation service provider multiservice networks are manifested in the form of technology enhancements to the networking fundamentals of ATM, SONET/SDH, and, since the late 1990s, IP/MPLS.

Characteristically, multiservice networks have a large local and/or long-distance voice constituency: a revenue base that is still projected to make up a large share of provider income in the near term. Protecting and enlarging this monetary base will require adept handling of new VoIP transport and service capabilities.
The growing trend in packet telephony adoption is one of the significant new revenue opportunities for service providers. It is important for two reasons. First, voice revenue is still projected to make up the primary revenue contribution to multiservice-based providers in the near term, and a voice portfolio that meets the value distinctions of the customer base is an absolute business fundamental for engaging and collecting on these revenue opportunities. Second, leading service providers are looking to provide managed voice services as a countermeasure to eroding transport revenues. As traditional circuit-switched voice services and equipment have matured, the resulting commoditization pressures margins into a downward price spiral, as evidenced by the continuous decline in cost per minute and the rise of flat-rate pricing for customary voice services. Service providers need a way to reestablish value in voice offerings, and customer-oriented, managed voice services based on packet telephony are that channel.

Even with the existence of next-generation technology architectures, most providers are not in a position to turn over their core technology in wholesale fashion. Provider technology is often on up-to-decade-long depreciation schedules, and functional life must often parallel this horizon, even if equipment is repurposed and repositioned in the network. Then there is the customer-facing issue of technology service support and migration. Though you might wish to quiesce a particular technology-based offering, the customer is not often in support of your timetable. This requires a deliberate technology migration supporting heritage services along with the latest feature demands of the market. Since providers cannot recklessly abandon their multiyear technology investments and installed customer service base, gradual migration to next-generation multiservice solutions becomes a key requirement. Next-generation technology evolution is often the result, allowing new networking innovations to overlap established network architectures, bridging and migrating precommitted service delivery to the latest growth markets.

From a global network perspective, the ascendancy of IP traffic has served ATM notice. According to IDC, sales of multiservice ATM-based switches were down 21 percent in 2002, 12 percent in 2003, and another 6 percent in 2004. Both Frame Relay (holding at about 20 percent) and ATM revenues are near plateau, forecasting only modest capacity-driven growth through 2007. Providers with ATM requirements are looking to add MPLS capabilities to their core infrastructures and to push IP features to the edge of the network. Responsible for the development of tag switching, the technology behind the MPLS IETF standard, Cisco Systems has an enviable leadership position in MPLS integration across both ATM and IP networking platforms.

The vast installed base of the Layer 1 SONET/SDH optical infrastructure must also be considered in any measured technology migration. The primary appeal of multiservice provisioning and switching platforms, known in the market as MSPPs and MSSPs, is to consolidate long-established SONET/SDH ADMs in the multiservice metro edge, core, and service POPs, while incorporating new Layer 3 IP capabilities with packet interfaces for Ethernet, Fast Ethernet, and Gigabit Ethernet opportunities. Many contain additional support for multiservice interfaces and DWDM.
Deployed as a springboard for the rapid provisioning of multiple services, the intrinsic value in these new-generation multiservice
provisioning platforms is to build a bridge from circuit-based transport to packet-based services. Also seen as an edge services platform with which to migrate Frame Relay and other established data services, MSPPs and MSSPs help providers to execute that strategy while maintaining established TDM services and leveraging SONET/SDH capabilities. Entering the market near the end of many legacy SONET/SDH ADM depreciation schedules, the MSPPs inherit a sizable portion of their justification from reduced power, space, and maintenance requirements. In doing so, MSPPs help with continued optimization of operating budgets while representing strategic capital investments for new, high-value IP service opportunity.

Multiservice providers are clearly building IP feature-based networks that have scale. Carriers are moving dramatically to embrace IP/MPLS networks, which combine the best features of Layer 3 routing with Layer 2 switching. MPLS provides the simplicity and feature-rich control of IP routing with the performance and throughput of ATM switching. MPLS allows one to restrict IP processing to the appropriate place—on the edges of the network. IP- and MPLS-based routers can operate at much higher speeds, more economically than can an ATM switch.

Layer 3 MPLS VPNs based on RFC 2547 are at the top of the requirements list for multiservice network providers. MPLS VPN offerings can help enterprise customers transfer complex routing responsibilities to the provider network. This allows providers to increase value for Layer 2 and Layer 3 IP-managed services. These network enhancements will start in-region, and then move to out-of-region when and wherever opportunity dictates. Where regional Bell operating company (RBOC) providers have Section 271 approvals to provide long-distance voice and data, IP/MPLS-based networks will afford the opportunity to compete nationally for data services against North American Inter-eXchange Carriers.

The new era of networking is based on increasing opportunity through service pull, rather than through technology push. Positioning networks to support multiple services, while operationally converging multiple streams of voice, video, and IP-integrated data, is the new direction of multiservice network architecture. In the face of competitive pressures and service substitution, not only are next-generation multiservice networks a fresh direction, they are an imperative passage through which to optimize strategic investment and expense.
Technology at a Glance

Figure 3-17 shows the typical positioning of Cisco multiservice platforms within the MAN architecture.
Figure 3-17  Cisco Multiservice Platforms

(The figure repeats the MAN positioning of Figure 3-16: access technologies (Ethernet, optical, DSL, cable, DS1/DS3, OC-n/STM-n, and wireless) feeding the metro edge (ONS 15302, 15305, 15310, and 15327 MSPPs; ONS 15454 MSPP/MSTP; MGX 8230/8250; MGX 8850; XR 12000/12000), the metro core (ONS 15600 MSSP, MSTP, CRS-1, XR 12000/12000, MGX 8950, BPX 8600), the service POP (CRS-1, XR 12000, 12000, 10000, 7600, 6500, MGX 8850), and long-haul/extended long-haul networks.)
Table 3-6 summarizes multiservice technologies.

Table 3-6  Multiservice Technologies

Key Standards
  ATM, IP+ATM: ATM UNI V3.1; ITU-T I.361, I.362, I.363; ITU-T I.555, I.356, I.432; ITU-T I.36x.1; ANSI T1.816; ANSI T1.408; ITU-T H.222; ITU-T Q.2100, Q.2110, Q.2130, Q.2931; ITU-T Q.931, Q.933; E1 G.703, G.704, G.804; RFC 1483; RFC 1695
  IP/MPLS: RFC 2547 (BGP/MPLS VPNs); RFC 2702 (requirements for traffic engineering over MPLS); RFC 3031 (MPLS architecture); RFC 3032 (MPLS label stack encoding); RFC 3034 (use of label switching on Frame Relay networks); RFC 3035 (MPLS using LDP and ATM VC switching); RFC 3036 (LDP specification); RFC 3037 (LDP applicability); RFC 3038 (VCID notification over ATM link for LDP)
  MSPP: NEBS Level 3; GR-1089-CORE; GR-63-CORE; ETSI EN300-386; SONET GR-253-CORE; SDH ITU-T G.707; GR-1400-CORE; G.781, G.782, G.783, G.811, G.812, G.813, G.823, G.825, G.826, G.829; IEEE 802.3, 802.1p, 802.1Q, 802.1D; GFP ITU-T G.7041; optical fiber ITU-T G.652/G.653; DWDM ITU-T G.692; OSMINE-certified TIRKS, NMA, transport (formerly TEMS); SNMP V1, V2, TL1
  MSSP: NEBS Level 3; GR-1089-CORE; GR-63-CORE; ETSI EN300-386; SONET GR-253-CORE; SDH ITU-T G.707; GR-1400-CORE; G.781, G.782, G.783, G.811, G.812, G.813; IEEE 802.3; OSMINE-certified TIRKS, NMA, transport (formerly TEMS); SNMP V1, V2, TL1

Processor Architecture Technology
  ATM, IP+ATM: MGX 8200/8800 PXM-1 shared-memory switching architecture at 1.2 Gbps; MGX 8850/8950 PXM-45 dual-processor, dual-core architecture, 45 Gbps, and 2.2 Gbps cell bus; MGX 8950 XM-60 60 Gbps nonblocking crosspoint switch fabric; RPM-PR 400 Kpps; RPM-XF 2.6 Mpps
  IP/MPLS: 7200/7300/7400/10000 shared memory with hardware assist; 6500/7600 crossbar to 720 Gbps; XR 12000/12000 CU11 IBM PowerPC with distributed crossbar, multifabric, multilink; CRS-1 dual PowerPC CPU complex per Route Processor, line cards with the Cisco Silicon Packet Processor ASIC (188 32-bit RISC CPUs), and a three-stage Benes switch fabric
  MSPP: ONS 15454; nonblocking XC and XCVT at VC4-Xc and VC12/3-Xc (future); XC10G; XC-VXL-10G and 2.5G
  MSSP: ONS 15600; core cross-connect CXC or SSXC; 320 Gbps fabric; multishelf up to 5-terabit scalability

Backplane Switching Speed Range
  ATM, IP+ATM (backplane switching): 1.2 Gbps to 180 Gbps; MGX 8200/8800 PXM-1 at 1.2 Gbps; MGX 8850/8950 PXM-45 at 45 Gbps with 2.2 Gbps cell bus; MGX 8950 XM-60 60 Gbps nonblocking crosspoint switch fabric, up to four XM-60s for 240 Gbps (180 Gbps redundant)
  IP/MPLS (backplane routing/switching): 7200/7300/7400/7500 up to 1 Mpps; 6500/7600 32 Gbps up to 720 Gbps and 15 to 30 Mpps; 10000 at 51.2 Gbps; XR 12000/12000 30 Gbps to 1.2 Tbps; CRS-1 single-shelf 640 Gbps/1.28 Tbps, multishelf up to 92 Tbps
  MSPP (backplane switching): 240 Gbps total (data plane 160 Gbps, SONET plane 80 Gbps); 10 DCC to 68 DCC; 288 STS-1 and 672 VT1.5 to 1,152 STS-1 and 672 VT1.5
  MSSP (backplane switching): 40 Gbps per slot x 8 slots; STS/VC-4 switching fabric at 320+ Gbps; 6,144 STS-1 to 2,048 OC-48 switching capacity

Interface Speed Support
  ATM, IP+ATM: T1/E1 (DS0/DS1); T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64
  IP/MPLS: T1/E1 (DS0/DS1); T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; OC-768/STM-256; Fast/Gigabit/10 Gigabit Ethernet
  MSPP: T1/E1; T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; E100T-12/E100-12-G; G1000-4/G1K-4; ML100T-12/ML1000-2; E1000-2/E1000-2-G; CE100T-8; FC-MR-4
  MSSP: Gigabit Ethernet; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; OC-768/STM-256

Key Capacities
  ATM, IP+ATM:
    MGX 8250: up to 192 T1/E1; 1,344 T1 channelized; 8 T3/E3; 8 OC-3/STM-1
    MGX 8850/8950: up to 192 T1/E1; 1,344 T1 channelized; 192 T3/E3; 192 OC-3/STM-1; 48 OC-12/STM-4; 12 OC-48/STM-16
    MGX 8950: up to 768 T3; 768 OC-3/STM-1; 192 OC-12/STM-4; 48 OC-48/STM-16; 12 OC-192/STM-64
  IP/MPLS:
    7200/7300: NPE-G100 up to 1 Mpps; supports interfaces from DS0 to OC-48/STM-16
    6500/7600: up to 30 Mpps and 720 Gbps switching with 3 to 13 line card slots
    10000: 8 x 3.2 Gbps per line card slot or 16 x 1.6 Gbps per line card slot
    XR 12000/12000: 4, 6, 10, and 16 line card slots at up to 40 Gbps each on the 128XX series
    CRS-1: single-shelf with 8 or 16 40 Gbps line card slots; multishelf, multifabric configuration up to 1,152 40 Gbps line card slots (1,152 OC-768/STM-256 POS; 4,608 OC-192/STM-64 POS/DPT; 9,216 10 Gigabit Ethernet; 18,432 OC-48/STM-16 POS/DPT)
  MSPP (ONS 15454): 140 x DS1; 252 x E1; 192 x DS3; 248 x E3; 48 x OC-3/STM-1; 16 x OC-12/STM-4; 12 x OC-48/STM-16; 6 x OC-192/STM-64; 144 x FastE; 48 x GE; 32-64 x FC/FICON; 10 Gbps MR-TXP; nonblocking VC-4 cross-connect capacity (line/line, trib/trib, line/trib); uni- and bidirectional cross-connect; HO cross-connect size 384 x 384 VC-4; up to 5 rings supported per system (4 SNCP and 1 MS-SPRing, or 5 SNCP)
  MSSP (ONS 15600): 3,072 STS-1 bidirectional cross-connects; 128 OC-3/STM-1; 128 OC-12/STM-4; 128 OC-48/STM-16; 32 OC-192/STM-64; 64 UPSR/SNCP; 32 two-fiber BLSR/MS-SPRing; 64 1+1 APS/MSP uni- or bidirectional; PPMN; any combination of UPSR/SNCP, BLSR/MS-SPRing, and 1+1 APS/MSP can be mixed within the allowable maximums

Bandwidth Range
  ATM, IP+ATM: narrowband to broadband to 10 Gbps
  IP/MPLS: narrowband to broadband to 40 Gbps
  MSPP: narrowband to broadband to 10 Gbps
  MSSP: broadband switching to 40 Gbps

Service Provider Applications
  ATM, IP+ATM: ATM; Frame Relay; voice adaptation transport; broadband aggregation; DSL aggregation; MSC for WCDMA; high-density broadband ATM aggregation; multiservice bandwidth aggregation; distributed content storage; IP VPN; broadband access; wireless switched voice; wireless trunking; Class 4 replacement
  IP/MPLS: ATM/Frame Relay convergence; Metro Ethernet; ETTX aggregation; IP/MPLS core (long haul and regional); peering; optical private line aggregation (OC-48 to DS0); ATM/Frame Relay transport services (over an IP/MPLS core)
  MSPP: digital cross-connect; linear add/drop multiplexer; terminal mode; private line; four-fiber BLSR; two-fiber UPSR/SNCP/BLSR; regenerator; star/hub
  MSSP: multiring (mixed UPSR/SNCP, BLSR/MS-SPRing, and 1+1 APS/MSP); PPMN; two-fiber MS-SPRing; four-fiber MS-SPRing; multiring interconnection; extended SNCP; virtual rings; hybrid SDH network topology; linear ADM; mesh; regenerator mode; wavelength multiplexer

Provider and Customer Applicability
  ATM, IP+ATM: voice networking; private line aggregation; voice switch interface; colocation digital subscriber line access multiplexer (DSLAM) and voice aggregator and transport system; cable TV (CATV) transport backbone network; wireless cell site traffic aggregator; high-speed ATM/router link extender
  IP/MPLS: L2 Ethernet switching to 10 Gigabit Ethernet; L2 MPLS (EoMPLS); L3 MPLS; MPLS core and edge services; Metro Ethernet; private line aggregation; WAN aggregation; IP/VPN; campus MAN; high-speed WAN; LAN-to-LAN VPN; storage area networks; disaster recovery; Internet access
  MSPP: storage area networks; metropolitan video transport, data, and voice optical backbone networks; private line; Ethernet subscriber aggregation; TLS platform; L2 MPLS (EoMPLS); L3 MPLS; MPLS core and edge services; campus and university backbone network; WAN aggregation; business transport network; distributed bandwidth manager; disaster recovery; Internet access
  MSSP: SONET/SDH ADM and BBDXC replacement, aggregation, and TDM switching; exchange/central office colocation and interface to LH optical core networks; metro core and service POP switching; MSPP metro ring aggregation; circuit-to-packet transition
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance

Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale from low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer’s value distinction. The following charts list typical customer business drivers for the subject classification of the network. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider’s critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers. Figure 3-18 charts the business drivers for multiservice networks.

Figure 3-18  Multiservice Networks
High Value
Critical Success Factors
Technology
Invest Strategically-Maximize CapEx Minimize Operational Expense
Market Leadership
Migrate Layer 2 Revenue to Next Generation Layer 2, Layer 3 Services Increase Customer ARPU Convergence of ATM, Frame Relay and IP Network Infrastructure Core Requirements for Reliability, Performance and Security
Market Value Transition
Service and Technology Flexibility – Rapid Provisioning – Cisco IOS Leverage –
Frame Relay
Ethernet IP MPLS MGCP
High-Value IP Service Demand at the Edge
Competitive Maturity
SONET/ SDH
MPLS Adaptation
Packet Telephony Services Low Cost
Optical
Worldwide Broadband Growth
Growth in Frame Relay, ATM
Out Tasking of Network Services Business Drivers
Industry Players
BPX 8620 BPX 8650
TDM
ATM
Ethernet to the Internet
Market Share
Cisco IOS
Cisco Product Lineup
PNNI
MGX 8250 MGX 8850 MGX 8950 MGX 8830 MGX 8230/8220 IGX 8400, LS1010 C8500 MSR ONS 15302 ONS 15305 ONS 15310 ONS 15327 ONS 15454 ONS 15600
Applications Service Value Voice Services
Next-Generation IP/MPLS Services Enhanced Service Offerings Portfolio
Packet Telephony Services IP VPNS L2 MPLS L3 MPLS Metro Ethernet
Managed Packet Telephony Services Managed High-Value IP Services SLA Guaranteed Service Offerings Carrier-Class LAN/WAN/MAN Services
Cisco Key Differentiators
Industry Leading IP and MPLS – Service Density DSL, Cable, – Carrier-Class High-Availability Features – Wireless Traffic Aggregation MPLS Core and Edge Solution Internet Access
7200/7300 IP Services 7400/7500 6500/7600 10000 Series Business Continuity and Remote XR Storage 12000/12000 Series Video on Demand Cisco CRS-1
MPLS Traffic Engineering Metro IP Solutions Cisco Voice Infrastructure and Applications Solution Cisco Business Voice Solution Metro Ethernet Switching Solution Mobile Switching Center for WCDMA Solution Delivery
Service Providers - Verizon-BellSouth-SBC-Qwest-Sprint-AT&T-MCI-Infonet-Level 3 Equipment Manufacturers - Nortel-Alcatel-Lucent-Cisco Systems-Marconi-Ericsson-WaveSmith-Vivace-AFC TelliantEquipe-Laurel-net.com-Juniper
End Notes

1. IDC. Worldwide ATM Switch 2005–2009 Forecast. Study #33066, March 2005.
References Used in This Chapter

Pignataro, Carlos, Ross Kazemi, and Bil Dry. Cisco Multiservice Switching Networks. Cisco Press, 2002.

Yankee Group Report. "Multiservice WAN Switch Market at a Crossroads." April 11, 2003.

Cisco Systems, Inc. "Defining the Multiservice Switching Platform." http://www.cisco.com/en/US/partner/products/hw/optical/ps4533/products_white_paper09186a00800dea5e.shtml (Must be a registered Cisco.com user.)

Finch, Paul. "Introducing the Cisco ATM Advanced Multiservice Portfolio." http://www.cisco.com/networkers/nw03/presos/docs/PRD-8059.pdf

Cisco Systems, Inc. "Requirements for Next-Generation Core Routing Systems." http://www.cisco.com/en/US/partner/products/ps5763/products_white_paper09186a008022da42.shtml
This chapter covers the following topics:

• Frame Relay/ATM VPNs: Where We've Been
• IP VPNs: Where We're Going
• IP Security (IPSec)
• Access VPNs
• Intranet VPNs
• Extranet VPNs
• Multiservice VPN over IPSec
• VPNs: Build or Buy?
CHAPTER 4

Virtual Private Networks

Virtual private networks (VPNs) are logically partitioned, private data networks deployed over a shared or public network infrastructure. VPNs are implemented with a wide range of technologies, and can be self-implemented or managed by a service provider. VPNs allow end customers to realize the cost advantages of a shared network, while enjoying exceptional security, quality of service (QoS), extensibility, reliability, and manageability—just as they do in their own private networks. VPN solutions can apply to several network layers of the OSI protocol stack: Layer 3, Layer 2, and potentially Layer 1 using IP over Optical. Conventional Layer 2 VPNs are deployed on Frame Relay and Asynchronous Transfer Mode (ATM) infrastructures, while contemporary Layer 2 and Layer 3 VPNs are built on an IP network backbone.

For providers, VPNs are a service foundation. Providers can build or enhance their networks to offer any or all VPN types—from access VPNs, to intranet and extranet VPNs. Conventional Layer 2 VPNs can be migrated from Frame Relay and ATM delivery to contemporary Layer 2 and Layer 3 IP VPNs. Existing VPN services can be enhanced, while new VPN services are fashioned to exploit the service pull of IP networks. From access to extranet, from local to international, and from wired to wireless, providers are building on their VPN foundations, crafting new types of VPN offerings with which to engage their customers. The service foundation of today's VPNs not only augments the architecture of a provider's VPN framework but also provides a strategic market position through which to harvest new revenues.
Frame Relay/ATM VPNs: Where We've Been

Frame Relay emerged in the early 1990s as a major data service, bringing the concept and terminology of VPNs to the forefront. Until then, enterprises built their wide area data networks using private leased lines, deployed most often in hub-and-spoke fashion. The enterprise customer effectively owned this leased bandwidth, having exclusive rights to its use or nonuse. Frame Relay introduced a way to create logical leased lines for data transmission, carrying and sharing this traffic over a provider's public, physical network infrastructure. By leveraging core bandwidth oversubscription across multiple customers, providers could introduce better bandwidth pricing, increasing volumes and revenues in the process.
Frame Relay was designed to address a specific need. As local IP networks spread into the wide area, the onslaught of IP, IPX, and SNA-over-IP traffic growth in enterprises presented a new data traffic profile—a bursty traffic profile that required instantaneous bandwidth headroom as needed to serve new application demands. This bursty traffic didn't fit well within the cost model of private leased lines, leaving much of the bandwidth unused for a large percentage of the time; yet it demanded the full circuit bandwidth, often at the single click of a PC mouse.

Frame Relay introduced the concept of creating bandwidth with a guaranteed or committed data rate, coupled with the ability to burst immediate traffic requirements above the guaranteed rate. The probability of successfully exceeding the guaranteed data rate was generally better than 95 percent, and any traffic that was discarded would simply be recognized and retransmitted by the TCP/IP applications originating the data. This was acceptable to customers given the price advantages of such a service. This type of flexible bandwidth solution would better serve the bursty nature of IP-based traffic; and when leveraged logically onto shared provider facilities, it presented price/performance benefits over the heretofore private leased line model. Partitioning customers across the shared network infrastructure was managed by the Frame Relay service provider, creating logical customer networks—in effect, data VPNs using Frame Relay at Layer 2.

Later, ATM networks offering Layer 2, ATM-based VPNs would scale the bandwidth capabilities of VPNs beyond OC-3/STM-1 data rates for large enterprises, while offering the flexibility of converging both voice and data needs across the fabric of the provider's geographic ATM network. ATM networks then became the primary core backbones for Frame Relay networks, integrating the two technologies by employing Frame Relay to ATM interworking specifications.

Most businesses require partnering and, thus, business-to-business (B2B) communications are often necessary. These are accomplished by allowing network connections to partners, effectively extending the company's network into an extranet. Despite their benefits, Layer 2 Frame Relay and ATM don't lend themselves to an open extranet model, limiting opportunities to integrate external partners and supply chain associates into wide area network (WAN) applications. This is primarily because Frame Relay and ATM have higher costs for equipment and recurring services. Also, the increased mobility of teleworkers demands continuous remote access to business networks but at prices much less than Frame Relay or ATM can deliver. A Layer 3 or IP VPN best fits this requirement, as it can leverage publicly available facilities such as the Internet to establish connectivity between remote users and their company's private network. The minimum entry protocol for low-cost Internet access options is IP at Layer 3, hence the demand for Layer 3 VPN services, especially as remote-access users proliferate and spend more time online—remotely.

Customers with particular Layer 2 application requirements, including enterprises that desire to maintain control over their IP routing, will continue to prefer Layer 2 VPNs, many of which can still be serviced through Frame Relay and ATM network offerings. For the
foreseeable future, Frame Relay access will continue a slow, measured growth to meet these types of capacity requirements. The IP services and Internet “gold rush” left many providers with yet another network overlay, dividing new customer services from the old. The past disconnect between legacy Layer 2 and new Layer 3 VPNs has caused providers to create separate, purpose-built networks, increasing complexity and cost through additional network layers that each must be provisioned, operated, and maintained. Now, new capabilities allow providers to create common network infrastructures over which both Layer 3 and Layer 2 VPN services can be effectively delivered. By unifying multiple network layers, software services, and management platforms, service providers can reach a broader customer set while leveraging the capabilities of IP and the Internet, making VPNs truly global in reach.
IP VPNs: Where We're Going

IP-based VPNs enable enterprises to take advantage of the flexibility and scalability of both the Internet and service provider IP networks to create any-to-any WAN communications for geographically dispersed sites. Using a common transport service, Internet access, LAN-to-LAN service, and client/server applications can be simultaneously delivered.

The demand for and initial self-deployment of IP VPN solutions by large enterprises has, in part, awakened service providers to the realization that they must transform their Layer 2 core infrastructures more rapidly to Layer 3 IP-based capabilities, in order to capitalize on emerging IP services. IP VPNs require publicly addressable IP routing across shared network infrastructures. When providers don't offer Layer 3 facilities, their traditional Layer 2 infrastructures are easily bypassed. Internet service providers (ISPs) generally fit this need, and many established service providers began ISP business units not only to profit from the Internet rush but also to legitimize long-term IP services for the purpose of staying in the IP VPN market. The catalyst for this interest is the projection of the U.S. IP VPN services market to exceed $20 billion by the year 2009.1

Service providers have a considerable opportunity to capitalize on VPNs. The reason for this is that IP VPNs carry service pull. First, these are IP services with built-in, world-aware intelligence and service adaptability. Second, VPNs allow customers to optimize WAN expense, converge voice and data, and position for advanced IP services through provider assistance and out-tasking. The networking convergence of voice, data, Internet, and virtual access services can make VPNs a compelling vehicle for keeping everyone in touch. Businesses of all sizes can bypass the distractions of in-house internetworking services design, deployment, and management, better focusing on core processes that boost innovation and customer service.

The Internet has helped fuel the growth of VPNs, allowing businesses to enhance and extend their network boundaries and services further than previously possible. Taking
advantage of secure VPN technology, the Internet becomes a pervasive transport medium for remote access and global workers, and easily extends intranets into partner networks for extranet process integration. Service providers can participate in this IP VPN market with regional, national, and international IP networks.

IP VPNs are the answer to international connectivity. The broad reach of the Internet, combined with service provider IP infrastructures, lowers the cost of linking dispersed employees, company offices, suppliers, and customers worldwide. Companies can now afford to take their WANs to international markets with global reach. IP VPNs are how they'll be pursuing these opportunities.

Providers of traditional leased lines and data transport services hope to avoid being their own cannibals: in the short term, IP VPNs, and in the long term, Ethernet. Transforming and adding IP VPN services will leverage the power of IP across their investment in communication plants and equipment. IP services create high-bandwidth demands leading to requirements for high-speed Ethernet links.

The goal of IP VPNs is to provide IP connectivity over a shared IP infrastructure while maintaining the security and service features of a dedicated private network. In order to extend the capabilities of private networks, VPNs require the following essential attributes:
• QoS—Quality of service allows the prioritization of voice, data, and video applications traveling across networks. (A small traffic-marking sketch follows this list.)
• Security—Security technology such as IP Security (IPSec) provides the critical privacy for network traffic moving across public networks both in the core and network edge.
• High availability—Carrier networks contain inherent equipment and core link redundancy, broadband backbones, access links, high availability features, and 24x7 management to increase network availability.
• Scalability—Access to a variety of broadband network connection types such as private line, Point-to-Point Protocol (PPP), Frame Relay, ATM, DSL, cable modem, and Ethernet decreases provisioning times and enhances speed of access.
• Ease of management—Today's providers have more network management data points and IP visibility through which to monitor and report on data traversing their networks.
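To make the QoS attribute concrete, the following minimal Python sketch marks a socket's traffic with a DiffServ code point so that QoS-aware devices along a VPN path can prioritize it. The DSCP value, the documentation address, and the port are illustrative assumptions, and the marking takes effect only on platforms that expose and honor the IP_TOS socket option.

```python
import socket

# A minimal sketch: mark a UDP socket's packets with DSCP EF (46), the
# per-hop behavior commonly used for voice, so QoS-aware devices can
# prioritize it. The DSCP value occupies the upper six bits of the IP
# TOS/DiffServ byte, hence the shift by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Datagrams sent from this socket now carry the EF marking (assuming the
# host operating system and the network honor the setting).
sock.sendto(b"voice sample", ("192.0.2.10", 5004))
```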
VPNs based on IP protocols have become the most pervasive. The availability of the global Internet accelerated the VPN market as a natural outgrowth of company Internet access connections. Three classes of IP VPNs are most prevalent:
• Access VPNs—Access VPNs primarily target the remote accessibility requirements of mobile professionals, teleworkers, and workday extenders. Access VPNs deliver work to the worker, wherever they are. Access VPNs use IPSec, Secure Socket Layer (SSL), and other technologies, some of which can be leveraged across the Internet or over a service provider's shared IP infrastructure to create secure hooks back into the corporate network for private communications.
• Intranet VPNs—For intranet VPNs, IPSec site-to-site VPNs have been the norm, because they are cost-effective network extensions for expanding businesses and enterprises. Site-to-site VPNs use VPN equipment to connect two company locations by establishing a virtual point-to-point network connection over the Internet or provider network through each location's Internet access link. Large enterprises, education, and governmental organizations have been the largest adopters to date. Multiprotocol Label Switching (MPLS) VPNs are also in the intranet VPN market space. MPLS VPNs were originally intended for service providers and carriers, giving providers the capability to provide and manage customer IP routing within their own logical network instance. This can expand data and IP revenue opportunities for the provider by carrying secure VPNs on converged network infrastructures that save network and operational expenses for providers, while creating lower networking costs for customers. Large enterprises are also using or considering MPLS VPNs to meet challenges in their growing networks. Additionally, technology such as Layer 2 Tunneling Protocol version 3 (L2TPv3) has the interest of providers wishing to deploy RFC 2547-like VPNs over an L2TPv3 infrastructure.
• Extranet VPNs—Extranet VPNs are usually extensions of intranet VPNs or access VPNs. Today, extranet VPNs are largely built on lower-cost, Internet broadband access technology linking noncompany partners, suppliers, and customers together in secure private communications. As a result, extranet VPNs streamline interbusiness processes and improve time to market.
The IPSec IETF standard is frequently an enabling technology for secure VPNs and is discussed next. The remainder of this chapter describes the three general classifications of VPN—access, intranet, and extranet VPNs—in greater detail and then introduces a few considerations for determining whether you should build or buy VPN services.
IP Security (IPSec)

IPSec secures Layer 3 IP communications. The base IPSec standard (RFC 2401) and related standards (RFCs 2402 through 2412 and 2451) combine a set of protocols and technologies such as Authentication Header (AH), Encapsulating Security Payload (ESP), Internet Key Exchange (IKE), Data Encryption Standard (DES), Advanced Encryption Standard (AES), and others into a complete system that provides confidentiality and authenticity of IP data. The IPSec standard applies to both IPv4 and IPv6 environments. As an open standard, IPSec ensures interoperability between different manufacturers' devices and represents a fundamental building block for many types of VPN architectures.

Although IPSec is generally deployed for WAN extension over publicly shared facilities, the technology might also be used to encrypt and secure communications within a LAN, a campus, or even a private point-to-point intranet. For example, many state governments share their WAN topologies with state law enforcement and might choose to encrypt
data-sensitive applications used by police, sheriff, fire, and investigative bureaus. IPSec can provide this point-to-point confidentiality within an organization’s private WAN. According to the IETF RFC 2401, “Security Architecture for the Internet Protocol,” IPSec is designed to provide interoperable, high-quality, cryptographically-based security for IPv4 and IPv6. The set of security services offered include access control, connectionless integrity, data origin authentication, protection against replays, confidentiality (encryption), and limited traffic flow confidentiality. IPSec data integrity protocols, forwarding modes, and security options are discussed next.
IPSec Protocols for Data Integrity IPSec accomplishes IP traffic security by adding IPSec headers to original IP datagrams. These new IPSec headers, such as AH and ESP, can be used separately or combined together depending on the desired degree of security requirements. Essentially, the headers are added selectively to an original IP packet for the purpose of authenticating the packet as a trusted packet or encrypting the packet for ultimate data protection, or both. Security associations (SAs) are an important part of the IPSec process as they define a level of trust between two devices in an IPSec peer-to-peer relationship. Through SAs, end devices agree on the security policies that will be used and identify the SA by an IP address, a security protocol identifier, and a unique security parameter index value. There are two types of SAs. A key exchange SA is formed first to authenticate the peers, exchange keys, and manage the keys afterward. Once this SA is formed, the IPSec SAs (one per traffic direction) are negotiated and formed, each agreeing on an authentication method, a hashing algorithm, and an encryption algorithm.
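A security association is easiest to picture as a small record of the negotiated parameters just described. The following minimal Python sketch models one; the field names and values are illustrative and are not drawn from any particular IPSec implementation.

```python
from dataclasses import dataclass

# A minimal, illustrative model of an IPSec security association (SA).
@dataclass(frozen=True)
class SecurityAssociation:
    peer_address: str          # IP address of the IPSec peer
    protocol: str              # security protocol identifier: "ESP" or "AH"
    spi: int                   # unique security parameter index value
    auth_method: str           # e.g., preshared key or RSA certificates
    hash_algorithm: str        # e.g., "SHA-1" or "MD5"
    encryption_algorithm: str  # e.g., "3DES" or "AES" (ESP only)

# One IPSec SA is negotiated per traffic direction once the key exchange
# (IKE) SA is in place.
outbound = SecurityAssociation("203.0.113.1", "ESP", 0x1A2B3C4D,
                               "preshared key", "SHA-1", "AES")
inbound = SecurityAssociation("203.0.113.1", "ESP", 0x5E6F7A8B,
                              "preshared key", "SHA-1", "AES")
print(hex(outbound.spi), hex(inbound.spi))
```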
Authentication Header (AH)

The AH uses a keyed-hash function, implemented in hardware application-specific integrated circuits (ASICs) for speed, to apply integrity and authenticity functions to the transmitted data. AH authenticates an origin host with a destination host through the establishment of a key authentication exchange. A variety of complex authentication key methods and options are available with which to establish IPSec communications. Some of these are listed here:
• IKE based on ISAKMP/OAKLEY—IKE is a hybrid key exchange protocol that uses parts of the Oakley protocol and of another protocol called SKEME within the Internet Security Association and Key Management Protocol (ISAKMP) framework. Keys are preshared either manually or via a certificate authority, and the key exchange and validation are performed by IKE. Peers validate each other based on the IKE process and form an IKE security association. This happens before any IPSec SAs are negotiated and before traffic can pass over the established link.
• Diffie-Hellman 1, 2, and 5—Diffie-Hellman is a key agreement protocol for deriving a shared secret key between two IPSec parties. It is a method for secure exchange of keys that are subsequently used for the data encryption process. Diffie-Hellman is the basic mechanism of the Oakley key exchange protocol used in the IKE process. There is an extended version called authenticated Diffie-Hellman or Station-to-Station (STS) protocol, which allows two parties to authenticate themselves to each other through the use of digital signatures and public key certificates. This mitigates the "man-in-the-middle" attack exposure of the original Diffie-Hellman protocol. (A minimal sketch of the basic exchange follows this list.)
• Preshared key—A preshared key is manually configured on the device at each end that will create an IPSec SA. The preshared key is used as a seed to generate a secret key. This is the simplest form of creating the public key and is used with each party's private key for deriving the shared secret keys for the IPSec SAs. Preshared keys have exposure to key intercept by attackers and are more difficult to scale in large IPSec implementations.
• RSA digital certificates—These are digital documents that bind a public key to a particular individual or other entity. These are third-party certifications to validate a user as the original certificate issuer and deny any key exchange that appears to be an impersonation using a phony key. Digital certificates are often used to help scale IPSec implementations much more easily than through the use of preshared keys.
• Perfect Forward Secrecy (PFS) rekeying—The use of PFS allows more security in the event that a secret key was broken by an attacker. It separates the IKE-derived secret key from the process used to create the keys for the IPSec SAs. That is, the IKE key for the IKE SA can be broken, but this will not help reveal the secret keys that make up the IPSec SAs in either traffic direction. The rekeying option allows for this key association to change on a very frequent interval—essentially keys are changing all the time while keeping the sessions alive.
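The Diffie-Hellman exchange referenced in the list above can be sketched in a few lines of Python. The modulus below is deliberately tiny so the arithmetic is readable; IKE's DH groups 1, 2, and 5 use 768-, 1024-, and 1536-bit MODP primes, respectively.

```python
import secrets

# A minimal sketch of a Diffie-Hellman key agreement. The modulus is a toy
# value, NOT a real IKE group; it only illustrates the math.
P = 0xFFFFFFFB  # a small prime (2**32 - 5)
G = 5           # generator

# Each peer picks a private value and sends only g^x mod p to the other.
a_priv = secrets.randbelow(P - 2) + 1
b_priv = secrets.randbelow(P - 2) + 1
a_pub = pow(G, a_priv, P)
b_pub = pow(G, b_priv, P)

# Both sides arrive at the same shared secret without ever transmitting it;
# IKE then derives the IPSec SA keys from material like this.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
print(hex(shared_a))
```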
Establishing a key exchange between two IPSec peer devices creates both an IKE SA and a pair of IPSec SAs, typically one for each traffic direction. This provides a secure transmission framework that gives an IP packet data integrity, because it is validated by the resident AH as a true, originating IP packet from the source IP host. Although the AH doesn't make the IP packet's data payload undecipherable, it does create a sort of tamper-evident seal so that you can ensure the originality and authenticity of the transmitted data. For IPSec to maintain data integrity as it crosses public networks, the AH uses hash methods such as Message Digest 5 (MD5) from RSA Data Security or the Secure Hash Algorithm 1 (SHA-1) as defined by the U.S. government. These keyed hashes are computed over the packet, including the IP header fields that don't change in transit, to produce an integrity check value that travels with the packet. The destination recomputes the hash over the received packet and compares the result with the transmitted value; a match proves that the header and payload arrived unaltered. (AH does not conceal the IP header or payload; it proves their integrity and origin.) The extra processing of these security algorithms, necessary for every packet, is normally hardware accelerated to increase IPSec performance.
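To make the keyed-hash check concrete, the following minimal Python sketch computes and verifies an HMAC over a packet using only the standard library. The key and packet bytes are placeholders; in IPSec the key is derived during IKE and the hash output is truncated according to the AH transform in use.

```python
import hashlib
import hmac

# A minimal sketch of an AH-style integrity check: the sender computes a
# keyed hash over the packet, and the receiver recomputes and compares it.
# The packet stays visible; the HMAC only proves it was not altered.
shared_key = b"derived-from-ike-not-hardcoded-in-practice"
packet = b"IP header + payload bytes"

icv = hmac.new(shared_key, packet, hashlib.sha1).digest()  # sent with the packet

# Receiver side: recompute and compare in constant time.
valid = hmac.compare_digest(
    icv, hmac.new(shared_key, packet, hashlib.sha1).digest())
print("integrity verified" if valid else "packet was tampered with")
```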
Encapsulating Security Payload (ESP) If absolute confidentiality of the IP packet’s data payload is required, then data encryption is necessary. In this case, an ESP header and encryption algorithms such as DES or Triple DES (3DES) are added for this level of data fortification. As a result, ESP completely encapsulates user data. The Data Encryption Algorithm (DEA), more commonly referred to as the Data Encryption Standard (DES)—specifically the 168-bit version known as 3DES—is the most commonly used encryption algorithm. Blowfish is another example of a data encryption algorithm. Due to stronger encryption (128 bit, 192 bit, and 256 bit) and faster performance, the newer AES, introduced to the market in November of 2002, is gaining popularity and deployment. ESP can be used in combination with AH, but ESP contains the same data origin authentication and antireplay mechanisms that are present in AH. As such, ESP can use the same key exchange techniques used for AH. This allows ESP to be solely used for IPSec traffic when robust data confidentiality is desired. An example of when to use both the AH and ESP headers is when you need the strongest confidentiality (ESP) and the strongest authentication (AH), because AH will additionally protect the new IP header field, while ESP doesn’t.
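As an illustration of the confidentiality ESP adds, the following minimal Python sketch encrypts and then recovers a packet payload. It assumes the third-party cryptography package is installed and uses AES-GCM purely for brevity, because a single call provides both encryption and authentication; classic ESP transforms pair a cipher such as DES, 3DES, or AES with a separate keyed hash, as described above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

# A minimal sketch of ESP-style protection under the stated assumptions.
key = AESGCM.generate_key(bit_length=256)  # 256-bit AES, one of the stronger options
aesgcm = AESGCM(key)

original_packet = b"original IP header + TCP segment"
nonce = os.urandom(12)

# Encrypt and authenticate in one operation; the payload is now
# undecipherable in transit.
ciphertext = aesgcm.encrypt(nonce, original_packet, None)

# The receiving peer, holding the same key, recovers and verifies the data.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == original_packet
```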
IPSec Data-Forwarding Modes IPSec employs two methods of forwarding data across a network for both the AH and ESP protocols:
• Tunnel mode
• Transport mode
Tunnel mode and transport mode are in actuality two different types of SAs. An SA is defined as a simplex connection that applies security services to the traffic carried within the SA. The tunnel mode SA is most often used for securing many hosts to many hosts on each end of an IPSec tunnel mode SA connection, while the transport mode SA is used for securing one IP host to another IP host over an IPSec transport mode SA connection, or when network services such as QoS must be preserved in the original IP header.
Tunnel Mode Both AH and ESP individually operate in tunnel mode. A tunnel provides a specific pathway across a publicly shared WAN through which a number of hosts on either end of the tunnel can communicate. Tunnels are logical endpoints, much like virtual circuits, configured on physical interfaces through which traffic is carried. IPSec can be used between a pair of workstations, a pair of routers, and between firewalls. IPSec Tunnel Mode can completely encapsulate and protect the contents of an entire IP packet including the original IP header. Tunnel mode is generally used for IP unicast-based
traffic. If there is a requirement to apply IPSec to multicast applications, non-IP traffic, or routing protocols that use multicast addressing, then the additional use of a Generic Route Encapsulation (GRE) header is needed. With IPSec and GRE working together in tunnel mode, support is available for multicast applications; routing protocols such as Open Shortest Path First (OSPF), Routing Information Protocol) (RIP), Enhanced Interior Gateway Routing Protocol (EIGRP); and transport of non-IP traffic, such as IPX or AppleTalk within an IPSec environment. It is important to understand that IPSec tunnel mode will add a new, 20-byte outer IP header to each packet. If this packet expansion is a concern, then IPSec Transport Mode can support adding the IPSec header after the original IP packet header to keep packet lengths within desired parameters. Examples of the IPSec header additions for tunnel mode are shown in Figure 4-1 (IPSec Tunnel Mode AH) and Figure 4-2 (IPSec Tunnel Mode ESP). Note that these headers are placed after the outer IP header of an IP datagram, where they are examined by the ingress and egress tunnel endpoints. The new IP header field contains the IP addresses of the IPSec tunnel endpoint devices. Figure 4-1
Application of IPSec AH Header to IP Datagrams in Tunnel Mode
[Original IP datagram: IP header, data. IPSec tunnel mode: new IP header, AH header, original IP header, data; the resulting packet is authenticated.]
Figure 4-2 Application of IPSec ESP to IP Datagrams in Tunnel Mode
[Original IP datagram: IP header, data. IPSec tunnel mode: new IP header, ESP header, original IP header, data, ESP trailer, ESP authentication field; the original IP header, data, and ESP trailer are encrypted, and the ESP header through the ESP trailer is authenticated.]
Transport Mode Both AH and ESP can individually operate in transport mode. Transport mode for either protocol encapsulates the upper-layer payload, above the IP layer. These are typical Layer 4 and higher payloads such as Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Border Gateway Protocol (BGP), and so on. This leaves the original Layer 3 IP header intact, because it might be needed for certain network services, such as applications that need to use QoS classifications. (An encrypted original IP header can’t be used for QoS applications.) AH Transport Mode would be used for applications that need to maintain the original IP header and just need to authenticate the data integrity of packets. ESP Transport Mode would be used for applications that need to maintain the original IP header but also want to encrypt the remainder of the packet payload. Figures 4-3 and 4-4 show both AH and ESP Transport Mode packet field layouts. Figure 4-3
IPSec Transport Mode Using AH
[Original packet: original IP header, TCP, data. Transport mode: original IP header, AH header, TCP, data; the packet is authenticated.]
Source: Cisco Systems, Inc.
Figure 4-4 IPSec Transport Mode Using ESP
[Original packet: original IP header, TCP, data. Transport mode: original IP header, ESP header, TCP, data, ESP trailer, ESP authentication field; the TCP segment, data, and ESP trailer are encrypted, and the ESP header through the ESP trailer is authenticated.]
Source: Cisco Systems, Inc.
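The header orderings shown in Figures 4-1 through 4-4 can be summarized in a few lines of illustrative Python. The sketch below builds nothing wire-accurate; the byte strings simply stand in for real headers to show what each ESP mode wraps and what it leaves in the clear.

```python
# A minimal sketch of ESP tunnel versus transport mode field ordering.
def esp_tunnel_mode(orig_ip_header: bytes, payload: bytes) -> list:
    # Tunnel mode adds a new outer IP header (roughly 20 bytes per packet)
    # and encrypts the entire original packet, including its IP header.
    new_ip_header = b"[new IP header: tunnel endpoints]"
    encrypted = b"<enc>" + orig_ip_header + payload + b"[ESP trailer]" + b"</enc>"
    return [new_ip_header, b"[ESP header]", encrypted, b"[ESP auth]"]

def esp_transport_mode(orig_ip_header: bytes, payload: bytes) -> list:
    # Transport mode leaves the original IP header in the clear, so services
    # such as QoS classification can still read it.
    encrypted = b"<enc>" + payload + b"[ESP trailer]" + b"</enc>"
    return [orig_ip_header, b"[ESP header]", encrypted, b"[ESP auth]"]

pkt_hdr, pkt_data = b"[orig IP header]", b"[TCP + data]"
print(esp_tunnel_mode(pkt_hdr, pkt_data))
print(esp_transport_mode(pkt_hdr, pkt_data))
```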
Summarizing IPSec Technologies

Table 4-1 summarizes many of the IPSec technologies and features.
Table 4-1 Summary of IPSec Technologies

IPSec IP protocol: Encapsulating Security Payload (ESP) header, IP protocol 50; Authentication Header (AH), IP protocol 51.
Traffic security capabilities: ESP provides confidentiality (encryption), connectionless integrity, data origin authentication, and optional antireplay service; AH provides connectionless integrity, data origin authentication, and optional antireplay service.
Data authentication/integrity: MD5 or SHA-1 for both ESP and AH.
Data encryption: RC4, Blowfish, DES, 3DES, or AES for ESP; none for AH.
Authentication key negotiation and management: IKE based on ISAKMP/OAKLEY, RSA digital certificates, preshared keys, Diffie-Hellman groups 1, 2, and 5, and Perfect Forward Secrecy (PFS) rekeying for both ESP and AH.
Communication modes: tunnel mode and transport mode for both ESP and AH.
Optional: GRE for multicast support for both ESP and AH.
A number of features can be added to these designs such as high availability, VPN headend load distribution, and hardware-accelerated encryption to create robust IPSec VPNs. IPSec tunnels can even be integrated with service provider MPLS networks. You will see that IPSec is a common security technique now used in several types of VPNs, including access, intranet, and extranet VPNs. These are covered next.
Access VPNs Today’s remote-access VPNs are a flexible and cost-effective alternative to yesterday’s private dial-up solutions. Implementing a remote-access VPN can help organizations reduce communication expenses by using the flat-rate, broadband service provider networks that deliver Internet accessibility. Flat-rate local charges and the data gulp of enterprise applications help to fuel the drive to leverage broadband access methods for VPN access. Dial-up, DSL, cable modem access, and, more progressively, wired and wireless Ethernet, are the primary Layer 1 and 2 access methods. With the high-speed advantages of these types of broadband access, mobile workers, telecommuters, and workday extenders use remote-access VPNs to support their computing and networking needs beyond the office LAN. Access VPNs allow companies to take the work to the worker, wherever they are.
IPSec remote-access VPNs provide remote users with a premium remote networking environment, capable of extending office-based, enterprise-class applications to remote locations, generally to enterprise-owned laptops and desktop PCs. The inherent security features of IPSec help organizations protect the privacy of company data as end users connect over public access networks. IPSec implementations represent the largest share of remote-access VPNs. SSL-based remote-access VPNs provide connectivity from almost any Internet-enabled location through use of a workstation's web browser and the browser's native SSL encryption. SSL-based VPNs are excellent for secure access from non-enterprise-owned desktops, and they fit well when clientless VPN security is adequate for the business's application set, administrative capabilities, and security policy. The use of SSL in the market is growing. Table 4-2 shows business and technical benefits of remote-access VPNs. Table 4-2
Benefits of Remote-Access VPNs

Business benefits:
• Reduce operations and management costs
• Expand geographic coverage for mobile users
• Save on toll charges for dial-up users
• Maintain privacy of company data
• Achieve a reduced total cost of ownership
• Have networks that meet changing business needs
• Refocus internal resources on core business needs

Technical benefits:
• Scale quickly to expand remote-access coverage
• Choose from a variety of remote-access technologies
• Leverage service provider technical expertise
• Extend decision data to users anywhere via encrypted communications
• Offer quick provisioning for remote users
• Enjoy simplified, efficient networks
• Shift risk of technology investment to service providers
The next sections describe IPSec and SSL for remote-access VPNs in more detail. You also learn about the use of wireless and MPLS VPN virtual home gateways (VHGs) for remote-access VPNs.
IPSec VPNs for Remote Access One of the primary benefits of IPSec technology for the remote-access environment is the ability to decouple the teleworker’s workstation from a private dial-up infrastructure, removing both cost and bandwidth constraints. Previous remote-access solutions typically employed private dial-access modem sharing and terminal server equipment at Layer 1, long-distance or 800 numbers, secure token
passcodes, in-house authentication, authorization, and accounting (AAA) systems and operations, administration, and management personnel to keep teleworkers productive. This approach ensured a reasonable amount of security, as the dial connection was usually authenticated on a user basis and then trusted as a private host-to-LAN connection. The two primary constraints of this environment were the bandwidth limitations inherent in the public-switched telephone network and the abundance of long-distance and per-user connection charges. Only applications with minimal data transfer were realistically usable in this type of environment. For enterprises requiring higher bandwidth for remote-access users, ISDN was an option but only doubled bandwidth while maintaining the dissuasion of a long-distance, cost-per-minute revenue model. Companies with geographically large remote-access deployments often required a distributed design model using network access servers (NASs) to switch teleworker calls to Layer 2 forwarding and tunneling protocols, providing backhaul data transport to the central computing site. As the security architecture for the IP protocol was standardized in the fall of 1998, IPSec solutions then followed to allow secure remote access over a publicly shared IP infrastructure such as the Internet. By doing so, teleworkers could dial or connect with local Internet access numbers and then build secure, IPSec tunnels across the Internet, connecting to the company’s IPSec VPN head-end concentrator. This VPN concentrator was responsible for authenticating and logically bridging the remote user’s workstation into the enterprise computing environment on a trusted basis. This removed the constraint of long-distance charges for IT budgets supporting remote-access users. Soon to follow, the bandwidth limitations of switched dial access were then outpaced ten times or more using local broadband connections to the Internet, primarily through cable high-speed data access and DSL. Best of all, IPSec removed major concerns with moving enterprise data through publicly shared communications facilities, because all data was authenticated and optionally encrypted. The IPSec open standard benefits the remote-access environment, helping to remove cost and bandwidth constraints through the use of lower-cost, flat-rate broadband Internet access pricing. With stronger authentication and encryption options than any previously available remote-access technologies, IPSec remote-access solutions scale well with Internet and ISP broadband connectivity, providing faster performance, quicker deployment, and more secure communications for mobile workers, home-office workers, and small sites. By using IPSec and local IP broadband connections, companies are able to
• Reduce capital costs of the analog/digital modem-sharing equipment
• Reduce operational costs using local dial-up or broadband connections as opposed to long-distance and 800 number facilities
• Scale their remote-access networks larger with easier deployment and management
• Meet any data communication security requirements mandated by law
Remote-access IPSec VPN technology might be implemented in software such as a software program in a PC. It might also be implemented in hardware such as a custom ASIC chip within a hardware client. IPSec VPNs can be implemented as software or firmware inside
a network firewall hardware device. This technology might also be implemented in software or firmware inside a network router. Remote-access IPSec VPNs might be implemented with one or more IPSec form factors, although four options are typically seen in the market. These options are
• Software IPSec VPN client on a remote workstation
• IPSec VPN client in a remote-access firewall
• Hardware IPSec VPN client device at a remote site
• IPSec VPN client feature in a remote-site router
Figure 4-5 conceptualizes these types of remote-access IPSec VPN designs. Because VPNs are customarily established across public access networks such as the Internet, it is good security practice to deploy firewall and virus scanning technology into these environments.
Software-Based IPSec VPN Clients Workstation software-based IPSec VPN clients are more applicable to remote-access workers who need maximum mobility, connecting from their home office one day, and perhaps from a customer business site or hotel conference center the next. This provides great flexibility for the remote user but also requires software administration and management at the individual workstation level via the user or via central site personnel. The software VPN client program will have specific dependencies on the workstation’s operating system, so if there are multiple PC operating systems in play such as Windows 2000, Windows XP, and perhaps Mac OS 10.x on an Apple PowerBook, then multiple versions of the software client will be required. Using a software-based IPSec VPN client, a remote user will connect via a local broadband facility to the Internet or ISP or use a dial-up connection to reach the company VPN concentrator. A VPN concentrator is a performance-enhanced device that is centrally located for the express purpose of “concentrating” VPN sessions from multiple remote-access VPN users. The VPN concentrator provides specific features that augment the flexibility, performance, and manageability of large numbers of VPN remote-access users. The remote user will first be authenticated by the central site VPN concentrator, validating the user’s identity. If approved, an IPSec tunnel will be built with the appropriate security options. A virtual IP address will be assigned to the client to enable IP routing for VPNdestined traffic along with the IP addresses of name servers, such as Microsoft WINS and Internet DNS. With this knowledge, the remote workstation can access authorized applications and browse intranet and Internet sites.
Figure 4-5 Remote-Access IPSec VPN Solutions
[Four remote-site options reach the ISP through a broadband access device, and the central site authenticates each remote site and terminates IPSec behind its firewall: a software VPN client with personal firewall on the workstation, a home office firewall with VPN, a hardware VPN client, and a remote-site router with VPN. Personal firewall and virus scanning software provide local attack mitigation at the remote sites.]
Source: Cisco Systems, Inc.
Authentication of the PC device occurs at the VPN concentrator, typically using an IPSec group preshared key. Often, further identification is needed to validate that the current user of the PC is indeed the authorized user. This is usually implemented with a one-time password (OTP) solution such as a token generator in the user's possession, linked to a Remote Authentication Dial-In User Service (RADIUS) server/OTP server database of authorized users at the central site. Once authenticated, central site security management servers push current user access policies over the IPSec tunnel to the software IPSec client. New maintenance versions of the IPSec client might also be pushed over the tunnel to the remote PC device.
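As a sketch of the one-time password idea, the following Python code shows how a token generator and an OTP server can derive matching codes from a shared secret using the time-based HOTP/TOTP approach. The secret, interval, and digit count are illustrative assumptions, and real deployments add provisioning, clock-drift windows, and RADIUS integration not shown here.

```python
import hashlib
import hmac
import struct
import time

# A minimal sketch of a time-based one-time password (TOTP) derivation.
def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

token_secret = b"provisioned-out-of-band"                   # hypothetical shared secret
print("user enters:", totp(token_secret))
# The OTP server runs the same computation and compares the two values.
```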
Software-based IPSec VPN clients on PC workstations allow for flexible mobility and reasonable cost. This environment is useful for mobile and occasional home office users who generally need best-effort data support, because QoS is not an option for these environments.
Remote-Site IPSec VPN Firewalls Network device software and/or firmware-based IPSec VPN clients can be implemented as operating system feature sets of IOS-based firewalls. Depending on the size of the remote site, some of these firewall devices might include hardware acceleration to get the best performance for IPSec tunnel processing and termination. The remote-site IPSec VPN firewall option is frequently oriented to the prime home office worker or to a small branch or agency with few personnel. Since a firewall only has Ethernet, Fast Ethernet, or Gigabit Ethernet interfaces, it is best installed behind a broadband access device (the client side) of a DSL or cable modem on a broadband connection from an ISP. The IPSec client exists as either software or firmware within the remote-site firewall and originates the remote end of the IPSec tunnel toward a central site firewall with IPSec. As such, the remote-site workstations require only an IP over Ethernet connection to the remote-site firewall and not a software IPSec VPN client. This type of design is frequently postconfigured, administered, and maintained from the central site with minimal, if any, setup by the remote-site user. The stateful firewall functionality strengthens security protection from Internet risks and provides a feature for split tunneling, separating remote-site Internet-bound traffic from the VPN traffic destined to the corporate site. Optionally, user authentication might be used and performed by systems at the central site. Remote-site IPSec firewalls are usually installed at trusted site locations to meet requirements for always-on connectivity, stronger security, and more IPSec user performance. Their personal size allows for some ease of transport if required.
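The split-tunneling decision described above amounts to a per-destination routing choice. The following minimal Python sketch illustrates it with the standard ipaddress module; the corporate prefixes and destination addresses are hypothetical.

```python
import ipaddress

# A minimal sketch of split tunneling: corporate destinations go into the
# IPSec tunnel, while Internet-bound traffic exits the local broadband link.
CORPORATE_PREFIXES = [ipaddress.ip_network("10.0.0.0/8"),
                      ipaddress.ip_network("172.16.0.0/12")]

def next_hop(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    if any(addr in net for net in CORPORATE_PREFIXES):
        return "IPSec tunnel to central site"
    return "local Internet breakout"

print(next_hop("10.20.30.40"))    # corporate application server -> tunnel
print(next_hop("198.51.100.7"))   # public website -> local breakout
```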
Hardware IPSec VPN Clients Often, there are small remote office locations with a few desktop workstations that never become mobile workstations. These environments can be served with a purpose-built hardware IPSec VPN client, connecting upstream to the broadband Internet service and downstream to a small office Ethernet LAN connecting a few PCs. A hardware IPSec VPN client is a purposebuilt IPSec device, primarily in an ASIC chipset, designed for easier central site administration and management. The device typically has two or more Ethernet ports:
• One for connecting upstream to a broadband DSL or cable modem
• One for connecting downstream to a workstation or an Ethernet LAN switch
The hardware IPSec client device might optionally embed an Ethernet switch to support a few workstation users at a small site. With the IPSec client resident within the hardware, downstream workstations don’t require software-based IPSec clients or their administration. The central site uses an SSL-based Internet browser connection to contact and configure the hardware client, minimizing remote-user dependency. The hardware client authenticates with the central site VPN concentrator, often with a statically configured, preshared group key, to form the IPSec tunnel. Since this device doesn’t contain firewall functionality, a decision to support split tunneling should be coupled with personal firewall software on the remote workstations. IPSec client firmware upgrades are pushed from the central site VPN concentrator during maintenance periods. The hardware IPSec VPN client option is typically installed in a controlled remote-site location. Primary advantages are the alleviation of software-based IPSec clients on PC workstations, simplified central site management, and low cost of ownership.
Remote-Site, IPSec-Enabled Routers

Network device software- and/or firmware-based IPSec VPN clients can be implemented as operating system feature sets of IOS-based routers. Many of these routers include hardware acceleration for IPSec tunnel processing and termination. When implemented in an IOS-based router, other router features such as QoS can be leveraged. The remote-site, IPSec-enabled router is often used to connect to a local broadband service and build IPSec tunnel communication with a central site IPSec-enabled router. Optionally, firewall support can be embedded in the router, and a full set of Layer 3 routing features exist such as QoS, different LAN interfaces, and VPN hardware acceleration options for best performance. This design usually blurs the distinction between IPSec remote-access and IPSec site-to-site designs.
Secure Socket Layer (SSL) VPN for Remote Access SSL is a security technology integrated into PC software-based Internet browsers such as Netscape Navigator, Microsoft Internet Explorer, Mozilla, Safari, and others. This built-in security feature was absolutely essential to the uptake of e-commerce across the Internet in order to secure credit card information, the primary e-commerce form of payment for goods and services. Without SSL encryption on browser sessions, online e-commerce would need another way to link consumers with product. SSL established the trust factor between consumers and online shopping. SSL-based VPNs are remote connections across the Internet or other IP network, using the native SSL capability of popular browsers to provide clientless SSL-based secure communications. SSL-based VPNs allow remote users to access web pages and a growing set of
web-enabled services, transfer e-mail, access files, and run TCP/IP applications, without the use of VPN client software on the remote workstation. Although less robust than IPSec VPNs, SSL VPNs allow for clientless access anywhere from any Internet-connected PC with an SSL-capable browser. SSL VPNs are also a good fit for PC-to-server applications that have less stringent security requirements. The clientless feature of SSL VPNs largely eliminates software integration, customization, and software client deployment across the remote user population, resulting in a dramatic reduction in PC desktop support expense. Most standard web browsers embed SSL software, providing built-in support for DES and Ron’s code number 4 (RC4, named after its creator Ron Rivest of RSA Security), at 128bit and 40-bit encryption levels, and for the 168-bit 3DES encryption standard. 3DES is the most commonly used encryption algorithm. When an SSL browser connects to a VPN concentrator head end, it uses HTTPS (HTTP secure mode), which is TCP port 443. For establishing trust on SSL-connecting users, deep user authentication is available via RADIUS, RSA SecureID, X.509 digital certificates, Microsoft Active Directory and NT Domain authentication, Kerberos, and other OTP solutions. This flexibility allows the organization to choose the security authentication method most appropriate for its environment. SSL VPNs require the use of a web browser as the access portal to applications. Applications used by SSL users need to present traffic through a web interface and not through an application’s native graphical user interface (GUI), as is the case with many client/server applications. This can require some changes to an application’s workflow, but adding webbased capability increases the application’s accessibility for remote users. Some applications are supported in the SSL environment through an application-specific, small Microsoft ActiveX or Sun Java applet, usually downloaded to the remote user’s PC in advance of placing the user in session with the selected application. SSL solutions are essentially open-standard technology when you consider the aforementioned web browsers with their native SSL encryption technologies; they are intended to be. The VPN head-end architecture for supporting IPSec and SSL-based VPNs is the same. SSL VPN solutions, then, must be differentiated not through technology but rather through their user and security management granularity, performance handling and scalability, and the deployment flexibility designed into the VPN concentrator head-end hardware and software. Most vendors will support both SSL and IPSec VPNs within their same product offerings. SSL and IPSec VPNs are complementary technologies that might be deployed together. For example, some organizations go as far as negotiating an IPSec VPN tunnel with a remote workstation device and then use a web browser SSL interface to authenticate the particular user with a multifactor authentication mechanism, such as a user ID and PIN (known only to the user), and an OTP token synchronized with the organization’s OTP database server.
This strengthens security in case the remote workstation falls into the hands of unauthorized personnel—the IPSec tunnel authenticates the workstation, and the SSL client authenticates the workstation's current user as an authorized employee. As a product implementation example, Cisco VPN concentrator platforms, deployed at the head end of a remote-access VPN, include concurrent support for both IPSec- and SSL-based VPNs by combining both technologies in a single device.

The appeal of SSL-based VPNs is growing. A prime advantage of SSL VPNs is to create secure access from any supported web browser, across any Internet or ISP connection, and do it all without VPN client software management at the remote user workstation level. Though SSL VPNs have more limited application availability than IPSec VPNs, the technology can be appropriate for many organizations' remote-access requirements and security policy. Many organizations use SSL VPN technology to support a specific application set or set of users, while also using IPSec for full network access or robust support for multimedia applications. If you are a user who needs anywhere access, the proper selection is often an SSL-based VPN. If you need access to any application, the choice is likely to be an IPSec-based VPN. The emergence of SSL VPNs adds another level of price/performance and security granularity for companies to consider for remote-access IP VPN support.
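The browser-side behavior that SSL VPNs build on can be sketched with Python's standard ssl module: open a TCP connection to the concentrator head end on port 443, negotiate SSL/TLS, and verify the server certificate before any application data flows. The hostname below is a placeholder, so the connection itself will only succeed against a real head end.

```python
import socket
import ssl

# A minimal sketch of an SSL/TLS client connection to a VPN concentrator
# head end on TCP port 443. The hostname is a placeholder assumption.
HOST = "vpn.example.com"

context = ssl.create_default_context()          # uses the system CA trust store
with socket.create_connection((HOST, 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname=HOST) as tls:
        print("negotiated:", tls.version(), tls.cipher())
        print("server certificate subject:", tls.getpeercert().get("subject"))
```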
Wireless Remote-Access VPNs

Wireless remote-access VPNs are not so much an additional VPN technology as an alternative to wired Layer 2 connectivity for remote user workstations. Wireless LAN technology, primarily based on 802.11x standards, is essentially Ethernet-through-the-air, having its own set of authentication and encryption technologies to establish trusted air-link transmission between wireless client workstations and access points. Remote-access VPNs primarily use IPSec technology to establish secure network tunnels and trusted data transmission. Combining both Layer 2 and 3 authentication and encryption technologies leads to complexity. Yet this environment will become the most prominent form of Layer 2 and 3 access for remote user workstations because of portability within the residence or office, conference center, or Internet café.

The wireless remote-access environment is difficult to homogenize, leading to many brands of wireless access points and wireless adapter configurations. This environment is also a challenge to properly secure, because wireless access points will radiate signals beyond the walls of the remote user's office or residence. Remote-access VPN workstations with wireless LAN technology will typically use either a software-based VPN client or a hardware-based VPN client. Security is also an important factor to consider in wireless VPNs. The next sections describe these topics in more detail.
Software-Based Wireless VPNs For a software-based remote-access VPN, wireless LAN (WLAN) client workstations associate with the local wireless access point to establish connectivity at Layer 2 (air-based Ethernet). The access point is normally connected via wired Ethernet to a broadband Internet access device, such as a DSL or cable modem. Upon proper association of the wireless client, the access point allows the client’s Dynamic Host Configuration Protocol (DHCP) request for Layer 3 IP addresses to be passed from the DHCP server to the client workstation so that it might receive an IP address, a default gateway address, and a DNS server address, establishing IP connectivity at Layer 3. Fee-based wireless access domains require the user to establish a login profile, and submit a form of payment prior to providing a Layer 3 IP address to the workstation client. If the wireless client workstation is configured with a software-based IPSec client, the IPSec client requests a Layer 3 VPN tunnel through the network to the central site VPN gateway or concentrator, using one of many types of authentication. If access is approved, then the IPSec VPN tunnel provides secure access for the data transmitted from the central site all the way to the wireless client remote workstation. Figure 4-6 shows a software remote-access VPN for WLAN design. Figure 4-6
Software Remote-Access VPN for WLAN
[A wireless workstation with a VPN software client associates with a wireless access point, which connects through a broadband access device to the Internet.]
Source: Cisco Systems, Inc.
Hardware-Based Wireless VPNs To support full-time remote-access workers with wireless access, an organization often installs a broadband-connected, hardware-based VPN device attached to a wireless access point. This environment is usually more conducive to remote security management, and the parent organization might choose to authenticate and encrypt the wireless access at Layer 2 in addition to any Layer 3 encryption supplied by the hardware VPN device. For example, using Extensible Authentication Protocol (802.1x/EAP) authenticates the wireless client through a RADIUS server to grant Layer 2 access. EAP is a message authentication algorithm that ensures that the workstation wireless adapter securely communicates with the WLAN access point and the authentication server. Coupling this with the Layer 3 IPSec capability of the hardware VPN device allows for stronger security for remote-access wireless LAN VPN environments. An example of a hardware-based, remote-access VPN with wireless LAN support is shown in Figure 4-7. Figure 4-7
Hardware Remote-Access VPN for WLAN
[A wireless computer with 802.1x/EAP associates with a wireless access point with 802.1x/EAP, which connects through a hardware VPN device and a broadband access device to the Internet.]
Source: Cisco Systems, Inc.
Wireless VPN Security Considerations When deploying IPSec or SSL remote-access VPNs in a wireless LAN environment, security must be planned and implemented at both the Layer 2 Media Access Control (MAC) level (wireless access) and Layer 3 IP level (IPSec or SSL VPN). If the remote users are also mobile users, they normally use personal firewall software on their workstations to protect workstation data as they use their software-based, remote-access IPSec VPN client in public hotspots such as Internet cafés, airports, hotels, and convention centers. Wireless VPN security is a combination of both wireless LAN security and VPN security. Both encryption and authentication mechanisms should be designed to operate at both the Layer 2 (wireless air-link) and Layer 3 IPSec or SSL VPN level to ensure the appropriate security for wireless VPN communication access. For more information on wireless LAN security, refer to the section “Wireless LAN Security” in Chapter 2, “IP Networks.”
MPLS VPNs for Remote Access
As mentioned previously in the section "IP VPNs: Where We're Going," an MPLS VPN solution is an intranet-style VPN. Using the intranet concept, connections to the edge of the MPLS VPN are by and large fixed, Layer 2 bandwidth transport services such as Frame Relay, serial HDLC, ATM, PPP, PoS, and Ethernet. Given the worldwide growth of mobile users (workday extenders, telecommuters, and small offices/home offices), a large opportunity exists to connect these user groups to MPLS core networks and deliver IP services to this remote-office knowledge worker population. One distinction of remote access to MPLS VPNs is that the remote user connection seldom transits the public Internet; instead, it stays within the MPLS provider's private access network until it reaches the provider's MPLS VPN service edge. These remote-access users can then establish VPN access to MPLS core networks wherever they are. For companies that choose to outsource their private WANs to provider-managed MPLS VPNs, remote access to MPLS VPNs accommodates the company's teleworker population. In cases where an Internet connection is used for remote access, IPSec or SSL VPN technology can secure this public portion of the access link. As you've learned in this chapter, IPSec and SSL VPN solutions are excellent options for remote access. As necessary to meet a company's security guidelines, IPSec and SSL can be extended to the remote-access user groups of organizations using MPLS networking services as their primary WAN infrastructure, whether internally or as a service from a service provider. The following discussion examines remote access to MPLS VPN features, functionality, and benefits.
Remote Access to MPLS VPN Features and Functionality
Many providers of MPLS VPNs are stretching MPLS VPN capabilities beyond fixed-access locations into remote-access locations to leverage MPLS VPN services into this extended networking segment. Remote access to an MPLS VPN allows the provider not only to extend the proper MPLS VPN to these users, but also to offer the remote user incremental IP VPN services such as packet telephony, content delivery, application hosting, multimedia applications, and many more. As an example, an MPLS VPN service provider can offer a business client MPLS remote-access services via the provider's DSL access infrastructure for the client's remote teleworkers and small branch locations. Remote-access connections to MPLS VPNs use last-mile broadband and narrowband connections such as cable, DSL, dial-up, and wireless types of access. This allows MPLS VPN functionality to be scalable, end-to-end, and extended anywhere that a provider's MPLS network can reach. An MPLS VPN virtual home gateway (VHG) is essentially a router functioning as an MPLS provider edge (PE) router, with this VHG/PE positioned at the point of demarcation between the termination of remote-access sessions and the beginning of the MPLS VPN core network. Based on VPN-aware, DHCP server-assigned IP addresses, or dynamically assigned IP address space from a RADIUS-based AAA server, the VHG/PE is capable of assigning the proper Layer 3 IP addresses and placing the remote-access user sessions into the proper MPLS VPNs. This functionality is based on true IP routing protocols and IP routing, in contrast to the point-to-point tunnel concept used for IPSec and SSL. The MPLS VHG/PE provides the following features:
• Support of overlapping IP addresses
• Group IP address pools on a per-VPN basis
• Dynamic assignment of IP address space to the VHG/PE via the On-Demand Address Pool (ODAP) feature
• Efficient route summarization at the VHG/PE boundary
These features allow remote-access design flexibility for both providers and customers of MPLS VPNs. For providers, they allow functions such as DHCP, RADIUS authentication, and IP address assignment to be in-sourced on a per-customer or per-VPN basis, while allowing some customers to retain this functionality within their own computing support boundaries through the use of MPLS DHCP relay and RADIUS proxy features. For customers, these advanced features provide the flexibility to maintain control over these security functions or to outsource them to the MPLS VPN service provider. Because remote-access users connect to the VHG/PE from nonbusiness locations, it is prudent to authenticate and authorize approved users via AAA solutions. The MPLS VHG/PE, DHCP, and RADIUS-based AAA servers work together to facilitate a robust, flexible, and secure remote-access session into MPLS VPN customer domains.
With the user authenticated and placed into the proper MPLS VPN, enterprise application resources are available to the remote user, whether the user connects via dial-up, cable, DSL, or wireless access. Figure 4-8 shows the concept of remote access to MPLS VPNs.
Figure 4-8 Remote Access to MPLS VPN Solution (Source: Cisco Systems, Inc.)
Within each of the various access types, there are a number of options with which to establish user access or design a broadband solution for remote-access services to MPLS VPNs. The service architectures available for remote access to MPLS VPNs are
• Cable access:
— Bridged access of customer premises equipment (CPE) through Data over Cable Systems Interface Specifications (DOCSIS) service ID (SID)
— PPP over Ethernet (PPPoE)
— PPP over Ethernet over 802.1Q in 802.1Q (PPPoEoQinQ)
• DSL access:
— RFC 1483/2684 bridged
— RFC 1483/2684 routed bridge encapsulation (RBE)
— PPP over ATM (PPPoA)
— PPP over Ethernet (PPPoE)
— PPP over Ethernet over VLAN (PPPoEoVLAN)
— PPP over Ethernet over 802.1Q in 802.1Q (PPPoEoQinQ)
— PPP over any service (PPPoX)
— Virtual private dial-up network (VPDN) Layer 2 Tunneling Protocol (L2TP)
• Dial access:
— Layer 2 Tunneling Protocol (L2TP) VPDN dial-in
— L2TP large-scale dial-out (LSDO)
— PPP dial-in for ISDN
— PPP dial-out for ISDN
— Dial backup via VPDN or ISDN
Benefits of Remote Access to MPLS VPNs
Remote access to MPLS VPNs provides many benefits to both customers and providers of VPN technology. A few of the possible customer benefits are listed here:
• Remote-access users can securely access corporate intranet applications over MPLS VPNs using access technologies such as dial-up, DSL, and cable.
• The service provider can provision and secure user connectivity.
• The QoS features within MPLS networks can be extended all the way to the remote-access user.
• Remote access, coupled with MPLS VPNs, can increase a provider's global network scale.
Service providers can also benefit from an investment in a remote access to MPLS VPN solution. For service providers, these are some of the benefits:
• Leverages the use of MPLS core services to more users and sites.
• Expands the MPLS VPN offerings beyond fixed-location intranet VPN offerings to carrier-class access VPN offerings.
• Achieves greater out-tasking penetration with customers to enhance differentiation, loyalty, and revenues.
The use of remote access to MPLS VPN technology using the VHG/PE functionality allows providers to provision and support remote-access VPN connectivity via many different access technologies while leveraging the infrastructure investment across many MPLS VPN customers. This solution is one of the most sophisticated, scalable, and carrier-class remote-access solutions for IP networking. Taking advantage of MPLS networking features such as MPLS VPN and remote access to MPLS VPNs, companies can deliver intranet applications closer to the worker while lowering costs. Providers of all kinds can strengthen customer partnerships by providing not only managed intranet services but managed remote-access services as well.
Intranet VPNs
Corporate networks have traditionally been referred to as internal networks or private networks, but more contemporarily they are called intranets. Retaining the legacy of internal, private networks, their reclassification as intranets also captures the spirit of the web-enabled HTML and XML clients of today's "Internet look-and-feel" enterprise applications. Intranet VPNs are used to replace or augment existing private networks built on conventional point-to-point, Frame Relay, or ATM infrastructures. Using the capabilities of IPSec VPNs, Layer 3 and Layer 2 MPLS VPNs, L2TPv3 VPNs, and multicast VPN technologies, organizations can meet advancing WAN requirements more cost-effectively and flexibly, building intranet VPNs across shared network infrastructures.
IPSec Site-to-Site VPNs
In the intranet model, IPSec remains a dominant tunneling technology for creating VPNs across shared facilities such as the Internet. With IPSec's inherent tunneling and security capabilities, users can create site-to-site VPNs across networks such as the Internet, extending the reach of their businesses or large enterprises with less expense, less provisioning time, and fewer restrictions concerning long-haul or international transport providers. The key catalyst for site-to-site VPNs is that they easily leverage publicly routable IP networks, best exemplified by the Internet. With traditional WAN networks, providers were called on to design customer site-to-site connections across the provider's complete network geography, either through Layer 1 point-to-point circuits, Layer 2 Frame Relay data link connection identifiers (DLCIs), or Layer 2 ATM addresses. This required engineering, provisioning, and deployment processes that depended on customer orders to initialize the delivery cycle. With the Internet, the Layer 1 connectivity (an infrastructure that is continuously and transparently upgraded) is already in place, with Layer 2 and 3 addressing weaving this infrastructure into a worldwide network of networks. In fact, it is the Internet's established and inherent Layer 3 addressability and global IP routing that create the service pull for site-to-site VPNs. With the Internet, the long-distance portion of the network is already engineered, provisioned, deployed, IP routable, and virtually free. By connecting local access links to an IP-routable Internet, control of network provisioning passes into the hands of the customer. Customer wait time for provisioning is circumvented, pricing negotiations and term contracts are avoided, and network rollouts are accelerated. In fact, the global expanse of the Internet allows organizations to extend and modify their company's intranets beyond local, regional, and national borders, and to do so quickly on a worldwide basis.
Site-to-site IPSec VPNs are often used for WAN private data link replacement and for complying with various data privacy acts that impose extra security measures on data transmitted across public domain networks. Many organizations introduce IPSec VPN technology into their networks for WAN data backup applications. While important, data backups aren't as mission critical as prime-time production applications, so WAN data backup is a good application with which to become familiar with IPSec technology deployment, management, and applicability. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require companies to protect the privacy of patient medical records, and many choose IPSec VPN technology to apply encryption to data streams as they traverse publicly shared network facilities and replicate information between health provider companies. Yet the prevailing use of IPSec VPN site-to-site applications is to replace WAN private data links. By using IPSec to extend intranet facilities over the Internet or over less expensive service provider IP facilities, companies can stitch together remote sites into a virtual fabric of crypto-tunneled logical links. This can be done rather inexpensively through IPSec VPN tunnels, allowing sensitive data to be obscured from view while moving through untrusted network space. Figure 4-9 shows the concept of extending an intranet over publicly shared facilities such as the Internet.
Figure 4-9 Intranet Extension Using IPSec (Source: Cisco Systems, Inc.)
In practice, an IPSec site-to-site VPN is an overlay to an existing IP network. A pair of VPN endpoint devices, most commonly routers, use publicly routable IP addresses to establish an IPSec tunnel between them. Once this is done, private IP address space can be routed through the IPSec tunnel, easily extending an intranet private IP address plan and intranet-based applications. Through the use of IP routing technology, digital certificates, preshared keys, and cryptography, IPSec tunnels are built across existing Layer 3 IP networks, transparently safeguarding network transmission from public view. The site-to-site VPN design involves a mesh of IPSec tunnels connecting remote locations such as branch offices with a central site, regional hub sites of an enterprise, and remote sites wishing to communicate among themselves. The mesh of tunnels is established using static, public IP addresses on the routers so that the tunnels are Internet routable. In this way, any-site-to-any-site connectivity is supported directly across a shared infrastructure.
Creating a full mesh of connectivity between sites has traditionally required extra physical data circuits and their recurring costs. However, a full mesh of IPSec tunnels isn’t limited by cost or circuit quantity as long as a site’s physical broadband connection to the Internet is sufficient to support the intended number of logical IPSec tunnels. A full mesh of IPSec site-to-site VPN tunnels is generally only limited by the performance capabilities of the IPSec-based router or firewall, or by the ability of enterprise personnel to operationally manage the IPSec environment, as each tunnel is often defined and configured using static crypto maps in the IPSec-connecting routers. While site-to-site IPSec VPNs remain very popular, additional designs are available to ease management burden or optimize traffic flow.
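As a concrete illustration of the static crypto map approach, a single tunnel on one Cisco IOS router might be defined as follows; the peer address, pre-shared key, and subnets are placeholders, and the mirror-image configuration would be applied on the far-end router.

! Phase 1 (ISAKMP) policy using a pre-shared key (illustrative values)
crypto isakmp policy 10
 encryption aes
 hash sha
 authentication pre-share
 group 2
crypto isakmp key S3cr3tKey address 192.0.2.2
!
! Phase 2 transform set defining the IPSec encryption and integrity algorithms
crypto ipsec transform-set STRONG esp-aes esp-sha-hmac
!
! Interesting traffic: branch private subnet to headquarters private subnet
access-list 101 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
!
! Static crypto map tying the peer, transform set, and crypto ACL together
crypto map SITE2SITE 10 ipsec-isakmp
 set peer 192.0.2.2
 set transform-set STRONG
 match address 101
!
! Apply the crypto map to the Internet-facing interface
interface Serial0/0
 ip address 192.0.2.1 255.255.255.252
 crypto map SITE2SITE

Each additional site-to-site peer adds another crypto map sequence (and ACL) of this form, which is precisely the per-tunnel configuration burden that the designs in the next section aim to reduce.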
Additional Intranet IPSec VPN Designs
In addition to intranet IPSec site-to-site VPNs, enterprises can implement other IPSec VPN designs, primarily along three deployment scenarios:
• Hub-and-spoke VPNs
• Full-mesh on-demand VPNs with Tunnel Endpoint Discovery (TED)
• Dynamic multipoint VPNs
Hub-and-Spoke VPNs
The hub-and-spoke design is often chosen when small, remote sites primarily need to communicate with a regional hub site or core central site of an enterprise. This can be thought of as one-to-many connectivity, with the hub site becoming the single target site for the IPSec tunnels coming from the many remote offices, or spokes. This approach allows smaller routers to be used at the spokes, because they need only one IPSec tunnel to the hub site. It also allows a level of dynamic IPSec configuration for the spokes, because dynamic crypto maps at the hub can accept tunnels from spoke routers whose IP addresses are assigned dynamically via DHCP. The spoke routers initiate connectivity to the hub, authenticating themselves to the hub and then establishing the IPSec tunnels. Spokes can send data to other spokes in this environment, but that traffic is generally decrypted and re-encrypted at the hub router.
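On the hub router, a dynamic crypto map accepts tunnels from spokes whose peer addresses are not known in advance; a minimal hub-side sketch follows (names and key are illustrative, and the wildcard pre-shared key is shown only for simplicity).

crypto isakmp policy 10
 authentication pre-share
 group 2
! Wildcard pre-shared key so any authenticated spoke can negotiate (lab-style example)
crypto isakmp key SpokeKey address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set SPOKES esp-3des esp-sha-hmac
!
! Dynamic crypto map: peer address and proxy identities are learned at negotiation time
crypto dynamic-map DYN-SPOKES 10
 set transform-set SPOKES
 reverse-route
!
! Reference the dynamic map from the crypto map applied to the hub interface
crypto map HUB 100 ipsec-isakmp dynamic DYN-SPOKES
interface Serial0/0
 crypto map HUB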
Full-Mesh On-Demand VPNs with TED
Full-mesh on-demand VPNs with Tunnel Endpoint Discovery (TED) benefit from a Cisco IOS Software feature that allows routers to discover IPSec endpoints across the Internet or a shared provider network. Developed for use in large enterprise IPSec environments with full-mesh IPSec requirements, the TED feature enables IPSec configuration to scale through the use of dynamic crypto maps.
On demand, the TED feature sends a discovery probe packet from the initiator to determine which IPSec peer router is responsible for a specific publicly routable IP host address or subnet. Once the proper IPSec peer is learned, its address is used to create a dynamic crypto definition and proceed with IPSec tunnel setup. In a 100-site IPSec deployment, this could reduce 99 static crypto map peer definitions per router to a single dynamic crypto map on each.
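In Cisco IOS, TED is enabled by adding the discover keyword when the dynamic crypto map is referenced from the crypto map applied to the interface; the following sketch uses illustrative names and an assumed crypto ACL.

crypto ipsec transform-set TEDSET esp-aes esp-sha-hmac
!
! Dynamic crypto map; the crypto ACL defines which traffic triggers peer discovery
crypto dynamic-map TED-DYN 10
 set transform-set TEDSET
 match address 120
!
! The discover keyword turns on Tunnel Endpoint Discovery for this crypto map
crypto map TEDMAP 10 ipsec-isakmp dynamic TED-DYN discover
!
access-list 120 permit ip 10.10.0.0 0.0.255.255 10.0.0.0 0.255.255.255
interface FastEthernet0/0
 crypto map TEDMAP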
Dynamic Multipoint VPNs
The dynamic multipoint VPN (DMVPN) IPSec design option is an optimal blend of the hub-and-spoke approach and the dynamic, on-demand spoke-to-spoke approach. DMVPN allows spoke sites to discover other spokes and dynamically create IPSec tunnels directly between them on demand. While maintaining primary spoke-to-hub communication, individual spoke sites can dynamically learn routes and create IPSec tunnels directly with other spokes, without having to pass all communication through the hub router and incur the performance hit of double encryption and decryption. This allows better IPSec scaling in full-mesh and partial-mesh IPSec VPN environments, providing optimum paths between spoke sites on demand while maintaining the benefits of the traditional hub-and-spoke design.
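A DMVPN hub typically combines a multipoint GRE tunnel, NHRP, and an IPSec profile. The following hub-side sketch uses illustrative addresses and keys and omits the routing protocol that would normally run over the tunnel.

crypto isakmp policy 10
 authentication pre-share
crypto isakmp key DmvpnKey address 0.0.0.0 0.0.0.0
crypto ipsec transform-set DMVPN-TS esp-aes esp-sha-hmac
crypto ipsec profile DMVPN-PROF
 set transform-set DMVPN-TS
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ! Replicate routing-protocol multicast to spokes as they register via NHRP
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source Serial0/0
 ! A single multipoint GRE interface serves all spokes
 tunnel mode gre multipoint
 tunnel key 100
 tunnel protection ipsec profile DMVPN-PROF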
IPSec Design Components
Key components of these IPSec designs include the following:
• High-end VPN routers, often hardware accelerated or purpose-built to serve as VPN head-end termination devices at a central enterprise or regional hub site
• VPN-capable access routers serving as VPN branch-end termination devices
• Layer 3 Internet connection services procured from an ISP or service provider, connecting sites to a public network
• IPSec and optional GRE tunnels interconnecting the head-end and branch-end devices into the IPSec VPN
A number of features can be added to these designs, such as high availability, head-end load distribution, and hardware-accelerated encryption, to create robust IPSec site-to-site networks. IPSec tunnels can even be integrated with service provider MPLS networks. Vendors of IPSec-enabled equipment often add features within their device software to assist with the day-two and ongoing management of IPSec VPN environments. For example, Cisco equipment can distribute predefined IPSec and SSL security policies from a central-site or head-end device to downstream IPSec clients. This helps eliminate a substantial amount of local IPSec configuration that would otherwise be required. Cisco Easy VPN is one example of a VPN management feature set that eases IPSec VPN deployment and management. Figure 4-10 positions IPSec intranet VPN solutions with respect to services and productivity.
Figure 4-10 IPSec Site-to-Site Solution Positioning (Standard IPSec, Easy VPN IPSec, Routed GRE/IPSec, and Dynamic Multipoint VPN positioned along axes of enhanced services and productivity)
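The Cisco Easy VPN feature mentioned earlier centralizes policy on the head-end device and pushes it to clients at connect time. A simplified, illustrative server-side sketch follows; the group name, key, and address pool are placeholders, and the exact command set varies by IOS release.

aaa new-model
aaa authentication login VPN-AUTH local
aaa authorization network VPN-AUTHOR local
!
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
!
! Group policy pushed to Easy VPN clients (addresses and DNS assigned centrally)
crypto isakmp client configuration group SALES
 key GroupKey
 pool VPN-POOL
 domain example.com
!
crypto ipsec transform-set TS esp-aes esp-sha-hmac
crypto dynamic-map DYN 10
 set transform-set TS
 reverse-route
!
crypto map CM client authentication list VPN-AUTH
crypto map CM isakmp authorization list VPN-AUTHOR
crypto map CM client configuration address respond
crypto map CM 10 ipsec-isakmp dynamic DYN
!
ip local pool VPN-POOL 10.10.10.1 10.10.10.50
interface FastEthernet0/0
 crypto map CM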
MPLS Layer 3 VPNs
Original IP VPN services for intranet extension were frequently CPE-based, self-deployed, and self-managed by enterprises. As one of the most significant next-generation networking services, MPLS VPN services have emerged as an efficient solution for supporting enterprise networking requirements, most notably convergence. As a VPN feature layered on an MPLS network, MPLS VPNs support geographic customer networking, accommodating intranet, extranet, and Internet access applications while interconnecting sites both securely and privately. MPLS and MPLS VPNs are also meeting a need for service providers looking to ascend to Layer 3 and expand into IP services. Combining the best features of IP routing and switching, MPLS networks perform better, scale farther, and are easier to manage. Based on IP, MPLS VPNs are by definition Layer 3 VPNs, containing customer IP route tables and routing protocols. The IETF's RFC 2547bis specification, which uses Multiprotocol BGP (MP-BGP), defines MPLS VPNs as network-based services. The specification describes a VPN solution that uses MPLS to forward traffic using per-customer labels. The provider uses MP-BGP to distribute customer routing information across the provider's backbone. The Layer 3 VPN capability enables providers to manage customer IP routing within the customer's defined, logical VPN. MPLS Layer 3 VPNs are primarily considered intranet VPNs, and they are the VPNs frequently reached through the remote access to MPLS VPNs capability discussed earlier. The primary intent of the IETF's specification for MPLS VPNs is to help organizations reduce the complexity and operational costs of IP routing, moving this functionality into the carrier networks.
The chief intent for service providers is to present an attractive, carrier-class and carrier-scale IP network with which to target the enterprise customer segment. MPLS VPNs are flexible and adaptable, emulating and enhancing enterprise IP networks. Organizations now have the choice to outsource complex networking demands to their providers. Providers can then alter their customer relationship from a transport base to a services base. Choosing to leverage their MPLS VPN platforms gives providers new opportunities to invoke the natural service pull of IP. Carriers can expand their data and IP product portfolios with new IP-based applications that businesses need. In fact, it is the business advantages of MPLS VPNs that should be at the forefront of every provider's VPN offerings. An MPLS VPN infrastructure is service-centric, with the inherent ability to lower the capital expenditure (CapEx) and operational expenditure (OpEx) of services by leveraging a centralized, shared physical network infrastructure across many customers. This can provide pricing power and create value distinction in service offerings. By migrating customers from traditional WAN solutions to Layer 3 MPLS VPN services, providers assume the more complex network engineering and operational responsibilities of organizations, deriving positive effects while enhancing customer engagement. An MPLS VPN is a logical IP network infrastructure delivering private network services, layered on a physically shared Layer 3 IP backbone. An MPLS VPN infrastructure
• Supports large-scale VPN services
• Accommodates global IP addressing and nonunique private IP addressing
• Provides controlled access and QoS
• Is easily configurable for customers
• Is scalable for easy provisioning
• Increases the VPN service provider's added value
• Decreases service provider costs of providing VPN services
• Is flexible enough to support a wide range of VPN customers
MPLS VPNs using MP-BGP are beneficial when customers desire Layer 3 connectivity and prefer to offload routing protocol overhead to the MPLS network. Layer 3 MPLS VPNs are access agnostic, allowing different Layer 2 interface types to be part of the same VPN. Typical access interfaces are Frame Relay, PPP, HDLC, ATM, PoS, and Ethernet. In addition, a variety of routing options are available for the Layer 2 access link (the CE-to-PE link), including static routes, RIP, OSPF, EIGRP, Intermediate System-to-Intermediate System (IS-IS), and BGP, all of them functioning equally well. This creates flexibility for the MPLS service provider, offering a range of choices for the particular connectivity and routing needs of an individual customer.
VPN Any-to-Any Connectivity
Layer 3 MPLS VPNs benefit from the automatic any-to-any connectivity supported by IP routing protocols. For MPLS VPNs specified by RFC 2547bis, MP-BGP is the routing protocol responsible for sharing IP network routes so that any-to-any connectivity is easy to establish. You previously learned about MPLS core networks and MPLS label swapping in Chapter 3, "Multiservice Networks." Recall that an MPLS core network is made up of provider (P) nodes and provider edge (PE) nodes. The combination of MPLS P and PE routers defines an MPLS domain. The MPLS PEs are where the Layer 3 VPN any-to-any connectivity is established. The MP-BGP protocol, when properly configured on a PE device, automatically establishes route forwarding relationships to other PE routers that host the same VPN. Multiple unique VPNs can be defined, each one creating a VPN routing and forwarding (VRF) instance on its respective PE router. Conceptually, each VRF is a unique routing table, and a properly sized and configured PE router can hold several hundred VRFs. As a simple example, imagine an MPLS VPN service supporting three separate customers attached to the same PE routers; each PE router would have three separate VRFs in processor memory. VRFs are uniquely identified with route distinguishers (RDs), and matching import/export values allow MP-BGP to automatically propagate IP routes between the VRFs that share the same RD value. This means that with the proper VPN definitions in the MPLS network, a new customer location can be configured to attach to its local PE router, and all customer locations across the MPLS VPN are immediately reachable without any additional definitions on the remote PE routers. Customer devices, often called customer edge (CE) routers, connect via various types of access links to the MPLS PE devices. The MPLS PE, preconfigured for this customer's VPN, places the customer's IP network prefix into the appropriate VRF table. If this is a new customer IP network route, MP-BGP propagates the route to the other PEs that hold the same VRF (identified by the RD value). This shares the customer IP network route with all PE routers enabled for this VPN and allows all customer sites to have any-to-any Layer 3 IP connectivity with each other, subject to the route import/export policies as defined. What is different about MPLS VPNs is that as the PE populates the customer IP network prefix in its VRF table, it assigns a VPN label to the prefixes from this customer and applies this VPN label to any subsequent IP packets received. This VPN label, sometimes called the inner label, uniquely distinguishes this customer's traffic so that it can be identified among any other customer VPNs on this PE router, be sent across a shared MPLS core network infrastructure, and be properly separated at the destination PE. Consider that two different customers could be using the exact same IP address, for example, an RFC 1918 private address such as 192.168.220.221. Specifically, each VPN RD is a unique 64-bit value that prepends this 32-bit customer address, producing a 96-bit address that is now unique, even though this pair of customers used the same 32-bit IP address. The resulting 96-bit address is known as the VPNv4 address within the MPLS network.
The VPNv4 address is formed at the source PE, and the 64-bit portion (the RD) is stripped by the destination PE before the remaining 32-bit addressed packets are delivered to the proper interface. The use of the unique VPN RD and the resulting VPN label keeps these overlapping IP addresses completely unique and separated in the VRF tables and throughout the network. Before the PE router sends the VPN-labeled IP packet into the MPLS core network, the PE also prepends an MPLS Interior Gateway Protocol (IGP) label, sometimes called the outer label. This label represents the path through the core network to reach the destination PE router. From a forwarding perspective, it's important to understand that a PE-to-PE route connection via MP-BGP is considered a single Layer 3 IP routing hop, regardless of how many P routers might connect the two PE routers together. This is because the P routers perform MPLS label switching, using the outer label through the MPLS core based on pre-established label-switched paths (LSPs), effectively a Layer 2 process. Therefore, the Layer 3 IP header of an IP packet is never examined by a P router in the core; it is examined only by the source and destination PE routers. A traceroute to a destination customer network lists the two PE routers in its output but none of the P routers between them, because the P routers don't forward based on information at the IP layer. A conceptual diagram of an MPLS Layer 3 VPN is shown in Figure 4-11.
Figure 4-11 MPLS Layer 3 VPN (overlapping addresses are made unique by appending the RD to create VPNv4 addresses; Source: Cisco Systems, Inc.)
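On a Cisco IOS PE router, the per-customer VRF, its route distinguisher and route targets, the CE-facing interface assignment, and the MP-BGP VPNv4 session are typically configured along these lines; the AS number, RD, and addresses are illustrative.

! Per-customer VRF with a route distinguisher and matching import/export route targets
ip vrf CUST-A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
! Attach the CE-facing interface to the VRF
interface Serial1/0
 ip vrf forwarding CUST-A
 ip address 172.16.1.1 255.255.255.252
!
router bgp 65000
 ! VPNv4 session to the other PE's loopback address
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.255.0.2 activate
  neighbor 10.255.0.2 send-community extended
 exit-address-family
 ! Per-VRF address family carrying this customer's routes into MP-BGP
 address-family ipv4 vrf CUST-A
  redistribute connected
  redistribute static
 exit-address-family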
MPLS core networks, like any IP network, maintain a global routing table separate from the MPLS VPN VRF tables. This allows a provider to operate a typical Internet access service for customers not requiring VPN services and keep that service transparent to the MPLS VPNs used by VPN-subscribing customers. MPLS VPNs can also carry Internet access routes within their own VPN, specific to that VPN only. This can add a level of performance and security separation from a globally shared Internet access service, as might be typical of a generic ISP connection. MPLS VPNs can scale to a very large size. Unlike many traditional networks, customer routing knowledge is not maintained in the MPLS core network. The customer routing intelligence is maintained on the MPLS edge (PE), allowing simple but fast switching within the MPLS core (P). Overall network scalability is enhanced because MPLS VPNs are virtually unlimited as to the number of sites that can reside in an MPLS VPN's VRF table, subject to the hardware and software capacities of the routers used in the network. You can find a list of additional resources on this topic in the "Recommended Reading" section at the end of the chapter.
MPLS Layer 2 VPNs
The appeal of MPLS Layer 2 VPN technology is that it facilitates convergence for existing providers desiring to consolidate their overlay networks of leased line, Frame Relay, and ATM into a single network platform. For new providers, it complements the Layer 3 solution of MPLS VPNs, allowing the new provider a full array of network offerings. By accommodating Layer 2 VPN technology, MPLS backbone networks become the epicenter of convergence, porting and integrating existing Layer 2 services onto the same physical network used to provide Layer 3 MPLS VPNs. Layer 2 networks still accommodate some very important requirements for customers:
• Support non-IP traffic such as Novell's IPX, Apple's AppleTalk, and Digital Equipment Corporation's DECnet
• Customer routing control over the WAN
If a customer isn't ready to outsource Layer 3 IP routing services to a provider, then Layer 2 VPN services are most likely to lead the product discussion. The goal of convergence is to provide many services over one network. Part of those services is a flexible offering of conventional access technologies such as Frame Relay, ATM, HDLC, and PPP, augmented with the more contemporary conduits of Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, and beyond. In this chapter, you've already learned how MPLS is access agnostic, using nearly any access technology to create routed, network-based Layer 3 VPNs. Yet, if the customer desires to maintain Layer 3 routing control, the solution requires Layer 2 point-to-point, multipoint, or interworking of services between Frame Relay, ATM, Ethernet, PPP, and others. An MPLS core network provisioned with features for Layer 2 VPNs is one answer.
Using the appropriate signaling extensions within MPLS, MPLS core networks can also provide Layer 2 connections at the edge of the network to facilitate migration and integration and provide enhanced feature support of Layer 2 offerings similar to those of Layer 3. Layer 2 VPNs over an MPLS network can take the following forms:
• Any Transport over MPLS (AToM)
• Virtual Private LAN Service (VPLS)
AToM allows the construction of Layer 2 VPNs using a variety of like-to-like and any-to-any access connection technologies. VPLS is essentially an Ethernet VPN technology that uses MPLS features to enhance the manageability and scalability of multipoint Ethernet over a provider or large enterprise MPLS network. AToM also supports a virtual private wire service (VPWS), which can be used to create Layer 2 VPNs on a point-to-point basis.
Layer 2 Any Transport over MPLS
AToM is a Cisco feature for transporting Layer 2 packets over an IP/MPLS network backbone. AToM provides point-to-point connectivity for several types of media. AToM allows for Layer 2 provisioning and support similar to the existing circuit-switched environment, providing circuit-based services in addition to the newer Layer 3 packet-based IP services. At Layer 2, AToM allows transparent trunking of the customer's IGP routing while providing like-to-like and any-to-any connectivity between broadband access types. AToM also meets the requirement to scale Frame Relay and ATM edge services across a high-speed MPLS core network that can reach OC-192 speeds and beyond. Therefore, AToM supports several non-VPN-related as well as VPN-related functions. Layer 2 VPNs are classified as VPNs because they share a portion of the MPLS network; that is, they're not dedicated end to end as a traditional private line circuit is, but rather are virtual. Layer 2 VPNs are created by forming Layer 2 tunnels across an MPLS network. Instead of VPN labels based on IP prefixes, as in the MPLS Layer 3 VPN example, AToM uses the concept of Layer 2-emulated virtual-circuit (VC) labels, which represent the customer physical interface at the PE router. These VC labels are dynamically created and are distributed over the MPLS network using the MPLS Label Distribution Protocol (LDP), rather than the MP-BGP protocol used in MPLS Layer 3 VPNs. The VC label is one of two labels that surround the Layer 2 data frame: the inner label is the uniquely identifying VC label, and the outer label is the tunnel label that allows a source PE router to follow a Layer 2 path across the MPLS core network to the destination PE. The inner VC label is bound to the particular Layer 2 egress interface and circuit on the PE router, and this information is shared across the MPLS network with the destination PE, forming a provider edge pair. The outer label functions to quickly switch the VC-labeled packet or frame to the other end of the Layer 2 tunnel (see Figure 4-12). Features are also available to provide traffic engineering services, like/unlike media interworking, and QoS services.
Figure 4-12 AToM Logical Topology and Labeled Frame (Source: Cisco Systems, Inc.)
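As a simple illustration of the xconnect-style provisioning used for AToM pseudowires, an Ethernet over MPLS port-mode configuration on one PE might be sketched as follows; the peer address and VC ID are illustrative, and the same VC ID must be configured on the remote PE.

! MPLS and LDP must be running toward the core so the VC label can be signaled
mpls ip
mpls label protocol ldp
!
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
! Customer-facing Ethernet port cross-connected to the peer PE (10.255.0.2), VC ID 100
interface FastEthernet2/0
 no ip address
 xconnect 10.255.0.2 100 encapsulation mpls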
In this way, AToM can provide virtual leased-line services in a point-to-point manner across an MPLS network, which can be used to build Layer 2 VPNs for customers. An example of this type of application is connecting a pair of customer sites using Ethernet over MPLS via AToM, allowing the transmission of protocols such as IPX and AppleTalk. The customer can choose from a variety of Layer 2 protocols: ATM, Frame Relay, TDM, PPP, HDLC, and Ethernet. Figure 4-13 shows this concept.
Layer 2 VPNs using AToM are also particularly useful for transporting Ethernet LANs and VLANs across a provider's MPLS network. Advanced enterprises often use virtual LAN (VLAN) technology to create multiple logical Ethernet networks over the same physical cabling infrastructure within the campus. When planning for the provision of Ethernet MAN or WAN services at Layer 2, there must be mechanisms to provide service-level guarantees for end-to-end traffic as well as for interapplication priority within the traffic, because Ethernet does not inherently contain these features. Using Ethernet over the connection-oriented, path-switching capability of an MPLS network allows bandwidth to be reserved and QoS to be administered. In regard to QoS, the Layer 2 class of service bits (802.1p bits) in the Ethernet frame are mapped to the MPLS experimental bits contained in the MPLS label headers. At the destination PE, the MPLS experimental bits are remapped back into the original 802.1p bits in the class of service portion of the Ethernet frame. This allows application priority to be maintained as traffic moves from the customer's source Ethernet LAN, across the provider MPLS network, and onto the customer's destination LAN with priorities intact. This is an important feature to ensure service differentiation for Voice over IP (VoIP), video, and mission-critical data applications versus standard data traffic. Figure 4-14 shows the use of the MPLS experimental field to carry the Layer 2 802.1p information.
Figure 4-13 Virtual Leased Line with Cisco AToM (Source: Cisco Systems, Inc.)
Figure 4-14 MPLS Experimental Field with 802.1p Priority
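The 802.1p-to-EXP mapping shown in Figure 4-14 is commonly applied with the modular QoS CLI at the ingress PE; the following is a minimal sketch, with class names and values assumed for illustration.

! Classify on the incoming 802.1p (CoS) value of the customer VLAN traffic
class-map match-all VOICE-COS
 match cos 5
!
! Copy the priority into the MPLS EXP bits as the label is imposed
policy-map SET-EXP
 class VOICE-COS
  set mpls experimental 5
 class class-default
  set mpls experimental 0
!
interface GigabitEthernet1/0
 service-policy input SET-EXP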
With AToM at Layer 2, no address resolution (ARP mediation) is required. AToM also provides flexibility by interworking multiple access technologies at Layer 2, such as Ethernet and Frame Relay. This can be accomplished by using a mutual encapsulation such as PPP to tie the pair of technologies together. The AToM feature set supplies this interworking capability.
AToM will continue to be enhanced, providing support for Packet over SONET interfaces (PoS over MPLS) and TDM over MPLS. AToM provides multipoint transparent LAN services for Ethernet using a Cisco solution called VPLS, discussed in the next section. AToM is a definitive component of any Layer 2 solution because it provides the basic encapsulation and signaling techniques required to emulate Layer 2 circuits over an MPLS network. AToM also inherits the traffic management capabilities of the MPLS network to provide service-level guarantees. The same PE routers in an MPLS network can run both Layer 2 AToM-based services and Layer 3 MPLS VPN services, making the integration of Layer 2 and Layer 3 possible, functional, and cost-effective for converged networks.
VPLS (Layer 2 Ethernet Multipoint Services)
Virtual Private LAN Service (VPLS) is emerging as an alternative multipoint Ethernet technology. While previous Layer 2 VPN discussion has involved Ethernet over MPLS, the context has represented Ethernet as a point-to-point service. VPLS enables Ethernet multipoint services over a packet network infrastructure. VPLS is a Layer 2 architecture coupled with an MPLS logical model and Cisco IOS MPLS features to create a scalable multipoint environment for running Ethernet optimally over metropolitan areas or WANs. The following sections discuss the need for VPLS, the VPLS logical model, the VPLS hierarchical model, and the Cisco IOS MPLS support for VPLS.
The Need for VPLS
The swelling installed base of optical fiber, particularly in metropolitan areas, facilitates Ethernet's move from the LAN into the metropolitan area network (MAN) and onto the WAN. Having matured as a Layer 2 LAN technology within the campus networks of many businesses and enterprises, Ethernet is traditionally characterized by PCs, laptops, and other computers connecting via hard wiring, such as Category 5 cabling, to a nearby port on an Ethernet switch. Inherently multipoint in nature, Ethernet uses Layer 2 broadcasts, multicasts, and all-ports flooding of destination-unknown MAC addresses across the Layer 1 physical media to provide effective, high-speed LAN-based connectivity. To extend Ethernet into the wide area, these Ethernet concepts and building blocks must be re-engineered and replicated within the domain of the service provider's network technology. VPLS provides this framework.
VPLS Logical Model
VPLS is formed within service provider networks using the characteristic model of a shared, high-bandwidth physical infrastructure underlying a logical division of customers into their own unique VPNs.
These Layer 2 VPNs are said to be virtual with respect to the customer because the provider is solely responsible for physical connectivity within the metropolitan and wide area portions of the required network geography. The customer isn't concerned with the intimate details of the provider's VPLS connectivity, so from that perspective VPLS appears as a virtual connection. The customer's virtual connection acts as a data portal, transporting customer traffic into the service provider's VPLS domain, where the provider keeps customer signaling and data traffic private through the use of VPN and wide area VLAN capabilities. At a high level, customers perceive the provider's VPLS as a virtual wide area Ethernet switch, forwarding data frames to appropriate destinations within the customer's VPN, complete with Layer 2 unicast, broadcast, and multicast capabilities. Figure 4-15 depicts a logical view of VPLS.
Figure 4-15 Logical View of VPLS (Source: Cisco Systems, Inc.)
VPLS is commonly built on MPLS network structures within service provider networks, using the familiar concept of MPLS PE devices to create the logical, private customer VPNs. In its simplest form, VPLS is a collection of customer sites connecting to a number of PE devices that implement the emulated VPLS services. Once a customer's data packet reaches the PE router, the PE devices make the Ethernet frame forwarding decisions, switching frames across the packet-switched MPLS network using the selected Ethernet virtual circuit (EVC), often called a pseudowire. The PE devices use signaling protocols such as the MPLS Label Distribution Protocol (LDP) to create a logical full mesh of Ethernet virtual circuits within the provider cloud. An individual customer site can make a single Ethernet connection to the provider's VPLS network and be afforded reachability to multiple customer destination end sites, therefore creating a multipoint Ethernet service.
You might recall that the PE device in an MPLS network implements multiple VPNs by creating separate virtual routing and forwarding (VRF) tables, one per customer MPLS VPN. These VRF tables contain Layer 3 routing information, allowing the PE router to be leveraged across multiple customers while maintaining logical routing separation between them. In the context of an Ethernet VPLS service, the PE device maintains a virtual switching instance (VSI), which is essentially a unique Layer 2 forwarding table per customer VPLS. The provider's PE devices populate the individual VSI tables with the forwarding information required to switch Ethernet frames within the particular VPLS VPNs. Standard MAC address learning is performed by the PE's VSI function and is updated as new forwarding information arrives from customer ports on the network edge and from the Ethernet virtual circuits within the VPLS network core. In standard campus Ethernet deployments, Ethernet switching uses the Spanning Tree Protocol (STP) to generate a loop-free topology. In the case of VPLS, the loop-free topology is built using split horizon-based forwarding on the Ethernet virtual circuits within the PE devices. The resulting full mesh of loop-free virtual circuits provides direct connectivity between the PE devices that manage the VPLS VSIs. Figure 4-16 shows the basic components of a VPLS network service. The CE router's Ethernet circuit is terminated on an MPLS PE device that forms the VSI for this Layer 2 Ethernet VPN. The VSIs are populated with forwarding information that travels to other PEs via the defined EVCs.
Figure 4-16 VPLS Components (Source: Cisco Systems, Inc.)
Hierarchical VPLS
Ethernet unicasts, broadcasts, and multicasts must be accommodated and made scalable within the VPLS environment. Whenever Ethernet frames need to be flooded to all VSI ports because of broadcast, multicast, or destination-unknown unicast traffic, the network edge PE device performs data frame replication. Depending on the size of the VPLS VSI MAC table, this replication can stress processor and memory resources, so it becomes important to consider a Hierarchical VPLS (H-VPLS) design to minimize signaling and replication overhead. An H-VPLS design also allows providers to scale their metro Ethernet services beyond the VLAN-imposed limit of roughly 4000 subscribers. In the hierarchical VPLS model, two types of PE devices are defined:
• User-facing PE (u-PE)
• Network PE (n-PE)
The customer edge VPLS service connects directly to the u-PEs, which aggregate VPLS traffic before optionally providing an 802.1Q in 802.1Q (QinQ) trunking function into the n-PE, where the VPLS forwarding takes place based on the VSI. This double-encapsulation, IEEE QinQ tunneling function allows the QinQ trunk from the u-PE to appear as an access port to the n-PE, which connects to the core of the VPLS network. The n-PE can still support up to 4000 VLANs within a VSI, while multiple customer VLANs remain effectively invisible, trunked and transported within the n-PE's VLAN assignment. Using the H-VPLS design approach, separating the functionality between the u-PEs and the n-PEs, along with QinQ tunneling, allows service providers to better scale their Ethernet domains and Ethernet services across large geographies. Figure 4-17 shows the concept of a hierarchical VPLS network service.
Figure 4-17 Hierarchical VPLS (Source: Cisco Systems, Inc.)
Cisco IOS MPLS VPLS
Cisco IOS MPLS VPLS encompasses the Ethernet, MPLS, and management components that are essential to an end-to-end VPLS strategy. Cisco Systems first implemented VPLS functionality on its 7600 router series, which, in conjunction with MPLS, allows service providers to deploy, offer, and manage a complete Ethernet service portfolio, including point-to-point Ethernet services based on Ethernet over MPLS (EoMPLS) and multipoint Ethernet services based on VPLS.
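On platforms such as the Cisco 7600, a VPLS instance is generally expressed as a virtual forwarding instance (VFI) with LDP-signaled pseudowires to the other participating PEs and is then bound to a customer VLAN. The following sketch uses an illustrative VPN ID, VLAN, and peer addresses.

! VPLS forwarding instance for one customer, with a full mesh of pseudowires to the other PEs
l2 vfi CUSTOMER-A manual
 vpn id 100
 neighbor 10.255.0.2 encapsulation mpls
 neighbor 10.255.0.3 encapsulation mpls
!
! Customer-facing access VLAN
vlan 100
!
! Bind the VLAN interface to the VFI so its traffic is switched across the VPLS
interface Vlan100
 no ip address
 xconnect vfi CUSTOMER-A
!
interface GigabitEthernet1/1
 switchport
 switchport mode access
 switchport access vlan 100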
Layer 2 Tunneling Protocol version 3 (L2TPv3) VPNs
In addition to Layer 2 MPLS VPNs using AToM, or VPLS for multipoint Ethernet, another significant protocol exists for building Layer 2 VPNs across a native IP network, that is, a non-MPLS network. Layer 2 Tunneling Protocol version 3 (L2TPv3) allows providers to have a single core infrastructure over which to offer both IP and non-IP services. The IP and non-IP services can take the form of Layer 2 protocols such as PPP, HDLC, Frame Relay, ATM, Ethernet, Ethernet VLANs (802.1Q), and Packet over SONET/SDH. L2TPv3 helps providers migrate to a common IP core infrastructure that improves the utilization of the provider IP network and converges large portions of purpose-built networks such as ATM and Frame Relay. Using L2TPv3, providers or large enterprises can use an IP network core to bridge a pair of Frame Relay networks together; for example, an international IP network can connect a U.S. Frame Relay network with a European Frame Relay network. An IP core network can thus serve as a transit network linking noncontiguous networks together, such as multinational networks. L2TPv3 can also be used to help providers stage an ordered migration to an MPLS core network. L2TPv3 is an IETF standards-track protocol for transporting Layer 2 protocol data units (PDUs), often called pseudowire emulation services, across an IP packet-switched network. Based on the standard L2TPv2 protocol (RFC 2661) commonly used in remote-access networks, L2TPv3 significantly enhances multiservice, transparent Layer 2 transport, supporting the creation of Layer 2 VPNs across a network infrastructure running native IP and IP routing protocols. As with other Layer 2 VPN designs, this allows the customer to maintain control of IP routing and QoS policies. L2TPv3 is an encapsulation technique that occurs in the provider's network. It builds a tunnel between two provider edge routers, each of which connects to the customer's end site through any of the attachment circuit types. The tunnel connection is formed via a peer-to-peer session established between the two routers, specifically using their virtual loopback interfaces as the tunnel endpoint addresses. With the tunnel active, multiple emulated pseudowire services, also known as "calls," can be set up through an established tunnel based on individually assigned session ID parameters that are automatically negotiated at call setup time. Because the L2TPv3 feature supports a variety of access interface types, both like-to-like sessions and any-to-any (interworking) sessions can be formed through the applicable tunnel(s).
Both static and dynamic L2TPv3 sessions can be set up, with the static version essentially operating as a nonnegotiated, PVC-like service (when no signaling is desired) and the dynamic version using the L2TPv3 control channel to negotiate session establishment. Configuration for L2TPv3 occurs on the edge routers that make up the tunnel endpoints and is not necessary in the core IP network. This is similar to the benefit of MPLS label-switching networks, which have very little knowledge of the intricacies of customer routing information, allowing the core to scale. In addition, no special configuration is needed on the customer premises edge router; the only configuration needed is on the provider edge router that interfaces to the customer. L2TPv3 is architected into a control plane and a data plane. It uses two distinct components, or message types, called control connections/control messages and data connections/data messages. The L2TPv3 control plane is responsible for using control connections to establish tunnel and session setup, teardown, and operational monitoring of L2TPv3 status. The L2TPv3 data-forwarding plane is responsible for encapsulation and delivery of Layer 2 PDU frames as well as network layer packets. L2TPv3 defines a sophisticated header inserted between the provider's IPv4 header and the customer's Layer 2 PDU. L2TPv3 is configured on an interface using an xconnect CLI command, and subinterfaces are also supported, which is a common practice for multiplexing Frame Relay DLCIs onto a physical port interface, for example. The protocol number used for L2TPv3 is 115, which means that the IP packet header carries a value of 115 in the protocol field, instructing the software that the next header is an L2TPv3 header. Full support for native L2TPv3 is available for Cisco line cards that use Engine 3 and Engine 5 technology; line cards based on Engine 0, 1, and 2 have limited L2TPv3 functionality. Hardware acceleration for L2TPv3 processing is available in various Cisco router platforms. The Cisco implementation of L2TPv3 also supports IOS features such as QoS, multicast, NetFlow, and IPSec. Figure 4-18 shows the concept of L2TPv3 Layer 2 tunnel and pseudowire session setup. The provider's PE 1 and PE 2 routers have been predefined with xconnect CLI configuration information for the customer attachment circuits (CE 1 and CE 2). PE routers 1 and 2 form an L2TPv3 Layer 2 tunnel, through which the pseudowire sessions pass between the CE 1 and CE 2 routers. The unique session information that distinguishes this particular pseudowire session from any other is carried in the L2TPv3 header field between the provider edge routers. Figure 4-19 shows the concept of multiple pseudowire sessions carried through a provider IP network enabled for L2TPv3. Pseudowire session 1 flows through the L2TPv3 tunnel, carrying traffic from customer A router CE 1 to customer A router CE 2. A second customer is supported via pseudowire session 2, connecting customer B Ethernet LAN 1 to customer B Ethernet LAN 2.
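A simplified single-PE sketch of the xconnect provisioning described above might look like the following; the pseudowire class selects L2TPv3 encapsulation and the loopback used as the local tunnel endpoint, the peer address and VC ID are illustrative, and a mirror configuration is required on the remote PE.

! Loopback used as the local L2TPv3 tunnel endpoint
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
! Pseudowire class selecting dynamic L2TPv3 encapsulation and the tunnel source
pseudowire-class L2TPV3-CLASS
 encapsulation l2tpv3
 ip local interface Loopback0
!
! Attachment circuit: the customer-facing interface is cross-connected to the remote PE
interface Serial0/0
 encapsulation hdlc
 xconnect 10.255.0.2 50 pw-class L2TPV3-CLASS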
Figure 4-18 L2TPv3 Layer 2 Tunnel and Pseudowire Session Setup (Source: Cisco Systems, Inc.)
The figure annotates the setup sequence: (1) the xconnected circuit transitions to an active state; (2) provider edge 1 starts a control connection with provider edge 2 if one doesn't already exist; (3) provider edge 1 requests a call to be set up with provider edge 2; (4) provider edge 2 replies to the request and confirms that the call should be processed; (5) the negotiated session IDs are prepended to the pseudowire, and PDUs can be forwarded.
Figure 4-19 L2TPv3 Layer 2 Tunnel Transporting Two Pseudowire Sessions (Source: Cisco Systems, Inc.)
Multicast VPNs (MVPNs)
MVPNs for MPLS VPNs allow MPLS network providers to include support for multicast applications within MPLS Layer 3 VPNs. As enterprises extend the reach of their multicast applications, service providers can accommodate these enterprise applications over their MPLS core network, allowing IP multicast to stream video, voice, and data into and through an MPLS VPN core. Multicast is a bandwidth-conserving technology that simultaneously sends a stream of information to potentially thousands of workstation users. A stock quote service is an excellent example of a multicast application; the service updates the stock ticker and optimally replicates the source packets to those workstation users who've elected to receive the stock quote application. Windows Media Player is another example of an audio/video multicast application. Multicast applications use IP addressing from the globally assigned IP address range 224.0.0.0 to 239.255.255.255. IP multicast applications are currently used in private networks and across both the Internet and Internet2. Multicast is conceptually a one-to-many broadcast application style and is best described as a tree relationship. The tree trunk is the one source, and the leaves on the tree's branches form the many destinations. Not all leaves have to listen to the source; leaves might listen or not listen based on their current interest. Customers often have many multicast sources running concurrently, so the tree concept becomes a bit of a forest as multiple multicast trees are established in these networks.
To support large-scale distribution of data content and video-streaming applications, it is necessary to support IP multicast technology within a customer's intranet. Customers subscribing to MPLS VPN services often require multicast applications to cross provider-managed MPLS VPNs in order to reach remote customer locations on the other side. Multicast technology contains the functionality to replicate IP packets at the network point closest to a group of multicast users. It does this by sending a single packet or stream of packets to a multicast group address on a router that is responsible for packet replication for any clients that have joined at that point of the multicast tree. Figure 4-20 shows the concept of a customer routing a multicast session across a provider IP network. The multicast source at the customer's main location sends multicast packets into the provider network. The provider network has connectivity to customer remote sites A, B, C, and D, where each site has a PC participating in the multicast session. The provider IP routers must be aware of the customer's multicast addressing and route the multicast packets to the proper provider edge routers, replicating the multicast stream as necessary to deliver copies to each site receiving the multicast.
Figure 4-20 Multicast Distribution Across Provider Network (a multicast source at the customer's main site sends a stream into the service provider cloud, which performs multicast packet replication toward Receivers A through D at customer sites A to D) Source: Cisco Systems, Inc.
The Need for Multicast VPNs for MPLS

The MPLS VPN based on RFC 2547bis technology is a unicast-only routing topology. Historically, with the early deployments of MPLS VPNs, it was necessary to implement multicast technology in a nonnative MPLS mode. In other words, it was necessary to create Generic Route Encapsulation (GRE) tunnels between multicast rendezvous points (RPs) as an overlay, with more RPs requiring more GRE tunnels in order to maintain a mesh of multicast communication. Much like internal BGP, multicast rendezvous peering routers require a full mesh to maintain consistent routing information between them; with n rendezvous peers, n*(n-1)/2 logical/physical connections are needed to create the full mesh. For 10 devices, this equates to 45 connections.

Creating a full mesh of GRE tunnels as linkage between multicast RPs on a per-VPN basis is neither scalable nor optimal for a large number of VPN customers desiring large multicast applications and good performance. It is desirable to carry multicast traffic across the provider network without causing design changes to the customer's multicast components within that customer's own network domain.
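To make the overlay approach concrete, each pair of rendezvous points would be linked by a point-to-point GRE tunnel running PIM, one tunnel per peer, per VPN. A minimal sketch of one such tunnel leg (addresses and interface numbers are hypothetical, and syntax varies by IOS release):

! One leg of the RP-to-RP GRE mesh; repeat for every other RP peer
interface Tunnel10
 ip address 192.168.10.1 255.255.255.252
 ip pim sparse-mode
 tunnel source Loopback0
 tunnel destination 172.16.2.1
!
! With 10 RPs, 45 such tunnels are needed to complete the full mesh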
Introduction to Cisco Multicast VPNs (MVPNs)

Cisco introduced MVPN support for MPLS VPNs to address these issues. Conceptually, Multicast VPN is like a customer-multicast-in-provider-multicast encapsulation technique that maintains transparency of the customer's multicast application environment from the provider's multicast transport requirements. The multicast technique used for the Cisco Multicast VPN support is Protocol Independent Multicast (PIM). The following discussion serves as a brief overview.

The multicast VPN for MPLS architecture is introduced with the primary building blocks of multicast VRFs, Multicast Tunnel Interfaces, Multicast Distribution Trees, and Multicast Domains.
Multicast VRFs (mVRFs)

In the earlier section "MPLS Layer 3 VPNs," you learned that a VRF is created in an MPLS PE router on a per-customer basis. This VRF is a unicast routing and forwarding table containing the customer's IP routes, which are placed in the customer's assigned VRF table and distributed to other PE routers configured with the same VRF, representing that customer's VPN.

For the PE routers to also carry customer multicast traffic, the VRF has to be aware of multicast routes. Configuring multicast routing for an established customer VRF creates an associated multicast VRF (mVRF) table for that same customer within the PE. Any multicast routing (identified by the use of multicast IP addressing) received from the physical customer interface into the VRF now populates the mVRF table on the PE with multicast routing entries, each entry representing a multicast tree of one or both tree types: source tree or shared tree. If the customer has multiple multicast groups, then multicast routing entries for each will be present in the mVRF, representing the numerous multicast groups. Therefore, from the PE router's mVRF viewpoint, the PE can see several of the customer's multicast trees. A requirement of PIM operation is a reverse path forwarding (RPF) check to verify that a multicast packet received by a PE router can be traced back to its directly connected source interface; a quick check of the associated unicast VRF for that customer VPN serves as the RPF functionality for the mVRF operation.

Now that customer multicast routing entries are present in the PE's mVRF table, the challenge is to distribute this routing information and subsequent multicast data packets as efficiently as possible across the MPLS core network (the P routers) in order to reach a destination PE router that interfaces with customer sites interested in participating in multicast applications. To make this scalable, the MVPN architecture limits the amount of multicast routing information known by the MPLS P routers to a deterministic amount and maintains complete customer multicast transparency from the provider's multicast network. This leads to the discussion of the PE's Multicast Tunnel Interface (MTI).
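A minimal sketch of how an mVRF might be brought up on a PE for an existing customer VRF (the VRF name, RD, and interface are hypothetical, and command syntax varies by IOS release):

! Existing unicast VRF for the customer VPN
ip vrf CUST_A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
! Enable multicast routing globally and for the customer VRF;
! the per-VRF command creates the associated mVRF table on the PE
ip multicast-routing
ip multicast-routing vrf CUST_A
!
! Run PIM toward the customer CE so customer multicast entries
! populate the mVRF
interface Serial0/0
 ip vrf forwarding CUST_A
 ip address 10.1.1.1 255.255.255.252
 ip pim sparse-mode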
Multicast Tunnel Interface (MTI) and Multicast Distribution Tree (MDT)

The MTI appears in the PE's mVRF table as an interface called Tunnelx, with x representing the tunnel number. The MTI acts as a gateway between the customer multicast information in the PE's mVRF and the service provider's global multicast backbone (made up of the collection of MPLS P and PE routers). The MTI is used to send both multicast control and customer multicast data information from its PE mVRF into the provider core multicast network via the global native multicast entity called a multicast distribution tree (MDT). The provider network MDT can be thought of as a provider network multicast tunnel; it is responsible for carrying customer multicast information and any multicast control information across the MPLS P router network to reach the destination PEs participating in the customer's multicast groups. It's helpful to think of the provider network MDT as the multicast IP routing version of the MPLS label-switching process used between PE nodes for MPLS VPN routing.

The MDT is a multicast tree within the core of the provider network connecting the interested PEs that are part of the same customer multicast VPN. The mVRFs can have several multicast groups and trees resident, but all of this is mapped to a single multicast entry (S,G or *,G), encapsulated with GRE, and sent through the provider network more efficiently. Therefore, the MPLS P router portion of the network uses only native multicast, which minimizes operational risk and avoids any MPLS P router software upgrades that would otherwise be required to accommodate this MVPN functionality. The MPLS P routers use their native multicast function to build a default multicast distribution tree (default-MDT) between PE routers that are within that multicast domain. Each mVRF belongs to a default MDT.

Figure 4-21 shows the concept of these MVPN components for a single customer with one mVRF per PE, linked together with one default MDT through the MPLS provider core network. The PE's MTI can be seen as an access ramp, and the PE-to-P-to-PE path of the MDT is the fast highway across the MPLS core network to reach the off-ramp MTI at the destination PE router(s). There will be one MTI per PE mVRF and one MDT per mVRF domain in an MPLS network enabled for MVPN. Within the Cisco MVPN feature, there is also an optional mechanism to support high data-rate multicast applications more optimally through the use of a data MDT, which sends multicast packets only to those PE routers that have active receivers for that particular multicast group or groups. A configuration sketch of the default and data MDTs follows the adjacency list below.

For a multicast domain, PIM adjacencies exist between
• The PE router mVRF and the customer's CE router
• The PE router mVRF and other PE routers with the same mVRF name
• The PE router-to-P router global multicast routing instance
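As a sketch of how the default and data MDTs described above might be tied to the customer VRF (the group addresses and threshold value are hypothetical, and syntax varies by IOS release):

! Extend the customer VRF created earlier with its MDT groups
ip vrf CUST_A
 ! All of this customer's multicast traffic rides the default MDT group
 mdt default 239.192.10.1
 ! (S,G) streams exceeding roughly 500 kbps move to a data MDT group
 mdt data 239.192.20.0 0.0.0.255 threshold 500
!
! The global P/PE instance only needs native multicast and PIM
interface GigabitEthernet0/1
 description Link toward MPLS P router
 ip pim sparse-mode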
Figure 4-21 Multicast VPN in an MPLS Network (customer multicast routing from each CE enters an mVRF on its PE; customer mroutes are mapped to a GRE tunnel for the MDT, and the default MDT carries them between the PEs' MTIs across the MPLS provider network's native IP multicast core, from the multicast source at the customer's main site to Receivers A through D at customer sites A to D) Source: Cisco Systems, Inc.
Multicast Domains (MDs)

An MD, then, is the collection of mVRFs that can send multicast traffic to each other. The mVRFs in multiple PE routers associated with customer A, for example, make up a multicast domain into which all of the customer's multicast groups are mapped and transported. The MD can be thought of as the PE mVRFs, with their respective MTIs, and the global MDT. Multiple MDs are involved when multiple different customers desire to use multicast applications across an MPLS VPN.

Based on the above, therefore, an MVPN multicast domain provides:

• Enterprise multicast application carriage for customers who subscribe to an MPLS VPN service
• Transparency between the customer's multicast environment and the provider's multicast backbone transport
• An optimized MPLS provider multicast network separated from the specifics of customer multicast information
• The ability to scale and deliver high-performance multicast support to hundreds or thousands of MPLS VPN customers
210
Chapter 4: Virtual Private Networks
There are also a number of advanced features for MVPN support known as Source Specific Multicast (SSM) for MDT groups, inter-AS MVPN, and enhancements for extranet multicast.
Source Specific Multicast (SSM)

SSM is a newer mode of Protocol Independent Multicast (PIM) that builds multicast trees rooted at a specific source. In this way, an MPLS PE router can directly join a multicast source tree rooted at another PE in the multicast domain. To locate the source of the multicast, source discovery is performed using the same MBGP routing process that distributes customer unicast routes within an MPLS VPN; MBGP can now use a new BGP address family and the MPLS route distinguisher to propagate the multicast source information between MPLS PEs. This eliminates the need for PIM RPs in the service provider's MPLS network. Without RPs, multicast forwarding delay is reduced, along with the management and administration burden and the potential single point of failure that the RPs represent. SSM works in conjunction with multicast hosts running Internet Group Management Protocol Version 3 (IGMPv3), which implements the source-filtering mode necessary for hosts to work with SSM.
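A minimal sketch of enabling SSM and IGMPv3 on a router (the interface name is hypothetical; the BGP source-discovery pieces described above are omitted, and syntax varies by IOS release):

! Enable PIM-SSM for the default SSM range (232.0.0.0/8);
! no rendezvous point is needed for these groups
ip multicast-routing
ip pim ssm default
!
! Receivers signal (source, group) interest with IGMPv3
interface GigabitEthernet0/2
 description Receiver-facing interface
 ip pim sparse-mode
 ip igmp version 3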
Inter-AS MVPNs

The inter-AS multicast feature allows multicast VPNs to span two different MPLS autonomous systems. These could be within the same service provider, or they could be different service providers. Modifications to BGP to add a connector attribute are necessary to send the appropriate multicast information on an interdomain session between different MPLS networks. This type of requirement could represent a multicast application needing to be supported nationally or internationally, beyond the boundaries of the primary MPLS provider.
Extranet MVPNs

The MPLS MVPN extranet feature offers extranet users outside of a company's MPLS VPN unicast connectivity without compromising the integrity of the intranet MPLS VPN. This is being extended to include multicast connectivity to the extranet user community as well. The MVPN extranet feature is handled through routing configuration and policies. In practice, the feature allows multicast content to span different MPLS VPNs. A customer VPN can source multicast content into another customer's separate VPN by using the import/export functions of MPLS VPNs with multicast traffic. This feature can allow content distribution between separate enterprises or between a provider and many different VPN customers.
NOTE
Another feature to come will be MPLS multicast VPN support for IPv6 environments.
Extranet VPNs

Extranet VPNs connect a company with its customers, suppliers, and other business partners, providing them with limited access to specific portions of the company network for purposes of collaboration and coordination. Extranets are extensions of private intranets and, historically, were built with leased-line connections, then Frame Relay connections, and, more recently, Internet-based VPN connections. The obvious need for extranet connections is driven by B2B requirements, including the following:
• Streamlining order entry systems
• Employing just-in-time manufacturing processes
• Engaging external engineering and manufacturing design talent to improve time to market
• Integrating auditing and business consulting firms
As organizations have gained expertise with B2B partnering and e-commerce facilitation, they find that they need more partnerships, suppliers, and distributors to pursue more customers and a larger share of the customer's wallet. Extending conventional network connections to link these partners, suppliers, and distributors places a negative drag on time to deployment and impacts the profitability of B2B e-commerce execution, especially as partnerships and customer markets reach worldwide significance. The appeal of extranet VPNs via Internet-based connections is primarily twofold:
• Improves the ease and speed with which secure communications can be extended to new partners, even for temporary, per-project alliances
• Reduces or removes both capital and operational budget overhead associated with extranet VPN connections, the primary portion of which is allocated to recurring monthly costs for dedicated WAN circuits
Extranet VPNs use both IPSec VPN and SSL VPN technologies to establish secure communication over the Internet. Access methods might include both dial-up and persistent broadband Internet connections, but all access is subject to rigorous user identification and authorization controls. For example, extranet site/client authentication can use IPSec preshared keys or public key infrastructure (PKI) digital certificate solutions, just like intranet VPNs. It’s also a common practice to use a two-factor authentication method for extranet users through the use of soft token technology. Figure 4-22 shows the Cisco Secure VPN Client in an extranet VPN topology. In this example, clients establish a secure tunnel over the Internet to the hosting enterprise. A certification authority (CA) issues a digital certificate to each client for device authentication. VPN Clients might either use static IP addressing with manual configuration or dynamic IP addressing with IKE Mode Configuration. The CA server checks the identity of remote
users, and if approved, authorizes remote users to access information relevant to their particular business function.

Figure 4-22 Extranet VPNs (extranet partners, remote users, and telecommuters with VPN clients, plus a remote office, connect across the Internet to the hosting enterprise; VeriSign and Entrust/MSCA CA servers issue the digital certificates used for device authentication) Source: Cisco Systems, Inc.
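As a rough illustration of the site/client authentication choices just described, the following site-to-site fragment uses an IPSec preshared key; switching the ISAKMP authentication method to rsa-sig (with an enrolled CA trustpoint) would move the same design to PKI digital certificates. All names, keys, addresses, and interface numbers are hypothetical, and exact syntax varies by platform and IOS release:

! Phase 1 policy: preshared-key authentication
! (use "authentication rsa-sig" for certificate-based device authentication)
crypto isakmp policy 10
 encryption 3des
 hash sha
 authentication pre-share
 group 2
crypto isakmp key ExtranetKey123 address 203.0.113.10
!
! Phase 2: 3DES/SHA transform set and a crypto map tied to the partner peer
crypto ipsec transform-set EXTRANET-SET esp-3des esp-sha-hmac
crypto map EXTRANET-MAP 10 ipsec-isakmp
 set peer 203.0.113.10
 set transform-set EXTRANET-SET
 match address 101
!
interface Serial0/1
 crypto map EXTRANET-MAP
!
! Only traffic between the two partner subnets is protected by the tunnel
access-list 101 permit ip 10.10.0.0 0.0.255.255 10.20.0.0 0.0.255.255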
SSL VPN connections are also applicable to extranet VPNs. With SSL connections, the extranet user accesses the hosting organization’s intranet applications through the use of a web browser with native SSL technology. SSL is a good fit because there is no need for manual software deployment on workstations that the hosting company doesn’t own or manage. VPN concentrators at the hosting company can employ granular access control that can limit access to specific web pages or other internal company application resources. In the SSL VPN environment, strong authentication methods to validate and authorize extranet users are a necessity.
NOTE
With the growing sophistication of SSL VPN technology, extranet VPN connections based on SSL are a valid entry point for many organizations that are seeking limited partnering or are early in a B2B partnering life cycle.
MPLS networks are also capable of supporting extranet VPNs. Many service providers have MPLS networks in place and can offer intranet, extranet, and remote-access VPN services that enhance security, extend reach, add multiservice function and strong QoS, and lower costs. One of the MPLS VPN features that enables this ability is the import/export feature that is configured on a per-VPN (per-VRF) basis. An MPLS VPN can choose to import routes from outside of the intranet VPN (the extranet routes) through proper configuration policies. An MPLS common VRF can also be used, in which the extranet customer routes are resident. The intranet customer MPLS VPN can import and export chosen routes between itself and the common VRF to enable communication between these distinct MPLS VPNs.

Cisco VPN technology based on IPSec, SSL, and MPLS represents many complementary choices with which organizations can construct secure and cost-effective linkage with their business partners. The technologies also allow providers to create managed VPN services with which to develop a full-service IP VPN portfolio.
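A minimal sketch of the import/export approach just described, using a shared extranet VRF (VRF names, route distinguishers, and route-target values are hypothetical):

! Customer A's intranet VRF also imports routes exported by the shared VRF
ip vrf CUST_A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
 route-target import 65000:999
!
! Common extranet VRF holding the extranet resources; it imports selected
! customer routes and exports its own so the two VPNs can communicate
ip vrf EXTRANET
 rd 65000:999
 route-target export 65000:999
 route-target import 65000:100

In practice, import and export maps would typically filter which routes are allowed to cross between the VRFs.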
Multiservice VPNs over IPSec

Multiservice VPNs are normally classified by the capabilities of the platforms on which they are provisioned. For example, an MPLS VPN running on an IP infrastructure or on an IP+ATM infrastructure would fit this description. The MPLS VPN Layer 3 and Layer 2 platforms discussed previously would certainly fit the classification of a multiservice-capable VPN. Yet in the context of this chapter, multiservice VPNs are intended to describe multiple services of voice, video, and data over an IPSec site-to-site configuration.

A Cisco Systems multiservice VPN example that applies multiservice traffic over an IPSec tunnel is known as the voice and video-enabled IPSec VPN (V3PN). V3PN (Voice, Video, VPN = V-cubed or V3) is an emerging VPN technology designed to accommodate much higher quality VoIP and H.323 video services over the Internet within an IPSec VPN relationship. This is accomplished through better QoS, minimizing conditions that are detrimental to IP telephony and IP-based video traffic. Used with IPSec tunnels over a multiservice provider or potentially over the Internet, better QoS design and IPSec encryption are now available for VoIP and IP-based video.

The V3PN model is most applicable to the full-time teleworker or mobile professional. These users need full-time voice and video support within a VPN, in addition to full-time data networking. Using a VoIP softphone application, for example, a mobile professional can speak and listen via a site-to-site IPSec VPN tunnel, saving long-distance telephony charges through either the public switched telephone network or through a cell phone. Teleworkers can take their corporate office telephone number with them on the road, and once the IPSec tunnel is established and their presence registered, a customer call to their number is routed to their present location. Figure 4-23 shows the concept of the Cisco V3PN solution.
Figure 4-23 Cisco V3PN (a SOHO/telecommuter, a mobile worker, and branch offices connect through an access provider and the Internet to the headquarters site and Cisco CallManager across a multiservice service provider; telecommuters get the same user experience as in the corporate office with lower toll charges for voice calls, mobile workers get full corporate access with lower toll charges for voice calls, and branch offices see lower recurring costs and faster deployment) Source: Cisco Systems, Inc.
Using the security of 3DES, voice, video, and data traffic is simultaneously transported over the same IPSec VPN tunnels with QoS enabled for the higher-priority traffic. These IPSec VPN tunnels can be managed by the enterprise customer or offered by a service provider through a managed service offering. With this approach, IP telephony traffic is encrypted while traversing the IPSec VPN tunnel, yet the traffic is transparent to IP telephony management personnel.
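A minimal sketch of the kind of QoS treatment this implies on a V3PN WAN edge: voice is placed in a low-latency priority queue, and qos pre-classify lets the output policy match the original, pre-encryption headers. Class names, bandwidth figures, and interface numbers are hypothetical, and the crypto and tunnel details are omitted:

! Classify voice bearer traffic (marked EF) ahead of encryption
class-map match-any VOICE
 match ip dscp ef
!
policy-map V3PN-WAN-EDGE
 class VOICE
  priority 256
 class class-default
  fair-queue
!
! Allow the output policy to see pre-encryption DSCP markings
interface Tunnel0
 description IPSec/GRE tunnel to branch (crypto details omitted)
 qos pre-classify
!
interface Serial0/0
 service-policy output V3PN-WAN-EDGE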
In regard to IP telephony within V3PNs, the G.729 CODEC with 20 milliseconds (ms) of sampling and a transmission rate of 50 packets per second is the recommended scheme for typical IPSec VPN tunnel bandwidth. Also consider that the IPSec encryption and decryption process adds from 2 to 10 ms of delay per tunnel end, so you need to account for this in the overall delay budget.

When using VoIP over IPSec, the Compressed Real-Time Transport Protocol (CRTP) compression function is ineffective. The CRTP feature is used in typical VoIP environments to reduce the IP/UDP/RTP header from 40 bytes to about 5 bytes. With IPSec, the original IP/UDP/RTP header is encrypted before the compression stage, so the CRTP process no longer recognizes the media stream. Compression won't occur, and the IPSec-encrypted IP/UDP/RTP VoIP packet will bypass the RTP compressor function and continue through the tunnel.

A comprehensive, end-to-end QoS design is required at the campus, the WAN edge, and through the service provider Internet core. Voice and video quality is only as good as the quality of the weakest network link, so end-to-end QoS is critical. Latency, jitter, and packet loss all contribute to degraded voice and video quality. The network manager must understand the impact of the enterprise LAN as well as the level of service that the service provider must deliver to maintain satisfactory voice/video quality.

Figure 4-24 shows the recommended components of QoS for V3PN deployment with a site-to-site IPSec VPN. The service provider portion of the network should adhere to the values specified for one-way latency, jitter, and packet loss, and the V3PN user should be aware of the goal of 150 ms round-trip delay, selecting IPSec components and features that facilitate the required level of performance.
Figure 4-24 Components of QoS for V3PN Deployment (an end-to-end IPSec tunnel from the headquarters LAN and WAN access, across the multiservice service provider, to the branch office; service provider targets: one-way delay <= 60 ms, jitter <= 20 ms, loss <= 0.5%, with a goal of ~150 ms end-to-end delay) Source: Cisco Systems, Inc.
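As a rough per-call bandwidth check for the G.729 example above (the IPSec overhead figure is an assumption for tunnel-mode ESP with 3DES/SHA and will vary with the exact configuration):

20-byte G.729 payload + 40-byte IP/UDP/RTP header = 60 bytes per voice packet
assumed ESP tunnel-mode overhead (new IP header, ESP header and IV, padding, ESP authentication) ≈ 52 bytes
(60 + 52) bytes × 8 bits × 50 packets per second ≈ 45 kbps per call, before Layer 2 overhead

Without IPSec, the same call consumes roughly 24 kbps at Layer 3, which illustrates why the encryption overhead must be provisioned into the tunnel bandwidth.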
Additional design considerations for V3PNs are

• QoS and IPSec interaction—IPSec encrypts packets, including QoS markings; therefore, having VPN devices that can support QoS on IPSec-encrypted traffic is a crucial element for toll-quality voice and video across the VPN.
• Multicast support over the VPN—Much voice and video traffic is multicast. IPSec does not natively support multicast traffic. Having VPN devices that can support multicast across the VPN is critical to a Cisco V3PN solution.
• Support for low-latency network topologies—Having a meshed network topology is often important to reduce latency and jitter. The VPN device must be able to support meshed, not just hub-and-spoke, topologies.
• Firewall support for VoIP protocols—Many firewall solutions require passthrough of IP telephony traffic, because they cannot statefully inspect the traffic. Having a firewall that can support IP telephony is critical to the security of the Cisco V3PN solution.
Cisco V3PN offers the enterprise a lower-cost multiservice WAN and remote-access alternative that optimizes employee productivity while maintaining the stringent performance and security requirements of a private network. This provides a consistent way of connecting all users to the enterprise network regardless of location.
VPNs: Build or Buy?

IP VPN services require security, network reliability, and rapid scaling. Because security is a major component of VPNs, the customer's comfort level with security features and control of security policies is paramount to the customer's decision to in-source or outsource VPN technology, and customers with mature VPN environments might utilize a mixture of both. Network reliability covers a large area, from link reliability, to redundant equipment and topology design, to guarantees of network performance and QoS. Although security and network reliability are key decision points, express VPN provisioning and scalability are among the most important business drivers for VPN technology, perhaps second only to cost.
Enterprise-Managed VPNs

Because enterprises were the first to significantly deploy and manage IP-based applications within their enterprise WANs, adding support for VPNs was a logical extension of enterprise WAN capabilities, following the traditional model of WAN extension through remote-access and external partner solutions. As the public Internet and the IPSec standard reached critical mass, the desire to speed distribution of IT applications and optimize
network budgets through IP VPN technology made adoption a quick decision with nearly instant payback. Possessing IP, security, and Internet connectivity expertise, enterprises are well positioned not only to try IP VPN deployments but to operate them, perhaps into perpetuity.

According to a 2004 Forrester Research survey of telecom managers, greater than 50 percent of VPN users believe that in-house management gives them more control over VPN security, provider flexibility, and deployment timelines.2 The granting of security credentials and certificates, knowledge of security breaches, and dynamic adjustment of security policies are largely behind this sentiment. As more providers build up both IP and security skill sets, enterprises will weigh the cost benefits and reduced ownership challenges of outsourced IP VPNs.
Provider-Managed VPNs

Following the trend of outsourcing noncore business functions—and the growth of the private VPN in both size and complexity—enterprises might choose to use a service provider to manage the VPN. They might choose to manage the customer premises equipment (CPE) and negotiate a service-level agreement (SLA) with the service provider or have a service provider manage the whole WAN and IP VPN, including the CPE. Robust security, trusted intermediation, and customer control of security are key to the success of provider-managed VPNs. It is also important for providers to clarify the advantages of their managed IP VPN offerings and to stratify them based on specific markets and business verticals.

Service providers are well respected for managing high-availability networking environments and deployments. As enterprises and small and medium businesses (SMBs) begin to mature their IPSec solutions, they often need to add high-availability features such as dual VPN head ends and backup IPSec peers, doing so using software features such as Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP). This increases the complexity of these environments and can be an entry point for providers to market managed VPN services to these customers. The key is how to provide a VPN solution in such a way as to give customers the needed control over their security policies and applications, including management of end-user directories and the ability to control traffic management changes through QoS tuning. It is also a natural play to link managed security services such as firewalls, intrusion detection, and virus management products with VPN opportunities.

Service providers might also market their IP-based networks as a "more secure Internet" over which customers can conduct IPSec transactions with less security risk than the open Internet. While this might not serve the needs of a multinational VPN customer, it could be appropriate for the national or regional VPN customer set or for customers that closely match the provider's network geography.
Many organizations plan to converge IP-based voice and video traffic onto their VPNs. This convergence trend will require strict guarantees on latency budgets, QoS, and network performance while adding complexity to design and operational functions. QoS guarantees, service-level guarantees, and voice over VPN integration become differentiators. Providers should see this as an opportunity. Providers can move up the VPN value chain through content and application delivery services from provider-managed Internet data centers, as a trusted third party for intermediation of extranet VPN services, and through proficiency in design and consultation.

While large enterprises represent the core group of in-house managed VPNs, the SMB segment is a relative newcomer to VPN technology. The SMB market is less risk averse and less technologically deep, choosing to focus aggressively on core business development and partnering. For many SMBs, an IPSec VPN might be the only WAN the company chooses to deploy. Providers with a strong catalog of IP VPN offerings can explore opportunities within these customer segments.
Technology Brief—Virtual Private Networks

This section provides a brief study on VPNs. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding VPNs.
• Technology at a Glance—Uses figures and tables to show VPN fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment in order to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint

VPNs are all about IP accessibility. Large enterprise IT networks are all about private reachability. Profitability is all about quickly delivering service, decision data, and new product into the hands of the worker, partner, or customer, wherever they are.
Businesses are going where their customers are, extending the essential inputs and outputs of customer information, homesteading new frontiers of distribution, and sustaining business application computing at any time and from any time zone. The pursuit of customer centricity can take the whole organization with it, occasionally in a physical sense, yet frequently in a virtual sense. Organizations that can replicate themselves quickly and virtually on the customer's doorstep will succeed in this pursuit. VPNs become the computing backbones of these virtualized organizations.

For years, private networking was costly. Security of both company and customer information mandated exclusive use of network circuits and facilities to ensure data protection and performance. Then along came IPSec, an open standard technology set—creating a sort of software virtual circuit capable of protecting data but also allowing passage through the "hands" of many public carriers. IPSec is built for the Internet; the Internet is built for distributing IP. The vast expanse of the Internet represents the virtual steel and concrete over which global IP networking now travels. The Internet helps both customers and providers to extend the reachability of their virtual network footprint, fusing technology and transport into IP VPNs.

The Internet has helped fuel the growth of VPNs, allowing businesses to enhance and extend their network boundaries and services further than previously possible. Taking advantage of secure VPN technology, the Internet becomes a pervasive transport medium for remote access and global workers and easily extends intranets into partner networks for extranet process integration.

Service providers can participate in this IP VPN market with regional, national, and international IP networks. For providers, VPNs are a service foundation, a point of entry into managed IP services. Providers might build or enhance their networks to offer any or all VPN types—from access VPNs, to intranet VPNs, to extranet VPNs. Conventional Layer 2 VPNs can migrate from Frame Relay and ATM delivery to contemporary Layer 2 and Layer 3 IP VPNs. Existing VPN services are enhanced, while new VPN services are fashioned to exploit the flexibility of IP networks.

Service providers have a considerable opportunity to capitalize on VPNs. The reason for this is that IP VPNs carry service pull. First, these are IP services with built-in, world-aware intelligence and service adaptability. Second, VPNs allow customers to optimize private network expense, converge voice and data, and position for advanced IP services through provider assistance and out-tasking. Robust security, trusted intermediation, and customer control of security are key to the success of provider-managed IP VPNs. It is also imperative for providers to clarify the advantages and qualify the returns of their managed IP VPN offerings and to stratify them based on specific markets and business verticals.

Remote-access IP VPNs target the accessibility requirements of mobile professionals, teleworkers, and workday extenders. Access VPNs are used to deliver work to the worker, wherever they are. The IPSec open standard benefits the remote-access environment, helping to remove cost and bandwidth constraints through the use of lower-cost, flat-rate
broadband Internet access pricing. The appeal of SSL-based remote-access VPNs is growing. It is a prime advantage of SSL VPNs to create secure access from any supported web browser, across any Internet or ISP connection, and do it all without VPN client software management at the remote user workstation level. The emergence of SSL VPNs adds another level of price/performance and security granularity for companies to consider for remote-access IP VPN support. Thus, access VPNs using IPSec, SSL, and other technologies are leveraged across the Internet or across providers' shared IP infrastructure to create secure hooks back into the corporate network for private communications anywhere, at any time.

For intranet VPNs, IPSec site-to-site VPNs have been the rule, because they are both cost-effective and secure network extensions for growing businesses and enterprises. IPSec encryption is the appropriate VPN solution for customers that desire absolute data confidentiality. With IPSec's inherent secure tunneling capabilities, users can create site-to-site VPNs across networks such as the Internet, extending the reach of their business or large enterprise with less expense, reduced provisioning time, and fewer restrictions concerning long-haul or international transport providers.

MPLS VPNs are also in this market space. MPLS VPNs are at the same time convergence and innovation platforms for service providers, giving providers the capability to touch and manage customer IP routing services within the customer's own logical network instance. MPLS VPN technology allows service providers and large organizations to accommodate virtually any customer's requirement for remote access, intranet, and extranet VPNs. As a VPN feature layered on an MPLS network, MPLS VPNs support geographic customer networking, accommodating intranet, extranet, and Internet access applications, while interconnecting sites securely and flexibly. By also accommodating Layer 2 VPN technology, MPLS backbone networks become the epicenter of convergence, porting and integrating existing Layer 2 services onto the same physical network used to provide Layer 3 MPLS VPNs. MPLS and MPLS VPNs are also meeting a need for service providers looking to ascend to Layer 3 and to expand into IP services. For example, VPLS increases productivity and operational efficiencies by connecting geographically dispersed sites into one giant logical LAN over wide area Ethernet and MPLS. The popularity, pricing, and customer pull of Ethernet in the metro and wide area include requirements for multipoint Ethernet services, and VPLS is a leading technology alternative for consideration. Combining the best features of IP routing and switching, MPLS networks perform better, scale farther, and are easier to manage. For deploying Layer 2 VPNs without using an MPLS core network, the L2TPv3 protocol is used in conjunction with a native IP network infrastructure.

Extranet VPNs are natural extensions of intranet VPNs. The need for extranet connections is driven by B2B requirements: streamlining order entry systems, employing just-in-time manufacturing processes, engaging external engineering and manufacturing design talent to improve time to market, and integrating auditing and business consulting firms are but a few examples. Today, extranet VPNs are largely built on lower-cost, Internet broadband
access technology, especially as partnerships and customer markets reach worldwide significance.

The networking convergence of voice, data, Internet, and virtual access services makes VPNs a compelling vehicle for keeping everyone in touch. Businesses of all sizes can bypass the distractions of in-house internetworking services design, deployment, and management, better focusing on core processes that boost product innovation and customer service.

From access and intranet to extranet, from local to international, and from wired to wireless, providers are building on their VPN foundations, crafting new types of VPN offerings with which to engage their customers. The service foundation of today's VPNs not only augments the architecture of a provider's VPN framework, but also provides a strategic market position through which to harvest new revenues.

IP is the communications facilitator for the internetworking of virtual organizations. Globally and universally extensible across the Internet, IP is in vast abundance. The Internet keeps IP traveling faster and ever farther. VPNs keep it secure.
Technology at a Glance

Table 4-3 compares various types of VPNs.

Table 4-3  Comparison of VPNs

Throughput
  Remote-Access VPNs: Low
  Intranet VPNs: High
  Extranet VPNs: Medium/high
  Multiservice VPNs: High

No. of Tunnels
  Remote-Access VPNs: High
  Intranet VPNs: Low
  Extranet VPNs: Low
  Multiservice VPNs: Low/medium

Typical Access Speeds
  Remote-Access VPNs: 56K dial-up, broadband high-speed xDSL, cable, ISDN, wireless
  Intranet VPNs: Fractional T1/E1, T1/E1, fractional T3/E3, T3/E3, OC-3/STM-1 to OC-12/STM-4
  Extranet VPNs: 56K dial-up, broadband ISDN, xDSL, fractional T1/E1, T1/E1, fractional T3/E3, T3/E3, OC-3/STM-1 to OC-12/STM-4
  Multiservice VPNs: Fractional T1/E1, T1/E1, fractional T3/E3, T3/E3, OC-3/STM-1 to OC-12/STM-4

Architectures
  Remote-Access VPNs: Network access servers, client-initiated IPSec or SSL
  Intranet VPNs: IP tunnel, virtual circuit, or MPLS
  Extranet VPNs: IP tunnel, virtual circuit, or MPLS
  Multiservice VPNs: Converged voice, video, and data packet-based VPN service

Seed Technology
  Remote-Access VPNs: CPE and network-based IPSec; Layer 2 Tunneling Protocol (L2TP); Point-to-Point Tunneling Protocol (PPTP); Secure Socket Layer (SSL)
  Intranet VPNs: Network-based MPLS VPNs; CPE and network-based IPSec, Generic Route Encapsulation (GRE), IP, or IP+ATM
  Extranet VPNs: Network-based MPLS VPNs; CPE and network-based IPSec, GRE, IP, or IP+ATM; Secure Socket Layer (SSL)
  Multiservice VPNs: Network-based MPLS VPNs; CPE and network-based IPSec, GRE, IP, or IP+ATM; Secure Socket Layer (SSL)

Targeted Users
  Remote-Access VPNs: Mobile workforces and telecommuters
  Intranet VPNs: Businesses with remote branch offices and teleworkers
  Extranet VPNs: Businesses with suppliers, partners, customers, communities of interest
  Multiservice VPNs: Multisite businesses desiring converged IP data, IP voice, and IP video

Benefits
  Remote-Access VPNs: Offers a variety of access types for effective mobility
  Intranet VPNs: Provides full network access and routing features as part of a Wide Area Network (WAN)
  Extranet VPNs: Links internal network applications to external partners for process integration
  Multiservice VPNs: Supports data, voice, video, and scalable multicast applications; enables convergence of purpose-built networks
The following lists some of the applicable IETF standards that are used for IPSec site-to-site VPNs:

• RFC 2401, Security Architecture for the Internet Protocol
• RFC 2402, IP Authentication Header (AH)
• RFC 2403, The Use of HMAC-MD5-96 within ESP and AH
• RFC 2404, The Use of HMAC-SHA-1-96 within ESP and AH
• RFC 2405, The ESP DES-CBC Cipher Algorithm with Explicit IV
• RFC 2406, IP Encapsulating Security Payload (ESP)
• RFC 2407, The Internet IP Security Domain of Interpretation for ISAKMP
• RFC 2408, Internet Security Association and Key Management Protocol (ISAKMP)
• RFC 2409, The Internet Key Exchange (IKE)
• RFC 2410, The NULL Encryption Algorithm and Its Use with IPSec
• RFC 2411, IP Security Document Roadmap
• RFC 2412, The OAKLEY Key Determination Protocol
• RFC 2451, The ESP CBC-Mode Cipher Algorithms
• RFC 1191, Path MTU Discovery
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer’s value distinction. The following chart lists typical customer business drivers for the subject classification of networks. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider’s critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers. Figure 4-25 charts the business drivers for VPNs.
Figure 4-25 VPNs (charts typical business drivers for VPNs alongside the provider's critical success factors; seed technologies such as IPSec, SSL, cryptology, QoS, MPLS, L2TPv3, AToM, and VPLS; the Cisco product lineup, including Cisco IOS routers, the VPN 3000 Concentrator, the Cisco Secure VPN Client, and PIX Firewalls; applications ranging from intranet site-to-site, remote-access, extranet, multiservice, and multicast VPNs to MPLS VPN, Layer 2 VPN, VPLS, Ethernet Wire and Relay Services, and V3PN; and the resulting service value of managed IP VPN security and data services, with representative IP VPN service providers and equipment manufacturers)
End Notes

1. Kaplan, Ron. U.S. IP VPN Services 2005–2009 Forecast. Study #33117, March 2005.
2. Whitely, Robert. "IP VPNs: Build or Buy?" Forrester Research, January 27, 2005.
References Used in This Chapter

Pepelnjak, Ivan, Jim Guichard, and Jeff Apcar. MPLS and VPN Architectures, Volume II. Cisco Press, 2003.
Cisco Systems, Inc. "Cisco Dial Remote Access to MPLS VPN Technical Overview." http://www.cisco.com/en/US/netsol/ns341/ns396/ns172/ns126/netbr09186a008014bdff.html
Cisco Systems, Inc. "Cisco Remote Access to MPLS VPNs Business Solution Overview." http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns172/ns126/netbr09186a0080108163.html. (Must be a registered Cisco.com user.)
Cisco Systems, Inc. "Cisco Remote Access to Multiprotocol Label Switching Virtual Private Network Solution." http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns172/ns126/netqa09186a008009d688.html. (Must be a registered Cisco.com user.)
Cisco Systems, Inc. "Deploying IP Multicast VPN, A Cisco Networkers 2004 Presentation." Session RST-2702.
Cisco Systems, Inc. "Voice and Video Enabled IPSec VPN Solution Overview." http://www.cisco.com/en/US/partner/netsol/ns340/ns394/ns171/ns241/netbr09186a00800b0da5.html. (Must be a registered Cisco.com user.)
Cisco Systems, Inc. "Access VPNs and IPSec Protocol Tunneling Technology Overview." http://www.cisco.com/en/US/partner/products/sw/secursw/ps2138/products_maintenance_guide_chapter09186a008007da0d.html. (Must be a registered Cisco.com user.)
Cisco Systems, Inc. "SAFE VPN IPSec Virtual Private Networks in Depth." http://www.cisco.com/en/US/netsol/ns340/ns394/ns171/ns128/networking_solutions_white_paper09186a00801dca2d.shtml
Cisco Systems, Inc. "MPLS-Based VPNs: What's Possible For Enterprises." http://www.cisco.com/en/US/partner/netsol/ns341/ns121/ns193/networking_solutions_white_paper0900aecd800f911c.shtml
Cisco Systems, Inc. "Positioning MPLS, A Cisco Systems white paper." http://www.cisco.com/en/US/partner/tech/tk436/tk428/technologies_white_paper09186a00800b010f.shtml
Recommended Reading

Pepelnjak, Ivan, Jim Guichard, and Jeff Apcar. MPLS and VPN Architectures, Volume II. Cisco Press, June 2003. ISBN 1-58705-112-5.
Pignataro, Carlos, Ross Kazemi, and Bil Dry. Cisco Multiservice Switching Networks. Cisco Press, October 2002. ISBN 1-58705-068-4.
This chapter includes the following topics:
• Light—Where Color Is King
• Understanding Optical Components
• Understanding Optical Light Propagation
• Optical Networks—Over the Rainbow
• Understanding SONET/SDH
• Understanding RPR and DPT
• Optical Ethernet
• Optical Transport Network
CHAPTER 5

Optical Networking Technologies

Optical networking is the future-proof choice over which to build next-generation networks. With more communication speed and span width, and by being virtually interference and error free, optical networks are the foundation from which new-era provider networks are built.

Optical networking is a long and wide topic. To appropriately represent the optical technologies and network models that are deployable in the new era of networking requires more than a single chapter. In fact, there are complete books on a variety of optical topics, with more to come. This chapter introduces some of the more popular optical technologies in play today. Topics such as optical components, Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH), Resilient Packet Ring (RPR), wavelength division multiplexing (WDM), and optical Ethernet are reviewed. You learn more about optical applications for metropolitan networks in Chapter 6, "Metropolitan Optical Networks," and long-haul networks in Chapter 7, "Long-Haul Optical Networks."

Optical networking is the ascendant Layer 1 technology on which to build 21st century communications and deliver next-generation network services. It is becoming a worldwide transport for Ethernet, which is the reigning Layer 2 technology. At Layer 3, Internet Protocol (IP) completes the network building blocks for the creation of next-generation optical networks.
Light—Where Color Is King

Incandescence, luminescence, iridescence, radiance, and rainbows. All of these are manifestations of light. Optical networks use light and further distinguish the light by colors or wavelengths.

Perhaps the most significant discovery in telecommunications within the last 20 years is the technology to create nanoshades of light, pump them together through a flexible glass pipe, and uniquely detect and separate them at the other end. While this activity is in the range of infrared light, nonvisible to the natural eye, an invisible rainbow is surreptitiously there—the amalgamation of many photonic infrared colors traveling together, each carrying distinct information in parallel.
Optical networking has always been based on color distinction. Multimode optical fiber at 850 nanometers (nm) is shade-distinguishable from 1310 nm single-mode optical fiber. Both are subtle hues of semivisible red, with one used for short-range optical communications (850 nm) and the other for long range (1310 nm). For years, this pair of optical technologies and available optical components provided ample support for a solitary optical wavelength with which to carry a single bit stream. With the market outlook tied to bit-rate advances from the rising modulation capabilities of optical oscillators and amplifiers, a solo optical wavelength per fiber strand seemed to outpace demand, suggesting that the 20th century optical networking industry was future-proof.

The concept of jamming more packets into a single-lane bit stream to optimize available bandwidth reaches a point of diminishing return. It requires burgeoning intelligence at the point of entry—and at mile markers along the way—in order to count, sort, schedule, align, unjitter, launch, steer, and stop without accidental collision. Too much intelligence requires code and complexity and the chips and power to sustain it. Anything that proceeds in serial fashion has its limitations. One thousand automobiles traveling one direction of a two-lane highway are quickly outpaced and swiftly outdistanced by 1000 automobiles racing in parallel, each in one of 1000 lanes side by side, together seemingly as wide as they are separately long.

For lanes, think of optical wavelengths—lambdas—hundreds, thousands, and potentially millions of individually addressable, invisible, yet discriminate points of colorful, infrared light. Optical, then, brings many colors into the light. The appeal of the light is speed. The discovery of the light's distinct, colorful hues renders massive scalability, as each optical fiber strand is capable of carrying dozens to hundreds to potentially thousands of unique infrared wavelengths, allowing physical fiber to approach a virtually limitless capacity. The effectual use of both optical speed and wavelength scalability reduces the limits of time and distance into the range of the nanocosm, making the world ever smaller.

The latest generation of optical networking is perhaps better defined as the conveyance of color-propagated, massively paralleled information, whether by glass or by air. Color permeates everything from clothing to crayons, to cartoons, to communication optics. Color is intelligence, information, and illumination.
Understanding Optical Components

Optical components are the photonic tinker toys of optical networking. Optical light spectra can be insulated by glass fiber, plastic fiber, or air. A sunset or a rainbow generates millions of wavelength-specific photons traveling at the speed of light, insulated by air, striking and stimulating the eye's optic nerve. Today's optical networking components can also generate and manipulate multilambda light, typically through optical glass fiber, distinguishing each
infrared lambda as a separate logical thread of information communication. Optical components are responsible for generating both visible and infrared light, and for marshaling the light's propagation and detection.

The understanding of optical components begins with light and lambdas—sourced from the electromagnetic spectrum—and proceeds with optical fiber, light emitters and detectors, and optical multiplexers and demultiplexers. In addition, optical amplifiers such as erbium-doped and dual-band fiber amplifiers, couplers and circulators, filters and gratings, waveguides, and transponders round out a plethora of photonic components made of glass and used by optical engineers as the building blocks of optical networking.
Light and Lambdas

Everything seen by the human eye is visible light—a Monet painting, an evening sunset, a rainbow, the crescent moon, or the Aurora Borealis. The distance between these objects and the human eye is bridged by light made up of millions to zillions of photons of different wavelengths. Photons are little energy particles that often travel together in electromagnetic light waves. Photons are pure energy, weightless, and always on the move. Photons are the fastest things in the universe, traveling at the speed of light, 186,282 miles per second (300,000 km per second)—hence the desire to use photonic light in optical communications networking. The concept of photons, or the smallest glimmer of light, was first introduced in 1905 by Albert Einstein.

Take an atom with its orbiting electrons, apply some energy to it (often called excitation), and the atom's electrons tend to both absorb the energy and widen their orbit around the atom's nucleus. After an electron moves to a higher-energy orbit, it eventually wants to return to its original state, as would a stretched rubber band. When the electron does this, it releases the absorbed energy as a photon—the name for this particle of light energy. For example, when the heating element in your toaster turns bright red, atoms excited by heat radiate the reddish color through the release of red photons. Based on these principles of stimulating light with heat or another form of excitation, Albert Einstein introduced in 1917 the concept of stimulated emission of radiation, embodied in the term laser, which stands for light amplification by the stimulated emission of radiation. Lasers were developed by 1960 to create the light needed for optical communications.

The light visible to the human eye is only a tiny portion of the vast range of wavelengths that are in the known universe. Some wavelengths are good for carrying power current, some for interstellar reach, and others for communications. Many wavelengths can be re-created. Light of various wavelengths is mathematically referred to with the Greek symbol lambda. Lambdas, wavelengths, frequencies, and channels are terms used somewhat interchangeably within optical networking. The term lambdas is plural and implies more than one distinct optical signal color. Wavelength represents a lambda from a "length of wave per cycle" perspective. Frequency represents a lambda from a "number of wave cycles per second" viewpoint. Both wavelengths and frequencies are plural forms when
referring to two or more lambdas. Optical channels generally refer to the capacity of an optical system. For example, 32 channels imply 32 lambdas, without being specific about which distinct lambdas are being referenced.

Photons of various wavelengths, or infrared colors of light, can be used to create data streams of different lambdas down the same strand of fiber and then be separated into each distinct lambda at the other end. Much like a prism refracts visible white light from the sun into multiple colors, each with a distinct wavelength measured in nanometers or frequency, optical technology exists to launch and retrieve many colors of infrared light down a single-fiber thread, drastically multiplying available bandwidth by every individual lambda. The basic physical premise of multilambda communication is that optical signals of different wavelengths don't interfere with each other, allowing them to cohabit and propagate in the same fiber core. Perfect for WDM, a palette of photons makes single-mode fiber a massively parallel communication medium. Light and, more specifically, lambdas are found within the electromagnetic spectrum.
Electromagnetic Spectrum

The discovery of electromagnetism and its subsequent mathematical representation is credited to Scottish physicist James Clerk Maxwell. Essentially, everything in the universe oscillates, and when it does, it makes waves—or, to be more specific, electromagnetic radiation. It is generally understood that within the atmosphere are radio waves for AM, FM, and shortwave. Television signals and cellular signals are used along with microwaves and X-rays, all common terms to the general populace. The entire range of electromagnetic radiation, from low-frequency power to cosmic rays, is classified and identified within a continuum called the electromagnetic spectrum.

The portion of the electromagnetic spectrum that is leveraged for optical fiber communications is within the infrared light section, spectrally located between microwaves and visible light (see Figure 5-1). Infrared means "below red," from the Latin word "infra." The infrared portion of the spectrum starts "below" the red end of the visible light part of the spectrum, at about a 700 nm wavelength. Infrared proceeds from about 700 nm up to about 1 millimeter (mm). The subset of the infrared spectrum that is most often used for optical fiber communications is generally from 850 nm up to about 1625 nm.
Figure 5-1  The Electromagnetic Spectrum (frequency in hertz plotted against wavelength in meters, from AM radio near 1 kilohertz/1000 kilometers through shortwave, FM/VHF TV, microwaves, infrared (including the fiber-optic communications region), visible light, ultraviolet, and X-rays near 1 nanometer)

Range          Band     Description
1260–1360 nm   O-band   Original
1360–1460 nm   E-band   Extended
1460–1530 nm   S-band   Short Wavelength
1530–1565 nm   C-band   Conventional
1565–1625 nm   L-band   Long Wavelength
1625–1675 nm   U-band   Ultra-long Wavelength
Source: TeleGeography research © PriMetrica, 2005
TeleGeography explains the electromagnetic spectrum as follows: The laser light used in fiber-optic communications operates within a narrow band on the electromagnetic spectrum. Radiation (such as TV signals and light) on the electromagnetic spectrum can be measured by both frequency (the number of wave cycles per second, or Hertz) and wavelength (in meters). Frequency and wavelength are inversely proportional (that is, the higher the frequency, the shorter the wavelength), and either can be used to describe communications signals. For example, radio broadcasts are denoted in frequency—a 100-megahertz (MHz) frequency on the FM dial corresponds to approximately a three-meter wavelength. In contrast, signals on fiber-optic cables operate at much higher frequencies, and have tiny wavelengths—only 850 to 1,625 nanometers (billionths of a meter). In scientific literature, a wavelength often is denoted as lambda (λ). Individual wavelengths also are referred to as colors—an analogy to frequencies within the visible light spectrum. One of the more important objectives of fiber designers has been to design fiber that has a wider “window” or range of usable frequencies for light signals. The wider the usable band, the more distinct signals can be transmitted. This is determined in part by the composition of the fiber itself. Hence, some recent designs have extended the low attenuation window at 1550 nm (now called the C-band) to 1600 nm (called the L-band), allowing more signals to be transmitted. At the other end, scientists have eliminated water molecules that greatly increase attenuation at 1400 nm, releasing this band (the S-band) for possible future use.1
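The frequency-to-wavelength relationship quoted above is easy to check numerically. The short Python sketch below (an illustration added here, not part of the TeleGeography material) applies c = λf to reproduce the 100 MHz FM example and to show that a 1550 nm fiber signal sits near 193 THz.

```python
# Frequency <-> wavelength conversion using c = lambda * f.
# Illustrative sketch only; the values match the examples quoted in the text.

C = 299_792_458.0  # speed of light in a vacuum, meters per second

def wavelength_m(frequency_hz: float) -> float:
    """Return the wavelength in meters for a given frequency in hertz."""
    return C / frequency_hz

def frequency_hz(wavelength_m: float) -> float:
    """Return the frequency in hertz for a given wavelength in meters."""
    return C / wavelength_m

if __name__ == "__main__":
    # A 100 MHz FM broadcast corresponds to roughly a 3-meter wavelength.
    print(f"100 MHz -> {wavelength_m(100e6):.2f} m")
    # A 1550 nm fiber-optic signal corresponds to roughly 193 THz.
    print(f"1550 nm -> {frequency_hz(1550e-9) / 1e12:.1f} THz")
```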
Further leverage of the electromagnetic spectrum for optical fiber communications is closely tied to the use of the specific regions of the infrared spectrum where optical attenuation (signal loss) within glass fiber is low. These regions, called windows, lie
between areas of high molecular absorption that arise during fiber manufacturing. The earliest systems were developed to operate around 850 nm, the first window in silica-based optical fiber. A second window (S band), at 1310 nm, soon proved to be superior because of its lower attenuation, followed by a third window (C band) at 1550 nm, with an even lower optical loss. Today, a fourth window (L band), near 1625 nm, is also available and deployed in optical fiber systems. These four windows are shown relative to the electromagnetic spectrum in Figure 5-2. Figure 5-2
Typical Windows for Optical Design (wavelength scale from 700 nm to 1700 nm, between visible light and longer radio waves, showing the first window at 850 nm, the second window at 1310 nm, the third window (“C” band) at 1550 nm, and the fourth window (“L” band) at 1625 nm)
Source: Cisco Systems, Inc.
The visible and infrared portions of the electromagnetic spectrum are the raw material that photonic components are built to exploit. For optical communications, lasers and light emitters originate the light pulses, glass fiber carries the light pulses over long distances, and photon detectors receive the light pulses at particular communication intercept points. To command this portion of the electromagnetic spectrum, an engineer selects the optical components that meet the design requirements of the optical transmission system.
Light Emitters Light emitters refer to a category of semiconductors used to create both visible and infrared light. They are devices that convert electrical energy into light. Lasers were the first devices used to create light pulses, both visible and infrared. Today, lasers for use in optical communications are packaged inside semiconductor microchips and quantum well devices, and are generally referred to as laser diodes or light emitters. Light emitters are the source of light-based communications and are used to optically represent an electronic, binary bit stream by modulating (switching on and off) an optical
pulse stream or by blocking and unblocking the continuous output of a light emitter in a timed sequence. Different light emitters are designed to emit a particular wavelength of photons, injecting the light into the aperture of optical fiber for communications. Light emitter classifications include the following:
• Laser diodes—Laser diodes are complex semiconductors used with both single-mode and multimode fiber. Composed of indium gallium arsenide phosphide, they use narrower spectral wavelengths, exhibit higher power and faster modulation, and support higher bandwidths, securing their almost exclusive use in long-reach optical communications as well as WDM applications.
• Light-emitting diodes (LEDs)—LEDs are primarily made of gallium aluminum arsenide, have a wider spectral width, are used with multimode fiber, and are less expensive to manufacture than lasers. Common LEDs have a bandwidth limitation of about 622 Mbps. LEDs produce wide-spectrum coverage but have lower power than lasers. LEDs are generally used with multimode fiber for optical transmission distances of less than 2 km.
• Vertical-cavity surface-emitting lasers (VCSELs)—VCSELs are a more recent class of low-cost laser that has emerged as a light source for multimode fiber. VCSELs are currently designed around the 850 nm wavelength window. Of late, VCSELs have found promising applications in VCSEL arrays that can be mated to optical ribbon cable for short-reach, high-speed parallel optic interconnections.
Light emitters, particularly lasers, are increasingly tunable; that is, they are designed with the ability to alter their wavelength. You learn more about light emitters and tunable lasers in Chapters 6 and 7. Light emitters are key optical components that exploit the visible and infrared regions of the electromagnetic spectrum to generate light and launch light pulses into optical fiber. A brief introduction to optical fiber follows.
Optical Fiber Optical fiber, particularly glass-based fiber, is the physical transmission medium of choice. As a result, optical networking is the ascendant Layer 1 technology on which to build the new era of next-generation optical networks. Optical fiber is possible because of a rather complex integration of physics, minerals, and the resultant technologies. Optical fiber is composed primarily of silica (SiO2), a silicon dioxide chemical compound that results in very pure glass. Optical fiber is constructed into the three Cs:
• Core—The core is where the light travels.
• Cladding—The cladding surrounds the core and reflects the light to keep it moving within the core.
• Coating—The coating protects the glass fiber as an insulation from the elements and against machine and human handling.
The two general classifications of fiber are multimode fiber (MMF) and single-mode fiber (SMF). MMF uses a larger core through which to propagate light pulses. It is called multimode because the size of the core allows multiple “modes” of the exact same wavelength of light to travel the core simultaneously. SMF uses a smaller core, having the effect of allowing only a single mode or instance of any injected wavelength. MMF is less expensive to manufacture and is often used for short-distance optical communication, whereas SMF is used for long-distance and multilambda systems. Optical fiber has a theoretical information-carrying capacity as high as 30,000 Gbps.
Multimode Fiber Dimensions of optical fiber are traditionally measured in micrometers. In 1976, Corning developed 50-micrometer (measured at the core) MMF. This fiber type was the first to become commonly installed and has been used primarily in Japan and Germany, where 50 micrometer is a data standard. The United States standardized on 62.5-micrometer core MMF, which was developed in 1986. At the time, IBM endorsed 62.5-micrometer fiber because the larger size of the aperture compensated somewhat for the immature techniques of connector polishing and alignment. This larger fiber core size was also considered to work well with LEDs. AT&T followed and standardized on the 62.5-micrometer fiber, leading to the acceptance of the 62.5-micrometer size as an MMF optical standard. For MMF, the cladding diameter typically measures about 125 micrometers and, with the final protective coating, a cross-section of optical fiber measures about 245 micrometers across. The 50- to 62.5-micrometer cores of these fibers are wide enough to allow multiple modes of light propagation, each taking a slightly different path through the fiber core, hence the designation MMF. Figure 5-3 depicts multiple modes of the same wavelength traveling through the core of multimode fiber. Figure 5-3
Multimode Fiber Light Propagation (multiple modes of the same wavelength reflecting through the core of a fiber-optic strand, bounded by the cladding)
Source: Cisco Systems, Inc.
While 62.5-micrometer fiber is the typical U.S. standard for multimode, the 50-micrometer fiber is becoming increasingly important as low-cost 850 nm LEDs are being developed. Using 850 nm LEDs with 50-micrometer fiber allows for longer link distances and higher-speed transmission than using 62.5-micrometer fiber with the same 850 nm LEDs. For example, carrying Gigabit Ethernet (GE) over 850 nm, 62.5-micrometer multimode fiber, the usable transmission range is about 275 meters. Using 50-micrometer fiber, the range can be extended to 500 meters. For the 10 Gigabit Ethernet standard at 850 nm wavelength, 62.5-micrometer fiber will reach about 35 meters, current 50-micrometer fiber will extend that to 86 meters, and next-generation/premium 50-micrometer optical fiber will more than triple the distance to about 300 meters. MMF applications are useful for short transmission distances such as fiber patch cords, local area networks, and campus backbone applications less than 2 km. These multimode fiber cables are generally coated with orange insulation to distinguish them from single-mode fiber, which is often wrapped in a yellow plastic coating. Figure 5-4 shows the characteristic dimensions of the core and cladding for SMF and MMF. You learn more about SMF in the next section. Figure 5-4
Core and Cladding Dimensions of SMF and MMF (single-mode fiber: 8–10 micrometer core; multimode fiber: 50 micrometer or 62.5 micrometer core; all with 125 micrometer cladding)
Source: Cisco Systems, Inc.
Single-Mode Fiber SMF uses a much smaller core diameter, about 8.2 micrometers in Corning’s SMF-28 fiber, for example. This core diameter is five to seven times smaller than that of MMF and only allows a single mode of propagated light. When combined with narrow beamwidth laser diodes operating in the 1310 nm and 1550 nm windows, SMF allows for long-distance and high-bandwidth optical applications.
Just as in MMF, the cladding diameter of SMF reaches 125 micrometers, and the outer protective coating diameter usually measures about 245 micrometers; but that’s where the similarities end. It is this difference in core diameters that technically distinguishes SMF’s optical characteristics from those of MMF. SMF is the fiber classification of choice for long-distance optical communications when using a single lambda, as in pre-WDM optical systems, and for multilambda use in WDM, coarse wavelength division multiplexing (CWDM), and dense wavelength division multiplexing (DWDM). Figure 5-5 depicts a single wavelength traveling through the core of single-mode fiber. Figure 5-5
SMF Light Propagation (a single wavelength traveling through the core of a single-mode fiber strand, bounded by the cladding)
Source: Cisco Systems, Inc.
Not all SMF is alike. Since the 1980s, there have been a number of application-specific SMFs developed, each purposely designed for a particular optical installation. To illustrate some of these differences, it is useful to itemize some specific fibers from a particular fiber manufacturer, such as those developed by Corning. Some of these single-mode, application-specific fibers are
• Corning SMF-28—Often considered the SMF standard, this is perhaps the world’s best-selling fiber. It is an unshifted fiber that is optimized for time division multiplexing (TDM) transmission at 1310 nm. It is also useful for TDM at 1550 nm and WDM at 1550 nm, although it is not the best choice for those wavelengths.
• Corning SMF-28e—This photonic fiber is targeted at optical connectorization and optical component manufacturers, with versatility across the 1280 nm to 1625 nm range.
• Corning SMF-DS—A single-mode, dispersion-shifted fiber manufactured to specifically shift the dispersion peak to one side of the range of a 1550 nm laser source. This optimizes the fiber for TDM at 1550 nm, which is a single lambda source, but not for WDM use at 1550 nm, because WDM is multilambda, operating on both sides of the 1550 nm center point of the ITU-T G.692 optical wavelength grid.
• Corning SMF-NZ-DSF—This nonzero dispersion-shifted fiber is particularly optimized for both TDM and WDM use at 1550 nm wavelengths. There is both positive dispersion-shifted fiber (+NZ-DSF) and negative dispersion-shifted fiber (–NZ-DSF).
• Corning LEAF—This fiber is principally optimized for DWDM use in the 1550 nm band. LEAF stands for Large Effective Area Fiber, meaning that, among other things, it is especially optimized to support maximum DWDM channel plan flexibility. LEAF is a nonzero dispersion-shifted fiber with industry-leading polarization mode dispersion specifications, supporting immediate upgrades to 40 Gbps optical transmission systems and ultra-long-haul network distances.
• Corning VASCADE—A family of optical fibers used for submarine applications in harsh undersea environments. Submarine optical fiber cable is important because of its ability to quickly globalize the Internet and support intercontinental data traffic rates. Not long ago, geosynchronous (GEOS) satellite systems carried the burden of international voice and data traffic, but submarine optical fiber has mounted a storm surge of capacity in the last few years. If you’re going to build across or along continents, whether using amplified or nonamplified designs, you might be using some of the following: — Vascade R1000 fiber solution for transoceanic networks, generally 3000 km to 10,000 km distances. — Vascade U1000 fiber solution for short-haul submarine networks, generally 100 to 400 km in length. — Vascade LEAF fiber, optimized for submarine DWDM applications. — Vascade L1000 fiber, a positive dispersion, positive slope fiber, generally used in hybrid fiber applications. — Vascade LS+ brand of fiber, a negative dispersion, positive slope fiber, generally used in hybrid fiber applications.
The optical attenuation loss per kilometer of today’s optical SMF, expressed as decibels per kilometer (dB/km), is 100 times better than Corning’s original 20 dB loss per kilometer, considered the benchmark in 1970. At typical .20 to .22 dB/km of signal attenuation, premium SMF at 1550 nm wavelengths achieves longer distances before reamplification is necessary, which is of extreme importance to long-haul fiber applications. More information on dB/km loss is covered in Chapters 6 and 7. Optical fiber might be carrying a digital signal, but the light-propagating carrier is still an analog transmission. As a result, there are impairments that affect light propagation such as attenuation, dispersion, nonlinearity, and distortion. These impairments are present due to the nature of analog transmission, the presence of impurities in the glass core, and nonlinearities in the circumference of fiber. A large part of optical fiber transmission science is devoted to managing these impairments through shaping and compensation devices and sometimes through the use of application-specific fiber. However, that doesn’t negate optical fiber’s usefulness, as fiber offers error rates ten billion times lower and bandwidth rates ten billion times higher than copper wiring ever will. Optical fiber is the quintessential carrier for optical light communications. A whole scientific industry has grown up around optical fiber, and fiber is moving ever closer to the
business and residential interface of LANs and WANs. Providers use different types of fiber with different light emitters and light detectors to form optical communication networks for specific applications and for future-proofing network capacity and scalability.
Light Receivers Optical light receivers, also known as photonic detectors or, more colloquially, photodetectors, perform just the opposite function of light emitters. They detect light pulses at the receiving end of the optical fiber, converting them into an electrical energy representation that is proportional to the received photonic signal. These semiconductor-based photodetectors are the main component of light receivers. Called photodiodes with respect to optical communications, these semiconductors are primarily PIN photodiodes and avalanche photodiodes (APDs). PIN is an abbreviation that refers to a three-layer diode, where the P-type (positive layer material) is separated from the N-type (negative layer material) by an intrinsic layer of material, hence the name PIN diode. The basic premise of photodetectors is that upon receiving photons, they convert this photon energy into electrons, or electrical energy, referred to as photocurrent. Generally, the photocurrent is then amplified for signal reuse beyond the photodetector. APDs exhibit a statistical multiplication phenomenon known as the avalanche effect, which generates a larger number of electrons for every photon received. Low-speed, low-cost optical systems favor the price point of PIN photodiodes, while multigigabit systems prefer the higher-power conversion of APDs. Light receivers usually have a wider wavelength sensitivity to appropriately compensate for wavelength drift as affected by optical impairments. One of the keys to light receiver design and operation is the proper balance of receiver sensitivity, to distinguish between an optical signal and optical noise. Figure 5-6 depicts a photodetector receiving laser-originated photons through an optical fiber. Figure 5-6
Photodetectors as Light Receivers (a laser diode chip launches light pulses through a lens into an optical fiber; a photodiode at the receiving end converts the pulses into photocurrent)
Source: Cisco Systems, Inc.
Understanding Optical Light Propagation Now that you have a basic understanding of a few optical components, it is helpful to elaborate on the propagation (movement) of light and embellish on a few of the components. Sending light waves through optical fiber is a conversion from digital to analog. Electronic equipment electrically presents digital 1s and 0s to an optical laser, in effect modulating the electrical digital signal into representative light pulses that reflect through the core of optical fiber. The movement of these light pulses through the core of the fiber is an analog transmission technique. High-speed propagation, low-bit error rates, and immunity to electrical interference are prime and positive characteristics of optical fiber, much as they are in classic digital systems—but make no mistake, optical fiber transmission is analog from end to end. Optical light signals lose power (attenuation) and shape (dispersion) as they travel. These impairments are due to nonlinearities in the cylindrical shape of the fiber core, as well as the chemical composition of the glass and any impurities within. As minute as these impairments seem, they exhibit effects on analog waveforms. Microscopic impurities in the fiber absorb light energy and also work to scatter and misshape the optical signal. Attenuation and dispersion are common characteristics of analog transmission and determine the use and placement of optical amplification or regeneration as needed for a particular optical network design. To send light through an optical fiber, a laser will launch the light pulses made of photons into the fiber at a particular angle and amplitude. The properties of the fiber type and the characteristics of the laser light wavelength determine how many kilometers the light can travel and remain distinguishable from adjacent light pulses. An optical receiver needs to see between 10 and 40 photons per bit, depending on the optical detection components in use. As the initial amplitude of the optical signal attenuates as it passes through the fiber, it becomes necessary to reamplify that optical signal at some particular distance. Various light wavelengths experience amplitude loss differently in fiber, and specific regions of the optical portion of the spectrum are primarily used; in effect, windows where the optical signal attenuation is particularly low. It is these areas of low attenuation in the overall infrared spectrum that command the most focus of research and development in the optical networking market.
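The 10-to-40-photons-per-bit figure can be turned into a rough received-power number. The short Python sketch below (an illustration added for this discussion, not a vendor specification) multiplies the photon energy at 1550 nm by the photon budget and the bit rate; real receivers need considerably more power than this simple estimate because of thermal and amplifier noise.

```python
# Back-of-the-envelope receiver power estimate from a photons-per-bit budget.
# Illustrative only: practical receivers need more power to overcome noise.
import math

PLANCK = 6.62607015e-34   # Planck constant, joule-seconds
C = 299_792_458.0         # speed of light, meters per second

def min_received_power_dbm(photons_per_bit: float,
                           bit_rate_bps: float,
                           wavelength_m: float) -> float:
    photon_energy_j = PLANCK * C / wavelength_m          # energy of one photon
    power_w = photons_per_bit * bit_rate_bps * photon_energy_j
    return 10 * math.log10(power_w / 1e-3)               # convert watts to dBm

if __name__ == "__main__":
    # 40 photons per bit at 10 Gbps on a 1550 nm wavelength: about -43 dBm.
    print(f"{min_received_power_dbm(40, 10e9, 1550e-9):.1f} dBm")
```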
Table 5-1 lists these low-loss wavelength regions, which fall into four different windows, and presents their typical attenuation values. Table 5-1
Low-Loss Optical Windows/Wavelength Regions

Optical Window         Wavelength Region    Power Loss (Attenuation)
1st optical window     850 nm               3 dB/km (50% loss)
2nd optical window     1310 nm (S band)     0.4 dB/km
3rd optical window     1550 nm (C band)     0.2 dB/km
4th optical window     1625 nm (L band)     0.2 dB/km
Potential 5th window   1400 nm              Under research
As shown in Table 5-1, a loss of 3 dB/km represents a 50 percent loss in optical power after 1 km of travel at 850 nm. The 850 nm wavelength is most often used with short-reach optics and multimode fiber, generally up to 2 km maximum, a distance at which almost all of the original optical power has faded. This distance can be increased through amplification, but a periodic 2 km reamplification would be cost prohibitive while also introducing distortion. This is why 850 nm to 980 nm wavelengths are primarily used with LEDs and multimode fiber in the LAN market, where large volumes of short-reach fiber applications are abundant. The use of multimode fiber and LEDs represents the lowest obtainable cost for optical transmission. In the LAN market, low-cost optics is the primary driver. Laser diodes use higher power and narrower beamwidths that, when combined with single-mode fiber, allow transmission distances to exceed 2 km. Conventional laser diodes emit a 1310 nm wavelength and are coupled with single-mode fiber for intermediate and long-reach communications. A conventional single-mode fiber is commonly designed to exhibit zero dispersion at the 1310 nm optical window. With a typical .4 dB/km of attenuation, and assuming a 20 dB total power budget, this combination allows for a theoretical optical signal distance of about 50 km before the optical signal is indistinguishable. A single-mode fiber optimized for the third optical window at 1550 nm experiences even lower attenuation and can allow a single wavelength at 1550 nm to reach about 100 km of theoretical, unamplified distance. Other factors make the practical distance less than these, but this serves as a reference point. Therefore, most long-reach optical networks are supplemented with optical amplification and eventually optical regeneration to extend optical signal distances into the hundreds or thousands of kilometers. As shown in Table 5-1, the third and fourth optical windows exhibit the lowest-attenuation wavelength regions within common single-mode fiber. These windows are used for virtually all long-haul optical communications and are the windows of choice for multilambda systems such as WDM and DWDM.
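The arithmetic behind these reach figures is straightforward. The following Python sketch (illustrative only; it ignores connector and splice losses, dispersion, and safety margins) converts a decibel loss into the fraction of power remaining and divides an assumed power budget by the per-kilometer fiber loss; the 20 dB budget and the 0.4 and 0.2 dB/km values are the ones used in the text.

```python
# Attenuation arithmetic behind Table 5-1 and the reach figures in the text.
# Simplified sketch: real designs include margins, splices, and dispersion limits.

def power_remaining_fraction(loss_db: float) -> float:
    """Fraction of optical power left after a given decibel loss."""
    return 10 ** (-loss_db / 10)

def unamplified_reach_km(power_budget_db: float, fiber_loss_db_per_km: float) -> float:
    """Theoretical span length supported by a given power budget."""
    return power_budget_db / fiber_loss_db_per_km

if __name__ == "__main__":
    # 3 dB/km at 850 nm: about half the power is gone after 1 km.
    print(f"{power_remaining_fraction(3.0):.0%} of the power remains after 1 km")
    # 20 dB budget at 0.4 dB/km (1310 nm) and 0.2 dB/km (1550 nm).
    print(f"{unamplified_reach_km(20, 0.4):.0f} km at 1310 nm")
    print(f"{unamplified_reach_km(20, 0.2):.0f} km at 1550 nm")
```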
While optical networks rely on these photonic components, the ultimate goal of optical tinkerers is to create sellable services. The leveraging of optical innovations into ultrabroadband applications is a critical success factor for providers as well as users of optical networks. Robotic surgery, supergranular imaging, supercomputer sharing, and grid computing are but a few examples of collaborative applications occurring on a national and international level, thanks to optical networking. With the benefits of optical components, a single fiber strand with dense wavelength separation (and, shortly, ultra-dense to ultimately intense wavelength dissection) delivers every kid’s dream: a potentially inexhaustible box of luminescent crayons all in concurrent motion, rendering spectra of instantaneous, copiously tinted images at the speed of light. With autosharpening and autobrightening, no time is wasted as each multilambda lane runs well below capacity, congestion, or mandated optimization. From serial to parallel, from narrowband to broadband, from red to infrared, from electronic to photonic, from packets (back) to optical circuits, everything works freer, faster, and farther.
Optical Networks—Over the Rainbow The installation of optical fiber has always been an expensive endeavor, estimated at about $70,000 per mile. As bandwidth requirements have grown and data has surpassed voice as the anchor tenant of communications, TDM techniques using SONET/SDH are losing favor for data passage. Voice is synchronous, and data is asynchronous. SONET is bit synchronous, and data prefers to be byte asynchronous. Much overhead is required to adapt TDM-based SONET/SDH networks for the carriage of data. Moreover, the protection scheme of SONET/SDH architecture requires that one half of a fiber’s bandwidth be reserved for a transmission backup path in case of a fiber cut or node failure. Additionally, attempts to increase the transmission bit rate through optical fiber are often challenged with more pronounced optical impairments. For example, transmission at OC-192 over single-mode fiber encounters 16 times the chromatic dispersion impairment than does the next-lower speed of OC-48, requiring more cost and complexity to overcome. In conjunction with these constraints, the rapid growth of the Internet and the bandwidth speeds necessary to feed it have pushed installed fiber capacity toward the point of exhaustion. For years, optical fiber was used with one wavelength of light per fiber strand due to the limits of laser technology. This perpetuated an intense focus on bit-rate increases to boost the capacity of high-value fiber. Semiconductor lasers are now smaller and more efficient as photonic transmitters, while new dielectric filters, gratings, and waveguides create suitable options for receivers. The level of engineering precision in these and other optical components allow for more cost-effective exploitation of the infrared portion of the electromagnetic spectrum. These advances propel the effective use, multiplexing, and demultiplexing of multiple wavelengths of light within the same fiber strand. In effect, optical networks can now send communications over the infrared rainbow, using multiple
lambdas to yield unique infrared colors as distinct communication signals. As a result, multilambda network characteristics are
• Transparency—Carries multiple protocols and different services on the same fiber, unlike TDM
• Wavelength capacity—Has 1.25 Gbps, 2.5 Gbps, 10 Gbps, 40 Gbps, or higher optical bit rates
• Service aggregation—Packages multiple services or signals onto a single wavelength within a fiber
• Density—Yields more wavelengths per fiber by using 400 GHz, 200 GHz, and even closer spacing based on the ITU-T G.692 grid
Facilitating optical networks over the rainbow are a number of technologies that are commonly classified as wavelength division multiplexing (WDM), dense wavelength division multiplexing (DWDM), and coarse wavelength division multiplexing (CWDM).
WDM WDM is a long-established technology in the long-haul optical network backbone that enables multiple electrical data streams to be transformed and modulated into multiple independent optical wavelengths. The promise of optical WDM and its dense derivative (DWDM) is akin to defying physics by pushing more water through the same-diameter garden hose; one you won’t have to return every couple of years to the point of purchase. WDM is generally used as an umbrella term to refer to all optical multilambda systems. In that sense, WDM spectrum currently spreads from 1310 nm up to nearly 1700 nm in multiple windows of usable infrared light. Depending on the proximity of separation between different lambdas (which we’ll call wavelengths) within these optimal infrared bookends, you can further stratify WDM into dense WDM (DWDM) and coarse WDM (CWDM). Future derivatives are likely on the horizon. WDM can be more specifically applied to optical systems that range from 2 to 16 wavelengths (lambdas, or channels) and work exclusively (today) within the 1550 nm, C band optical window. To accomplish 16 wavelengths in this window, 16 lasers are necessary, spaced at 200 GHz (1.6 nm) of separation. WDM systems were the first multiwavelength systems and originally used other optical windows. The tolerances of WDM lasers are less stringent than those of DWDM lasers, resulting in lower component costs and overall system capital expenditures (CapEx). DWDM can be more specifically applied to optical systems with 16 or more wavelengths using an interwavelength separation of 100 GHz (.8 nm) or closer in both the 1550 nm C band and the 1625 nm L band. For example, a common 32-wavelength DWDM system in the C band uses 100 GHz spacing. Increasing the density to 50 GHz of separation
allows for 64 wavelengths within the same range of wavelength spectrum. You learn more about DWDM in the following section. CWDM is classified as coarse because it uses a very wide separation between wavelengths, specifically 20 nm spacing between up to 18 wavelengths that are spread from the 1310 nm S band through the 1550 nm C band window. Wider separation across multiple optical windows is a key identifier for a CWDM system. Therefore, CWDM tolerances are less stringent than those of WDM or DWDM, leading to the lowest-cost optical components for creating a multiwavelength system. CWDM is covered briefly following the DWDM section. A WDM system multiplexes (combines) unique optical wavelengths of 16 data-modulated color streams into what could be called one stream of infrared “white light” across one fiber strand, increasing the bandwidth capacity of the link by a factor consistent with the number of distinct wavelengths. At the opposite end, the multiple wavelengths are demultiplexed (light separation as in a prism) into the original 16 wavelengths that represent the original data streams. As such, WDM makes one optical fiber into many virtual fibers (by 16 times for this simple example). WDM and DWDM provide each fiber with potentially unlimited transmission capacity. Given the expense to deploy new fiber in the ground or undersea, this multiplicative increase in fiber capacity represents the preeminent pull of WDM-based technology. Early WDM systems began with the use of the two most available lasers, 850 nm and 1310 nm. Together, these could be used to form a two-channel system using the two different wavelengths. Two-channel systems using 1310 nm and 1550 nm were also common. WDM systems then followed laser technology improvement into the lower-loss third window at 1550 nm, forming systems of up to eight channels, each spaced at an interval of 400 GHz. From there, WDM technology began a drive for density, packing distinct lambdas ever closer by divisions of two, from 200 GHz spacing to 100 GHz, 50 GHz, and 25 GHz intervals by the late 1990s, driving channels upward of 128 lambdas that could be launched and received in a DWDM system. WDM increases available bandwidth per fiber by every distinct wavelength, lambda after lambda. The primary wavelengths used for WDM and DWDM are found in the electromagnetic spectrum around the 1550 nm mark, the sweet spot known as the red band or the upper end of the C band of the ITU-T G.692 optical wavelength grid. Recent system designs also exploit the L band, creating additional lambdas at the 1625 nm window. Progress is occurring in the 1400 nm window, a window recently made available through improved techniques in fiber manufacturing. Until recently, manufacturing processes allowed the glass fiber blank to absorb the hydroxyl (OH-) molecule, effectively causing a water peak that made the 1400 nm wavelength range unusable. Lucent’s AllWave brand of fiber is particularly designed to eliminate the water peak, allowing the use of the 1400 nm window for WDM. This also enables two more wavelengths for CWDM that were previously unusable, bringing CWDM to a full 18 possible wavelengths via this fiber.
Ongoing WDM research and development continues the pursuit of tighter spacing and concurrent use of multiple windows to increase the total channel count. With the appropriate fiber and laser components, designers could potentially use all of the S band through the L band—from 1300 nm to 1650 nm—yielding about 400 wavelengths at 100 GHz spacing or 800 wavelengths at 50 GHz. Tighter spacing between distinct wavelengths, such as 25 GHz and 12.5 GHz, will drive channel counts ever higher. The optics industry will continue this density march as long as channel count remains a competitive differentiator in WDM systems. WDM and, more prominently, DWDM address the bandwidth and fiber-constraint issues. As such, these technologies find application in both long-haul optical networks, where fiber capacity must be maximized, and in metropolitan networks, where the number of unique services using distinct optical wavelengths can rapidly proliferate. Service transparency, exponential capacity, and service aggregation are the primary drivers.
DWDM The chief difference between DWDM and WDM is fundamentally one of total wavelengths per fiber. The “dense” moniker of DWDM refers to the closer, interchannel spacing between the technology’s signal frequencies. DWDM is a combination of 32 or more tuned wavelength transponders and lasers, optical filters, attenuators, dispersion compensators, and amplifiers. DWDM can also be optically amplified, contributing to the technology’s increasing use in large metropolitan networks, regional networks, and long-haul optical networks. DWDM networks benefit from two complementary ascendancies:
• The speed of optical modulation
• The density of optical wavelengths
A common bit rate of 10 Gbps per wavelength times potentially 800 wavelengths would yield 8000 Gbps of capacity per fiber pair. The next bit rate increase to 40 Gbps times 800 wavelengths would yield 32,000 Gbps of potential optical capacity. Systems capable of 40 Gbps per wavelength still fall very short of the theoretical limit of 30,000 Gbps per optical wavelength. In this capacious context, DWDM becomes the epitome of virtual fiber. DWDM is optimized for bandwidth and distance. Operating in the C band of the International Telecommunications Union Telecommunication Standardization sector (ITU-T) 100 GHz G.692 grid, DWDM commands the highest density of wavelengths due to extremely close interchannel spacing, the highest capacity using up to 40 Gbps of optical bit-rate modulation, and the longest distances using erbium-doped fiber amplifiers (EDFAs) for optical amplification and Raman optical amplification techniques. First-generation DWDM is typically defined as at least 32+ wavelengths per fiber with the wavelength spacing tighter than WDM’s 200 GHz, or less than 1.6 nm between them.
Second-generation DWDM refers to systems that apply the tight wavelength spacing specified by the (ITU) 100 GHz G.692 grid, which is normally less than 1 nm (0.8 nm at 100 GHz interchannel spacing or 0.4 nm at 50 GHz interchannel spacing). With the international ITU standard, the 100 GHz and 50 GHz interchannel spacing specifies explicit wavelengths in nanometers that might be implemented, helping to drive standardization and interoperability between different DWDM systems. DWDM is frequency division multiplexing similar in concept to an FM radio but implemented in the optical infrared range of the electromagnetic spectrum. With DWDM, it is also helpful to understand the relationship between frequency spacing such as GHz and the lambda wavelength expressed in nanometers. The center of the red band within the infrared portion of the spectrum is 193.0 terahertz (THz), a frequency corresponding to a 1553.33 nm wavelength. You can create a 24-channel system across the red band by applying 100 GHz of interchannel spacing on either side of 193.0 THz. The reason that interchannel spacing is specified in GHz is based on a fundamental physics property that frequencies are inversely proportionate to wavelengths. For instance, the higher the frequency, the shorter the wave’s length; so while the THz scale is ascending in GHz increments, the nanometer scale is numerically descending with respect to wavelength. It is easier to design lasers and receivers to distinguish based on frequency tuning than on the actual length of the light wave itself. For example, if you want to select a wavelength of 1552.52 nm, an adjacent wavelength to 1553.33 nm, you tune a second laser to operate at 193.1 THz, which when compared to the first laser at 193.0 THz is a 100 GHz difference. Therefore, the two adjacent “channels” at 100 GHz spacing would be compared as 193.0 THz/193.1 THz if using frequency, or expressed as 1553.33 nm/1552.52 nm if referring to their individual wavelengths in nanometers. What causes much of the confusion is that the lasers are designed to use the frequency specification, but the individual channels are most commonly referred to and often part-numbered by their specific nanometer wavelength. Many second-generation DWDM systems are designed to use the C (conventional) band, which is defined as the infrared blue plus the infrared red band, from about 1530 nm to 1560 nm. Some of these second-generation systems can use the S (short) band, the C (conventional) band, and the L (long) band, commanding the wavelength range of 1300 nm to 1650 nm, particularly if a type of fiber is used that eliminates the water peak at about 1400 nm. This large of a range (1300 nm to 1650 nm) drives possible wavelength channel counts to 400 wavelengths spaced at 0.8 nm apart or 800 wavelengths spaced at 0.4 nm apart. It is somewhat up to designers of these systems to pack wavelengths in the optimum portions of these bands, keeping in mind the price/performance model they are trying to achieve. Developing an exorbitant lambda channel system is not worthwhile if it is cost prohibitive to manufacture or to purchase. As developers consider market requirements, they might eventually design DWDM channel counts that multiply wavelength capacities per fiber over 1000 lambdas and potentially to 3300 channels and beyond.
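Because channels are specified by frequency but usually named by wavelength, it helps to be able to convert between the two. This short Python sketch (an added illustration, not part of the ITU recommendation) reproduces the 193.0 THz/193.1 THz example above and shows why 100 GHz and 50 GHz spacings correspond to roughly 0.8 nm and 0.4 nm near 1550 nm.

```python
# Frequency-to-wavelength bookkeeping for ITU-grid DWDM channels.
# Illustrative sketch reproducing the 193.0 THz / 193.1 THz example in the text.

C = 299_792_458.0  # speed of light, meters per second

def nm_from_thz(freq_thz: float) -> float:
    """Wavelength in nanometers for a channel frequency given in terahertz."""
    return C / (freq_thz * 1e12) * 1e9

if __name__ == "__main__":
    for freq in (193.0, 193.1):
        print(f"{freq:.1f} THz -> {nm_from_thz(freq):.2f} nm")   # 1553.33 and 1552.52 nm
    # Channel spacing in nm for 100 GHz and 50 GHz grids near 1550 nm.
    for spacing_ghz in (100, 50):
        delta_nm = nm_from_thz(193.0) - nm_from_thz(193.0 + spacing_ghz / 1000)
        print(f"{spacing_ghz} GHz spacing is about {delta_nm:.2f} nm near 1550 nm")
```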
Table 5-2 shows the different bands and windows with their respective nanometer and frequency specifications. Table 5-2
Optical Wavelength Windows/Bands

Infrared Window              Band              Frequency at 100 GHz Spacing   Loss per km, Single-Mode Fiber (SMF)   Amplifier Cost
850 nm to 980 nm             Traditional       N/A                            ~3 dB                                  N/A
1300 nm to 1490 nm           S (short)         N/A                            .4 dB                                  Low cost
1525 nm to 1545 nm (blue)    C (conventional)  196.50 THz to 194.00 THz       .2 dB                                  High cost
1546 nm to 1565 nm (red)     C (conventional)  193.90 THz to 191.50 THz       .2 dB                                  Low cost
1565 nm to 1625 nm           L (long)          190.90 THz to 185.00 THz       --                                     High cost
Optical Network Impairments Optical fiber power loss and optical pulse shape dispersion are two of the most important optical impairments to consider when choosing fiber types for DWDM networks. As bit rates are driven higher, many of the DWDM impairments become more pronounced.
Optical Power Loss An optical power loss (attenuation) specification per kilometer is a fixed characteristic of the specific fiber type at manufacturing time. Different techniques are used to balance silica core purity with a suitable cost, in effect creating a catalog of unique, application-specific fibers—that is, different fiber types that suit particular applications. If creating a long-haul DWDM network, you would desire a fiber type that minimizes loss per kilometer while also minimizing dispersion properties. Once you have selected a low-loss fiber window, you can address the effect of optical power losses on distance through the use of optical reamplification techniques. Previously, it was necessary at a periodic distance to convert the optical signal back to electrical, and then back to optical to reshape, retime, and reamplify the optical signal. This is referred to as O-E-O conversion or reamplify, reshape, retime (3R) regeneration. This 3R regeneration was the only way to deal with the fiber power loss that occurred over a fixed distance. With the advent of erbium-doped fiber amplifiers (EDFAs) and Raman amplifiers, the optical signal can be amplified without conversion, staying completely within the optical domain. An EDFA is a short section of erbium-doped optical fiber that is spliced inline with the primary fiber. The EDFA principally creates photons that match and mate with the photons of the
pass-through optical signal, boosting the pass-through signal power by increasing the number of photons per light pulse. In addition, this photon multiplication effect is multiwavelength within the range of the EDFA’s fluorescing properties, about 1530 nm up to 1625 nm. Other types of optical amplifiers, such as praseodymium-doped (PDFA), thulium-doped (TDFA), and ytterbium-doped (YDFA) fiber amplifiers, are in use or in development as practical optical amplification devices. The combination of Raman+EDFA amplifiers is also popular. You might use one or more of these, depending on the design requirements of the optical network. Table 5-3 shows some comparative characteristics of optical amplifiers. Table 5-3
Optical Amplifier High-Level Comparisons

Characteristic         Erbium-Doped Fiber          Raman Amplifier             Semiconductor Optical
                       Amplifier (EDFA)                                        Amplifier (SOA)
Power gain             ~30 dB                      ~20-25 dB                   ~10-20 dB
Output power           High                        High                        Low
Input power            Moderate                    High                        High
Crosstalk              Low                         Low                         Very high
Gain tilt              High                        Low                         High
Typical application    WDM/DWDM metro networks,    WDM/DWDM long-haul and      Short-haul, single-channel,
                       long-haul networks          ultra-long-haul networks    wavelength converters
Source: DWDM Network Designs and Engineering Solutions, Cisco Press
Of the three types of amplifiers listed, only the EDFA and Raman amplifiers are generally used with DWDM systems. The Raman amplifier has a lower gain tilt than the EDFA, which has the effect of amplifying the total DWDM wavelength range more uniformly. Raman amplifiers also have a lower noise figure than EDFAs; the noise figure is the amount of noise induced by each amplifier stage. Noise can build up over cascaded amplifier stages to the point that 3R regeneration is needed to correct it, so the lower noise figure of Raman amplifiers leads to their use in long- to ultra-long-haul optical networks. As shown in Figure 5-7, Raman amplifiers are capable of boosting a wider range of DWDM wavelengths than are EDFAs, another benefit that leads to Raman use in high-channel-count DWDM systems. EDFAs cost less than Raman amplifiers, so a number of factors must be balanced when designing an optical system to a particular set of requirements.
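To see why noise accumulation matters when amplifiers are cascaded, a commonly used rule-of-thumb link-budget approximation estimates the optical signal-to-noise ratio (OSNR) after a chain of identical spans. The Python sketch below uses that approximation (referenced to a 0.1 nm noise bandwidth near 1550 nm); the launch power, span loss, and noise figure values are illustrative assumptions, not figures from this chapter.

```python
# Rule-of-thumb OSNR after a chain of identical optical amplifiers.
# A commonly cited approximation (0.1 nm noise bandwidth at ~1550 nm),
# not a formula taken from this chapter; real designs need full modeling.
import math

def cascade_osnr_db(launch_power_dbm: float,
                    span_loss_db: float,
                    amp_noise_figure_db: float,
                    span_count: int) -> float:
    """Approximate OSNR (dB) after span_count amplified spans."""
    return (58.0 + launch_power_dbm - span_loss_db
            - amp_noise_figure_db - 10 * math.log10(span_count))

if __name__ == "__main__":
    # Assumed values: 0 dBm per channel, 22 dB span loss (about 100 km at
    # 0.22 dB/km), 5.5 dB amplifier noise figure. OSNR falls as spans cascade.
    for spans in (1, 5, 10, 20):
        print(f"{spans:2d} spans -> OSNR ~ {cascade_osnr_db(0, 22, 5.5, spans):.1f} dB")
```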
Figure 5-7  Optical Amplifier Transmission Windows (saturated output power in dBm, roughly 10 to 40 dBm, versus wavelength from 1000 nm to 1700 nm, showing the operating ranges of Raman, PDFA, TDFA, EDFA, and YDFA amplifiers)
Source: Cisco Systems, Inc.
In Figure 5-7, you can see that PDFAs are available for the 1310 nm window for single-wavelength systems. EDFAs are typical for the 1530 to 1625 nm windows, often used for DWDM. TDFAs, which span from 1450 to 1510 nm, have recently become available. Raman amplifiers can cover all of these ranges, but generally at higher costs than other amplifiers.
Dispersion Dispersion is a characteristic impairment of optical fiber light propagation at a given bit rate. The fundamental issue with dispersion is that as a light pulse travels through an optical fiber, different characteristics of the fiber cause the light pulse to modify its shape. Dispersion also limits the achievable distance before the light pulses must be regenerated. For example, using standard SMF G.652-specification fiber at a bit rate of 10 Gbps, a DWDM network’s maximum transmission length is just above 60 km before regeneration is needed. Using a specific type of dispersion-shifted fiber such as G.655+, the transmission length becomes nearly 500 km. Two significant types of dispersion are chromatic dispersion and polarization mode dispersion. Chromatic Dispersion Chromatic dispersion is named for chroma, the different colors or wavelengths in the spectrum. It reflects the physical property that different wavelengths (colors) travel through the same fiber core at different speeds. Therefore, a laser pulsing a number of colored wavelengths into a fiber will modulate them together in-phase (overlapping) in a very narrow pulse, but as they travel the length of the fiber, the differing speeds per wavelength will spread the arriving pulses out-of-phase. This effect is usually compensated for with specific fiber application types and dispersion-compensating units.
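A common rule of thumb is that the chromatic-dispersion-limited distance falls with the square of the bit rate, which is where the 16-times penalty between OC-48 and OC-192 mentioned earlier comes from. The Python sketch below scales the roughly 60 km figure quoted for 10 Gbps on G.652 fiber; it is an approximation for illustration, not an engineering design rule.

```python
# Chromatic-dispersion reach scaling with bit rate (rule of thumb: ~1/B^2).
# Sketch only: it scales the 60 km / 10 Gbps figure quoted for G.652 fiber
# and ignores compensation, fiber type, and other impairments.

REFERENCE_BIT_RATE_GBPS = 10.0
REFERENCE_REACH_KM = 60.0   # approximate uncompensated reach on G.652 at 10 Gbps

def dispersion_limited_reach_km(bit_rate_gbps: float) -> float:
    """Scale the reference reach by the inverse square of the bit-rate ratio."""
    ratio = REFERENCE_BIT_RATE_GBPS / bit_rate_gbps
    return REFERENCE_REACH_KM * ratio ** 2

if __name__ == "__main__":
    for rate in (2.5, 10.0, 40.0):
        print(f"{rate:>4} Gbps -> roughly {dispersion_limited_reach_km(rate):,.0f} km")
    # The 16x penalty between OC-48 (2.5 Gbps) and OC-192 (10 Gbps):
    print(dispersion_limited_reach_km(2.5) / dispersion_limited_reach_km(10.0))  # 16.0
```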
Polarization Mode Dispersion Polarization mode dispersion (PMD) is another complex effect related to the presence of two axes of polarization, an x-axis and a y-axis, that can spread apart as they travel the fiber. The PMD effect happens because optical fiber is not perfectly cylindrical with dimensional constants. Manufacturing imperfections and mechanical stress on the fiber cause variations in the cylindrical geometry of the fiber, enhancing the PMD effect as these variations become more prominent. PMD is not an issue at low bit rates but merits concern at higher bit rates that exceed 5 Gbps.
Additional Impairments Since WDM and DWDM involve multiple light signals of different wavelengths traveling (at different speeds) in the same fiber, they also suffer from other impairments, such as channel interference (crosstalk) as channels are spaced more closely and, at higher channel counts, resonance interaction (four-wave mixing). These can be addressed through techniques such as unequal channel spacing and per-channel power balancing, and through the use of nonzero dispersion-shifted fiber (NZ-DSF).
Common Fiber Types Fiber has a lot to do with the number of concurrent wavelengths that can be supported for DWDM. An optical fiber must have a spectral capacity wider than the range of DWDM wavelengths that are needed by an optical network design. Table 5-4 lists some of the common fiber types used to optimize DWDM transmission and DWDM channel planning in the 1550 nm window. Table 5-4
Typical Fiber Types Used for DWDM Networks

Single-mode fiber (G.652)
  Fiber brand names: Corning SMF-28, ATT/Lucent SSMF, Lucent Matched Cladding SMF, Alcatel 6900
  WDM/DWDM (1550 nm): Fine
  CWDM: Good

Zero water peak fiber (G.652.C), extended band, 1265 nm to 1625 nm
  Fiber brand names: Corning SMF-28e, Lucent/OFS AllWave, Alcatel 6901
  WDM/DWDM (1550 nm): Fine
  CWDM: Excellent

Dispersion-shifted fiber (G.653)
  Fiber brand names: Corning SMF/DS, Pirelli FOS
  WDM/DWDM (1550 nm): C-band transmission limited by nonlinear effects; L-band transmission mitigates nonlinear effects, and full-DWDM channel count is possible
  CWDM: Fine

Nonzero dispersion-shifted fiber (G.655+)
  Fiber brand names: Corning LEAF, Lucent/OFS Truewave, Alcatel 6912 Teralight Ultra, Pirelli Freelight
  WDM/DWDM (1550 nm): Superior for long-haul applications
  CWDM: Fine

Nonzero dispersion-shifted fiber (G.655-)
  Fiber brand names: Corning MetroCor, Alcatel 6911 Teralight Metro, Pirelli Widelight
  WDM/DWDM (1550 nm): Superior for metropolitan applications
  CWDM: Fine
DWDM Design Considerations While it isn’t a goal of this chapter to provide an extensive discussion on DWDM design considerations, it is worthwhile to introduce a few concepts for your understanding. DWDM channel counts, channel plans, and transponders are discussed next. More design considerations are contained in Chapters 6 and 7.
DWDM Channel Count The DWDM channel count requirement is usually dependent on tier 1 (core), tier 2 (distribution), or tier 3 (access) network applications. Using DWDM in an access link might only require a very low channel count, just a few wavelengths. Used in a metropolitan network, it might need a high channel count to support service density and variety. For long-haul optical networks, capability for supporting high DWDM channel counts scales the transmission capacity, preserves the fiber investment, and postpones new long-haul fiber builds. Many metropolitan area networks (MANs) and much of the WAN backbone today contain installed DWDM equipment that has unused capacity. Where there is excess capacity with wavelengths unused, these wavelengths are often referred to as dark wavelengths or dark lambdas. A completely unused fiber pair(s) would be called dark fiber. Possible DWDM channel counts and upgrade options are usually specified by manufacturers of DWDM system equipment.
Channel Plans The standard ITU-T G.692 wavelength reference grid for DWDM is based on 150 lambdas at 100 GHz spacing or 300 lambdas at 50 GHz spacing. This is a range from 1491.88 nm (200.95 THz) to 1611.79 nm (186 THz). Manufacturers are free to implement DWDM channel plans within these ranges. To reduce the effects of channel crosstalk when DWDM channel plans use interchannel spacing of 100 GHz or less, a provider will often implement a channel plan that helps to compensate for impairments such as adjacent channel crosstalk and four-wave mixing. A common approach is to implement a channel plan that contains unequal spacing between channels or to periodically skip a channel to increase isolation. For example, a typical sweet spot for DWDM channel implementation is between 1530.33 nm and 1560.61 nm. This represents 39 channels based on the ITU-T 100 GHz grid spacing. By using a 4-skip-1 channel plan, an ITU grid point is skipped for every four channels, effectively skipping 7 grid channels within this range. The remaining 32 channels are used by the vendor as the overall DWDM channel plan. Incidentally, these 32 channels are in the C band, where the lowest loss per kilometer is specified (.2 dB per km).
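The 4-skip-1 arithmetic is easy to reproduce. The following Python sketch (an illustration; the exact alignment of the skipped grid points is a vendor choice) builds the 39-point, 100 GHz ITU grid between roughly 1530.33 nm and 1560.61 nm and keeps four channels out of every five, leaving the 32-channel plan described above.

```python
# Building a 4-skip-1 DWDM channel plan on the ITU 100 GHz grid.
# Illustrative sketch: the grid span matches the text (1530.33-1560.61 nm,
# i.e., 192.1-195.9 THz); the skip alignment shown is one plausible choice.

C = 299_792_458.0  # speed of light, meters per second

def build_plan(start_thz=192.1, stop_thz=195.9, spacing_ghz=100, use=4, skip=1):
    grid, freq, step = [], start_thz, spacing_ghz / 1000.0
    while freq <= stop_thz + 1e-9:
        grid.append(round(freq, 4))
        freq += step
    # Keep 'use' channels, then skip 'skip' grid points, and repeat.
    kept = [f for i, f in enumerate(grid) if i % (use + skip) < use]
    return grid, kept

if __name__ == "__main__":
    grid, plan = build_plan()
    print(f"{len(grid)} grid points, {len(plan)} channels kept")   # 39 and 32
    first_nm = C / (plan[0] * 1e12) * 1e9    # lowest frequency, longest wavelength
    last_nm = C / (plan[-1] * 1e12) * 1e9    # highest frequency, shortest wavelength
    print(f"plan spans {last_nm:.2f}-{first_nm:.2f} nm")
```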
Transponders A DWDM transponder is often used to convert a client/customer interface signal—generally a non-ITU-grid-compatible signal—to a DWDM ITU-grid wavelength. This gives customers access to the DWDM spectrum. As a result, the transponder functions as a wideband optical receiver that catches all possible customer optical signal wavelengths, converts the signal to its electrical representation, and then uses the electrical signal to modulate a laser, producing an optical signal that is ITU-grid wavelength compliant. The complexity of these combined technology devices—a wideband receiver, O-E-O converter, and ITU-grid laser—contributes to the generally high cost of transponder modules. Some transponders are tunable within a certain range of ITU-grid wavelengths in order to increase their flexibility. Transponders with 3R capability are protocol dependent, such as converting a 10 Gigabit Ethernet data stream to a DWDM wavelength. With 3R transponders, not only are reshaping and signal regeneration needed, but retiming is also mandatory since the bit rates don’t match. Transponders with 2R capability are bit rate transparent and, therefore, protocol independent, since they only reshape and reamplify the signal. ITU standard pluggable optics on service platforms help to reduce the need for transponders, normally a very large cost component of WDM/DWDM networks. Pluggable optics are now available in various packaging forms, such as Gigabit Interface Converters (GBICs), small form-factor pluggables (SFPs), Xenpak, and XFP. Pluggable optics are fixed, yet modular.
Establishing Balance in DWDM Design Optical design is all about balance—balancing fiber types, laser components, impairment management components and techniques, and even DWDM channel plans. Discrete components such as transponders, optical multiplexers and demultiplexers, amplifiers, attenuators, and dispersion compensators must be chosen, tuned, and implemented on a case-by-case network basis. Much of optical science is applied to these types of networks. For the most part, DWDM network design remains a sophisticated craft-guild technique. As the technology matures, efforts to add intelligence, automation, and integration will lessen the need for commanding a PhD in optics in order to design, provision, and maintain a profitable optical network.
Intelligent, Integrated DWDM Intelligent DWDM promises to expand the market for DWDM networks. Many service providers already possess the skills necessary to design, modify, and maintain optical networks, the heart and lifeblood of their technology-based service offerings. Yet, large enterprises, governments, and higher-education customers are exploring DWDM network solutions to meet specific needs for high-speed network communication. Many of these customers will elect to use provider networks, and many will not. As DWDM networks become easier to design, install, and maintain, the market momentum for DWDM networks might well pass into the capable hands of these large users. First- and second-generation DWDM primarily addressed the long-haul optical market with a particular focus on fiber relief. In long-haul DWDM networks, critical network tuning occurs at network deployment, with very few changes needed thereafter. Early attempts to port these types of DWDM technologies into the metropolitan optical market met with limited success. Metro optical networks interface with a higher density and a greater diversity of customer optical interfaces than do long-haul networks. To meet customer demand, the speed of moves, adds, and changes is a key driver in a market space where services rapidly proliferate. A newer generation of intelligent DWDM products is making metropolitan optical networks easier to provision, simple to operate, and ultimately more competitive and profitable. In addition to DWDM intelligence, integration of DWDM into multiservice optical products, such as Multiservice Provisioning Platforms (MSPPs), enhances price/performance of capital and operational investments in these networks. Instead of building a separate DWDM transmission layer followed by a service interfacing and service aggregation layer, all capabilities are combined in the same hardware and software platform. This integration reduces the number of discrete products that must be installed, integrated, and maintained. This combined intelligence and integration of DWDM is being facilitated by new-generation components, many of them optically active compared to their passive predecessors. The ability to scale DWDM networks for rapid service engagement is enhanced through reconfigurable optical add/drop multiplexers (ROADMs). Previously, OADMs based on
passive filters and waveguides were much like a prism—inflexible. Changes could cause interruption and delays in service adjustments. ROADMs are used in place of fixed-wavelength OADMs in intelligent DWDM systems. A typical ROADM uses two sets of modules for both east and west connectivity. A 4- or 8-channel ROADM can provide a reasonable capacity of add/drop/block and pass-through support with no hardware changes. At each node, software controls which wavelengths are added, dropped, or passed through, with each event controlled on a per-wavelength basis. Reconfigurable optical ADMs are benefiting from improving price/performance curves, and 32-channel ROADMs will increase flexibility, reduce component sparing, and strengthen service densities. Cisco Systems offers ROADM technology in ONS 15454 DWDM configurations. More discussion on ROADMs is found in Chapter 7. Traditional per-channel optical power equalization requires the addition of physical attenuators or manually adjustable variable optical attenuators (VOAs) at each node. These VOAs allow amplifier gain control to be adjusted based on signal power and channel requirements. As wavelengths are added or removed, this manual retuning becomes a periodic maintenance event, increasing provisioning times. Using advances in PIN diodes, active VOAs can benefit from software-adjustable tuning through a typical 30 dB power budget. These automated VOAs are often used after an integrated EDFA amplifier to properly attenuate the wavelength signals to the proper laser launch power required by the next span of the optical network’s power budget. Boosting an optical signal also boosts noise. Continued amplification over consecutive optical spans can increase amplified spontaneous emissions to detrimental levels, and increase the complexity of DWDM design. As another example of required attention to detail, you can’t just add a new wavelength to a bundle of wavelengths without considering the overall optical power output and the effect on the optical spectrum. Automated optical power management is needed. The power curve of an optical signal can be affected by insertion loss through the adding or dropping of wavelengths, such as to or from an OADM. This creates a variable that must be managed as new optical wavelengths are combined with or cut from pass-through wavelengths. Many of the Cisco platforms utilize automatic optical power management techniques that dynamically sense and control the inputs and outputs of optical power levels on a channel-by-channel basis. These software features can measure the intrinsic power levels of each wavelength and equalize them properly. Network topology autodiscovery and automatic feedback are provided through a lambda or lambdas that are dedicated to management and monitoring of optical power levels as an optical supervisory channel (OSC). Tunable lasers help to complete this optical automation. The optical power management software feature uses the feedback from the OSC and provisioning dynamics, instructing the tunable lasers to vary their output to compensate for changes. This intelligence provides network-wide optical power automation, decreasing provisioning times and allowing the network to become dynamic. The outcome is self-optimizing, selfhealing, intelligent, and dynamic transmission networks.
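As a simplified picture of the per-channel equalization just described, the following Python sketch computes the attenuation an automated VOA would apply to bring each measured wavelength down to a common launch power, clamped to an assumed 30 dB adjustment range. The channel names, power readings, and target value are hypothetical; this is not the algorithm of any particular platform.

```python
# Sketch of per-channel power equalization: given measured channel powers
# after an amplifier, compute the VOA setting that brings each channel to a
# common launch power. Hypothetical values; not an actual platform algorithm.

VOA_RANGE_DB = (0.0, 30.0)   # assumed software-adjustable attenuation range

def voa_settings(measured_dbm: dict, target_launch_dbm: float) -> dict:
    """Return the attenuation (dB) to apply per channel, clamped to the VOA range."""
    settings = {}
    for channel, power in measured_dbm.items():
        attenuation = power - target_launch_dbm        # excess power to remove
        settings[channel] = min(max(attenuation, VOA_RANGE_DB[0]), VOA_RANGE_DB[1])
    return settings

if __name__ == "__main__":
    # Example (made-up) measurements in dBm for four wavelengths after an EDFA stage.
    measured = {"1550.12 nm": 3.1, "1550.92 nm": 1.8, "1551.72 nm": 4.0, "1552.52 nm": 2.4}
    for channel, att in voa_settings(measured, target_launch_dbm=1.0).items():
        print(f"{channel}: attenuate {att:.1f} dB")
```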
The substitution of active components in place of more traditional passive optical components provides for integration of intelligence. Tunable laser transponders, laser optics, and ROADMs ease installation and planning. Any change or addition of optical wavelengths is dynamically provisioned through software tuning of the transponders and ROADMs. Automated network tuning occurs through automated per-channel optical power monitoring and equalization without human intervention. Automatic amplifier gain control adjusts based on signal power and the number of channels; it also adjusts as channels are added or removed. In addition, automatic adjustment helps account for aging of the optical fiber cable and optical components like lasers. Embedding management and monitoring into elements and components distributes intelligence into the optical network for network management on-the-fly. The result is the ability to quickly add services without frequent reengineering of the optical network.

These advanced intelligence and integrated DWDM features combine with MSPPs to create networks with effective cost optimization for any mixture of optical services. The addition of features such as service aggregation further enhances DWDM optical networks. Service aggregation over DWDM complements business continuity solutions requiring increased numbers of high-speed connections between data centers. Business continuity and disaster recovery solutions are often compulsory through regulatory mandates such as FED/SEC Sound Practices for U.S. Financial System Resilience, the Sarbanes-Oxley Act, the Health Insurance Portability and Accountability Act (HIPAA), Homeland Security, and, internationally, the Code of Practice for Information Security Management (BS EN ISO 17799).

Many companies are now building second and, increasingly, third data centers, creating synchronous disk mirroring and data center mirroring and moving to shared computing. High bit rates and the lowest possible latency are critical requirements for these solutions. Combining many different types of data center communication protocols over DWDM wavelengths helps to facilitate these objectives. As the price of WDM and DWDM descends, the ability to use unique wavelengths for distinct voice, multidata, and video types will create a natural quality of service—in essence, unique optical circuits that share the same physical fiber and optical spectrum.
CWDM

Coarse (or wide) wavelength division multiplexing (CWDM) refers to less expensive optical systems that use wider spacing between wavelengths, normally at least 10 nm to 20 nm. Optical light emitters don't require the temperature control mechanisms of DWDM, and receivers can, therefore, use more tolerance in their specifications, resulting in less
sophistication. Many of the core components used for CWDM networks are designed as passive optical devices, lending CWDM to passive optical networks (PONs). CWDM transceivers are most often used with pluggable optics such as GBICs and small form-factor pluggables (SFPs) to create networks that are plug-and-play. Therefore, CWDM is optimized for low cost. CWDM typically uses 4 to 18 fixed wavelengths per fiber, with the wavelengths spread farther apart than in WDM and DWDM.

CWDM has been standardized on an international level by the ITU in specification ITU-T G.694.2. This standard defines an optical grid of 18 wavelengths within the range of 1270 nm to 1610 nm using 20 nm channel spacing. This range is within the desired low-loss regions of most single-mode fiber and, as such, can support unamplified distances of up to about 50 km.

The CWDM ITU-T standard wavelength plan includes up to 18 wavelengths for use; however, until recently, not all of these were usable. A couple of CWDM wavelengths on the initial edge of the range (1270 nm and 1290 nm) are affected by Rayleigh scattering, reducing the possible wavelengths to 16. If used with a standard SMF-28 style of fiber exhibiting the hydroxyl ion water peak effect, four or five additional CWDM wavelengths (generally 1370, 1390, 1410, 1430, and perhaps 1450) are unusable. The water peak region is the wavelength region of approximately 80 nm centered on the 1383 nm mark in the infrared spectrum. That's why, until recently, there has been a market focus on 8-channel CWDM systems using the 1470 nm to 1610 nm wavelengths at 20 nm spacing. Low water peak and zero water peak fibers are available, affording CWDM designs the capability to use up to 16, and possibly 18, CWDM wavelengths over appropriate fiber. This is known as full-spectrum CWDM, and 16-channel CWDM systems are now common within the market.

CWDM can also be supplemented with DWDM on the same fiber. For example, by reserving two CWDM wavelengths for expansion, such as 1530 nm and 1550 nm, you could use 8 wavelengths of DWDM in place of the CWDM 1530 nm channel and 8 more in place of the CWDM 1550 nm channel. These DWDM channels would exist between the 1510 nm and 1570 nm CWDM wavelengths, expanding the total system by 8 to 16 channels. This would be accomplished by placing a couple of DWDM nodes on the same fiber pair as the CWDM nodes. This allows for a smaller startup cost with CWDM, using DWDM as an upgrade option to extend the capacity and, therefore, the life of the fiber plant. Figure 5-8 shows this example design.
Figure 5-8 Overlaying CWDM with DWDM (a DWDM channel group is overlaid onto each of the 1530 nm and 1550 nm CWDM channels; the wavelength axis shows the CWDM ITU grid from 1470 nm to 1610 nm). Source: Cisco Systems, Inc.
CWDM is ideal for small-scale metro optical networks. When using up to 8 wavelengths/lambdas, this low-cost approach to WDM is positioned from 1470 nm to 1610 nm with 20 nm spacing. A CWDM 16-channel plan would use 1270 nm to 1610 nm. An example of a CWDM 8-channel plan using pluggable optics with Cisco Systems GBICs and Cisco's assigned color codes would be as follows:

• 1000BASE-CWDM 1470 nm GBIC: Gray
• 1000BASE-CWDM 1490 nm GBIC: Violet
• 1000BASE-CWDM 1510 nm GBIC: Blue
• 1000BASE-CWDM 1530 nm GBIC: Green
• 1000BASE-CWDM 1550 nm GBIC: Yellow
• 1000BASE-CWDM 1570 nm GBIC: Orange
• 1000BASE-CWDM 1590 nm GBIC: Red
• 1000BASE-CWDM 1610 nm GBIC: Brown
The 18-channel CWDM wavelength plan as standardized by the ITU-T G.694.2 specification is as follows:
• 1270 nm
• 1290 nm
• 1310 nm
• 1330 nm
• 1350 nm
• 1370 nm
• 1390 nm
• 1410 nm
• 1430 nm
• 1450 nm
• 1470 nm
• 1490 nm
• 1510 nm
• 1530 nm
• 1550 nm
• 1570 nm
• 1590 nm
• 1610 nm
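As a quick check of the channel counts discussed above, the following sketch generates the G.694.2 grid and removes the wavelengths described as unusable on legacy water-peak fiber. The exclusion sets are drawn from the discussion above and are illustrative rather than normative.

# Generate the 18-channel CWDM grid (1270-1610 nm at 20 nm spacing) and apply
# the exclusions discussed in the text (illustrative, not a normative list).

full_grid = list(range(1270, 1611, 20))          # 18 wavelengths
assert len(full_grid) == 18

rayleigh_affected = {1270, 1290}                 # edge channels noted in the text
water_peak = {1370, 1390, 1410, 1430, 1450}      # around the 1383 nm water peak

usable_on_legacy_fiber = [w for w in full_grid
                          if w not in rayleigh_affected and w not in water_peak]
legacy_8_channel_plan = [w for w in full_grid if 1470 <= w <= 1610]

print(len(usable_on_legacy_fiber), usable_on_legacy_fiber)   # 11 usable channels
print(len(legacy_8_channel_plan), legacy_8_channel_plan)     # the common 8-channel plan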
Unlike WDM and DWDM, CWDM is very difficult to optically amplify. A desirable CWDM channel plan usually spreads across multiple low-loss optical "windows," such as the 1310 nm and 1550 nm windows. An EDFA could amplify the signals around the 1550 nm band, but this does nothing for the 1310 nm band. At about 1310 nm to 1380 nm, the rare earth element of choice for a doped fiber amplifier would be praseodymium, not erbium. Using both types of amplifiers in a combined amplifier design might be feasible, but this increases cost, which runs counter to the objective of CWDM designs.

CWDM topologies such as point-to-point, bus, hubbed ring, and meshed ring are commonly used. CWDM network applications are primarily limited to unamplified shorter distances of about 30 to 50 km and will be found mostly in metropolitan, campus, and large enterprise environments. The primary driver for CWDM networks is the support of multi-Gigabit Ethernet services and storage area network services in these campus and metropolitan networks. Additionally, PONs are greenfield space for full-spectrum CWDM.
Understanding SONET/SDH

Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH) are two telecommunications standards that have brought order to digital communications. Developed within the same timeframes, SONET is standardized by the American National Standards Institute (ANSI) T1.105 SONET standard, and SDH is standardized by the International Telecommunication Union ITU-T Recommendation G.691 specification. These are base standards of a family of standards that apply to SONET or SDH. As indicated by their respective standards organizations, SONET is primarily used in North America, and SDH is primarily used in Europe, Japan, and the rest of the world.
Both SONET and SDH have numerous similarities and a few differences. The purpose of this discussion is not to cover each extensively or differentially, but to briefly introduce these digital communication techniques as they form the basis of a large portion of the provider networks in the world today.
SONET/SDH Origins and Benefits

SONET and SDH, often summarized as SONET/SDH, are not necessarily optical technologies or discrete components unique unto themselves. They are information bit-mapping methods for implying intelligence into a digital bit stream of data. SONET/SDH, developed in the same time continuum as optical fiber communications, is essentially an electronic bit-router, a coded reflex with which to instruct electrical multiplexers and optical modulation components as to what they should do and when to do it. Because the resulting schema was well-prepared and standardized, SONET/SDH also contributes to higher-speed bit representation, low-overhead transmission, and vendor interoperability.

Before SONET/SDH, early optical fiber systems in the worldwide public switched telephone network (PSTN) were built with different equipment having proprietary architectures, many largely using asynchronous signaling and multiplexing techniques. The primary users of the optical fiber networks were the large regional Bell operating companies (RBOCs) and interexchange carriers (IXCs), and they desired a network standard with which to optimize interconnection and normalize equipment such that their core network wasn't dependent on a particular vendor, architecture, or pricing structure. The ability to mix and match equipment from different vendors was seen as a critical success factor for improving service providers' CapEx and OpEx. Such a standard would simplify the interconnection of different fiber networks both nationwide and worldwide.

SONET/SDH networks use high-accuracy stratum clock sources to propagate and form a synchronous clock hierarchy. This allows a central network clock to be distributed to all SONET/SDH elements, creating timing synchronization end to end. The desire for a synchronous transmission system, based on a highly stable clock reference, is fundamental to the benefits of SONET/SDH. With highly stable, Stratum 1 atomic clock reference sources now distributed across the optical network, SONET/SDH effectively synchronizes data streams. SONET/SDH enables multivendor compatibility at the mid-span meeting point of optical network interconnections.

With more than 152,000 SONET systems in the United States, SONET is the dominant telecommunications transport technology for the regional core networks and metropolitan access markets of America's wireline providers. SONET took commercial shape in the mid-1980s as a telecommunications industry framing standard for optical transmission.
SONET/SDH was designed for first-generation optical networks and was justified primarily on two factors:
• To reduce operational costs of the copper coaxial backbone voice network by 50 percent
• To standardize digital communications over optical fiber transmission plant
SONET/SDH benefits are
• Reduces back-to-back multiplexing
• Facilitates optical interconnection
• Provides for traffic grooming and segregation
• Converges voice, data, video, and Asynchronous Transfer Mode (ATM)
• Enhances operation, administration, and management (OAM) and performance monitoring
SONET and SDH Hierarchy

With SONET and SDH, multiplexing is via TDM. Because TDM is nonstatistical, traffic flows in and out of a SONET/SDH node at an equal rate in predetermined time slots. This eliminates traffic flow issues; congestion, priority, and peak bandwidth rates aren't considerations in a TDM system. The effective bandwidth of the pipe is always present, whether information is traveling in the equivalent time slots or not. SONET/SDH, by design, reserves one half of the optical bandwidth as a protection path in the event of a fiber cut. SONET allows efficient multiplexing and packaging of DS1 signals up to OC-192 signals, per the ANSI standards. SDH allows efficient multiplexing and packaging of E1 signals up to STM-256 signals, per the ITU-T standards.
SONET Hierarchy

SONET applies a hierarchical design for information transport to carry payloads above 50 Mbps. At a basic DS1/T1 carrier level, the digital signaling (DS1 signals) is packaged by multiplexing equipment into a payload called a virtual tributary 1.5 (VT1.5). The virtual tributary draws an analogy to water runoff principles—tributaries merging to form small streams that feed larger rivers and eventually flow to the oceans. Twenty-eight of these VT1.5s are then multiplexed (flowed) into another type of payload called a synchronous payload envelope (SPE), which is roughly 45 Mbps worth of signal. The SPE just fits into a DS3 signal and, with overhead, is packaged into SONET's basic transmission unit of 51.84 Mbps, also known as a Synchronous Transport Signal (STS).
The electrical STS/SPE (logical signal) is carried in an Optical Carrier 1, or OC-1 (physical transport pipe). The next level of the SONET hierarchy takes three of these 51 Mbps SPEs and places them in an OC-3, representing 155 Mbps. So, the SONET hierarchy is as follows:
• Three OC-1s (51.84 Mbps each) become an OC-3 (155.52 Mbps)
• Four OC-3s become an OC-12 (622.08 Mbps)
• Four OC-12s become an OC-48 (2488.32 Mbps or, more commonly, 2.5 Gbps)
• Four OC-48s become an OC-192 (9953.28 Mbps or, more commonly, 10 Gbps)
• Four OC-192s become an OC-768 (39,813.12 Mbps or, more commonly, 40 Gbps)
SDH Hierarchy

The European standard SDH uses 2.048 Mbps as its basic tributary or, in SDH terminology, a container. Like SONET, SDH applies a hierarchical design for information transport to carry payloads, but SDH considers the base payload rate to be 155.52 Mbps, which is a Synchronous Transport Module-1 (STM-1). At a basic E1 container level, the digital E1 signaling (a C-12) is packaged by multiplexing equipment into a payload called a virtual container 12 (VC-12). The VC-12 becomes a tributary unit-12 (TU-12); three TU-12s are multiplexed into a tributary unit group (TUG-2), and seven TUG-2s form a higher-order virtual container-3 (VC-3). VC-3s eventually get mapped into SDH's base signal specification of an STM-1 at a line rate of 155.52 Mbps, an identical line rate to that of the SONET OC-3. The line rates are intentionally similar, but the overhead, alignment bytes, and bit-stuffing mechanisms used to accomplish this are different between SONET and SDH. SDH calls both the electrical and the resulting optical signal an STM-1. Following the SDH hierarchy from there results in
• Four STM-1s (155.52 Mbps each) become an STM-4 (622.08 Mbps)
• Four STM-4s become an STM-16 (2488.32 Mbps or, more commonly, 2.5 Gbps)
• Four STM-16s become an STM-64 (9953.28 Mbps or, more commonly, 10 Gbps)
• Four STM-64s become an STM-256 (39,813.12 Mbps or, more commonly, 40 Gbps)
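Both hierarchies reduce to simple multiples of their base signals, which makes the rates easy to verify. The following sketch reproduces the line rates quoted above; it is a back-of-the-envelope check, not part of either standard.

# SONET/SDH line rates as multiples of the base signals:
# OC-n = n x 51.84 Mbps (STS-1); STM-m = m x 155.52 Mbps (STM-1 = 3 x STS-1).

STS1_MBPS = 51.84
STM1_MBPS = 3 * STS1_MBPS           # 155.52 Mbps

def oc_rate(n):
    return n * STS1_MBPS

def stm_rate(m):
    return m * STM1_MBPS

for n, m in [(3, 1), (12, 4), (48, 16), (192, 64), (768, 256)]:
    assert abs(oc_rate(n) - stm_rate(m)) < 1e-6   # equivalent SONET and SDH levels
    print(f"OC-{n} / STM-{m}: {oc_rate(n):,.2f} Mbps")
# OC-3 / STM-1: 155.52 Mbps ... OC-768 / STM-256: 39,813.12 Mbps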
As you might expect, most telecommunications professionals aren't initially inclined to become SONET or SDH experts, given the complexity of the SONET and SDH multiplexing hierarchies and the difficulty of commanding the terminology. Most prefer a quick reference that highlights the most salient point: the effective bandwidth speeds of SONET/SDH. Table 5-5 offers a comparison of SONET and SDH transmission rates.
Table 5-5 Comparison of SONET and SDH Transmission Rates (digital hierarchy for United States SONET per GR.253; digital hierarchy for European SDH per G.691)

Line Rate (Mbps)      Payload Rate (Mbps)  SONET Electrical Signal  SONET Optical Carrier (OC) Transport Level  SDH Transport Level
51.84                 50.112               STS-1                    OC-1                                        STM-0
155.520               150.336              STS-3                    OC-3                                        STM-1
622.080               601.344              STS-12                   OC-12                                       STM-4
2488.32               2405.376             STS-48                   OC-48                                       STM-16
9953.28 (10 Gbps)     9621.504             STS-192                  OC-192                                      STM-64
39,813.12 (40 Gbps)   38,486.016           STS-768                  OC-768                                      STM-256
SONET/SDH Network Elements

To make the SONET/SDH hierarchies work in practical transmission networks, SONET/SDH networks are built using combinations of the following network elements:
• Terminal multiplexers—Aggregate electrical signals, such as DS1/E1 and DS3/E3, to feed a software-based electrical TDM multiplexer that packages these signals and converts the resulting bit stream into an optical carrier at OC-N/STM-N line rates. Terminal multiplexers are so named because they tend to terminate the optical link and convert the optical signals back to electrical toward the client side.
• Regenerators—Sometimes called repeaters, these network elements are responsible for amplifying or regenerating an attenuated optical signal as a result of inherent optical fiber signal loss. Regenerators are often cascaded to extend the optical reach of SONET/SDH network elements.
• Add/drop multiplexers (ADMs)—Similar to terminal multiplexers, ADMs are multiplexers and demultiplexers often positioned within the optical fiber backbone of the SONET/SDH network. The purpose of the ADM is to multiplex or add client-side electrical signals onto the optical carrier, whether SONET or SDH, and to demultiplex or drop signals from the optical bit stream to the electrical client-side interfaces as required.
• Digital access cross-connect systems (DACSs)—Used to make two-way cross-connections between signals of various levels for traffic management purposes such as DS1/E1 signal grooming into DS3/E3s, consolidating several OC-3s/STM-1s into an OC-12/STM-4, and so on. Both wideband and broadband DACSs are typically found in common SONET/SDH networks.
These network elements are combined to create various types of network topologies. Point-to-point, point-to-multipoint, hub, and ring topologies are typical architectures in SONET/SDH networks, with the ring-based architectures enjoying the most popularity due to reliability and overall facilities cost management. SONET/SDH services are usually provisioned over dual-path optical fiber rings using ADMs that feed DACSs. As mentioned, ADMs are multiplexers/demultiplexers that add lower-rate signals upstream to a larger OC-N/STM-N signal passing through the ADM or drop larger signals from the OC-N/STM-N bit streams down into lower-rate interfaces downstream of the ADM. ADMs also provide an interface conversion, from optical to electrical for example. DACSs usually are deployed as SONET/SDH hubs, and you can find them in the following two types:
• Wideband digital access cross-connects—In a wideband DACS, the digital signal switching is performed for SONET at the VT level, for DS1-level connections and grooming. For SONET, the wideband DACS terminates OC-N and DS3 levels and switches DS1s into and out of them. In an SDH network, the wideband DACS performs digital signal switching at the E1/TU-12 container level. Effectively, the wideband DACS multiplexes and demultiplexes E1/TU-12 containers into and out of SDH E1, E3, E4, or STM-N signals.
• Broadband digital access cross-connects—A broadband DACS creates two-way cross-connections between various SONET OC-N level signals and DS3s. It is termed broadband because it is designed to switch at the DS3 or higher level. For SDH, broadband DACSs generally switch at the E3/AU-4 level and higher, to other E3s, E4s, or STM-N signals.
Figure 5-9 depicts a typical SONET/SDH design for a metropolitan area using ADMs and DACSs.

SONET/SDH equipment was designed to efficiently and reliably transport 64 Kbps voice circuits from the customer premises to (and beyond) the nearest exchange; however, it was not intended to support the enormously growing demand for IP bandwidth. Traditional TDM-based metropolitan networks have had to "cram" support for the enormous growth in data traffic, and have largely been adapted to perform this service. Packet over SONET/SDH was the first of these adaptations.
Figure 5-9 Metropolitan SONET Ring Design with ADMs (a regional metro OC-48/STM-16 ring and a metro access OC-12/STM-4 ring interconnected through broadband and wideband DACSs, with ADMs at the customer premises, POP, and CO, and an Ethernet hand-off at the customer premises; ADM = add/drop multiplexer). Source: Cisco Systems, Inc.
Packet over SONET/SDH

With the vast installed base of SONET/SDH infrastructure and the desire to converge both voice and data transport, SONET/SDH was adapted to carry data more efficiently. Because data is not aligned on 64 Kbps boundaries as voice signals are, a new framing interface is needed at the data link layer to take IP packets and map them efficiently into SONET/SDH payloads. Packet over SONET/SDH (PoS) is a Layer 2 technology that employs a couple of standard techniques to provide very efficient transport of data over SONET/SDH. PoS efficiently encapsulates the IP packets with a low-overhead Point-to-Point Protocol (PPP) header. Two RFCs describe the PoS protocol mappings:
• RFC 1662 originated in 1994 and defined the protocol mappings for carrying packets with PPP headers in high-level data link control (HDLC)-like framing.
• RFC 2615 was specified in June 1999 to take the PPP in HDLC-like framing and place it within SONET/SDH. So RFC 2615 is the PPP over SONET/SDH specification, but it could just as well read as "PPP in HDLC-like Framing (RFC 1662) over SONET/SDH." To be specific, it's IP into PPP, PPP into HDLC-like framing, and that into a SONET/SDH frame.
The PoS standard uses OC-3/STM-1 at 155 Mbps as the basic data rate, effectively using 149.760 Mbps of this bandwidth. PoS frames are mapped via PPP in an HDLC-like framing technique into the SONET/SDH payload envelope as octet streams that are aligned on octet boundaries.

Figure 5-10 depicts PoS's relationship to the Open System Interconnection (OSI) model. As the diagram shows, IP datagrams at the network layer (Layer 3) undergo protocol encapsulation, which places a PPP header around the IP datagram at Layer 2. PPP has always had a link negotiation feature, and that capability remains the same as in any PPP communication. Further Layer 2 processing (packet delineation and error control) occurs to place the PPP-encapsulated packet into an HDLC frame. The HDLC frame is then byte delineated into a SONET/SDH SPE and delivered over the SONET/SDH optical transport link.

Figure 5-10 Packet over SONET/SDH Using the OSI Model
(Figure 5-10 maps IP at the network layer, PPP in byte-synchronous HDLC framing at the data link layer, and SONET/SDH at the physical layer, with protocol encapsulation, link initialization, packet delineation, error control, and byte delineation occurring between the layers.) Source: Cisco Systems, Inc.
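The resulting byte layout is compact. The sketch below builds a simplified PPP-in-HDLC-like frame around an IP packet in the spirit of RFC 1662 and RFC 2615; the FCS uses a generic CRC-32 as a stand-in for the exact FCS procedure, and octet stuffing and PoS payload scrambling are omitted, so treat it as an illustration of the layering rather than a conformant implementation.

import struct
import zlib

# Simplified PPP-in-HDLC-like framing for Packet over SONET/SDH (illustrative only).
# Real RFC 1662/2615 framing adds octet stuffing, the precisely defined FCS, and
# (for PoS) payload scrambling before the frame is mapped into the SPE.

HDLC_FLAG = 0x7E         # frame delimiter
HDLC_ADDRESS = 0xFF      # all-stations address
HDLC_CONTROL = 0x03      # unnumbered information
PPP_PROTO_IPV4 = 0x0021  # PPP protocol number for IPv4

def pos_frame(ip_packet: bytes) -> bytes:
    body = struct.pack("!BBH", HDLC_ADDRESS, HDLC_CONTROL, PPP_PROTO_IPV4) + ip_packet
    fcs = zlib.crc32(body) & 0xFFFFFFFF          # stand-in for the RFC 1662 32-bit FCS
    return bytes([HDLC_FLAG]) + body + struct.pack("!I", fcs) + bytes([HDLC_FLAG])

frame = pos_frame(b"\x45" + b"\x00" * 19)        # a dummy 20-byte IP header
print(len(frame), "bytes on the wire for a 20-byte packet")   # 30 bytes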
PoS efficiency is measured by the ratio of data bytes to total bytes (data plus overhead) within the SONET payload envelope transmitted. The larger the packet, the better the efficiency and the less overhead. PoS averages about 97 percent efficiency, compared to the ATM average efficiency of 85 percent. Table 5-6 shows a useful comparison of PoS efficiency with typical packet sizes.

Table 5-6 Packet over SONET/SDH Efficiency

Packet Size (Bytes)   PoS Efficiency (SPE %)   ATM Efficiency (SPE %)
64                    86.8                     43
128                   94                       69
256                   97                       75
512                   98.6                     85
1024                  99.3                     86
1518                  99.5                     88
2048                  99.6                     89
4352                  99.8                     89
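The shape of Table 5-6 can be approximated with simple per-packet arithmetic. The sketch below assumes roughly 9 bytes of PPP/HDLC overhead per PoS packet and models ATM as AAL5 segmentation into 53-byte cells with 48-byte payloads; the table's own accounting differs (and reports lower ATM figures for small packets), so the output is an approximation of the trend rather than a reproduction of the table.

import math

# Rough per-packet efficiency models (approximations, not the table's exact method).
POS_OVERHEAD_BYTES = 9      # assumed PPP header + HDLC framing + FCS per packet
AAL5_TRAILER_BYTES = 8      # AAL5 CPCS trailer added before padding to 48-byte cells

def pos_efficiency(packet_bytes):
    return packet_bytes / (packet_bytes + POS_OVERHEAD_BYTES)

def atm_efficiency(packet_bytes):
    cells = math.ceil((packet_bytes + AAL5_TRAILER_BYTES) / 48)
    return packet_bytes / (cells * 53)

for size in (64, 512, 1518):
    print(f"{size:5d} bytes: PoS ~{pos_efficiency(size):.1%}, ATM ~{atm_efficiency(size):.1%}")
# Larger packets amortize the fixed overhead, which is why both columns in
# Table 5-6 climb toward their plateau as the packet size grows.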
The use of the PPP encapsulation technique provides a clue as to the intended topological use of PoS. PoS is configured as a point-to-point data transmission service, typically between two routers that connect over a SONET/SDH link, either a linear link or perhaps a SONET/SDH ring. Even in a SONET ring configuration, the service is provisioned as a point-to-point connection between two SONET ADMs that each connect to PoS interfaces on routers.

PoS has been a successful technology for high-speed packet transport in WAN applications. The PoS technologies are built into interface cards that can be added to an IP packet router, allowing the router to interface with a SONET/SDH ADM at OC-3/STM-1, OC-12/STM-4, OC-48/STM-16, and OC-192/STM-64 data rates. Implemented since 1997, PoS is a technology used in service provider networks and in enterprise networks. Enterprise networks customarily use PoS for high-speed WAN interconnection of data centers or large campus environments. PoS connections to ISPs are also popular. Service providers often use PoS interfaces on backbone routers, between routers within their hierarchical metropolitan networks, and where POP-based aggregation routers need to be connected between the customer edge and the provider metropolitan core network. Figure 5-11 shows an example of the use of PoS for interconnecting provider or enterprise routers over SONET/SDH ring connectivity.

Figure 5-11 IP Packet Router Connectivity Using PoS over SONET/SDH
(Figure 5-11 shows two IP packet routers with PoS interfaces connected to SONET/SDH ADMs on a SONET/SDH ring of network elements, forming a logical point-to-point PoS link across the ring.) Source: Cisco Systems, Inc.
SONET/SDH Challenges with Data

While SONET/SDH and techniques such as PoS are indeed effective for packaging digital voice and, to some extent, digital data, SONET/SDH optical networking platforms are built on the SONET/SDH framework of fixed-bandwidth circuits. This assumes that you have a large voice infrastructure to support along with the newer data transport services. Many of the newer providers are not in the market to provide circuit-based voice services and, as such, would not deploy a SONET/SDH infrastructure if data services were their intended primary offering.

SONET/SDH's fixed-bandwidth circuit allocation techniques were used to carry IP traffic for years, and the inefficiency of this structure became apparent when the bursty nature of IP data was considered. In terms of transmission efficiency, IP transmission over predefined, fixed-size bandwidth pipes is suboptimal, because only a fraction of the "pipe's" capacity is utilized at any moment in time. Packet over SONET/SDH helped to address the payload inefficiencies, especially where SONET infrastructure must be maintained. But with the movement of Ethernet technologies into the service provider domain, efficiencies of a different sort became a challenge.

Also, the scaling of SONET/SDH occurs in four-fold increments and isn't granular enough to meet the different bandwidth requirements of today's data networks. For example, interfacing and transporting a 100 Mb Ethernet across a SONET ring would require an OC-3 at 155 Mbps to carry the payload, resulting in about 33 percent of the circuit going unused (about 55 Mbps of waste). These challenges are due to the limitations of using TDM with standards that preceded the popularity of Ethernet-based data in the WAN. Next-generation SONET/SDH features are addressing these challenges and are covered in the "Optical Ethernet" section later in this chapter.
Understanding RPR and DPT

As the previous discussion showed, SONET and SDH are really electronic routing systems for an optical carrier, where all traffic is considered high priority with generally predictable traffic patterns. Granted it's digital, but it is a bit-oriented switching process that, 20 years or so later, becomes a bit dated (pun intended). Statistical multiplexing techniques are needed. Statistical multiplexing takes full advantage of IP's bursty traffic pattern by multiplexing multiple streams into a single pipe to fully utilize the available bandwidth. Transport for packet data is optimized with statistical multiplexing techniques, rather than with TDM as in SONET/SDH. More efficient use of bandwidth using statistical multiplexing delivers greater revenue per unit of bandwidth installed and, therefore, higher profitability. An important optical core network technology that uses statistical multiplexing techniques is Resilient Packet Ring (RPR).
RPR technology is a new, standards-based Media Access Control (MAC) layer (Layer 2) ring-based protocol, which providers can deploy over Layer 1 fiber-based networks such as SONET/SDH, over WDM networks, or over dark fiber. Standardized in June 2004 as IEEE 802.17, RPR refers to the category of products and technologies that deliver shared packet ring functionality using the IEEE 802.17 protocol. Devices on an RPR ring are referred to as RPR stations.

Dynamic Packet Transport (DPT) refers to the Cisco family of products that implement the shared packet ring functionality, including support for the RPR standard. First deployed in 1999, the Cisco DPT uses the spatial reuse protocol (SRP) and was the forerunner to the IEEE standard of the 802.17 protocol. Since the standard was ratified, Cisco has incorporated the 802.17 protocol into its DPT feature set. DPT is based on the SONET bidirectional line switch ring (BLSR) technology without the need to reserve half of the SONET ring bandwidth for protection traffic. Devices on a DPT ring are referred to as SRP nodes.

RPR and DPT are different depending on the frame of reference. If comparing RPR and prestandard DPT together, each has minor variations in implementation between RPR's 802.17 protocol and DPT's SRP protocol. As a result, an RPR station doesn't communicate with a DPT (prestandard) SRP node. However, both could exist on the same fiber rings, as long as each had another like station or node with which to communicate. Many networks that began with the DPT prestandard using SRP can be migrated to the RPR standard in this way. Since the standardization of RPR and the 802.17 protocol, Cisco Systems, Inc. has introduced line cards under the DPT family that are capable of both 802.17 RPR and SRP operation. In this frame of reference, RPR and DPT become the same.

The main purpose of RPR and DPT is to apply data optimization to ring-based optical networks. Each allows the full bandwidth of a fiber ring to be realized in both directions by advertising traffic utilization information that facilitates reuse of available bandwidth between RPR stations or DPT nodes on the ring. This means that an OC-48/STM-16 ring can now deliver close to 5 Gbps of bandwidth, and an OC-192/STM-64 ring can deliver close to 20 Gbps of bandwidth. These bandwidths are achievable without dedicating half of the ring bandwidth as protection traffic.

Both RPR and DPT mirror the SONET/SDH reliability of sub-50 ms ring protection and restoration. In fact, protection is the primary reason to use a ring topology, because every node has two possible fiber paths to every other node on the ring. If a ring connection is broken within an optical span, the RPR stations or SRP nodes on each side of the break will steer or wrap the surviving parts of the rings together to avoid the break. While similar to SONET/SDH automatic protection switching (APS), RPR and DPT use a more intelligent protection mechanism that can steer or wrap based on fiber failure, node failure, or signal degradation.
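A quick back-of-the-envelope check of those capacity figures: with both rotations carrying revenue traffic and no 50 percent protection reservation, the usable capacity is roughly twice the line rate, and spatial reuse can push the effective figure higher still. The comparison below is an illustration of that arithmetic, not a measurement.

# Usable capacity on a packet ring when both rotations carry working traffic and
# no bandwidth is reserved for protection (spatial reuse can raise this further).

LINE_RATE_GBPS = {"OC-48/STM-16": 2.488, "OC-192/STM-64": 9.953}

for ring, rate in LINE_RATE_GBPS.items():
    sonet_usable = rate / 2     # BLSR-style ring with half the bandwidth reserved
    rpr_usable = rate * 2       # both counter-rotating rings carry working traffic
    print(f"{ring}: SONET/SDH ~{sonet_usable:.1f} Gbps versus RPR/DPT ~{rpr_usable:.1f} Gbps")
# OC-48/STM-16: ~1.2 versus ~5.0 Gbps; OC-192/STM-64: ~5.0 versus ~19.9 Gbps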
Historically, Token Ring and Fiber Distributed Data Interface (FDDI) were examples of data ring, enterprise-class LAN topologies that circulated packets from node to node around the physical ring, which was formed by a combination of fiber and copper cables. RPR and DPT also are data ring-facilitating technologies, but that’s where the similarities end. RPR and DPT are provider-class, sophisticated packet carriers designed for optimizing bursty, data transport across optical ring topologies in service provider networks, whether they are MANs or WANs.
RPR/802.17 Architecture

The IEEE standard of RPR is best described as a Layer 2 transport architecture based on a dual, counter-rotating ring topology. At Layer 2, RPR is considered a Media Access Control (MAC) protocol. One of the prime benefits of RPR is the ability to utilize the optical ring for maximum bandwidth efficiency. To allow RPR to achieve its goal of data optimization, a new MAC layer technique is used, which is referred to as the 802.17 protocol. The 802.17 MAC protocol accepts IP packets from the upper layers and logical link control sublayer and presents them to the appropriate Layer 1 medium for delivery. The following two Layer 1 transport media are commonly used:
• SONET/SDH using either Generic Framing Procedure (GFP) or HDLC-like adaptations
• Ethernet via the Gigabit Ethernet Packet PHYsical layer adaptation
Figure 5-12 depicts the RPR 802.17 MAC protocol relative to the upper and lower layers. IP packets are passed through the 802.17 MAC at Layer 2. Depending on implementation, the 802.17 MAC layer can move data through the GFP Reconciliation Sublayer, the SONET Reconciliation Sublayer, or the Gigabit Ethernet and 10 Gigabit Ethernet sublayers. For carrying the 802.17 protocol over SONET/SDH, both a GFP adaptation and an HDLC adaptation are then used to present the data streams to the SONET/SDH physical layer at an OC-3/STM-1 line rate or higher. The 802.17 protocol can also bypass SONET/SDH and use the Gigabit Ethernet or 10 Gigabit Ethernet Reconciliation sublayers and physical media or attachment interfaces before presenting the data stream to the Ethernet physical link.
Figure 5-12 RPR—802.17 Layer Diagram (the 802.17 MAC sits at Layer 2 below the logical link control sublayer; at Layer 1 it reaches a SONET/SDH PHY at 155 Mbps and above through the GFP or SONET reconciliation sublayers using GFP or HDLC-like adaptation, or an Ethernet PHY through the GigE/10GigE reconciliation sublayers and the GMII/XGMII/XAUI interfaces. RS = Reconciliation Sublayer; GFP = Generic Framing Procedure; HDLC = High-Level Data Link Control; GMII = Gigabit Media Independent Interface; XGMII = 10 Gigabit Media Independent Interface; XAUI = 10 Gigabit Attachment Unit Interface.) Source: Cisco Systems, Inc.
The 802.17 protocol achieves several functions as follows:
• Topology awareness and autodiscovery—RPR stations exchange details such that each station understands the full ring topology. For automatic topology discovery, control packets are sent around the ring to discover stations, building and maintaining a current network topology map in the process.
• Fairness—Information is exchanged between neighboring stations to permit the fair sharing of the bandwidth between them. Four types of fairness algorithms are defined in the standard:
  — General
  — Weighted
  — Aggressive vs. conservative
  — Single choke and multichoke
• Traffic classes—The 802.17 protocol supports three traffic classes for high-, medium-, and low-priority transmission.
• Rate limiting—The 802.17 protocol applies queue limits to control the allocation of bandwidth within the traffic classes.
• Protection—RPR stations can use two modes of protection to either steer away from a fiber span or station failure, or to wrap the rings in the event of a span or station failure.
With RPR, the first step is the separation of the control packets from their data packets, sending them bidirectionally across the inner and outer rings. With this approach, control-signaling information, no longer serialized within a data stream, can be accelerated for quicker and better bandwidth adaptation and self-healing protection. Figure 5-13 depicts the RPR counter-rotating ring architecture over inner and outer rings.

Figure 5-13 RPR Architectural View
(Figure 5-13 shows the dual counter-rotating rings: outer-ring data paired with inner-ring control, and inner-ring data paired with outer-ring control.) Source: Cisco Systems, Inc.
For RPR to achieve carrier-class reliability, it employs two modes of protection-switching capability that can either wrap the rings or steer traffic away from a station or fiber span failure. This occurs at sub-50 ms, which is fast enough to be transparent to Layer 3 protocols, avoiding routing interruption and the need for routing convergence. RPR is commonly used for 622 Mbps and 2.5 Gbps optical rings and will scale to 10 Gbps and beyond. RPR applications are appropriate for
• Service provider intra-POP connectivity
• Metro and regional MAN and WAN optical ring connectivity
• Local access aggregation using optical ring
• Metro IP access rings for Ethernet delivery, campus networking
• Cable multiple service operator (MSO) regional connectivity and hub/access solutions
When used over existing SONET/SDH infrastructure, RPR rings are usually deployed in a BLSR type of optical ring arrangement. The SONET/SDH BLSR ring structure can continue to support SONET/SDH-based services while data traffic is migrated to the RPR 802.17 protocol. The general recommendation for RPR design is to keep within an overall ring diameter of 2500 km, with no more than 40 RPR stations on a ring and, more conservatively, 32 RPR stations per ring for best optimization. This fits well with typical metropolitan ring designs that usually employ 10 to 36 stations or central offices.
DPT Using SRP Architecture

Within the Cisco DPT prestandard architecture, the complement to the RPR 802.17 protocol is called SRP—the spatial reuse protocol. SRP supports a subset of the functions of 802.17, as SRP predates the standardization of the IEEE 802.17 protocol. SRP is defined as an informational RFC in RFC 2892. Like RPR's 802.17 protocol, SRP includes topology awareness and autodiscovery, a fairness algorithm, support of traffic classes, rate limiting, and ring protection. These are conceptually identical to RPR, as much of the SRP technology is the basis for the RPR standard. When SRP was first introduced as a shared ring protocol, two new characteristics were worthy of mention: an SRP fairness algorithm and destination-based packet stripping.

The SRP fairness algorithm is distributed into all SRP nodes on the ring. Through the algorithm, each node on the ring receives its fair share of bandwidth by controlling the rate at which each node sends packets onto the rings. This is referred to as the global fairness function of SRP. It avoids a condition where a node could monopolize bandwidth on the ring, contributing to variable latency, jitter, and other delay conditions.

Older technologies such as Token Ring used source-based packet stripping. In Token Ring, the active ring monitor node was responsible for sending a token around the ring. A sending station would append data to the token intended for a destination node, which would copy the token+data, set the token's copied bit, and forward the copied token+data frame on to the originating station, which was responsible for removing it from the ring and then reinitiating an unused token back onto the ring for the next node wishing to send data. This token consumed its portion of bandwidth around the entire ring for the duration of the exchange as a result of this dependency on source-based stripping.
With SRP's use of destination-based packet stripping, the destination SRP node copies the data intended for it and then immediately removes the packet from the ring, freeing up bandwidth on the remaining spans for other SRP nodes. Using the SRP fairness algorithm along with destination-based packet stripping allows ring nodes to exchange more than their fair share of traffic with other local nodes, maximizing the spatial reuse of bandwidth while keeping within the bounds of the overall guidelines of global fairness.
NOTE
Destination-based stripping with either RPR or SRP is used with unicast packets only. Multicast and broadcast packets differ in that their destination addresses refer to a group of nodes rather than to an individual node, as a unicast address does.
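To see how destination stripping frees spans for other senders, the short sketch below walks a unicast packet around one rotation of a small ring and records only the spans it actually crosses. The node count and flows are made-up values chosen to match the example that follows; this is a toy model, not the SRP algorithm.

# Toy model of destination stripping on a single rotation of a packet ring.
# A unicast packet occupies only the spans from source to destination; the
# destination copies the packet and strips it, leaving the remaining spans free.

RING_NODES = 7   # nodes numbered 1..7, illustrative

def spans_used(src, dst, nodes=RING_NODES):
    """Return the list of spans, as (from_node, to_node) pairs, a packet crosses."""
    spans = []
    node = src
    while node != dst:
        nxt = node % nodes + 1        # next node around the ring
        spans.append((node, nxt))
        node = nxt
    return spans

flows = [(4, 7), (5, 6), (1, 3)]      # concurrent unicast flows
for src, dst in flows:
    print(f"{src} -> {dst}: uses spans {spans_used(src, dst)}")
# The 4->7 and 5->6 flows never touch the 1->2 and 2->3 spans, so the 1->3 flow
# can use its full fairness allocation on those spans.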
The concept of destination-based stripping is illustrated in Figure 5-14. Node 4 and node 7 are exchanging data with each other while sharing some of the bandwidth in conjunction with the node 5 and node 6 data exchange. Because destination node 7 has stripped the node 4 sourced data from the ring, and destination node 6 has stripped the node 5 sourced data from the ring, the ring spans between nodes 1 and 3 are clear of any traffic, so node 1 can send data at its full global fairness allocation to node 3.

Figure 5-14 Cisco SRP Using Destination Stripping
(Figure 5-14 shows nodes 1 through 7 on the ring; data packets from node 4 are destination stripped at node 7, and data packets from node 5 are destination stripped at node 6, leaving the spans between nodes 1 and 3 free.) Source: Cisco Systems, Inc.
Cisco SRP uses an Intelligent Protection Switching (IPS) mechanism that is similar to SONET/SDH APS but doesn't require prereservation of ring bandwidth. IPS uses the wrapping form of protection.
Additionally, SRP's wrapping protection capability doesn't rely on SONET/SDH overhead byte information, allowing its use over non-SONET/SDH facilities such as WDM and dark fiber. Figure 5-15 shows the concept of an RPR or DPT ring "wrapping" the outer ring to the inner ring upon failure of a fiber span.

Figure 5-15 RPR or DPT Protection Ring Wrap Around Failure
(Figure 5-15 shows an RPR or DPT ring wrapping the outer ring to the inner ring around a failed fiber span.) Source: Cisco Systems, Inc.
A useful comparison of both the RPR standard and the Cisco DPT/SRP protocols is listed in Table 5-7.

Table 5-7 Comparing RPR and DPT/SRP

Feature (Year)        RPR (2004)               DPT/SRP (1999)
Owner                 IEEE 802.17 standard     Cisco prestandard RFC 2892
Terminology           RPR stations             SRP nodes
Spatial reuse         Single/multichoke        Single choke
MAC traffic classes   Classes A, B, and C      Classes high and low
Protection switching  Steering and wrapping    Wrapping
Fairness granularity  4 types                  1 type
Topology discovery    Yes—multicast            Yes—unicast
Bridging support      Yes                      No
RPR and DPT Benefits

RPR and DPT combine the intelligence of IP routing and statistical multiplexing with the bandwidth efficiencies and resiliency of optical rings. In addition, RPR and DPT add the simplicity and cost advantages of Ethernet. Offering an end-to-end metro architecture (metro POPs, regional metro networks, and metro access networks), shared-packet, ring-based networks are delivering dramatic advantages to metropolitan service providers. RPR and DPT networks consist of two counter-rotating fiber rings that are fully utilized for transport at all times for superior fiber utilization (unlike SONET/SDH-based networks, which dedicate 50 percent of their available bandwidth to protection against service-affecting conditions). RPR and DPT combine the best features of SONET/SDH and Ethernet into one optical layer that is data oriented, allowing for a shift to byte-oriented services from bit-oriented transport.

RPR and DPT also protect existing investments in fiber and other transmission infrastructures. Because most current metro-area fiber is ring-based, RPR or DPT will best utilize existing fiber facilities. Moreover, RPR or DPT can operate over SONET/SDH ADMs, WDM equipment, or dark fiber, allowing smooth and efficient migration.

RPR and DPT provide multiple priority queues at the transmission level, allowing for QoS delivery of delay- and jitter-sensitive applications such as voice and video. This is accomplished by mapping the IP precedence values of IP packets from the packet's type of service (ToS) field into the RPR priority field or SRP MAC header.

Because RPR and DPT are plug-and-play, they eliminate manual provisioning, allowing stations or nodes to be added or removed from the ring on-the-fly. Thus, service providers can more easily and quickly scale their networks without affecting other ring members. Moreover, there is no need for channel provisioning, because each ring member can communicate with every other member based on the MAC address.

Using either RPR/802.17 or DPT/SRP for data optimization allows providers to support the demands of bursty IP traffic services across their metropolitan ring architectures. These infrastructures are designed in high-availability rings to eliminate the cost overhead of meshed networks with their extra facilities costs and port costs. With either, the ability to maximize the utilization of ring bandwidth is paramount, especially when high-speed Ethernet services are offered. RPR's and DPT's inherent transmission fairness allows for consistent delay in support of VoIP and IP-based video applications. Both RPR and DPT can layer IP-based services efficiently, optimally, and, best of all, economically upon optical ring-based architectures.
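One way to picture the QoS mapping described above is to take the IP precedence bits from the ToS byte and assign them to the three 802.17 traffic classes. The thresholds below are an illustrative policy choice, not a mapping defined by the standard.

# Illustrative mapping of IP precedence (top three bits of the ToS byte) onto
# the three 802.17 traffic classes. The cut-off points are an example policy.

def rpr_class(tos_byte: int) -> str:
    precedence = (tos_byte >> 5) & 0x7
    if precedence >= 5:          # e.g., voice and video traffic
        return "Class A (high)"
    if precedence >= 2:          # e.g., priority business data
        return "Class B (medium)"
    return "Class C (low)"       # best-effort traffic

for tos in (0x00, 0x60, 0xB8):   # precedence 0, 3, and 5 (0xB8 is typical for VoIP)
    print(f"ToS 0x{tos:02X} -> {rpr_class(tos)}")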
Optical Ethernet

Ethernet is the ultimate customer-access interface for IP services delivery. Ethernet is being installed at a pace exceeding 25 million ports a year. The simplicity, volume, and physical
medium adaptability of Ethernet all lend to the descending cost curve and the ascending adoptability of Ethernet. Outpacing its rivals by greater than 20 to 1, Ethernet has moved from LAN technology to the MAN and toward WAN consideration.

People are taking Ethernet home with them. The number of home-based Ethernet networks is soaring for users of broadband Internet connections worldwide. To link consumer Ethernet with corporate Ethernets for communications, caching, and commerce, optical Ethernet is increasingly the conduit. Media-transparent—whether cable, DSL, optical, or wireless—Ethernet is surfing LAN-switching capability, rocketing on computer gigahertz improvement, and riding optical fiber—breaking speed and distance barriers to 10 Gbps and beyond.

Using optical fiber, Ethernet has moved out of the campus LAN and into the MANs and WANs. By using optical Ethernet in provider networks, it becomes possible to set and forget the customer interface equipment, enjoying linear scalability of bandwidth through software control rather than truck rolls and circuit-dependent interfaces. Ethernet in the provider network is a significant contributor to OpEx savings, as it is a cost-effective way to match interfaces, speeds, and protocols with customers. For these reasons, Ethernet carries a lot of value with the customer.

Through the benefits of multimode and single-mode optical fiber and continuing advancements in Ethernet technology, Gigabit Ethernet is now a familiar tenant in enterprise and service provider metropolitan offerings. To aggregate the bandwidth of Gigabit Ethernet from desktops, servers, and mainframes, 10 Gigabit Ethernet (10GE) moves the decimal point for a 10x improvement in backbone capacity. At 10 Gbps, 10GE is the current champ of the Layer 2 speed race. Perhaps most significant of all, 10GE in provider networks enables physical layer convergence of the LAN, MAN, and WAN. Both Gigabit Ethernet and 10GE become customary transports for all enterprise protocols such as Fiber Connection (FICON), Enterprise System Connection (ESCON), Fibre Channel, and IP.

Gigabit Ethernet is most often used as an edge connectivity service and will appeal to a large variety of customers. Sporting a ten-fold performance increase over Gigabit Ethernet at about three to four times the cost of GE, 10GE will find many applications as a core and aggregation technology in MANs and long-haul networks using DWDM. It can also be used as an edge service for very high-bandwidth applications. At this line rate, 10GE can be used to link together supercomputers for collaborative, shared, or grid computing; medical imaging; remote medical telesurgery with robotics; geographically synchronized storage farms; and so on.
Gigabit Ethernet and 10GE over Optical Networks

Gigabit Ethernet and 10GE are becoming the technologies of choice within enterprise, metropolitan, wide area, and perhaps even residential networks. With the ability to directly modulate Ethernet over optical fiber at distances up to 40 km or more, providers, operators,
companies, and users can converge LANs, MANs, and WANs that use the same Layer 2 transport end to end. This enables lower-cost MANs using Layer 3 and Layer 4 Ethernet switching and 10GE backbones. With Ethernet over optical, you can do the following:
• Market profitable Ethernet services such as
  — Ethernet LAN extension
  — Ethernet-based Internet service
  — Storage area network services
  — Disaster recovery services
  — Ethernet home networking
• Use Ethernet as a simple layer to leverage less expensive point-to-point services equivalent to an OC-3 service.
• Use CWDM and DWDM wavelengths to provision multiple virtual Ethernet services.
• Sell optical wavelengths to other service providers who want to offer Ethernet services to businesses.
• Provide routing services that easily extend intelligent IP services between customer buildings.
• Use Ethernet as an effective transport for video on demand (VOD).
Gigabit Ethernet over optical fiber was first standardized in 1998 (IEEE 802.3z). 10GE was standardized in 2002 as IEEE 802.3ae. Additional Ethernet over optical standardization activity continues as shown in Table 5-8.

Table 5-8 Gigabit Ethernet and 10GE Standards

Standard: IEEE 802.3z
Year ratified: 1998
Description: Gigabit Ethernet (1000BASE-X, 1000BASE-T)
Fiber support: Multimode and single-mode fiber
Standard optics: 1000BASE-SX 850 nm for MMF only; 1000BASE-LX 1310 nm for MMF and SMF

Standard: IEEE 802.3ae
Year ratified: 2002
Description: 10GE (10GBASE-xR, 10GBASE-xW, 10GBASE-LX4)
Fiber support: Multimode and single-mode fiber
Standard optics: 10GBASE-SR and SW using 850 nm for MMF only; 10GBASE-LR and LW using 1310 nm for SMF; 10GBASE-LX4 using four lanes of 1310 nm over MMF and SMF at 2.5 Gbps each; 10GBASE-ER and EW using 1550 nm over SMF

Standard: IEEE 802.3ah
Year ratified: 2004
Description: Ethernet in the First Mile (EFM) (100BASE-FX, 100BASE-LX10, 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-BX-D, 1000BASE-BX-U)
Fiber support: 100/1000BASE unidirectional and bidirectional optics; multimode and 10 km single-mode fiber
Standard optics: 1000BASE-LX and 1000BASE-BX for SMF

Standard: IEEE 802.3aq
Year ratified: Estimated 2006
Description: 10GE over multimode fiber
Fiber support: >220 meters of FDDI-grade multimode fiber
Standard optics: Under definition
NOTE
The IEEE specifications describe the port types for each of the IEEE 802.3 interfaces. While 1000BASE indicates that Gigabit Ethernet (GE) is a 1000 Mbps baseband transmission, the T and the X represent GE over category 5 twisted pair (T for twisted pair) and GE over optical fiber (X for wavelength). The following Gigabit Ethernet port types are common in the market:
• 1000BASE-T for 1000 Mbps operation over category 5 copper
• 1000BASE-SX for 1000 Mbps operation over multimode fiber with a short-wavelength 850 nm laser
• 1000BASE-LX for 1000 Mbps operation over single-mode or multimode fiber with a long-wavelength 1310 nm laser and 8B/10B encoding
Gigabit Ethernet for Optical Networks

Gigabit Ethernet is a 1000 Mbps (1 Gbps) Ethernet service that is very popular in the access and distribution layers of MANs. Many enterprises need internetworking between campuses, and Ethernet is a logical choice. Because many of these enterprises use Gigabit Ethernet in their LAN backbones, a Gigabit Ethernet offering is usually a minimum requirement. Gigabit Ethernet finds wide application in the metropolitan area. Metro Ethernet networks can use Gigabit Ethernet to deliver both full- and fractional-speed point-to-point and multipoint Ethernet services. Many providers with a SONET/SDH infrastructure add Gigabit Ethernet connectivity to their offerings.

Gigabit Ethernet can be scaled through the use of the Cisco EtherChannel feature. Based on the IEEE 802.3ad Link Aggregation standard, Cisco products can bond multiple Gigabit Ethernet facilities into a Gigabit EtherChannel to provide increments of 1000 Mbps (1 Gbps) connectivity up to 8 Gbps. With Gigabit EtherChannel, multiple links become an application-transparent bandwidth pool. If one of the links within the channel were to fail, that link's traffic is redirected to another link within one second, much less than the trip point of any protocol timers that could cause or report an application session error. Gigabit EtherChannel is a customary high-availability interconnection method between IP routers. A two-link Gigabit EtherChannel between a pair of routers not only provides 2 Gbps of bandwidth capacity but also ensures that the connection survives in the event of a link or port failure.

Gigabit Ethernet can be provisioned over a metro access ring via single-mode fiber using either CWDM or DWDM. Gigabit Ethernet is also a prime technology for cable service operators to deliver VOD to subscriber households, again generally using DWDM wavelengths to increase the service density and efficiency of their deployed optical fiber backbones that serve their metropolitan subscriber base.

Gigabit Ethernet is also a key technology for use with the Ethernet in the First Mile (EFM) standard 802.3ah. Using the 1000BASE-LX and 1000BASE-BX transceivers over single-mode fiber, Gigabit Ethernet connectivity can be extended to business and residential access areas. The EFM standard is rated for the increased fiber distances that are needed to reach these suburban areas, up to 10 km. EFM provides for use of single- or dual-fiber operation and allows for extended temperature support for the optic transceivers. The vision of the EFM standard is the capability to deliver Gigabit Ethernet to each user, business or residential.
10 Gigabit Ethernet for Optical Networks

10GE is the next stride for scaling the performance of both enterprise and service provider networks, combining multigigabit bandwidth and intelligent services end to end. The 10GE-emerging applications are campus extension, disaster recovery, and data center extensions such as a standard transport for data center protocols including Fibre Channel—
for example, nine Fibre Channels into one 10GE lambda or ESCON with 40 ESCON channels per 10GE lambda. 10GE is a common requirement to facilitate grid computing and supercomputing.

10GE is based on Ethernet, using the same Ethernet MAC protocol, the Ethernet frame format, and the Ethernet frame size. Interestingly, the specification supports full duplex as the only mode of bidirectional communication. Because it is full duplex only, 10GE is unlimited in reach, limited in distance only by the physics and impairments of the optical media transmission that is used. In a full-duplex link, there are no Ethernet packet collisions, so link distances are determined by the limitations of optics and cost and not by the diameter of an Ethernet collision domain.

10GE is more popular in the core portion of MANs. Roughly equivalent in bandwidth to a SONET/SDH OC-192c, 10GE possesses the capacity to scale core networks even further using 10GE EtherChannel. Many enterprises are considering a move to Ethernet for WAN connectivity, especially in the larger metropolitan statistical areas (MSAs). With an abundance of metropolitan fiber and multigigabit switching products, enterprises can extend fractional or full Fast Ethernet and Gigabit Ethernet across owned or leased fiber wavelengths to create enterprise WAN connections up to 40 km or more. Because 10GE requires no change to the Ethernet MAC protocol or packet format, 10GE also supports all upper-layer services. Large enterprises with high-bandwidth requirements between data centers might elect to acquire a 10GE connection across a leased fiber wavelength or perhaps across a leased OC-192/STM-64 circuit from a provider.

Ethernet is an asynchronous link protocol with its network timing and data synchronization maintained within each character in the bit stream. As a router, switch, or hub receives the Ethernet bit stream, it has the opportunity to resynchronize at the beginning of a character and to retime the data transfer. This avoids the use of a network-wide synchronous clock source that must be hierarchically extended end to end within an enterprise network. As you will see in the next section, "Ethernet over Next-Generation SONET/SDH," a SONET network must share the same stratum level clock (usually Stratum 1) across all SONET/SDH nodes, an underlying clock source which is used to avoid timing drift between transmission and receipt of a datagram or TDM voice sample.

The reasons for Gigabit Ethernet's and 10GE's ascendancy into the provider and operator domains are multifold, beyond their inherent simplicity. First, the technologies leverage the vast installed base of Ethernet worldwide, from enterprises to consumers. Second, the design philosophy of the Ethernet manufacturing industry consistently uses high-volume manufacturing and low-cost design to create the appropriate price/performance metrics to ensure the product's market success. Additionally, the open standard specifications of the 802.3 framework enable mass-market competition at every stage of the Ethernet value chain, lowering the cost of components, subsystems, and final products.
Most of the complexities of encoding schemes, electronic circuitry, and optical interfaces are embedded in merchant silicon, enabling a wide choice of Ethernet subsystems at competitive costs. 10GE is following the same price/performance curves brought about by such innovation principles—for example, allowing 10GE to be transported up to 40 km using lower-cost, uncooled pluggable optics. As mentioned earlier in the section "Light Emitters," the use of vertical cavity surface-emitting lasers (VCSELs) is likely to lower these costs even further.

With Ethernet over optical, many choices are now possible, such as Ethernet over SONET/SDH, Ethernet over RPR/DPT, and Ethernet directly over dark fiber using pluggable optic transceivers and optical transponders on metro and long-haul platforms.
Ethernet over Next-Generation SONET/SDH

The success of Ethernet has created a groundswell, charging out of enterprise LANs and creating demand for Ethernet connectivity options in the provider domains. Many service providers want to offer Ethernet services and overlay them onto their embedded SONET/SDH network infrastructure. Next-generation SONET/SDH features have been developed to optimize Ethernet over SONET/SDH bandwidth efficiencies.

Many of the metro provider products use Ethernet cards that interface and map Ethernet ports into SONET STS payloads by aggregating at the STS-1 level. With an STS-1 payload equal to 49.5 Mbps, you can provision various-speed Ethernet services by using STS bandwidth scaling. For example, the following can be provisioned (a rough sizing sketch follows this list):
• Ethernet (10BASE-T) can be transported within an STS-1 circuit.
• Fast Ethernet (100BASE-T) can be transported within an STS-3c circuit.
• Gigabit Ethernet (1000BASE-X) can be transported within an STS-24c circuit.
• Subrate Gigabit Ethernet can be transported in STS-6c (297 Mbps), STS-9c (445 Mbps), and STS-12c (594 Mbps) circuits.
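The following Python sketch works through the sizing arithmetic behind these mappings, assuming roughly 49.5 Mbps of usable payload per STS-1 (the figure used above) and a simplified set of concatenation sizes. It is illustrative only; actual SONET payload capacities and the concatenation sizes supported by a given platform differ slightly.

# Rough STS sizing for Ethernet over SONET, assuming ~49.5 Mbps of usable
# payload per STS-1. Shows why coarse STS-Nc scaling wastes bandwidth for
# higher-speed Ethernet services.

import math

STS1_PAYLOAD_MBPS = 49.5

def sts_for(ethernet_mbps: float) -> tuple[int, float]:
    """Return (STS-1 count of the chosen STS-Nc, payload efficiency)."""
    n = math.ceil(ethernet_mbps / STS1_PAYLOAD_MBPS)
    # Round up to one of the sizes used in the text (1, 3c, 6c, 9c, 12c, 24c, 48c).
    for size in (1, 3, 6, 9, 12, 24, 48):
        if size >= n:
            n = size
            break
    return n, ethernet_mbps / (n * STS1_PAYLOAD_MBPS)

for name, mbps in [("10BASE-T", 10), ("100BASE-T", 100), ("1000BASE-X", 1000)]:
    n, eff = sts_for(mbps)
    print(f"{name:10s} -> STS-{n}c, {eff:.0%} payload efficiency")

Running the sketch shows, for example, that a 1 Gbps service rounded up to STS-24c leaves roughly 16 percent of the circuit idle, which motivates the framing and optimization techniques described next.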
STS bandwidth scaling is a straightforward way to map Ethernet into SONET frames, but it is bandwidth inefficient. Delivering Ethernet over SONET/SDH is not without challenge, because Ethernet is an asynchronous form of data transmission, whereas SONET/SDH uses a synchronous payload. Ethernet is a byte-oriented, self-clocking data stream, whereas SONET is a synchronous, timeslot-based transport. Ethernet creates dynamic bandwidth, whereas SONET/SDH expects constant bandwidth. Ethernet is connectionless, whereas SONET/SDH is connection oriented. Merging the pair as is causes transmission inefficiencies, in effect wasting bandwidth. Wasting bandwidth on SONET/SDH platforms tends to go against the basic tenets of telecommunications. The move to next-generation SONET/SDH addresses this, because several component technologies and standards-based developments are layered on SONET/SDH to increase
the optimization of bandwidth for the carriage of packet-based data. Next-generation SONET/SDH extends the utility of existing SONET/SDH networks by leveraging Layer 1 networking and including technologies such as generic framing procedure (GFP), virtual concatenation (VCAT), and the Link Capacity Adjustment Scheme (LCAS). When using Ethernet over SONET/SDH, there are principally two challenges that must be overcome:
• Framing challenge
• Optimization challenge
The first step is to properly encapsulate and frame the Ethernet for SONET transport. This is needed because Ethernet frame sizes are variable, from 64 bytes to about 1500 bytes. To benefit throughput for some applications, Ethernet jumbo frames (up to 10,000 bytes) are also supported by optical platforms and must be considered by the framing procedure. GFP is very effective for this. The second step is to optimize the bandwidth efficiency of SONET/SDH rings to carry different-speed Ethernet services. VCAT and LCAS are the methods commonly used.
Generic Framing Procedure (GFP)

Standard GFP is a data encapsulation technique that adapts asynchronous, bursty data traffic with variable frame lengths into TDM-based transport over a SONET/SDH facility. It does this by adapting a frame-based data stream into a byte-oriented data stream, mapping the source data stream—Ethernet, for example—into a general-purpose frame. The general-purpose frame is then mapped into well-known SONET/SDH frames, using the TDM paths as one big pipe and therefore taking advantage of unused TDM bandwidth.

GFP supports multiple services, which lends to its growing popularity. GFP can encapsulate IP/PPP, Ethernet, ESCON, FICON, and Fibre Channel, transporting any of these over SONET or SDH Layer 1 networks. This multiservice capability is in stark contrast to traditional encapsulation techniques such as link access protocol service (LAPS) and the HDLC framing mechanisms. GFP uses two different modes of client signal adaptation:
• GFP framed (GFP-F)
• GFP transparent (GFP-T)
GFP framed mode maps a data signal frame in its entirety into one GFP frame. Services such as Fast Ethernet, Gigabit Ethernet, IP, and so on are mapped frame by frame into the GFP frame. This gives the GFP-F mode a variable GFP frame length, which is accounted for in the core header. The variable GFP frame length minimizes overhead and maximizes bandwidth efficiency.
The GFP transparent mode maps the block codes of a data signal into multiple, periodic GFP frames. A data signal such as Fibre Channel, ESCON, FICON, Ethernet, and so on is mapped byte by byte into the GFP frame, clustering GFP frames together to represent the whole data signal block. GFP-T mode uses a constant GFP frame length, which optimizes transfer delay. In short, use GFP-F for bandwidth efficiency and GFP-T for fast transport. Table 5-9 presents a summary of the two GFP modes.

Table 5-9  GFP Encapsulation Modes

GFP Mode: GFP framed (GFP-F)
  Typical Application: Fast Ethernet, Gigabit Ethernet, IP/PPP
  Methodology: Application service mapped frame by frame into the GFP frame; minimal overhead; variable GFP frame length

GFP Mode: GFP transparent (GFP-T)
  Typical Application: Fast Ethernet, Gigabit Ethernet, Fibre Channel, ESCON, FICON
  Methodology: Application service mapped byte by byte into the GFP frame, using multiple frames; facilitates block-coded data signals; optimized transfer delay; constant GFP frame length
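To make the framing step concrete, the following Python sketch builds the four-byte GFP core header (a 2-byte payload length indicator protected by a CRC-16 core HEC) for a GFP-F client frame, reflecting the general structure defined in ITU-T G.7041. It is a simplified illustration: core header scrambling, the payload header contents, and the optional payload FCS are omitted, and the 4-byte payload header length is an assumption for this example.

# Minimal sketch of GFP-F core-header construction: a 2-byte payload length
# indicator (PLI) covering payload header plus client frame, protected by a
# CRC-16 core HEC (cHEC). Scrambling and the optional payload FCS are omitted.

def crc16(data: bytes, poly: int = 0x1021) -> int:
    """Bitwise CRC-16 with generator x^16 + x^12 + x^5 + 1, zero initial value."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(client_frame: bytes, payload_hdr_len: int = 4) -> bytes:
    pli = payload_hdr_len + len(client_frame)      # bytes that follow the core header
    core = pli.to_bytes(2, "big")
    return core + crc16(core).to_bytes(2, "big")   # PLI + cHEC

ethernet_frame = bytes(64)                         # e.g., a minimum-size Ethernet frame
print(gfp_core_header(ethernet_frame).hex())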
Virtual Concatenation (VCAT)

The ITU-T G.707 standard concatenation is a contiguous concatenation (CCAT) process that combines adjacent containers and transports them as one container across the SONET/SDH network. The limitation of this contiguous method is that all of the SONET network elements along the path must be able to recognize and process the concatenated container, resulting in inefficiency for data. Gaps can form in the overall flows, much as a personal computer disk can become fragmented.

More recently defined by the ITU, virtual concatenation (VCAT) addresses these limitations by allowing several noncontiguous containers (fragments) to be transmitted and received as a single flow, called a virtual concatenation group (VCG). Any number of containers can be grouped together, providing better packing and bandwidth granularity. Because the intermediate network elements treat each container in the VCG as a standard, concatenated container, only the originating and terminating SONET elements need to recognize and process the virtually concatenated signal structure. This capability increases transport granularity and efficiency, as demonstrated in Table 5-10.
Table 5-10  Virtual Concatenation Versus Standard Concatenations

Application Service           | Efficiency Without Virtual Concatenation | Efficiency with Virtual Concatenation
Ethernet (10 Mbps)            | STS-1c/VC-3 → 20%                        | STS-1-1v/VC-12-5v → 92%
Fast Ethernet (100 Mbps)      | STS-3c/VC-4 → 67%                        | STS-1-2v/VC-12-47v → 100%
Gigabit Ethernet (1000 Mbps)  | STS-48c/VC-4-16c → 42%                   | STS-3c-7v/VC-4-7v → 95%
ESCON (200 MBps)              | STS-12c/VC-4-4c → 33%                    | STS-1-4v/VC-3-4v → 100%
Fibre Channel (200 Mbps)      | STS-12c/VC-4-4c → 33%                    | STS-1-4v/VC-3-4v → 100%
Fibre Channel (1000 Mbps)     | STS-48c/VC-4-16c → 42%                   | STS-3c-7v/VC-4-6v → 95%
The use of 10 Mbps and 100 Mbps Ethernet over standard SONET STS-x building blocks/OC-x carriers mapped adequately and was reasonably usable despite some bandwidth wastage. Gigabit Ethernet and 10GE, however, did not map well to OC levels and created a lot of waste. Virtual concatenation scales in roughly 50 Mbps increments and drastically increases the utilization efficiency for Ethernet, IBM mainframe channel protocols, and storage area network protocols when transported over SONET or SDH. This allows for the support of more customers within a metropolitan or long-haul SONET/SDH network.
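The efficiency gap in Table 5-10 can be approximated with the short Python sketch below, which assumes roughly 50 Mbps of usable payload per STS-1 and a simplified set of contiguous concatenation sizes. The figures will not match the table exactly (for instance, the table groups STS-3c members for Gigabit Ethernet, so its VCAT figure is 95 percent rather than 100 percent), but the comparison shows why VCAT matters.

# Rough efficiency comparison of contiguous vs. virtual concatenation,
# using an approximate STS-1 payload of 50 Mbps.

import math

STS1 = 50.0    # Mbps, approximate usable payload per STS-1

def ccat_efficiency(service_mbps: float) -> float:
    """Contiguous concatenation: round up to the next defined STS-Nc size."""
    for n in (1, 3, 12, 48, 192):
        if n * STS1 >= service_mbps:
            return service_mbps / (n * STS1)
    raise ValueError("service exceeds OC-192 payload")

def vcat_efficiency(service_mbps: float) -> float:
    """Virtual concatenation: group any number of STS-1s (~50 Mbps granularity)."""
    n = math.ceil(service_mbps / STS1)
    return service_mbps / (n * STS1)

for name, mbps in [("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)]:
    print(f"{name}: CCAT {ccat_efficiency(mbps):.0%}  vs  VCAT {vcat_efficiency(mbps):.0%}")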
Link Capacity Adjustment Scheme (LCAS)

LCAS helps providers meet the on-demand bandwidth needs of customers and their network applications. LCAS provides a mechanism for automatic bandwidth reprovisioning to increase or decrease the capacity of a VCAT group. This makes TDM bandwidth adjustments "hitless," and it also enhances the flexibility of VCAT. LCAS allows you to add or remove STS channels from a VCAT group on the fly. LCAS can also remove a VCAT timeslot that is causing excessive differential delay within the group, the delay that arises when members of the group traverse paths of different lengths. Figure 5-16 depicts the framework of GFP, VCAT, and LCAS to carry multiple data services across SONET/SDH infrastructures.
Figure 5-16  Ethernet and Other Services over Next-Generation SONET/SDH

(Figure: native interfaces such as Ethernet, ESCON, FICON, and Fibre Channel are encapsulated with GFP (or LAPS/HDLC), grouped with VCAT, adjusted with LCAS, and then multiplexed/demultiplexed onto SONET/SDH.)
Ethernet over RPR/DPT

RPR/DPT was previously discussed in this chapter as a Layer 2 MAC-based technology that operates over dual counter-rotating rings formed by an optical fiber ring-based deployment. Operating over multiple physical layers, including SONET, RPR provides SONET-like protection while optimizing bandwidth efficiencies with its inherent ability to send data traffic in both directions using the pair of rings. RPR is optimized for data packet transport with its spatial reuse capabilities, fairness algorithms, and traffic classification capabilities. Using statistical multiplexing techniques, RPR allows for oversubscription of bandwidth services, establishing committed information rates and peak-rate thresholds on a per-application basis.

RPR uses Ethernet switching and the optical fiber-based, dual counter-rotating rings to deliver bandwidth-efficient, multipoint Ethernet/IP services. The key benefit here is that the support of IP precedence mapping into the RPR and DPT MAC headers allows Ethernet delivery to operate like a switched Ethernet service, with the ability to classify, queue, and schedule packets, creating differentiated services instead of commodity bit transport.

RPR/DPT's ability to statistically multiplex and switch bandwidth around the rings makes multipoint Ethernet applications possible. This expands provider offerings beyond fixed point-to-point Ethernet services. Leveraging the Cisco ML series Ethernet cards in the Cisco MSPP product line, a provider can create an Ethernet private-line service with a guaranteed rate and a peak bandwidth rate, similar to a Frame Relay service. This allows a single Ethernet port to be divided among many customers requesting Ethernet services, each receiving a guaranteed rate lower than the rate of the port. Used in conjunction with RPR's bandwidth efficiencies, the oversubscription capabilities of the ML series/MSPP platform increase the overall data utilization, customer density, and profitability for Ethernet services.
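The guaranteed-plus-peak service model mentioned above can be illustrated with a generic two-rate token-bucket policer, sketched below in Python. This is not the RPR fairness algorithm or the ML-series card's actual implementation; it is a simplified model of how a committed rate and a peak rate can be enforced per customer, with the rates and burst size chosen arbitrarily for the example.

# Generic two-rate token-bucket policer sketch: packets within the committed
# rate are forwarded, bursts up to the peak rate are marked best-effort, and
# anything beyond the peak rate is dropped.

import time

class TwoRatePolicer:
    def __init__(self, cir_bps: float, pir_bps: float, burst_bytes: int):
        self.cir, self.pir, self.burst = cir_bps / 8, pir_bps / 8, burst_bytes
        self.c_tokens = self.p_tokens = burst_bytes
        self.last = time.monotonic()

    def classify(self, size: int) -> str:
        now = time.monotonic()
        elapsed, self.last = now - self.last, now
        self.c_tokens = min(self.burst, self.c_tokens + elapsed * self.cir)
        self.p_tokens = min(self.burst, self.p_tokens + elapsed * self.pir)
        if self.p_tokens < size:
            return "drop"                    # above peak rate
        self.p_tokens -= size
        if self.c_tokens < size:
            return "mark"                    # above committed, below peak
        self.c_tokens -= size
        return "forward"                     # within committed rate

policer = TwoRatePolicer(cir_bps=50e6, pir_bps=100e6, burst_bytes=64_000)
print([policer.classify(1500) for _ in range(5)])

Traffic marked best-effort can then be carried opportunistically on the ring, which is what makes oversubscription safe for the committed rates.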
RPR and DPT are the most effective ring-based optical technologies over which to deploy packet-based IP services with Layer 2 Ethernet transport. Both providers and enterprises can use the technology.
Ethernet Directly over Optical Fiber

Ethernet has been deployed over multimode fiber (MMF) for a number of years, primarily in enterprise and campus backbones. Much of the original multimode fiber in this market was installed for the purpose of FDDI and ATM backbone communication between campus wiring closets, data centers, and manufacturing and automotive plants, to name a few. As the price/performance curve of switched Ethernet swept away the operational expense model of Token Ring, FDDI, and ATM, new high-speed Ethernet interfaces such as Gigabit Ethernet were used over MMF optic backbones. With the popularity of Ethernet extending beyond LANs into MANs and WANs, Ethernet (particularly Gigabit Ethernet and 10GE) moves beyond the 2 km MMF domain and into the SMF plant and equipment of providers and operators, larger enterprises, and public and private industry.

The previous topics of Ethernet over SONET/SDH and Ethernet over RPR/DPT dealt with encapsulation and data-mapping enhancements at Layer 2, largely adaptations of existing provider infrastructure to support the popularity of the Ethernet model. While both Ethernet over SONET and Ethernet over RPR/DPT use optical fiber as a transport, this topic discusses Ethernet directly over MAN- and WAN-oriented optical fiber without the protocol conversion overhead of either SONET/SDH or RPR/DPT as an intervening layer.

Gigabit Ethernet and 10GE, as Layer 2 technologies, can be interfaced directly to optical fiber as the Layer 1 complement. These high-speed versions of Ethernet have defined LAN-PHYs and, in the case of 10GE, an additional WAN-PHY, allowing their use with multimode fiber transmission, SMF transmission, pluggable optics, and all forms of wavelength-based fiber transmission such as WDM, DWDM, and CWDM.
Gigabit Ethernet over Optical Fiber

The IEEE 802.3z Gigabit Ethernet standard was defined in 1998 and carries some historical "baggage" that places limitations on the supported distances of Gigabit Ethernet over optical fiber. The standard supports both full- and half-duplex modes of operation. If half-duplex mode is used, there is a natural limitation based on the timing needed to support the Ethernet collision domain. In addition, the 1998 standard used "classic" FDDI-grade MMF as the benchmark optical fiber, artificially limiting the supported distances compared with the enhanced optical fiber available today. In practice, most of the Gigabit Ethernet products the market has adopted are full duplex, optimizing the performance and distance of Gigabit Ethernet over fiber as much as possible.
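As a back-of-envelope illustration of the half-duplex timing limitation, the Python sketch below bounds the collision-domain diameter by requiring the round-trip propagation delay to fit within the Gigabit Ethernet slot time of 4096 bit times. The propagation speed is an assumed approximation, and because repeater and station delays are ignored, the real 802.3z half-duplex budget works out to a considerably smaller diameter (on the order of 200 m).

# Cable-only bound on a half-duplex Gigabit Ethernet collision domain:
# round-trip propagation must fit within the 4096-bit slot time.

SLOT_BITS = 4096            # GigE slot time in bit times (with carrier extension)
BIT_RATE = 1e9              # bits per second
PROPAGATION = 2e8           # approximate signal speed in fiber, m/s

slot_time = SLOT_BITS / BIT_RATE            # 4.096 microseconds
one_way = slot_time / 2 * PROPAGATION       # cable-only diameter bound

print(f"{one_way:.0f} m")                   # about 410 m before device delays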
Table 5-11 introduces the optical fiber supported by the Gigabit Ethernet standard.

Table 5-11  Gigabit Ethernet Optical Fiber Support (IEEE 802.3z), Modal Bandwidth/Operating Range

Fiber Type              | 1000BASE-SX, 850 nm (Short Reach) | 1000BASE-LX/LH, 1310 nm (Long Reach)        | 1000BASE-ZX, 1550 nm (Extended Reach)
62.5 um FDDI-grade MMF  | 160 MHz·km / 220 m                | 500 MHz·km / 550 m (1804 ft)                | N/A
62.5 um OM-1 MMF        | 200 MHz·km / 275 m                | 500 MHz·km / 550 m (1804 ft)                | N/A
50 um MMF               | 400 MHz·km / 500 m                | 400 MHz·km / 550 m (1804 ft)                | N/A
50 um OM-2 MMF          | 500 MHz·km / 550 m                | 500 MHz·km / 550 m (1804 ft)                | N/A
50 um OM-3 MMF          | 2000 MHz·km / no standard         | 500 MHz·km / no standard                    | N/A
9/10 um SMF G.652       | N/A                               | 5000 m (16,400 ft); LH to 10 km (32,810 ft) | 70 to 100 km (43.4 to 62 miles)
Helping to facilitate Gigabit Ethernet over optical fiber are innovations in pluggable optic transceivers. When you consider that Gigabit Ethernet is capacious enough to extend beyond a local LAN into a MAN, you add a requirement to exceed the 2 km distances of typical campus LANs. This requires different fiber (single-mode) and different lasers (1310 nm, 1550 nm), some of the basic building blocks of long-reach optical applications. With much of the market expected to be in the larger metropolitan areas, the ability to extend Gigabit Ethernet up to 10 km is an appropriate design target. Your choices would be to build a specific optical function into a Gigabit Ethernet switch port or to modularize that functionality outboard of the switch.

To extend the useful life of Ethernet switching products—in this case, Gigabit Ethernet—adding modularity to the optic capabilities of Gigabit Ethernet switch ports is a worthy goal. A user could start with a multimode optical transceiver and then later repurpose the same port for a long-reach application via a single-mode transceiver and single-mode fiber, optionally extending an unamplified connection up to 10 km.
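Whether a given transceiver and fiber combination reaches 10 km, 40 km, or farther ultimately comes down to the link power budget. The Python sketch below shows that arithmetic with illustrative values only; real transmit power, receiver sensitivity, and loss figures come from the transceiver datasheet and the fiber plant, so the result here is not a claim about any particular GBIC.

# Rough optical link power budget with illustrative (non-datasheet) values.

def max_reach_km(tx_dbm: float, rx_sens_dbm: float,
                 fiber_loss_db_per_km: float, fixed_loss_db: float) -> float:
    """Distance at which the link budget is exhausted."""
    budget = tx_dbm - rx_sens_dbm - fixed_loss_db
    return budget / fiber_loss_db_per_km

# Hypothetical 1310 nm single-mode example: -9 dBm Tx, -20 dBm Rx sensitivity,
# 0.4 dB/km fiber attenuation, 2 dB of connector/splice losses and margin.
print(f"{max_reach_km(-9, -20, 0.4, 2.0):.1f} km")   # roughly 22 km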
Gigabit Ethernet Using Pluggable Optics

Pluggable optic transceivers such as these are known in the optical Ethernet market as Gigabit Interface Converters (GBICs). The GBIC is an industry-standard, hot-pluggable, full-duplex interface converter that "converts" electrical signals from within the Ethernet
switch port to the optical signals necessary to serialize the transmission over optical fiber cables. The GBIC then becomes the bridge between the electrical and the optical domains, containing these differing physics within the space of a few inches. A variety of pluggable GBICs add flexibility and value to Gigabit Ethernet products, providing options to extend asset life, shorten provisioning times, and expend capital in smaller increments on an as-needed basis. For Gigabit Ethernet, GBICs are available to support GE over optical fiber using
• 850 nm over MMF
• 1310 nm over SMF
• 1550 nm over SMF
• 1470 nm to 1610 nm CWDM over SMF
• ITU-T 100 GHz grid C-band DWDM over SMF
Further innovation has brought about another generation of pluggable optic transceivers known as the SFP. An SFP is a smaller version of the GBIC, designed to consume less space (about 40 percent) and power (1 watt versus 1.5 watts), both of which are critical design elements for increasing the densities of Gigabit Ethernet cards and products. GBICs and SFPs are available for all aspects of the Gigabit Ethernet 802.3z and 802.3ah standards and applications, as shown in Table 5-12.

Table 5-12  Gigabit Ethernet Optical Standards, Target Solutions, GBICs, and SFPs

IEEE 802.3z (Gigabit Ethernet) and 802.3ah (EFM) | Target Solution | GBIC/SFP
1000BASE-SX | Data center/campus environment | GLC-SX-MM for 850 nm MMF
1000BASE-LX | Campus core network | GLC-LH-SM for 1310 nm SMF, LX/LH to 10 km
N/A to 802.3 standards but in common use | Point-to-point campus extension/metro access | GLC-ZX-SM for 1550 nm SMF
1000BASE-BX-U and 1000BASE-BX-D | Ethernet in the first mile | GLC-BX-1310, GLC-BX-1490 for 1310/1490 nm SMF to 10 km
N/A to 802.3 standards but in common use | Metro access (ring to 50 km, point-to-point to 100 km) | CWDM-GBIC, CWDM SFP for 1470 to 1610 nm CWDM
N/A to 802.3 standards but in common use | Metro video on demand (up to 200 km with optical amplification) | DWDM-GBIC for ITU-T C-band 100 GHz grid
Gigabit Ethernet over optical fiber will be an effective delivery platform for metro Ethernet deployments to businesses and residences. The use of optical fiber and modular
components such as GBICs and SFPs provides scalability beyond Gigabit Ethernet to 10GE pluggable optics and solutions as required. Chapter 6 provides more coverage of metro Ethernet solutions.
10 Gigabit Ethernet Pluggable Optics

10GE is also supported directly over optical fiber and matches the speed of the premier SONET/SDH optical carrier facilities at OC-192. With such a bit-rate equivalency, the premise of avoiding protocol conversions within the provider domain garners much appeal. The concept of IP over Ethernet over optical is well understood by customers, and supplying IP over Ethernet over optical on an end-to-end basis at up to 10GE rates carries all of the attributes of service value, because it can be installed once, can scale almost limitlessly, and might never need changing again.

For 10GE, the IEEE 802.3 specification standardizes both a LAN-PHY and a WAN-PHY option. Pluggable optics for 10GE are classified by the particular physical layer they support, either the LAN-PHY or the WAN-PHY. (PHY is an acronym for physical layer.) The 802.3 specification defines 850 nm, 1310 nm, and 1550 nm wavelength versions of both the LAN-PHY and the WAN-PHY.

The 10GE LAN-PHY pluggable optics are used to connect a 10GE device directly over optical fiber to another 10GE device using the appropriate wavelength and fiber. The LAN-PHY communicates between devices at a LAN-style bit rate of 10.3125 Gbps. The use of the 10GE LAN-PHY is commonly referred to as 10GE directly over optical fiber.

With the large installed base of SONET/SDH networks in the provider domain, the need to transport 10GE across existing SONET/SDH optical networks creates the requirement for a 10GE WAN-PHY. This is because the 10GE LAN-PHY is not an option for 10GE over SONET due to a speed mismatch between the two. As mentioned, the 10GE LAN-PHY is the classic Ethernet LAN-PHY that operates at 10.3125 Gbps and, as such, uses a serializer/deserializer (SERDES) function that creates more bps than an OC-192 (9.95 Gbps) can contain. The 10GE WAN-PHY is required for 10GE Ethernet and SONET/SDH to interoperate. For the 10GE WAN-PHY, a unique SERDES of 16 x 622 Mbps is used to yield 9.95 Gbps. A rate of 9.95 Gbps is equivalent to that of a SONET/SDH OC-192c/STM-64 payload. The use of a WAN interface sublayer (WIS) effectively wraps Ethernet frames into a concatenated OC-192c. Though the WAN-PHY interface is not SONET/SDH compliant in terms of optical or electrical specifications, you can consider the WAN-PHY to be SONET/SDH friendly, allowing two 10GE devices to communicate across SONET/SDH payloads.
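The two line rates quoted above follow directly from the encodings involved, as the short Python sketch below shows. The 622.08 Mbps per-lane figure is the standard STS-12 rate that the text rounds to 622 Mbps; this is simply the arithmetic, not a description of how any particular SERDES is built.

# Line-rate arithmetic behind the LAN-PHY/WAN-PHY split: 64B/66B coding
# inflates the 10 Gbps MAC rate, while the WAN-PHY targets the OC-192 rate.

lan_phy = 10.0 * 66 / 64            # 64B/66B-coded LAN-PHY line rate
wan_phy = 16 * 622.08 / 1000        # 16 lanes x 622.08 Mbps, in Gbps

print(f"LAN-PHY: {lan_phy:.4f} Gbps")   # 10.3125 Gbps
print(f"WAN-PHY: {wan_phy:.5f} Gbps")   # 9.95328 Gbps, matching OC-192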
Figure 5-17 compares the 10GE LAN-PHY with the 10GE WAN-PHY. In the figure, the physical interfaces vary depending on the following:
• The fiber type used
• The expected distance traveled
• The Layer 2 interconnection method, either LAN or WAN
Figure 5-17  Comparing 10GE LAN-PHY and 10GE WAN-PHY

(Figure: beneath a full-duplex Media Access Control (MAC) layer and the 10 Gigabit Media Independent Interface (XGMII) or 10 Gigabit Attachment Unit Interface (XAUI), three PHY families are shown with their physical media dependent sublayers (transceivers): the CWDM LAN-PHY with 8B/10B coding at 1310 nm (-LX4); the serial LAN-PHY with 64B/66B coding at 850 nm (-SR), 1310 nm (-LR), and 1550 nm (-ER); and the serial WAN-PHY with 64B/66B coding plus the WAN Interface Sublayer (WIS) at 850 nm (-SW), 1310 nm (-LW), and 1550 nm (-EW). Source: Cisco Systems, Inc.)
These different physical media-dependent sublayer interfaces are designed into discrete devices, namely 10GE transceivers, SFPs, and other pluggable optic form factors that provide specific media and distance support. Consider the following examples:
• 10GBASE-SR—This transceiver is appropriate for LAN interconnection over short distances of multimode fiber operating at the 850 nm wavelength.
• 10GBASE-LR—If more reach is needed, use a 10GBASE-LR transceiver with single-mode fiber operating at a 1310 nm wavelength to extend the facility to longer distance requirements.
• 10GBASE-ER—This transceiver is a LAN-PHY interface for use over 1550 nm wavelength WDM or DWDM facilities, generally used in long-haul and extended long-haul networks.
• 10GBASE-LX4—Use this transceiver for CWDM facilities.
• 10GBASE-SW, -LW, and -EW—Use these transceivers for the 10GE WAN-PHY interconnection with SONET/SDH as well as non-SONET/SDH optical facilities for short-reach MMF (850 nm), long-reach SMF (1310 nm), and extended-reach WDM/DWDM (1550 nm), respectively.
10GE uses four classes of pluggable optical transceivers:
• 850 nm for 50/125 micrometer MMF at 65 m+
• 1310 nm CWDM for 62.5/125 micrometer MMF at 300 m+
• 1310 nm for SMF up to 10 km or more
• 1550 nm for SMF up to 40 km or more
Optical pluggable transceivers for 10GE solutions have rallied around at least three different multisource agreement (MSA) platforms:
• Xenpak
• X2
• XFP
All of these are functionally electrical-to-optical converters, each designed to meet particular power dissipation and thermal specifications, density ranges, price curves, and target solutions. With 10GE enjoying more prominent use within provider and operator backbones, options for carrying 10GE over DWDM networks are descending a cost curve. 10GE might well become the primary application for lambdas in WDM, CWDM, and DWDM networks. Table 5-13 categorizes the 10GE standard, target solutions, and pluggable optics.

Table 5-13  10GE Optical Standards, Target Solutions, and Pluggable Optics

IEEE 802.3ae 10GE | Target Solution | Xenpaks, X2s, XFPs
10GBASE-S | Data center connectivity, grid computing, supercomputing | XENPAK-10GB-SR for 850 nm MMF
10GBASE-LX4 | Campus core network | XENPAK-10GB-LX4 for 1300 nm MMF
10GBASE-L | Point-to-point campus extension/metropolitan access | XENPAK-10GB-LR for 1300 nm SMF, LAN PHY
10GBASE-E | Metropolitan access | XENPAK-10GB-ER for 1550 nm SMF to 40 km
10GBASE-L | Enterprise and service provider connectivity over SONET/SDH | XENPAK-10GB-LW for 1300 nm SMF, WAN PHY
N/A | Metropolitan video on demand (up to 200 km with optical amplification and dispersion compensation) | DWDM-XENPAKs for C-band ITU-T G.692 100 GHz grid, 80 km
Table 5-14 introduces the 10GE optical fiber supported by the IEEE 802.3ae standard.

Table 5-14  10GE Optical Support (IEEE 802.3ae), Modal Bandwidth/Operating Range

Fiber Type             | 10GBASE-S, 850 nm (Short Reach) | 10GBASE-L, 1310 nm (Long Reach) | 10GBASE-LX4, 1310 nm Using Four 2.5G Fiber Lanes (Long Reach) | 10GBASE-E, 1550 nm (Extended Reach)
62.5 µm FDDI-grade MMF | 160 MHz·km / 26 m               | N/A                             | 500 MHz·km / 300 m                                            | N/A
62.5 µm OM-1 MMF       | 200 MHz·km / 33 m               | N/A                             | 500 MHz·km / 300 m                                            | N/A
50 µm MMF              | 400 MHz·km / 66 m               | N/A                             | 400 MHz·km / 240 m                                            | N/A
50 µm OM-2 MMF         | 500 MHz·km / 82 m               | N/A                             | 500 MHz·km / 300 m                                            | N/A
50 µm OM-3 MMF         | 2000 MHz·km / 300 m             | N/A                             | 500 MHz·km / 300 m                                            | N/A
9/10 µm SMF G.652      | N/A                             | 10 km (32,810 ft)               | 10 km (32,810 ft)                                             | 40 km (25 miles)
Finally, it is useful to compare the Gigabit Ethernet and 10GE technologies with the popular single-mode fibers that they support (see Table 5-15).

Table 5-15  Comparison of Cisco GE and 10GE with Single-Mode Fiber

Fiber Type | 1310 nm (1000BASE-LX, 10GBASE-L) | 1550 nm (1000BASE-ZX, 10GBASE-E) | CWDM | DWDM
Single-mode fiber, SMF-28 (G.652) | Supported | Supported | Supported | Supported
Zero water peak fiber (G.652.C) | Supported | Supported | Supported | Supported
Dispersion-shifted fiber (G.653) | Works, but unsupported by IEEE standard | Works, but unsupported by IEEE 10GBASE-E | Supported | Works, but untested and unsupported by Cisco
Nonzero dispersion-shifted fiber (G.655) | Works, but unsupported by IEEE standard | Works, but unsupported by IEEE 10GBASE-E | Supported | Supported
Optical Transport Network (ITU-T G.709 OTN)

The first generation of optical networks was combined with SONET/SDH bit mapping to provide performance monitoring and, if necessary, near-instantaneous protection from fiber or equipment failure. The reliability and performance management capabilities of SONET/SDH have contributed to the long run of success of these protocols in optical networks. With the advent of WDM and DWDM, protection and management schemes were lacking, because neither SONET nor SDH is wavelength aware. Though DWDM increased fiber bandwidth enormously, the technology also brought new challenges through a new sublayer of network elements such as optical amplifiers, multiplexers and demultiplexers, dispersion compensation units, and so on—elements requiring continuous monitoring to ensure reliability.

The ITU-T specification of the G.709 Optical Transport Network, or OTN, seeks to apply the operations, administration, maintenance, and provisioning (OAM&P)-like functionality of SONET/SDH networks to today's DWDM optical networks. The G.709 recommendation, often referred to as digital wrapper (DW), helps to manage multiwavelength networks. Additionally, a feature of G.709 called forward error correction (FEC) increases reliability through reduced bit error rates (BERs), extending optical span distances. Optical networks based on G.709 present a number of advantages, such as
• Addition of standards-based FEC coding
• Reduction in 3R regeneration
• Protocol transparency
• Backward compatibility for existing protocols such as SONET/SDH
The G.709 framing structure, the digital wrapper, is standardized for interoperability. As client signals are presented to the optical network equipment, overhead information is "wrapped" onto the front of the signal as a header. As the client signal is prepared for optical transport, it will include a combined overhead section at the front and an FEC trailer at the rear, creating an optical channel that is wavelength aware and manageable. In effect, the ITU G.709 mapping is a hierarchical payload packager that starts at 2.5 Gbps and reaches up to 40 Gbps (OC-768/STM-256), as shown in Table 5-16.

Table 5-16  G.709 Line Rates

Optical Transport Unit Type | Payload Nominal Bit Rate | Bit Rate with FEC Coding (Reed/Solomon 255, 239) | Frame Period
OTU1 | 2,488,320 Kbps (255/238) | 2,666,057.143 Kbps | 48.971 microseconds
OTU2 | 9,953,280 Kbps (255/237) | 10,709,225.316 Kbps | 12.191 microseconds
OTU3 | 39,813,120 Kbps (255/236) | 43,018,413.559 Kbps | 3.035 microseconds
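The FEC-coded rates in Table 5-16 are simply the payload rates scaled by the 255/x multipliers shown in the table, which the following Python sketch reproduces.

# G.709 line-rate arithmetic: OTU rate = payload rate x 255/x (from Table 5-16).

payloads = {
    "OTU1": (2_488_320, 238),      # payload kbps, divisor in the 255/x multiplier
    "OTU2": (9_953_280, 237),
    "OTU3": (39_813_120, 236),
}

for otu, (kbps, divisor) in payloads.items():
    line_rate = kbps * 255 / divisor
    print(f"{otu}: {line_rate:,.3f} Kbps")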
The client payload can be anything, such as Gigabit Ethernet, IP, GFP, or SONET. The contents of the overhead wrapper are monitored during transport, affording a standardized way of monitoring performance in DWDM networks.

As optical bit-rate speeds increase and distances between regeneration increase, the BER also increases, prompting the need for G.709 FEC. The FEC technique uses a Reed-Solomon coding algorithm to check for bit errors. Essentially a sophisticated digital parity checker, the G.709 FEC feature corrects up to 8 byte errors in a 236-, 237-, or 238-byte code word, providing a BER improvement. This, in effect, delivers about a 4 to 6 dB gain to the optical system, allowing for longer-distance optical rings with fewer regenerations. When planning a traditional BER budget, a BER of 10^-12 was the typical design target. With the benefits of FEC, you can define an SLA based on a BER closer to 10^-15 and then monitor and measure it with G.709 capabilities. The FEC technique is particularly applicable to 10 Gbps networks at distances between 60 km and 120 km. G.709 FEC also helps with aging fiber, if you consider that BERs go up as fiber ages or as buried fiber is flexed by underground shifts. Adding FEC essentially extends the life of the fiber and helps maintain distances.

The G.709 recommendation leverages the capabilities of next-generation intelligent DWDM networks by building management and monitoring into the optical network and
through the use of tunable, active components to provide more automation control. Resulting benefits are
• Per-channel management
• Performance monitoring in the DWDM domain
• Automatic route discovery
• Optical transport hierarchy
• G.709 digital wrapper overhead functions for multilambda support
• FEC for a better optical signal-to-noise ratio (OSNR)
While more standards work is in process for further definition of the optical control plane, the G.709 OTN recommendation enables providers and operators to manage their networks more efficiently and economically.
IP over Optical

IP over optical, sometimes referred to as IP+Optical, is fundamentally about collapsing layers. In the new era of optical and bandwidth abundance, the original OSI seven-layer model is often discussed with regard to whether seven layers is too many or too few. As part of the OSI seven-layer model, IP at Layer 3 requires framing to be transported as frames. TCP and UDP datagrams at Layer 4 become IP packets at Layer 3, which then become frames at Layer 2 in order to be transmitted as bits or bytes over various physical mediums at Layer 1. SONET/SDH, ATM, and Frame Relay have served as traditional Layer 2 framing methods for provider networks. With the mass move to IP-based networking, the desire to eliminate the SONET/SDH and ATM layers of overhead, equipment, and their respective management systems has sparked new developments to get IP packets into optical wavelengths with a minimum of framing overhead.

As a standard protocol within G.709 OTN networks, GFP encapsulation is an efficient and applicable protocol that you can leverage with the G.709 OTN payloads to transmit IP packets across optical wavelengths. In an OTN-compliant network, IP packets might be transported directly in the optical transmission network using GFP framed (GFP-F) mode encapsulation. Idle frames can serve as a fill character to "pad" out the OTN client payload. Using GFP with OTNs doesn't require other Layer 2 protocols, such as Frame Relay or ATM, for IP packet transport, nor does it require Packet over SONET framing. Using GFP, IP over optical eliminates layers and uses the same encapsulation needed to support storage area network traffic (GFP-T).

MPLS is another ultra-fast protocol that might be leveraged for mapping IP packets into optical wavelengths. Using labels as headers, the labels can indicate the start and end of aggregated IP packets, and these label-distributed packets could be moved through wavelengths. All-optical label swapping and label distribution have previously been demonstrated as feasible.
Unified Control Plane

All networks use a control plane for network signaling and provisioning, circuit setup, and other housekeeping functions that allow the data plane to transport data and generate revenue. Control planes are often optimized to include automated provisioning, intelligent monitoring, trouble isolation, and feedback of data plane performance. Optical networks also use control planes. Today's focus on rapid service delivery and operational efficiencies requires the distribution of control plane intelligence into all network elements of an OTN. Many of these optical systems use proprietary protocols, such as Network-to-Network Interface (NNI) protocols, for their provisioning, complicating interoperability. Many DWDM network components require static provisioning, limiting improvements in time-to-market for revenue-generating bandwidth services. The unification of a common control plane across all types of optical networks is a challenging but worthy goal to provide interoperability and service velocity to all types of optical services across any type of optical or IP network.

The consensus in the optical industry is that IP protocols are the key to providing end-to-end optical network intelligence. IP intelligence is evolving to support the provisioning of connections within an OTN, between OTNs and IP networks, across multiple OTNs, and toward an end-to-end, universal application. IP-enabled optical provisioning is but one of many goals. Standards organizations such as the Internet Engineering Task Force (IETF), the Optical Internetworking Forum (OIF), and the ITU-T are leading various working groups toward Unified Control Plane (UCP)-related standardization.

UCP is the Cisco implementation of the generic Optical Control Plane (OCP) technology that enables a client device, such as a Cisco ONS 15454 MSPP, to dynamically signal for a circuit through a third-party vendor core network to another Cisco ONS 15454 MSPP. Cisco UCP is an OIF standards-based implementation using the Resource Reservation Protocol with Traffic Engineering (RSVP-TE), Generalized Multiprotocol Label Switching (GMPLS), and Link Management Protocol (LMP) protocols. In addition, the Cisco UCP strategy offers a unique, standards-based approach for managing multivendor network elements to maximize service delivery. With development planned for a multiphase implementation, Cisco UCP is targeting the future of optical network management. The following explains the Cisco UCP strategy:
• Phase 1: Single-domain end-to-end OTN provisioning—The potential benefits include the following:
  — Per-domain, point-and-click, end-to-end provisioning
  — Services on demand
  — Automated circuit inventory

• Phase 2: Signaling-based provisioning (OIF UNI)—The potential benefits include the following:
  — Rapid provisioning between IP and optical elements
  — Accelerated service deployment
  — Automated inventory management

• Phase 3: Multidomain end-to-end provisioning—The potential benefits include the following:
  — Rapid end-to-end optical service provisioning within OTNs
  — Automated inventory management

• Phase 4: Integrated IP and OTN intelligence (OIF UNI and IETF GMPLS)—The potential benefits include the following:
  — Peer-to-peer provisioning across multiple OTNs and IP networks
  — Rapid network setup and simplified OAM&P via common GMPLS management abstraction
  — Policy-based bandwidth services

• Long-term vision of UCP—Long-term goals include the following:
  — Rapid, distributed, IP-enabled, end-to-end provisioning and management of all service types (circuit, voice, and data)
  — Simplified, new service delivery such as optical VPNs, wavelength leasing, and bandwidth exchange
  — Proficient protection/restoration schemes across all network spans and layers
  — Automated inventory management with the network as the database
  — Efficient use of network resources via traffic engineering end to end[2]
The Cisco approach for developing the UCP involves a combination of standards based on the OIF UNI-C and the IETF GMPLS standards work in progress. GMPLS is an extension of the MPLS-TE protocol that can be applied to non-IP layers, such as the optical physical layer. In essence, GMPLS provides IP-based control of Layer 1, such as optical links and network elements. GMPLS becomes the control plane to handle label switching, lambda switching, waveband switching, and port switching, while supporting traditional SONET/SDH and ATM VC/VP switching. GMPLS enables optical packet switching. The GMPLS control plane uses Open Shortest Path First-Traffic Engineering (OSPF-TE) and Intermediate System to Intermediate System (IS-IS) for control routing intelligence and uses RSVP-TE for control signaling. The benefits of using GMPLS include integrated control and recovery, network management, unified procedures for facilities provisioning,
and rapid deployment for Layer 1 technologies. The IETF GMPLS implementation is based on a peer-to-peer model, such that IP routers and optical transport network elements are peers. GMPLS becomes the common signaling between routers and the optical network elements, and between different optical network elements, to effect an end-to-end, label-switched path setup that automates provisioning. The dynamic setup and teardown of optical wavelengths would be a prime functionality. GMPLS seeks to converge next-generation networks and is a key element within a UCP.

The primary goal of a UCP is to hasten service turn-up and delivery across any combination of IP and optical systems, changing provisioning from months to minutes. Adding IP intelligence into the optical transport network and IP networks simplifies and accelerates service provisioning within an OTN, between OTNs and IP networks, across multiple OTNs, and, ultimately, end to end regardless of network or service type.
Technology Brief—Optical Networks

This section provides a brief study on optical networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:

• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding optical networks.
• Technology at a Glance—Uses figures and tables to show optical network fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment in order to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint

Optical fiber is the physical layer medium of choice. As a result, optical networking is the ascendant Layer 1 technology on which to build the new era of networks. It is becoming a worldwide transport for the reigning Layer 2 technology known as Ethernet. At Layer 3, IP completes the network building blocks for the creation of service pull—new-era optical networks.
Optical networks now run the gamut from enterprise and campus backbones, to metropolitan networks, to long-haul networks over land or under sea. Increasingly, optical fiber is moving into residential areas via passive optical networks, initially taking economic advantage of new construction opportunities and high-density neighborhoods.

As one of the earliest deployed optical network transmission protocols, SONET/SDH is a mature technology that was designed to efficiently and reliably transport 64 Kbps voice circuits from the customer premises to the nearest telephone exchange and beyond. However, it was not intended to support the enormously growing demand for IP bandwidth with wildly variable data lengths. Traditional time division multiplexing (TDM)-based metropolitan networks have had to "cram" support for the enormous growth in data traffic and have largely been adapted to perform this service. Packet over SONET was the first of these adaptations. A prime advantage of next-generation SONET/SDH is that it allows network providers to introduce new technology, such as Ethernet, into their traditional SONET/SDH networks by replacing only the edge-located network elements. Both TDM and packet-oriented data services are handled on the same optical wavelength. SONET/SDH networks can now manage overall bandwidth more efficiently and support traffic such as Ethernet over SONET/SDH with more granularity.

RPR, an IEEE standard, and the Cisco DPT/SRP are technologies that apply data optimization to ring-based optical networks, metropolitan or otherwise. They allow the full bandwidth of a fiber-based ring to be realized in both directions, potentially doubling the available bandwidth on the ring, while still providing the peace of mind of sub-50 ms ring restoration. In fact, protection is the primary reason to use a ring topology, because every node has two possible fiber paths to every other node on the ring. RPR and DPT/SRP combine the intelligence of IP routing and statistical multiplexing with the bandwidth efficiencies and resiliency of optical rings. In addition, RPR and DPT/SRP add the simplicity and cost advantages of Ethernet. Offering an end-to-end metro architecture—metro access networks, to metro POPs, to regional metro networks—RPR- and DPT/SRP-based networks are delivering dramatic advantages to metropolitan service providers.

Ethernet is the low-cost leader of the Layer 2 protocols. Through the benefits of multimode and single-mode optical fiber, as well as continuing advancements in Ethernet technology, Gigabit Ethernet is now a familiar tenant in enterprises and in service provider metropolitan offerings. To aggregate the bandwidth of Gigabit Ethernet from desktops, servers, and mainframes, 10GE moves the decimal point for a 10x improvement in backbone capacity. Perhaps most significant of all, 10GE in provider networks enables physical layer convergence of the LAN, MAN, and WAN. Using Ethernet over optical, many choices are now possible, such as Ethernet over SONET/SDH, Ethernet over RPR/DPT, and Ethernet directly over optical dark fiber using pluggable optic transceivers and optical transponders on metro and long-haul WDM platforms.
By using optical Ethernet in the provider networks, it becomes possible to set and forget the customer interface equipment, enjoying linear scalability of bandwidth through software control rather than truck rolls and circuit-dependent interfaces. Ethernet in the provider network is a significant contributor to OpEx savings, as it is a cost-effective way to match interfaces, speeds, and protocols with customers. For these reasons, Ethernet carries a lot of service value with customer decision makers. Ethernet in the provider networks sets the direction for taking Ethernet directly to businesses and residences, tying consumer Ethernet LANs at home with business, enterprise, and the Internet. The industry has often wondered if the installed capacity of all the optical networks can be fully leveraged. The eventual ubiquity of optical Ethernet will justify all of the fiber in the world.

WDM, DWDM, and CWDM are established technologies in the WAN backbone that enable multiple electrical data streams to be transformed into multiple independent optical wavelengths, also called lambdas, channels, or infrared colors. Collectively, they provide each fiber with potentially unlimited transmission capacity, making optical fiber become virtual fiber. Given the expense of deploying new fiber in the ground or undersea, this dramatic increase in fiber capacity represents the preeminent pull of WDM- and DWDM-based technology.

Optical networks are interesting in that they exhibit digital technology-like benefits in speed, noise immunity, and low error rates, yet optical is an analog transmission medium. Therefore, optical design is all about equilibrium—the delicate balancing of fiber types, laser components, impairment management components and techniques, and even DWDM channel plans. Discrete components such as transponders, optical multiplexers and demultiplexers, amplifiers, attenuators, and dispersion compensators must be chosen, tuned, and implemented, often on a case-by-case network basis. Much of optical science is applied to these types of networks. DWDM network design, for the most part, remains a sophisticated craft-guild technique.

As the technology has matured, efforts to add intelligence, automation, and integration ease the design, provisioning, and maintenance of a profitable optical network. Intelligent DWDM products are making metropolitan and regional optical networks easier to provision, simpler to operate, and ultimately more competitive and profitable. In addition to DWDM intelligence, integration of DWDM into multiservice optical products, such as MSPPs, enhances the price/performance of both capital and operational investments in these optical networks. Instead of building a separate DWDM transmission layer followed by a service interfacing and service aggregation layer, all capabilities are combined in the same hardware and software platform. This integration reduces the number of discrete products that must be installed, integrated, and maintained. This collective intelligence and integration of DWDM is being facilitated by new-generation dynamic components, many of them optically active compared to their passive predecessors. Arriving commercially at the end of the 20th century, DWDM is arguably in its infancy.
Optical is the past, present, and future prince of network communications. As optical networks continue their march to the masses, new productivity, innovation, and services will be unleashed as a result of end-to-end bandwidth abundance. The latest generation of optical networking is perhaps better defined as the conveyance of color-propagated, massively parallel information, whether by glass or by air. Color permeates everything from clothing to crayons, to cartoons, to communication optics. Color is intelligence, information, and illumination. Color is king, and optical networking is the king's royal coach.
Technology at a Glance

Table 5-17 summarizes optical technologies.

Table 5-17  Optical Technologies

Key Standards (physical layer standards)
  SONET/SDH: SONET GR.253.CORE; ANSI T1.105/T1.106; SDH ITU-T G.691; SDH ITU-T G.707 CCAT; SDH ITU-T G.783; SDH ITU-T G.957; ITU-T G.707/Y.1332 VCAT; ITU-T G.7042/Y.1305 LCAS; ITU-T G.7041/Y.1303 GFP; RFC 1662 PPP over SONET/SDH w/HDLC; RFC 2615 PPP over SONET/SDH
  RPR/DPT: RPR IEEE 802.17 (2004); DPT RFC 2892 (1997)
  WDM/DWDM/CWDM: ITU-T G.652/653; ITU-T G.652 SMF; ITU-T G.652.C ZWP SMF; ITU-T G.655+ NZDSF; ITU-T G.655- NZDSF; DWDM ITU-T G.692
  Optical Ethernet: GE 802.3z (1999); 10GE 802.3ae (2002); EFM 802.3ah; 10GE MMF 802.3aq; IEEE 802.3, 802.1p, 802.1Q, 802.1D

Seed Technology
  SONET/SDH: Optical fiber; TDM; M13 TDM; Digital cross-connect
  RPR/DPT: Optical fiber; Optical rings; Resilient Packet Ring (RPR) protocol; Spatial Reuse Protocol (SRP-fa) (Cisco DPT)
  WDM/DWDM/CWDM: Optical fiber; Lasers, semiconductor lasers, LEDs, VCSELs; Photodiodes; EDFAs, fiber Raman amplifiers (FRAs), PDFA amplification; Transponders, multiplexers, demultiplexers, variable optical attenuators, dispersion compensators, arrayed waveguides, passive optical filters, optical circulators, MEMS; GBIC, SFP, Xenpak, XFP
  Optical Ethernet: Optical fiber; CSMA/CD; GBIC, SFP, Xenpak, XFP; Ethernet over SONET; Ethernet over RPR/DPT; Ethernet over WDM

Distance Range
  SONET/SDH: Short reach 2 km at 1310 nm; Intermediate reach 15 km at 1310 nm and 40 km at 1550 nm; Long reach 40 km at 1310 nm and 80 km at 1550 nm
  RPR/DPT: Rings up to 2500 km with 32 nodes
  WDM/DWDM/CWDM: Metro access to 75 km; Metro core to 300 km; Long haul to 600 km; Extended long haul to 2000 km; Ultra long haul to 3000+ km
  Optical Ethernet: Per standards; Short haul to 2 km; Long reach to 10 km; Extended reach to 100 km

Interface Speed Support
  SONET/SDH: T1/E1 (DS0/DS1); T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; Fast Ethernet (100 Mbps); Gigabit Ethernet (1 Gbps)
  RPR/DPT: T1/E1 (DS0/DS1); T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; Fast Ethernet (100 Mbps); Gigabit Ethernet (1 Gbps); 10GE (10 Gbps)
  WDM/DWDM/CWDM: T1/E1; T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; 10 Gbps; 40 Gbps; Fast Ethernet (100 Mbps); Gigabit Ethernet (1 Gbps); 10GE (10 Gbps)
  Optical Ethernet: Fast Ethernet (100 Mbps); Gigabit Ethernet (1 Gbps); 10GE (10 Gbps)

Key Bandwidth Capacities
  WDM/DWDM/CWDM: Total capacity per fiber pair (bit rate x lambdas): 10G x 32 = 320 Gbps; 40G x 32 = 1280 Gbps; 10G x 64 = 640 Gbps; 40G x 64 = 2560 Gbps; 10G x 128 = 1280 Gbps; 40G x 128 = 5120 Gbps; 10G x 400 = 4000 Gbps; 40G x 400 = 16,000 Gbps
  Optical Ethernet: 100 Mbps; 1000 Mbps or 1 Gbps; 10,000 Mbps or 10 Gbps

Bandwidth Range and Bit Rate
  SONET/SDH: Narrowband to broadband to 10 Gbps
  RPR/DPT: Narrowband to broadband to 10 Gbps
  WDM/DWDM/CWDM: Broadband to 40 Gbps+
  Optical Ethernet: Broadband from 100 Mbps to 10 Gbps
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance

Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer's value distinction.

In the at-a-glance chart that follows, typical customer business drivers are listed for the subject classification of networks. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider's critical success factors in conjunction with seed technology, products, and their key differentiators, as well as applications, to deliver solutions with high service value to customers and market leadership for providers. Figure 5-18 charts the business drivers for optical networks.
Figure 5-18  Optical Networks

(Figure: an at-a-glance chart for optical networks. Business drivers include high-bandwidth web-based applications, Ethernet movement into the MAN and WAN, storage across the MAN and WAN, fiber relief, convergence, ETTx, voice over IP, e-commerce, business continuance/disaster recovery, and multiservice convergence in the campus, metro, and long haul. These flow through critical success factors and seed technologies (TDM, SONET/SDH, RPR/DPT, WDM, DWDM, CWDM, Ethernet, ESCON, FICON, GFP, GMPLS, OIF-UNI, G.709 OTN, MSPP, MSTP, MSSP) and the Cisco product lineup (Cisco IOS, ONS 15454, ONS 15327, ONS 15201, ONS 15252, ONS 15216, ONS 15540, ONS 15600, Cisco 12000, Cisco 10000, Cisco 7600, Cisco Transport Manager) to applications (business recovery, storage networks, Internet access, high-speed WAN, metro Ethernet, video on demand, video imaging) and service value (managed multiservice optical and wavelength services, simplified converged networks, reduced time to market, and interface parity with customers using IP/Ethernet/optical). Industry players listed include service providers (IXCs, ILECs, CLECs, ISPs, cable operators) and equipment manufacturers (Cisco Systems, Lucent, Nortel, Alcatel, Ciena, Siemens, Tellabs, NEC, Fujitsu, ECI, Marconi, Sycamore, Tellium, Corvis).)
End Notes

1. TeleGeography research, PriMetrica. Copyright 2005.
2. Cisco Systems, Inc. "The Cisco IP+Optical Unified Control Plane: Accelerating Service Velocity with IP-Enabled Provisioning." http://www.cisco.com/warp/public/779/servpro/solutions/optical/docs/ucp_wp.pdf
This chapter covers the following topics:
• Business Drivers for Metropolitan Optical Networks
• Functional Infrastructure
• Metro SONET/SDH
• Metro IP
• Metro DWDM
• Metro Ethernet
• Metro MSPP, MSSP, and MSTP
• Metro Storage Networking
CHAPTER 6

Metropolitan Optical Networks

Metropolitan optical networks are the epicenters of new-era broadband definition and delivery. They are effectively bandwidth merry-go-rounds, sporting a selection of speedy horses with customized saddles, colorful headdresses, copper stirrups, and glass-beaded reins for super-swift communications carriage. Built for all ages, they are spinning faster, lifting higher, and supplying the ideal mount from which to connect and communicate with the office, the Internet, the enterprise, family, and friends. As these metropolitan merry-go-rounds grow larger, a digital world becomes smaller.

Today’s metropolitan networks are assimilating all communications and network types—voice, video, and data—into urban webs of glass and light, mixed with traditional copper conduit and electron energy. Business networks hub and hum from metropolitan areas, the source and supply of both their workforce and their market revenues. Metropolitan networks are the opto-electronic glue that binds us together into communities of interactive interest.

The march toward a broadband world is pushing optical fiber communications further into the last miles of metropolitan access. Increasingly, optical fiber is supplanting copper in suburban and residential areas via passive optical networks, initially taking economic advantage of new construction and high-density neighborhoods. Metropolitan central offices (COs) become a relative term as these voice and data-switching centers become distributed in support of suburbia. Wireline and wireless networks are customary in the metropolitan space as well, and upcoming chapters provide an overview of their particular applicability.

The essence of the Internet—a great digital consciousness—is largely resident in the metropolitan networks of the world. Metropolitan optical networks cut a path over, under, and along transportation thoroughfares, forming the gigabit bridges between Internet content and Internet clients. Stored, cataloged, and replicated in semiconductor memories and ferrite disks and tapes, electronically resident data is but a few computer cycles and optical pulses away. No auto fuel is expended, no appointment is needed, no mailbag is sorted and stuffed. Whether rural, suburban, urban, or municipal, metropolitan optical connections move us closer to the all-optical, purely photonic, digitally certifiable, electronic transaction. As a result, a significant portion of brick and mortar fades into the great digital catalog of e-commerce. The daily routine of hunters and gatherers is once again transformed.
There is a lot happening in metropolitan optical networks. This chapter will focus on the particular applicability of metropolitan optical network infrastructure and the myriad of technologies that are layered upon it. This includes some familiar topics such as SONET/SDH, Ethernet, MPLS, and DWDM—all technologies introduced in other chapters—yet you’ll examine these through a metropolitan optical lens. The metropolitan IP technologies of Resilient Packet Ring (RPR) and Dynamic Packet Transport (DPT) are introduced. Notable metro-specific features of Cisco’s ONS 15000 family of multiservice provisioning platforms (MSPPs), multiservice switching platforms (MSSPs), and multiservice transport platforms (MSTPs) are included along with an introduction to metro storage networking. All of these technologies are kith and kin, spawned by today’s business drivers for metropolitan networks.
Business Drivers for Metropolitan Optical Networks

Metropolitan optical networks are the essential linkage between the multimode fiber (MMF) “webs” of enterprises/campuses and the single-mode fiber (SMF) “rails” of long-haul optical networks. Metropolitan optical networks have long led the way for business broadband. In the new era of networking, metropolitan networks exploit copper and glass to lash residential households to the centricity of the metropolis core. In a milli-instant, computer users gain access to urban communication services and seek or supply information over the long-haul optical rails. Whether for business or pleasure, metropolitan networks are the launchpad and the landing strip of digital transactions and electromagnetic communications.

For telecommunications, metropolitan optical networks are the gateway to creating broadband service value for customers and affiliated revenues for service providers. Traditionally, large enterprise, research and development, national, and international networks were the primary beneficiaries of great bandwidth. Particularly over the past decade, the industry has sought to link the storefronts of electronic business with the minds and wallets of consumers, taking broadband well beyond big business and to the masses. This effort is driving increased sophistication of telecommunications technology into the metropolitan space, requiring further segmentation of metropolitan communication infrastructure to relieve the funnel effect of broadband to the home and burgeoning small and medium business markets.

Much like building a highway to each residential doorstep, the new metropolitan driveway must be wide, incorporating the Internet data protocol while packetizing voice and redefining video. The need for unlimited capacity and utmost scale implies the use of optical and DWDM technologies—ever closer to the end user. Primary and secondary network designs continue to dissolve into tertiary and quaternary physical segments. To link distinctive service value with responsive delivery, a fresh metropolitan design is necessary.

The impact of such a bit blast further stratifies communication services into metro access, metro edge, metro core, and service point-of-presence tiers. Broadband access snakes its
way to the home. Intelligence distributes, moving closer to the edge to facilitate the benefits of a peer-to-peer communication model. A metro core collects and shuttles service-laden IP packets between the intelligent edge and service-dense points of presence (POPs).

Further, metropolitan network topologies are transforming into data-centric designs. Today’s hub-and-spoke fabric and cascaded, concentric metropolitan rings may bridge together and interconnect—blending and morphing into virtual, optical spiderwebs. Mimicking the Internet’s World Wide Web, mesh-like infrastructure design can deal with service densities and capture new efficiencies of peer-to-peer distributed computing on the metropolitan, broadband service model.

Metropolitan IP, Ethernet, and optical technologies are the primary building blocks of new-era metropolitan networks. Companies that exhibit artistry in these network, data link, and physical OSI layers will perform well, whether innovating, manufacturing, designing, integrating, or service provisioning. The inherent service pull of each technology forms an amalgam of service value that is extendible, flexible, scalable, and, most of all, profitable for metropolitan optical networks.
Functional Infrastructure

Metropolitan networks follow the tendency of humans to homestead in a cluster around a significant point of interest, such as a high-value business center or a city downtown. Two things are largely affecting the functional infrastructure and topologies of new-era metropolitan networks:
• The demand for delivery of broadband to the user level
• Suburban sprawl
As of 2005, broadband speeds to the residential user and small business segment are typically 3 Mbps or higher. Enterprises are moving beyond OC-3s, OC-12s, and OC-48s to 10 Gbps and multigigabit fractions thereof. Such a deluge of bandwidth must be supported by an optical infrastructure to meet bit-rate requirements and service-level agreements.

Suburban sprawl is the result of the human tendency to seek space, security, and quality of life within reasonable proximity of employment opportunities. Metropolitan networks are reaching out to far-flung suburbs to photonically and electronically couple them with a major city center. The resulting increased communication distances are best supported by a metropolitan optical infrastructure.

Metropolitan networks route and switch voice, data, and video communications both intra- and intermetro via the metro core, metro regional, or metro long-haul optical networks. One model of a functional, tiered infrastructure for metropolitan optical networks is shown in Figure 6-1. The metro access, metro edge, metro core, and service POP are fundamental layers of metro infrastructure. Putting them together provides a broad functional view of a tiered metropolitan networking infrastructure design.
Figure 6-1  Tiered Metropolitan Networks (diagram: metro access, metro edge, metro core, and service POP tiers connecting to long haul/extended long haul)
Source: Cisco Systems, Inc.
The following describes the elements of a tiered metropolitan optical network:
• Metro access—This is the “last mile” functional tier, connecting to the customer residence (household) or to the customer premise (business and enterprise). One end of the communication link connects to the customer, and the other end connects upstream to the service provider’s metro edge equipment. A variety of physical link types and services are available in this tier.
• Metro edge—This tier serves to aggregate large volumes of individual customer connections into the provider’s multiservice edge equipment; it is referred to as the metro edge because it terminates the customer access link. While aggregating customer communications at the edge, it also provides transport of customer communications upstream toward the metropolitan core network. The metro edge focuses on service variety toward the customer.
• Metro core—This functional tier forms the main, area-wide, high-speed backbone of the metropolitan optical network. The metro core brings together large numbers of metro edge networks from a customer-facing viewpoint. From a provider interior viewpoint, the metro core connects to service POPs to access communication switching, routing, applications, and long-haul services. The core expands to maintain an appropriate proximity with suburban sprawl. The metro core focuses on capacity to collect and transport communications between the metro edge tiers and service POP(s).
• Service POP—The service POP is an interconnection point between the metro core network and the long-haul network, but is also usually the metropolitan concentration point of OSI Layer 2, Layer 3, and higher services. Primary service POPs are often centralized on the metropolitan core, and secondary service POPs are often distributed throughout the metropolitan optical network at the metro edge.
These well-designed tiers are an effective way to structure new-era metropolitan networks to meet the essential infrastructure requirements of broadband delivery and extended connectivity distances, increasing the variety of communication services in the process. Ultimately, provider business strategy and customer requirements influence metropolitan network design, so variations of tiers and topologies are present in service provider markets. With provider opportunities racing beyond mere transport services, the ability to provide Layer 2 and Layer 3 services, both fixed and mobile, within the metro network is key to the “triple play” (combined voice, video, and data plus mobility offering) and fundamental to the revenue growth and market competitiveness of the metropolitan network provider.

A functionally tiered infrastructure assists with bandwidth scale, traffic management, service distribution, dynamic flexibility, and most importantly, rapid implementation and provisioning of new, digitally intelligent communication services. The next sections provide further examination of the individual tiers of metro access, metro edge, metro core, and metro service POP. Many metros are growing to regional scale, so any impact on metro technology is considered.
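To make the hierarchy concrete, the following minimal Python sketch models the tiers described above as an ordered list and traces a customer connection upstream to the long-haul handoff. The tier names come from the text; the data structure and function are purely illustrative and are not from the book or any Cisco tool.

# Ordered model of the functional tiers, from the customer outward.
METRO_TIERS = [
    ("metro access",  "last-mile link to the customer residence or premise"),
    ("metro edge",    "aggregates customer connections; terminates access links"),
    ("metro core",    "area-wide high-speed backbone between edge networks"),
    ("service POP",   "Layer 2/3 and application services; long-haul interconnect"),
]

def trace_upstream(start="metro access"):
    """List the tiers a customer connection crosses on its way to long haul."""
    names = [name for name, _ in METRO_TIERS]
    return names[names.index(start):] + ["long haul / extended long haul"]

print(trace_upstream())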
Metro Access

Metro access networks are revenue centric. From this viewpoint, they tether the billable customer of record to the provider’s metropolitan service offerings. Metro access networks touch the customer and, as such, represent a large component of the provider’s billable revenue. Access protection designs and fault-free components and equipment are vital not only to meet high availability, but to protect revenue. A variety of services and interface types are necessary to attract customers and then retain them as their communication needs change.

Today’s all-fiber, metropolitan, local-access optical networks are commonly less than or equal to 100 kilometers, or about 62 miles. Normally, these access networks are unamplified, making use of lower-cost passive optical components and pluggable ITU optics. Metropolitan edge and core networks aggregate these access networks together for the local service provider and are positioned between access networks and long-haul networks.

The metro access market requires a discussion of next-to-last-mile optical fiber, generally referred to as fiber to the node (FTTN), and last-mile optical fiber, or fiber to the X (FTTX). FTTN is a migratory phase that positions the service provider’s optical network within
3000 to 5000 feet of residential or multitenant units. The build-out of central office optical line terminals (OLTs) to neighborhood-centric optical network units (ONUs) falls into this class. From here, many providers bridge their optical fiber distribution to their existing copper plant, whether it be xDSL or long-reach Ethernet (LRE) in the case of Incumbent Local Exchange Carriers (ILECs) and competitive local exchange carriers (CLECs), or with coaxial cable in the case of the cable multiple service operators (MSOs).

FTTX (X refers to any), and more specifically fiber to the building (FTTB), fiber to the curb (FTTC), or fiber to the home (FTTH), is the brightest beacon for metro access connections. Collectively referred to as fiber to the premise (FTTP), the use of optical fiber in the last mile revitalizes the metro access, quashing bit-rate barriers, multiservice margins, and sheer capacity limitations inherent in the copper cousins. Ethernet is laying claim to this fiber infrastructure.

Ethernet, scalable to 10 Gbps and beyond, is the ascendant Layer 2 data link technology for metro access. It is delivered over a variety of metro optical transport methods such as SONET/SDH, Resilient Packet Ring (RPR), coarse wavelength division multiplexing (CWDM), and dense wavelength division multiplexing (DWDM), and also directly via the Ethernet LAN PHY and WAN PHY sublayers. All of these are popular methods of transporting Ethernet over optical fiber. Ethernet equipment inherits the advanced security and quality of service (QoS) capabilities of the IP layer above. Pairing Ethernet with Layer 1 optical fiber creates a scalable access link that is positioned to dominate the access market, perhaps for the next 25 years. The rising availability of provider metro Ethernet product offerings reflects the recognition of Ethernet’s imminent dominance in the metro access market. Metro Ethernet is examined further in a later section titled “Metro Ethernet.”
Business Access

In the business market, metro access is awash with private-line time-division multiplexing (TDM); public Frame Relay, ATM, and Packet over SONET/SDH; Transparent LAN Services (TLS); and metro Ethernet. Optical fiber–based services are well established in the large business market. Optical fiber–based access links traditionally use OC-3/STM-1 speeds via ATM or Packet over SONET/SDH. Higher speeds of OC-12/STM-4 and OC-48/STM-16 are common among large businesses, while OC-192/STM-64 is rather rare in the access tier. Metro Ethernet platforms deliver Fast Ethernet and Gigabit Ethernet over optical fiber, and 10 Gigabit Ethernet is taking preference over OC-192 ATM and SONET/SDH links when needed in the metro access tier.
NOTE
Industry terminology is shifting from the OC-48 and OC-192 designations, introduced with SONET/SDH and ATM, to the more conversational bit-rate designations of 2.5 Gbps and 10 Gbps.
Optical access links for businesses and large enterprises often consist of linear runs of optical fiber, connecting the business campus to the provider’s nearest metropolitan fiber hubbing point (add/drop multiplexer location). This is normally just a few kilometers of distance as providers try to position their metropolitan optical core networks within close proximity of business parks and dense industrial suburbs. Options also exist for businesses to get diverse entrance facilities, with one entrance surviving a building perimeter fiber cut if the other is damaged due to construction or renovation. Depending on availability, a business may even connect the diverse entrance’s optical fiber with a second fiber hubbing point of the same provider to create a protection-based access fiber ring that not only survives a fiber cut but also endures a provider metro node equipment failure.

Conventional optical access links to large businesses are SONET/SDH-based facilities at speeds of OC-3 and higher. Recently, low-cost CWDM optics and metro-efficient DWDM have become available options to further leverage the existing fiber access connection for both higher speeds and multiple services. Figure 6-2 shows the positioning of metro access networks within the metro functional infrastructure.
Figure 6-2  Metro Access Network Positioning (diagram: the metro access tier within the metro access, metro edge, metro core, service POP, and long haul/extended long haul hierarchy)
Source: Cisco Systems, Inc.
Residential Access

In the residential market, metro access is hotly contested because traditional voice copper lines, video coaxial cable, wireless radio waves, and optical fiber are each capable of delivering multiple services to a particular extent. This is where DSL, cable, and metro Ethernet (copper based) are common broadband plays. Each of these technologies can
provide multiple services. DSL, cable, and copper-based Ethernet are covered in depth in Chapter 8, “Wireline Networks.”

For residential access in metropolitan areas, optical fiber availability has been particularly sparse. Initial cost of deployment, practical broadband enhancements to installed copper facilities, and even legislation have served to discourage the growth of residential fiber until FCC telecom rule changes during 2004. However, since about 2000, many new residential construction projects have been prewired with combinations of fiber and copper cable. High-density metropolises in countries such as Korea, Japan, and China are quickly moving toward the installation of optical fiber all the way to the premise. In addition to new construction opportunities, high-value residential areas are seeing movement of optical fiber ever closer to individual homes via the installation of lower-cost, passive optical components to form passive optical networks (PONs).
Passive Optical Networks (PONs)

Passive optics equals high availability and resistance to power outages, which are important considerations in residential markets. Passive optics also reduces outlay, critical in the residential market, where revenue per connection is lower than in the business market, required connections are high, and low cost drives volume.

Passive optics is an optical component classification. Optical fiber is a passive transport medium. The physics of light propagation via internal reflection and refraction appears to happen magically using small wires and mirrors. If you consider the optical fiber core as a small wire and the cladding as a mirror, then maybe such an analogy can be drawn. Nonetheless, no electron power is present or needed to keep photons moving through the core of an optical fiber once a ray of light becomes incident on one end of the fiber. Other production optical components can split photons (light), bend and redirect the light, slow down the light, speed up the light, block the light, and filter and combine different colors of light. These are primarily passive components in that they don’t require an electrical power source to manifest their optical properties on photons traveling from fiber to fiber. Active optical (electro-optical) components, on the other hand, require power, generate heat, and cost more to produce and support. Power fails, heat damages and shortens component life, and neither is desirable on a large scale. Passive optics represents the technology catalyst for accelerating deployment of optical fiber to the residences of the world.

Passive optics and optical fiber are, therefore, the fundamental building blocks of PONs. PONs use optical fiber for reach and make use of passive optical components such as couplers, splitters, filters, and triplexers to manipulate photons and wavelengths, much like switching trains to different tracks. The inert nature of PONs doesn’t constrain bit rates, so PONs are very fast. An optical laser (an active component) generates photons and launches them into the fiber at a designed bit rate (particular time duration between successive bits).
Once the timing of the photons is electrically modulated by the laser, they travel through the PON unimpeded except for the effects of minor optical impairments.
NOTE
The optical fiber of the PON itself does not constrain the bit rate. Because shorter distances are used—typically a dozen, two dozen, or more kilometers—light communications via PONs are relatively unaffected by impairments when compared with the protracted distances of long-haul optical networks.
PONs come in a few technology varieties, including the following:
• ATM PONs (APONs)—A mature technology used in the core layers of many service providers worldwide. Because APONs are standardized by the ITU (G.983), APONs were the first fiber access technology option to gather support for point-to-multipoint fiber access deployments.
• Gigabit PONs (GPONs)—GPONs at 1 Gbps and 2.5 Gbps are common. Service providers such as Verizon and SBC are deploying GPONs of the 2.5 Gbps variety.
• Ethernet PONs (EPONs)—EPONs run at 10 megabits per second, some at 100 Mbps, and have the potential to reach multigigabits per second.
PONs move traffic, optically, from optical line terminals (OLTs) in central facilities to optical network units (ONUs) distributed throughout broadband-savvy neighborhoods. The ONUs contain fibers to individual homes and the passive optical components necessary to split, couple, and essentially redirect photons both downstream to homes and upstream to the provider(s). From the perspective of the ONU toward the individual residences, this is a point-to-multipoint physical topology. Downstream optical communications within the optical fiber coming from the provider side (OLT) into the neighborhood ONU encounter a splitter or demultiplexer to divide the bandwidth among the homes served by that particular ONU. Communications flowing upstream from the homes toward the provider will be multiplexed at the ONU onto the fiber that connects upstream to the provider’s OLT.

By using optical fiber, PONs exhibit a high bandwidth capacity. PONs generally target 30 Mbps of user capacity or more. A bandwidth of 30 Mbps happens to be about the maximum capacity of typical coaxial cable MSO designs, and many PONs will attempt to target that inflection point. FTTN designs coupled with VDSL will attempt up to 52 Mbps per user. If you consider that a GPON runs at about 2.5 Gbps, and divide that by a 64-household split, you can deliver about 40 Mbps per household. With new MPEG-4 densities and Microsoft Media 9 compression, 40 Mbps is enough to accommodate six concurrent, high-definition video streams plus Wi-Fi backhaul and numerous concurrent Voice over IP (VoIP) conversations. Change the lasers at the OLT side to something faster, and the bandwidth automatically scales downstream.
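The split arithmetic above is easy to parameterize. The following Python sketch is illustrative only (the rates and split ratios are taken from the text; the helper function is not from the book) and shows how per-household bandwidth falls out of the shared PON line rate and the split ratio.

# Per-household bandwidth: divide the shared PON downstream rate by the split ratio.
def per_household_mbps(pon_gbps=2.5, split=64):
    return pon_gbps * 1000 / split

print(round(per_household_mbps(2.5, 64)))   # ~39 Mbps, the "about 40 Mbps" cited in the text
print(round(per_household_mbps(2.5, 32)))   # ~78 Mbps with a smaller 32-way split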
Some providers of PONs are targeting 30 Mbps per household up to 100 Mbps per household. The bandwidth target per household will rally around denominations that provide a suitable amount of video, voice, Internet data, and entertainment—doing so within a reasonable price tag.

PONs are the technology direction for point-to-multipoint, optical fiber implementations of either the FTTN or FTTP type of initiatives. They represent the new era of metro access technology that is intended to deliver scalable broadband to the masses. Figure 6-3 depicts the positioning of PONs within the metropolitan Ethernet service infrastructure.
Figure 6-3  Passive Optical Networks within Metropolitan Ethernet Services (diagram: provider Layer 3 IP and ATM services delivered over point-to-point and point-to-multipoint metropolitan Ethernet services, including Ethernet over MPLS, RPR/DPT, SONET/SDH, and CWDM/DWDM, which in turn ride over APON, EPON, and GPON point-to-multipoint access and an optical fiber layer mixing FTTN and FTTP for enterprise, business, and residential subscribers)
Wireline, wireless, optical point-to-point metro access services, and point-to-multipoint access services (such as PONs) connect the subscriber premise to the metropolitan edge. Since the metropolitan access layer is customer-facing, there exists a large variety of service interface types to meet an assortment of customer communication options. The desire to transition narrow- and wideband communication interfaces to broadband options
(for example, xDSL) and the opportunity to displace wireline with glassline further augment interface choices, resulting in a requirement to support a wide variety of physical interfaces. The metro access layer therefore depends on a wide service variety from the metropolitan edge equipment.
Metro Edge

The metro edge aggregates large volumes of individual customer connections into the provider’s multiservice edge equipment. It terminates the customer access links, bundling customer communications at the edge, and further provides transport of customer communications upstream toward the metropolitan core network. The metro edge focuses on service variety toward the customer.

Not only is the metro edge responsible for a wide variety of service interfaces, but it is also the intense focus of next-generation services. That is because the march to a broadband world must begin at the metro edge, the first-level aggregation/distribution point for broadband communications. Figure 6-4 shows the positioning of metro edge networks within the metro functional infrastructure.
Figure 6-4  Metro Edge Network Positioning (diagram: the metro edge tier within the metro access, metro edge, metro core, service POP, and long haul/extended long haul hierarchy)
Today’s metro edge networks are evolving, gaining new functionality and intelligence while connecting the metro’s broadband access layers with the metro core, adding great bandwidth and service variety in the process.
The Metro Edge Evolves

Well into 2000, many of the metro edge networks in both America and abroad were still using legacy SONET/SDH designs and standards. Long-established SONET/SDH ADMs—designed for TDM optimization—couldn’t provide new Layer 3 IP capabilities with packet interfaces for Ethernet, Fast Ethernet, and eventually Gigabit Ethernet opportunities. The equipment platforms were also disadvantaged when dealing with the sporadic, escalating traffic patterns presented by IP data applications. A next-generation SONET/SDH evolution or separate equipment overlay for IP would be necessary. In fact, many providers began with the IP overlay strategy in the late 1990s to capture opportunities for Internet service provisioning, often building separate business units in the process. Yet, a cost-effective evolution from traditional SONET/SDH became the overarching direction.

The metro edge saw an infusion of next-generation SONET/SDH capabilities in 2000 in the form of MSPPs (refer to Chapter 3, “Multiservice Networks”). Next-generation SONET/SDH products were successfully incorporating the fundamentals of TDM optimization and onboard add/drop multiplexing, while integrating Layer 2 Ethernet interfaces and Layer 3 intelligence, and developing roadmaps to optical WDM.
Intelligence Moves to the Metro Edge

Since the introduction of new MSPP platforms, the metro edge has been in a state of transition, further optimizing TDM and Packet over SONET, accommodating data as the new volume leader over voice, providing Layer 2 and Layer 3 Ethernet services and storage interfaces, substituting IP intelligence for ATM, and incorporating WDM extensions for provisioning of wavelength services. There is a developing theme here: ever-increasing intelligence in the metro edge.

The micro-miniaturization of semiconductors, and the successes of semiconductors in the computing world, are gaining momentum in the advancement of routing and switching intelligence for telecommunications. More intelligence underlies new-era metro access services, and the source of that intelligence emanates from the metro edge. This intelligence is multifaceted, yet one thing is immediately noteworthy: the routing and switching of voice and data is rippling closer to the edge.

MSPPs integrate a circuit-based digital access cross-connect system (DACS) functionality that was previously only resident at the hub central office (CO). The provider’s remote or suboffices wouldn’t generally have the real estate to provide the necessary cross-connect footprint. Therefore, traffic would be aggregated and backhauled to reach the cross-connect function within the large CO. Now that MSPPs contain DACS technology, this function is distributed to the metro edge, reducing backhaul circuit usage. A secondary benefit is reduced requirements for DACS ports at the hub CO, reducing port consumption and the associated cost.

The circuit cross-connect function primarily serves dedicated voice and data circuits using a 3/1 and a 3/3 digital access cross-connect system. Traffic grooming is a typical application
where T1s are mapped into DS3s and partially filled DS3s are mapped into fewer and fuller DS3s. The DACS functionality at the metro edge optimizes circuits, reduces backhaul requirements, and reduces equipment requirements at a hub CO. Overall, this provides effective Synchronous Transport Signal/Virtual Container (STS/VC) management.

The routing and switching of packet data moves closer to the metro edge with support for IP and Ethernet services within the MSPP. In this way, packet switching is further distributed in the network toward the access layer, and the intelligence of IP services allows for features such as QoS for traffic guarantees and 802.1Q-in-802.1Q tunneling for metropolitan Ethernet virtual LAN (VLAN) services. Only packets with destinations outside the MSPP’s local switching domain are sent upstream toward the core, saving optical backbone bandwidth for more interdomain services.

The intelligent metro edge is, therefore, central to advanced services. The metro edge is accountable for service variety and services aggregation. It is also responsible for service density because service aggregation presents a large number of DS3s. The integration of Ethernet, storage interfaces, DWDM, and switching intelligence provides customer-facing options for strategic services. Metro edge designs, based on MSPP capabilities, are the new, multifunctional, Swiss army knives of metropolitan networking.
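As an illustration of the grooming idea above, the following Python sketch (a hypothetical repacking exercise, not a Cisco algorithm or anything from the book) consolidates DS1 circuits from partially filled DS3s into the fewest possible DS3s.

# DS1-to-DS3 grooming: partially filled DS3s are consolidated into fewer,
# fuller DS3s before backhaul.
import math

DS1_SLOTS_PER_DS3 = 28  # a DS3 multiplexes 28 DS1 (T1) signals

def groom(ds3_fill_counts):
    """Repack DS1 circuits from partially filled DS3s into the fewest DS3s."""
    total_ds1 = sum(ds3_fill_counts)
    needed = math.ceil(total_ds1 / DS1_SLOTS_PER_DS3)
    groomed = []
    remaining = total_ds1
    for _ in range(needed):
        take = min(DS1_SLOTS_PER_DS3, remaining)
        groomed.append(take)
        remaining -= take
    return groomed

# Example: five DS3s carrying 12, 7, 20, 5, and 16 DS1s (60 total) groom
# into three DS3s filled 28, 28, and 4: two fewer DS3s to backhaul.
print(groom([12, 7, 20, 5, 16]))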
Connecting Metro Access to Metro Core

Today’s metro edge networks are considered aggregators and collectors of local traffic, connecting the metro access layer to the metro core. Providers often use ring-based topologies to connect the edge upstream to the metro core. With the relatively short optical fiber distances in the metro, these optical transmission spans are largely unamplified. Toward the customer, distances are even shorter, and the use of passive optical components is more than sufficient. Thin film filters and low-cost pluggable International Telecommunication Union (ITU) optics such as Gigabit Interface Converters (GBICs), small form-factor pluggables (SFPs), and Xenpaks are desirable, particularly toward the access network side.

While linear, mesh, and ring topologies are options for metro reach, edge networks are primarily deployed in fiber-based rings, traditionally two-fiber unidirectional path-switched rings (UPSR) or four-fiber bidirectional line-switched rings (BLSR), using the SONET standard. The European Synchronous Digital Hierarchy (SDH) standard has different names for these similar topologies: the subnetwork connection protection (SNCP) ring is similar to a SONET UPSR, and the SDH multiplex section-shared protection ring (MS-SPRing) is a two- or four-fiber ring topology that is comparable in function to the SONET BLSR. These UPSR, BLSR, SNCP, and MS-SPRing designs are cost-effective approaches to distributing add/drop multiplexing functions into the metro edge network and have been largely depended on to provide sub-50-ms protection for TDM-based voice traffic.

Metro edge rings provide protected customer voice and data transmission, sending customer traffic into the metro core in a physical circular direction. If there is a break in
the fiber or an equipment node failure along the path, the fiber automatically loops (self-heals) at the next upstream node and travels in a counter-rotating direction. Even though the physical packets are circulating in a ring topology, the actual session communication method is logically a hub-and-spoke communication pathway.
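The loop-around behavior just described can be sketched in a few lines of Python. This is only an illustration of the counter-rotating protection concept; the ring and node names are hypothetical, and real UPSR/BLSR protection signaling works at the SONET/SDH layer, not like this.

# If the clockwise working path crosses a cut span, traffic is switched to
# the counter-rotating (counterclockwise) protect path.
def clockwise_path(ring, src, dst):
    i = ring.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i + 1) % len(ring)
        path.append(ring[i])
    return path

def protected_path(ring, src, dst, cut_span=None):
    """Return the working path, or the protect path if the cut blocks it."""
    work = clockwise_path(ring, src, dst)
    spans = set(zip(work, work[1:]))
    if cut_span is None or (cut_span not in spans and tuple(reversed(cut_span)) not in spans):
        return work                                           # working path is intact
    return clockwise_path(list(reversed(ring)), src, dst)     # counter-rotating protect path

ring = ["CO-A", "CO-B", "CO-C", "CO-D", "CO-E", "CO-F"]
print(protected_path(ring, "CO-A", "CO-D"))                                 # working: A-B-C-D
print(protected_path(ring, "CO-A", "CO-D", cut_span=("CO-B", "CO-C")))      # protect: A-F-E-D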
Increasing Bandwidth and Services in the Metro Edge

Metropolitan networks are often deployed in a hierarchical layout. Data accumulates from the access-to-edge aggregation to core delivery, placing large bandwidth burdens on the network nucleus. Many of these networks are still ring based, premeditated around SONET/SDH for traditional TDM requirements while supporting the lowest-cost network protection and redundancy—that of a ring orientation. With TDM requirements, SONET/SDH’s fourfold escalation was more than adequate to outpace the growth of voice and data circuits, for example, OC-3 to OC-12, to OC-48, to OC-192, and so on. As today’s networks are pursuing optimization for data traffic, a fourfold bandwidth boost isn’t enough to outrun a traffic profile that is accustomed to tenfold bandwidth growth every few years. Computers, mobile handhelds, and wireless pocket PCs—the launchpads and landing strips of data—use either 10 Mbps Ethernet, 100 Mbps Ethernet, or 1000 Mbps (1 Gbps) Ethernet, with many high-end computing systems shipping with 10,000 Mbps (10 Gbps) Ethernet. Ethernet, the primary information feeder of new-era networks, advances in tenfold bandwidth increments, not four.

This has resulted in many metro designs pursuing a SONET/SDH ring-stacking approach to add more bandwidth in the core. For example, if a provider has an OC-48 (2.5 Gbps) today and is anticipating tenfold traffic growth, then a speed jump to OC-192 still falls short by sixfold. Adding and stacking multiple OC-48s or several OC-192s is a common approach. The distribution of intelligent switching platforms such as MSPPs is another approach. Other approaches include RPR to provide more bandwidth headroom for data and the use of DWDM in the metro core. DWDM is particularly scalable, providing bandwidth relief by multiplying 10 Gbps lambdas on the same fiber pair. For the reasonable future, many providers will select metro-efficient DWDM as they attempt to compensate for the tributary-to-core bandwidth mismatches. Metro DWDM, however, needs to be very modular, allowing the incremental growth of capacity on a pay-as-you-grow plan. The use of modular reconfigurable optical add/drop multiplexing (ROADM) functionality with MSPPs is also paramount to quickly recovering and reprovisioning bandwidth. In this context, DWDM is an appropriate response to metro capacity needs, yet there is also a service-oriented aspect that becomes the dominant justification for DWDM to the metro edge—that of wavelength services. Metro DWDM is further explored later in this chapter.

The metro edge layer then performs multiple functions. It provides a wide variety of service interfaces, speeds, and connection topologies toward the metro access layer and performs the aggregation and grooming of customer traffic toward the larger metropolitan core network and service POPs. The metro edge is a focal point for the delivery of advanced,
intelligent service awareness and becomes the meeting point for distributed intelligence such as IP Layer 3 functionality and subservice POPs. The addition of DWDM and ROADM technology into the metro is key to creating new wavelength services and is a fresh approach to scalable capacity. Innovative advances, increased densities, and price/performance gains in optical, switching, and routing technology are enabling the level of intelligence and automated adaptability that is required in the metro edge, currently the most dynamic area of all network mileposts.
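The ring-stacking arithmetic mentioned earlier can be checked with a short Python sketch. The line rates come from the text; the helper function is purely illustrative and is not from the book.

# How many stacked rings are needed to absorb tenfold growth over an OC-48?
import math

OC48_GBPS, OC192_GBPS = 2.5, 10.0

def rings_needed(target_gbps, ring_gbps):
    return math.ceil(target_gbps / ring_gbps)

target = 10 * OC48_GBPS                     # tenfold traffic growth: 25 Gbps
print(rings_needed(target, OC48_GBPS))      # 10 stacked OC-48 rings
print(rings_needed(target, OC192_GBPS))     # 3 OC-192 rings; a single OC-192 falls 15 Gbps short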
Metro Core

The metropolitan core (see Figure 6-5) forms the main, area-wide, high-speed backbone of the metropolitan optical network. The metro core brings together large numbers of metro edge networks from a customer-facing viewpoint. From a provider interior viewpoint, the metro core connects to service POPs to access communication switching, routing, and applications, and provides the runway interconnect to long-haul services. The core is responsible for geographic reach, expanding to maintain appropriate proximity with suburban sprawl. In addition, the metro core focuses on high-bandwidth capacity and ultra-high availability to collect and transport communications between the metro edge aggregation points and the service POP(s). Toward the service POP, service aggregation capability is important to efficiently bundle and present many different port speeds, protocols, applications, and traffic types.
Figure 6-5  Metro Core Networks Positioning (diagram: the metro core tier within the metro access, metro edge, metro core, service POP, and long haul/extended long haul hierarchy)
Source: Cisco Systems, Inc.
Defining Today’s Metro Core

The metro core can be fairly defined as the optical equipment and interconnecting fiber plant between Tier 1 COs in a provider’s geographic area of coverage. For example, a network provider in New York City may have a dozen to two dozen COs on Manhattan Island, with an optical fiber cable daisy-chaining and interconnecting all of them together into a metropolitan core. This interoffice fiber (IOF) that establishes the metro core is “high-value fiber” in that the effort and expense to augment and replicate that path with additional fiber cable is very disruptive. The metro core uses this optical fiber backbone as the high-speed carrier of communication traffic from CO to CO, adding additional communication calls/sessions at some COs/carrier hotels while dropping these sessions at other COs and carrier hotel interconnects (see Figure 6-6). Extending the life of IOFs, customarily all fiber in practice, is paramount to metropolitan optical network scalability and expandability.
Figure 6-6  Basic Metro Ring and Lateral Design (diagram: central offices and carrier hotels on a metro ring within the local exchange network, with break-out points and laterals reaching end-user buildings and a connection to the long-distance network)
Source: © 2004 PriMetrica
Larger metropolitan areas expand beyond their primary county of record, following high-value subscribers into surrounding counties. New fiber is laid or leased, new COs are established, and equipment and facilities are spliced into the existing metro core network backbone. While growing the core network to include intercounty population centers, many of the largest metro cores are now reaching up to 600 km in circumference, requiring some optical reamplification. Optical amplification can be expensive, so very cost-effective erbium-doped fiber amplifiers (EDFAs) are desirable for metro amplification. More intelligence is increasingly fundamental as ROADMs, automatic power control, and other dynamic features can be utilized to remain flexible and fast in response to subscriber demand. Many of the larger metropolitan areas are expansive enough to be classified with regional status, earning the designation “metro regional.”

Customary speeds in the metro core are a minimum of OC-48/STM-16 using SONET/SDH standards or 2.5 Gbps using non-TDM optical bit-rate specifications. Depending on the subscriber density and offered traffic volumes, the metro core may use multiple OC-48s to scale capacity or migrate/jump to backbone speeds of OC-192/STM-64 (SONET/SDH) or 10 Gbps optical bit rates. Beyond using just a bit-rate speed increase, multiring designs, mesh connections, and the use of DWDM are all options with which to scale traffic capacity of a metro core. Larger metros will likely employ OC-192/10 Gbps backbone rings and use DWDM functionality for scale. Some may consider OC-768/40 Gbps speeds when appropriate.
Scaling Core Bandwidth

Metro core networks are migrating to two-fiber and four-fiber DWDM rings, particularly to address high-value fiber exhaust. The use of metro-optimized DWDM technology is desirable, and many metro products now integrate DWDM function. A proper blend of active and passive DWDM components contributes to the cost and operational efficiencies that are required in the metro. Low-cost, modular filtering of 4 to 64 lambdas is expected, with lower lambda counts typically using passive optics and higher lambda counts moving into ROADM technology for maximum flexibility. Metro cores will demand modular, incremental DWDM channel capabilities for a pay-as-you-grow strategy but can be expected to target from 40 to 80 or perhaps 120 lambdas, depending on selected channel spacing, fiber characteristics, and chosen optical windows. The use of the C-band optical window is popular because these components are widely available and such availability lowers cost.

The use of metro-efficient DWDM presents new design strategies for metro core traffic profiles. A provider can dedicate certain traffic classes to individual lambdas for ease of management, to simplify QoS, and to execute service-level guarantees. Distributing metro DWDM from the core, beyond the metro edge and into the metro access, provides the ability to offer wavelength and subwavelength services.
The use of metro DWDM scales capacity to quickly keep up with user demand. One hundred lambdas at 10 Gbps each equal 1000 Gbps of traffic capacity. As networks deploy metro DWDM and proffer such capacities, advanced computing applications, metro-based storage farms, and residential Ethernet will seek to fill that capacity. For the larger metros, thousands of gigabits to thousands of terabits may be needed within five years. Petabits of capacity may exist in metropolitan core networks within another seven years.
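The lambda arithmetic above is straightforward to tabulate. The following Python sketch is illustrative only; the channel counts mirror the 40-to-120-lambda range mentioned earlier, and the helper function is not from the book.

# DWDM capacity on a metro core fiber pair: channel count times per-lambda rate.
def dwdm_capacity_gbps(lambda_count, gbps_per_lambda=10.0):
    return lambda_count * gbps_per_lambda

for lambdas in (40, 80, 100, 120):
    print(f"{lambdas} lambdas x 10 Gbps = {dwdm_capacity_gbps(lambdas):.0f} Gbps")
# 100 lambdas x 10 Gbps = 1000 Gbps, matching the figure cited in the text.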
Scaling Core Topology

Metro cores increase availability through the use of redundant equipment, the use of automatic protection switching, and predominantly, fiber-ring topology designs. The fiber-ring topology design has proven to be very resilient. Not only is the ring design used for metropolitan optical networks, but the practice is also used for national optical networks. When using SONET/SDH rings, the SONET headers limit the number of nodes to 16 total. In fact, many metro cores contain rings of about 6 to 14 nodes in each such ring. Cisco uses an extension feature to increase this capability to 25 nodes, which helps with reaching larger ring circumferences, all other optical transmission limitations considered.

In a metropolitan area, you can expect that continuous construction, road and utility installation, and repair will often dig up, crush, or break optical fiber cables. Optical fiber rings and equipment are capable of quick, virtually automatic switching around such events. As a simple example, a two-fiber ring would pass through all of the interoffice core equipment locations to form two counter-rotating rings or traffic paths—a clockwise path and a counterclockwise path. If a fiber span is cut between a pair of offices, then each office equipment node adjacent to that cut will sense and switch internally, communicating to the other ring nodes to lash what’s still functional between the two counter-rotating fiber strands into a one-fiber-strand ring. Each node therefore maintains connectivity, and traffic is protected from a fiber-cut event.

In practice, there are various ring designs and protection options such as two-fiber UPSRs, two-fiber BLSRs, and four-fiber BLSRs. The two-fiber UPSR has the advantage of providing lower-cost survivability and is often used with smaller ring circumferences such as metro access and metro edge rings, where availability requirements are less stringent. Four-fiber BLSRs are more expensive to procure and stand up, but they have the added advantages of surviving multiple concurrent fiber cuts and node failures; they also double the production bandwidth capacity of the ring and are capable of spatial reuse of ring bandwidth. Four-fiber BLSR rings are very desirable in large ring topologies and are customarily used to provide the ultra-high availability metrics that are essential for metropolitan core networks. Another high-availability option is applying a feature called a path-protected mesh network (PPMN).
A PPMN takes a mesh or semi-mesh interconnected design, and through Layer 3 autodiscovery routing protocols makes each node in the mesh topology aware of the others. The shortest calculated path is determined to be the working traffic path. A backup path between the same pair of nodes—through a meshed connection via a third node—is established, forming a virtual ring over the mesh topology. During failures of fiber or equipment, the PPMN protocols apply UPSR-like protection, switching to surviving traffic paths. The result is ring-like protection without constructing a physical fiber ring. When a metro core ring needs to scale capacity, one of the design options is to create fiber connections between some of the nodes on the ring to form a semi-mesh. The use of PPMN can then apply ring-based redundancy features. In addition, PPMN allows different line rates to be mixed together to form the PPMN virtual rings. Such a design can scale core capacity at an incremental cost. Figures 6-7 and 6-8 depict PPMNs.
Figure 6-7  Path-Protected Mesh Networks (diagram: an 11-node mesh with working traffic on a primary path between the source and destination nodes and protect traffic on a secondary path)
Source: Cisco Systems, Inc.
Figure 6-8  Path-Protected Mesh Network Virtual Ring (diagram: ONS 15600 and ONS 15454 nodes interconnected by OC-48 spans forming an OC-192 UPSR virtual ring)
Source: Cisco Systems, Inc.
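The PPMN behavior described above (shortest path as the working path, plus a diverse backup path through other nodes) can be illustrated with a small Python sketch. The topology and node names are hypothetical, and the breadth-first search here merely stands in for the Layer 3 autodiscovery and path selection that actual PPMN implementations perform.

# Working path = shortest path; backup path = shortest path that avoids the
# working path's intermediate nodes, forming a virtual protection ring.
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    """Breadth-first shortest path that avoids any nodes in `banned`."""
    prev, queue, seen = {src: None}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in seen and nxt not in banned:
                seen.add(nxt)
                prev[nxt] = node
                queue.append(nxt)
    return None

# Hypothetical semi-mesh of metro core nodes
adj = {
    "N1": ["N2", "N4"], "N2": ["N1", "N3", "N5"], "N3": ["N2", "N6"],
    "N4": ["N1", "N5"], "N5": ["N2", "N4", "N6"], "N6": ["N3", "N5"],
}
working = shortest_path(adj, "N1", "N6")
backup = shortest_path(adj, "N1", "N6", banned=frozenset(working[1:-1]))
print("working:", working)   # e.g., N1-N2-N3-N6
print("backup: ", backup)    # e.g., N1-N4-N5-N6 (node-diverse, forming a virtual ring)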
Connecting Metro Edge to Service POP

Metro cores also require service aggregation. The metro core provides transport between metro edge fabrics but is particularly responsible for the density of traffic presented to and received from the service POP. When you consider that the core is using high-value fiber, a key requirement is the efficient packing of communication transactions at the service POP interface, optimizing the high-value fiber as much as possible. This is known as service aggregation and is commonly implemented with technology much like that found at the metro edge. It is expected that the metro core-to-service POP interface must support a larger aggregate of optical rings and optical linears (point-to-points), perhaps even some optical laterals (optical drops to nearby customers). This calls for high-density MSPP-like technology, generally classified in telecommunication taxonomy as the multiservice switching platform (MSSP). Therefore, the MSSP is the primary interface between the metro core and high-density service POP(s).

Contemporary metro core networks are more data-oriented optical designs using a variety of topologies and technologies to increase value to the overall metropolitan network. For many providers, metropolitan core networks are part of a larger regional optical backbone that serves the provider’s geographic market. Each metropolitan core provides geographic reach and transport, bandwidth scalability, the highest availability, and efficient service aggregation into and out of service POPs.
Service POP

Service POPs are circuit, application, and IP centric. Most IP and VPN services, application servers, content servers, ISP connections, and provider Layer 2 managed services are generally resident in the service POP. The service POP provides multiple functions, including optical switching, IP routing (both edge and backbone), TDM grooming and aggregation, and high-value Internet application services. Synchronous Transport Signal (STS) and optical bandwidth management are also key requirements. This tier additionally provides access to long-haul networks and ISP connections. It supports backbone switching between multiple service POPs, service interworking of protocols, hosted content delivery, and web and DNS caching services.

Primary service POPs are centralized on the metropolitan core, and secondary service POPs are often distributed throughout the metropolitan optical network closer to the metro edge. In this way, distributed service POPs concentrate specific services such as IP, Ethernet, and CWDM and DWDM wavelengths, while distributing intelligence closer to the customer. Figure 6-9 shows the positioning of the service POP tier within the metro functional infrastructure.
Figure 6-9  Service POP Positioning (diagram: the service POP tier within the metro access, metro edge, metro core, and long haul/extended long haul hierarchy)
Source: Cisco Systems, Inc.
Located within a service POP are higher-layer switching functions and platforms. Voice switching would utilize a class 5 TDM voice switch. Data virtual circuits using ATM or Frame Relay will interface with Layer 2 ports on ATM switches and Frame Relay switching
equipment within the service POP. IP routing services such as MPLS and VPNs will connect to Layer 3 routers within the service POP. ISP services, web services, content networking, and other Layer 4-7 application services may reside on computer servers within the service POP. Additionally, equipment that effectively aggregates optical channels from the metro core, destined for the metro regional or long-haul networks, will populate the service POP. In effect, Layer 2, Layer 3, and Layer 4 services and their delivery platforms are very concentrated within this tier.

Service POPs perform many services, including those that follow, to provide circuit-, packet-, and application-based services:
• Metro switching—Requirements call for the switching of hundreds of OC-48s and dozens of OC-192s. Support for OC-3, OC-12, and GFP-based Gigabit Ethernet flexibility is also needed within a service POP. The use of multirate line cards with pluggable optics lowers cost and reduces inventory sparing. Switching data between two fiber BLSR rings is a common application. The use of MSSP platforms helps to optimize the equipment density within a service POP and allow for scale of high-speed switching interfaces.
• Hosted telephony services—Wireline telephony has been the classic fixture of service POPs. Class 5 TDM voice switching and trunking supports all customer types with managed voice services (Centrex), customer PBX and video switching services using ISDN PRI/BRIs, and analog voice features for POTS, fax, and data modem access services. Telephony is undergoing tremendous change from TDM circuit to packet-based technologies based on IP. The infusion of IP into voice telephony has expanded the number of differentiated voice services that a provider can leverage.
• IP routing—Many providers today have an MPLS core network that covers their regional market. A service POP is likely to contain a number of MPLS provider edge routers that provide MPLS and VPN service interconnection within the metropolitan network and to points beyond. IP routing is fundamental to many of the other service platforms that need IP intelligence. Features such as QoS, security, and packet voice depend on this intelligence, and many of the larger IP routers in metropolitan networks are resident in the service POP layer.
• VPN services—VPN services take many forms such as access VPNs, intranet VPNs, and extranet VPNs. Layer 2 data VPNs such as Frame Relay and ATM are common offerings. Layer 3 data VPNs using IP are increasingly used within the service POP to converge packet voice, data, and the Internet access needs of customers. Provider-based VPNs provide customers the benefit of private networking features with the lower costs of a shared infrastructure.
• TDM grooming and aggregation—The service POP is the concentration point for TDM services. In addition to interfacing with hosted telephony services, many TDM circuits bypass Class 5 voice switches and must be aggregated and groomed into larger bandwidth pipes for regional, long-haul, or extended long-haul transport. Service POP equipment such as MSSPs and MSPPs can function as distributed bandwidth managers for effective aggregation of OC-n, STS-1, and VT 1.5 cross-connection services.
• Wavelength services—Managed wavelength services are driven by high-speed enterprise computing and storage networking requirements. Wavelength services have matured from custom buildouts, to tariffed offerings, to recent managed wavelength services built upon intelligent DWDM technologies. The ability to support and aggregate multiple protocol types and service interfaces into wavelengths is a desirable capability. Wavelength services usually combine DWDM equipment at the customer premise, the metro edge, the metro core, and the service POP for wavelength aggregation to storage services, LAN extension, long-haul transport, and so on. Managed wavelength services help customers meet high-speed networking requirements without the investment of building their own private, optical fiber networks.
• High-value Internet services—Internet accessibility is a primary lead-with service. Whether delivered in the metro access layer via dial-up, cellular, cable, DSL, T1/E1, Wi-Fi, T3/E3, or OC-n/STM-n, the ISP’s distribution and core layer routing and switching platforms are often resident in service POPs. In addition, a number of usability applications are necessary, such as DNS servers, DHCP servers, e-mail servers, and newsfeed servers, all important to the customer’s overall Internet experience. Service POPs that colocate Internet services from multiple providers are often referred to as carrier hotels.
• Content, e-commerce, and applications services—Content services tend to be Layer 4 and higher services that locate multimedia, web hosting, software support applications, and e-commerce engines closer to the Internet subscriber base. These are generally computer servers that are located in some of the Tier 1 service POPs.
• Video delivery—Platforms that produce, adapt, and push televideo services such as broadcast television, cable television, high-definition television, pay-per-view, and video on demand are often resident in the service POP of media providers. Cable MSOs tend to call these locations cable headends rather than using the term service POPs.
• Hosted storage services—Service POPs may provide managed storage services. The requirement for many critical industries and business functions to survive data and accessibility disasters encompasses the remote storage and replication of data, applications, and/or complete data centers. Hosted storage platforms are provider assets that commonly reside in larger service POPs. Storage services commonly interface through the metro core to the MSPPs in the metro edge/access that provide physical connection to the customer’s mainframes, servers, and storage area networks (SANs).
Service POPs are thus used to perform service adaptation, packet switching, circuit switching, bandwidth grooming, IP edge and core routing, Internet services, content services, video services, storage services, and so on. This tier is a valuable point of leverage for next-generation services that enable computing and messaging convergence, worldwide Internet visibility, high-value content delivery, and reduced response times through content and service distribution.
Metro Regional

Along with suburban sprawl comes metropolitan network extension. Many of the Tier 1 cities of America, such as Atlanta, Chicago, Dallas, Denver, San Francisco, and Los Angeles, have undergone widespread suburban expansion, vastly increasing the boundaries of their metropolitan statistical areas (MSAs). Covering their respective geographies with ring-based network topologies pushes these networks to distances of 300 km or more. As suggested in Chapters 5 and 7, optical signal reamplification is generally needed every 50 to 80 km, depending on a number of factors. More expensive optical-signal regeneration, such as optical-to-electrical-to-optical (OEO) conversion, can be needed when cumulative distances approach 500 to 600 km.

Beyond geographic expansion, other business requirements are driving the need for metro regional network connectivity. For example, financial service regulations require broker/dealers to maintain backup data centers on separate electrical grids. The recommendations call for 200 to 300 km (120 to 180 miles) of separation between data centers. High-speed connectivity with minimal latency is a must for storage mirroring. These requirements are well served by a metro regional network owned and operated by a single provider. As another example, health care regulations such as the U.S. Health Insurance Portability and Accountability Act (HIPAA) require information security, encryption, and secure transmission for health information communications and records. Using a metro regional network from a single provider can strengthen security options, because a large percentage of regional communication transactions can remain within the provider's infrastructure and avoid Internet backbone segments.

Many states and European countries are creating regional optical networks of their own. Both the price and the complexity of acquiring, building, and operating optical networks have improved dramatically, and many entities are seizing the opportunity to in-source regional optical transport for high-bandwidth and high-value services. These networks are usually targeted initially at a small set of requirements for interconnecting facilities and communities of mutual computing and networking interest, yet they achieve distances easily classified as regional in scope. Metro regional designs may use ROADMs and EDFA-based amplification, and they depend heavily on network intelligence for power automation. Other enabling technologies are long-reach optics at 1550 nm nominal wavelengths and the use of DWDM, helping to achieve fiber span distances of 40 to 80 km between ring nodes.
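A rough Python sketch of the reach arithmetic above estimates how many amplified spans a metro regional ring might need. The 70 km spacing and the 600 km regeneration threshold are assumptions drawn from the ranges quoted in this section, not engineering rules.

import math

def ring_amp_estimate(circumference_km, amp_spacing_km=70.0, regen_limit_km=600.0):
    """Rough reach arithmetic for a metro regional ring.

    amp_spacing_km: assumed reamplification interval (50 to 80 km is cited above).
    regen_limit_km: assumed cumulative distance before OEO regeneration (500 to 600 km above).
    """
    spans = math.ceil(circumference_km / amp_spacing_km)  # roughly one amplified node per span
    needs_regen = circumference_km > regen_limit_km
    return spans, needs_regen

spans, regen = ring_amp_estimate(300)  # a 300 km metro regional ring
print(f"Amplified spans: ~{spans}, OEO regeneration likely needed: {regen}")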
Providers can leverage metropolitan and regional presence to link Tier 1 and Tier 2 COs into a metro regional core or regional backbone. This backbone may be a mesh connection between a few select MSAs within the provider's area of coverage, yet it is designed to serve regional connectivity between multiple metro core networks. This increases the ability to keep regional traffic on more private optical facilities and off of the Internet. Keeping more end-to-end traffic responsibility allows the provider to use the QoS, traffic management, and security features of IP to a fuller extent. This provides stronger differentiators for product offerings based on service guarantees, security, and so on.

Thus, metro regional networks can include sprawling metropolis geographies as well as regional connectivity between metropolitan areas. Many former metropolitan providers are building greenfield regional networks to expand their markets and revenues. A key design win is the ability to use the same or similar metro platforms, flexible enough to function as building blocks for metro edge, metro core, or metro regional network designs.

Previous chapters introduce much of the technical background behind the technologies and solutions in this section. Technologies in and of themselves can be used for revenue generation and cost control, and many are employed based on their individual merits. Technologies that are applied to the creation or enhancement of services and solutions yield offerings that are more customer centric, more value differentiated, and, hopefully, more technology agnostic. Service providers and network operators are in the business of service-orienting their product offerings in search of maximum acceptance, beneficial margins, accelerated customer growth, and increased customer retention. With the technology backdrop of prior chapters, the next sections emphasize the service-oriented networking features of metro technologies as applied to metropolitan optical solutions.
Metro SONET/SDH

There's no question that SONET/SDH networks have provided customers with voice and data network extensibility. For almost twenty years, SONET/SDH networks in the metropolitan areas of the world have provided network reach, bandwidth scale, and almost continuous availability. SONET/SDH was developed as a reliable, bit-oriented, TDM multiplexing and synchronous data transmission sublayer protocol with which to link ATM switching services with the optical fibers of metropolitan, long-haul, and global networks. For the metropolitan area, SONET/SDH-compliant products brought new efficiencies and capacities to TDM voice transport, high-availability mechanisms to complement the low error rates of optical fiber, and fiber-ring topology support for hitless survivability of fiber cable cuts and node failures. For metropolitan data, SONET/SDH provided high-bandwidth transmission pipes for government, large enterprise, universities, and dot-coms. Packet over SONET/SDH (PoS) is an adaptation of SONET/SDH to provide less overhead in the
transport of asynchronous, bursty, IP packet data. PoS remains popular for point-to-point packet connections such as enterprise WAN connections, ISP Internet connections, and internal provider connections linking service POPs with aggregation routers.

The international installed base of SONET/SDH is so large, and its deployment in the metropolitan space so concentrated, that further adaptations for IP packet transport—if well engineered—can quickly achieve widespread acceptance. Those with a vested interest in SONET/SDH's longevity—researchers, vendors, and providers alike—continue to co-opt packet-based data functionality, intent on extending SONET/SDH competencies beyond their purpose-built heritage and toward new-era, multiservice modularity. These efforts lead to new integrated features that give SONET/SDH the aptitude to transition to next-generation SONET/SDH services. Virtual concatenation (VCAT), generic framing procedure (GFP), and Link Capacity Adjustment Scheme (LCAS) deliver increased packet protocol efficiencies, service aggregation, and on-the-fly provisioning for SONET/SDH network designs. RPR is another feature that optimizes SONET/SDH fiber rings for even greater packet bandwidth capacities and efficiencies.

The customer demand for service variety has brought a convergence of all of these technologies to the metro edge in the form of next-generation metro edge platforms known as multiservice provisioning platforms (MSPPs). Smaller, denser, smarter, and more service-prolific MSPPs breathe new life into SONET/SDH through the support of VCAT, GFP, and LCAS, while incorporating IP, Ethernet, storage and mainframe protocols, and new interface proficiencies. Metro SONET/SDH is now more service applicable than ever before.
Virtual Concatenation (VCAT)

First introduced in Chapter 5, "Optical Networking Technologies," VCAT increases the efficiency of packaging Ethernet, storage, and other framing protocols into metropolitan SONET/SDH OC-n/STM-n facilities. The widely variable byte lengths of packetized data don't concatenate well within SONET/SDH's 64 Kbps TDM-bounded architecture. As packet data is provisioned within SONET/SDH circuits, gaps/fragments occur, much like the fragmentation of a personal computer's disk that results from writing new files and deleting old ones. VCAT intelligence virtually "stitches" together multiple gaps/fragments in which to place a data flow, increasing the overall packing efficiency of SONET/SDH bandwidth. This efficiency is essential in high-growth metropolitan networks where these data interfaces originate, and where the continuous provisioning and deprovisioning of customer circuits can leave bandwidth "stranded." Virtual concatenation scales in 50 Mbps increments and drastically increases the utilization efficiency for Gigabit Ethernet, IBM Channel, and SAN protocols when transported over SONET or SDH. This allows for the support of more customers within a metropolitan SONET/SDH network. Less overhead is expended on unused bandwidth, which means more revenue per bit of bandwidth capacity.
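The efficiency gain is easy to see with rough numbers. In the Python sketch below, the payload figures are approximations, and STS-1-21v and STS-3c-7v are typical virtual concatenation sizings for Gigabit Ethernet rather than the only options.

# Approximate SONET payload capacities in Mbps; exact figures depend on which
# overhead bytes are counted, so treat these as illustrative.
STS1_PAYLOAD = 49.5
STS3C_PAYLOAD = 149.76
STS48C_PAYLOAD = 48 * STS1_PAYLOAD

def efficiency(service_mbps, allocated_mbps):
    return 100.0 * service_mbps / allocated_mbps

gbe = 1000.0  # Gigabit Ethernet client signal

# Without VCAT, the next contiguous container that fits GbE is an STS-48c.
print(f"GbE in STS-48c:   {efficiency(gbe, STS48C_PAYLOAD):.0f}% of the allocation used")

# With VCAT, the virtual group can be sized close to the client rate.
print(f"GbE in STS-1-21v: {efficiency(gbe, 21 * STS1_PAYLOAD):.0f}% used")
print(f"GbE in STS-3c-7v: {efficiency(gbe, 7 * STS3C_PAYLOAD):.0f}% used")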
Generic Framing Procedure (GFP)

Metropolitan networks must support a large variety of data protocols in addition to TDM-based voice—data protocols that are variable (asynchronous) in length and are often byte oriented in addition to bit oriented. GFP is a standard data encapsulation technique that adapts asynchronous, bursty data traffic with variable frame lengths prior to transport over a synchronous SONET/SDH facility. Many different client-side protocols can be mapped into a GFP frame, creating a mechanism for aggregating different data services together with a GFP header prior to SONET/SDH transport. GFP can encapsulate Internet Protocol/Point-to-Point Protocol (IP/PPP), Ethernet, Enterprise Systems Connection (ESCON), Fiber Connection (FICON), and Fibre Channel, transporting any of these over SONET/SDH Layer 1 networks, lending credence to the term generic framing procedure. (See the later section "Metro Storage Networking" for more detail on ESCON, FICON, and Fibre Channel.) Before GFP, protocols were limited to some subset of available bandwidth within a large SONET/SDH facility. With GFP, all protocols can now take advantage of unused bandwidth. GFP is a complete mapping protocol that efficiently maps byte-oriented data traffic as well as storage protocols, which prefer transmission in block data mode. GFP is more strategic than PoS, because GFP can be used with non-SONET/SDH infrastructures—for example, mapping GFP frames directly onto an optical infrastructure. Deploying GFP allows a metropolitan provider to further leverage existing infrastructure bandwidth and gain more value from network investments.
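The encapsulation cost of GFP itself is modest. The sketch below assumes the minimum frame-mapped GFP overhead of a 4-byte core header plus a 4-byte payload header, and ignores the optional extension header and payload FCS.

CORE_HEADER = 4      # PLI + cHEC (assumed minimum)
PAYLOAD_HEADER = 4   # type + tHEC (assumed minimum)

def gfp_overhead_pct(client_frame_bytes):
    total = client_frame_bytes + CORE_HEADER + PAYLOAD_HEADER
    return 100.0 * (total - client_frame_bytes) / total

for size in (64, 512, 1518):  # typical Ethernet frame sizes
    print(f"{size:5d}-byte Ethernet frame -> ~{gfp_overhead_pct(size):.1f}% GFP overhead")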
Link Capacity Adjustment Scheme (LCAS)

LCAS is an operational feature for enhancing customer bandwidth availability in metropolitan SONET/SDH networks. It allows providers to dynamically change bandwidth capacity within a SONET/SDH transport without creating outages. LCAS provides a mechanism for automatic bandwidth reprovisioning to increase or decrease capacity on a virtual concatenation group basis. This is beneficial for making TDM adjustments "hitless." Additionally, LCAS can enhance the flexibility of virtual concatenation. This adds up to more rapid provisioning, avoiding delays caused by having to schedule customer bandwidth changes during provider maintenance windows.

These are but some of the features allowing next-generation SONET/SDH networks to better manage overall bandwidth, especially packet data, and support traffic such as Ethernet over SONET/SDH with more granularity. Figure 6-10 shows the difference between traditional SONET implementations and next-generation SONET implementations using the MSPP platform. The figure shows how the use of next-generation, SONET/SDH-based MSPPs can reduce the number of customer aggregation devices needed for interfacing TDM, Ethernet, ATM, and video, using features such as GFP, VCAT, and LCAS to efficiently aggregate and present voice, data, and video to the business and metropolitan edge
ring. Furthermore, the use of MSPPs integrates DACS functionality, allowing the reduction of four discrete equipment footprints to one MSPP footprint at the edge of the metropolitan backbone ring.

Figure 6-10 Comparison of Traditional SONET Implementation with Next-Generation SONET Implementation (four discrete devices consolidate into one MSPP; Source: Cisco Systems, Inc.)
Figure 6-11 shows a topological view of the next-generation, baseline SONET/SDH metro system using the MSPP platforms. Notice that this baseline system, by virtue of the MSPP design, incorporates SONET/SDH ADM and wideband and broadband digital cross-connect functions within the MSPP chassis. The MSPP chassis is directly connected to the SONET/SDH fiber ring without any legacy ADMs or DACS needed. The enterprise locations are connected using traditional DS-n or OC-n TDM-based services.
Figure 6-11 Cisco MSPP-Based Metro SONET/SDH (Source: Cisco Systems, Inc.)
Moving Packets over Metro SONET/SDH

The ability to blend next-generation SONET/SDH features with technology innovations creates new multiservice capabilities for metropolitan SONET/SDH networks. As a result, providers can offer many services in support of customer requirements for voice, video, data, storage, and distributed processing. A service summary for metro SONET/SDH may include the following:

• Dedicated SONET/SDH services such as TDM voice, video, and private lines
• Packet over SONET/SDH services
• Point-to-point Ethernet over SONET/SDH services
• Multipoint Ethernet over SONET/SDH services
• Storage over SONET/SDH services
These multiservice-over-SONET/SDH capabilities extend the support of packet-based traffic and enhance the value of metropolitan SONET/SDH networks to positively affect top-line revenue, customer retention, and new customer acquisition. Next-generation SONET/SDH networks are transitioning to new-era multiservice, modular networks that are better optimized for circuit and packet delivery. Customers gain access to more advanced Ethernet, IP, and storage networking options, easily distributed to match the customer's geographic business plans. For providers, accelerated services growth and distinctive customer value are expected results.

Metropolitan SONET/SDH equipment was designed to efficiently and reliably transport and bundle voice circuits from the customer premises to (and beyond) the nearest exchange; however, it was not intended to support the enormously growing demand for IP bandwidth. Because data is not aligned on 64 Kbps boundaries as voice signals are, a new framing interface is needed at the data link layer to take IP packets and map them efficiently into SONET/SDH payloads. Developed prior to GFP, PoS is the first adaptation of SONET/SDH to transport packet-based data services. PoS employs a couple of standard techniques to provide more efficient transport of packet data over SONET/SDH. These were necessary to accommodate, or "map," the asynchronous, byte-oriented, self-clocking datastream of packet data within the bit-oriented, centrally clocked, time-domain networking slots of SONET/SDH. A larger maximum transmission unit (MTU) provides less overhead and efficient packaging into a SONET payload.

Next-generation MSPP platforms support Ethernet line cards, introducing the ability to deliver Ethernet interfaces to metropolitan customers. Mapping the customer Ethernet data packets over SONET/SDH is one Ethernet transport option. When Ethernet frames are mapped to SONET/SDH circuits, point-to-point Ethernet Private Line (EPL) services are produced that are still considered TDM services. This competes with T1 and T3 offerings as yet another customer access option. While this is a good strategy to grow customer bandwidth, it has limits on margins and is primarily valued as a flat-rate, distance-oriented service. The Cisco E, CE, and G series Ethernet line cards for the Cisco MSPP platforms are the technology that enables EPL services. Figure 6-12 shows an example of EPL services over a SONET/SDH infrastructure. An EPL is shown provisioned over a Cisco MSPP between two customer sites, and another example is shown using EPL connections to a service POP's multilayer switch, which would normally connect upstream to the Internet or other POP-based services.
Figure 6-12 Ethernet Private Line over SONET/SDH Example (point-to-point EPL between customer sites and from customer sites to a POP, at 100 Mbps or 1 GbE line rates; Source: Cisco Systems, Inc.)
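The MTU point made earlier in this section can be put in rough numbers. The sketch below assumes roughly 9 bytes of PPP/HDLC-style framing per packet and ignores byte stuffing and the SONET overhead itself; 4470 bytes is a commonly used PoS interface MTU.

POS_PER_PACKET_OVERHEAD = 9  # flag, address, control, protocol, 32-bit FCS (assumed)

def pos_efficiency(mtu_bytes):
    return 100.0 * mtu_bytes / (mtu_bytes + POS_PER_PACKET_OVERHEAD)

for mtu in (1500, 4470, 9180):
    print(f"MTU {mtu:5d} bytes -> ~{pos_efficiency(mtu):.2f}% framing efficiency")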
The E, CE, and G series Ethernet cards use SONET STS bandwidth scaling to create EPLs at various line rates. By adding statistical multiplexing to Ethernet technology, Ethernet services can provide more granular increments of bandwidth at both guaranteed and peak rates, allowing oversubscription for higher utilization of facilities. Support for GFP, VCAT (both high and low order), and LCAS is integrated into the CE series cards, which have 800 Mbps of switching access to the TDM backplane. The opportunity to create measured-rate services is available for both point-to-point Ethernet and multipoint Ethernet, including the ability to create multipoint TLS. This implies similarities to the Frame Relay business model as applied to Ethernet. These capabilities were introduced with the multilayer (ML) series Ethernet cards for the Cisco MSPP platforms.

In addition to statistical multiplexing benefits, the multilayer personality of the ML series Ethernet card brings IP packet processing functionality to the MSPP Ethernet toolset, to the tune of 5.7 Mpps of Layer 2 or Layer 3 switching performance. Multipoint Ethernet topologies are now supported, allowing for the creation of Ethernet Multipoint Services (EMS) at the MSPP edge. Oversubscription can be applied in a hierarchy, saving aggregate bandwidth at interconnection points in service POP switches. By sharing a common software code base with Cisco's enterprise routers, the ML series Ethernet cards inherit the same Layer 3 QoS mechanisms and IP services. The QoS mechanisms allow for differentiated services as opposed to basic, commoditized bandwidth transport. These services can be stratified and further margined for high-availability SLAs because they inherit the protection capabilities of the SONET/SDH layer below. The Cisco 3550-like capability of the ML series Ethernet card creates opportunities for Ethernet-switched services based on VLANs. Support for 802.1Q-in-802.1Q (Q-in-Q) is present, allowing providers to separate customer traffic with no impact on, or coordination of, customer VLAN assignments. The ML series interoperates with the G series cards and supports encapsulation schemes including PoS. Considered together, these Layer 3 features allow a provider to create a multipoint, multi-QoS Ethernet service deliverable over a SONET/SDH metro infrastructure. Figure 6-13 shows an example of Ethernet TLS.

Enabling SAN applications over metropolitan SONET/SDH networks requires the ability to transmit and receive storage protocols on the client-side interface, and then package the storage data into SONET payloads for transport. Very low latency is desired so that storage replications and storage accesses don't noticeably impede application-processing performance. These customer requirements can be met in MSPP platforms by including SL series cards to deliver storage application support.
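To make the oversubscription arithmetic from the ML series discussion above concrete, the sketch below multiplexes a handful of hypothetical customers, each with a committed and a peak rate, toward a single uplink; the rates and the OC-12-sized uplink are illustrative only.

customers = [
    # (committed_mbps, peak_mbps), hypothetical values
    (10, 100),
    (10, 100),
    (50, 200),
    (20, 100),
    (20, 200),
]
uplink_mbps = 622  # roughly an OC-12 worth of packet bandwidth toward the POP

committed = sum(c for c, _ in customers)
peak = sum(p for _, p in customers)

print(f"Sum of committed rates: {committed} Mbps (fits uplink: {committed <= uplink_mbps})")
print(f"Sum of peak rates:      {peak} Mbps")
print(f"Oversubscription ratio: {peak / uplink_mbps:.2f}:1")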
Figure 6-13 Ethernet TLS over SONET/SDH (Ethernet multipoint for Transparent LAN Services using provider Q-in-Q and ML series cards; Source: Cisco Systems, Inc.)
The SL series cards provide Fibre Channel interface connections to the client-access side of an MSPP platform. These are multirate cards that support Fibre Channel speeds of 1.0625 Gbps and 2.125 Gbps, usually referred to as 1 Gbps and 2 Gbps Fibre Channel. This technology can use the VCAT, LCAS, and GFP features of next-generation SONET/SDH. The GFP-transparent (GFP-T) mode header is used because GFP-T best supports the block mode data transmission style common to storage protocols. On the client side, GBIC technology is included to easily configure ports for various client distances from the MSPP. On the SONET/SDH network side, the GFP-T–wrapped storage data is mapped into SONET/SDH payloads for delivery across a metro or long-haul SONET/SDH network.
Many automated features are inherent in these cards to maintain reliability during SONET/SDH switchovers and failures and to insulate the customer's Fibre Channel switches from Layer 1 protection events. An extended distance mode allows for SAN extension of up to 1150 km (2 Gbps) or 2300 km (1 Gbps). In addition to SONET/SDH transport, the SL series cards accommodate Fibre Channel over dark fiber, wavelengths, and dedicated or switched Gigabit Ethernet using Fibre Channel over IP. Altogether, these capabilities allow the MSPP to provide storage over SONET/SDH applications and do so with reliable, low-latency transport. Figure 6-14 shows the concept of storage transport over SONET/SDH using an MSPP design.

Figure 6-14 Storage Fibre Channel over SONET/SDH (Source: Cisco Systems, Inc.)
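Distance extension interacts with Fibre Channel flow control, so extended-distance designs are usually sanity-checked with a buffer-to-buffer credit estimate. The sketch below uses common planning assumptions, about 5 microseconds of propagation per kilometer of fiber and full-size 2148-byte frames; it is not an SL series specification.

import math

def bb_credits(distance_km, throughput_mbytes_per_sec, frame_bytes=2148):
    frame_time_us = frame_bytes / throughput_mbytes_per_sec  # time to serialize one frame
    round_trip_us = 2 * distance_km * 5.0                    # ~5 us per km, each direction
    return math.ceil(round_trip_us / frame_time_us)

for dist in (50, 100, 200):
    print(f"{dist:4d} km at 1 Gbps FC (100 MB/s): ~{bb_credits(dist, 100)} credits")
    print(f"{dist:4d} km at 2 Gbps FC (200 MB/s): ~{bb_credits(dist, 200)} credits")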
Metro IP

Metro IP provides IP and Ethernet services over ring-based optical transport systems. Metro IP platforms use packet ring technologies such as RPR and DPT, and IP technologies such as IP/MPLS. With the emergence of packet-ring technologies such as RPR and DPT, the best efficiencies of packet data transport can be reached for ring-based fiber topologies.
Metro IP is not a product name but rather Cisco’s category of solutions that apply to IP-based metro networking. The focus is on industry-standard, efficient provider networking options that fully leverage the intelligence, scalability, and convergence power of IP. Chapter 5 introduced RPR and DPT, and Chapter 3 introduced IP/MPLS. This section is intended to present the benefits of using the metro IP solutions for new-era metropolitan optical networks.
Resilient Packet Ring (RPR): Packet Power for the Metro

For ring-based architectures supporting packet communications, the optical transport technology of choice is the RPR/IEEE 802.17 standard. RPR is a resilient, ring-based technology optimized for packet-based traffic. RPR is an important consideration for providers and operators that have fiber-based, ring-oriented metro infrastructures and want to increase bandwidth efficiencies when transporting packet-based traffic. RPR can be used over optical rings without any underlying SONET/SDH framing, as is the case when no TDM traffic is required, or RPR can be layered over existing SONET/SDH fiber rings to optimize the packet data portion of the traffic while SONET/SDH maintains the TDM transport. For example, an ILEC provider with a TDM voice market would have an existing SONET/SDH ring for TDM transport purposes but could also use the RPR solution for optimizing an increasingly offered load of packet data. A cable operator with no TDM voice requirements would use RPR over optical to gain better data traffic efficiencies on the metro fiber infrastructure for Internet, video on demand, and packet-based IP voice offerings.

Since RPR is a ring-based technology, it applies to any metro tier where rings are found—primarily the metro access, metro edge, metro core, and metro regional tiers. Also, when fiber-ring interfaces are used between service POPs and the metro core, RPR is an excellent option for streamlining packet traffic in that tier. RPR is optimized for transporting TCP/IP traffic and combines the intelligence of IP with the bandwidth efficiencies and protection capabilities of optical rings. Dual counterrotating rings are used, on which both control and data packets flow. Because RPR is based on SONET BLSR mechanisms, uses framing similar to IEEE 802.3 Ethernet, and employs statistical multiplexing, there is no need to dedicate one half of the ring bandwidth to protection traffic. This pushes the achievable bandwidth toward nearly twice the line rate. The technology provides native support for dual homing, self-healing, and load balancing. This scalability and reliability is important for providers and operators who need to maximize their high-value fiber infrastructure. RPR provides many benefits to metropolitan optical networks, such as:
• Bandwidth efficiency through spatial reuse, destination-based stripping, byte-oriented statistical multiplexing, and oversubscription
• Plug-and-play operations via topology autodiscovery, and high resiliency via a self-healing ring architecture with sub-50 ms protection switching
• Infrastructure transparency through RPR over SONET/SDH, RPR over dark fiber, and RPR over xWDM
• IP service enablers such as packet priority for Layer 3 IP Type of Service (ToS) cooperation, point-to-multipoint data, IP multicasting, and efficient broadcast mechanisms
Bandwidth Efficiency

To achieve bandwidth scalability, an RPR source station sending data in the clockwise direction sends that data's control packets in the counterclockwise direction. This notifies all the stations along the ring, keeping them informed of current bandwidth session spans and bit-rate allocations. This bandwidth knowledge, part of the 802.17 protocol operation, is an RPR mechanism that communicates bandwidth usage and availability on interstation spans, helping to achieve maximum reuse of bandwidth on a ring topology. Destination-based stripping frees up ring bandwidth. RPR uses byte-oriented statistical multiplexing, and as a result, oversubscription can be used to get the most bandwidth utilization possible while still supporting guaranteed bandwidth service levels and traffic priorities.
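The effect of spatial reuse can be seen with a toy model. In the sketch below, the stations, flows, and rates are hypothetical; the point is that flows confined to different spans are carried concurrently, so the aggregate traffic carried by one ringlet exceeds its line rate without any individual span being oversubscribed.

stations = 6
line_rate = 2.5  # Gbps per ringlet (OC-48)

# (source, destination, gbps) flows, all sent clockwise on this ringlet
flows = [(0, 1, 1.5), (1, 2, 1.5), (2, 4, 1.0), (4, 0, 2.0)]

span_load = [0.0] * stations  # span i connects station i to station i+1
for src, dst, gbps in flows:
    hop = src
    while hop != dst:
        span_load[hop] += gbps
        hop = (hop + 1) % stations

print("Per-span load (Gbps):", span_load)
print("Aggregate carried traffic (Gbps):", sum(g for _, _, g in flows))
print("Any span over the line rate?", any(load > line_rate for load in span_load))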
Auto-Topology Discovery and High Resiliency

The 802.17 protocol sends topology discovery packets, supporting autodiscovery to build and maintain a topology map of the ring. Very much like a Layer 3 routing protocol, 802.17 uses the least number of hops to get a packet between a source station and the destination station. In sharp contrast to SONET/SDH rings, RPR supports up to 128 stations on a ring, although practical designs may use fewer (perhaps 30 to 40). This better leverages metropolitan, high-value fiber infrastructure while expanding geographic coverage for sprawling metros. RPR recognizes failed fiber spans or RPR stations and steers traffic to surviving ring paths, all within 50 ms protection targets.
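Selecting the fewer-hop direction from the topology map is simple arithmetic, sketched below for a hypothetical station numbering.

def choose_ringlet(src, dst, stations):
    """Return (direction, hops) for the fewer-hop path around the ring."""
    clockwise = (dst - src) % stations
    counterclockwise = (src - dst) % stations
    if clockwise <= counterclockwise:
        return "clockwise", clockwise
    return "counterclockwise", counterclockwise

# On a 40-station ring, traffic from station 3 to station 37:
print(choose_ringlet(3, 37, 40))  # ('counterclockwise', 6)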
Infrastructure Transparency

RPR's support for dark fiber, xWDM, or SONET/SDH allows RPR to exist in a hybrid infrastructure environment. Some RPR stations can be connected via dark fiber along the ring, while others participate in packet traffic handling on the SONET/SDH fiber rings. This capability sets up a new ring for new packet transport services, including IP voice on the dark fiber path, while optimizing packet efficiency on the SONET/SDH ring. As packet data services grow, packet data can be migrated from the SONET/SDH ring to the RPR-only ring. This creates an efficient RPR ring for packet data transport in parallel with TDM voice services on the SONET/SDH ring.
IP Service Enablers

RPR can prioritize packets using three traffic classes (High, Medium, Low) to distinguish packets carrying IP voice, video, and data, for example. RPR supports not only point-to-point data services but multipoint as well. With SONET/SDH, broadcast and multicast data is sent from one source to many destinations, so the packet must orbit the ring once for every destination node, consuming a lot of bandwidth to handle the multicast or broadcast processes. RPR natively supports multipoint, and as such, RPR is an exceptional industry standard with which to deliver metro Ethernet private ring services, multipoint information distribution via IP multicasting, and high-quality video broadcasting applications.

Used in service POP designs, RPR addresses the growing bandwidth needs for intra-POP, ISP exchange point, and server farm/storage applications. RPR's high resiliency, lower port count, efficient data transport, and support for traffic priorities can optimize these applications. Used in metro access rings, RPR is useful for reaching multitenant units such as high-rise buildings and office complexes, delivering Ethernet and IP over any existing facilities already positioned for TDM voice. Multitenant Internet access, IP VPNs, wholesale Ethernet, multitenant building Ethernet access, mission-critical TLS, storage mirroring, business continuance applications, and high-definition videocasting are a few of the new-era, packet-based metro services enabled with RPR. RPR can support such services and maximize revenue for each unit of bandwidth provisioned. As a ring-based architecture, high reliability is a given, supporting superior levels of service guarantees that add margin to product offerings.

Figure 6-15 shows the use of an RPR ring to get Internet data to an ISP POP. Figure 6-16 shows the use of RPR for a cable multiple systems operator (MSO) network. In this example, RPR is used for the cable MSO core network, connecting multiple distributed RPR rings back to the cable operator's headend functions. The RPR rings can use xWDM to scale capacity within the fiber portion of the infrastructure. These distributed RPR rings have cable modem termination systems, allowing the operator to provide standard television broadcasting, high-speed data and Internet, VoIP, video games, video on demand, and high-definition TV (HDTV). In addition, the small office/home office (SOHO) and small and medium business (SMB) markets can be reached with dedicated Internet services, video, and VoIP delivery.
Figure 6-15 ISP IntraPOP (Source: Cisco Systems, Inc.)

Figure 6-16 Cable MSO Using RPR over DWDM (Source: Cisco Systems, Inc.)
RPR helps ring-oriented providers make the switch from circuit to packet, from TDM voice to IP voice/video, and from TDM data to high-reliability Ethernet services for IP/VPN services and packet data, all optimized for scalable and survivable IP packet transport at gigabit-per-second speeds. Table 6-1 presents some of the service applicability of the RPR solution.

Table 6-1 RPR Applicability

Metro Core/Regional RPR Solution Examples
Topology support: Via dark fiber or WDM; via SONET/SDH; cable MSO; enterprise and campus networks
Service examples: High-availability packet services; hybrid fiber coaxial network IP aggregation; Transparent LAN Services; IP VPN over MPLS backbone; Ethernet deployment

Internet Service Provider RPR Solution Examples
Topology support: IntraPOP aggregation; exchange point; Internet data center; multitenant Internet access; business park Internet access; Ethernet access
Service examples: Simplified POP topology, reduced router ports; high-speed ISP peering, ISP hotels, distributed exchange point applications; server farm/storage for web hosting services, application services, storage services; high-density subscribers (enterprise, commercial, residential); high-density subscribers via hub and spoke; Ethernet access

Metro Edge/Access RPR Solution Examples
Topology support: Multitenant dwelling; multitenant building; business park access
Service examples: IP data, voice, and video; Ethernet access
Dynamic Packet Transport (DPT): The Cisco RPR Solution

Chapter 5 previously introduced the Cisco DPT architecture. The DPT complement to the RPR 802.17 protocol is called spatial reuse protocol (SRP). SRP supports a subset of the functions of 802.17, as SRP predates the 2004 standardization of the IEEE 802.17 protocol. A more accurate way to represent the terms is to compare RPR/802.17 (the IEEE standard) to DPT/SRP (the Cisco RPR solution). Since the standardization of RPR in June 2004, Cisco has designed new DPT interface cards to include the compliant RPR/802.17 protocol. The dual-mode RPR/DPT card functionality is available for multiple Cisco router platforms. Most of the commands for configuring the RPR options for DPT use the RPR-IEEE parameter, for example, interface rpr-ieee slot/port and show rpr-ieee topology. Table 6-2 shows a comparison of RPR/802.17 and DPT/SRP.

Table 6-2 Comparing RPR/802.17 and DPT/SRP

Feature                  RPR (2004)                                DPT/SRP (1999)
Owner                    IEEE 802.17 standard                      Cisco prestandard RFC 2892
Terminology              RPR stations, spatial bandwidth reuse     SRP nodes, spatial reuse protocol
Spatial reuse            Single/multichoke                         Single choke
Addressing               Unicast, multicast, simple broadcast      Unicast, multicast, simple broadcast
Packet priority classes  Class A (low jitter), Class B (bounded jitter), Class C (best effort)     High and low
Protection switching     Steering                                  Wrapping and steering
Fairness granularity     Four types                                One type
Topology discovery       Multicast                                 Unicast
Cisco command syntax     show rpr-ieee topology                    show srp topology
The newer dual-mode card is referred to as the dual-mode RPR/SRP module. In addition, these new cards use interface ports supporting SFP optics. An east-facing interface could have a different optics requirement (for example, intermediate range) than the west-facing interface (for example, long range), optimizing optics expense depending on the provider's geography. Like RPR, DPT rings use MAC layer addressing; each DPT node has a Layer 2 MAC address. IP over Ethernet expects to use the Address Resolution Protocol (ARP). When Ethernet data travels the DPT ring, the standard ARP function is at work, though ARP has been augmented to work within the DPT ring topology.

DPT running in SRP mode has a few differences from RPR standard-compliant operation. DPT supports two priority classes of traffic, known as high priority and low priority. Control packets are originated by DPT interfaces, and these are always sent with the high-priority marking. DPT primarily transports upper-layer protocols, so QoS markings such as IP precedence or DiffServ are present in the IP headers of the Layer 3 packets that enter the DPT ring. The high-priority queue works as a strict priority mechanism, exhibiting low jitter and delay in support of packet-based voice and video. DPT's SRP doesn't need to make decisions on traffic priority and will map the Layer 3 QoS priority into the SRP priority field. The capability to modify the automatic priority mapping is also supported. Much too complex to document here, SRP uses a sophisticated fairness algorithm to properly adapt priority traffic to the topology of the DPT ring.

DPT also offers sub-50 ms protection features like those of SONET, but without the liability of the bandwidth reserve. Called Intelligent Protection Switching (IPS) in DPT terms, each node issues a keep-alive packet every 106 microseconds. Lost keep-alives can trigger an automatic failover process that wraps the ring and steers traffic around a fiber cut (signal fail), a signal degrade, or a DPT node failure. Unlike RPR, DPT can perform both a wrap function and a steering function. With DPT, a wrap process changes the topology tables so that after the wrap, each node's ARP cache is flushed and rebuilt through topology discovery. This has the benefit of optimizing traffic by steering it to the best paths. If a DPT interface is configured in RPR-IEEE-compliant mode, a fiber or node failure uses the RPR steering function only in the ring protection switching process.

Automatic topology discovery in DPT is based on a unicast mechanism that sends a packet around the ring to each DPT node. Each DPT node performs this process to build and maintain its own topology table, by default every five seconds. When a node initiates its topology discovery process, each neighboring node appends its specific information and sends the packet to the next neighbor along the ring. After the packet makes its rounds, the topology discovery packet originator receives back its originated packet with all of the nodal information needed to initially build or to update its topology map. To guard against transient events that might affect the topology's validity, two identical topology packets must be received before a node's topology table is modified.

Another application for packet ring technology is the migration of FDDI backbones, common in many enterprise and campus settings. Providers or customers may choose to
replace FDDI backbones with DPT rings to increase bandwidth from the FDDI maximum of 100 Mbps to a DPT/RPR rate of 1.2 Gbps, 2.5 Gbps, or 10 Gbps. Remember that these speeds are the line rates of each individual ring, that both the inner and outer ring carry traffic, and that DPT/RPR-inherent features push data utilization toward a doubling of the line rate. For example, up to 20 Gbps of capacity is approachable on a DPT/RPR ring with OC-192c node interfaces. This is because of byte-oriented statistical multiplexing, knowledge of bandwidth usage and fairness control, destination-based packet stripping, oversubscription, concurrent transmission by multiple nodes, and a nonreserved bandwidth protection model. DPT interface speeds are available at up to OC-192c (10 Gbps) rates and will keep pace with optical technology progress. DPT supports the bandwidth scalability and efficiency mechanisms, protection capabilities, and topology flexibility that RPR supports. Like RPR, DPT can scale up to 128 nodes over a distributed metropolitan area.

With a five-year head start, there are hundreds of networks using the Cisco DPT/SRP solution, and these aren't outdated as a result of the RPR standard. The DPT/SRP specification is published as an informational RFC, and other vendors can design DPT/SRP interfaces for their products that will interoperate with the Cisco prestandard. The RPR/802.17 IEEE standard has the potential to be adopted by multiple product manufacturers, and this can provide more capabilities for operating multivendor packet ring architectures. The same service applicability examples shown in Table 6-1 apply to both DPT and RPR. Each incorporates the resiliency of a ring topology with the intelligence and bandwidth efficiencies of IP packet transport. Functionally, either DPT or RPR will deliver service providers and operators the scalability, efficiency, resiliency, and service creation flexibility of packet ring technology.
IP/MPLS in the Metro

For service providers, the migration of legacy infrastructures and services based on TDM, Frame Relay, and ATM technologies onto a more flexible, efficient IP/MPLS common core packet infrastructure is one of the key drivers in building next-generation metropolitan networks. With traditional voice revenues in decline and data services commoditizing due to service substitutions and flat-rate, bandwidth-based revenue models, the call to action is clear: you must invest strategic CapEx dollars into a converged network infrastructure that will enable significant improvement in OpEx efficiencies. The administration, management, and billing systems of multiple overlay networks present a serious drag on the operational budgets of providers, in addition to impacting the pace of service delivery. Simplifying the network core using highly available IP/MPLS routing systems eliminates costly redundancy and the overly complex tiered architectures required by Layer 2 designs.

MPLS makes an excellent technology bridge. By dropping MPLS capability into the core layer of the metropolitan network, you can reduce the complexity of Layer 2 redundancy
design while adding new Layer 3 service opportunity. You can interface multiple technologies across the MPLS core using traffic engineering or Layer 3 VPN capabilities. IP/MPLS capability can be combined with ATM, letting ATM become Layer 3 aware to simplify provisioning and management. IP/MPLS can be layered on new RPR or DPT ring infrastructures. The benefits of MPLS can be pushed closer to the edge of the network to facilitate new IP-based services. MPLS provides an excellent migration path into next-generation metropolitan provider services—services such as IP/VPNs and metropolitan Ethernet services for businesses.
Metro DWDM

Metro users are subscribing to broadband, seeking direct Ethernet speed between their mainframes, servers, PCs, TVs, and their metro data targets. This puts pressure on metro service providers to scale, expand capacity, and better manage their fiber assets. DWDM is one option to address the anticipated scarcity of fiber in the metro area. Metro DWDM will experience faster growth than long-haul DWDM over the next few years, driven primarily by Ethernet and storage area networking applications.

The ensuing bandwidth surge is more accelerated than ever before, so multiyear design/platform decisions carry more risk. A successful metro network today could become a regional network in one year, or perhaps one end of a long-haul network in three. Mergers and acquisitions further exacerbate this uncertainty. Flexibility is key, and metro DWDM provides that flexibility. A trend by manufacturers toward function integration and modularity within metro DWDM systems further helps to mitigate risk, enabling a pay-as-you-grow strategy that bodes well for total cost of ownership. As an example, for metro DWDM applications, 100 GHz, 32-channel systems are becoming prominent. Some of these systems contain components that are rated for 50 GHz channel spacing, anticipating an incremental upgrade to 64 channels when needed.

In this section, you'll examine metro DWDM and CWDM as they apply to business drivers and requirements, technology, design considerations, and service orientation.
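The channel-count arithmetic behind such systems follows the 100 GHz ITU grid. In the sketch below, the 193.1 THz anchor and the particular 32-channel block are illustrative assumptions; real systems document their exact channel plans.

C = 299_792_458  # speed of light in m/s

def channel_plan(channels=32, spacing_thz=0.1, anchor_thz=193.1):
    """Return (frequency_THz, wavelength_nm) pairs for a DWDM comb."""
    freqs = [anchor_thz + n * spacing_thz for n in range(channels)]
    return [(f, C / (f * 1e12) * 1e9) for f in freqs]

plan = channel_plan()
print(f"Channel  1: {plan[0][0]:.2f} THz (~{plan[0][1]:.1f} nm)")
print(f"Channel 32: {plan[-1][0]:.2f} THz (~{plan[-1][1]:.1f} nm)")

# Interleaving a second comb offset by 50 GHz doubles the count to 64
# channels over roughly the same band, the upgrade path noted above.
base = [f for f, _ in plan]
interleaved = sorted(base + [f + 0.05 for f in base])
print(f"After 50 GHz interleave: {len(interleaved)} channels")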
Drivers for Metro DWDM

The principal drivers for metropolitan DWDM systems are relatively similar to established metropolitan network requirements. As businesses and individuals succeed, they expand. More expansion yields more people and machines, creating more communication transactions for the metro. As both older generations and younger children embrace computing technology, mobile technology, and the storage hives of the Internet, concurrent voice, data, and video sessions are growing exponentially. Bytes per transaction continue ramping upward, having an 8-to-1 bit impact on serial transmission. Bandwidth is a metro provider's crop that must be seeded, fertilized, weeded, and grown. In the broadband generation, how much to plant and how fast it should grow are the anguishing questions.
Some of the considerations for metro DWDM include the following:
• Fiber capacity—Metropolitan fiber is high in value. Much fiber was installed in the 1980s and early 1990s that is optimized for transmission in the 1310 nm range. However, this fiber is likely a poor fit for DWDM, which prefers the 1550 through 1625 nm range. Continued urbanization and the bandwidth boom of the 1990s added supplementary fiber plant to most metro areas, much of it optimized for DWDM use. Maximizing metro fiber to increase capacity is a key driver for metro systems. Fiber relief remains a strong metro application that can be addressed by metro-optimized DWDM technology solutions.
• TDM efficiency—Many of the next-generation MSPPs include the ability to cross-connect TDM traffic, distributing TDM grooming functionality closer to the customer edge and reducing backhaul requirements through the metro core. TDM is very much a revenue generator in the metro and must be accommodated and enhanced in next-generation metro designs.
• Service diversity and granularity—Metro DWDM should provide several service options, at more granular bandwidths than allowed by the SONET and TDM architecture. Interfaces for Ethernet, storage, and digital video support are required. Multirate capabilities and service aggregation will allow for more distinctive services and more interface options. Customers are more likely to upgrade services quickly when they can do so in manageable budgetary increments.
• Wavelength services—Metro DWDM enables new managed services built around wavelengths. Wavelengths are dedicated services that are inherently secure, with bit-rate and protocol independence. Options to provide customers with DWDM CPE, service aggregation features, and protection options increase premiums and add stickiness. A wavelength service is a build once, sell many times bandwidth model.
• Faster provisioning—To support a customer bandwidth-on-demand model, systems should be quick to engineer and swift to provision. Reducing order-to-billing time banks revenue faster and enables new on-demand service offerings.
• Scalable bandwidth—With Ethernet, Fast Ethernet, and Gigabit Ethernet potentially everywhere, metro bandwidth must never run out of capacity. Bandwidth must scale in advance of cumulative saturation levels.
• Low-touch operational model—Metro DWDM needs to be simple to install and easy to manage. Integration with operational support systems (OSSs), remote control, and intelligent automation of critical parameters are essential to realize lower operational expenditures (OpEx) over the investment horizon.
Metro DWDM Technology

DWDM cut its teeth in the long-haul networking markets. Many vendors of long-haul DWDM systems attempted to "cram" that technology into the metropolitan network space, with limited if any success. That's because DWDM for the metro has quite a different personality. Metro DWDM is more aptly focused on service creation than on the capacity creation approach of long-haul DWDM solutions.

Until DWDM, a bit-rate increase was the only way to augment the capacity of a pair of optical fiber strands. A single wavelength at either 1310 or 1550 nm would be modulated so as to pack and carry as much information as possible between metropolitan nodes. Many of today's metro optical networks use SONET/SDH in that design capacity. DWDM allows for the multiplication of capacity by adding unique wavelengths within the same fiber pair. As a result, many optical signals can share the same fiber, boosting net channel capacity while depending less on complex bit-rate increases (see Figure 6-17). In this figure, two of the three fiber rings have been upgraded to support multiple DWDM wavelengths for traffic paths that require more capacity, such as data traffic from large businesses or traffic destined to and from a service POP. In this way, DWDM leverages fiber to the maximum. The proper selection of fiber for effective DWDM operation is important to current and future metro bandwidth scalability.

Figure 6-17 Increasing Metro Capacity with DWDM (Source: Cisco Systems, Inc.)
DWDM fundamentals are covered in Chapter 5. More specifics of long-haul DWDM are covered in Chapter 7, "Long-Haul Optical Networks." The following sections introduce some of the latest DWDM innovations that relate to metropolitan optical networks, specifically tunable DWDM components and reconfigurable optical add/drop multiplexers (ROADMs). Several DWDM innovations and ROADMs are then examined within the context of the Cisco ONS 15454 MSTP platform.
Tunable DWDM for the Metro

Metropolitan optical networks are built to connect a city's COs, service POPs, wiring centers, and hubbing points. That might represent dozens of locations. Every process concerning equipment and equipment maintenance, parts sparing, power and environmentals, floor and rack space, fiber routing, network management, and maintenance access is multiplied and magnified for optical networks that serve metropolitan communication needs. Adopting early-generation long-haul DWDM functionality into the metro ignores many of the key requirements of metro DWDM networks, such as feature integration and operational support system (OSS) interoperability, parts sparing costs, dynamic provisioning, and cost-per-subscriber ratios.

DWDM in the metro is much more dynamic. Network engineering, provisioning, and reprovisioning must be nimble and quick. Optical bandwidth optimization should maximize revenue from facilities. Inventory costs for parts sparing should not encumber network growth. Tunable DWDM components for the metro assist in overcoming these challenges. Tunable lasers speed provisioning, help to optimize bandwidth, and reduce sparing requirements. With tunable DWDM components for the bandwidth-driven metro, optical networks can achieve many of these goals.

Tunable lasers are here. In fact, lasers widely tunable over the complete C band and L band may redefine the all-optical metro network from fixed to flexible. Lasers for metro applications don't require the degree of accuracy or the distance endurance of long-haul network applications. With the relatively short distances between metro interoffice locations, short- and intermediate-range optics are usually adequate. In this space, Fabry-Perot lasers have often been used for 1310 nm SONET/SDH applications up to OC-12 rates. Uncooled and directly modulated lasers such as distributed feedback lasers and electro-absorption modulated lasers work well within these distance requirements at 2.5 Gbps/OC-48 and 10 Gbps/OC-192 bit rates. Vertical cavity surface-emitting lasers (VCSELs) are another laser type that is common in the metro area and cost effective at 2.5 Gbps/OC-48 rates. These various laser types are frequently used to create fixed 1310 nm, fixed 1550 nm, and fixed DWDM wavelength lasers.

Nonetheless, software-controlled, tunable lasers are the new direction of the optical DWDM industry—reducing the high cost of DWDM laser card spare inventory and creating flexibility in wavelength provisioning. Wavelength services are a new offering in many metropolitan optical networks, and DWDM-based tunable lasers enhance the
flexibility of the service. In the metropolitan space, Sampled Grating DBRs (SGDBRs) are finding popularity. SGDBRs are current-tuned lasers having waveguide cavities that pair with Bragg gratings. A current is applied across a front and a back mirror of the waveguide, which causes the mirror to reflect the laser output off a different portion of the Bragg-sampled gratings, ultimately yielding a change in wavelength. Characteristically, SGDBRs have a broader linewidth, which is adequate for most metro optical distances, and a wide tuning range that is very fast. Figure 6-18 shows a graphic of common metro lasers and their applicability. For more on tunable lasers, see the section "Tunable Optical Components" in Chapter 7.

Figure 6-18 Lasers and Applications (laser types plotted by bit rate versus optical transmission reach)
While tunability is an important feature for optical lasers, other system components require tunability to accomplish an end-to-end flexible design. Other desirably tunable components include optical filters, optical receivers, and optical wavelength monitors. Table 6-3 lists the typical application of tunable components within modern optical networks. Tunable components in optical networks enable bandwidth flexibility, rapid service provisioning, and lower operational expense. They can assist an optical network with dynamic wavelength provisioning, streamlining traffic patterns and reallocating wavelengths as bandwidth patterns change. Tunable lasers and receivers are also moving into line cards used for Layer 3 routers. This allows the extension of DWDM capabilities to the customer edge router without necessarily requiring fixed DWDM termination equipment at the customer premises. These capabilities are fundamental to providing on-demand services over metro optical networks. Tunable components are an inflection point for new-era metropolitan DWDM optical networks, leading to new innovations and new optical services.
Table 6-3 Applicability of Tunable Optical Components

Description                        Optical Applications
Tunable lasers                     Metro, long-haul, and ultra long-haul DWDM; optical add/drop multiplexing; metro core and regional networking; 2.5, 10, and 40 Gbps transmission rates
Tunable filters                    Tunable demux filter; tunable receiver; reconfigurable optical add/drop multiplexing (ROADM); amplified spontaneous emission (ASE) suppression; optical performance monitoring
Tunable receivers                  Tunable receiver; reconfigurable optical drop for broadcast-and-select applications
Tunable optical channel monitors   Laser gain tilt monitoring; optical channel power equalization; optical channel registration
Reconfigurable Optical Add/Drop Multiplexing (ROADM) for the Metro

Optical add/drop multiplexing (OADM) has been a must-have feature of metropolitan SONET/SDH networks for years. Metro DWDM networks also require OADM functionality. An optical add/drop multiplexer is like a highway intersection, providing an optical on-ramp/off-ramp from access-side fiber(s) onto the optical fiber core backbone. When customer communication is traveling a DWDM backbone, such as a metro ring, sooner or later the traffic needs to exit the backbone and be "dropped" to a smaller access ring, a service POP, or the client-side optical fiber connection. Any return traffic must be "added" back to the optical backbone path.

Prior to reconfigurable technology, fixed OADM implementations for DWDM were static designs, generally borrowed from the long-haul DWDM approach and built in modular chunks, often having to demultiplex a whole wavelength band to select the one, two, or four wavelengths to drop or add. The optical paths were predetermined, engineered, and operated based on forecasted traffic patterns. If those patterns changed, unused capacity would result, and the system would require reengineering to reallocate resources. The changing of add/drop filters and the retuning and balancing of composite optical power would be required, and customer availability would be impacted. Some providers would attempt to minimize these events but often leave bandwidth stranded in the process. A bandwidth-inefficient, high-risk, brute-force operational model is the result. Many metro providers chose to sit on the DWDM sidelines rather than embrace an intensive process
that, when magnified across many locations within a provider's geography, could stretch total cost of ownership beyond reasonability.

ROADMs now enable service provider networks to add, drop, or pass through any combination of available wavelengths via software control. Dispatching optical experts to reengineer nodes becomes a thing of the past. Along the path to ROADM technology, many other operational knobs and buttons, once separate, have been integrated. The modification of add light paths and pass-through light paths is now software selectable. ROADMs are able to remotely adapt and compensate for the wavelength optical power changes that result from a reconfiguration, due to the inclusion of per-channel optical equalization and optical power monitoring. Insertion losses have been reduced, and form factors have shrunk. ROADMs help eliminate banded amplification schemes and stranded bandwidth. A ROADM behaves, in effect, like a 32-channel 1x2 optical switch. ROADMs also improve channel control and incorporate gain flattening filters (GFFs). With ROADMs and tunable lasers, you can build a DWDM ring or long-haul network with an any-to-any client connectivity model. Thus, there are no more restrictions on running the IP protocol in an any-to-any fashion over a DWDM network.

Designs aside, ROADM functionality allows for adding and dropping (add/drop) of metro DWDM wavelengths without manually changing the physical fiber connections or manually rebalancing optical channel power. This is advantageous, as it is the only foreseeable way to significantly reduce optical provisioning times while mitigating risk to network availability. ROADM technology is useful in network applications requiring fast, remote provisioning, especially the delivery of metro wavelength services. With ROADMs, an add/drop provisioning activity that may have taken a day or two, including trucks and technicians, can now be performed remotely in minutes by network management software with SONET/SDH-like manageability. For metro providers, this is akin to moving from a continuous reengineering process to simple reprovisioning, creating provider responsiveness that is definitely marketable and sustainable as traffic grows.

32-channel ROADMs are available on two line cards, as used within the Cisco ONS 15454 platform. The use of four cards provides both east and west channelized ROADM functionality in one platform. The benefit is a single, 32-channel DWDM multiplexed fiber connecting east and west. More information on ROADMs can be found in Chapter 7. Tunable lasers are also used in the ONS 15454 MSTP, more specifically, tunable lithium niobate externally modulated lasers. Both 2.5 and 10 Gbps tunable laser assemblies are available.
Metro DWDM Design Considerations

Metro DWDM design differs from the traditional long-haul DWDM blueprint. The sheer number of metropolitan networks vastly exceeds the number of long-haul networks. With all of the opportunity in the metro space, innovation and competition will drive metro DWDM to better price/performance metrics in both capital acquisition and operational areas. Metro DWDM differs from long haul in the areas of topology support, fiber infrastructure, amplification/regeneration, raw per-channel capacity, and channel count, as described in the following sections.
Topologies

Metro networks most often use a ring-based deployment model and can reach very large circumferences, all optical, using the proper components and fiber infrastructure. Both two- and four-fiber SONET/SDH ring topologies are used, with preferences for four-fiber BLSR/MS-SPR rings in metro cores, and two- or four-fiber UPSR/SNCP rings in the metro edge and metro access tiers. Linear point-to-point configurations are supported and might be used to connect a DWDM metro edge ring with a large-bandwidth customer requiring a DWDM or CWDM lateral. Mesh networks are also under consideration by providers as a way to add capacity in high-traffic-demand portions of the metro grid.
Fiber Infrastructure

There is often a mixture of fiber ages and types in the metro. Some metro fiber cables could be approaching 20 years of use. Much of the older metro fiber is optimized for transmission at 1310 nm (the O band) and may be unsuitable and inefficient for DWDM in the 1550 nm C band. Newer fiber may be usable in the C band but have dispersion or nonlinear characteristics that limit the achievable bit rate or the number of concurrent DWDM channels.

The most prominent fiber type within metros falls within the ITU-T G.652 specification and supports metro distances rather well. This fiber is colloquially known as SMF-28, although not all G.652-compliant fiber is SMF-28. G.652-compliant fiber supports metro DWDM very well at low-density DWDM interchannel spacing (>=100 GHz). To support high-density DWDM at 50 GHz interchannel spacing or better, G.655-compliant fiber should be considered. The G.655 specification describes fiber that is termed nonzero dispersion-shifted fiber (NZ/DSF) by optical fiber manufacturers. With metro DWDM sensitive to significant CapEx increases, lower-cost directly modulated laser sources are preferred. The use of negative-dispersion NZ/DSF is often recommended to balance the positive chirp characteristics of these less-expensive, directly modulated laser sources.
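The spacing figures above can be converted between wavelength and frequency with simple arithmetic; the following Python sketch (function names are illustrative, not from any design tool) shows the relationship around the 1550 nm region:

# Convert DWDM channel spacing between frequency (GHz) and wavelength (nm)
# around a 1550 nm center wavelength: delta_f = c * delta_lambda / lambda^2.

C = 299_792_458.0  # speed of light, m/s

def spacing_nm_to_ghz(delta_nm, center_nm=1550.0):
    """Wavelength spacing (nm) to frequency spacing (GHz)."""
    return C * (delta_nm * 1e-9) / (center_nm * 1e-9) ** 2 / 1e9

def spacing_ghz_to_nm(delta_ghz, center_nm=1550.0):
    """Frequency spacing (GHz) to wavelength spacing (nm)."""
    return (delta_ghz * 1e9) * (center_nm * 1e-9) ** 2 / C * 1e9

print(spacing_ghz_to_nm(100))   # ~0.8 nm  (low-density DWDM)
print(spacing_ghz_to_nm(50))    # ~0.4 nm  (high-density DWDM)
print(spacing_nm_to_ghz(20))    # ~2500 GHz (the 20 nm CWDM grid)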
Larger metros may need to scale to higher bit-rate channels over longer distances, putting the available fiber infrastructure under the microscope to determine its performance qualifications. Preinstallation fiber testing and analysis is a critical design step for upgrading or deploying metro DWDM networks. Often, results from fiber analysis activities are required as software input into metro DWDM design tools. Vendors normally publish their equipment specifications based on the assumption of G.652/SMF-28 classes of fiber.
Amplification/Regeneration

Although optical amplifiers greatly benefit long-haul optical networks, they are seldom required for metropolitan optical networks, except perhaps in the largest metros where cumulative distances or ring circumferences approach long-haul status. Unamplified metro DWDM networks are preferred because of their inherently lower cost. Unamplified distances range from about 60 km to perhaps as much as 200 km, depending on fiber and equipment. Larger metros and metros considering regional expansion may need amplification. Most providers try to avoid regeneration in metro DWDM designs. Vendors of metro DWDM equipment generally allow enough modularity that the provider can choose whether to use amplification based on design requirements. Achievable distances prior to regeneration commonly reach 600 km, so this is the typical high-water mark for metro network boundaries.

Metro optical networks, especially ring-based networks, do not present uniform span-to-span fiber losses as typical long-haul networks do. The placement of CO facilities along the optical ring(s) is rarely equidistant; their locations were established based on real estate availability and wireline centricity. When it is necessary to use EDFAs in metropolitan optical DWDM rings, it is normal practice to position the EDFAs such that the EDFA-to-EDFA span loss is as uniform as possible. When EDFA amplification is integrated into the metro DWDM nodes, the ability to tune the amplifier gain is a reasonable requirement. For more information on amplification and regeneration considerations, refer to Chapter 7.
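To illustrate why uneven CO spacing complicates amplifier planning, the following sketch uses hypothetical span lengths and typical planning loss figures (not vendor specifications) to estimate per-span loss and the gain an in-line EDFA would need:

# Hypothetical metro ring spans between CO sites (km); loss values are illustrative.
spans_km = [18, 42, 9, 31, 25]

FIBER_LOSS_DB_PER_KM = 0.25   # common planning value for 1550 nm single-mode fiber
FIXED_LOSS_DB = 2.0           # connectors, splices, and patch panels per span

for i, km in enumerate(spans_km, start=1):
    span_loss = km * FIBER_LOSS_DB_PER_KM + FIXED_LOSS_DB
    # An in-line EDFA set to this gain restores the composite power to its launch
    # level; widely different spans need widely different gains, which is why a
    # tunable-gain amplifier is a reasonable requirement in the metro.
    print(f"Span {i}: {km:>3} km, loss {span_loss:.1f} dB, required EDFA gain {span_loss:.1f} dB")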
Per-Channel Capacity

High speeds are essential to long-haul networks but not necessarily to metro networks. Higher speeds also increase transmission impairments and nonlinear effects. For example, at 10 Gbps, the chromatic dispersion impairment limits the all-optical distance to about 100 km. At 40 Gbps, chromatic dispersion limits the design to roughly a 10 km distance using SMF-28.
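As a rough rule of thumb, dispersion-limited reach falls off with the square of the bit rate; the following sketch, anchored to the approximate 100 km figure quoted above for 10 Gbps and intended only as a first-order approximation, illustrates the scaling:

# First-order rule of thumb: dispersion-limited reach scales as 1/B^2.
# Anchored to the ~100 km figure for 10 Gbps over SMF-28 cited in the text.

REF_BITRATE_GBPS = 10.0
REF_REACH_KM = 100.0

def dispersion_limited_reach_km(bitrate_gbps):
    return REF_REACH_KM * (REF_BITRATE_GBPS / bitrate_gbps) ** 2

for rate in (2.5, 10.0, 40.0):
    print(f"{rate:>4} Gbps: ~{dispersion_limited_reach_km(rate):.0f} km before dispersion compensation")
# 2.5 Gbps gives ~1600 km, 10 Gbps ~100 km, and 40 Gbps ~6 km; the text's
# 10 km figure for 40 Gbps is the same order of magnitude.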
Metros still use a lot of 2.5 Gbps (OC-48) technology. Providers have more fiber strand options with which to scale overall traffic capacity through ring stacking rather than making the jump to a 10 Gbps (OC-192) or 40 Gbps (OC-768) system. Examples include using RPR rings to nearly double bandwidth, or scaling with additional modular chassis on new wavelengths. The jump to 10 Gbps (OC-192) can be relatively expensive to acquire and spare. Many ring-based providers are also evaluating meshed architectures in the metro to add capacity, improve resilience, and support differentiated SLAs.

Above all, modularity is important. It's difficult to forecast whether a network will remain metro, grow to regional reach, or leap to long haul. Modularity in metro DWDM platforms eases this uncertainty with flexible, incremental upgrades, reconfigurations, or reuse.

Channel wavelength reuse can assist with overall channel capacity. By reusing DWDM wavelengths that have been dropped at a node, more light paths can be subscribed on an optical metro DWDM network. Perhaps as many as 2.43 light paths per DWDM wavelength can be realized, depending on traffic patterns.1 Light paths destined to pass through the metro and head to long haul generally realize less reuse, at about 2.05 light paths per DWDM wavelength.
Channel Count

Channel count is the number of DWDM channels that a metro DWDM system can accurately launch and recover over a single fiber pair. Most metro DWDM products have channel count set points in the 32-channel (protected) range. This gets back to a key inflection point: 32 channels, spaced at 100 GHz and each running at 10 Gbps, are reasonably deployable over G.652 fiber in the metro space with its shorter-distance spans. Moving into high-density DWDM requires 50 GHz or tighter channel spacing. In the metro, G.652 fiber has proven not to be an issue at 50 GHz channel spacing, having enough dispersion to control nonlinear effects and using transponders with wavelength lockers. Moving to 25 GHz channel spacing or closer in metro environments may require metro-optimized NZ/DSF fiber to increase channel counts, depending on distance.

Within the metro DWDM industry, 16 to 32 channels are thought to be more than enough. However, that is a time-relative statement. Are 32 channels enough for now? Are 32 channels enough for tomorrow? Will a critical mass of wavelength service uptake begin to confront the 32-channel set point? Most vendors of metro DWDM gear are preparing their platforms for tighter channel spacing to yield higher channel counts as required.
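The channel-count trade-off is simple division of usable optical bandwidth by channel spacing; the sketch below assumes roughly 4 THz of usable C band (an approximation, not a product specification):

# Approximate usable C-band width divided by channel spacing gives the maximum
# channel count; per-channel rate times count gives fiber-pair capacity.

USABLE_C_BAND_GHZ = 4000.0  # roughly 1530-1565 nm; an approximate planning figure

def max_channels(spacing_ghz, band_ghz=USABLE_C_BAND_GHZ):
    return int(band_ghz // spacing_ghz)

for spacing in (200, 100, 50, 25):
    n = max_channels(spacing)
    print(f"{spacing:>3} GHz spacing: up to {n:>3} channels "
          f"(~{n * 10} Gbps per fiber pair at 10 Gbps per channel)")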
Metro CWDM

CWDM is limited to the metro. The CWDM channel plan spreads across the S, C, and L bands, making cost-effective amplification difficult. Since CWDM is targeted at low-cost WDM applications anyway, no one wants the added expense of amplification; where longer reach is needed, managed wavelength services typically fill that need. Metro CWDM has been used for shorter-distance applications such as SANs, connecting two large and diversified campuses within a metro, and business continuance solutions such as data center replication and disaster recovery. CWDM is usually offered as a lower-cost entry point for shorter-distance, metro access layer applications.

To lower costs, CWDM lasers are generally uncooled, with more relaxed tolerances for spectral width. The spectral width is usually four to five times that of a DWDM laser, or about 0.4 to 0.5 nm. (The spectral width of most DWDM systems is less than 0.1 nm.) CWDM lasers can be easily packaged into pluggable optics. Pluggable optics add usage flexibility, and these optics interfaces are the most prominent due to well-defined channel spacing, shorter-distance nominal tolerances, and the absence of significant nonlinear effects. Most CWDM products are fixed platforms with pluggable interfaces, targeting the do-it-yourself, plug-and-play enterprise market. Linear point-to-point and ring topologies are common.

As an example, Cisco uses CWDM GBICs that have an optical link budget of 30 dB and can operate Gigabit Ethernet over SMF-28 single-mode fiber across link spans of up to 100 km. Passive OADMs are also available so that a CWDM design can include a one-, four-, or eight-wavelength add/drop node between end terminals. The Cisco CWDM GBICs use the following 8 CWDM wavelengths at 20 nm interchannel spacing:
• 1470 nm
• 1490 nm
• 1510 nm
• 1530 nm
• 1550 nm
• 1570 nm
• 1590 nm
• 1610 nm

NOTE
Seldom would you see provider CWDM offerings, unless perhaps as a custom, special assembly.
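The eight wavelengths listed above sit on a simple 20 nm grid, which the following sketch reproduces; it also shows why the plan is awkward to amplify, because the channels spread across roughly 140 nm, far wider than the band a single EDFA covers:

# The 8-channel CWDM plan used by the Cisco CWDM GBICs: 1470-1610 nm in 20 nm steps.
cwdm_channels_nm = [1470 + 20 * n for n in range(8)]
print(cwdm_channels_nm)            # [1470, 1490, 1510, 1530, 1550, 1570, 1590, 1610]

span_nm = cwdm_channels_nm[-1] - cwdm_channels_nm[0]
print(f"Grid spans {span_nm} nm")  # 140 nm, much wider than the ~35 nm C band a
                                   # single EDFA covers, hence no cheap amplification.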
Price declines in DWDM gear have the potential to put pressure on the lower-cost CWDM market, diluting the pricing power of lowest-cost CWDM and shifting preference toward DWDM's flexibility at ever-lower cost. The CWDM market may remain a small niche of the overall WDM market, or perhaps a DWDM-lite market will appear. Perhaps even a hybrid of CWDM and DWDM in the same platform will provide the assurance of a smooth upgrade path.
Metro DWDM-Enabled Services

Service creation is the key purpose of metro DWDM systems. Vendors continue to integrate DWDM functionality into their metro-specific platforms, and to do so with lower cost, modularity, and operational simplicity. Yes, providers can use DWDM to address fiber relief or to converge multiple parallel networks onto fewer fiber pairs, better utilizing fiber assets. But the use of fiscal capital for DWDM networks is also about capacity for revenue generation. By investing in DWDM, the strategic capacity is there to develop and offer new high-bandwidth services, and then deliver and excel on performance and service guarantees. Thanks to DWDM, the value of metro fiber multiplies many times over.

For a customer, procuring dark fiber can be expensive, possibly approaching fiber pair replacement costs as determined by the fiber provider. For providers, selling or leasing dark fiber, typically through indefeasible rights of use (IRUs), is like selling transport. Once sold, or tied up in decade-plus IRUs, it is difficult for the fiber lessor to leverage further value out of the asset. To the provider, this is a Layer 1 transaction only; the customer as lessee has ultimate control over the fiber's use. Metro and regional providers are very interested in maintaining as much control as possible over their high-value fiber assets. Therefore, there is a tendency to offer wavelength services, which can be leveraged beyond mere Layer 1 transport to position for future Layer 2, 3, and 4 services.

Metro DWDM is central to the delivery of wavelength and subwavelength services. Wavelength services are alternatives for customers who might otherwise attempt to acquire dark fiber. They offer the same characteristics as dark fiber, such as protocol and bit-rate independence, yet on a per-wavelength basis. Customers can purchase one or more wavelengths depending on need. As a provider offering, wavelength services can be used by customers to support dedicated TLS, networked storage, and business continuance applications. Also, several organizations, smaller carriers, and research organizations may need wavelengths with which to complete select spans of their own fiber-dependent, customer-owned DWDM networks.

Most large providers now offer wavelength services as mainstream, tariffed solutions. As such, the provider engineers, monitors, and manages the service on behalf of the customer. Wavelength services are built using DWDM technology in the metro core, metro edge, and metro access layers, extending DWDM transport as close as necessary to the customer premise. SONET/SDH OC-n services, 1 and 10 Gbps Ethernet services, ESCON,
FICON, Fibre Channel services, and others are customarily supported. Of these, high-speed Ethernet and storage networking services are expected to be large contributors to overall wavelength service uptake. Wavelength service pricing is typically a blended value based on the number of add/drop nodes, channel protection options or route diversity, distance, contract term, and service-level agreements.

Metro Ethernet will place great demands on metro networks. High adoption of broadband Ethernet will be a significant business case for justifying metro DWDM investments. DWDM is important for enabling capacity for 1 and 10 Gbps Ethernet services, and it is key to delivering 10 Gbps Ethernet services because this line rate currently requires the full bandwidth of a DWDM wavelength.

Storage networking requirements, while less pervasive than Ethernet, are better supported and more efficiently packaged into DWDM wavelengths than before. The storage networking category loosely includes the mainframe networking protocols as well. Mainframes and other processing systems work with storage, and therefore the storage protocols of Fibre Channel, ESCON, FICON, and Fibre Channel over Internet Protocol (FCIP) are generally supported. Mainframes can also be coupled together over the network to support synchronous processing solutions, which requires additional protocols such as ISC-1 and ETR/CLO for timing references. Storage networking protocols are supported by DWDM client-side interfaces, and many solutions include features to extend storage over 2000 km. Most of the primary storage vendors have certified their products to work with a variety of DWDM platforms. See the section "Metro Storage Networking" for more details about storage protocols.

For media and cable operators, HSD, VOD, VoIP, HDTV, interactive TV, and personal video recording are new applications that require great bandwidth to scale operator capacity in advance of broadband user demand. Building these networks to profit from new services requires high-capacity metro optical backbones, often requiring DWDM to increase overall bit-carrying capacity on existing high-value fiber infrastructure. DWDM can help alleviate bandwidth bottlenecks between metropolitan HFC segments and the metro core. DWDM can also be used to help segment and scale new digital media services, including business-related services, apart from traditional analog broadcast spectrum services.
Metro Ethernet

Telecommunications subscribers, whether business or residential, use networks capable of supporting megabits of bandwidth. Wireline providers have core networks that support gigabits of bandwidth. The bottleneck between the two is more apparent than ever before.
Ethernet is the lowest-cost, highest-volume networking technology. The price/performance metrics of Ethernet have beaten all others in the local area network (LAN) market and are challenging other technologies in the metropolitan area network (MAN) and wide area network (WAN) space. Ethernet supports all services, such as data, packet voice, and packet video, and all media types, copper and fiber. Because of this flexibility, Ethernet has been moving from local IP networks to long IP networks. Most of the early action has been in the metropolitan area, where metro Ethernet is increasingly an IP-based WAN access link option or TLS. Many providers are positioning edge platforms with varieties of Ethernet support in their next-generation networks.

The appeal of using Ethernet in the MAN and WAN is the low cost and the future scalability of the interface. The lifetime cost curve of a T1 interface, replaced with a T3 interface and then usurped by an OC-3 interface, is steep compared to that of a single Fast Ethernet interface that can scale from 1.5 to 100 Mbps without a truck roll or equipment change. Ethernet interfaces are packet based and provisioned quickly, with more granular speed options enabled through software configuration. Best of all, they can eliminate protocol conversions and MAC layer rewrites.

This section examines Ethernet's movement from LAN to MAN along with the different types of metro Ethernet services. It also offers some considerations for taking metro Ethernet to market in a service-oriented fashion.
Ethernet—from LAN to MAN

Ethernet, developed as a LAN technology, incorporates broadcast, multicast, and unknown packet flooding mechanisms within its protocol operation. A large Layer 2 Ethernet network, even one that uses switching, can run into scalability problems when the number of hosts and accumulated data utilization reach diminishing returns because of broadcast overhead. One of the many benefits of using VLANs is that they can take a large, flat Ethernet network and segment it into multiple smaller networks, as measured by the number of hosts and their cumulative wire distances. Broadcast and unknown packet flooding is limited to the boundaries of a specific VLAN, so these VLANs become isolated from one another, unable to learn MAC addresses across established VLAN boundaries. Additionally, because Ethernet is a Layer 2 bridging technology, the Spanning Tree Protocol (STP) is active across the Ethernet segment (generally one STP instance per VLAN), eliminating redundant paths that would cause Ethernet packets to loop indefinitely. Ethernet has no signaling mechanism of its own, unlike Frame Relay, ATM, or MPLS, so it is often piggybacked on Frame Relay, ATM, or MPLS signaling mechanisms when used in a metro provider's core or regional network.

The general desire to maintain any-to-any communication calls for the use of Layer 3 devices such as routers or Layer 3 switches that are capable of routing data between
different VLANs, essentially interconnecting the Layer 2 VLANs at the Layer 3 routing instance. The use of Layer 3 devices terminates both the Ethernet broadcast domain and the spanning tree operation at the router interface. So, the combination of Ethernet-switched VLANs and Layer 3 routers yields a segmented but routable Ethernet network that is much more scalable.

When porting Ethernet technology across a MAN or even a WAN, remember that in addition to standard unicasts, Ethernet still uses multicast, broadcast, and unknown-destination flooding processes. STP remains essential for guarding against Layer 2 packet loops. VLAN IDs (tags), if used, must be preserved or translated appropriately to be meaningful between client source and server destination. This leads to the conclusion that Ethernet in the metro must be engineered properly, whether transmitted over a point-to-point Layer 1 wire or fiber, bridged/switched at Layer 2, or routed within a Layer 3 topology. The number of customer locations varies, so both point-to-point and multipoint Ethernet switching are necessary. Service-level agreements (SLAs) are often needed. Because this is metro Ethernet, these challenges are the provider's concern, not necessarily the customer's.
Metro Ethernet Services

The demand for provider Ethernet services encompasses Transparent LAN Services (TLS), Ethernet Private Line (EPL) services, Ethernet-to-Internet services, Ethernet point-to-multipoint services, and Ethernet over passive optical networks (EPONs). Figure 6-19 shows a summary of Ethernet-based services that may be offered in a provider's metro Ethernet portfolio. As Figure 6-19 shows, Ethernet can be used with Layer 1, Layer 2, and Layer 3 transport. Point-to-point and multipoint topologies are distinguished, and within each an assortment of Ethernet service variations is further delineated to address a particular feature set desired by customers and their applications. These Ethernet services represent Cisco's terminology but are easily compared with the terminology documented by the Metro Ethernet Forum. Ethernet in the metro can be categorized as follows:
• Ethernet Private Line (EPL)
• Ethernet Wire Service (EWS)
• Ethernet Relay Service (ERS)
• Ethernet Private Ring (EPR)
• Ethernet Multipoint Service (EMS)
• Ethernet Relay Multipoint Service (ERMS)
• MPLS VPN Ethernet service (VPLS), described earlier in Chapter 4, "Virtual Private Networks"
Figure 6-19 Summary of Ethernet-Based Services
[Figure 6-19 is a tree of Ethernet-based services grouped by transport layer and topology: Layer 1 point-to-point (Ethernet Private Line, analogous to private line over a SONET/SDH/xWDM network); Layer 2 point-to-point (Ethernet Wire Service, similar to a leased line over a packet network, and Ethernet Relay Service, analogous to Frame Relay using VLANs for multiplexing); Layer 2 multipoint (Ethernet Private Ring, Ethernet Multipoint Service, and Ethernet Relay Multipoint Service, providing private and Transparent LAN services); and Layer 3 (MPLS VPN, a Virtual Private LAN Service). Source: Cisco Systems, Inc.]
The next sections cover a few of these at a high level and then summarize them in a table for reference.
Ethernet Private Line (EPL)

An EPL service is a point-to-point data transport service. Within metro Ethernet terminology, EPL is intended to be a MAN or WAN connection option, capable of delivery to the customer as a user network interface (UNI). Traditional UNI connection types of ISDN BRI/PRI, T1, and T3 are customarily used for private line, point-to-point services, and as an EPL service, Ethernet joins the list. EPL is categorized as a Layer 1, point-to-point transport service. The provider may offer and price this service based on achievable bandwidth, distance, or both. As a Layer 1 dedicated service, the available bit rate is the full line rate, whether 10 Mbps, 100 Mbps, 1000 Mbps, or perhaps even 10,000 Mbps. Because this service is a dedicated pipe, Ethernet packets of all types flow over the EPL service, including bridge protocol data units (BPDUs), which are fundamental to the operation of STP.
EPL doesn’t use an Ethernet switch function within the provider’s network. The customer can connect either routers or switches, or even computers at either end. A provider may choose to implement the EPL service over a SONET/SDH network, or perhaps via a CWDM or DWDM network. The provider’s network implementation method is transparent to the user. EPL is useful for mission-critical applications such as disaster recovery, storage mirroring, and medical imaging, among others.
Ethernet Wire Service (EWS)

EWS is a point-to-point, port-based service similar to EPL. One of the key differences is that EWS is a switched/shared service, not a dedicated one. The provider's EWS functionality is an Ethernet switching function capable of performing local switching. It is built on a concept called pseudowires (PWs), essentially logical wires built as a Layer 2 tunneling function within the provider's network. The provider typically uses POP-based Ethernet switches, DWDM, Ethernet over Layer 2 MPLS (VPLS), or Layer 2 Tunneling Protocol version 3 (L2TPv3) to create and deliver the pseudowire services. Tiered offerings based on bandwidth, CoS, distance, or a blend of these are common. The EWS function maintains customer transparency, so Ethernet packets, BPDUs, and STP work as expected end to end. As a shared service, pricing may be lower. Customers use EWS as an Ethernet-style local loop, as an Ethernet access link to an ISP, or as a connection to an IP/VPN service.
Ethernet Relay Service (ERS)

ERS is an offering that makes Ethernet look similar to a Layer 2 Frame Relay hub-and-spoke network. It is considered a switched, shared-access service with a point-to-multipoint, hub-and-spoke topology. Instead of using Frame Relay DLCIs to differentiate the multipoint sites back to the hub or aggregation site, Ethernet VLAN tag IDs are used to differentiate the sites. The hub site connection is essentially an Ethernet point-to-point switching function capable of bringing together the individual sites (each identified by a service provider-assigned VLAN ID) and presenting all of them to the hub interface, generally a large-bandwidth interface on a customer router. Providers may tier their offerings based on bandwidth, CoS, distance, and, similar to Frame Relay, committed information rate (CIR). Providers may use POP-based Ethernet switches, L2 or L3 MPLS, or even Frame Relay or ATM network cores to deliver this service. If Frame Relay or ATM is used, a service interworking specification is necessary for Ethernet to Frame Relay (RFC 2427) or for Ethernet to ATM (RFC 2684). Connectivity of several remote branches to a larger hub site within the metropolitan area is a common customer application for ERS. Connecting several metro sites to a common ISP is yet another application. As a switched, shared service, ERS may have cost advantages over using an equivalent number of dedicated EPLs to build a comparable hub-and-spoke topology.
Ethernet Private Ring Service (EPR)

EPR is a metro Ethernet service type that is built on a provider core network such as SONET/SDH or an RPR/DPT optical ring. Using either transport method, EPR is a dedicated-bandwidth service available in both point-to-point and multipoint topologies. The use of provider optical ring technology assures high availability and large bandwidth scalability. As a dedicated-bandwidth service, it is usually tiered based on desired bandwidth with uptime service-level guarantees. EPR is typically an intrametro service for delivering mission-critical Ethernet connectivity between metro data centers, headquarters and campus rings, or wherever high availability is essential to a company's business transactions, such as a dot-com company connected to an ISP. When configured as a multipoint service, EPR is essentially a private Ethernet LAN service.
Ethernet Multipoint Service (EMS)

EMS is a multipoint, port-based service. Carrying Ethernet connectivity across a MAN or WAN to multiple locations while preserving the customer's Ethernet campus design requires both an Ethernet Multipoint Service and a transparency feature called 802.1Q in 802.1Q. Cisco shortens this mouthful to 802.1Q in Q, and the colloquial term among industry engineers is Q-in-Q. Q-in-Q is an extension to the 802.1Q VLAN standard that adapts LAN-based VLANs for transport or tunneling over a MAN. A simple way to grasp the concept: it encapsulates the customer's 802.1Q VLAN header inside the service provider's 802.1Q VLAN header.

The 802.1Q specification provides 12 bits in the header for identifying VLANs, yielding 4096 possible unique VLANs. This is usually enough for a customer or large enterprise, but generally not enough for a metro provider with many customers. By using Q-in-Q functionality, a provider can engineer and scale beyond the 4096 VLAN restriction, using the combination of the customer's VLAN tag and the provider's VLAN tag to multiply the possibilities.

The premise for this service is that the customer already segments its LAN traffic internally by assigning unique VLAN IDs to its Ethernet traffic. Now the customer wants to connect a second campus (or more) across town while propagating the same VLAN tag IDs to the second campus. Therefore, the customer wants its assigned VLAN ID information preserved such that when it is presented to the east side of the provider's metro network, it reappears unchanged on the west side of the provider's network. The provider essentially creates Ethernet virtual circuits across the provider cloud, using POP-based Ethernet switches or an MPLS network with a feature called hierarchical Virtual Private LAN Service (H-VPLS). The provider receives customer data over the Ethernet UNI connection and assigns a unique provider 802.1Q VLAN ID value to this customer connection, in effect "stacking" the provider's VLAN ID (which identifies this customer) in front of the customer's internal VLAN ID information in every packet. The provider stacks the 802.1Q VLAN ID as a packet enters the edge switch on the east side, transports the packet across the Ethernet access domain using the Ethernet virtual
circuit(s), and then strips the VLAN tag ID at the exit point on the west side, delivering the customer's packet(s) with the original VLAN tag assignments intact. All Ethernet packet types and BPDUs flow across the service. This is the technology mechanism behind TLS and is also known as VLAN tag preservation/stacking. Sometimes the market refers to this as a virtual LAN service. Rate limiting is a common feature of this offering, and the service may be tiered based on bandwidth, CoS, and distance, and guaranteed using options of committed information rate (CIR), peak information rate (PIR), packet burst, and packet loss percentages. As a TLS, it functions as a campus LAN extension across a metro area to multiple sites. Providers can use the Cisco 7600 and 3550 platforms for TLS switching services. Whenever the TLS must extend beyond a POP-based switching function, a core signaling mechanism such as Ethernet over SONET/SDH, Ethernet over RPR, Ethernet over MPLS LDP, or IP-based RSVP-TE can be used.
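The tag-stacking operation itself is easy to visualize in code. The following sketch uses the Scapy packet library (assumed to be available; the MAC addresses and VLAN IDs are purely illustrative) to build a Q-in-Q frame in which the provider's outer 802.1Q tag identifies the customer and the customer's inner tag rides through untouched:

from scapy.all import Ether, Dot1Q, IP

CUSTOMER_VLAN = 300    # assigned inside the customer's campus (illustrative)
PROVIDER_VLAN = 1025   # assigned by the provider to identify this customer (illustrative)

# Frame as it leaves the customer campus: a single 802.1Q tag.
customer_frame = (Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")
                  / Dot1Q(vlan=CUSTOMER_VLAN) / IP(dst="10.1.2.3"))

# At the provider's east-side edge switch, the provider tag is stacked outside
# the customer tag; the customer tag is preserved end to end.
qinq_frame = (Ether(src=customer_frame.src, dst=customer_frame.dst)
              / Dot1Q(vlan=PROVIDER_VLAN) / Dot1Q(vlan=CUSTOMER_VLAN) / IP(dst="10.1.2.3"))

qinq_frame.show()   # outer tag = provider, inner tag = customer
# At the west-side exit, the outer tag is stripped and the original customer
# frame (with VLAN 300) is delivered intact.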
Ethernet Relay Multipoint Service (ERMS)

ERMS is a multipoint, VLAN-based service and, as such, differs from EMS by allowing both multipoint and point-to-point Ethernet services to be multiplexed across the same customer Ethernet access UNI. Providers generally use POP-based Ethernet switches, and may use MPLS and H-VPLS to scale this type of service.
Comparing Metro Ethernet Services

Table 6-4 summarizes many of the attributes of the metro Ethernet services just discussed.
Table 6-4 Metro Ethernet-Based Services and Attributes
(EPL = Ethernet Private Line; EWS = Ethernet Wire Service; ERS = Ethernet Relay Service; EPR = Ethernet Private Ring; EMS = Ethernet Multipoint Service; ERMS = Ethernet Relay Multipoint Service)

Description | EPL | EWS | ERS | EPR | EMS | ERMS
------------|-----|-----|-----|-----|-----|-----
Layer | 1 | 2 | 2 | 2 | 2 | 2
Topology | Point-to-point dedicated | Point-to-point shared, VLAN based | Switched point-to-multipoint shared | Point-to-point and multipoint dedicated | Multipoint port based | Multipoint VLAN based plus point-to-point
Similar to | T1/E1 private line | T1/E1 leased line | Frame Relay hub and spoke | SONET/SDH or RPR services | ATM ELANs | —
Tiered pricing based on | Full line rate, for example 10 Mbps, 100 Mbps | Bandwidth, class of service, distance | Bandwidth, class of service, distance | Bandwidth, class of service, distance | Bandwidth, class of service, distance | Bandwidth, class of service, distance
SLA guarantee options | Uptime, packet loss | CIR, PIR, burst, packet loss | CIR, PIR, burst, packet loss | Uptime, packet loss | CIR, PIR, burst, packet loss | CIR, PIR, burst, packet loss
Provider core service enablers | POP-based switches, Ethernet over SONET/SDH, Ethernet over ATM, Ethernet over DWDM | POP-based switches, CWDM/DWDM, Ethernet over L2 MPLS, L2TPv3 | POP-based switches, Frame Relay or ATM, L2 MPLS, L3 MPLS | Ethernet over SONET/SDH, Ethernet over RPR/DPT | POP-based switches, H-VPLS using MPLS LDP, 802.1Q in 802.1Q | POP-based switches, H-VPLS using MPLS LDP, 802.1Q in 802.1Q
Customer VLAN transparency | Yes | Yes | No | Yes | Yes | No
Service multiplexed UNI | No | No | Yes | No | No | Yes, plus point-to-point
Oversubscription | No | Yes | Yes | No | Yes | Yes
Layer 2 PDU transparency | Tunnel CDP, VTP, STP | Tunnel CDP, VTP, STP | Discard CDP, VTP, STP | Tunnel CDP, VTP, STP | Tunnel CDP, VTP, STP | Discard CDP, VTP, STP
Typical CPE type | Router or switch | Router or switch | Router or Layer 3 switch | Router or switch | Router or Layer 3 switch | Router or Layer 3 switch
Typical service-oriented applications | Mission critical, Internet access, business continuity/disaster recovery, storage mirroring, dual-site LAN connect | Ethernet local loop, Ethernet access to IP/VPN, Ethernet to ISP | Remote branch connect, multisite to ISP, intranet, extranet | Intrametro mission critical, Internet access, business continuity/disaster recovery, storage mirroring, dual-site LAN connect, multisite LAN connect | Campus LAN extension, Ethernet WAN, disaster recovery | Campus LAN extension, Ethernet WAN, disaster recovery, plus point-to-point, Ethernet local loop, Ethernet access to IP/VPN, Ethernet to ISP
Functional use | Private line dedicated | Private line switched | — | Private LAN service dedicated | Transparent LAN service switched | —

Taking Metro Ethernet to the Market

Providers often perform their own market requirements study and customer survey to determine which of the Ethernet service types are most applicable to their customers' needs and in line with the provider's business strategy and value objectives. Providers usually offer both a dedicated Ethernet private line service and a switched Ethernet TLS, using CoS to differentiate various levels of the TLS. A dedicated Ethernet private line service meets the need for a point-to-point, dedicated-bandwidth application that requires the guaranteed bit-rate levels that Ethernet, Fast Ethernet, and Gigabit Ethernet can achieve.
A switched TLS usually mirrors the EMS type described previously, but may be blended with features of the ERMS type to accommodate point-to-multipoint applications, such as a customer migrating or converting to Ethernet from Frame Relay service. Providers often differentiate levels of TLS based on committed bandwidth, CoS needs, and distance (mileage bands in metro terms). It is customary for providers to start with four TLS CoS levels, such as a gold, silver, bronze, and copper traffic prioritization scheme. Q-in-Q forwarding is needed to isolate different customers and to assign the desired CoS level to the proper field within the 802.1Q header.

While market requirements, strategies, pricing exercises, and return on investment (ROI) iterations are in progress, providers contemplating new or enhanced Ethernet services conduct parallel efforts to evaluate vendor equipment with which to deliver and operate their metro Ethernet service offerings. These evaluation efforts can be lengthy, because they validate a vendor's product on many levels: installation, upgrading, feature provisioning, monitoring, management, and billing integration, to name a few. The selection of strategic platforms and footprints that can scale easily through processor, memory, and line card upgrades and software feature enhancements is important for determining a product's longevity and target amortization period. Providers and operators may also select different equipment platforms, perhaps even from different vendors, on which to support their diverse metro Ethernet services. They may segment these into separate layered networks based on port density, traffic management, monitoring and reporting features, and ROI margins. Older equipment tends to migrate toward the more basic service delivery needs.

Once the vendor products are chosen, the implementation is scheduled, and the solution(s) are deployed, the metro Ethernet services are ready to extend to the customer premise. Metro Ethernet presents different types of customer network interfaces at the customer demarcation point (demarc, sometimes referred to as a network interface device, or NID). Providers delivering Ethernet at 10 or 100 Mbps can use either fiber or copper interfaces at the customer demarc. Ethernet interfaces for 1000 Mbps (1 Gbps) are fiber only. The following interface types are typically used:
• 10Base-T, 100Base-TX; copper interface; Category 5 RJ-45 jack
• 10Base-FL, 100Base-FX, 1000Base-SX; fiber interface; multimode fiber with ST or SC connectors
• 1000Base-LX; fiber interface; SMF with SC connector(s)
Ten Gigabit Ethernet interfaces are emerging options on many carrier-class Ethernet platforms today. These are generally associated with equipment that incorporates DWDM technology or at least can support a high number of OC-192 connections. A 10 Gbps Ethernet interface is equivalent in speed to an OC-192/STM-64 or a typical 10 Gbps-driven DWDM lambda. A DWDM solution would allow more incremental scaling of 10 Gbps Ethernet interfaces than would a SONET/SDH solution.
The multimode fiber distance limitation is 2 km from the provider's Ethernet switch port to the customer's Ethernet switch port, not the demarc. Demarcs are often in telephone, equipment, and maintenance closets—generally too small to colocate core routers or switches. A customer's demarc may be dozens or hundreds of feet from the customer's Ethernet switch that will terminate the provider's service, and that in-building run counts against the 2 km distance limitation for multimode fiber. Requirements for longer distances usually entail special assembly and may use long-haul or extended long-haul pluggable GBICs and SMF to achieve distances up to 10 km and beyond.
Service Orienting Metro Ethernet

It is useful to categorize metro Ethernet offerings in "at-a-glance" charts for marketing purposes, including recommended CoS categories for sample customer applications. A chart might include the following, for example:
• Recommending a dedicated Ethernet private line service for storage mirroring and other mission-critical applications
• Recommending a gold level of TLS for customer applications that include VoIP and video services
• Recommending a copper (most basic) level of TLS for e-mail and FTP applications
Customers understand Ethernet, so it is useful for providers to speak intelligently about their Ethernet offerings and to simplify their marketing and billing methods to coincide with the simplicity of Ethernet. The concept of Transparent LAN Services is largely what is intended by the customer when inquiring about metro Ethernet. It’s important to be sure which Ethernet application your customer has in mind when requesting such services. The service orientation of Ethernet carries pull because of the technology’s flexibility, price/performance, ubiquity, and customer mindshare. Ethernet forms a bridge between software and hardwire, softwire (wireless), and fiber. Ethernet complements a router as the perfect shuttle for IP. Ethernet is a new-era technology equalizer. Metro Ethernet is here to stay. Ethernet offerings let providers get closer to the customer and to IP services. Today’s IP services are data, voice, and video centric as well as service rich. Ultimately, providers want IP-based relationships with their customers. Ethernet, Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet interfaces will all find success in metropolitan networks and over long-haul optical fiber for very long IP networks because of Ethernet’s inherent low-cost, high-volume, and high-touch networking covenants. Chapter 8 includes more information about Ethernet over copper and Ethernet over optical fiber facilities.
Metro MSPP, MSSP, and MSTP

This chapter has covered a large variety of optical technology, architectures, and services that are specific to metropolitan networking. These technologies and services are found within the Cisco MSPP, MSSP, and MSTP product platforms, which collectively can be used to build, scale, and multiply next-generation network services within metropolitan optical networks.
MSPP

Build it. Next-generation MSPPs are the building blocks of metropolitan connectivity. The diverse features and services of the MSPP platform allow for consolidation of traditional metro equipment and a migration to both circuit- and packet-based services. MSPPs are optimized for the metro access and the metro edge. MSPPs are fundamentally based on SONET/SDH add/drop multiplexing. These multipurpose, next-generation MSPPs are very strong in the three areas of optical multiplexing support, TDM switching, and packet-based Ethernet services. The Cisco ONS 15454 MSPP's robust support for SONET/SDH, TDM, video, ATM, storage, and packet-based Ethernet is an effective combination that establishes leadership in the MSPP market. Layer 1 transport; Layer 2 ATM, Ethernet, and storage; and Layer 3 Ethernet define the multiservice capabilities of the platform. Additional multiservice provisioning platforms are available within the Cisco ONS 15300 series. The ability of the Cisco MSPP platforms to work across UPSR, BLSR, linear, unprotected, point-to-point, and PPMN optical topologies, while supporting established and innovative new solutions, creates a flexible, next-generation SONET/SDH multiservice access layer in provider metropolitan networks, linking customers with high-value networking services. MSPPs are the new broadband edge of metropolitan optical networks. Detailed information on MSPPs is covered in Chapter 3.
MSSP

Scale it. MSSPs are the optical glue for MSPPs that serve the metro edge and metro access. MSSPs are optimized for metropolitan core requirements, typically consolidating aging SONET ADMs and broadband digital access cross-connects (DACs) while providing core switching services for MSPP deployments. It is the success of the MSPP platforms that drives the requirements for the MSSPs. MSSPs are aggregation points for high-bandwidth switching in the metro, supporting large quantities of high-speed SONET/SDH and Gigabit Ethernet interfaces to collect and aggregate metro edge rings and shuttle traffic to service POPs and long-haul network interconnects. While MSSPs are largely transparent to customers, they are an important design layer within the provider's metropolitan optical infrastructure. MSSPs are the metropolitan backbones of broadband optical switching. Detailed information on MSSPs was previously covered in Chapter 3.
Figure 6-20 shows the typical positioning of Cisco's MSPP and MSSP platforms within the metropolitan optical network architecture.

Figure 6-20 Cisco MSPP and MSSP Platform Metro Positioning

[Figure 6-20 shows Cisco ONS 15454 MSPPs serving customers on 2.5 Gbps metro edge rings, aggregated by Cisco ONS 15600 MSSPs on a 10 Gbps metro core ring/mesh. The core hands off TDM traffic to a Class 5 switch, cells to an ATM switch, and packets to a core router, and interconnects with the LH/ELH network. Source: Cisco Systems, Inc.]
Multiservice Transport Platform (MSTP)

Multiply it. MSTPs help metropolitan optical networks stay in front of the bandwidth surge. MSTPs are optimized for both metro and long-haul transport, typically utilizing DWDM technologies to apply scalability and flexibility to provider or customer high-value fiber. With the Cisco ONS 15454 MSTP platform, the ability to multiply metro, regional, or long-haul fiber capacity based on DWDM greatly multiplies the value of metro bandwidth and service options.
The Cisco ONS 15454 MSTP is one configuration option of the popular Cisco ONS 15454 MSPP. The integrated DWDM capability of the MSTP configuration arrived with the announcement of Cisco ONS 15454 release 4.5 in June of 2003. The reconfigurable OADM (ROADM) capability for the MSTP was a release 4.7 announcement in June 2004 and was integrated into the combined SONET/SDH/DWDM release 5.0 deliverable in the first quarter of 2005. The integration of DWDM directly into the platform eliminates any dependency on standalone DWDM point products. The MSTP platform is optimized for metro and regional DWDM transport, which requires low cost, low maintenance, automatic optical power monitoring and adjustment, automatic node setup, and so on. The MSTP further increases metro flexibility and rapid provisioning through the use of ROADM technology and tunable lasers and components. The ONS 15454 MSTP is flexible enough to use for metro fiber relief applications, metro wavelength services, broadband video, VOD, HDTV services, storage and Ethernet applications, and standard voice and data services.

The MSTP configuration supports up to 32 ITU-T DWDM-protected wavelengths, or up to 64 unprotected, at 100 GHz channel spacing. The components are also rated for future 50 GHz channel spacing capability when required. Cisco's implementation of ROADM is performed in silicon using planar lightwave circuit (PLC) technology. The PLC ROADM used in the Cisco ONS 15454 MSTP platform is the combination of two cards, the 32-WSS (Wavelength Selective Switch) and the 32-DMX (Demultiplexer). Within a Cisco ONS 15454 node, the combination of four cards, both east and west facing, forms a functional east/west ROADM node. The ONS 15454 ROADMs have the added benefit of providing automatic channel equalization, allowing all 32 wavelengths to be optically balanced. Better network operational flexibility and higher network availability are direct results.

Tunable lasers are also used in the ONS 15454 MSTP—more specifically, tunable, lithium niobate, externally modulated lasers. Both 2.5 and 10 Gbps tunable laser assemblies are available. The tunable laser assemblies are incorporated as components on transponder line cards. Multirate transponders are tunable on the trunk side (DWDM side), but they also incorporate a client-side line interface on the same card. The client-side interfaces support SFP or XFP pluggable optics. On the client side, these transponders have a multirate, protocol-transparency property, allowing these tunable transponders to meet a number of service demands. For example, a 10 Gbps Multirate Enhanced Transponder Card supports the client-side interface signals shown in Table 6-5.
Table 6-5 Client Interfaces for 10 Gbps Multirate Enhanced Transponder Card

Client Application | Interface Speed and Type
-------------------|-------------------------
Storage            | 10 Gbps Fibre Channel
Ethernet           | 10 Gigabit Ethernet WAN PHY; 10 Gigabit Ethernet LAN PHY
SONET/SDH          | OC-192/STM-64
The laser used for the client-side interface is usually of the distributed feedback, direct modulation type (DFB/DM), which is less expensive than the DWDM trunk-side lasers that must reach farther with more accuracy. Pluggable XFP optics are capable of sending an unamplified 10 Gbps Ethernet signal up to 10 km over SMF-28 fiber that meets particular dispersion and loss characteristics. The Cisco 2.5 Gbps Multirate Transponder Cards (tunable on the DWDM side) support the client-side interface types shown in Table 6-6.

Table 6-6 Client Interfaces for 2.5 Gbps Multirate Transponder Cards

Client Application    | Interface Speed and Type
----------------------|-------------------------
Storage and mainframe | ESCON; 1 Gbps Fibre Channel/FICON; 2 Gbps Fibre Channel/FICON; External Timing Reference/Control Local Oscillator (ETR/CLO); IBM InterSystem Coupling (ISC-1)
Ethernet              | Gigabit Ethernet
SONET/SDH             | OC-3/STM-1; OC-12/STM-4; OC-48/STM-16
Digital video         | D1-SDI Video; HDTV; C-Cor DV-6000 (2.38 Gbps)
For service aggregation requirements, both multiservice aggregation transponders and muxponder cards may be used. Multiservice cards help to support multiple interface types to provide service variety on the customer side. As an example, the Cisco 2.5 Gbps Multiservice Aggregation Transponder Cards (DWDM tunable) support the client-side interface types shown in Table 6-7. These are then transponded to the desired DWDM wavelength on the trunk side of the MSTP.
Table 6-7 Client Interfaces for 2.5 Gbps Multiservice Aggregation Transponder Cards

Client Application | Interface Speed and Type
-------------------|-------------------------
Storage            | 1 Gbps Fibre Channel/FICON; 2 Gbps Fibre Channel/FICON
Ethernet           | Gigabit Ethernet
A muxponder is a combination multiplexer and transponder card that helps to increase the service density of a node by multiplexing multiple slower-speed interfaces into one higher-speed, trunk-side DWDM interface. One example is the multiplexing of four ports of SONET/SDH OC-48/STM-16 into a muxponder that yields a 10 Gbps, OC-192/STM-64 trunk-side DWDM wavelength. The Cisco 4x 2.5/10 Gbps Enhanced Muxponder Cards (DWDM tunable) support client-side interfaces of OC-48/STM-16 for SONET/SDH.

Figure 6-21 shows the concept of multiservice aggregation on the ONS 15454 MSTP platform. Using multiservice aggregation and muxponder cards, many different services can be efficiently packaged into 2.5 or 10 Gbps DWDM wavelengths (denoted by the Greek letter lambda). This includes SONET/SDH services at DS-3, OC-3, and OC-12, plus Gigabit Ethernet services carrying multimedia traffic, within lambda number 1. Also shown is the packaging of two Gigabit Ethernet interfaces, one OC-48c, or 24 Fast Ethernet interfaces into lambda 3.

Figure 6-21 ONS 15454 MSTP Service Aggregation
[Figure 6-21 shows client services feeding DWDM wavelengths: two Gigabit Ethernet, one OC-48c, or 24 x 100 Mbps interfaces feed lambda 3, while DS-3, OC-3c, OC-12c, and Gigabit Ethernet traffic (Internet, VoIP, LAN) is aggregated through ADM, DCS, Layer 2 switching, and transponder functions into a 2.5 Gbps lambda 1. A DWDM mux combines the lambdas onto a single fiber at 10 Gbps per wavelength. Source: Cisco Systems, Inc.]
These are just a sample of the cards that are usable within the ONS 15454 MSTP configuration. Most of them have other salient features, such as software-selectable forward error correction (FEC and E-FEC) and provisionable digital wrapper technology (G.709). Using a 32-channel tunable DWDM system with 10 Gbps transponder cards yields up to 320 Gbps of node capacity that can be leveraged across one fiber pair. Combine this tunable capacity with ROADM capability, automatic node setup, automatic power control, both pre- and post-optical amplification, flexible topology support, and protection mechanisms, and the result is a metro DWDM platform that is powerful, flexible, modular, and oriented toward multiservice delivery on demand. When considering solutions for metro DWDM requirements, the use of tunable components, ROADMs, integrated software intelligence, and optical power automation are among the key developments that contribute to more flexible and affordable deployment of DWDM in metropolitan optical networks. More information on the Cisco ONS 15454 MSTP can be found in Chapter 7.
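The capacity arithmetic behind these configurations is straightforward; the following sketch (illustrative values only) works out the muxponder aggregation and the 32-channel node capacity cited above:

OC48_GBPS = 2.488            # SONET OC-48/STM-16 line rate
CHANNELS = 32                # protected DWDM channels at 100 GHz spacing
PER_CHANNEL_GBPS = 10.0      # one 10 Gbps transponder per wavelength

# Four OC-48 clients multiplexed by a muxponder fit inside one 10 Gbps lambda.
print(f"4 x OC-48 = {4 * OC48_GBPS:.2f} Gbps, carried on one OC-192/STM-64 trunk wavelength")

# A fully populated 32-channel system with 10 Gbps transponders, per fiber pair.
print(f"{CHANNELS} channels x {PER_CHANNEL_GBPS:.0f} Gbps = {CHANNELS * PER_CHANNEL_GBPS:.0f} Gbps of node capacity")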
Metro Storage Networking

Storage area networks (SANs) use network-centric technologies to index, catalog, serve, and protect the distributed data of today's high-tech computing enterprises. Metro storage networking is a growing segment of metro network services. Many businesses are looking to address regulatory constraints, downtime costs, and IT processing optimization with distributed processing and SAN solutions that are geographically dispersed. Remote storage backup, storage mirroring, and distributed data centers are applications that require very high speeds for low-latency data transmission, the highest reliability, and the utmost availability. Storage protocols such as Fibre Channel, ESCON, and FICON, in conjunction with creative network transport solutions, must embody these types of communication service attributes. More than ever, customers are looking to providers to meet these challenging requirements for mission-critical data delivery.

Cisco MSPP platforms have introduced integrated feature cards, known as the SL series cards, to support storage networking and mainframe protocols. Not only are the protocols supported, but several optional features are included. These features improve network performance through encapsulation (GFP-T), concatenation within SONET/SDH (VCAT), LCAS, and enhanced distance reach through buffer-to-buffer credit spoofing—all configurable on the SL series line cards.
Fibre Channel

Fibre Channel (FC) is a serial input/output protocol for reliably connecting diverse storage systems and servers together for server-to-storage and data replication applications. Fibre Channel is an open, industry standard (ANSI X3.230-1994). As defined, the Fibre Channel architecture can use copper—either coaxial or twisted pair (at slower
speeds)—or optical fiber as a transport medium. The use of optical fiber to create Fibre Channel–based SANs is the most prominent physical media implementation. The Fibre Channel architecture supports three topologies:
• Point-to-point
• Switched fabric
• Arbitrated loop (similar to a ring-based LAN)
Fibre Channel supports data rates of 1 Gbps (1.0625 Gbps), 2 Gbps (2.125 Gbps), and 10 Gbps. With Fibre Channel, the intent is to carry data storage between diverse locations with low latency for synchronous storage applications, asynchronous storage applications, or variations of the two. The Fibre Channel architecture allows for transport layer independence, so a number of “bit carriers” are available such as the following:
• FC over leased fiber or wavelengths (DWDM/CWDM)
• FC over SONET/SDH with optional VCAT
• FC over IP over Gigabit Ethernet
• FC over a switched IP network
Distance, round-trip latency, and the characteristics of the storage application are all key inputs into choosing a transport for Fibre Channel. Metro networking platforms such as MSPPs include support for Fibre Channel interfaces/protocols, in essence helping to build the network bridge over which to shuttle mission-critical storage.

Fibre Channel-based storage switches use a technique called buffer-to-buffer credits to establish a flow control mechanism between a pair of Fibre Channel storage switches involved in a storage mirroring or backup session. Buffer-to-buffer flow control allows more Fibre Channel frames to be sent by an origination switch before an acknowledgment of those frames is required from the destination switch. This allows Fibre Channel systems to transmit at a higher rate of data utilization. When Fibre Channel storage switching is extended across metropolitan or regional networks, the round-trip latency and propagation delay can reduce the efficiency of the buffer-to-buffer flow control mechanism and lower overall storage data throughput.

Placing buffer-to-buffer credit intelligence in the transport network (for example, in the MSPP SL cards) allows the MSPP optical transport network to maintain and enhance the efficiency of storage transfer and synchronous replication. Additionally, instead of waiting for the destination Fibre Channel switch to send the acknowledgment (which is affected by round-trip delay), the MSPP SL card interface spoofs the acknowledgments, making it appear that both Fibre Channel switches are local to each other. As long as the MSPP employs the proper buffering and spoofing, Fibre Channel storage applications can operate over long distances, which is critical for data center disaster recovery and storage replication services.
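The distance sensitivity of buffer-to-buffer flow control can be approximated with simple arithmetic; the following sketch uses typical planning values (not vendor specifications) to estimate how many credits are needed to keep a 1 Gbps Fibre Channel link running at line rate across a metro span:

# Rough model: a link stays full only if the credits in flight cover the
# round-trip time. credits_needed ~= line_rate * RTT / frame_size.

LINE_RATE_BPS = 1.0625e9          # 1 Gbps Fibre Channel
FRAME_SIZE_BITS = 2148 * 8        # full-size FC frame: ~2 KB payload plus headers
PROPAGATION_S_PER_KM = 5e-6       # roughly 5 microseconds per km in fiber

def credits_needed(distance_km):
    rtt = 2 * distance_km * PROPAGATION_S_PER_KM
    return LINE_RATE_BPS * rtt / FRAME_SIZE_BITS   # frames in flight

for km in (10, 50, 100, 200):
    print(f"{km:>3} km: ~{credits_needed(km):.0f} buffer-to-buffer credits to sustain line rate")
# With too few credits the sender stalls waiting for acknowledgments, which is
# exactly what SL-card credit buffering and acknowledgment spoofing avoid.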
Figure 6-22 depicts an example of metro storage technology. In this example, several storage applications are shown, both synchronous and asynchronous. The Cisco ONS 15454 can interface the storage equipment at customer premise 1 and use the SL card intelligence to transport efficiently across the local exchange carrier (LEC), interexchange carrier (IXC), and far-end local exchange carrier SONET networks to reach customer premise 2. Various types of applications now appear local, such as the clustering of enterprise mainframe processing systems, the clustering of SANs, and remote storage mirroring to keep storage synchronized between sites.

Figure 6-22 Metro Storage Technology Example
[Figure 6-22 shows geographic and equipment views of an end-to-end metro storage service. Cisco ONS 15454 nodes at each customer premise carry Fibre Channel/FICON clients at 1.0625 or 2.125 Gbps over OC-48/OC-192 access rings and across LEC and IXC SONET interoffice facilities, supporting metropolitan synchronous and asynchronous mirroring, enterprise systems clustering, storage-area network interconnection, and remote mirroring. Source: Cisco Systems, Inc.]
Enterprise Systems Connection (ESCON)

IBM developed ESCON in 1991 as a channel protocol successor to IBM's 370 series Bus/Tag parallel channels. ESCON linked IBM mainframe processors to storage directors, storage disks, and magnetic storage tapes at almost four times previous performance rates. Until then, the maximum data rate for the copper-based Bus/Tag channel connection was 4.5 MBps. The introduction of ESCON used optical fiber and optical lasers to increase the IBM data rate specification to 17 MBps and to extend channel distances into the dozens of kilometers. Because of ESCON, a data center no longer was synonymous with a machine room.
NOTE
ESCON uses megabytes per second (MBps), which is the way data is referenced in the mainframe processing industry. The term 17 MBps refers to the rate at which a high-end, 1990 vintage mainframe processor could move data from the ESCON transmission link to processor memory/storage.
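As a rough companion to the note above, the arithmetic below shows how the 200 Mbps serial line rate described in the next paragraph yields on the order of 17 MBps of delivered data. The 8b/10b line-coding factor is standard for ESCON; the protocol-efficiency figure is an illustrative assumption rather than an IBM specification.

# Back-of-the-envelope: connecting a 200 Mbps ESCON line rate to ~17 MBps of data.
link_rate_mbps = 200          # serial line rate on the ESCON fiber
coding_efficiency = 8 / 10    # 8b/10b coding: 10 line bits carry 8 data bits
protocol_efficiency = 0.85    # assumed framing/handshake overhead (illustrative)

raw_mbytes = link_rate_mbps * coding_efficiency / 8    # 20 MBps of raw payload capacity
effective_mbytes = raw_mbytes * protocol_efficiency    # roughly 17 MBps delivered

print(f"raw payload capacity: {raw_mbytes:.0f} MBps")
print(f"approximate effective rate: {effective_mbytes:.0f} MBps")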
The actual link rate over the ESCON fiber cable is serial transmission at 200 Mbps, and ESCON is half duplex in nature. The older IBM Bus/Tag channel design used multiple copper wires within the bundle, delivering eight simultaneous bits of data along with a plethora of control signals, thereby earning the designation "parallel channel." The introduction of ESCON and optical laser transmission moved these IBM channels into a parallel-to-serial-to-parallel conversion effort from mainframe to ESCON channel to storage device, a departure from the previous ten years. For ESCON, a high bit rate was needed for the serial transmission to outperform the parallel channel architecture.

ESCON was arguably the industry's first SAN. ESCON is still used in many data centers and for distributed storage and parallel mainframe processing applications. As such, ESCON is a frequent customer requirement that must be networked across a metro, a region, or the nation. MSPP and MSTP platforms generally support the ESCON channel protocol.

Many data centers also used multiple mainframes to optimize overall processing and storage, linking them together in processing clusters that could share storage through storage directors. IBM calls this a parallel sysplex. With business continuance trends and company mergers/acquisitions, many of these parallel-processing clusters required separation and/or extension across provider networks. Protocols for timing references need to be passed between geographically diverse clusters in addition to ESCON data. Within IBM terminology, these protocols are External Timing Reference/Control Local Oscillator (ETR/CLO) and IBM InterSystem Coupling (ISC-1). You will often see these features listed as supported interfaces/protocols on metro networking platforms. Often the IBM reference of GDPS, for Geographically Dispersed Parallel Sysplex, is listed. GDPS in this sense is an umbrella solution including a few protocols. GDPS often
implies the concurrent use of ESCON/FICON, ISC, and ETR/CLO. Figure 6-23 shows the concept of multisite data centers and disaster recovery. The figure shows the metropolitan MSPP integration of multisite mainframe processing and storage using IBM ESCON and Fibre Channel (FC) to reach FC-based storage systems, and Gigabit Ethernet to reach IP-based storage servers and hosts.

Figure 6-23 Multisite Data Centers and Disaster Recovery

(The figure shows a main data center, a standby data center, and a small data center, each with an MSPP node connecting mainframes, storage servers, and hosts over ESCON, FC, and Gigabit Ethernet links. Source: Cisco Systems, Inc.)
Fiber Connection (FICON)

IBM later developed FICON as a next-generation channel protocol between mainframes and storage devices, achieving higher speeds and greater distances than its predecessor, ESCON. FICON uses a mapping layer similar to Fibre Channel. This layer allows for the multiplexing of small data transfers that can be interleaved with large data transfers, improving their latency. FICON is also a full-duplex transmission protocol. FICON was introduced in 1999 with an optical link transmission rate of 1 Gbps. This was followed shortly by FICON Express, a FICON enhancement that could autonegotiate at 1 Gbps or 2 Gbps. Using FICON at a 1 Gbps link rate could yield up to 100 MBps of IBM
data transfer. Using FICON Express at a 2 Gbps link rate could deliver sustained data rates in the range of 150 to 170 MBps between large servers and storage controllers. FICON is very similar to Fibre Channel in speed and data mapping, allowing both protocols to be easily supported on the same line card within an MSPP platform. FICON will likely make the jump to 10 Gbps data rate support.

These capabilities enable an emerging market of storage service providers. The creation of storage farms within a SAN can fit into the metro market as a service POP connected to the metro core network. Storage backup, server replication, storage offload, storage databases, and a number of other fee-based applications are then accessible by the larger business market. Table 6-8 lists a few of the attributes of the Fibre Channel, ESCON, and FICON protocols.
Table 6-8 Comparing Fibre Channel, ESCON, and FICON

Optimum mainframe channel-link to memory data rate
• Fibre Channel: Similar to FICON
• ESCON: Up to 17 MBps
• FICON: Up to 100 MBps (FICON); up to 170 MBps (FICON Express); up to 270 MBps (FICON Express2)

Optical link transmission rate
• Fibre Channel: 1.0625 Gbps, 2.125 Gbps, 10 Gbps
• ESCON: 200 Mbps
• FICON: 1.0625 Gbps (FICON), 2.125 Gbps (FICON Express)

Supported transport
• Fibre Channel: Fiber, wavelength, SONET/SDH, Ethernet, switched IP
• ESCON: Fiber, wavelength, SONET/SDH
• FICON: Fiber, wavelength, SONET/SDH, Ethernet, switched IP

ONS 15454 line card
• Fibre Channel: SL series
• ESCON: SL series
• FICON: SL series

Duplexing features
• Fibre Channel: Full duplex; also supports iSCSI
• ESCON: Half duplex
• FICON: Full duplex

IBM channel program execution
• Fibre Channel: N/A
• ESCON: One channel program for one control unit at a time
• FICON: Multiplexing of multiple channel programs for one or more control units at a time

Distances via Cisco networking solutions
• Fibre Channel: 2300 km @ 1 Gbps; 1150 km @ 2 Gbps
• ESCON: Dependent on application; the slower ESCON speed limits the latency bounds, thus distance
• FICON: 2300 km @ 1 Gbps; 1150 km @ 2 Gbps

Application targets
• Continuous availability
• Multisite parallel processing
• Storage mirroring—synchronous/asynchronous
• Near-transparent disaster recovery
• Networking of diverse storage platforms
Technology Brief—Metropolitan Optical Networks

This section provides a brief study on metropolitan optical networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding metropolitan optical networks.
• Technology at a Glance—Uses figures and tables to show metropolitan optical networking fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint

In the new era of networking, communications never sleeps as humans drive data from peak hours of 7 a.m. to 7 p.m., and machines drive data during off-peak hours. Today's metropolitan networks are assimilating all communications and network types—voice, video, and data—into urban webs of glass and light, mixed with traditional copper conduit
and electron energy. Business networks hub and hum from metropolitan areas, the source and supply of both their workforce and their market revenues. For business and industry, communications by voice came first, then business data, then video, then Internet data. For consumers en masse, wireline voice communications arrived first, followed by video (television), then wireless voice, and then broadband Internet data. Relative to broadband data, the term MAN is scarcely over ten years old. Metropolitan optical networks are traveling farther, paralleling the transportation and utilities infrastructures, following individual users to their homes with arguably unlimited broadband for personal and business communications.

Technology innovation affects metro networks the most, as adopters of new technology encircle and punch the metro network for ever-faster messaging, productivity, and pleasure. Twenty-first century innovations are transforming switching techniques from circuit to packet, displacing electrical signal conduits with optical pathways, moving from time and distance rates to bandwidth and services billing, and converging applications, services, and networks. Accessibility approaches seamlessness, concurrently reaching any service from any device—anywhere.

For metro networking providers, there is increased emphasis on lowering costs while growing revenues, maximizing reuse of technology, and making selective capital expenditure upgrades. Evolution to next-generation SONET/SDH is well underway to improve data and SAN support, and to add support for Ethernet, a potentially ubiquitous access technology in the metro. Providers are also deploying convergence technologies with which to streamline network and services infrastructure. This will allow for the optimization of multiple services, both circuit and packet based. Other considerations for metro providers are to
• Position for Ethernet, wavelength services, storage services, and IP telephony services for businesses
• Position for Ethernet, IP telephony services, storage services (audio and video), and video delivery for residential consumers
• Find points of productive contribution for CWDM and DWDM
• Choose PON architecture wisely
• Consider any-to-any, carrier class services over any optical technology
• Converge to one network infrastructure
• Focus on revenues from new services, while maintaining focus on cost reduction for legacy services
• Position network infrastructure for modular network reconfigurability, service aggregation, and scale
• Drive service convergence toward the delivery of any service over any device and access method, anywhere
These considerations can help metro systems providers and operators move to a more "bang for the buck" strategy, reducing capital expenditures and operational expenditures in proportion with selective spending, and adding those services and differentiators that uphold or lift revenue margins. Service providers and operators prefer options, options, and more options when selecting vendor products with which to build, enhance, and periodically morph their metropolitan network services.

Innovations in metro technology continue to stoke the fires of change. Technologies such as SONET/SDH are enhanced, DWDM is metro optimized, Ethernet is pervasive, and IP has reached carrier class. Storage networking vaults the walls of the mainframe cathedrals and distributes farther into the long-haul, metropolitan, and business networks. Next-generation SONET/SDH provides new optimized capabilities to better deal with data growth. Metro DWDM raises the asset valuation of a provider's high-value fiber infrastructure. The use of metro DWDM is about service creation and creating scale to keep pace with user demand. The principal driver for Ethernet in the metro is the demand for IP, because IP powerfully addresses the convergence of customer communications. Packet networking has always been IP centric, and new high-availability features have moved IP to the center of provider networks.

Metropolitan optical networks will continue advances in speed, function, intelligence, and security to form high-speed data bus-backplanes capable of being used by peer-to-peer, distributed, and cooperative computing models. As optical networking further pervades the metropolitan infrastructure, new processing architectures should appear. Centralized computing models will be hollowed out, moving some calculations, content, storage functions, and network processes beyond the data center/LAN and into the MAN and WAN—doing so in search of new cost efficiencies, multiapplication integration, business resilience, and most of all, distinctive customer value. Using optical networks, distributed computing goes deeper, storage goes further, graphics go faster, and bits run cooler.

The Internet at large is principally responsible for digitally urbanizing a society that prefers geographic buffer. In a milli-instant, computer users gain access to urban communication services and/or punch photonic tickets for cabin rights over the long-haul optical rails. Just a click of a key, the flight of a packet, the inaudible wisp of data access, a photonic pulsation, and a flash of phosphor—a question's answer appears with little energy wasted or time lost. A quote or purchase is transacted without predicated meeting or visitation within scheduled hours. Cash and coin move without physical shuffling and counting. Whether for business or pleasure, metropolitan networks are the launchpad and the landing strip of digital transactions and electromagnetic communications.

Metropolitan optical networks are, therefore, reaching farther to accommodate the broadband communication needs of the sprawling urbanization of people and business. Still, the majority of the Internet and its content are resident within the metropolitan areas of the world. Seemingly, as the Internet grows larger, the world's metropolises grow logically smaller, contracting into a virtual wormhole of communications that time, distance, and economics can no longer separate.
Technology at a Glance

Figure 6-24 shows a functional design of a metropolitan optical network.

Figure 6-24 Metropolitan Optical Network Design

(The figure spans metro access, metro edge, metro core (IOF) and regional core, and long-haul segments, populated with MSPP, MSSP, and MSTP platforms and a 32-wavelength, 2.5/10 Gbps core. Source: Cisco Systems, Inc.)
Figure 6-25 illustrates the metropolitan optical network platforms. Figure 6-26 shows metropolitan connectivity in U.S. cities. Each circle represents a separate metro area and is scaled according to the number of MAN providers offering service. Metro areas with fewer than two MANs are not shown.

Figure 6-25 Metropolitan Optical Network Functional Platforms

(The figure positions the ONS 15530/540, ONS 15600 MSSP, ONS 15454 MSPP/MSTP, ONS 15327 MSPP, and ONS 15302/305/310-CL MSPP platforms, along with the Cisco 12000, 10000, 7600, 6500, MGX8850, and MGX8950 service POP platforms, across the metro access, metro edge, metro core, service POP, and long-haul segments, fed by Ethernet, FSO, DSL, cable, DS1/DS3, OC-n/STM-n, wireless, video, ATM, and storage services.)

Figure 6-26 Metropolitan Connectivity: U.S. Cities

(Source: © 2005 PriMetrica, Inc.)
Table 6-9 compares metropolitan optical technologies.

Table 6-9 Metropolitan Optical Technologies

Processor architecture technology
• MSTP: ONS 15454; XC10G and XC-VXL-10G and 2.5G cross-connects; nonblocking XC and XCVT at VC4-Xc and VC12/3-Xc (future)
• MSPP: ONS 15454 SONET/SDH, ONS 15327 SONET, ONS 15310 SONET, ONS 15305 SDH, ONS 15302 SDH; XC10G and XC-VXL-10G and 2.5G cross-connects; nonblocking XC and XCVT at VC4-Xc and VC12/3-Xc (future)
• MSSP: ONS 15600 SONET/SDH; core cross-connect CXC or SSXC, 320 Gbps fabric; multishelf up to 5-terabit scalability

Backplane switching speed range
• MSTP: Backplane switching; same as MSPP plus DWDM transponders and muxponders up to 10 Gbps per lambda
• MSPP: Backplane switching; 240 Gbps total; data plane 160 Gbps, SONET plane 80 Gbps; 288 STS-1 and 672 VT1.5 to 1152 STS-1 and 672 VT1.5; 10 DCC to 32 DCC (84 DCC in a future software release)
• MSSP: Backplane switching; 40 Gbps per slot x 8 slots; STS switching fabric at 320+ Gbps; 6144 STS-1 to 2048 OC-48 switching capacity

Interface speed support
• MSTP: T1/E1 (DS0/DS1); T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), 10 Gigabit Ethernet (10 Gbps); E100T-12/E100-12-G; E1000-2/E1000-2-G; CE-100T-8; G1000-4/G1K-4; ML100T-12/ML1000-2; Fibre Channel 1 Gbps and 2 Gbps; FICON up to 2 Gbps; ESCON up to 2 Gbps; D1 video, HDTV; 2.5 Gbps; 10 Gbps; 40 Gbps (future)
• MSPP: T1/E1; T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), 10 Gigabit Ethernet (10 Gbps); E100T-12/E100-12-G; E1000-2/E1000-2-G; CE-100T-8; G1000-4/G1K-4; ML100T-12/ML1000-2; Fibre Channel 1 Gbps and 2 Gbps; FICON up to 2 Gbps; ESCON up to 2 Gbps; D1 video, HDTV; 2.5 Gbps; 10 Gbps; 40 Gbps (future)
• MSSP: T1/E1; T3/E3; OC-3/STM-1; OC-12/STM-4; OC-48/STM-16; OC-192/STM-64; OC-768/STM-256

Key capacities
• MSTP: Total capacity per fiber pair (bit-rate x lambdas): DWDM 100 GHz, 10 Gbps x 32 = 320 Gbps; DWDM 50 GHz, 10 Gbps x 64 = 640 Gbps; 12 lambdas per shelf; 12 x 2.5 Gbps multirate transponders; 12 x 10 Gbps multirate transponders; 12 x 4x OC-48 muxponders; 12 x 1 Gbps or 24 x 2 Gbps Fibre Channel data muxponders
• MSPP: 140 x 2 Mbps; 120 x 45 Mbps; 120 x 34 Mbps; 48 x STM-1; 12 x STM-4; 12 x STM-16; 4 x STM-64; nonblocking VC-4 cross-connection capacity (line/line, trib/trib, line/trib); HO cross-connect size 384 x 384 VC-4; up to 5 rings supported per system—4 SNCP and 1 MS-SPRing, or 5 SNCP
• MSSP: 3,072 STS-1 bidirectional cross-connects; 64 OC-48, 16 OC-192; 64 UPSR/SNCP, with any combination of UPSR/SNCP, BLSR/MS-SPRing, and 1+1 APS/MSP mixed within allowable maximums; uni- and bidirectional cross-connection; 16 two-fiber BLSR/MS-SPRing; 64 1+1 APS/MSP, uni- or bidirectional; path-protected mesh network (PPMN)

Bandwidth range
• MSTP: Narrowband to broadband to 10 Gbps per lambda
• MSPP: Narrowband to broadband to 10 Gbps
• MSSP: Broadband switching to 40 Gbps

Service provider applications
• MSTP: Metro edge, metro core, metro regional platforms; terminal mode; regenerator mode; DWDM line amplifier; DWDM ADM, passive and active; DWDM hub and DWDM terminal; wavelength multiplexer; DWDM topologies: linear, point-to-point, mesh, ring, star/hub; virtual rings; hybrid SDH network topology; VRF-Lite on ML series
• MSPP: Metro edge, metro core, metro regional platforms; metro access for the 15300 and 15200 series; linear ADM; multiservice bandwidth aggregation; fiber relief; two-fiber UPSR/SNCP/BLSR; four-fiber BLSR; two-fiber and four-fiber MS-SPRing; extended SNCP; PPMN; multiring interconnection; VRF-Lite on ML series
• MSSP: Multiring (mixed UPSR/SNCP, BLSR/MS-SPRing, and 1+1 APS/MSP); digital cross-connect; mesh; linear add/drop multiplexer; regenerator; two-fiber UPSR/SNCP/BLSR; four-fiber BLSR; two-fiber and four-fiber MS-SPRing; PPMN

Provider and customer applicability
• MSTP: Ethernet over DWDM; storage over DWDM; wavelength and subwavelength services; service aggregation; WAN aggregation; IP/VPN; storage-area networks; disaster recovery; Internet access; Transparent LAN Services (TLS) platform; campus and university backbone network; regional optical networks
• MSPP: SONET/SDH ADM and DXC; Ethernet access; Transparent LAN Services (TLS) platform; business transport network; distributed bandwidth manager; voice-switch interface; colocation DSLAM and voice aggregator and transport system; cable TV (CATV) transport backbone network; wireless cell site traffic aggregator; high-speed ATM/router link extender
• MSSP: SONET/SDH ADM, BBDXC replacement, aggregation, and TDM switching; storage-area networks; exchange/central-office colocation and interface to long-haul optical core networks; metropolitan video transport, data, and voice optical backbone networks; metro core and service POP switching; MSPP metro ring aggregation; circuit-to-packet transition
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance

Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer's value distinction. The following chart lists typical customer business drivers for the subject classification of network. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider's critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers. Figure 6-27 charts the business drivers for metropolitan optical networks.

Figure 6-27 Metropolitan Optical Networks
(The chart flows from business drivers—broadband to the user; bandwidth-intensive applications such as e-commerce, storage networking, distance learning, and medical imaging; high-availability IP networking; low cost; IP voice communications; and greater service variety/granularity—through critical success factors, technology, the Cisco product lineup, and applications, to service value offerings such as managed wavelength, metro Ethernet, storage, voice/VoIP, IP/VPN, and video services, with key service providers and equipment manufacturers listed as industry players.)
This chapter covers the following topics:

• Understanding Long-Haul Optical Networks
• Extended Long-Haul Optical Networks
• Ultra Long-Haul Optical Networks
• Submarine Long-Haul Optical Networks
• Optical Cross-Connects (OXCs)
CHAPTER 7

Long-Haul Optical Networks

Light has an information-carrying capacity that is 10,000 times greater than the highest radio frequencies in the electromagnetic spectrum. Nowhere is capacity more important than in long-haul networks, where communications converge between great masses of people. Since the 1980s, long-haul networks have followed the light to create today's capacious long-haul optical networks. Long-haul optical networks benefit from two complementary ascendancies: the speed of optical modulation and the density of optical wavelengths. Both perpetuate the intrinsic capacity of optical fiber, the essential looking glass of long-haul networking. Long-haul optical networks are classified based on achievable distance without signal regeneration. Long haul, extended long haul, and ultra long haul are distinct designations for long-distance optical networking.
Understanding Long-Haul Optical Networks

Long-haul optical networks are at the core of global information exchange. Their primary application is the transport of voice, video, and data communications between distant city pairs. Capacity is the defining scarcity because long-haul fiber deployment is a long-term capital asset, often exceeding 20 to 30 years. Before the commercial viability of wavelength division multiplexing (WDM), the only way to address long-haul capacity was through bit-rate increases, the "lighting" of additional dark-fiber strands, or the deployment of new fiber at great expense. As long-haul networks approached fiber exhaust during the boom years of the Internet, WDM technology delivered on the promise of capacity abundance.

Interexchange carriers (IXCs) were the first to deploy optical fiber cable in their long-haul networks for transport of long-distance voice traffic. The IXCs had the most to gain by future-proofing bit-rate increases within the fiber medium while benefiting from extremely low bit-error rates. While AT&T began migration of the long-distance network to fiber, MCI and Sprint built optical fiber networks from the start, counting on optical technology to provide a competitive edge in the long-distance market.

Metropolitan networks feed long-haul optical networks. Today's all-fiber, metropolitan, local access optical networks are generally less than or equal to 100 km, or about 62 miles. Normally, these access networks are unamplified, making use of lower-cost passive optical
components and pluggable ITU optics. Metropolitan core networks aggregate these access networks together for the local service provider and are positioned between access networks and long-haul networks. (See Chapter 6, “Metropolitan Optical Networks,” for a full description.) Many metropolitan networks are growing to accommodate expanding metros, and metropolitan core networks up to 600 km in circumference are increasingly common. Also, new and existing providers, utilities, and research networks are deploying regional optical networks, usually in the 300–1000 km range. The result is a number of optical networks that cross the line between metropolitan and long-haul distinctions. A number of optical references and optical equipment vendors classify optical networks based on the following:
• Long-haul optical networks—typically from 600 to 1000 km
• Extended long haul—from 1000 to 2000 km
• Ultra long haul—from 2000 to 4000 km
Of note is that older-generation equipment needed optical-to-electrical-to-optical (OEO) regeneration by the 600 km mark to preserve signal integrity. From that historical reference, a typical long-haul network span was considered to be 600 km between optical terminals—that is, 600 km between optical signal regenerations. That has changed with an extension of unregenerated distances from a number of recent optical innovations such as fiber optimizations, lower-noise amplifiers, advanced error correction codes, and improved laser and photonic receiver tolerances.

Given that metropolitan and purpose-built regional networks are blurring the field, this chapter assumes a more contemporary technology assertion that long-haul networks begin at 600 km and are further classified into extended long haul and ultra long haul. For a specific distance classification with respect to extended and ultra long-haul optical networks, the range of 600 to 1000 km is assumed for long-haul optical networks. The oft-quoted long-haul distance of 600 km is very appropriate for Tier 1, city-pair reachability in densely populated northeast America and in Europe, and therein lies a distinction: long-haul networks aspire to interconnect major cities on an interstate or intercountry basis, even to multinational and international scale. Figure 7-1 depicts these long-haul optical network classifications.

Long-haul enabled services are addressing near-term requirements for 2.5 and 10 Gbps traffic profiles including SONET/SDH (OC-48/STM-16 and OC-192/STM-64), Ethernet LAN and WAN PHYs at 2.5 and 10 Gbps, 2.5 Gbps and 10 Gbps Fibre Channel, and DWDM ITU grid wavelengths. Many of the long-haul optical platforms have strategic plans for 40 Gbps bit-rate services when needed.
Figure 7-1 Long-Haul Network Classifications

(The figure plots network classification against unregenerated distance: long haul from 600 to 1000 km (372 to 620 miles), extended long haul from 1000 to 2000 km (620 to 1240 miles), and ultra long haul and submarine out to 4000 km (2480 miles).)
Examples of long-haul applications enabled by these technologies include
• Extended metropolitan and regional 10 Gigabit Ethernet connectivity—As providers expand their networks and geography, the need for higher-capacity Ethernet services provides lower per-bandwidth costs while using customer-friendly network interfaces.
• High-speed transport of SONET/SDH to and from transoceanic or submarine cable landing points of presence—Long-haul terrestrial networks often interconnect with multiple international network connection points for the transport of international communication traffic.
• Regional transport networks for convergence of overlay network platforms—Many purpose-built network backbones are being consolidated to higher-capacity optical transport networks to optimize network expense and add new services.
• High-speed terascale and supercomputing, disaster recovery, storage mirroring, and high-speed multimedia distribution—High-speed optical networks are the computing bus backbones for advanced computing applications that must collaborate over a distributed geography.
Long-haul networks must use single-mode fiber to reach protracted distances, using a single wavelength, such as 1310 or 1550 nanometers (nm), or multiple wavelengths via DWDM in the 1550 nm window. Early DWDM systems were heralded based on their performance and raw capacities. The contemporary DWDM driver is lowering the total cost per bit while remaining flexible in services and scalable in capacity.
Networks of Nodes

Long-haul, extended long-haul, and ultra long-haul optical system topologies are built by connecting optical nodes to optical fiber cable spans, often deployed as point-to-point terminals, mesh designs, or very large rings. Long-haul optical systems possess the flexibility to perform several functions that collectively perform end-to-end long-haul transmission. An individual optical node can be configured as one of the following:
• Terminal node—The beginning or the end of a long-haul fiber route. Terminal sites multiplex and amplify incoming client-side signals and transmit them to the long-haul optical fiber span. The receiving terminal site demultiplexes the long-haul composite DWDM signal into client-side signals and sends them to the client devices. Terminal sites are generally the bookends, east to west, of a provider's area of coverage; that is, DWDM signals aren't passed through a terminal site. The west terminal sends DWDM signals toward the east terminal site, and the east terminal sends DWDM signals toward the west terminal site. A terminal site originates and terminates long-haul DWDM signals.
• Hub node—A site configured as a hub node is often positioned between a pair of terminal sites. The hub node is one that demultiplexes all DWDM channels from a particular east or west direction, transponds (converts) them, and sends them to client-attached equipment (off of the hub node) on the receive path. The hub node also receives directly attached client transmit path signals and transponds these into an all-channels DWDM signal that is transmitted from the hub node in the appropriate east or west trunk direction.
• Optical line amplifier node—Usually placed between a pair of terminal sites, this node amplifies the optical signal without demultiplexing it or regenerating it. In addition to reamplifying the optical signal (a 1R function), it generally performs other functions such as dispersion compensation, band separation, splitting and combining (for proper amplification purposes), and relaunching the signal onto the next fiber span. The distance between terminal sites and other factors such as fiber type and the resulting optical power budget determine the quantity of optical line amplifier sites needed for a particular long-haul optical network (see the sketch after this list).
• Optical add/drop multiplexing (OADM) node—An OADM site can add and drop optical wavelengths as necessary while allowing other wavelengths to pass through the site on the express path. OADMs drop specific wavelengths, add specific wavelengths, and often amplify all transported channels. OADM nodes can be passive, amplified (active), or configured as anti-amplified spontaneous emission (anti-ASE) nodes. OADM nodes differ from hub nodes in that they add or drop a subset of the DWDM channels, whereas hub nodes effectively add and drop all DWDM channels passing through the hub node. A new family of OADMs called reconfigurable OADMs (ROADMs) is distinguished from fixed OADMs through the use of software-reconfigurable properties. ROADMs speed service times through remote provisioning.
• Regeneration node (RN)—Optical regeneration sites demultiplex, regenerate, and remultiplex optical wavelengths when the distance between terminal sites is too long for the fiber span's optical power budget. This regeneration function is important to retime, reshape, and regenerate (3R) the optical signal and is referred to as optical-to-electrical-to-optical (OEO) conversion. This is where many transponders are frequently used, increasing the cost of the overall optical system. Regeneration sites can also add and drop wavelengths when many channels need to be exchanged. Minimizing regeneration sites is an important design detail for long-haul optical networks and is key to the further classification of extended long-haul and ultra long-haul networks.
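As referenced in the optical line amplifier item above, here is a rough sketch of how a span's optical power budget translates into a count of line-amplifier sites. All numbers are illustrative, textbook-style assumptions (about 0.25 dB/km of fiber loss at 1550 nm, roughly 22 dB of usable amplifier gain, and a few dB of margin), not the parameters of any particular platform.

import math

def amplifier_sites_needed(span_km, fiber_loss_db_per_km=0.25,
                           amp_gain_db=22.0, margin_db=3.0):
    # Rough count of in-line amplifier sites for an unregenerated span.
    # Fiber loss, amplifier gain, and margin are illustrative assumptions.
    reach_per_section_km = (amp_gain_db - margin_db) / fiber_loss_db_per_km
    sections = math.ceil(span_km / reach_per_section_km)
    return max(sections - 1, 0)   # amplifiers sit between sections; terminals bound the ends

for km in (300, 600, 1000):
    print(f"{km:>4} km span -> about {amplifier_sites_needed(km)} optical line amplifier sites")

With these assumed values, amplifier sites land roughly every 75 to 80 km, so a 600 km long-haul span needs on the order of seven line-amplifier sites between its terminals.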
Figure 7-2 shows the concept of these optical nodes, each configured to perform particular functionality in a long-haul optical network.

Figure 7-2 Long-Haul Network Optical Nodes

(The figure shows west and east terminal nodes joined by fiber strands, with optical line amplifier nodes, an OADM/ROADM node, and a regeneration node positioned along the span.)
As long-haul systems are driven to the lowest cost per bit, high-cost functions like OEO regeneration should be mitigated as much as possible. A 1200 km system design would theoretically require an OEO regeneration node in the middle, using transponders and electrical components to regenerate a single wavelength, such as the common 1310 nm wavelength used in single-mode fiber. Transponders in particular are complex systems incorporating wideband photodetectors, filters, electronic circuitry, and lasers for ITU-T grid wavelength accuracy, which work collectively to provide the necessary OEO function.
NOTE
Transponders represent a significant portion of the cost in optical systems. If you assume a cost of $25,000 per transponder in a 32-channel DWDM system, then 64 transponders are necessary to regenerate the 32 DWDM wavelengths from east to west and vice-versa. This represents a $1.6 million investment at a minimum in an optical regeneration node. This fiscal aberration is why there is such a design focus on deriving as much optical signal distance as possible between regeneration functionality in long-haul optical networks.
To address fiber exhaust, just add DWDM to this design to increase the wavelength/channel count and the resulting capacity. Each DWDM wavelength, however, requires regeneration at the 600 km mark, increasing the regeneration cost of a 32-channel DWDM system to conceptually 32 times that of the single-wavelength system, because a transponder is needed for each DWDM wavelength. By this plan, each added DWDM channel causes OEO regeneration costs to multiply, forcing economic limits on DWDM designs by attempting to stay under the 600 km threshold. This constraint was addressed both technically and financially with the advent of in-fiber optical amplifiers doped with rare earth fluorescing elements such as erbium-doped fiber amplifiers (EDFAs). A single optical fiber amplifier boosts the photonic power signature of a wide range of DWDM wavelengths.
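The arithmetic behind this cost multiplication is simple enough to sketch. The $25,000-per-transponder figure comes from the note above; the route length and the 600 km regeneration spacing are illustrative assumptions.

# Arithmetic behind the regeneration-cost concern. Route length and spacing
# are illustrative; the per-transponder cost is the figure quoted in the note.
cost_per_transponder = 25_000
channels = 32
route_km = 2400
regen_spacing_km = 600                           # older-generation OEO reach

regen_nodes = route_km // regen_spacing_km - 1   # regenerators between the two terminals
transponders_per_node = channels * 2             # one per wavelength, east and west
total_cost = regen_nodes * transponders_per_node * cost_per_transponder

print(f"{regen_nodes} regeneration nodes x {transponders_per_node} transponders "
      f"= ${total_cost:,} in OEO regeneration alone")

Under these assumptions, even a modest 2400 km, 32-channel route needs several regeneration nodes and several million dollars of transponders, which is exactly the cost that optical amplification and extended-reach designs set out to avoid.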
NOTE
The term wavelength often refers to wavelength division multiplexing (WDM) as specified by the ITU-T wavelength grid. Wavelengths are uniquely referenced via specific length (nanometers) or frequency (terahertz), such as 1553.33 nm or 193.00 THz. Optical product documentation often refers to a group of wavelengths as channels or channel counts. The term wavelengths is often used interchangeably with the terms lambdas, channels, and colors.
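The equivalence between the frequency and wavelength views of a channel is a one-line calculation. The sketch below uses only the speed of light; the sample channels are illustrative.

# Converting between the frequency (THz) and wavelength (nm) views of an
# ITU-T grid channel.
C = 299_792_458  # speed of light in m/s

def thz_to_nm(freq_thz):
    return C / (freq_thz * 1e12) * 1e9

def nm_to_thz(wavelength_nm):
    return C / (wavelength_nm * 1e-9) / 1e12

print(f"193.00 THz = {thz_to_nm(193.00):.2f} nm")    # ~1553.33 nm, as in the note
print(f"1530.33 nm = {nm_to_thz(1530.33):.2f} THz")  # ~195.90 THz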
Cisco Long-Haul Technologies

Since 1999, Cisco Systems has invested time and talent into the service provider market for long-haul optical networks. Two long-haul DWDM products—the Cisco ONS 15808 DWDM System and the Cisco ONS 15454 MSTP System—are discussed here for the purpose of establishing Cisco's point of entry into the market, initiating optical learning within a product context, and as a baseline for discussion of long-haul optical networks.
Cisco ONS 15808 DWDM System

Cisco entered the long-haul DWDM market in December 1999 through the company's acquisition of the Pirelli Optical Systems SpA division, an Italy-based optical networking
manufacturer. The original Cisco ONS 15800, ONS 15801, and the follow-on Cisco ONS 15808 DWDM system are based on technology obtained through that acquisition. The ONS 15808 product is an example of a point-product DWDM system—that is, a system that has been architected to serve a single purpose in an optical network. The ONS 15808's purpose is to expand the capacity of long-distance fiber through a high-wavelength-count, ITU grid, DWDM multiplexing architecture. To connect to metropolitan networks, the ONS 15808 interfaces with a number of SONET/SDH platforms and add/drop multiplexers (ADMs), ATM switching platforms, and IP routing platforms. The ONS 15808 system allows service providers to maximize the use of previously installed fiber in their long-haul optical networks while minimizing cost per bit per kilometer.
NOTE
Cisco has announced end of sale (EoS) and end of life (EoL) for the ONS 15808 DWDM Platform. According to Cisco.com, Cisco Technical Assistance Center (TAC) will continue to support customers that have active service contracts until February 28, 2010. To find more information about the EoS/EoL announcement for the Cisco ONS 15808 DWDM Platform, go to Cisco.com and search keywords ONS 15808 EoL. This chapter describes the Cisco ONS 15808 because it is an important backdrop to the Cisco ONS 15454 MSTP System, which you learn more about in the next section.
Technology enhancements in many of the Cisco Systems products allow for tighter channel spacing, higher channel capacity, higher bit rate, and greatly extended transmission distances, which enable a multichassis system such as the ONS 15808 to scale to more than 160 channels (with future growth to more than 300 channels) and to 40 Gbps transmission rates. This DWDM system uses multiple transmission windows, both in the conventional C band (80 wavelengths) and the long (L) band (40 wavelengths), creating up to 120 wavelengths with 50 GHz interchannel spacing. The system is applicable to both long-haul and extended long-haul optical designs. The Cisco ONS 15808 long-haul application is optimized for fiber spans up to 600 km. In the C band, various channel plans may be selected, such as 80, 60, 54, and 40 channels. To control the overall system cost, many vendors of optical equipment use ITU-T grid spacing, such as 50 GHz on the optical span transmission/trunk side, but will demultiplex, drop, add, and multiplex signals within the node at a greater interchannel spacing such as 100 GHz. Separating the channels into even and odd wavelength sets is one way to automatically achieve this interchannel spacing. The ONS 15808 uses this engineering practice. This reduces costs by confining the tight-precision, higher-cost lasers to the transmission/trunk output of the node while using lower-cost, adequate precision lasers and receivers internal to the node to accomplish complete functionality.
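A small sketch of the even/odd practice described above: channels on a 50 GHz trunk grid are split into two interleaved sets, so that components inside the node only ever see 100 GHz spacing. The 193.1 THz anchor and the channel count are illustrative.

# Even/odd interleaving: a 50 GHz trunk grid split into two 100 GHz sets.
anchor_thz = 193.1
grid_thz = 0.05                                   # 50 GHz trunk spacing
channels = [anchor_thz + n * grid_thz for n in range(-8, 8)]

odd_set = channels[0::2]    # every other channel: 100 GHz apart
even_set = channels[1::2]   # the interleaved complement, also 100 GHz apart

print("odd-set spacing :", round((odd_set[1] - odd_set[0]) * 1000), "GHz")
print("even-set spacing:", round((even_set[1] - even_set[0]) * 1000), "GHz")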
The ONS 15808 additionally supports an extended long-haul application for spans up to 2000 km without regeneration. This is accomplished through the use of both the L band of the infrared spectrum and a hybrid of EDFA plus distributed Raman amplification. The L-band region, between 1575 and 1601 nm, has higher intrinsic dispersion than the C band, and this dispersion helps keep nonlinearities from impairing the signal at high bit rates over extended distances. Also, adding Raman amplification assists with extending the signal reach. Raman amplification pumps energy backward along the fiber's optical signal, transferring photonic energy to the counter-propagating signal. The Raman amplification technique produces less overall optical noise, allowing the optical system to

• Reach extended distances without OEO regeneration
• Reach longer distances between optical amplification sites
The smaller accretive (accumulated) noise figure of the Raman amplification technology also contributes to tighter channel spacing and higher data rates. Raman amplification is further discussed in the section "Extended Long-Haul Optical Networks" later in this chapter. As a dual-band (C + L) wavelength system, the ONS 15808 has the general optical characteristics listed in Table 7-1.

Table 7-1 ONS 15808 General Optical Characteristics

Wavelength range
• C band: 1529 to 1561 nm
• L band: 1575 to 1601 nm

Minimum ITU-T grid spacing
• C band: 50 GHz
• L band: 50 GHz

Bit rate per slot
• C band: 2.5 Gbps, 10 Gbps
• L band: 10 Gbps

Maximum number of channels in a multichassis system configuration
• C band: 80
• L band: 40

Supported fiber types
• C band: Lucent TrueWave-Reduced Slope (TW-RS), Corning Extended Large Effective Area Fiber (E-LEAF), single-mode fiber (SMF), dispersion-shifted fiber (DSF), nonzero DSF positive slope (+NZ-DSF), nonzero DSF negative slope (–NZ-DSF)
• L band: Lucent TrueWave-Reduced Slope (TW-RS), Corning Extended Large Effective Area Fiber (E-LEAF), single-mode fiber (SMF)

Amplifier technique
• C band: Erbium-doped fiber amplifier (EDFA)
• L band: Hybrid EDFA and distributed Raman amplification

Directional multiplexing
• C band: Unidirectional point to point
• L band: Unidirectional point to point
The ONS 15808 system uses different lasers to achieve these specifications, as follows:
• Class 1 laser—This is a relatively low-power laser that is necessary for short-reach signals within the system. You typically find this type of laser in laser printers and compact disc players, although it's designed for higher reliability.
• Class 1M laser—This is a variation of the Class 1 laser that uses a wider, highly divergent beam, useful for client-side signals in transponders.
• Class 3B laser—This laser is capable of up to 500 milliwatts (half of a watt) of output power. The Class 3B laser is used to drive the optical signals for the long-haul and extended long-haul fiber spans. An automatic power reduction system reduces laser power to safe levels during faults or service maintenance activities.
In addition to the C- and L-band channels, the ONS 15808 uses an out-of-band optical supervisory channel (OSC), which is transmitted at 1620 nm, just outside the L-band amplification window. The OSC performs system supervisory functions, monitoring, management, and configuration. The OSC runs at a bit rate of 2.048 Mbps, more than sufficient to handle the relatively small exchange of system information occurring with management functions. As both a long-haul and an extended long-haul platform, the Cisco ONS 15808 allows service providers to optimize cost and maximize capacity, speed, and reach. Much of the heritage and innovation in the ONS 15808 DWDM platform is now leveraged in the Cisco ONS 15454 MSTP platform.
Cisco ONS 15454 MSTP

The Cisco ONS 15454 MultiService Transport Platform (MSTP) is a functional configuration enhancement to the popular Cisco ONS 15454 MultiService Provisioning Platform (MSPP). The Cisco ONS 15454 MSPP and MSTP configurations are targeted at multiservice use in metropolitan networks and regional and long-haul DWDM applications. The ONS 15454 MSPP technology heritage is from Cerent, which Cisco acquired in 1999. This next-generation IP over SONET MSPP platform has been reengineered three times since, adding DWDM (the MSTP option) capability in the summer of 2003. The Cisco optical team—sourced in the Pirelli acquisition—designed the MSTP feature set.

The MSTP integrates advanced DWDM technology into the same ONS 15454 chassis customarily used for MSPP services. In this way, the ONS 15454 is a flexible platform that you can configure to support SONET/SDH and passive DWDM applications as an MSPP or provide active DWDM aggregation and wavelength services as an MSTP. The software fully integrates the passive DWDM MSPP and the intelligent DWDM MSTP functions into one system. The ONS 15454 MSTP and MSPP functionality provide a choice of multiservice aggregation, DWDM wavelength aggregation, and wavelength transport. Combining these services
with intelligent DWDM transmission in a single platform enables networks to be cost-optimized for any mix of customer services. The ONS 15454 MSTP platform represents an optimum blend of metropolitan services with metro, regional, and long-haul DWDM features. The ONS 15454 MSTP is positioned into the network markets shown in Figure 7-3.

Figure 7-3 ONS 15454 MSTP Positioning

(The figure plots total capacity in Gbps against distance in km, positioning the Cisco ONS 15454 MSTP across the metro access and metro/regional long-haul markets and into the extended/ultra long-haul range, short of international long haul.)
Figure 7-4 shows a traditional DWDM architecture using a DWDM-only platform like the ONS 15808, requiring external point-product functionality such as a SONET/SDH add/drop multiplexer (ADM).

Figure 7-4 Traditional DWDM Architecture

(The figure shows an IP router with 10 Gb Ethernet, a storage system with 10 Gb Fibre Channel, and an ATM switch and SONET/SDH add/drop mux carrying OC-192/STM-64, OC-48/STM-16, OC-12/STM-4, OC-3/STM-1, DS-3/E-3, and DS-1/E-1 services, all feeding a separate long-haul DWDM system. Source: Cisco Systems, Inc.)
Figure 7-5 shows a contemporary DWDM architecture that combines a mix of services and integrated DWDM functionality into the same platform, such as the ONS 15454 MSTP. This allows the connection of IP routers via 10 Gigabit Ethernet, storage systems via 10 Gigabit Fibre Channel, and traditional SONET/SDH signals. On the trunk side, the ONS 15454 MSTP presents these client-side signals to integrated transponders that perform ITU grid-compliant DWDM over the long-haul optical fiber. The integrated DWDM architecture of the ONS 15454 MSTP helps eliminate discrete DWDM-only platforms, reducing total capital and operational expense over the investment horizon.

Figure 7-5 Integrated DWDM Architecture

(The figure shows an IP router with 10 Gb Ethernet, a storage system with 10 Gb Fibre Channel, and OC-48/STM-16, OC-12/STM-4, OC-3/STM-1, DS-3/E-3, and DS-1/E-1 services terminating directly on a Cisco ONS 15454 MSTP, whose integrated DWDM transponders drive the long-haul fiber. Source: Cisco Systems, Inc.)
The ONS 15454 MSTP employs advanced, intelligent DWDM functionality, previously introduced in Chapter 5, “Optical Networking Technologies.” Features of intelligent DWDM include the following:
• Automatic Power Control (APC)—Software-controlled power management simplifies the installation and upgrade of DWDM networks by automatically calculating the proper amplifier set points. APC also keeps per-channel optical power constant, while monitoring for both expected and unexpected variations in the number of optical channels. APC additionally compensates for optical network degradation due to the aging of components, fiber, and so on. APC is a network-level function controlled by a software algorithm on the amplifier and timing cards that designates a master node and starts the APC process check hourly or whenever a new circuit is provisioned or removed. Complementary to the APC function is an Automatic Node Setup function, which adjusts the values of the variable optical attenuators (VOAs) on the DWDM channel paths to equalize per-channel power at the input to any optical amplifiers (a rough sketch of this equalization step follows this list).
• DWDM network topology discovery—Each Cisco ONS 15454 node uses a network topology discovery function via a node services protocol (NSP) to identify other ONS 15454 nodes in the network, identify the different types of DWDM networks, and automatically update the nodes whenever a network-level change occurs. The flexibility of the ONS 15454 MSTP platform allows configuration of nodes to support any metropolitan design or regional DWDM topology.
• Wavelength services—A to Z wavelength provisioning eliminates the need for support personnel at intermediate sites. The Cisco ONS 15454 MSTP supports a variety of services that can be used as a foundation for delivering wavelength services. Supported services are
  — 2.5 and 10 Gbps SONET/SDH services
  — Time division multiplexing (TDM), OC-3, OC-12, OC-48, and OC-192 services
  — Gigabit Ethernet, 10 Gigabit Ethernet LAN, 10 Gigabit Ethernet WAN
  — 2.5 and 10 Gbps Fibre Channel, Fiber Connection (FICON), Enterprise Systems Connection (ESCON), and video wavelength services such as HDTV and SDI
  — Muxponding (multiplexing many non-DWDM client signals into a DWDM wavelength) and transponding (non-DWDM client signal to DWDM wavelength conversion)
  — 2.5 and 10 Gbps DWDM ITU-T grid optics, G.709 forward error correction (FEC), and enhanced forward error correction (E-FEC)
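As noted in the APC item above, the per-channel equalization step can be sketched as follows. The channel powers are invented for illustration, and the real APC/Automatic Node Setup algorithm on the amplifier and timing cards is considerably more involved; this only shows the leveling idea.

# Leveling uneven per-channel powers at an amplifier input by computing a
# VOA attenuation per channel. Power values are illustrative.
channel_power_dbm = {31: -18.2, 32: -17.1, 33: -19.0, 34: -16.5}

target_dbm = min(channel_power_dbm.values())   # level down to the weakest channel
voa_settings = {ch: round(p - target_dbm, 1) for ch, p in channel_power_dbm.items()}

for ch, attenuation in sorted(voa_settings.items()):
    print(f"channel {ch}: attenuate {attenuation} dB")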
The ONS 15454 MSTP product supports up to 32 DWDM ITU-T 100 GHz wavelength channels beginning with ONS 15454 software release 4.5. This is accomplished through the use of eight discrete tunable transponders, each of which can be tuned to one of four neighboring grid wavelengths. A quantity of four cards of each of the eight transponder part numbers would equal 32 cards, each tunable to create a 32-channel ITU-T 100 GHz plan that covers the range of 1530.33 through 1560.61 nm. There are eight distinct DWDM transponder cards for supporting 2.5 Gbps bit-rate operation and another eight distinct DWDM transponder cards for supporting 10 Gbps bit-rate operation. Many internal components such as the lasers and wavelockers are technically capable of 50 GHz operation, facilitating incremental expansion to 64 DWDM ITU-T, 50 GHz–spaced wavelength channels in future releases.
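The channel plan described in this paragraph can be approximated from the ITU grid itself. The sketch below generates 32 channels at 100 GHz spacing, grouped into eight four-channel tuning bands. The assumption that one grid position is skipped between bands is an inference that makes the plan span exactly 1530.33 through 1560.61 nm, so treat the output as an approximation rather than the published channel table.

# Approximate 32-channel, 100 GHz plan from 1530.33 nm to 1560.61 nm,
# grouped into eight hypothetical four-channel tuning bands.
C = 299_792_458  # speed of light, m/s

def thz_to_nm(f_thz):
    return C / (f_thz * 1e12) * 1e9

plan = []
f = 195.9                                   # ~1530.33 nm, top of the plan
for band in range(8):
    band_channels = [round(f - n * 0.1, 1) for n in range(4)]
    plan.append(band_channels)
    f = band_channels[-1] - 0.2             # skip one 100 GHz position between bands (assumption)

for i, band in enumerate(plan, 1):
    nm = ", ".join(f"{thz_to_nm(ch):.2f}" for ch in band)
    print(f"band {i}: {nm} nm")

print(f"total channels: {sum(len(b) for b in plan)}")
print(f"range: {thz_to_nm(plan[0][0]):.2f} to {thz_to_nm(plan[-1][-1]):.2f} nm")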
Reconfigurable Optical Add/Drop Multiplexing (ROADM)

The ROADM capability was introduced in ONS 15454 release 4.7. ROADM functionality allows for the addition and dropping (add/drop) of DWDM wavelengths without manually changing the physical fiber connections or manually rebalancing optical channel power. This is advantageous because it is the only foreseeable way to significantly reduce optical provisioning times while mitigating risk to network availability. ROADM technology is useful in network applications that require fast, remote provisioning, such as on-demand wavelength services. The Cisco implementation of ROADM is performed in silicon using planar lightwave circuit (PLC) technology.

The importance of PLC ROADM technology merits a brief flashback. Many first-generation ROADMs were based on micro-electro-mechanical
systems (MEMS) technology, free-space optical photon switching using mirrors, which introduced high insertion loss through the cascading of multiple MEMS matrices. Using optical amplifiers within these ROADM assemblies could compensate for the insertion loss, yet excess amplification generates aggregate noise buildup, limiting overall distance. Expensive to manufacture, early-generation (circa 1999) ROADMs based on MEMS technology moved upstream into large channel count networks and into networks at a time when cost wasn't a significant factor.

ROADMs using wavelength blocker technology appeared about 2002. Wavelength blockers use liquid crystal spatial light modulator technology to attenuate individual DWDM wavelengths. A wavelength blocker could change from a fully open state to a fully closed state for an individual wavelength in less than 2 milliseconds. Using a 1 x 1 wavelength selective switch (wavelength blocker) in combination with 1 x 2 power splitters/combiners allowed for the traffic add/drop function. The wavelength blocker could be used to block dropped channels from passing through on the express path. Wavelength blockers would also be used for the add path channels, along with EDFAs to compensate for overall OADM insertion losses. Wavelength blockers may be assembled into a fabric allowing for 32- or 40-channel ROADMs, yet much of this involves discrete components requiring plenty of manual assembly. High cost is always associated with manual assembly, placing constraints on the ability to scale volume and improve price and performance to meet the needs of the mass market.

Used in the Cisco ONS 15454 ROADM, planar lightwave circuits (PLCs) build on well-known arrayed waveguide (AWG) technology that can be built totally in silicon. A PLC ROADM combines muxing, demuxing, variable optical attenuation, optical taps, monitors, photodiodes, and even noise cancellation planar subassemblies into a one- or two-box ROADM assembly that can be integrated into line card form factors in the ONS 15454 MSTP. Essentially a semiconductor integration process, PLCs can be built in automated fabrication labs in volume. This design-once, build-many-times model and automated assembly methodology allows PLC-based ROADMs to enjoy higher yields and reach the aggressive price and performance levels sought by the larger ROADM market. The low cost and high availability of PLC ROADMs have the potential to seed the broadband economics needed for mass deployment of fiber to the home.

Within a Cisco ONS 15454 node, the combination of four cards, both east and west facing, forms a functional ROADM node. The PLC ROADMs used by Cisco have the added benefit of providing automatic channel equalization, allowing all 32 wavelengths to be optically balanced. The ONS 15454 ROADM offers a significant reduction of insertion loss over previous back-to-back multiplexing or demultiplexing solutions. An approximately 5 ms fast-switching characteristic of PLC ROADMs facilitates optical shared protection, rather than having to build a separate protection overlay using a TDM layer. ONS 15454 configurations using the ROADM support up to 20-node optical rings. The ROADM technology also supports any-to-any connection capability and can be configured in optical bidirectional line switched ring (BLSR) topologies for fiber-cut protection schemes.
Probably the best thing about a ROADM function is that an add/drop provisioning activity that may have taken a day or two including trucks and technicians can now be performed remotely in minutes by network management software. With direct connection of services over DWDM wavelengths and with automated protection capabilities, the Cisco ONS 15454 ROADM provides for adaptive, self-healing, and flexible DWDM networking without using sophisticated optical planning.
Long-Haul DWDM

Until DWDM, a bit-rate increase was the only way to augment the capacity of a pair of optical fiber strands. A single wavelength at either 1310 or 1550 nm would be modulated so as to pack and carry as much information as possible between long-haul terminals. The proper selection of optical fiber and optical components for long-haul system design is important if long-haul DWDM is to be used for scalable capacity. Consideration should be given to waveguides (fiber and fiber-based components); lasers, both fixed and tunable; optical amplification and regeneration; wavelength planning; and the optical power budget.
Waveguide Challenges

For long-haul optical networks, high speeds are essential, and optical fiber is the waveguide medium that supports them. While fiber is the best medium for higher bit rates, more optical impairments come into play as speeds rise, adding complexity and cost to system designs. Long-haul networks are optimized for high speeds, beginning with the selection of the appropriate fiber waveguide. The primary distinction between different groups of single-mode fiber waveguides is found in their dispersion management characteristics.

In Chapter 5 you learned that dispersion is a characteristic of fiber that impairs an optical transmission to the point of negatively modifying the shape of the light pulse as it travels through the fiber. Dispersion can limit distance and is of particular concern in long-haul networks. Chromatic dispersion (CD) is a linear impairment that causes different DWDM wavelengths (frequencies) to travel at slightly different velocities through fiber, leading to optical light pulse bit-time expansion, which complicates receiver detection accuracy at longer distances. Higher speeds also magnify transmission impairments and nonlinear effects. For example, at 10 Gbps on a particular fiber span, the chromatic dispersion impairment limits the all-optical distance to 100 km. At 40 Gbps on the same fiber span, chromatic dispersion limits the design to a 10 km distance. For much of the older deployed fiber, polarization mode dispersion (PMD) can be a significant factor, especially at bit rates of 10 Gbps and higher.
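As a rough illustration of why the 10 Gbps and 40 Gbps figures differ so sharply, the sketch below applies the common rule of thumb that chromatic-dispersion-limited reach scales with the inverse square of the bit rate. The reference point (about 100 km at 10 Gbps) is taken from the example above; the exact limits always depend on the specific fiber, transmitter, and design margins assumed.

```python
def cd_limited_reach_km(bit_rate_gbps, ref_rate_gbps=10.0, ref_reach_km=100.0):
    """Estimate uncompensated reach using the 1/(bit rate)^2 rule of thumb,
    scaled from a known reference point (here, ~100 km at 10 Gbps)."""
    return ref_reach_km * (ref_rate_gbps / bit_rate_gbps) ** 2

for rate in (2.5, 10, 40):
    print(f"{rate:>5} Gbps -> ~{cd_limited_reach_km(rate):.0f} km")
# ~1600 km at 2.5 Gbps, ~100 km at 10 Gbps, and ~6 km at 40 Gbps; the text's
# 10 km figure for 40 Gbps reflects the particular fiber and margins assumed there.
```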
For long-haul networks of all types, the most commonly encountered optical fiber types used for waveguides are
• Single-mode fiber (SMF)
• Dispersion-shifted fiber (DSF)
• Nonzero dispersion-shifted fiber (NZ-DSF)
Conventional single-mode fiber (SMF) is designed with zero dispersion occurring at the 1310 nm wavelength. For years, long-haul optical fiber systems have used SMF (specification G.652) fiber and a single 1310 nm wavelength per fiber strand. As lower attenuation became evident in the 1550 nm window, fiber manufacturers decided to shift the point of zero dispersion from the 1310 nm mark to the 1550 nm mark, allowing optical signals to travel farther. This created dispersion-shifted fiber (DSF, G.653) that could increase distances between OEO regenerator nodes using a single 1550 nm wavelength signal. Another advantage of the 1550 nm window was the advent of WDM and optical amplifiers such as EDFAs.

As optical innovators looked to drive bit rates higher and channel spacing closer with DWDM, nonlinear impairments became a new challenge in this window when using DSF. Research showed that a small amount of dispersion would be necessary to mitigate the nonlinear effects of self-phase modulation (SPM), cross-phase modulation (XPM), and four-wave mixing (FWM). SPM is an effect that interacts with chromatic dispersion to change the rate at which a single light pulse (self) broadens as it propagates through the fiber. XPM is an effect that involves two light pulses. As optical power has slight variations, the two light pulses refract differently and change their individual speeds as they travel through the fiber. If these two pulses overlap enough, they can cause distortion of other pulses at the optical receiver. FWM is a WDM-based third-order nonlinearity that causes three closely spaced wavelengths to interact and form a fourth signal that can cause crosstalk with other WDM channels.

The development of nonzero dispersion-shifted fiber (G.655) was the primary answer, and widespread adoption of NZ-DSF was soon evident in new fiber deployments of long-haul carriers. NZ-DSF is the preferred fiber for maximizing long-haul optical networking; in fact, today's fiber build-outs and expansions for long-distance networks favor NZ-DSF fiber. Several variants of NZ-DSF are customized to address DWDM applications such as metropolitan, long haul, extended long haul, ultra long haul, and submarine. That is, NZ-DSF fiber is designed with dispersion slopes such that the zero dispersion point is at shorter wavelengths than the 1550 nm mark (positive dispersion, or +NZ-DSF) or such that zero dispersion occurs at longer wavelengths than 1550 nm (negative dispersion, or –NZ-DSF).

The choice to use the positive or negative version is generally determined by the type of laser used for the optical network design. For example, long-haul networks use more expensive lasers that usually have positive chirp. (Chirp is a small frequency shift to one
side or the other of the laser's target frequency.) A positive chirp laser is often coupled with a –NZ-DSF type of fiber, a combination that provides better dispersion management over the long-haul fiber span. Metropolitan networks use more cost-effective lasers that routinely have negative chirp, potentially needing a +NZ-DSF type to provide maximum dispersion management in the metro. It's important to know the dispersion slope orientation of any NZ-DSF fiber cable spans for your particular optical network application. Figure 7-6 shows the dispersion curves of SMF, DSF, and NZ-DSF fiber types.
Figure 7-6 Dispersion Curves for Typical Long-Haul Fibers (dispersion in picoseconds per kilometer, roughly –15 to +20, versus wavelength in nanometers from 1300 to 1650; curves shown for SMF G.652, DSF G.653, and +/– NZ-DSF G.655, with the C-band and L-band regions marked)
While NZ-DSF fiber costs more per foot than standard SMF-28 fiber, the total cost of an optical system can quickly favor the NZ-DSF fiber model. That's because the NZ-DSF fiber's dispersion at 1550 nm is 75 percent less than that of standard SMF-28 fiber at 1550 nm. Therefore, using SMF-28 fiber for long-haul designs requires dispersion compensation technology at each optical line amplifier site. This is in contrast to a design that uses NZ-DSF fiber, where dispersion compensation technology is only needed in the terminal nodes, that is, at the beginning transmitter and the end receiver sites.

Up until the early 1990s, the primary fiber types deployed in long-haul networks worldwide were SMF cables with a low fiber count, perhaps 12, 24, or as many as 48 fiber strands per cable. Many of the new networks built since the latter part of the 1990s use as many as 432 fiber strands per cable in long-haul deployments or expansions. While a higher fiber strand count per foot of cable is expensive, the cost to re-dig and lay long-haul fiber is an exorbitant expense, one that a long-haul provider will not wish to incur twice. Also, since the latter part of the 1990s, many networks have been deploying G.655 NZ-DSF fiber to future-proof their DWDM networks for high-speed scalability.
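To make the 75 percent figure concrete, here is a minimal sketch comparing accumulated chromatic dispersion on the two fiber types. The coefficients are assumptions based on commonly quoted values (roughly 17 ps/nm/km for SMF-28 at 1550 nm, and a quarter of that for NZ-DSF, consistent with the reduction cited above), not values taken from this chapter.

```python
D_SMF28 = 17.0             # ps/nm/km at 1550 nm (assumed typical value)
D_NZDSF = D_SMF28 * 0.25   # ~75 percent less accumulated dispersion

for span_km in (100, 400):
    print(span_km, "km:", span_km * D_SMF28, "vs", span_km * D_NZDSF, "ps/nm")
# 100 km: 1700.0 vs 425.0 ps/nm; 400 km: 6800.0 vs 1700.0 ps/nm. The slower
# accumulation is why NZ-DSF designs can defer dispersion compensation to the
# terminal nodes instead of every line amplifier site.
```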
Once you know the fiber infrastructure, it’s good practice to review the optical laser specifications of the equipment you will use. Since a laser is responsible for launching the light into the fiber, it’s essential to understand the capabilities and dispersion tolerances of the laser source. The equipment engineering documentation should provide technical details regarding the lasers/transmitters.
Lasers for the Long Haul

Long-reach lasers are faster, more powerful, more accurate, and more sophisticated than other kinds of optical lasers. With nearly a 50-year history, lasers are fundamental to the all-optical future.

Lasers must be modulated to be of use in telecommunications. Modulation is a technique for converting an electrical signal into an optical signal, much like pressing a flashlight switch to signal a 1 bit and releasing the switch to signal a 0 bit. Modulation imparts an intelligible stream of bit information, often from the electrical domain, onto a carrier, in this case, an optical laser's photons. The electrical data stream of 1s and 0s is coupled to the laser, modulating the laser on and off to optically represent the digital electrical information. Lasers may be directly modulated or externally modulated. Both are found in long-haul networks, but increasingly, the higher-speed lasers used for long-haul networks are of the externally modulated variety.

Directly modulated lasers use the electrical data stream to control the on/off state of the laser in rapid succession, much like tapping the button of a simple laser pointer to create optical Morse code. This direct form of modulation works well for a 622 Mbps bit rate with short, intermediate, and long-reach requirements, for a 2.5 Gbps bit rate with short-reach and intermediate-reach requirements, and also for a 10 Gbps bit rate using short-reach optics. Directly modulated lasers, by design, exhibit high levels of chirp. As mentioned, laser chirp is a slight frequency shift, just off center of the intended frequency (wavelength drift). This effect is caused by the physics of rapidly turning the laser on and off. High levels of chirp lead to down-span dispersion, crosstalk, and other effects. This leaves directly modulated lasers with insufficient accuracy for 2.5 Gbps long- and extended-reach applications, as well as for 10 Gbps intermediate-, long-, and extended-reach applications. Externally modulated lasers are needed for such high accuracy at high bit rates over long distances.

Externally modulated lasers are the desired light sources for today's high bit-rate, long-haul optical networks. Externally modulated lasers are continuous wave (CW) lasers coupled with components that act much like a camera shutter. Leaving the laser on avoids the negative physics that occur with turning it on and off rapidly. Though the laser light is always on, the shutter-like component opens and closes rapidly, effectively modulating the light beam on and off according to the desired signaling input. While the concept sounds simple, the practical implementation is a bit more complex.
External modulators for high bit-rate applications customarily contain a material called lithium niobate, an electro-optic crystal that acts much like an optical filter. A unique property of lithium niobate is the ability to pass light through it or slow the light down in response to an electrical signal (the modulating signal). An external modulator is positioned after a continuous wave laser source. A beam splitter at the input of the modulator sends half of the laser light to the lithium niobate crystal (the upper path) and the other half through a waveguide (the lower path) to an optical combiner. The lower path of the divided light output is unimpeded and proceeds at a fixed time delay to the optical combiner, while the upper path connects on each side of the lithium niobate crystal.

The lithium niobate crystal's electro-optic property provides the unique ability to rapidly change between transparent and opaque depending on the electrical bias. A digital electronic bit signal (ones and zeros) creates an electrical biasing (modulation) of the lithium niobate crystal such that the upper path of the laser light passing through the crystal slows down by 50 percent (creating a variable time delay). The net result is an out-of-phase optical output on the upper path compared to the lower path. As the two paths of the optical signals recombine at the optical combiner, they are out of phase with each other, creating a destructive interference that yields near-zero photons on the output side of the combiner. When the lithium niobate crystal is otherwise electrically biased, it allows the upper path of the laser's light to proceed at full speed, meeting the lower path at the combiner in phase. The two in-phase signals provide constructive addition and yield a large photonic output from the optical combiner.

The resultant photonic pulse train at the output of the combiner is now an optical representation of the digital electrical signal input into the modulator. This modulation is accomplished without turning a laser on and off, eliminating chirp and its resultant downstream effects. As a recap:
• Use a continuous-wave laser, coupled to a lithium niobate external modulator.
• Electrically modulate the lithium niobate crystal, creating variable-speed light in one path of this external modulator in relation to the modulating signal.
• Use out-of-phase destructive interference to shut off the light output from the optical combiner to represent a 0 bit.
• Use in-phase constructive addition to pass the light photons representing an optical 1 bit.
These design considerations and individual components are often combined as a complete optical package to deliver an externally modulated laser that creates no chirp, helping lasers to scale to high bit rates anywhere from 2.5 to 40 Gbps plus. Figure 7-7 shows a conceptual diagram of a lithium niobate external modulator.
Figure 7-7 External Laser Modulator Using Lithium Niobate (continuous wave laser light enters optical waveguides within a lithium niobate substrate; a beam splitter feeds a variable-delay waveguide and a fixed-delay waveguide, which recombine at an optical combiner where constructive/destructive interference produces the optical pulse train; a digital electrical signal creates the voltage bias)
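The interference behavior just described can be modeled as an ideal two-path interferometer. The sketch below is only an illustration of that principle, not a model of any specific Cisco component: relative output power follows cos^2 of half the phase difference, so an in-phase recombination passes full power (a 1 bit) and an out-of-phase recombination cancels to near zero (a 0 bit).

```python
import math

def combiner_output(phase_diff_rad):
    """Relative optical power out of the combiner for a given phase
    difference between the fixed-delay and variable-delay paths."""
    return math.cos(phase_diff_rad / 2) ** 2

# Electrical 1/0 bits bias the crystal toward in-phase (0) or out-of-phase (pi)
# recombination, as described in the text.
for bit, phase in (("1", 0.0), ("0", math.pi)):
    print(bit, round(combiner_output(phase), 3))   # 1 -> 1.0, 0 -> 0.0
```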
Tunable Optical Components

Until recently, long-haul lasers were limited to fixed wavelengths. This is very suitable for single-wavelength use at 1310 or 1550 nm. Yet, when manufacturing lasers for WDM and DWDM applications, up to 150 different fixed-wavelength lasers, each with a unique part number, would be necessary to cover the ITU-T 100 GHz grid-spaced channel plan. This can create numerous problems for providers who operate and maintain high-capacity long-haul networks. These issues can be summarized as follows:
• Large fiscal inventories, redundancies, and spares in reserve
• Longer provisioning times for new services requiring a specific-wavelength laser card
• Less-dynamic bandwidth optical circuit allocations
• No resolution for defective wavelength(s) as a result of fiber deficiency
As described in the previous section, the lithium niobate crystal's electro-optic property provides the unique ability to rapidly change between transparent and opaque, depending on the electrical bias. When used in an external modulator after a continuous wave laser, this transparency of the lithium niobate crystal easily accommodates tunable laser technology within the DWDM optical realm. The use of wavelength-independent lithium niobate external modulators is well suited to tunable lasers and transponders, because the transparency property of lithium niobate accommodates all DWDM wavelengths. Tunable laser technology has been around for about ten years but only recently reached price points that created demand in the commercial communication laser market. Widely tunable lasers covering the complete optical C and L bands are redefining the all-optical long-haul network from fixed to flexible.
Initially used to reduce the high cost of DWDM laser card inventory and sparing, software-controlled tunable lasers are the new direction of the industry. A tunable-wavelength laser can replace several different wavelength part numbers. Full-band tunable lasers in effect create one universal laser card that can replace hundreds of unique wavelength cards. The agility and service value of needing just 2 spare tunable laser cards compared to the earlier example of 64 laser cards is a compelling business case, the kind of margin that could justify the cost of complete DWDM system replacements.

Today, these flexible lasers are routinely full-band tunable. For example, a leading C-band tunable laser can be tuned within 10 milliseconds or less to an adjacent frequency/wavelength. In total, full-band tunable lasers span a 40 nm range of tuning agility and are available for both the C and L bands. Tunable lasers use a variety of semiconductor structures as well as a variety of tuning control methods such as current tuning, thermal tuning, and mechanical tuning. A few tunable laser types include the following:
• Distributed feedback (DFB) lasers, single or triple section, thermally tuned—A three-section DFB can be thermoelectrically tuned by applying a current to one of the three sections of the DFB. This creates a controlled temperature that modifies the laser output up to about 4 to 5 nm. DFBs are generally used for narrow-tuning applications. Arrays of DFB lasers are also popular but increase the packaging size.

• Sampled grating DBRs (SGDBRs), current tuned—These lasers have waveguide cavities that pair with Bragg gratings. A current is applied across the front and back mirrors of the waveguide, causing the mirrors to reflect the laser output off a different portion of the Bragg-sampled gratings, ultimately yielding a change in wavelength. SGDBRs characteristically have a broader line width but a wide tuning range that is very fast.

• Vertical-cavity surface-emitting lasers (VCSELs), MEMS tuned—These lasers apply a current to a MEMS device that moves the top mirror of the waveguide, shifting the cavity opening to emit one of several different arrayed wavelengths. This is analogous to a sliding hole on a crayon box top. These are good where arrays of low-cost lasers are needed.

• External-cavity lasers (ECLs), mechanically tuned—These lasers have a larger external cavity in which to place a diffusion grating and a MEMS rotary-actuated mirror. The CW laser reflects off the diffusion grating into the MEMS-mounted mirror, and current applied to the MEMS moves the mirror, which reflects different wavelengths back off of the diffusion grating for output. Excellent optical characteristics, low-cost components, and a wide tuning range make tunable ECLs desirable for long-haul DWDM applications.
No one type of tunable laser is appropriate for all of the distance applications. Many applications have different optical requirements, which are addressed appropriately by the different laser varieties.
Long-haul lasers for DWDM applications must be more accurate, with a thin line width and no chirp if at all possible. This can mitigate down-span dispersion and crosstalk. A wavelocker is a component within contemporary lasers that measures the laser's output for wavelength drift and, through a feedback loop, signals the laser to adjust automatically, helping to compensate for that drift. Combined with a thinner line width, this results in a more accurate wavelength, which means a down-span receiver can more easily distinguish the incident wavelength from others. A more accurate line width allows tighter channel spacing (for example, 50 GHz, 25 GHz, or 12.5 GHz), yielding more wavelength channels within an optical band. Long-haul lasers must also possess the ability to launch higher power into the fiber to travel longer distances between reamplification and regeneration nodes. This output power is typically in the 10 to 20 milliwatt range.

Some additional types of popular communication lasers are the Fabry Perot DBR and the electro-absorption modulated (EAM) laser. Figure 7-8 shows the relative positioning of common laser types within the metro, long haul, extended long haul, and ultra long haul.
Figure 7-8 Lasers and Applications (relative positioning of common laser types: directly modulated VCSELs, DFBs, Fabry Perot DBRs, EAM, SG-DBR, ECL, and externally modulated lithium niobate designs, plotted by bit rate from below 2.5 Gbps to 40 Gbps and by optical transmission reach from access under 200 km through metro, long haul, and extended and ultra long haul beyond 2000 km)
While tunability is an important feature for optical lasers, other system components require tunability to accomplish an end-to-end flexible design. Other tunable components include optical filters, optical receivers, and optical wavelength monitors.

As you learned in Chapter 5, photonic detectors perform just the opposite function of lasers. They detect light at the receiving end of the optical fiber, converting it into electrical energy representative of the original electrical signal that was presented to the laser's modulator. These semiconductors are primarily Indium Gallium Arsenide/Indium Phosphide (InGaAs/InP) photodiodes, PIN photodiodes, and avalanche photodiodes (APDs). Low-speed,
low-cost optical systems will favor the price point of InGaAs/InP photodiodes. Medium- to high-speed applications often use PIN photodiodes, while multigigabit, long-reach systems prefer the higher power conversion of the APDs. Table 7-2 lists the typical application of tunable components within modern optical networks.

Table 7-2 Applicability of Tunable Optical Components

Description                        Optical Applications
Tunable lasers                     Long-haul and ultra long-haul DWDM; optical add/drop multiplexing; metro core and regional networking; 2.5, 10, and 40 Gbps transmission rates
Tunable filters                    Tunable DEMUX filter; tunable receiver; ROADM; amplified spontaneous emission (ASE) suppression; optical performance monitoring
Tunable receivers                  Tunable receiver; reconfigurable optical drop for broadcast and select applications
Tunable optical channel monitors   Laser gain tilt monitoring; optical channel power equalization; optical channel registration
Tunable components in optical networks enable bandwidth flexibility, rapid service provisioning, and lower operational expense. They can assist an optical network with dynamic wavelength provisioning, streamlining traffic patterns and reallocating wavelengths as bandwidth patterns change. These capabilities are fundamental to providing on-demand services over optical networks. Tunable components also minimize sparing costs, potentially up to 70 percent savings in simple configurations. Tunable components become an inflection point for new-era long-haul networks, leading to new innovations and new optical services.
Optical Amplification

Optical amplifiers are present within every long-haul DWDM design. With most installed fiber (there's a lot of SMF G.652 out there) averaging about 0.25 dB of loss every kilometer, you can't overcome a typical 25 dB of attenuation-only span loss after traveling about 100 km (62 miles) of fiber without reamplification of the 99.7 percent weakened composite
signal. You must reamplify the optical signal, with a line amplifier node, prior to sending the data into the next fiber span. This calls for the general placement of amplifiers, one for every 100 km (62 miles) of distance traveled. The actual placement will often vary between 50 and 70 miles between unmanned provider huts, which are environmentally controlled enclosures capable of supporting optical equipment. The particular span distance is often determined by the following, in this order:

• Total attenuation
• Dispersion tolerance
• The fiber provider's hut locations
Optical amplifiers only reamplify (1R) DWDM signals. They don't also reshape and retime (3R) them as an OEO regenerator site would. (You learn more about the 3R process in the next section, "Optical Regeneration.") Though these amplifiers are essential for boosting the optical signal for the next 100 km light path, the continuous cascading of amplifiers affects the optical signal-to-noise ratio (OSNR) such that the only compensation is to OEO regenerate (3R) the signal, a node equipment configuration that is much costlier than amplification. If the dispersion tolerance isn't reached first, then it's likely that a deteriorated OSNR will cause the span to fail at some distance. General industry experience suggests that this conservative engineering distance, considering adequate margins for real operation of unregenerated optical signals, is 600 km, or 372 miles. Optimized networks with low-noise amplifiers that apply forward error correction (FEC) or extended forward error correction (E-FEC) techniques can increase the system's tolerance of bit errors such that spans of up to 1100 km can be reached prior to regeneration. OSNR is described in more detail in the section "Considerations for an Optical Power Budget."

Optical amplifiers work on principles similar to lasers. Optical amplifiers are sections of fiber "doped" with rare earth minerals that are spliced into the long-haul fiber. They are often packaged in a line card form factor within part of the optical node chassis, intercepting the light paths between fiber spans. Rare earth minerals such as praseodymium, erbium, ytterbium, and others have the property of fluorescence, meaning that their atoms emit photonic light when stimulated by other photons. Photons of a particular wavelength (a pump wavelength) strike the atoms of these rare earth minerals; the atoms absorb the energy and rise to an excited state. They quickly transition from the excited state to a metastable state. When these atoms are then struck by the DWDM composite wavelength photons traveling through the long-haul fiber, they drop from the metastable state to the stable state, emitting a large number of photons, at the same DWDM wavelength(s), in the process. The long-haul wavelengths in the DWDM composite signal gain these photons, with the net effect of increasing photon quantities and thus photonic power for each wavelength. These properties allow the use of these rare earth elements in optical amplifiers.
As minerals, these periodic elements are known as lanthanides:
• Praseodymium fluoresces at about 1290 to 1320 nm, so it's a useful element in building optical amplifiers for 1310 nm wavelength signals. Praseodymium has been used for years in carbon arc lights for the motion picture industry and can act as a saltlike ingredient that colors glass yellow for filtering harsh light (welders' goggles, for example). The common abbreviation for a praseodymium-doped fiber amplifier is PDFA.

• Erbium fluoresces in a range around 1550 nm, contributing to erbium's popularity as an optical amplifier for 1550 nm single-wavelength systems and especially in 1550 nm C-band DWDM systems. Erbium is commonly used in photographic filters and in the nuclear industry. Erbium was first discovered in 1843 in the vicinity of Ytterby, Sweden. The common abbreviation for an erbium-doped fiber amplifier is an EDFA.

• Ytterbium gets its name from the village of Ytterby near Vaxholm in Sweden, where it was discovered by a scientist in 1878. Pronounced it-TER-bi-um, it fluoresces in the 1050–1100 nm range. This wavelength range is not within the practical optical windows. As a laser, ytterbium is finding use in pump lasers for Raman converters. The common abbreviation for an ytterbium-doped fiber amplifier is a YDFA.

• Thulium is another element that is being used in the construction of optical amplifiers, targeting the S band from 1440 to 1520 nm. This may soon expand C-band DWDM into an adjacent optical window. The common abbreviation for a thulium-doped fiber amplifier is a TDFA.
EDFAs especially are benefiting from continual innovation, with many designs reducing their noise figure while increasing their optical gain spectra. EDFAs have the momentum of price and performance, and you will likely see their use extended in ever longer network applications, to as much as 2000 km without the assistance of Raman amplification. Raman amplification is another technique that has gained great popularity and functionality in optical amplification. Due to Raman's wide range of wavelength coverage, many networks are already capable of lighting 320 wavelengths per fiber. Raman amplification is further discussed in the section "Extended Long-Haul Optical Networks" later in this chapter.

While optical fiber amplifiers create photonic gains in the range of 25 to 35 dB, they also produce 5 to 6 dB of noise: random frequency perturbations known as amplified spontaneous emissions (ASE). This 5 to 6 dB of noise is known as the noise figure (NF) and must be acknowledged when budgeting for optical power, specifically as a factor in the overall OSNR. That's why light signals propagating through successive amplifier stages accumulate noise, degrading the OSNR to the point that regeneration of the signals becomes necessary. Figure 7-9 shows the relative coverage of optical amplifier technology along the electromagnetic infrared spectrum.
Figure 7-9 Optical Amplifier Spectrum Coverage (saturated output power in dBm, roughly 10 to 40, versus wavelength from 1000 to 1700 nm, showing the relative coverage of Raman, PDFA, YDFA, TDFA, and EDFA amplifier technologies; source: Cisco Systems, Inc.)
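As a rough sketch of how that noise accumulation plays out, the snippet below uses a common rule-of-thumb OSNR formula for a chain of identical amplified spans (referenced to a 0.1 nm bandwidth). The formula and the 5.5 dB noise figure are engineering approximations assumed here, not values specified by this chapter; they simply illustrate why OSNR eventually forces regeneration.

```python
import math

def osnr_db(launch_dbm, span_loss_db, amp_nf_db, n_spans):
    """Approximate end-of-link OSNR for n identical spans, each followed by
    an amplifier that exactly compensates the span loss (rule of thumb,
    0.1 nm reference bandwidth)."""
    return 58.0 + launch_dbm - span_loss_db - amp_nf_db - 10 * math.log10(n_spans)

for spans in (1, 6, 11):
    print(spans, "spans ->", round(osnr_db(0.0, 25.0, 5.5, spans), 1), "dB")
# Roughly 27.5 dB after one 25 dB span, ~19.7 dB after six (about 600 km),
# and ~17.1 dB after eleven, which is when regeneration starts to look necessary.
```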
Optical Regeneration

The distance at which regeneration is required is the defining point for classifying long-haul, extended long-haul, and ultra long-haul networks. Depending on the OSNR noise budget at various stages within the particular long-haul system design, DWDM signals may need periodic regeneration. OEO functionality is manifested as optical signal regeneration nodes that can be positioned along a long-haul, extended long-haul, or ultra long-haul fiber route where necessary. Regeneration is needed within a network wherever the aggregate optical fiber span can no longer maintain the designated bit error rate (BER) or the OSNR design of the system. Though optical amplifiers maintain the power level of the optical signal, the signal's bit information is no longer easily distinguishable because of accumulated dispersion, noise, and other bit-timing impairments. You must regenerate the optical signal, and using the OEO node is the practical way to do it. The OEO functionality performs a 3R function in that it reshapes, retimes, and reamplifies the input, or incident, optical signal(s).

OEO functionality is implemented in optical terminal nodes, OADM optical nodes, and mid-network optical nodes, providing regeneration functions. Terminal nodes typically receive a client optical signal on a non-ITU-T grid wavelength into a transponder, convert the signal to electrical, and then generate the signal optically via the transponder's ITU-T grid wavelength compatible output. The transponder performs an OEO function to get the client signal onto a DWDM ITU-T wavelength on the trunk side of
the long-haul network, but with respect to the trunk side of the long-haul network this is technically optical signal generation. An OADM node uses appropriate transponders to take a DWDM ITU-T grid wavelength from the trunk side and drop it to a transponder that performs an OEO function, delivering a non-ITU-T grid signal to the client side, a signal whose frequency and timing meet the customer equipment expectations. This functionality works in reverse as a client signal is added to the OADM node's trunk side, but the add/drop OEO functionality is also classified as signal generation.

You can find OEO signal regeneration functionality in optical nodes between the optical terminal end sites. Located between a pair of terminal nodes, intermediate nodes can perform a 3R function on the DWDM trunk-side signals received from the east-facing terminal node and resend the signals via DWDM trunks toward the west-facing terminal node. To pass a DWDM signal through in this fashion, transponder technology is used to receive DWDM ITU-T wavelengths from the east terminal node, convert them to the electrical domain, and then, if passing through the node (70 to 80 percent of long-haul traffic is pass-through traffic), use a west-facing transponder to regenerate the optical signal as a DWDM ITU-T wavelength for transmission toward the west terminal node. The need to use east-facing and west-facing transponders to allow a signal to pass through the node essentially doubles the cost of wavelength regeneration.
Optical Wavelengths

Optical DWDM offers more channels than cable or satellite TV. The quantity of available channels, as defined by the ITU-T G.692 specification for 50 GHz spacing, yields 300 wavelengths that cover the C and the L bands. Spacing opportunities of 25 GHz and even 12.5 GHz are deliverable, pushing channel options to 1200 plus. The ITU-T has superseded its earlier G.692 grid recommendation to include support for the S band as well as for the tighter channel spacings of 12.5 and 25 GHz, already used in many production long-haul systems. The revised specifications are ITU-T G.694.1 for DWDM and G.694.2 for CWDM.

Generally, a longer network has fewer fiber strands per cable. Metropolitan fiber cables would normally contain more fiber strands (for example, 864 strands), while a long-haul cable may contain 48 strands, circa 1995. Price per foot per strand is an important capital budget factor in any long-haul deployment.

As a contextual example, Table 7-3 shows the 32-channel DWDM wavelength plan used by the Cisco ONS 15454 MSTP DWDM optics based on the ITU-T 100 GHz spacing allocation. This channel plan uses 4 consecutive ITU-T wavelengths, skips 1, uses the next 4 ITU-T wavelengths, skips 1 again, and so on. This 4-skip-1 channel plan helps minimize nonlinear impairments such as XPM and FWM. This is a very common practice in long-haul DWDM system design.
Table 7-3 Cisco ONS 15454 32-Channel Plan (4-Skip-1 Plan)

Wavelength (nm)   Wavelength (nm)   Wavelength (nm)   Wavelength (nm)
1530.33           1538.19           1546.12           1554.13
1531.12           1538.98           1546.92           1554.94
1531.90           1539.77           1547.72           1555.75
1532.68           1540.56           1548.51           1556.55
1534.25           1542.14           1550.12           1558.17
1535.04           1542.94           1550.92           1558.98
1535.82           1543.73           1551.72           1559.79
1536.61           1544.53           1552.52           1560.61

The particular ITU-T wavelengths skipped are 1533.47, 1537.40, 1541.35, 1545.32, 1549.32, 1553.33, and 1557.36.

Source: Cisco Systems, Inc.
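For illustration, the short sketch below regenerates the Table 7-3 wavelengths from the ITU-T 100 GHz frequency grid using the 4-skip-1 rule. The starting frequency of 195.9 THz (about 1530.33 nm) is inferred from the table itself; this is a hypothetical helper, not a Cisco planning tool.

```python
C_NM_THZ = 299_792.458   # speed of light expressed in nm * THz

def four_skip_one(start_thz=195.9, channels=32):
    """Walk down the 100 GHz ITU-T grid, using 4 slots and skipping 1."""
    plan, f, used = [], start_thz, 0
    while len(plan) < channels:
        if used < 4:
            plan.append(round(C_NM_THZ / f, 2))   # convert frequency to nm
            used += 1
        else:
            used = 0                              # this grid slot is skipped
        f -= 0.1                                  # step 100 GHz down the grid
    return plan

print(four_skip_one()[:5])   # [1530.33, 1531.12, 1531.9, 1532.68, 1534.25]
```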
Table 7-4 shows the 32-channel DWDM wavelength plan used by the Cisco ONS 15454 DWDM tunable optics cards, for example the 10 Gbps Multirate Enhanced Transponder Card. This channel plan is identical to the 4-skip-1 plan shown in Table 7-3, except that the subject card is tunable across four adjacent ITU-T 100 GHz wavelengths. This creates eight distinct card part numbers that can be used to create up to 32 unique channels, reducing the number of unique card part numbers by 75 percent, which translates into a 75 percent savings on spare card maintenance planning. The table shows the eight unique card part numbers and the specific wavelength set that each card supports. Each card's part number suffix is a three-digit convention that indicates the base wavelength of the tunable range.
Optical Power Budget

Optical power budgeting is a very complex and multivariable process, best performed with the following:

• Software-based tools
• An educated familiarity with the optical equipment and fiber used
• A steady vision of the application requirements of the particular long-haul network under design
Although a complete treatment is well beyond the scope of this chapter, this section introduces some key takeaways of optical power engineering.
Table 7-4 Cisco ONS 15454 32-Channel Plan Using ONS 15454 MSTP Transponder Cards

Card Part Number Suffix (xx.x)   Wavelengths (nm)
30.3                             1530.33, 1531.12, 1531.90, 1532.68
34.2                             1534.25, 1535.04, 1535.82, 1536.61
38.1                             1538.19, 1538.98, 1539.77, 1540.56
42.1                             1542.14, 1542.94, 1543.73, 1544.53
46.1                             1546.12, 1546.92, 1547.72, 1548.51
50.1                             1550.12, 1550.92, 1551.72, 1552.52
54.1                             1554.13, 1554.94, 1555.75, 1556.55
58.1                             1558.17, 1558.98, 1559.79, 1560.61
Considerations for an Optical Power Budget

For long-haul systems, the most important challenge of the optical power budget is to overcome the aggregate loss of optical power per fiber kilometer. This loss comprises a number of impairments, both linear and nonlinear in nature. Understanding optical impairments and implementing the appropriate compensations are important dimensions of the optical power budget. The goal of long-haul optical design is to optimize the transmission characteristics from end to end, and then to maintain this optimization day to day.

The current, usable, optical communication spectrum is principally bookended by the superattenuating optical fiber effects of scattering and absorption. Both are intrinsic properties within the fiber's silica material. An optical fiber strand, known as a waveguide, is created by first heating a silica glass preform blank to a molten consistency and then meticulously drawing the molten glass into a fiber strand. As the glass strand cools, small density variations remain within it. These can cause minute deflections or refractions of optical wavelengths, primarily affecting the shorter wavelengths below 800 nm. The continuous deflection of portions of the light pulses, as they proceed through a span of fiber, robs the original bit pulse of constituent photons and reduces the effective optical power of the original signal. This common form of intrinsic scattering is termed Rayleigh scattering and is considered the first bookend (at about the 1000–1040 nm mark) of the usable infrared spectrum.

Optical signal attenuation is also caused by absorption. Glass impurities and submicroscopic defects in the silica glass tend to absorb some of the light as it passes through the fiber core, contributing to a dimmer light signal compared to the original. High levels of absorption around 1400 nm (specifically 1383 nm) were determined to be from hydroxyl (OH-) molecules and can be addressed through changes in fiber manufacturing, but infrared absorption above 1700 nm increases dramatically, creating the second bookend of usable spectrum for optical fiber communications. As a result, long-haul commercial use of optical fiber within the electromagnetic spectrum is concentrated in about a 400 nm region between 1260 and 1660 nm. Figure 7-10 depicts the areas of attenuation, intrinsic scattering, and absorption in the infrared portion of the spectrum.

Long-haul DWDM networks live and work in the C and the L bands (the third and fourth usable windows). Based on Figure 7-10, these long-haul DWDM networks commonly encounter fiber that attenuates (loses) optical power at a rate of 0.25 dB per kilometer. Therefore, 100 km of this fiber would create an aggregate loss of 25 dB, which is at the typical limit of an optical span budget. To put this in perspective, a loss of 25 dB is equivalent to a loss of 99.7 percent of the original signal's launch power. With only 0.3 percent of the optical signal intact, is that enough to be properly detected by the receiver? The answer is usually yes, as long as about 40-plus photons per bit are still present upon reaching the receiver.
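The following back-of-the-envelope sketch shows why 0.3 percent of a modest launch power still satisfies that photon count. The 1 mW launch power and 10 Gbps bit rate are assumed example values; the photon energy comes from Planck's constant and the 1550 nm wavelength.

```python
H = 6.626e-34           # Planck's constant (J*s)
C = 2.998e8             # speed of light (m/s)
WAVELENGTH = 1550e-9    # meters

launch_w = 1e-3                              # assumed 0 dBm (1 mW) launch power
received_w = launch_w * 10 ** (-25 / 10)     # 25 dB span loss -> ~0.3 percent left
photon_energy = H * C / WAVELENGTH           # ~1.28e-19 J per photon
photons_per_bit = received_w / photon_energy / 10e9   # at an assumed 10 Gbps
print(round(photons_per_bit))   # ~2500 photons per bit, well above the ~40 needed
```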
Figure 7-10 Optical Fiber Attenuation, Scattering, and Absorption (optical loss in dB/km versus wavelength from 700 to 2000 nm, showing total attenuation, the hydroxyl water peak, intrinsic absorption, and intrinsic Rayleigh scattering, along with the first, second, third (C band), and fourth (L band) transmission windows; source: Cisco Systems, Inc.)
At this point a long-haul design either uses optical amplification such as an EDFA to reamplify the signal (1R) or applies OEO regeneration to retime, reshape, and reamplify (3R) the signal prior to delivery to the next fiber span. Stitching together a 1000 km long-haul network would theoretically need about nine intermediate optical nodes to perform a mix of amplification and regeneration between the two terminal nodes at each end. In actuality, other impairments are also at play, depending on the bit rate, wavelength channel spacing, and chosen channel usage plan, among other factors.

Amplification not only increases an optical signal, but also increases any noise that has been aggregated by previous amplifier stages. While amplification is key to long-haul network designs, designers must be wary of a gradual buildup of amplified noise. The optical power budget then depends on a number of controllable and uncontrollable factors. Trade-offs and allowances are made within an optical power budget based on attenuation and impairments either measured or perceived. With that said, determining the required equipment, fiber, and node locations for a long-haul network depends on the optical power budget. By the time an optical power budget is properly determined, you will agree that optical networking is an applied science and that successful long-haul networks are engineered art.

Compared to local or metropolitan access networks, long-distance optical communication requires a higher optical launch power with which to propagate light pulses over the desired distances. Higher optical power at high bit rates can engage a number of nonlinearities such as SPM, XPM, FWM, stimulated Raman scattering (SRS), and stimulated Brillouin
scattering (SBS), of which XPM is particularly challenging in long-haul DWDM networks. These must all be accounted for in an optical power budget. When determining a high bit-rate DWDM optical power budget, you must consider the following:

• OSNR
• BER, typically 10^–9 or better, and the use of FEC
• Optical fiber characteristics, including all fiber spans
• Frequency variation, including optical laser launch power, center line width, and chirp
• Optical receiver sensitivity
• Optical amplifier heuristics, including gain, tilt, and NF
• Linear effects such as:
  — Attenuation, including scattering and absorption
  — Dispersion, including chromatic/group velocity dispersion and polarization mode dispersion
• Nonlinearities such as SPM, XPM, FWM, SRS, and SBS
• Insertion loss caused by optical couplings, filters, and OADMs
• Data modulation techniques such as return to zero (RZ), nonreturn to zero (NRZ), differential phase shift keying (DPSK), and duobinary
• Component aging, repair, and safety margin (with 3 dB recommended)
All of these factors represent about two dozen parameters that may need consideration in a long-haul design's optical power budget. The exercise applies to each fiber span (10 spans in the theoretical 1000 km network discussed earlier), and it applies to each direction or fiber strand. Typically, one strand of fiber carries data from east to west, and the second strand of the fiber pair carries data from west to east. That's potentially 24 parameters times 10 spans times 2 directions, or about 480 calculations to consider for an end-to-end design. Also consider that some of the DWDM wavelengths may be affected more prominently than others. Most designs account for the worst-performing wavelength of the range, typically one of the bookend wavelengths on the edge of the supported DWDM range.

Many of the parameters are fixed, fortunately requiring fewer than 480 calculations, yet it's easy to anticipate the complexity of this effort. That's why many software simulation tools and optical budget design programs (for example, the Cisco MetroPlanner optical design tool) have found usefulness in the market. These tools help organizations manage the optical engineering effort to reach the goals of optimized network capital and mitigated operational risk.

Many of the calculations influence the outcome of two important overall design parameters: OSNR and BER. The OSNR represents the absolute quality of an optical signal and the
probability that an optical receiver can accurately distinguish the exact datastream that was originally sent. The OSNR is highest at the transmission source, and the ratio declines as the signal proceeds through any amplifier stages. (Amplified noise buildup affects the ratio.) An overall system OSNR tolerance (for example, an OSNR tolerance of 20 dB) is a design point, and once the measured or calculated OSNR reaches that tolerance limit, the only compensation that remains is to regenerate (3R) the signal.

The BER is a design goal for how many corrupted bits are to be tolerated compared to the number of correct bits received. A BER of 10^–12 means that 1 bit error is acceptable for every 10^12 correct bits transmitted. The system BER is a design goal that establishes a guarantee of data transmission performance. If the BER is not being met, then the OSNR can be examined to determine whether and where a system parameter adjustment or repair needs to be made. Receiver sensitivity is an important component in long-haul design, as the BER at the receiver determines the actual optical system performance.
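To put that target in everyday terms, the quick arithmetic below (assuming a 10 Gbps channel, an illustrative choice rather than a figure from the text) shows how many errored bits a 10^–12 BER still allows per day before any FEC correction.

```python
bit_rate = 10e9            # bits per second (assumed 10 Gbps channel)
ber = 1e-12                # one errored bit per 10^12 transmitted bits
seconds_per_day = 86_400

errored_bits_per_day = bit_rate * seconds_per_day * ber
print(errored_bits_per_day)   # 864.0 -> still hundreds of raw errors per day
```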
Understanding dB and dBm

According to Cisco Systems, an optical power budget is made up of allowable margins and losses, expressed in decibels (dBs). Cisco describes the allowable margins and losses as follows:

Signal power loss or gain is never a fixed amount of power, but a portion of power, such as one half or one quarter. To calculate lost power along a signal path using fractional values, you cannot add 1/2 and 1/4 to arrive at a total loss. Instead, you must multiply 1/2 by 1/4. This makes calculations for large networks time consuming and difficult. For this reason, the amount of signal loss or gain within a system, or the amount of loss or gain caused by some component in a system, is expressed using the decibel (dB). Decibels are logarithmic and can easily be used to calculate total loss or gain just by doing addition. Decibels also scale logarithmically. For example, a signal gain of 3 dB means that the signal doubles in power; a signal loss of 3 dB means that the signal halves in power.

Keep in mind that the decibel expresses a ratio of signal powers. This requires a reference point when expressing loss or gain in decibels. For example, the statement "there is a 5 dB drop in power over the connection" is meaningful, but the statement "the signal is 5 dB at the connection" is not meaningful. When you use decibels, you are not expressing a measure of signal strength but a measure of signal power loss or gain.

It is important not to confuse decibel and decibel milliwatt (dBm). The latter is a measure of signal power in relation to 1 mW. Thus a signal power of 0 dBm is 1 mW, a signal power of 3 dBm is 2 mW, 6 dBm is 4 mW, and so on. Conversely, –3 dBm is 0.5 mW, –6 dBm is 0.25 mW, and so on. Thus, the more negative the dBm value, the closer the power level approaches zero.1
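A minimal sketch of that arithmetic in code, using made-up example losses (the fiber length, connector, and OADM figures below are assumptions for illustration, not values from the quotation):

```python
import math

def dbm_to_mw(dbm):
    """Absolute power: 0 dBm = 1 mW, +3 dBm ~= 2 mW, -3 dBm ~= 0.5 mW."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

launch_dbm = 0.0                     # 1 mW launch power
losses_db = [0.25 * 80,              # 80 km of fiber at 0.25 dB/km (assumed)
             0.5, 0.5,               # two connector losses (assumed)
             3.0]                    # one OADM insertion loss (assumed)
received_dbm = launch_dbm - sum(losses_db)   # dB values simply add along the path
print(received_dbm, round(dbm_to_mw(received_dbm), 4))   # -24.0 dBm ~= 0.004 mW
```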
An optical power budget exercise begins by determining the difference between the optical laser transmitter output power at the first node and the optical receiver sensitivity power range at the second node. These values come from the optical equipment manufacturers' documented specifications regarding the particular components and cards in use. If a laser transmitter sources an output power expressed as 0 dBm (1 mW) and the receiver sensitivity
range is from –8 to –24 dBm, then the optical link loss budget for this first-to-second node span cannot exceed 24 dB of total signal loss with all attenuation and impairment factors considered. The power budget is determined span by span, and then proceeds to an analysis of the OSNR of the system and any adjustments necessary to deliver the BER target of the system design.
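A small helper in the same spirit, checking whether a span's total loss keeps the received power inside the receiver's sensitivity window (the window limits below are the example values from the text; the function name and structure are hypothetical):

```python
def span_within_budget(tx_dbm, total_loss_db, rx_max_dbm=-8.0, rx_min_dbm=-24.0):
    """Return True if the received power lands inside the receiver's
    sensitivity window after subtracting the span's total loss."""
    rx_dbm = tx_dbm - total_loss_db
    return rx_min_dbm <= rx_dbm <= rx_max_dbm

print(span_within_budget(0.0, 22.0))   # True: -22 dBm is inside -8 to -24 dBm
print(span_within_budget(0.0, 26.0))   # False: -26 dBm falls below sensitivity
```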
Extended Long-Haul Optical Networks

Extended long-haul (ELH) optical networks are considered to be 1000 to 2000 km (620 to 1240 miles) between terminal nodes. Most ELH traffic tends to be in the 1600–1700 km range. That's enough to connect Tier 1 cities such as New York to Chicago to Atlanta (with a loop to Miami and back) and then on to Houston to Dallas to Denver to Los Angeles, coast to coast in just 5 hops. The total network traffic between these major metropolitan cities represents the low-hanging fruit of long-distance communications. AT&T built it first, then MCI, then Sprint, then WilTel, then a flock of others in the 1990s. ELH networks using optical fiber cable snake between these cities, following national rights-of-way such as railroads, the national power grid backbone, the interstate system, and pipelines.

In the previous discussion on long-haul networks, you learned about some challenges to optical signal distances, especially at higher bit rates and with DWDM. ELH networks must reach farther, shine brighter, and swim deeper than their long-haul brethren. ELH networks are capable of 1000 to 2000 km of optical reach before needing signal regeneration because they have tighter tolerances and use more expensive components, including thinner line-width lasers (providing a more accurate center frequency); better dispersion tolerance and compensation; lower-loss components such as filters, muxes, and demuxes; and, of course, enhanced single-mode fiber. Regeneration is also used in ELH networks as appropriate. The point is that ELH networks of less than 2000 km have more of a choice when it comes to considering regeneration functions.

There are a few other technologies worthy of mention within the context of today's ELH networks, as follows:
• Advanced fibers
• Extended spectrum, including use of the L band
• Extended amplification, specifically Raman amplification
• Extended coding with forward error correction (FEC) and extended FEC (E-FEC)
• Extended modulation formats
Advanced Fibers

Advanced fiber designs have moved long-haul distances into the extended long-haul range. Dispersion-managed fibers combine lower attenuation (below 0.2 dB/km) with larger effective areas and lower overall dispersion to minimize nonlinear effects, allowing for longer distances. A fiber with low polarization mode dispersion (PMD) has a lower coefficient of dispersion per kilometer of fiber; fiber with lower PMD is manufactured with greater core concentricity throughout the length of the fiber. This can extend the optical transmission distance before dispersion compensation becomes necessary. The purer and more concentric the fiber's glass core, the less attenuation and the less total dispersion impact on the signal. This results in extended distances, albeit at some higher cost per kilometer for this level of glass purity and cylindrical tolerance. Submarine networks in particular consider purer glass fiber at initial deployment to future-proof their fiber plant infrastructure and minimize dispersion compensation costs along the path.
Use of the L Band

Use of the extended infrared spectrum known as the L band, or long-wavelength band, is another way to extend DWDM network optical distances. The L band (1565 to 1625 nm) naturally contains more dispersion on NZ-DSF fiber types, and the relatively higher dispersion better mitigates nonlinearities that steal distance from fiber spans. L-band transmission requires higher-cost EDFA amplifiers, so some carriers will likely exploit tighter channel spacing in the C band prior to moving to an L-band implementation. Many optical systems already contain internal componentry to split and combine both C- and L-band wavelengths, so the technology is available if and when the provider elects to use the L band.
Raman Amplification

Raman amplification is the king of distance, extending long-haul and extended long-haul regeneration requirements from 400 km to over 2000 km. Raman amplification is fiber agnostic; gain can be generated within SMF (G.652), DSF (G.653), and NZ-DSF (G.655). Raman amplification is a technique that sends a shorter-wavelength (usually about 100 nm shorter) "pump" light beam through the fiber, causing vibration excitation in atoms within the fiber. As the original optical ITU-T wavelength, weakened from attenuation, encounters the excited atoms, the ITU-T wavelength stimulates these vibrating atoms, causing them to emit photons of the same wavelength as the weakened ITU-T wavelength (the incident wavelength). This adds identical photons to the traveling wavelength, amplifying the optical power wherever the wavelength encounters the Raman pump signal. Raman amplification is often distributed across the network based on design requirements.
Counter-propagating Raman pump amplifiers allow for a lower launch power to be used at signal origination and still amplify signals with more power and lower noise than previous methods (see Figure 7-11). Finding early use in unrepeatered submarine fiber cable systems, Raman amplification is customarily included with long-haul and extended long-haul platforms, often used in combination with C- or L-band EDFA amplification. The benefits of Raman amplification are longer distances between amplifiers and/or OEO regeneration sites, tighter channel spacing, and higher data rates.
Figure 7-11 Raman Amplification Technique (launch power in dBm versus fiber span distance, comparing typical EDFA amplification, typical Raman amplification, and a counter-propagating Raman pump signal, with received power shown at the far end of the span)
Forward Error Correction (FEC)

FEC, which is specified by the ITU-T as G.975, is a data-coding technique pioneered by submarine networks and their demanding requirements to push longer distances without error degradation. Essentially a complex coding algorithm, FEC adds redundancy to a data packet in the form of parity bits. If the received signal contains several erroneous bits, the original signal can still be decoded and reconstructed from the redundancy bits attached to the packet by the forwarding node. This has the effect of maintaining BER guidelines at extended distances. An enhanced version of FEC (E-FEC) provides about another 1 to 2 dB of budget margin, further extending unregenerated networks while maintaining BER targets.
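As a rough sense of the cost of that redundancy, the sketch below assumes a Reed-Solomon RS(255,239) code, the code commonly associated with G.975-class FEC (an assumption about typical practice, not a detail given in this chapter): every 239 payload bytes gain 16 parity bytes, so the line rate rises by roughly 7 percent.

```python
payload_bytes, codeword_bytes = 239, 255       # assumed RS(255,239) FEC code
overhead = (codeword_bytes - payload_bytes) / payload_bytes
print(f"FEC overhead ~= {overhead:.1%}")                                  # ~6.7%
print(f"10 Gbps payload -> ~{10 * codeword_bytes / payload_bytes:.2f} Gbps line rate")
```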
Modulation Formats

Another area where new alternatives are contributing to enhanced distances and tighter channel spacing is modulation formats. Quadrature phase shift keying (QPSK) uses a four-point phase shift of a waveform to represent two bits per phase or cycle, sometimes called a symbol. When starting a bit cycle of an optical waveform, if the phase shift begins at 0 degrees, the binary bits represented = 00. A phase shift beginning at 90 degrees = 01, at 180 degrees = 11, and at 270 degrees = 10. This has the effect of doubling the bit throughput over single-bit systems.
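The mapping in the paragraph above can be written out directly; this tiny sketch (the function and names are illustrative only) pairs up an incoming bit stream and returns the phase, in degrees, for each two-bit symbol.

```python
# Phase-to-dibit mapping taken from the text: each 90-degree step carries two bits.
QPSK_MAP = {0: "00", 90: "01", 180: "11", 270: "10"}
DIBIT_TO_PHASE = {dibit: phase for phase, dibit in QPSK_MAP.items()}

def encode(bits):
    """Group bits in pairs and return the QPSK phase (degrees) per symbol."""
    return [DIBIT_TO_PHASE[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode("001110"))   # [0, 180, 270] -> three symbols carry six bits
```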
The binary modulations called nonreturn to zero (NRZ) and return to zero (RZ) are well known and widely used. An alternative modulation called optical duobinary requires less bandwidth to represent distinct bits, which can enhance dispersion tolerance to three to four times that of current NRZ formatting. This narrower spectrum also allows denser packing of DWDM wavelengths within the same amount of spectrum and can help with FWM and SBS suppression.

These technologies allow ELH DWDM networks to reach between major city pairs without costly regeneration. ELH networks are likely to grow as long-haul networks brim with capacity, looking to grow longer and wider to extend their reach beyond regional applications.
Ultra Long-Haul Optical Networks

Ultra long-haul (ULH) optical networks are defined as reaching more than 2000 km and up to and beyond 4000 km. ULH optical networks are generally transcontinental networks, such as from New York to Dallas and then on to Los Angeles. They are also intercontinental networks, linking the world's continents through transoceanic fiber spans, such as from New York to London.

With ULH networks, and particularly with DWDM ULH networks, the achievable distance before needing signal regeneration is paramount to the business model, a model that is driven toward the lowest cost per bit. This is especially true as 70 to 80 percent of very long-haul communications typically passes through intervening nodes. The least expensive approach is to maintain the ULH signals in the optical domain for as long as possible. In addition to new component technologies, ULH providers are increasingly deploying dispersion-managed fiber solutions to eliminate costly regeneration from their business plans.

Long-haul terrestrial optical fiber networks are habitually constructed with one type of fiber, either an SMF-28 variety or perhaps a +NZ-DSF type. Higher bit-rate requirements demand tighter attenuation margins and PMD budgets. Higher laser launch powers require fiber cores to have larger effective areas through which to couple high optical power without incurring nonlinear signal distortions.

More than any other network design, ULH networks are the primary aggregators of data, voice, and video, carrying communications coast to coast and shore to shore. With fiber cables spanning thousands of kilometers, the fiber infrastructure is a large percentage of the capital and operational budgets of ULH networks. Generally, the longer the network, the fewer fiber strands per cable. To optimize fiber capacity, higher bit-rate migrations are common line items for two- to three-year planning horizons. Until DWDM, many of these ULH networks were approaching fiber exhaust. DWDM has brought new life to ULH networks, along with new technological challenges.
Quantum leaps in optical technology are necessary to provide DWDM networks with ultra reachability, especially at high bit rates such as 10 and 40 Gbps. Recent progress in transmission technologies is enabling point-to-point terrestrial ULH networks up to 8000 km without required OEO regeneration. A combination of technologies must be brought together to propagate photonic signals that far, such as highly accurate lasers, new types of fiber and dispersion compensators, Raman amplification, and data modulation formats. Optical cross-connects (OXCs) also add to the flexibility and extensibility of ultra long-haul DWDM networks. Technologies such as these are useful in engineering transparent, all-optical ULH networks. The Internet has made the world a smaller place, largely because of the availability of ULH networks.
Highly Accurate Lasers Everything starts with the laser, which should have chirp-free operation and a very thin line width. Line width refers to the narrowness in frequency of a laser’s emitted light. Chirp-free operation eliminates the frequency variations of the source laser output, allowing optical pulses to emit from their center frequency with a high degree of accuracy and fewer cross talk effects to and from adjacent DWDM frequencies. An accurate, on-frequency, thin linewidth laser pulse is easier to detect after several kilometers, especially when using tightly spaced DWDM channel plans as these systems often do.
Dispersion Management The success of ULH DWDM also depends on accurate dispersion management design. Dispersion-managed fiber and dispersion compensators are key considerations. Dispersion-managed fiber solutions have long been used in submarine fiber applications, and the technique is increasingly used for ULH terrestrial networks. Dispersion-managed fiber is the splicing of a positive dispersion fiber with a negative dispersion fiber in such a way as to manage the fiber span's total aggregate dispersion toward zero.

Most recently, symmetric dispersion-managed fiber solutions have emerged as design options. An example symmetric dispersion-managed fiber span is built using three sections of dispersion-managed fiber—two sections with positive dispersion, positive slope, and one section with negative dispersion, negative slope. For an example 80 km fiber span, placing the negative dispersion section in the middle would create a fiber span starting with 24 km of positive dispersion, followed by 32 km of negative dispersion, and finishing with 24 km of positive dispersion. With the proper pre- and post-dispersion compensation at the respective nodes on each end of the three-section symmetric dispersion-managed fiber design, dispersion is architected and mirrored on additional spans such that dispersion tolerances can survive over 4000 km of fiber between regeneration nodes.
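As a quick sanity check on the 80 km example above, the aggregate dispersion of a span is just the sum of each section's dispersion coefficient times its length. The short sketch below illustrates the idea; the coefficients of +20 and -30 ps/nm/km are assumed for illustration only and are not the specifications of any particular dispersion-managed fiber product.

# Aggregate dispersion of the three-section symmetric span described in the text.
# The dispersion coefficients are illustrative assumptions, not vendor specs.

span = [
    (24, +20.0),   # km of positive-dispersion fiber, D in ps/(nm*km)
    (32, -30.0),   # km of negative-dispersion fiber
    (24, +20.0),   # km of positive-dispersion fiber
]

total_ps_per_nm = sum(length_km * d for length_km, d in span)
total_length_km = sum(length_km for length_km, _ in span)
print(f"Aggregate span dispersion: {total_ps_per_nm:+.0f} ps/nm over {total_length_km} km")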
Symmetric DMF is the preferred fiber choice for high-capacity, ULH DWDM networks. It is also common for dispersion-managed terrestrial ULH networks to be designed for Raman amplification, specifically to optimize for Raman pump wavelength efficiencies and provide effective gain and low noise.
Amplification Amplification is necessary in ULH systems just as in ELH and LH systems. The same issues of photonic attenuation must be overcome due to fiber losses. The purer the glass in the fiber span, the longer the distance between reamplification stages, but eventually the noise buildup from successive amplification stages will degrade the OSNR. Raman amplification is most often used in ULH systems due to its larger dual-band coverage, its lower noise figure, and its optical power gain capabilities. To achieve the ultimate benefit of amplification, ULH systems that use distributed Raman should consider the use of fiber that is also optimized for the Raman pump wavelengths.

The use of bidirectional pumping for distributed Raman amplification is a good way to help manage the wavelength-amplified gain profiles. Another way is through the use of dynamic gain-flattening filters (DGFFs). DGFFs are often used to compensate for nonlinear gains when amplifiers are cascaded in ULH networks. Consider that when DWDM signals are amplified, the amplifier's gain profile may increase some DWDM channels to a higher power than others. If this is perpetuated through multiple optical spans, there can be a large OSNR difference between the optimum DWDM channels and those that are in the weaker part of the amplifier's gain profile. This forces optical designs to cater to the weakest channels. The recommended approach is to flatten the gain profile so that all desired DWDM channels have near-equivalent photonic power as they leave an amplifier.

Early components used variable optical attenuators, either manual or automatic, to attenuate those channels with the highest postamplifier gain to be more even with the rest. DGFFs are equalizers with feedback mechanisms that integrate with EDFAs. The DGFF measures the photonic power of the individual DWDM channels and supplies feedback to the amplifier to "coax" it into boosting all signals to as even an output power across the DWDM spectrum as possible. This creates a gain-flattening function that is largely dynamic and automatic. This is another important consideration for ULH networks.
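To see why noise buildup and gain flatness matter so much, a widely used rule of thumb estimates the OSNR after a chain of identical amplified spans as roughly 58 + Pch - Lspan - NF - 10log10(N) dB, referenced to a 0.1 nm noise bandwidth. The sketch below applies that approximation with assumed launch power, span loss, and noise figure values; it is illustrative only and ignores Raman gain, gain tilt, and other real-world effects.

import math

# Rule-of-thumb OSNR after N identical amplified spans (0.1 nm reference bandwidth).
# Launch power, span loss, and noise figure below are assumed values for illustration.

P_CH_DBM = 0.0       # per-channel launch power
SPAN_LOSS_DB = 22.0  # loss of each fiber span
NF_DB = 5.0          # amplifier noise figure

def osnr_after(n_spans):
    return 58.0 + P_CH_DBM - SPAN_LOSS_DB - NF_DB - 10 * math.log10(n_spans)

for n in (1, 5, 10, 20, 40):
    print(f"{n:2d} spans -> ~{osnr_after(n):.1f} dB OSNR")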
OXC Architectures All-optical OXC nodal architectures should find synergies with ULH networks. These architectures use component solutions that exhibit exceptionally low loss for express-path channels. Express-path channels are pass-through signals representing the highest percentage of internodal traffic in ULH networks. Built-in spectral equalization keeps add/drop channels power leveled, internal EDFAs compensate for OADM losses, and noise cancellation
protects signal integrity. Contemporary optical-to-optical-to-optical (OOO) seed technologies such as planar lightwave circuits can foster intermediate optical interchanges between ULH terminals, allowing more service on-and-off ramps that pick up more customers and their service revenues along the way.
Data Modulation As in ELH networks, data modulation techniques such as return-to-zero (RZ) modulation offer ULH transmission link performance advantages, maintaining a higher OSNR and a lower BER. RZ modulation helps receivers maintain better bit recognition and timing synchronization, leading to fewer bit errors. RZ modulation also reduces the effects of chromatic dispersion and polarization mode dispersion at high bit rates such as 10 and 40 Gbps. The use of optical duobinary modulation is also an increasing practice.
Submarine Long-Haul Optical Networks Submarine networks can be described as “light swimmers.” They must use relatively high channel counts, handle higher optical power, and have longer reach, lower bit error rate, and higher information carrying capacity than their terrestrial fiber counterparts. Submarine networks are usually classified into the following categories:
• Transoceanic networks
• Short-haul undersea networks
Transoceanic networks must span the continents, typically in the 3000 to 10,000 km range, with most of the fiber in deep water. The physical cables themselves typically consist of copper tubing surrounding the optical fiber strands, along with copper conductors for power feeds to underwater repeaters (amplifiers). Long, deep undersea cable deployments are more likely to surround the cable with an armored jacket and fill it with a dense compound, both for protection and for negative buoyancy. With a transoceanic network, one of the primary requirements is for an unregenerated network, given the difficulty of creating and maintaining a powered regenerator station in the middle of the sea. As discussed previously, to create long-haul networks with long fiber spans, both the linear effects of attenuation and dispersion must be managed to their absolute minimums. Submarine systems, therefore, demand the highest fiber performance under the most stringent optical power budgets and the harshest environmental conditions possible.

Short-haul submarine networks are often deployed with distances of 100 to 400 km. Used for connecting islands to each other or to the mainland, these networks also must endure harsh underwater conditions, though they dive much shallower than transoceanic cable. A particular type of short-haul marine network is a festooned network (think trotline), often used to interconnect coastal cities that are reasonably dense along the ocean shoreline.
Figure 7-12 depicts a conceptual short-haul festooned submarine cable system.
Figure 7-12 Festooned Submarine Cable System (a submarine optical cable in festooned deployment, looping offshore between coastal cities)
Submarine Network Fiber Types DWDM is a real boon for oceanic optical fiber cables, allowing bandwidth to increase without laying new cables undersea. In fact, the use of DWDM technology in submarine cable design is trending toward fewer fiber pairs per cable, significantly reducing the costs associated with intercontinental connections. Many of these systems use 80 DWDM channels at up to 10 Gbps per fiber pair. A four-fiber-pair cable could provide 3.2 terabits of capacity, which represents orders of magnitude more capacity at an installation cost lower than that of similar systems just a few years before. Implementing lower-cost submarine solutions is paramount to a profitable business model's rate of return.

The selection of the proper fiber type(s) is one of the most fundamental steps in maximizing the available DWDM channels for submarine applications. Fiber types with minimum attenuation and proper dispersion slopes are important. Another benefit is for the fiber to have a large effective area, essentially a combination of the core and immediately adjacent cladding that couples and propagates the laser launch power. Large effective areas allow for higher laser launch powers to achieve greater distances between repeaters. By permitting higher laser power handling, more DWDM channels can coexist at higher speeds over longer distances before reamplification is required.
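The 3.2-terabit figure above follows directly from the channel count, per-channel rate, and number of fiber pairs. A trivial sketch of the arithmetic:

# Reproducing the capacity arithmetic from the text: 80 DWDM channels at 10 Gbps
# per fiber pair, across a four-fiber-pair cable.

channels_per_pair = 80
gbps_per_channel = 10
fiber_pairs = 4

per_pair_gbps = channels_per_pair * gbps_per_channel
cable_tbps = per_pair_gbps * fiber_pairs / 1000

print(f"Per fiber pair: {per_pair_gbps} Gbps")
print(f"Four-pair cable: {cable_tbps:.1f} Tbps")   # 3.2 terabits per second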
A common sea-going fiber type specification is G.654. Rarely used terrestrially, G.654 fiber is a modified version of the G.652 fiber specification. With G.654, the attenuation loss of the optical fiber is reduced to a minimum (less than 0.2 dB per km), accomplished through the use of a purer glass fiber structure. This pure silica fiber is often intended for long, single-span applications such as those found in undersea, festooned systems. In these systems, the desire for an unregenerated optical span is paramount due to the complexity, and therefore the cost, of undersea optical regeneration. While pure silica fiber is the most expensive fiber of all, the trade-off of eliminating high-cost regeneration often supports the capital investment in pure silica fiber.

A good example of a fiber design to address transoceanic distances of 3000 km and greater is a hybrid fiber solution. A hybrid solution manages dispersion totally within the fiber span without the use of dispersion compensation modules traditionally found at reamplification or regenerator nodes. Combining a positive dispersion/positive dispersion-slope fiber with a negative dispersion/negative dispersion-slope fiber allows for in-fiber auto dispersion compensation. Cascading the two different dispersion-managed fibers back to back— + to – to + to –, and so on—means the fiber span can manage dispersion without the requirement of regeneration functions. This balancing of dispersion effects creates a total system dispersion near zero for the fiber span. Many of these hybrid fiber designs can support the entire C band, allowing DWDM networks to reach farther without signal regeneration.

Submarine DWDM applications without repeaters (amplifiers), such as coastal festoon networks and deep-water crossings, can also benefit from these ocean application-specific fiber types. A combination of Raman amplification, propagated from the terminal nodes, and FEC-based optical interfaces can extend unrepeatered submarine DWDM networks to 450 km.
Submarine Fiber Amplifiers You may wonder how submarine fiber cable can "swim" so far without regeneration. The key word is regeneration. Regeneration (3R) retimes, reshapes, and reamplifies the signal, and is the most expensive form of reamplification of optical signals. Submarine networks primarily use repeaters that only reamplify (1R) the signal. (Amplifiers are commonly referred to as repeaters in submarine deployments.) Ocean-going fiber amplifiers are short fiber sections spliced into the submarine fiber span at the appropriate underwater points. Amplifiers require electrical power, which is fed to them through insulated copper cables, essentially superoceanic electrical extension cables that run down the length of the fiber cable. From the land-based terminal landing site, power is delivered in parallel down the length of the cable to the sealed underwater amplifiers. These subsea amplifiers command the best possible components, using the tightest tolerances and the lowest noise figures possible.

Figure 7-13 depicts a typical submarine system cable diagram. The dry plants are the land-based optical stations on the shorelines of a deep water crossing. The wet plant is the
underwater optical fiber and components that are submerged, often to depths approaching 8000 meters. Underwater repeaters are amplifiers that are fed power from land-based power-feed equipment through copper power lines sealed inside the underwater cable assembly. Underwater branches could be used to splice the fiber to another landing station for multipoint optical designs. When using DWDM through submarine networks, wavelength-terminating equipment connects between the underwater fiber spans and the land-based line-terminating equipment that carries traffic to and from the terrestrial network point of presence.
Figure 7-13 Submarine System Cable Diagram (dry-plant landing stations at each shore, with power feed, wavelength termination, line terminal, and network protection equipment connecting to the terrestrial PoP; the wet plant's transmission line, repeaters/line amplifiers, gain equalizers, and branching units between them)
WTE: Wavelength Termination Equipment; PFE: Power Feed Equipment; LTE: Line Terminal Equipment; NPE: Network Protection Equipment; PoP: Terrestrial City PoP
Source: TeleGeography Research, © Primetrica Inc. 2004 (see End Note 2)
Optical Cross-Connects (OXCs) Optical DWDM topologies are implemented as linear point-to-point, ring, protected hubbed ring, protected meshed ring, and others to accommodate the provider's requirements for serving the customer base, territorial footprint, and fiber plant geographical model. Data interconnection between separate topologies, whether similar or dissimilar, requires the services of an optical cross-connect (OXC) in order to make the switch. The moving of photonically represented data over the long haul requires the interconnection of a fiber cable's optical inputs with another fiber cable's optical outputs. The ability to switch optical input to optical output is challenging. For the photons to bridge the gap between optical fibers, a "virtual splice" is needed. This bridging task is the purpose of an OXC. An optical cross-connect can pair two fibers or a wavelength across two fibers on a permanent, semi-permanent, or call duration basis.
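Conceptually, the cross-connect function is just a mapping from an input fiber (or an input fiber and wavelength) to an output fiber that is held for the life of the connection. The toy Python model below illustrates that idea; the class, port names, and wavelength labels are hypothetical and do not represent any vendor's provisioning interface.

# A toy model of the cross-connect function: associate an input fiber/wavelength
# with an output fiber/wavelength for the duration of a connection.

class OpticalCrossConnect:
    def __init__(self):
        self.connections = {}   # (in_port, wavelength) -> (out_port, wavelength)

    def connect(self, in_port, in_lambda, out_port, out_lambda):
        key = (in_port, in_lambda)
        if key in self.connections:
            raise ValueError(f"{key} is already cross-connected")
        self.connections[key] = (out_port, out_lambda)

    def release(self, in_port, in_lambda):
        self.connections.pop((in_port, in_lambda), None)

oxc = OpticalCrossConnect()
oxc.connect("west-fiber-1", "1550.12nm", "east-fiber-3", "1550.12nm")
print(oxc.connections)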
Optical cross-connects are needed to keep photons moving from fiber to fiber. Because photons are massless, they cannot be stored and later forwarded like electrons within semiconductor memories. In the optical domain, there is no commercial way of buffering or storing optical bits. The lack of a functional optical random access memory (RAM) is a deterrent to an all-optical packet switch. For now, photons must keep moving in real time.

Do not confuse optical circuit switching with Ethernet switching, which is packet switching at millisecond, microsecond, or nanosecond speeds. Optical circuit switching, very much like matrix switching, still represents the majority of light-path interconnections. To interconnect a pair of fiber strands or cables, a short length of optical patch cable (a less-flexible, long-duration cross-connect) might be used. Where many fiber cables come together, a purpose-built, automated OEO cross-connect switch (a more-flexible, long- and short-duration cross-connect) is conventionally used. A modification of the OEO cross-connect switch is to replace the central, electrical switching fabric with an optical switching fabric, transforming OEO switching to an all-optical, OOO cross-connect.
Optical to Electrical to Optical (OEO) OEO has already been mentioned as the stuff that optical signal regeneration is made of—applicable to long-haul networks. OEO is also a functional model in the context of optical cross-connect switching. An OEO cross-connect is wavelength signal processing and patching performed in the electrical domain and bordered by optical transmission input and output. In cases where a provider's location forms a physical hub between multiple fiber cables traveling east, west, north, and/or south, OEO cross-connect switches are used to provision optical circuits from one fiber pair or cable to another fiber pair or cable. This can take the form of complete fiber strands as well as wavelengths. Many providers are offering wavelength services across their regional or national footprints, and OEO cross-connect switching is essential to that type of product offering.

The functionality of the OEO progression easily facilitates an optical switching process. Once the input optical signal has been converted to electrical form, it is switchable by cost-effective, reliable electronic switching matrices. The matrix-switching technology in OEO cross-connects has matured over years of voice network and computer data switching. The sandwiching of an intermediate, electrical switching fabric between optical inputs and outputs creates an OEO cross-connect switch. Software-controlled OEO switches can leverage new functionality through added intelligence. An OEO cross-connect switch provides another valuable function as a matter of course. It inherently supports simultaneous optical switching and regeneration functions, as 3R regeneration is a by-product of this cross-connect platform. Drawbacks of a digital electronic switching fabric are bit-rate dependency, switching latency, and the need for protocol intelligence. Because of this, OEOs cannot be protocol agnostic,
limiting scalability to higher speeds and requiring software drivers for supporting multiple protocol services such as ESCON, Fibre Channel, and others. An example of an OEO cross-connect switch is the Cisco ONS 15600 Multiservice Switching Platform (MSSP), which you learned about in Chapter 3, “Multiservice Networks.” Within the ONS 15600 is a core cross-connect card, which provides 3072 bidirectional STS-1 cross-connects, using a fully cross-point capable, nonblocking, broadcast supporting electronic switch matrix. This card provides the electrical switching function while other SONET/SDH interface cards provide the optical input and output functions. Together, the mix of optical and electrical switching cards creates the OEO functionality within the ONS 15600 MSSP. Contemporary OEO devices such as the ONS 15600 MSSP are often used to address the capital and operational expenses of legacy ADMs and broadband digital cross-connects in service provider metropolitan POPs. The Cisco ONS 15600 MSSP was built from the ground up within Cisco, using seed technology and architecture from the Monterey acquisition (1999). Integrating ADM and cross-connect functions allows for effective aggregation of metropolitan collector rings. This is also useful to amalgamate metro bandwidth into large bandwidth circuits (such as OC-192s) with which to feed long-haul networks, perhaps through an OADM node or an OOO switch. Better functionality and flexibility are the result, with fewer devices needed in the POP—representing important reductions in operational expenses. New-generation OEO cross-connects use denser semiconductor technology, advanced software intelligence, and integrated network management to improve space and power requirements, improving optical circuit provisioning times from days to minutes. These technology advances improve the price points of the products, helping carriers meet new growth capacity while lowering operating costs through legacy cross-connect migration.
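To put the 3072 bidirectional STS-1 cross-connect figure quoted above into bandwidth terms, each STS-1 carries 51.84 Mbps, so the fabric works out to roughly 160 Gbps of switching capacity. The arithmetic, as a quick sketch (an illustration, not an official capacity specification):

# Converting the ONS 15600 cross-connect count quoted in the text into bandwidth.
STS1_MBPS = 51.84
sts1_cross_connects = 3072

fabric_gbps = sts1_cross_connects * STS1_MBPS / 1000
print(f"~{fabric_gbps:.0f} Gbps of bidirectional STS-1 switching capacity")  # ~159 Gbps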
Optical to Optical to Optical (OOO) The all-optical cross-connect (OOO) has not enjoyed the benefit of an early adopter development within enterprise organizations, where requirements start small and availability is reasonably tolerant. The OOO is a new-generation version of the OEO switches, which are exclusively used in carrier networks. The OOO cross-connect has been and continues to be developed for the service provider market, where functional requirements are the highest and carrier-grade availability is the most stringent. The success of the OOO cross-connect platforms depends on the ability to achieve carrier-grade functionality and reliability at a price/performance allowing OOO products to compete for new network builds and, over time, to compete as OEO replacements. For all-optical OOO switches to reach critical market mass (if history is a suitable teacher), they will need to meet or exceed the functionality and reliability of OEO switches and reach a compelling ten times the price/performance of current switch fabrics.
This section explains some of the requirements, services, and challenges of OOO cross-connect switching.
OOO Requirements OOO cross-connect switching needs to meet the following requirements:
• Switch light with minimum optical insertion loss—When using OOO switches in long-haul networks, the optical power budget is already under intense focus. OOO switches must have minimum impact on the optical span budgets of these networks.
• Scale in terms of ports and throughput—Switches must offer a number of n x n optical matrices. Popular switch matrix sizes are 32 x 32, 256 x 256, 512 x 512, and 1024 x 1024.
• Bit-rate and protocol independence—Leaving wavelengths in the all-optical domain provides both bit-rate and protocol independence. This allows an OOO switch to scale almost limitlessly while remaining protocol and data rate agnostic.
• Carrier-grade reliability—Reliability must challenge that of existing technologies and systems.
• On-par network management capabilities—OOO switches must integrate with provider network management systems and support optical standards such as those developed by the Optical Internetworking Forum.
• Reduction in space and power requirements—All-optical switches can potentially realize large savings in power and space requirements. This is an important requirement for helping providers manage their operational budgets and protect margins.
• Multivendor interoperability—Most providers' optical networks are multivendor architectures. OOO switches must be usable in these situations.
OOO Services OOO cross-connect switching offers the following services:
• Wavelength services—leasing—The leasing of wavelengths by customers is seen as a renewable resource by providers.
• Managed wavelength services—Many customers are looking for optical networking at high bit rates (such as for supercomputing) but are not willing to internalize the skills necessary to support sophisticated optical networking. Providers are using managed wavelength services to address the needs of these customers.
• Wavelength conversion—Wavelength conversion can be used to connect a customer's disparate locations that are using different technologies or wavelengths. Wavelength conversion is required for these instances.
• Optical VPNs—The ability to segment optical backbone capacity and/or wavelengths into optical virtual private networks is a market opportunity that is assisted with OOO switching.
OOO Challenges One of the primary challenges of all-optical switches is how to support decision logic. Software is often integrated into firmware, which is largely electronic today. Embedding software into OOO switches will likely remain an electronic function for a long time. A switch can still be classified as an OOO device as long as it manipulates optical wavelengths totally in the optical domain. OOO cross-connect switching faces the following challenges:
• High insertion losses—Optical switching fabrics can have high insertion losses between 6 and 12 dB. Coupling an input fiber into an optical switching component and then into an output fiber port is fraught with several dBs of optical power loss. If the OOO switch is a multistage switch, then more insertion loss will occur as the light passes from stage to stage. High insertion or pass-through losses have both upstream and downstream effects on wavelength amplification design.
• Switching speed—MEMS coupled to gimbal mirrors have switching speeds between 10 and 25 ms. Inkjet bubble technology is just less than 10 ms. These latency times must improve. Liquid crystal light valves are another development that may reduce switching times to submillisecond and perhaps nanosecond levels over time.
• Subrate grooming—In the digital cross-connect and OEO domains, subrate signals are multiplexed electrically before being optically generated. In an OOO switch, it is desirable that this be an all-optical process. Dedicating large parts of an optical switch matrix to subrate grooming may not be an efficient use of resources.
• Restoration switching—In the event of a fiber or equipment failure, the fault must be instantly recognized. Fault awareness needs to be broadcast to all affected nodes, usually by the optical service channel and the network management systems. A coordinated switchover for 1+1, 1:1, or 1:n protection must be executed to redirect traffic to the protection path(s). OOO switches must have fast switching speeds while integrating well with the network management systems of the long-haul system. With multiwavelength DWDM systems, the ability to make a switching decision based on the loss of one or more wavelengths becomes an architectural decision. Providers will want flexible options for restoration switching.
All-optical, OOO cross-connect switches are on the market and are present in some network deployments, primarily in long-haul optical switching centers. Most of these are considered first-generation products. However, OOO switches, just like optical fiber communications, are essentially all-analog devices. As analog switches, they have the unique abilities of bit-rate independence and protocol transparency. They also can contain optical amplifiers to provide 1R wavelength amplification.
MEMS technology is the established building block for creating an all-optical switching matrix. At a basic level, MEMS uses miniaturized electromechanics to tilt mirrors on a twoor three-dimensional axis, thus reflecting light wavelengths from an optical input port to an optical output port. Bidirectional OOO switches having a 256 x 256–port matrix, a 1024 x 1024–port matrix, or even larger, are deployed in a number of networks. Since the switching function is essentially mirror-reflected light-path redirection, both bit-rate and protocol independence is inherent to the switch, theoretically future-proofing the OOO cross-connect for unlimited scalability. As discussed previously regarding the use of MEMSs in ROADMs, the cascading of MEMS is necessary to create large matrices of OOO switching, but this technology choice introduces higher insertion loss, requires internal amplification to compensate, and generates ASE noise buildup. Switching times have also been problematic, necessitating a restoration dependency on a conventional TDM layer overlay to maintain protection schemes. Wavelength blocker switches using liquid crystal technology have also been adapted to the architecture of OOO switching by a few equipment manufacturers and may transition well into commercial OOO products. The outlook for next-generation OOOs resembles an extension of planar lightwave circuit (PLC) ROADM technology. All-optical, fast-switching, low-noise, low insertion loss, silicon-based PLC technology shows great promise as an OOO-switching architecture. Adapting PLCs beyond just east/west switching to include north/south switching and other compass points in between will enhance the service reach of OOO switching in the optical switching centers of long-haul networks. PLC-based OOO cross-connects may start as PLC ROADMs with large channel counts and grow to large matrices via modularization. At its heart, the all-optical OOO switch needs to appropriately meet requirements, provide services, and address many of the challenges described in this section. For example, it is difficult to determine the BER after passing through an OOO cross-connect, and sublambda awareness and grooming are unlikely features. It’s imaginable that OOO cross-connect technology will not incorporate backward-compatible functions for tight integration with OEO switches. A hybrid OOO and OEO switch may be required to address a graceful migration strategy. A pure-play OOO technology may merit a distinct delineation of use and positioning within a network. All-optical processing benefits long-haul network architectures through lower operational costs from reduced space and power requirements and through increased availability from fewer component failures. The future ability to commercially deliver OOO cross-connects that are fast enough to become optical packet switches/routers would completely transform light-path provisioning. Switching times in the picoseconds would likely be realized. Next-generation OEO functionality will continue to address near-term requirements such as reliability and manageability, while reducing capital and operational expenses. OOO technology will continue pursuit of greenfield network opportunities while improving support for the carrier standards necessary for OEO platform integration and mass migration.
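The insertion-loss penalty mentioned above compounds quickly when switch stages are cascaded, because losses in dB simply add. The following sketch illustrates the effect using assumed per-stage figures drawn from the 6 to 12 dB range cited earlier; the numbers are illustrative, not measurements of any product.

# Cumulative insertion loss through cascaded optical switching stages.
# Per-stage and coupling losses below are assumed values for illustration only.

per_stage_loss_db = 8.0
connector_loss_db = 0.5    # assumed per-stage coupling/connector loss

for stages in (1, 2, 3):
    total = stages * (per_stage_loss_db + connector_loss_db)
    print(f"{stages} stage(s): ~{total:.1f} dB of insertion loss to compensate")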
Hybrid OOO and OEO Technologies It’s conceivable that the best of OOO and OEO technologies may become hybrid combinations that meet all of the requirements of service providers. This will probably manifest itself in OXO switches where the strengths of each technology apply to the required function. A dual-switching fabric would be the result. For example, optical wavelengths passing through the OXO node would use the OOO functionality to pass through a wavelength(s) without latency, protocol, or bit-rate dependency. Requirements for SONET/SDH intelligence, broadcasting or multicast, buffering, and subrate grooming may best be handled in an electrical switch fabric path, obtaining 3R treatment before delivery to an optical output fiber. There is an abundance of researchers and innovators that are addressing the opportunities in the area of optical cross-connect switching. Some are working on near-term market requirements, while others are looking to deliver the all-optical packet switch or to advance the possibility of optical computing. OEO, OOO, and OXO hybrids are some of the optical cross-connect platforms likely to help photons make the optimum switch. With potentially millions of lambdas, high-speed optical circuit switching may give new life to the once-plausible circuit-switched model. With an abundance of lambdas and dynamic, fully photonic long-haul networks, the effortless switching of lambdas may challenge the packet-switching model, a model that is predicated on bandwidth constraint.
Technology Brief—Long-Haul Optical Networks This section provides a brief study on long-haul optical networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding long-haul optical networks.
• Technology at a Glance—Uses figures and tables to show long-haul optical network fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint Long-haul optical networks are at the core of global information exchange. They form the underpinnings of countless national and international communication networks as well as the backbone of the global, galactic Internet. Their primary application is the “speed of light” transport of voice, video, and integrated data communications between distant city pairs, from coast to coast, and from shore to shore. Long-haul optical networking is an applied science, and successful long-haul networks are engineered works of art. Long-haul optical networks benefit from two complementary ascendancies:
• The speed of optical modulation
• The density of optical wavelengths
Both perpetuate the intrinsic capacity of optical fiber, the quintessential looking glass of long-haul networking. Capacity is the defining scarcity, as long-haul fiber deployment is a long-term capital asset, often exceeding 20- to 30-year life cycles. Until the commercial viability of wavelength division multiplexing, the only way to address long-haul bandwidth constraint was either through bit-rate increases, through the “lighting” of additional dark-fiber strands, or through deploying new fiber at great expense. As long-haul networks approached fiber exhaust during the boom years of the Internet, wavelength division multiplexing technology delivered on the promise of capacity abundance. Long-haul optical networks are distance-classified by their effective reach without regeneration with traditional long-haul networks beaming from 600 to 1000 km, extended long-haul flashing from 1000 to 2000 km, and ultra long-haul shooting beyond 2000 km. To achieve such protracted distances, each must make use of single-mode fiber, choosing from a technical variety of fiber types, every one optimized for application specificity. For long-haul networking, the compelling fiscal driver is the lowering of total cost per bit while accelerating service flexibility and unleashing infinite scalability. The advent of DWDM established the virtuality of optical fiber. DWDM allows for the multiplication of capacity by adding unique wavelengths within the same fiber strand. As a result, many distinct optical signals share the same fiber, boosting net capacity while depending less on complex bit-rate increases. In this way, DWDM leverages long-haul fiber to the maximum, altering the value perception and net worth of a physical fiber pair. The current yardstick for measuring the cost effectiveness of DWDM optics is wavelengths times bit miles per cable without enduring opto-electronic regeneration. With wavelength speeds advancing from 10 to 40 Gbps and beyond, DWDM wavelengths per fiber strand exceeding 1200 channels, and fibers per cable reaching 1200 strands, longhaul all-optical networking will weave a global computer bus-backplane connecting billions of stored digital objects, and a billion or more handheld computers and personal digital media devices—all in near real time with leftover lambdas for everyone.
The reduced barriers to capital entry and the perpetual drive to lower total cost per bit apply pressure on service competitiveness and long-haul operational expenses. Provisioning operations should improve process times, network administration should automate functions and increase availability, and spare inventory management expenses should be reduced as a percentage of operational budgets. A new era of tunable components is reaching the market and shows promise in transforming these areas. Automatic optical power control, tunable lasers/filters, and remotely reconfigurable add/drop multiplexers are reducing cycle times of operational functionality and service provisioning, while driving down card- and component-sparing costs. In addition to tunability, optical advances are producing optical cross-connects, facilitating the construction of all-optical networks. The agility to switch optical input to optical output is challenging yet paramount. All-optical transmission keeps photons in a massless state, where they can move effortlessly and swiftly with little energy waste. Optical crossconnects show promise, allowing long-haul networks to oscillate faster, reach farther, and run cooler. Most importantly, they foster intermediate optical interchanges, allowing more service on-and-off ramps and their associated revenues between long-haul optical terminal nodes. All-optical networks will always outperform optical-electronic networks in latency, reliability, flexibility, and—ultimately—in cost. There is a trend away from DWDM point products and toward the amalgamation of DWDM within multiservice platforms. For example, the integrated DWDM architecture of the Cisco ONS 15454 MSTP helps eliminate discrete DWDM-only platforms, reducing total capital and operational expenses over the investment horizon. The ONS 15454’s MSTP and MSPP bifunctionality provides a choice of multiservice aggregation, DWDM wavelength aggregation, and wavelength transport. Combining these services with intelligent DWDM transmission in a single platform enables networks to be cost-optimized for any mix of customer services. The Cisco optical platforms represent an optimum blend of services with metro, regional, and long-haul DWDM features. The new era of networking is based on increasing opportunity through service pull rather than technology push. Long-haul networks “pull” data between metropolitan areas, dense population centers, and international communication hubs to make business and pleasure more instantaneous. Weaving a global web of glass and light, photonic data pulsates at the speed of light, seeking effortless information exchange. Long-haul networks feed on metropolitan data creation, aggregation, and object storage. All-optical networks and fiber to the home represent new sources of data growth, data that eventually flows upstream into and through long-haul optical networks. With the essence of the Internet stored in the metropolitan networks of the world, long-haul networks provide the instantaneous linkage between global data demand and information supply. Long-haul optical networks, therefore, are the harbingers of ubiquitous broadband, foretelling the new era of optical wealth with enough lambdas for all. Facilitating the abundance of capacious lambdas will be the continued deployment of optical fiber networks—to the extent that someday there will be more optical glass underground than there is glass above.
Technology at a Glance Figure 7-14 shows the network classifications of long-haul technologies.
Figure 7-14 Optical Technology Application (network classification by unregenerated distance):
Long Haul: 600 km (372 miles) to 1000 km (620 miles)
Extended Long Haul: 1000 km (620 miles) to 2000 km (1240 miles)
Ultra Long Haul and Submarine: 2000 km (1240 miles) to 4000 km (2480 miles) and beyond
Figure 7-15 shows the DWDM building blocks of a long-haul network.
Figure 7-15 Long-Haul Network DWDM Building Blocks (typical LH: 600 km; typical ELH: 2000 km). Client signals enter line-terminating equipment (LTE), are converted to ITU wavelengths via transponders, multiplexed, and optically amplified (EDFA, Raman) across the spans; OEO regeneration, optical amplifier (OA), and ROADM sites sit along the route; at the far end the wavelengths are demultiplexed and handed to receive transponders with IR/SR optical interfaces. Source: Cisco Systems, Inc.
Table 7-5 expands on the optical technology used for the Cisco ONS 15454 MSTP.
Table 7-5 Cisco ONS 15454 Optical Technology
(Columns: Interface Card | Speed(s) | Reach | Wavelengths Supported | Laser Transmitter Technology | Receiver Technology | Comments)

OC3/STM1 | 155.52 Mbps | IR/SH | 1310 nm | Fabry Perot | InGaAs/InP photo detector | SONET/SDH
OC12/STM4 | 622.08 Mbps | IR/SH | 1310 nm | Fabry Perot | InGaAs/InP photo detector | SONET/SDH
OC12/STM4 | 622.08 Mbps | LR/LH | 1310 nm | Fabry Perot | InGaAs/InP photo detector | SONET/SDH
OC12/STM4 | 622.08 Mbps | LR/LH | 1550 nm | Distributed feedback (DFB) laser | InGaAs/InP photo detector | SONET/SDH
OC48/STM16 | 2.49 Gbps | IR/SH | 1310 nm | Uncooled direct modulated DFB | InGaAs/InP photo detector | SONET/SDH
OC48/STM16 | 2.49 Gbps | LR/LH | 1550 nm | Distributed feedback (DFB) laser | InGaAs/InP photo detector | SONET/SDH
OC48/STM16 DWDM | 2.49 Gbps | ELR/ELH | ITU-T 100 GHz, 1530.33 to 1560.61 nm | Electroabsorption modulated (EAM) laser | InGaAs/APD photo detector | 37 cards with ITU-T 100 GHz spacing
OC48/STM16 DWDM | 2.49 Gbps | ELR/ELH | ITU-T 200 GHz, 1530.33 to 1560.61 nm | Electroabsorption modulated (EAM) laser | InGaAs/APD photo detector | 18 cards with ITU-T 200 GHz spacing
OC192/STM64 | 9.95328 Gbps | SR/IO | 1310 nm | Directly modulated distributed feedback (DM/DFB) laser | PIN diode | SONET/SDH
OC192/STM64 | 9.95328 Gbps | IR/SH | 1550 nm | Cooled electroabsorption modulated (EAM) laser | PIN diode | SONET/SDH
OC192/STM64 | 9.95328 Gbps | LR/LH | 1550 nm | Lithium niobate external modulator | Avalanche photodiode/TIA (APD/TIA) | SONET/SDH
OC192/STM64 | 9.95328 Gbps | LR2/LH | 1550 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | SONET/SDH
OC192/STM64 DWDM | 9.95328 Gbps | LR/LH, ITU 100 GHz DWDM | Blue band 1534.25 to 1540.56 nm; red band 1550.12 to 1556.55 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 8 to 32 cards with ITU-T 100 GHz spacing
MXP-2.5G-10G (4 x 2.5G muxponder) | 2.48832 Gbps client signal to 9.95328 Gbps or 10.70923 Gbps with FEC DWDM signal | ITU-T 100 GHz DWDM | 16 cards with 2 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 16-card versions to create 32 channels
TXP-MR-2.5G, TXPP-MR-2.5G (multirate transponder) | 8 Mbps to 2.48832 Gbps client signal into 8 Mbps to 2.48832 Gbps DWDM signal, or 2.6 Gbps DWDM signal with FEC | ITU-T 100 GHz DWDM | 8 cards with 4 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 8-card versions to create 32 channels
MXP-L1-2.5G, MXPP-L1-2.5G (multiservice aggregation muxponder) | 200 Mbps to 2.48832 Gbps client signal into 2.48832 Gbps DWDM signal | ITU-T 100 GHz DWDM | 8 cards with 4 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 8-card versions to create 32 channels
TXP-MR-10G (multirate transponder) | 10 Gbps client signal into 10 Gbps DWDM signal at 9.95328 Gbps, 10.70923 Gbps with FEC, 10.3125 Gbps for 10GE, or 11.0095 Gbps with FEC over 10GE | ITU-T 100 GHz DWDM | 16 cards with 2 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 16-card versions to create 32 channels
TXP-MR-10E, 15454-10E-L1xx.x (enhanced multirate transponder) | 10 Gbps client signal into 10 Gbps DWDM signal at 9.95328 Gbps or 10.70923 Gbps with FEC; 10GE and 10G FC | ITU-T 100 GHz DWDM | 8 cards with 4 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 8-card versions to create 32 C-band channels and 5-card versions for L-band channels
15454-10ME-L1xx.xx (enhanced 4 x 2.5G muxponder) | 2.48832 Gbps client signal to 9.95328 Gbps or 10.70923 Gbps with FEC DWDM signal | ITU-T 100 GHz DWDM | 8 cards with 4 wavelengths tunable from 1530.33 to 1560.61 nm | Lithium niobate external modulator | Avalanche photodiode (APD) | 8-card versions to create 32 channels
Table 7-6 shows a comparison of long-haul optical technologies.
Table 7-6 Long-Haul Optical Technologies

Key standards (physical layer): DWDM ITU-T G.692 and G.694 (wavelength grid); SONET GR.253.CORE, ANSI T1.105/T1.106; SDH ITU-T G.691; SDH ITU-T G.707 CCAT; SDH ITU-T G.783; SDH ITU-T G.957; ITU-T G.707/Y.1332 VCAT; ITU-T G.7042/Y.1305 LCAS; ITU-T G.7041/Y.1303 GFP; RFC 1662 PPP over SONET/SDH with HDLC; RFC 2615 PPP over SONET/SDH.

Seed technology: Optical fibers: ITU-T G.652 SMF, ITU-T G.652.C ZWP SMF, ITU-T G.653 DSF, ITU-T G.654 PSCF, ITU-T G.655+ NZDSF, ITU-T G.655- NZDSF. DWDM lasers: semiconductor lasers, LEDs, VCSELs, direct and externally modulated. Photodiodes: PIN, APD. Erbium-doped fiber amplifiers (EDFAs), fiber Raman amplifiers (FRAs), and praseodymium-doped fiber amplifier (PDFA) amplification. Transponders, multiplexers, demultiplexers, variable optical attenuators, dispersion compensators, fiber Bragg gratings, arrayed waveguides, passive optical filters, optical circulators, isolators, band separators, splitters, combiners. MEMS, wavelength blockers, planar lightwave circuits. Pluggable optics: ITU-T GBIC, SFP, Xenpak, XFP. NRZ, RZ, and duobinary modulation. FEC, E-FEC.

Distance range: Trunk interface optics: LH, 600 km to 1000 km; ELH, 1000 km to 2000 km; ULH, 2000+ km. Client interface optics: long reach, 40 km at 1310 and 1550 nm; intermediate reach, 15 km at 1310 and 1550 nm; short reach, 2 km at 1310 nm with MMF.

Interface speed support: T1/E1 (DS0/DS1), T3/E3; OC-3/STM-1, OC-12/STM-4, OC-48/STM-16, OC-192/STM-64; Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), 10 Gigabit Ethernet (10 Gbps); Fibre Channel, FICON, ESCON; D1 Video, HDTV; 2.5 Gbps, 10 Gbps, 40 Gbps (future).

Key bandwidth capacities: Total capacity per fiber pair (bit rate x lambdas): 10G x 32 = 320 Gbps; 10G x 40 = 400 Gbps; 10G x 80 = 800 Gbps; 10G x 120 = 1200 Gbps.

Topologies: LH: linear point to point, mesh, ring. ELH: linear point to point, mesh. ULH: linear point to point.
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions
and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer’s value distinction. The following chart lists typical customer business drivers for the subject classification of network. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider’s critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers. Figure 7-16 charts the business drivers for long-haul networks. Figure 7-16 Long-Haul Optical Networks
The chart maps a market value transition from low cost and competitive maturity toward high value, market share, and market leadership, with business drivers feeding technology, product, and application choices to produce solution delivery and service value:
• Business drivers: multiservice SONET/SDH, Ethernet, and storage integration with integrated, intelligent DWDM; lowering cost (CapEx and OpEx); data/Internet traffic growth; national and international reach; revenue from data services; regional optical networks; Ethernet over WAN; fiber capacity relief.
• Critical success factors: reconfigurable OADM (ROADM); trunk-side tunability for ITU optics; client-side pluggable ITU optics; A-to-Z circuit or wavelength provisioning; reduction in spares inventory; lowering the cost of wavelengths per bit mile per cable without regeneration; topology flexibility.
• Technology: optical fibers; DWDM; semiconductor fixed and tunable lasers and modulators; PIN and APD receivers; EDFA and Raman amplification; DCUs; optical filters, couplers, gratings, circulators, isolators, splitters, combiners, muxes, demuxes, transponders, and muxponders; MEMS and PLCs; FEC and E-FEC.
• Cisco product lineup: ONS 15454 MSPP/MSTP, ONS 15600 MSSP, Cisco Transport Manager.
• Applications: long-haul, extended long-haul, and ultra long-haul transport; transoceanic and international transport; VoIP backbones; storage networks; medical imaging; astronomy; geological imaging; supercomputing; grid computing.
• Service value: improved provisioning (days to minutes); interface parity with customer IP/Ethernet/storage over optical; managed wavelength services; reduced time to market; optical VPNs; wavelength transport; 10 Gigabit Ethernet connectivity; carrier's carrier transport services; all-optical cross-connect switching.
• Cisco key differentiators: plug-and-play multiservice; automatic power control; dynamic, tunable optics.
• Industry players: service providers (IXCs, next-generation IXCs, PTTs, ILECs, ISPs, cable operators) and equipment manufacturers (Cisco Systems, Nortel, Lucent, Alcatel, NEC, Ciena, Huawei, Fujitsu, Tellabs).
End Notes
1. Cisco Systems, Inc. "Cisco Systems ONS 15540 ESPx Planning Guide, Optical Loss Budgets." http://www.cisco.com/en/US/products/hw/optical/ps2011/products_implementation_design_guide_book09186a0080220fb9.html.
2. TeleGeography Research. Submarine Cable System Diagram. http://www.telegeography.com/ee/free_resources/ib2004-02.php.
References Used in This Chapter
Cisco Systems, Inc. "Cisco ONS 15808 DWDM System Description Manual." http://www.cisco.com/application/pdf/en/us/guest/products/ps2018/c1028/ccmigration_09186a00801d33d3.pdf.
Cisco Systems, Inc. "10-GBPS Multirate Enhanced Transponder Card for Cisco ONS 15454 Multiservice Transport Platform." http://www.cisco.com/en/US/partner/products/hw/optical/ps2006/products_data_sheet0900aecd80101903.html. (Must be a registered Cisco.com user.)
Cisco Systems, Inc. "Cisco ONS 15454 MSPP Engineering Planning Guide." http://www.cisco.com/univercd/cc/td/doc/product/ong/15400/r4145doc/41plangd.pdf.
Gumaste, Ashwin, and Tony Antony. DWDM Network Designs and Engineering Solutions. Cisco Press, 2003.
This chapter covers the following topics:
• Narrowband—Squeezing Voice and Data
• Broadband—Pushing Technology to the Edge

CHAPTER 8
Wireline Networks One phone company, one type of phone, one service (talk), and per-minute rates—for well over 120 years residential wireline has been a narrowband world. The 1876 invention of the telephone spawned a massive communications utility that emanated like spider webs of wood and wire. Abundant resources of copper and trees were mined and harvested for use as an electrical signal distribution system, which came to be known publicly as telephone lines and telephone poles. Copper wire was the first way to communicate over great distance effectively, and through these webs of wood and wires the first telephone service providers found both opportunity and success. These high-wire acrobats strung the nations with a physical medium capable of placing any two people within vocal exchange with one another in just a few seconds. The telephone utilities fueled a tremendous surge of national productivity as businesses shaved significant amounts of time from interactive processes. Government would now communicate coast to coast with near-instant reachability. National security was finally national. Desired services were finally at our beck and call. The providers of wireline networks had delivered on their promise. The nation began to truly operate in real time. The power of voice communications pulsed through the insulated copper backbone of America and abroad, sparking a new consciousness regarding the true value of time.

Wireline networks were originally built for voice and then adapted and overlaid with technologies for business data. A cable television provider with a larger wire called coaxial cable appeared on the wireline scene and began to deliver higher-quality video. For many years, both telephony and cable wireline networks have been at work—aggregating, mechanizing, and then sampling, quantizing, encoding, and optimizing voice and video communications for businesses and residences.

Wireline providers have been providing private data communications services to businesses since the 1960s and originally supplemented remote and residential data connectivity with low-speed dial-up over analog telephone facilities. In the early 1990s, spurred by the national and international appeal of the Internet, providers began build-out and delivery of longer-duration data services to residential users and the small business market. The telephony wireline providers began with narrowband dial-up services, but the data demand loomed ever larger. Existing T1/E1, T3/E3, and higher-speed data technologies used for businesses could not easily be repositioned and repriced for
residential use. New lower-cost broadband technologies such as cable high-speed data (HSD) and Digital Subscriber Line (DSL) were needed, and leveraging the copper infrastructure of the wireline providers was a fundamental requirement for the pricing model. In addition to HSD and DSL, Ethernet, optical, and Voice over IP (VoIP) technologies are making their way into the wireline access networks, enabling new services for small, medium, and large businesses, and for residential broadband.

This chapter introduces the narrowband and broadband technologies that are used by the wireline service providers. Granted, today these providers use optical core network technologies between their central office (CO) and head ends to aggregate wireline access technologies. You've already covered optical technologies in Chapter 5, "Optical Networking Technologies." The intent of this chapter is to focus on the technologies and services of the wireline access layer.
Narrowband—Squeezing Voice and Data Narrowband, as a classification of communications bandwidth capacity, is generally defined as a telecommunications channel that is not capable of T1 (1.544 Mbps) data rates. Within the U.S., narrowband has its roots in 24-gauge and 26-gauge copper wire, some 65 million twisted tons of it, deployed by the regional Bell operating companies (RBOCs) for the express purpose of delivering universal telephony service to the U.S. Referred to as phone wire, two insulated, thin-gauge copper wires are twisted in a helical pattern around one another to form a wire pair. You can transmit and receive on both wires in the pair depending on your point of reference. The term narrowband most likely originated from the description of the range of electrical frequencies that were initially designed for this type of wire. Anxious to wire the world as expeditiously as possible, most telephone service providers pumped a limited range or narrowband of low frequencies through the wire up to a limit of about 3 miles, generally close to 18,000 feet. The standard frequency range for passing analog voice services through phone wire consists of a band of about 3 kHz that is layered between 0 Hz and 4 kHz. The premise for using this set of frequencies is
• The frequency band is well within the audible range.
• The lower the frequency, the faster and farther an electrical signal will travel through copper wire. At its limit, typical phone wire today can transmit about 1 MHz of frequencies for a distance of 18,000 feet, or 30 MHz for a distance of about 200 yards.
Narrowband technology, the use of thin, unshielded, twisted-pair copper wire and low frequencies, forms the foundation of the Telco-spun nucleus of wire emanating from the provider’s CO voice switches to the subscriber residence, known as the voice residential
loop, as described in the next section. Narrowband includes both analog and digital versions of the residential loop, and technologies such as Integrated Services Digital Network (ISDN) and Frame Relay are generally classified as narrowband communications services.
Residential Loop for Analog Transmission

The residential telephone user calls analog telephony a phone line. The news media classifies analog telephony as plain old telephone service (POTS). Telecommunications providers refer to analog telephony as the residential loop or local loop. The loop reference comes from the acknowledgement that for an analog voice signal to travel to a residence and back, the electrical pulses have to loop their way through the residential telephone set to maintain an electrical current path with the Telco provider's CO equipment.

No matter what the reference, this POTS, providing just local telephony, nets the U.S. wireline telephony service providers over $120 billion per year. Throw in wireless revenues (many wireline carriers have wireless networks as well) and toll service, and the total annual revenue exceeds $290 billion. This revenue stream is spread across approximately 15,600 telephone switches nationwide and 178 million telephone loops, as reported by the FCC in 2004. At the time, California, Texas, and New York made up the top three states for numbers of telephone loops, totaling over 47 million loops, or 27 percent of the total access lines. Florida, Pennsylvania, and Illinois had 27 million loops between them, an additional 15 percent. Further down the list, Tennessee registered about 3.2 million telephone loops, Oklahoma carried just under 2 million, and Wyoming had the fewest of the 50 states, at just under 300,000 loops.1 For the wireline companies, meaning the traditional voice service providers, these local loops represent a recurring revenue base, a cash cow of multibillion-dollar proportions.

Analog loop transmission technology is similar to frequency modulation (FM) radio. Analog is simply defined as a signal that varies in frequency and amplitude on a continuous basis. When someone speaks into the mouthpiece of a telephone handset, technology in that part of the transceiver modifies the sound waves of the spoken voice into a representative pattern of frequencies and amplitudes that is modulated onto the local loop wire pair. The pattern of frequency changes and amplitude swings represents an electrical image of the original voice and is carried through the local loop to the CO voice switch. The CO voice switch has previously switched, or cross-connected, and holds electrically active the designated local loop where the conversation is to be delivered. After traveling down the local loop to the intended residence, the modulated frequencies and amplitudes are received and then induce, as closely as a 3.4 kHz range can represent, the original spoken sounds in the earpiece of that party's receiver. In summary, sound waves become electrical signals, which, in turn, become sound waves once again at the destination end.

By the mid-1970s, digital loop technology became an affordable option, heralding potential changes in the electrically continuous circuit of the analog local loop.
Going Digital with PCM and TDM

True digital telephone technology became generally available for the nation's voice switches about 1976. The analog loop had enjoyed its 100th birthday, and the contemporary gift of digital technology would become the embodiment of the wireline provider's CO switching infrastructure. Used first to optimize switching and backhaul transport of interoffice voice trunking, the digital application of voice would benefit the capital expenditure (CapEx) and operational expenditure (OpEx) components of wireline providers and become the basis of new service delivery options for businesses.

Pulse code modulation (PCM), defined in International Telecommunication Union Telecommunication Standardization sector (ITU-T) standard G.711, digitized analog voice calls, optimizing both switching and transport within the new digital backbone of the public switched telephone network (PSTN) infrastructure. By going digital, you could put computers to work processing and switching digital 1s and 0s. Voice switches use special-purpose digital computers under stored program control (software), the heart of their intelligence.

The digitization begins with PCM at the CO end of the analog loop, sampling the analog signal 8000 times every second. Each of these 8000 samples is quantized and then encoded into an 8-bit digital byte, as shown in Figure 8-1 and Figure 8-2, which multiplies out to 64,000 bits per second (bps). These 64,000 digital bits are essentially the digital representation of one second of analog voice. The wireline provider takes some overhead from this bit stream if using in-band control signaling and monitoring, so about 8000 bits are used, leaving about 56,000 bits for the voice payload. If the signaling is out of band, the full 64,000 bits can be used to represent the analog voice signal. The result is a 64 Kbps digital narrowband signal known in the U.S. as the digital signal zero (DS0). The European standard uses this same technique to arrive at a 64 Kbps narrowband signal, which is designated as an E-0.

Figure 8-1   Analog Voice Input to Sampling Stage (frequency-modulated analog voice input of about 400 to 3400 Hz from the telephone feeds the sampling stage: 8 bits per sample, 8000 samples per second)
Figure 8-2   PCM—Analog-to-Digital Conversion (the sampling stage output, 8 bits per sample at 8000 samples per second, passes through the PCM quantizing stage, where the samples become a digital bit stream such as 101100110011110)
These sampling and encoding techniques were chosen by the standards organizations to strike an excellent balance between analog-to-digital voice quality and the size of the digital data stream that is created.
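The PCM arithmetic just described is easy to verify directly. The short Python sketch below is an illustration only; it uses the figures quoted in the text (8000 samples per second, 8 bits per sample, 8000 bits of in-band signaling overhead) and a simple linear quantizer rather than the mu-law/A-law companding that G.711 actually specifies.

import math

SAMPLE_RATE = 8000       # samples per second, as described above
BITS_PER_SAMPLE = 8      # each sample is quantized and encoded into one byte

def quantize(sample, bits=BITS_PER_SAMPLE):
    # Map an analog sample in the range -1.0..1.0 onto one of 2**bits levels.
    # Linear steps keep the sketch short; real G.711 uses companded steps.
    levels = 2 ** bits
    level = int((sample + 1.0) / 2.0 * (levels - 1))
    return max(0, min(levels - 1, level))

# One second of a synthetic 1 kHz test tone, sampled 8000 times
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
encoded = [quantize(s) for s in tone]

ds0_rate = SAMPLE_RATE * BITS_PER_SAMPLE     # 64,000 bps: one DS0
in_band_signaling = 8000                     # overhead when signaling is in band
print(len(encoded), "samples per second ->", ds0_rate, "bps DS0")
print("payload left with in-band signaling:", ds0_rate - in_band_signaling, "bps")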
Narrowband Aggregation for DS1 and E1

To further optimize the transport and switching of digital voice calls between CO switches, which are common narrowband aggregation points for multiple concurrent voice calls, the wireline industry elected to package up to 24 of these digitally converted voice calls into a one-second overall digital signal. Aggregating and assigning these 24 digital voice calls of 64 Kbps each into 24 time slots is a process known as time division multiplexing (TDM). With TDM, the voice calls could be multiplexed at the aggregation end and sent across a pair of wires for a total of 24 times 64 Kbps, or 1,536,000 bps. Adding an additional 8000 bits of provider signaling and monitoring onto this amalgamated data stream yields 1,544,000 bps, and this became the basic digital building block of U.S. telephony, known as the digital signal one (DS1).

In Europe, the same 64 Kbps (DS0) narrowband digital signal is used, yet the European standard multiplexes 32 of the DS0 signals into a signal yielding 2,048,000 bits per second, known as an E-1 digital signal, often abbreviated as E1. (Two of the 32 DS0s within the E1 are used for provider purposes, so effectively 30 DS0s are available for throughput, about 1,920,000 bits per second per E1.)
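As a quick check on the multiplexing arithmetic above, the sketch below builds the DS1 and E1 line rates out of DS0s exactly as the text describes: 24 DS0s plus 8000 framing and signaling bits for a DS1, and 32 DS0 time slots (30 usable) for an E1. It restates the numbers in the text; it is not a framing implementation.

DS0 = 64_000   # bps

def ds1_rate(voice_channels=24, overhead_bps=8000):
    # 24 multiplexed DS0s plus 8000 bits of provider signaling/framing
    return voice_channels * DS0 + overhead_bps

def e1_rates(timeslots=32, provider_slots=2):
    # 32 DS0 time slots; two slots are reserved for provider purposes
    line = timeslots * DS0
    payload = (timeslots - provider_slots) * DS0
    return line, payload

print("DS1 line rate:", ds1_rate())        # 1,544,000 bps
line, payload = e1_rates()
print("E1 line rate :", line)              # 2,048,000 bps
print("E1 payload   :", payload)           # 1,920,000 bps over 30 usable DS0s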
Containing 24 digital signals of 64 Kbps each (the basic DS0 level), the DS1 data stream could then be transmitted across a physical T1 carrier, or T1 facility. The DS1 represents the total encoded digital bit stream, one second of voice samples for each of the 24 voice calls, whereas the T1 carrier is the physical transmission facility that electrically clocks the DS1 onto the wire pair, delivering it to a T1 receiver in equipment at the other end. In Europe, no distinction is made between the digital signal and the physical transmission medium, so E1 can be considered as representing both.

In a post office analogy, the DS1 is like 24 different letters that the postman pulls out of the central post office bins, groups (multiplexes) into a bundle (the DS1), places into his mailbag (the T1 carrier), and transports to his assigned 24-house neighborhood, where he removes (demultiplexes) them from his mailbag and places the proper letters in each of the 24 mailboxes (DS0s) on his route. DS1 and T1 are different technologies, yet they are symbiotic and often used synonymously to represent the same digital communication process. A voice switch that supports 100 T1s would naturally be assumed to carry 100 DS1 signals, representing 24 unique DS0s in each of the T1s.

T1s and E1s, then, became the first essential digital building block for interoffice trunks within the wireline provider's geographic network, and they then progressed into telephony service options available to subscribers, primarily businesses large and small. Data is naturally digital, so T1s and E1s are commonly used to carry both digitized voice and data transmissions from customer site to customer site, primarily for large and small business communications (prior to residential demand for Internet connectivity).

Today, the provider's interoffice copper facilities are generally supported over higher-speed DS3/T3s and, in Europe, E3s. A DS3/T3 facility is the further amalgamation of 28 DS1/T1 signals, plus overhead, to form a 44,736,000-bps transmission facility, commonly called the 45 Mbps DS3/T3. An E3 is the further union of 16 E1s, comprising 32,768,000 bps of digital signal payload within a 34.368 Mbps line rate. To keep things within orders of magnitude, a 45 Mbps channelized T3 facility can carry 672 voice calls concurrently, while an E3 carries about 512 voice calls. Or, if you want to use the T3/E3 for supporting data, you have the option of configuring it for clear channel, allowing your data stream to use essentially the whole bandwidth of 45 Mbps/34 Mbps. As interoffice call volumes grow, T3s and E3s become the fundamental interoffice building block for copper facilities.
NOTE
T3s over copper need a larger wire to accommodate the increased frequency dynamics needed to send all of that digitally represented data per second. T3s over copper facilities use coaxial cable and are used as internal connections within and between providers and are almost exclusively sold to larger enterprises, which are positioned to justify and afford them.
Table 8-1 shows the U.S., European, and Japanese digital signal hierarchy standards up through the T3/E3/J3 levels.
Table 8-1   Digital Signal Hierarchy for the U.S., Europe, and Japan

Line Rate (bps)   Digital Signal     Physical   Digital Signal   Digital Signal   DS0s
                  (North America)    Carrier    (Europe)         (Japan)
64,000 bps        DS0                --         DS0              --               1
1.544 Mbps        DS1                T1         --               J1               24
2.048 Mbps        --                 --         E1               --               32
6.312 Mbps        DS2                T2         --               J2               96
8.448 Mbps        --                 --         E2               --               128
32.064 Mbps       --                 --         --               J3               480
34.368 Mbps       --                 --         E3               --               512
44.736 Mbps       DS3                T3         --               --               672
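For quick reference, the hierarchy in Table 8-1 can be captured as a small lookup structure. The sketch below simply transcribes the table's line rates and DS0 counts; it is not drawn from any standards document.

# Line rate in bps and DS0 capacity, transcribed from Table 8-1
DIGITAL_HIERARCHY = {
    "DS0": (64_000, 1),
    "DS1": (1_544_000, 24),  "J1": (1_544_000, 24),
    "E1":  (2_048_000, 32),
    "DS2": (6_312_000, 96),  "J2": (6_312_000, 96),
    "E2":  (8_448_000, 128),
    "J3":  (32_064_000, 480),
    "E3":  (34_368_000, 512),
    "DS3": (44_736_000, 672),
}

def describe(signal):
    rate, ds0s = DIGITAL_HIERARCHY[signal]
    return f"{signal}: {rate:,} bps line rate, {ds0s} DS0s ({ds0s} concurrent voice calls)"

for level in ("DS1", "E1", "DS3", "E3"):
    print(describe(level))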
The primary challenge with TDM-based interfaces is that the bandwidth options do not grow on a linear basis, but rather in a kind of step function. If your data demands exceed a T1's worth of bandwidth, you either have to use additional T1s, often called bonded T1s, or you have to subscribe to a fractional or full T3 facility. When moving to multiple bonded T1s, you must change the equipment interface on both the customer and the service provider equipment, and the cost of the change has a major impact on both the customer and the provider. The technical process used to accomplish T1 channel bonding also creates more overhead.

The data rate of a T1 facility, at essentially 1,536,000 effective bits per second (1.5 Mbps), is almost in the realm of broadband. Yet the T1, using two pairs of wires, was built on a business model that for over two decades has primarily been positioned, priced, and provisioned for businesses, not for residential subscribers. This being neither technically applicable nor affordable to the critical mass of the residential market, a different communications technology was needed to deliver higher-speed switched data.

Anxious to develop a switched broadband offering for the consumer market, and after much ado, the industry rallied around the creation of a technology that would allow both digital voice and data on the same residential wire pair. By standardizing nationally and internationally on the same technique, the wireline industry could potentially afford to move digital capabilities into the local loop, over the same gauge wire pair as before. In essence, the industry created an integrated digital network, which came to be called the Integrated Services Digital Network (ISDN).
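Before moving on to ISDN, the step-function economics just described can be made concrete with a rough sizing sketch. The payload figures come from the text; the four-T1 bonding limit is an illustrative assumption, not a tariff rule.

import math

T1_PAYLOAD = 1_536_000    # bps, 24 clear DS0s
T3_PAYLOAD = 44_736_000   # bps

def provisioning_step(required_bps, max_bonded_t1s=4):
    # Illustrative only: bond T1s up to an assumed limit, then jump to a T3.
    t1s = math.ceil(required_bps / T1_PAYLOAD)
    if t1s <= max_bonded_t1s:
        return f"{t1s} bonded T1(s), {t1s * T1_PAYLOAD:,} bps provisioned"
    return f"full T3, {T3_PAYLOAD:,} bps provisioned"

for demand in (1_000_000, 3_000_000, 10_000_000):
    print(f"demand {demand:,} bps -> {provisioning_step(demand)}")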
ISDN

ISDN is a point-to-point technology that rides the fence between narrowband and broadband. As an end-to-end switched digital technology, ISDN was the first of these technologies to be targeted at the residential loop to support simultaneous voice and data. Through its all-digital transmission, ISDN effectively doubles the local loop DS0 bit rate from 64,000 bps to 128,000 bps in its basic rate interface (ISDN BRI) specification. Another ISDN specification, the ISDN primary rate interface (ISDN PRI), is an enhanced, digital, T1-like facility capable of aggregating ISDN BRIs. As with all new technologies, there is usually a portion that ends up providing a missing link in the overall puzzle that is telecommunications. The most successful and beneficial uses of ISDN have been the following:
• ISDN basic rate interface (BRI)—Provided a higher-speed, lower-cost digital transmission than switched 56 Kbps dataphone digital service (DDS). As such, it became successful for WAN dial-backup, switched video calls, and on-demand WAN circuits for intranet and Internet use. Where ISDN was priced cheaply, ISDN BRI often was used for small to medium business WAN primary links. It also provided an option for a digital residential local loop.

• ISDN primary rate interface (PRI)—Developed as an aggregator and multiplexer for multiple ISDN BRIs, as a more effective and efficient trunking mechanism between voice private branch exchanges (PBXs), and as a higher-speed digital switched service such as is used for Internet access.

• Signaling System 7 (SS7)—Developed in parallel with the ISDN effort to provide an international standard for voice network integration and out-of-band signaling within and between service providers.
ISDN BRI

ISDN was defined by the Consultative Committee for International Telephone and Telegraph (CCITT) standards and works as a switched digital service with the goal of supporting concurrent voice and data, on a digital basis, to potentially every household in the larger communications market. The ISDN BRI was defined as a new digital, residential, and business local loop. The architects of this 128 Kbps of digital bandwidth chose to carve it into a pair of 64 Kbps channels, called Bearer (B) channels: either two for voice, two for data, or one for voice and one for data. Another 16 Kbps was used as an out-of-band signaling channel called the D channel, allowing full use of the 64 Kbps bearer channels. The combined total of an ISDN BRI 2B+D circuit is 144 Kbps. This results in the typical classification of ISDN BRI as a 2B+D technology: two bearer channels and one control-signaling channel. One of ISDN's prime features was that voice switch to handset control signaling was now performed out of band in the D channel, which allowed for new
features as well as faster data rates. For example, when a voice channel was not in use, that channel (through the control signaling of the D channel) could be bonded with the data channel, boosting the effective data transmission rate to 128 Kbps.

Because ISDN is digital end to end in the access link, ISDN BRIs must interface to ISDN-intelligent terminal adapters at the customer premises. The ISDN terminal adapter is sometimes an external device that terminates the ISDN line at the customer premises (often an ISDN-capable IP router), and more typically it is an integrated ISDN terminal adapter within an IP router or a voice switch. Some ISDN terminal adapters also adapt standard analog telephones to use the ISDN local loop service; otherwise, specific ISDN telephones are needed.

ISDN BRIs are often used for low-cost WAN connectivity, as a secondary WAN dial-backup or on-demand link, and as data on-demand circuits for Internet access. A common application is for multisite WAN locations to use dial-backup BRIs to call in to a central site ISDN PRI-enabled WAN router if there is a primary link WAN data communication problem (for example, in a financial institution). ISDN is typically priced at both flat rate and measured rate for local calling and incurs long-distance charges.
ISDN PRI

ISDN PRI is a channelized, digital, T1-like version of ISDN and is classified as a 23B+D facility: 23 B channels and 1 D channel. With ISDN PRI, each of the 23 bearer channels uses 64 Kbps as the line rate, and the D channel has a 64 Kbps control-signaling allocation. This totals 24 channels, which add up to 1,536,000 bps, the data rate of a DS1/T1 facility. In Europe, the PRI comprises 30 B channels plus one 64 Kbps D channel, for a total of 1,984,000 bps.

Here again, the PRI's D channel is an out-of-band control-signaling channel that enables various features for control of both voice calls and data. For example, multiple PRIs can be used between two voice PBXs, and one PRI's D channel can carry the control signaling for all of them, in essence common channel signaling. Called Non-Facility Associated Signaling (NFAS), this allows the second through the nth PRI to use all 24 of their channels as bearer channels.

An ISDN PRI is effectively an enhanced digital T1 facility but is much more flexible in its provisioning and usefulness with PBX systems. For example, prior to ISDN PRIs, a PBX would often need a separate T1 facility from the wireline provider to deliver inbound voice calls to the PBX, and another separate T1 to deliver outbound calls from the PBX. Usually, large companies would have large numbers of inbound and outbound T1s to meet the concurrent voice-carrying requirements of their business. ISDN PRI, however, is flexible enough to use for both inbound and outbound calling on its 23 (North America) or 30 (Europe) channels. Using ISDN PRIs as PBX interfaces can often save the business 30 to 45 percent of recurring costs compared with the legacy, directional T1s. ISDN PRI interfaces are also available in data equipment such as Cisco routers, and are often used as aggregation links for Internet service providers (ISPs) and for larger WAN use, as well as for ISDN BRI dial-backup. ISDN PRIs are effectively used by businesses worldwide.
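The BRI and PRI channel arithmetic can be summarized in a few lines. The sketch below restates the 2B+D and 23B+D (or 30B+D in Europe) structures from the text; it does not model Q.931 signaling or NFAS.

B_CHANNEL = 64_000     # bps per bearer channel
BRI_D = 16_000         # BRI out-of-band signaling channel
PRI_D = 64_000         # PRI out-of-band signaling channel

def bri_total():
    # 2B+D: two bearer channels plus the 16 Kbps D channel
    return 2 * B_CHANNEL + BRI_D

def pri_total(region="north_america"):
    # 23B+D in North America, 30B+D in Europe, each with a 64 Kbps D channel
    bearers = 23 if region == "north_america" else 30
    return bearers * B_CHANNEL + PRI_D

print("BRI total   :", bri_total())              # 144,000 bps (128 Kbps of bearer)
print("PRI (NA)    :", pri_total())              # 1,536,000 bps
print("PRI (Europe):", pri_total("europe"))      # 1,984,000 bps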
NOTE
Many ISDN providers consider speeds greater than T1 as Broadband ISDN (B-ISDN). From an Incumbent Local Exchange Carrier (ILEC) perspective, B-ISDN services would range from 25 Mbps up to Gbps speeds. The market for these B-ISDN links has not enjoyed the success of the more popular ISDN PRI due to many service substitutions that are now available in the broadband range, such as SONET/SDH-based OC-3s, OC-12s, OC-48s, and so on.
SS7

Call signaling is the transparent glue behind all switched voice and data call routing, setup, and feature interaction, and it requires a parallel network that is continuously connected and communicating. Signaling System 7 (SS7) is a digital, packet-based network that uses common channel signaling techniques to allow the worldwide PSTN to interoperate. SS7 is a worldwide architecture for supporting call establishment, call routing, billing, and information exchange for the PSTN. The primary architectural change with SS7 was the use of out-of-band digital signaling between provider network elements, in combination with a hierarchical routing and database structure.

SS7 was developed and deployed in the 1980s, about the time that ISDN was finishing its standards definition, and is sometimes considered in league with the ISDN effort. SS7 was chosen as the signaling architecture for ISDN, sharing similarities in its use of out-of-band signaling (ISDN's D channel). Synergy was anticipated: SS7 would provide more efficient call signaling in the provider PSTN backbone network while ISDN provided efficient call signaling in the customer access link. This could allow ISDN call features to extend end to end across the worldwide SS7-enabled PSTN.

Interestingly, the SS7 architecture is link-type agnostic; it can use any available physical carrier facility, such as a T1/E1, T3/E3, ISDN, or SONET/SDH OC-n/STM-n facility. Other benefits are that SS7 allows for signaling at higher speeds and for signaling features to occur during an in-process call, not just at the origination and teardown of a call. SS7 signaling can now be transported over IP networks through the use of the IETF standard called Signaling Transport (SIGTRAN). By using IP networks to carry SS7 traffic, providers have new options for cost savings and the ability to integrate various features between internetworking and telecommunications.
ISDN Challenges

Supporting ISDN in service provider voice switches required expensive software upgrades, on the order of millions of dollars for large voice switches like the Nortel DMS-100 or the Lucent 5ESS. The upgrade cost for ISDN support, combined with less-than-optimum port
scalability at the switch and the limitation of 128 Kbps for a BRI (too little bandwidth, too late), skewed the economics such that the business case for the up-front investment held serious doubts. Initially, Nortel and Lucent, North America's largest voice PBX vendors, implemented ISDN differently enough that interoperability was a problem, and an initial attempt at a simple, universal ISDN connectivity standard was flawed. The National ISDN 1 (NI-1) specification would later address that issue, and a more comprehensive standard (NI-2) was defined later still.

While both Europe and Japan have enjoyed more success with ISDN BRI for the residential local loop, the United States has not. ISDN placed more severe distance limitations on the length of the local loop, and the U.S. has more suburban sprawl than the business centers of Europe and Japan, so many U.S. residential areas couldn't be served by ISDN economically. Also, in the United States, adoption of ISDN by wireline providers was slow, with spotty coverage. This less-than-robust national commitment doomed ISDN's residential loop component to niche business applications, such as data bandwidth for small and large businesses, dial-backup facilities for WANs, and bonded channels for video conferencing systems.

Ultimately, ISDN forced digital data through the voice switch, which was not economically optimized for data transmission and switching. In addition, the ISDN local loop specified a maximum of 128 Kbps for data or for two simultaneous 64 Kbps voice calls, and this speed is considered narrowband, sub-broadband. Today's residential connections to the Internet demand low-cost, broadband connectivity. ISDN doesn't fit the residential business model, and its economic liability seriously distracted the wireline providers and delayed their entry into the residential broadband market.
Frame Relay

Frame Relay is a packet-switching, shared network protocol that allows for variable-length packets and employs statistical multiplexing techniques to optimize the sharing of network facilities by packets from multiple sources. This allows for more efficient and flexible data transmission and lowers costs compared to constructing networks from point-to-point dedicated circuits. Frame Relay was originally proposed to the CCITT in 1984, but rapid adoption didn't occur until after 1990. Frame Relay is standardized in the United States by the American National Standards Institute (ANSI) and internationally by the ITU-T, the successor to the CCITT.

For many years, much of the packet data transport technology, such as X.25, was error prone, requiring a large degree of intelligence, such as error correction and frame checking, at each and every node in the X.25 network provider path. At the time, the customer data end nodes had relatively little intelligence to perform these functions, relying on the service provider to guarantee acceptable levels of end-to-end data transmission. Additionally, much of the transport technology was analog-based at the time, so protocols such as X.25 were designed
to increase transport reliability through error detection, correction, and packet routing at every link and node throughout the network. This created OSI Layer 3, or network layer, overhead in the transport network, with a resultant effect on throughput and capacity. For example, a service provider X.25 node might have throughput delays of almost 200 milliseconds (ms).

Many enterprise customers, flush with IBM 3270 networks constructed of SNA/SDLC multidrop networks, were facing build-outs of IP-based WANs. Newer, more intelligent IP-based data devices for distributed computing became available, with requirements for peer-to-peer and client/server IP and other protocol networking. With much of the telecommunications infrastructure migrated to error-free, fiber-based digital services, the pristine quality of the transport layer eliminated the need to perform Layer 3 services in the provider network. By moving the error detection, correction, and packet routing decision logic back into the customer data equipment, the transport service could use OSI Layers 1 and 2 only. Since packets at Layer 2 are called frames, this became known as Frame Relay service, a connection-oriented, fast packet switching system. In a Frame Relay environment, the Layer 3 functions are handled at the customer end points. This allows Frame Relay networks to expedite packet switching (really frame switching) from customer source to customer destination, creating a fast packet service. Frame Relay circuits typically experience less delay, an improvement factor of eight to ten times compared to the X.25 networks of the day.

Previously, most data networks that didn't use X.25 would use point-to-point dedicated circuits, as point-to-point was the only choice. A small WAN with 24 end sites would require 24 point-to-point circuits, each terminating on 1 of 24 interface ports at a central site WAN router. Frame Relay is appealing in that you can create hub-and-spoke or point-to-multipoint networks with fewer physical transmission facilities through the use of logical virtual circuits. In addition, Frame Relay supports multicasting, used by various IP routing protocols as well as IP data applications. This reduces the number of router ports needed at a hub site. A single T1 circuit from a hub site might contain as many as 24 or more virtual circuits to remote customer locations, statistically multiplexed onto the one physical facility. The service provider's Frame Relay network aggregates the logical connections from the customer remote offices and presents them over a single, higher-speed physical circuit. Not only does this reduce router port requirements, it also reduces the number of point-to-point dedicated data circuits typically used in physical mesh networks, improving the price/performance of the customer's network. With several customers essentially sharing the provider's Frame Relay infrastructure, this has the effect of aggregating volume, allowing providers to pass infrastructure cost savings on to the customer. Faster transmission, lower cost, lower port requirements, and better reliability are fundamental factors in Frame Relay's popularity.
Frame Relay is typically offered in the following speeds:
• Fractional T1 (56 Kbps to 1.472 Mbps)
• T1 (1.536 Mbps)
• T3 (44.736 Mbps)
Typical Frame Relay applications are
• Private line replacement
• Peer-to-peer IP data applications
• IP Internet access
• Voice over Frame Relay and Voice over IP over Frame Relay
• Disaster recovery applications
• Layer 2 Tunneling Protocol V3 (L2TPv3) over Frame Relay
Frame Relay uses logical virtual circuits to create connection-oriented network communication. These virtual circuits come in two forms: the permanent virtual circuit (PVC) and the switched virtual circuit (SVC). Frame Relay PVCs are essentially nailed-up logical connections from customer site to customer site over the Frame Relay provider's backbone. Frame Relay SVCs are temporary connections, oriented to more sporadic data transfer. Initially, SVCs were not supported by many manufacturers of Frame Relay provider equipment, so SVC deployment was very sparse. More recently, customers of Frame Relay services have been requiring SVCs, as SVCs suit sporadic data transfers better and cost less than having dedicated PVCs established all the time. SVCs use the same signaling protocols as ISDN, so the call setup time is very quick, barely a few seconds.

Customers purchase and install Frame Relay end-site equipment manifested in routers, bridges, personal computers, or Frame Relay terminals. To the Frame Relay network, these connection points are referred to as data terminal equipment (DTE). The Frame Relay packet switches within the provider WAN networks are referred to as data communications equipment (DCE). A Frame Relay PVC is established across the network using data link connection identifiers (DLCIs). Since DLCIs are Layer 2 identifiers (conceptually, Media Access Control [MAC] addresses), it's necessary to configure/map the DLCIs to an interface Layer 3 IP address within the customer's DTE equipment. The DLCIs are agreed on by the Frame Relay provider and the customer; more typically, they're assigned by the provider. The DLCIs are configured in both the customer DTE and the provider's DCE, so that the provider can logically separate one customer's data traffic from other customers' traffic on the provider's shared network infrastructure. Although Figure 8-3 depicts a Frame Relay PVC configured from DLCI y to DLCI x, DLCIs are only locally significant to the particular DCE attachment. DLCIs can be the same on each end of this diagram and are usually numbered from 16 to 1007; the Frame Relay provider can still separate and deliver the traffic to the appropriate PVC.
Figure 8-3   Conceptual Frame Relay Network (customer DTE devices such as terminals, personal computers, and network hosts attach to the provider's DCE packet switches in the Frame Relay WAN; a PVC runs across the WAN between DLCI y and DLCI x. Source: Cisco Systems, Inc.)
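To illustrate the local significance of DLCIs described above, the hypothetical sketch below keeps a separate DLCI-to-next-hop mapping per DTE attachment, which is conceptually what a router's Frame Relay map table does. The interface names, DLCI numbers, and IP addresses are invented for the example.

# Per-attachment DLCI maps: because DLCIs are only locally significant,
# the same DLCI number can mean different PVCs on different attachments.
frame_relay_maps = {
    "branch-router:Serial0": {102: "10.1.1.1"},     # DLCI 102 -> hub next hop
    "hub-router:Serial0/0": {102: "10.1.1.2",       # DLCI 102 -> branch A
                             103: "10.1.1.3"},      # DLCI 103 -> branch B
}

def next_hop(attachment, dlci):
    # Resolve a locally significant DLCI to its Layer 3 next-hop address
    return frame_relay_maps[attachment].get(dlci, "unmapped DLCI")

print(next_hop("branch-router:Serial0", 102))   # 10.1.1.1
print(next_hop("hub-router:Serial0/0", 102))    # 10.1.1.2, same DLCI, different meaning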
Computer-based applications that can check for errors and, therefore, retransmit packets if they are lost are a prerequisite for Frame Relay. TCP/IP is a prime example. If TCP/IP packets are lost in a Frame Relay network, the receiving customer node notifies the sending customer node to retransmit, and the retransmission will likely succeed. For all intents and purposes, Frame Relay is really a best-effort packet delivery service that works well with networking protocols such as TCP/IP. As a result, customers can pay less for a best-effort, shared network infrastructure service.

To some extent, though, Frame Relay as a shared network infrastructure must emulate the dedicated private line environment that customers prefer. In a dedicated point-to-point environment, the onus is on the customer for data service delivery levels; the provider is only concerned with circuit availability and circuit error thresholding. For a shared network infrastructure such as Frame Relay, it is also necessary for service providers to employ congestion management mechanisms to establish service delivery guarantees for the customer. These mechanisms help the customer carry mission-critical traffic over Frame Relay networks while also allowing best-effort traffic to share the same facilities. The congestion control mechanisms are primarily as follows:
• Discard Eligibility indicator (DE bit)
• Committed information rate (CIR)
• Forward and backward explicit congestion notifications (FECNs and BECNs)
Discard Eligibility (DE) is a bit in the Frame Relay header that marks a frame as eligible to be discarded in the provider’s network, depending on how the provider chooses to police
the network's bandwidth resources. Some providers explicitly discard any frames with the DE bit set, but most providers allow DE-tagged frames to transit the network on a best-effort basis, and these frames will get through in the absence of congestion along the network path. Customers can even set the DE bit themselves to indicate which frames have lower priority than others; for example, an intranet database application might leave the DE bit clear, while casual Internet access traffic might have the DE bit set if it is judged to be the less important of the two.

The CIR is a bandwidth value agreed on between the provider and the customer. The CIR acts as a service-level agreement (SLA) indicating the amount of bandwidth that is guaranteed to flow over the customer's PVC. A typical arrangement might be a CIR of 256 Kbps on a Frame Relay PVC with a circuit size of 512 Kbps. This would mean that up to 50 percent of the customer's PVC bandwidth is guaranteed to traverse the provider's Frame Relay backbone network. If the customer transmits at 512 Kbps for an extended duration, for example during a file transfer, then only 256 Kbps is guaranteed, and any frames exceeding the 256 Kbps data-rate threshold are marked with the DE bit. The current configuration and utilization of the Frame Relay provider's DCE devices then determine whether the DE-tagged frames are actually discarded or allowed to pass through the network on a best-effort basis.

FECN and BECN are congestion notification bits that are set in the Frame Relay header by the provider's DCE switches. If a provider's DCE experiences congestion along a PVC path, the FECN bit is set within the Frame Relay header of frames traveling that path. When such a frame is received by the customer's destination DTE device, the DTE knows that congestion was experienced somewhere between the source DTE and this destination DTE, allowing it to implement flow control as necessary or to ignore the indication (and rely on Layer 3/4 retransmission if packets were lost). BECN operates in the reverse direction of the PVC that set a FECN bit. The best way to explain it is from the viewpoint of the same customer DTE: a DTE sourcing frames into the Frame Relay network knows nothing about FECN, because it is the provider DCE that sets FECN if congestion is experienced along the path to the destination DTE. A received FECN bit in a frame notifies the destination DTE that congestion was experienced along the path. BECN, by contrast, is sent toward the sourcing DTE's receive line by the attached provider DCE when the DCE needs to indicate that there is congestion along the PVC path in the provider's network. A sourcing DTE will consider the BECN bit and either exercise flow control (diminish its frame sending rate) or ignore the BECN, send frames at the current rate, and hope for the best. Think of FECN and BECN in a unidirectional data transfer from the perspective of the sourcing DTE: FECN is congestion notification forwarded into the network ahead of the traffic, and BECN is congestion notification sent back from the network to the sourcing DTE (backward notification).
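A simplified policer shows how the CIR example above plays out over one second of traffic: offered load up to the CIR passes as committed, anything between the CIR and the access rate is tagged discard eligible, and a congested switch drops the DE-tagged portion first. This is a conceptual sketch, not how any particular provider's DCE implements policing.

CIR = 256_000          # committed information rate, bps
ACCESS_RATE = 512_000  # PVC access circuit size, bps

def police_one_second(offered_bps):
    # Split one second of offered traffic into committed and DE-tagged portions
    admitted = min(offered_bps, ACCESS_RATE)
    committed = min(admitted, CIR)
    de_tagged = admitted - committed
    return committed, de_tagged

def delivered(committed, de_tagged, congested):
    # A congested DCE forwards committed traffic and drops DE frames first
    return committed if congested else committed + de_tagged

for load in (200_000, 512_000):
    c, de = police_one_second(load)
    print(f"offered {load:,} bps -> committed {c:,}, DE-tagged {de:,}, "
          f"delivered when congested {delivered(c, de, True):,}")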
From the customer perspective, Frame Relay is typically considered a narrowband service, as customers usually elect fractional or full T1 Frame Relay service. That is because Frame Relay is normally chosen to remove the customer expense of networking multiple sites together. These sites are usually small branch locations with a few users, requiring a T1 or less of bandwidth to deliver a suitable IP WAN application environment. Obviously, some customer applications might require Frame Relay T1/E1s for more broadband-based applications.

In summary, Frame Relay is a physical Layer 1 and data link Layer 2 networking service that allows for connection-oriented data communications (frame switching) through a provider's shared network infrastructure on both a best-effort and a guaranteed (CIR) data delivery basis. Frame Relay enables point-to-multipoint network designs in addition to point-to-point designs and saves customers money through the use of shared networking resources. The provider's core network can use fewer physical circuits and resources, as providers can multiplex multiple customers onto the same core link facilities using an oversubscription methodology. For the provider, Frame Relay is a step up the OSI stack from just providing physical transport at Layer 1 and truly can be called a service offering.
Narrowband Aggregation Layer and Digital Loop Carriers

The wireline telecommunications providers have used narrowband aggregation systems for many years. There are simply too many twisted-pair wireline connections from subscriber residences to terminate each and every one on voice and data equipment back in the provider's CO. Doing so would also have a negative impact on the cable binder groups needed to carry the traffic into the network and on the required narrowband port capacity of a Class 5 voice switch. The solution is to aggregate subscriber traffic through digital loop carrier (DLC) technology situated closer to residential neighborhoods and business parks.

DLC systems were originally deployed where subscribers were too distant from a CO to be served with acceptable quality over the local loop. In fact, of the approximately 199 million copper access lines in the United States, only about 144 million are served from a local CO, whereas about 55 million exceed 18,000 feet and require the services of a DLC. Distributing DLC systems closer to subscribers allowed the subscriber local loop to be terminated at shorter distances, where aggregation of traffic would occur. An example of an early DLC is a remote subscriber terminal, which could terminate 96 lines of analog local loops from subscriber residences and then aggregate and feed the CO via a T1 carrier connection. For example, 24 homes served by 96 wire pairs could be terminated at a DLC, which then uses four wire pairs running either T1 or High Data Rate DSL (HDSL) to carry the traffic to the CO. Aggregating 96 wire pairs of traffic into 4 wire pairs toward the CO achieves the desired result of a narrowband aggregation system. DLCs are typically deployed at pedestals, SLICs, controlled environment vaults, telephone poles, and even in office buildings.

Next-generation DLCs appeared in the 1980s, targeting thousands of subscribers, using more digital technology, and paving the way for the delivery of ISDN to the residence.
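The aggregation example above reduces to two simple figures: an 18,000-foot rule of thumb for CO reach and a 96-to-4 pair concentration toward the CO. The sketch below restates them; the loop lengths used in the printout are arbitrary examples.

CO_REACH_FT = 18_000         # loops longer than this typically need a DLC
SUBSCRIBER_PAIRS = 96        # pairs terminated at the example DLC
FEEDER_PAIRS = 4             # T1 or HDSL pairs back toward the CO

def serving_point(loop_length_ft):
    # Decide where a loop terminates, per the rule of thumb above
    return "central office" if loop_length_ft <= CO_REACH_FT else "digital loop carrier"

print("12,000 ft loop ->", serving_point(12_000))
print("26,000 ft loop ->", serving_point(26_000))
print("concentration toward the CO:", SUBSCRIBER_PAIRS // FEEDER_PAIRS, "to 1")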
These DLCs contained the architecture to support more varieties of connections on the subscriber side, such as switched 56 Kbps DDS, T1s, and ISDN BRI and PRI, in addition to analog POTS and other special variations. DLCs are also necessary to accommodate an increasing number of different customer interfaces and to combine them into an increasing number of trunking technologies upstream toward the provider. Typical DLCs might support the following downstream, customer-facing interfaces:
• Analog POTS (for example, telephony to a residence)
• DS0 data and V.35 serial data interfaces (for example, to a branch bank)
• DS1/E1s (for example, to a small business for voice and data)
• ISDN BRI/PRI (for example, residences or businesses using ISDN)
• Fractional T1 data or subrate data (for example, a bank branch using Frame Relay)
• Hybrid fiber coaxial (HFC) (for example, supplying cable TV to a residence)
• Digital Subscriber Line (xDSL) (for example, supplying xDSL connectivity to a residence or small business)
• Video (for example, supplying a specific video connection to a TV broadcasting station)
Examples of DLC upstream network-facing interfaces are
• Multiple T1/E1s (nxT1/E1) (for example, necessary to pool bandwidth to aggregate multiple downstream connections)
• Optical interfaces such as OC-3/STM-1 (for example, allow DLCs to be distributed over longer distances)
• HFC and analog services (for example, often used to connect cable providers upstream to a cable head-end device)
Upstream to the network, these DLCs would aggregate and connect to the provider's CO over high-speed copper facilities carrying multiple T1/E1s, and over optical fiber facilities carrying OC-1s and OC-3/STM-1s. While the market for ISDN to every subscriber never materialized, the advanced digital architecture of these DLCs was easily leveraged to support new subscriber connection types. By the 1990s, new models of these DLCs supported more services, such as xDSL, HFC, and other video interfaces. Figure 8-4 shows a conceptual design of a next-generation digital loop carrier.
Figure 8-4   Next-Generation Digital Loop Carrier (upstream to the network central office: nxT1/E1, optical OC-3/STM-1, HFC, and analog; downstream to subscribers: analog POTS & PL, DS-0/V.35, DS-1/E-1, ISDN BRI/PRI, subrate data, HFC/xDSL, and video)
DLCs in the new century might look more and more like the previous decade's core network technology. Optical rings will be pushed farther into the local loop, terminating on more advanced DLCs in order to offer multiple services for legacy narrowband subscribers, but with a special emphasis on new broadband subscribers. Multiple architectures are necessary in these DLCs, because they will continue to support TDM applications, cell-based bus applications, and statistical multiplexing methods for IP-based traffic. New DLCs are based on modularity and flexibility to provide digital data and video solutions to all types of wireline providers, whether telecommunications companies, cable providers, or otherwise.

Urban sprawl will continue to push requirements on DLC technology to support more services and higher speeds downstream to the subscriber. Upstream to the network, more options will be required for fiber connectivity to synchronous optical network (SONET) and Resilient Packet Ring (RPR) rings. This enhances broadband service delivery. With fiber closer to the user, higher-speed copper-based services such as Very High Data Rate DSL (VDSL) can reach subscribers over shorter copper wire distances. DLC equipment might contain technology similar to optical network unit (ONU) equipment. Overall, manufacturers need to consider moving the DLC architecture up the protocol stack beyond Layers 1 and 2, perhaps to Layer 3, IP-based technology. Manufacturers will no doubt be looking at the applicability of adding routing intelligence to the DLC technology space.
Broadband—Pushing Technology to the Edge

The narrowband residential loop is now being leveraged with new technologies such as xDSL, cable modem, and Ethernet to deliver broadband services to residences and small businesses for the purpose of creating more advanced services with higher margins, thereby
increasing revenues. With so much emphasis on new opportunities in the residential market, new innovations will continue to alter the loops from analog to digital technology, pushing the limits of copper loop technology faster and ever farther.
DSL

Digital Subscriber Line (DSL) is an internationally popular family of broadband connections that use the twisted-pair copper infrastructures of telephone providers. The DSL Forum monitors DSL installations around the world, and a 2005 report showed more than 100 million DSL users internationally. The DSL Forum has set a goal of 500 million DSL subscribers by the year 2010.2

DSL began as the Telco's response to the cable operator's high-speed data service over cable. There are many different variations of DSL technology, which is why it's customary to precede the acronym with an x, as in xDSL, or just to refer to the generic market as DSL. By providing a technique to carry voice and data services over the same twisted pair of wires, DSL has allowed incumbent local exchange providers to participate in the residential and small business broadband game. DSL is offered by telephone companies around the world and will likely outpace broadband cable deployment by two to one on an international basis.
NOTE
Customers typically don’t buy DSL or cable modem access; they buy the services that are deliverable over such technologies.
DSL equipment is a digital modem technology that uses existing twisted-pair telephone lines to transport high-bandwidth data, such as multimedia and video, in addition to voice services to subscribers. DSL services are dedicated, point-to-point, public network access over twisted-pair copper wire on the local loop between a service provider's CO and the customer site, or on local loops created either intrabuilding or intracampus. DSL Layer 1 technology is coupled with Layer 2 ATM and Layer 3 IP, and increasingly uses Ethernet at Layer 2 instead of ATM. These are the common protocol building blocks with which to design DSL access networks for service providers. When DSL uses ATM, it depends on ATM Layer 2 switching beyond the DSL network itself and relies on TCP/IP for routing between networks and into the public Internet.

DSL data is not passed through expensive telephone switches at the service provider's CO. Instead, the data portion of a DSL line is aggregated into DSL termination equipment called a Digital Subscriber Line Access Multiplexer (DSLAM) in the CO, and then aggregated via a broadband remote access server (BRAS) for routing through the IP network infrastructure and on to ISPs. While the analog telephony portion of the DSL line continues to be
separated (using splitters or microfilters) and delivered to the CO telephone switch, less expensive data communications equipment supports the larger volume of aggregate data traffic from this residential broadband data service.

DSL modems are rather complex and try to accommodate a patchwork of different copper lines by checking up to 256 different frequency pathways before deciding on the best frequency paths to use within the twisted pair. Since DSL uses frequencies up to just above 1 MHz, signal attenuation increases with the higher frequencies, placing limitations on DSL coverage. This limits DSL distance and speed. DSL is only able to reach about 60 percent of the market without deploying DSLAM technology into the digital loop carrier domain.

Back at the CO, or in a DLC cabinet, SLIC, or other provider point of presence, the service provider uses a DSLAM to aggregate the xDSL connections from the neighborhoods and businesses. The DSLAM is usually a device that aggregates individual DSL lines at Layer 1. The DSLAM contains cards, often referred to as DSL modems, on which to terminate the subscriber-side DSL modems. These DSLAM modem cards come in various port densities, as each DSL remote subscriber is connected to one port of the DSLAM's modem cards. DSL uses the concept of modems because DSL signaling is essentially a conversion of electrical signals to sound tones, although these tones are in the inaudible range. The DSLAM is responsible for converting these frequency sound waves into electrical or optical signals that are delivered upstream toward the provider's core network and service platforms. More on the DSLAM is covered at the end of this section under "DSLAM Broadband Aggregation Layer."

Collectively and generally, the various DSL technologies are referred to as xDSL. Each type addresses a particular requirement for bandwidth symmetry, speed, and distance. DSL is drawing significant attention from implementers and service providers because it promises to deliver high-bandwidth data rates to dispersed locations with relatively small changes to the existing Telco local loop infrastructure. DSL is broadly divided into asymmetric and symmetric categories. The following list introduces several of the current variations of DSL technology:
• Asymmetric DSL (ADSL)—It is called "asymmetric" because the download speed is greater than the upload speed. ADSL works this way because most Internet users look at, or download, much more information than they send, or upload. Another version of ADSL is Rate Adaptive DSL (RADSL). This is a popular variation of ADSL that allows the modem to adjust the speed of the connection depending on the length and quality of the line.

• Asymmetric DSL-Lite (G.Lite)—Often referred to as splitterless ADSL, this technology attempts to provide ADSL capabilities at reduced transfer rates, while still supporting analog telephony. G.Lite is often priced less to the customer using marketing terms like "DSL Lite."

• High Data Rate DSL (HDSL)—Providing transfer rates comparable to a T1 line (about 1.5 Mbps), HDSL receives and sends data at the same speed, but it requires two subscriber lines that are separate from a normal subscriber line. Four-wire HDSL is often used within service providers as an alternate form of T1 data delivery for businesses and high-end data subscribers. Sometimes referred to as repeaterless T1, HDSL saves providers both time and costs in provisioning T1 lines.

• High Data Rate DSL-2 (HDSL2)—HDSL2 uses a more aggressive modulation technique called pulse amplitude modulation 16 (PAM-16), which accomplishes up to 2 Mbps of symmetrical bandwidth on a single pair of wires. Two-wire HDSL now becomes applicable to residential users with requirements for symmetrical bandwidth.

• ISDN DSL (IDSL)—Geared primarily toward existing users of ISDN, IDSL is slower than most other forms of DSL, operating at a fixed rate of 144 Kbps in both directions. The advantage for ISDN customers is that they can use their existing equipment, but the actual speed gain is typically only 16 Kbps (ISDN runs at 128 Kbps).

• Symmetric DSL (SDSL)—Like HDSL, this version receives and sends data at the same speed. While early SDSL modems also require a separate line from your phone, the high data rate is accomplished through the use of a single line instead of the two lines needed by HDSL. SDSL is also the official designation of the European Telecommunication Standards Institute (ETSI) effort to develop a standard based on HDSL2, but is expected to be rate adaptable up to 2 Mbps while providing voice and ISDN services without the use of analog splitters.

• Multirate Symmetric DSL (MSDSL)—This is a symmetric DSL that is capable of more than one transfer rate. The service provider sets the transfer rate, typically with prorated pricing based on the service level.

• Very High Data Rate DSL (VDSL)—An extremely fast connection, VDSL includes an asymmetric and a symmetric version. VDSL achieves high rates by supporting a shorter distance specification for standard copper phone wiring. However, the maximum speed of VDSL technology will likely drive the technology to prominence within service provider networks.

• SHDSL—An ITU standard developed to replace or enhance many existing DSL technologies and transport options into one standard for better interoperability and manufacturing support. SHDSL is a symmetric service that supports multiple data rates from 192 Kbps to 2.3 Mbps. Other benefits are a 30 percent farther reach than SDSL, support for both IP and ATM, and spectral compatibility with ADSL. Mainly used for business-class users, SHDSL is expected to replace many of the proprietary SDSL implementations.
Table 8-2 provides a brief comparison of the various DSL technologies. Transmit and receive speeds are affected by distance and quality of the subscriber line, so the table attempts to list the theoretical maximums. Also, interoperability of subscriber DSL
modems with the DSLAM can affect optimum results. For example, mixing vendor A's remote DSL modem with vendor B's DSLAM might impair full functionality due to minor inconsistencies between the vendors' choices or versions of DSL chip sets within the DSL modems.
Table 8-2   Comparing DSL Technologies

DSL Technology       Maximum Transmit      Maximum Receive       Maximum Distance        Number     Analog Telephony
                     Speed (End User)      Speed (End User)                              of Lines   Support
ADSL                 800 Kbps              8 Mbps                18,000 ft or 5500 m     1          Yes
ADSL-Lite (G.Lite)   512 Kbps              1.54 Mbps             18,000 ft or 5500 m     1          Yes
RADSL                1 Mbps                7 Mbps                18,000 ft or 5500 m     1          Yes
HDSL                 1.54 to 2 Mbps        1.54 to 2 Mbps        12,000 ft or 3650 m     2          No
HDSL2                1.54 to 2 Mbps        1.54 to 2 Mbps        12,000 ft or 3650 m     1          No
SHDSL                192 Kbps to 2.3 Mbps  192 Kbps to 2.3 Mbps  25,000 ft or 7500 m     1          No
VDSL                 16 Mbps               52 Mbps               4000 ft or 1200 m       1          Yes
VDSL2                100 Mbps              100 Mbps              500 ft or less          1          Yes
SDSL                 2.3 Mbps              2.3 Mbps              22,000 ft or 6700 m     1          No
MSDSL                2 Mbps                2 Mbps                29,000 ft or 8800 m     1          No
IDSL                 144 Kbps              144 Kbps              35,000 ft or 10,700 m   1          No
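The distance column of Table 8-2 lends itself to a quick feasibility check. The sketch below transcribes those maximum-reach figures (theoretical maximums, as the table notes) and lists which DSL variants could, in principle, serve a loop of a given length.

# Maximum reach in feet, transcribed from Table 8-2 (theoretical maximums)
MAX_REACH_FT = {
    "ADSL": 18_000, "ADSL-Lite (G.Lite)": 18_000, "RADSL": 18_000,
    "HDSL": 12_000, "HDSL2": 12_000, "SHDSL": 25_000,
    "VDSL": 4_000, "VDSL2": 500, "SDSL": 22_000,
    "MSDSL": 29_000, "IDSL": 35_000,
}

def candidates(loop_length_ft):
    # Return the variants whose maximum reach covers the given loop length
    return sorted(t for t, reach in MAX_REACH_FT.items() if loop_length_ft <= reach)

print(candidates(3_000))     # a short loop qualifies for everything except VDSL2
print(candidates(17_000))    # a long loop is limited to the longer-reach variants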
As seen from the previous list and table, there are many varieties of DSL targeted at different requirements. The varieties exist because there is a definite segmentation of customer requirements, speeds, and willingness to pay. Asymmetric DSL varieties tend to target Internet use and residential pricing markets. Symmetric DSL tends to focus on the business market, where performance similar to dedicated point-to-point circuits or support for high-speed multimedia is required. The development and deployment of ADSL, SHDSL, and VDSL technologies and architectures is currently the primary focus in xDSL.
ADSL

In the residential market, ADSL is one of the most popular of the bevy of xDSL technologies. ADSL allows for the concurrent addition of broadband digital data frequencies to the legacy 400 to 3400 Hz band used for residential voice service.
ADSL technology is asymmetric in that it allows more bandwidth downstream (from the service provider's central office to the customer site) than upstream (from the subscriber to the CO). This asymmetry, combined with always-on access that virtually eliminates call setup, makes ADSL ideal for Internet/intranet surfing, video on demand, and remote LAN access. Users of these applications typically download much more information than they send. In this section, you further examine ADSL topics such as ADSL modem technology, multiplexing standards, POTS filtering, ADSL data rates and distance limitations, and ADSL service considerations.
ADSL Modem Technology

DSL modems are rather complex devices that try to accommodate a patchwork of different copper lines by checking up to 256 different frequency pathways before deciding on the best frequency path to use within the twisted-pair wire. DSL has distance and speed limitations, and many types of DSL modem technologies attempt to help with reaching the largest percentage of the market. Since virtually all twisted-pair wires are in larger cable bundles, noise and cross talk between wire pairs are common. DSL modems, at a high level, are really noise managers, determining which transmission frequencies are the most noise-free in a particular pair of wires and adapting the modem to communicate over the most noise-free channels that deliver the desired customer data rates.

ADSL modems are present on each end of the customer local loop. To get the most bandwidth out of the copper pair, the ADSL modem's technology must have the intelligence to divide up the wire's frequency range from about 0.4 MHz to 1.1 MHz. Most ADSL modems create 256 subchannels of about 4.3 kHz each within this spectrum range, placing downstream and upstream data and control information within each, depending on which of the channels are usable within that wire pair. ADSL modems determine this by performing a training sequence on boot-up, sending a known data stream down the subchannels and comparing the results to determine which subchannels have the best signal-to-noise ratio (SNR) and, therefore, which will be assigned to carry more bits per subchannel than others. In this way, ADSL modems adapt to different line conditions.

To control and manage all of these subchannels of frequency-based bandwidth, the ADSL modem needs digital signal processors, transceivers, multiplexers and demultiplexers, analog-to-digital (A/D) converters for analog POTS support, and surrounding semiconductor intelligence. The heart of the ADSL system is the digital signal processor, often called a network processor, which is responsible for managing the complex processing of subchannel information. A transceiver is a combined transmitter and receiver with which to place data signals onto and receive data signals from the copper wires. The demultiplexers help to move data into the appropriate channels for creating various downstream data rates before sending these to the transmitter part of the transceiver. This effectively packages data into these predetermined channels for sending over the wire. At the remote end, the receiver
part of the remote ADSL modem’s transceiver takes the data from the individual channels, feeding a network processor and multiplexer to stitch the data together again before presenting it to the customer Ethernet or USB interface. This juggling of up to 256 frequency channels, multiplexing, demultiplexing, and A/D conversion (for accommodating analog POTS on a digital service) makes ADSL modems the workhorses of twisted-pair copper communications.
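To make the training and bit-loading behavior described above more concrete, the following Python sketch assigns a bit load to each DMT subchannel from a measured signal-to-noise ratio and sums the result into an aggregate downstream rate. The SNR thresholds, the per-bin symbol rate, and the flat SNR profile are simplified assumptions for illustration and do not reflect any particular ADSL chipset.

# Illustrative sketch of DMT-style bit loading (simplified assumptions)
SUBCHANNELS = 256          # DMT bins across roughly 0 to 1.104 MHz
SYMBOL_RATE = 4000         # symbols per second per bin (approximate)

def bits_for_snr(snr_db):
    """Map a per-bin SNR to a bit load (hypothetical thresholds)."""
    if snr_db < 9:
        return 0           # bin too noisy, leave it unused
    # Rough rule of thumb: roughly 3 dB of SNR per additional bit
    return min(15, int((snr_db - 9) / 3) + 2)

def downstream_rate(snr_per_bin):
    """Sum the bit loads of all usable bins into an aggregate rate (bps)."""
    return sum(bits_for_snr(snr) * SYMBOL_RATE for snr in snr_per_bin)

# Example: a clean loop where every bin supports 8 bits per symbol
snr_profile = [27] * SUBCHANNELS
print(downstream_rate(snr_profile) / 1e6, "Mbps")   # roughly 8 Mbps

A noisier or longer loop simply yields lower per-bin SNR values, and the same calculation produces a lower aggregate rate, which mirrors how trained ADSL rates fall with distance.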
ADSL Multiplexing Standards
Within ADSL there are two competing and incompatible multiplexing standards—carrierless amplitude/phase (CAP) and discrete multitone (DMT). The CAP system was used on many of the early installations of ADSL. Figure 8-5 shows the frequency design layout for CAP.
Figure 8-5 Frequency Design Layout for Carrierless Amplitude/Phase (CAP): analog voice in 0–4 kHz, the upstream channel in 25–160 kHz, and the downstream channel in 240 kHz to 1.5 MHz
CAP operates by separating the signals on the twisted-pair facility into three distinct bands. Voice conversations are carried in the 0 to 4 kHz band, as they are in typical telephony circuits. The upstream channel (from the user back to the service provider) is carried in a band between 25 and 160 kHz. The downstream channel (from the service provider to the user) begins at 240 kHz and covers up to about 1.5 MHz, depending on a number of line conditions. This system, with the voice, inbound data, and outbound data channels widely separated, minimizes the possibility of interference between the channels on one line, or between the signals on different lines. DMT is the official ANSI standard for ADSL, and the ITU-T standardizes DMT as G.DMT. DMT is the variety of DSL multiplexing most prevalent today. ANSI also created a second issue of the North American DMT standard called DMT2. The DMT2 version of ADSL also divides signals into separate channels but accomplishes this differently. DMT2 divides the data into 256 separate channels, each of which is 4.3125 kHz wide. These channels are called bins or carriers and cover the range of frequencies between 0 Hz and about 1.104 MHz. Each of these DMT carriers operates at a bandwidth capacity of about 32 Kbps. Multiple carriers/bins are employed and multiplexed together to “train” ADSL modems up to the highest bandwidth rate possible in 32 Kbps increments,
given supporting copper line conditions. If impairments are encountered for a particular carrier or set of carriers, the data is shifted to the nearest adjacent carrier frequencies that can get the data through. In this sense, the ADSL system is like a frequency scanner. Each channel is monitored and, if the quality is too impaired, the signal is shifted to another channel. This system constantly shifts signals between different channels, searching for the best channels for transmission and reception. In addition, some of the lower channels (those starting at about 8 kHz) are used as bidirectional channels, for upstream and downstream management information. Monitoring and sorting out the information on the bidirectional channels and keeping up with the quality of all 256 possible channels makes DMT more complex to implement than CAP, but gives it more flexibility on lines of differing quality. Figure 8-6 shows the frequency design for DMT.
Figure 8-6 Frequency Design for Discrete Multitone (DMT): analog voice plus 247 of the 256 total channels (about 4 kHz each) carrying data
ADSL Filter
If you have ADSL installed, you were likely given small microfilters to attach to the telephone outlets that don’t provide the signal to your ADSL modem. These filters are low-pass filters—simple filters that block all signals above a certain frequency. Because all voice conversations take place below 4 kHz, the low-pass (LP) microfilters are built to block everything above 4 kHz, preventing the data signals from interfering with standard telephone calls. Even if the ADSL equipment were to fail on a given subscriber line, the low-pass filter guarantees uninterrupted basic telephone service. Figure 8-7 shows a conceptual view of how a low-pass filter passes and rejects frequencies.
Figure 8-7 Low-Pass Filtering Used with Analog Telephones: analog voice from 0–4 kHz is passed, while higher frequencies from 4 kHz to 1.5 MHz are rejected
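As a rough illustration of the low-pass principle, the following sketch computes the cutoff of a single-pole RC filter and the approximate attenuation it applies at voice and ADSL frequencies. The resistor and capacitor values are arbitrary examples chosen to land near a 4 kHz cutoff; real microfilters are typically higher-order passive designs.

import math

# Single-pole RC low-pass filter: cutoff f_c = 1 / (2 * pi * R * C)
R = 4000        # ohms (illustrative value)
C = 10e-9       # farads (10 nF, illustrative value)

cutoff_hz = 1 / (2 * math.pi * R * C)

def attenuation_db(freq_hz):
    """Approximate attenuation of a first-order low-pass at freq_hz."""
    ratio = freq_hz / cutoff_hz
    return 10 * math.log10(1 + ratio ** 2)

print(f"cutoff ~ {cutoff_hz:.0f} Hz")                      # about 4 kHz
print(f"voice at 1 kHz:  {attenuation_db(1e3):.1f} dB down")
print(f"ADSL at 100 kHz: {attenuation_db(1e5):.1f} dB down")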
ADSL Data Rates
ADSL modems provide data rates consistent with North American T1 1.544 Mbps and European E1 2.048 Mbps digital hierarchies and can be purchased with various speed ranges and capabilities. The minimum configuration provides 1.5 or 2.0 Mbps downstream and a 16 Kbps duplex channel; others provide rates of 6.1 Mbps and 64 Kbps duplex. ADSL products with downstream rates up to 8 Mbps and duplex rates up to 640 Kbps are available today. DSL providers will implement their services differently depending on copper plant, DSLAMs, and other considerations, so these theoretical speeds might vary in practical service offerings. ADSL modems accommodate Asynchronous Transfer Mode (ATM) transport with variable rates and compensation for ATM overhead, as well as IP protocols. The following shows typical ADSL (up to 6 Mbps) speeds for combinations of downstream bearer channels (toward the subscriber) and the upstream channels (duplex bearer channels):
• Downstream bearer channels using n x 1.536 Mbps multiplexing:
— 1.536 Mbps
— 3.072 Mbps
— 4.608 Mbps
— 6.144 Mbps
• Downstream bearer channels using n x 2.048 Mbps multiplexing:
— 2.048 Mbps
— 4.096 Mbps
• Upstream bearer channels:
— 16 Kbps (C channel)
— 64 Kbps (C channel)
— 160 Kbps (optional)
— 384 Kbps (optional)
— 544 Kbps (optional)
— 576 Kbps (optional)
— 640 Kbps (combination of a 64-Kbps C channel and a 576-Kbps optional channel)
As shown above, an ADSL downstream speed of 6.144 Mbps with an upstream speed of 640 Kbps is theoretically possible. The upstream speed of 640 Kbps is achieved by combining the 64-Kbps C channel with the 576-Kbps optional channel. As ADSL technology advances, different modulation techniques will increase the bits per channel so that higher speeds are achievable.
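The channel arithmetic above is easy to verify. The short sketch below, using only the multiples listed in this section, enumerates the downstream bearer rates and confirms the 640 Kbps upstream combination; it is bookkeeping for illustration, not a model of how a provider actually provisions bearer channels.

# Downstream bearer channels are multiples of 1536 or 2048 Kbps
downstream_1536 = [n * 1536 for n in (1, 2, 3, 4)]   # Kbps
downstream_2048 = [n * 2048 for n in (1, 2)]         # Kbps

# Upstream combination: a C channel plus an optional channel
c_channel = 64
optional = 576

print(downstream_1536)                         # [1536, 3072, 4608, 6144]
print(downstream_2048)                         # [2048, 4096]
print(c_channel + optional, "Kbps upstream")   # 640 Kbps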
Distance Limitations
All varieties of DSL are distance sensitive. As the length of the twisted-pair facility increases, the signal quality and connection speed decrease. ADSL service, for example, has a maximum distance of 18,000 feet (5460 meters) between the DSL modem and the DSLAM, although for speed and quality of service reasons, many providers establish an even lower limit on the ADSL maximum distance. At the upper extreme of the distance limit, ADSL customers might experience speeds far below the promised maximum data rate, whereas customers nearer the CO or DSL termination point might experience speeds approaching the maximum data rate. In practice, providers usually segment their DSL offerings into X downstream/Y upstream classifications to meet the various needs of users. Purchasing a particular DSL offering usually determines the user’s maximum achievable rates. Many DSL service providers deploy remote terminal DSL solutions when a critical mass of DSL subscribers must be reached beyond the distance limitations from the CO. The remote terminal DSL equipment becomes part of the digital loop carrier systems and has the effect of shortening the copper loop length between the DSL subscriber and the remote terminal DSL equipment. Downstream data rates depend on a number of factors, including the length of the copper line, its wire gauge, the presence of bridge taps, and cross-coupled interference. Line attenuation increases with line length and frequency and decreases as wire diameter increases. Ignoring bridge taps, ADSL performs as shown in Table 8-3.
Table 8-3 ADSL Physical Media Performance

Data Rate (Mbps) | Wire Gauge (AWG) | Distance (ft) | Wire Size (mm) | Distance (km)
1.5 or 2         | 24               | 18,000        | 0.5            | 5.5
1.5 or 2         | 26               | 15,000        | 0.4            | 4.6
6.1              | 24               | 12,000        | 0.5            | 3.7
6.1              | 26               | 9000          | 0.4            | 2.7

Source: Cisco Systems, Inc.
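One practical way to use Table 8-3 is as a simple reach check. The sketch below encodes the table rows (representing the 1.5 or 2 Mbps rows as 2.0) and reports the highest rate whose distance limit covers a given loop length; the function name and the no-interpolation lookup are illustrative simplifications.

# Rows from Table 8-3: (data rate Mbps, wire gauge AWG, max distance ft)
ADSL_REACH = [
    (6.1, 24, 12000),
    (6.1, 26, 9000),
    (2.0, 24, 18000),
    (2.0, 26, 15000),
]

def best_rate(gauge_awg, loop_ft):
    """Return the highest table rate reachable at this gauge and distance."""
    rates = [rate for rate, gauge, max_ft in ADSL_REACH
             if gauge == gauge_awg and loop_ft <= max_ft]
    return max(rates) if rates else None

print(best_rate(24, 11000))   # 6.1 Mbps
print(best_rate(26, 14000))   # 2.0 Mbps
print(best_rate(26, 16000))   # None -- beyond the table's reach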
Selecting DSL Service
Selecting DSL service works differently than selecting dial-up service. In most cases, a dial-up service provider will only give you one connection speed option. Any variation in connection speed depends mainly on the customer’s modem and its capabilities. An xDSL service provider might offer multiple service options in order to meet the needs of different types of customers. For example, there might be nine or more different speed tiers, with maximum downstream connection speeds ranging from 384 Kbps to upwards of 8.0 Mbps.
Again, the DSL provider usually determines which speed classifications will be supported by the majority of its copper plant and will typically offer just a few bandwidth selections. DSL service is a dedicated point-to-point service from the provider to the user. This contrasts with cable modem high-speed data offerings, because the cable plant is installed in a tree structure with a neighborhood sharing some of the frequency space. DSL service is often marketed as a more secure service, but secure is a relative term. You might wonder why distance is a limitation for DSL but not for voice telephone calls. The answer lies in small induction devices, called loading coils, which the telephone provider uses to extend the reach of voice signals. These loading coils are incompatible with DSL signals, because they disrupt the integrity of the higher-frequency data. This means that if there is a loading coil in the loop between your telephone and the telephone company’s CO, you cannot receive DSL service. The piecemeal types of wiring facilities, amplification equipment, and diversity of distances from a CO are what largely affect DSL’s applicability to all residential and business users.
ADSL2 and ADSL2+
With ADSL’s popularity has come continued research and development to extend the life of ADSL technology. Approved in early 2003, the ITU standards G.992.3/4 define ADSL2, and the ITU G.992.5 standard supplies specifications for ADSL2+. The newer standards improve on the original ADSL by offering higher downstream data rates and longer reach. ADSL2 increases downstream data rates to more than 12 Mbps while extending reach by about another 600 feet. ADSL2+ doubles the ADSL2 data rate to approximately 25 Mbps downstream. In addition, both of the new standards support improved interoperability, all-digital carriage of analog telephony, power-saving enhancements, and bonding of ADSL lines to create even higher data throughput services. Higher modulation efficiencies using trellis coding and one-bit constellations are behind the improved data rates for ADSL2 and ADSL2+. A fast startup feature has improved initialization time from 10 seconds to 3 seconds. A statistical power-saving feature lowers the DSL modem power when it detects that the line has not been in use for an extended period. This saves power, which affects DSL modem port density and operational expense for electricity and cooling, and can ultimately save everyone money. A new channelization capability supports analog voice over DSL in a digital channel. Often called Channelized Voice over DSL (CVoDSL), this capability allows derived TDM voice to be carried transparently in a digital channel within ADSL2 and ADSL2+, removing the reservation of the lower 4 kHz for POTS that was part of the original ADSL design. This frees the lower 4 kHz frequency range to support as much as 256 Kbps of additional upstream subscriber bandwidth. There is also support for packet-based services over DSL, such as Ethernet.
ADSL2+ achieves the higher data rate of 25 Mbps through expansion of the frequency range available to the modulation engine. ADSL2+ doubles the maximum frequency range from 1.1 to 2.2 MHz, achieving data rates of up to 25 Mbps downstream at distances of about 5000 feet. Both standards share support for a line-bonding feature known as inverse multiplexing over ATM (IMA). An ATM Forum specification, ATM IMA has been used for years to bond T1/E1 circuits together to pool bandwidth for more granular service speeds. For example, bonding a pair of T1s using IMA delivers 2 x 1.544 Mbps, or roughly 3 Mbps of usable throughput after overhead. Incidentally, non-ATM bonding of T1s is available through a technology feature called Multilink PPP (MLPPP). The same technology is now applied to ADSL2 and ADSL2+, allowing bonding of two or more ADSL2 and ADSL2+ lines. The result is far greater flexibility in downstream data rates:
• 20 Mbps on two bonded pairs
• 30 Mbps on three bonded pairs
• 40 Mbps on four bonded pairs
These enhancements to the original ADSL standard bring additional life to installed copper plant.
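The bonding arithmetic is straightforward to check. The following sketch estimates usable throughput for IMA-bonded T1s, assuming a nominal 10 percent overhead figure chosen only for illustration, and computes the aggregate downstream rate for bonded ADSL2+ pairs based on the per-pair rate implied by the list above.

T1_RATE_MBPS = 1.544
ADSL2PLUS_PER_PAIR_MBPS = 10.0   # illustrative per-pair rate on a given loop

def bonded_t1(pairs, overhead=0.10):
    """Approximate usable throughput of IMA-bonded T1s (assumed overhead)."""
    return pairs * T1_RATE_MBPS * (1 - overhead)

def bonded_adsl2plus(pairs):
    """Aggregate downstream rate when bonding ADSL2+ pairs."""
    return pairs * ADSL2PLUS_PER_PAIR_MBPS

print(f"{bonded_t1(2):.2f} Mbps usable on two bonded T1s")
print([bonded_adsl2plus(n) for n in (2, 3, 4)])   # [20.0, 30.0, 40.0]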
SHDSL
Single-Pair High-Rate DSL (SHDSL) is an internationally accepted symmetric DSL service. SHDSL was standardized by the ITU-T as G.991.2 in February 2001. In some documentation, SHDSL is referred to by its prestandard acronym of G.shdsl. SHDSL is positioned to replace all previously existing symmetric DSL forms, such as HDSL, HDSL2, IDSL, and SDSL. As a symmetric, multirate version of DSL, SHDSL will also replace many legacy T1/E1 and ISDN services, because its data rate covers 192 Kbps to 2.312 Mbps. As a symmetric service, SHDSL can support data, voice, and video services. This service also can use up to eight repeaters per wire pair to significantly extend the reach of SHDSL. To help with extended reach, SHDSL uses the baseband frequencies (low frequencies reach farther), so POTS cannot be concurrently operated over an SHDSL wire pair. However, POTS can be supported in-band through the use of CVoDSL features. SHDSL uses Trellis Coded Pulse Amplitude Modulation (TC-PAM) as a line encoding technique that can yield about a 30 percent distance improvement over previous techniques. This is a less complex encoding algorithm that is manufactured in silicon at low cost. The SHDSL chipset also requires very little power (subwatt), supporting densities of about 1000 SHDSL ports in a Telco standard seven-foot rack. The frequency modulation
design of SHDSL is spectrally friendly with other xDSL services that might be used within the same binder cable group. Its data rate adaptability lends itself well to fractional or full symmetric DSL services. A dual-pair mode is part of the SHDSL specification, adding data rates from 384 Kbps to 4.6 Mbps by multiplexing across the four wires. With symmetric voice, data, and video service capability, as well as long reach, low power requirements, and spectrally friendly attributes, SHDSL is one of the more significant xDSL technologies. An extended version of SHDSL called G.SHDSL.bis is under standardization by the ITU-T and ANSI. This extended version uses enhancements to TC-PAM to increase the symmetric data rate to 5.7 Mbps while still complying with spectral compatibility requirements. The G.SHDSL.bis standard was adopted by the Ethernet in the First Mile (EFM) committee, which developed the IEEE 802.3ah EFM standard. Therefore, G.SHDSL.bis will be a fundamental physical layer (Layer 1) transport, or PHY, for the 802.3ah Ethernet over copper specifications.
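As a small sanity check on the rate ranges quoted here, the following sketch tests whether a requested symmetric rate fits the single-pair range (192 Kbps to 2.312 Mbps) or the dual-pair range (384 Kbps to 4.6 Mbps). Treating these ranges as simple intervals is a simplification of how an SHDSL transceiver actually negotiates its rate.

# Rate ranges (Kbps) quoted in this section for SHDSL
SINGLE_PAIR = (192, 2312)
DUAL_PAIR = (384, 4600)

def shdsl_mode(requested_kbps):
    """Pick the simplest SHDSL configuration that covers the requested rate."""
    if SINGLE_PAIR[0] <= requested_kbps <= SINGLE_PAIR[1]:
        return "single pair"
    if DUAL_PAIR[0] <= requested_kbps <= DUAL_PAIR[1]:
        return "dual pair"
    return "out of range for SHDSL"

for rate in (768, 2048, 4000, 6000):
    print(rate, "Kbps ->", shdsl_mode(rate))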
VDSL and VDSL2
Other prominent variants of DSL technology are known as Very High Data Rate DSL (VDSL) and Very High Data Rate DSL 2 (VDSL2). VDSL was standardized as ITU-T G.993.1 in 2004, and VDSL2 was standardized as ITU-T G.993.2 in 2005. Both VDSL and VDSL2 attempt to push the limit of data transmission over 24-gauge copper wire pairs with both asymmetric and symmetric DSL data versions. Many view the VDSL technologies as the next step in providing a complete home communications and entertainment package. By supporting entertainment video, VDSL can offer a competing service to cable TV. Some providers such as Qwest currently offer VDSL service in selected areas of the United States, and VDSL is very popular in South Korea, Japan, and China. VDSL benefits from recent advances in digital signal processor technology to provide an incredible amount of xDSL bandwidth—speeds up to about 52 Mbps are possible with VDSL and up to 100 Mbps with VDSL2 (even in the symmetric version) on very short copper loops of about 250 to 500 feet. Compare that with a maximum speed of 6 to 8 Mbps for ADSL or 25 Mbps for ADSL2+, and it’s clear that the move from current ADSL technology to VDSL could be as significant as the migration from a 56 K data modem to any type of xDSL. In simple terms, VDSL technology operates over the twisted pair of copper wires in a phone line in much the same way that ADSL does, with a range of speeds depending on actual line length. Nonetheless, there are a couple of important distinctions for VDSL. The maximum downstream rate is 52 Mbps over lines up to 1000 feet (304.8 meters) in length. Downstream speeds as low as 13 Mbps, over lengths beyond 4000 feet (1219 meters), are also common. Upstream rates in early models are asymmetric, just like ADSL, at speeds from 1.5 to 2.3 Mbps. VDSL2 pushes speeds to 100 Mbps while further limiting copper wire distance.
The VDSL technologies’ amazing performance comes at a price: VDSL can only operate over the copper line for a short distance, up to a maximum of about 4500 feet (1372 meters), and perhaps 500 feet or less for VDSL2 at its maximum rate. So a strategy for getting VDSL closer to the subscriber is in order. Both VDSL downstream and upstream data channels will be separated in frequency from bands used for basic telephone service and ISDN, enabling service providers to overlay VDSL on existing services. At present, the two high-speed channels are also separated in frequency from each other. VDSL and VDSL2 achieve extra bandwidth capacity by using different frequency ranges within the copper loops. The frequency range of about 2 MHz to 12 MHz is used for VDSL so as not to overlap the ADSL frequency windows. VDSL2 uses frequencies even higher than 12 MHz, perhaps up to as much as 30 MHz, in order to access more bandwidth capacity. These higher frequency ranges are possible because the VDSL standards place limitations on the length of the copper loop—the shorter the loop, the less that high frequencies are attenuated. VDSL offerings will largely target no more than a few hundred to a few thousand feet of copper, with providers pushing their optical network backbones and distribution systems with ONUs closer to businesses and residences. The VDSL standards also include support for either of two line-coding mechanisms—DMT and Quadrature Amplitude Modulation (QAM). Line coding packs customer data bits into symbols, or time periods, that travel over the DSL path; the more bits per symbol, the higher the bandwidth capacity. As covered earlier, DMT uses a very large number of chip-based transceivers to create channels, or “tones,” that work in parallel. QAM, which uses a combination of phase shift keying and amplitude modulation, uses a smaller number of these transceivers, each working in a particular band of the frequency range. QAM is another method of encoding multiple bits in a symbol or time period. QAM chips come in various bit constellations, such as 16-QAM, 32-QAM, and so on. The real key to the viability of VDSL is that service providers are replacing many of their main binder cable feeds with optical fiber cable, effectively reducing the overall length of the copper facility from the provider’s CO to the subscriber. In fact, many service providers are planning fiber to the curb (FTTC), which means that they will replace all existing copper lines right up to the point where your phone line branches off at your house or business. At the very least, most companies expect to implement fiber to the neighborhood (FTTN). Instead of installing optical fiber cable along each street, FTTN has fiber going to the main provider point of presence or an optical network unit (ONU) for a particular neighborhood. Placing a VDSL transceiver in your home and a DSLAM with VDSL modem cards in the nearest DLC cabinet or ONU overcomes the speed and distance limitation. The DSLAM takes care of the analog-digital-analog conversion problem that disables ADSL over optical fiber lines. It also converts the data aggregated by the DSLAM transceivers into pulses of light that can be transmitted over the optical fiber system to the CO, where the data is routed through a BRAS to the appropriate network to reach its final destination. When data is sent
back to the subscriber, the DSLAM converts the signal from the optical fiber cable and sends it to the VDSL remote transceiver at the subscriber location. Figure 8-8 shows a conceptual diagram of the devices in a VDSL network.
Figure 8-8 Devices in a VDSL Network: the CO connects over fiber to an ONU, which drives VDSL transceivers over twisted pair (13 to 55 Mbps downstream, 1.6 to 2.3 Mbps upstream) into the premises distributed network (Source: Cisco Systems, Inc.)
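Returning to the line-coding comparison above, the defining property of a QAM constellation is how many bits each symbol carries, which is the base-2 logarithm of the constellation size. The sketch below computes that value and an approximate raw rate for an assumed symbol rate of 5 Msym/s; the symbol rate is an arbitrary example rather than a VDSL parameter.

import math

def bits_per_symbol(constellation_size):
    """Bits carried by one QAM symbol, e.g., 16-QAM -> 4 bits."""
    return int(math.log2(constellation_size))

def raw_rate_mbps(constellation_size, symbol_rate_msym=5.0):
    """Raw bit rate before coding overhead (assumed symbol rate)."""
    return bits_per_symbol(constellation_size) * symbol_rate_msym

for size in (16, 64, 256):
    print(f"{size}-QAM: {bits_per_symbol(size)} bits/symbol, "
          f"~{raw_rate_mbps(size):.0f} Mbps at 5 Msym/s")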
Early versions of VDSL use FDM to separate downstream from upstream channels and both of them from basic telephone service and ISDN. Echo cancellation is typically required for later-generation systems featuring symmetric data rates. A rather substantial distance, in frequency separation, is maintained between the lowest data channel and basic telephone service to enable very simple and cost-effective basic telephone service splitters. Normal practice locates the downstream channel above the upstream channel. VDSL downstream data rates derive from submultiples of the SONET and Synchronous Digital Hierarchy (SDH) canonical speed of 155.52 Mbps, namely 51.84 Mbps, 25.92 Mbps, and 12.96 Mbps. That’s because the industry wants to efficiently pack DSL data into upstream SONET/SDH infrastructure, the wireline provider’s primary optical distribution backbone. It’s simpler to refer to these data rates as 13, 26, and 52 Mbps. Each rate has a corresponding target distance range, as shown in Table 8-4.
Table 8-4 VDSL Asymmetric Speed Range per Distance (Typical)

Target Range Downstream (Mbps) | Target Range Upstream, Asymmetrical (Mbps) | Distance (ft), 24 AWG Wire | Distance (m), 24 AWG Wire
12.96                          | 1.6                                        | 4500                       | 1372
25.92                          | 3.2                                        | 3000                       | 915
51.84                          | 6.4                                        | 1000                       | 305

Source: Cisco Systems, Inc.
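The submultiple relationship described just before Table 8-4 can be verified in a couple of lines: dividing the canonical SONET/SDH rate of 155.52 Mbps by 3, 6, and 12 yields the three VDSL downstream targets.

SONET_STS3C_MBPS = 155.52

# Submultiples used as VDSL downstream targets
print([round(SONET_STS3C_MBPS / d, 2) for d in (3, 6, 12)])
# [51.84, 25.92, 12.96]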
Table 8-5 shows VDSL symmetric rates for the corresponding target distance.
Table 8-5 VDSL Symmetric Speed Range per Distance (Typical)

Target Range Downstream (Mbps) | Target Range Upstream, Symmetrical (Mbps) | Distance (ft), 24 AWG Wire | Distance (m), 24 AWG Wire
6.48                           | 6.48                                      | 3000                       | 915
9.72                           | 9.72                                      | 3000                       | 915
12.96                          | 12.96                                     | 3000                       | 915
19.44                          | 19.44                                     | 1000                       | 305
25.96                          | 25.96                                     | 1000                       | 305

Source: Cisco Systems, Inc.
Table 8-6 shows the target VDSL2 asymmetric and symmetric rates as known during the 2005 standardization effort.
Table 8-6 VDSL2 Asymmetric and Symmetric Data Rates (Typical)

Target Range Downstream (Mbps) | Target Range Upstream, Asymmetrical (Mbps) | Target Range Upstream, Symmetrical (Mbps) | Distance (ft), 24 AWG Wire | Distance (m), 24 AWG Wire
70                             | 30                                         | --                                        | 1000                       | 305
100                            | --                                         | 100                                       | 1000                       | 305
Figure 8-9 compares VDSL, VDSL2, and ADSL transfer rates. There are about a half dozen DSL data encapsulations that are commonly deployed among service providers. There are so many because there are so many different engineers with different ideas on network design. Although each type of DSL encapsulation improves on the others along some vector of scalability, security, or performance, a typical provider would seldom use more than two or, at the most, three of these architectures on an ongoing basis. It is not the focus of this book to describe the various DSL encapsulations or designs. See the “Recommended Reading” section at the end of this chapter for books that provide a deeper look at DSL network architectures, design, and implementation.
Figure 8-9 Comparison of Transfer Rates: ADSL and VDSL (curves for ADSL, VDSL, and VDSL2, plotted as Mbps versus distance in kft of 24-gauge/0.5 mm wire) (Source: Cisco Systems, Inc.)
Figure 8-10 shows a typical DSL network design. Note that the subscriber-side DSL modem is generally referred to in the industry as an ADSL Transceiver Unit-Remote (ATU-R). The subscriber side connects to the network access provider’s DSLAM, which aggregates multiple DSL subscriber sessions into a BRAS device to provide flexibility in upstream connectivity and service options. An example of a network access provider might be BellSouth, and the network service provider might be an ISP such as UUNet.
DSLAM Broadband Aggregation Layer
The new era of communications is pushing broadband to the edge. Because it is infeasible to connect broadband trunks from each subscriber to equipment in the CO, wireline providers distribute broadband aggregation technology closer to the subscribers. In hierarchical terms, this creates a distribution layer: the CO becomes the core, a DSLAM becomes the distribution layer, and the end subscribers form the DSL access layer. Connecting the DSL access layer between the subscriber and the Telco CO is the DSLAM broadband aggregation layer.
Figure 8-10 Overview of a DSL Network: the ATU-R at the subscriber premises connects to the network access provider’s DSLAM and BRAS, which provide termination and service selection across an ATM core toward network service providers such as an ISP/Internet, enterprise networks, video aggregation, content, and voice services (Source: Cisco Systems, Inc.)
Digital Subscriber Loop Access Multiplexer (DSLAM)
DSLAMs are essentially broadband digital loop carriers. They fundamentally perform radio frequency modulation to the remote DSL modem, they bridge and might route data traffic between downstream and upstream, and they perform physical layer media conversion in the process. DSLAMs are key components for wireline providers to enable broadband communication options to residential users and businesses over twisted-pair copper wire infrastructure. At the DSL provider CO, wire center, or other point of presence, the wireline provider uses a DSLAM to aggregate the xDSL connections from the neighborhoods and businesses. Each card in a DSLAM chassis terminates and supports some number of DSL modem ports, depending on the component integration capabilities of the DSLAM manufacturer. Therefore, DSLAMs terminate and geographically concentrate the Layer 1, physical layer connectivity from individual DSL modem subscribers. Another function of a DSLAM is to accept data sourcing from hundreds of downstream DSL modems, further aggregating and multiplexing that data onto a higher-speed bandwidth facility that connects upstream to the next provider network device. DSLAMs are generally flexible and able to support multiple types of DSL in a CO, and different varieties of protocol and modulation—CAP, DMT, and QAM, for example—
in the same type of DSLAM chassis. Aggregating these subscriber sessions for delivery upstream is historically a Layer 2 ATM process. Primarily, a DSLAM will switch Layer 2 ATM PVCs coming from the subscriber DSL modems upstream into a broadband remote access server or an MPLS provider edge (PE), where higher-layer services can be accessed for the subscriber’s data delivery needs. DSLAM technology is usually built on an ATM Layer 2 fabric. This allows the use of ATM class of service (CoS) whenever differentiated services are designed and offered by the provider. Early DSLAMs would multiplex several ATM PVCs, one per DSL modem, into an ATM DS3 or ATM OC-3 uplink toward a provider ATM switch edge device, very often colocated with the DSLAM. The ATM switch is usually the edge of the wireline provider’s ATM core network, through which access to the upstream network service provider(s) is available. More recent DSLAM technology incorporates Gigabit Ethernet as an upstream, Layer 2 aggregation method and is gaining popularity among providers where ATM core platforms are less entrenched or reaching their technology sunset.
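A useful back-of-the-envelope figure when sizing a DSLAM uplink is the oversubscription ratio: the total bandwidth sold to subscribers divided by the uplink capacity. The sketch below computes it for an assumed port count, per-port rate, and single OC-3 uplink; all three figures are illustrative examples, and real designs are driven by measured traffic profiles.

# Illustrative DSLAM uplink sizing (all figures are example assumptions)
ports = 480                  # DSL subscriber ports on the DSLAM
per_port_mbps = 3.0          # sold downstream rate per subscriber
uplink_mbps = 155.0          # one OC-3 toward the ATM edge switch

sold_mbps = ports * per_port_mbps
oversubscription = sold_mbps / uplink_mbps

print(f"sold bandwidth: {sold_mbps:.0f} Mbps")
print(f"oversubscription ratio: {oversubscription:.1f}:1")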
Broadband Remote Access Server (BRAS)
Using Layer 2 ATM, Point-to-Point Protocol (PPP), or Gigabit Ethernet, modern DSLAM network designs aggregate downstream users for connectivity upstream to a device functionally known as a BRAS. The BRAS performs Layer 2 session aggregation for ATM, Ethernet, or PPP or, more commonly today, Layer 3 logical IP aggregation for the regional/access Layer 1/2 DSL network. Requirements for BRAS functionality are specified in the DSL Forum’s Technical Report TR-092. The BRAS is fundamental to providing IP services and QoS-based services to DSL subscribers from upstream network service providers (for example, an ISP). The BRAS device is normally defined as the last IP address–aware service provider device between the upstream service providers and the DSLAM/DSL subscriber access network. The BRAS aggregates Layer 2 protocol encapsulations, such as PPP or ATM, coming from the DSLAM, as well as Layer 3 IP. At Layer 3, the BRAS is the point of demarcation from IP QoS toward the provider network and from any Layer 2 CoS toward the subscriber. Regarding QoS, the BRAS is capable of synthesizing QoS for downstream DSL devices that might not be QoS aware. An example would be mapping IP QoS into an ATM CoS scheduler toward the DSLAM/DSL access network. The BRAS can also act as a Layer 2 bridge, especially when supporting transparent LAN services that are often used in metro Ethernet deployments. One example of a BRAS functional device from Cisco Systems, Inc., is the Cisco 10000 series broadband aggregation router. Please see the DSL Forum’s Technical Report TR-092 for more BRAS specification and functional requirement information.
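To make the idea of synthesizing QoS toward the access network more concrete, the following sketch maps IP DiffServ classes to ATM service categories in the way a BRAS might when scheduling traffic toward an ATM-based DSLAM. The specific pairing shown is a hypothetical illustration, not a mapping mandated by TR-092 or implemented by any particular BRAS.

# Hypothetical IP-to-ATM class mapping at the BRAS (illustrative only)
DSCP_TO_ATM = {
    "EF":   "CBR",       # voice -> constant bit rate
    "AF41": "rt-VBR",    # video -> real-time variable bit rate
    "AF21": "nrt-VBR",   # business data -> non-real-time VBR
    "BE":   "UBR",       # best effort -> unspecified bit rate
}

def schedule_class(dscp_class):
    """Return the ATM service category used toward the DSLAM."""
    return DSCP_TO_ATM.get(dscp_class, "UBR")

for cls in ("EF", "AF21", "CS0-unknown"):
    print(cls, "->", schedule_class(cls))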
Cable
Cable operators—sometimes referred to as multiple systems operators (MSOs)—have been distributing broadcast video since the 1950s. Analog TV and premium content programs have sustained cable operators for decades as a primary source of video programming in the United States. In North America, cable passes more than 100 million homes, and the latest subscriber numbers suggest that greater than 80 percent of those homes are served by cable. Cable is a generic, pop-culture term that is typically used to reference the delivery of video programming of both local and premium content over an RG59 coaxial cable medium.
NOTE
Coaxial cable is called “coaxial” because it includes one physical channel that carries the signal surrounded by another concentric physical channel, both running along the same axis. A layer of insulation separates the two channels, and there is additional insulation around the outer concentric physical layer.
Cable operators got an early lead in residential broadband systems while upgrading their infrastructure with HFC systems to support two-way interactive video. This capital-intensive effort was an attempt to remain competitive with the onset of direct broadcast satellite television. While the cable industry continued to seek standardization and mutual exchange of ideas, the proposal to use cable as a transport for promising, IP-based Internet services was born, test-marketed in 1994/1995, and first delivered as a product in 1995. With coaxial cable offering a capacious spectrum, and with strong demand for residential Internet service, broadband data service over cable was the next breakaway opportunity the industry was seeking. The new data over cable services opportunity led to fresh investor interest in the cable industry, sparking an outbreak of recapitalization activities in the late 1990s. For example, AT&T, the once and former Mother of the Bells, acquired TCI and MediaOne, becoming the largest provider of cable TV service in the United States. With the surge of Voice over IP (VoIP) within the telecommunications industry, the possibility of packet telephony over cable could be leveraged for those operators desiring a triple play of video, data, and voice. In addition to the cable industry’s important technical differentiators of speed and always-on capability, vertical integration of content providers with cable operators is yet another development, pairing content creation with a distribution and delivery vehicle, for example Time Warner Cable, Inc. Cable operators are poised to become the undisputed leaders in delivering broadband services to U.S. residential subscribers. The crystal ball is much cloudier regarding leadership in business subscribers, traditionally the stronghold of the ILECs. To adequately position for breaching the business market, operators will need to address limitations of
today’s cable access equipment, looking to achieve a balance between their investments in infrastructure and the potential revenue gains offered by emerging IP-based data, voice, and video services. Symmetrical bandwidth services are key requirements for business services. To generate more revenue, cable providers need to offer IP-based enhanced services such as guaranteed bandwidth Internet access, IP telephony, video on demand, managed home networking, gaming, and commercial services. By bundling voice, broadband access, and digital television services, cable providers can provide superior value to their customers, competing with others such as ILECs and direct broadcast satellite (DBS) service providers.
Cable Technology for Broadband Media
Cable operators typically use HFC systems as an infrastructure over which to offer broadband services such as high-definition video and data. HFC systems use a blend of optical fiber trunks, originating at the cable modem termination system (CMTS) head-end office, to connect to coaxial cable distribution “trees” in and around neighborhoods. The optical fiber improves a large portion of the cable network between the operator and subscriber, increasing capacity and reliability and enabling development of multiple services. HFC systems are still largely point-to-multipoint distribution systems. The leverage of optical fiber backbones and potentially dense wavelength division multiplexing (DWDM) allows operators to further segment their HFC systems into more logical distribution segments to increase scalability, improve security, and offer higher bandwidth to a smaller set of users per segment. Coaxial cable has an excellent spectrum width of several hundred megahertz. TV channels are found in 6 MHz increments from 54 MHz to 216 MHz for channels 2 through 13, and from 470 MHz to 812 MHz for channels 14 through 69. Today, using MPEG compression, CATV systems transmit up to 10 channels of digital video within the same 6 MHz bandwidth of a single analog channel, allowing for up to 1000 TV channel possibilities in about 550 MHz of overall used bandwidth. Needless to say, there is plenty of room still left in the coaxial cable to accommodate data over cable. The typical U.S. cable frequency allocation is shown in Table 8-7.
Table 8-7 Typical U.S. Cable Frequency Allocation

Frequency Range (MHz) | Direction  | Primary Use
5 to 42               | Upstream   | Return path for Data over Cable Systems Interface Specifications (DOCSIS) data, network management, pay-per-view billing
54 to 350             | Downstream | Broadcast analog TV and DOCSIS data (frequency of use varies by operator)
350 to 750            | Downstream | Broadcast digital TV and DOCSIS data (frequency of use varies by operator)
750 to 1000           | Upstream   | Potential return path for DOCSIS data
Recently, the industry has advanced the spectrum-carrying capability of cable via quad-shield cable. Most commonly found in broadband (CATV) cable, quad shields are multiple layers of foil and braid. This construction gives excellent shield protection and also allows the use of water-blocking gels and other methods to weatherproof cables. The newer RG6 cable is more suited to broadband applications with its wider spectrum-carrying capacity and reduced noise and signal loss. RG6 comes in 1 GHz (1000 MHz), 2.2 GHz (2200 MHz), and 3 GHz (3000 MHz) varieties. Many cable operators are using the newer cable to future-proof the copper portion of their HFC network plants as they build new infrastructure in growing neighborhoods and business areas. For providing broadband data with cable modems, the downstream data channel is placed into a separate 6 MHz channel from that of the TV channels, typically at about 850 MHz, at the back end of the currently used TV spectrum. This downstream data flows to all connected users on that cable, similar in concept to an Ethernet network. It’s up to the individual MAC address of the cable modem to decide which data to pass (yours) and which to block (theirs). The upstream data, currently architected for reduced bandwidth requirements, is placed into a 2 MHz window, generally in the 5 to 42 MHz band. The narrower upstream bandwidth uses TDM, measured in milliseconds, in which users transmit one “burst” at a time toward the Internet. This division by time works well for the very short commands, queries, and addresses that form the bulk of most users’ traffic upstream to the CMTSs and on to the Internet. The CMTS at the cable provider’s point of presence typically supports up to 1000 cable modem Internet users through a single 6 MHz channel. One 6 MHz channel can support about 30 Mbps (QAM-64 modulation) to 40 Mbps (QAM-256 modulation) of throughput, depending on the modulation type in use. This bandwidth is shared among your fellow neighborhood subscribers, so performance varies as other local cable modem users are concurrently online. The good news is that the cable provider can add a new channel, splitting the base of users, which resolves this particular performance issue. Most cable providers monitor performance regularly and add another channel when the bandwidth per user reaches a certain traffic threshold. Figure 8-11 shows a typical block diagram of a CMTS.
Figure 8-11 Cable Modem Termination System: the CMTS exchanges upstream and downstream data with the Internet, while the head-end transmitter combines video, audio, and local programming onto the same cable plant
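The shared-channel arithmetic described before the figure is simple to model. The sketch below estimates downstream channel capacity from the modulation order and an assumed symbol rate of about 5 Msym/s in a 6 MHz channel, then divides by the number of concurrently active modems; the symbol rate is a rounded assumption and the active-user count is arbitrary.

import math

SYMBOL_RATE_MSYM = 5.0   # approximate downstream symbol rate in a 6 MHz channel

def channel_capacity_mbps(qam_order):
    """Rough downstream capacity for a 6 MHz channel at a given QAM order."""
    return int(math.log2(qam_order)) * SYMBOL_RATE_MSYM

def per_user_mbps(qam_order, active_modems):
    """Average share per concurrently active cable modem."""
    return channel_capacity_mbps(qam_order) / active_modems

print(channel_capacity_mbps(64), "Mbps at 64-QAM")     # about 30 Mbps
print(channel_capacity_mbps(256), "Mbps at 256-QAM")   # about 40 Mbps
print(f"{per_user_mbps(256, 100):.2f} Mbps average with 100 active users")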
Cable Industry Standards and Initiatives
Cable standards, such as the DOCSIS standard, were a revolution for the cable industry, providing a standard backbone framework for two-way data transmission over a cable television infrastructure. DOCSIS addressed the transmission and operational support infrastructure that cable operators needed, as well as the cable modem equipment required at the home or office to support bidirectional transfer of data across a cable network. Prior to the DOCSIS specifications, proprietary cable system technology established unique architectures. This approach was largely typical of the grassroots heritage of cable operators. Depending on the cable technology vendor platform chosen by a cable operator, service options were determined by the feature sets supported by that proprietary platform. This was good for differentiation but became a problem as cable operators began to merge and acquire more regional or national systems (MSOs). When this occurs, operating multiple proprietary systems becomes a challenge, and complete technology systems might have to be replaced in favor of the system most preferred by the MSO. The DOCSIS specifications added requirements for vendors of proprietary cable technology to either integrate or migrate their technology platforms to support DOCSIS. Over time, this would allow MSOs to better integrate and expand regionally and nationally, as multivendor cable systems would have commonality in feature and functional support based on DOCSIS standards. This has the effect of reducing operator capital purchases and operational expenditures, improving the operator’s business model. Additionally, vendors
complying with the DOCSIS standards have access to a larger revenue market, which naturally drives competition and tends to result in lower prices for standards-compliant technology. As a result of the DOCSIS standards, cable technology is more interoperable, more profitable, and better able to compete with service substitutes, such as DSL and satellite broadcasters. A DOCSIS-based broadband cable network can support both data and voice traffic. Once deployed, cable operators can take advantage of their standards-based DOCSIS HFC backbone, head-end, and hub infrastructure to lower overall deployment and operational costs for new VoIP or commercial service offerings. Cable operators can enable aggressive service price discounting for bundled packages of voice, data, and cable television services to subscribers. By offering data, voice, and video services, cable operators can differentiate themselves from telecommunications providers who typically offer only one type of service and dramatically reduce the churn of customers to DBS providers. By taking advantage of the economies of network integration and scalability, cable operators can compete effectively with telephone companies or satellite providers. In addition to the DOCSIS standard, the cable industry has defined the PacketCable architectural standard for the purpose of supporting VoIP over cable system infrastructures. The two standards are briefly described in the next sections.
DOCSIS
DOCSIS 1.0 cable modem certifications began in 1999 and gave cable providers assurances that different cable modems could interoperate with multiple vendors’ CMTSs. DOCSIS 1.0 essentially provides best-effort, high-speed data communication services over cable media. When the industry agreed on a national, and subsequently international, standard for cable modems, the pace of semiconductor innovation increased rapidly as the pool of cable engineering talent magnified. This had the effect of lowering prices for cable modem equipment, as previous cable equipment no longer had a value edge with which to sustain its proprietary price models. The cable modem standards have continued to advance internationally, with Europe essentially adopting the DOCSIS 1.0 standard and calling it the Euro-DOCSIS standard. The primary difference between the DOCSIS 1.0 standard and the Euro-DOCSIS standard is that Europe uses 8 MHz of bandwidth per channel as opposed to the North American use of 6 MHz. In North America, DOCSIS 1.1 cable modems were first certified in the fall of 2001, with the standard providing more security, bandwidth, and latency guarantees in order to offer toll-quality voice services and business-class services. Many cable providers, desiring to press into the business market or offer voice over cable services, considered or implemented an upgrade of their infrastructure to support DOCSIS 1.1, gaining access to the following marketable features:
• Security—DOCSIS 1.1 provides baseline security mechanisms that assure individual user privacy across the shared-cable medium. This includes encryption keys that are sent between a DOCSIS-compliant CMTS head end and the subscriber cable modem. Policing and filtering mechanisms are added to mitigate the risk of broadcast attacks on subscriber cable modems and to validate authorized users of the cable system.
• Service-level agreements—Often required by business users, QoS mechanisms help guarantee bandwidth and apply various priorities to data, video, or voice applications. This can also support multimedia gaming and videoconferencing services within the bounds of asymmetric bandwidth delivery.
• Toll-quality VoIP—The ability to deliver VoIP is assured via congestion management and QoS mechanisms.
• IP multicast—IP multicast provides support for real-time IP video streaming applications.
Increasing upstream capacity has long been a dream of cable MSOs. With more upstream capacity, the delivery of additional services to the enterprise and small business market is possible. Requirements for more symmetric throughput are being driven by services and applications such as VoIP, videoconferencing, peer-to-peer networking, and gaming. DOCSIS 2.0 builds on DOCSIS 1.1 capabilities, effectively adding advanced digital modulation capabilities to increase upstream bandwidth by three times over DOCSIS 1.1 and by six times over the original DOCSIS 1.0. By approaching more symmetrical upstream and downstream bandwidth, effectively 30 Mbps of capacity in both directions per channel, business-related cable modem services such as business-class videoconferencing are now possible. Available since December 2002, DOCSIS 2.0 provides a 50 percent increase in spectral efficiency and a threefold increase in the throughput of a single carrier compared to DOCSIS 1.x. The new upstream physical (PHY) layer supports a raw data throughput of up to 30.72 Mbps via a single, 6.4 MHz digitally modulated carrier. Under DOCSIS 1.x, the maximum data throughput was 10.24 Mbps in 3.2 MHz of bandwidth. These enhancements increase the network capacity and improve statistical multiplexing performance, thus reducing the cost per bit for the service provider. DOCSIS 2.0, with its greater upstream throughput, supports this trend with higher-order modulation formats and increased upstream channel radio frequency (RF) bandwidth. Table 8-8 shows a comparison between DOCSIS 1.x and DOCSIS 2.0 upstream PHY layer parameters. DOCSIS 2.0 system deployments began in 2003, and their penetration timeline is largely dependent on individual cable operators’ business strategies, capital expenditure plans and depreciation schedules, and competitive opportunities. DOCSIS 2.0 systems have been very effective in greenfield builds, but uptake has been slow in the U.S. Many cable operators in the U.S. are still leveraging DOCSIS 1.0 and 1.1 system investments, and are choosing to wait for the standardization of DOCSIS 3.0 to get a clearer picture of what their next cable technology migration step needs to be.
Table 8-8 Comparing DOCSIS 1.x and DOCSIS 2.0 Upstream PHY Parameters

Property                           | DOCSIS 1.x                                                                      | DOCSIS 2.0 A-TDMA                   | DOCSIS 2.0 S-CDMA
Multiplexing technique             | Frequency division multiple access (FDMA)/time division multiple access (TDMA) | FDMA/TDMA                           | FDMA/S-CDMA
Symbol rates (ksym/sec)            | 160, 320, 640, 1280, 2560                                                       | 160, 320, 640, 1280, 2560, 5120     | 1280, 2560, 5120
Modulation types                   | Quadrature Phase Shift Keying (QPSK), 16-QAM                                    | QPSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM | QPSK, 8-QAM, 16-QAM, 32-QAM, 64-QAM, 128-QAM (trellis-coded modulation [TCM] only)
Raw spectral efficiency (bits/sym) | 2 and 4                                                                         | 2 to 6                              | 1 to 6
FEC                                | RS (T = 1 to 10)                                                                | RS (T = 1 to 16)                    | RS (T = 1 to 16), TCM
Equalizer                          | 8 tap                                                                           | 24 tap                              | 24 tap
Byte block interleaving            | No                                                                              | Yes                                 | No
S-CDMA framing                     | No                                                                              | No                                  | Yes
Bit rate (Mbps)                    | 0.32 to 10.24                                                                   | 0.32 to 30.72                       | 2.56 to 30.72
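The spectral-efficiency gain quoted for DOCSIS 2.0 follows directly from the throughput and bandwidth figures above: maximum raw rate divided by channel width. The short sketch below performs that division for the DOCSIS 1.x and DOCSIS 2.0 upstream maximums.

# Upstream maximums from the text: (raw Mbps, channel MHz)
DOCSIS_1X = (10.24, 3.2)
DOCSIS_20 = (30.72, 6.4)

def efficiency(rate_mbps, width_mhz):
    """Raw spectral efficiency in bits per second per hertz."""
    return rate_mbps / width_mhz

e1, e2 = efficiency(*DOCSIS_1X), efficiency(*DOCSIS_20)
print(f"DOCSIS 1.x: {e1:.1f} bps/Hz")
print(f"DOCSIS 2.0: {e2:.1f} bps/Hz")
print(f"{(e2 / e1 - 1) * 100:.0f}% increase in spectral efficiency")   # 50%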
PacketCable
PacketCable is a CableLabs initiative to develop an end-to-end solution architecture for the delivery of two-way, real-time multimedia services based on IP protocols. Real-time suggests VoIP, video streaming, and IP multicast applications. VoIP is the primary emphasis of PacketCable and is considered crucial to the cable industry’s ability to execute the “triple play” of video, data, and voice service. The PacketCable initiative is being developed for both DOCSIS 1.1– and DOCSIS 2.0–compliant cable network systems. PacketCable defines a comprehensive approach to cable telephony. The architectural goals of the PacketCable initiative include the following:
• Voice quality equivalent to or better than the PSTN—Appropriate CODECs must be used to achieve the proper VoIP call quality, while minimizing delay and jitter. In addition to voice quality, the ability to place calls anywhere is another key requirement to replicate the functionality of the PSTN. This also requires reliability and redundancy in network components in order to match the five 9s availability of the PSTN.
• Call signaling—PacketCable defines a call signaling system to support calls between the cable network and the PSTN, international calling, and intra- and intercable network calls. Call signaling also is responsible for enabling custom calling features such as call waiting and local calling services such as caller ID.
• Distributed QoS—PacketCable includes the distribution of QoS into the cable access network to enable priority mechanisms for call setup, QoS changes during a call, and priority mechanisms for E911 services.
• Device provisioning and operations systems—Replicating the PSTN functionality includes systems capable of provisioning and managing potentially millions of subscribers—Call Management Servers (CMSs), Multimedia Terminal Adapters (MTAs), Media Gateway Controllers (MGCs), and signaling controllers for PSTN gateways. Operational systems and network management are also key, so a number of different management and billing systems are needed to efficiently operate the PacketCable environment.
• Security and regulatory monitoring—Privacy mechanisms are required that emulate or exceed those of the PSTN. In addition, PacketCable supports regulatory monitoring requirements for lawful intercept, E911, and so on.
By properly implementing a multiservice HFC DOCSIS and PacketCable network, cable operators can offer IP cable telephony services at a competitive cost compared to the PSTN, adding IP data and telephony service to the mix. Additional cable industry initiatives are OpenCable and CableHome. Like PacketCable, both initiatives are administered and managed by CableLabs on behalf of North American cable operators. OpenCable seeks to provide next-generation, interactive, digital cable services. By defining a standard, digital hardware cable interface that also separates the cable provider’s conditional access system from the digital device, retail markets are enabled to sell digital TVs, video recorders, and other digital cable-ready devices that are portable and operable across different regional cable networks. The CableHome initiative seeks to extend the services boundary beyond the home-based cable modem into the home-based LAN, providing networking and management services for wired or wireless home-based networks. Additionally, VoIP over cable, based on PacketCable specifications, should drive industry support and deployment of voice over cable technologies.
Cable Modem Termination System (CMTS)
There are well over 12,000 cable head ends or CMTSs in the United States. The CMTS head ends aggregate data traffic to and from subscriber cable modems, effectively forming a broadband access layer between the subscriber and the IP networks that make up the Internet.
At a high level, a CMTS system provides an extended Ethernet network over a WAN up to about 100 miles. The WAN is essentially an HFC system, which is a combination of fiber trunks and optoelectronic nodes that convert the video and data streams to the coaxial cable that feeds your residence. A CMTS equipment platform is a data switching system, essentially a router with special multiplexing interfaces. CMTS equipment uses amplitude modulation—which is analog transmission—through the fiber and the coax, all the way to the cable modem. This keeps costs low because analog-to-digital converters are not required. Cable modems are made for data transmission, and, as such, the subscriber’s data stream must be transported to and from a cable MSO office, where it interfaces with the CMTS equipment. In today’s HFC networks, the CMTS at the MSO office connects downstream (the forward path) to a fiber trunk that interfaces with several fiber nodes. An individual fiber node is an opto-electronic converter, accomplishing the media change from optical fiber to electrical coaxial cable. In this way, the cable system is segmented into smaller clusters as each fiber node serves a particular set of residences, usually between 500 and 2000 homes. The CMTS sends data downstream through the fiber node and onto the coaxial distribution cable that serves your neighborhood and street. A unique HFC Layer 2 MAC address is part of the header of the data, and if it matches the HFC MAC address of your cable modem, the data flows through your cable modem (and is blocked by all others) into any attached PCs or home routers.
NOTE
Cable modems usually have multiple Layer 2 MAC addresses. An HFC MAC address that identifies the subscriber’s cable modem to the cable operator’s CMTS system is used on the cable network side. The Ethernet port of the cable modem has its own MAC address for the customer premise side toward the customer’s router/LAN or PC. USB-enabled cable modems have two additional MAC addresses: one for the cable modem USB physical socket and a second for the emulated USB network driver in the customer PC or router.
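The MAC-based filtering described above amounts to a simple comparison at each cable modem. The following sketch models that decision for downstream frames; the addresses and the frame representation are invented for illustration and do not reflect the actual DOCSIS MAC frame format.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def accepts(modem_hfc_mac, frame_dest_mac):
    """A cable modem passes frames addressed to it (or broadcast) and drops the rest."""
    return frame_dest_mac in (modem_hfc_mac, BROADCAST)

my_modem = "00:11:22:33:44:55"
frames = ["00:11:22:33:44:55",      # mine -- passed to the attached PC/router
          "66:77:88:99:aa:bb",      # a neighbor's -- silently dropped
          BROADCAST]                # broadcast -- passed

for dest in frames:
    print(dest, "->", "pass" if accepts(my_modem, dest) else "drop")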
When your cable modem transmits data upstream, the data is time-division multiplexed into the radio frequency domain on the upstream (return path), which is a smaller pool of bandwidth (for DOCSIS 1.x networks). This bandwidth is in the lower end of the assigned frequency range and travels via coaxial cable to the fiber node, which returns your data to the CMTS head-end equipment at the MSO office. The CMTS is then responsible for routing your data further upstream to backbone IP routers, which often connect to many ISPs. In this way, the CMTS functions as the mediation point between the RF network and the IP network by handing off data to backbone IP routers at the cable office. Again, the radio frequencies used for CMTS data transmission are outside the assigned ranges for analog and digital TV channels, so concurrent video and Internet data are kept logically and spectrally separate.
A cable operator generally designs and deploys a metropolitan video and data delivery system as a series of basic cable CMTS hubs that aggregate into a larger super hub. In this way, service platforms and supporting equipment can be centralized or distributed depending on the operator’s needs and operational business model. An example of CMTS equipment is the Cisco Systems uBR10012 Series router. This CMTS supports the DOCSIS 1.1, EuroDOCSIS 1.1, and PacketCable 1.0 specifications. The fully loaded, 12-slot chassis is rated to support up to 44,000 users per system using eight line card slots, with redundant processors across two slots and redundant timing and clock control cards. There are four high-speed WAN ports for IP backbone and external network connections, and support for an OC-48 DPT upstream connection. Two of these systems fit into a Telco seven-foot rack. A basic hub office might have one or perhaps many CMTSs depending on the operator’s intended design. Companies such as Cisco Systems, ARRIS, and others are ramping up design and production of systems to deliver these capabilities. With DSL providers exploring video service delivery and satellite providers enabling high-speed data service, cable MSOs should leverage the technological benefits of DOCSIS 1.1 and DOCSIS 2.0, and examine DOCSIS 3.0. The market for symmetrical data services is poised to take off as soon as DOCSIS 2.0 and higher systems are reasonably available. The technology to deliver these services is specified and available. There are many options with which to ready cable networks for the deployment of advanced services that will help maintain a competitive edge.
Ethernet to the Masses
Who wouldn’t want IP everywhere? For that matter, who wouldn’t want Ethernet everywhere? Virtually all Internet-based traffic today begins and ends as IP transported over Ethernet. Indeed, mass-market connectivity to the Internet will continue to march toward the Ethernet connectivity model. Enterprise networks have long been the fertile grazing grounds for LAN technology. Ethernet, Fast Ethernet, and Gigabit Ethernet are popular technologies for linking desktop and laptop PCs, VoIP telephones, and wireless access points to corporate data servers and even IBM mainframes. As personal computer manufacturers increasingly integrate Fast Ethernet into standard PC technology, Ethernet has leapt from the enterprise into the home networking market. Both cable and DSL providers interface their respective modem technologies with the subscriber’s PCs, primarily via Ethernet. With millions of Ethernet ports in both the business and home markets, today’s service providers are surrounded by requirements for price-performing Ethernet transport and service options. Traditional TDM-based T1s, DSL, and cable cannot touch the bandwidth opportunities afforded by 10/100/1000 Mbps Ethernet, and they involve too many equipment types with too many protocols. Subscribers, whether business or residential, use networks capable of supporting megabits of bandwidth. Wireline providers have core networks that support gigabits of bandwidth. The bottleneck between the two is more apparent than ever before.
Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet are all based on the original Ethernet technology that dominates the LAN environment. Today, the largest percentage of the data traffic in the metropolitan area terminates on LANs. Therefore, because most LANs are predominately Ethernet, the most obvious solution is to use Ethernet as the end-to-end Layer 2 technology in order to flatten the protocol stack between the provider and the Ethernet user, minimizing protocol conversions and MAC rewrites wherever possible.

For Ethernet to scale in speed and ubiquity, it requires the guaranteed QoS and the operations, administration, and maintenance (OAM) features needed for provider-grade voice, audio, and video applications, with sub-50-millisecond (ms) recovery time and approximately 99.999 percent network availability. And it must do this across a number of massively large customer market segments for Ethernet transport services.

Ethernet options for wireline providers take several forms. In Chapter 5, "Optical Networking Technologies," you learned about the optical Ethernet solutions of Ethernet over SONET/SDH, Ethernet over RPR/DPT, Ethernet over MPLS, and other Ethernet transport technologies. The demand for provider Ethernet services will encompass transparent LAN services, Ethernet private line services, Ethernet-to-Internet services, and Ethernet over passive optical networks (EPONs). Wireline providers in all segments are pursuing and deploying many of these metro Ethernet solutions. This new opportunity for Ethernet to the masses is predicated on
• Bandwidth scalability—An Ethernet interface is a low-cost access interface, yet one that can be used to scale bandwidth from 10 Mbps to 100 Mbps and even up to 1 Gbps without changing the customer premises equipment interface.

• Bandwidth granularity—Most contemporary Ethernet switching equipment is QoS enabled and can be provisioned to provide very granular, tiered bandwidth options; this has great appeal to customers who like to purchase what they need, when they need it.

• Rapid provisioning—Provisioning Ethernet is largely based on software parameters to increase or decrease bandwidth and add new services. Packet provisioning is faster because network equipment and subscriber interfaces don't change. Services can be provisioned in days or even hours, rather than weeks (a simple provisioning sketch follows this list).

• Global interoperability—Users want technology to be simple, fast, inexpensive, and largely transparent. They want the ability to compute and communicate from a number of different locations. Ethernet has all of the attributes to approach global interoperability by providers everywhere.
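The granularity and rapid-provisioning points above are largely a matter of software policy. The sketch below is purely illustrative: the tier names and rates are invented for this example rather than taken from any product, and it shows how a subscriber's service tier could map to a policer rate without changing the physical Ethernet handoff.

```python
# Illustrative mapping of service tiers to policer rates (Mbps).
# Tier names and values are hypothetical examples, not vendor defaults.
SERVICE_TIERS = {
    "bronze": 10,
    "silver": 50,
    "gold": 100,
    "platinum": 1000,
}

def provision_subscriber(subscriber_id: str, tier: str) -> dict:
    """Return a logical provisioning record; no hardware change is implied."""
    if tier not in SERVICE_TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return {
        "subscriber": subscriber_id,
        "tier": tier,
        "policer_mbps": SERVICE_TIERS[tier],
        "interface": "10/100/1000BASE-T",   # same physical handoff for every tier
    }

print(provision_subscriber("cust-001", "silver"))
print(provision_subscriber("cust-001", "gold"))   # an upgrade is just a new policy
```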
Ethernet to the masses is best enabled through international standards that define markets, requirements, technology options, and management. Standards help with investment protection, interoperability, and scalability of technology and platforms. A global standard is available for delivering Ethernet to the masses.
The IEEE 802.3ah standard for Ethernet in the First Mile was ratified in June of 2004. The term "First Mile" does not denote an exact distance; rather, it represents the local loop access link between a subscriber and the provider's nearest point of presence. Further, "First Mile" emphasizes the technology from the customer's perspective. The IEEE 802.3ah (EFM) standard identifies the following areas for Ethernet delivery in the access layer:
• Ethernet in the First Mile over Copper (EFMC)
• Ethernet in the First Mile over Point-to-Point Fiber (EFMF)
• Ethernet in the First Mile over Passive Optical Network (EFMP or EFM EPON)
• Ethernet in the First Mile Operations, Administration, and Maintenance (EFM OAM)
Figure 8-12 shows a summary of developing Ethernet-based services.

Figure 8-12 Ethernet-Based Services
[Figure: a map of Ethernet-based services by layer (Layer 1, Layer 2, Layer 3) and topology (point-to-point, multipoint): Ethernet Private Line, Ethernet Wire Service, Ethernet Relay Service, Ethernet Multipoint Service, Ethernet Relay Multipoint Service, and MPLS VPN, with analogies to private line, leased line, Frame Relay, and transparent LAN service. Source: Cisco Systems, Inc.]
802.3ah Ethernet in the First Mile over Copper (EFMC)
EFMC defines the specifications and guidelines for implementing Ethernet over category 3, twisted-pair copper wire.
Note that the Layer 2 protocol stack is Ethernet only from subscriber to provider, not ATM, PPP, and so on. A portion of the standard specifies two minimum objectives for speed and reach:
• EFMC short reach (EFMC SR)—Ethernet over copper for short reach provides a minimum of 10 Mbps up to at least 750 meters.

• EFMC long reach (EFMC LR)—Ethernet over copper for long reach provides a minimum of 2 Mbps up to at least 2700 meters.

NOTE
These data rates are minimums and don't limit the achievement of higher rates and distances within the overall guidelines of the standard.
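As a quick illustration of the two objectives, the following sketch treats the 750-meter and 2700-meter figures as simple cutoffs, which the standard itself does not require; it is only meant to show the decision, not to qualify a real loop.

```python
# Pick an EFMC objective that still covers a given loop length, using the
# minimum figures quoted above (10 Mbps to at least 750 m, 2 Mbps to at
# least 2700 m). Real deployments can exceed these; this is only a sketch.

def efmc_choice(loop_length_m: float) -> str:
    if loop_length_m <= 750:
        return "EFMC short reach (10PASS-TS over VDSL), at least 10 Mbps"
    if loop_length_m <= 2700:
        return "EFMC long reach (2BASE-TL over SHDSL), at least 2 Mbps"
    return "beyond the minimum EFMC objectives; needs engineering review"

for meters in (500, 1500, 3500):
    print(f"{meters} m loop -> {efmc_choice(meters)}")
```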
There is also a specification for low-overhead bonding of multiple copper pairs to increase throughput where fiber is not economically likely to exist. Again, this pair bonding is specified as an Ethernet aggregation layer and will not use ATM IMA or the multipair bonding feature of SHDSL.

The EFMC physical layer (PHY) for both EFMC SR and EFMC LR uses DSL modulation techniques, leveraging years of experience with DSL technology and a large installed base worldwide. Not all DSL types are included; rather, specific DSL types are called out. Current specifications call for EFMC SR to be implemented over VDSL (Ethernet protocols only) and for EFMC LR over SHDSL ITU-T G.991.2 and G.SHDSL.bis (Ethernet protocols only). In addition, these specific DSL types must use Ethernet only as the Layer 2 protocol stack—no ATM or PPP allowed. Two new sublayers, called rate matching and loop aggregation, are being defined for the EFMC PHYs by the 802.3ah standard and adapted into these DSL technologies.

The EFMC portion of the 802.3ah standard is a good fit for business parks and residential neighborhoods where voice-grade copper infrastructure already exists. Multitenant units such as apartment buildings, hotels, and office buildings are also good candidates for EFMC. Table 8-9 highlights some of the EFMC information of the 802.3ah EFM standard.
Table 8-9 EFMC

EFMC Ports             Ethernet-Only EFMC PHY Type   DSL Type                           Minimum Data Rate/Distance (ft:m)
EFMC short reach PHY   10PASS-TS                     VDSL                               10 Mbps/2460 ft:750 m
EFMC long reach PHY    2BASE-TL                      SHDSL (G.991.2) and G.SHDSL.bis    2 Mbps/8858 ft:2700 m
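The loop-aggregation (pair bonding) option mentioned earlier is easy to see with arithmetic. A hedged sketch, assuming every bonded pair runs at the same per-pair rate and ignoring the bonding layer's small overhead:

```python
# Aggregate throughput over bonded copper pairs (illustrative only; assumes
# identical per-pair rates and ignores the small bonding-layer overhead).

def bonded_rate_mbps(per_pair_mbps: float, pairs: int) -> float:
    return per_pair_mbps * pairs

for pairs in (1, 2, 4):
    print(f"{pairs} pair(s) at 2 Mbps each -> ~{bonded_rate_mbps(2.0, pairs):.0f} Mbps aggregate")
```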
802.3ah Ethernet in the First Mile over Point-to-Point Fiber (EFMF)
EFMF defines the guidelines for implementing Ethernet over point-to-point single-mode fiber (G.652), using either a single-fiber strand or a dual-fiber strand (fiber pair). The use of a single fiber strand is made possible through the specification of a two-wavelength multiplexer that splits the signal into separate transmit and receive wavelengths. One wavelength is specified as downstream (D) and one is specified as upstream (U). This can help optimize fiber strand usage in the access layer.

EFMF supports symmetrical bandwidths up to 100 Mbps and 1 Gbps on full-duplex, point-to-point fiber links at a minimum of 10 km. EFMF enables more cost-effective fiber to the building (FTTB), fiber to the curb (FTTC), and fiber to the home (FTTH) solutions and is particularly well suited for new business parks and new residential subdivisions. This portion of the 802.3ah standard specifies two objectives for Ethernet over optical speed and reach:
• EFMF 100 Mbps SMF—Uses Ethernet over single-mode fiber (SMF) of either a single-fiber strand or a dual-fiber strand. Uses 100 Mbps Fast Ethernet. Minimum reach is specified as 10 km.

• EFMF 1000 Mbps (1 Gbps) SMF—Uses Ethernet over SMF of either a single-fiber strand or a dual-fiber strand. Uses 1000 Mbps Gigabit Ethernet. Minimum reach is specified as 10 km.
Included is a temperature operating range extension from -40 degrees to +85 degrees Celsius to make EFMF components suitable for outside deployment.

The EFMF physical layer (PHY) specifies a new physical media-dependent sublayer for the Layer 1 PHY. The dual fiber supports transmit and receive on separate fiber strands, classifying this sublayer as 100BASE-LX10 for 100 Mbps support and as 1000BASE-LX10 for 1 Gbps support. For the single-fiber strand, the PHY sublayer for Fast Ethernet is 100BASE-BX10, and for Gigabit Ethernet the PHY sublayer is 1000BASE-BX10. Both of these speeds are supported at SMF fiber distances of 5 km or at greater than 10 km. Table 8-10 highlights some of the EFMF information of the 802.3ah EFM standard.
Table 8-10 EFMF

Single-fiber strand (SMF G.652):
  100 Mbps Fast Ethernet PHY sublayer: 100BASE-BX10-D, 10 km; 100BASE-BX10-U, 10 km
  1000 Mbps Gigabit Ethernet PHY sublayer: --
  Wavelength(s) plan: Dual wavelength; downstream (BX10-D) transmit 1480 nm to 1580 nm; upstream (BX10-U) transmit 1260 nm to 1360 nm

Single-fiber strand (SMF G.652):
  100 Mbps Fast Ethernet PHY sublayer: --
  1000 Mbps Gigabit Ethernet PHY sublayer: 1000BASE-BX10-D, 10 km; 1000BASE-BX10-U, 10 km
  Wavelength(s) plan: Dual wavelength; downstream (BX10-D) transmit 1480 nm to 1500 nm; upstream (BX10-U) transmit 1260 nm to 1360 nm

Dual-fiber strand (SMF G.652):
  100 Mbps Fast Ethernet PHY sublayer: 100BASE-LX10, 10 km
  1000 Mbps Gigabit Ethernet PHY sublayer: 1000BASE-LX, 5 km, and 1000BASE-LX10, 10 km, extended temperature
  Wavelength(s) plan: Transmit wavelength is 1260 nm to 1360 nm, compatible with most SONET/SDH OC-3/STM-1 transceivers
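Table 8-10 boils down to a small lookup: strand count and speed select the PHY sublayer name, with the -D and -U suffixes marking the downstream and upstream wavelengths on a single strand. The dictionary below simply restates the table as code:

```python
# Map (fiber strands, speed) to the 802.3ah EFMF PHY sublayers named in Table 8-10.
EFMF_PHY = {
    ("single", 100):  ["100BASE-BX10-D", "100BASE-BX10-U"],
    ("single", 1000): ["1000BASE-BX10-D", "1000BASE-BX10-U"],
    ("dual", 100):    ["100BASE-LX10"],
    ("dual", 1000):   ["1000BASE-LX", "1000BASE-LX10"],
}

def efmf_phy(strands: str, speed_mbps: int) -> list:
    return EFMF_PHY[(strands, speed_mbps)]

print(efmf_phy("single", 1000))   # ['1000BASE-BX10-D', '1000BASE-BX10-U']
print(efmf_phy("dual", 100))      # ['100BASE-LX10']
```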
802.3ah Ethernet in the First Mile over Passive Optical Networks (EFMP)
Ethernet in the First Mile over PON (EFMP) defines the guidelines for implementing Ethernet over point-to-multipoint, single-mode fiber (SMF G.652) at 1 Gbps at up to 20 km of distance. EFMP is specific to the use of passive optical components in the access network, which lends to its classification as an Ethernet PON (EPON). The point-to-multipoint nature, sometimes referred to as a 1:n passive design, allows multiple subscriber optical fibers to be multiplexed onto a single fiber strand upstream, often called the trunk fiber. This allows optical connectivity to be extended to residential areas at greater distances and lower cost than previous optical solutions.

The IEEE 802.3ah EPON specification defines a Multi-Point Control Protocol (MPCP), Point-to-Point Emulation (P2PE), and two physical media-dependent sublayers for 10 km and 20 km distances, using a 1490 nm laser for the downstream and a 1310 nm laser for the upstream. MPCP is necessary to perform bandwidth assignment, bandwidth polling, and autodiscovery.

EPON networks include optical line terminals (OLTs) that usually reside in the provider's CO or remote point of presence. The OLT is typically an Ethernet switch with optical ports for the downstream trunk fiber. Downstream data from the trunk fiber toward the subscribers reaches a 1:n passive optical splitter, which delivers n copies of all data to the n subscriber optical network units (ONUs). Subscriber ONUs are uniquely identified and will only pass downstream data intended for their MAC address. Upstream data from the subscriber ONUs reaches the trunk fiber splitter connection, where it is arbitrated, with the help of MPCP, into time slots and delivered upstream using time division multiple access (TDMA) to the OLT. This use of TDMA upstream is typical of point-to-multipoint topologies—allowing only one subscriber to transmit upstream at a time with respect to the OLT at the head end of the single trunk fiber.
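The broadcast-downstream, TDMA-upstream behavior can be modeled in a few lines. The sketch below is deliberately simplified; real EPONs identify ONUs with logical link identifiers assigned through MPCP, and upstream grants are expressed as transmission windows in time rather than a simple list order.

```python
# Toy model of EPON traffic flow: the splitter copies every downstream frame to
# all ONUs, each ONU keeps only frames addressed to it, and upstream bursts are
# serialized one ONU at a time (TDMA) toward the OLT.

def downstream(frames, onus):
    """Splitter broadcast: every ONU sees every frame, keeps only its own."""
    return {onu: [f for f in frames if f["dst"] == onu] for onu in onus}

def upstream(queues):
    """Grant-by-grant service: one ONU transmits at a time on the trunk fiber."""
    trunk = []
    for onu, pending in queues.items():       # grant order is illustrative only
        trunk.extend({"src": onu, "payload": p} for p in pending)
    return trunk

onus = ["onu-1", "onu-2", "onu-3"]
frames = [{"dst": "onu-1", "data": "a"}, {"dst": "onu-3", "data": "b"}]
print(downstream(frames, onus))
print(upstream({"onu-1": ["x"], "onu-2": ["y", "z"]}))
```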
This portion of the 802.3ah standard specifies two objectives for Ethernet over optical speed and reach:
• EFMP 1000 Mbps SMF > 10 km—Uses Ethernet over SMF of a single-fiber strand to a 1:16 passive optical splitter. Uses 1000 Mbps (Gigabit Ethernet) as the point-to-multipoint data rate, specified for at least a 10 km reach.

• EFMP 1000 Mbps SMF > 20 km—Uses Ethernet over SMF of a single-fiber strand to a 1:16 passive optical splitter. Uses 1000 Mbps (Gigabit Ethernet) as the point-to-multipoint data rate, specified for at least a 20 km reach.
802.3ah Ethernet in the First Mile Operations, Administration, and Maintenance (EFM OAM)
To be suitable for mass public deployment, Ethernet must be equipped with better OAM features. OAM refers to the tools and utilities, automated or semiautomated, that allow for the installation, monitoring, and troubleshooting of a network. In this case, OAM applies to all EFM types: EFMC, EFMF, and EFMP. The definitions of EFM OAM within the 802.3ah standard take some of the existing Simple Network Management Protocol (SNMP) management information base structures and extend and adapt these for Ethernet management in the local access loop. This helps with monitoring, reporting, remote troubleshooting with loopback testing, and so on. The main features of the EFM OAM protocol provide more robust Ethernet management, such as

• Link performance monitoring
• Fault detection and signaling
• Loopback testing
The EFM OAM protocol specification defines a number of protocol data units, called operations, administration, and maintenance protocol data units (OAMPDUs), with which to perform OAM processes. A slow protocol OAM MAC address is used to shuttle management PDUs between Ethernet-attached devices, whether they are using EFMC, EFMF, or EFMP network topologies, at a rate of no more than ten OAMPDUs per second. OAMPDUs are Ethernet frames from 64 to 1518 bytes (octets) in length. The OAM sublayer is wedged into data link Layer 2 between the typical MAC and LLC sublayers.

The use of the EFM OAM sublayer is optional, so providers have a choice to use existing network management tools or to migrate to the EFM OAM management protocols on their particular timeline. The EFM OAM specifications take the first step in the improvement of Ethernet management in the local access layer.
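The slow-protocol constraint is essentially a pacing rule on the sender. The sketch below enforces it in the simplest possible way; the frame-size bounds and the ten-per-second ceiling come from the text, while the class itself is invented for illustration.

```python
import time

MAX_OAMPDUS_PER_SECOND = 10       # slow-protocol ceiling cited above
MIN_FRAME, MAX_FRAME = 64, 1518   # OAMPDU size bounds in octets

class OamPduSender:
    """Pace OAMPDU transmission so no more than 10 are sent per second."""
    def __init__(self):
        self._sent_times = []

    def try_send(self, frame_len: int, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        if not MIN_FRAME <= frame_len <= MAX_FRAME:
            return False
        # Keep only transmissions from the last second.
        self._sent_times = [t for t in self._sent_times if now - t < 1.0]
        if len(self._sent_times) >= MAX_OAMPDUS_PER_SECOND:
            return False                      # defer; the pacing budget is spent
        self._sent_times.append(now)
        return True

sender = OamPduSender()
print(sum(sender.try_send(64, now=0.0) for _ in range(12)))   # only 10 succeed
```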
Ethernet—New Access Choices for Providers
By deploying Ethernet as the Layer 2 technology of choice in the first mile, network designers can build networks with IP and pure Ethernet, and avoid the cost and complexity
of protocol conversion. Ethernet is the lowest-cost, highest-volume networking technology. Ethernet solutions in the first mile enable designers of hardware systems to leverage the installed base of 300 million Ethernet ports and the merchant industry of chipsets and optics. Because Ethernet is familiar technology with a large installed base, the development of Ethernet in the first mile will enable network managers to take advantage of their investments in the installed equipment, network management and analysis tools, and information technology staff expertise. Ethernet also supports all services (data, voice, and video) and both media types (copper and fiber).

Ethernet is cost-effective in a first-mile network. By removing protocol layers and the associated network elements at the edge of the last mile, the use of Ethernet lowers equipment and operating costs, lowers complexity, and simplifies the architecture. More important, the design philosophy of the Ethernet industry promotes high-volume manufacturing and low-cost design. Because the whole industry—from chipset vendors and optical component manufacturers to system vendors—participates in the standards process, the IEEE 802.3ah EFM Ethernet interfaces are very well-defined and can be implemented with available technology. This should enhance provider adoption of Ethernet to the premise solutions.

Ethernet is quickly becoming the preferred access technology between providers, operators, and their customers. Like IP, Ethernet is becoming a household term within technology-enabled circles, whether business or residential. Ethernet is one of the winning technologies that enable next-generation network services in the new era of communications.
Technology Brief—Wireline Networks
This section provides a brief study on wireline networks. You can revisit this section frequently as a quick reference for key topics described in this chapter:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding wireline networks.

• Technology at a Glance—Uses figures and tables to show wireline networking fundamentals at a glance.

• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Charts that suggest business drivers and present those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts in this section to see how business drivers are driven through technology selection, product selection, and application deployment in order to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint
Wireline networks have been the workhorses of communications for the last 120 years. Wireline networks have weathered many generations of customer demands, regulations, and technology—building a local, national, and global PSTN that transparently helps you reach out and touch someone, anytime and anywhere. Particularly with technology, generation upon generation has weaved a web of various protocols, codes, conversions, and modulations. Ignoring a heritage of central intelligence and dumb devices on the edges linked by malleable wire, the Internet and e-commerce rush of the 1990s created a bandwidth challenge between the customer masses and providers. Serving as the primary electronic gatekeepers into the Internet's central data hives and worldwide storefronts, wireline providers have likely turned over more technology, regulations, and customer requirements in the past 10 years than the previous 110 years.

Narrowband and wideband voice and data have given way to broadband three-way multimedia while the first language of the Internet, the IP protocols, continues to dictate and dominate provider strategy; capital investment; network, application, and services convergence; as well as market opportunity. This signals an inflection point of intense scrutiny on the last mile, or the first mile (depending on your perspective), also known as the local loop and the cable drop. IP-based voice, data, and video applications and services are seething from the optical network core and from the customer edge, squeezing the wireline access layer for every bit that IP can get. A next generation of wireline and capacious fiberline services is moving into place, intent on ascending to the IP network layer, while taking many different technology roads to get there.

IP is a universal translator that pumps up data utilization and concurrent communication sessions. Ascending to IP places providers into a worldwide league of consensus. The sluices of data, voice, and video will naturally gravitate to communication pathways that are resistance free. Wireline providers must do the hard work and make the hard decisions to rapidly remove resistance from up and down their appropriate value chain. There's little distinction between wireline and wireless communiqué when it comes to the electromagnetic spectrum. One insulates with copper and the other with air. Even another—fiber—insulates with a glass cladding, to ensure the propagation of wave after wave of communications.

It is an exciting time for the broadband market. Thanks to the Internet, data demand is growing for multimedia data including music, movies, and IP video broadcast and multicast. Wireline broadband and managed services are expected to lead the way. New wireline copper and optical innovations promise bandwidth scalability for addressing the access layer bottlenecks. Yet it is an apprehensive time for the narrowband residential landline market, for both local and long-distance voice. The wireline providers have entered the long-distance market at a time when the premium margins of distance are dying. Most wireline providers must offer unlimited long-distance within their bundled plans, devaluing the billable minute.
Long-distance service becomes a "me too" offering that must be present—more for retention purposes and value response and less for stunning revenue growth. Unlimited local calling and free long-distance by wireless providers, as well as cross-market competition, is having a large, service-substitution effect on the wireline residential provider.

DSL began as the Telco's response to the cable operator's high-speed data service over cable. DSL is on the rise around the world and will likely outpace broadband cable deployment by two-to-one on an international basis. Keeping the traditional wireline industry alive in broadband, DSL has allowed incumbent local exchange carriers with twisted-pair copper tethers to participate in the residential and small business broadband game. By accommodating multiple services on the same wire facility, DSL allows providers an incremental fee opportunity for broadband-based data services but, more importantly, a value play for customer retention. Looking to steal third base, many of the ILECs are developing or delivering IP TV through which to execute their triple-play strategies.

ADSL2+, SHDSL, and VDSL/VDSL2 can literally transform the existing public information network from one limited to voice, text, and low-resolution graphics to a powerful, ubiquitous system capable of bringing multimedia, including full-motion video, to every home in the new century. ADSL2+, SHDSL, and VDSL/VDSL2 will be the leaders of the DSL technologies and will play a crucial role over the next decade or more as the ILEC service providers enter new markets for delivering information in video and multimedia formats. Whether it can be argued that DSL is tactical or strategic is often beside the point. The ILECs will need to use the technology for both subscriber retention and revenue growth, and will explore the technology in an effort to find strategic uses of the medium. New broadband optical cabling might take years to reach most prospective subscribers while local municipalities and satellite operators stretch to cover the rest. Success of these new services will depend on reaching as many subscribers as possible during the first few years. By bringing movies, television, video catalogs, remote CD-ROMs, corporate LANs, and the Internet into homes and small businesses, ADSL2+, SHDSL, and VDSL/VDSL2 will make these markets viable and profitable for service providers and application suppliers alike.

Cable operators gained an early lead in North American residential broadband systems while upgrading their infrastructure with hybrid fiber coaxial systems to support two-way interactive video. Data communication over cable has become critical as convergence takes hold and shapes the next generation of network designs. For cable operators to remain state-of-the-art in service delivery, the best available technologies and specifications, such as DOCSIS 1.1 and 2.0, should be applied to bidirectional broadband data services. The symmetrical bandwidth features available in DOCSIS 2.0 products give operators more compelling features with which to introduce business video and voice services to the business market. DOCSIS 3.0 is in the wings.
By building on the industry's highly successful cable modem infrastructure, PacketCable networks use IP technology to enable a wide range of multimedia services, such as IP telephony, multimedia conferencing, interactive gaming, and general multimedia applications. To generate more revenue, cable providers should offer IP-based enhanced services such as guaranteed-bandwidth Internet access, IP telephony, video on demand, managed home networking, gaming, and commercial services. By bundling voice, broadband access, and digital television services, cable providers can provide superior value to their customers, effectively competing with other multiservice providers such as ILECs and DBS service providers.

With millions of Ethernet ports in both the business and home markets, today's service providers are surrounded with requirements for cost-effective Ethernet transport and LAN-based service options. The IEEE 802.3ah EFM standard seeks to future-proof networks by using Ethernet over copper or fiber, eliminate non-Ethernet Layer 2 protocols in the provider access layer, and hurl Ethernet to the masses for global interoperability. The standard becomes a catalyst, as several ILEC providers have announced or are deploying fiber and fiber/copper blends to the building, curb, or home. The standard helps providers lay the proper physical Layer 1 infrastructure to allow the most flexibility in leveraging pure Ethernet all the way to the premise, if that is their desired choice. This will be one of the most interesting technology spaces to watch in the coming years, as all the primary service providers have both recent and developing offerings and deployments underway for extending Ethernet services from LAN to MAN to WAN.

Optical bandwidth and optical switching will be the next rocket stage of Ethernet and IP, working with high-speed copper scooters to push these technologies faster and farther into modern networks. Where optical abounds, congestion possibilities dissipate. The super-rise of dumb bandwidth with probabilistic features (Ethernet) and TCP/IP resilience quickly overcomes the expense of smarter networks with deterministic guarantees. IP, Ethernet, and optical fiber are definitely viewed as the technologies of choice for providing not only broadband but even superbroadband to the masses. Deployed through standards-based DSL or cable copper, optical fiber, and passive optical networks, these technologies allow service providers to lead the speed race, staying out in advance of the value distinction and the hopefully insatiable communication desires of the hundreds of millions of residential and small business customers in North America and abroad. IP over Ethernet over fast copper and optical fiber would essentially arrest the current bandwidth bottleneck between the nation's businesses and the future computing and entertainment needs of a technology-enabled population.

There is a lot of customer passion around Ethernet and IP as unifying technologies. Wireline providers are advised to exploit opportunities in Ethernet and IP services—both of which are extensible, future-proof technologies with a profusion of technological service pull. In addition, ascending to the network layer of IP will be crucial to innovation within all segments of wireline providers. New opportunities for value-added solutions increase margins beyond transport products and baseline communication offerings. The key will be
taking the lead in enterprise and residential IP service creation. A multidisciplined focus on innovation, execution, operation, and superior customer service will be vital to continued retention and applauded growth.
Technology at a Glance
Figure 8-13 illustrates where various wireline applications are available and the data ranges in which they generally function.

Figure 8-13 Wireline Technology Application
[Figure: data rates from 0.1 Mbps to 100 Gbps plotted against geography from long haul and dense urban through urban, industrial/residential suburban, and rural to remote, showing where fiber-optic cable, Ethernet, VDSL, cable, ADSL, and copper technologies apply.]
Typical wireline connectivity options are as follows:
• Dial-up—56 Kbps, very low cost, very low reliability
• Cable modem—3.0 Mbps and higher, low cost, low reliability
• DSL—3.0 Mbps and higher, low cost, low reliability
• Fractional T1—256 Kbps to 712 Kbps, moderate cost, medium reliability
• T1—1.5 Mbps, high cost, high reliability
• T3—43 Mbps, very high cost, high reliability
• Frame Relay—56 Kbps to 1.5 Mbps, very high cost, high reliability
• ATM—26 Mbps to 622 Mbps, very high cost, high reliability
• 10/100/1000/10000 Ethernet—2 Mbps to 10 Gbps, moderate to high cost, medium to high reliability
Table 8-11 summarizes wireline technologies.

Table 8-11 Wireline Technologies

Standards
  Voice/Data: ANSI T1E1; G.711/PCM; ANSI T1.601; SONET/SDH; RPR/802.17
  DSL: G.992.1/G.dmt; G.992.2/G.Lite; G.992.3 and G.992.4/ADSL2; G.992.5/ADSL2+; G.993.1/VDSL; G.991.2/G.shdsl; G.993.2/VDSL2
  Cable: DOCSIS 1.0; DOCSIS 1.1; EuroDOCSIS 1.1; DOCSIS 2.0; PacketCable; OpenCable; CableHome
  Ethernet: IEEE 802.3; IEEE 802.3u; IEEE 802.3z; IEEE 802.3ah; IEEE 802.1Q; IEEE 802.1w

Seed Technology
  Voice/Data: FDMA; TDMA
  DSL: CAP; FDMA/TDMA
  Cable: FDMA/TDMA; Advanced TDMA (A-TDMA) and Synchronous CDMA (S-CDMA) for DOCSIS 2.0
  Ethernet: 10Base-T CSMA/CD; Ethernet over SONET/SDH; Ethernet over RPR; Ethernet over MPLS

Speed/Max Distance
  Voice/Data: Copper: 56 Kbps/18,000 ft; T1 1.544 Mbps/18,000 ft; E1 2.048 Mbps/16,000 ft; T2 6.312 Mbps/12,000 ft; E2 8.448 Mbps/9000 ft; T3 44.736 Mbps/450 ft. Optical: OC-3 155 Mbps; OC-12 622 Mbps; OC-48 2.5 Gbps; OC-192 10 Gbps; from 2 km to 45 km
  DSL: ADSL 1.5 Mbps to 8 Mbps/17,000 ft; G.Lite 176 Kbps to 1.5 Mbps/18,000 ft; SDSL 768 Kbps/10,000 ft; VDSL 12.96 Mbps/3000 ft, 25.92 Mbps/3000 ft, 51.84 Mbps/1000 ft; VDSL2 up to 100 Mbps/< 500 ft; G.shdsl 2.3/4.6 Mbps/22,000 ft
  Cable: Forward path 1.5 Mbps up to 43 Mbps; return path 320 Kbps to 10.24 Mbps
  Ethernet: Copper and optical fiber: LRE 5 to 15 Mbps/5000 ft; 10 Mbps; 100 Mbps; 1000 Mbps (1 Gbps); 10,000 Mbps (10 Gbps)

Range
  Voice/Data: Short to long
  DSL: Short to medium
  Cable: Short to long
  Ethernet: Short to medium (copper); short to long (optical)

Upper Frequency Range in Hertz
  Voice/Data: Analog 3.4 kHz; ISDN 80 kHz (U.S.), 120 kHz (Europe); T1/2B1Q 400 kHz
  DSL: ADSL/1.024 MHz; VDSL/30 MHz
  Cable: 1000 MHz
  Ethernet: Category 5 UTP/350 MHz; optical hundreds of terahertz
Applications
  Voice/Data: Voice features; voice telephony; multidrop data communications; remote access computing; PC-to-PC remote computing; WAN connectivity; videoconferencing
  DSL: Internet access computing; telecommuting; home networking; video on demand
  Cable: Analog video television; digital video television; video on demand; high-speed Internet access; voice telephony; Internet access; home networking
  Ethernet: Campus MAN; LAN to LAN; business data; high-speed WAN; VPN; storage area networks; disaster recovery; Internet access; private line
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance
Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer's value distinction.

In the at-a-glance charts that follow, typical customer business drivers are listed for the subject classification of networks. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider's critical success factors in conjunction with seed technology, products, and their key differentiators, as well as applications to deliver solutions with high service value to customers and market leadership for providers.
Figure 8-14 charts the business drivers for wireline voice.

Figure 8-14 Wireline Voice
[At-a-glance chart: wireline voice business drivers, critical success factors, seed technologies, Cisco product lineup, applications, service value, Cisco key differentiators, industry players (service providers and equipment manufacturers), and solution delivery, arranged along axes from low cost to high value and from competitive maturity to market leadership.]
Figure 8-15 charts the business drivers for wireline DSL.

Figure 8-15 Wireline DSL
[At-a-glance chart: wireline DSL business drivers, critical success factors, seed technologies, Cisco product lineup, applications, service value, Cisco key differentiators, industry players, and solution delivery, arranged along axes from low cost to high value and from competitive maturity to market leadership.]
Figure 8-16 charts the business drivers for wireline cable.

Figure 8-16 Wireline Cable
[At-a-glance chart: wireline cable business drivers, critical success factors, seed technologies, Cisco product lineup, applications, service value, Cisco key differentiators, industry players, and solution delivery, arranged along axes from low cost to high value and from competitive maturity to market leadership.]
Figure 8-17 charts the business drivers for wireline Ethernet.

Figure 8-17 Wireline Ethernet
[At-a-glance chart: wireline Ethernet business drivers, critical success factors, seed technologies, Cisco product lineup, applications, service value, Cisco key differentiators, industry players, and solution delivery, arranged along axes from low cost to high value and from competitive maturity to market leadership.]
End Notes
1. "Trends in Telephone Service." Federal Communications Commission, June 2005. http://www.fcc.gov/Bureaus/Common_Carrier/Reports/FCC-State_Link/IAD/trend605.pdf
2. DSL Forum. "DSL Gains 10 Million New Global Subscribers in First Quarter of 2005." June 2005. http://www.dslforum.org/PressRoom/Q1%2005%20DSL%20subscriber%20figures%20.pdf
References Used in This Chapter
Cisco Systems, Inc. "Cisco PacketCable Primer White Paper." http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns289/ns4/ns320/networking_solutions_white_paper09186a0080179138.shtml. (Must be a registered Cisco.com user.)

Cisco Systems, Inc. "Internetworking Technology Handbook, Digital Subscriber Line." http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/dsl.htm

Cisco Systems, Inc. "Business Considerations of a DOCSIS 1.1 Migration." http://www.cisco.com/en/US/partner/products/hw/modules/ps4302/products_white_paper09186a0080179140.shtml. (Must be a registered Cisco.com user.)

Cisco Systems, Inc. "Ethernet in the First Mile – Setting the Standard for Fast Broadband Access," a Cisco Systems White Paper, by Bruce Tolley. http://www.cisco.com/en/US/partner/netsol/ns341/ns396/ns223/ns227/networking_solutions_white_paper09186a008009d660.shtml. (Must be a registered Cisco.com user.)
Recommended Reading
Abe, George. Residential Broadband. Cisco Press, 1999.

Cisco Systems, Inc., edited by Wayne Vermillion. End-to-End DSL Architectures. Cisco Press, 2003.

Gumaste, Ashwin. First Mile Access Networks and Enabling Technologies. Cisco Press, 2004.

Mervana, Sanjeev, and Chris Le. Design and Implementation of DSL-Based Access Solutions. Cisco Press, 2001.

"Cisco Broadband Aggregation Portfolio." http://www.cisco.com/en/US/netsol/ns341/ns396/ns301/ns242/netbr09186a0080088763.html

"Cisco Broadband Local Integrated Services Solution for T1/E1." http://www.cisco.com/en/US/netsol/ns341/ns396/ns166/ns327/networking_solutions_package.html

"Cisco Broadband Local Integrated Services Solution for Cable." http://www.cisco.com/en/US/netsol/ns341/ns396/ns166/ns311/networking_solutions_package.html

"Cisco Broadband Local Integrated Services Solution for Metro Ethernet." http://www.cisco.com/en/US/netsol/ns341/ns396/ns166/ns321/networking_solutions_package.html

"Cisco Multiservice over Cable Solution." http://www.cisco.com/en/US/netsol/ns341/ns396/ns289/ns269/netbr09186a0080153e36.html
"Cisco Broadband Managed Access Solution for Cable Operators and ISPs." http://www.cisco.com/en/US/netsol/ns341/ns396/ns289/ns3/ns1/networking_solutions_sub_solution_home.html

"Cisco Cable-Ready HSD MxU Solution for Service Providers." http://www.cisco.com/en/US/netsol/ns341/ns396/ns2/networking_solutions_sub_solution_home.html

"Cisco Gigabit Ethernet Optimized Video on Demand Solution." http://www.cisco.com/en/US/netsol/ns341/ns396/ns159/ns333/networking_solutions_white_paper09186a008017915b.shtml

"Cisco Long Reach Ethernet Technology." http://www.cisco.com/en/US/products/hw/switches/ps1901/products_white_paper09186a0080088896.shtml

Ethernet in the First Mile Alliance (EFMA) white papers and tutorials. www.efmalliance.org
This chapter covers the following topics:
• Cellular Mobility Basics
• Wireless LANs
CHAPTER 9

Wireless Networks

In the very near future, the number of wireless devices will outnumber wired devices. A few decades ago, wireless voice (walkie-talkie and other vocal variants), wireless audio (radio), wireless video (television), and wireless data (satellite, microwave, and so on) were each developed to address a particular communication purpose. Of these, wireless voice has found great acceptance through a fundamental one-on-one communication style. Flexibly switched to any other number in the world, wireless voice adds the enchantment of portability and mobility to individual communications. To a large extent, personal wireless voice has become the axis of communication convergence.

Wireless mobility is a must-have. Like an explorer's compass, your mobile phone has become your navigator to personal communication. You use it to blaze new conversations ahead while you maintain association with current and past acquaintances. Wireless computing is a need-to-have on its way to a must-have reality and is leading personal computing into the superpersonal realm. As your personal access card into the world's storehouse of knowledge, it becomes your private window into a vast library of learning, of which data, audio, and video are an indispensable part of the total comprehension of knowledge. A remarkable convergence of wireless communications and wireless low-power computing is colliding into handheld form factors for pocket or purse.

Wireless networks enable the capability and equality of personal communications, superpersonal computing, and timesaving information to everyone who chooses to explore the communication landscape. Without them, you might never leave home or the office. With wireless networks, wherever you and your wireless communication devices are becomes your digital home and office. This chapter introduces many of the technologies behind the success of wireless cellular mobility networks, wireless local area networks (WLANs), wireless personal area networks (WPANs), and both fixed and satellite wireless networks.
Cellular Mobility Basics
Cellular phones are sophisticated radios that, at a basic level, use frequency modulation in full-duplex fashion. That means that both parties can speak and listen at the same time,
which would be really useful if you could develop the multiplexing skills required to comprehend and absorb such a bidirectional exchange. These mobile phones are generally referred to as cellular because of the cell-by-cell approach the wireless provider uses to divide up and provide citywide service. More conversationally, they're referred to as cell phones. This section introduces mobility basics in the context of analog and digital cellular systems and reviews the underlying seed technologies that make them work.
Analog Cellular Access Technology
Cellular access technologies, particularly frequency division multiple access (FDMA), are generally used with analog systems such as Advanced Mobile Phone Service (AMPS). The FDMA technique, used in the United States since 1983, separates the usable frequency channels into uniform blocks of bandwidth—each phone call using a different frequency. As a nondigital technology, FDMA systems handle voice circuit switching and aren't designed to carry data. Traditionally, AMPS has been called an analog cellular standard.

The concept of a cell is the basic geographic service area on which providers build signal coverage. The cell-based design approach for mobile radio services has its roots in an R&D project at Bell Labs in the 1970s. An original cellular infrastructure, conceptually designed much like an invisible honeycomb, uses variable low-power transmitters to cover approximately 10 square miles per cell, with each cell containing one or more broadcasting antennae or base stations that are connected back to the provider's mobile telephone switching office (MTSO). In practice, cell sites are different sizes to best serve the natural landscape, often ranging from .62 mile (1 km) to about 6.2 miles (10 km). The natural terrain and other structures can alter the coverage and shape of an individual cell. Figure 9-1 shows a conceptual layout of a cluster of cells.

A cellular phone is often designed to communicate on 1664 frequency-modulated channels, of which 832 channels are for one digital band or frequency range, while the other 832 channels are intended to support an additional digital band or frequency range. These 1664 channels are fundamental to a dual-band phone, to make the phone applicable to these wireless mobility frequency ranges. In addition, many phones are also dual-mode, which means that they support both digital cellular/PCS and analog services. The dual-mode capability supplements gaps in digital coverage with legacy AMPS service, for example, whenever you're roaming beyond your digital wireless provider's coverage area.

A wireless provider is assigned 832 radio frequencies to use within a city, which are then engineered into a spectrum plan that the provider divides across the cells of the coverage area. Within a cell, a provider generally uses one-seventh of these possible frequencies, while different frequencies are used in up to six adjacent cells to form a seven-cell cluster. Beyond an adjacent cell, providers can reuse the same frequencies once again as long as they don't interfere with other cells.
Figure 9-1 Cellular Cluster
[Figure: a honeycomb of seven adjacent cells, labeled Cell 1 through Cell 7, forming one cluster.]
To complete the cell-by-cell design, the provider also uses variable low-power transmitters for the base station antennae. By combining this low-power design with particular frequency coverage for a specific cell, frequencies weaken and fade about a mile into adjacent cells. By this time, your cellular phone has frequency hopped as it moves into a new cell, automatically releasing the weakening frequencies of the cell behind and switching to the strengthening, but different, range of frequencies in the cell ahead.

By using the cellular concept with variable low-power transmitter levels, cells can be sized according to the subscriber density and demand of a given area. As the population grows, cells can be added to accommodate that growth. Frequencies used in one cell cluster can be reused in other cells. Conversations can be handed off from cell to cell to maintain constant phone service as the user moves between them. The cellular radio equipment (base station) can communicate with mobile phones as long as they are within range. Figure 9-2 depicts a multiple-cell cluster.

With AMPS systems (EIA/TIA-553) using FDMA access technology and the 824 to 893 MHz frequency band, there are 395 voice channels (30 kHz frequency channels) for usage by an AMPS provider. Designers then usually divide that capacity by a seven-cell cluster to properly distribute the frequency range such that adjacent cells don't reuse the same frequencies. Adjusting for interchannel separation, a typical AMPS cell concurrently carries about 40 to 50 mobile conversations. For a seven-cell cluster, there are about 280 to 350 concurrent conversations.
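The capacity figures in the preceding paragraph follow from simple division, as the short sketch below shows (the per-cell call counts are the approximations quoted in the text, not exact engineering values).

```python
# AMPS capacity arithmetic: voice channels spread across a seven-cell cluster.
VOICE_CHANNELS = 395      # 30 kHz channels available to the provider
CLUSTER_SIZE = 7          # adjacent cells must not reuse the same frequencies

per_cell = VOICE_CHANNELS / CLUSTER_SIZE
print(f"~{per_cell:.0f} channels per cell before interchannel separation")

# After allowing for separation, the text quotes roughly 40-50 calls per cell.
for calls_per_cell in (40, 50):
    print(f"{calls_per_cell} calls/cell -> {calls_per_cell * CLUSTER_SIZE} concurrent calls per cluster")
```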
Figure 9-2 Multiple Cell Clusters
[Figure: three adjacent seven-cell clusters, with each cell in a cluster assigned a different frequency range (Frequency Range 1 through Frequency Range 7) so that the same ranges can be reused in neighboring clusters.]
Radio energy dissipates over distance, so the cellular phones must be within the operating range of the base station. Like the early mobile radio system, the base station communicates with mobiles via a channel. The channel is made up of two frequencies, one for transmitting to the base station and one to receive information from the base station. This is why an 832-channel AMPS system supports less than half this number of concurrent voice calls, allowing for special control channels and so on.

A cell cluster's mileage radius depends on numerous factors, such as subscriber density, but theoretically could be up to about a 30-mile diameter. Designers continue to build out their
citywide coverage areas with multiple clusters that reuse the same frequencies over and over again to scale their total concurrent user capacity. In dense, urban areas, smaller cells and cell clusters are typically designed to accommodate subscriber call capacity requirements and to maintain enough signal power despite a number of building and structural objects that attenuate and reflect cellular frequencies. Figure 9-3 illustrates the concept of using different-sized cell clusters to properly engineer for terrain and capacity.

Figure 9-3 Conceptual Cell Cluster Design
[Figure: smaller cell sites cover urban areas; larger cell sites cover rural areas.]
This cell-based approach to mobile telephony allows the provider to use and reuse its assigned frequency spectrum extensively across its assigned coverage area. Because providers are assigned different frequency ranges for their unequivocal use, there are numerous, invisible “honeycombs” of frequency coverage layered over the same geographic coverage areas, each communicating with provider-sourced cellular phones programmed
to explicitly operate within their assigned frequency range, type of access technology, and system parameters. Under power, your mobile handset's unique electronic serial number (ESN) is continuously broadcasting its reachability to the local cell. The ESN is a unique 32-bit number that is factory programmed when the phone is manufactured. In addition to the factory-programmed 32-bit ESN, the wireless provider uses other numbers to identify and track your cellular handset on its network:
• System Identification Code for Home System (SIDH)—This is a unique five-digit number assigned to each cellular provider by the Federal Communications Commission (FCC) and is used to identify your handset as belonging to the provider's system. This identifies your handset as belonging to, for example, the Sprint PCS system rather than the Verizon Wireless system. SIDH is often abbreviated to SID.

• Mobile Identification Number (MID)—Your provider uses this ten-digit number (your assigned mobile telephone number) to uniquely identify your handset within its network.
The combination of a factory-programmed ESN, along with a wireless provider-programmed SID and MID, allows the provider to activate your handset, in effect registering your phone to send and receive calls bearing your mobile telephone number. At power-up, your phone listens for the SID on its control channel, a special frequency used between the phone and base station for call setup and channel switching. It compares the SID it receives with the one that it is specifically programmed for. A match indicates that it is communicating with its home system, and a nonmatch indicates it is out of range or roaming on a different provider system. Along with the SID, your phone will continuously transmit a registration request, which updates the MTSO database so that the MTSO knows which particular cell you are currently in. By keeping track of your current cell position in the database, the MTSO knows how to reach you for an incoming call.

To deliver a call, the MTSO first determines which cell you are in, then picks an unused frequency pair for that cell, signaling your phone over the control channel to switch to those frequencies to connect and receive the call. To initiate a call, your handset sends a request to the MTSO over the control channel, the database is scanned to validate your phone's ESN, SID, and MID as a registered device, and then the MTSO selects and sends your phone the frequency pair to use to continue call setup and connection. All of this is done in mere seconds.

An enhancement to the AMPS system was later developed and called Narrowband AMPS (NAMPS, EIA/TIA/IS-91), squeezing channel spacing from 30 kHz to 10 kHz channel widths, effectively tripling the capacity of NAMPS per cell and per cluster. AMPS is the first analog wireless network standard in the world and is principally used in the United States, Australia, South America, and China. In the U.S., the Federal Communications Commission has scheduled the sunset of the AMPS networks for November 2007.
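The registration-and-paging flow just described is, at its core, a lookup table keyed by the handset's identifiers. The toy model below sketches that bookkeeping only; the identifiers and frequency pool are fabricated for the example.

```python
# Toy MTSO bookkeeping: registrations map a handset to its current cell, and
# call delivery picks an unused frequency pair in that cell. Values are invented.

class Mtso:
    def __init__(self, frequency_pairs_per_cell):
        self.location = {}                      # MID -> current cell
        self.free_pairs = dict(frequency_pairs_per_cell)

    def register(self, mid: str, cell: str) -> None:
        """Handset periodically reports which cell it can hear."""
        self.location[mid] = cell

    def deliver_call(self, mid: str):
        cell = self.location.get(mid)
        if cell is None or not self.free_pairs.get(cell):
            return None                         # unreachable or cell is full
        pair = self.free_pairs[cell].pop()      # assign an unused frequency pair
        return cell, pair

mtso = Mtso({"cell-3": [("f1_up", "f1_down"), ("f2_up", "f2_down")]})
mtso.register("615-555-0100", "cell-3")
print(mtso.deliver_call("615-555-0100"))
```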
Digital Cellular Access Technologies
In addition to analog access technology, a digital transmission technique called time division multiple access (TDMA) assigns each call a certain portion of time along with a particular frequency. Another method called code division multiple access (CDMA) assigns unique codes to each concurrent call and spreads a call over any of the available frequencies within the current cell. An enhanced frequency division multiplexing technology called orthogonal frequency division multiplexing (OFDM) makes more optimal use of frequency spectrum. These are discussed next. These cellular access technologies, TDMA, CDMA, and OFDM, are used for various network standards and mobile operations throughout the world.
TDMA
Earlier digital cellular used TDMA in combination with AMPS technology (sometimes called Digital AMPS). It is important to note that TDMA-based systems still use FDMA technology and frequency division duplexing (FDD) as well, overlaying TDMA timing functions on the FDMA frequencies. The combination is an FDMA/TDMA/FDD technique but is usually referred to as a TDMA system.

TDMA systems break up the frequency range into sets that can be used on a cell cluster basis. A different set of frequencies is used in each adjacent cell as discussed previously. When a cell phone moves from one TDMA cell to an adjacent TDMA cell, the network system must rapidly terminate the call frequencies in that cell, switch the call to the adjacent cell tower, and reestablish communications over a different frequency channel set to the cell phone—doing so without dropping the call or dropping large parts of the conversation. This practice of terminating frequency channels and reestablishing new frequency channels is referred to as a hard handoff.

In TDMA systems, the time slot is 6.7 milliseconds long on a narrowband, 30 kHz–wide frequency range within the band, allowing from three to six simultaneous digital conversations on the same strip of frequency. This allows multiple users to have concurrent calls on the same frequency channel of the TDMA-based cellular system. The timeshare technique of TDMA keeps these calls from overlapping. This affords a TDMA system about three to six times the concurrent call capacity of a system based on FDMA only.

TDMA, often referred to as a narrowband digital cellular system, operates in either the 800 MHz (IS-54) frequency band or the 1900 MHz (IS-136) frequency band in the United States. In Europe, TDMA-based systems, such as the Global System for Mobile Communications (GSM), operate in the 900 MHz and 1800 MHz frequency bands. TDMA can also be designed in a Hierarchical Cell Structure (HCS), allowing for the use of macrocells, microcells, and picocells for better optimization and localization of coverage. Using these HCS designs with adaptive channel allocation techniques (enhanced TDMA) and more intelligent antennas has the potential to extend TDMA to scale to many times the capacity of an original AMPS system.
NOTE
Initial TDMA implementations increased calls per channel to about three times that of FDMA. Newer versions of TDMA technology have improved to six times that of analog systems, and through the use of an enhanced version of TDMA called E-TDMA, the concurrent capacity increases to 15 times FDMA. E-TDMA accomplishes this by compressing quiet time during conversations and through further time division of the frequency channels.
Variants of TDMA technology are used in different digital cellular systems within the overall market. The European GSM standard implements TDMA differently than the older U.S. specification IS-136 protocol standard. Motorola developed a proprietary technology called integrated dispatch-enhanced network (iDEN), used as the basis for the original Nextel network in the U.S. TDMA technology has also been ported to wireline use, with cable modem systems using advanced TDMA (A-TDMA) for upstream data channels between cable modems and the cable modem termination system (CMTS).
CDMA CDMA is another digital access technique that assigns a unique code to each call and then spreads the call over any available frequencies in the wireless provider’s complete frequency range. This access technique is a variant of digital spread-spectrum technology, an idea coinvented by a Hollywood, California actress/scientist named Hedy Lamarr back in 1940.
NOTE
At the height of her acting career in 1942, Hedy Lamarr, along with coinventor George Antheil, patented an 88-key frequency-switching system for torpedo guidance. Though the invention was never used for military applications at the time, Sylvania later developed the concept using 1962-vintage electronics for the purpose of naval communications. Subsequent patents in frequency hopping have referred to the Lamarr-Antheil patent as the basis of the field.
Originally used for secure military communications during World War II, today’s digital spread-spectrum technology rapidly switches from one frequency to the next, or from code to code over several frequencies, all synchronized with GPS clocks and pseudorandom number generators.
CDMA-based systems also use FDMA technology and FDD, overlaying CDMA spread-spectrum functions on the FDMA frequencies. The combination is an FDMA/CDMA/FDD technique but is usually referred to as a CDMA system. CDMA is used for both voice calls and data transmission over CDMA systems. Essentially, CDMA allows all of the system's users to transmit and receive in the same wideband block of the provider's entire spectrum assignment, accurately time stamping and spreading each mobile user's signal over the entire frequency bandwidth with a unique spreading code. This means that multiple transmitters are sending to the same receiver at the same time. A particular pseudorandom code is assigned to a CDMA-based cell phone when it is on the system, and transmission to this particular phone is spread across a wide range of available frequencies using the pseudorandom code. The phone is able to decipher its particular code from the different bit streams and multiplex the bits back together.

Using this approach, each CDMA cell can use the same full frequency band per cell, eliminating the cell-cluster design requirement. This also allows the cell phone to use the same frequencies as it moves from cell to cell, because all frequencies are usable in all adjacent cells. The ability for a CDMA cell phone to transition between cells and use the same frequencies is known as a soft handoff. CDMA system cell phone users are isolated by up to 4.4 trillion codes rather than frequencies. Frequency separation space that goes unused in TDMA systems is available for use with CDMA technology. With the CDMA handsets transmitting at about 0.6 watts, the low-power transmission across the wide band of frequencies gives CDMA the colloquial reference of "wide and weak." With 4.4 trillion codes modulated and distributed at high speed over a wide range of frequency signals at such low power, a CDMA call appears inconspicuous and transparent, much like background noise—extremely difficult to intercept and demodulate. Through use of supersecure CDMA technology, you can place from 10 to 20 concurrent calls in the same channel space as that of a traditional analog FDMA-only system.

CDMA depends on power control of the mobile cell phones to maintain good call capacity. The CDMA base station transceiver automatically throttles power of cell phones close to the cell tower and boosts weaker signals of cell phones farthest from the tower to allow all phone transmissions to access the tower at approximately the same power. CDMA systems can use forward error correction (FEC) coding to improve the bit error rate factor, achieving gains in channel capacity. In the United States, CDMA also operates in the 1900 MHz frequency band but is generally transparent to TDMA systems using the same frequency space. This is because the wideband usage of CDMA transmits at a much lower spectral power density than the narrowband transmitters used in TDMA. This lower transmit power density characteristic allows both spread-spectrum (CDMA) and narrowband signals (TDMA) to cooperate in the same frequency bands.
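The code-separation idea can be made concrete with a toy direct-sequence example: each data bit is multiplied by a user-specific chip sequence, two users' chips simply add on the air, and each receiver recovers its own bits by correlating against its own code. The 8-chip codes below are illustrative only; real CDMA systems use far longer Walsh and pseudorandom sequences.

```python
# Toy direct-sequence spreading/despreading (illustrative chip codes, not real CDMA codes).

def spread(bits, code):
    """Map each bit (0/1 -> -1/+1) onto the user's chip sequence."""
    return [b_val * c for b in bits for b_val in [1 if b else -1] for c in code]

def despread(chips, code):
    """Correlate received chips with the code; positive sum -> 1, negative -> 0."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        out.append(1 if corr > 0 else 0)
    return out

user_a = [+1, -1, +1, -1, +1, -1, +1, -1]     # toy orthogonal (Walsh-like) codes
user_b = [+1, +1, -1, -1, +1, +1, -1, -1]

# Two users transmit on the same frequencies at the same time; their signals simply add.
combined = [a + b for a, b in zip(spread([1, 0], user_a), spread([0, 1], user_b))]

print(despread(combined, user_a))   # -> [1, 0]  user A's bits recovered
print(despread(combined, user_b))   # -> [0, 1]  user B's bits recovered
```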
CDMA is the seed technology for a number of digital cellular standards, some of which are covered later. These include variations of CDMA beyond the original IS-95 standards, such as CDMA2000 1x/1xRTT, CDMA2000 1xEV-DO (a high data rate version of CDMA2000 1x), CDMA2000 1xEV-DV, wideband CDMA (WCDMA), and time-division synchronous CDMA (TD-SCDMA).
• cdmaOne—cdmaOne is a brand marketing name. The original IS-95A and IS-95B CDMA specifications are now referred to as cdmaOne. The technologies use 1.25 MHz–wide channels to deliver voice and data.
• CDMA2000—A direct evolution of cdmaOne, CDMA2000 provides a set of specifications for enhancing voice and data capacity of cellular and PCS systems. The CDMA2000 family includes:
  — CDMA2000 1x—Referred to as 1x, or sometimes 1xRTT (Radio Transmission Technology), 1x is 21 times more efficient than analog cellular and four times more efficient than TDMA networks, according to Qualcomm. The 1x designation is used by Qualcomm to denote the type of CDMA radio technology used over a pair of 1.25 MHz–wide frequency channels.
  — CDMA2000 1xEV-DO—1xEV-DO is short for first evolution (1xEV), data optimized (DO). It is considered part of the Qualcomm CDMA2000 1xEV family, which is 1x technology with high data rate (HDR) technology applied. 1xEV-DO provides peak data rates of over 2.4 Mbps and an average of 700 Kbps inside a 1.25 MHz channel.
  — CDMA2000 1xEV-DV—1xEV-DV stands for first evolution, data and voice, and targets speeds of 3.1 Mbps downstream (forward link) and 1.8 Mbps upstream (reverse link). It is also part of the CDMA2000 1xEV family.
• WCDMA—An additional 3G data overlay is known as wideband code division multiple access (WCDMA). WCDMA is based on the UMTS and IMT-2000 specifications.
• TD-SCDMA—Called time division synchronous CDMA, TD-SCDMA is one of three internationally accepted CDMA standards. This technology is being pursued in mainland China to avoid the royalties associated with other CDMA systems.
OFDM OFDM is a relatively new option for wireless access technology. Researchers developed OFDM as an access technique in the 1980s. Only recently has it been finding its way into commercial communication systems, primarily because Moore’s Law has driven down the cost of the signal processing needed to implement OFDM-based systems.
NOTE
Gordon Moore, cofounder of Intel, observed in 1965 that the density of transistors per square inch of silicon-based integrated circuits had doubled every year, predicting the trend would continue in subsequent years. While this pace has slowed somewhat, data density on integrated circuits doubles approximately every 18 months, which is the current definition of Moore’s law. This allows more processing per square inch for less cost per bit.
OFDM can be thought of as a combination of modulation and multiple-access schemes that segment a communication channel in such a way that many users can share it. Whereas TDMA segments the channel according to time and CDMA according to spreading codes, OFDM segments it according to frequency. The OFDM technique divides the spectrum into a number of equally spaced tones and carries a portion of a user's information on each tone. A tone can be thought of as a frequency, in much the same way that each key on a piano represents a unique sound and frequency. OFDM can be viewed as a form of frequency division multiplexing (FDM); however, OFDM has the important special property that each tone is orthogonal to, or independent of, every other tone. Typical FDM techniques require the provisioning of a frequency guard band between adjacent frequencies to mitigate interference between them. OFDM allows the spectra of the tones to overlap, and because the tones are orthogonal, they do not interfere with each other. By allowing the tones to overlap, the total required spectrum is reduced. OFDM therefore combines the benefits of TDMA (users are orthogonal to one another) and CDMA while avoiding the limitations of each: the frequency planning and equalization that TDMA requires, and the engineering needed to avoid multiple-access interference in the case of CDMA. Wideband OFDM is another variant of OFDM that is promising for cellular systems.
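The orthogonality property is easy to verify numerically: over one symbol period, two tones spaced an exact multiple of 1/T apart correlate to zero even though their spectra overlap. The sketch below uses an assumed 1 ms symbol period and arbitrary tone indices purely for illustration; it is not tied to any particular cellular or 802.11 parameter set.

```python
import numpy as np

# Numerical check of OFDM subcarrier orthogonality (illustrative parameters).
T = 1e-3                          # symbol period: 1 ms -> tone spacing of 1/T = 1 kHz
n = 10_000                        # samples per symbol
t = np.linspace(0.0, T, n, endpoint=False)

def tone(k):
    return np.cos(2 * np.pi * (k / T) * t)   # tone k sits at k * (1/T) Hz

# Correlate pairs of tones over one symbol period (the mean approximates the integral).
same = np.mean(tone(7) * tone(7))            # ~0.5 -> a tone correlates with itself
different = np.mean(tone(7) * tone(8))       # ~0.0 -> adjacent tones are orthogonal

print(f"same tone: {same:.3f}, adjacent tones: {different:.6f}")
```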
NOTE
Traditional broadband wireless technologies have struggled to overcome problems caused by radio frequency (RF) waves bouncing off tall objects such as buildings or low objects such as lakes and pavements. The distance traveled by the primary signal is shorter than the deflected signal; the resulting time differential causes the two signals to be received, overlapped, and merged into a single distorted signal. The original signal plus duplicate or echoed signals from deflections is known as multipath, and results in intersymbol interference and distortion.
Cellular Standards Cellular standards are defined by national and international organizations to promote wide acceptance, common equipment, interoperability, and quicker deployment. The various types and generations of cellular standards and industry terminology have rapidly increased since the 1980s. Mobile telephony is extremely international in nature, with major standards at work in Europe, Japan, and the United States. Table 9-1 is offered to increase your understanding of these international wireless standards, generations, and terminology. Also included is a reference to the relative functionality of these systems as they apply to the mobile technology generation categories of 1G, 2G, 2.5G, and 3G. (These generations are described later.)

Table 9-1  International Wireless Systems and Standards

Year | Cellular System | Global Theater | Primary Frequency Band | Air Link Access Type | Standard | Functional Generation
1983 | Advanced Mobile Phone Service (AMPS) | United States | 800 MHz | FDMA | EIA-553 | 1G
1985 | Extended Total Access Communications (E-TACS) | Europe | 900 MHz | FDMA | — | 1G
1985 | Japan Total Access Communications (J-TACS) | Japan | 900 MHz | FDMA | — | 1G
1986 | Nordic Mobile Telephone | Europe | 450 MHz, 900 MHz | FDMA | — | 1G
1986 | Personal Digital Cellular (PDC) | Japan | 900 MHz, 1500 MHz | TDMA | — | 2G
1990 | GSM (2G) | Europe | 900 MHz, 1800 MHz | TDMA | GSM | 2G
1991 | U.S. Digital Cellular (Digital AMPS) | United States | 800 MHz, 1900 MHz | TDMA | IS-54, supplanted by IS-136 | 2G
1992 | Narrowband AMPS (NAMPS) | United States | 800 MHz | FDMA | — | 1G
1995 | Personal Communications Services (PCS) | Canada | 1900 MHz | TDMA, GSM, CDMA | ANSI 95A and 95B | 2/2.5G
1996 | PCS | United States | 1900 MHz | TDMA, GSM, CDMA | IS-136, GSM, ANSI 95A and 95B | 2/2.5G
1996 | PCS | United Kingdom, Hong Kong | GSM-1800 | TDMA, GSM | GSM | 2/2.5G
1998 | GSM | United States, Canada | 800 MHz, 1900 MHz | TDMA, GSM | GSM, Release 97 and 99 | 2.5G
1998 | GSM | Europe | 900 MHz, 1800 MHz | TDMA, GSM | GSM, Release 97 and 99 | 2.5G
2000 | CDMA (2G) | Japan, Asia | 900 MHz | cdmaOne | ANSI 95A and 95B | 2G
2001 | CDMA2000 | United States, South America, Asia | 1900 MHz | CDMA 1xRTT | IS-2000, part of IMT-2000 | 3G
2001 | Universal Mobile Telecommunications System (UMTS), wideband CDMA (WCDMA) | Europe, Japan, Asia, United States | 900 MHz, 1800 MHz, 2000 MHz | WCDMA, TDMA, CDMA | UMTS (often called 3GSM), part of IMT-2000 | 3G
2002/3 | CDMA 1xEV-DO | Japan/Korea, United States, South America | 1900 MHz | CDMA 1xEV-DO | IS-856 | 3G
Wireless network standards exist for analog systems, digital cellular systems, digital PCS, and so on. This section discusses the more prominent digital cellular network standards—GSM, CDMA2000, PCS, UMTS, and IMT-2000.
GSM GSM is one of the major digital cellular network standards. GSM uses a variant implementation of TDMA as its access technology. Beginning with the formation of the GSM forum in Europe in 1982, GSM adopted the digital TDMA access technology in 1987 and opened commercial GSM operations in European countries in 1991. GSM's popularity in Europe has recently migrated into the United States. The worldwide standard allows for interoperability, and a GSM subscriber can roam on most of the GSM systems in the world. Used by Cingular Wireless and T-Mobile in the United States, GSM's primary functional benefit through the use of TDMA is improved digital voice quality and advanced features. Short messaging services, multiparty calling, voice mail, fax mail, caller ID, and cell broadcast are a few of the more notable capabilities. GSM uses companion technologies to support data rates up to 384 Kbps and beyond. GSM uses the 900 and 1800 MHz frequency ranges in most of the world, excluding North America. In the United States, GSM operates in the 800 and 1900 MHz frequency bands.

Through use of digital TDMA access technology, a GSM cell usually supports about 6 to 15 times the capacity of an AMPS cell, or about 300 to 750 concurrent calls per cell, depending on the particular TDMA version. This greatly increases the mobile subscriber scalability per cluster and is well suited to dense metropolitan areas referred to as metropolitan statistical areas (MSAs). A GSM cell phone includes a feature called a subscriber identification module (SIM). The SIM stores subscriber profile information (for secure authentication purposes), the subscriber's telephone book, and other appropriate information items. This allows the GSM subscriber to easily change phones or systems, as the SIM module can be transferred to the new cell phone without having to reinput all of the essential information. GSM uses a structured cell design made up of the following:
• Macro cells—Antenna on a mast or tall building
• Micro cells—Urban rooftop antennas
• Pico cells—Small cells covering a few dozen meters
• Umbrella cells—Special-purpose cells used to plug gaps in shadowed regions of other cells
Cell radius varies depending on the height of antennas, terrain, and so on, but the longest distance for a GSM cell is about 35 km (21.7 miles). GSM authenticates the user’s cell phone handset to the GSM network and also uses cryptographic algorithms to ensure the privacy of the air link. GSM uses 200 kHz–wide radio frequency channels into which eight
voice channels are time division multiplexed at 25 kHz per user. Based on TDMA, GSM must use separate frequencies, often called sets, in adjacent cells. A typical concurrent call number for GSM cells is four times that of AMPS systems, or about 224 theoretical calls per cell. GSM networks are capable of both 2G and 2.5G network classification depending on the implementation of optional features and data rates. These generations are described later in this section.
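The per-user channel width and the four-times figure quoted above reduce to simple arithmetic, sketched below in Python. The 56-calls-per-cell AMPS baseline is the figure used in the CDMA2000 discussion that follows; treating the overall GSM gain as a flat 4x multiplier is a simplification for illustration.

```python
# Back-of-the-envelope GSM capacity, using the figures quoted in the text.
gsm_carrier_khz = 200                       # one GSM RF carrier
gsm_slots_per_carrier = 8                   # eight TDM voice channels per carrier
khz_per_gsm_call = gsm_carrier_khz / gsm_slots_per_carrier   # -> 25 kHz per user

amps_calls_per_cell = 56                    # approximate analog AMPS baseline (assumed)
gsm_multiplier = 4                          # flat 4x multiplier, per the text
gsm_calls_per_cell = amps_calls_per_cell * gsm_multiplier    # -> 224 theoretical calls

print(f"{khz_per_gsm_call:.0f} kHz per GSM voice channel, ~{gsm_calls_per_cell} calls per cell")
```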
CDMA2000 The original CDMA version, now called cdmaOne, increased simultaneous call capacity per cell from 8 to 15 times that of early FDMA systems, which had approximately 56 users per cell. That's a range of about 512 (8 carriers) to 960 (15 carriers) theoretical active conversations per cell. The CDMA2000 family of CDMA cellular and PCS standards is extending wireless voice and data capabilities even further. CDMA2000 1x has doubled voice capacity (approximately 128 calls per 1.25 MHz carrier x 8 carriers equals 1,024) over the previous cdmaOne technology, while remaining compatible with cdmaOne systems. CDMA2000 uses a signaling standard known as IS-2000. The IS-2000 signaling protocol adds more traffic channels, link and media access control layers, and quality of service (QoS) control. CDMA2000 1xRTT (sometimes called 1x or 3G1x) is the basic layer of CDMA2000. CDMA2000 meets the requirements of 3G networks. Sprint PCS and Verizon Wireless use Qualcomm-developed CDMA as the fundamental access technology for their wireless networks.

Basic CDMA2000 1xRTT networks can use data rates up to about 150 Kbps. To extend data rates even further, Qualcomm developed the high data rate (HDR) technology that could be added to CDMA2000 1xRTT networks. This combination of CDMA2000 1xRTT plus HDR technology is known as the CDMA2000 1xEV (EV for Evolution) family. The first technology of the family is CDMA2000 1xEV-DO (Evolution-Data Optimized). This adds higher data rates over and above the CDMA2000 1xRTT voice and data capacity, up to 3.1 Mbps toward the cell phone (forward link) and up to 1.8 Mbps toward the cell tower (the reverse link). This is performed within a radio channel dedicated to carrying high-speed packet data.

In the United States, both Verizon Wireless and Sprint PCS are deploying CDMA2000 networks with the 1xEV-DO high data rate technology. Japanese operator KDDI is also deploying the CDMA2000 1xEV-DO network. China Unicom began a 2003 commercial trial and is now overlaying CDMA2000 data service on Unicom's GSM core network. This included real-world testing of dual-mode GSM/CDMA2000 handsets, extending the increased voice capacity, higher data rates, and spectral efficiency of CDMA2000 to Unicom's subscriber base of more than 60 million. CDMA2000 is also one of five approved radio access technologies of the International
Mobile Telecommunications-2000 (IMT-2000) framework. IMT-2000 is intended to bring high-quality mobile multimedia telecommunications to a worldwide mass market based on a set of interfaces specified in the global, mobile, ITU standards. CDMA2000 can be deployed in all cellular and PCS radio frequency bands in the United States, Europe, and the rest of the world. SK Telecom of Korea deployed the first CDMA2000 1x network in October of 2000. Since then, CDMA2000 has expanded across all regions. According to the CDMA Development Group, as of 2005 there are 144 commercial networks in operation, and 40 more are being deployed within Asia, Australia, Africa, Europe, and the Americas.1
PCS PCS is the designated name for the 1900 MHz radio frequency band for digital mobile phone services in North America. TDMA, GSM, and CDMA technologies are used to build digital cellular systems, and all of them are adaptable to PCS. PCS networks began implementation in Canada in 1995 and then entered the United States in 1996. PCS was a 1994 spectrum assignment decision made by the U.S. Federal Communications Commission (FCC) and Industry Canada to expand digital cell phone operations beyond the original North American 800 MHz cellular phone band. The new 1900 MHz band is called the PCS band. So, PCS is more of a reference for systems, be they TDMA, CDMA, or GSM based, that operate in the 1900 MHz band in North America or within the GSM-1800 band in the United Kingdom and Hong Kong.

At the time of the 1994 PCS digital cellular standard, PCS defined a new generation of wireless phone technology that introduced a range of features and services surpassing those available in analog and digital cellular phone systems. PCS networks usually provide the user with all-in-one wireless phone, paging, messaging, data, and video services, with greatly improved battery standby time. In the United States, Sprint adopted the terminology as part of the name for its wireless business unit, known as Sprint PCS.

One of the notable features introduced with PCS phones is dual-band and dual-mode operation. PCS dual-band phones operating at 800 and 1900 MHz enable users to receive full PCS features and services for TDMA, CDMA, or GSM systems wherever they may roam. The PCS phone's dual-mode capability provides service continuity and interoperability between analog and digital networks. As a result, a PCS phone can transition well across outdoor wireless services and serve as a flat-rate digital cordless phone at home. PCS phones also have a better standby time through more efficient monitoring of the digital control channel (DCCH). Whenever a PCS phone is idle, it camps on the DCCH, and after a few milliseconds shuts off much of its circuitry to conserve power. A PCS phone will then check in with the DCCH every few milliseconds to see if there are any incoming calls or pages.
UMTS The UMTS network standard is recognized as a 3G (third-generation) function set. UMTS is generally referenced as 3GSM among European GSM technology operators and is envisioned as the successor to GSM, signaling the move to 3G mobile networks. UMTS also addresses the growing demand for new mobile and Internet applications and for added capacity in the overcrowded mobile communication sky. The new network standard, essentially referred to as UMTS/W-CDMA, uses W-CDMA as the air link along with GSM infrastructure and establishes a global roaming standard. The W-CDMA specification increases theoretical data transmission speed to peak rates of 1.92 Mbps per mobile user, with average rates beginning at 384 Kbps. W-CDMA, while based on CDMA multiplexing principles, is an independently developed, complete, and detailed mobile phone protocol that is not compatible with the Qualcomm CDMA family of technologies. Another data service specification called High-Speed Downlink Packet Access (HSDPA) is planning to advance peak data rates to beyond 10 Mbps. HSDPA is described in a following section on mobile data overlays.

UMTS uses a pair of 5 MHz–wide radio frequency channels, selecting the uplink from the 1900 MHz band and the downlink from the 2100 MHz band. Specifically, the UMTS standard defines 1885–2025 MHz for the uplink range and 2110–2200 MHz for the downlink range. As of 2005, the 1900 MHz range is available for UMTS systems in the United States, but the 2100 MHz range is largely reserved for U.S. satellite operations. In the United States, the FCC is attempting to free up space within the 2100 MHz range for full-scale UMTS.

UMTS was first deployed in 2001 by DoCoMo of Japan. T-Mobile has launched UMTS in Austria and Germany. Plus GSM of Poland launched in 2004. In the United States, AT&T Wireless (now part of Cingular Wireless) has deployed UMTS systems in over a half dozen cities, while Cingular Wireless plans to accelerate UMTS deployment. Dozens of UMTS networks are deployed. UMTS is envisioned to allow many more applications to be introduced to a worldwide base of users and provides a vital link between today's multiple GSM systems and International Mobile Telecommunications–2000 (IMT-2000), the ultimate single worldwide standard for all mobile telecommunications.
IMT-2000 The vision of IMT-2000 is the global standardization for 3G and beyond wireless communications, defined by a set of interdependent International Telecommunication Union (ITU) recommendations. IMT-2000 provides a framework for worldwide wireless access by linking the diverse systems of terrestrial and satellite-based networks. This includes cellular, PCS, LANs, cordless, and satellite radio frequency environments.
As a global effort, the idea is to provide common radio spectrum across the world, initially in the 1900–2200 MHz range, extend this to include the 2500–2900 MHz range and the 700 MHz band, and—long term—include the heavily utilized 800–1800 MHz range. Targeting global seamless roaming and a wide range of multimedia services across all IMT-2000 family networks, IMT-2000 plans to integrate telecommunications networks worldwide. The coordination of radio spectrum and network infrastructure specifications is an international effort, and the ITU works with global government and industry to coordinate and apply standards to radio spectrum, telecommunications networks, and network services. Over time, all network standards are expected to migrate toward the IMT-2000 specification. In 1999, the IMT-2000 specification adopted five types of terrestrial radio access technology standards for use. The IMT-2000 Terrestrial Radio Interface standards are
• CDMA direct spread—CDMA direct spread is a form of CDMA technology that directly "spreads" its pseudorandom sequence codes over the radio frequency channel in the frequency domain. This implies the underlying use of FDD. Direct spread is short for Direct Sequence Spread Spectrum (DSSS). The selected radio frequency channel is 5 MHz wide and uses wideband CDMA (WCDMA). WCDMA is the air link standard for the UMTS 3G wireless telecommunications standard.
• CDMA multicarrier—CDMA multicarrier spreads the data signal over multiple carriers. For the IMT-2000 CDMA multicarrier specification, the CDMA2000 1x and CDMA2000 1xEV technologies meet the requirements.
• CDMA time-division duplexing—Universal Terrestrial Radio Access (UTRA) is a standard that identifies the radio modes of access for UMTS networks. Both FDD and TDD are specified. The UTRA TDD mode is specified for the CDMA TDD radio air interface. Another UMTS technology that implements TDD is TD-SCDMA.
• TDMA single-carrier—The TDMA single-carrier radio interface uses TDMA techniques as found in AMPS and EIA/TIA-136 cellular systems. The specific radio air interface is UWC-136 for voice and EDGE for data. Universal Wireless Consortium-136 (UWC-136) is designed to provide an evolutionary path from AMPS and 2G EIA/TIA-136 networks to 3G participation within the IMT-2000 specification. Enhanced Data rates for GSM Evolution (EDGE) is the specification for data. These network technologies use TDMA in single-carrier mode.
• FDMA/TDMA—FDMA/TDMA uses both frequency- and time-division multiplexing techniques. The specification calls for a local cordless telecommunications technology known as Digital Enhanced Cordless Telecommunications (DECT). DECT is a European Telecommunications Standards Institute (ETSI) standard that acts in principle like a minicellular system for cordless telephones. The cell distance is from 25 to 100 meters, typical of many cordless technologies around homes and businesses today. The DECT standard uses FDMA, TDMA, and TDD modes to create radio frequency communication channels in both frequency and time. DECT is specified to operate in the 1900 MHz range, but only for the short distances specified.
Table 9-2 summarizes these IMT-2000 Terrestrial Radio Interfaces.

Table 9-2  IMT-2000 Terrestrial Radio Interfaces

IMT-2000 Radio Interface Technology | Technology Application
CDMA direct spread | WCDMA (UMTS)
CDMA multicarrier | CDMA2000 1x and 1xEV
CDMA TDD | UTRA TDD and TD-SCDMA
TDMA single-carrier | UWC-136 and EDGE
FDMA/TDMA | DECT
For IMT-2000, data rates of up to 2 Mbps for indoor wireless environments are specified for phase 1. A 2 Mbps rate happens to match the size of an E1 telecommunications interface, for which most of the European wireline infrastructure is optimized using 4/3/1 digital access cross-connect systems. Additional phases will define data rates for wireless indoor and mobility specifications. Since IMT-2000 is based on the 3G classification, it is useful to examine these classifications next.
Generation Upon Generation There are several competing standards for wireless communications. The technologies of GSM, TDMA, and especially CDMA are leading the wireless pack. Then there are the data add-ons of GPRS, PCS, EDGE, variations of CDMA such as CDMA2000 1x, wideband CDMA (W-CDMA), and the high data rate technology CDMA 1xEV-DO. Each of these is designed to deliver particular mobile service functionality and features, and the industry classifies them into functional generations (xG) such as 1G, 2G, 2.5G, 3G, and beyond.
1G Systems First-generation (1G) systems are generally referred to as analog cellular systems using the AMPS standards. These systems were typically designed and deployed in the 1970s and 1980s, offering primarily voice-only services. These included the Nordic mobile telephone system, the AMPS systems used for the United States, and the early TACS system in the United Kingdom.
2G Systems Second-generation (2G) systems began deployment in the 1990s. The digital version of AMPS, referred to as D-AMPS, was considered a 2G technology, in that it basically provided voice communications and some improvements to handset technology only. 2G systems enhanced the use of the frequency spectrum, providing more security in through-the-air communications and allowing mobile telephony to begin linkage with computer information systems such as databases. Both the signaling and the speech channels use digital transmission. These digital systems also improved the battery performance of mobile handsets.
2.5G Systems In the late 1990s, new requirements for two-way messaging, digital voice mail, wireless data such as e-mail and Internet, and personal number services drove an evolutionary enhancement to 2G systems, known commonly as 2.5G. It is the increased sophistication of these digital voice enhancements and low-speed digital data transmission capabilities that generally identifies 2.5G systems. Short Message Service (SMS), wireless application protocol (WAP), and General Packet Radio Service (GPRS) are some of the defining data applications in 2.5G systems. The current European GSM network standard is usually classified as 2.5G network functionality. Data transmission in 2.5G systems is faster than that of 2G systems, benefiting from GPRS and helping designate European 2.5G systems with the nomenclature of GSM/GPRS. You learn more about GPRS later in this chapter.
3G Systems Often termed the holy grail of wireless capabilities, advanced mobile wireless, or third-generation (3G) services, 3G systems allow for high-speed, always-on data transmission. They are intended to provide access to a wide range of telecommunications services, specifically for mobile users. Worldwide roaming and services capability, Internet, and other multimedia applications are some of the key features that fit the 3G paradigm. The use of volume-based billing of content services by the kilobyte in addition to voice minutes is another goal of 3G systems. For 3G data applications, the specifications generally require the ability to support both circuits and packets with the following rates and caveats:
• 144 Kbps (minimum) or higher for vehicular traffic
• 384 Kbps (minimum) for pedestrian mobile users
• 2 Mbps or more for indoor, semi-stationary mobile users
• Asymmetric data rates for send and receive
• Multimedia mail store and forward
• Both fixed and variable rate data traffic
Figure 9-4 shows a typical evolution path from 2G operator networks, through 2.5G, to 3G networks.

Figure 9-4  Typical 3G Migration Path

[Figure: migration paths from 2G through 2.5G to 3G. TDMA, GSM, and PDC (2G) evolve to GSM/GPRS (2.5G) and then to 3GSM/EDGE (EGPRS) and UMTS/WCDMA (3G); CDMA (2G) evolves to CDMA2000 1x (2.5G) and then to CDMA2000 1x with CDMA 1xEV-DO (3G, IMT-2000).]
Beyond 3G to 4G Systems There is so much emphasis on the deployment and utilization of 3G systems that there is little definition work or development effort on 4G mobile systems, at least by the current 3G standards organizations. It appears that many new technologies (such as HSDPA) will become folded into 3G system enhancements or be considered as either beyond 3G (B3G) or perhaps 3.5G systems. The aforementioned IMT-2000 is a long-term project, so much effort will be placed in that direction on an international basis. Possible directions for 4G systems will likely proceed along a couple of paths and be driven by entrepreneur forums and countries that wish to get a jump on future mobile communications. First, the push for even higher data rates of 10 Mbps and more may or may not be significant enough to classify a technology as 4G. Second, the push for open and pervasive wireless ubiquity may lay claim to 4G status. This would take the form of efforts to create software-defined mobile phones that can seamlessly roam from GSM or UMTS to CDMA to Wi-Fi, to satellite and other air link technologies, and perhaps even wireline. Data rates of 100 Mbps or more could become available to mobile phones and pocket PCs within a few years. Scaling from product trials to millions of subscribers and pervasive geographic coverage is always a challenge. The goal of seamless, ubiquitous connectivity for open architecture wireless is somewhat embodied in the IMT-2000 effort, as previously discussed, and more specifically listed as the charter of the 4th Generation Mobile Forum (4GMF). As always, time will tell.
Mobile Data Overlay Many of the early voice access technologies are not designed to carry high-speed data. Wireless operators have chosen to overlay their networks with data support technology in a modular fashion. The data overlay technologies have resulted from requirements, design, and usability efforts to standardize and deliver the technology for use by providers worldwide. As you learn in the next sections, a few of these data standards are
• High-Speed Circuit-Switched Data (HSCSD)
• General Packet Radio Service (GPRS)
• Personal Communications Services (PCS), described earlier in the chapter
• Enhanced Data rates for GSM Evolution (EDGE)
• Variants of CDMA, such as CDMA2000 1x and 1xEV-DO
• Wideband CDMA (WCDMA)
• Time-division synchronous code division multiple access (TD-SCDMA)
• High-Speed Downlink Packet Access (HSDPA)
HSCSD HSCSD is an enhancement of the data services (Circuit-Switched Data, or CSD) of all current GSM networks. It allows access to data application services, sending and receiving data from portable computers at speeds of up to 28.8 Kbps. The HSCSD solution enables higher rates by using multiple channels, allowing subscribers to enjoy faster rates for their Internet, e-mail, calendar, and file transfer services. Many operators that use HSCSD technology are currently upgrading their networks to rates of up to 43.2 Kbps. The HSCSD technology is a data attribute of a 2G network.
GPRS GPRS was made available in GSM Release 97. GPRS is a standardized packet-switched data service and an extension of the GSM architecture. Packet switching means that GPRS radio resources are used only when users are actually sending or receiving data. Rather than dedicating a radio channel to a mobile data user for a fixed period of time, the available radio resource can be concurrently shared between several users. GPRS, therefore, lets network operators maximize the use of their network resources more dynamically. GPRS uses TDMA time slot techniques. GPRS phase 1 support of about 28 Kbps of user data was introduced in 2000. In later phases, GPRS supports up to 114 Kbps and may extend to a 171 Kbps data rate, although doing so would require using all eight TDMA time
slots. In reality, GPRS generally uses two to three time slots. GPRS is commonly the data overlay technology for GSM 2.5G networks, so you may see designations of a provider network as a GSM/GPRS network infrastructure.
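The GPRS rate figures map directly onto the number of TDMA time slots assigned to a user. A minimal sketch of that arithmetic follows; the per-slot rate is simply derived from the eight-slot ceiling quoted above, and the coding-scheme detail is deliberately ignored.

```python
# GPRS throughput as a function of assigned time slots (simplified; ignores coding schemes).
max_rate_kbps_all_slots = 171.2            # theoretical ceiling quoted for all 8 slots
slots_total = 8
per_slot_kbps = max_rate_kbps_all_slots / slots_total     # ~21.4 kbps per slot

for assigned in (2, 3, 8):
    print(f"{assigned} slot(s): ~{assigned * per_slot_kbps:.0f} kbps")
# 2-3 slots (the typical case) lands in the 40-65 kbps range; 8 slots gives ~171 kbps.
```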
EDGE The first of the data overlay technologies applicable to the 3G specifications is EDGE. EDGE was made available in GSM Release 99. It is a technology that operates from about 384 Kbps and higher, which is essentially wireless broadband, providing three times the data capacity of GPRS with what is sometimes termed Enhanced GPRS (EGPRS) technology. Essentially, this increased data capacity is accomplished through enhanced modulation and coding techniques that allow 3 bits per symbol rather than the 1 bit per symbol for GPRS. Using EDGE, operators can choose to handle three times more subscribers than GPRS, triple their data rate per subscriber, or add extra capacity to their voice communications. EDGE uses the same TDMA frame structure, logic channel, and 200 kHz carrier bandwidth as current GSM networks, which allows the existing cellular plant to remain intact. Using EDGE technology, the typical data rate is targeted for 384 Kbps and the theoretical data rate improves to about 553 Kbps. In the United States, Cingular Wireless made the EDGE overlay technology commercially available on its network in 2003.
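The tripling follows directly from the modulation change: GPRS's GMSK carries 1 bit per symbol, while EDGE's 8-PSK carries 3 bits per symbol at the same symbol rate on the same 200 kHz carrier. The sketch below uses the standard GSM symbol rate and compares gross air-interface rates before channel coding and overhead, which is why the usable figures quoted above are lower.

```python
# EDGE vs. GPRS on one carrier: same symbol rate, 3 bits per symbol instead of 1.
symbol_rate_ksym_s = 270.833                 # GSM/EDGE symbol rate on a 200 kHz carrier
gmsk_gross_kbps = symbol_rate_ksym_s * 1     # ~271 kbps gross per carrier (GPRS, GMSK)
psk8_gross_kbps = symbol_rate_ksym_s * 3     # ~813 kbps gross per carrier (EDGE, 8-PSK)

print(f"capacity multiplier: {psk8_gross_kbps / gmsk_gross_kbps:.1f}x")   # -> 3.0x
```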
CDMA2000 Family for Mobile Data CDMA2000 is a family of Qualcomm-developed technology supporting both voice and data services over a standard (1x) CDMA channel. Since this section defines mobile data, it is useful to discuss the data transmission attributes of the CDMA2000 family, which include CDMA2000 1x, CDMA2000 1xEV-DO, and CDMA2000 1xEV-DV.
CDMA2000 1x CDMA2000 1x provides many performance advantages including up to twice the capacity of earlier cdmaOne systems, helping to accommodate the continuing growth of voice services as well as new wireless Internet data services. CDMA2000 1X also provides peak data rates of up to 153 Kbps (and up to 307 Kbps in the future), without sacrificing voice capacity for data capabilities. From a data throughput perspective, it often compares and competes with the EDGE technology approach.
CDMA2000 1xEV-DO For those who want higher-speed or larger-capacity data services, a data-optimized version of CDMA2000 1x, called 1xEV-DO, provides peak rates of over 3.1 Mbps, with an average throughput of over 700 Kbps. 1xEV-DO is comparable to wireline DSL services and fast enough to support demanding applications such as streaming video and large file downloads. Achieving these improvements through variable-rate speech codecs, dual receivers in handsets, and four-branch diversity in the base stations, CDMA2000 1xEV-DO also delivers data for the lowest cost per megabyte, an increasingly important factor as wireless Internet use grows in popularity. 1xEV-DO devices provide "always-on" packet data connections, helping to make wireless access simpler, faster, and more useful than ever. After conducting field trials, several providers began commercial deployments of 1xEV-DO during 2002. By combining CDMA2000 and 1xEV-DO as needed, CDMA2000 provides a flexible, integrated solution that maximizes capacity and throughput for both voice and data.
CDMA2000 1xEV-DV The Qualcomm 1xEV-DV (Evolution-Data and Voice) specification is similar to 1xEV-DO. The difference is that 1xEV-DV can support the 3.1 Mbps data rate, along with concurrent operation of CDMA2000 1x and 1xRTT data users within the same 1.25 MHz radio frequency channel.
WCDMA An additional 3G data overlay is known as WCDMA. Providing mobile users with wide area data rates initially from 384 Kbps and local area data rates up to about 2 Mbps, WCDMA is an ultra high-speed, ultra high-capacity radio technology that generates and carries a superbroadband wireless bandwidth for demanding applications such as streaming video, animations, and digital audio. WCDMA uses spectrum with a 5 MHz–wide radio signal and a chipping rate of 3.8 megachips per second (Mcps) over which to apply its DSSS techniques. WCDMA is one of the data technologies initially targeting up to 2 Mbps data rates for both indoor wireless and mobile technologies, and is one of the enabling technologies for the European data specification of UMTS. A common reference to a 3G European network would be characterized as UMTS/WCDMA.
TD-SCDMA TD-SCDMA uses a time-based (TDD) multiplexing technique to employ only a single frequency channel for both downstream and upstream data communications between the mobile cell phone and the base station transceiver. The RF channel width is 1.6 MHz and enables data rates of 1.2 Kbps to 2 Mbps. The use of a single RF channel is termed TDD unpaired mode, meaning that the single channel is used for both upstream and downstream data communications. This is contrasted with FDD paired channel techniques, which use one frequency channel for the downstream data path and another for the upstream data path. TD-SCDMA is one of the data technologies approved for the IMT-2000 CDMA TDD radio interface specification. TD-SCDMA was jointly developed by Siemens AG, Datang, and the China Academy of Telecommunications Technology.
High-Speed Downlink Packet Access (HSDPA) HSDPA is a relatively new technology that has evolved from experience with UMTS's WCDMA Release 99, implementing a fast and complex channel control mechanism with new adaptive modulation and coding techniques and a fast scheduler. HSDPA works within a WCDMA downlink 5 MHz channel and is targeted at 0.9 to 10 Mbps data rates, with the current standard allowing up to 14.4 Mbps. HSDPA is a 3rd Generation Partnership Project (3GPP) standard that achieves up to 10 Mbps in Release 5 and, using an option called multiple input multiple output (MIMO), is targeting 20 Mbps in Release 6 of the 3GPP standardization process. Early HSDPA deployments will likely target about 3.6 to 4 Mbps data rates. These speeds approach the range of wireline DSL and cable modem technology. This places HSDPA data capabilities into the 3.5G range (greater than 2 Mbps for mobile users). The first UMTS/HSDPA trial was completed in Israel in 2005, and a UMTS/HSDPA network expansion is in the plans for Cingular Wireless in the United States.

Most operators of GSM networks view both EDGE and WCDMA as complementary technologies, allowing for a staged migration from GPRS data rates to EDGE, then to those of WCDMA. WCDMA is likely to be used in dense metropolitan areas while EDGE serves the smaller metro areas, highways, and rural countryside. HSDPA is also a consideration for newer GSM networks or network upgrades. The moniker 3GSM is the common marketing name for these advancing networks.
Comparing Mobile Data Rates Figure 9-5 depicts target data rates for the various wireless technologies.
Figure 9-5  Target Data Rates for Mobile Services

[Figure: target data rate ranges by generation. 2G: CSD 9.6 kbps, HSCSD 57.6–115.2 kbps. 2.5G: GPRS 114–171.2 kbps, CDMA2000 1x 153.6–307 kbps, EDGE/EGPRS 384–553.6 kbps. 3G: UMTS Terrestrial Radio Access (W-CDMA and TD-SCDMA) 384–1920 kbps, CDMA 1xEV-DO and 1xEV-DV 3.1 Mbps, HSDPA 3.6–14.4 Mbps.]
To put these mobile wireless data rates in perspective, it is helpful to compare them with a typical user application, such as the download of an MP3 audio file from the Internet to a mobile phone. Table 9-3 depicts the approximate transmission times (theoretical) for a 23.04-megabit MP3 song file using the various data overlay technologies.

Table 9-3  Approximate Download Times for Three-Minute MP3

Network Technology | Maximum Data Rate | Download Time
GSM (CSD) | 9.6 Kbps | 41 minutes
IS95-A CDMA | 14.4 Kbps | 31 minutes
GSM (HSCSD) | 43.2 Kbps | 9.5 minutes
GPRS (TDMA) | 45 Kbps | 9 minutes
IS95-B CDMA | 64 Kbps | 6 minutes
CDMA2000 1x | 153 Kbps | 1.5 minutes
EDGE | 553 to 1920 Kbps | 12 to 41 seconds
WCDMA | 384 to 1920 Kbps | 12 to 60 seconds
TD-SCDMA | 2.0 Mbps | 12 seconds
CDMA 1xEV-DO | 3.1 Mbps | 8 seconds
HSDPA | 3.6 Mbps to 14.4 Mbps | 2 to 7 seconds
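The times in Table 9-3 are essentially the file size divided by the peak rate. The short Python sketch below reproduces the calculation for a few of the table's entries; the results land close to, but not always exactly on, the printed figures, which presumably reflect rounding and overhead assumptions in the original.

```python
# Reproduce Table 9-3's theoretical download times for a 23.04-megabit MP3 file.
file_megabits = 23.04

peak_rates_kbps = {            # a sample of maximum data rates from Table 9-3
    "GSM (CSD)": 9.6,
    "GPRS (TDMA)": 45,
    "CDMA2000 1x": 153,
    "CDMA 1xEV-DO": 3100,
    "HSDPA": 14400,
}

for tech, kbps in peak_rates_kbps.items():
    seconds = (file_megabits * 1000) / kbps        # megabits -> kilobits, then divide by rate
    print(f"{tech:>14}: {seconds / 60:5.1f} minutes ({seconds:6.0f} s)")
```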
Mobile Radio Frequency Spectrum Since the topic of radio frequency spectrum is so vast, this section focuses on only those particular areas that pertain to wireless communications via cellular and PCS technologies. In the United States, analog and digital cellular frequencies are assigned about 70 MHz from the 824–894 MHz range. This band is one of the usable modes in dual-mode phones. Whenever the digital cellular PCS or GSM up-spectrum band (1800 and 1900 MHz) is out of range, the analog mode (800 MHz band) of a dual-mode phone will become active to fill in gaps of coverage. In the United States, the TDMA version of PCS operates in the 1850–1990 MHz band, specifically using about 60 MHz from 1850–1910 MHz for phone transmit, followed by a 20 MHz separation or guard band, and then another 60 MHz from 1930–1990 MHz for the base station transmit. The CDMA version of PCS uses the same frequency range, but as previously mentioned, CDMA will use a digital spread-spectrum technique to encode and distribute mobile calls across the available range of assigned frequencies. This allows both the use of TDMA and CDMA in the same frequency ranges. In the United States, GSM operates in the 1850–1990 MHz frequency band and is generally referred to as GSM1900. In Europe, where mobility began in the Nordic Scandinavian countries (home of Ericsson, Nokia), GSM networks have a very strong legacy in assigned frequency use. GSM base station transceivers and cell phones can use four frequency ranges, referred to as GSM400, GSM850, GSM900, and GSM1800. Additionally, the GSM1800 range is referred to as the PCS range for Europe as well as China. Also in Europe, the UMTS assigned frequency range is the 1885–2025 MHz range and the 2110– 2190 MHz range. Figure 9-6 provides a conceptual view of the wireless mobile phone spectrum allocations. IMT-2000 is intended to bring high-quality mobile multimedia telecommunications to a worldwide mass market based on a set of interfaces specified in the global, mobile, ITU standard. It’s best to think of the IMT-2000 frequency assignments as a global overlay approach. Systems looking to provide worldwide interoperability and IMT-2000 status should work within the IMT-2000 designated bands. For example, the current European UMTS frequency bands fit within the IMT-2000 frequency bands.
The IMT-2000 usable bands, initially identified at the World Radiocommunications Conference (WRC) in 1992, are 1885–2025 MHz and 2110–2200 MHz. These are called the IMT-2000 core or current frequency bands. At the WRC-2000 meeting, additional usable spectrum was proposed, extending IMT-2000 future allocations in the 2290–2300 MHz and 2520–2670 MHz bands. The ITU is also suggesting allocation of existing 2G spectrum as long-term spectrum for IMT-2000, with "long term" defined as years 2005–2010. For a summary of these allocations, see Figure 9-6.

Figure 9-6  Wireless Mobile Spectrum Allocations

[Figure: spectrum allocations by region. U.S.: analog/digital cellular 824–894 MHz, GSM1900 and PCS 1850–1990 MHz. Europe: GSM400 450–496 MHz, GSM850 824–894 MHz, GSM900 880–960 MHz, GSM1800 1710–1880 MHz. China: analog/digital cellular 810–870 MHz, GSM900 880–960 MHz, GSM1800 and PCS 1710–1850 MHz. Japan/Korea: PDC 830–930 MHz and 1500 MHz. IMT-2000 core: 1885–2025 MHz and 2110–2160 MHz; extended: 2290–2300 MHz and 2520–2670 MHz; long term: 880–960 MHz and 1710–1885 MHz.]
There is also momentum for establishing 90 MHz of spectrum for 3G services in the 1710–1755 MHz band and another 45 MHz of spectrum from within the 2110–2170 MHz band. Doing so will require clearing several existing aeronautical mobile and tactical radio relay systems that currently operate in those ranges.
Navigating the Mobile Spectrum When you place a call from a North American digital cellular phone to another North American PCS phone (TDMA), you begin to surf the spectrum. Let’s assume that you use the first available channel in the assigned spectrum, and after negotiation with the base station (MTSO) your digital cellular–initiated call starts transmitting on a narrowband frequency channel at 825 MHz. You will hear the call progress or ringing tones coming back to your phone’s earpiece via the receive channel at about 870 MHz. In this example, the MTSO determines that the destination mobile ID number of the PCS phone is not on your subscribed system, so it will switch the call over landline facilities to get to the MTSO of the destination PCS network. Borrowing the previous assumptions about the availability of the first frequency channel, the MTSO will send the call request to the PCS phone via a TDMA narrowband channel at 1930 MHz and the PCS phone will answer and transmit back to the PCS’s MTSO via the 1850 MHz channel. In summary, this two-way conversation used four wireless narrowband 30 kHz frequency channels—two on the digital cellular network at 825/870 MHz and two on the PCS network at 1850/1930 MHz—and also switched between the different MTSOs via landline services. If the destination PCS phone is on a CDMA system, then the spread-spectrum design of CDMA will encode, encrypt, spread, and modulate the information onto a pair of 1.25 MHz bandwidth radio carriers between the CDMA network’s MTSO and the PCS phone, making it more difficult for you to know what data are manipulated as CDMA carries the conversation. Let’s stretch the example to an international call to Europe, with the destination being a GSM subscriber. Let’s assume that your digital cellular phone negotiates the 825 MHz transmit and 870 MHz receive to connect to the local MTSO, and that the call destination digits are determined to be international and will route via long-distance facilities of the wireless provider’s choosing. If the call destination is a country with GSM1800 system coverage, then the call could be delivered to the GSM phone handset using the Euro-GSM frequency allocation of 1710 MHz for phone transmit and 1805 MHz for the phone’s receive frequency. For the first part of the complete round trip, your conversation gets digitized and modulated onto, for example, the U.S. 825 MHz channel, then hops from MTSO to local switching office(s), then to an international long-distance provider to skip across the Atlantic Ocean, and finally jumps to the destination European GSM1800 system which modulates the conversation onto the 1805 MHz frequency channel to the GSM phone. For the return path, the GSM phone converses via the 1710 MHz channel to the GSM MTSO, which hops, skips, and jumps back across the Atlantic to your digital cellular MTSO to send the return conversation to your digital cellular handset over the 870 MHz channel. In summary, that’s U.S. 825 MHz transmit to European 1805 MHz receive to 1710 MHz transmit to U.S. 870 MHz receive. If the international long-distance segment of this call were delivered via satellite, then the call would use additional spectrum in the uplink and downlink from the appropriate satellite(s).
Wireless LANs Wireless LANs (WLANs) are creating attractive new growth opportunities. Simple to deploy and relatively inexpensive to acquire, WLAN technology uses unlicensed spectrum to achieve data transfer rates at 1 Mbps, 2 Mbps, 5.5 Mbps, 11 Mbps, 54 Mbps, and beyond. These data rates are significantly higher than the specifications of 3G and are potentially disruptive to 3G data service offerings—especially for pedestrian-based wireless services. The early iterations of WLAN technology left much room for improvement, with primary issues related to security and limited distance. Because WLANs are deployed in the Industrial, Scientific, and Medical (ISM) band of the radio spectrum at 2.4 GHz (2400 MHz), there are caps and limitations on power and range to reduce interference with the multitude of other devices in this unlicensed area. The 802.11x standard represents the technical specifications of WLANs. The 802.11b, 802.11a, and 802.11g standards are currently the most prevalent. Given the statutory and physics-related constraints on the improvement vector of WLANs, or more specifically, the 802.11x standard, 3G data technologies and 802.11x technologies could end up as complementary products that can be knitted together as a blanket of coverage. A parallel for this is one where 802.11x would represent wireless LANs while 3G data technologies would characterize wireless WANs.

WLANs are more commonly referred to using the marketing term Wi-Fi—which is short for wireless fidelity. Using a Wi-Fi card or Wi-Fi integrated technology in laptops or handhelds, mobile computing users can surf the Internet at 11 Mbps up to 54 Mbps speeds without physically plugging their computer into anything, as long as they are within about 300 feet of a Wi-Fi "hotspot's" central access point (AP). Essentially Ethernet through the air, Wi-Fi is a technology that is easily co-opted and overlaid on existing wired LANs to provide laptop computers more ubiquitous access and always-connected capability. Wi-Fi is cellular in concept—the Wi-Fi AP is generally at the center of a wireless cell that radiates radio frequency coverage for a few hundred feet in all directions. Deploying several APs and tying them together through a network backbone infrastructure enables wireless roaming from Wi-Fi cell to Wi-Fi cell.

Before moving into a discussion of the 802.11 standards, it is first helpful to work through the underlying seed technologies that make WLANs possible. A brief review of the 802.11 physical layers is necessary. Following that is an introduction to the 802.11a, b, and g standards and their comparative merits. Additional wireless technologies are then reviewed, such as WiMAX, Bluetooth, Ultra-Wideband (UWB), and wireless optics, closing the chapter with fixed wireless topics such as MMDS, LMDS, and satellite.
802.11 Physical Layer (PHY) Techniques In June 1997, the IEEE released 802.11 as the first international standard for WLANs. Initially defined for 1 and 2 Mbps data rates, the original standard employed three physical techniques:
• Diffused infrared
• Frequency hopping spread spectrum (FH or FHSS)
• Direct sequence spread spectrum (DS or DSSS)

NOTE
In September 1999, the 802.11 standard was updated to the 802.11b version, which dropped the use of the diffused infrared PHY layer while maintaining and enhancing the FHSS and DSSS PHY layers.
Diffused Infrared The diffused infrared method is essentially photonic wireless and fiberless transmission, using the 850- to 950-nm band of infrared light with a peak power of about 2 watts. Restricted to close-proximity operation and constrained by low-power requirements to reduce any possible damage to the human retina, diffused infrared is limited to approximately 25 to 35 feet at a speed of 1 to 2 Mbps. The diffusion property of the infrared transmitter fills an area much like a light bulb, bouncing off of the walls and ceiling. Many early laptop PCs incorporated diffused infrared ports with which to communicate with other like PCs on an ad hoc networking basis. Although diffused infrared was part of the original 802.11 standard, its specification as a PHY layer was dropped in the 802.11b standard revision.
FHSS The 802.11b standard uses FHSS as one of two PHY specifications. FHSS is analogous to FM radio transmissions where a data signal is superimposed on a narrowband carrier, but in this case the data signal can change frequency or "hop". The standard provides for 26 hop patterns or frequency sequences to choose from as it operates in the ISM band at around 2.4 GHz. The 2.4 GHz band is unlicensed radio space, about 82 MHz of it shared by a wide assortment of devices that use wireless communications. With FHSS, each of 79 channels is 1 MHz wide and the selected channel must hop at a fixed but nearly random rate. (The United States specifies a minimum of 2.5 hops/sec.) This rapid modulation or frequency hopping helps protect the signal from radio interference that may be concentrated around one frequency. By frequently hopping at a pace of about 0.4 seconds in the 2.402–2.4835 GHz range, the signal transmission is successful and reasonably secure as long as the transmitter and receiver know the rate and the sequence of the frequency hops. However, the ISM band is full of
other wireless devices, and a good percentage of the FHSS frequency "hopping" time may encounter interference from them. Actual throughput per user using FHSS could be less than 1 Mbps, which is fine for applications like inventory scanners and warehouse telemetry systems, but comes up short of the higher bandwidths needed for multimedia data and database applications. FHSS is generally limited to 2 Mbps data rates, but aggregate bandwidth can be increased to up to 24 Mbps by designing multiple APs into the network. The standard doesn't include coordination of hopping sequences for multiple APs, so more interference is likely unless the AP and wireless card vendor uses a proprietary mechanism to provide this on its equipment solution. The FHSS technique has a limited range compared to other modulation techniques. For that reason, FHSS is primarily an intrabuilding technology and is largely unused in Wi-Fi deployments looking for 11 Mbps and higher speeds.
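The hop pattern itself is nothing more than a shared pseudorandom walk across the 79 channels. The sketch below is purely illustrative; the seed stands in for the negotiated hop pattern, and real 802.11 FH sequences are defined by the standard rather than generated from a generic pseudorandom number generator.

```python
import random

# Illustrative FHSS hop-sequence generator (not the actual 802.11 hop patterns).
CHANNELS = 79                       # 1 MHz-wide channels in the 2.4 GHz ISM band
DWELL_SECONDS = 0.4                 # approximate dwell time per hop, as described above

def hop_sequence(shared_seed, hops):
    """Both ends seed the same PRNG, so both land on the same channels in the same order."""
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

tx = hop_sequence(shared_seed=26, hops=10)
rx = hop_sequence(shared_seed=26, hops=10)
assert tx == rx                     # sender and receiver stay in lockstep
print(tx)
```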
DSSS Another physical layer technique is DSSS. First developed by the U.S. military as a secure wireless technology, it spreads a radio signal using a pseudorandom code, making the signal very hard to distinguish and decipher from background noise. To provide the most robust security, encryption techniques are often used with DSSS technologies. DSSS works by taking a data stream of ones and zeros and modulating it with a second pattern known as the chipping sequence. The chipping or spreading code is used to generate a redundant bit pattern to be transmitted, and the resulting signal appears as wideband noise to any unintended receiver. This is important because 802.11 WLANs work in the ISM band, which is unlicensed and fraught with interference from other wireless devices. It is like yelling a secret to one other person in a crowded room, the two of you being the only people who understand the language. With the 802.11b specification, DSSS divides the 2.4 GHz radio frequency band into three nonoverlapping, 22 MHz–wide channels (channels 1, 6, and 11). Data is sent across one of the 22 MHz–wide channels without hopping to either of the other channels. The 11-bit chipping sequence, known as the Barker sequence, combined with spreading the data signal across the 22 MHz channel, provides the extra bits necessary for the error checking and correction needed to recover the data at the receiver. It is like turning one bit into 11 bits in the hope that enough of them will be received, despite any interference, to determine the value of the bit that was originally sent. Without knowing the spreading code, the data appears as background noise and is effectively unintelligible; the intended receiver knows the code and despreads the signal for use. While this seems like a lot of overhead, the result is more data throughput per bit-time, fewer retransmissions, and lower latency. Low latency is especially important to wireless voice applications. The net result is that DSSS is more tolerant of noise and interference and therefore can expand data capacity in these environments.
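The spread-and-despread idea is easy to see in a few lines of Python. The sketch below uses the 11-chip Barker sequence named above, represented as +1/-1 chips; real 802.11 radios perform this in hardware as part of the PHY modulation, so treat this purely as an illustration of why a corrupted chip or two does not corrupt the recovered bit.

```python
# 11-chip Barker sequence used by 802.11 DSSS (written here as +1/-1 chips).
BARKER = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def spread(bits):
    """Turn each data bit into 11 chips: bit 1 -> Barker, bit 0 -> inverted Barker."""
    chips = []
    for bit in bits:
        sign = 1 if bit else -1
        chips.extend(sign * c for c in BARKER)
    return chips

def despread(chips):
    """Correlate each 11-chip block against the Barker code to recover the bit.

    Even if a few chips are flipped by interference, the correlation sum
    still lands clearly positive or negative, which is the error tolerance
    the text describes.
    """
    bits = []
    for i in range(0, len(chips), 11):
        block = chips[i:i + 11]
        correlation = sum(c * b for c, b in zip(block, BARKER))
        bits.append(1 if correlation > 0 else 0)
    return bits

tx = spread([1, 0, 1, 1])
tx[3] = -tx[3]          # simulate one corrupted chip
print(despread(tx))     # still recovers [1, 0, 1, 1]
```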
The DSSS modulation technique was selected to support not only the 1 and 2 Mbps speeds but also two additional speeds of 5.5 and 11 Mbps, the headline rates of the 802.11b standard. To enable these higher speeds, in 1998 Lucent Technologies and Harris Semiconductor proposed a new redundant sequencing method called Complementary Code Keying (CCK). CCK replaces the original 11-bit chipping sequence, which effectively coded one data bit at a time; by encoding 4 bits per symbol, CCK achieves the 5.5 Mbps wireless data rate, and by encoding 8 bits per symbol it doubles the rate to the 11 Mbps threshold that is so highly desired in today's wireless infrastructures. DSSS-based 802.11b WLANs running at 11 Mbps establish a kinship with their wired 10 Mbps Ethernet cousins, which has made 11 Mbps the minimum entry point into WLANs.
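The rate arithmetic is easy to verify. The sketch below assumes the standard 802.11b figures of an 11 Mchip/s chip rate and 8 chips per CCK symbol, values not spelled out in the text, and shows how 4 and 8 coded bits per symbol yield 5.5 and 11 Mbps.

```python
# CCK rate arithmetic for 802.11b (assumed figures: 11 Mchip/s chip rate,
# 8 chips per CCK symbol -- standard values, not spelled out in the text).
chip_rate = 11_000_000                       # chips per second
chips_per_symbol = 8
symbol_rate = chip_rate / chips_per_symbol   # 1.375 Msymbols/s

for bits_per_symbol in (4, 8):
    rate_mbps = symbol_rate * bits_per_symbol / 1_000_000
    print(f"{bits_per_symbol} coded bits per symbol -> {rate_mbps} Mbps")
# 4 bits -> 5.5 Mbps, 8 bits -> 11 Mbps, matching the 802.11b data rates.
```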
Orthogonal Frequency Division Multiplexing (OFDM) OFDM has previously been reviewed as a wireless access technology. OFDM has applicability not only to wireless mobility devices such as data-enabled cell phones, but also to wireless LANs. In fact, the IEEE 802.11 standards that support up to 54 Mbps WLAN transmission rates do so using the OFDM multiplexing technique to achieve data rates higher than 11 Mbps. OFDM research is leading to capabilities to push the top end to 108 Mbps. Variants of OFDM are often referred to as coded OFDM (COFDM) and vector OFDM (VOFDM), described later.
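As a toy illustration of the OFDM principle, the following sketch maps bits onto parallel subcarriers and combines them into a single time-domain symbol with an inverse FFT, then reverses the process at the receiver. The subcarrier count and the BPSK mapping are illustrative choices, not the exact 802.11a/g parameters.

```python
import numpy as np

# Toy OFDM transmitter: map bits to subcarriers, then combine them into one
# time-domain symbol with an inverse FFT. Parameters are illustrative.
NUM_SUBCARRIERS = 64
bits = np.random.randint(0, 2, NUM_SUBCARRIERS)

# BPSK per subcarrier: bit 0 -> -1, bit 1 -> +1 (real OFDM PHYs use QPSK/QAM).
symbols = 2 * bits - 1

# One OFDM symbol is the IFFT of all subcarrier values transmitted in parallel.
time_domain = np.fft.ifft(symbols)

# The receiver reverses the process with an FFT and a sign decision per subcarrier.
recovered = (np.fft.fft(time_domain).real > 0).astype(int)
print("all bits recovered:", np.array_equal(bits, recovered))
```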
802.11—11 Mbps and Beyond The wireless speed of 11 Mbps has initiated the charge toward a critical mass of WLAN deployments. 11 Mbps represents a 10x performance improvement over the original 1997 IEEE 802.11 standard, while providing a suitable substitute for wired 10-megabit Ethernet connections. In this section, you’ll review some highlights of the 802.11b, 802.11a, and 802.11g specifications within the 802.11 standard. These are primarily differentiated by their achievable speeds, their assigned frequency range, or their effective channels and channel throughput. Figure 9-7 shows the IEEE standardization timeline for 802.11b, 802.11a, and 802.11g.
802.11b The 802.11b revision of the standard specifies both FHSS- and DSSS-based PHY layers for the 2.4 GHz band; the higher data speeds of up to 11 Mbps per radio frequency channel are achieved with DSSS. DSSS is therefore the most popular access technique, achieving rates of 1, 2, 5.5, and 11 Mbps. The higher speeds are accomplished with a high-rate DSSS channelization scheme using either CCK or an optional packet binary convolutional coding (PBCC) scheme. Three nonoverlapping channels are used, each 22 MHz wide, within the 82 MHz of assigned 2.4 GHz spectrum.
Figure 9-7  Timeline for 802.11 WLAN Standards (network radio speed versus time, from proprietary pre-1999 radios through IEEE ratification of 802.11a and 802.11b in 1999 to 802.11g in 2003: 802.11b, 2.4 GHz DSSS, up to 11 Mbps; 802.11a, 5 GHz OFDM, up to 54 Mbps; 802.11g, 2.4 GHz OFDM/DSSS, up to 54 Mbps)
Because the U.S. FCC limits output power for 2.4 GHz ISM-band products to no more than 1 watt, all 802.11b AP radios adapt to a less-complex and slower encoding technique as an 802.11b wireless device moves farther from the AP. For example, the 2.4 GHz DSSS radios transmit at a maximum of 100 mW. If your wireless-enabled device is within about 130 feet, it will typically move data at 11 Mbps. As you move farther from the AP, the received signal fades; the AP detects this and "downshifts" the transmission to the next slower rate, for example, 5.5 Mbps. As you move to the fringe of the AP's coverage area, your data transmission speed drops to 2 Mbps and finally to 1 Mbps. While building layouts cause these numbers to vary, the approximate distances from the AP at which you will likely downshift to slower rates are as follows (a small sketch of this downshift logic appears after the list):
• 1–130 feet: 11 Mbps
• 131–180 feet: 5.5 Mbps
• 181–250 feet: 2 Mbps
• 251–350 feet: 1 Mbps
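The sketch below simply encodes the rule-of-thumb ranges above as a lookup; a real AP keys the downshift decision on received signal quality rather than on a measured distance.

```python
# Sketch of the 802.11b rate "downshift" keyed to the approximate distances
# listed above. Real APs key the decision on received signal quality, not on
# a measured distance; this just encodes the book's rule-of-thumb ranges.
RATE_BY_RANGE_FT = [
    (130, 11.0),   # up to 130 ft  -> 11 Mbps
    (180, 5.5),    # 131-180 ft    -> 5.5 Mbps
    (250, 2.0),    # 181-250 ft    -> 2 Mbps
    (350, 1.0),    # 251-350 ft    -> 1 Mbps
]

def expected_rate_mbps(distance_ft: float):
    for max_ft, rate in RATE_BY_RANGE_FT:
        if distance_ft <= max_ft:
            return rate
    return None   # beyond roughly 350 ft the client falls out of coverage

for d in (50, 160, 240, 330, 400):
    print(f"{d:3d} ft -> {expected_rate_mbps(d)} Mbps")
```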
An 802.11b AP generally reaches a maximum aggregate capacity of 18 Mbps, delivering about 6 Mbps of effective throughput per channel when all three channels are in concurrent use. To support more concurrent users, multiple-AP designs factor in effective per-client throughput, the types of wireless data and voice applications in use, and their delay bounds.
802.11a Enter 802.11a, an amendment designed to operate within more recently allocated 5 GHz spectrum known as the Unlicensed National Information Infrastructure (U-NII) bands. The U-NII bands cover about 300 MHz of spectrum for 802.11a: 200 MHz at 5150–5250 MHz (U-NII indoor) and 5250–5350 MHz (U-NII low power), and the remaining 100 MHz at 5725–5825 MHz (U-NII/ISM). In November 2003, the FCC
allocated the 5470–5725 MHz spectrum to 802.11a operation as well, extending the U-NII/ISM range by another 255 MHz. The higher-frequency U-NII bands offer more inherent performance from the shorter radio waves and much less competing interference than today's 2.4 GHz ISM band. However, remember that higher-frequency radio waves propagate over shorter distances and are less effective at passing through walls and other obstructions; the effective distance of 802.11a is less than that of 802.11g, for example. 802.11a uses 20 MHz–wide frequency channels, assigning four channels per 100 MHz (across the original three U-NII bands) for a total of 12 concurrent wireless channels in the United States. Europe uses an amendment called 802.11h, essentially the European (ETSI) version of 802.11a with added radar detection capabilities, which doubles the usable channels in Europe to 24. With more channels comes more aggregate throughput. The 802.11a standard uses this new frequency space, along with increases in allowed power radiation (from 50 mW in U-NII bands 1 and 2 up to 1 watt in U-NII band 3) and a coded orthogonal frequency division multiplexing (COFDM) encoding scheme, to achieve data rates of as much as 54 Mbps. Transmitting over multiple carrier frequencies in parallel, COFDM is the antidote for intersymbol interference. The 802.11a standard defines 6, 12, 24, and 54 Mbps as well-defined data rates; Cisco Systems' 802.11a-compliant radios additionally support speed options of 36 and 48 Mbps. Again, the farther you wander from the AP, the more your data rate steps down, yet even at the fringe of 802.11a AP coverage you are still transmitting at about 6 Mbps, six times the entry performance of the 802.11b standard. The use of Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA), a contention-management technique adapted from the CSMA/CD used in wired Ethernet networks, rounds out the 802.11a specification. The maximum theoretical data rate of the COFDM technique is considered to be 108 Mbps, about double today's WLAN radio capabilities. The maximum aggregate capacity of an 802.11a AP is about 300 Mbps with all 12 channels in concurrent operation, delivering about 25 Mbps per channel. The average throughput rate per client depends on the number of simultaneous clients and the types and data sizes of the applications in use. Despite the benefits of the less-crowded 5 GHz band, 802.11a sits in a completely separate frequency band from 802.11b and 802.11g, with no backward compatibility to support an ordered migration of 802.11b wireless clients; a flash cut to 802.11a would be required. If you are adding a WLAN to a kitchen microwave assembly and testing plant, you may want to consider the 802.11a radio, because microwave ovens operate in the 2.4 GHz unlicensed range. The added irritant of operating in a regulated frequency band (5 GHz) contributes to uneven coverage worldwide, as each region has its own regulatory approach to allowing use of the spectrum for 802.11a.
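The defined 802.11a rates fall directly out of the OFDM arithmetic. The sketch below uses the standard 802.11a parameters of 48 data subcarriers and 250,000 OFDM symbols per second, figures the text does not list, and shows how the modulation and coding choices produce 6, 12, 24, and 54 Mbps.

```python
# How the 802.11a COFDM rates fall out of the PHY arithmetic.
# Assumed standard parameters (not listed in the text): 48 data subcarriers
# and a 4-microsecond OFDM symbol, i.e. 250,000 symbols per second.
DATA_SUBCARRIERS = 48
SYMBOLS_PER_SEC = 250_000

modes = [
    ("BPSK 1/2",   1, 1 / 2),   # -> 6 Mbps (base rate)
    ("QPSK 1/2",   2, 1 / 2),   # -> 12 Mbps
    ("16-QAM 1/2", 4, 1 / 2),   # -> 24 Mbps
    ("64-QAM 3/4", 6, 3 / 4),   # -> 54 Mbps (top rate)
]

for name, bits_per_subcarrier, code_rate in modes:
    rate = DATA_SUBCARRIERS * bits_per_subcarrier * code_rate * SYMBOLS_PER_SEC
    print(f"{name:11s} -> {rate / 1e6:4.0f} Mbps")
```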
802.11g In 2003, another amendment to the 802.11 standard was finalized: 802.11g. The 802.11g specification allows radios to transmit at up to 54 Mbps, but it does so in the 2.4 GHz ISM operating range. The 802.11g standard is a fifth generation of 2.4 GHz radios and provides data performance comparable to that of the 802.11a WLAN standard, which operates in the 5 GHz bands, while providing backward compatibility with the legacy 11 Mbps 802.11b standard at 2.4 GHz. To accommodate this compatibility, the 802.11g radio standard uses any of three modulation techniques: DSSS, PBCC, or OFDM, with supported data rates of 1, 2, 5.5, 6, 9, 11, 12, 18, 24, 36, 48, and 54 Mbps. Data rates of 12 Mbps and above typically use the OFDM method. This makes for an easy and speedy upgrade path from 802.11b radios, which operate at 2.4 GHz and at a maximum data rate of 11 Mbps. While backward compatibility of 802.11g APs with 802.11b wireless client cards is great for migration, understand that any 802.11b client that associates with an 802.11g AP causes the AP to fall back to slower, 802.11b-compatible transmission timing to accommodate the lone 802.11b client. At close proximity, a single-user 802.11g wireless client can approach 54 Mbps of throughput. An 802.11g AP generally has a maximum aggregate throughput of 66 Mbps, allowing each of the three nonoverlapping channels to carry about 22 Mbps of concurrent traffic. When an 802.11b client is present within the 802.11g domain, the aggregate throughput drops to about 24 Mbps, doling out about 8 Mbps per channel. The added benefits of the 802.11g approach are the lower cost of 2.4 GHz radios and the lower power requirements of 2.4 GHz transmitters in end devices. Low power is extremely important to many wireless handheld devices, and power-efficient Wi-Fi will be a key component in tomorrow's mobile handhelds and pocket PCs.
Comparing 802.11 Standards Within the context of WLANs, network capacity is roughly calculated as the product of per-channel throughput and the number of available radio channels. The 802.11b and 802.11g devices at 2.4 GHz are limited by their respective standards to no more than three nonoverlapping radio channels. The 802.11a specification at 5 GHz allows for up to 12 radio channels in the United States and up to 24 channels in Europe (with 802.11h). More radio channels provide more aggregate data capacity per access point, and practical use normally yields data throughput per channel well below the advertised maximums. Table 9-4 provides a useful perspective for approximate network capacity, throughput, and channel comparisons of 802.11x wireless radio technology.
Table 9-4  Approximations for 802.11b, 802.11g, and 802.11a Networks

IEEE Standard                     Maximum Data Rate (Mbps)   Throughput (Mbps)   Channels   Access Point Capacity (Mbps)
802.11b                           11                         6                   3          18
802.11g (with 802.11b clients)    54                         8                   3          24
802.11g                           54                         22                  3          66
802.11a                           54                         25                  12         300
802.11a (with 802.11h)            54                         25                  24         600
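The access point capacity column of Table 9-4 is simply the per-channel throughput multiplied by the channel count, as the short check below confirms using the book's approximate figures.

```python
# The "network capacity = per-channel throughput x channels" rule behind
# Table 9-4, using the book's approximate throughput figures.
table_9_4 = {
    "802.11b":                        (6, 3),
    "802.11g (with 802.11b clients)": (8, 3),
    "802.11g":                        (22, 3),
    "802.11a":                        (25, 12),
    "802.11a (with 802.11h)":         (25, 24),
}

for standard, (throughput_mbps, channels) in table_9_4.items():
    capacity = throughput_mbps * channels
    print(f"{standard:32s} {throughput_mbps:2d} Mbps x {channels:2d} channels = {capacity} Mbps")
```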
Both the 2.4 and 5 GHz 802.11 wireless products are successful, providing distinct price/performance data points along the mobile computing curve. A variety of task groups continue work to enhance security (802.11i), quality of service (QoS) (802.11e), wireless bridge operations (802.11c), AP inter-roaming recommendations (802.11f), virtual wireless LANs, and other features that are desirable in maturing Wi-Fi technology. Another high-rate specification is in the works as 802.11n, which seeks to create an 802.11 standard for 108 Mbps operation. The IEEE 802.11 standards group has defined a very rich feature set with an abundance of options. The Wi-Fi Alliance, an industry group, is focused on streamlining the 802.11 feature set to enhance interoperability and, above all, the acceptability of wireless LANs in the marketplace. Cisco Systems takes the minimal feature sets defined by the Wi-Fi Alliance and adds differentiating features to its 802.11x radios. All vendors and groups are focused on stoking the take-up rate of wireless LANs into departmental networks; public access hotspots such as airports, hotels, and coffee shops; service provider commercial offerings; and, of course, the home and mobile networking techno-set.
802.16 802.16a is a relatively new IEEE standard, also known by its IEEE marketing name of IEEE WirelessMAN 802.16; colloquially, the industry refers to the resulting products, networking, and functionality as WiMAX. The 802.16 wireless networking standard offers broadband wireless access at greater range and bandwidth than the Wi-Fi set of 802.11 standards. The original 802.16 specification targets radio spectrum from 10 to 66 GHz, and the 802.16a amendment adds frequencies in the 2–11 GHz range, supporting
both licensed and unlicensed bands. A variant of the standard, 802.16e, is specified to enable a single base station to support both fixed and mobile users, and it is gaining traction quickly with service providers. Further standardization revisions are occurring in 2005, and WiMAX-certified products are relatively new. The technology supports adaptive modulation, effectively balancing data rates against wireless link quality. The standard supports both FDD and TDD. The legacy FDD method, widely deployed in cellular networks, requires a pair of channels for transmit and receive, with frequency separation to limit interference. TDD uses a single channel for both transmit and receive, dynamically allocating upstream and downstream bandwidth depending on traffic requirements. Both 802.16a and 802.16e will likely use OFDM as the multiplexing technique. The WiMAX standard targets about 70 Mbps of shared throughput at distances of up to 31 miles (50 km) from a single base station, with features such as QoS and data, voice, and video services. This greater range and increased bandwidth is most appealing to service providers seeking price/performance options for delivering last-mile broadband access to rural, national, and international users. With this type of range, bandwidth, and coverage area, WiMAX is classified as a wireless metropolitan area network (WMAN) technology. As a WMAN, this last-mile (or last-miles) technology would connect to Wi-Fi networks covering the last few hundred feet, such as 802.11x access points in your business or residence, bridging Wi-Fi access technology to core service provider networks. The inclusion of Layer 2 roaming features into the technology would be a forward-thinking capability.
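A rough sketch of the TDD sharing arithmetic follows. The 70 Mbps shared-sector figure comes from the text; the 5 ms frame duration and the subscriber counts are illustrative assumptions rather than values taken from the 802.16 specification.

```python
# Rough WiMAX-style TDD sharing arithmetic. The 70 Mbps shared-sector figure
# comes from the text; the frame duration and subscriber counts are
# illustrative assumptions, not values from the 802.16 specification.
SECTOR_CAPACITY_MBPS = 70
FRAME_MS = 5.0

def tdd_split(downstream_demand_mbps: float, upstream_demand_mbps: float):
    """Split one TDD frame between downstream and upstream in proportion
    to offered traffic (this dynamic allocation is TDD's advantage over
    fixed FDD channel pairs)."""
    total = downstream_demand_mbps + upstream_demand_mbps
    down_ms = FRAME_MS * downstream_demand_mbps / total
    return down_ms, FRAME_MS - down_ms

print("DL/UL frame split (ms) for 4:1 traffic:", tdd_split(40, 10))

for subscribers in (10, 50, 200):
    print(f"{subscribers:3d} active subscribers -> about "
          f"{SECTOR_CAPACITY_MBPS / subscribers:.1f} Mbps each")
```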
Wireless Personal Area Networks Additional wireless technologies target short-range applications for personal area networking, sometimes referred to as wireless personal area networks (WPANs). These technologies are Bluetooth and Ultra-Wideband (UWB), and they are being addressed in the IEEE 802.15 standards working group.
Bluetooth Conceptually similar to a very low speed Universal Serial Bus (USB) technology, Bluetooth wirelessly connects personal computer peripherals, consumer electronics, and telephone systems within a 10-meter range. It uses autonegotiation to connect to other Bluetooth-enabled devices, linking a notebook computer to a mouse, a printer, a PDA, or a mobile phone and creating cable-free connections to as many as seven other devices. Bluetooth is also targeting consumer electronics such as audio and video entertainment systems to reduce cable clutter.
NOTE
The Bluetooth RF standard took its name from the 10th-century king of Denmark, Harald Blatand, whose name translates into English as Bluetooth. Harald Blatand, son of the first king of Denmark, "Gorm the Old," was credited with uniting Denmark and parts of Sweden and Norway into a single kingdom.
Bluetooth (IEEE 802.15.1) communicates in the 2.4 GHz ISM band using FHSS, hopping 1600 times per second, with TDD separating transmit and receive. In practice, the U.S. frequency range is 2402–2480 MHz, with 79 1 MHz–wide channels; in Japan, the frequency range is 2472–2497 MHz, with 23 1 MHz RF channels. Data in a packet can be up to about 2745 bits in length. Transmitting at no more than the allowed 1 mW, Bluetooth asynchronous speeds reach 721 Kbps in one direction, with 57.6 Kbps in the return path. Synchronous rates are supported at 432.6 Kbps. Automatic and inexpensive, Bluetooth wants to take over the short-range world; the current Bluetooth standard is found in over 1000 different devices. Bluetooth 2.0 with Enhanced Data Rate (EDR) now achieves about 3.0 Mbps at distances of 10 meters.
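The hop timing and the U.S. channel map follow directly from the figures above, as the small sketch below shows.

```python
# Bluetooth hop timing and U.S. channel map, derived from the figures in the
# text: 79 channels of 1 MHz starting at 2402 MHz, hopping 1600 times/sec.
HOPS_PER_SECOND = 1600
SLOT_US = 1_000_000 / HOPS_PER_SECOND     # 625 microseconds per hop slot

def channel_freq_mhz(k: int) -> int:
    if not 0 <= k <= 78:
        raise ValueError("U.S. Bluetooth channels run from 0 to 78")
    return 2402 + k                        # 1 MHz channel spacing

print(f"slot length: {SLOT_US:.0f} us")
print("channel 0 :", channel_freq_mhz(0), "MHz")
print("channel 78:", channel_freq_mhz(78), "MHz")
```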
Ultra-Wideband (UWB) UWB is a more recent development in short-range radio frequency technology that is targeting the high bit rate WPAN consumer markets in the areas of personal computing, consumer electronics, and automotive industries. Other potential uses for UWB are imaging applications such as ground penetrating radar and medical imaging systems, due to UWB’s radar-like properties and its resistance to multipath interference. UWB has the potential for very high data rates using very low power at limited range.
NOTE
Multipath interference is the reception of two or more copies of a signal over different paths. The direct signal may combine with a reflection off a roof, wall, or other surface, or with refraction off trees or an atmospheric inversion layer. The received signal is the vector sum of the arriving signals, creating both amplitude and phase changes, and this type of distortion can move rapidly across the frequency band. UWB devices use a RAKE receiver to combine multipath signals and benefit from them.
UWB is unique in that it achieves wireless communications without using an RF carrier. Instead, UWB uses modulated pulses of energy less than a nanosecond in duration. Like tracer fire from an anti-aircraft gun, the UWB modulation technique encodes the data bits into a pulse train rather than a continuous waveform, which has the benefit of power efficiency.
The technology uses multiple wideband channels with a minimum effective throughput of 50 to 100 Mbps per channel. To create the multiple channels, the familiar technologies of TDMA, CDMA, FDM, and time hopping (TH) are all usable in UWB designs. UWB has the potential for 500 Mbps or more at ranges under 10 meters, transmitting across the 3.1 to 10.6 GHz spectrum at power levels up to one milliwatt; this is a very wide RF channel, about 7 GHz across. UWB technology is finding use in Wireless USB, which uses UWB to achieve data rates of up to 480 Mbps at short range. Although UWB has been used by the U.S. military for years under special license, the FCC only recently (February 2002) granted manufacturers permission to develop and commercially market UWB products in the United States. UWB has the potential to take market share from short-range Wi-Fi home networking and perhaps to compete with wired bus technologies such as IEEE 1394 (for example, Apple's FireWire) and USB 2.0 in the personal computer market. Digital TVs, home theater systems, digital cameras, and camcorders are some of the consumer electronic devices being targeted for early UWB technology chip sets.
Wireless Optics Wireless optics is the use of lasers to send voice, video, and data through the air from one building to another instead of through expensive, underground fiber-optic cables. This is very significant when you consider that such technology could go a long way toward solving the “last-mile problem”—the inability of businesses and consumers to access and afford superbroadband fiber-optic cables at their doorstep. Often termed free-space optics (FSO), this point-to-point wireless broadband technology sends an invisible, eye-safe beam of laser light through the air, from rooftops, through windows, or from cell towers to another receiving location. Current FSO systems provide excellent reliability through recently developed active pointing and tracking technology. Exceptionally high security is provided due to narrow beam divergence. Mesh network design for FSO systems also increases reliability in weather such as advection fog and snowstorm conditions, and in excessive building sway. Active laser-gain mechanisms allow the lasers to compensate for air clarity, tuning the laser output to deal with changing weather conditions. The primary appeal of these systems is that they do not require the purchase of spectrum licenses or expensive trenching for in-ground fiber. Some of the manufacturers in this space are offering carrier-grade, network-ready FSO connectivity at Fast Ethernet (100 Mbps) and OC-3/STM-1 (155 Mbps) capacities. The solution works in either indoor or outdoor configurations, with two transceiver units placed within line of sight of one another. Network traffic is converted to infrared light at 1550 nm and transmitted through the air at about 15 mW of power. On the other end, specialized lenses and mirrors focus the signal onto a receiver, which converts it back to the original data and sends it over the building’s network infrastructure to the appropriate destination. These types of FSO systems are full duplex, and they transmit data at the full rated speed.
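A back-of-the-envelope geometric estimate helps explain why FSO links are specified for short distances. The sketch below assumes an illustrative 2-milliradian beam divergence and a 20 cm receive lens (neither value comes from the text) and ignores fog and scintillation losses entirely.

```python
import math

# Back-of-the-envelope FSO geometric loss: how much of a diverging laser beam
# a receive lens actually catches. Divergence and aperture values below are
# illustrative assumptions, not a particular vendor's specifications.
TX_POWER_MW = 15          # transmit power from the text (~15 mW at 1550 nm)
DIVERGENCE_MRAD = 2.0     # full-angle beam divergence (assumed)
RX_APERTURE_M = 0.20      # receive lens diameter (assumed)

def received_mw(distance_m: float) -> float:
    beam_diameter = DIVERGENCE_MRAD / 1000.0 * distance_m
    if beam_diameter <= RX_APERTURE_M:
        return TX_POWER_MW                    # entire beam lands on the lens
    captured = (RX_APERTURE_M / beam_diameter) ** 2
    return TX_POWER_MW * captured             # ignores fog/scintillation losses

for d in (200, 500, 1000, 4000):
    p = received_mw(d)
    print(f"{d:5d} m -> {p:8.4f} mW ({10 * math.log10(p / TX_POWER_MW):6.1f} dB geometric loss)")
```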
There are also FSO products that support OC-12/STM-4 (622 Mbps) and GigE (1.25 Gbps) data rates. Some products create redundant links that can deliver bit error rates (BERs) of 10^-12, a level of performance previously available only via fiber. Wireless optics solutions are uniquely designed to meet the short-distance, point-to-point and point-to-multipoint line-of-sight wireless broadband requirements of today's enterprise and provider networks. They can be deployed in any situation where fiber-optic or leased lines are unavailable, too costly, or bandwidth constrained, as long as the buildings are within line of sight and within about 4 km of each other. Carrier-class high-availability specifications generally require shorter line-of-sight distances, usually in the range of 500 meters. Typical wireless optics applications include provider fiber network extension (or last-mile deployment), LAN-to-LAN or LAN-to-campus connections, spatially diverse/redundant connections, temporary links, mobile wireless network backhaul or extension, fiber backup, and disaster recovery.
Fixed Wireless Fixed wireless systems are often used both to extend wireline telephony and for broadcast video, audio, and data applications. Fixed wireless can extend wireline services at an appropriate cost point. In a telephony example, fixed wireless is a radio spectrum-based local exchange service in which telephone service is provided by common telephony providers. It is primarily a rural application because it avoids the cost of conventional wireline plant, extending telephone service to rural areas by replacing the wireline local loop with radio communications. There are several other terms for fixed wireless, including the following:
• Wireless local loop
• Fixed loop
• Fixed radio access
• Wireless telephony
• Radio loop
• Fixed wireless
• Radio access
Fixed wireless access systems generally employ TDMA or CDMA access technologies and, more recently, VOFDM.
VOFDM VOFDM is a Cisco Systems innovation that is widely hailed for its ability to overcome multipath interference in wireless communications. VOFDM allows fixed wireless systems to reach near line-of-sight environments. Previously, fixed wireless technologies required deployment on very tall buildings or towers with an absolute line-of-sight path to the receiving antennas, a prerequisite that limited deployment options and customer reachability. VOFDM has now made possible near line-of-sight configurations, both point-to-multipoint and point-to-point, at ranges of up to 30 miles. This technology can increase the effective market reach beyond the capabilities of traditional last-mile broadband wireless systems. VOFDM employs a vector processing technique that combines frequency and spatial diversity to mitigate multipath fading and narrowband interference, delivering higher spectral efficiency even in obstructed paths or interference-limited cells. Spatial diversity increases a system's tolerance to noise and multipath interference in both the upstream and downstream directions of a point-to-multipoint wireless system. VOFDM effectively increases the usable signal strength by combining multiple received signals, boosting overall wireless system performance, link quality, and availability. There are also fixed wireless networks that support and augment data applications that are not cost-effective to serve with wireline technologies. Two examples are Multichannel Multipoint Distribution Service (MMDS) and Local Multipoint Distribution Service (LMDS).
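The payoff of spatial diversity can be illustrated generically. The sketch below is not Cisco's VOFDM algorithm; it simply combines several noisy copies of the same signal, weighted by their channel gains, and shows that the combined stream has a lower error rate than the best single branch.

```python
import numpy as np

# Generic diversity-combining illustration (not Cisco's VOFDM algorithm):
# several noisy copies of the same signal are weighted by their estimated
# channel gains and summed, which raises the effective SNR versus any single
# branch -- the basic payoff of the spatial diversity described above.
rng = np.random.default_rng(1)
signal = np.sign(rng.standard_normal(10_000))        # +/-1 data symbols

gains = np.array([1.0, 0.6, 0.3])                    # per-branch channel gains
noise_std = 0.9
branches = [g * signal + noise_std * rng.standard_normal(signal.size) for g in gains]

def error_rate(received):
    return np.mean(np.sign(received) != signal)

combined = sum(g * r for g, r in zip(gains, branches))    # maximal-ratio combining
print("best single branch error rate:", error_rate(branches[0]))
print("combined error rate:          ", error_rate(combined))
```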
MMDS MMDS is a broadband wireless system presently used in North America to deliver video program content for television entertainment and, in cooperation with Instructional Television Fixed Service (ITFS) operators, to deliver video for distance-learning activities. Most systems traditionally use analog transmission under the National Television Systems Committee (NTSC) standard to deliver one video program per 6 MHz radio frequency (RF) channel across about 31 channels. These fixed wireless systems broadcast line of sight over a metropolitan or residential area rather than using the cellular approach found in mobile PCS systems, so the transmission towers seek maximum elevation to cover the largest area possible. MMDS broadcast signals are usually received via fixed rooftop antennas. A broadband wireless system such as MMDS can deliver up to 30 Mbps of data capacity in a 6 MHz channel; therefore, many providers have adapted their systems to provide data capabilities via unused or reallocated channel space. A strong point of fixed wireless is that it can quickly provide high-speed bursty data, such as Internet access, across a 10-mile, 20-mile, or 35-mile radius depending on the frequency band used. This allows the MMDS service provider to work with or compete against cable TV to serve small-sized and medium-sized
businesses and high-end users with data offerings. As was mentioned in Chapter 8, “Wireline Networks,” a cable provider’s coaxial cable plant typically serves residential neighborhoods and underserves the premium business market. The available downstream spectrum for fixed wireless systems includes the following:
• Two Multipoint Distribution Service (MDS) channels, 2150–2162 MHz, which generally are single-channel broadcast stations
• Sixteen Instructional Television Fixed Service (ITFS) channels, A through D Groups, 2500–2596 MHz, used for educational television and distance learning
• Eight MMDS channels, E and F Groups, 2596–2644 MHz
• Four ITFS G Group channels, interleaved with three MMDS H1/H2/H3 channels, 2644–2686 MHz
In many other countries, a similar amount of downstream spectrum has been assigned for MMDS use within the 2 to 3 GHz range. A major change is occurring in the MMDS industry with the transition to digital video compression and transmission. Digital technology enables compression of at least five video streams of resolution similar to NTSC-standard analog video into one 6 MHz RF channel. In the digital environment, an operator with access to most of the downstream channels listed above can offer a selection of program streams that competes aggressively with either direct broadcast satellite (DBS) or CATV entertainment video delivery systems, with several channels to spare. MMDS has also adopted CableLabs' Data Over Cable Service Interface Specification (DOCSIS); the moniker DOCSIS+ denotes the wireless broadband version of DOCSIS. MMDS will likely be in competition with 802.16 WiMAX.
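The channel arithmetic behind these claims is straightforward, as the short check below shows using the figures cited above (a 30 Mbps data payload in a 6 MHz channel, about 31 channels, and at least five digital programs per channel).

```python
# Channel arithmetic from the MMDS discussion: spectral efficiency of a
# 30 Mbps data channel in 6 MHz, and the analog-to-digital video gain.
channel_mhz = 6
data_mbps = 30
print(f"spectral efficiency: {data_mbps / channel_mhz:.0f} bits/sec per Hz")

analog_programs_per_channel = 1
digital_programs_per_channel = 5        # "at least five" per the text
channels = 31                           # approximate analog channel count cited
print("analog lineup :", channels * analog_programs_per_channel, "programs")
print("digital lineup:", channels * digital_programs_per_channel, "or more programs")
```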
LMDS Local Multipoint Distribution Service (LMDS) is another fixed wireless system, characterized by shorter-range transmissions (about a three-mile radius) but wider channel spacing. LMDS channels are 20 MHz wide each and are assigned in the 27.5–28.35 GHz, 31–31.3 GHz, and 38 GHz bands. LMDS uses small antennas that require line of sight to communicate. This is a relatively new service definition compared with traditional broadcast fixed wireless systems, and applications for LMDS are still developing. Examples of LMDS applications include fixed wireless Internet data solutions and wireless cable television targeting a residential neighborhood or a concentrated business park. These early systems promised cheaper building access than fiber, but technology challenges, labor-intensive assembly, and roof-rights negotiations have not allowed LMDS to catch up to its marketing vision. Like MMDS, LMDS has also adopted CableLabs' DOCSIS specifications (DOCSIS+) to deal with theft-of-service issues. LMDS will likely compete with 802.16 WiMAX as well.
Satellite Wireless From approximately 22,236 miles (35,786 km) above the Earth, in what is known as the Clarke belt, geostationary satellites, often termed GEOs, blanket the Earth with wireless audio, video, and data received from ground broadcast stations. From that altitude, three satellites can cover the Earth, each orbiting at the same rate the Earth rotates so that it holds a stationary position relative to the ground. At such a distance, communication satellite technology inherits a one-way propagation delay of 275–300 milliseconds (ms) (a round trip of 550–600 ms), reducing its effectiveness for interactive two-way communication applications. Because signal power attenuates in proportion to the square of the traveled distance, Earth-based transmitters require large antenna dishes and megawatts of focused beam power to work with GEO satellites. That is why several new providers in this space want to deploy low-to-medium Earth-orbiting satellite technology, often referred to as LEOs. The lower the orbit, the less the round-trip delay, yet the more satellites it takes to provide global coverage, in this case about 30. How low? Many of these LEO projects are targeting a mere 500 to 1500 km above the Earth, making them roughly 25 to 60 times nearer than GEO satellites out in the Clarke belt. The lower orbit also reduces the power requirements and the physical form factor of the satellite receiver in user equipment. Many of these LEO providers may start with about 12 satellites to cover key population areas with broadband Internet access, interactive multimedia, and high-quality voice. Using the Ka band of frequencies, from about 17 to 36 GHz, small-aperture antennas a little over two feet wide are capable of discriminating between satellites. For example, using Ka-band frequencies such as 28.6–29.1 GHz for the uplink and 18.8–19.3 GHz for the downlink, these low-to-medium Earth-orbiting satellites can send about 20 Mbps to user terminals on the downlink and accept up to 2 Mbps on the uplink. Both TDMA and CDMA are used as transmission technologies in satellite services. Applications for satellite services range from wireless cellular mobility to Internet broadband services to high-speed TCP/IP-based multimedia delivery. The market for these services is generally considered to be those areas that cannot be reached with fiber optics but that need performance approaching that of fiber. These nongeostationary orbit satellites (NGSOs), as they are sometimes called, must work with international organizations for spectrum allocations.
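The delay comparison follows from simple distance arithmetic. The sketch below uses straight up-and-down path lengths, so the GEO figure comes out somewhat below the 275 to 300 ms quoted above, which accounts for slant paths and processing.

```python
# Propagation-delay arithmetic behind the GEO versus LEO comparison.
# Distances are straight up-and-down approximations; real slant paths are
# somewhat longer, which is why the text quotes 275-300 ms for a GEO hop.
C_KM_PER_S = 300_000.0

def one_way_hop_ms(orbit_km: float) -> float:
    # ground -> satellite -> ground is two traversals of the orbit altitude
    return 2 * orbit_km / C_KM_PER_S * 1000

for name, altitude_km in (("GEO", 35_786), ("LEO high", 1_500), ("LEO low", 500)):
    hop = one_way_hop_ms(altitude_km)
    print(f"{name:9s} {altitude_km:6.0f} km  one-way hop ~{hop:6.1f} ms  round trip ~{2 * hop:6.1f} ms")
```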
Technology Brief—Wireless Networks This section provides a brief study on wireless networks. You can revisit this section frequently as a quick reference for key topics described in this chapter. This section includes the following subsections:
• Technology Viewpoint—Intended to enhance perspective and provide talking points regarding wireless networks.
• Technology at a Glance—Uses figures and tables to show wireless network fundamentals at a glance.
• Business Drivers, Success Factors, Technology Application, and Service Value at a Glance—Presents charts that suggest business drivers and lists those factors that are largely transparent to the customer and consumer but are fundamental to the success of the provider. Use the charts shown in the figures in this section to see how business drivers are driven through technology selection, product selection, and application deployment to provide solution delivery. Additionally, business drivers can be appended with critical success factors and then driven through the technology, product, and application layers, coupled as necessary with partnering, to produce customer solutions with high service value.
Technology Viewpoint Wireless networks now cover the spectrum from cellular phones to wireless Ethernet to fixed wireless and satellite services. Since 1985, the United States has amassed over 200 million cellular subscribers served by some 176,000 cellular sites. Mobile phone penetration in the United States now exceeds 60 percent, and more than 90 percent of that installed base has converted to digital cellular and PCS service, a conversion that has proceeded rapidly since digital handsets first became available in the first quarter of 1996. Worldwide, subscribers are expected to top 2 billion users by 2010. It is this massive digitalization that enables the convergence of data, audio, and video onto palm-sized communicators with a wealth of portable possibilities. Wireless mobility stands at a difficult juncture. The industry is becoming increasingly involved in delivering data, audio, and video over its wireless networks, and a struggle to standardize data services over cellular and PCS infrastructure looms very large indeed. Though mobile phones are pervasive, the industry is a worldwide tower of complex technologies and diversified markets; a global orbit would require a suitcase of different phones, varied technologies, unique data transports, and a plethora of radio spectrum to stay even semi-continually in touch. Multiple generations of wireless classifications add to the burgeoning acronym soup of mobile wireless technologies. There are access standards, data standards, network standards, and usability specifications, all used to classify a network as 1G, 2G, 2.5G, or 3G, and each is generally crammed into the existing RF spectrum assignments. While the overall mixture creates complexity, options are increasing for wireless providers to craft personalized mobile services that customers will use. It is apparent that wireless manufacturers and wireless providers are preparing for increased mobile data usage in the years to come, as mobility and computing, voice and data come together to provide a seamless, untethered, spectrum-efficient, and robust mobility experience.
Providers should focus on allowing seamless roaming among different networks without interruption of user service, working with manufacturers and standards organizations to create solutions that support the multiple technologies of 802.11, 802.16e, CDMA 1xRTT and CDMA 1xEV-Dx, GSM/GPRS, EDGE, and UMTS/WCDMA networks. IMT-2000 is one approach. As the industry integrates data communications and computing into the mobile phone form factor, industry leaders should consider following the way of the Internet. Standards are crucial if mobile devices are to seamlessly connect with the Internet and the vast data warehouses of the world's enterprises. IP networking, the fundamental carriage of the global Internet, provides an open, standards-based protocol suite for supporting data delivery, whether for personal computing or mobile phone networks. With Mobile IP networking, wireless networks can become seamless extensions of the Internet, the enterprise, the home, and the individual. Wireless LANs are all about the portability of personal computing. Industries such as government, healthcare, education, and manufacturing have led the charge into untethered computing, realizing productivity gains of as much as 40 percent. Internet networking standards have swiftly coupled the world's disparate computers, PDAs, and pocket PCs for a best-yet data-sharing experience. WLAN standards have adopted the Internet Protocol suite and, through the deployment of WLAN infrastructures, are becoming last-link enablers of anytime, anywhere computing. The integration of voice services through Voice over IP protocols becomes yet another data application to carry on the computing backplanes of current and future handheld personal computers. Enterprises will be looking for ease of deployment, centralized management, high performance, scalability, and robust security in their WLAN and Wi-Fi options. These needs are generally addressed with features such as dynamic power and channel assignment, automated RF site surveys and RF validation, rogue AP detection, and load balancing. Also high on the list are QoS, low-latency support for wireless voice and video, seamless roaming, high-throughput (Gbps) switching capacity, Wi-Fi Protected Access and Advanced Encryption Standard security, and integration support for advancing standards such as WiMAX. WPANs and WMANs are recently developing market spaces with promising opportunities. The mobile and wireless industry has already matured beyond adolescence and cannot ignore the fundamentals of longevity. Mobile and wireless service providers that focus on the basics of service quality, customer service and segmentation, ease of use, and operational performance, while using technological, procedural, and cultural innovation to differentiate, will enjoy the best success.
Technology at a Glance Figure 9-8 illustrates the application of wireless data technology.
Figure 9-8  Wireless Data Technology Application (chart positioning technologies by data rate, from 0.1 Mbps to 100 Gbps, and by geography, from long-haul and dense urban through industrial and residential suburban to rural and remote: fiber-optic cable and fixed wireless at the high end; WiMAX, LMDS, Wi-Fi, DSL, and copper in the middle; satellite and wireless local loop at the reach-oriented end)
Table 9-5 compares wireless technologies.
Table 9-5  Wireless Technologies

WPAN
  Standards or developing standards: Bluetooth (802.15.1), Ultra-Wideband
  Seed technology: FHSS, TDMA
  Speed: 1 to 3 Mbps (Bluetooth), 250–500 Mbps (UWB)
  Range: Short
  Applications: Peer-to-peer, device-to-device, home theater and entertainment systems, vehicle proximity, SmartCard transactions, PC peripherals

WLAN
  Standards or developing standards: 802.11 Wi-Fi
  Seed technology: FHSS, DSSS, OFDM
  Speed: 11 to 54 Mbps
  Range: Medium
  Applications: Mobile enterprise computing, mobile Internet computing, home networking

WMAN
  Standards or developing standards: MMDS and LMDS, 802.16 WiMAX
  Seed technology: FHSS, DSSS, OFDM, VOFDM
  Speed: 11 to 100 Mbps
  Range: Medium–long
  Applications: Wireline replacement, last-mile broadband access

WWAN
  Standards or developing standards: GSM, GPRS, EDGE, WCDMA, cdmaOne, CDMA 1xRTT, CDMA 1xEV-DO, 2.5G–3G, satellite, and others
  Seed technology: FDMA, TDMA, CDMA, OFDM
  Speed: 10 Kbps to 2.4 Mbps
  Range: Long
  Applications: Mobile telephony, media on demand, mobile messaging, mobile Internet, mobile enterprise, mobile/global positioning system
Business Drivers, Success Factors, Technology Application, and Service Value at a Glance Solutions and services are the desired output of every technology company. Customers perceive value differently, along a scale of low cost to high value. Providers of solutions and services should understand business drivers, technology, products, and applications to craft offerings that deliver the appropriate value response to a particular customer’s value distinction. The charts shown in the following figures list typical customer business drivers for the subject classification of networks. Following the lower arrow, these business drivers become input to seed technology selection, product selection, and application direction to create solution delivery. Alternatively, from the business drivers, another approach (the upper arrow) considers the provider’s critical success factors in conjunction with seed technology, products and their key differentiators, and applications to deliver solutions with high service value to customers and market leadership for providers.
Figure 9-9 charts the business drivers for wireless mobility.
Figure 9-9  Wireless Mobility (business-driver chart flowing customer drivers and critical success factors through seed technologies such as FDMA, TDMA, CDMA, OFDM, CDMA2000 1xRTT/1xEV-DO, GSM/GPRS/EDGE, and UMTS/WCDMA, the Cisco mobile wireless product lineup, and applications ranging from voice, SMS, and MMS to mobile Internet and premium content services, toward solution delivery by wireless service providers and handset manufacturers)
Figure 9-10 charts the business drivers for WLANs.
Figure 9-10  Wireless LANs (business-driver chart flowing drivers such as anytime, anywhere connectivity, mobility and portable device adoption, productivity gains, security and secure VPNs, and home networking through WLAN seed technologies such as Wi-Fi 802.11a/b/g, WiMAX 802.16, Bluetooth, UWB, FHSS, DSSS, and OFDM, the Cisco Aironet product lineup, and applications including enterprise LAN extension, wireless voice, public hotspots, and home networking, toward wireless LAN overlay solution delivery)
Figure 9-11 charts the business drivers for fixed wireless.
Figure 9-11  Fixed Wireless (business-driver chart flowing drivers such as last-mile broadband wireless, wireline substitution, rural coverage, and building-to-building connectivity through seed technologies such as MMDS, LMDS, wireless optics (FSO), satellite GEOs and LEOs, TDMA, CDMA, and VOFDM, and applications including fixed broadband Internet, point-to-point fiber substitution, and direct broadcast services, toward fixed wireless solution delivery by providers such as TeraBeam, Intelsat, Hughes, EchoStar, Iridium, Globalstar, and ORBCOMM)
End Notes
1. CDMA Development Group. "CDMA Industry Is Defining Next-Generation CDMA2000 Technologies." http://www.cdg.org/news/press/2005/Jun14_05.asp
References Used in This Chapter
Cisco Systems, Inc. "Multipoint Broadband Wireless." http://www.cisco.com/en/US/partner/about/ac123/ac114/ac173/ac167/about_cisco_packet_department09186a008010f7d7.html (must be a registered Cisco.com user).
International Engineering Consortium. "Global System for Mobile Communications." http://www.iec.org/online/tutorials/gsm/
International Engineering Consortium. "Cellular Communications." http://www.iec.org/online/tutorials/cell_comm/
International Engineering Consortium. "Personal Communications Service (PCS)." http://www.iec.org/online/tutorials/pcs/
International Engineering Consortium. "Billing in a 3G Environment." http://www.iec.org/online/tutorials/billing_3g/
International Engineering Consortium. "Wireless Broadband Modems." http://www.iec.org/online/tutorials/wire_broad/
Wilson, James M. "Ultra-Wideband's Disruptive RF Technology." Intel Research and Development white paper, September 10, 2002. http://developer.intel.com/technology/ultrawideband/downloads/Ultra-Wideband_Technology.pdf
Nedeltchev, Plamen. "Wireless Local Area Networks and the 802.11 Standard." March 31, 2001. http://www.cisco.com/warp/public/784/packet/jul01/pdfs/whitepaper.pdf
Wexler, Joanie. "Tips from the Trenches." Cisco Systems Packet Magazine, 3Q 2003.
Qualcomm, at www.qualcomm.com
Ericsson, at www.ericsson.com
GSM Association, at www.gsmworld.com
INDEX
Numerics 10 Gigabit Ethernet, 275–277 optical networks, 278–280 pluggable optics, 288–292 10GBASE-ER, 290 10GBASE-LR, 289 10GBASE-LX4, 290 10GBASE-SR, 289 10GBASE-SW, 290 10GE, 275–277 optical networks, 278–280 pluggable optics, 288–292 1G (first-generation systems), 541 2.5G systems, cellular mobility, 542 2G (second-generation systems), 542 3G systems (third-generation), 542–543 4G systems, 543 4GMF (4th Generation Mobile Forum), 543 4th Generation Mobile Forum (4GMF), 543 6Bone, 42 802.11 standards timeline, 556 WLANs, 553–555 comparing revisions, 558–559 diffused infrared, 553 DSSS, 554–555 FHSS, 553–554 revision a, 556–557 revision b, 555–556 revision g, 558 802.16 standard, WLANs, 559–560 802.17 protocol, RPR architecture, 268–271 8900 Series, 113
A AALs (ATM Adaptation Layers), 105 access control, Cisco PWLAN architecture, 81
layers, multilayer switching, 59 points, Cisco PWLAN architecture, 80 policy servers, Cisco PWLAN architecture, 81 Access VPNs, 164, 171–172 IPSec (IP security), 172–175 firewall, 176 hardware clients, 176–177 remote-site routers, 177 software-based clients, 174–176 MPLS, 182 benefits, 185 function, 183–185 SSL (secure socket layer), 177–179 wireless, 179 hardware-based, 181 security, 182 software-based, 180 Access Zone Router (AZR), Cisco PWLAN architecture, 81 add/drop multiplexers (ADMs), SONET/SDH networks, 261 addressing, LANs IP routing, 52 ADMs (add/drop multiplexers), SONET/SDH networks, 261 ADSL (Asymmetric DSL), 478–479 data rates, 482 distance limitations, 483 filter, 481 modems, 479–480 multiplexing standards, 480–481 service selection, 483–484 ADSL2, 484–485 ADSL2+, 484–485 Advanced Mobile Phone Service (AMPS), 524, 529 aggregation layers, ISDN wireline networks, 472–474 AGS (Cisco Advanced Gateway Server), 47 AH (authentication header), 166–167 amplification ELH, 430–431
578
amplification
metro DWDM, 357 ULH, 434 amplifiers DWDM long-haul networks, 418–420 submarine long-haul networks, 437–438 AMPS (Advanced Mobile Phone Service), 524, 529 analog technologies, cellular mobility, 524–528 transmissions, residential loop, 459 Antheil, George, 530 Any Transport over MPLS (AToM), 195–198 any-to-any connectivity, MPLS Layer 3 VPNs, 192–194 AoMPLS (ATM over MPLS), 68 APC (Automatic Power Control), 407 APDs (avalanche photodiodes), 238 APONs (ATM PONs), 315 application service providers (ASPs), 85 architectures ULH OXC, 434–435 WANs (long IP networks), 64 ASPs (application service providers), 85 Asymmetric DSL (ADSL), 476–479 data rates, 482 distance limitations, 483 filter, 481 modems, 479–480 multiplexing standards, 480–481 service selection, 483–484 Asymmetric DSL (ADSL), 476 Asymmetric DSL-Lite (G.Lite), 476 ATM cell tax, 105 cell-based MPLS components, 118–119 multiservice networks, 104–106 next-generation multiservice switching, 108–110 VPNs (virtual private networks), 161–163 ATM Adaptation Layers (AALs), 105 ATM LSR, 119–120
ATM over MPLS (AoMPLS), 68 ATM PONs (APONs), 315 AToM (Ant Transport over MPLS), 195–198 authentication header (AH), 166–167 Automatic Power Control (APC), 407 avalanche photodiodes (APDs), 238 AZR (Access Zone Router), Cisco PWLAN architecture, 81
B bandwidths STS, 141 WANs (long IP networks), 63 basic rate interface (BRI), 464–465 BGP (Border Gateway Protocol), 52 B-ISDN (Broadband Integrated Services Digital Network), 104, 466 Bluetooth, WPANs, 560–561 bonded T1s, 463 Border Gateway Protocol (BGP), 52 BPX 8600 Series, 110 BRAS (Broadband Remote Access Server), 475, 492 BRI (basic rate interface), 464–465 broadband wireline networks cable, 493–502 DSL, 475–490 DSLAM, 490–492 Ethernet, 502–509 digital access cross-connects, SONET/SDH networks, 262 Broadband Integrated Services Digital Network (BISDN), 104, 466 Broadband Remote Access Server (BRAS), 475, 492
C cable, broadband wireline networks, 493–494 CMTS (Cable Modem Termination System), 500–502
Cisco
standards, 496–500 technology, 494–495 Cable Modem Termination System (CMTS), 494, 500–502 called distributed CEF (dCEF), 61 capacity Global IP networks, 88–89 metro DWDM, 357–358 WLANs (wireless LANs), 74–75 CDMA (code division multiple access), 18, 530 digital cellular technology, 530–532 direct spread, 540 multicarrier, 540 time division duplexing, 540 CDMA 1x EV-DO, 18 CDMA2000, 532 cellular standards, 537–538 data mobility, 545–546 CDMA2000 1x, 18, 532, 545 CDMA2000 1xEV-DO, 532 CDMA2000 1xEV-DV, 532–546 cdmaOne, 532 CEF (Cisco Express Forwarding), 60 cells clusters, analog technology, 524–528 MPLS, 118 ATM components, 118–119 ATM LSR, 119–120 Cisco ATM multiservice switches, 120–121 eLSR, 119–120 cellular mobility, 523–524 analog technology, 524–528 call transmission, 551 data overlay, 544, 547 CDMA2000, 545–546 data rates, 548–549 EDGE, 545 GPRS, 544–545 HSCSD, 544
579
HSDPA, 547 TD-SCDMA, 547 WCDMA, 546 digital technology, 529 CDMA, 530–532 OFDM, 532–533 TDMA, 529–530 functional generations, 541 2.5G system, 542 4G systems, 543 first-generation (1G), 541 second-generation (2G), 542 third-generation (3G), 542–543 networks, 82 Cisco Mobile Exchange Frameworks, 84–87 MPLS, 84 packet gateways on router platforms, 82–83 packet-based VoIP, 84 RAN support, 83 SS7oIP, 83 WLAN 802.11, 83–84 radio frequency spectrum, 549–550 standards, 534–536 CDMA2000, 537–538 GSM, 536–537 IMT-2000, 539–541 PCS, 538 UTMS, 539 Cellular Telecommunications & Internet Association website, 17 channels counts, metro DWDM, 358 DWDM design, 250–251 optical impairments, 249 chromatic dispersion, 248 CIDR (Classless Interdomain Routing), 39 Cisco next-generation multiservice networks, 110 Cisco 8900 Series, 113 Cisco BPX 8600 Series, 110 Cisco IGX 8400 Series, 113
580
Cisco
Cisco MGX 8250 Edge Concentrator, 112 Cisco MGX 8800 Series, 112 website, 43 Cisco 8900 Series, 113 Cisco Advanced Gateway Server (AGS), 47 Cisco BPX 8600 Series, 110 Cisco CNS SESM, Cisco PWLAN architecture, 81 Cisco CRS-1 Carrier Routing System, 126 8-slot single-shelf systems, 131 16-slot single-shelf systems, 131 hardware design Fabric Chassis, 128–129 line card shelves, 126–127 Multishelf Systems, 129–131 Cisco Express Forwarding (CEF), 60 Cisco IGX 8400 Series, 113 Cisco Information Center, 81 Cisco IOS XR Software, multiservice network routing, 132–133 Cisco MGS series routers, 12 Cisco MGX 8250 Edge Concentrator, 112 Cisco MGX 8800 Series, 112 Cisco Mobile Exchange Framework, IP cellular networks, 84–87 Cisco ONS 15454 CE Series Ethernet data card, 143–144 Cisco ONS 15454 E Series Ethernet data card, 142 Cisco ONS 15454 G Series Ethernet data card, 142–143 Cisco ONS 15454 ML Series Ethernet data card, 143 Cisco ONS 15454 MSTP, 405–408 Cisco ONS 15808 DWDM System, 402–405 Cisco XR 12000 Series Routers, 133–134 architecture, 134–136 capacities, 136–138 cladding, optical fiber, 233 Classless Interdomain Routing (CIDR), 39 Clinton, President Bill, Telecommunications Reform Act of 1996, 8
CMTS (Cable Modem Termination System), 494, 500–502 CNS Performance Engine (CNS-PE), 81 CNS-PE (CNS Performance Engine), 81 coarse wavelength division multiplexing (CWDM), 242, 254–257, 359–360 coating, optical fiber, 233 coaxial cables, 493 code division multiple access. See CDMA coded OFDM (COFDM), 555 COFDM (coded OFDM), 555 communications networks era of changes, 5–8 government regulation, 8–11 technological advancement, 11–12 IP (Internet Protocol) growth, 12–14 optical communications growth, 14–17 wireless communications, 17–20 components, optical networking, 228–229 electromagnetic spectrum, 230–232 lambdas, 229–230 light emitters, 232–233 receivers, 238 optical fiber, 233–238 computing power, 5 core layers, multilayer switching, 59 core networks, 138–139 MSPPs (Multiservice Provisioning Platform), 140–141 Cisco ONS 15454 CE Series Ethernet data card, 143–144 Cisco ONS 15454 E Series Ethernet data card, 142 Cisco ONS 15454 G Series Ethernet data card, 142–143 Cisco ONS 15454 ML Series Ethernet data card, 143 MSSPs (Multiservice Switching Platforms), 144–147 core optical fiber, 233 Corning LEAF fiber, 237
Corning SMF-28 fiber, 236 Corning SMF-28e fiber, 236 Corning SMF-DS fiber, 236 Corning SMF-NZ-DSF fiber, 236 Corning VASCADE fiber, 237 correspondent nodes, 71 CPE (customer premise equipment), 45 CRS-1 Carrier Routing System, 126 8-slot single-shelf systems, 131 16-slot single-shelf systems, 131 hardware design Fabric Chassis, 128–129 line card shelves, 126–127 Multishelf Systems, 129–131 customer premise equipment (CPE), 45 CWDM (coarse wavelength division multiplexing), 242, 254–257, 359–360
D DACSs (digital access cross-connect systems), SONET/SDH networks, 262 dark fiber, 250 dark lambdas, 250 dark wavelengths, 250 DAT (Distributed Administration Tool), 81 data ADSL (Asymmetric DSL), 482 cellular mobility, 544, 547 CDMA2000, 545–546 data rates, 548–549 EDGE, 545 GPRS, 544–545 HSCSD, 544
IP (Internet Protocol), converged networks, 44 modulation, ULH, 435 SONET/SDH, 266 wireline networks, 457–458, 509 broadband, 475–509 narrowband, 458–474 next-generation networks, 23–24 dCEF (called distributed CEF), 61 dense wavelength division multiplexing. See DWDM density, multilambda networks, 242 DFB (distributed feedback), 416 Diffie-Hellman, authentication key methods, 167 diffused infrared, 75, 553 digital access cross-connect systems (DACSs), SONET/SDH networks, 262 digital access technologies, WLANs (wireless LANs), 75–76 Digital AMPS, 529 digital certificates, authentication key methods, 167 digital loop carrier (DLC), 472–474 digital signal one (DS1), 461–463 digital signal zero (DS0), 460 Digital Subscriber Line Access Multiplexer (DSLAM), 475 Digital Subscriber Line. See DSL digital technologies cellular mobility, 529 CDMA, 530–532 OFDM, 532–533 TDMA, 529–530 wireline networks, 460–461 Direct Sequence Spread Spectrum. See DSSS dispersion management, ULH, 433–434 optical impairments, 248–249 shifted fiber, 250 Distributed Administration Tool (DAT), 81 distributed feedback (DFB), 416 distribution layers, multilayer switching, 59
DLC (digital loop carrier), 472–474 DMVPN (dynamic multipoint VPN), 189 DOCSIS 1.0 standard, 497–498 DPT (Dynamic Packet Transport), 266–268, 346–348 benefits, 274 Ethernet, 284–285 SRP protocol, 271–273 DS0 (digital signal zero), 460 DS1 (digital signal one), 461–463 DSL (Digital Subscriber Line), 475–478 ADSL, 478–479 data rates, 482 distance limitations, 483 filter, 481 modems, 479–480 multiplexing standards, 480–481 service selection, 483–484 ADSL2, 484–485 ADSL2+, 484–485 SHDSL (Single-Pair High-Rate DSL), 485–486 VDSL (Very High Data Rate DSL), 486–490 DSLAM (Digital Subscriber Line Access Multiplexer), 475 broadband aggregation layer, 490 basics, 491–492 BRAS (Broadband Remote Access Server), 492 DSSS (Direct Sequence Spread Spectrum), 553 802.11 standard, 554–555 wireless digital access technologies, 75 DWDM (dense wavelength division multiplexing), 242–246, 494 design balance, 252 channel count, 250 channel plans, 251 transponders, 251 fiber types, 249–250 intelligence and integration, 252–254 long-haul optical networks, 410, 422–424 lasers, 413–414
optical amplifiers, 418–420 optical power budget, 423–429 optical regeneration, 421–422 tunable components, 415–418 waveguide challenges, 410–413 metropolitan optical networks, 349 business drivers, 349–350 CWDM, 359–360 design considerations, 356–358 enabled services, 360–361 technology, 351–355 network topology discovery, 407 optical impairments, 246 channels, 249 dispersion, 248–249 power loss, 248 dynamic multipoint VPN (DMVPN), 189 Dynamic Packet Transport. See DPT
E EAP (Extensible Authentication Protocol), 76 EAP-TLS (Extensible Authentication Protocol-Transport Layer Security), 77 ECLs (external-cavity lasers), 416 EDGE (Enhanced Data Rates for GSM Evolution), 18, 83, 544–545 edge label switch router (eLSR), 115, 119–120 edge network switching, 138–139 MSPPs (Multiservice Provisioning Platform), 140–141 Cisco ONS 15454 CE Series Ethernet data card, 143–144 Cisco ONS 15454 E Series Ethernet data card, 142 Cisco ONS 15454 G Series Ethernet data card, 142–143 Cisco ONS 15454 ML Series Ethernet data card, 143 MSSPs (Multiservice Switching Platforms), 144–147 EFM OAM (Ethernet in the First Mile Operations, Administration, and Maintenance), 504, 508
EFMC (Ethernet in the First Mile over Copper), 504–505 EFMF (Ethernet in the First Mile over Point-to-Point Fiber), 504–506 EFMP (Ethernet in the First Mile over Passive Optical Networks), 504, 507–508 EIGRP (Enhanced Interior Gateway Routing Protocol), 52 Einstein, Albert, stimulated emission of radiation, 229 electromagnetic spectrums optical networking, 230–232 technological refinement, 5–6 ELH (Extended Long-Haul Optical Networks), 429 advanced fiber, 430 FEC, 431 L band, 430 modulation formats, 431–432 Raman amplification, 430–431 eLSR (edge label switch router), 115, 119–120 EMS (Ethernet Multipoint Service), 363–367 encapsulating security payload (ESP), 168 Enhanced Data Rates for GSM Evolution (EDGE), 18, 83, 544–545 Enhanced Interior Gateway Routing Protocol (EIGRP), 52 Enterprise Systems Connection (ESCON), 380–381 enterprise-managed VPNs, 216–217 EoMPLS (Ethernet over MPLS), 68 EPL (Ethernet Private Line), 363–365 EPONs (Ethernet over passive optical networks), 315, 503 EPR (Ethernet Private Ring), 363, 366 ESP, IPSec headers, 168 ERMS (Ethernet Relay Multipoint Service), 363, 367 errors, LANs IP routing, 53 ERS (Ethernet Relay Service), 363–365 ESCON (Enterprise Systems Connection), 380–381 ESP (encapsulating security payload), 168
Ethernet broadband wireline networks, 502–504 EFM OAM, 508 EFMC, 504–505 EFMF, 506 EFMP, 507–508 new access choices, 508–509 LANs (local area networks), 48–50 metropolitan optical networks, 361–362 LAN to MAN, 362–363 market requirements, 367, 370–371 service orienting, 371 services, 363–369 optical networking, 274–277 10GE, 278–280 direct over optical fiber, 285–292 Gigabit Ethernet, 278 next-generation SONET/SDH, 280–283 RPR/DPT, 284–285 switching Layer 2, 55–57 multilayer, 58–60 optimizing multilayer, 60–62 WANs (long IP networks), 68–69 Ethernet Multipoint Service (EMS), 363–367 Ethernet over MPLS (EoMPLS), 68 Ethernet over passive optical networks (EPONs), 315, 503 Ethernet Private Line (EPL), 363–365 Ethernet Private Ring (EPR), 363, 366 Ethernet Relay Multipoint Service (ERMS), 363, 367 Ethernet Relay Service (ERS), 363–365 Ethernet Wire Service (EWS), 363–365 EWS (Ethernet Wire Service), 363–365 excitation, 229 Extended Long-Haul Optical Networks. See ELH Extensible Authentication Protocol (EAP), 76 Extensible Authentication Protocol-Transport Layer Security (EAP-TLS), 77 external-cavity lasers (ECLs), 416 Extranet MVPNs, 210 Extranet VPNs, 165, 211–213
F Fabric Chassis, Cisco CRS-1 Carrier Routing System, 128–129 Fast Ethernet, 280 FDD (frequency division duplexing), 531 FDDI (Fiber Distributed Data Interface), 46 FDMA (frequency division multiple access), 524, 540 FEC (Forward Error Correction), 431, 531 FHSS (Frequency Hopping Spread Spectrum), 553 802.11 standard, 553–554 wireless digital access technologies, 75 FIB (forwarding information base), 60 fiber DWDM, 249–250 ELH, 430 metro DWDM, 356–357 submarine long-haul networks, 436–438 Fiber Connection (FICON), 381–383 Fiber Distributed Data Interface (FDDI), 46 fiber to the node (FTTN), 311 fibre channels, metro storage networks, 377–379 FICON (Fiber Connection), 381–383 filters, ADSL (Asymmetric DSL), 481 firewalls, remote-site IPSec VPNs, 176 first generation systems (1G), 541 fixed wireless, WLANs, 563 LMDS (Local Multipoint Distribution Service), 565 MMDS (multichannel multipoint distribution service), 564–565 VOFDM (vector orthogonal frequency division multiplexing), 564 foreign agents, Mobile IP, 71 Forward Error Correction (FEC), 431, 531 forwarding information base (FIB), 60 Frame Relay Hub-and-Spoke Design, 66 ISDN wireline networks, 467–472
VPNs (virtual private networks), 161–163 WANs (long IP networks), 65–67 Frame Relay over MPLS (FRoMPLS), 68 frame-based MPLS function, 116–117 terminology, 115–116 frequency division duplexing (FDD), 531 frequency division multiple access (FDMA), 524, 540 Frequency Hopping Spread Spectrum. See FHSS FRoMPLS (Frame Relay over MPLS), 68 FTTN (fiber to the node), 311
G GBICs (Gigabit Interface Converters), 251, 286–288 General Packet Radio Service (GPRS), 18, 83, 544 generations, cellular mobility, 541 2.5G system, 542 4G systems, 543 first-generation (1G), 541 second-generation (2G), 542 third-generation (3G), 542–543 Generic Framing Procedure standard (GFP standard), 281–282 GFP (Generic Framing Procedure), 281–282, 333 GFP framed (GFP-F), 282 GFP transparent (GFP-T), 282 GFP-F (GFP framed), 282 GFP-T (GFP transparent), 282 Gigabit Ethernet, 275–277, 280 optical networks, 278 over optical fiber, 285–286 Gigabit Interface Converters (GBICs), 251, 286–288 Gigabit PONs (GPONs), 315 Global IP networks, 87–88 capacity, 88–89 Internet, 90–92 resiliency, 89–90
Global System for Mobile Communications (GSM), 18, 536–537 Globally Resilient IP (GRIP), 89 governments, telecommunications regulation, 8–11 GPONs (Gigabit PONs), 315 GPRS (General Packet Radio Service), 18, 83, 544–545 GRIP (Globally Resilient IP), 89 GSM (Global System for Mobile Communications), 18, 536–537
H hard handoffs, 529 hardware Cisco CRS-1 Carrier Routing System Fabric Chassis, 128–129 line card shelves, 126–127 IPSec VPN clients, 176–177 wireless VPNs, 181 HCS (Hierarchical Cell Structure), 529 HDLC (High-Level Data Link Control) protocol, 66 HDSL (High Data Rate DSL), 477 HDSL2 (High Data Rate DSL-2), 477 headers, IPSec (IP security), 166 AH (authentication header), 166–167 ESP (encapsulating security payload), 168 HFC systems, 494 Hierarchical Cell Structure (HCS), 529 high availability, IP VPNs, 164 High Data Rate DSL (HDSL), 477 High Data Rate DSL-2 (HDSL2), 477 High-Level Data Link Control (HDLC) protocol, 66 High-Speed Circuit-Switched Data (HSCSD), 544 High-Speed Downlink Packet Access (HSDPA), 544, 547 histories, IP (Internet Protocol), 36 technology share, 36–38 version 4, 38–40 version 6, 40–43
home agent, Mobile IP, 71 hosted storage networks, service POPs, 329 hosted telephony, service POPs, 328 HSCSD (High-Speed Circuit-Switched Data), 544 HSDPA (High-Speed Downlink Packet Access), 544, 547 HTML (Hypertext Markup Language), 7 hub nodes, long-haul optical networks, 400 hub-and-spoke design, intranet VPNs, 188 Hypertext Markup Language (HTML), 7
I–J IANA (Internet Assigned Numbers Authority), 51 IBM SNA (Systems Network Architecture), 12 IDSL (ISDN DSL), 477 IEEE 802.11x standard, 74 IEEE standards, 75 IETF RFC 2002, 70 IGRP (Interior Gateway Routing Protocol), 52 IGX 8400 Series, 113 IKE (Internet Key Exchange), 166 ILEC (Incumbent Local Exchange Carrier), 466 IMT-2000, cellular standard, 539–541 Incumbent Local Exchange Carrier (ILEC), 466 infrared, diffused, 553 infrastructures, convergence, 26–28 Integrated Services Digital Network. See ISDN inter-AS MVPNs, 210 interexchange carriers (IXCs), 397 Interior Gateway Routing Protocol (IGRP), 52 Intermediate System to Intermediate System (IS-IS), 52 internal networks. See intranet VPNs Internet Global IP network, 90–92 service accesses, 7 Internet Assigned Numbers Authority (IANA), 51 Internet Key Exchange, 166 Internet Protocol. See IP
Internet Protocol Suite, 37 Internet Protocol/Multiprotocol Label Switching (IP/MPLS), 67 Internet service providers (ISPs), 85 Internet Software Consortium website, 13 intranet VPNs, 165, 186 IPSec designs, 188 components, 189 DMVPN (dynamic multipoint VPN), 189 full-mesh on-demand with TED, 188–189 hub-and-spoke, 188 L2TPv3, 202–204 MPLS Layer 2, 194–195 AToM, 195–198 VPLS, 198–202 MPLS Layer 3, 190–194 MVPNs, 205 Cisco introduction, 207 Extranet, 210 inter-AS, 210 MDs, 209–210 MDT, 208 MPLS need, 206 MTI, 208 mVRFs, 207 SSM, 210 site-to-site, 186–188 IOS XR Software, multiservice network routing, 132–133 IP (Internet Protocol), 4 advancement, 12–14 converged networks, 44 future, 92–93 Global networks, 87–88 capacity, 88–89 Internet, 90–92 resiliency, 89–90 history, 36 technology share, 36–38 version 4, 38–40 version 6, 40–43 LANs (local area networks), 44–46 Ethernet, 48–50
routing, 50–55 switching, 55–62 technologies, 46–48 metropolitan optical networks, 340–341 DPT, 346–348 MPLS, 348–349 RPR, 341–346 mobile networks, 69–72 cellular networks, 82–87 WLANs (Wireless LANs), 73 next-generation networks, 21 routing, service POPs, 328 technology, 93 business drivers, 96–100 network summary, 95–96 viewpoints, 93–95 VPNs, 163–165 WANs (long IP networks), 62–63 architecture changes, 64 bandwidth, 63 regulatory policy changes, 64 technologies, 65–69 IP security. See IPSec IP/MPLS (Internet Protocol/Multiprotocol Label Switching), 67 IPSec (IP security), 76, 165, 170–171 intranet VPNs designs, 188 components, 189 DMVPN (dynamic multipoint VPN), 189 full-mesh on-demand with TED, 188–189 hub-and-spoke, 188 multiservice VPNs, 213–216 remote-access VPNs, 172–175 firewalls, 176 hardware clients, 176–177 remote-site routers, 177 software-based clients, 174–176 SAs, 166 site-to-site VPNs, 186–188 VPNs, 165–166 data forwarding, 168–170 headers, 166–168 technologies, 170–171 IPv4 (Internet Protocol version 4), 38–40
IPv6 (Internet Protocol version 6), 40–43 ISDN (Integrated Services Digital Network), 463 wireline networks, 464 BRI, 464–465 challenges, 466–467 PRI, 465 SS7, 466 ISDN DSL (IDSL), 477 IS-IS (Intermediate System to Intermediate System), 52 ISPs (Internet service providers), 85 ITU-T G.709 OTN, 292–294 control plane, 295–297 IP over optical, 294 IXCs (interexchange carriers), 397
K–L key exchange SAs, 166 L band, ELH, 430 L2TPv3 (Layer 2 Tunneling Protocol version 3), 202–204 label switch router. See LSR, 115 Lamarr, Hedy, 530 Lamarr-Antheil patent, 530 lambdas, 229–230 LAN Management Solution (LMS), 81 LANs (local area networks), 37, 44–46 Ethernet, 48–50 IP routing, 50–51 application multiplexing, 54–55 global addressing, 52 packets, 51–52 TCP/IP, 53 windowing flow control, 53 IP support, 37 metro Ethernet, 362–363 switching Layer 2, 55–56 Layer 3, 56–57
multilayer, 58–60 optimizing multilayer, 60–62 technologies, 46–48 lasers DWDM long-haul networks, 413–414 ULH, 433 Layer 2 MPLS VPNs, 194–195 AToM, 195–198 VPLS, 198–202 switching, LANs (local area networks), 55–56 Layer 2 Tunneling Protocol version 3 (L2TPv3), 202–204 Layer 3 MPLS VPNs, 190–194 switching, LANs (local area networks), 56–57 LCAS (Link Capacity Adjustment Scheme), 283, 333–334 LEAP (Lightweight Extensible Authentication Protocol), 76 lights emitters, 232–233 optical networking, 227–228 propagating, 239–241 receivers, 238 Lightweight Extensible Authentication Protocol (LEAP), 76 line card shelves, Cisco CRS-1 Carrier Routing System hardware, 126–127 Link Capacity Adjustment Scheme (LCAS), 283, 333–334 LMDS (Local Multipoint Distribution Service), 565 LMS (LAN Management Solution), 81 local area networks. See LANs local loops, 459 Local Multipoint Distribution Service (LMDS), 565 long IP networks. See WANs long-haul optical networks, 397–399, 444–447, 450–452 DWDM, 410
lasers, 413–414 optical amplifiers, 418–420 optical power budget, 423–429 optical regeneration, 421–422 tunable components, 415–418 waveguide challenges, 410–413 wavelengths, 422–424 ELH, 429 advanced fiber, 430 FEC, 431 L band, 430 modulation formats, 431–432 Raman amplification, 430–431 nodes, 400–402 OXCs, 438–439 hybrid technologies, 444 OEO, 439–440 OOO, 440–443 submarine, 435–438 technologies, 402, 444–447, 450–452 Cisco ONS 15454 MSTP, 405–408 Cisco ONS 15808 DWDM System, 402–405 ROADM, 408–410 ULH, 432–433 amplification, 434 data modulation, 435 dispersion management, 433–434 laser accuracy, 433 OXC architectures, 434–435 long-wavelength band, ELH, 430 low-loss wavelengths, 240 LSR (label switch router), 115
M MAC protocol, 268 macro cells, GSM, 536 management, IP VPNs, 164 MANs (metropolitan area networks), 138–139 metro Ethernet, 362–363 MSPPs (Multiservice Provisioning Platform), 140–141
Cisco ONS 15454 CE Series Ethernet data card, 143–144 Cisco ONS 15454 E Series Ethernet data card, 142 Cisco ONS 15454 G Series Ethernet data card, 142–143 Cisco ONS 15454 ML Series Ethernet data card, 143 MSSPs (Multiservice Switching Platforms), 144–147 MDs (Multicast Domains), 209–210 MDT (Multicast Distribution Tree), 208 Media Access Control protocol, 268 Metcalfe, Bob, Ethernet, 48 metro access, 311–312 business access, 312–313 PONs (passive optical networks), 314–317 residential access, 313–314 tiered metropolitan optical network, 310 metro core, 321 defining, 322–323 metro edge connection to service POP, 326 scaling bandwidth, 323–324 tiered metropolitan optical network, 310 topology scaling, 324–325 metro DWDM, metropolitan optical networks, 349 business drivers, 349–350 CWDM, 359–360 design considerations, 356–358 enabled services, 360–361 technology, 351–355 metro edge, 317 bandwidth and services increase, 320–321 connecting metro access layer, 319–320 evolution, 318 increased intelligence, 318–319 tiered metropolitan optical network, 310 metro Ethernet, metropolitan optical networks, 361–362 LAN to MAN, 362–363
market requirements, 367, 370–371 service orienting, 371 services, 363–369 metro IP, metropolitan optical networks, 340–341 DPT, 346–348 MPLS, 348–349 RPR, 341–346 metro MSPP, metropolitan optical networks, 372 metro MSSP, metropolitan optical networks, 372–373 metro MSTP, metropolitan optical networks, 373–377 metro regional, 330–331 metropolitan area networks. See MANs metropolitan optical networks, 307–308 business drivers, 308–309 functional infrastructure, 309–311 metro access, 311–317 metro core, 321–326 metro edge, 317–321 metro regional, 330–331 service POP, 327–330 metro DWDM, 349, 360–361 business drivers, 349–350 CWDM, 359–360 design considerations, 356–358 technology, 351–355 metro Ethernet, 361–362 LAN to MAN, 362–363 market requirements, 367, 370–371 service orienting, 371 services, 363–369 metro IP, 340–341 DPT, 346–348 MPLS, 348–349 RPR, 341–346 metro MSPP, 372 metro MSSP, 372–373 metro MSTP, 373–377 SONET/SDH networks, 331–332 GFP, 333 LCAS, 333–334
packet movement, 335–340 VCAT, 332 storage networks, 377 ESCON, 380–381 fibre channel, 377–379 FICON, 381–383 technology, 383–385, 388–391 metropolitan statistical areas (MSAs), 536 MGX 8250 Edge Concentrator, 112 MGX 8800 Series, 112 micro cells, GSM, 536 MID (Mobile Identification Number), 528 MMDS (Multichannel Multipoint Distribution Service), 564–565 MMF (multimode fiber), 234–235 Mobile Identification Number (MID), 528 Mobile IP networks, 69–72 cellular networks, 82 Cisco Mobile Exchange Framework, 84–87 MPLS, 84 packet gateways on router platforms, 82–83 packet-based VoIP, 84 RAN support, 83 SS7oIP, 83 WLAN 802.11, 83–84 WLANs (Wireless LANs) private, 73–78 public, 78–81 mobile node, Mobile IP, 71 mobile operators, Cisco PWLAN architecture, 81 mobile virtual network operators (MVNOs), 85 mobility cellular networks, 82 Cisco Mobile Exchange Framework, 84–87 MPLS, 84 packet-based VoIP, 84 platform gateways on router platforms, 82–83 RAN support, 83 SS7oIP, 83 WLAN 802.11, 83–84 modems, ADSL (Asymmetric DSL), 479–480
modulation formats, ELH, 431–432 Moore, Gordon, 533 MPLS (Multiprotocol Label Switching), 114–115 benefits, 123 cell-based, 118 ATM components, 118–119 ATM LSR, 119–120 Cisco ATM multiservice switches, 120–121 eLSR, 119–120 frame-based function, 116–117 terminology, 115–116 IP cellular networks, 84 large enterprise example benefits, 124–125 Layer 2 VPNs, 194–195 AToM, 195–198 VPLS, 198–202 Layer 3 VPNs, 190–194 metro IP, 348–349 remote-access VPNs, 182–185 services, 121–122 MSAs (metropolitan statistical area), 536 MSDSL (Multirate Symmetric DSL), 477 MSPPs (Multiservice Provisioning Platforms), 138–141, 372, 405 Cisco ONS 15454 CE Series Ethernet data card, 143–144 Cisco ONS 15454 E Series Ethernet data card, 142 Cisco ONS 15454 G Series Ethernet data card, 142–143 Cisco ONS 15454 ML Series Ethernet data card, 143 MSSPs (Multiservice Switching Platforms), 138, 144–147, 372–373 MSTP (Multiservice Transport Platform), 373–377 MTI (Multicast Tunnel Interface), 208 Multicast Distribution Tree (MDT), 208 Multicast Domains (MDs), 209–210 Multicast Tunnel Interface (MTI), 208
Multicast VPNs. See MVPNs Multicast VRFs (mVRFs), 207 Multichannel Multipoint Distribution Service (MMDS), 564–565 multilayer switching, LANs (local area networks), 58–62 multimode fiber (MMF), 234–235 multiplexing ADSL (Asymmetic DSL), 480–481 LAN applications, 54–55 Multiprotocol Label Switching. See MPLS Multirate Symmetric DSL (MSDSL), 477 multiservice networks, 103–104 ATM, 104–106 MANs (metropolitan area networks), 138–139 MSPPs (Multiservice Provisioning Platform), 140–144 MSSPs (Multiservice Switching Platforms), 144–147 MPLS (Multiprotocol Label Switching Networks), 114–115 benefits, 123 cell-based, 118–121 frame-based, 115–117 large enterprise example benefits, 124–125 services, 121–122 next-generation, 107 ATM switching, 108–110 Cisco switches, 110–113 networks, 21–22 routers, 125–126 Cisco CRS-1 Carrier Routing System, 126–131 Cisco IOS XR Software, 132–133 Cisco XR 12000 Series Routers, 133–138 technologies, 150 Multiservice Provisioning Platforms. See MSPPs Multiservice Switching Platforms (MSSPs), 138, 144–147, 372–373 Multiservice Transport Platform (MSTP), 373–377 multiservice VPNs, 213–216
Multishelf Systems, Cisco CRS-1 Carrier Routing System, 129–131 MVNOs (mobile virtual network operators), 85 MVPNs (Multicast VPNs), 205 Cisco introduction, 207 Extranet, 210 inter-AS, 210 MDs, 209–210 MDT, 208 MPLS need, 206 MTI, 208 mVRFs, 207 SSM, 210 mVRFs (multicast VRFs), 207
N narrowband wireline networks, 458–459 aggregation layer through DLC, 472–474 digital technology, 460–461 DS1, 461–463 frame relay, 467–472 ISDN, 464–467 residential loop, 459 NAT (Network Address Translation), 40 Network Address Translation (NAT), 40 networking government regulation, 8–11 technological advancement, 11–12 IP (Internet Protocol) growth, 12–14 optical communications growth, 14–17 wireless communications, 17–20 Networking Services Configuration Engine, 81 networks era of changes, 5–8 management, Cisco PWLAN architecture, 81 multiservice, 103–104 ATM, 104–106 MANs (metropolitan area networks), 138–147
MPLS (Multiprotocol Label Switching Networks), 114–125 next-generation, 107–113 routing, 125–138 technologies, 150 next-generation, 20–21 IP (Internet Protocol), 21 multiservice networks, 21–22 optical networks, 23 services, 25–30 VPNs, 22–23 wireless networks, 24–25 wireline networks, 23–24 next-generation multiservice networks, 107 ATM switching, 108–110 Cisco switches, 110 Cisco 8900 Series, 113 Cisco BPX 8600 Series, 110 Cisco IGX 8400 Series, 113 Cisco MGX 8250 Edge Concentrator, 112 Cisco MGX 8800 Series, 112 next-generation networks, 20–21 IP (Internet Protocol), 21 multiservice networks, 21–22 optical networks, 23 services, 25–26 convergence, 28–29 infrastructure convergence, 26–28 transformation from technology push, 29–30 VPNs, 22–23 wireless networks, 24–25 wireline networks, 23–24 nodes, long-haul optical networks, 400–402 non-zero dispersion-shifted fiber, 250
O OADM nodes (optical add/drop multiplexing nodes), 400 ODR (On-Demand Routing), 52
OEO (Optical to Electrical to Optical), 439–440 OFDM (Orthogonal Frequency Division Multiplexing), 532, 555 digital cellular technology, 532–533 wireless digital access technologies, 75 WLANs, 555 On-Demand Routing (ODR), 52 ONS 15454 CE Series Ethernet data card, 143–144 ONS 15454 E Series Ethernet data card, 142 ONS 15454 G Series Ethernet data card, 142–143 ONS 15454 ML Series Ethernet data card, 143 OOO (Optical to Optical to Optical), 440 OXCs, 440–441 challenges, 442–443 requirements, 441 services, 441–442 Open Shortest Path First (OSPF), 52 Open System Interconnection (OSI), 36–37 optical add/drop multiplexing nodes (OADM nodes), 400 optical amplifiers, DWDM long-haul networks, 418–420 optical communications, 4, 14–17 Optical Cross-Connects. See OXCs optical fiber Ethernet, 285 10GE pluggable optics, 288–292 GBICs (Gigabit Interface Converters), 286–288 Gigabit Ethernet, 285–286 optical networking, 233–234 MMF (multimode fiber), 234–235 SMF (single-mode fiber), 235–238 optical line amplifier nodes, long-haul optical networks, 400 optical networking, 227 components, 228–229 electromagnetic spectrum, 230–232 lambdas, 229–230 light emitters, 232–233
light receivers, 238 optical fiber, 233–238 DWDM long-haul networks, 422–424 Ethernet, 274–277 10GE, 278–280 direct over optical fiber, 285–292 Gigabit Ethernet, 278 next-generation SONET/SDH, 280–283 RPR/DPT, 284–285 facilitating, 241–242 CWDM (coarse wavelength division multiplexing), 254–257 DWDM (dense wavelength division multiplexing), 244–254 WDM (wavelength division multiplexing), 242–244 light, 227–228 next-generation networks, 23 OTN (optical transport network), 292–294 control plane, 295–297 IP over optical, 294 propagating light, 239–241 SONET/SDH, 257–258 data challenges, 266 hierarchy, 259–260 network elements, 261–262 origins, 258–259 Pos/SDH (packet over SONET/SDH), 263–265 statistical multiplexing DPT (Dynamic Packet Transport), 266–274 RPR (Resilient Packet Ring), 266–274 technologies, 300–302 optical power budget, DWDM long-haul networks, 423 considerations, 425–428 decibels, 428–429 optical regeneration, DWDM long-haul networks, 421–422 optical supervisory channel (OSC), 253 Optical to Electrical to Optical (OEO), 439–440
Optical to Optical to Optical. See OOO optical transport network. See OTN Orthogonal Frequency Division Multiplexing. See OFDM OSC (optical supervisory channel), 253 OSI (Open System Interconnection), 36–37 OSPF (Open Shortest Path First), 52 OTN (optical transport network), 292–294 control plane, 295–297 IP over optical, 294 OXCs (Optical Cross-Connects), 438–439 architectures, 434–435 hybrid technologies, 444 OEO, 439–440 OOO, 440–441 challenges, 442–443 requirements, 441 services, 441–442
P packet over SONET/SDH (PoS/SDH), 263–265 PacketCable, 499–500 packets LANs IP routing, 51–52 SONET/SDH networks, 335–340 VoIP, mobile cellular networks, 84 passive optical networks (PONs), 314–317 PCM (pulse code modulation), 460–461 PCS (Personal Communications Services), 18, 536–538, 544 PEAP (Protected Extensible Authentication Protocol), 76 Perfect Forward Secrecy, 167 Personal Communications Services (PCS), 18, 536–538, 544 PFS (Perfect Forward Secrecy), 167 photodiodes, 238 photons, 229 pico cells, GSM, 536
plain old telephone service (POTS), 459 pluggable optics, GBICs (Gigabit Interface Converters), 286–288 PMD (polarization mode dispersion), 249 PN Ethernet service. See VPLS polarization mode dispersion (PMD), 249 policies, WANs (long IP networks), 64 PONs (passive optical networks), 314–317 PoS/SDH (packet over SONET/SDH), 263–265 POTS (plain old telephone service), 459 power loss, optical impairments, 248 preshared keys, authentication key methods, 167 PRI (primary rate interface), 464–465 primary rate interface (PRI), 464–465 private networks. See intranet VPNs private WLANs, 73 capacity, 74–75 digital access technologies, 75–76 security, 76–78 standards, 73–74 Protected Extensible Authentication Protocol (PEAP), 76 provider-managed VPNs, 217–218 pseudowire emulation services, 202 PSTN (public switched telephone network), 63, 460 public switched telephone network (PSTN), 63, 460 public WLANs (PWLANs), 78, 81 pulse code modulation (PCM), 460–461 PWLANs (public WLANs), 78–81
Q–R QoS, IP VPNs, 164 quad shield cables, 495 radio frequencies, spectrum, 549–550 Raman amplification, ELH, 430–431 RAN, cellular network support, 83 Reconfigurable Optical Add/Drop Multiplexing. See ROADM
regeneration DWDM long-haul networks, 421–422 metro DWDM, 357 regeneration node (RN), 401 regenerators, SONET/SDH networks, 261 regulations, WANs (long IP networks), 64 reliability, LANs IP routing, 53 remote-access VPNs, 171–172 IPSec (IP security), 172–175 firewalls, 176 hardware clients, 176–177 remote-site routers, 177 software-based clients, 174–176 MPLS, 182 benefits, 185 function, 183–185 SSL (secure socket layer), 177–179 wireless, 179 hardware-based, 181 security, 182 software-based, 180 repeaters, SONET/SDH networks, 261 residential loops, 459 Resilient Packet Ring. See RPR revisions, 802.11 standard comparison, 558–559 revision a, 556–557 revision b, 555–556 revision g, 558 RFC 1662, 263 RFC 1918 private addressing, 39 RFC 2002, 70 RFC 2615, 264 RFC 791, 38 RIP (Routing Information Protocol), 52 RN (regeneration node), 401 ROADM (Reconfigurable Optical Add/Drop Multiplexing), 252, 354, 408 long-haul optical networks, 408–410 metro DWDM, 354–355 roaming, Mobile IP, 72
routing LANs (local area networks), 50–51 application multiplexing, 54–55 global addressing, 52 packets, 51–52 TCP/IP, 53 windowing flow control, 53 multiservice networks, 125–126 Cisco CRS-1 Carrier Routing System, 126–131 Cisco IOS XR Software, 132–133 Cisco XR 12000 Series Routers, 133–138 Routing Information Protocol (RIP), 52 RPR (Resilient Packet Ring), 266, 341–342 auto-topology discovery, 342 bandwidth efficiency, 342 Ethernet, 284–285 infrastructure transparency, 342 IP service enablers, 343–346
S sampled grating DBRs (SGDBRs), 416 SAs (security associations), 166 satellite wireless, 566 scalability, IP VPNs, 164 SDH (Synchronous Digital Hierarchy), 104, 257–258 data challenges, 266 Ethernet, 280–281 GFP standard, 281–282 LCAS, 283 VCAT (Virtual Concatenation), 282–283 hierarchy, 260 network elements, 261–262 origins, 258–259 Pos/SDH (packet over SONET/SDH), 263–265 SDSL (Symmetric DSL), 477 second generation systems (2G), 542 secure socket layer (SSL), 177–179 security IP VPNs, 164
IPSec (IP security), VPNs, 165–171 wireless VPNs, 182 WLANs (wireless LANs), 76–78 security associations (SAs), 166 service aggregation, multilambda networks, 242 service POP, 311, 327–330 services metro Ethernet, 363–364 attribute summary, 367–369 EMS, 366–367 EPL, 364–365 EPR, 366 ERMS, 367 ERS, 365 EWS, 365 MPLS, 121–122 next-generation networks, 25–26 convergence, 28–29 infrastructure convergence, 26–28 transformation from technology push, 29–30 OOO, 441–442 pull, 5–7 WANs (long IP networks), 69 SGDBRs (sampled grating DBRs), 416 SGM (Signaling Gateway Manager), 81 SHDSL (Single-Pair High-Rate DSL), 477, 485–486 short message service (SMS), 83 SIDH (System Identification Code for Home System), 528 Signaling System 7 (SS7), 464 Signaling Gateway Manager (SGM), 81 signal-to-noise ratio (SNR), 479 SIM (subscriber identity module), 536 single-mode fiber (SMF), 234–238 Single-Pair High-Rate DSL (SHDSL), 477, 485–486 site-to-site VPNs, 186–188 SMF (single-mode fiber), 234–238 SMS (short message service), 83 SNA (Systems Network Architecture), 37–38 SNAP (Subnetwork Access Protocol), 105
SNR (signal-to-noise ratio), 479 soft handoffs, 531 software-based VPN clients, 174, 176 software-based wireless VPNs, 180 SONET (Synchronous Optical Network), 104, 257–258 data challenges, 266 Ethernet, 280–281 GFP standard, 281–282 LCAS, 283 VCAT (Virtual Concatenation), 282–283 hierarchy, 259–260 network elements, 261–262 origins, 258–259 Pos/SDH (packet over SONET/SDH), 263–265 SONET/SDH (Synchronous Optical Network/ Synchronous Digital Hierarchy), 257 metropolitan optical networks, 331–332 GFP, 333 LCAS, 333–334 packet movement, 335–340 VCAT, 332 Source Specific Multicast (SSM), 210 spatial reuse protocol. See SRP splitterless ADSL, 476 SRP (spatial reuse protocol), 271 nodes, 267 protocol, DPT architecture, 271–273 SS7 (Signaling System 7), 464–466 SS7 Signaling over IP (SS7oIP), 83 SS7oIP (SS7 Signaling over IP), 83 SSL (secure socket layer), 177–179 SSM (Source Specific Multicast), 210 standards broadband cable, 496–497 DOCSIS 1.0, 497–498 PacketCable, 499–500 cellular, 534–536 CDMA2000, 537–538 GSM, 536–537 IMT-2000, 539–541
PCS, 538 UMTS, 539 IEEE, 75 IEEE 802.11x, 74 WLANs (wireless LANs), 73–74 statistical multiplexing DPT (Dynamic Packet Transport), 266–268 benefits, 274 SRP protocol, 271–273 RPR (Resilient Packet Ring), 266–268 802.17 protocol, 268–271 benefits, 274 storage networks, metropolitan optical networks, 377 ESCON, 380–381 fibre channel, 377–379 FICON, 381–383 STS, bandwidth scaling, 141 submarine long-haul optical networks, 435–438 subnets, 39 Subnetwork Access Protocol (SNAP), 105 subnetworking, 39 Subrate Gigabit Ethernet, 280 subscriber identity module (SIM), 536 switching ATM next-generation multiservice networks, 108–110 LANs (local area networks) Layer 2, 55–56 Layer 3, 56–57 multilayer, 58–60 optimizing multilayer, 60–62 metro, service POPs, 328 Symmetric DSL (SDSL), 477 Synchronous Digital Hierarchy. See SDH Synchronous Optical Network. See SONET Synchronous Optical Network/Synchronous Digital Hierarchy. See SONET/SDH System Identification Code for Home System (SIDH), 528 Systems Network Architecture (SNA), 37–38
T T3s, over copper, 462 TCP (Transmission Control Protocol), 13, 54 TCP/IP (Transmission Control Protocol/IP protocol), 47 application multiplexing, 54–55 LANs IP routing, 53 windowing flow control, 53 TDM (time division multiplexing), 461 digital technology, 460–461 services POP, 328 TDMA (Time Division Multiple Access), 18, 529, 540 digital cellular technology, 529–530 single-carrier, 540 TD-SCDMA (time-division synchronous code division multiple access), 532, 544, 547 technologies, 170–171, 444–447, 450–452 advancement, 11–12 IP (Internet Protocol) growth, 12–14 optical communications growth, 14–17 wireless communications, 17–20 cable for broadband media, 494–495 cellular analog, 524–528 digital, 529–533 IP (Internet Protocol), 93 business drivers, 96–100 network summary, 95–96 sharing, 36–38 viewpoints, 93–95 LANs (local area networks), 46–48 long-haul optical networks, 402 Cisco ONS 15454 MSTP, 405–408 Cisco ONS 15808 DWDM System, 402–405 ROADM, 408–410 metro DWDM, 351–352 ROADM, 354–355 tunable, 352–353 metropolitan optical networks, 383–385, 388–391
multiservice networks, 150 optical networking, 300–302 push, 5, 29–30 share, 36 VPNs, 218–223 WANs (long IP networks), 65 Ethernet, 68–69 Frame Relay, 65–67 VPNs, 67–68 wireless networks, 566–570 wireline networks, 510–515 WLAN 802.11, 83–84 TED (Tunnel Endpoint Discovery), 188–189 telecommunications government regulation, 8–11 technological advancement, 11–12 IP (Internet Protocol) growth, 12–14 optical communications, 14–17 wireless communications, 17–20 wireline networks, narrowband, 474 Telecommunications Reform Act of 1996, 8 telephony, narrowband, 459 aggregation layer through DLC, 472–474 digital technology, 460–461 DS1, 461–463 frame relay, 467–472 ISDN, 464–467 residential loop, 459 telecommunications, wireline networks, 458 terminal multiplexers, SONET/SDH networks, 261 terminal nodes, long-haul optical networks, 400 third-generation systems (3G), 542–543 time division multiple access. See TDMA time division multiplexing. See TDM time division synchronous CDMA (TD-SCDMA), 532, 544, 547 TLS (transparent LAN services), 363 Token Ring, 47 Tomlin, Lily, 8 topologies metro DWDM, 356
WANs (long IP networks), 65 Ethernet, 68–69 Frame Relay, 65–67 VPNs, 67–68 Transmission Control Protocol (TCP), 13, 54 Transmission Control Protocol/IP protocol. See TCP/IP transparency, multilambda networks, 242 transparent LAN Services (TLS), 363 transponders DWDM design, 251 long-haul optical networks, 402 transport mode, IPSec (IP security), 170 TTLS (Tunneled Transport Layer Security), 77 tunable components, DWDM long-haul networks, 415–418 tunable DWDM, 352–353 tunable lithium niobate externally modulated lasers, 355 Tunnel Endpoint Discovery (TED), 188–189 tunnel mode, IPSec (IP security), 168–169 Tunneled Transport Layer Security (TTLS), 77
U ULH (Ultra Long-Haul Optical Networks), 432–433 amplification, 434 data modulation, 435 dispersion management, 433–434 laser accuracy, 433 OXC architectures, 434–435 Ultra Long-Haul Optical Networks. See ULH Ultra-Wideband (UWB), 561–562 umbrella cells, GSM, 536 UMTS (Universal Mobile Telecommunications Systems), 539 uniform resource locator (URL), 7 Universal Mobile Telecommunications Systems (UMTS), 539 Universal Terrestrial Radio Access (UTRA), 540
URL (uniform resource locator), 7 UMTS, cellular standards, 539 UTRA (Universal Terrestrial Radio Access), 540 UWB (Ultra-Wideband), 561–562
V Variable-length Subnet Masking (VLSM), 39 VCAT (Virtual Concatenation), 282–283, 332 VCSELs (vertical-cavity surface-emitting lasers), 416 VDSL (Very High Data Rate DSL), 474, 477, 486–490 vector orthogonal frequency division multiplexing (VOFDM), 564 vertical-cavity surface-emitting lasers (VCSELs), 416 Very High Data Rate DSL (VDSL), 474, 477, 486–490 video IP (Internet Protocol), converged networks, 44 service POPs, 329 Virtual Concatenation (VCAT), 282–283, 332 Virtual Private LAN Service. See VPLS virtual private networks. See VPNs virtual private wire service (VPWS), 195 VLSM (Variable-length Subnet Masking), 39 VOFDM (vector orthogonal frequency division multiplexing), 564 voice IP (Internet Protocol) converged networks, 44 wireline networks, 457–458, 509 broadband, 475–509 narrowband, 458–474 next-generation networks, 23–24 VoIP, packet-based, 84 VPLS (Virtual Private LAN Service), 195, 363 MPLS Layer 2 VPNs, 198 Cisco IOS, 202 hierarchical, 201
logical mode, 198–200 need for, 198 VPNs (virtual private networks), 161 access VPNs, 171–172 IPSec (IP security), 172–177 MPLS, 182–185 SSL (secure socket layer), 177–179 wireless, 179–182 enterprise-managed, 216–217 Extranet VPNs, 211–213 Frame Relay to ATM internetworking, 161–163 intranet VPNs, 186 IPSec designs, 188–189 L2TPv3, 202–204 MPLS Layer 2, 194–202 MPLS Layer 3, 190–194 MVPNs, 205–210 site-to-site, 186–188 IP networks, 163–165 IPSec (IP security), 165–166 data forwarding, 168–170 headers, 166–168 technologies, 170–171 multiservice, 213–216 next-generation networks, 22–23 provider-managed, 217–218 service POPs, 328 technologies, 218–223 WANs (long IP networks), 67–68 VPWS (virtual private wire service), 195
W WANs (long IP networks), 62–63 architecture changes, 64 bandwidth, 63 regulatory policy changes, 64 technologies, 65 Ethernet, 68–69 Frame Relay, 65–67 VPNs, 67–68 wavelength division multiplexing (WDM), 242–244
wavelengths, 402, 422–424 capacity, multilambda networks, 242 DWDM long-haul networks, 422–424 low-loss, 240 service POPs, 329 services, intelligent DWDM, 408 WCDMA (Wideband CDMA), 18, 532, 544–546 WDM (wavelength division multiplexing), 242–244 websites Cellular Telecommunications & Internet Association, 17 Cisco, 43 Internet Software Consortium, 13 WEP (Wired Equivalent Privacy), 76 Wideband CDMA (WCDMA), 18, 532, 544–546 wideband digital access cross-connects, SONET/ SDH networks, 262 Wi-Fi, advancements, 17–20 Wired Equivalent Privacy (WEP), 76 wireless communications, advancement, 17–20 Wireless LAN Solution Engine (WLSE), 81 wireless local area networks. See WLANs wireless mobilities, 4 wireless networks cellular mobility, 523–524 analog technology, 524–528 call transmission, 551 data overlay, 544–549 digital technology, 529–533 functional generations, 541–543 radio frequency spectrum, 549–550 standards, 534–541 next-generation networks, 24–25 technologies, 566–570 WLANs. See WLANs wireless optics, WLANs, 562–563 wireless personal area networks. See WPANs wireless remote-access VPNs, 179 hardware-based, 181 security, 182 software-based, 180
wireline networks, 457–458, 509 broadband cable, 493–502 DSL, 475–490 DSLAM, 490–492 Ethernet, 502–509 narrowband, 458–459 aggregation layer through DLC, 472–474 digital technology, 460–461 DS1, 461–463 frame relay, 467–472 ISDN, 464–467 residential loop, 459 next-generation networks, 23–24 WLAN 802.11, mobile cellular networks, 83–84 WLANs (Wireless LANs), 73, 523, 552, 560 802.11 standard, 553–555 comparing revisions, 558–559 diffused infrared, 553 DSSS, 554–555 FHSS, 553–554 revision a, 556–557 revision b, 555–556 revision g, 558 802.16 standard, 559–560 fixed wireless, 563 LMDS (Local Multipoint Distribution Service), 565 MMDS (multichannel multipoint distribution service), 564–565 VOFDM (vector orthogonal frequency division multiplexing), 564 OFDM (Orthogonal Frequency Division Multiplexing), 555 private, 73 capacity, 74–75 digital access technologies, 75–76 security, 76–78 standards, 73–74 public, 78–81 satellite wireless, 566 wireless optics, 562–563
WPANs (wireless personal area networks) Bluetooth, 560–561 UWB, 561–562 WPANs (wireless personal area networks), 523 Bluetooth, 560–561 UWB (Ultra-Wideband), 561–562 WLSE (Wireless LAN Solution Engine), 81
X–Y–Z XR 12000 Series Routers, 133–134 architecture, 134–136 capacities, 136–138 zero water peak fiber, 249
Cisco Press
SAVE UP TO 30% Become a member and save at ciscopress.com!
Complete a user profile at ciscopress.com today to become a member and benefit from discounts up to 30% on every purchase at ciscopress.com, as well as a more customized user experience. Your membership will also allow you access to the entire Informit network of sites. Don’t forget to subscribe to the monthly Cisco Press newsletter to be the first to learn about new releases and special promotions. You can also sign up to get your first 30 days FREE on Safari Bookshelf and preview Cisco Press content. Safari Bookshelf lets you access Cisco Press books online and build your own customized, searchable electronic reference library. Visit www.ciscopress.com/register to sign up and start saving today! The profile information we collect is used in aggregate to provide us with better insight into your technology interests and to create a better user experience for you. You must be logged into ciscopress.com to receive your discount. Discount is on Cisco Press products only; shipping and handling are not included.
Learning is serious business. Invest wisely.
THIS BOOK IS SAFARI ENABLED INCLUDES FREE 45-DAY ACCESS TO THE ONLINE EDITION The Safari® Enabled icon on the cover of your favorite technology book means the book is available through Safari Bookshelf. When you buy this book, you get free access to the online edition for 45 days. Safari Bookshelf is an electronic reference library that lets you easily search thousands of technical books, find code samples, download chapters, and access technical information whenever and wherever you need it.
TO GAIN 45-DAY SAFARI ENABLED ACCESS TO THIS BOOK:
• Go to http://www.ciscopress.com/safarienabled
• Enter the ISBN of this book (shown on the back cover, above the bar code)
• Log in or Sign up (site membership is required to register your book)
• Enter the coupon code found in the front of this book before the “Contents at a Glance” page
If you have difficulty registering on Safari Bookshelf or accessing the online edition, please e-mail [email protected].