ITS Networking & Telecommunications

Distribution Layer Switch Upgrade

U-M Backbone at Ann Arbor Campus

The campus UMnet Backbone network architecture was modified dramatically in 2002 to provide two Distribution Layer switches in each building or building complex. This modification offered a number of management advantages, as well as consolidating LANs and minimizing individual customer connections to the backbone. The project was centrally funded, with no intent to centrally fund future new installations or upgrades. Engineering was accomplished by ITS Engineering staff under the auspices of the U-M Network Working Group. Maintenance and management have been handled by UMnet Administration. This model has worked well for several years.

As the need to replace aged Distribution Layer switches approached, the Network Working Group re-evaluated the expected functionality of the switches. It became apparent that better performance and less downtime could be achieved by pushing Layer 3 from the Core to the Distribution Layer. Also, the Distribution Layer architecture needed to be consistent and conform to a set of well-defined standards for administrative, maintenance, and performance reasons. The Minimum Configuration for Distribution Layer Switches as set down by the Network Working Group is in Appendix A.

In addition to a new configuration model for the architecture, a new funding model was developed and approved by the Provost's Office.

Approved Funding Model

At the simplest level, ITS will capital fund one (1) Distribution Layer switch per building or complex of buildings in accordance with the guidelines set forth in Appendix A. Any building or complex of buildings wishing to continue use of a second (redundant) Distribution Layer switch will be required to fund the second Distribution Layer switch. ITS will capital finance the second Distribution Layer switch for the department and will recover equipment and financing costs through a monthly recurring charge (MRC) to the department requesting the second Distribution Layer switch. Desired features or functionality in excess of those identified in Appendix A will be the responsibility of the owner of the building or complex of buildings. The second Distribution Layer switch needs to be upgraded at the same time as the primary Distribution Layer switch.

ITS will continue to fund the Service Agreements for all Distribution Layer switches, as well as continue to manage, administer, maintain, and support the Distribution Layer switch environment. For security reasons, ITS will maintain sole management of the Distribution Layer switch environment.

The 4006-series Distribution Layer switches will be upgraded in FY09, the 7603 series in FY10, and the balance in FY11 and FY12. The budgeted life cycle of these devices will be five years. This model is proposed as the solution for the current replacement cycle and is subject to change for the next replacement cycle (FY14–FY18).

The initial Distribution Layer switch implementation in 2002 was funded by ITS surplus funds in an attempt to spend down the reserve account to meet federal guidelines for reserve accounts. The costs for this upgrade will be recovered through debt funding with the debt being financed via the backbone cost recovery model over the next five years.

This Distribution Layer switch upgrade is applicable to UMnet customers directly connected to the campus backbone. It is not applicable to the U-M Health System or School of Engineering—these campus units manage and fund their own backbones.

Standard Circuit

The standard interface from the DL switch to the backbone core router is a 1 Gigabit Ethernet (1GE) circuit. Units needing a faster link may purchase a 10 Gigabit Ethernet (10GE) circuit through ITS. See UMnet Backbone Circuit Rates.
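As an illustration, the uplink from a DL switch to one of the two core routers might be configured roughly as follows. This is a hypothetical Cisco IOS-style sketch; the interface numbering and IP addressing are invented for illustration and are not taken from this document.

```
! Hypothetical DL-switch uplink to one backbone core router.
! Interface name and addresses are illustrative only.
interface GigabitEthernet1/1
 description UMnet backbone uplink to core router A
 no switchport                        ! routed (Layer 3) link to the core
 ip address 192.0.2.2 255.255.255.252
 no shutdown
```

With Layer 3 pushed from the Core to the Distribution Layer, each of the two 1GE uplinks terminates on a different core device, so the loss of one core router or circuit leaves the building connected.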


Frequently asked questions related to this upgrade project.

UMnet Backbone Funding Model Prior to FY09

UMnet Backbone Funding Model as of FY09

Appendix A

Minimum Configuration for Distribution Layer Switches


The UMnet Backbone comprises two layers—the Core and the Distribution Layer. The Distribution Layer is a set of two Distribution Layer switches that reside in each Ann Arbor campus building or building complex and extend the backbone to that building or complex. The Distribution Layer switches provide a point of connection for local switches/networks in the building. Each set of Distribution Layer switches has two one-Gigabit Ethernet connections to the backbone core. These connections are to two different core devices for redundancy. The Distribution Layer switches are maintained and operated by UMnet Administration.

The first sets of Distribution Layer switches were installed beginning May 14, 2002. New Distribution Layer switches were added as new buildings were built or when it was determined that network connectivity was needed in an existing building not already supported by a Distribution Layer switch.

Purpose and Scope

The purpose of this document is to outline the baseline configuration of the Distribution Layer switches being deployed during the current replacement cycle (summer 2008–summer 2011). The U-M Network Working Group developed and endorsed the minimum configuration as a means to provide a reliable backbone without being extravagant with the features.

Some departments may see a need for a Distribution Layer switch with more capabilities than those outlined in this document. An example would be a department requiring ten-Gigabit Ethernet (10GE) links from the Access Layer to Distribution Layer switches. This does not contradict these guidelines; however, the unit desiring additional features will be responsible for cost differentials above the baseline configuration.

Physical Characteristics

  • Height: desired: 1-2 Rack Units (RUs), 4RU maximum
  • Width: 19" (maximum)
  • Depth:
  • Physical Ports:
    • Minimum of 24 SFP (Gigabit Ethernet) ports, with the ability to support 10/100 copper SFPs.
    • Support for third-party SFPs is desirable.
    • This is based on an average of one 1GE circuit per LDF.
    • Fewer ports may be warranted in buildings where the DL and AL are the same box.
    • For buildings requiring more ports, additional chassis can be clustered/stacked.
    • Upgradable to two 10GE ports.
  • Console port
  • Power: 110VAC/20Amp
  • Redundant power supply (required in absence of second DL)
  • Stacking/Clustering capability to create one virtual DL from multiple chassis.
    • Minimum Interconnect speed of 32 Gbps

Layer1 Features & Protocols

  • 802.3af Power Over Ethernet (POE) for DL/AL combo switches
  • 802.3z (Gigabit Ethernet)
  • 802.3ab (1000Base-T)
  • 802.3ac (vLAN Tagging)
  • 802.3ad (Link Aggregation)
  • 802.3ae (10Gbps Ethernet)
  • Does not lock GBIC, SFP, XFP, or XENPAK modules to vendor code
  • Port Mirroring (minimum of 4 mirrors on any port/vLAN)
  • Port
    • Description field
    • Speed setting
    • Duplex settings
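The per-port description, speed, and duplex settings listed above might look like the following on a typical switch CLI. This is an illustrative Cisco IOS-style fragment; the port number and the choice of a 10/100 copper SFP are assumptions, not requirements from this document.

```
! Example: a 10/100 copper SFP port pinned to a known speed and duplex.
interface GigabitEthernet1/24
 description LDF 2-North, uplink to access switch   ! description field
 speed 100                                          ! speed setting
 duplex full                                        ! duplex setting
```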

Layer2 Protocols

  • MAC Access-lists
  • 802.1Q/p (vLAN tagging/trunking)
  • Jumbo Frames
    • 9K MTU
    • Supported on Layer 2 and Layer 3 interfaces
  • Storm Control
    • Broadcast
    • Multicast
    • Unicast
  • Spanning Tree Protocol (STP) 802.1D
    • Cost on port
    • Root Guard
    • Extended ID
    • Per vLAN Spanning Tree (PVST+)
    • Rapid Per vLAN Spanning Tree (Rapid-PVST+)
    • Rapid Spanning Tree 802.1w
    • Multiple Instance Spanning Tree 802.1s
  • MAC Address table aging
  • 802.1x Authentication
    • Support for dynamic vLAN assignment (via RADIUS attribute)
    • No-Auth vLAN assignment
    • Web Auth
  • Multicast
    • IGMP Snooping versions 1, 2, and 3
    • IGMP filter
  • VMPS
  • UDLD
  • VTP versions 1, 2, and 3, or GVRP
  • 802.1AE link-layer encryption (desirable)
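Several of the Layer 2 items above are per-port settings. The following hedged Cisco IOS-style sketch shows Rapid-PVST+ with Root Guard and storm control together; the port number, cost, and suppression levels are illustrative values, not mandated by this configuration.

```
! Illustrative Layer 2 settings on one downstream port.
spanning-tree mode rapid-pvst         ! Rapid Per-vLAN Spanning Tree
interface GigabitEthernet1/10
 spanning-tree cost 4                 ! cost on port
 spanning-tree guard root             ! Root Guard: ignore superior BPDUs
 storm-control broadcast level 5.00   ! suppress broadcast storms above 5%
 storm-control multicast level 5.00
```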

Layer3 Features & Protocols

  • GRE tunnels
  • IP access-lists (Spoofing filters) (Wire Speed)
  • Policy-Based Routing (Wire speed)
  • Bootp/DHCP forwarding (per vLAN)
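Per-vLAN BOOTP/DHCP forwarding is commonly done with a helper address on each routed vLAN interface. A hypothetical Cisco IOS-style sketch follows; the vLAN ID, subnet, and DHCP server address are invented for illustration.

```
! Relay DHCP/BOOTP broadcasts from vLAN 100 to a central server.
interface Vlan100
 ip address 10.0.100.1 255.255.255.0
 ip helper-address 10.10.1.5    ! forward client broadcasts as unicast
```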

Routing Protocols

  • VRF-lite (Multiple forwarding tables, minimum of 3 instances)
  • IPv4: OSPFv2, IS-IS, RIPv2, Static
  • IPv6: OSPFv3, IS-IS, RIPng, Static
  • Multicast Routing
    • PIM-SM, SSM
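VRF-lite and OSPFv2 might be combined roughly as follows. This is a sketch with invented VRF, vLAN, and address values; exact commands vary by platform and software release.

```
! One of the (minimum three) separate forwarding tables.
ip vrf DEPT-LAB
 rd 65000:100
!
interface Vlan200
 ip vrf forwarding DEPT-LAB           ! place this vLAN in the VRF
 ip address 10.0.200.1 255.255.255.0
!
router ospf 100 vrf DEPT-LAB          ! OSPFv2 instance within the VRF
 network 10.0.200.0 0.0.0.255 area 0
```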

Management Features

  • Netflow/sflow
    • export v9
  • Micro flow throttling
  • Logging
    • Ability to send to multiple syslog servers
    • Buffered
    • Source Interface
    • Facilities setting
    • Timestamps
  • RADIUS
    • Authentication
    • Source Interface
    • Custom server ports
  • TACACS+
    • Authorization
    • Source interface
  • NTP
  • SSHv2
    • With ACLs
    • Login banner
    • Escape Characters
  • Telnet
    • With ACLs
    • Login banner
    • Escape Characters
    • Ability to disable
  • Console
    • Escape Characters
    • Ability to disable logging
  • Web interface
    • Ability to disable
  • TFTP
    • File upload and download
  • SNMP
    • Multiple SNMP communities
    • Location
    • Contact
    • Persistent interfaces
    • Versions 1, 2, 3
    • SNMP trap on suppression limits
  • Debug
    • All technologies in switch
    • Timestamps
  • Password encryption
  • Multiple loopback addresses
  • DHCP
    • Snooping per vLAN
    • Snooping rate limit per port
    • Trusted source port(s)
  • IP Source verification
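The DHCP snooping and IP source verification items above could be expressed as follows. This is an illustrative Cisco IOS-style fragment; the vLAN number, port numbers, and rate limit are assumptions chosen for the example.

```
! DHCP snooping with a trusted uplink and rate-limited access ports.
ip dhcp snooping
ip dhcp snooping vlan 100            ! snooping per vLAN
interface GigabitEthernet1/1
 ip dhcp snooping trust              ! trusted source port (toward server)
interface GigabitEthernet1/10
 ip dhcp snooping limit rate 15      ! snooping rate limit per port
 ip verify source                    ! IP source verification
```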