Cisco CCNP Enterprise 300-420 ENSLD – Discovering SD Access Architecture Part 2
SD-Access Node Roles
Each fabric site includes a supporting set of control plane nodes, edge nodes, border nodes, and wireless LAN controllers, sized appropriately from the listed categories. Cisco ISE Policy Service Nodes are also distributed across the sites to meet survivability requirements. In a single physical network, multiple fabric sites can be deployed; in this case, the individual fabric elements (control plane nodes, border nodes, edge nodes, and WLCs) are assigned to a single site only. Customers can achieve higher precision and faster investigation, and can leverage the latest Cisco networking capabilities to avoid, stop, or mitigate threats faster than ever before. Cisco Digital Network Architecture (Cisco DNA) is the industry's first network with the ability to find threats in encrypted traffic using encrypted traffic analytics. In Cisco SD-Access, using Cisco DNA Center automation, switches in the extended node role are connected to the fabric edge using an 802.1Q trunk over an EtherChannel with one or multiple physical members, and are discovered using zero-touch Plug and Play. Endpoints, including fabric APs, connect to the extended node switch.
VLANs and SGTs are assigned using host onboarding as part of fabric provisioning, and scalable group tagging policy is enforced at the fabric edge. The solution elements are as follows. Network automation: simple GUI and intent-based automation of fabric devices. Network assurance: data collectors analyze endpoint-to-application flows and monitor fabric status. Identity services: NAC and identity systems such as Cisco ISE for dynamic endpoint-to-group mapping and policy definition. Control plane nodes: a map system that manages endpoint-ID-to-device relationships. Fabric border nodes: devices that connect external Layer 3 networks to the SD-Access fabric. Fabric edge nodes: devices (at the access or distribution layer) that connect wired endpoints to the SD-Access fabric. Wireless controller: a fabric device that connects APs and wireless endpoints to the SD-Access fabric. The Cisco DNA Center software, including the SD-Access application package, is designed to run on the Cisco DNA Center appliance.
The appliance is available in form factors sized to support not only the SD-Access application but also new capabilities as they become available. In the simplest form, identity management can be used for authenticating users, adding embedded security functions and application visibility. Network devices provide telemetry for advanced policy definitions that can include additional context such as physical location, device used, type of access network, application used, and time of day. The SD-Access solution combines the Cisco DNA Center software, identity services, and wired and wireless fabric functionality. A fabric site is composed of an independent set of fabric control plane nodes, edge nodes, intermediate (transport-only) nodes, and border nodes. High availability goes hand in hand with site survivability: a site with a single fabric border, control plane node, or wireless controller risks a single point of failure in the event of a device outage.
Fabric wireless controllers manage and control the fabric-mode APs using the same model as the traditional centralized model of local-mode controllers, offering the same operational advantages such as mobility control and radio resource management. The Role of the Control Plane Node. The control plane node runs a host tracking database to map location information. The control plane enables different network users and applications to be securely isolated. The SD-Access fabric control plane node is based on the LISP (Locator/ID Separation Protocol) map server and map resolver functionality combined on the same node. The control plane database tracks all endpoints in the fabric site and associates the endpoints with fabric nodes, decoupling the endpoint IP address or MAC address from its location (the closest router) in the network. The control plane node functionality can be colocated with a border node, or dedicated nodes can be used for scale; between two and six nodes (the higher counts for wired-only deployments) are used for resiliency.
Border and edge nodes register with and use all control plane nodes, so the resilient nodes chosen should be of the same type for consistent performance. The control plane node enables the following functions. Host tracking database: the host tracking database (HTDB) is a central repository of EID-to-fabric-edge-node bindings. Map server: the LISP map server (MS) is used to populate the HTDB from registration messages from fabric edge devices. Map resolver: the LISP map resolver (MR) is used to respond to map queries from fabric edge devices requesting RLOC mapping information for destination EIDs. Control Plane Platform Roles and Capabilities. The SD-Access solution combines the Cisco DNA Center software, identity services, and wired and wireless fabric functionality. Within the SD-Access solution, a fabric site is composed of an independent set of fabric control plane nodes, edge nodes, intermediate (transport-only) nodes, and border nodes. Wireless integration adds fabric WLC and fabric-mode AP components to the fabric site. Choose your SD-Access network platform based on the capacity and capability required by the network, considering the recommended functional roles.
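As an illustration, the map server, map resolver, and HTDB functions described above can be sketched as a single lookup service. This is a minimal, hypothetical Python model (the EIDs and RLOCs are invented), not the LISP wire protocol:

```python
from dataclasses import dataclass, field

@dataclass
class ControlPlaneNode:
    """Toy model of the SD-Access control plane node: a LISP map server and
    map resolver sharing one host tracking database (HTDB)."""
    htdb: dict = field(default_factory=dict)  # EID -> RLOC bindings

    def map_register(self, eid: str, rloc: str) -> None:
        """Map server role: populate the HTDB from an edge node's
        LISP map-register message."""
        self.htdb[eid] = rloc

    def map_request(self, eid: str):
        """Map resolver role: answer an edge node's map query with the
        RLOC for the destination EID, or None if the EID is unknown."""
        return self.htdb.get(eid)

cp = ControlPlaneNode()
cp.map_register("10.10.1.20", "192.168.255.1")  # endpoint behind edge-1
print(cp.map_request("10.10.1.20"))             # -> 192.168.255.1
print(cp.map_request("10.10.9.9"))              # unknown EID -> None
```

The point of the model is that the border and edge nodes never need to know endpoint locations themselves; they only consult this one registry.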
Refer to the SD-Access hardware and software compatibility matrix for the most up-to-date details about which platforms and software are supported. For each version of SD-Access, your physical network design requirements drive the platform and software choices. A wide range of Cisco Catalyst 9000 Series devices, both wired and wireless, and the Catalyst 3850 and 3650 Series are supported. However, certain devices are supported only in the fabric edge, border, and control plane node roles, and the available roles may change as newer versions of Cisco DNA Center and Cisco IOS XE software are released. Additional devices such as the Cisco Catalyst 4500E, 6500, and 6800 Series and the Cisco Nexus 7700 Series are also supported, but there may be specific supervisor module, line card module, and fabric-facing interface requirements.
Also, the roles may be reduced; for example, Nexus 7700 Series software may restrict the SD-Access role to being used only as an external border, also requiring a separate control plane node. The Role of the Edge Node. The SD-Access fabric edge nodes are the equivalent of an access switch in a traditional campus LAN design. The edge node provides first-hop services for users and devices connected to a fabric. The edge nodes implement the Layer 3 access design with the addition of the following fabric functions. Endpoint registration: each edge node has a LISP control plane session to all control plane nodes. After the fabric edge detects an endpoint, it is added to a local host tracking database called the EID table. The edge device also issues a LISP map-register message to inform the control plane node of the detected endpoint so that it can populate the HTDB. Mapping of user to virtual network: endpoints are placed into virtual networks by assigning the endpoint to a VLAN associated with a LISP instance.
The mapping of endpoints into VLANs can be done statically or dynamically using 802.1X. An SGT is also assigned, and scalable group tags can be used to provide segmentation and policy enforcement at the fabric edge. Anycast Layer 3 gateway: a common gateway IP and MAC address can be used at every node that shares a common EID subnet, providing optimal forwarding and mobility across different RLOCs. LISP forwarding: instead of making a typical routing-based decision, the fabric edge nodes query the map server to determine the RLOC associated with the destination EID, then use that information as the traffic destination. If there is a failure to resolve the destination RLOC, the traffic is sent to the default fabric border, where the global routing table is used for forwarding.
The response received from the map server is stored in the LISP map-cache, which is merged into the Cisco Express Forwarding (CEF) table and installed in hardware. If traffic is received at the fabric edge for an endpoint not locally connected, a LISP solicit-map-request is sent to the sending fabric edge to trigger a new map request. This process addresses the case where the endpoint is present on a different fabric edge switch. VXLAN encapsulation and de-encapsulation: the fabric edge nodes use the RLOC associated with the destination IP address to encapsulate the traffic with VXLAN headers. Similarly, VXLAN traffic received at a destination RLOC is de-encapsulated. The encapsulation and de-encapsulation of traffic enables the location of an endpoint to change and be encapsulated with a different edge node RLOC in the network, without the endpoint having to change its address. Routing Platform Roles and Capabilities. Various routing platforms are supported as control plane and border nodes, such as the Cisco 4000 Series Integrated Services Routers and the Cisco ASR 1000-X and 1000-HX Series Aggregation Services Routers, but none can be a fabric edge node.
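The LISP forwarding behavior described above (map-cache hit, map-request on a miss, default border when resolution fails) can be sketched as follows. This is a hypothetical model with invented addresses, not actual edge-node behavior:

```python
class FabricEdge:
    """Toy model of the fabric edge forwarding decision: consult the local
    LISP map-cache, query the control plane on a miss, and fall back to the
    default border when the destination EID cannot be resolved."""

    def __init__(self, control_plane, default_border_rloc):
        self.control_plane = control_plane        # {eid: rloc} registry
        self.default_border_rloc = default_border_rloc
        self.map_cache = {}                       # merged into CEF on hardware

    def resolve(self, dest_eid):
        if dest_eid in self.map_cache:            # cache hit: already installed
            return self.map_cache[dest_eid]
        rloc = self.control_plane.get(dest_eid)   # LISP map-request
        if rloc is None:                          # unresolved: global table via border
            return self.default_border_rloc
        self.map_cache[dest_eid] = rloc           # store the map-reply
        return rloc

htdb = {"10.10.2.30": "192.168.255.2"}
edge = FabricEdge(htdb, default_border_rloc="192.168.255.254")
print(edge.resolve("10.10.2.30"))  # known EID -> RLOC of the remote edge
print(edge.resolve("8.8.8.8"))     # unknown destination -> default border RLOC
```

Note that the unresolved destination is deliberately not cached, mirroring the text: it simply follows the gateway of last resort.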
The Cisco Cloud Services Router 1000V Series is also supported, but only as a control plane node. Wireless Platform Roles and Capabilities. The Cisco Catalyst 9800 Series standalone and embedded wireless controllers and the 8540, 5520, and 3504 Series wireless LAN controllers have specific software requirements for their support. Similarly, the Cisco Catalyst 9100 and Cisco Aironet Wave 2 and Wave 1 APs call for specific software versions. Cisco ISE must be deployed with a version compatible with Cisco DNA Center. The Role of the Border Node. The border node is an entry and exit point for data traffic going into and out of a fabric. The fabric border nodes serve as a gateway between the SD-Access fabric site and the networks external to the fabric. The fabric border node is responsible for network virtualization interworking and SGT propagation from the fabric to the rest of the network.
Most networks use an external border for a common exit point from a fabric, such as for the rest of an enterprise network along with the Internet. The external border is an efficient mechanism to offer a default exit point to all virtual networks in the fabric without importing any external routes. A fabric border node can be configured as an internal border, operating as the gateway for specific network addresses, such as a shared services or data center network, where the known networks are imported into the VNs in the fabric at explicit exit points for those networks. A border node can also have a combined role as an anywhere border (both internal and external border), which is useful in networks with border requirements that cannot be supported with only external borders, where one of the external borders is also a location where specific routes need to be imported using the internal border functionality. Border nodes implement the following functions. Advertisement of EID subnets: SD-Access configures Border Gateway Protocol (BGP) as the preferred routing protocol used to advertise the EID prefixes outside the fabric, and traffic destined for EID subnets coming in from outside the fabric goes through the border nodes. These EID prefixes appear only in the routing tables at the border; throughout the rest of the fabric, the EID information is accessed using the fabric control plane. Fabric site exit point:
The external fabric border is the gateway of last resort for the fabric edge nodes; the gateway is implemented using LISP Proxy Tunnel Router functionality. Also possible are internal fabric borders connected to networks with a well-defined set of IP subnets, adding the requirement to advertise those subnets into the fabric. Mapping of LISP instance to VRF: the fabric border can extend network virtualization from inside the fabric to outside the fabric by using external VRF instances with VRF-aware routing protocols to preserve the virtualization. Policy mapping: the fabric border node also maps SGT information from within the fabric so that it can be appropriately maintained when exiting that fabric.
That SGT information is propagated from the fabric border node to the network external to the fabric, either by transporting the tags to Cisco TrustSec-aware devices using SGT Exchange Protocol (SXP) or by mapping SGTs into the Cisco metadata field in a packet using the inline tagging capabilities implemented for connections to the border node. Border Node Platform Considerations. If the chosen border nodes support the anticipated endpoint scale requirements for a fabric, it is logical to colocate the fabric control plane functionality with the borders. However, if the colocated option is not possible (for example, a Nexus 7700 lacking the control plane node function, or endpoint scale requirements exceeding the platform capabilities), then you can add devices dedicated to this functionality, such as physical or virtual routers, at a fabric site. One other consideration for separating the control plane functionality onto dedicated devices is to support frequent roaming of endpoints across fabric edge nodes. Roaming across fabric edge nodes causes control plane events involving the WLC updating the control plane nodes on the mobility of these roaming endpoints.
Typically, the core switches in a network form the border of the SD-Access fabric, and adding control plane functions to the border nodes in a high-frequency roam environment is not advisable, as it impacts both the core and the border functionality of the whole network. Internal Border Considerations. The internal border advertises endpoints to the outside and subnets to the inside. Dedicated internal border nodes are commonly used to connect the site to the data center core, while dedicated external border nodes are used to connect the site to the MAN, WAN, and Internet. Redundant routing infrastructure and firewalls are used to connect the site to external resources, and border nodes should be fully meshed to this redundant routing infrastructure and firewalls, and to each other.
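The internal-versus-external split amounts to a destination-based exit decision: prefixes known to the internal border exit there, and everything else uses the external border as the gateway of last resort. A minimal sketch with hypothetical prefixes:

```python
import ipaddress

# Hypothetical table of prefixes reachable via the internal border
# (shared services / data center); the external border carries the default.
INTERNAL_PREFIXES = {
    ipaddress.ip_network("10.100.0.0/16"): "internal-border",   # data center
    ipaddress.ip_network("10.200.10.0/24"): "internal-border",  # shared services
}
EXTERNAL_BORDER = "external-border"

def exit_point(dest: str) -> str:
    """Longest-prefix match against the internal-border prefixes; anything
    unmatched leaves the fabric via the external border (gateway of last resort)."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in INTERNAL_PREFIXES if addr in net]
    if not matches:
        return EXTERNAL_BORDER
    return INTERNAL_PREFIXES[max(matches, key=lambda n: n.prefixlen)]

print(exit_point("10.200.10.5"))  # shared services -> internal-border
print(exit_point("151.101.1.1"))  # Internet destination -> external-border
```

An anywhere border would simply hold both the specific prefixes and the default in one device.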
Although the topology depicts the border at the core, the border at a large site is often configured separately from the core, which is at another aggregation point. The large site may contain dedicated guest fabric border and control plane nodes. These devices are generally deployed with the fabric roles colocated rather than distributed, and are physically adjacent and connected to the DMZ. This provides complete control plane and data plane separation between guest and enterprise traffic, and optimizes guest traffic to be sent directly to the DMZ without the need for an anchor WLC. External Border Considerations. The external border is a gateway of last resort for any unknown destinations, and it is an efficient mechanism to offer a default exit point to all virtual networks in the fabric without importing any external routes.
The fabric-enabled WLC is integrated into the fabric for SD-Access wireless clients, and the fabric WLC integrates with the fabric control plane. The fabric-mode APs are Cisco Wi-Fi 6 (802.11ax) and 802.11ac Wave 2 and Wave 1 APs associated with the fabric WLC that have been configured with one or more fabric-enabled SSIDs. Fabric-Enabled Wireless LAN Controllers. Both fabric WLCs and non-fabric WLCs provide AP image and configuration management, session management, and mobility services. Fabric WLCs provide additional services for fabric integration: they register the MAC addresses of wireless clients into the host tracking database of the fabric control plane during wireless client join events, and they supply fabric edge RLOC location updates during client roam events.
The WLC connects to the fabric via the border, in the underlay. Fabric-enabled APs connect to the WLC via CAPWAP, using a dedicated host pool in the overlay. Fabric-enabled APs connect to the edge node over the wire using VXLAN. Wireless clients use regular host pools for data traffic and policy, the same as wired clients, and the fabric-enabled WLC registers clients with the control plane as located on the local edge node plus AP. A key difference from non-fabric WLC behavior is that fabric WLCs are not active participants in the data plane traffic forwarding role for the SSIDs that are fabric enabled; fabric-mode APs directly forward traffic to the fabric edge nodes for those SSIDs. Typically, the fabric wireless LAN controller devices connect to a shared services distribution or data center network outside the fabric and fabric border, which means that their management IP address exists in the global routing table. For the wireless APs to establish a CAPWAP tunnel for WLC management, the APs must be in a VN that has access to this external device. In the SD-Access solution, Cisco DNA Center configures wireless APs to reside within a VRF named INFRA_VN, which maps to the global routing table, avoiding the need for route leaking or a fusion router (a multi-VRF router selectively sharing routing information) to establish connectivity. Each fabric site has to have a wireless LAN controller unique to that site, and it is recommended to place the wireless LAN controller in the local site itself.
This is because of the latency requirements of SD-Access. Small to medium scale deployments of Cisco SD-Access can use the Cisco Catalyst 9800 Embedded Wireless Controller. The controller is available for the Catalyst 9300 switch as a software package update to provide a wired and wireless fabric-only infrastructure with consistent policy, segmentation, and seamless mobility, while maintaining the ease of operation of the Cisco Unified Wireless Network. The wireless control plane remains unchanged, using CAPWAP tunnels initiating on the APs and terminating on the Cisco Catalyst 9800 Embedded Wireless Controller. The data plane uses VXLAN encapsulation for the overlay traffic between the APs and the fabric edge. The Catalyst 9800 embedded wireless controller for Catalyst 9300 Series software package enables wireless functionality only for Cisco SD-Access deployments, with two supported topologies: Cisco Catalyst 9300 Series switches functioning as colocated border and control plane, and Cisco Catalyst 9300 Series switches functioning as fabric in a box. The embedded controller supports only fabric-mode access points. Fabric Mode Access Points. Fabric-mode APs continue to support the same wireless media services that traditional APs support, such as Application Visibility and Control (AVC), quality of service (QoS), and other wireless policies, and they establish the CAPWAP control plane to the fabric WLC. Fabric APs join as local-mode APs and must be directly connected to the fabric edge node switch to enable fabric registration events, including RLOC assignment via the fabric WLC. The fabric edge nodes use CDP to recognize APs as special wired hosts, applying special port configurations and assigning the APs to a unique overlay network within a common EID space across a fabric. The assignment allows management simplification by using a single subnet to cover the AP infrastructure at a fabric site. When wireless clients connect to a fabric-mode AP and authenticate into the fabric-enabled wireless LAN, the WLC updates the fabric-mode AP with the client Layer 2 VNI and an SGT supplied by ISE. Then the WLC registers the wireless client Layer 2 EID into the control plane, acting as a proxy for the egress fabric edge node switch. After the initial connectivity is established, the AP uses the Layer 2 VNI information to VXLAN-encapsulate wireless client communication on the Ethernet connection to the directly connected fabric edge switch. The fabric edge switch maps the client traffic into the appropriate VLAN interface associated with the VNI for forwarding across the fabric, and registers the wireless client IP addresses with the control plane database.
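The final step, mapping a wireless client's Layer 2 VNI to a VLAN at the fabric edge, can be sketched as a simple table lookup. The VNI and VLAN values here are invented for illustration; in a real deployment Cisco DNA Center provisions these bindings during host onboarding:

```python
# Hypothetical Layer 2 VNI-to-VLAN bindings on a fabric edge switch.
VNI_TO_VLAN = {
    8190: 1021,  # Campus VN wireless clients
    8191: 1022,  # Guest VN wireless clients
}

def map_wireless_frame(l2_vni: int) -> int:
    """De-encapsulate a VXLAN frame from a fabric-mode AP and place the
    client traffic into the VLAN bound to its Layer 2 VNI."""
    try:
        return VNI_TO_VLAN[l2_vni]
    except KeyError:
        raise ValueError(f"no VLAN bound to L2 VNI {l2_vni}") from None

print(map_wireless_frame(8190))  # -> 1021
```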
Cisco DNA Center offers a complete central network management system with a single pane of glass for all devices: end-to-end health information in real time, granular visibility, and simplified workflows; automation for provisioning, with zero-touch deployment, device lifecycle management, and policy enforcement; analytics for assurance, which verify the intent of network settings, proactively resolve issues, and reduce time spent troubleshooting; and a platform for extensibility, to integrate with third-party solutions through APIs, to integrate and customize now, and to evolve operational tools and processes. The following are the components of Cisco DNA Center. Cisco APIC-EM: Cisco DNA Center runs on the Cisco software-defined networking (SDN) controller, the Cisco Application Policy Infrastructure Controller Enterprise Module (APIC-EM). In the Cisco DNA context, Cisco APIC-EM is the automation controller that plays a critical role in automating network operations.
Cisco Network Data Platform: the Cisco Network Data Platform (NDP) streamlines network data analytics and helps you focus on your business goals by simplifying the collection and correlation of network data and offering a rich set of insights. In the Cisco DNA context, NDP is the enabling layer for the deployment and management of the next-generation digital network and is the underpinning of the assurance tasks that are available in Cisco DNA Center. Cisco ISE: the Cisco Identity Services Engine is a security policy management platform that simplifies the delivery of consistent, secure access control across wired, wireless, multivendor networks and remote access environments. Cisco DNA Center Service Components. When the network is operational, the devices that make up the fabric communicate with ISE to approve or deny access to the network by devices, users, or applications.
The devices also communicate with assurance to monitor the health of the devices and applications that make up the network fabric. In the gray box, you see the contents of the Cisco DNA Center appliance. At the top you have Cisco DNA Center, the single pane of glass for the network, where the four-step workflow process for an SD-Access network is performed; the steps are design, provisioning, policy, and assurance. On the left you have ISE, which is responsible for policy creation and management. Cisco DNA Center is also known as the abstraction layer, because it abstracts the complexity of the network design and configuration. The configuration steps are passed down from Cisco DNA Center to the automation layer, also known as the orchestration layer. Automation pushes the configuration to the network devices to create the fabric using NETCONF, SNMP, and SSH; to ISE to create the network policy; and to assurance for monitoring the network. Cisco DNA Center Architecture.
SD-Access is enabled with an application package that runs as part of the Cisco DNA Center software for designing, provisioning, applying policy, and facilitating the creation of an intelligent campus wired and wireless network with assurance. Using Cisco DNA Center to automate the creation of virtual networks reduces operational expenses, coupled with the advantage of reduced risk with integrated security and improved network performance provided by the assurance and analytics capabilities within the SD-Access architecture. Cisco DNA Center and Cisco ISE work in unison to provide the automation for planning, configuration, segmentation, identity, and policy services. Cisco ISE is responsible for device profiling, identity services, and policy services, dynamically exchanging information with Cisco DNA Center. Cisco DNA Center consists of the automation and assurance components that work in unison to form a closed-loop automation system, enabling the configuration, monitoring, and reporting required to realize the full extent of intent-based networking (IBN) in campus environments.
Cisco DNA Center centrally manages the major configuration and operations workflow areas. Design: configures device global settings, network site profiles for physical device inventory, DNS, DHCP, IP addressing, the software image repository and management, device templates, and user access. Policy: defines business intent for provisioning into the network, including creation of virtual networks, assignment of endpoints to virtual networks, and policy contract definitions for groups, and configures application policies. Provision: provisions devices and adds them to inventory for management, supports Cisco Plug and Play, and creates fabric domains, control plane nodes, border nodes, edge nodes, fabric wireless, Cisco Unified Wireless Network wireless, transits, and external connectivity. Assurance: enables proactive monitoring and insights to confirm that user experience meets configured intent, using network, client, and application health dashboards, issue management, and sensor-driven testing. Platform: allows programmatic access to the network and system integration with third-party systems using APIs, with feature set bundles, configurations, a runtime dashboard, and a developer toolkit.
Cisco DNA Center supports integration using APIs; for example, Infoblox and BlueCat IP address management, and policy enforcement integration with ISE, are available through Cisco DNA Center. A comprehensive set of northbound REST APIs enables automation, integration, and innovation: all controller functionality is exposed through northbound REST APIs, so organizations and ecosystem partners can easily build new applications, and all northbound REST API requests are governed by the controller's RBAC mechanism. Cisco DNA Center Workflow for SD-Access. At the heart of automating SD-Access is Cisco DNA Center: SD-Access is enabled with an application package that runs as part of the Cisco DNA Center software for designing, provisioning, applying policy, and facilitating the creation of an intelligent campus wired and wireless network with assurance. Cisco DNA Center centrally manages the major configuration and operations workflow areas described earlier: design, policy, provision, assurance, and platform.
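As a sketch of how the northbound REST APIs are consumed, the following builds the token and device-inventory requests for the published Cisco DNA Center Intent API paths (`/dna/system/api/v1/auth/token` and `/dna/intent/api/v1/network-device`). The host name and credentials are placeholders, and the exact paths and response fields should be checked against your Cisco DNA Center version:

```python
import base64
import urllib.request

DNAC = "https://dnac.example.local"  # placeholder DNA Center address

def auth_request(host: str, user: str, password: str) -> urllib.request.Request:
    """Build the token request: POST /dna/system/api/v1/auth/token with
    HTTP basic authentication; the JSON response carries a "Token" field."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{host}/dna/system/api/v1/auth/token",
        method="POST",
        headers={"Authorization": f"Basic {creds}"},
    )

def inventory_request(host: str, token: str) -> urllib.request.Request:
    """Build the device inventory request: GET /dna/intent/api/v1/network-device,
    authenticated with the X-Auth-Token header from the token call."""
    return urllib.request.Request(
        f"{host}/dna/intent/api/v1/network-device",
        headers={"X-Auth-Token": token},
    )

# Live usage against a reachable controller would look like:
#   import json
#   with urllib.request.urlopen(auth_request(DNAC, "admin", "password")) as resp:
#       token = json.load(resp)["Token"]
#   with urllib.request.urlopen(inventory_request(DNAC, token)) as resp:
#       for dev in json.load(resp)["response"]:
#           print(dev["hostname"], dev["managementIpAddress"])
```

The same token-then-call pattern applies to the rest of the Intent API, which is how the Infoblox/BlueCat-style integrations mentioned above are driven.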
A workflow process for Cisco SD-Access follows these tasks. Task 1: integrate Cisco DNA Center with Cisco ISE for policy exchange. Task 2: PnP discovery and engineering of the SD-Access fabric underlay. Task 3: configure the fabric overlay: the control plane and data plane, virtual networks, and security groups and policies. Task 4: test connectivity within and across security groups and virtual networks. SD-Access Campus Fabric Automation and Verification. The figure shows the following five phases of SD-Access campus fabric automation and verification.
Phase 1: you will integrate Cisco DNA Center with the Cisco ISE VM for the policy-based provisioning of network services such as security, and for the virtual network micro- and macro-segmentation services. Phase 2: you will use the PnP provisioning process for the network underlay discovery and deployment. The underlay network discovery is based on one seed switch. The underlay network will include two fabric edge switches (9300-1 and 9300-2) and the seed switch (9500-1). The fabric edge switches will connect wired clients, the so-called endpoints, to the SD-Access campus underlay. The fabric edge switches 9300-1 and 9300-2 will be configured and discovered by Cisco DNA Center automatically using the PnP process. Cisco DNA Center will act as a PnP server and will discover and push the configuration to the PnP agents (the switches) and to the seed switch 9500-1. This configuration will define the IS-IS based VXLAN tunneling infrastructure for underlay network IP connectivity.
Phase 3: you will deploy two virtual networks, the Campus Virtual Network and the Guest Virtual Network, on top of the fabric underlay using so-called overlay provisioning. Each virtual network will include a dedicated network subnet and VRF router that will connect devices inside or outside the virtual network. Each virtual network will also include two onboarded endpoints in different or in the same security groups. Using micro-segmentation policies via Scalable Group Tags (SGTs), you will define the network connectivity between the two endpoints in each virtual network. Phase 4: you will onboard four different wired endpoints (PC1, PC2, PC3, and PC4) and associate them to two different virtual networks and security groups. Phase 5: you will finally test the deployed policy-based network connectivity inside the same virtual network and across the two virtual networks, so among the different onboarded endpoints.
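The connectivity tests in phase 5 come down to evaluating macro-segmentation (VN boundaries) and micro-segmentation (SGT contracts). A hypothetical sketch of that evaluation, with invented group names and a default-deny contract:

```python
# Hypothetical group-based policy matrix: (source SGT, destination SGT) -> action.
# Within a VN, micro-segmentation is enforced via SGTs; across VNs
# (macro-segmentation), traffic is dropped unless routes are explicitly leaked.
POLICY = {
    ("Employees", "Employees"): "permit",
    ("Employees", "Printers"):  "permit",
    ("Guests",    "Guests"):    "deny",   # peer-to-peer blocking for guests
}

def check(src_vn: str, src_sgt: str, dst_vn: str, dst_sgt: str) -> str:
    """Return the fate of a flow: the VN boundary is checked first, then the
    SGT contract; flows with no matching contract are denied by default."""
    if src_vn != dst_vn:
        return "deny (VN boundary)"
    return POLICY.get((src_sgt, dst_sgt), "deny")

print(check("Campus", "Employees", "Campus", "Printers"))  # permit
print(check("Campus", "Employees", "Guest", "Guests"))     # deny (VN boundary)
```

Phases 4 and 5 effectively exercise each cell of such a matrix with the four onboarded PCs.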