Sunday, January 5, 2020

Introduction to Cisco HyperFlex

Agenda

The following topics will be discussed:
  1.   Introduction
  2.   Cisco HyperFlex
  3.   System Components
  4.   Topology Overview
  5.   HyperFlex Data Platform (HXDP)
  6.   Logical Network Design (VMware Use Case)
  7.   Installation
  8.   Management
  9.   References

Introduction

Hyperconverged Infrastructure (HCI) has the following characteristics:
  • Combines compute, storage, and networking in one platform
  • Unified Management
  • Distributed Direct-Attached Storage (DAS)
Cisco HyperFlex

HyperFlex (HX) is Cisco’s move into the hyperconvergence space with a new product line designed for hyperconverged environments.

The Cisco HyperFlex solution combines compute, storage, and networking in one platform.

The platform is built on existing UCS components and a new storage component. The servers used in the solution are based on the existing Cisco UCS product line. Networking is based on Cisco UCS Fabric Interconnect switches. The new storage component in Cisco's platform is the Cisco HyperFlex HX Data Platform, which is based on Springpath technology.

Cisco HX supports multiple hypervisors, such as VMware ESXi, Microsoft Hyper-V, and KVM (roadmap); it also supports virtualization through containers.


System Components

The Cisco HyperFlex solution consists of: 
  • Nodes: converged nodes (compute and storage) or compute-only nodes that form a cluster
  • Fabric Interconnect (FI) switches: switches that interconnect the nodes and connect them to the customer LAN/WAN
Cisco HyperFlex nodes

Cisco HyperFlex nodes come in different flavors:
  • HyperFlex Hybrid Nodes
  • HyperFlex All-Flash Nodes
  • HyperFlex All-NVMe Nodes
  • HyperFlex Edge Nodes
  • HyperFlex Compute-Only Nodes
An up-to-date list and detailed specifications can be found at the following link
  • HX Hybrid nodes: converged nodes that use serial-attached SCSI (SAS) drives, Serial ATA (SATA) drives, and SAS self-encrypting drives (SEDs) for capacity. The nodes use additional SSD drives for caching and an SSD drive for system/log.
  • HX All-Flash nodes: converged nodes that use fast SSD drives and SSD SEDs for capacity. The nodes use additional SSD drives or NVMe drives for caching and an SSD drive for system/log.
  • HX All-NVMe nodes: converged nodes that use NVMe SSD drives for capacity. The nodes use additional NVMe drives for caching and write-logging.
  • Edge nodes: converged hybrid nodes targeted at remote office/branch office (ROBO) applications. 
  • Compute-only nodes: these nodes contribute memory and CPU but do not contribute storage capacity. 
All nodes support the Cisco Virtual Interface Card (VIC), a next-generation converged network adapter (CNA) that enables a policy-based, stateless, agile server infrastructure. The VIC presents up to 256 PCIe standards-compliant virtual interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs).


Fabric Interconnects 
  • Fabric Interconnects (FI) are deployed in pairs
  • The two units operate as a management cluster, while forming two separate network fabrics, referred to as the A side and B side fabrics. Therefore, many design elements will refer to FI A or FI B, alternatively called fabric A or fabric B.
  • Both Fabric Interconnects are active at all times, passing data on both network fabrics for a redundant and highly available configuration
  • Management services, including Cisco UCS Manager, are also provided by the two FIs but in a clustered manner, where one FI is the primary, and one is secondary, with a roaming clustered IP address. This primary/secondary relationship is only for the management cluster, and has no effect on data transmission.
Topology Overview
  • The Cisco HyperFlex system is composed of a pair of Cisco UCS Fabric Interconnects along with up to 64 nodes (32 HyperFlex converged nodes + 32 Compute-only nodes) per cluster.
  • In the Edge configuration, the Cisco HyperFlex system supports up to 4 Edge converged nodes. Fabric Interconnect switches are not required; any L2 switch can be used.
  • The two Fabric Interconnects both connect to each node.
  • Upstream network connections, also referred to as “northbound” connections, are made from the Fabric Interconnects to the customer datacenter.


Figure: HyperFlex nodes

HyperFlex Data Platform

The engine that runs Cisco’s HyperFlex is its Cisco HX Data Platform (HXDP).

The HXDP is designed to run in conjunction with a variety of virtualized operating systems, such as VMware's ESXi, Microsoft Hyper-V, Kernel-based Virtual Machine (KVM), and others.

Currently, Cisco supports ESXi, Microsoft Windows Server 2016 Hyper-V, and Docker containers.

HyperFlex Data Platform Controller (DPC)


  • Runs as a VM on top of the hypervisor in each converged node and implements a scale-out distributed file system using the cluster's shared pool of SSD cache and SSD/HDD capacity drives.
  • Implements a log-structured file system that uses a caching layer on SSD drives to accelerate read requests and write responses, and a persistence layer implemented with HDDs or SSDs.
  • DPCs communicate with each other over the network fabric via high-speed links such as 10 GE or 40 GE, depending on the specific underlying Fabric Interconnect.
  • Handles all of the data service functions such as data distribution, replication, deduplication, compression, and so on.
  • Creates the logical datastores, which are the shared pool of storage resources.
  • The hypervisor itself has no knowledge of the physical drives. Any visibility into storage that the hypervisor needs is presented to it via the DPC.
  • The DPC integrates with the hypervisor using two preinstalled drivers: 
  1. The IOvisor, which is used to stripe the I/O across all nodes. All I/O toward the file system, whether destined for the local node or a remote node, goes through the IOvisor.
  2. An integration driver for specific integration with the particular hypervisor. The role of this agent is to offload some of the advanced storage functionality, such as snapshots, cloning, and thin provisioning, to the storage platform.
  • The compute-only nodes have a lightweight controller VM to run the IOvisor.
  • The DPC uses PCI/PCIe pass-through to take direct ownership of the storage disks.
Dynamic Data Distribution
  • HX uses a highly distributed approach, leveraging all cache SSDs as one giant cache tier. All cache from all the nodes is leveraged for fast reads and writes. Similarly, HX uses all HDDs as one giant capacity tier. This distributed approach uses the HX DPCs from multiple nodes.
  • If multiple VMs in the same node put stress on the local controller, the local controller engages controllers from other nodes to share the load.
  • Data is striped across all nodes.
  • A file or object such as a VMDK is broken into smaller chunks called stripe units, and these stripe units are placed on all nodes in the cluster.
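
To make the striping idea more concrete, the following sketch (plain Python, with made-up chunk sizes and node names; the real placement logic lives inside the DPCs) splits an object such as a VMDK into fixed-size stripe units and spreads them across all nodes of the cluster:

# Toy illustration of striping: split an object (e.g. a VMDK) into
# fixed-size stripe units and spread them across all cluster nodes.
# Sizes and node names are made up; HXDP's real placement is internal
# to the Data Platform Controllers.
from typing import Dict, List

STRIPE_UNIT_SIZE = 4  # bytes here for readability; real units are far larger

def stripe_object(data: bytes, nodes: List[str]) -> Dict[str, List[bytes]]:
    """Round-robin the stripe units of `data` over `nodes`."""
    placement: Dict[str, List[bytes]] = {node: [] for node in nodes}
    units = [data[i:i + STRIPE_UNIT_SIZE] for i in range(0, len(data), STRIPE_UNIT_SIZE)]
    for index, unit in enumerate(units):
        owner = nodes[index % len(nodes)]   # every node receives stripe units
        placement[owner].append(unit)
    return placement

if __name__ == "__main__":
    cluster = ["node-1", "node-2", "node-3", "node-4"]
    layout = stripe_object(b"example-vmdk-contents", cluster)
    for node, units in layout.items():
        print(node, units)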

Data Protection With Replication
  • Data is replicated over multiple nodes to protect the cluster from disk or node failure.
  • The policy for the number of duplicate copies of each storage block is chosen during cluster setup and is referred to as the replication factor (RF).
  • HX has a default replication factor (RF) of 3, which indicates that for every I/O write that is committed, two other replica copies exist in separate locations.
  • In case of a disk failure, the data is rebuilt from the remaining disks or nodes.
  • If a node fails, data stripe units are still available on other nodes.
  • The VMs that were running on a failed node are redistributed to other nodes using VM high availability, and the VMs still have access to their data.
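
Here is a minimal sketch of the replication-factor idea, assuming placeholder node names and the default RF of 3 described above: each stripe unit is written to RF distinct nodes, so a single disk or node failure still leaves surviving copies.

# Toy illustration of the replication factor (RF): each stripe unit is
# written to RF distinct nodes, so a single disk or node failure still
# leaves surviving copies elsewhere. Node names are placeholders.
from typing import List

REPLICATION_FACTOR = 3  # HX default: one committed write plus two replicas

def place_replicas(stripe_unit_id: int, nodes: List[str]) -> List[str]:
    """Pick RF distinct nodes for a stripe unit, spreading load evenly."""
    if REPLICATION_FACTOR > len(nodes):
        raise ValueError("Cluster has fewer nodes than the replication factor")
    start = stripe_unit_id % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(REPLICATION_FACTOR)]

def surviving_copies(owners: List[str], failed_node: str) -> List[str]:
    """Nodes that still hold a copy after `failed_node` goes down."""
    return [node for node in owners if node != failed_node]

if __name__ == "__main__":
    cluster = ["node-1", "node-2", "node-3", "node-4"]
    owners = place_replicas(stripe_unit_id=7, nodes=cluster)
    print("Copies stored on:", owners)
    print("Copies left after node-4 fails:", surviving_copies(owners, "node-4"))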




Inline Compression and Deduplication
  • Always-on, high-performance inline deduplication and compression on data sets to save disk space.
  • Deduplication and compression are performed when data is destaged to a capacity disk.
  • This approach is less CPU intensive.
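
The following sketch illustrates the general technique of content-based inline deduplication and compression at destage time; it is a generic illustration, not Cisco's actual algorithm, and the block size and hashing choices are assumptions:

# Generic illustration of inline deduplication at destage time: each
# block is fingerprinted and written only if the fingerprint is new.
# This is not Cisco's implementation, just the general technique.
import hashlib
import zlib
from typing import Dict

capacity_tier: Dict[str, bytes] = {}   # fingerprint -> compressed block

def destage_block(block: bytes) -> str:
    """Deduplicate and compress a block as it moves to the capacity tier."""
    fingerprint = hashlib.sha256(block).hexdigest()
    if fingerprint not in capacity_tier:           # new data: compress and store
        capacity_tier[fingerprint] = zlib.compress(block)
    return fingerprint                             # duplicates only store a reference

if __name__ == "__main__":
    blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # third block is a duplicate
    refs = [destage_block(b) for b in blocks]
    print("Logical blocks:", len(refs), "Physical blocks stored:", len(capacity_tier))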

Data Rebalancing
  • Rebalancing is a nondisruptive online process that occurs in both the caching and persistent layers.
  • When a new node is added to the cluster, the rebalancing engine distributes existing data to the new node and helps ensure that all nodes in the cluster are used uniformly from capacity and performance perspectives.
  • If a node fails or is removed from the cluster, the rebalancing engine rebuilds and distributes copies of the data from the failed or removed node to the available nodes in the cluster.
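
As a rough illustration of the rebalancing idea (placeholder node names and a simple even-split policy, not the actual HXDP engine), the sketch below moves stripe units to a newly added node until every node carries roughly the same share:

# Toy illustration of rebalancing: when a node is added, move stripe
# units from the most-loaded nodes to the new node until the cluster is
# roughly even. Node names and the policy are placeholders, not the
# actual HXDP rebalancing engine.
from typing import Dict, List

def rebalance(placement: Dict[str, List[int]], new_node: str) -> Dict[str, List[int]]:
    """Redistribute stripe units so the new node carries its fair share."""
    placement[new_node] = placement.get(new_node, [])
    total_units = sum(len(units) for units in placement.values())
    target = total_units // len(placement)          # even share per node
    for node, units in placement.items():
        while node != new_node and len(units) > target and len(placement[new_node]) < target:
            placement[new_node].append(units.pop())  # conceptually a nondisruptive move
    return placement

if __name__ == "__main__":
    layout = {"node-1": [0, 3, 6, 9], "node-2": [1, 4, 7, 10], "node-3": [2, 5, 8, 11]}
    print(rebalance(layout, "node-4"))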

Logical Network Design (VMware Hypervisor Use Case)

Logical Zones

The Cisco HyperFlex system has communication pathways that fall into four defined zones:
  • Management Zone: This zone comprises the connections needed to manage the physical hardware, the hypervisor hosts, and the storage platform controller virtual machines (SCVM).
  • VM Zone: This zone comprises the connections needed to service network IO to the guest VMs that will run inside the HyperFlex system. This zone typically contains multiple VLANs that are trunked to the Cisco UCS Fabric Interconnects via the network uplinks, and tagged with 802.1Q VLAN IDs.
  • Storage Zone: This zone comprises the connections used by the Cisco HX Data Platform software, ESXi hosts, and the storage controller VMs to service the HX Distributed Data Filesystem.
  • vMotion Zone: This zone comprises the connections used by the ESXi hosts to enable vMotion of the guest VMs from host to host.
Virtual switches

The HyperFlex installer automatically creates the virtual switches listed in the following table.


VLANs

In a Cisco HyperFlex system configuration, multiple VLANs have to be carried to the UCS domain from the upstream LAN. These VLANs are defined in the UCSM configuration tab of the HyperFlex installer.
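
As an illustration, the sketch below records the kind of zone-to-VLAN mapping typically entered in the installer; the VLAN names and IDs are placeholders to be replaced with the values carried from your upstream LAN:

# Example only: a simple record of the VLANs entered in the HyperFlex
# installer's UCSM configuration tab. The VLAN names and IDs below are
# placeholders; use the values defined for your upstream LAN.
from dataclasses import dataclass

@dataclass
class HxVlan:
    zone: str       # which HyperFlex traffic zone the VLAN carries
    name: str       # VLAN name as entered in the installer
    vlan_id: int    # 802.1Q tag trunked from the upstream switches

UCSM_VLANS = [
    HxVlan("Management", "hx-inband-mgmt",  10),
    HxVlan("Storage",    "hx-storage-data", 20),
    HxVlan("VM",         "vm-network",      30),
    HxVlan("vMotion",    "hx-vmotion",      40),
]

for vlan in UCSM_VLANS:
    print(f"{vlan.zone:<12} {vlan.name:<18} VLAN {vlan.vlan_id}")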



Installation

The following three components are required to install HyperFlex:
  • External vCenter server: used to manage the HyperFlex ESXi hosts and the HyperFlex system through the Web Client plugin. 
  • HX Installer: used to install HyperFlex; it comes as an OVA deployed on either VMware ESXi or VMware Workstation.
  • DNS/NTP server: NTP is an absolute requirement.
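
Because an unreachable NTP source is a common cause of failed installs, it is worth checking the NTP server before starting. The sketch below sends a basic SNTP query using only the Python standard library; the server hostname is a placeholder for whichever server you will give the installer:

# Minimal SNTP reachability check (illustrative only). The NTP server
# address below is a placeholder; substitute the server you plan to
# give the HyperFlex installer.
import socket
import struct
import sys

NTP_SERVER = "ntp.example.com"   # placeholder hostname (assumption)
NTP_PORT = 123
NTP_EPOCH_OFFSET = 2208988800    # seconds between 1900-01-01 and 1970-01-01

def query_ntp(server: str, timeout: float = 3.0) -> int:
    """Send a basic SNTP client request and return the server's UNIX time."""
    # First byte 0x1B = LI 0, version 3, mode 3 (client); rest zeros.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, NTP_PORT))
        data, _ = sock.recvfrom(512)
    # Transmit timestamp (seconds field) starts at byte offset 40.
    transmit_secs = struct.unpack("!I", data[40:44])[0]
    return transmit_secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    try:
        print("NTP server time (UNIX):", query_ntp(NTP_SERVER))
    except OSError as exc:
        sys.exit(f"NTP check failed: {exc}")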




Follow the steps below to install HyperFlex:
  1. Use the console port to perform the initial configuration of the Fabric Interconnect switches (admin password, IP addressing, DNS, domain name).
  2. Use UCSM to configure the Fabric Interconnects: NTP, uplink ports (connected to the customer network), server ports (connected to the HyperFlex servers), and server discovery.
  3. Deploy the HyperFlex Installer OVA.
  4. Connect to the HyperFlex Installer by browsing to its IP address.
  5. Choose the "Cluster Creation with HyperFlex (FI)" workflow to create the HyperFlex cluster.
The workflow will guide you through the process of setting up your cluster. It will configure Cisco UCS policies, templates, service profiles, and settings, as well as assign IP addresses to the HX servers, which come from the factory with the ESXi hypervisor software preinstalled. 

The installer will load the HyperFlex controller VMs and software on the nodes, add the nodes to the vCenter cluster, and finally create the HyperFlex cluster and distributed filesystem. All of these processes can be completed via a single workflow from the HyperFlex Installer webpage.




Management

HyperFlex can be managed through the following management tools:
1.     HyperFlex Connect
2.     vCenter Web Client Plugin

HyperFlex Connect


After the installation completes, the HyperFlex system can be managed through the HyperFlex Connect tool.

HyperFlex Connect is the new, easy-to-use, and powerful primary management tool for HyperFlex clusters. It is an HTML5 web-based GUI that runs on all of the HX nodes and is accessible via the cluster management IP address. 

To manage the HyperFlex cluster using HyperFlex Connect, complete the following steps:
  1. Using a web browser, open the HyperFlex cluster's management IP address via HTTPS.
  2. Enter the username, and the corresponding password.
  3. Click Login.
  4. The Dashboard view will be shown after a successful login.
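
The same cluster management IP also exposes a REST API that can be scripted instead of using the GUI. The sketch below is a minimal Python illustration using the requests library; the /aaa/v1/token endpoint path, the credential values, and the certificate handling are assumptions to be checked against the HX API documentation for your HXDP release:

# Illustrative sketch only: authenticate against the HyperFlex REST API
# exposed on the cluster management IP. The endpoint path and payload
# fields below are assumptions; check the official HX API documentation
# for the exact contract of your HXDP release.
import requests

HX_MGMT_IP = "192.0.2.10"   # placeholder cluster management IP
USERNAME = "admin"          # placeholder credentials
PASSWORD = "changeme"

def get_auth_token(verify_tls: bool = False) -> dict:
    """Request an access token from the cluster management endpoint."""
    url = f"https://{HX_MGMT_IP}/aaa/v1/token"   # assumed endpoint path
    payload = {"username": USERNAME, "password": PASSWORD}
    resp = requests.post(url, json=payload, verify=verify_tls, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_auth_token()
    print("Received token payload keys:", list(token.keys()))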



vCenter Web Client Plugin

The Cisco HyperFlex vCenter Web Client Plugin is installed by the HyperFlex installer to the specified vCenter server or vCenter appliance.

The plugin is accessed as part of the vCenter Web Client (Flash) interface, and is a secondary tool used to monitor and configure the HyperFlex cluster.

This plugin is not integrated into the new vCenter 6.5 HTML5 vSphere Client. In order to manage a HyperFlex cluster via an HTML5 interface, i.e. without the Adobe Flash requirement, use the new HyperFlex Connect management tool.

To manage the HyperFlex cluster using the vCenter Web Client Plugin, complete the following steps:

      1. Open the vCenter Web Client, and log in with admin rights.

      2. From the home screen, click vCenter Inventory Lists.



      3. In the Navigator pane, click Cisco HX Data Platform.


      4. In the Navigator pane, choose the HyperFlex cluster you want to manage and click the name.



References

The following documents were used as references for this post:

