This white paper demonstrates and discusses the SUPERMICRO all-flash VMware Virtual SAN Ready Node, a hypervisor-converged infrastructure that fits these needs well. The paper also shares the results of full-clone and linked-clone virtual desktop infrastructure deployment scenarios running on SUPERMICRO TwinPro2 servers, two leading deployment models for virtual desktop infrastructure.
The SUPERMICRO TwinPro™ Solution architecture builds on SUPERMICRO’s proven Twin technology to provide outstanding storage throughput, networking, I/O, memory, and processing capabilities in a 2U server, allowing customers to further optimize SUPERMICRO solutions to solve their most challenging IT requirements.
Optimized for high-end Enterprise, Data Center, Hyper-Converged and Cloud Computing environments, SUPERMICRO TwinPro Solutions are designed for ease of installation and maintenance, and built for continuous operation at maximum capacity. The resulting benefit is a compelling Total Cost of Ownership for customers seeking a competitive advantage from their data center resources.
SanDisk SAS-based flash storage has long helped customers realize new levels of technical, operational, and financial efficiency in virtualized environments.
When a VMware® all-flash Virtual SAN™ is combined with Horizon View software on SUPERMICRO TwinPro2 servers, customers get one of the most cost-effective, high-performing solutions for client virtualization and virtual desktop infrastructure needs.
VMware Virtual SAN is software-defined storage for VMware vSphere. The all-flash Virtual SAN solution clusters server-attached, SAS-based, mixed-use-capability SSDs as a flash storage tier for caching, and read-intensive-capability SSDs for capacity. The result is a flash-optimized, highly resilient shared datastore designed for virtual environments.
VMware Horizon View enables users to access all their virtual desktops, applications, and online services through a single workspace. Horizon View is the virtual desktop infrastructure platform for vSphere, running user desktops as virtual machines on ESXi hosts.
This white paper discusses the benefits of the solution described above. A key benefit is the high level of responsiveness of the virtual desktops and applications as measured by response times.
Virtual desktop infrastructure has been adopted extensively by enterprises in the financial, healthcare, engineering, education, and other sectors. As virtual desktop infrastructure is deployed more widely, new challenges are emerging. With workforce globalization and desktop consolidation in data centers, user demands and expectations have changed. The boot storm is no longer a 9 a.m. phenomenon confined to one time zone, and desktops are no longer accessed only during set hours. The virtual desktop infrastructure environment now needs to be up and running 24x7, with the promise of consistent, predictable performance under any condition.
With the adoption of cloud deployment, virtual desktop infrastructure needs have become more elastic in nature. The environment grows or shrinks rapidly, and traditional storage approaches are not a good fit for these demands.
All these challenges call for elastic, scalable, and predictable performance in a pre-configured environment. The SUPERMICRO all-flash VMware Virtual SAN Ready Node presented in this paper, a hypervisor-converged infrastructure, fits these needs well, as the full-clone and linked-clone deployment results on SUPERMICRO TwinPro2 servers show.
The average response times for virtual desktop infrastructure desktops are listed below. Application response times well below the thresholds demonstrate the robustness of the solution.
Linked-Clone Desktop Response Time:
CPU-sensitive applications: 95th percentile: 0.53 seconds (threshold <1 sec.)
CPU- and disk-sensitive applications: 95th percentile: 3.10 seconds (threshold <6 sec.)
Full-Clone Desktop Response Time:
CPU-sensitive applications: 95th percentile: 0.58 seconds (threshold <1 sec.)
CPU- and disk-sensitive applications: 95th percentile: 3.23 seconds (threshold <6 sec.)
As VMware defines it, “Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment, jointly recommended by the server OEM and VMware. Virtual SAN Ready Nodes are ideal as hyperconverged building blocks for larger data center environments looking for automation and a need to customize hardware and software configurations.”
SUPERMICRO and SanDisk jointly tested an all-flash VSAN Ready Node. Below are the configuration details.
Figure 1: All-flash VSAN Ready Node using a SUPERMICRO Server and SanDisk SAS SSDs
In this Ready Node, the following tasks were done:
Below are the application response times for virtual desktop infrastructure desktops under test.
Full-Clone Desktops (230 users) Response Time:
CPU-sensitive applications 95th percentile: 0.58 seconds (threshold target of <= 1 second)
CPU- and disk-sensitive applications 95th percentile: 3.23 seconds (threshold target of <= 6 seconds)
Linked-Clone Desktops (730 users) Response Time:
CPU-sensitive applications 95th percentile: 0.53 seconds (threshold target of <= 1 second)
CPU- and disk-sensitive applications 95th percentile: 3.10 seconds (threshold target of <= 6 seconds)
The above application response times were measured using VMware View Planner, which provides the 95th-percentile value at the end of the test run and generates a report. Individual application response times are shown in the Test Results section.
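To make the scoring concrete, the sketch below shows how a 95th-percentile response time can be computed from raw per-operation timings and checked against a threshold, similar in spirit to what View Planner reports. The sample latencies and the nearest-rank method are illustrative assumptions, not View Planner internals.

```python
# Illustrative sketch: computing a 95th-percentile response time from raw
# per-operation timings. The sample values below are invented for demonstration.

def percentile(samples, pct):
    """Return the pct-th percentile of samples using the nearest-rank method."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # nearest-rank position
    return ordered[rank - 1]

# Hypothetical Group A (CPU-sensitive) response times in seconds
group_a = [0.41, 0.44, 0.47, 0.49, 0.50, 0.52, 0.53, 0.55, 0.58, 0.60]

p95 = percentile(group_a, 95)
print(f"95th percentile: {p95:.2f} s, within <1 s threshold: {p95 < 1.0}")
```

A pass/fail decision per application group then reduces to comparing the reported percentile against the group's threshold (1 second for Group A, 6 seconds for Group B).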
Scaling the number of users depends on the storage capacity of the Ready Node: with 14 TB of usable capacity, scale-up reached 230 full-clone users. Linked clones share a common base image and therefore consume far less capacity per desktop, so they can accommodate more users. Additional storage can be added to increase virtual desktop infrastructure user density.
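The capacity bound above can be sketched as a back-of-envelope calculation. The per-desktop footprint used here is a hypothetical planning figure chosen for illustration (it is roughly consistent with 230 desktops in 14 TB), not a value measured in the test.

```python
# Back-of-envelope sketch (assumed numbers): how usable Virtual SAN capacity
# bounds full-clone desktop density.

USABLE_TB = 14                 # usable all-flash Virtual SAN capacity (from the test bed)
GB_PER_TB = 1024
desktop_footprint_gb = 60      # assumed full-clone footprint: OS + apps + swap (hypothetical)

max_desktops = (USABLE_TB * GB_PER_TB) // desktop_footprint_gb
print(f"Capacity-bound full-clone desktops: {max_desktops}")
```

In practice some capacity is reserved for metadata and slack space, which is why the tested configuration stopped slightly below this simple upper bound.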
For validation testing in the Virtual SAN environment, the VMware View Planner standard benchmark workload was used. Group A (CPU-sensitive) and Group B (CPU- and I/O-sensitive) scores were used to determine the Virtual SAN environment capability for hosting virtual desktop infrastructure.
The View desktop was created using a Windows 7, 32-bit operating system standard image. Necessary configuration changes were done according to the View Planner Installation and Configuration User Guide. All applications included in the View Planner pre-selected workload requirements were installed inside this image.
The VMware Virtual SAN default storage policy was applied to the desktop VM, which provides high availability in case one of the nodes goes down.
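As a rough sketch of what the default policy costs in capacity: with FailuresToTolerate=1, Virtual SAN mirrors each object, so raw capacity is roughly double the effective capacity. The SSD sizes below come from this Ready Node's bill of materials; the halving formula ignores metadata overhead and is an approximation.

```python
# Hedged sketch: capacity effect of the default Virtual SAN storage policy
# (FailuresToTolerate = 1, i.e. mirroring). Ignores metadata/slack overhead.

raw_tb_per_node = 2 * 4            # two 4 TB Optimus MAX capacity SSDs per node
nodes = 4
ftt = 1                            # default policy tolerates one host failure

raw_tb = raw_tb_per_node * nodes
effective_tb = raw_tb / (ftt + 1)  # each object stored (ftt + 1) times
print(f"Raw: {raw_tb} TB, effective with FTT=1: {effective_tb:.0f} TB")
```

This approximation (16 TB effective before overhead) is consistent with the roughly 14 TB of usable capacity cited elsewhere in this paper.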
A four-node Virtual SAN cluster was created as defined in the Ready Node.
Figure 2: VMware Horizon View pool creation - datastore selection
All other Horizon View infrastructure VMs, such as View Planner Appliance, vCenter, AD-DNS, DHCP, and VMware Horizon View, were provisioned outside the Virtual SAN cluster.
Figure 3: Test bed architecture
The Virtual SAN datastore was configured using a single disk group in each node. Each disk group was configured with one caching tier, using a 400 GB Optimus Ascend™ SAS SSD and two capacity-tier Optimus MAX 4 TB SAS SSDs per server.
Figure 4: All-flash Virtual SAN disk group configuration
The tests were executed in two different phases. First, the Full Clone desktop pool was created, and the desktops were tested for performance and density using View Planner. Later, the Full Clone pool was deleted, and Linked Clones were recreated from the same desktop image.
The following section discusses the test results for both environments.
View Planner: Full Clone
Full clone desktops are generally used for engineering or high-end users. For that reason, we ran View Planner Standard Benchmark, which generally represents the power user profile requirement.
The following figures show the response time for CPU-sensitive and disk-sensitive application operations of the View Planner workload.
Figure 5: Response time – CPU-sensitive applications operation
Figure 6: Response time – disk-sensitive applications operation
CPU Utilization Data for VSAN
The following graph captures CPU core utilization for each node and combined. CPU utilization at the hosts stayed well below saturation, leaving ample CPU cycles available.
Figure 7: CPU utilization – all nodes shown as combined
The next graph shows IOPS collected at the storage controller level, aggregated across nodes at steady state. Each disk group in the VSAN node is configured with a single storage controller.
Figure 8: Disk IOPS (read and write) – combined storage IOPS (controller level)
Similar to the IOPS collection, the throughput of all disks is gathered at the storage controller level.
Figure 9: Disk throughput (read and write) – combined storage throughput (controller level)
The disk latency is well below one millisecond, except for some occasional spikes during steady state.
Figure 10: Disk latency (read and write) – combined latency (controller level)
View Planner Linked Clone
Linked-clone pools are widely deployed, mostly for knowledge user profiles. For that reason, we ran a modified standard benchmark, increasing the think time from 5 seconds (power user profile) to 10 seconds (knowledge user profile). There is no standard definition of power- or knowledge-user think time, but as think time increases, resource consumption per desktop decreases.
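The effect of think time on density can be sketched with simple duty-cycle arithmetic: each desktop alternates between an active burst and idle think time, so its average load scales with active/(active + think). The burst duration and the resulting averages below are invented for illustration; only the desktop counts and think times come from this paper.

```python
# Illustrative sketch (assumed burst length): why longer think time lets the
# same hosts support more desktops. A desktop's duty cycle is the fraction of
# time it is actively consuming resources.

def concurrent_load(desktops, active_s, think_s):
    """Average number of desktops busy at once, assuming a steady work/think cycle."""
    duty_cycle = active_s / (active_s + think_s)
    return desktops * duty_cycle

power = concurrent_load(230, active_s=2.0, think_s=5.0)       # power user profile
knowledge = concurrent_load(730, active_s=2.0, think_s=10.0)  # knowledge user profile

print(f"Power users busy on average:     {power:.0f}")
print(f"Knowledge users busy on average: {knowledge:.0f}")
```

Doubling the think time roughly halves each desktop's duty cycle, which is why the knowledge-user pool can be scaled to far more desktops on the same hardware.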
The following figure shows the response time for CPU-sensitive and disk-sensitive application operations of the View Planner workload.
Figure 11: Response time – CPU-sensitive applications operation
Figure 12: Response time – disk-sensitive applications operation
Figure 13: CPU utilization – combined nodes
Figure 14: Disk IOPS (read and write) – combined storage IOPS (controller level)
Figure 15: Disk throughput (read and write) – combined node storage throughput (controller level)
Figure 16: Disk latency (read and write) – combined node storage latency (controller level)
The disk latency stayed near 0 ms for most of the steady state. Because of this low latency, application response times remained well below the threshold limits.
For full-clone desktops, the system was scaled up to 230 users with a power user profile. This is a very high density, considering the resource consumption of the desktops. With the given usable capacity, no additional full-clone desktops could be deployed. If additional storage is added to the four-node all-flash Virtual SAN, further scaling is possible, as resource utilization is well within limits.
For linked clones, the system was scaled up to 730 users with a knowledge user profile. These are commonly used desktops in any mid- to large-size enterprise. The desktop density was kept at a level where application response times were very fast and user experience needs were met.
In both cases, the disk latencies were sub-millisecond, with very few spikes. This enables faster application response, thus improving user experience.
The following tables describe the test bed.
ESXi Host Configuration
|Storage Per Node||1 x 400 GB Optimus Ascend SSD (caching), 2 x 4 TB Optimus MAX SSDs (capacity)|
|Network Per Node||2 x 10 Gb NICs, 2 x 1 Gb NICs|
Virtual SAN Configuration
|Each disk group configuration||Caching Tier – 1 x 400 GB Optimus Ascend SSD; Capacity Tier – 2 x 4 TB Optimus MAX SSDs|
|Disk groups in each node||1|
|Total Virtual SAN nodes||4|
Virtual Machine Configuration
|VMware Horizon View Manager||
|VMware Horizon View Composer||
Installed Desktop Application
Infrastructure Software Configuration
Bill of Materials
The following table summarizes the bill of materials for this solution:
|Servers||2U 4-node TwinPro2 server w/ 3008 SAS controller, Intel Xeon E5-2670 v3 (12 cores, 2.3 GHz, 30 MB cache, 9.6 GT/s), 512 GB RAM||4|
|Storage||400 GB Optimus Ascend SSD (1 per server)||4|
|4 TB Optimus MAX SSDs (2 per server)||8|
|SMC3008 12Gbps SAS3/HBA/2 internal mini ports (1 per server)||4|
|Network||2 x 10 Gb NICs and 2 x 1 Gb NICs in each node||4|
|10G Ethernet Switch SSE-X3348T / SSE-X3348TR||1|
This solution from SUPERMICRO and SanDisk, based on VMware’s Virtual SAN Ready Node, meets the stringent requirements of today’s applications without added complexity. It is a scalable technology that is rapidly deployable, cost-effective, easy to manage, and can be fully integrated into existing data centers.
This paper illustrates, with relevant benchmark performance case studies, how the challenging Service Level Agreement (SLA) requirements for virtual desktop infrastructure can be met with sub-millisecond latencies by using SUPERMICRO’s Hyper-Converged Infrastructure (HCI, here implemented on the 2U TwinPro²) with SanDisk storage technologies and VMware’s Virtual SAN and Horizon View. In summary, this joint SUPERMICRO/SanDisk/VMware Virtual SAN Ready Node solution delivers lower TCO, with increased scale and operational efficiency over the traditional alternatives.
If you are ready to ask a few initial questions or to discuss a SanDisk solution tailored to your company’s needs, the SanDisk sales team is here to help.
We would be happy to answer your questions; to get started, please fill out the form below.