Building out a Citrix VDI Desktop within an SMB Environment
When I began transitioning my company to be 100% virtual six years ago, there was no documentation on best practices for small to midsized businesses (SMBs) like my own. I was planning to deploy fewer than 100 VDI images, and most documentation for the Citrix VDI environment, then called XenDesktop, broke each component out onto its own server. For large-scale deployments this made sense, but for smaller deployments it did not. So I designed my own architecture, and I have been running this VDI environment successfully ever since.
The design begins with a redundant environment for the Database, Controller, and StoreFront. Large-scale designs call for a minimum of six servers to provide redundancy for these three components, and more for larger deployments. I whittled that down to two virtual servers (the Server Cloud). As shown in the diagram below, the first layer is the Database. Using SQL Server Standard, I created databases for the three required Citrix data stores: Site (Config), Logging, and Monitoring. Then, on the second server, I created a Failover Cluster using Windows' built-in cluster management and configured SQL Server's Always On synchronization. You can find a step-by-step guide here. The database failover is a hot/cold standby arrangement.
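One practical consequence of this layer is that the Citrix components should point at the cluster's virtual name rather than at either physical SQL server, so a failover is transparent to them. Below is a minimal sketch of what such a failover-aware ODBC connection string looks like; the listener name `SQL-AG-LISTENER` and database name `CitrixConfig` are hypothetical placeholders, not values from my environment.

```python
# Sketch: building a connection string that targets the Always On / cluster
# virtual name instead of a single host, so database failover is transparent.
# "SQL-AG-LISTENER" and "CitrixConfig" are hypothetical example names.

def build_connection_string(listener: str, database: str) -> str:
    """Build an ODBC connection string that follows the cluster on failover."""
    parts = {
        "Driver": "{ODBC Driver 17 for SQL Server}",
        "Server": listener,              # cluster/AG virtual name, not a host
        "Database": database,
        "Trusted_Connection": "Yes",     # integrated Windows authentication
        "MultiSubnetFailover": "Yes",    # try all listener IPs in parallel
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_connection_string("SQL-AG-LISTENER", "CitrixConfig")
print(conn_str)
```

The key point is that no client ever names a physical server: when the hot node goes down, the same connection string resolves to the cold node after failover.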
The next layer is the Controllers, installed on the same machines as the SQL databases. The Delivery Controller is the server-side component responsible for managing user access and for brokering and optimizing connections. Controllers also provide the Machine Creation Services that create desktop and server images. A production Site should always have at least two Controllers on different physical servers: if one Controller fails, the other can manage connections and administer the Site. The Controllers are designed to load-balance connections, so not only do they back each other up, they also spread the load across both servers.
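The active-active behavior described above can be sketched as a simple pool that rotates between two controllers and skips any that are down. This is an illustration of the concept only; the actual brokering is handled internally by Citrix, and the controller names are hypothetical.

```python
# Conceptual sketch: two Delivery Controllers that share load round-robin
# and cover for each other when one fails. Names are hypothetical examples;
# real connection brokering is done by the Citrix components themselves.

from itertools import cycle

class ControllerPool:
    def __init__(self, controllers):
        self.controllers = controllers
        self._rotation = cycle(controllers)   # round-robin load balancing
        self.down = set()

    def mark_down(self, name):
        self.down.add(name)

    def mark_up(self, name):
        self.down.discard(name)

    def next_controller(self):
        """Return the next healthy controller, skipping failed ones."""
        for _ in range(len(self.controllers)):
            candidate = next(self._rotation)
            if candidate not in self.down:
                return candidate
        raise RuntimeError("No Delivery Controllers available")

pool = ControllerPool(["CTX-DC-01", "CTX-DC-02"])
print(pool.next_controller())   # alternates between the two controllers
pool.mark_down("CTX-DC-01")
print(pool.next_controller())   # the surviving controller takes everything
```

With only two servers you get both benefits at once: halved load in normal operation, and full service from the survivor during a failure.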
The third layer of the cake is StoreFront. StoreFront authenticates users to the sites hosting resources and manages the stores of applications and desktops that users access. It hosts your enterprise application store, which gives users self-service access to the apps and desktops you make available to them. It also keeps track of users' application subscriptions, shortcut names, and other data to ensure a consistent experience across multiple devices. Installing StoreFront on each server, and then using the Windows cluster's virtual address, lets users connect to StoreFront on the hot server and automatically fail over to StoreFront on the cold server if the primary is inaccessible.
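The hot/cold selection logic amounts to "try the hot store first, then fall back to the cold one." Here is a minimal sketch of that behavior; the hostnames are invented examples, and the reachability check is injected so the failover path can be exercised without a real deployment (a real probe would be an HTTP request to the store URL).

```python
# Sketch of hot/cold StoreFront failover: try each store URL in priority
# order and use the first one that responds. The hostnames below are
# hypothetical examples, not real servers from the deployment.

def pick_store_url(candidates, is_reachable):
    """Return the first reachable StoreFront URL, hot server first."""
    for url in candidates:
        if is_reachable(url):
            return url
    raise RuntimeError("No StoreFront server reachable")

stores = [
    "https://storefront-hot.example.local/Citrix/StoreWeb",
    "https://storefront-cold.example.local/Citrix/StoreWeb",
]

# Simulate the hot server being down; in practice this check would be
# an HTTP probe, or handled entirely by the cluster's virtual address.
hot_is_down = lambda url: "cold" in url
print(pick_store_url(stores, hot_is_down))
```

In the actual design the Windows cluster's virtual address performs this redirection for you, so clients keep a single URL and never need to know which server answered.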
Today we run our environment on a hyper-converged Nutanix/VMware platform, but before that, the Controller servers and all the VDI desktops (the VDI Desktop Cloud) ran on just two physical servers with SSD drives, each with enough compute to handle 100% of the VDI desktops should the other host fail.
And of course we did have failures. A hard drive failed, a Controller hung, SQL Server's Always On sync stopped syncing, and yet we have never been unable to access our desktops at any time in six years.
Many times, with Citrix support on the phone, they would comment that this was not a supported architecture, most likely because they rarely dealt with SMBs running XenDesktop, let alone deployments of fewer than 100 desktops. But the architecture not only works, it thrives in an SMB environment. It keeps both hardware and labor costs to a minimum while taking advantage of the failover features built into Windows.
For those looking to implement VDI in their SMB, I hope this design helps you justify building out such an environment.
I would love to hear how other SMBs have successfully deployed their own VDI systems, and what architecture you used.