We build a server for graphics and CAD/CAM applications, for remote work over RDP, based on a Cisco UCS C220 M3 v2
Almost every company now has a department or group working in CAD/CAM
or other heavy design software. These users have serious hardware requirements: a lot of memory (64 GB or more), a professional graphics card, a fast SSD, and high reliability. Companies usually buy such departments several powerful PCs or graphics workstations, some more powerful than others depending on the users' needs and the company's budget. This is the standard approach to the problem, and it works fine. But with a pandemic and remote work, and in fact in general, it is far from optimal: it is very redundant and extremely inconvenient in administration, management and other respects. Why is that, and what solution would ideally satisfy many companies' demand for graphics workstations? Read on for a description of how to assemble a working and inexpensive solution.
In December last year, a company opened a new office for a small design bureau, and my task was to set up the entire computer infrastructure, given that the company already had laptops for the users and a couple of servers. The laptops were a couple of years old, mostly gaming configurations with 8-16 GB of RAM, and fundamentally could not cope with the load from CAD/CAM applications. The users have to be mobile, since they often work outside the office; in the office, an external monitor is bought for each laptop (that is how they work with graphics). With this input, the only optimal, if risky, solution in my view was to implement a powerful terminal server with a professional video card and an NVMe SSD.
Advantages of a graphics terminal server and working over RDP
- On individual powerful PCs or graphics workstations, the hardware sits idle most of the time, using less than a third of its resources, and runs at 35-100% of capacity only for short periods. Overall utilization is typically 5-20 percent.
- Moreover, the hardware is often far from the most expensive component: basic graphics or CAD/CAM licenses often start at $5,000, or $10,000 with advanced options. Usually these programs run in an RDP session without problems, but sometimes you need to order an RDP option from the vendor, or search the forums for what to tweak in the configs or registry to get the software running in an RDP session. Verifying that the software you need works over RDP must be done at the very outset, and it is simple: connect via RDP and try it. If the program launches and all of its basic functions work, there will most likely be no licensing problems. If it throws an error, then before implementing the graphics terminal server project we look for an acceptable workaround.
- Another big plus is keeping one identical configuration with the same specific settings, components and templates for everyone, which is hard to achieve across individual PCs. Management, administration and software updates also go off without a hitch.
In general, there are many pluses, so let's see how our almost perfect solution performs in practice.
We build a server based on CISCO UCS-C220 M3 v2
Originally I planned to buy a newer and more powerful server with 256 GB of DDR3 ECC memory and 10 Gb Ethernet, but I was told to save some money and fit the terminal server into a $1,600 budget. Well, the customer is always right. Here is what I ended up with:
Used Cisco UCS C220 M3 v2 (2x six-core 2.10 GHz E5-2620 v2), 128 GB DDR3 ECC - $625
3.5" 3TB SAS 7200 - 2x $65 = $130
SSD M.2 2280 Samsung 970 PRO, PCI-E 3.0 (x4), 512 GB - $200
Quadro P2200 video card, 5120 MB - $470
Ewell PCI-E 3.0 to M.2 SSD adapter (EW239) - $10
Server total = $1,435
I also planned a 1 TB SSD and a 10 Gb Ethernet adapter ($40), but it turned out the client had no UPS for their two existing servers, so I had to trim the configuration a bit and buy a PowerWalker VI 2200 RLE UPS for $350.
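As a quick sanity check on the budget, the figures above add up like this (a trivial sketch; the prices are the approximate amounts quoted in the list):

```python
# Approximate component prices in USD, as quoted in the list above
components = {
    "Cisco UCS C220 M3 v2, 128GB ECC": 625,
    "2x 3TB 3.5in SAS 7200": 130,
    "Samsung 970 PRO 512GB NVMe": 200,
    "Quadro P2200 5GB": 470,
    "Ewell EW239 PCI-E to M.2 adapter": 10,
}
server_total = sum(components.values())
print(server_total)        # 1435

ups = 350                  # PowerWalker VI 2200 RLE
print(server_total + ups)  # 1785 with the UPS
```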
Why a server and not a powerful PC? Justification for the chosen configuration.
Many short-sighted admins (I have run into this many times) buy a powerful, often gaming, PC, put 2-4 disks in it, create a RAID 1, proudly call it a server, and stick it in a corner of the office. The components are, naturally, a hodgepodge of dubious quality. So let me explain in detail why this particular configuration was chosen for this budget.
- Reliability!!! All server components are designed and tested to work for 5-10 years or more, whereas gaming motherboards last 3-5 years at a stretch, and for some models the failure rate within the warranty period exceeds 5%. Our server comes from the super-reliable Cisco brand, so no particular problems are expected, and their probability is an order of magnitude lower than with a desktop PC.
- Critical components such as power supplies are duplicated; ideally you feed them from two different power lines, and if one unit fails, the server keeps running.
- ECC memory. Few people now remember that ECC memory was originally introduced to correct single-bit errors, which arise mainly from cosmic rays; with 128 GB of memory, such an error can occur several times a year. On a desktop PC this shows up as a program crash or a freeze, which is not critical, but on a server the price of an error is sometimes very high (an incorrect write to a database, for example). In our case a serious glitch means a reboot, which can cost several people a day's work.
- Scalability. A company's need for resources often grows severalfold within a couple of years, and on a server it is easy to add disks or swap processors (in our case, the six-core E5-2620 v2 for the ten-core Xeon E5-2690 v2); a regular PC has almost no such headroom.
- 1U form factor. Servers belong in a server room, in compact racks, not dumping up to 1 kW of heat and making noise in a corner of the office! The company's new office happened to have a few rack units (3-6U) reserved in the server room, and one of them suited our server perfectly.
- Remote management and remote console. Without these, normal server maintenance, especially remote maintenance, is extremely difficult!
- 128 GB of RAM. The TOR specified 8-10 users, but in reality there will be 5-6 simultaneous sessions. Given typical peak memory consumption in this company of two heavy users at 30-40 GB each (about 70 GB) and four users at 3-15 GB each (about 36 GB), plus up to 10 GB for the OS, we get about 116 GB, leaving roughly 10% in reserve. And that is for rare cases of peak usage; if it ever proves insufficient, we can expand to 256 GB at any time.
- Quadro P2200 video card with 5120 MB. In that company, per-user video memory consumption in a remote session ranged from 0.3 GB to 1.5 GB, so 5 GB should be enough. The baseline data came from a similar but less powerful solution based on an i5 / 64 GB / Quadro P620 2 GB, which was sufficient for 3-4 users.
- Samsung 970 PRO M.2 2280 PCI-E 3.0 (x4) 512 GB SSD. For 8-10 simultaneous users it is exactly NVMe speed and Samsung SSD reliability that are needed. This disk holds the OS and the applications.
- 2x 3TB SAS, combined in RAID 1. Used for bulky or rarely accessed local user data, as well as for system backups and copies of critical data from the NVMe drive.
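The memory estimate from the RAM point above can be written out as a quick back-of-the-envelope check (the per-user figures are this company's own estimates, not universal values):

```python
# Peak RAM budget in GB for the expected 5-6 simultaneous sessions,
# using the per-user estimates from this deployment
heavy_users, heavy_gb = 2, 35   # two heavy users at 30-40 GB each -> ~70 GB
light_users, light_gb = 4, 9    # four users at 3-15 GB each -> ~36 GB
os_gb = 10                      # OS overhead

budget_gb = heavy_users * heavy_gb + light_users * light_gb + os_gb
installed_gb = 128
print(budget_gb)                                               # 116
print(round(100 * (installed_gb - budget_gb) / installed_gb))  # ~9% reserve
```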
The configuration was approved and purchased, and soon the moment of truth would come!
Build, configure, install, and troubleshoot.
From the very beginning I was not sure this was a 100% workable solution, since at any stage, from assembly to installation, launch and correct operation of the applications, I could get stuck with no way to continue. So I agreed that the server could be returned within a couple of days, and the other components could be reused in an alternative solution.
Problem 1 (far-fetched, as it turned out): a professional full-height video card. What if it is a couple of millimeters too big and does not fit? What if the PCI-E slot cannot supply its 75 W? And how do you properly dissipate those 75 W? But it fit, it started, and the cooling is fine (especially with the server fans set above medium speed). True, while installing it I bent something back a millimeter (I do not remember what exactly) to make sure nothing in the server shorted against the card, and after the final setup I peeled off the protective instruction film covering the entire lid, which could have degraded heat dissipation through the cover.
Problem 2: the NVMe disk might not be visible through the adapter, or the system might refuse to install there, or, once installed, refuse to boot. Oddly enough, Windows installed onto the NVMe disk but could not boot from it, which is logical, since the BIOS (even after an update) refused to recognize NVMe as a boot device. I did not want crutches, but I had to resort to them. Our beloved Habr came to the rescue: following a post about booting from an NVMe disk on legacy systems, I downloaded the Boot Disk Utility (BDUtility.exe), created a flash drive with the Clover boot manager according to the instructions from the post, and set the flash drive first in the BIOS boot order. Now the bootloader loads from the flash drive, Clover sees our NVMe disk, and after a couple of seconds it boots from it automatically! I could have played with installing Clover on our 3 TB RAID volume, but it was already Saturday evening and a day's work remained, because by Monday the server had to be either handed over or given up. I left the bootable flash drive inside the server; there happened to be a spare internal USB port.
Problem 3: a near failure. I installed Windows Server 2019 Standard plus the Remote Desktop services, installed the main application the whole project was started for, and everything worked wonderfully, it literally flew.
Wonderful! I go home, connect via RDP, the application starts, but with serious lag, and in the program I see the message "software mode is enabled". What?! I hunt for newer, more "professional" drivers for the video card: zero result. Older drivers, the ones for the P1000: also nothing. Meanwhile my inner voice keeps mocking: "I told you not to experiment with the newest card, take the P1000." It has long been night, and I go to bed with a heavy heart. On Sunday I go to the office, put a Quadro P620 in the server, and it does not work over RDP either. Microsoft, what is going on? I search the forums for "Server 2019 and RDP" and find the answer almost immediately.
It turns out that because most monitors now have high resolutions that the integrated graphics adapter on most servers does not support, hardware acceleration for RDP sessions is disabled by default through group policy. Here are the instructions for enabling it:
- Open the Edit Group Policy tool from Control Panel or use the Windows Search dialog (Windows Key + R, then type in gpedit.msc)
- Browse to: Local Computer Policy \ Computer Configuration \ Administrative Templates \ Windows Components \ Remote Desktop Services \ Remote Desktop Session Host \ Remote Session Environment
- Then enable “Use the hardware default graphics adapter for all Remote Desktop Services sessions”
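The same setting can also be applied directly through the registry, which is handy when scripting the build. To the best of my knowledge this group policy maps to the `bEnumerateHWBeforeSW` value shown below, but treat the exact value name as an assumption and verify it against `gpedit.msc` on your own system:

```shell
:: Assumed registry equivalent of the GPO above; run in an elevated
:: command prompt, then reboot for the change to take effect
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" ^
    /v bEnumerateHWBeforeSW /t REG_DWORD /d 1 /f
```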
Reboot, and everything works fine over RDP. Swapping back to the P2200: works again! Now that we are confident the solution is fully operational, we polish all the server settings, join the domain, configure user access and the rest, and rack the server in the server room. The whole team tests it for a couple of days: everything works perfectly, there are plenty of resources for all server tasks, and the minimal lag introduced by RDP is imperceptible to the users. Great, the task is 100% complete.
A few points on which the success of a graphics server implementation depends
Since pitfalls can appear at any stage of introducing a graphics server into an organization, creating a situation like the one in the picture with the escaped fish, at the planning stage you need to take a few simple steps:
- Target audience and tasks: users who work intensively with graphics and need hardware video acceleration. The success of our solution rests on the fact that the hardware requirements of graphics and CAD/CAM programs were already being met more than 10 years ago, so today we have a performance reserve exceeding the requirements tenfold or more. For example, the GPU power of the Quadro P2200 is enough for over 10 users, and even when video memory runs short, the card borrows from system RAM, and for a typical 3D designer the resulting small drop in memory speed goes unnoticed. But if users run intensive computing tasks (rendering, calculations, etc.) that regularly consume 100% of the resources, our solution is not suitable, since other users would be unable to work normally during those periods. So we carefully analyze the users' tasks and their current resource utilization (at least approximately). We also look at the volume of disk writes per day, and if it is large, we size server-grade SSDs or Optane disks for that volume.
- Based on the number of users, we select a server, video card and drives with adequate resources:
- processors by the formula of 1 core per user plus 2-3 for the OS, since at any given moment each user loads only one, or at most two, cores (when occasionally loading a model);
- video card: we look at the average video memory and GPU consumption per user in an RDP session and choose a professional (!) card accordingly;
- we do the same for RAM and the disk subsystem (nowadays even an NVMe RAID can be had inexpensively).
- We carefully check the server documentation (all branded servers have complete documentation) for connector compatibility, speeds, power and supported technologies, as well as the physical dimensions and heat dissipation limits for the additional components being installed.
- We verify that our software runs normally in several simultaneous RDP sessions, check for license restrictions, and carefully confirm that all necessary licenses are available. This question must be settled before the first implementation steps. As the respected malefix put it in the comments:
"- Licenses can be tied to the number of users - then you violate the license.
- The software may not work correctly with several running instances - if it writes garbage or settings not into the user profile/%temp% but into somewhere globally accessible in even one place, you will have a lot of fun hunting down the problem."
- We think through where the graphics server will be installed, not forgetting the UPS, the availability of high-speed Ethernet ports and Internet access (if needed), and compliance with the server's climatic requirements.
- We allow at least 2.5-3 weeks for the implementation, because delivery of even the smallest necessary components can take up to two weeks, and the assembly and configuration itself takes several days - even a normal server boot into the OS can take more than 5 minutes.
- We agree with management and the suppliers in advance that if the project stalls or goes wrong at some stage, we can return or exchange the hardware.
- Another useful tip from malefix's comments: after all the experiments with settings, wipe everything and install from scratch. Like this:
- during the experiments, document every critical setting;
- during the clean installation, redo only the minimum necessary settings (the ones you documented in the previous step).
- We install the operating system (preferably Windows Server 2019, which has high-quality RDP) in trial mode at first, but by no means the Evaluation edition (you would have to reinstall it from scratch later). Only after a successful launch do we settle the licensing questions and activate the OS.
- Before the rollout, we also pick an initiative group to test the setup, and explain the benefits of the graphics server to the future users. Doing this later increases the risk of complaints, sabotage and unjustified negative reviews.
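The sizing rules of thumb from the planning list above (one core per user plus a few for the OS, RAM and VRAM scaled by measured per-user averages) can be gathered into a small helper for a first-pass estimate. This is only a sketch of the article's heuristics, not a universal standard; the per-user averages must come from measuring your own users, and the function name is mine:

```python
def size_terminal_server(users, ram_per_user_gb, vram_per_user_gb,
                         os_cores=3, os_ram_gb=10):
    """First-pass terminal server sizing using the article's rules of
    thumb: 1 core per user plus 2-3 for the OS, and RAM/VRAM scaled by
    per-user averages measured in real RDP sessions."""
    return {
        "cores": users + os_cores,
        "ram_gb": users * ram_per_user_gb + os_ram_gb,
        "vram_gb": users * vram_per_user_gb,
    }

# Example: 10 users averaging 10 GB of RAM and 0.5 GB of video memory each
print(size_terminal_server(10, 10, 0.5))
# {'cores': 13, 'ram_gb': 110, 'vram_gb': 5.0}
```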
Working over RDP feels no different from working in a local session. You often forget you are on RDP at all: even video playback, and sometimes video calls, work in an RDP session without noticeable delays, since most people now have high-speed Internet. In RDP speed and functionality, Microsoft keeps pleasantly surprising: 3D hardware acceleration, multi-monitor support - everything users of graphics, 3D and CAD/CAM programs need for remote work!
So in many cases, deploying one graphics server is preferable, and more flexible, than 10 graphics workstations or PCs.
P.S. How to connect over the Internet via RDP easily and safely, along with optimal RDP client settings, is described in the article "Remote work in the office. RDP, Port Knocking, Mikrotik: Simple and Safe".