Posted by Joe Forani on October 01, 2014 at 13:04:06:
In Reply to: Speed issues running Comet32 on Windows Server 2012 R2 posted by Joe Forani on October 01, 2014 at 04:39:54:
Just in case anybody is interested or is having similar problems, I am posting the email conversation between myself, Jeff Johnson, and Julian Sutter as we try to resolve this issue. I will keep everyone posted. Joe
Yep, I’d bind the Ethernet ports anyway to provide a level of uplink redundancy, but it won’t really impact Comet’s performance unless something else is misconfigured. Comet is pretty bandwidth efficient unless you are dealing with VERY large datasets. For example, a Comet server on our infrastructure that supports about 40 full Comet HEAVY users averages less than 1 MB/s of continuous bandwidth utilization.
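To put the claim above in perspective, here is a quick back-of-envelope check (the 40 users and 1 MB/s figures are from the post; the gigabit link math is standard, not a measurement):

```python
# Back-of-envelope bandwidth check using the numbers quoted in the post.
users = 40            # full Comet HEAVY users on one server (from the post)
total_mb_per_s = 1.0  # continuous utilization upper bound (from the post)

per_user_kb_s = total_mb_per_s * 1024 / users          # ~25.6 KB/s per user
gigabit_kb_s = 1_000_000_000 / 8 / 1024                # ~122,070 KB/s per link
link_fraction = (total_mb_per_s * 1024) / gigabit_kb_s # fraction of one GbE link

print(f"~{per_user_kb_s:.1f} KB/s per user")
print(f"~{link_fraction:.2%} of a single gigabit link")
```

In other words, at these numbers the whole user population barely touches one percent of a single gigabit port, which is why bonding NICs mostly buys redundancy rather than speed here.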
Your IT guy is right to have reservations about SSDs. I think 80% of the localized outages we’ve had at Symbio in the last few years have been related to the failure of a consumer-grade SSD. That being said, once you understand why SSDs fail, and what you can do to prevent it, they become a literal IT miracle. If you decide to go that route, feel free to reach back out to me.
Thanks for the input. We are going to try the RAID 10 configuration and see where that takes us. Step 2 would be to try binding the NIC cards. Step 3 would be switching to a mirrored pair of SSD drives (Steve, our IT guy, has reservations about SSD drives - plus they are expensive).
We really appreciate your help and we'll let you know how we make out.
I figured I would jump in to give some advice. Jeff made some good suggestions, though MHz for MHz, the 2.2 GHz modern CPU should be significantly faster (even for Comet) than the old one. Each generation of CPU is on average about 15% more efficient per instruction cycle. Unless you are seeing Comet max out a single core in Task Manager, CPU is likely not your bottleneck.
My first guess would be storage. RAID 5 is a generally bad choice for Comet (or any I/O-intensive application server). Without going too deep: in a RAID 5 array, the parity write penalty limits you to roughly the effective IOPS (I/O operations per second) of a single drive. A single 15k spindle can handle around 180 IOPS, so with a RAID 5 stripe of n 15k spindles your effective write IOPS would still be around 180. By contrast, even the slowest consumer-grade SSDs can handle 40,000 IOPS.
My first suggestion would be to reconfigure your storage to RAID 10. This will effectively double the number of IOPS you are able to sustain, and my guess is you will see something like a 1.5x to 2x improvement in your benchmark. The next step up would be a mirrored pair of SSDs. If you decide to go that route, I can make some suggestions about what to buy, as the SSD world has some caveats.
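The RAID 5 vs. RAID 10 gap can be sketched with the usual rule-of-thumb write-penalty formulas (RAID 5 write penalty of 4, RAID 10 write penalty of 2 -- standard planning figures, not measurements; the 180 IOPS per 15k spindle is from the post):

```python
# Rule-of-thumb random-write IOPS for RAID levels. These are planning
# estimates, not measured numbers; real controllers and caches vary.

def raid5_write_iops(n_drives, per_drive=180):
    # RAID 5 write penalty is 4: read data, read parity,
    # write data, write parity for every logical write.
    return n_drives * per_drive / 4

def raid10_write_iops(n_drives, per_drive=180):
    # RAID 10 write penalty is 2: each write lands on both mirrors.
    return n_drives * per_drive / 2

print(raid5_write_iops(4))   # 4 x 15k drives in RAID 5  -> 180.0
print(raid10_write_iops(4))  # same 4 drives in RAID 10  -> 360.0
```

Note how a 4-drive RAID 5 array comes out at about the write IOPS of a single spindle (matching the claim above), while the same drives in RAID 10 roughly double it, which is where the 1.5x-2x benchmark estimate comes from.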
Hey there Joe,
Sorry I did not get back to you yesterday. Just to clarify: on Test4, you are running ONE THOUSAND records TEN times? When I run that, we get 0.66 seconds on the server and 4.27 seconds on the client. Our new test is 1000.100 and we get 6.66 seconds on the server and 44.29 seconds on the clients.
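For anyone who wants to reproduce this kind of comparison outside of Comet, here is a hypothetical Python analog of a Test4-style run: write a fixed number of records a fixed number of times and report wall-clock time. The record size, file name, and use of fsync are my assumptions, not anything from Comet itself:

```python
import os
import time

def bench(records=1000, passes=10, record_size=128, path="bench.dat"):
    """Write `records` fixed-size records, `passes` times, and time it.

    A rough stand-in for the Test4-style benchmark discussed above;
    all parameters here are assumptions for illustration.
    """
    payload = b"x" * record_size
    start = time.perf_counter()
    for _ in range(passes):
        with open(path, "wb") as f:
            for _ in range(records):
                f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each pass onto the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

print(f"{bench():.2f} s for 1000 records x 10 passes")
```

Run on the server console and then from a client against a network share, a sketch like this gives a crude feel for the server-vs-client gap the numbers above show, though it is not a substitute for the real Test4.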
Looking at your setup, everything looks pretty good. One thing I noticed when I was building my server was that Comet could not use multiple cores, so I found the fastest-clocked proc I could, an E5-1660 at 3.70 GHz. Your RAM is the same as I have set. RAID 5 may cause a slight slowdown, but 4 (15K) drives should be fine for IO. We opted to go RAID 10 with SSD, so our IO is insane. I also increased the network card count: we were running a single gigabit NIC on our old server, and on the new server I bonded 8 gigabit ports for increased throughput/redundancy.
The machine I am using for the server has the following specs:
HP DL380 Gen 8
Intel Xeon E5-1660v2 (3.70 GHz, 15M Cache, 130W)
16GB RDIMM, LVDR
Smart Array P420i RAID controller
6 x 100 GB SSD (RAID10)
2 x 4 port GbE (bonded for 8GbE)
The only difference I can really see between the old server and the new server is that you increased your network throughput but decreased your CPU speed. The easiest (most cost-effective) test would be to pick up another NIC and see if bonding them adds any speed for your clients. It will not affect TEST4 on the server, though. After speaking with a few people (Julian Sutter @ Symbio Systems), I determined the most important part of the server is CPU speed. I am surprised you did not see a drop in Test4 performance after going from 2.66 GHz to 2.2 GHz.
I wish I could be more help, but those are the hardware things I see. I do not think there was any software optimization that was done.