
2u Blackbird Build with 18 cores?!


deepblue:
I wanted to provide a 'trip report' on the last ~6 months of using the Blackbird as a 2U server platform.

The goal was to have a server with a similar footprint to a 1U/2U SuperMicro x86 rack mount 'appliance'. I paired this with an 18-core POWER9 CPU, as the logistical issues with procuring 8-core CPUs at the time were enough to twist my arm into spending another $1000. I really wanted in on this platform, logistics and TDPs be damned!

There are serious thermal concerns that need to be addressed when considering a case. I would not recommend buying a case that you are uncomfortable with modifying, or one that is not explicitly designed for CPUs with a TDP of over 200 watts. RaptorCS does not recommend pairing Blackbird motherboards with CPUs that have a TDP of over 160 watts. Since this configuration removes all thermal safety margins, I knew that keeping the airflow path short and attaching heat sinks to the VRM chips would be mandatory.

Here is a parts list of the server (with prices at the time of purchase):


--- Code: ---POWER9 18-core CPU @ $1,570.00
Blackbird MicroATX motherboard @ $999.00
2U POWER9 heatsink @ $80.00
2x 40x28mm high-CFM fans @ $17.00
2x 32GB DDR4 ECC memory @ $294.00
Rosewill 2U server chassis @ $85.00
2x Noctua 80mm case fans @ $20.00
Corsair CX Series 750W PSU @ $69.99
2x 256GB SanDisk 2.5" SSDs @ $68.00
2x 2.5" to 3.5" drive adapters @ ~$10.00
6x 2TB Crucial 2.5" SSDs @ $1,337.88
ICY DOCK 6x 2.5" SATA cage @ $61.50
2x Noctua 40x10mm fans @ $28.00
LSI 9211-8i PCIe SAS adapter @ $68.96
2x MiniSAS to SATA cables @ $24.00
Phanteks PWM fan hub controller @ $19.99
Various cables and screws (included and acquired over the years)

--- End code ---
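
For anyone pricing this out: assuming each line above is the total for that line (i.e. the 2x and 6x items are priced as a set), the whole build works out to roughly $4,750 before tax and shipping, with the CPU and the SSDs making up the bulk of that.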

The case I went with is the humble Rosewill 2U Server Chassis/Server Case/Rackmount Case https://www.amazon.com/gp/product/B01F7RJPHO/. At $85 it was a great deal for what it provided. I was looking for a few specifics: low cost, support for a standard ATX power supply, and a detachable/modular internal disk tray design. The most important feature, though, was the 5.25" drive bay, since I was planning on installing an ICY DOCK 6x 2.5" SATA cage for easy disk access and a quick way to add disk density while still allowing maximum airflow through the case.



Since I chose to go the low cost route on this case, the lid required a modification. I drilled an 80mm hole into the corner near the CPU fan exhaust, which helps move hot, stale air away from the CPU cooler and brought temps down by about 20°C.



I removed the disk trays inside the chassis when I first unboxed the case, so there is quite a bit of room for air to flow freely. I am able to maintain an idle temperature of around 40°C, and a max temperature of around 75°C under artificial stress tests. The disks never go over 40°C, and the noise is... not bad! The 2x 40mm high-CFM fans make it sound like sitting inside an idle jet aircraft with headphones on. This is fine, however, as the server lives in a room where the noise is not going to be a distraction.
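
If anyone wants to log temps the same way during a stress run, the kernel's hwmon sysfs nodes are enough on their own. Here is a rough Python sketch that just polls them once a second and prints the hottest sensors; it is generic rather than specific to my box, so treat it as a starting point:


--- Code: ---#!/usr/bin/env python3
"""Poll hwmon temperature sensors once a second and print the hottest ones."""
import glob
import os
import time

def read_temps():
    readings = []
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(path) as f:
                millideg = int(f.read().strip())
        except (OSError, ValueError):
            continue  # sensor missing or unreadable; skip it
        # Prefer the human-readable label if the driver exports one.
        label_file = path.replace("_input", "_label")
        try:
            with open(label_file) as f:
                label = f.read().strip()
        except OSError:
            label = os.path.basename(os.path.dirname(path)) + "/" + os.path.basename(path)
        readings.append((label, millideg / 1000.0))
    return readings

while True:
    hottest = sorted(read_temps(), key=lambda r: r[1], reverse=True)[:5]
    print(" | ".join(f"{name}: {temp:.1f}C" for name, temp in hottest))
    time.sleep(1)

--- End code ---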



Not pictured are the heatsinks on the VRM chips; I am using the Raspberry Pi heat sinks linked below.

https://www.amazon.com/Easycargo-Heatsink-conductive-Regulators-8-8mmx8-8mmx5mm/dp/B079FQ22LK

I am still playing around with the idea of retrofitting a different case with improved airflow, but the way the CPU and RAM are aligned makes it a bit tricky to route a cool -> hot airflow path. I know that my CPU utilization is quite low, so your mileage may vary when it comes to thermals during sustained real-world workloads. All of the fans are centrally controlled by the Phanteks hub, as the motherboard's fan control is not good enough for what I am attempting.

I hope this helps others considering 2U options for PowerISA servers. Let me know if there is anything you think I should try to keep temps under control.

ClassicHasClass:
That is indeed pretty bleeding edge, and I'm impressed by the lengths you went to. I'm leery of how well that would work under load, however (or for how long). What types of tasks is it doing? How does it perform if the CPU slices are loaded? Do you notice any throttling behaviour?

deepblue:
I did neglect to state that I am using the indium pad that was provided with the 2U heat sink. It is very much a required part of this setup.

I am running 4 containers with Golang/NPM/LAMP stacks behind load balancing. Surprisingly, all the LAMP/web-based software is super lightweight and noticeably quicker than when it ran on my old x86 server.

I am also running an x86_64 VM with MikroTik's RouterOS, which was required because MikroTik does not provide a ppc version of RouterOS as an ISO image.

I have never been able to 'naturally' push the CPU over 10% utilization, so I am pretty sure I am not even close to making a single CPU slice work hard, and I do not see any throttling. I keep the server in an air-conditioned room that does not go over 72°F, which definitely helps keep things under control.
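
If anyone wants to generate the same sort of artificial load to compare temps, the idea is just one busy loop per hardware thread. Here is a rough Python sketch of that; it is not the exact tool I used, and the duration is only a placeholder:


--- Code: ---#!/usr/bin/env python3
"""Spin one busy-loop worker per hardware thread for a fixed duration."""
import multiprocessing
import time

DURATION_SECONDS = 600  # placeholder: roughly the length of a stress run

def burn(seconds):
    # Pointless integer math just to keep the hardware thread busy.
    end = time.time() + seconds
    x = 1
    while time.time() < end:
        x = (x * 1103515245 + 12345) % (2 ** 31)

if __name__ == "__main__":
    threads = multiprocessing.cpu_count()  # e.g. 72 on an 18-core SMT4 POWER9
    print(f"Loading {threads} hardware threads for {DURATION_SECONDS} seconds")
    workers = [multiprocessing.Process(target=burn, args=(DURATION_SECONDS,))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

--- End code ---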

I am constantly adding new services on the server, so hopefully I will have more real world numbers to report over the next few weeks and months.

Here are some temps when idle:



Here are some temps during a ~10 minute artificial stress test:


surf:
Thanks for sharing the details of your build! Which CPU are you using? The datasheet describes a 130W 18-core version and a 190W version. It seems like the 130W CPU should not stress the Blackbird board. In your stress test, could you tell if the speed was 'throttled' as it got hot?

deepblue:
I am using the 18c chip that RaptorCS supplies:

https://www.raptorcs.com/content/CP9M06/intro.html

I could not tell if the chip was throttled during the test. I did try to load a few web pages from the container that hosts my website, and there was no noticeable difference in usability during the test. It was an artificial stress test, though, so I know that in this particular instance it was being nice to the rest of the system.
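
Next time I will probably just sample the reported clock speed while the stress test runs and see whether it drops. Something along these lines should do it, assuming the cpufreq sysfs nodes are exposed by your kernel (I have not verified this on every distro):


--- Code: ---#!/usr/bin/env python3
"""Sample the reported per-CPU clock every few seconds to spot frequency throttling."""
import glob
import time

def current_mhz():
    freqs = []
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"):
        try:
            with open(path) as f:
                freqs.append(int(f.read().strip()) / 1000.0)  # reported in kHz
        except (OSError, ValueError):
            continue
    return freqs

while True:
    freqs = current_mhz()
    if freqs:
        print(f"{len(freqs)} threads: min {min(freqs):.0f} MHz, max {max(freqs):.0f} MHz")
    else:
        print("no cpufreq data exposed by this kernel")
    time.sleep(5)

--- End code ---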
