Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - deepblue

Pages: [1]
General Hardware Discussion / Re: 2u Blackbird Build with 18 cores?!
« on: April 29, 2020, 04:17:24 pm »
The two 40mm fans strapped to the heat sink are pushing quite a bit of air. There is positive airflow moving into the case, with an exit through the fan port I drilled in the lid. It's unlikely for hot air to linger near the heat sink. If this were in a non-ventilated environment, it would heat the space up very quickly.
Is there a definitive test that can determine whether there is throttling under high-temperature loads? I wouldn't know, given how low my temps and actual utilization are.
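The closest thing I know of is to watch the reported core clock while a load is running (a sketch; the sysfs path assumes the powernv-cpufreq interface on ppc64le, and may differ per kernel):

```shell
# Rough throttling check: sample the reported core clock while a load runs.
# FREQ_FILE path assumes the powernv-cpufreq sysfs interface on ppc64le.
FREQ_FILE="${FREQ_FILE:-/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq}"

sample_freq_mhz() {
    # scaling_cur_freq reports kHz; convert to whole MHz
    khz=$(cat "$FREQ_FILE")
    echo $((khz / 1000))
}

# During a stress run, a sustained drop below the nominal clock suggests
# throttling:
#   while true; do sample_freq_mhz; sleep 1; done
```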

I am using the model below. It looks like it's the 190W variant:

General Hardware Discussion / Re: 2u Blackbird Build with 18 cores?!
« on: April 28, 2020, 07:53:56 pm »
I am using the 18c chip that RaptorCS supplies:

I could not tell whether the chip was throttled during the test. I did try to load a few web pages from the container that hosts my website, but there was no noticeable difference in usability during the test. It was an artificial stress test, so I know that in this particular instance it was being nice to the rest of the system.

Applications and Porting / Re: Poor QEMU Guest Performance
« on: April 26, 2020, 02:52:42 pm »
The only other option I have used where I have seen a difference is qemu64, and that only tells RouterOS that it has moved from a VM to an emulated OS. I do have a huge list of Intel CPU types, but I have not seen any differences when using them.
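For reference, the full list of models a given build accepts can be dumped from the emulator itself (a sketch; assumes qemu-system-x86_64 is on the PATH, and model names vary by QEMU version):

```shell
# Dump the guest CPU models this QEMU build accepts.
# (assumes qemu-system-x86_64 is on the PATH; names vary by version)
list_cpu_models() {
    qemu-system-x86_64 -cpu help
}
# Usage: list_cpu_models | less
```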

General Hardware Discussion / Re: 2u Blackbird Build with 18 cores?!
« on: April 26, 2020, 08:35:20 am »
I neglected to state that I am using the Indium pad that was provided with the 2u heat sink. It is very much a required part of this setup.

I am running 4 containers with Golang/NPM/LAMP stacks and load balancing. Surprisingly, all of the LAMP/web-based software is super lightweight and noticeably quicker than when it ran on my old x86 server.

I am also running an x86_64 VM with Mikrotik's RouterOS, which was required as Mikrotik does not offer a ppc version of RouterOS as an ISO image.

I have never been able to 'naturally' push the CPU over 10% utilization, so I am pretty sure I am not even close to making a single CPU slice work hard, and I do not see any throttling. I keep this in an air-conditioned room that does not go over 72F, which definitely helps keep things under control.

I am constantly adding new services on the server, so hopefully I will have more real world numbers to report over the next few weeks and months.

Here are some temps when idle:

Here are some temps during a ~10 minute artificial stress test:
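The test itself was along these lines (a sketch; stress-ng and lm-sensors are assumptions on my part, any CPU burner and temperature reader would do):

```shell
# Burn all cores for N minutes, logging temperatures once a minute.
# stress-ng and lm-sensors are assumptions; any CPU burner will do.
stress_and_log() {
    minutes="${1:-10}"
    stress-ng --cpu "$(nproc)" --timeout "${minutes}m" &
    i=1
    while [ "$i" -le "$minutes" ]; do
        sensors 2>/dev/null | grep -iE 'core|temp'
        sleep 60
        i=$((i + 1))
    done
    wait
}
# Usage: stress_and_log 10
```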

Applications and Porting / Re: Poor QEMU Guest Performance
« on: April 26, 2020, 07:50:09 am »
I had a feeling that is what you meant. Here is the command I am using to launch the VM.

Code:
/usr/bin/qemu-system-x86_64 -name guest=The_Dude,debug-threads=on -S \
  -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-9-The_Dude/master-key.aes \
  -machine pc-i440fx-bionic,accel=tcg,usb=off,dump-guest-core=off \
  -cpu kvm32 -m 256 -realtime mlock=off \
  -smp 4,sockets=4,cores=1,threads=1 \
  -uuid cd817648-4846-42d4-936c-10a13375295a \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-The_Dude/monitor.sock,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-hpet -no-shutdown \
  -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
  -boot menu=on,strict=on \
  -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 \
  -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 \
  -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 \
  -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 \
  -drive file=/ssd_01/vms/The Dude.vdi,format=vdi,if=none,id=drive-ide0-1-0 \
  -device ide-hd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 \
  -drive file=/usb_01/samba/Software/mikrotik-6.45.3.iso,format=raw,if=none,id=drive-ide0-1-1,readonly=on \
  -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 \
  -netdev tap,fd=25,id=hostnet0 \
  -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:d3:2c:ea,bus=pci.0,addr=0x3 \
  -netdev tap,fd=28,id=hostnet1 \
  -device e1000,netdev=hostnet1,id=net1,mac=52:54:00:87:a4:b6,bus=pci.0,addr=0x6 \
  -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc,password \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -msg timestamp=on

Edit: I am also using the pre-built QEMU and KVM packages installed via apt-get.
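Worth noting from that command line: accel=tcg means the guest is fully emulated, since KVM can only accelerate same-architecture guests and an x86_64 guest on a POWER9 host gets no hardware assist. A quick way to check which accelerators a build supports (assuming the binary is on the PATH):

```shell
# Print the accelerators this QEMU build supports; on a ppc64le host an
# x86_64 guest can only use tcg (full emulation), never kvm.
list_accels() {
    qemu-system-x86_64 -accel help
}
```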

Blackbird / Re: Support for POWER9 22core?
« on: April 25, 2020, 09:10:59 am »
I made a post about my experiences using the 18c Power9 CPU on a Blackbird.

General Hardware Discussion / 2u Blackbird Build with 18 cores?!
« on: April 25, 2020, 09:09:46 am »
I wanted to provide a 'trip report' on the last ~6 months using the Blackbird as a 2u server platform.

The goal was to have a server with a similar footprint to a 1u/2u SuperMicro x86 rack-mount 'appliance'. I paired this with an 18-core Power9 CPU, as the logistical issues with procuring 8-core CPUs at the time were enough to twist my arm into spending another $1000. I really wanted in on this platform, logistics and TDPs be damned!

There are serious thermal concerns that need to be addressed when considering a case. I would not recommend buying a case that you are uncomfortable modifying, or that is not explicitly designed for CPUs with a TDP over 200 watts. RaptorCS does not recommend pairing Blackbird motherboards with CPUs that have a TDP over 160 watts. Since this configuration removes all thermal safety margins, I knew that moving airflow along a short path and attaching heat sinks to the VRM chips would be mandatory.

Here is a parts list of the server (with prices at the time of purchase):

Code:
Power9 - 18 Core CPU @ $1,570
Blackbird MicroATX Motherboard @ $999.00
2u Power9 Heatsink @ $80.00
2x 40x28mm High CFM fans @ $17.00
2x 32GB DDR4 ECC Memory @ $294.00
Rosewill 2u Server Chassis @ $85.00
2x Noctua 80mm case fan @ $20.00
Corsair CX Series 750W PSU @ $69.99
2x 256GB SanDisk 2.5” SSD Hard Drive @ $68.00
2x 2.5” to 3.5” Drive adapter @ ~ $10.00
6x 2TB Crucial 2.5” SSD Hard Drive @ $1337.88
ICY DOCK 6 x 2.5” SATA Cage @ $61.50
2x Noctua 40x10mm fans @ $28.00
LSI 9211-8I PCI-E SAS adapter @ $68.96
2x MiniSAS to SATA cable @ $24.00
Phanteks PWM Fan Hub Controller @ $19.99
Various cables and screws (included and acquired over the years)

The case I went with is the humble Rosewill 2U Server Chassis/Server Case/Rackmount Case. At $85 it was a great deal for what it provided. I was looking for a few specifics: low cost, support for a standard ATX power supply, and a detachable/modular internal disk tray design. But the most important feature was the 5.25” drive bay, as I was planning to install an ICY DOCK 6 x 2.5" SATA cage to allow easy disk access and a quick way to add disk density while allowing maximum airflow through the case.

Since I chose the low-cost route on this case, the lid required a modification. I drilled an 80mm hole into the corner near the CPU fan exhaust, which helps move hot, stale air away from the CPU cooler and brought temps down about 20C.

I removed the disk trays inside the chassis when I first unboxed the case, so there is quite a bit of room for air to flow freely. I am able to maintain an idle temperature of around 40 Celsius, and a max of around 75 Celsius under artificial stress tests. The disks never go over 40 Celsius, and the noise is... not bad! The 2x 40mm high-CFM fans sound like sitting inside an idle jet aircraft with headphones on. That is fine, though, as the server is stored in a room where the noise won't distract anyone.

Not pictured are the heat sinks on the VRM chips; I am using the Raspberry Pi heat sinks linked below.

I am still playing around with the idea of retrofitting a different case with improved airflow, but the way the CPU and RAM are aligned makes it tricky to route a cool -> hot airflow path. I know my CPU utilization is quite low, so your mileage may vary on thermals during long periods of real-world load. All of the fans are centrally controlled by the Phanteks controller, as the motherboard's fan control is not good enough for what I am attempting.

I hope this helps others considering 2u options for PowerISA servers. Let me know if there is anything I should try to help keep temps under control.

Applications and Porting / Re: Poor QEMU Guest Performance
« on: April 24, 2020, 05:51:24 pm »
I am not 100% sure what you mean by command line, but the host is running Ubuntu 18.04, and the guest is running Mikrotik RouterOS.

Applications and Porting / Poor QEMU Guest Performance
« on: April 23, 2020, 01:34:41 pm »
I am trying to set up an x86_64 QEMU guest to run on my Blackbird, and I am running into a performance issue.

The guest can see all of the processors I allocate to it; however, the CPU speed is capped at 500MHz and the guest is relatively slow. All of the CPUs the guest sees are at 100% utilization.

The host reports a fraction of the guest CPU utilization, based on how many threads I give it. 4 threads = ~25% host utilization, 8 threads = ~12%. I am using virt-manager to manage the VMs and to see guest VM utilization.
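In case it helps with diagnosis, here is one way I can look at what the emulator threads are doing host-side (a sketch; pidstat comes from the sysstat package, and the process name is an assumption):

```shell
# Per-thread CPU usage of the running QEMU process.
# (pidstat is from sysstat; the process name is an assumption)
qemu_thread_usage() {
    pid="$(pgrep -f qemu-system-x86_64 | head -n 1)"
    [ -n "$pid" ] || { echo "no qemu process found" >&2; return 1; }
    pidstat -t -p "$pid" 1 5   # 1-second interval, 5 samples
}
```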

Any insight on this would be helpful and appreciated.

Applications and Porting / Re: yQuake2 runs perfectly :D
« on: April 23, 2020, 09:23:20 am »
Do you have a quick video or FPS stats? I would love to see this in action!
