Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - amock

Pages: [1] 2
1
Is there any update on this?  I updated my firmware and this is quite the downgrade.

2
General CPU Discussion / Re: Byte Magazine Unix benchmarking
« on: June 06, 2024, 04:00:15 pm »
Talos II Gentoo

Code:
Architecture:           ppc64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Big Endian
CPU(s):                 144
  On-line CPU(s) list:  0-143
Model name:             POWER9, altivec supported
  Model:                2.2 (pvr 004e 1202)
  Thread(s) per core:   4
  Core(s) per socket:   18
  Socket(s):            2
  Frequency boost:      enabled
  CPU(s) scaling MHz:   58%
  CPU max MHz:          3800.0000
  CPU min MHz:          2154.0000
Caches (sum of all):
  L1d:                  1.1 MiB (36 instances)
  L1i:                  1.1 MiB (36 instances)
  L2:                   10 MiB (20 instances)
  L3:                   200 MiB (20 instances)
NUMA:
  NUMA node(s):         2
  NUMA node0 CPU(s):    0-71
  NUMA node8 CPU(s):    72-143
Vulnerabilities:
  Gather data sampling: Not affected
  Itlb multihit:        Not affected
  L1tf:                 Mitigation; RFI Flush, L1D private per thread
  Mds:                  Not affected
  Meltdown:             Mitigation; RFI Flush, L1D private per thread
  Mmio stale data:      Not affected
  Retbleed:             Not affected
  Spec rstack overflow: Not affected
  Spec store bypass:    Mitigation; Kernel entry/exit barrier (eieio)
  Spectre v1:           Mitigation; __user pointer sanitization, ori31 speculation barrier enabled
  Spectre v2:           Mitigation; Indirect branch serialisation (kernel only)
  Srbds:                Not affected
  Tsx async abort:      Not affected

Code:
   #    #  #    #  #  #    #          #####   ######  #    #   ####   #    #
   #    #  ##   #  #   #  #           #    #  #       ##   #  #    #  #    #
   #    #  # #  #  #    ##            #####   #####   # #  #  #       ######
   #    #  #  # #  #    ##            #    #  #       #  # #  #       #    #
   #    #  #   ##  #   #  #           #    #  #       #   ##  #    #  #    #
    ####   #    #  #  #    #          #####   ######  #    #   ####   #    #

   Version 5.1.3                      Based on the Byte Magazine Unix Benchmark

   Multi-CPU version                  Version 5 revisions by Ian Smith,
                                      Sunnyvale, CA, USA
   January 13, 2011                   johantheghost at yahoo period com

------------------------------------------------------------------------------
   Use directories for:
      * File I/O tests (named fs***) = /ramtmp/byte-unixbench/UnixBench/tmp
      * Results                      = /ramtmp/byte-unixbench/UnixBench/results
------------------------------------------------------------------------------


1 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

1 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

1 x Execl Throughput  1 2 3

1 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

1 x File Copy 256 bufsize 500 maxblocks  1 2 3

1 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

1 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

1 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

1 x Process Creation  1 2 3

1 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

1 x Shell Scripts (1 concurrent)  1 2 3

1 x Shell Scripts (8 concurrent)  1 2 3

144 x Dhrystone 2 using register variables  1 2 3 4 5 6 7 8 9 10

144 x Double-Precision Whetstone  1 2 3 4 5 6 7 8 9 10

144 x Execl Throughput  1 2 3

144 x File Copy 1024 bufsize 2000 maxblocks  1 2 3

144 x File Copy 256 bufsize 500 maxblocks  1 2 3

144 x File Copy 4096 bufsize 8000 maxblocks  1 2 3

144 x Pipe Throughput  1 2 3 4 5 6 7 8 9 10

144 x Pipe-based Context Switching  1 2 3 4 5 6 7 8 9 10

144 x Process Creation  1 2 3

144 x System Call Overhead  1 2 3 4 5 6 7 8 9 10

144 x Shell Scripts (1 concurrent)  1 2 3

144 x Shell Scripts (8 concurrent)  1 2 3

========================================================================
   BYTE UNIX Benchmarks (Version 5.1.3)

   System: gentoobe: GNU/Linux
   OS: GNU/Linux -- 6.5.0gentoobe -- #37 SMP Sun Nov 12 19:59:47 UTC 2023
   Machine: ppc64 (PowerNV T2P9D01 REV 1.00)
   Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
   19:55:48 up 31 days,  5:02,  2 users,  load average: 0.13, 0.84, 7.00; runlevel 2024-05-06

------------------------------------------------------------------------
Benchmark Run: Thu Jun 06 2024 19:55:48 - 20:23:54
144 CPUs in system; running 1 parallel copy of tests

Dhrystone 2 using register variables       42751196.0 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                     5171.1 MWIPS (9.8 s, 7 samples)
Execl Throughput                               3417.1 lps   (30.0 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks        726116.8 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks          191752.7 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks       2144675.1 KBps  (30.0 s, 2 samples)
Pipe Throughput                             1010071.8 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                 107060.2 lps   (10.0 s, 7 samples)
Process Creation                               6845.4 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                   5133.7 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                   4607.8 lpm   (60.0 s, 2 samples)
System Call Overhead                         761520.7 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0   42751196.0   3663.3
Double-Precision Whetstone                       55.0       5171.1    940.2
Execl Throughput                                 43.0       3417.1    794.7
File Copy 1024 bufsize 2000 maxblocks          3960.0     726116.8   1833.6
File Copy 256 bufsize 500 maxblocks            1655.0     191752.7   1158.6
File Copy 4096 bufsize 8000 maxblocks          5800.0    2144675.1   3697.7
Pipe Throughput                               12440.0    1010071.8    812.0
Pipe-based Context Switching                   4000.0     107060.2    267.7
Process Creation                                126.0       6845.4    543.3
Shell Scripts (1 concurrent)                     42.4       5133.7   1210.8
Shell Scripts (8 concurrent)                      6.0       4607.8   7679.6
System Call Overhead                          15000.0     761520.7    507.7
                                                                   ========
System Benchmarks Index Score                                        1229.9

------------------------------------------------------------------------
Benchmark Run: Thu Jun 06 2024 20:23:54 - 20:52:15
144 CPUs in system; running 144 parallel copies of tests

Dhrystone 2 using register variables     1301566752.7 lps   (10.0 s, 7 samples)
Double-Precision Whetstone                   361620.3 MWIPS (9.2 s, 7 samples)
Execl Throughput                              45703.9 lps   (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks      32471136.0 KBps  (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks         9562904.3 KBps  (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks      36136175.5 KBps  (30.0 s, 2 samples)
Pipe Throughput                            47642631.4 lps   (10.0 s, 7 samples)
Pipe-based Context Switching                5996735.7 lps   (10.0 s, 7 samples)
Process Creation                              74604.5 lps   (30.0 s, 2 samples)
Shell Scripts (1 concurrent)                 170218.4 lpm   (60.0 s, 2 samples)
Shell Scripts (8 concurrent)                  23748.9 lpm   (60.2 s, 2 samples)
System Call Overhead                       50438100.4 lps   (10.0 s, 7 samples)

System Benchmarks Index Values               BASELINE       RESULT    INDEX
Dhrystone 2 using register variables         116700.0 1301566752.7 111531.0
Double-Precision Whetstone                       55.0     361620.3  65749.2
Execl Throughput                                 43.0      45703.9  10628.8
File Copy 1024 bufsize 2000 maxblocks          3960.0   32471136.0  81997.8
File Copy 256 bufsize 500 maxblocks            1655.0    9562904.3  57781.9
File Copy 4096 bufsize 8000 maxblocks          5800.0   36136175.5  62303.8
Pipe Throughput                               12440.0   47642631.4  38297.9
Pipe-based Context Switching                   4000.0    5996735.7  14991.8
Process Creation                                126.0      74604.5   5921.0
Shell Scripts (1 concurrent)                     42.4     170218.4  40145.9
Shell Scripts (8 concurrent)                      6.0      23748.9  39581.6
System Call Overhead                          15000.0   50438100.4  33625.4
                                                                   ========
System Benchmarks Index Score                                       35625.3

3
Talos II / Re: Ubuntu 20.04 5.4.0-167: kexec load failed
« on: November 28, 2023, 09:00:04 am »
I don't know why that version doesn't boot, but I'm running 22.04 with 5.15.0-89-generic and haven't had any kernel trouble.

4
Applications and Porting / Re: IDE's w/remote dev
« on: October 09, 2023, 08:55:00 pm »
Have you tried Eclipse with Remote System Explorer (https://marketplace.eclipse.org/content/remote-system-explorer-ssh-telnet-ftp-and-dstore-protocols)?  I haven't used it, but I think it might do what you want.

5
Does anyone know what happened to the Hardware Assisted Garbage Collection feature that was supposed to be in the POWER9 CPUs?  It is in the v3.0 spec but not in v3.0B, so the documentation was removed, but I can't find any mention of the removal.  It's also mentioned in the news from around the time of its release.  I'm guessing it just didn't make it into the final silicon for some reason, but I'd love to know if a reason was ever given.

6
General CPU Discussion / Re: Asymmetric CPUs with the Same Core Count
« on: October 01, 2022, 09:28:06 pm »
Do you mean the timebase register? It's not equivalent to the time-stamp counter on x86(_64), but it should be usable for similar purposes.

https://www.gnu.org/software/libc/manual/html_node/PowerPC.html
Yes, thanks.  I couldn't remember the name, but it looks easy to use and good enough.

7
General CPU Discussion / Re: Asymmetric CPUs with the Same Core Count
« on: September 30, 2022, 10:36:29 pm »
Do you, or anyone, know if anyone has made a similar type of measurement of memory latency, including caching effects, from a single core to all memory in the system?

I don't know of any, but I see it done regularly for x86 systems, so there might be something out there that's easily adaptable to other systems.  On x86 many benchmarking tools seem to use RDTSC, and I haven't found an exact equivalent for the Power ISA, but there seems to be a 500MHz counter, at least on POWER9, that I might try if I can find some simple code.  It's something I've thought about but haven't gotten around to yet.

8
General CPU Discussion / Re: Asymmetric CPUs with the Same Core Count
« on: September 29, 2022, 10:29:50 pm »
The darkest purple is a core pair, which shares L2 and L3 cache.  I'm guessing that the lighter purple is neighboring cores, but I don't know enough about the CPU to say for sure.  The graph measures how long it takes to send a message between cores, so I don't think it would ever go out to RAM within a single CPU; the other purplish area is just cores in the same CPU that are far away, and the orange is cores on the other CPU.  There's a die picture at https://web.archive.org/web/20190325062541/https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/61ad9cf2-c6a3-4d2c-b779-61ff0266d32a/page/1cb956e8-4160-4bea-a956-e51490c2b920/attachment/56cea2a9-a574-4fbb-8b2c-675432367250/media/POWER9-VUG.pdf that I'm using to inform my speculation.

The small dark purple boxes just down and to the right of center are cores that have their own L2 and L3 cache instead of being paired, so those show just the 4 threads of a single core instead of the 8 that the paired cores have.  If you ran this on a 4-core or 8-core machine it should show all small boxes.  Also, some of the cores might not have a neighboring core, since only 18 of a possible 24 cores are enabled.  There's also some error in the measurements, since it doesn't use a cycle-counting mechanism like on x86.

9
General CPU Discussion / Re: Asymmetric CPUs with the Same Core Count
« on: September 20, 2022, 09:19:14 pm »
If you don't want the bonus cache, I am sure someone would trade CPUs with you.  ;-)
I'm very happy with my Talos II :D

How do you read the purple and orange plot?

The darker purple is fastest, going up through light purple, then red, orange, and yellow.  There's a legend on the right side of the image, but you might have to scroll to see it.

10
General CPU Discussion / Re: Asymmetric CPUs with the Same Core Count
« on: September 20, 2022, 11:44:05 am »
I'm pretty sure nothing was guarded out.  I just cleared it now, and I've cleared out the guards before; it has always been like this.  The 8-core and smaller parts all have 10MB of cache per core, whereas the 18-core and larger have 10MB of cache per core pair.  So it seems like one of my CPUs has cores with unpaired cache (with 11 10MB caches, like the 22-core CPU would have) and the other has just the paired caches (9 10MB caches), as normal.

I think this also contributes to an issue I had when I was overclocking.  I used the overclock from the Raptor repos and sometimes it would just suddenly die when under very heavy load.  My guess is that the CPU with 20MB of extra cache drew more power than it should have and tripped something.

11
General CPU Discussion / Asymmetric CPUs with the Same Core Count
« on: September 18, 2022, 09:35:19 pm »
I recently saw https://github.com/rigtorp/c2clat and ran it on my dual 18-core machine and got an interesting result.  I've attached an image of it, and the part that seems strange is the bottom left quadrant next to the origin where the pattern doesn't match what's in the top right quadrant.  It reminded me that I had earlier looked at my CPU caches and one seemed to have more than the other.

Code:
Package L#0
      L3 L#0 (10MB) + L2 L#0 (512KB)
        L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
          PU L#0 (P#0)
          PU L#1 (P#1)
          PU L#2 (P#2)
          PU L#3 (P#3)
        L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
          PU L#4 (P#4)
          PU L#5 (P#5)
          PU L#6 (P#6)
          PU L#7 (P#7)
      L3 L#1 (10MB) + L2 L#1 (512KB)
        L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
          PU L#8 (P#8)
          PU L#9 (P#9)
          PU L#10 (P#10)
          PU L#11 (P#11)
        L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
          PU L#12 (P#12)
          PU L#13 (P#13)
          PU L#14 (P#14)
          PU L#15 (P#15)
      L3 L#2 (10MB) + L2 L#2 (512KB)
        L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4
          PU L#16 (P#16)
          PU L#17 (P#17)
          PU L#18 (P#18)
          PU L#19 (P#19)
        L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5
          PU L#20 (P#20)
          PU L#21 (P#21)
          PU L#22 (P#22)
          PU L#23 (P#23)
      L3 L#3 (10MB) + L2 L#3 (512KB)
        L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6
          PU L#24 (P#24)
          PU L#25 (P#25)
          PU L#26 (P#26)
          PU L#27 (P#27)
        L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7
          PU L#28 (P#28)
          PU L#29 (P#29)
          PU L#30 (P#30)
          PU L#31 (P#31)
      L3 L#4 (10MB) + L2 L#4 (512KB)
        L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
          PU L#32 (P#32)
          PU L#33 (P#33)
          PU L#34 (P#34)
          PU L#35 (P#35)
        L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
          PU L#36 (P#36)
          PU L#37 (P#37)
          PU L#38 (P#38)
          PU L#39 (P#39)
      L3 L#5 (10MB) + L2 L#5 (512KB)
        L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
          PU L#40 (P#40)
          PU L#41 (P#41)
          PU L#42 (P#42)
          PU L#43 (P#43)
        L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
          PU L#44 (P#44)
          PU L#45 (P#45)
          PU L#46 (P#46)
          PU L#47 (P#47)
      L3 L#6 (10MB) + L2 L#6 (512KB)
        L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12
          PU L#48 (P#48)
          PU L#49 (P#49)
          PU L#50 (P#50)
          PU L#51 (P#51)
        L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13
          PU L#52 (P#52)
          PU L#53 (P#53)
          PU L#54 (P#54)
          PU L#55 (P#55)
      L3 L#7 (10MB) + L2 L#7 (512KB) + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14
        PU L#56 (P#56)
        PU L#57 (P#57)
        PU L#58 (P#58)
        PU L#59 (P#59)
      L3 L#8 (10MB) + L2 L#8 (512KB) + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15
        PU L#60 (P#60)
        PU L#61 (P#61)
        PU L#62 (P#62)
        PU L#63 (P#63)
      L3 L#9 (10MB) + L2 L#9 (512KB) + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16
        PU L#64 (P#64)
        PU L#65 (P#65)
        PU L#66 (P#66)
        PU L#67 (P#67)
      L3 L#10 (10MB) + L2 L#10 (512KB) + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17
        PU L#68 (P#68)
        PU L#69 (P#69)
        PU L#70 (P#70)
        PU L#71 (P#71)
compared to
Code:
Package L#1
      L3 L#11 (10MB) + L2 L#11 (512KB)
        L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18
          PU L#72 (P#72)
          PU L#73 (P#73)
          PU L#74 (P#74)
          PU L#75 (P#75)
        L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19
          PU L#76 (P#76)
          PU L#77 (P#77)
          PU L#78 (P#78)
          PU L#79 (P#79)
      L3 L#12 (10MB) + L2 L#12 (512KB)
        L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20
          PU L#80 (P#80)
          PU L#81 (P#81)
          PU L#82 (P#82)
          PU L#83 (P#83)
        L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21
          PU L#84 (P#84)
          PU L#85 (P#85)
          PU L#86 (P#86)
          PU L#87 (P#87)
      L3 L#13 (10MB) + L2 L#13 (512KB)
        L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22
          PU L#88 (P#88)
          PU L#89 (P#89)
          PU L#90 (P#90)
          PU L#91 (P#91)
        L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23
          PU L#92 (P#92)
          PU L#93 (P#93)
          PU L#94 (P#94)
          PU L#95 (P#95)
      L3 L#14 (10MB) + L2 L#14 (512KB)
        L1d L#24 (32KB) + L1i L#24 (32KB) + Core L#24
          PU L#96 (P#96)
          PU L#97 (P#97)
          PU L#98 (P#98)
          PU L#99 (P#99)
        L1d L#25 (32KB) + L1i L#25 (32KB) + Core L#25
          PU L#100 (P#100)
          PU L#101 (P#101)
          PU L#102 (P#102)
          PU L#103 (P#103)
      L3 L#15 (10MB) + L2 L#15 (512KB)
        L1d L#26 (32KB) + L1i L#26 (32KB) + Core L#26
          PU L#104 (P#104)
          PU L#105 (P#105)
          PU L#106 (P#106)
          PU L#107 (P#107)
        L1d L#27 (32KB) + L1i L#27 (32KB) + Core L#27
          PU L#108 (P#108)
          PU L#109 (P#109)
          PU L#110 (P#110)
          PU L#111 (P#111)
      L3 L#16 (10MB) + L2 L#16 (512KB)
        L1d L#28 (32KB) + L1i L#28 (32KB) + Core L#28
          PU L#112 (P#112)
          PU L#113 (P#113)
          PU L#114 (P#114)
          PU L#115 (P#115)
        L1d L#29 (32KB) + L1i L#29 (32KB) + Core L#29
          PU L#116 (P#116)
          PU L#117 (P#117)
          PU L#118 (P#118)
          PU L#119 (P#119)
      L3 L#17 (10MB) + L2 L#17 (512KB)
        L1d L#30 (32KB) + L1i L#30 (32KB) + Core L#30
          PU L#120 (P#120)
          PU L#121 (P#121)
          PU L#122 (P#122)
          PU L#123 (P#123)
        L1d L#31 (32KB) + L1i L#31 (32KB) + Core L#31
          PU L#124 (P#124)
          PU L#125 (P#125)
          PU L#126 (P#126)
          PU L#127 (P#127)
      L3 L#18 (10MB) + L2 L#18 (512KB)
        L1d L#32 (32KB) + L1i L#32 (32KB) + Core L#32
          PU L#128 (P#128)
          PU L#129 (P#129)
          PU L#130 (P#130)
          PU L#131 (P#131)
        L1d L#33 (32KB) + L1i L#33 (32KB) + Core L#33
          PU L#132 (P#132)
          PU L#133 (P#133)
          PU L#134 (P#134)
          PU L#135 (P#135)
      L3 L#19 (10MB) + L2 L#19 (512KB)
        L1d L#34 (32KB) + L1i L#34 (32KB) + Core L#34
          PU L#136 (P#136)
          PU L#137 (P#137)
          PU L#138 (P#138)
          PU L#139 (P#139)
        L1d L#35 (32KB) + L1i L#35 (32KB) + Core L#35
          PU L#140 (P#140)
          PU L#141 (P#141)
          PU L#142 (P#142)
          PU L#143 (P#143)

Does anyone else have asymmetric CPUs with the same core count?  For people with 18-core CPUs, what cache configuration do you have?

12
Talos II / Re: CPU Throttling
« on: September 18, 2022, 09:16:52 pm »
Thanks for your suggestions.  I've been waiting for it to happen again, but so far it hasn't.  When it does, I'll see what I can get from those.

13
Talos II / Re: CPU Throttling
« on: September 02, 2022, 11:05:40 am »
Dual 18-core with the standard HSFs and indium pads.  I never saw temperatures above 83°C, but I wasn't logging them, so I only looked a few times.  I noticed that when this happens the CPU power usage is also stuck high, which will probably keep the temperatures high and require the fans.  I just don't know why it wouldn't throttle the power back down once the CPUs are idle.

14
Talos II / CPU Throttling
« on: September 01, 2022, 05:35:34 pm »
Sometimes when my Talos II is under sustained heavy load I'll get the following messages in dmesg, and afterwards my CPUs won't throttle up and my fans are stuck at full power.  Has anyone else run into this, or does anyone know how to reset it?  Currently I have to restart to get things back to normal.


[14289.353665] powernv-cpufreq: Pstate set to safe frequency
[14289.353673] powernv-cpufreq: PMSR = 6363631440000000
[14289.353677] powernv-cpufreq: CPU Frequency could be throttled

15
Talos II / Re: different memory sizes on each CPU in multi-CPU systems?
« on: September 01, 2022, 05:31:22 pm »
The User's Guide has RAM installation tables showing that you can have any number of DIMMs between 1 and 16, so it is possible to have different numbers of DIMMs on different processors.  I haven't tested it, but that makes me think the DIMM sizes do not need to match between processors.  Each CPU has its own memory controllers, which also points to not needing to match RAM between CPUs.
