Sometimes, you just get to take apart cool servers. A few months ago, They Let Me Bring a Camera Into a Top Classified US Supercomputer, El Capitan. That top-end supercomputer uses AMD Instinct MI300A APUs, which combine CPU cores from AMD’s EPYC line, GPU cores from its Instinct line, and HBM3 memory to make perhaps the fastest APU around. The GIGABYTE G383-R80-AAP1 takes four of those MI300A APUs and puts them into a system that you can buy and run without having to buy a cluster with tens of thousands of nodes. The result is a very neat server that we wanted to review.
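Since each MI300A packs EPYC CPU cores, Instinct GPU cores, and HBM3 into one package, each of the four APUs generally shows up to Linux as its own NUMA node plus its own GPU device. Here is a minimal Python sketch of what that looks like using standard sysfs paths; the one-node-per-APU layout depends on BIOS NUMA settings, so treat it as an assumption rather than a confirmed mapping for this exact server.

```python
# Minimal sketch: see how the four MI300A packages present themselves to Linux.
# Standard sysfs paths only; the one-NUMA-node-per-APU split depends on BIOS
# settings, so this is an assumption, not a confirmed layout for this server.
from pathlib import Path

# CPU side: each APU should appear as a NUMA node with its HBM3 as local memory.
for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    memtotal = (node / "meminfo").read_text().splitlines()[0].split(":")[1].strip()
    print(f"{node.name}: CPUs {cpulist}, MemTotal {memtotal}")

# GPU side: each APU's GPU shows up as an AMD DRM card device.
for card in sorted(Path("/sys/class/drm").glob("card[0-9]")):
    vendor_file = card / "device" / "vendor"
    if vendor_file.exists() and vendor_file.read_text().strip() == "0x1002":  # AMD
        print(f"{card.name}: AMD GPU at PCI address {(card / 'device').resolve().name}")
```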
For this one, we have a short video that you can find here:
As a quick note, we looked at this system earlier this year in Taiwan, so we have to say this is sponsored. Let us get into the system.
GIGABYTE G383-R80-AAP1 External Hardware Overview
The system comes in a really neat form factor at 3U and 950mm (around 37.4in) deep. If you were wondering, those handles come in handy when moving the server.

The front of the server has cooling on the bottom and then storage and I/O in the top 1U.

Taking a look at the cooling, we can see that the fans occupy the bottom 2U portion of the chassis and are designed to draw cool air into the APU heatsinks.

Storage is provisioned through eight 2.5″ NVMe drive bays.

Here is a quick look at the drive backplane, which is cabled to provide an x4 link to each drive.
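If you want to sanity-check that cabling from the OS, the negotiated link width of each drive is visible in sysfs. Here is a minimal Python sketch using standard Linux NVMe/PCIe attributes, nothing specific to this backplane:

```python
# Sketch: confirm each front NVMe drive negotiated the expected x4 link.
# Standard sysfs attributes; controller names depend on enumeration order.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
    pci_dev = ctrl / "device"  # symlink to the drive's PCIe device
    width = (pci_dev / "current_link_width").read_text().strip()
    speed = (pci_dev / "current_link_speed").read_text().strip()
    print(f"{ctrl.name}: x{width} @ {speed}")
```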

The other main feature is the front I/O. Here we have two 10GbE NIC ports via a Broadcom BCM57416.

The front I/O board is really neat, with M.2 storage and more built in. This area also has the USB ports, VGA, and the management LAN port.

The rear is quite far away and has something a bit unexpected.

There are four 3kW PSUs. Compared to the big 8x GPU servers, this is a relatively lower-power machine, which is fun to think about.
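To put rough numbers behind that, here is a quick back-of-the-envelope sketch. The 550W default MI300A TDP and the N+1 redundancy arrangement are our assumptions, not GIGABYTE figures, so read the headroom estimate accordingly.

```python
# Rough power budget. Assumptions (not from GIGABYTE): 550 W default TDP per
# MI300A and the four 3 kW PSUs running N+1 redundant.
PSU_W, PSU_COUNT = 3000, 4
APU_TDP_W, APU_COUNT = 550, 4

n_plus_1_capacity_w = PSU_W * (PSU_COUNT - 1)  # keep running with one PSU failed
apu_budget_w = APU_TDP_W * APU_COUNT

print(f"N+1 capacity: {n_plus_1_capacity_w} W")                        # 9000 W
print(f"Four APUs at default TDP: {apu_budget_w} W")                   # 2200 W
print(f"Headroom for fans, drives, NICs, and add-in cards: "
      f"{n_plus_1_capacity_w - apu_budget_w} W")                       # 6800 W
```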

The rest of the rear is a series of PCIe Gen5 x16 slots. These are arranged as four dual-width and four single-width slots.

These are cabled into the machine on small PCIe boards.

This is probably crazy, but something I thought would be really interesting would be adding four 400Gbps NICs, one per APU, and then four double-width accelerators or CXL PCIe cards. Originally, the MI300 series was supposed to support CXL, and we saw pre-release samples where it did, but that seems to have fallen off the feature list between the prototypes we saw and the shipping products. Still, it would be a neat application if AMD ever enabled it.
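If you did go the four-NIC route, you would want each NIC hanging off the PCIe lanes of its local APU. Here is a small Python sketch of how one might check that mapping under Linux; the interface names are hypothetical and the one-NIC-per-APU layout is our idea, not a tested configuration.

```python
# Sketch: check which NUMA node (i.e., which APU, assuming one node per APU)
# each NIC's PCIe slot hangs off. The interface names below are hypothetical.
from pathlib import Path

def nic_numa_node(ifname: str) -> int:
    """NUMA node of the NIC's PCIe device, or -1 if the platform does not report one."""
    return int((Path("/sys/class/net") / ifname / "device" / "numa_node").read_text())

for ifname in ("ens1np0", "ens2np0", "ens3np0", "ens4np0"):  # hypothetical names
    node_file = Path("/sys/class/net") / ifname / "device" / "numa_node"
    if node_file.exists():
        print(f"{ifname}: local to NUMA node {nic_numa_node(ifname)}")
    else:
        print(f"{ifname}: not present (adjust to your actual interface names)")
```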
Next, let us get inside the server to see how it works.