My goal with this project is to deploy a VCF-9 Homelab environment to gain more in-depth experience deploying and using VCF.
At the end I want my VCF deployment to be hosting my home services instead of a nested environment that is often used to test / try out the features of a VCF deployment.
As I live in Europe, where energy prices reach 0,35€/kWh, power consumption is a key factor to consider when choosing hardware for my homelab.
For example: a typically specced Dell PowerEdge R740 under medium load consumes around 150W; running a 3-node setup would cost around 117,96€ in a single month!
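For transparency, this is just a back-of-the-envelope energy-cost calculation. A minimal sketch, assuming 150W per node, a 31-day month of continuous operation and my 0,35€/kWh rate (the exact figure shifts slightly depending on the hours and rate you assume):

```python
# Rough monthly energy cost for a constant load.
# Assumptions (mine): continuous operation, flat price per kWh.

def monthly_cost_eur(watts: float, price_per_kwh: float = 0.35, days: int = 31) -> float:
    """Cost in EUR of running a constant load of `watts` for `days` days."""
    kwh = watts * 24 * days / 1000
    return round(kwh * price_per_kwh, 2)

# Three R740-class nodes at ~150 W each:
three_node_cost = monthly_cost_eur(3 * 150)  # ≈ 117 € per month
```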
When looking at the [hardware requirements](https://williamlam.com/2025/06/minimal-resources-for-deploying-vcf-9-0-in-a-lab.html) for VCF-9 you will see that there are some requirements that are pretty hard to fulfill.
Especially the 12 physical cores required to run the Aria Automation appliance limit our hardware choices dramatically.
That's why people who want to use VCF in their homelab usually either commit to power-hungry servers that allow for higher specs, or deploy VCF for testing purposes in a nested environment, for example with the [holodeck toolkit](https://www.vmware.com/docs/holodeck-toolkit-overview).
In this project I will propose a different solution, using the [Minisforum BD795M Motherboard](https://minisforumpc.eu/products/minisforum-bd795m-motherboard).
## My Requirements
Below is a table of the requirements I set myself for the new environment.
| Category | Requirement | Explanation |
| ----------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ESXi hosts | 3 Physical Nodes | With VCF-9 the minimum number of ESXi hosts required for running vSAN came down from 4 to 3; this still lets us take one host down for patching without affecting services. |
| CPU | 12 Physical Cores per Node | The Aria Automation appliance requires at least 12 physical cores just to run at all. |
| Memory | 128 GB Memory per Node | The minimum memory required just to deploy VCF is 194GB in total; to leave headroom for additional services, I set my requirement at 128GB per node (384GB across the cluster). |
| Networking | 2x 10GbE per Node | Generally VCF can also run on a single 10GbE uplink, but having only one uplink on a host rubs me the wrong way. You can definitely get away with a single-port NIC. |
| Storage | 4TB usable Storage | VCF itself requires 3,2TB. Additional services I plan to deploy do not use that much storage (under 800GB). |
| Power consumption | 70W average or less per Node | I currently run a 3-node vSAN HP SFF setup on i7-8700 CPUs, which draws around 60W per node with all of my services running.<br>I do not want to increase my power consumption by much: I set myself a limit of 250W for the whole homelab, which leaves around 210W for the compute nodes after excluding switches, NAS and firewall. |
| Noise | 40dB or less | This is the noise level I have measured for my current homelab setup, which is not overly distracting when sitting right next to me. |
| Compatibility | VCF-9 | The hardware chosen needs to be compatible with VCF-9. This does not mean that it is fully validated by the hardware vendor, but it should be able to run VCF-9 without any major restrictions. |
| Dimensions | `440mm*440mm*115mm` per Node | Must fit inside a 19" Network rack with space at the back for cabling.<br>I have 8HE for the hosts; they will rest on a rack shelf each. |
### Some Words About VCF-9 Compatibility
The hardware being used must be able to run VCF-9.
The biggest compatibility challenge in a VMware homelab is usually the network card.
Realtek, for example, offers no native ESXi drivers for their common NICs, which therefore cannot be used in an ESXi installation.
Additionally with VCF-9, some devices that were compatible with ESXi 8 are now deprecated — see the [deprecated devices list](https://knowledge.broadcom.com/external/article/391170/).
Hardware that is going to be used must therefore either already be tested to work with VCF-9 or be listed in the [Broadcom Compatibility Guide](https://compatibilityguide.broadcom.com).
## Choosing the Hardware
There are 4 types of hardware typically used for building a VMware homelab:
- Enterprise Servers
- models like `Dell Poweredge R640, HPE DL380 G10`
- Workstations/business PCs
- e.g. `Dell T3610`/`HP SFF 800 G4`
- Mini-PCs/NUCs
	- e.g. `Minisforum MS-A2` / `Intel NUC 11 Pro`
- Custom builds
- custom hardware and size, either with consumer hardware or workstation/server parts.
Each of them has its own advantages and disadvantages.
**Enterprise Servers**
Three of my main requirements (power consumption, noise level and compactness) disqualify Enterprise servers for me right from the start.
**Workstations**
When looking at old workstations, there are (to my knowledge) none with at least 12 physical cores that are both reasonably cheap and power efficient.
Take the `Dell T3610` for example: it can be configured with 12 physical cores, but it still fails the power consumption criterion — its CPU's 120W TDP is nearly double that of my i7-8700 (65W).
TDP is not a definitive measure of actual power consumption, but it is still a good indicator.
The same story applies to, e.g., dual-socket 6-core workstations.
Old business PCs like the one I currently use (HP SFF 800 G4) lack the needed cores; SFF PCs with consumer CPUs above 8 physical cores are rare.
**Mini-PCs/NUCs**
This leads us to Mini-PCs/NUCs, which are generally very power efficient and, as the name suggests, very compact.
When it comes to Mini-PCs/NUCs, meeting the hardware requirements and I/O needs is the hardest part — the 12-physical-core requirement alone significantly reduces our choices.
The obvious choice here is the new [MS-A2](https://minisforumpc.eu/de/products/ms-a2-mini-pc) from Minisforum — and to be honest, if you do not share my special requirements, it is probably the best all-in-one choice, requiring almost no effort to set up.
It checks almost all the boxes:
- Fulfills the VCF-9 Requirements
- CPU: AMD Ryzen 9 7945HX (16 Cores / 32 Threads)
- RAM: up to 128GB RAM possible
- Network: 2x 10GbE SFP+ X710
- Power efficiency
- Laptop-based CPU with a configurable TDP between 45-75W
- Further optimization possible, boost deactivation etc. in BIOS
- Compact
- Very small box, can fit basically anywhere
The only concern I had with the MS-A2 was the noise level — many reviews report that it rises above 50dB under load. As my homelab runs right next to me in my living room, this is a no-go.
The cooling solution itself seems to be too compact to allow for a cooling mod.
So the only option left for my use case was the MoDT motherboards also offered by Minisforum:
| | BD795i SE | BD795M | Note |
| --------------------- | ------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Form factor | ITX | M-ATX | M-ATX is bigger than ITX but also has broader compatibility with cases. |
| Cooling | pre-installed fin stack | LGA1700 cooler mount | Unsure whether the built-in fin stack of the BD795i SE is sufficient for the required dB level. |
| CPU | Ryzen 9 7945HX<br>16 Cores / 32 Threads | Ryzen 9 7945HX<br>16 Cores / 32 Threads | |
| RAM | DDR5 5200 SODimm<br>up to 128GB (officially 64GB) | DDR5 5200 SODimm<br>up to 128GB (officially up to 96GB) | The website for BD795i SE says up to 64GB, but I do not see a reason why it wouldn't support 128GB as well.<br>For the BD795M, 128GB is confirmed to be working. |
| Network | 1x Realtek 2,5GbE | 1x Realtek 2,5GbE | Will not work with ESXi. |
| Storage expandability | 2x M2 2280 PCIe 4.0 | 2x M2 2280 PCIe 4.0<br>2x SATA 3.0 | The additional SATA ports on the BD795M free up both M.2 slots, allowing a dedicated vSAN ESA NVMe disk plus an NVMe disk for memory tiering while booting from a SATA SSD. |
| PCIe expandability | 1x PCIe 5.0 x16 | 1x PCIe 4.0 x16 | A PCIe slot is definitely needed for the 10GbE NIC. The generational difference is irrelevant for my use case; 10GbE NICs do not benefit from it. |
Two points to note: due to the lack of an integrated 10GbE port we will need to install a dual port 10GbE SFP+ PCIe card into the build — see the [[#Bill of Material|Bill of Material]] for details on what I chose.
The BD795M also supports bifurcation, so if any additional PCIe card is required in the future this could be done as well through an adapter.
I ultimately decided to go with the BD795M for the following reasons:
- The size difference is not really relevant for me; both will fit inside my network rack.
- The official Minisforum website no longer lists the BD795i SE as an available option. I do not want to risk that the part is no longer available in the future for spares etc.
- Officially the BD795i SE only supports up to 64GB RAM. My guess is that it supports 128GB like the BD795M, but I did not want to take the additional risk of having to send everything back.
- The clearance around the PCIe slot on the BD795i SE seems a bit smaller for airflow. Ideally I want to cool the card front and back with airflow from the front of the case using a smaller Noctua fan — the further the card sits from the CPU fin stack, the better its exposure to that airflow.
### Network Card
For the network card I looked for supported cards with ASPM (Active State Power Management) support, which lets the PCIe link power down at idle so the system can reach deeper C-states.
I chose the Intel X710-DA2, as it is listed in the [Broadcom Compatibility Guide](https://compatibilityguide.broadcom.com/detail?program=io&productId=37976&persona=live) and is often available for under 80 Euro on eBay.
An alternative could also be the [Mellanox ConnectX-4](https://compatibilityguide.broadcom.com/detail?program=io&productId=40443&persona=live) series.
## Bill of Material
### ESXi Nodes
| Amount | Component | Price | Note |
| ------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 3 | [Minisforum BD795M](https://www.minisforum.com/de/products/minisforum-bd795m) | 399,00€ | |
| 3 | [Intel X710-DA2](https://www.intel.com/content/www/us/en/products/sku/83964/intel-ethernet-converged-network-adapter-x710da2/specifications.html) | 65,00€ | Can be found used on e.g. eBay for around this price. |
| 3 | [Crucial DDR5 SO-Dimm 128GB Kit(2x64GB)](https://www.crucial.com/memory/ddr5/CT2K64G56C46S5)<br> | 333,68€ | |
| 3 | [BeQuiet TFX Power 3 300W (80 Plus Gold)](https://www.bequiet.com/en/powersupply/2314) | 71,89€ | Lowest wattage TFX power supply from a reputable company that I could find; 250W would probably be fine as well. |
| 3 | [Thermalright AXP90-X47 Cooler](https://www.thermalright.com/product/axp90-x47/) | 21,69€ | https://www.caselabs.org/coolers/axp90-x47<br>Pretty cheap low-profile cooler with decent cooling performance. |
| 3 | [Samsung 990 Pro 2TB (vSAN)](https://www.samsung.com/de/memory-storage/nvme-ssd/990-pro-2tb-nvme-pcie-gen-4-mz-v9p2t0bw/?cid=de_pd_ppc_google_storage_sustain_ms-storage-ssd-ao23_text_samsung990pro2tb-20230510_none)<br> | 159,82€<br> | I have had good experiences with Samsung in the past and got the SSDs relatively cheap.<br>Any other PCIe 4.0 or even 3.0 NVMe SSD would have probably been fine for my storage I/O needs. |
| 3 | Lexar 256GB NVMe SSD (Memory Tiering) | 24,99€ | Reused these NVMe drives from an older setup as well; very useful as a memory tiering device. |
| 3 | Generic 128GB SATA SSD (Boot) | 15,99€ | Just as an example — I had some old drives lying around anyway. Any low-capacity SATA or NVMe drive works as an ESXi boot device. |
Cost per host: 1.092,06€
Sub-total: 3.276,18€
### Networking
| Amount | Component | Price | Note |
| ------ | ------------------------------------------------------------------------------------------- | ------- | ---------------------------------------------------------------------------------------------------------- |
| 1 | [CRS326-4C+20G+2Q+RM](https://mikrotik.com/product/crs326_4c_20g_2q_rm) | 789,73€ | Offers 4x SFP+ ports and 2x QSFP ports, which can be converted to 4x SFP+ with a breakout cable (see below). |
| 1 | [2M QSFP+ to 4x 10G SFP+ Copper Breakout Cable](https://www.fs.com/de/products/303323.html) | 57€ | Breakout cable for switch. |
| 3 | 1M SFP+ DAC cable | 12,99€ | |
Sub-total: 885,70€
### Total
Excluding the already existing network rack, UPS and other components of my homelab, the 3x ESXi hosts plus the needed new networking equipment leads to a total of 4.161,88€.
This is obviously not cheap for a homelab, but considering my special requirements regarding power consumption, it is still within the realm of reason (I tell myself).
I already had the NVMe drives and the SFP+ cables lying around and was therefore able to complete this project for around 3.500,00€.
For me personally it is worth it, as I use my homelab not only for tinkering with things for fun but also as a learning platform to further my skills for my career path.
## Optimizing ESXi Hosts for Power Efficiency
### BIOS
Before you start changing any BIOS settings, make sure to update your BIOS to the latest version — after the upgrade all BIOS settings will be reset.
I found the [instructions](https://www.virtualizationhowto.com/community/home-lab-forum/steps-to-upgrade-minisforum-motherboard-bios-bd795m-bd795i-se/) that Brandon from [Virtualizationhowto.com](https://www.virtualizationhowto.com) released helpful for the installation.
Inside the BD795M BIOS there are 2 settings you want to disable to lower your power draw:
| Configuration | Value | Description |
| ------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------- |
| Advanced -> AMD CBS -> CPU Common Options -> Core Performance Boost | Disabled | Automatically increases individual core frequencies above base clock, draws more power |
| Advanced -> AMD Overclocking -> Precision Boost Overdrive | Disabled | Automatic overclocking of the CPU, increases power draw drastically |
This alone made a difference of around 30W per host for me under light load (10-20% CPU usage).
### ESXi
During VCF deployment the VCF Installer sets all ESXi hosts to the `Performance` power policy.
Make sure to set the Active Policy to `Low Power` and confirm that the ACPI P-states and C-states are recognized:
![[_media/Pasted image 20250829111249.png]]
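If you prefer the ESXi shell over the UI, the power policy can also be changed with `esxcli`. This is a sketch under my assumption that the host exposes the standard `Power.CpuPolicy` advanced option — verify the option and its accepted value strings on your build before relying on it:

```shell
# Assumption: /Power/CpuPolicy is available and accepts "Low Power"
# (other typical values: "High Performance", "Balanced", "Custom";
# check the output of the list command below if the set call is rejected).
esxcli system settings advanced set --option /Power/CpuPolicy --string-value "Low Power"

# Confirm the active policy afterwards.
esxcli system settings advanced list --option /Power/CpuPolicy
```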
### Measuring the Impact of the Optimization
To measure the impact on real-life workloads, I tested the power draw after VCF-9 with vSAN was installed, with the following VMs running:
- 1x VCF-Operations
- 1x VCF-Automation
- 1x VCF-SDDC
- 1x VCF-Fleet management
- 1x VCF-Collector
- 1x vCenter
- 1x NSX-Manager
- 2x NSX-Edge
Also running inside the cluster at this point were a couple of low-load Linux VMs (5 total).
For reference, this is the usage overview of my VCF-9 environment:
![[_media/Pasted image 20250830190704.png]]
As I was only able to measure power draw on my UPS, I was not able to measure each ESXi host individually.
Therefore I implemented the changes on a single ESXi host and then measured the difference.
| Optimized BIOS Settings | Low-power ESXi Settings | Power draw savings in W |
| ----------------------- | ----------------------- | ----------------------- |
| no | no | 0W |
| yes | no | 50W |
| yes | yes | 55W |
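To put the measured 55W into perspective, here is a quick projection of the yearly savings at my electricity price. This assumes the saving holds steady year-round, applies equally to all three hosts, and the rate stays at 0,35€/kWh:

```python
# Project yearly cost savings from a measured power reduction.
# Assumptions (mine): constant saving year-round, flat price per kWh.

def yearly_savings_eur(watts_saved: float, price_per_kwh: float = 0.35) -> float:
    """EUR saved per year by reducing continuous draw by `watts_saved`."""
    kwh_per_year = watts_saved * 24 * 365 / 1000
    return round(kwh_per_year * price_per_kwh, 2)

per_host = yearly_savings_eur(55)       # ≈ 168.63 € per optimized host
all_hosts = yearly_savings_eur(3 * 55)  # ≈ 505.89 € if all three hosts save 55 W
```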
## BD795M Case
As 3D printing is another hobby of mine, I am designing and printing a custom case for my nodes out of PETG. Another option would be something like the [Silverstone ML03](https://www.silverstonetek.com/de/product/info/computer-chassis/ML03/), which also supports full-size ATX power supplies.
A beta version of the case I designed for the BD795 Motherboard can be found here:
- https://makerworld.com/en/models/1745788-minisforum-bd795m-case-beta-v0-1#profileId-1855627
This version currently has pretty tight tolerances and is therefore a bit tricky to assemble; I will fix this in a later version.
## 🔗Resources
### VCF-9 Homelab
- https://williamlam.com/2025/06/ultimate-lab-resource-for-vcf-9-0.html
### VMware Holodeck
- https://www.vmware.com/docs/holodeck-toolkit-overview
### MS-A2 Reviews
- https://www.servethehome.com/minisforum-ms-a2-review-an-almost-perfect-amd-ryzen-intel-10GbE-homelab-system/4/
- https://williamlam.com/2025/06/vmware-cloud-foundation-vcf-on-minisforum-ms-a2.html
### Minisforum Resources
- [BD795iSE/BD795M differences](https://minisforumpc.eu/blogs/blogartikels/unterschieden-zwischen-bd790i-bd795i-se-und-bd790m?srsltid=AfmBOoo9CSn57NA1DSU8Q2YvrF59qz2XSKi1FHtOEakLuOvQIqGt-n1e)
### Hardware
- [Silverstone case](https://www.silverstonetek.com/de/product/info/computer-chassis/ML03/)
- [CPU Cooler](https://www.caselabs.org/coolers/axp90-x47)
### BD795M Resources
- [BIOS Upgrade instructions](https://www.virtualizationhowto.com/community/home-lab-forum/steps-to-upgrade-minisforum-motherboard-bios-bd795m-bd795i-se/)