For any home lab enthusiast, the quest for the perfect balance between enterprise reliability, noise levels, and storage capacity is never-ending. Recently, I decided to overhaul my media setup to host Emby, a download client, and the full “Arr” stack (Sonarr, Radarr, etc.). I settled on a Dell PowerEdge T330 as the chassis of choice.
It’s a robust tower server that hits the sweet spot for a home environment: it’s quiet enough to not sound like a jet engine, but spacious enough to hold serious storage. Here is how I configured it to maximize every inch of space using Proxmox, ZFS, and Community Helper Scripts.
Storage Strategy
My goal was simple: I wanted the 8 main drive bays dedicated entirely to mass storage. I didn’t want to waste a hot-swap bay on a boot drive, nor did I want my OS competing for IOPS with my media downloads.
The Hardware:
- Chassis: Dell PowerEdge T330 (8x 3.5″ configuration) with PERC H730 in passthrough mode
- Boot/VM Storage: 2x SSDs
- Mass Storage: 8x 4TB HDDs
The T330 usually comes with a DVD-ROM drive. In 2024, that’s mostly dead weight. I realized the onboard SATA ports used for the optical drive were perfect for my boot media.
I disconnected the DVD-ROM and utilized the onboard SATA ports to connect two SSDs. I tucked these inside the chassis (there is usually enough room to velcro or mount them in the optical bay area). This was a crucial step because it freed up the PERC HBA/Backplane entirely for the data drives.
I switched the PERC H730 storage controller into passthrough mode by booting into System Setup (F2 > Device Settings > Dell PERC H730 controller > Controller Management > Advanced Controller Management > Switch to HBA Mode). This is an important step: ZFS wants direct access to the raw disks, and putting it on top of hardware RAID will cause it to perform very poorly.
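If you want a quick sanity check later (once Proxmox is installed and you have a shell), each of the eight drives should show up as its own block device rather than as one big virtual disk. Something like this will list them:
lsblk -o NAME,SIZE,MODEL,SERIAL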
Proxmox VE & ZFS Root
With the hardware hacked together, I moved to the software. I am a huge proponent of Proxmox Virtual Environment (PVE) for its flexibility with containers (LXC) and VMs.
During the Proxmox installation, I selected the two internal SSDs as the target and created a ZFS Mirror (RAID 1). Remember to do this during the install of Proxmox, or you’ll be kicking yourself later. This ensures that if one SSD dies, the server keeps humming along without a hiccup. It also provides excellent read speeds for the OS and the LXC containers I plan to run.
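A quick check after the first boot (not strictly necessary, but reassuring) is to confirm the root pool really is a mirror:
zpool status rpool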
After the install completed, I ran the post-install Community Script, https://community-scripts.github.io/ProxmoxVE/scripts?id=post-pve-install. This script disables the Enterprise Repo, corrects PVE sources, disables the subscription nag, updates Proxmox, and reboots the system.
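The site gives you a one-liner to paste into the PVE Shell. At the time of writing it looks roughly like the command below, but copy the exact command from the script’s page, since the repository path can change:
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/misc/post-pve-install.sh)"  # verify the current path on the script's page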
Media Storage
Once PVE was up and running on the SSDs, I turned my attention to the main event: the eight 4TB drives sitting in the front bays.
In the Proxmox GUI, I created a second ZFS pool called ‘hddpool’ using these drives in RAID10 (striped mirrors). Yes, there are plenty of other layouts I could have chosen, but I opted for what I thought was the best overall balance between performance and redundancy.
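I did this through the GUI, but for reference, the command-line equivalent of a four-mirror ‘RAID10’ pool looks something like this (the disk IDs below are placeholders; use your own from /dev/disk/by-id/):
zpool create hddpool \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4 \
  mirror /dev/disk/by-id/DISK5 /dev/disk/by-id/DISK6 \
  mirror /dev/disk/by-id/DISK7 /dev/disk/by-id/DISK8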
Now, this is where things got a little tricky. I wanted to have a dataset within ‘hddpool’ called ‘media’ and then bind mount it into the containers for shared access to my media library. But I quickly realized that working with unprivileged containers vs. privileged containers is not all that straightforward because of UID/GID mapping (User/Group ID mapping).
Running ‘unprivileged’ containers is a security best practice in Proxmox because it isolates the container from the host system. It does this by taking the container’s users (like root or media) and shifting their IDs by 100,000, so they have no power if they ever manage to break out to the host. However, this creates a permissions nightmare: the container tries to read and write files as UID 100000, while the files on the host are owned by an entirely different user. The fix is to create a group on the PVE host and give it the necessary permissions on the dataset, then add UID/GID mappings to the container’s configuration file that map the container’s user back to that host group. The mapping acts as a bridge, telling Proxmox that the user inside the container corresponds to a host user and group with read and write access to the dataset. Confused? It’s ok, I was confused too until I finally got it up and running.
I used the following commands from the PVE Shell to create my dataset called ‘media’ on ‘hddpool’, create a local group called ‘mediausers’, and give that group permissions to the dataset.
zfs create hddpool/media
groupadd -g 1000 mediausers
chown -R 1000:1000 /hddpool/media
chmod -R 2775 /hddpool/media
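If you want to double-check that the dataset exists and the ownership and permissions stuck, something like this does the trick:
zfs list hddpool/media
ls -ld /hddpool/media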
The rest of the magic happened when I deployed the containers.
Deploying the “Arr” Stack
With the infrastructure set, deploying the “Arr” stack applications was straightforward. I again used the Community Helper Scripts over at https://community-scripts.github.io/ProxmoxVE. All of these scripts are run from the PVE Shell, which is easiest to open from the PVE WebUI. During the script execution, I made sure to place these containers on the SSDs for the best performance.
I grabbed Radarr, Sonarr, Prowlarr, and Emby to start.
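Each app has its own one-liner on the site. The Radarr one, for example, looks roughly like this at the time of writing (again, copy the exact command from the script’s own page):
bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/radarr.sh)"  # verify the current path on the script's page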
Once the containers were up and running, I loaded up each one’s shell so I could pre-configure the necessary groups for accessing the shared media library.
First, I had to figure out what user each container’s services were running as. It’s usually root, but some containers, like Emby’s, might be running as a different user.
ps aux | grep -i emby
Once I knew the user, I created the ‘mediausers’ group and added the container user to it.
groupadd -g 1000 mediausers
usermod -aG mediausers root
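If the ps output shows the service running as a dedicated user rather than root (Emby’s container, for instance, may use an ‘emby’ user; check your own output), the usermod target changes accordingly:
usermod -aG mediausers emby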
Now it was time to shut down the containers, present the media dataset via bind mounts, and do the UID mapping so the containers have access to the shared media library.
From the PVE Shell I added the bind mount to each container (replacing ‘<CTID>’ with the container’s ID).
pct set <CTID> -mp0 /hddpool/media,mp=/mnt/media
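A quick way to confirm the mount point was recorded in the container’s config:
pct config <CTID> | grep mp0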
Then I updated each container’s configuration file with the necessary UID mappings.
nano /etc/pve/lxc/<CTID>.conf
I added this to the end of the configuration file of each container.
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
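One gotcha worth mentioning: on a default PVE install, root is only allowed to map the 100000+ ID range into containers, so the host also has to be told it may map UID/GID 1000. If a container refuses to start after adding the idmap lines above, this is the likely culprit. Adding these entries on the PVE host fixes it:
echo "root:1000:1" >> /etc/subuid
echo "root:1000:1" >> /etc/subgid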
Then I started all the containers and tested access to /mnt/media – it worked!
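The test doesn’t have to be fancy. Something like this (the filename is just an example) confirms that a container can actually write to the shared dataset and lets you inspect the resulting ownership on the host:
pct exec <CTID> -- touch /mnt/media/write-test
ls -ln /hddpool/media/write-test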
Final Thoughts
The Dell T330 turned out to be a beast of a machine for this use case. By creatively using the optical SATA ports for a dedicated SSD boot mirror, I maximized the chassis’s potential, leaving all front bays open for what matters most: storage. The UID mapping took me a few hours to troubleshoot, but eventually I was able to figure it out (with no help from Gemini or Copilot, which repeatedly provided inaccurate results).
Once I had everything up and running, I swapped out the case fan for something a little quieter. The power supply fan is still loud during boot but other than that the server is nearly silent.
It’s quiet, redundant, and fast—everything you want in a home lab server.