There’s a bunch of decisions to make and settings to get right before you install Docker or pull your first container. I skipped most of this when I started, and ended up fixing things while my family was already using the server. Learn from my mistakes.
Which Mac?
Any Apple Silicon Mac with enough RAM works. You want 64GB of unified memory. That’s enough to run 8+ Docker services and local LLMs at the same time without swapping.
| Machine | Unified Memory | Notes |
|---|---|---|
| Mac Studio M1 Max (2022) | 64GB | Best value used. Well under €1,500 on eBay. |
| Mac Studio M2 Max (2023) | 64GB | Slightly faster. Higher used prices. |
| Mac Mini M4 Pro (2024) | 64GB | Current generation. ~€2,400 new. |
32GB can work if you skip the larger LLMs, but you’ll feel it once you run a few things at the same time.
Tip: Check your local marketplace (eBay Kleinanzeigen, Facebook Marketplace, or equivalent) for used Mac Studios. The M1 Max generation is three years old now and many creative professionals are upgrading. A 64GB M1 Max that cost €3,500+ new regularly shows up for under €1,200.
It can be both: server and workstation
A Mac Studio draws about 5-7W at idle. It’s silent. It does not need to be locked away in a basement or a closet. Leave it on your desk, plug in a monitor, and keep using it for browsing, email, office work, coding. The server runs in the background and you won’t notice it. Heavy creative work like video editing or music production might compete with server workloads, but for normal daily use? No problem.
Where it gets tricky is resource-heavy background jobs. A photo management tool indexing a large library will pin all CPU cores for hours. So will ML pipelines for face detection or search indexing. If you’re trying to work on the same machine, that’s not great. Two ways around it:
Limit job concurrency. Most services let you configure how many background workers run in parallel. Turn them down to 1 or 2 during the day. Slower, but your machine stays responsive.
Schedule heavy work at night. Use a cron job or launchd plist to pause and resume resource-hungry workers outside work hours. I do this for ML indexing jobs. They run from midnight to 7 AM and pause during the day.
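The cron variant can be as small as two crontab lines. A minimal sketch, assuming a heavy worker container named ml-worker (a placeholder; find your real container name with docker ps):

```shell
# Hypothetical day/night schedule for a heavy worker container.
# "ml-worker" is an assumed name -- substitute your own.
pause_heavy_jobs()  { docker pause ml-worker; }
resume_heavy_jobs() { docker unpause ml-worker; }

# crontab -e entries to automate it (minute hour * * * command):
#   0 7 * * *   docker pause ml-worker     # 7 AM: pause for the day
#   0 0 * * *   docker unpause ml-worker   # midnight: let it run
```

docker pause freezes the container's processes without stopping it, so the job resumes exactly where it left off.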
If the Mac is a dedicated headless server, ignore all of this. Let it rip.
User account
Do you create a separate macOS user for server stuff, or just use your own account?
Dedicated user (e.g. server). All Docker data, compose files, and scripts live under /Users/server/. Automatic login for that user. Clean separation, but you have to switch users or SSH in every time you want to touch something.
Your own account. Everything lives in ~/server/ or similar. Docker containers run in their own namespaces anyway, they don’t care what macOS user started them. Less ceremony, but your home directory does double duty.
I use my own account. It’s simpler. The containers don’t know the difference. Just pick one and stick with it.
Backups
Your server will hold stuff you can’t recreate. Photos, documents, messages, databases. Think about this before you put anything on the machine, not after.
The 3-2-1 rule
Classic principle: 3 copies of your data, on 2 different types of media, 1 copy offsite.
- Live data on the internal SSD. That’s your working copy.
- Local backup on an external drive. Covers accidental deletion, corruption, bad updates.
- Offsite or isolated backup. Covers theft, fire, ransomware, or a hardware failure that takes out both the Mac and the drive sitting next to it.
Most people skip the third one. Don’t.
Strategy 1: Time Machine + ransomware-protected vault
Two external drives, both local.
Drive 1: Time Machine. 6TB or 8TB external HDD, permanently connected via USB. Configure it to run daily. Something breaks? Roll back to yesterday. A 6TB Seagate or Toshiba costs about €100-130. Spinning disk is fine, speed doesn’t matter for backups.
Drive 2: Ransomware protection vault. Same size, same type. Here’s the idea: a script mounts the drive at 3 AM, writes an encrypted backup, verifies it, and ejects the drive again. For the remaining 23 hours and 55 minutes, the drive is unmounted. Physically connected, but the operating system can’t see it.
Ransomware can’t encrypt a volume that isn’t mounted. A rogue script can’t delete what it can’t reach. The backups themselves are append-only, so old versions can’t be overwritten either. In enterprise storage they call this WORM (write once, read many). Same principle, just at home with a USB drive and a cron job.
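A minimal sketch of such a script, assuming restic and the volume label Vault (both are choices here, not requirements):

```shell
#!/bin/bash
# vault-backup.sh -- hypothetical nightly vault job; schedule it for 3 AM
# via cron or a launchd plist. Volume label and paths are assumptions.
vault_backup() {
  local volume="Vault"
  local repo="/Volumes/$volume/restic-repo"

  diskutil mount "$volume" || return 1        # visible only for this job

  export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"
  restic -r "$repo" backup "$HOME/server"     # encrypted, deduplicated
  restic -r "$repo" check                     # verify before ejecting

  diskutil unmount "$volume"                  # invisible to the OS again
}
```

restic never rewrites existing snapshots during a backup; data only disappears through an explicit forget/prune, which is what gives you the append-only behavior.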
Simple, cheap (~€200-260 for both drives), fast restores. The downside: both drives sit in the same room. A fire or a burglary takes everything.
Strategy 2: Time Machine + cloud backup
Replace the second drive with encrypted cloud storage. Use something like restic or duplicati to push encrypted, deduplicated backups to an S3-compatible provider.
| Provider | Pricing | Notes |
|---|---|---|
| Backblaze B2 | $6/TB/month | S3-compatible, popular for backups |
| Hetzner Storage Box | ~€3.50/TB/month | EU-based, GDPR friendly |
| Wasabi | $7/TB/month | No egress fees |
| Cloudflare R2 | Varies | Free egress, S3-compatible |
A typical home server might use 1-2TB. That’s €4-12/month. Everything gets encrypted on your machine before it leaves. The cloud provider never sees your data.
Real offsite protection. Survives fire, theft, flooding. The catch: restoring from the cloud is slow (you’re limited by download speed), and the initial upload of a big photo library takes days.
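With restic, the offsite push looks roughly like this (endpoint, bucket, and key names are placeholders, not real credentials):

```shell
# Hypothetical restic-to-S3 offsite backup; every name below is a placeholder.
offsite_backup() {
  export AWS_ACCESS_KEY_ID="your-key-id"
  export AWS_SECRET_ACCESS_KEY="your-application-key"
  export RESTIC_PASSWORD_FILE="$HOME/.config/restic/password"
  local repo="s3:s3.us-west-004.backblazeb2.com/my-server-backup"

  restic -r "$repo" backup "$HOME/server"   # encrypted locally before upload
  restic -r "$repo" forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
}
```

The forget/prune line caps how much history you pay for while keeping daily, weekly, and monthly restore points.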
Strategy 3: All three
Time Machine for quick local recovery. The vault for ransomware protection. Cloud for true offsite. Two external HDDs (~€200-260) plus a few euros per month. That’s the full 3-2-1.
Label your drives
Format both as APFS (Time Machine on modern macOS requires it and will prompt you to reformat if needed). Label them something obvious: TimeMachine and Vault. You’ll be writing scripts that reference these names, so pick them now.
Networking
Ethernet, not Wi-Fi
Plug the Mac into your router with a cable. Wi-Fi is fine for a laptop, but a server that multiple devices talk to simultaneously wants a wired connection. Three phones uploading photos while someone chats and a document gets processed? That’s when Wi-Fi starts dropping packets.
The Mac Studio has 10 Gigabit Ethernet built in; on the Mac Mini M4 Pro it’s an optional upgrade. Your router probably only does Gigabit anyway. Still plenty.
Static IP
Your server needs a stable address. Two approaches:
DHCP reservation (what I’d recommend). Log into your router, find the Mac’s MAC address, assign it a fixed IP. The Mac still uses DHCP, just always gets the same address. The router handles everything.
Static IP in macOS. System Settings > Network > Ethernet > Details > TCP/IP > Configure IPv4: Manually. Pick an IP outside your router’s DHCP range so nothing conflicts.
Write the IP down somewhere. You’ll type it a lot: DNS config, reverse proxy, SSH config on your other machines.
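On your other machines, an ~/.ssh/config entry saves you from typing the IP every time (address and names here are examples):

```
# ~/.ssh/config on a client machine -- IP and names are examples
Host myserver
    HostName 192.168.1.50
    User yourname
```

After that, ssh myserver is all you need.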
Hostname
Your Mac has a .local hostname via mDNS. You can set it to something memorable:
sudo scutil --set HostName myserver
sudo scutil --set LocalHostName myserver
sudo scutil --set ComputerName myserver
Now myserver.local works from any device on the network. Handy for SSH before you set up proper DNS.
Firewall
macOS has a built-in firewall (System Settings > Network > Firewall). Off by default.
On a home network behind a router, you probably don’t need it. The router’s NAT already blocks everything from the outside. Your Mac just needs to accept connections from your own devices.
If you do turn it on, make sure OrbStack, Ollama, and Screen Sharing are allowed through. I lost an hour debugging connectivity issues before I remembered I’d enabled the firewall the day before. Classic.
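The allowances can also be scripted with socketfilterfw, the CLI behind that settings pane (the app path is the default install location and may differ on your machine):

```shell
# Hypothetical: allow an app through the macOS application firewall.
# socketfilterfw is the CLI behind the Firewall settings pane.
FW=/usr/libexec/ApplicationLayerFirewall/socketfilterfw

allow_server_apps() {
  sudo "$FW" --add /Applications/OrbStack.app
  sudo "$FW" --unblockapp /Applications/OrbStack.app
  sudo "$FW" --getglobalstate    # confirm the firewall is on/off
}
```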
HDMI dummy plug
If no monitor is connected, macOS can default to a low resolution for Screen Sharing. Sometimes it refuses VNC connections entirely after a reboot. Annoying to debug when your server is in another room and you can’t see what’s happening.
A 4K HDMI dummy plug fixes this. €5-8 for a two-pack on Amazon. Plug it in, macOS thinks a display is connected, Screen Sharing works properly. Low-tech solution, but it works.
BetterDisplay (free, open source) does the same in software. It creates virtual displays. Works well, but depends on a login item loading in time after reboot. The physical plug is more reliable.
macOS settings
Energy and sleep
A server that sleeps is not a server. macOS defaults are designed for laptops, and they will put your machine to sleep after a while. Your services go offline, your family complains that photos aren’t syncing.
System Settings > Energy:
- Prevent automatic sleeping when the display is off: ON. The important one.
- Put hard disks to sleep when possible: OFF. Your backup drives need to stay awake.
- Wake for network access: ON. Safety net.
- Start up automatically after a power failure: ON. The Mac needs to come back on its own after a blackout.
Or just run these:
sudo pmset -a sleep 0 # disable system sleep
sudo pmset -a disksleep 0 # disable disk sleep
sudo pmset -a womp 1 # wake on network access
sudo pmset -a autorestart 1 # auto-restart after power failure
pmset -g # verify
Turning off the display is fine. 5 or 10 minutes, whatever you like. Display sleep and system sleep are different things. System sleep is what kills you.
FileVault
FileVault encrypts your disk. Great on a laptop. On a server, it means that after a power failure, macOS sits at a login screen waiting for a password before it boots. No services start. Nobody can connect. You have to physically walk up to the machine (or try Screen Sharing, which may or may not work pre-login depending on your macOS version).
I disabled FileVault. My server sits in my house behind a locked front door. The chance of someone stealing it is a lot lower than the chance of a power outage at 2 AM while I’m on vacation.
If you keep FileVault enabled, just know that every power outage means manual intervention.
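Either way, fdesetup tells you where you stand:

```shell
# Check (and optionally change) FileVault state from the command line.
filevault_status() {
  fdesetup status              # prints "FileVault is On." or "... Off."
  # sudo fdesetup disable      # turns it off; prompts for credentials
}
```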
Automatic login
System Settings > Users & Groups > Automatic login. Set it to your server user. After a reboot, macOS logs in on its own, OrbStack starts, containers come back up. No password prompt, no waiting.
This only works with FileVault disabled. With FileVault on, there’s a pre-boot login screen that automatic login can’t bypass.
Screen Sharing and SSH
System Settings > General > Sharing. Turn on Screen Sharing and Remote Login (SSH). That’s how you’ll manage the server when you’re not sitting in front of it.
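Remote Login can also be toggled from a terminal, which is handy in setup scripts (Screen Sharing has no equally simple switch, so use the settings pane for that):

```shell
# Enable and verify SSH (Remote Login) from the command line.
# Recent macOS versions may require Full Disk Access for systemsetup.
enable_ssh() {
  sudo systemsetup -setremotelogin on
  sudo systemsetup -getremotelogin   # should print "Remote Login: On"
}
```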
Restarts and service recovery
Power outages happen. macOS updates want to reboot. Kernel panics are rare but they happen. Everything needs to come back on its own.
Docker restart policies
Every compose service should have this:
services:
  myservice:
    restart: unless-stopped
Restarts after a crash or reboot, but stays down if you stopped it on purpose. Put this on everything.
OrbStack auto-start
OrbStack needs to be running before any container can start. It should add itself to Login Items automatically. Check under System Settings > General > Login Items. If it’s not there, add it. I’ve had one instance where it disappeared after an OrbStack update. Easy to miss, confusing to debug when all your containers are just gone after a reboot.
Ollama auto-start
Homebrew installs a LaunchAgent for Ollama. Check that it exists:
ls ~/Library/LaunchAgents/ | grep ollama
If you installed from the .dmg instead, it handles this differently but should still auto-start. Verify after a reboot:
curl http://localhost:11434/api/version
Test a full reboot
This is the step that gives you confidence. After everything is configured, reboot the Mac and walk away. Wait two minutes. Then SSH in and check:
- You’re logged in (`whoami`)
- OrbStack is running (`docker ps` works)
- Containers are up (`docker ps` should show your services)
- Ollama responds (`curl localhost:11434/api/version`)
- External drives are mounted (`ls /Volumes/`)
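The whole checklist fits in one function you can paste into your SSH session (the port matches Ollama's default used earlier):

```shell
# Hypothetical post-reboot sanity check; run it over SSH after rebooting.
post_reboot_check() {
  whoami                                         # right user logged in?
  docker ps                                      # OrbStack up, services listed
  curl -fsS http://localhost:11434/api/version   # Ollama responding
  ls /Volumes/                                   # external drives mounted
}
```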
If anything didn’t come back, fix it now. Not in three months when you have 50,000 photos and a year of documents on the machine.
Software to install
In this order:
1. Xcode Command Line Tools
xcode-select --install
Gets you git and basic build tools.
2. Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
3. OrbStack
brew install orbstack
Not Docker Desktop. OrbStack is faster, lighter, and the Linux VM underneath uses less memory. Free tier is fine for personal use.
4. Ollama (native, not Docker)
brew install ollama
Install this directly on macOS, not inside a container. The native version gets Metal GPU acceleration through Apple’s unified memory. The Docker version runs inside OrbStack’s Linux VM and has no GPU access at all. On a 64GB M1 Max, that’s the difference between running a 30B parameter model comfortably and watching it crawl.
Checklist
- Mac with 64GB unified memory, Ethernet connected
- User account decision made (dedicated or personal)
- Backup strategy chosen and drives connected/formatted
- Static IP configured (router DHCP reservation or manual)
- Hostname set
- HDMI dummy plug installed (if headless)
- System sleep disabled (`sudo pmset -a sleep 0`)
- Disk sleep disabled, auto-restart after power failure enabled
- FileVault decision made
- Automatic login enabled
- Screen Sharing and SSH enabled
- Xcode CLI tools, Homebrew, OrbStack, and Ollama installed
- Full reboot test passed (all services come back unattended)