Many developers deploy to managed cloud services and never have full control over their infrastructure. I run my production server myself — a Hetzner dedicated server with openSUSE Leap. Here I share the complete setup.
Hardware and Operating System
The server is located in the Hetzner data center in Falkenstein:
- CPU: Intel Core i7-6700 (4 Cores / 8 Threads, 3.4 GHz)
- RAM: 64 GB DDR4
- Storage: 2x 512 GB NVMe SSD in software RAID
- OS: openSUSE Leap 15.6
Why openSUSE? Stable enterprise base (SUSE Linux Enterprise), long support cycles, excellent package manager (zypper), and a clean separation between system and third-party packages.
nginx with Brotli: Maximum Compression
nginx is configured as a reverse proxy and web server, with the third-party ngx_brotli module for better compression than gzip (Brotli support is not part of the stock nginx build and has to be installed separately):
server {
    listen 443 ssl http2;
    server_name www.example.de;
    root /srv/www/vhosts/example.de/httpdocs/public;

    # TLS, using the Let's Encrypt wildcard certificate described below
    ssl_certificate     /etc/letsencrypt/live/example.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.de/privkey.pem;

    # Dual compression: Brotli preferred, gzip as fallback
    gzip on;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript;

    brotli on;
    brotli_static on;   # Serves pre-compressed .br files
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json application/javascript;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm/www.sock;
        # Without SCRIPT_FILENAME, PHP-FPM cannot locate the script
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
brotli_static on is the key: our build pipeline already generates .br files alongside the originals, and nginx serves them directly without compressing anything at runtime. Brotli typically delivers 20-30% smaller files than gzip at comparable CPU load. My main motivation was SEO: smaller transfers mean faster load times, and page speed is a ranking factor for search engines like Google.
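The pre-compression itself is a one-liner in the build, sketched here under the assumption that the brotli CLI is installed and the build output lands in a dist/ directory:

# Sketch: pre-compress build output so nginx never compresses at runtime
# (dist/ is an assumed output directory, not my actual pipeline)
find dist/ -type f \( -name '*.css' -o -name '*.js' -o -name '*.json' \) \
  -exec brotli --best --keep {} \;
# Result: app.js and app.js.br side by side; brotli_static picks the .br variant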
PHP-FPM via Unix Socket
PHP-FPM communicates through a Unix socket instead of TCP — this is faster because the entire TCP/IP stack is bypassed:
upstream php-handler {
    server unix:/run/php-fpm/www.sock;
}
The PHP-FPM pool runs under the nginx user, so there are no file permission conflicts.
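The relevant pool settings, as a minimal sketch (the file path varies by distribution and PHP version, and the pm values are placeholders, not my production tuning):

; /etc/php8/fpm/php-fpm.d/www.conf (path is an assumption)
[www]
user = nginx
group = nginx
listen = /run/php-fpm/www.sock
; the socket must be accessible to the nginx worker processes
listen.owner = nginx
listen.group = nginx
listen.mode = 0660
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6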
Security Headers
Every response includes security headers following current best practices:
# Note: hstspreload.org requires max-age of at least 31536000 (one year) for preload
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "0" always; # XSS auditor removed from all browsers, CSP is the correct protection
fastcgi_hide_header X-Powered-By;
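Since the comment above points to CSP as the actual XSS protection, here is a restrictive baseline as a sketch; a real policy must be tailored to the site's scripts, styles, and embeds:

# Sketch: a strict starting point, not a drop-in policy
add_header Content-Security-Policy "default-src 'self'; img-src 'self' data:; object-src 'none'; frame-ancestors 'self'; base-uri 'self'" always;

The frame-ancestors directive also supersedes X-Frame-Options in modern browsers, so the policy makes the older header redundant rather than conflicting.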
Catch-All: No Fingerprinting
Requests to unknown hostnames are immediately dropped — without a response, without revealing certificates:
# HTTP: Close connection immediately
server {
    listen 80 default_server;
    server_name _;
    return 444;
}

# HTTPS: Abort TLS handshake
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_reject_handshake on;
}
Scanners probing the bare IP get neither a response nor a default certificate, so they cannot enumerate the hosted domains from the TLS handshake.
Multi-Layered Security Architecture
SSH Tarpit with endlessh-go
Port 22 is intentionally open, but behind it runs not an SSH server but endlessh-go: a tarpit that sends an endless, slowly trickling SSH banner, keeping scanners stuck before authentication at negligible cost. The actual SSH server listens on a different port with key-only authentication.
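The real sshd's settings, as a sketch (port 2222 is a placeholder, not my actual port):

# /etc/ssh/sshd_config (sketch)
Port 2222
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password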
CrowdSec as a fail2ban Successor
CrowdSec analyzes logs and blocks attackers — similar to fail2ban, but with community intelligence: when an attacker is flagged by another CrowdSec user, their IP is automatically blocked on my server as well.
# Install CrowdSec
sudo zypper install crowdsec crowdsec-firewall-bouncer-nftables
# Check status
sudo cscli metrics
sudo cscli alerts list
Firewall with nftables
firewalld with the nftables backend uses an allowlist approach: only explicitly opened ports (HTTP, HTTPS, mail) are reachable from outside. Internal services like RabbitMQ are bound to localhost only.
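With firewalld that boils down to a handful of commands, sketched here for the default public zone:

# Sketch: open only the public services, everything else stays closed
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --permanent --zone=public --add-service=smtp
# (a mail server needs further services such as smtps, submission, imaps)
sudo firewall-cmd --reload
# Verify the active rule set
sudo firewall-cmd --zone=public --list-all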
Let's Encrypt Wildcard Certificates
A single wildcard certificate for all domains — via DNS challenge, not HTTP challenge:
# certbot with the third-party certbot-dns-inwx plugin
certbot certonly \
  --dns-inwx \
  --dns-inwx-credentials /root/.inwx-credentials \
  -d "*.wunner-software.de" \
  -d "wunner-software.de"
The advantage: no port 80 access needed, works for internal services too. A deploy hook automatically distributes the renewed certificate to nginx, Mailcow and RabbitMQ.
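certbot runs every executable in /etc/letsencrypt/renewal-hooks/deploy/ after a successful renewal and exposes the renewed certificate directory in $RENEWED_LINEAGE. A sketch of such a hook; the target paths and unit names are assumptions, not my exact setup:

#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/distribute-cert.sh (sketch)
set -e
# nginx reads the lineage directly, a reload suffices
systemctl reload nginx
# Mailcow keeps its own copy under data/assets/ssl/ (paths are assumptions)
cp "$RENEWED_LINEAGE/fullchain.pem" /root/mailcow-dockerized/data/assets/ssl/cert.pem
cp "$RENEWED_LINEAGE/privkey.pem"   /root/mailcow-dockerized/data/assets/ssl/key.pem
systemctl restart mailcow-compose.service
systemctl restart rabbitmq-server.service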
Automated Backups
A PHP script using pcntl_fork() for parallel execution runs daily at 04:00:
- Delete old backups
- In parallel: nginx config, PHP config, repositories, vHosts, databases
- Mailcow backup
- Nextcloud into maintenance mode, tar + mysqldump, maintenance mode off
- rsync to Hetzner Storage Box (only if all tasks succeed)
# Dry run for testing
php /root/scripts/backup-all.php --dry-run
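The core of the script is the classic fork/wait pattern. A condensed sketch follows; the task list is shortened and the real script does considerably more error handling:

<?php
// Sketch: run backup tasks in parallel child processes, then continue
// to the rsync step only if every child exited cleanly.
$tasks = [
    'nginx'  => 'tar czf /backup/nginx.tar.gz /etc/nginx',
    'vhosts' => 'tar czf /backup/vhosts.tar.gz /srv/www/vhosts',
];

$pids = [];
foreach ($tasks as $name => $command) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        fwrite(STDERR, "fork failed for $name\n");
        exit(1);
    }
    if ($pid === 0) {             // child process
        passthru($command, $rc);  // run the command, capture its exit code
        exit($rc);                // the child's exit code signals success
    }
    $pids[$pid] = $name;          // parent: remember which child does what
}

$allOk = true;
foreach ($pids as $pid => $name) {
    pcntl_waitpid($pid, $status);
    if (pcntl_wexitstatus($status) !== 0) {
        fwrite(STDERR, "task $name failed\n");
        $allOk = false;
    }
}

if ($allOk) {
    // rsync to the Hetzner Storage Box happens only here
}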
Docker for Isolated Services
Not everything runs natively. Some services run isolated in Docker:
- Mailcow: Complete mail server (Postfix, Dovecot, SOGo, rspamd)
- Umami: Privacy-friendly web analytics
- Remark42: Self-hosted blog comment system
- endlessh-go: SSH tarpit
These services are accessible through nginx as a reverse proxy, bound to 127.0.0.1.
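The nginx side is a plain proxy_pass to the loopback port, sketched here for Umami (port 3000 and the /analytics/ path are assumptions):

# Sketch: expose a localhost-only container through nginx
location /analytics/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}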
Clean Start/Stop with systemd
The container's own restart policy (unless-stopped) handles crashes, but during server shutdown containers are simply killed — no docker compose down, no clean ordering. That's why each Docker Compose project has its own systemd service:
[Unit]
Description=Mailcow Docker Compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/root/mailcow-dockerized
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=300
TimeoutStopSec=120

[Install]
WantedBy=multi-user.target
Type=oneshot with RemainAfterExit=yes is the key: systemd considers the service as active even after docker compose up -d has exited. During shutdown, systemd then calls ExecStop — and the containers are shut down cleanly.
The start order can be controlled via After= and Requires=. Mailcow starts first, since other projects may depend on its Docker network or services:
Start: docker -> Mailcow -> endlessh + Umami + Remark42 (parallel)
Stop:  endlessh + Umami + Remark42 (parallel) -> Mailcow -> docker
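In the dependent units this is just two extra lines, sketched here for Umami (the unit names are my assumption):

# umami-compose.service (sketch)
[Unit]
Requires=docker.service mailcow-compose.service
After=docker.service mailcow-compose.service

systemd inverts After= ordering on shutdown, so the dependent projects stop before Mailcow without any extra configuration.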
Monitoring with Grafana Cloud
Grafana Alloy (formerly Grafana Agent) collects metrics and logs and sends them to Grafana Cloud Free. This gives me dashboards for CPU, RAM, disk, nginx requests and CrowdSec alerts — without having to run my own Grafana server.
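A minimal Alloy configuration for host metrics could look like this sketch; the remote-write URL, instance ID, and token are placeholders that come from the Grafana Cloud stack settings:

// Sketch: collect node metrics and ship them to Grafana Cloud
prometheus.exporter.unix "node" { }

prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.cloud.receiver]
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus-prod-01.grafana.net/api/prom/push"

    basic_auth {
      username = "123456"                       // stack-specific instance ID
      password = sys.env("GRAFANA_CLOUD_TOKEN") // API token, kept out of the file
    }
  }
}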
Conclusion
A self-managed server means more work than a PaaS, but also full control, better performance, and lower costs. My motivation for this setup was simple: I already know the console commands and save myself the license costs of admin panels like Plesk (formerly Parallels Plesk). The initial setup takes time, but with systemd, automated backups, and CrowdSec the system runs largely maintenance-free afterwards. And honestly, Linux is no longer difficult to use these days; the era of cryptic configurations is long gone. With openSUSE Leap you also get a distribution that is binary-compatible with its enterprise sibling SUSE Linux Enterprise, guaranteeing long-term stability and professional support. And the knowledge you build along the way is valuable in any DevOps context.