How I Built My Own Server Using Local and Cloud Resources
This post is part 1 of a series:
1 - How I Built My Own Server Using Local and Cloud Resources
2 - Deploying containers via Cloudflare Tunnel (Zero Trust)
3 - Reverse proxy on Oracle VPS with Traefik
4 - Encrypted Server Backups to S3 with Duplicati
Introduction
In a world dominated by big-name cloud providers and expensive hardware, I decided to take a different route. Instead of buying a Raspberry Pi or renting a high-end VPS, I took on a DIY challenge: turning a cheap single-board computer, a Set-Top Box (STB) repurposed from an Indonesian ISP, into a full-fledged self-hosted server. Paired with a free cloud VPS from Oracle, I built a hybrid infrastructure that serves my needs for development, learning, and deployment.
This article documents that journey: how I set up the hardware, configured the software, and optimized performance with limited resources. It’s not just about saving money; it’s about learning, experimenting, and truly understanding the infrastructure I depend on.
Why I Built This Setup
I had a few goals:
- Run self-hosted apps on my own hardware
- Avoid monthly cloud costs by using Oracle’s free VPS
- Gain experience with Docker, reverse proxies, and container security
- Build a low-power, always-on home server using what I already had
My Hardware and Tools
Set-Top Box (STB):
- Armbian 21.08.1 Bullseye, a lightweight Debian-based OS for ARM devices
- 4-core CPU, 2 GB RAM, 6 GB eMMC storage, repurposed from an ISP device
- External HDD (formatted as ext4) used for Docker volumes and Docker's data-root (much faster than NTFS/FAT)
- Uses Cloudflare Tunnel with Zero Trust to securely expose containers to the internet without opening any public ports
Oracle VPS:
- Ubuntu Server 22.04
- Always Free Tier (4 OCPU, 24 GB RAM, 200 GB Storage)
- Used for DNS resolution, reverse proxy, and edge routing with Traefik
First Setup: Getting the STB Ready
1. Preparing the Armbian STB:
I didn’t flash Armbian myself; I bought an STB device that already had it pre-installed, which saved time and let me dive straight into setup and configuration. Once the STB powered on, I moved on to the initial system setup to make the device ready and secure.
Step 1: Renaming the default user
The system came with a default user account (usually something like armbian). Rather than creating a new user from scratch, I renamed this default user to a custom username I preferred. This personalizes the system and improves security by avoiding a predictable username. Note that a user can’t be renamed while it has running processes, so run these commands as root or from another session:
sudo usermod -l carens -d /home/carens -m armbian
sudo groupmod -n carens armbian
The -d and -m flags rename and move the home directory in the same step, keeping the /etc/passwd entry consistent with the new path; since the UID doesn’t change, file ownership stays correct without a separate chown.
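To confirm the rename took effect, check the account entry (carens is just the example username used throughout; substitute your own):
id carens
getent passwd carens
The passwd entry should show the new username with /home/carens as its home directory.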
Step 2: Updating system packages
Keeping the system up-to-date is crucial for security and stability. I ran:
sudo apt update
sudo apt upgrade -y
This updates the package lists and upgrades installed packages to their latest versions.
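Optionally, since the STB runs unattended, Debian-based systems can also apply security updates automatically through the unattended-upgrades package:
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades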
Step 3: Configuring SSH access
To manage the STB remotely, I set up SSH securely:
- Verified SSH service is enabled and running:
sudo systemctl enable ssh
sudo systemctl start ssh
sudo systemctl status ssh
- For stronger security, I generated SSH keys on my client machine and copied the public key to the STB’s ~/.ssh/authorized_keys file, enabling passwordless login:
ssh-keygen
ssh-copy-id carens@stb-ip-address
- Disabled root login over SSH by editing /etc/ssh/sshd_config and setting:
PermitRootLogin no
Then, I restarted the SSH service to apply changes:
sudo systemctl restart ssh
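Once key-based login is confirmed working, you can harden things further by also disabling password authentication in /etc/ssh/sshd_config (test your key login first, or you risk locking yourself out):
PasswordAuthentication no
Restart the SSH service again after this change.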
2. External HDD (ext4):
Since the STB’s internal storage (6GB eMMC) is limited, I use an external HDD to store Docker volumes, container data, and other files.
Step 1: Connecting the HDD
I connected the external hard drive via USB to the STB. Before formatting, I checked how the device recognized the drive:
lsblk
This command lists all storage devices and partitions. The external HDD usually appears as something like /dev/sda or /dev/sdb.
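The output looks roughly like this (the device names and sizes below are illustrative, not from my actual device):
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 465.8G  0 disk
mmcblk1     179:0    0   5.8G  0 disk
└─mmcblk1p1 179:1    0   5.8G  0 part /
The disk whose size matches your external drive is the one to format.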
Step 2: Formatting the HDD with ext4
The external HDD came formatted as NTFS or FAT, but these filesystems aren’t ideal for Linux servers and can cause performance issues with Docker containers. To optimize speed and reliability, I reformatted the drive with the Linux-native ext4 filesystem.
Warning: Formatting erases all data on the drive — be sure to back up anything important!
To format, I ran:
sudo mkfs.ext4 /dev/sdX
Replace /dev/sdX with the actual device name found from lsblk.
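Optionally, you can assign a filesystem label at format time to make the drive easier to identify later (hdd-data is just an example name):
sudo mkfs.ext4 -L hdd-data /dev/sdX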
Step 3: Creating a mount point
I created a directory where the HDD will be mounted, for example:
sudo mkdir /mnt/hdd
Step 4: Mounting the HDD
To mount the HDD manually:
sudo mount /dev/sdX /mnt/hdd
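To verify the mount succeeded and check the available space:
df -h /mnt/hdd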
Step 5: Automating the mount at boot
To ensure the HDD mounts automatically after reboot, I edited the /etc/fstab file, adding a line like this:
/dev/sdX /mnt/hdd ext4 defaults 0 2
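Device names like /dev/sdX can change across reboots if other USB devices are attached, so a more robust approach is to reference the filesystem’s UUID instead. Find it with blkid, then use it in /etc/fstab:
sudo blkid /dev/sdX
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/hdd ext4 defaults 0 2
Replace the placeholder UUID with the value blkid reports for your drive.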
Step 6: Setting permissions
Finally, I set proper ownership and permissions on the mount directory so Docker and my user can read/write:
sudo chown -R carens:carens /mnt/hdd
sudo chmod -R 755 /mnt/hdd
This setup lets me offload storage from the internal eMMC to the external HDD with much better performance and reliability for Docker containers and data.
Note: I initially tried using NTFS and FAT formats, but Docker pull and write performance was really poor, practically unusable. Switching to ext4 made a huge difference in speed and stability.
3. Docker Installation:
For Docker installation on the STB, I won’t go into all the details here since I’ve already created a dedicated Docker installation and setup tutorial that covers everything step-by-step. In brief, I installed Docker using the official convenience script and made sure the Docker service is enabled and running on startup.
Feel free to check out the tutorial for the complete walkthrough!
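For reference, the short version using Docker’s official convenience script looks like this:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable --now docker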
4. Configure Docker to Use the HDD:
By default, Docker stores its images, containers, volumes, and other data on the internal storage; on this STB, that means the limited 6GB eMMC. To avoid filling it up and to improve performance, I configured Docker to use the external HDD mounted at /mnt/hdd as its data-root.
Step 1: Stop Docker
Before making any changes, stop the Docker service to avoid data corruption:
sudo systemctl stop docker
Step 2: Create a new directory for Docker data
Create a directory on the external HDD to hold all Docker data:
sudo mkdir -p /mnt/hdd/docker
Ensure proper ownership so Docker can read and write data:
sudo chown -R root:docker /mnt/hdd/docker
sudo chmod 711 /mnt/hdd/docker
Step 3: Update Docker daemon configuration
Docker reads its configuration from /etc/docker/daemon.json. Edit (or create) this file to specify the new data-root location:
sudo nano /etc/docker/daemon.json
Add or update the following JSON configuration:
{
"data-root": "/mnt/hdd/docker"
}
If the file already contains other settings, just add the "data-root" key inside the existing JSON object, making sure the syntax remains valid.
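For example, if you also cap container log sizes (a common companion setting, shown here purely as an illustration), the merged file might look like:
{
  "data-root": "/mnt/hdd/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}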
Step 4: Move existing Docker data (optional)
If you have existing Docker data you want to keep, move it from the old location to the new one:
sudo rsync -aP /var/lib/docker/ /mnt/hdd/docker/
The trailing slashes matter here: they tell rsync to copy the contents of /var/lib/docker into /mnt/hdd/docker rather than creating a nested docker directory.
Step 5: Restart Docker
Start Docker again to apply the changes:
sudo systemctl start docker
Verify Docker is running and using the new data directory:
docker info | grep "Docker Root Dir"
You should see:
Docker Root Dir: /mnt/hdd/docker
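Once everything checks out (existing containers start and docker images lists what you expect), the old directory can be deleted to reclaim eMMC space. Only do this after confirming the new data-root is in use, since the removal is irreversible:
sudo rm -rf /var/lib/docker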
This setup redirects all Docker data to the external HDD, freeing up internal storage and improving overall system responsiveness, especially when pulling and running containers.
Final Thoughts
Setting up a self-hosted server using a repurposed STB and an external HDD was both a challenging and rewarding experience. By formatting the drive to ext4 and configuring Docker to use it as the data-root, I unlocked significantly better performance, especially compared to when I used NTFS or FAT.
This setup allowed me to maximize cheap, low-powered hardware while preparing a strong foundation for containerized services. It may not be as fast as a Raspberry Pi, but it’s been a solid starting point for learning, experimenting, and running real workloads — all on a tiny budget.