Tom Petrocelli's take on technology. Tom was an IT industry executive, analyst, and practitioner as well as the author of the book "Data Protection and Information Lifecycle Management" and many technical and market definition papers. He is also a natural technology curmudgeon.

Monday, May 11, 2026

So, I Built a Cloud

What's that you say?

That's right. I built myself a cloud in my home lab. I used old computers and off-the-shelf free and open-source software (FOSS) to create a system that approximates the functionality of Google Workspace. 
I immediately learned two things from this project. First, like a Tiki bar (IYKYK), it's never really done; there are always tweaks and new features to add. Second, you have to attend to it like any complex system, especially patches and updates.

Why did I build a Cloud?


The fact that I built a cloud system should immediately raise the question "Why?" There were three reasons really. First and foremost, I have concerns about being tethered to big tech for all of my computing needs. I have been using one form or another of the Microsoft technology stack as my daily driver for the past 40 years, starting with MS-DOS and Word back in the 80s. The difference between then and now is that I wasn't as reliant on Microsoft for so many applications. I could easily have swapped out my word processor since no one transmitted electronic files in those days. As time went on, however, more and more of my computing was being provided by Microsoft, so that now I rely on them for my OS, all of my office applications, all of my communication applications, and a big chunk of my creative apps. Woof! Worse yet, I heavily rely on OneDrive for storage and the ability to access files remotely. If Microsoft decided to change something I didn't like, there would be little I could do about it.

A case in point is the discontinuation of Microsoft Publisher. Publisher has always been a desktop-resident application; there is no online Publisher. Yet, when Publisher goes extinct, as it will this year, even the desktop version on my computer will stop working. In the bygone days, if a software developer stopped working on a product, what you already owned would continue to work, albeit without updates. In the age of rented software, that's no longer true. When a company sunsets a cloud or subscription application, it disappears or stops working. There are entire websites devoted to the graveyard of big tech products because it happens so often. Having my own cloud makes me less vulnerable to big tech's tendency to cut bait and leave customers dangling.


Another reason for building my own cloud was so that I could reuse a bunch of hardware I had sitting around. Frankly, I hate to throw out usable stuff, especially when much of what's in that stuff is toxic to the environment. I live by the maxim "Reuse, Repurpose, Recycle!" I had recently gone through an upgrade cycle of my household's main desktop and laptop computers and had a bunch of hardware sitting around idle. Most of it was under 10 years old. Add to that a bunch of ancient (16 years and 20+ years old respectively) hardware that was cluttering up my home lab, and it seemed like a good time to put it to use.


Finally, it seemed like a fun project. I know, not everyone would hear "building a cloud service" and think "Hey that's fun!" What can I say, I'm a computer geek. So, with that in mind, we were off to the races.


The Stack

I spent too many years in product development not to have a base set of requirements. In this case, they didn't need to be detailed or stringent. I wanted to allow the project to evolve. Still, I needed a starting point. With my goals in mind, this is what I came up with:
  1. It must be FOSS. If my primary goal was freedom from big tech, relying on anything but FOSS would be a mistake. Now, that meant relying on middle-sized tech. For example, I knew that it would be a Linux stack. I chose Ubuntu Linux because I had the most experience with it. I could have chosen a project that was not tied to some company, but that would have increased the degree of difficulty. Sorry to say, the best Linux implementations come from corporations, not hobbyists.
  2. It must approximate Google Workspace. To that end, the software had to support, either in its core or through community add-ons, remote file access, online office applications, email and calendar, preferably a bookmark manager, and a bunch of smaller features such as notes. I deemphasized social media functionality and video conferencing since this was mainly for my own use and I knew I didn't have the hardware to really run video well.
  3. It had to run on more limited hardware. If I had to buy a rack of new servers to run the software, it would defeat the purpose of reusing the old hardware. This was another reason to use Linux, since you can always find a distribution that will run on more limited hardware.
  4. It did not need to be multi-tenant, only multi-user. Google, Microsoft, Zoho, Proton et al have to run their multiple instances on shared hardware platforms. That's why they deploy multi-tenant systems. For my cloud, I only had to support multiple users in the same implementation, not multiple instances. This vastly simplified my requirements. I didn't need something like Cloud Foundry or even Kubernetes to run separate instances of my cloud software.
  5. It all had to be off the shelf. Outside of a few maintenance scripts, I didn't want to have to develop my own software. I may in the future extend my cloud to include a set of services that don't yet exist, but that's an entirely different project.
In the end, I came up with the following software stack:
  • Ubuntu Linux. It's stable, supported, and I had a lot of experience with it. I opted for the desktop versions so that I could make use of the GUI tools. If this was an enterprise stack instead of a home lab, it would be different. There I would likely not have a desktop absorbing resources and instead have Kubernetes running a bunch of containers. 
  • Snap containers. I know this could be controversial, but snap containers are built into Ubuntu already and don't need any additional tooling to run. Meanwhile, they approximate the sandbox capabilities of other container systems such as Docker. I've used Docker before and found that, for my more limited purposes, snaps and Docker containers are about the same.
  • SAMBA and SFTP. Most of the time, I need to access my files from my internal Windows environment. SAMBA provides the SMB remote file capabilities that are most recognizable to Windows and can work with Linux as well. Using SAMBA meant that I could sync OneDrive to my file store without too much effort. SFTP allowed for network access to files from within containers, which turned out to be a good thing. I didn't want to have to implement something like a service mesh to get secure file access across networked devices.
  • OpenSSH. Secure connections are a thing. I use SSH to remote into my servers and implement other secure protocols such as SFTP. That's expected.
  • Nextcloud. This was the meat of the software stack. Nextcloud is pretty well known and implements all the cloud basics. There is a vibrant community that produces other applications for it, and it is available as snap and Docker containers as well as standard Debian packages. There is also a series of mobile apps for Android that allow access to most major functions.
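For anyone curious what standing up that last piece looks like, the Nextcloud snap installs in a few commands. This is a minimal sketch, not my exact setup: the admin name, password, and hostname below are placeholders.

```shell
# Install the Nextcloud snap; it bundles its own web server, PHP, and database
sudo snap install nextcloud

# Create the first admin account ("admin" and the password are placeholders)
sudo nextcloud.manual-install admin 'choose-a-strong-password'

# Allow access by hostname instead of just localhost
# (cloud.example.net is a placeholder for your own DDNS name)
sudo nextcloud.occ config:system:set trusted_domains 1 --value=cloud.example.net
```

After that, pointing a browser at the server's address brings up the Nextcloud login page.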


Physical Architecture

Like I said, I had a bunch of leftover hardware floating around my home lab. Besides sitting idle, it sort of irritated my wife that all of this "useless" stuff was cluttering up my office. Useless? Ha!  

As a side note, using leftover hardware created some challenges. I wanted to add another hard drive (yes, I have a bunch of those lying around too) to one of the computers only to discover that all the SATA power connectors were in use. There were solutions for that problem, but it was not something I wanted to get into, especially since it meant buying more stuff.

Anyway, I decided to separate the file store from the application server and add a bunch of other stuff to make it work better: network equipment, extra external archive storage, keyboards, and monitors. And yes, it was all leftovers.

The physical architecture ended up in three parts like this:

  • A file server. This contained a 1T disk, 16G of RAM, and an 8-core processor. I had to place this in an out-of-the-way spot in my house because it has a new but loud fan. Rather than connect it via the onboard WiFi, I used an extra remote station for my mesh network. I was able to attach the file server to the ethernet port on the remote station and achieve much better performance than WiFi to a remote or base station. As I said, I used a desktop version of Ubuntu so that I could use the GUI tools, so this also needed a monitor, keyboard, and mouse.
    The file server implemented SAMBA, which made it accessible to Windows and Linux clients as file shares. This made it immediately useful to my Windows clients. It also implemented an SFTP server to facilitate network connections from snap containers.
  • An application server, or app server. The app server has a 500G hard drive, 16 core i7 processor, and 8G of RAM. The app server is wired into a switch that connects directly to the router. Its traffic has priority on the switch since the other devices on the switch are clients of some sort. Attached to the app server, besides the usual array of keyboard, monitor, and mouse, is a 1T USB drive. It is available on the network via SAMBA but is quite slow. That's fine since it only mirrors the most important files on the file server. It acts as a backup in case the file server is offline. 
    The app server runs the Nextcloud software in a snap. You can look up Nextcloud to see its full range of capabilities, but needless to say, it does the job well. Nextcloud rarely pushes any one core into double-digit utilization, everything including the OS takes up less than half the disk space, and memory is typically only at about 30% utilization. Suffice it to say, there is a lot of headroom on this box for more services.
  • A monitoring station. I could monitor everything from my primary desktop, but it has other things to do, like running some AI crap. Instead, I have a 16-year-old Dell laptop that runs Pop-OS (which is built on Ubuntu) in use for monitoring the boxes. It only has 4G of memory and a 128G SSD, which makes it hard to use as a real laptop. It can operate as something akin to a Chromebook, but even then, its WiFi is slow and its two-core processor is just too slow to be truly useful. As a station for monitoring a couple of boxes, though, it's great. One more piece of equipment saved from a landfill.
    I put Pop-OS on it because... well, why not? I wanted to see if the Cosmic desktop was all that folks were saying it was. Originally Pop-OS, which is developed by System76, used a heavily modified version of Gnome to achieve its signature look and feel. It was more of a mod than a new desktop. With its latest release, System76 implemented the new Cosmic desktop, written entirely in Rust. It uses all the familiar desktop elements such as a toolbar and popup launcher, along with a quick command line launcher similar to PowerToys Command Palette. It looks pretty and is very responsive. It's also a bit unstable, and I've already had to reinstall Cosmic when it became corrupted. But I digress.
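To give a flavor of the file server setup: a SAMBA share boils down to a short stanza in /etc/samba/smb.conf, and the SFTP side comes free with OpenSSH's default sftp subsystem. This is just a sketch; the share name, path, and user below are placeholders, not my actual layout.

```ini
; /etc/samba/smb.conf -- example share stanza (name, path, and user are placeholders)
[cloudfiles]
   path = /srv/cloudfiles
   browseable = yes
   read only = no
   valid users = tom
```

Give the user a SAMBA password with `sudo smbpasswd -a tom`, then restart the service with `sudo systemctl restart smbd`, and the share shows up in Windows File Explorer.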

No-IP makes it really cloudy 

A cloud system that is only available within your home network is not very cloudy at all. If it can't be accessed from the wider world, then you can't unshackle yourself from the bonds of big tech. I'm a cheapskate, so I didn't want to pay a lot of money to my already annoying and expensive ISP for a static IP address. Thankfully, there is a good solution. It's called Dynamic DNS, or DDNS.

See, the problem with accessing a cloud service at an ever-changing IP address is that you can't know from moment to moment what that address might be. Consequently, the internet naming system that translates a numerical address into something a human can remember can't keep up if your IP address keeps changing. DDNS detects changes in the IP address assigned by your ISP and updates your DNS records with the new address.

No-IP is a service provider for DDNS and a bunch of other network services. They have a free DDNS tier that is perfect for home labs. My router already supports No-IP, so I didn't need to do anything fancy on my app server to make it work. No-IP provides me with a human-readable address for my network and keeps its IP address updated as my router reports changes.
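If your router doesn't speak No-IP natively, a small script run from cron can do the same job. Here's a minimal sketch, assuming No-IP's documented dynamic update endpoint; the hostname, credentials, and the api.ipify.org lookup service are placeholders and assumptions, not part of my setup.

```shell
#!/usr/bin/env bash
# DDNS update sketch: only ping No-IP when the public IP actually changes.
set -u

CACHE_FILE="${HOME}/.last_public_ip"

# Compare the current public IP to the last one we recorded.
# Echoes "changed" or "unchanged".
ip_changed() {
    local current="$1" cached="$2"
    if [ "$current" = "$cached" ]; then
        echo "unchanged"
    else
        echo "changed"
    fi
}

# Push the new address to No-IP's dynamic update endpoint.
# Requires NOIP_USER, NOIP_PASS, and NOIP_HOST in the environment.
update_noip() {
    local ip="$1"
    curl -s -u "${NOIP_USER}:${NOIP_PASS}" \
        "https://dynupdate.no-ip.com/nic/update?hostname=${NOIP_HOST}&myip=${ip}"
}

# Uncomment to run for real from cron:
# current_ip=$(curl -s https://api.ipify.org)
# cached_ip=$(cat "$CACHE_FILE" 2>/dev/null || echo "")
# if [ "$(ip_changed "$current_ip" "$cached_ip")" = "changed" ]; then
#     update_noip "$current_ip" && echo "$current_ip" > "$CACHE_FILE"
# fi
```

Caching the last-seen address keeps the script from hammering the update endpoint on every cron run, which DDNS providers tend to frown on.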

A few changes in my firewall and I now have a cloud system that is available from web browsers and Android apps throughout the world. I had the opportunity to try it out from Europe, and it worked perfectly, even with the natural latency that comes with sending packets halfway across the world. 
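For the record, "a few changes" amounted to opening the web ports on the app server and forwarding them on the router. A sketch, assuming Ubuntu's ufw firewall; your router's port-forwarding interface will vary.

```shell
# Open the standard web ports on the app server (ufw ships with Ubuntu)
sudo ufw allow 80/tcp     # HTTP
sudo ufw allow 443/tcp    # HTTPS
sudo ufw enable

# On the router: forward external ports 80 and 443 to the app server's LAN address.
```

That's the whole exposure surface; everything else stays closed to the outside world.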


Final results

I have a cloud! More accurately, I have a set of web and mobile services that are accessible from anywhere I have an internet connection. These cover pretty much the same ground as what I would use from Google, Zoho, or Microsoft, give or take a few features. Nextcloud is not perfect, and I can see some instances where I might need to develop an app or two.

I haven't abandoned the services I use from Microsoft, Google, or Zoho, but I have a Plan B in case any of them raises prices a lot or cuts back on services dramatically. My daily driver still comes from big tech, but I am no longer completely reliant on them. That, and I have a bunch of computer hardware that didn't end up in a landfill, and I had some fun doing it.

It feels good to know that I didn't add to pollution, still have some technical chops, and can give a middle finger to big tech whenever I want. Nice!