Tom's Technology Take

Tom Petrocelli's take on technology. Tom was an IT industry executive, analyst, and practitioner, as well as the author of the book "Data Protection and Information Lifecycle Management" and many technical and market definition papers. He is also a natural technology curmudgeon.

Monday, May 11, 2026

So, I Built a Cloud

What's that you say?

That's right. I built myself a cloud in my home lab. I used old computers and off-the-shelf free and open-source software (FOSS) to create a system that approximates the functionality of Google Workspace. 
I immediately learned two things from this project. First, like a Tiki bar (IYKYK), it's never really done. There are always tweaks to make and new features to add. Second, you have to attend to it like any complex system, especially patches and updates.

Why did I build a Cloud?


The fact that I built a cloud system should immediately raise the question "Why?" There were three reasons really. First and foremost, I have concerns about being tethered to big tech for all of my computing needs. I have been using one form or another of the Microsoft technology stack as my daily driver for the past 40 years, starting with MS-DOS and Word back in the 80s. The difference between then and now is that I wasn't as reliant on Microsoft for so many applications. I could easily have swapped out my word processor since no one transmitted electronic files in those days. As time went on, however, more and more of my computing was being provided by Microsoft, so that now I rely on them for my OS, all of my office applications, all of my communication applications, and a big chunk of my creative apps. Woof! Worse yet, I heavily rely on OneDrive for storage and the ability to access files remotely. If Microsoft decided to change something I didn't like, there would be little I could do about it.

A case in point is the discontinuation of Microsoft Publisher. Publisher has always been a desktop-resident application; there is no online Publisher. Yet, when Publisher goes extinct, as it will this year, even the desktop version on my computer will stop working. In the bygone days, if a software developer stopped working on a product, what you already owned would continue to work, albeit without updates. In the age of rented software, that's no longer true. When a company sunsets a cloud or subscription application, it disappears or stops working. There are entire websites devoted to the graveyard of big tech products because it happens so often. Having my own cloud makes me less vulnerable to big tech's tendency to cut bait and leave customers dangling.


Another reason for building my own cloud was so that I could reuse a bunch of hardware I had sitting around. Frankly, I hate to throw out usable stuff, especially when much of what's in that stuff is toxic to the environment. I live by the maxim "Reuse, Repurpose, Recycle!" I had recently gone through an upgrade cycle of my household's main desktop and laptop computers and had a bunch of hardware sitting around idle. Most of it was under 10 years old. Add to that a bunch of ancient (16 years and 20+ years old respectively) hardware that was cluttering up my home lab, and it seemed like a good time to put it to use.


Finally, it seemed like a fun project. I know, not everyone would hear "building a cloud service" and think "Hey that's fun!" What can I say, I'm a computer geek. So, with that in mind, we were off to the races.


The Stack

I spent too many years in product development not to have a base set of requirements. In this case, they didn't need to be detailed or stringent. I wanted to allow the project to evolve. Still, I needed a starting point. With my goals in mind, this is what I came up with:
  1. It must be FOSS. If my primary goal was freedom from big tech, relying on anything but FOSS would have been a mistake. Now, that meant relying on middle-sized tech. For example, I knew that it would be a Linux stack. I chose Ubuntu Linux because I had the most experience with it. I could have chosen a project that was not tied to some company, but that would have increased the degree of difficulty. Sorry to say, the best Linux implementations come from corporations, not hobbyists.
  2. It must approximate Google Workspace. To that end, the software had to support, either in its core or through community apps, remote file access, online office applications, email and calendar, preferably a bookmark manager, and a bunch of other smaller features such as notes. I deemphasized social media functionality and video conferencing since this was mainly for my own use and I knew I didn't have the hardware to run video well.
  3. It had to run on more limited hardware. If I had to buy a rack of new servers to run the software, it would defeat the purpose of reusing the old hardware. This was another reason to use Linux, since you can always find a distribution that will run on more limited hardware.
  4. It did not need to be multi-tenant, only multi-user. Google, Microsoft, Zoho, Proton et al. have to run multiple customers' instances on shared hardware platforms. That's why they deploy multi-tenant systems. For my cloud, I only had to support multiple users in the same implementation, not multiple instances. This vastly simplified my requirements. I didn't need something like Cloud Foundry or even Kubernetes to run separate instances of my cloud software.
  5. It all had to be off the shelf. Outside of a few maintenance scripts, I didn't want to have to develop my own software. I may in the future extend my cloud to include a set of services that don't yet exist, but that's an entirely different project.
In the end, I came up with the following software stack:
  • Ubuntu Linux. It's stable, supported, and I had a lot of experience with it. I opted for the desktop versions so that I could make use of the GUI tools. If this was an enterprise stack instead of a home lab, it would be different. There I would likely not have a desktop absorbing resources and instead have Kubernetes running a bunch of containers. 
  • Snap containers. I know this could be controversial but snap containers are built into Ubuntu already and don't need any additional tooling to run. Meanwhile, they approximate the sandbox capabilities of other container systems such as Docker. I've used Docker before and found, for my more limited purposes, snaps and Docker containers are about the same.
  • SAMBA and SFTP. Most of the time, I need to access my files from my internal Windows environment. SAMBA provides the SMB remote file capabilities that are most recognizable to Windows and can work with Linux as well. Using SAMBA meant that I could sync OneDrive to my file store without too much effort. SFTP allowed for network access to files from within containers, which turned out to be a good thing. I didn't want to have to implement something like a service mesh to get secure file access across networked devices.
  • OpenSSH. Secure connections are a thing. I use SSH to remote into my servers and implement other secure protocols such as SFTP. That's expected.
  • Nextcloud. This was the meat of the software stack. Nextcloud is pretty well known and implements all the cloud basics. It is available as snap and Docker containers as well as standard Debian packages, and there is a vibrant community that produces other applications for it. There are also mobile apps for Android that provide access to most major functions. (A rough install sketch follows this list.)
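
For the curious, standing up the core of this stack on Ubuntu boils down to a handful of commands. Take this as a sketch under my assumptions (snap packaging, a placeholder password and hostname), not my exact command history:

    # Base services (sketch, not a literal transcript)
    sudo apt update && sudo apt install -y samba openssh-server

    # Nextcloud as a snap, then create the admin account
    sudo snap install nextcloud
    sudo nextcloud.manual-install admin 'a-strong-password'

    # Tell Nextcloud which hostname it may be reached at (placeholder domain)
    sudo nextcloud.occ config:system:set trusted_domains 1 --value=mycloud.example.net

The snap bundles its own web server and database, which is a big part of why the "off the shelf" requirement was easy to meet.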


Physical Architecture

Like I said, I had a bunch of leftover hardware floating around my home lab. Besides sitting idle, it sort of irritated my wife that all of this "useless" stuff was cluttering up my office. Useless? Ha!  

As a side note, using leftover hardware created some challenges. I wanted to add another hard drive (yes, I have a bunch of those lying around too) to one of the computers only to discover that all the SATA power connectors on the motherboard were in use. There were solutions for that problem, but it was not something I wanted to get into, especially since it meant buying more stuff. 

Anyway, I decided to separate the file store from the application server and add a bunch of other stuff to make it work better. This included network equipment, extra external archive storage, keyboards, and monitors. And yes, it was all leftovers.

The physical architecture ended up in three parts like this:

  • A file server. This contained a 1T disk, 16G of RAM, and an 8-core processor. I had to place it in an out-of-the-way spot in my house because it has a new but loud fan. Rather than connect it via the onboard WiFi, I used an extra remote station for my mesh network. I was able to attach the file server to the ethernet port on the remote station and achieve much better performance than WiFi to a remote or base station. As I said, I used a desktop version of Ubuntu so that I could use the GUI tools, so this also needed a monitor, keyboard, and mouse.
    The file server implemented SAMBA, which made it accessible to Windows and Linux clients as file shares and therefore immediately useful to my Windows clients (a sample share definition appears after this list). It also implemented an SFTP server to facilitate network connections from snap containers.
  • An application server, or app server. The app server has a 500G hard drive, a 16-core i7 processor, and 8G of RAM. The app server is wired into a switch that connects directly to the router. Its traffic has priority on the switch since the other devices on the switch are clients of some sort. Attached to the app server, besides the usual array of keyboard, monitor, and mouse, is a 1T USB drive. It is available on the network via SAMBA but is quite slow. That's fine since it only mirrors the most important files on the file server. It acts as a backup in case the file server is offline.
    The app server runs the Nextcloud software in a snap. You can look up Nextcloud to see its full range of capabilities, but needless to say, it does the job well. Nextcloud rarely pushes any one core into double-digit utilization, everything including the OS takes up less than half the disk space, and memory typically sits at about 30% utilization. There is a lot of headroom on this box for more services.
  • A monitoring station. I could monitor everything from my primary desktop, but it has other things to do, like running some AI crap. Instead, I have a 16-year-old Dell laptop that runs Pop-OS (which is built on Ubuntu) in use for monitoring the boxes. It only has 4G of memory and a 128G SSD, which makes it hard to use as a real laptop. It can operate as something akin to a Chromebook, but even then, its WiFi is slow and its processor only has two cores. It's just too slow to be truly useful. As a station for monitoring a couple of boxes, though, it's great. One more piece of equipment saved from a landfill.
    I put Pop-OS on it because... well, why not? I wanted to see if the Cosmic desktop was all that folks were saying it was. Originally Pop-OS, which is developed by System76, used a heavily modified version of Gnome to achieve its signature look and feel. It was more of a mod than a new desktop. With its last release, System76 implemented the new Cosmic desktop, written entirely in Rust. It uses all the familiar desktop elements such as a toolbar and popup launcher, along with a quick command line launcher similar to PowerToys Command Palette. It looks pretty and is very responsive. It's also a bit unstable, and I've already had to reinstall Cosmic when it became corrupted. But I digress.
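
As promised above, here is roughly what the Samba side of the file server looks like. This is a generic sketch (share name, path, and user are placeholders), not my actual configuration:

    # /etc/samba/smb.conf (sketch): one share visible to Windows and Linux clients
    [files]
       path = /srv/files
       browseable = yes
       read only = no
       valid users = tom

    # then add the Samba user and restart the daemon:
    #   sudo smbpasswd -a tom
    #   sudo systemctl restart smbd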

No-IP makes it really cloudy 

A cloud system that is only available within your interior network is not very cloudy at all. If it can't be accessed from the wider world, then you can't unshackle yourself from the bonds of big tech. I'm a cheapskate so I didn't want to pay a lot of money to my already annoying and expensive ISP to get a static IP address. Thankfully there is a good solution. It's called Dynamic DNS or DDNS. 

See, the problem of accessing a cloud service from an ever-changing IP address is that you can't know from moment to moment what that address might be. Consequently, the internet naming system that translates a numerical address into something a human can remember can't know what your IP address is if it keeps changing. DDNS detects changes in the IP address from your ISP and updates DNS records with that new address.

No-IP is a service provider for DDNS and a bunch of other network services. They have a free-tier DDNS that is perfect for home labs. My router already supports No-IP, so I didn't need to do anything fancy on my app server to make it work. No-IP provides me with a human-readable address for my network and keeps the IP address updated as my router detects changes.
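
If your router doesn't speak No-IP, a small update client on one of the servers can do the same job. A ddclient configuration roughly like this would cover it (hostname and credentials are placeholders; treat it as a sketch rather than a recipe):

    # /etc/ddclient.conf (sketch)
    protocol=noip
    use=web, web=checkip.dyndns.com      # discover the current public IP
    server=dynupdate.no-ip.com
    login=your-noip-username
    password='your-noip-password'
    yourhost.ddns.net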

A few changes in my firewall and I now have a cloud system that is available from web browsers and Android apps throughout the world. I had the opportunity to try it out from Europe, and it worked perfectly, even with the natural latency that comes with sending packets halfway across the world. 
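
For what it's worth, the firewall side is small. Assuming ufw on the app server (the firewall Ubuntu ships), it amounts to something like this, plus forwarding the same ports from the router to the app server's LAN address:

    # Open the web ports Nextcloud needs (sketch; adjust to taste)
    sudo ufw allow 80/tcp     # HTTP
    sudo ufw allow 443/tcp    # HTTPS, used by browsers and the Android apps
    sudo ufw allow OpenSSH    # keep remote administration working
    sudo ufw enable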


Final results

I have a cloud! More accurately, I have a set of web and mobile services that are accessible from anywhere I have an internet connection. These provide pretty much the same set of services I would use from Google, Zoho, or Microsoft, give or take a few. Nextcloud is not perfect, and I can see some instances where I might need to develop an app or two.

I haven't abandoned the services I use from Microsoft, Google, or Zoho, but I have a Plan B in case any of them raises prices a lot or cuts back on services dramatically. My daily driver still comes from big tech, but I am no longer completely reliant on them. That, and I have a bunch of computer hardware that didn't end up in a landfill, and I had some fun doing it.

It feels good to know that I didn't add to pollution, still have some technical chops, and can give a middle finger to big tech whenever I want. Nice!

Friday, April 24, 2026

Overseas Tech Hell

I've just gotten back from Sicily. I was there for over a month. Travel there was made much easier by the travel company we used called The Good Life Abroad. I highly recommend it. 

Sicily was a truly magical place. I'm guessing that much of what you've heard about it is not true but, more on that later. You know what would have made my trip even more magical? Trouble free technology. 

Alas, that was not to be. Every bit of tech I brought along to make it easier to operate in Italy was harder to use than anticipated and more difficult than it needed to be. We can communicate across the globe and run AI that streamlines our work, and still, it kind of sucks.

Here are the six layers of Tech Hell that made my trip more of a trial than it should have been. After this trip, I yearned for the days when we could only write letters, used paper maps, and had no further expectations.

eSIM Hell

In the absence of WiFi, it used to be that your choice was to either forgo use of your smartphone or pay stupid amounts of money to have your mobile carrier provide you with an international plan. New phones, however, come equipped with the ability to have a virtual SIM, a.k.a. an eSIM, stand in for your phone's hard SIM. With your eSIM in place you can, theoretically, connect to the local telephone network to get roaming data. I opted for an eSIM from Truely because it came recommended by my travel company.

The first problem was in the implementation. Even though our phones were listed as compatible, the automated setup failed on both of them. I dutifully followed their instructions for a manual mounting of the eSIM, which required more knowledge of your phone's settings than most people have. When I got to Italy, it didn't work. After some time on WiFi in a hotel lobby, tech support walked me through manual setup of an access point. If I weren't a computer geek, I would have been completely intimidated and befuddled. It should not have been this difficult for two different phones. 

When I used their app to add data to one of our plans, it didn't exactly work the way you'd think. It seems logical to assume that more data would appear on your current plan, but nope! Instead, you get another plan, which only activates when your old plan runs out. It would be nice to know that upfront.

Oh, and performance sucked. I simply cannot recommend Truely for your eSIMs. When I travel overseas again, I will try a different company, if only to see if it was Truely or all eSIMs that are terrible.

Verizon and 2FA Hell

I decided that I would go ahead and check something in my Verizon account. I wanted to be sure they weren't charging me for an international plan I didn't order. Seems easy enough. Sign into my Verizon account from my laptop. HAHAHAHAHA! No. We have to deal with two-factor authentication (2FA) because it's safer. They insist on sending you a text message to do the 2FA. My phone is not on their network, and text and phone calls do not work with my eSIM. I can't get the authentication code. Now, you'd think someone would have thought of this already. Perhaps some enterprising Verizon product manager might have mused, "What happens if someone's phone isn't working and they want to sign into their account, perhaps to buy a new phone?" Nope. There's a lot of nopes in this story. I get on their customer service chat and say, "I can't sign into my account because I can't get text messages." The stupid bot tells me that I have to sign in before it can help me or connect me to a human. I tell it that I can't sign in because I can't receive text messages. It gives me all kinds of ways to sign in when you have problems signing in EXCEPT for when I can't receive a text message. Gah! The stupid! It burns! They have my email - why can't they use that? This reliance on one type of authentication with no workaround is just inexcusably stupid process design.

ProtonVPN Hell

I thought to myself, "What if I need to connect to something back in the USA that doesn't want to connect to an overseas computer? Get a VPN!" I respect Proton and like their other products, so this seemed obvious. It is true that they give you a safer way to connect to things. The problem is that it is too safe. Countless websites just won't work because the VPN hides too much.

Let's be honest, one of the main reasons to use a VPN is to access content from back in your home country that you are already paying for. It's not like they give me a break on my subscription payments because I'm out of the country. The VPN is supposed to cure that problem. Supposed to...

Instead, ProtonVPN blocked so much information that streaming services either were wonky or didn't work at all. A great example of this is MLB TV. If I was connected to any country, including my home country, using the VPN, MLB TV wouldn't show me the list of games available. Strangely, it had no problem with the list of games if I wasn't connected to the VPN at all. The content, however, wouldn't play unless I was connected to the VPN. I had to choose a game from the list without the VPN connection and then connect using the VPN to actually watch a game. This sort of thing happened a lot. In most cases websites would just generate a variety of errors. 

Simply put, it may be safer, but it doesn't work. That's not a good tradeoff.

WhatsApp Hell

Much of Europe and Africa is really big on WhatsApp, the communication application from Meta. Like its sibling, Facebook Messenger, WhatsApp allows you to text, make calls, and video conference. Unlike Messenger, WhatsApp uses your phone number as your chief identifier. Your phone number is just an ID and has no other connection to your phone. That doesn't mean it doesn't want to connect to everything on your phone. See, WhatsApp tries to ingest all of the phone numbers from your smartphone contacts list by default. This is ostensibly to find other WhatsApp users that you know. The TOS, however, doesn't really say what else they might do with all that identifying information or how they store it. Kind of not the best thing from a security point of view. This is especially so since Meta is known to just hand over identifying information to (at least) the United States government without a warrant.

You can opt out of allowing them to ingest all of your phone numbers, but the app becomes much harder to use after that. WhatsApp also constantly nags you to allow it to go ahead and slurp up your contact list. It's unrelenting in its quest for your information. Like too many tech bros, WhatsApp doesn't like to take no for an answer.

It also wants all kinds of permissions to access your device, some of which don't seem to have anything to do with communication. You can opt out of some of these blanket permissions, but then some features don't work. For example, rather than give WhatsApp permission to always access my phone's microphone, I changed it to ask each time. Subsequently, when someone tries to call me on WhatsApp, I get the standard Android permissions popup asking if I want to allow it all the time, never, or just this time. If I choose "just this time," the call immediately drops. WhatsApp also insisted on having access to my Phone app so that it wouldn't "interfere with incoming calls." Facebook Messenger doesn't need to access my Phone app to function well, so it doesn't make sense that WhatsApp needs this type of permission.

Somehow this seems more sinister than just bad UX. From the moment of installation, WhatsApp seems to want to own my phone. If you don't understand how cell phone permissions work, it's too easy to just accept Meta's attempt to access your information for whatever they want.

Google (and Apple) Maps Hell

I'm the first to admit that online maps have changed everything about how we navigate. Want to find out how to get to your friends' new house? Google/Apple Maps. Need to figure out what bus to take to get somewhere in a city? Google/Apple Maps. The days of paper maps and real land navigation are pretty much gone.

For that reason, it is very important that these mapping services perform well, if not flawlessly. The problem is, they don't. 

I spent much of my time in Palermo. It's a major city with a metro area of roughly 1.3M people. To put that in perspective, it's about twice the size of Boston, MA. You would think that Google Maps would have a pretty good picture of the city. If you thought that, you'd be wrong. I can't tell you how many blind alleys Google Maps sent us down. If you think their location services are any better, you'd also be wrong. At one point, Google Maps thought I was in two locations simultaneously. That would be a neat trick if it were true, but it would also break the Space-Time Continuum. Space-Time is safe for now.

One of the biggest problems I encountered with Google Maps was the way it optimizes routes. Those optimizations seem to be based on either time or distance but never degree of difficulty. In old parts of ancient cities such as Palermo, Google Maps is happy to have you drive down a street so narrow that a car can barely make it without scraping the walls. Worse yet, it's happy to send people walking down those streets so that you end up pressed against a wall for dear life. It's great in an American suburb, but not so much in an old city.

Folks I knew using Apple Maps had similar experiences, so you iPhone people shouldn't get snotty here.

The Internet Disinformation Hell

Before going to Sicily, I did some "research" on the Internet as to Sicilian customs, dress, and other clues to living like a local, albeit for a month. According to the Internet, everyone dressed better than Americans, everything closed between noon and 4pm, trains were unreliable, no one spoke English, and car drivers were more than happy to turn you into roadkill. I read dozens of travel articles obviously written by people who had never been to Sicily. 

Palermo is a major cosmopolitan center. There are people living there from Africa, the Middle East, and East Asia. Tamil is a major language spoken there. Most people in service jobs had at least a smattering of English, and many spoke much better English than I could speak Italian or Sicilian. There are lots of expatriate British and Americans as well. With a little help from Google Translate, we had no problem getting around.

Shops and restaurants were open all day and everyone dressed like they were in Southern California with t-shirts, jeans, and sneakers the norm. There were a lot of Yankees baseball caps. Lots. Given that my heritage is Southern Italian, most Sicilians assumed I was at least Italian... until I opened my mouth.

Driving is a challenge, with few traffic lights, and multi-lane roads often don't have lane markers. That said, drivers stop for pedestrians. You walk out into a crosswalk and traffic stops for you. Except for those guys on scooters. They're just jerks.

The point is that everything you hear on the Internet about Palermo is wrong. Not a little wrong but vastly wrong. The contrast between what is on the Internet and what is on the ground just accentuates the failure of the Internet as an information source. You can trust some curated sites that have controls, and major news sites operated by real journalists, but everything else is highly suspect at best.

Conclusion


Sicily was amazing. The people were friendly and helpful. Food and wine are incredible and inexpensive. Getting around is much easier than in many American cities. Oh, and the trains run on time and there are lots of them to all major locations. In that regard, Sicily outpaces the United States by a lot.

Good old-fashioned, American-championed information technology was difficult to use, inaccurate, and downright frustrating. Many of the promises that information technology makes are hollow and not to be trusted.

After 40 years in the industry, what saddens me the most is how sloppy it is. Simple things, such as a button that's supposed to add data to a plan actually doing something entirely different, add stress and irritation, especially when you are in an unfamiliar place. Security systems that keep you safe by locking you out of resources you need are just bad implementation. No access is not the same as safe access. Bad information is worse than no information.

At the end of the day, all the money spent on technology should have guaranteed that it at least be "good" by now. It hasn't and that's a major business fail.





Monday, March 23, 2026

Adventures in Vibe Coding

First off, who came up with the term Vibe Coding? It's stupid. You will accomplish nothing if you are going on a "vibe". I feel like Inigo Montoya - I don't think that word means what you think it means. But I digress.

I have been experimenting with AI-assisted coding for a while. More like playing around with it. Needless to say, I have found it lacking. I checked in with some professional coders that I know and, to my surprise, found that many of them were using AI tools, but mostly to do simple stuff or give them ideas of how to solve a problem. So, while I still think it would be literally insane to create a mission-critical application using so-called vibe coding, I figured I'd give it a chance on something less dramatic.

Previous experiments didn't quite work out, but given some recommendations from folks I respect, I thought it worth another go.

The Project

To be fair to the AI tools, I wanted to pick a project that they had a reasonable chance of accomplishing. I set down the following criteria for such a project:

  • Must be simple enough that a moderately experienced programmer could accomplish it. 
  • Should be a well-known and, hence, easy to define problem. I was not looking for creativity or magic.
  • Needed to be short. A subset of "simple", a relatively short program would be unlikely to have too much complexity and would be easy to debug.
  • Should do something useful. Why bother with something esoteric when you might get something out of it.

I decided I wanted to port the Linux BASH script screenfetch to PowerShell in Windows. PowerShell is very useful for accessing Windows system information, and screenfetch is a useful Linux tool for extracting system information. A Windows screenfetch would not be overly complex, would be useful, would likely be short, and has a relatively narrow and well-known problem domain. There was also a direct example to work from, thereby giving the AI tools a leg up.
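
To give a flavor of the problem domain, the Windows side mostly reduces to querying CIM/WMI classes and environment variables, then printing the results next to an ASCII logo. A minimal, hypothetical sketch (not the generated script) looks like this:

    # Hypothetical sketch of the kind of queries a Windows screenfetch needs
    $os  = Get-CimInstance Win32_OperatingSystem
    $cpu = Get-CimInstance Win32_Processor
    $ramGB = [math]::Round($os.TotalVisibleMemorySize / 1MB, 1)   # value is reported in KB

    "OS:     $($os.Caption)"
    "Host:   $env:COMPUTERNAME"
    "Uptime: $((Get-Date) - $os.LastBootUpTime)"
    "CPU:    $($cpu.Name)"
    "RAM:    $ramGB GB"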

The First Pass

Having used Microsoft Copilot and ChatGPT for coding projects before, I did not have a lot of confidence in either. I tried a different AI agent that I had heard good things about called Qwen. I initially wanted to try out Claude Code, but that requires a subscription and I'm cheap about these things. I told the Qwen AI app that I wanted to recreate screenfetch for PowerShell and Windows and that it needed to be as close as possible to the original. Obviously, I said it better than that.

Qwen wrote the script, which I downloaded and ran. It puked. Sigh.

Fixing the AI

I then created a project in GitHub, added a Readme with some basic placeholder text, and accessed it all in VS Code. Some quick debugging found that the Qwen code had used deprecated functions and, worse yet, functions that did not exist at all in PowerShell. In other words, it made up things such as system calls and environment variables. That was a big miss, but predictable.

As best I can tell, Qwen couldn't discern between code with errors and good code that it had ingested. I suspect it spit out someone else's mistake. It was easy enough to find where the errors were, use the PowerShell documentation to find the actual commands to call, and fix the script so that it ran. Problem one solved.
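
The pattern of the fix was mundane: verify that a cmdlet actually exists, then swap anything deprecated or imaginary for the documented equivalent. A hypothetical before-and-after (not Qwen's literal output):

    # Does the cmdlet the AI suggested actually exist?
    Get-Command Get-CimInstance -ErrorAction SilentlyContinue

    # A typical swap: Get-WmiObject is deprecated (and absent from PowerShell 7+);
    # Get-CimInstance is the supported call.
    # $os = Get-WmiObject Win32_OperatingSystem     # old / deprecated
    $os = Get-CimInstance Win32_OperatingSystem     # current
    $os.Caption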

Incidentally, when I asked Copilot to fix the code, it came up empty, which was ironic because Copilot and PowerShell are both Microsoft products. I don't think it was just a Qwen problem so much as a general AI ingestion problem. AI has no good way to know good code from bad when it ingests it and hence will spew bad solutions as well as good ones.

That said, the structure of the script was good, and it worked with my fixes. Qwen got me off to a good start and hence accelerated development. This suggests, as my developer friends said, that AI tools helped with simple tasks, but could not be relied upon to write code that was correct and reliable. AI provides a head start but is not trustworthy.

Documentation

The next task I gave AI was to write documentation. Documentation is something I'm very good at and am able to judge well. I asked Copilot to do the following:

write documentation for the Screenfetch.ps1 script that includes a description, proper usage, explanation of functionality, and limitations.

It did a pretty good job. Good, mind you, not great. The text it created was accurate, though not complete. Copilot didn't seem to understand where to put more explanation or emphasis. Unlike a human, it also couldn't think beyond the basic parameters of the prompt. 

For example, Copilot correctly showed how to run the script... in one way. The better way was to add the location of the script to your PATH environment variable so that it could be run like a system command. That's how the original screenfetch works. I added that part. Copilot also didn't predict other ways, besides from within the PowerShell shell, that the script might be run. I added instructions on how to run it from the old-fashioned command line shell, cmd.
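
For reference, the two additions I made to the usage section look roughly like this (the folder name is hypothetical):

    # From cmd.exe, launch the script explicitly through PowerShell:
    #   powershell.exe -ExecutionPolicy Bypass -File C:\Tools\Screenfetch.ps1

    # Or add the script's folder to the user's PATH so it runs like a system command:
    $dir = 'C:\Tools'
    [Environment]::SetEnvironmentVariable(
        'Path',
        [Environment]::GetEnvironmentVariable('Path', 'User') + ';' + $dir,
        'User')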

The most useful part of the documentation was the limitations section. It correctly listed several limitations of the script which, in turn, suggested useful changes to the code. This is a powerful feature. If AI can analyze limitations of code, it is much easier for a human to identify changes that may not yet be tied to a requirement. Anticipating future requirements is very useful indeed.

Evolving Code using AI

With those limitations in mind, I decided to make some changes to the code. There were three areas to focus on. First, the original code didn't exactly replicate what was in the Linux version. It left out GPU and Packages entries. Second, as pointed out by Copilot, it didn't report on multiple disk drives, only the first fixed disk. Third, the graphic that Qwen had created was nonsensical. It was a weird ASCII blob that bore no relation to anything representing a Windows OS icon. 

I tried the last task first. I used Qwen, ChatGPT, and Copilot to create an ASCII representation of the current Windows logo. They came up with garbage. Just random ASCII characters. I ended up coding this by hand, which is exactly the type of drudgery that AI is supposed to save us from. It seemed an easy enough task, but alas, nothing got it close, let alone correct.

Next, I asked Copilot, from within VS Code, to add the GPU detection and account for multiple fixed disks. It actually did this without trouble. Copilot plugged in the code, VS Code showed both the original and changed code and let me decide if I wanted it or not. I was able to run the new code before accepting it. Definitely a win for Copilot and AI.
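
The additions were along these lines; this is an illustrative sketch of the approach, not the diff Copilot actually produced:

    # GPU entries
    Get-CimInstance Win32_VideoController | ForEach-Object { "GPU:  $($_.Name)" }

    # Every fixed disk (DriveType = 3), not just the first one
    Get-CimInstance Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
        $usedGB  = [math]::Round(($_.Size - $_.FreeSpace) / 1GB, 1)
        $totalGB = [math]::Round($_.Size / 1GB, 1)
        "Disk ($($_.DeviceID)) $usedGB GB / $totalGB GB"
    }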

Next, I asked it to add the number of applications to the script. It did so without error. Interestingly, it still labeled the entry as Packages, as it is in the Linux version. Linux Packages and Windows Applications are not the same thing. Packages include not only applications but also system and other shared libraries; applications are a subset of packages. For Windows, packages are not important compared to applications. I left it that way, however, so that it looked like the original screenfetch. I admit I'm on the fence on this one, though, since you would have to read the (human-updated) documentation to know the difference.
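
One plausible way to produce that count (and roughly what "Packages" ends up meaning on Windows here) is to tally the uninstall entries in the registry, the same list that "Add or remove programs" reflects. A hedged sketch, not necessarily what Copilot generated:

    # Count installed applications from the 64-bit and 32-bit uninstall keys
    $paths = @(
        'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
        'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
    )
    $apps = Get-ItemProperty $paths -ErrorAction SilentlyContinue |
            Where-Object { $_.DisplayName }
    "Packages: $($apps.Count)"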

Evaluation

Damn! For something simple like this, AI tools had a big effect. I would say that I was able to create this script in less than half the time it would have taken me otherwise. Given the initial and subsequent output, the narrow problem domain and existence of a known model were critical to its success. 

Therein lies the AI Achilles heel. It needs an existing model to work from. If I were porting code, AI would be a lot of help. If I were doing something creative from scratch, it would have been less help, perhaps useful as I added features later, but not nearly as much. Adding snippets of code seems to be easier than either wholesale creation or overhaul.

The other problem is that a developer has to have enough experience debugging the equivalent of someone else's code. It's much easier to understand code you wrote than code someone else wrote, or in this case, something else. If you can't trust the code, this is a serious problem.

AI tools for coding are improving. You still need an expert behind the wheel, you need to keep things simple, and you need to keep the problem domain narrow. AI is a tool, not a replacement for human coding. Use it as such.