Tom Petrocelli's take on technology. Tom was an IT industry executive, analyst, and practitioner as well as the author of the book "Data Protection and Information Lifecycle Management" and many technical and market definition papers. He is also a natural technology curmudgeon.

Showing posts with label misc. software. Show all posts

Monday, November 22, 2010

Novell Rides Off Into The Sunset

The curtain comes down for yet another 80’s era pioneer. Novell is finally throwing in the hat (not a Red Hat mind you) and selling itself off to Attachmate for the ungodly sum of US$2.2B. There are a couple of interesting questions about this acquisition but first a moment of silence for an historic old ship that has run up on the shoals of competition. At one point they were as hot as Google. But like Sun and other companies of my youth they didn’t keep up and will soon be no more.

Why sell now? Because Novell is obviously not going anywhere. At one time they had the number two PC server operating system, they still have the number two server Linux, and generally they were number two in many things. You can’t be number two without eventually ending up on someone’s shoe. So, if someone offers you enough money to float a missile cruiser, you take it. That’s being responsible. Or maybe the rent’s too damn high. (Caution: Sound is too damn high on this web site).

Why US$2.2B? Got me. I mean that’s not that much of a premium over Novell’s market cap but it’s a lot of money for a company that is a shade of its former self. Part of why that number is so high is because Microsoft (through CPTN Holdings LLC) dropped US$450M into the pot. They have a lot of cash. For them, this is like buying a pack of gum. Still, I have a hard time seeing this pay off for Attachmate. Unless it’s not about paying off for Attachmate per se. (I love foreshadowing…)

Who? Attachmate? I know what you mean. Who the heck are these guys that they can go out and buy Novell? That’s like Meritline (a purveyor of cheap Chinese electronics) buying Best Buy. Seems backwards. Attachmate has a product portfolio that looks like a hodgepodge of data center management products. The deal makes sense from a product point of view in that Novell has their own hodgepodge of data center tools and technology. So, depending on what stays with Attachmate and what goes to Microsoft, you will have a company with a huge collection of somewhat related technology. Combine the pieces in certain ways and you have a bunch of companies. The funny thing is that Attachmate is nearly as old as Novell, but you don’t think of them like Novell. I’m not sure if that’s good or bad.

Attachmate is owned by a group of private equity firms. That, plus its product portfolio mélange, makes it look like a rollup. Rollups keep going by rolling up more companies and selling them off in combinations. It’s like cooking – a little of this, a little of that, a pinch of something else and voila! You have a dish you can sell to investors. That might be where the payoff is.

Why should we care? Really we shouldn’t, but we do. Whenever a company with a history like Novell’s gets absorbed and turns into little more than a brand, it’s sad. We really should care if something bad happens next, like SUSE Linux going away and reducing competition in the Linux market. But really, I doubt that will happen, and if it does there’s still OpenSuse, right? If you’re a Novell customer, of course you care. You don’t know what these guys at Attachmate (or Microsoft) might do, and that has to mess with your head. Otherwise, it’s not a game-changing acquisition.

So, what does happen next? My guess is that they package up SUSE Linux with some other stuff and spin it off to investors or another company. If I’m the folks in Redmond, I want the identity management IP. That would go a long way toward creating online services and backend software for trusted Internet environments. Attachmate absorbs the rest and moves on its merry way. Depending on what it gets for the other pieces of Novell (like SUSE Linux and ZenWorks) and what it can combine with its own products and sell off, it might make money on this. This is not about product engineering. It’s about financial engineering. And in this type of financial engineering, one plus one can equal three.

I wave my hat to Novell as it rides off into the sunset. We’ll miss you amigo.

Thursday, October 28, 2010

Different Strokes for Different Folks

Apps are changing the way we use computing devices in a number of ways. One significant effect of Apps is a return to the “right tool for the job” mentality in computing. For the past 20 years or so, computing has been based on a single platform for all. There were big versions (servers), little versions (notebooks), and an in-between size (developer workstations). Still, it was basically all the same computer. For a brief while it looked like a specialty application platform might emerge (namely the PDA) but, alas, it stayed a relatively small market and merged into our phones.
The Cius, iPad, smartphones, and all things Android point to a different future for consumer and business computing. As these devices gain traction, the market will split into platforms that match the software they host. Tablets, smartphones, or hybrid devices like the Air will be the software platform of choice for mobile sales and marketing professionals. These users do not need, in fact have never needed, the full power of a PC. Most of their work consists of communications such as voice, email, video conferencing, and document sharing. Word processing needs are minimal. Most sales people do not write books on the road. They do need access to corporate applications such as CRM and ERP but only in a limited fashion. A bigger format device such as a tablet will give them better access to corporate applications and documents. A smartphone/pad device hybrid like Cius will provide what they need to get the job done.
Consumers will also like the tablet/smartphone device – one can argue they already do. Most home applications are pretty simple and, again, it’s about communication. Sharing pictures with Grandma, contacting the kids via SMS, and keeping up with Facebook. These are the typical uses for a computer at home. That and entertainment like music, books, and movies. Except for hard-core gamers, people don’t need a full-blown PC at home.
Where will the PC continue to dominate? Business, for one. Web-enabled applications, even internally hosted ones, delivered via a PC device will be the most popular. This will serve people in accounting, human resources, legal, and administration well. It is likely to be a thin client but still more than a tablet running Apps. Developers for sure will need powerful workstations, as will most technical folks. And we will only get the powerful Macs away from the graphic artists and video editing people by prying them from their cold dead hands.
The PC will not be going away anytime soon. It will have to share its space with a bunch of new devices. These devices will not just be smaller versions of the PC, like netbooks. They will be entirely new devices running different operating systems, using Apps instead of full applications, and serving very different purposes. The Internet and networking in general make it possible to have all sorts of devices work together. This, in turn, allows for devices tailored to different needs.
The era of the one-size-fits-all hardware and software is coming to a close.

Friday, October 22, 2010

There Is Something In The Air Tonight, Hold On.

Apple’s new Macbook Air might well be the next step in the evolution of consumer and productivity software. Not revolution but certainly evolution. And I said software not hardware. The Air has been described as the spawn of the Mac and iPad. It certainly has elements of both such as solid state storage and a touch screen. Most importantly, it runs the same Apps from the Apple App store that the iPad does. The big step is the new Lion operating system, a hybrid of Mac OS X and iOS.
Software began to change when telecommunications providers started selling small bits of software for your phone. The idea of small, rich clients with a big back-end has finally caught up to the wider software market. And this is not just an Apple thing. Google’s Chromium OS is slated to come out in the next few months and will have many similar characteristics. Even now there are a lot of PC platform Apps, or at least software that fits the model. There are the bazillion widgets/gadgets made for the Microsoft Sidebar, Google Gadgets, and Yahoo’s Widgets1 engines. These are trifles compared to the Evernote and Sobee applications, which fit an App model more closely in that they run on the native client platform and do something useful, yet are lightweight and web-synced. Windows Live Writer, which I’m using to write this blog, is more App than traditional software.
Apps are different from traditional software or web-based software (SaaS) in a number of ways. They are generally small, rely on an extensive back-end system (if they do anything useful), and are tailored for a specific client platform. Apps are lightweight the way web-enabled applications are, yet have the rich user experience normally associated with more heavyweight PC applications. The Apps for the iPhone, Android, Palm, and other smartphone platforms took the phone application to a higher level of functionality, creating software that was much more sophisticated. Much of it is still little more than a toy for your phone, but that’s changing. The iPad cranked up the volume even more with full-featured and full-screen apps.
What Apple has done is take the phone/pad App mentality and moved it to a personal computer (PC) platform. There are some interesting ramifications to this:
  1. Apps are smaller, originally designed to run on very low-resource devices. This puts more responsibility on the back-end to get things done. The positive aspect is that you can build PC-type devices that are less expensive, faster, and have longer battery life.
  2. They are sold through the Apple App store. This is a back-to-the-future situation. In the very far past of the computer industry (before my time) you only bought software from the hardware vendor. When my Dad2 wanted software for his IBM System 3, he bought it from IBM. Even if it was sold by a third party, IBM was involved in the purchase somehow. Microsoft and Intel screwed that up for the industry. With an open platform, anyone could make and sell software and you didn’t have to give a pound of flesh to the platform vendor. Apps return us (at least briefly) to the old model that was quite lucrative for platform vendors. Each mobile phone provider has its own store and likes it that way. It can’t stay that way but I’m sure the client platform providers3 will try.
  3. Apps are cheap. Partly because they are subsidized by subscriptions and ads and partly because they don’t do anything, Apps sell like webware – for little or no money. This also must change but I think they will stay relatively less expensive than traditional client applications.
  4. Most of the processing shifts to the back-end infrastructure, cloud4 or internal, while the user experience stays on the client platform. This sets it apart from webware and traditional client-based software.
  5. Apps won’t muscle enterprise applications off the corporate desktop. They will, however, become an adjunct to enterprise applications. Not everyone needs all the functionality of massive applications that SAP or Oracle puts out. An employee needs a limited view of their PeopleSoft applications and a salesmen on the road needs more limited CRM functionality. Both might prefer a lightweight App that works on his notebook and mobile phone platforms.
Apps represent challenges to software vendors dependent on the old PC model. There is no way they will be able to avoid license or distribution fees to one or more platform vendors. Don’t think so? Just ask anyone who develops iPhone apps. The fees might be in the form of traditional licenses for required software or in having to use special tools that you need to buy from the platform vendor (such as a Mac). Once the App honeymoon is over, charges for listing in the App stores will become a way of life. At the moment platform vendors want to spread their platform around by having lots of software support. Since they control the distribution, eventually they will want to start charging substantial fees for listing Apps.
For Apps to be better than simple toys, software vendors will have to offer sophisticated back-end services. When your App is a word processor, you won’t be able to jam that onto the equivalent of an iPad with a keyboard. It will be much more like Google Docs or Zoho Writer. To be in the software industry will require that you have a data center of some sort. This is bad news for someone who just wants to write code but good news for cloud services providers. They will be the data center for the smaller App makers. Those GPS Apps that are the favorites of smartphone users require a lot of behind the scenes support. So will anything of any worth.
In the end, Apps mean shrinking software margins. One of the great things about software as a business is the margins. Unlike electronics and other hard goods, the cost of goods sold for software is incredibly low because there are no material costs. As an industry, we’ve even done away with much of the packaging. As even lower cost software becomes more prevalent and license and distribution fees to platform vendors go up, margins will get tighter.
Development costs are also likely to rise. No one platform vendor controls enough of the market in the same way that Wintel dominates the PC market. To be competitive you will have no choice but to write Apps and pay fees to multiple vendors. At least until one comes to clearly outpace the others. Apps also mean that traditional partners and resellers might be caught out in the cold. You don’t buy Apps at Best Buy or from Ingram Micro. The retail and distribution part of this business is probably doomed. Just ask retailers who predominantly sell or sold music. Time to adapt or die5.
The one bright area will be infrastructure software. That will probably grow more than it would have normally. The distribution and license model for that segment of the software business won’t change either. Oracle knows this and is making sure it controls enough of the back-end framework to capitalize on the new reality. However, a lot of infrastructure will be “rented” through cloud computing services. Small ISVs are not going to build data centers but data centers will be built by someone. Frameworks for App computing will become big business too. Hopefully something akin to Java will emerge, allowing for multiplatform App development. This will be important to App developers as a means of reducing development costs.
Apps, until recently, were little more than an amuse bouche on your phone. With Apple’s announcement of the latest Macbook Air and Google’s Chromium in the wings (not to mention Android and Windows pad devices), that is about to change. The traditional model for software will change much more than it did with webware. If you’re an ISV, hold on. It’s going to be a hell of a ride.
Disclaimer: I use Evernote, Sobee (sometimes), Zoho Apps (especially their excellent CRM system), Microsoft Live Writer, and Google Docs. Now, they’re free to anyone but still, I thought I’d mention it because I mentioned them. On the other hand, I don’t use anything from Apple not even iTunes. It’s just the way it is.

Footnotes:
  1. Yahoo Widgets used to be Konfabulator. I liked the old name better. It was sort of steampunk. Now it’s just a generic name.
  2. Yes my Dad was a computer geek before he retired and my son is in school becoming a computer geek. We are thinking of starting a guild.
  3. I noticed I used the terms “client platform provider” and “client platform” a bunch of times without defining them. In this case, a client platform is whatever device the software runs on (PC, Mac, smartphone, pad device, shoe phone). The provider is who you get it from, such as Microsoft, Apple, or Verizon. There is some overlap there, I admit. Really, it’s who you will be forced to buy Apps from or through.
  4. Let’s not get into any “what is a cloud” arguments in the comments. When I say cloud here I mean an outside provider of virtualized computing resources. If it makes you happy to say IaaS be my guest.
  5. “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change.” – Charles Darwin. Charlie really knew what he was talking about.

Monday, August 23, 2010

Computer Industry Goes Zoom Zoom

You would think that last week’s announcement that Dell was acquiring 3Par for US$1.15B was news enough. Ha! Intel then raised eyebrows by announcing the acquisition of McAfee for US$7.6B. Now, comes Monday morning and HP raises the stakes against Dell by sending in their own, bigger bid for 3Par. It’s nice to be loved. Somewhere in all this, Hitachi Data Systems announced that they had acquired the intellectual property and core engineering team of Parascale, a cloud software company. Too bad for them. What should have been a sweet announcement was lost in all the noise.
So, what the heck is going on here? On the one hand, this is actually not that surprising. Computer tech companies tend to throw off lots of cash so they have a lot sitting around for acquisitions. Most of these big companies can thus afford to buy expertise or market share. This is especially true when you are coming out from the bottom of the market. Best to build up the arsenal before the economy really picks up.
This is an industry with a tradition of letting smaller companies trail blaze new technology and markets then get their payoff from a big company. In the long run this is cheaper and less risky for big companies but profitable for small ones. More unusual are the Googles and Microsofts who start in a garage and end up a behemoth. That’s the myth of computer tech but not the reality. What is not a myth is that deal making gives folks like me something to talk about. So here’s the talking about part.
Intel-McAfee Makes for Secure Communications
The Intel-McAfee deal has a lot of pundits scratching their heads. It’s a lot of money for a company with a big consumer business. McAfee’s revenue would barely be a rounding error for Intel. In 2009 Intel’s revenue was 18.5 times McAfee’s (~US$35B vs. US$1.9B). $1.9B is nothing to sneeze at but it will be a long time before a McAfee revenue stream makes up for the money Intel paid for it. What McAfee has going for it is lots of core security technology. More importantly, it’s spread across all aspects of the digital world – web, mobile, desktop, and server. Combine that with Intel hardware and chips and you have a much higher revenue-generating business than McAfee alone. It’s like having your cereal with fruit and milk. It’s part of a complete breakfast. It also positions Intel well for the long term. This is an example of the Gestalt principle – the whole is way better than the sum of the parts. Besides, people said similar things about EMC’s RSA acquisition and that has worked out well for them, right?
3Par Bid Up by HP
I wasn’t that thrilled about Dell’s acquisition of 3Par, except insofar as it worked well for the 3Par folks (nice folks). I’m both more and less thrilled about the HP bid along the same lines. It’s better for 3Par financially, so I’m more thrilled. It’s makes less sense for HP though. Unlike Dell they have a coherent storage story, reputation and brand going back decades, as well as an extensive product line. Do they need 3Par? At least with Dell, 3Par would be a prominent part of the line up. They might have even kept their name, like Equalogic did. With HP, they will be absorbed. It’s hard to see what this deal adds to the HP product mix that they can’t get or build more cheaply. I doubt they need 3Par’s customer base really. Perhaps it’s just a way to keep Dell from becoming a serious competitor in storage. Perhaps. Generally, I don’t like this for HP but do for 3Par investors. It will be interesting to see how high this one gets bid up. There could be crazy amounts of money tossed around here.
HDS Goes Parascaling Up In The Clouds
The cloud is about software. It sells hardware but doesn’t exist without software.  Parascale provides software that makes storage and servers into clouds. I don’t know enough about Parascale to say if it worked or was particularly good software. Assuming it worked just fine, then this is the kind of technology play that I like. It adds immediate value, helps move hardware, has broad, future potential in an emerging market, and is a deal that is easy to do. It’s kind of conservative but conservative often pays the bills.
Bye Bye to OpenSolaris
There were also a bunch of other, smaller announcements too. One that is significant was that Oracle will be dropping support for the OpenSolaris project. This is sad since there was a vibrant community around OpenSolaris. It was not, however, unexpected. Oracle has nothing to gain by supporting an open Unix product. In the end, this will be good for the Open Source community. There are already too many Linux and Unix projects and variants diluting the talent pool. Do we really need OpenSolaris and FreeBSD and OpenBSD and NetBSD and Darwin and so on and so on? Not really. So, while I understand how this bothers some people and generates a lot of “what else will Oracle kill?” questions (Don’t worry, it won’t be Java or MySql. They generate revenue), it’s really for the better. Time to move on.
I must admit, all this activity is exciting. It’s rare that this industry gets a week like this. Deals are usually more evenly spaced out. It’s like NASCAR for computer geeks.

Friday, August 06, 2010

Storm Clouds Approaching

IT shops have gotten to the point where they have a good handle on managing servers, networks, and storage. Headway is being made toward managing their virtual equivalents. Now, we have to add cloud computing to the mix. Cloud management may be the next great pain in the neck for IT shops. At the moment, not enough folks are seriously deploying in the cloud for it to be a crisis. That will change as more IT professionals accept cloud computing as something they can use to manage that tricky balance between cost and performance.
There are two paths we can go down (but in the long run) there’s still time to screw things up1. First, if cloud systems management is too difficult and the tools too primitive for too long, deployment to the cloud will be much slower and might even reverse. If, however, the benefits of the cloud are enough that deployment continues apace, sysadmins and programmers alike will find themselves wishing for someone to put them out of their misery. The sheer lack of tools will drive them to drink.
Cloud management is unlike other systems administration. Usually big chunks of an application, such as a service, are deployed to servers. With clouds, little bits of application, including individual and transient objects can be anywhere. Worse yet, individual objects might be parceled out to different clouds depending on their resource or security needs. It is possible to have different objects instantiated on different cloud services, public and private, which in turn execute them on different physical resources. Distribution on this scale can be very tough to deal with on conventional systems. No one is really sure how that will play out in a cloud.
The other big difference between clouds and other infrastructure, at least public clouds, is that you don’t necessarily have much visibility into the infrastructure. Managing applications that are not only distributed in a cloud but hidden behind a vendor’s veil of secrecy is like driving in a blinding snow storm. I don’t recommend it. There’s too much trust in the unseen and unknown.
How do you manage this type of environment? One approach is to build monitoring into the individual application objects. Daesin, an open source cloud API written for Java, allows this. The problem is that you have to build monitoring into your application objects yourself, and the approach may be language dependent.
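A minimal sketch of what "monitoring built into the application object" could look like, in Python. This is my own toy example, not the Daesin API: the object counts and times its own work and exposes those metrics for a collector to poll, so it can report on itself no matter which cloud it lands in.

```python
import time

class MonitoredWorker:
    """An application object that carries its own instrumentation."""

    def __init__(self):
        self.calls = 0
        self.busy_seconds = 0.0

    def do_work(self, payload):
        start = time.perf_counter()
        result = payload.upper()  # stand-in for the object's real work
        self.calls += 1
        self.busy_seconds += time.perf_counter() - start
        return result

    def metrics(self):
        # A management tool polls this, wherever the object is running.
        return {"calls": self.calls, "busy_seconds": self.busy_seconds}

w = MonitoredWorker()
w.do_work("hello")
w.do_work("cloud")
print(w.metrics()["calls"])  # → 2
```

The appeal is that the metrics travel with the object; the cost is exactly what the paragraph above says – every object in every language you use has to carry this code.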
Standards will make a difference. The DMTF just announced an initiative for cloud monitoring called the Open Cloud Standards. DMTF management standards have spurred vendors to develop management products in the past. Open cloud interfaces such as Open Stack will also help, since they will provide an open and standard platform for accessing clouds. These types of standards make it easier to develop software to manage cloud services. Right now, folks who develop cloud infrastructure software, such as TwinStrata, have to interact with nearly a dozen APIs from many different vendors. It’s basically a Tower of Babel, which makes software development much more difficult. Development will become easier when there is a single API, or at least a limited number of APIs, to deal with. When development is easier, more tools will become available.
So, if you truly believe that clouds are the next big thing, the next SAN/NAS or the next J2EE or .Net, then you need to start worrying about how to manage them. Now.

1. My apologies to Led Zeppelin. Stairway to Heaven does not deserve that kind of abuse.

Monday, May 17, 2010

Building Castles in the Clouds

Despite my best intentions, I keep having conversations about cloud computing. This probably means it's almost in the mainstream. As both of my faithful readers know, I've been a bit critical of cloud computing. Not so much the idea of cloud computing itself. It's more about everyone piling onto the concept even where it is inappropriate. I also find the discussion of private versus public clouds mostly irrelevant. That's a business decision related to how you want to cost out IT. It has nothing to do with technology.

Having had a bit of time to think about it, here's what I think it is and isn't about. In a nutshell:

  • It's about running enterprise applications, wholly or in part, somewhere out in the IT infrastructure. You don't care where so long as it's somewhere appropriate. From a software perspective, it means instantiating application objects but not caring where that happens so long as it meets the needs of the object.

  • It's about metered usage, either as a service or in-house. Paying for only what you use is very attractive.

  • It's about better application resource utilization. You save money when you don't overbuy. Another way to look at it is that you align resources to how critical something in the application space is.

  • It's about flexibility. Being able to run application objects anywhere in the infrastructure means less dependence on certain assets. Makes for better availability and more cost savings.

  • Public versus private cloud arguments are only valid or relevant when talking about how you pay for IT. If it is in your best interest to convert a CAPEX to a variable expense, by all means go for the public cloud. The same is true when thinking of personnel costs. You might not have the expertise to run a private cloud so you either hire or go outside. These are classic outsourcing decisions.

  • Often, a cloud uses a virtualized hardware environment (storage, servers, and networks), but it doesn't have to. The virtual application space is what matters. That's why we have middleware.

The last point is key. While virtualized servers and storage provide a great environment for running a cloud, they're not necessary. It's the middleware environment that matters. For example, a lot of what we think of as cloud computing is achievable using existing Java 2 Enterprise Edition (J2EE) technology. J2EE environments, such as JBOSS, perform all the tasks needed to build a cloud. They handle:

  • Persistence

  • Coherency

  • Distribution

  • Synchronization

  • Object Caching and Reuse

J2EE allows you to instantiate objects on any physical server running the J2EE application server. It doesn't matter if that server is virtual or not. While that might be a good idea, it's not necessary to make a cloud.
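To make that location transparency concrete, here's a toy sketch in Python. It is my own invention – no real J2EE container works this simply – but it captures the contract: the caller asks the container for an object and never specifies, or even learns beforehand, which node will host it.

```python
class Container:
    """A toy 'application server' that places objects on nodes for you."""

    def __init__(self, nodes):
        # node name -> number of objects hosted there (a crude load measure)
        self.load = {n: 0 for n in nodes}

    def instantiate(self, factory):
        # Place the new object on the least-loaded node. Whether that node
        # is a physical box or a VM is invisible to the caller.
        node = min(self.load, key=self.load.get)
        self.load[node] += 1
        return factory(), node

cluster = Container(["node-a", "node-b"])
obj1, where1 = cluster.instantiate(dict)  # lands on one node...
obj2, where2 = cluster.instantiate(dict)  # ...the next lands on the other
print(where1, where2)
```

The point of the sketch is the last line of `instantiate`: placement is the container's decision, not the application's – which is why the middleware layer, not the hardware, is what makes something cloud-like.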

One look at Google's cloud SDK tells the story. You import a library of Java objects that interact with the Google cloud and voila! Your application objects are running in their cloud. You could conceivably run some objects in their cloud and others in-house. It's not that easy of course but pretty close. Google provides all the infrastructure that you need to instantiate and manage application objects elsewhere. How they do it is unimportant.

The ultimate cloud would be virtual everything of course. That way you get maximum alignment and utilization. Virtual servers using virtualized/federated storage, with middleware that provides a virtual application space would meet the needs of a cloud nicely.

But in the end, it's the software that counts. The application is what it is really all about.

Friday, April 16, 2010

Tom's Plain Language License for Mere Mortals

For the past few months I've been working sporadically on a piece of software. It's not a great-looking bit of code but interesting nonetheless. It's a simple document management system called Document Locker that allows you to store files in a repository (so you know where they are) and wrap some metadata around them. That's not the interesting part. What's neat is that you can define relationships between documents and see which is connected to which. This approach will be most helpful when organizing multi-part documents, scientific papers, and software projects.

Finally, the day is near that I am to release it into the wild, including source code. Roughly two weeks hence, Document Locker 0.5 (the first iteration) will be made available through www.techalignment.com. Part of preparing for the release is developing a license. All software needs a license to keep it from being misused and to protect the creator. This includes open source software. It never ceases to amaze me how few people realize that open source has a license.

I looked over all the major licenses such as Apache and GPLv3. All of the open source licenses I examined had the same problem – they are a mass of legal gobbledygook. I'm used to reading contracts and license agreements (which are a kind of contract) and they were still a tough slog for me. And long? As Sarah Palin would say “You betcha!” No sane person would subject themselves to reading these documents unless driven by necessity. It's like eating bugs. You would do it if you had to but not if there was an alternative.

The reason that even open source licenses are like this is because they are written by lawyers. Lawyers, like engineers, have their own technical language. They have their own concerns and worries and they think in a certain manner. The documents reflect this. I'm not saying it's a bad thing but for a great many uses this type of language obscures more than it illuminates.

When you get down to it, a license exists to create an agreement between people about rights. As the creator, I hold all the rights. You can do certain things with my creation but only those things that I allow. If you can't understand what I'm allowing you to do, how can you be expected to uphold your end of the agreement? You can't.

So, I set out to develop my own license. I'm pretty sure that a large number of lawyers would think I'm nuts, just as I would if lawyers wanted to write software. My goals were:

  1. To make obvious what I was allowing a recipient to do.

  2. To make obvious what they can't do.

  3. To say so in plain language that anyone could understand.

  4. Keep it short. Life's too short for long licenses.

  5. I wanted to make a point about licenses.

The result was Tom's Plain Language License for Mere Mortals. It is as plain language as I could get. Unfortunately there is no getting away from referencing other, big, heavy, legalese licenses. Since I have to reference the Java and Neo licenses the reader is still stuck with reviewing those licenses. Too bad. Otherwise, it's pretty straightforward and just a bit irreverent. Irreverent? That helps me achieve goal number five and make the point that software licenses, even benign ones, are so complicated, so full of legal jargon, that they are useless as the basis for a relationship.

With Tom's Plain Language License for Mere Mortals you know where you stand. If you can't understand what you are agreeing to then you shouldn't be mucking about with software. Really.

If you are interested in a preview, you can check it out at www.techalignment.com/TomsPlainLanguageLicenseforMereMortals.pdf. Send comments by Twitter (direct message please) or email them to me if you know my address.

Friday, April 09, 2010

Programmer's Religion

I've been thinking a lot about programming languages. It occurred to me that I know quite a lot of them. Though I personally prefer Java, I've also written code in C++, C, Pascal, various assembly languages, HTML (a kind of language), Javascript, XML, SQL, and a variety of others professionally. For amateur projects, I've used PHP, PERL, VB (ick!), C#, and now Python. I even took a shot at LISP once. So, I don't have a lot of religion about what I program in. They all have strengths and weaknesses, good and bad.

So it came as no surprise when a very knowledgeable technologist asserted to me that a good programmer could write code in any language. As expected, I agreed. Being a polyglot programmer myself, it certainly seemed correct at the time. But maybe...

Since then I've come to believe that you can write a program in just about anything but not write code professionally in more than a few. Modern coding requires more than just understanding computer languages. With the exception of a few outliers like LISP and SmallTalk, most computer languages fall into a handful of syntax groups. Java, C++, C#, PHP, and Javascript are so similar that it's hard to imagine any experienced programmer not getting the basics.

But only the basics. The problem lies in what it takes to be a productive programmer. Understanding how to program a computer or write in some language is less than half of what you need to know to do it professionally. Enterprise applications especially are built around entire frameworks and environments.

Take Java. There are several built-in graphics libraries such as AWT and Swing. It has a rich set of general collection objects with the ability to adapt them to any type of object using simple syntax. There are lots of utility, security, reflection, multi-threading, and database connection classes. That's not even getting into Enterprise JavaBeans (the middleware/SOA framework) or servlets which are used for web programming. Keeping up requires books, training, and on-line resources. You have to focus on staying current with changes and new additions.

This is where the difference lies. There is Java the language and Java the development environment. C++ is a language but C++ in the .Net environment is what you write code in. It's very difficult to focus on being expert enough to be productive in more than one of these complex environments. This is why you see programmers with so much religion around their languages and environments. .Net people can be passionately pro-.Net. Java people can be Java bigots.

It's a natural side effect from all the effort that goes into being expert enough at something to do a good job with it. It's not that one environment is better or worse – all have their strengths and weaknesses. The demands of being productive in any particular environment require that you focus all of your attention on being good at one thing. You can switch languages and environments but it's like switching religion. It can be done but it's a process that will take time, effort, and will.

So, I take back what I've said in the past. A good programmer needs to have religion. Not religion in the sense of unquestioning zealotry. Religion as in a deep devotion to their craft and a singular focus on what they are doing.

Monday, March 22, 2010

Monkeys Flinging Poo

As anyone who reads this blog (thanks to both of you) knows, I've taken up writing code again. It's a hobby to keep me busy while I look for my next great adventure. The act of writing code is an act of creation. You make something. Software is especially satisfying since, in a sense, you make something out of nothing. Feels kind of god-like in that way. You start with nothing, say “let there be applications”, and it comes into being. I'll grant you, it's not as easy as that but neither was creation. The big bang, stellar and planetary formation, and evolution all took energy.

At the same time, I've been watching various members of the software industry throw patent lawsuits at each other. It's a bit like watching monkeys in the zoo fling poo at each other. Mildly amusing until some of the poo escapes the confines of the cage and hits a spectator. All of a sudden, it's not so funny. Well, it is kind of funny but not for the one who gets hit with the poo.

All of this legal poo flinging just doesn't feel right to most people. Yes, we want our creations protected. If someone tries to steal my work, I would become an angry god and want to throw thunderbolts (and poo probably). On the other hand, what is being patented is ephemeral. There is still a lot of rancor over Amazon's One-Click patent. The idea of patenting the idea of a single click purchase seems absurd to most people. A lot of software patents are that absurd. The upshot for the software company is that they are expected to protect important assets but their own customers think they are greedy hatemongers when they do.

Worst of all, customers get caught in the crossfire. They worry that they will lose their investment through no fault of their own. Will they have to change what is working for them in the future because of some crazy corporate rock throwing? In essence, they are afraid of being the spectator that gets hit when the monkeys go at each other.

Lawsuits are not good for companies either. In technology-based industries, even when you can claim victory in a lawsuit, it's almost always a Pyrrhic one. You don't so much win as lose less. Take Apple for instance. They are suing HTC for making a smartphone whose software, they feel, violates patents associated with the iPhone. It doesn't matter if, as a matter of law, they are right or wrong. The damage to their image is already done. Instead of appearing to be a technology company that wants to transform the world (“Think Different!”), they are revealed to be a company like any other - more concerned with money than with customers. Win or lose, they have already lost something. What did the Sun and NetApp lawsuits do besides make both look venial?

At the heart of the problem is the nature of software. It doesn't follow the same rules as other things that are awarded patents and copyrights. Software is not physical. You cannot hold it in your hand. Holding a CD or DVD is not the same. It's like holding an empty glass and claiming you are really holding the air. A physicist might agree but everyone else will think you're being silly.

Software is not literature as much as we like to think of it as art. Digital music is still music and an ebook is still a book. Software is neither of these. It is a thing unto itself that follows its own rules. Code is more than mere instructions but less than art.

Software represents a new type of intellectual property. We need to recognize that. Copyright law doesn't adequately protect the software creator which is why End User License Agreements stuffed into a PC game box read like the US Constitution. With the amendments and commentary. Patents don't work since there is no physical manifestation and software is hopelessly vague to define under patent law. Just read a couple of software patents and you will find yourself saying things like “Well Duh!” and “We've been doing that for 20 years now!”

IP law, especially in the US, has struggled for two generations with software. How do we protect our creations when they are unlike any other creations? How do we set up rules that people can easily follow? Patent and Copyright wars are counter productive. We need guideposts that avoid these conflicts.

I propose a hybrid of copyrights and patents. Patent law gives a short term monopoly to someone who devises something unique. That uniqueness is the code base. For the software industry to keep moving apace, it needs to be a really short term. A year or so, not seven or ten. That's just enough to give a company a head start.

After that, it should be protected more like a copyrighted material. People shouldn't be able to just copy and distribute your product without permission. They can come up with something of their own but not take your product as their own. That forces them to invest something in their take on what you did. But not until you have time to grab a little market share.

I'll let the lawyers work out the details. They're good at that.

Like the aforementioned monkeys, the patent lawsuit winner is the one with less poo on them. They still end up with poo on them though. And no one wants to hang around and watch for fear of getting poo on themselves. In the end, you find yourself alone and covered in poo. Not the way to go.

Wednesday, February 17, 2010

Into the Matrix with Neo

Everyone needs a hobby. Lately, mine happens to be writing code. I used to be a software engineer, so I once coded for a living. Over time two things happened. One, it ceased to be fun (that's why we call it work, folks) and two, I didn't need to do it anymore. As my career transitioned into management and then executive management, I rarely got my fingernails dirty with real coding projects.

What's good about that is that coding could become fun again. So a couple of months ago I decided to start on a new coding project. I had two goals – learn some new technology and do something at least marginally useful. That has led to my latest project, a document management system built on the idea of relationships between documents.

Most document management centers around classifying documents in some fashion. Whether you use a hierarchical category system or a free-form tagging schema, it's about putting documents in buckets. I wanted to add something else to the mix. Documents rarely stand on their own. They exist in relationship to other documents. Think social networking for your files.

Unlike people, documents don't know other documents nor do they care if another document is having lunch at Spot Coffee. Documents do belong to an ecosystem just like we humans do. They refer to other documents and are part of larger documents and collections of documents. They have their own relationships.

To model these relationships in more traditional databases is difficult. Using an SQL RDBMS you end up with a lot of cross reference tables and lots of Joins. It's not what SQL or relational databases were designed for. Instead, I decided to use a graphing database called Neo. Graphing databases organize data as a series of nodes connected by explicit relationships. This allows you to build applications that focus on finding like objects. For example, what documents are referenced by this one? Or, which are the child documents to this one? These questions are more easily answered by a graphing database.
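To make the cross-reference-table pain concrete, here is a minimal sketch of the relational approach, using SQLite from Python's standard library. The schema and titles are invented for illustration; this is not the Document Locker schema. Even the simplest question – which documents does this one reference? – is already a join, and every new kind of relationship means another table and more joins.

```python
import sqlite3

# One table for the documents, one cross-reference table per relationship type.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE document (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE doc_ref  (from_id INTEGER REFERENCES document(id),
                           to_id   INTEGER REFERENCES document(id));
""")
db.executemany("INSERT INTO document VALUES (?, ?)",
               [(1, "Spec"), (2, "Design"), (3, "Test Plan")])
db.executemany("INSERT INTO doc_ref VALUES (?, ?)", [(2, 1), (3, 1), (3, 2)])

# "What documents does the Test Plan reference?" -- a join already.
rows = db.execute("""
    SELECT d.title FROM doc_ref r
    JOIN document d ON d.id = r.to_id
    WHERE r.from_id = 3
    ORDER BY d.id
""").fetchall()
print([title for (title,) in rows])  # ['Spec', 'Design']
```

Add "is part of", "supersedes", and "is a child of" relationships and you can see how the table count and the joins pile up.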

To date, graphing databases are primarily used for social networking applications. That makes sense since managing data by relationships sits at the core of social networking. Graphing databases have a lot of other potential uses. They would be great for modeling workflows, simulations, and building ontologies, all hot areas of software.

Neo has a few warts. It's still only a release candidate so things are still changing. The most recent version, which brought Neo out of beta to an RC, changed the names of several basic objects. That forced me to go back and recode certain key sections of the application. The online documentation is good at documenting the API but light on how to make things work right. Figuring out the transaction model, even though it's pretty simple, required digging into the class-level documentation and a bit of trial and error. Might be a book in there. Hmmm...

In the end, I won't have a commercial grade application. My GUI design skills are too poor to make it look and behave the way I want it to. However, once my pet project is done, the application will at least be useful. I will have learned something interesting and it will have been fun. What more can one want out of a hobby?

Wednesday, January 27, 2010

Nice. Not Thrilling But Nice.

I'm a bit puzzled by the recent Cisco-NetApp-VMWare announcement. Besides wondering how VMWare was even allowed to sleep with EMC's enemy, its focus on multi-tenancy security has me a bit confused. Not confused in the “what the heck are they talking about” way. More of the “So what do they have to do with it anyway” manner.

Multi-tenancy is the sharing of an application amongst different users who, if they had their way, would much rather not share the same air. I saw this in the IP management software and call center outsourcing businesses. In both cases, customers needed to be assured that their incredibly valuable and secret data could never be viewed by someone else. For the outsourced software services provider, such as Salesforce.com, this is a pain in the neck. An understandable one but a pain nonetheless. To get the economies of scale outsourcers need to be profitable, it is best if you don't have to repeat yourself too much. Multiple instances of the same applications require more hardware, more software licenses, and more maintenance. In other words, more costs.

In most cases, if an application is designed correctly you can use a (logically) single application and database for everyone. That's the crux of the matter – if it's designed right. Bugs happen and there is the potential for data to be exposed to the wrong people. This is a rare occurrence but people worry about it anyway. Customers should worry about backup processes more since there is much more risk there. It's like worrying about getting hit by a meteor. It can happen but almost never does. Meanwhile, you don't worry about getting in your car and driving on the highway. Guess which one is more likely to get you killed.
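A quick sketch of the "designed right" case, using SQLite from Python's standard library (the table, tenant names, and items are all invented for the example): one logical database serves every tenant, and isolation comes entirely from the application always scoping queries by tenant.

```python
import sqlite3

# One shared database for all tenants -- the single-instance design.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant TEXT, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("acme", "widgets"), ("acme", "gears"),
                ("globex", "sprockets")])

def orders_for(tenant):
    # Every query is filtered by tenant. No customer sees another's rows
    # as long as this discipline holds -- the "if it's designed right" caveat.
    return [item for (item,) in
            db.execute("SELECT item FROM orders WHERE tenant = ?", (tenant,))]

print(orders_for("acme"))    # ['widgets', 'gears']
print(orders_for("globex"))  # ['sprockets']
```

The bug risk, of course, is a single query somewhere that forgets the tenant filter – which is exactly what multi-tenancy customers lie awake worrying about.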

This intense customer worry drives many outsourced service providers to either give almost no guarantees about security of data or physically segregate data on different servers running separate instances of the application. Virtualization helps a lot in that you can run reasonably secure instances of applications on the same hardware with little chance of bleed over. Everyone gets their own application space but not their own physical box which cuts down on hardware costs. It still doesn't solve the major problem - the need to reduce the number of instances of databases and applications. Repeating software is expensive and still a problem.

This brings me back to the “Huh?” look on my face. While it's nice to see Cisco, NetApp and VMWare working together to support a secure virtual environment, it doesn't solve the main problem of multi-tenancy. You can already virtualize the heck out of your environment to save on hardware costs. Great, but that's not what the people in multi-tenancy environments really need. They need to run one instance of their database and one instance of their application and be sure that any one customer can't see another's data. One application that can act like a dozen applications. They need virtualized applications.

These applications exist. I've designed and marketed a couple myself. The problem is that customers don't believe it. They feel that if data is in one place or accessed from the same application, then it is a hazardous environment. That's not true of course. Your bank is able to keep your records secure from other users even when accessed online. These applications can be built now. Virtualized hardware resources don't really impact that.

What the new triumvirate (or Axis of Evil depending on who you talk to) is developing is great stuff for hardware service providers wanting to sell virtual resources. It's good for IT departments looking to save on hardware costs through high utilization. It really doesn't solve the multi-tenancy problem any more than VMWare, NetApp, or Cisco products do alone. It's fundamentally an application software problem that needs to be solved by application software vendors. Multi-tenancy problems need to be solved by Oracle, IBM, and Microsoft.

Now that would be a mind blowing announcement.

Monday, January 11, 2010

What's On My Mind

A little career downtime can sometimes be a good thing. Whether your sabbatical is planned or, as in my case, unexpected, it represents a rare opportunity to delve into new areas of interest. Academics and clergy do this regularly as a way of expanding their skills, working on projects that they can never get to, or simply as a way of recharging their psychological batteries. This is not an extended vacation or time to simply relax. Career downtime has to be used to expand your horizons.

Consistent with my beliefs about downtime, I've been using my current “sabbatical” to embark on areas of discovery that I previously hadn't time for. As my Twitter followers know, some of my time was used to explore the biotech industry. On the more geeky side I've been going back to my software roots to look at things that have fascinated me for a long time. Namely, how to manage unusual or difficult data stores in different ways.

More precisely I've been looking into:

  • Managing large unstructured data sets, a constant problem in certain industries;

  • Applications driven by relationships between entities more than their structure; and

  • How to make smaller applications by embedding data management into them.

This journey of discovery has led me to a number of software technologies that I find very interesting. Let me share with you what's on my mind these days.

Managing Metadata

One of the major bugaboos of the last decade or so has been dealing with the explosion of unstructured data. The simple solution is to wrap metadata around the real data. This descriptive information adds the machine-manipulable context that unstructured information lacks. However, managing the metadata itself has become a big problem. Individual metadata is generally small and many data management tools, such as relational databases, are overkill. Implementing a full-blown SQL database to manage metadata is like hunting deer with a tank. Expensive and more than you need to get the job done.

XML has helped but managing XML text files is difficult when there are a lot of them. You might need to open and close lots of small files constantly, straining a lot of file systems. The other option is to process one giant XML file which can be processor intensive and slow. Worse yet, these are decisions you have to live with and are hard to change once an application is underway. XML is not ideal for dealing with relationships between entities either.
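For the one-giant-file case, streaming parsers at least keep memory flat. Here's a sketch using `iterparse` from Python's standard `xml.etree.ElementTree` module; the `<doc>` metadata format is invented for illustration. Elements are processed and discarded one at a time instead of building the whole tree, though it does nothing for the processing time or the relationship problem.

```python
import io
import xml.etree.ElementTree as ET

# A toy "giant" metadata file. In practice this would be a file on disk.
giant = io.BytesIO(b"""<metadata>
  <doc id="1"><title>Spec</title></doc>
  <doc id="2"><title>Design</title></doc>
</metadata>""")

titles = []
# iterparse yields elements as their end tags arrive -- no full tree in memory.
for event, elem in ET.iterparse(giant, events=("end",)):
    if elem.tag == "doc":
        titles.append(elem.findtext("title"))
        elem.clear()  # release the element once we've pulled what we need

print(titles)  # ['Spec', 'Design']
```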

Social Media Does Change Everything

Okay, I don't believe social media changes everything. What it does do is address the fact that humans are social creatures. We view the world as a series of relationships. Relationships between ourselves and the world around us, between each other, and between everything that makes up the world. The natural schema for a human is relationship based.

Computers don't always reflect that. They tend to be concerned with structures more than the quality of a relationship. This is one of the problems with SQL. It is damn hard to code reciprocal relationships, the strength of a relationship, and the ways entities may interact. A good DBA can tell you how to do it but it gets complicated quickly especially when modeling real human type relationships.

Small Applications That Can Grow

Enterprise applications have gotten huge. Worse yet, they require boatloads of infrastructure. This is why enterprise applications developers always talk in stacks. LAMP stack, WAMP stack, .Net stack. Developers can declare that their apps aren't really as big as they are because they assume that a stack is in place.

There are a lot of negatives to relying on these stacks. For one, you are at the mercy of whoever is designing the pieces of the stack. Applications also use different versions of the programs that make up these stacks, leading to compatibility problems. Not to mention finding an application that you love on a stack you don't support.

The biggest problem with stack-based enterprise applications is that they are not compact and simple. They don't port to small systems and devices easily. Try implementing a single user version of most enterprise apps. Who is going to install and maintain an Apache web server for one or two people? Cloud businesses like this since they provide an alternative but not all applications lend themselves to the cloud.

What Am I Looking At?

In thinking about these issues, I have come across technologies that address some or all of these problems. I especially like embedded data management tools, particularly Derby, SQLite, Lucene, and Neo. Derby and SQLite are open source or public domain RDBMSs that allow developers to embed a SQL database in an application. Derby has a server version as well, allowing applications to be small and compact or to scale up to large enterprise size. Derby is from the Apache Foundation and Java-based. This allows it to integrate nicely with Java applications and object mapping frameworks like Hibernate. SQLite is written in C, making it excellent for embedded applications, and is extensively used by the Mozilla Foundation. Being a Java geek, I'm planning on spending more time with Derby.
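The embedding idea is the same in both: the database engine runs inside your process, with no server to install or maintain. Since SQLite's engine also ships inside Python's standard library, the whole thing fits in a few lines (the table and contents here are made up for the sketch; Derby does the equivalent from Java via JDBC):

```python
import sqlite3

# The entire database engine lives in-process. Pass a file path instead
# of ":memory:" and the data persists between runs -- still no server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE note (body TEXT)")
db.execute("INSERT INTO note VALUES ('no server required')")
row = db.execute("SELECT body FROM note").fetchone()
print(row[0])  # no server required
```

That's the appeal for small applications: full SQL without asking anyone to stand up infrastructure.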

Another Apache project, Lucene, embeds a search engine in an application. With Lucene, a developer is able to manage large amounts of unstructured text using methods familiar to everyone. Lucene also works well with other types of data management tools to add search functionality to all kinds of data.
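Lucene itself is a Java library; this Python toy is not its API, just an illustration of the core trick a search engine embeds: an inverted index mapping each term to the documents that contain it, so search becomes a lookup rather than a scan.

```python
from collections import defaultdict

# Invented sample corpus: document id -> text.
docs = {1: "graph databases store relationships",
        2: "relational databases store tables"}

# Build the inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    # A query is just an index lookup -- no scanning of the documents.
    return sorted(index.get(term.lower(), set()))

print(search("databases"))      # [1, 2]
print(search("relationships"))  # [1]
```

Lucene layers tokenization, scoring, and persistence on top of this idea, but the lookup-not-scan principle is the heart of it.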

One of the other technologies that Lucene works well with is Neo. Neo is a graphing or network database (there is debate as to what the difference is between the two). Graphing databases view data a bit differently than an RDBMS. Data is stored as key-value pairs called properties in an interconnected network of nodes. Finding data is done through its relationships with other nodes. With Neo, information is stored and retrieved the way humans organize information, by its relationship to other data. This fits in well when modeling people or other entities that rely on interactions with others. Some examples are biological ontologies, proteins, and documents. At the moment I'm experimenting with Neo and document and content management.
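Neo is a Java library, so this Python sketch is the data model only, not Neo's API: nodes are bags of key-value properties, typed relationships connect them, and a query is a traversal. The property and relationship names are invented for the example.

```python
# Nodes: key-value properties. Relationships: (from_node, type, to_node).
nodes = {
    1: {"title": "Spec"},
    2: {"title": "Design"},
}
relationships = [
    (2, "REFERENCES", 1),
]

def related(node_id, rel_type):
    """Follow outgoing relationships of one type -- the graph-database
    equivalent of the cross-reference-table joins in an RDBMS."""
    return [nodes[to]["title"]
            for (frm, typ, to) in relationships
            if frm == node_id and typ == rel_type]

print(related(2, "REFERENCES"))  # ['Spec']
```

Adding a new kind of relationship is just a new tuple type, not a new table and another round of joins – which is why this model fits documents, ontologies, and social data so naturally.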

While there are a lot of things that stink about career downtime, if used effectively it can be a transformative experience. Discovery of this type almost always leads to something good. If nothing else, it helps us to grow as professionals and people. It's also better than sitting around watching television.

Thursday, January 07, 2010

Blessed Be The Makers

The current recession is a tough one for sure. Not only is it the deepest economic downturn since the Great Depression but the supposed recovery is looking to be a jobless one. In past recessions, lots of talented people were let loose on the marketplace with a few bucks of severance or buy out money in their pockets. Many of these creative people ran out and started companies. Others went and joined these new companies, which they would not have done in better times. Some of today's bellwether technology companies grew up in the midst of recession.

This is unlikely to happen this time around. With money so tight, funding a new business is a bigger challenge than it has been in the past. From banks to Venture Capitalists, the money is just not there and the requirements for funding are more stringent than ever before. The upshot of the lack of investment funding is that companies will need to fend for themselves much longer than they did in the past and not everyone has the stomach for that. Going without meaningful income while working like mad is hard to do. When you think you will do it for years, it can be downright disheartening.

There is a bright spot however. Bubbling up from the underground is a grassroots movement of people who like to build homegrown technology. Called Makers (and the related movement, Crafters), this is a DIY movement that celebrates homemade tech. Makers create electronic doo-dads from open source hardware like Arduino boards. They build funky mechanical devices. They blow stuff up and put it back together.

In my youth the computer industry was like this. Back in the day we were called Hackers until the term was co-opted by the bad guys. Many of these same hackers created software and hardware companies that still endure.

Like the Hackers before them, Makers do what they do for the sheer joy of it. They create devices to do interesting and sometimes silly things. Whereas launching a Christmas tree with rockets is kind of silly (and dangerous), other projects have real usefulness. For example, a cheap strobe algae bioreactor is serious stuff for biotech and alternative energy.

Makers know how to build product on a shoestring and have no wish for the pretensions of glitzy high tech companies. Instead, technology is reason enough itself. The simple fun of making something is what drives Makers.

Makers are also forming collectives to share resources and lab space. It is not hard to imagine these collectives turning into companies some day. Take a group of smart techie folks used to working with little money and stick them in one place. Before long you are bound to have a “Hey! I got an idea how we can make a few bucks” moment. There is some spill over into the software world too. Call it a resurgence of the old values. Groups of people with skills who are under or unemployed, writing code for giggles until one day – BAM! - the great idea emerges.

And these folks will have little use for the bankers who spurned them and caused so much economic misery. They will remember that they had to work at Best Buy because a bunch of greedy money people screwed things up. About the only people they will listen to will be the Angel investors because they've built something themselves. The Makers will drive hard bargains when they do take money since this is a labor of love not simply business. They will once again be an engine of growth in the technology market. Watch for it. It's already starting.

Blessed be the Makers for they will raise us out of the depths.



Tuesday, December 01, 2009

Mozilla Thunders Ahead

Warning: This is long. I'm in a gabby mood. But when you write about something so basic to everyday life as email, it's easy to get a bit verbose. As my friends will tell you, I find it easier than most...

One of the problems with modern software applications is that they tend to be incredibly feature laden. That's a problem you say? Yes it is. Feature overload leads to a great many features never being used because you don't know what to do with them, don't know they exist, or they are only useful to about 5% of the target market. Mozilla seems to have avoided that trap with the latest release of its fabulous email client, Thunderbird. Most features are infinitely useful to a great many people.

At first blush, things don't appear to have changed much. For the most part, Thunderbird looks and acts pretty much the same. For an email program, that's a good thing. Productivity applications that you use all the time should not have major interface changes. No one wants to spend a week learning how to do something that was fine before. Just ask the legions of people who positively hate the Office 2007 interface. It doesn't matter if it's better. It is radically different enough to get in the way of getting the job done.

Instead, useful features should be added that enhance the usual experience. This is exactly what Mozilla has done in all releases of Thunderbird. No jarring, radical changes to the user interface. Just enhancements that make things work a little bit better. Many of the UI changes are immediately recognizable since they are adapted from either Firefox or web-based email sites like Gmail. With email clients, usual and recognizable is what you want.

The GUI Got Better

For example, Thunderbird now supports tabs. A simple thing, putting tabs across the top, but really useful. Your calendar (assuming you have the Lightning extension, which of course you do because it only makes good sense) and tasks can live in their own tabs making navigation to them simple. Messages can also be opened in tabs allowing you to have multiple emails open in a neat space. No more having a dozen windows strewn all over your desktop. Everything is nice and neat.

In typical Mozilla fashion, you can turn off tabs and use Thunderbird in the old-fashioned way. This is important since it doesn't force a change in behavior. Users can choose to continue working the way they always have or ease in slowly. This is not a trivial matter when training budgets are under constant pressure. The ability to expose features slowly or only to power users is a great help.

Another useful GUI enhancement is the action buttons on the email itself. In the past (and in most email programs) when viewing email from the message pane, actions on an email such as Reply or Delete are initiated from a toolbar on the top of the window. While you can still do this in Thunderbird 3, you also have the most common action buttons right on the email message pane itself. This allows you to quickly review, read, and take action without your mouse flailing about like its rodent namesake stuck in a trap. You can choose a more minimalist toolbar at the top or keep the old one and the message pane buttons. It's the best of both worlds.

Organize, Search, See

The new T-Bird goes all out to bring better ways to find and view emails and RSS feeds. My favorite new feature is the summary list. If you select a group of email or RSS messages, a search engine type list is displayed in the message pane. It shows you the title and a snippet from the beginning of the message for each message selected. This gives you a Google-like view which helps you to skim through a big batch of messages.

This also works with the new global search capabilities. Searching for emails in earlier versions was a decidedly local affair. You could search through a folder from the search bar but had to go into the advanced search for anything else. Thunderbird now sports a global search bar similar to the Firefox one, including auto complete. It helps to search through the gobs of emails that pack rats like me accumulate. You can apply filters of various sorts after the fact, narrowing your results in much the same way as you would with an Internet search engine. This is a very powerful feature.

In Thunderbird 2, Mozilla introduced tags but they were typically underutilized. Most people still moved messages to complex folder structures. Tags allowed for better organization since you could dump messages into one folder and perform multi dimensional searches on them. I create virtual folders of saved search results that allow me to find messages based on a number of tags. Mozilla kicks it up a notch in this release by making it obvious what you are supposed to do. They have added an Archive button and matching folder. Now, when you want to save a message, you hit archive and it puts it in a folder based on the year. Combined with tags and the new search, looking through dozens of layers of folders is instantly as old-fashioned as a rotary phone.

It's Like Having A Big Brother To Look Up To

A lot of great ideas besides tabs and search features have migrated over from Firefox. My two favorites are Weave and Personas. Weave synchronizes information between different instances of Firefox and now Thunderbird. If you have multiple computers, say a desktop and a netbook (or are like me and have more than two), this is a valuable feature indeed. Though there have been a number of extensions that do this sort of thing, it is much better as a Mozilla project that gets updated regularly. I wasn't able to get it to work in Thunderbird 3 RC1, but if it works like it does in Firefox, I can't wait. My hope is that someday it becomes a core feature and not an extension.

Personas is also a neat feature from Firefox. It provides a way of skinning the GUI without going all out and writing XML and designing buttons. Pretty much anyone with the ability to create a JPEG can do this. Personas are kept in an online repository making it easy to share and change them. I think this signals the death knell for themes. Personas are more lightweight and portable. And now my browser and email can look the same. Sweeeeet!

Changes Under the Hood

There are also a number of changes to the core code. As with the Firefox 3.0 upgrade, the memory footprint for Thunderbird has shrunk a bit. This is very good when you are dealing with low-memory devices like a netbook or an old PC. Or an old PC used as a netbook...

A lot of effort also went into IMAP improvements. For many Thunderbird users, that's not that important since they get their email from a POP server. More and more ISPs, however, are moving toward IMAP because it allows for better synchronization amongst different email clients on different machines. Gmail has an IMAP option and AOL requires it. It is also the best way (at the moment) for Thunderbird to interact with an Exchange server.

One somewhat geeky new feature that I'm not sure I like is the Activity Manager. It keeps a log of all the things you did in Thunderbird. On the one hand, I can see its potential for debugging and answering the question “Oh no! Did I delete that email? The one with the time for my job interview?” On the other hand, there is also the potential for eDiscovery problems, since it can explicitly show that someone suddenly nuked 25 emails when there was a preservation order. Sometimes metadata and logging are not wanted.

And Yet All Is Not Perfect

There are a number of strange, ugly, and just plain wrong things about this new release. Hey! Nobody is perfect and Mozilla proves that in spades. First, the elephant in the room – no Microsoft Exchange support. I get that Mozilla and Microsoft don't get along. I also get that Mozilla may think they are not that interested in the big, bad corporate market (though I don't believe that for a second). But Exchange is so ubiquitous that you have to wonder why, after all this time, there is no support for it. Heck, my ISP offers it for five bucks a month! If Microsoft is the problem then they should remember that the real enemies are Google and Oracle and get over it. If Mozilla is the problem then they need to remember that email is serious business and get over it. In any event, when anyone puts together a list of why Thunderbird is not a real email contender, Exchange support is at the top of the list. They need to add it just to shut those people up.

Oddities abound, especially in the GUI. Some are inconsistencies that had to have come up during testing. For example, there is now an Outbox. Unsent emails used to sit in the Drafts folder. Perhaps this is another way to support offline work but it needlessly confuses the process of sending emails.

And why when you compose an email does it still open in a separate window? Other email messages open in a tab. Same goes for the address book. Inconsistencies like that confuse regular users and annoy the power users. Maybe that gets fixed in a later release.

Speaking of unusual behavior, why does the reply button on the message pane have a little selection arrow with only one choice in it, while the Reply All button's arrow offers both Reply All and Reply? A bit redundant, isn't it? What I do like is that the Reply All button only shows up when there is more than one person to reply to. Nice touch.

Finally, while the search features are so much better than before, the page generated to show the results is ugly as sin. We are talking about a page that looks like an amateur web site from 1994. Lots of functionality but no aesthetics.

Thunderbird 3 is still a release candidate but is really close to production grade. The GUI enhancements and search features make it a worthwhile upgrade. There are still a few unusual issues but those might be ironed out over time or someone will come up with extensions to deal with them. The enhancements are great and the complaints small. My kind of software!

Disclaimer: Like everyone else, I get Thunderbird for free. So while technically not a paid endorsement, it's best to mention it anyway. I don't want the FTC giving me grief. And it gives me an excuse to be silly.

Thursday, November 12, 2009

Sudo You?

Whenever I write about patents, trademarks, and copyrights, I'm always careful to state two things up front. First, I'm not a lawyer. It is quite possible that I am missing some part of the law that makes my opinion invalid. I try to understand the technical underpinnings of the patents and see what they mean to the computer industry and the economy at large. I'm not trying to be an attorney.

Second, I am completely in favor of intellectual property protection. I am not one of those folks who believes that patents are evil and that all software should be open source. The fact of the matter is, intellectual property protections provide motive to continue to innovate. They protect the small inventor from having their life's work pulled out from under them by a deep pocketed company. Same goes for copyright and trademark protections. History shows that if people can't benefit from their work, they'll do something else and we all lose out on the richness of life.

Patents are monopolies granted to an inventor in exchange for adding to the useful knowledge of the world. Without them, the world would be full of virtually permanent monopolies as inventors strive to keep inventions secret rather than disclose them. It is also well understood that without the time-based monopoly, many inventors would not recoup their investment in innovation and wouldn't bother inventing in the first place. When it costs nearly US$800 million to develop a new drug, patent monopolies are the only way to recoup the costs and make a profit.

That's why I took it with a grain of salt when I saw the initial commentary on the new Microsoft patent (number 7,617,530), issued November 10, 2009 and originally filed in April of 2005. You see, there are a lot of people who hate the idea of software patents. In their eyes, no software patent is valid. I'm still not sure where I stand on software patents, nor am I a Microsoft hater, so I tried to turn a critical eye to the patent. Once I read it (and read it and read it and read it...) I came to a very firm conclusion: What was the USPTO thinking? This is so obviously wrong that I can't imagine how it got through.

The patent is entitled “Rights Elevator”. It describes “systems and/or methods” to allow a computer user to elevate their rights from a lower, standard user account to higher-level administrative rights. If this sounds familiar, that's because it is. It has existed in UNIX systems since at least the 1970s. In UNIX and Linux we use a command called su (switch user, though most people read it as superuser) to obtain the rights of another user with higher-level rights. There is also a companion command called sudo, which runs a single command or program with elevated rights.

All patents have to pass certain tests before they are granted by the US Patent Office. For a patent to be granted, the claims must describe an invention that is novel (new), useful, and non-obvious to practitioners of the inventor's art. These tests are important. Without them, a lot of inventions that should not be patented would be, and that would, in turn, inhibit innovation.

It is hard to argue that this patent passes more than one of the three tests. I agree that it is useful. The ability to briefly elevate rights to install software or copy files to a restricted directory has proven to be a good method of balancing security with the need to perform certain important functions. I have a Unix book from the late 1980s that tells a sysadmin how to do exactly that using existing UNIX commands like su. Therein lies the problem: it has already been proven to work because it already exists. That kind of kills the novelty of the invention, doesn't it? The patent argues that remembering a user name and a password is too difficult, and that part of what makes the invention innovative is not having to remember a user name. That's a load of hooey. How hard is root to remember?
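For readers who have never touched a Unix shell, the two commands look something like this (an illustrative sketch; the prompts and password handling vary by system):

```
# su (switch user) starts a shell as another user; with no name given,
# it defaults to root -- so there is no account name to remember.
$ su -
Password:            # root's password
# whoami
root

# sudo runs one command with elevated rights, then drops you back to
# your own account; it asks for your own password, not root's.
$ sudo cp app.conf /etc/
```

Both have been standard system administration practice for decades, which is exactly the point.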

Let's be nice, though. The Microsoft patent also includes a component to select a higher-level account with a GUI and ask for a password. You have to admit, given the prevalence of graphical operating systems today, that seems like an obvious addition. Wait! Did I say obvious? That seems to run afoul of the non-obvious test. In fact, this is something that most Linux distributions do today. The patent even references Ubuntu, Debian, and Red Hat in its prior art list. How is this method any different from what is already done in Linux and Unix systems?

To be fair, this method of elevating rights temporarily with a graphical interface may not have been used when the patent was filed in 2005. I don't think that's true but I'll give the USPTO the benefit of the doubt. The method outlined in the patent, however, doesn't move far beyond sudo (which is also referenced in the prior art listing). Certainly not far enough to claim novelty and non-obviousness. It doesn't take an expert in software and operating systems to see that, never mind someone practiced in the art of system administration.

This method is so ubiquitous that everyone does this. Everyone except Microsoft, that is. Windows, in all its forms, has always required you to either have administrator rights or log in as someone with those rights, when that was possible at all. The Windows Vista UAC allowed you to override built-in restrictions, not elevate your rights temporarily. The UAC never even stopped you from doing something; it just nagged you that it was bad. Windows 7 finally catches up with the rest of the world, and Microsoft is trying to patent it. Talk about making lemonade from lemons.

How did this one get by? There could be a lot of reasons, including overworked patent examiners. The patent should be overturned and likely (hopefully) will be. In the meantime, the patent office needs to do something. Maybe independent review panels of practitioners who don't work for a vendor. I'm not sure. All that patents like this do is throw fuel on the fire for people who want to eliminate patents, especially software patents.

This one should never have been granted. But then, I'm no lawyer nor do I play one on TV.

Thursday, October 22, 2009

eBox Shebop

Back in the day, Linux was mostly a geek toy. You had to compile the kernel from source and install all the applications including the GUI by hand. Even by Windows 3.1 standards, it was very technical and primitive. In those days, Linux's best attributes were that it was free and basically UNIX. A lot has changed since then. Linux has become a viable UNIX replacement in servers, helping to fuel the rise of a great many Internet companies. It has also tried, with limited success, to become a desktop operating system and rival to Windows and Mac OS X.

One of the biggest holdups to widespread adoption of Linux has been installation and configuration of software applications. Linux distros seem to subscribe to the philosophy that real men hand edit configuration files. It's the command line that separates the men from the boys. Linux is like a techie version of a sports car. It's about proving something. I just won't say what that something is. Package managers have done a lot to streamline installation but configuration has always been a black art. There are entire books written to help trained system administrators tackle SAMBA configuration. Not to mention every other major Linux package.
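For a taste of what that hand editing looks like, here is roughly the sort of stanza you add to smb.conf just to share one directory (a minimal sketch; a real server needs global settings, authentication, and a lot more):

```
[public]
    comment = Shared files
    path = /srv/samba/public
    browseable = yes
    read only = no
    guest ok = yes
```

Now multiply that by every service on the box and you see why the books get written.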

This might be fine for the hard-core sysadmin. It makes them feel superior to the rest of the morons out there. It doesn't work, however, for the vast majority of people milk-fed on Windows installation and GUI-based configuration. Even when there are decent configuration tools (which often, in an awesome display of irony, have to be installed and configured separately) and package managers, everything is piecemeal. Setting up a user on a box requires configuring many different applications using different tools, some only available from the command line. All of this has been holding back adoption of Linux as a commercially viable alternative to the Windows hegemony.

The good news is that this is changing. A fairly new distribution called eBox has solved many of the problems that have plagued Linux server installation and maintenance. Perhaps it's fair to say that eBox is a mega-distro. It is based on Ubuntu Server, which is itself Debian based. What sets it apart is the comprehensive web-based management tools. They allow single screen configuration for many typical tasks that a sys admin faces. For example, you can set up a user account along with associated file shares, email accounts, and groupware configuration all from one place. This is even better than Windows which still relies on wizards walking you through the process.

Think of it this way. Old Linux is like shopping on a busy city street. You have to walk and walk to lots of individual stores to get what you want. Windows Server is like a department store. Everything is in one place but you still have to go from department to department. eBox is like a personal shopper. Everything comes to you.

One of the best features of eBox is the initial package installation. It groups packages into functions like networking, security, communications, office (basically file and print services), and infrastructure. This makes it easy to configure a server for specific purposes such as an office file server or a network gateway. The documentation clearly shows where to place the different types of servers in your network to get maximum safety and effect.

eBox is not perfect by any means. Installation (on a virtual machine, I admit) was difficult. Not difficult in the sense of hard to do, since it walked me through every step. Difficult because it hung a bunch of times. The root password is not obvious: I wasn't asked for one, and it isn't the same as the initial admin account's password, as is typical of most Linux installations. That severely restricts what additional software I can install on the server.

While the selection of packages is good, there is no database server package. I know that PostgreSQL is installed, but configuration for it is not included in the web-based configuration system. Application server, database server, and developer packages would be a good idea. Virtualization packages would also be nice in the future. Maybe a “cloud” package, although I suspect that way madness lies.

eBox is an important step forward in making Linux a viable alternative for enterprises of all types, but especially for the small-to-medium business market. With limited or no IT resources, these organizations need easy setup, configuration, and management. That was hard to deliver using Linux before eBox.

There is one truly unfortunate aspect of eBox: its name. It shares said name with a small PC product from Taiwan. That's bound to cause confusion. I suggest changing it as soon as possible.

Disclosure: The eBox software was provided for free. Of course, it's provided to anyone for free. Just download it from their web site. So, I guess this doesn't really count but why mess with the FTC.

Monday, September 14, 2009

Free Software Tools for Geeks

Everyone knows that techies have different needs when it comes to software. Come on. Admit it! For the average person a slow network is a mystery and an annoyance. For us, it's a project and all projects need tools.

Over the years, I have accumulated many software tools that can be had for free. Some are open source tools supported by a vast community of developers. Others are a hobbyist's pride, given freely for all to enjoy. Some are particularly useful.

So here are my top software tools for the serious computer geek, with commentary. Did you expect any less?

PuTTY

If you don't occasionally need a terminal session then you have no right to consider yourself a computer geek. The command line is what separates the real deal from the poser. PuTTY is an excellent terminal client with support for SSH, Telnet, Rlogin, and even a serial connection.

Pros

Git 'er done! PuTTY gets you a command line to almost anything. It emulates the most common types of terminal (ah! The VT100. You never forget your first) and has a bazillion options to tweak your terminal session. Mostly, the default settings are all you need.

Cons

PuTTY only handles one session at a time. You have to load multiple instances of the software to talk to multiple systems. Even more annoying, when you shut down a session, the whole program shuts down and you have to reload it to talk to another box. Still, these are annoyances, not major flaws.
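One partial workaround, for scripted work at least, is plink, the command-line connection tool that ships alongside PuTTY. Something like this runs a single remote command and exits (the host and user here are placeholders):

```
plink -ssh admin@server.example.com uptime
```

It won't replace the GUI for interactive work, but it saves relaunching the whole program for quick one-off commands.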

KeePass

There are lots of websites to visit, servers to manage, and PCs at home and in the office. All have passwords to manage and it's a pain. If you use the same password over and over you know that's a security risk. Besides, the user name and password requirements vary from site to site and box to box. Keeping an unencrypted file or database of passwords on your computer is inviting disaster.

Enter KeePass. It manages an encrypted database of information about your logins. Besides storing user names and passwords, KeePass also has search and organization capabilities.

Pros

Does the basic job of storing user names and passwords extremely well. Search is fast and accurate. The password generator is also useful when you want to create secure but different passwords.
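KeePass's generator does the work for you, but the underlying idea is the same as the old command-line trick of filtering random bytes (a sketch, assuming a Unix-like system with /dev/urandom):

```shell
# Keep only password-safe characters from the random stream,
# then stop after 20 of them
tr -dc 'A-Za-z0-9_@%+=' < /dev/urandom | head -c 20; echo
```

Every run produces a different, hard-to-guess string, which is exactly what you want for per-site passwords.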

Cons

The interface is a bit dull but, really, it's not a game after all.

MailStore Home

Unlike most people, techies tend to generate and receive a lot of emails. No matter how good your email system, sooner or later your system bogs down if you don't archive them. Or you bog down trying to find needles in your email haystack.

MailStore Home is the little brother, freeware version of a commercial package. It does a credible job of archiving and indexing emails. I find it useful for archiving emails to a USB drive which acts as a backup. I can then delete emails from my email client with confidence.

Pros

MailStore Home interfaces with most of the major clients such as Outlook and Thunderbird, but can also archive from a server, including Exchange, POP3, and IMAP mailboxes, and webmail systems such as Gmail. It's also pretty fast, even for an average techie, which means an above-average email user.

Cons

It can copy but not move emails. That's great if you want to back up your email but not so great if you want to truly archive it. Instead, you have to remember to go back to your email client or server and delete emails manually.

Sun VirtualBox

In the world of virtualization, VMware has the mind share and Microsoft's Virtual PC comes bundled with their servers. Sun's VirtualBox is not as well known, which is too bad. Its best feature is that it is really easy to use. You can run most anything you want with minimal effort. It's free for individual use, which makes it a great choice for home or a hobbyist. Would I run a data center cloud on it? Probably not. But for testing, developing, or just plain goofing around, it's so much easier to use.

Pros

Easy to use. You don't need a four week course to start using it. It does a very credible job of creating virtual servers or desktops.

Cons

Configuring inbound network access, such as an HTTP server, is not intuitive making VirtualBox more useful for virtual desktops or sandboxes. I still can't get FreeNAS to work right because of server access problems.
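For what it's worth, inbound access under VirtualBox's default NAT networking generally requires an explicit port-forwarding rule. In newer VirtualBox releases that looks roughly like this (a sketch: the VM name and ports are placeholders, and older releases used a clumsier VBoxManage setextradata incantation):

```
# Forward host port 8080 to guest port 80 on the VM's NAT interface
VBoxManage modifyvm "FreeNAS" --natpf1 "http,tcp,,8080,,80"
```

The simpler fix is often to switch the VM to bridged networking so it gets its own address on the LAN, but that isn't always an option.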

The list of freebie tools is much longer than this. This is but a sample of tools for the techie. Also in the mix are 7-Zip, an excellent archiver, and FileZilla, a classic and profoundly useful FTP client. For the software developer, I also recommend Sun's NetBeans. It's a full-blown, commercial-quality IDE that is especially good for Java development but has decent support for PHP, Python, C/C++, and many other languages.

A lot of these tools are much better than the stuff you pay for. Some of them you do have to pay for if you want to use them in a commercial setting. It's always a good idea to check the license. And, if you are managing a large commercial environment, many of the tools won't provide the scope of features and services that you need. However, for the hobbyist, individual, or SOHO environment, these tools can't be beat. They give you what you need for a great price – free!

Thursday, August 27, 2009

Cloudy Skies This Week

Recent blog posts and comments I made on Twitter might give some people the impression that I'm against cloud computing. I bet I've given some people the impression that I hate cloud computing. Despise it! Want to see it die! Nothing could be further from the truth. I love the idea of cloud computing. It's the cloud computing marketing that I take issue with.

Overall, what's not to like about the cloud idea? The promise of cloud computing (notice I say promise, not reality) is the ability to only buy what you need with the option to buy more later if you want to. In that respect, it deals with one of the key problems in computing: coarse granularity in systems. If I need 10 percent of a server, I might have to buy a whole server. Someday I might need that whole server but not right at the moment. Then again, maybe never. We have wonderful terms for buying more than you need such as underutilization. The best term is “a waste of money”. So, buying only what I need when I need it is a great way to manage my budget. Same goes for software. I no longer have to buy a software package designed for fifty people for just three people to use. It's efficient and cost effective. It also makes it easier to quantify the cost of running an application.

Cloud computing is also evolution, not revolution. We have been doing limited-purpose cloud computing for years. It's called web hosting. And email hosting. Oh, and application hosting. Do I notice when my hosting provider adds new resources in order to add more customers? Not really. I pay ten bucks, get a chunk of resources adequate for running my simple web site, and that's how I like it.

So what's not to like? Well a couple of things really. Security of a cloud is no better than security in a non-cloud data center. You still have the problems of internal espionage, external break-ins, and other Dick Tracy stuff.

There is also a migration problem. When the day comes that your application needs to move to a dedicated system (don't kid yourself – it will happen), you might have a heck of a time moving it. Unlike moving up to a bigger piece of iron, applications may have to be rebuilt to live in a different type of environment. In that way, I suppose, it is different. It's worse... and nobody wants that.

This is especially true of clouds built around service frameworks like Amazon's. At some point the application might get big enough that it makes sense to bring it in house. Worse yet, you could find yourself dissatisfied with the service provider (like that never happens!) and forced into an acrimonious divorce. This is an especially nasty problem because they have you by the data stores if you get my meaning.

These are not reasons to forgo the cloud. They are reasons to be careful. Figure these issues out ahead of time and make good choices up front. And ignore the hype. If someone slaps “cloud” on something that seems not so cloudy, be suspicious.

Remember, cloud computing is a strategy and maybe an architecture. It's not a product no matter how many times the corporate talking head says so.