I don't think that many people would dispute the success or value of the Wiki. For those of you who have been living in a cave for the past two years, a Wiki is a collaborative document system, environment, or philosophy, depending on who you talk to. While the most famous is Wikipedia, the online, user-written encyclopedia, there are zillions of Wikis on the Internet. Most are information repositories of some sort, especially on-line documentation.
The advantage of the Wiki is that it is very easy to write and edit entries. Formatting those entries, while a bit unconventional, is also pretty easy. This yields highly attractive, easy-to-navigate online documents.
Deploying a Wiki application, such as MediaWiki, the software behind Wikipedia, is also incredibly easy, assuming you have a working LAMP/WAMP stack. I've worked with all types of collaborative systems, including Lotus Notes and a number of knowledgebases, and few are as easy to use as MediaWiki and none are as easy to install.
Some time ago, I installed and started using a Wiki for tracking storage companies, my bread and butter. That in many ways is a conventional use of a Wiki. Think of it as building an encyclopedia of storage companies. Nothing Earth shattering there.
I also started using my Wiki for development projects. I build my own applications from time to time to provide myself with something useful, to try out technology that people are buzzing about, and simply to amuse myself. On my latest project I used a Wiki to help me design the application and capture my decisions along the way. What I found was a product manager's dream application.
Anyone who has had to build product specs, PRDs, etc. knows what a pain they can be. They are dynamic, require input from lots of often unwilling people, and whole sections usually need to be written by specialists. In other words, they require collaboration, the stock-in-trade of a Wiki. The same goes for the design documents that engineers use.
As time has gone on, I have been able to add and expand sections, eliminate others, and make changes as needed. All without losing the thoughts behind them. As is typical with web applications, you can link to other documents or sections of documents with ease, including outside ones. If there were engineers writing the technical design elements, they could simply write them in. If a change occurred to a feature requirement, everyone would know immediately.
Contrast that with the way this is usually done. Dozens of Word documents tossed at the product manager, who has to somehow integrate them all without getting something terribly wrong. Playing cut and paste with e-mails and other documents, trying to pull them together into a unified whole but never having a sense of the entire project. Gack! Makes me choke just thinking about it.
Heck, by the end of this, I'll almost have a design manual on how to write this type of application, useful to anyone who would want to do the same in the future. Development and knowledge capture all at once.
This is the killer app for product managers. Using a Wiki, you could pull together requirements documents in half the time and tie them back to the design documents and anything else associated with the product. Tech support would love what comes out of this if QA used it. And it costs practically nothing to deploy. The software is free, as is most of the infrastructure.
Free and transformative. What could be better?
Thursday, December 07, 2006
Same Engine, Different Body
For various reasons I have been writing software again. It's mostly hobbyist stuff or small, useful applications. My TagCloud software has been incredibly useful and was fun to work with. I used to write and design software for a living and it was not nearly as much fun (that's why it's called work folks!).
So, my latest project is a rewrite of an old travel planning tool that I built in Access years ago. Keeping with my current fascination with modern web-based development, I decided to implement this application as a Java servlet within a Tomcat container. I've been writing Java code since it was new and like many aspects of the language and virtual machine architecture.
One of the big advantages to OO programming is the ability to hack together pre-written chunks of code to build applications quickly. Java makes it very easy to dump in different frameworks for important functions. Need to interface with a database? JDBC makes this easy. Make mine a web server application please. It's easy with the servlet interface.
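To show how little plumbing that takes, here is a bare-bones sketch of a servlet that reads rows over JDBC and emits HTML. It is purely illustrative: the class name, the jdbc:mysql URL, the credentials, and the trips table are all invented for the example (and a JDBC driver is assumed to be on the classpath); this is the pattern, not my actual travel planner.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A hypothetical servlet that lists trips from a database table.
public class TripListServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body><h1>Planned Trips</h1><ul>");

        // JDBC boilerplate: open a connection, run a query, walk the results.
        // The URL, credentials, and table name are made up for illustration.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost/travel", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT destination, depart_date FROM trips")) {
            while (rs.next()) {
                out.println("<li>" + rs.getString("destination")
                        + " on " + rs.getDate("depart_date") + "</li>");
            }
        } catch (java.sql.SQLException e) {
            out.println("<li>Could not read trips: " + e.getMessage() + "</li>");
        }
        out.println("</ul></body></html>");
    }
}
```

Drop something like that into a Tomcat webapp, map it in web.xml, and you have a working page backed by a database. That ease is exactly the draw.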
This advantage also leads to Java's biggest problem: framework and component proliferation. This is most evident when you are building web interfaces. There are all types of competing frameworks and it's easy to get confused. Should I generate the interface as pure HTML/JavaScript from within the servlet? How about using Ruby on Rails, though that means learning yet another language. JavaServer Pages (JSP) might handle some of the static portions of the interface. Is that enough? Should I use a web UI framework? Okay, which one? Struts or JavaServer Faces (JSF)?
It makes my head hurt.
This is the dark side of component and object technology. The things you need to know become so great that you need armies of specialists to write even a simple application. If you don't think this is out of control, consider this: Nine years ago, everything I needed to know about Java could be found in the two O'Reilly books "Java in a Nutshell, 2nd Edition" and "Java Examples in a Nutshell". At that time they had 11 books altogether about Java, but many rehashed the same subjects. The "Java in a Nutshell" book was only 610 pages long.
Contrast that with what is available today. The O'Reilly web site lists over 125 books on the subject (I got tired of counting) and the 5th edition of "Java in a Nutshell" is listed at 1252 pages, more than twice the length of the 2nd edition. In 1997, there was one UI choice, the AWT. Now there is still AWT but also Swing, Spring (which includes a bunch of other components), the aforementioned JSP, JSF, and Struts, and the Eclipse SWT. Or maybe I could use Ruby on Rails to generate the interface or even JavaScript on Rails. I don't even want to get into all the other extensions, component libraries and frameworks available (J2EE, JavaBeans, AspectJ, JXTA, god knows what else).
Now if you think this is simply the ravings of a weekend programmer who can't make a decision, consider the poor system architect who has to design a major enterprise application. Not only do they have to evaluate and choose what parts to use but those choices will drive specific costs. Training, books, and time will cost money. Just keeping up with versions and updates could occupy someone full-time. It's job security I suppose.
To be blunt, this is too much of a good thing. Granted, no one wants to have to write all of these parts from scratch. Talk about serious time issues. However, we could do without so much duplication. Do we need Struts and JSF? I'm sure that someone out there is yelling "Yes! By gum we do." and will have a dozen reasons why. Most of those reasons will turn on some minor technical or philosophical issues that matter only to a handful of people. I'm sure that some will also argue that it's about choice and choice is good. Maybe, but it's also confusing.
I don't offer a solution here. This probably won't happen with .NET; Microsoft keeps too tight a control on its technology for that. It might also help if the Java community would at least line up the competing technologies and pick just one in each group.
In the meantime, I will, like everyone else, make my choices and hope they are the best ones.
Tuesday, November 28, 2006
Another One Bites The Dust
The CDP consolidation continues! It was announced today that Symantec, who acquired Veritas some time back, has acquired the assets and intellectual property of Revivio. This is totally predictable. Symantec/Veritas was one of the few major backup software vendors without a decent CDP feature and Revivio was struggling to stay alive. Revivio's investors get something, Symantec gets a key feature.
This is all part of the ongoing absorption of VTL and CDP players into the portfolios of larger technology companies. Just recently we saw Avamar picked up by EMC and XOSoft plucked by CA. This absorption was inevitable. CDP and VTL are both features, not products. They do not change the data protection model so much as extend it and give it a new performance envelope. They are in no way disruptive in the way SANs were to storage and data protection.
There are now precious few independent CDP companies left. TimeSpring comes to mind, plus a few others. How long can a CDP vendor stay independent? With this move by Symantec, most of the major data protection companies now have this feature. NetApp might go for an acquisition here. They have the NearStore VTL software but not true CDP. HDS is more likely to source than buy CDP. The field is getting very narrow indeed.
Of course, CDP vendors may opt to stay the course and not sell out. That's fine if you enjoy getting stomped by one of the big boys. Most data protection vendors can now offer a full spectrum of solutions, hardware and software. Independents are left to pick up the scraps. Despite the customers they already have, time is running out on these companies. There will be fewer and fewer folks ready to buy from a small company when they can get it as an add-on or feature of their backup software. Eventually the big boys will start pricing this software to move until it finally is a standard feature. Then what?
My advice to those still trying to swim against the tide is to get out of the ocean while you can. Otherwise, there will be nowhere left to swim to.
Monday, November 20, 2006
Another One for the AOL DUH List
It is absolutely incredible how much AOL has lost its way. In times gone by, AOL single-handedly made "on-line" cool with their flagship service. Another component of AOL, Netscape, practically defined the current Internet experience. Yet look at them now. The ISP part of the business is going away. No more "You've got mail!" followed by that screechy modem sound. Netscape went from being the premier (and coolest) browser and one of the best Web servers to being eaten alive by open source products. Both have been transformed into lame on-line web sites whose only real value is in their storied names. It's like having a skateboard from Lincoln-Mercury. Not the same, and a sad reminder of what were once a couple of great companies.
Business gurus and high tech pundits will analyze this one to death for years to come. Was it the misguided merger with Time-Warner? Was it Steve Case's ego? Maybe both. However, one factor I'm sure of - AOL forgot what it means to be cool. As proof I offer you the latest atrocity from AOL. None other than the AOL Toolbar for Firefox.
It's kind of ironic that AOL has a toolbar for Firefox. It's a sad admission that they really don't do technology anymore. It's also not the toolbar itself that is the problem. It's just a toolbar. Use it or don't. Like it or not. What really shows the less-than-happy place that AOL lives in right now is the entry for it at the Firefox Extensions download site. Now, it is customary to put up a paragraph or two to describe what the extension does. That's just polite. Besides, if people have to guess what it does, they won't use it.
AOL, on the other hand, was not happy with a simple description. Nope. Not them. Instead they have (and I'm not making this up) a 4,195-word, 7-page entry. Now, you may well ask yourself "What the heck could make a simple toolbar extension entry this long?" An End User License Agreement, commonly known as a EULA, that's what. I'm not making this up either. It is a EULA with 22 sections, not including the introductory paragraph. What could be more off-putting than that? Maybe the warnings that Canadians put on cigarette packages, but then the Canadian government doesn't want people to use cigarettes. Presumably, AOL made this toolbar so that people would use it and want to go to their site, making their advertisers happy. So what's the thinking here? Did someone say "If we put a huge batch of confusing legalese in the entry, people will love it"? Or "The computer geeks that love Firefox can't wait to read 7 pages of legal documentation!"? I can't imagine being at the meeting where this was proposed and not laughing out loud.
What this shows is how little cool is left at AOL. I would also have to question their product strategy if this is part of it. Some might argue that software, even consumer software, is no longer about cool. It's all about brand and fashion, like sweaters. Maybe I'm old school but I don't think so. Look at Google. They exude "cool" from their very pores and have a massive market valuation to show for it. That's why people still use their sites, despite all the ads, and why "to google" has become a verb.
I think that if AOL becomes a verb it will be more like "What an AOL you are!" Or even better "EULA AOL!" Not too cool.
Friday, November 10, 2006
Mozilla Firefox 2.0 Update
Well, I finally got Firefox 2.0 to work. It required an extension (Tab Mix Plus) that allows you to control the behavior of tabs and links better. I am mystified as to why this isn't a core function. It's not like this is new. There have been extensions for this since the browser was first released and tab management extensions are among the most popular. By leaving it to extensions, there is always some lag between when a new version is released and the functionality becomes available again.
I was also able to change the security settings. When I first installed the software, it didn't want to recognize a previous profile for my local web server. Now it does. I don't know why that is, just that it is. That's a bit disconcerting. Things should only change when you make them change, not on their own. Change is good. Unpredictable change is not.
So far, I can say that I don't see many major improvements in Firefox. Certainly not enough to warrant wallowing in upgrade hell. It's a bit quicker and appears to have a lower memory footprint. The tab and session management options are okay, though not as important as the ones the extensions provide. I would argue that Mozilla has it backward in this regard. Tab sorting and session saving could be left to extensions while link behavior configuration made a core function.
The other big feature is the anti-phishing capabilities. I can't say I've noticed anything. Perhaps I've been around the block enough times to not be fooled so I don't run into the problem. I'm guessing it has more to do with not using webmail very often. Whatever. It does not add to my quality of life. If this was in Thunderbird, then you would have something.
Probably the best part of the upgrade is that it forced extension makers to upgrade their extensions and fix bugs. A bunch of non-critical yet annoying bugs have been fixed in add-ons. Perhaps these are really manifestations of the "under the cover" changes that pundits and Mozilla supporters keep crowing about. However it came about, I don't care. Stuff works better now and that's what matters.
So, once again, I love Firefox. I don't think this was much for a major release. It's still the best browser and IE 7.0 is still playing catch up.
Time May Change Me, But I Can't Trace Time
The title of this entry is from the David Bowie song "Changes" off the album "Hunky Dory". While I know he didn't have computer data in mind when he sang these words, it sure applies. How do we trace time? Can we track changes in data and adjust resources based on change? The answer lies in another quote from Maestro Bowie "It Ain't Easy".
I've been researching a paper around the idea of just-in-time data protection. It pulls together some concepts that I have been batting around for quite some time, including the idea of Service-Oriented Architectures and dynamic data protection. I've written about both of these topics before. In looking at how to make the data protection environment more responsive, I started to realize that resource allocation needs to be adjusted according to how quickly the data is changing. The rate of change of the data should then drive one of the major data protection metrics, the Recovery Point Objective or RPO. RPO basically says how far back in time you are committed to recover some piece of data. My thinking goes like this: Why spend money to provide high-performance backup to data that isn't changing? Conversely, rapidly changing data justifies a short RPO and more resources.
As I went about talking with vendors and IT people I quickly discovered that there was no good and easy way to determine the rate of change of data. We simply don't track data that way. There are some indirect measures, especially the rate of growth of disk storage. For homogeneous data stores, such as a single database on a disk, this works well, assuming your unit of measure is the entire database. It doesn't work well at all for unstructured data, especially file data. We might be able to look at how much space is used in a volume but that's a pretty gross measure. Databases have some tools to show how fast data is changing but that does not translate to the disk block level and does nothing for file systems.
What we need to understand is how often individual files are changing and then adjust their data protection accordingly. If a file is changing an awful lot, then it might justify a very short RPO. If it's not changing at all, perhaps we don't need to back it up at all so long as a version exists in an archive. In other words, we need to assign resources that match the metrics and rate of change affects the metrics. This is complicated because how often data changes is variable. It might follow along with a predictable lifecycle but then again, it might be more variable than that. The only way to know is to actually measure the rate of change of data, especially file data.
The simple solution is a system that tracks changes in the file system and calculates the rate of change for individual files. This information would then be used to calculate an appropriate RPO and assign data protection resources that meet the metrics. The best system would do this on the fly and dynamically reassign data protection resources. A system like this would be cost effective while providing high levels of data protection.
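To make the idea concrete, here is a toy sketch of the bookkeeping such a system would need: count how often each file changes over some window and map that rate to an RPO tier. The class, the thresholds, and the tier boundaries are my own assumptions for illustration, not anyone's product.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: track per-file change counts and suggest an RPO tier.
public class RpoPlanner {

    private final Map<String, Integer> changeCounts = new HashMap<String, Integer>();
    private final long windowHours; // size of the observation window

    public RpoPlanner(long windowHours) {
        this.windowHours = windowHours;
    }

    // Called by some change-tracking hook (file system watcher, backup agent,
    // change journal) every time a file is modified.
    public void recordChange(String path) {
        Integer count = changeCounts.get(path);
        changeCounts.put(path, count == null ? 1 : count + 1);
    }

    // Map changes-per-hour to a suggested RPO, in minutes.
    // Thresholds are invented; a real system would make them policy-driven.
    public int suggestedRpoMinutes(String path) {
        Integer count = changeCounts.get(path);
        double changesPerHour = (count == null ? 0 : count) / (double) windowHours;

        if (changesPerHour >= 10) return 5;        // hot data: near-continuous protection
        if (changesPerHour >= 1)  return 60;       // active data: hourly
        if (changesPerHour > 0)   return 24 * 60;  // slow-moving data: daily
        return -1;                                 // unchanged: an archive copy is enough
    }
}
```

A real implementation would feed recordChange() from a change journal or agent and hand the suggested RPO to the backup scheduler; the point is simply that once the change data exists, mapping it to a protection level is easy.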
No one has this yet. I'm thinking it is two to five years before this idea really takes hold in products. That's okay. It's usually a long time before something goes from a good idea to a usable product. We have to start with the good idea.
Thursday, November 02, 2006
More Fun for You at SNW
After last year, I had pretty low expectations of SNW this year. There was so little new and interesting that it was discouraging. So, going into this year's Expo I have to admit that I had a bad attitude. I am happy to report that I was mistaken and actually found some interesting trends and technology. That is not to say all is rosy. There is still a lot of silliness out there but that is easily balanced by intelligent responses to real problems.
The Most Interesting Announcement
The announcement that caught my attention right away was EMC's acquisition of Avamar. Avamar was an early entry into disk-to-disk backup. What is most interesting about Avamar is their software. The hardware is beside the point. They are leaders in making backup more efficient by only bothering with changed data, called de-duping. They have done a great job with presenting backed up data so that it is easy to find and retrieve.
This fills a significant hole in EMC's lineup. They now have traditional backup (from Legato), CDP, disk-to-disk backup, and remote copy. Pretty much a full spectrum data protection product line. Nice.
Application Specific Storage
There are way too many storage array vendors in the marketplace. They can't all survive. However, there is a nice trend emerging, one that I think may have legs - Application Specific Storage. By this I mean storage systems tailored for specific types of applications. In general, we have taken general-purpose arrays and tweaked them for specific purposes. In some cases, vendors have specialized arrays and software for certain industries such as graphics, media, or the large files typical in oil and gas exploration.
The newest classes of storage are similar in concept - build arrays that fit certain application criteria. This is married to specialized file systems and network operating systems, as well as other hardware, to make networked storage that is able to meet the performance and management needs of common applications. This is a trend to watch.
Annoying Techspeak
Okay, there are lots of silly acronyms and marketing speak in the computer technology industry. What I really hate is when it is downright misleading. I saw the term "adaptive data protection" tossed around on some booths. That attracted me like a moth to a lightbulb, of course. Unfortunately, there was nothing adaptive about it. What I found was configurable (read: manually configurable) CDP. Aw, come on! Adaptive means that it changes when the environment changes. It does not mean that I can change it when I notice that something is different.
ILM In A Narrow Band
There is much less talk about ILM than last year or even than the year before. What there is now is more focused ILM products. Lots of advanced classification software and search and index engines. This is good. It shows the maturation of the ILM space.
Oh You Pretty Things!
Design has finally come to storage. I don't mean engineering design, functional but unattractive. Instead, design in terms of form and attractiveness. Let's face it, a lot of storage gear is downright ugly. Some of it so ugly that you need to put a paper bag over it before you put it in your data center. Now we have curves and glowing logos. You actually want to look at the stuff.
Gimme Shelter
Yes! More secure products. Software, secure storage, and security appliances. Not only is there critical mass in security products, but more and more security is integrated into other components. Let the black hats penetrate the network and server perimeters. They'll hit the wall with storage.
Give Me Some Relief
And what do IT professionals want? Relief. Relief from regulators and lawyers. Relief from high costs. And relief from the crying and wailing over lost data. They want to keep what they want while ditching what the lawyers want to ditch.
Perhaps this is a sign that things are changing in the storage industry. Innovation still exists and is growing. That's very good news.
Wednesday, October 25, 2006
First, Do No Harm Mozilla
ARRRGGGHHH! No, that is not the sound of me practicing to be a pirate for Halloween. Instead, it is a strangled cry of anguish brought on by my attempts to run the new Firefox browser. As my gentle readers are aware, I am usually a Mozilla fan. I have tossed away Microsoft Outlook in favor of Thunderbird and use Firefox, not Internet Explorer. The new version of Firefox, version 2.0, is giving me pause, however.
You would think that these folks never heard of backward compatibility. I'm not talking about the extensions that no longer work. I kind of expect that and usually they catch up over the next couple of weeks. I mean things that are supposed to work that no longer do. Worse, features that work inconsistently.
For instance, I had my browser configured to open most links in a new tab with the exception of links within a page. Bookmarks, search results, new windows all come in a tab. It is the single most useful feature of Firefox, in my opinion. The only thing that I wanted to open in the current window, is a link from within the current window. This is typical application behavior. I loaded up Firefox 2 and, behold, everything works exactly the opposite. Worse yet, changing the tab settings seems to have absolutely no effect on the behavior of the browser. Tell it to open external links in the current tab, it still opens them in a new one. No matter what I tell it to do, the silly browser does what it wants to do. Frustrating as all get out.
My TagCloud system no longer works. It somehow generates a 403 error from the web server. To put it plainly, some of my AJAX applications no longer function. Perhaps it is some new security feature that is ignoring the security profile that is in my user preferences. Perhaps it's ignoring my user preferences altogether. Maybe it's just acting randomly.
The point of this particular rant is that this is what kills software. Backward compatibility is a key component of upgrading software. If business enterprises are expected to implement these products, if these products are to become something other than a hobbyist's toy, then upgrades have to respect current configurations. New features don't matter if the old, useful ones no longer work.
In case anyone is thinking that this is just me, look around the message boards. You will find all kinds of similar problems. This is also not the first time this has happened. Several times in the past when Mozilla came out with upgrades a lot of things broke and for good.
The extension situation also exposes the soft white underbelly of open source. Over time, certain extensions, plug-ins, or whatever you want to call them, become so widely used as to become a feature. When that happens you can no longer rely on a weekend programmer to maintain it and keep it current. It is incumbent on the main development team to make sure these critical features are delivered.
New features are nice, often important. You can't deliver those by breaking other existing features. For me, it means a return to the older version that I have, v1.5. That presents its own problems since the update mechanism won't update to my previous version, 1.7, and a lot of extensions will no longer function below that level. All this aggravation is making me take a good look at IE 7. Exactly what the Mozilla team does not want to happen.
Sorry Mozilla, this is a lousy roll out. You won't win corporate hearts and minds this way.
Friday, October 20, 2006
Mother Nature 1, Small Business 0
I live near Buffalo, New York. Last week we had a freak October snow storm. Now, before anyone starts with the usual jokes, keep in mind this was not the normal Buffalo snow. Since the storm was widely but poorly reported, it is understandable that most people have an incomplete understanding of the scope of the disaster. Having fallen off the news cycle in a day, it's also unlikely people will ever get the whole story unless they live here or know someone who does. That's sad, because it is an excellent object lesson in disaster preparedness.
The Facts.
To give some context to the situation (and hopefully stop the snickering) here are the facts of the storm:
- The last time it snowed this early here was in 1880. We expect a small sprinkling at the end of the month, but not this much this early.
- According to NOAA, the average snowfall for Buffalo for October is 0.3 inches. This is averaged over 59 years. We got 23 inches in one night.
- On October 13, 2006 over 400,000 homes were without power. A full week later, on the 20th, there were still over 32,000 people without power.
- Costs for tree removal and damage for municipalities could top $135M. That doesn't include business losses and other economic factors, insurance losses, and repairs to homes.
- Schools have been closed all week in many districts and some will still be closed into next week, nearly two weeks into the disaster situation.
Now, it's not like Buffalo and Western New York are new to snow. We are usually prepared for anything. Not this though. What it tells you is that despite your best planning for known risks, there are any number of unknown risks that you can't anticipate.
The Impact.
Most large businesses were affected by the storm because their employees couldn't get to work. Streets were blocked, there were driving bans in most municipalities, and folks had families to worry about. Small businesses, on the other hand, suffered because they had no power. My office was offline for an entire week. After six days I was able to get a generator to run my computers. Power came back after seven days and I was up and running.
As an aside, the latest joke running around Western New York goes like this:
Q. What's the best way to get your power back?
A. Get a generator
Pretty sad, eh?
The Lesson.
What this experience underlines is the need for disaster planning by even the smallest of businesses. I know a great number of lawyers, doctors, dentists, and accountants who could not operate, or operate effectively, because they had no electricity. Computers did not operate and cell phones ran out of juice quickly. A common problem: cordless phones that don't operate at all without power. While large businesses will have gas-fired generators that can operate nearly indefinitely, most small businesses have no alternate source of energy.
The situation also shows the dark underbelly of computer technology. Without juice, computers are inert hunks of metal and plastic, more useful as door stops than productivity tools. Even worse is having your critical data locked up in a device that you can't turn on. There were quite a few times when I found myself wishing for my old fashioned paper date book.
So what should the small business professional do? There are some actions or products that I'm considering or that saved my bacon. Here's my top tips:
- Get a small generator. A 2000-watt generator can be had for $250.00. A computer will need anywhere from 150 watts to 600 watts depending on the type of power supply. Laptops use even less. For the average small business professional, a 2000-watt generator will allow you to work at some level.
- Offsite backup. My data was well protected - for the most part. However, as it got colder I began to worry that some of my disk-based on-line backup might be damaged. Thankfully it wasn't. Still, it is clear to me that I need to get my data offsite. Burning DVDs or taking tapes offsite is not practical. So, I'm looking into offsite, online backup.
- Extra cell phone batteries. The truth is, I mostly use my cell phone when traveling and can usually recharge it regularly. That works great when you have juice. An extra battery might make the difference between being operational and losing business. Some people used car chargers, many for the first time, to power up dead cell phone batteries.
- Network-based communication services. One of the best things I did was get a VoIP system. It obviously wasn't working with the power out, but the network was. That meant I never lost my voice mail. I could also get into the voice mail to change it. Small, onsite PBXs or answering machines don't work once the lights go out. Also, keep an old-fashioned analog phone handy. These work off of the phone line's own power, supplied by the phone company rather than your outlet. In many instances, folks had viable phone connections but no analog phone to hook up to them.
- Duct tape. It works for everything. No, really.
- Give help, ask for help. Buffalo is the kind of community where everyone helps everyone else out. That's how I got a generator. Others I spoke with were operating out of other people's offices. Keep a list of your friends and colleagues who can help you. Be prepared to help them too. That's right on so many levels.
The last tip is the most important. Technology will never have the power that people do. Over the past week I saw more signs of that than I ever thought possible. I know people who had their neighbors stay with them for over a week because they somehow had heat. I saw generators loaned like library books. On my block I encountered a group of high school students - cleaning out driveways so people could get out to the street.
That's right, roving bands of teenagers committing random acts of kindness. If this is what the world is coming to then I'm all for it. And like the Boy Scouts say "be prepared".
Thursday, October 12, 2006
The Evil Truth of Open Source Code Dependencies
Lurking... just beyond the shadows... lies an evil so hideous that...
Okay, too dramatic. Still, there is a big problem that is cropping up more and more with many open source projects. It seems that in an effort to leverage existing open source code (which is good) we are creating a series of dependencies that make implementation daunting. Just look at the dependency list for the Apache Software Foundation's Ant. Ant is a build tool, something programmers use to help compile and link a big program that has a lot of interdependent components. For all of you UNIX/Linux fans out there, think make on steroids. In any event, one look at the dependency list is enough to make a strong-stomached open source supporter turn green. There are over 30 libraries and other related components, from nearly as many different sources, that are required to use Ant. Use, not compile.
The core problems with this type of system are:
- Complexity - The obvious problem. It's so difficult to get things installed and configured right that you go nuts.
- Version Control - You now have to worry about what version of each dependent component you are dealing with. A change in a small library can break the whole system. Woe be to the programmer who uses an updated version of a component in his new application.
- Bloat - Open source used to have the advantage of being fairly lean. Not so anymore. This is not to say it's any more bloated than proprietary systems like Windows Server 2003. It's just not very different anymore in that respect.
- Conflicts - So, you have applications that use different versions of some core component. Have fun working that out.
This is a good example of why people go with closed frameworks like .NET. Even though you are at the mercy of Microsoft, they at least do the heavy lifting for you. Dealing with all this complexity costs money. It costs in terms of system management and development time, as well as errors that decrease productivity.
Ultimately, these factors need to be worked into the open source cost structure. It's one thing when open source is used by hobbyists. They can get a charge out of monkeying around with code elements like that. For professionals, it's another story. They don't have time for this. What's the solution? One solution has been installers that put the whole stack, plus application languages, on your system. Another option is to pull these components into a coherent framework like .NET. Then you can install just one item and get the whole package. Complexity and conflicts can be managed by a central project with proper version control for the entire framework. There are commercial frameworks that do this, but we need an open source framework that ties together all open source components. Otherwise, open source development will cease to penetrate the large-scale enterprise software market.
Wednesday, October 11, 2006
Eating My Own Cooking
Like so many analysts, I pontificate on a number of topics. It's one of the perks of the job. You get to shoot your mouth off without actually having to do the things you write or speak about. Every once in a while, though, I get the urge to do the things that I tell other people they should be doing. To eat my own cooking, you might say.
Over the past few months I've been doing a fair bit of writing about open source software and new information interfaces such as tag clouds and spouting to friends and colleagues about Web 2.0 and AJAX. All this gabbing on my part inspired me to actually write an application, something I haven't done in a long while. I was intrigued with the idea of a tag cloud program that would help me catalog and categorize (tag) my most important files.
Now, you might ask, "Why bother?" With all the desktop search programs out there you can find almost anything, right? Sort of. Many desktop search products do not support OpenOffice, my office suite of choice, or don't support it well. Search engines also assume that you know something of the content. If I'm not sure what I'm looking for, the search engine is limited in its usefulness. You either get nothing back or too much. Like any search engine, desktop search can only return files based on your keyword input. I might be looking for a marketing piece I wrote but not have appropriate keywords in my head.
A tag cloud, in contrast, classifies information by a category, usually called a tag. Most tagging systems allow for multidimensional tagging wherein one piece of information is classified by multiple tags. With a tag cloud I can classify a marketing brochure as "marketing", "brochure" and "sales literature". With these tags in place, I can find my brochure no matter how I'm thinking about it today.
Tag clouds are common on Web sites like Flickr and MySpace. It seemed reasonable that an open source system for files would exist. Despite extensive searching, I've not found one yet that runs on Windows XP. I ran across a couple of commercial ones but they were really extensions to search engines. They stick you with the keywords that the search engine gleans from file content but you can't assign your own tags. Some are extensions of file systems but who wants to install an entirely different file system just to tag a bunch of files?
All this is to say that I ended up building one. It's pretty primitive (this was a hobby project after all) but still useful. It also gave me a good sense of the good, the bad, and the ugly of AJAX architectures. That alone was worth it. There's a lot of rah-rah going on about AJAX, most of it well deserved, but there are some drawbacks. Still, it is the only way to go for web applications. With AJAX you can now achieve something close to a standard application interface with a web-based system. You also get a lot of services without coding, making multi-tier architectures easy. This also makes web-based applications more attractive as a replacement for standard enterprise applications, not just Internet services. Sweet!
The downsides - the infrastructure is complex and you need to write code in multiple languages. The latter creates an error-prone process. Most web scripting languages have a syntax that is similar in most ways but not all. They share the C legacy, as do C++, C#, and Java, but each implements the semantics in its own way. This carried forward to two of the most common languages in the web scripting world, PHP and JavaScript. In this environment, it is easy to make small mistakes in coding that slow down the programming process.
Installing a WAMP stack also turned out to be a bit of a chore. WAMP stands for Windows/Apache/MySQL/PHP (or Perl), and provides an application server environment. This is the same as the LAMP stack but with Windows as the OS instead of Linux. The good part of the WAMP or LAMP stack is that once it's in place, you don't have to worry about basic Internet services. No need to write a process to listen for a TCP/IP connection or interpret HTTP. The Apache Web Server does it for you. It also provides for portability. Theoretically, one should be able to take the same server code, put it on any other box, and have it run. I say theoretically because I discovered there are small differences in component implementations. I started on a LAMP stack and had to make changes to my PHP code for it to run under Windows XP. Still, the changes were quite small.
The big hassle was getting the WAMP stack configured. Configuration is the Achilles heel of open source. It is a pain in the neck! Despite configuration scripts, books, and decent documentation, I had no choice but to hand edit several different configuration files and download updated libraries for several components. That was just to get the basic infrastructure up and running. No application code, just a web server capable of running PHP which, in turn, could access the MySQL database. I can see now why O'Reilly and other technical book publishers can have dozens of titles on how to set up and configure these open source parts. It also makes evident how Microsoft can still make money in this space. Once the environment was properly configured and operational, writing the code was swift and pretty easy. In no time at all I had my Tag Cloud program.
The Tag Cloud program is implemented as a typical three tier system. There is a SQL database, implemented with MySQL, for persistent storage. The second tier is the application server code written in PHP and hosted on the Apache web server. This tier provides an indirect (read: more secure) interface to the database, does parameter checking, and formats the information heading back to the client.
As an aside, I originally thought to send XML to the client and wrote the server code that way. What I discovered was that it was quite cumbersome. Instead of simply displaying information returned from the server, I had to process XML trees and reformat them for display. This turned out to be quite slow given the amount of information returned, and just tough to code right. Instead, I had the server return fragments of XHTML which were integrated into the client XHTML. The effect was the same but coding was much easier. In truth, PHP excels at text formatting and JavaScript (the client coding language in AJAX) does not.
While returning pure XML makes it easier to integrate the server responses into other client applications, such as a Yahoo Widget, it also requires double the text processing. With pure XML output you need to generate the XML on the server and then interpret and format the XML into XHTML on the client. It is possible to do that fairly easily with XSLT and XPath statements, but in the interactive AJAX environment this adds a lot of complexity. I've also discovered that XSLT doesn't always work the same way in different browsers, and I was hell-bent on this being cross-browser.
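To make the trade-off concrete, here is a simplified sketch of what the client code has to do in each case. This is not the actual code from my program; the "cloudPanel" ID and the <tag> element name are invented for illustration. The fragment approach is a single assignment, while the pure XML approach means walking the response tree and rebuilding markup by hand.
    // Approach 1: the server returns an XHTML fragment - just drop it into the page.
    function showFragment(xhr) {
        document.getElementById("cloudPanel").innerHTML = xhr.responseText;
    }

    // Approach 2: the server returns pure XML - walk the tree and rebuild the markup.
    function showXml(xhr) {
        var tags = xhr.responseXML.getElementsByTagName("tag");  // hypothetical <tag> elements
        var html = "";
        for (var i = 0; i < tags.length; i++) {
            var name = tags[i].getAttribute("name");
            var count = parseInt(tags[i].getAttribute("count"), 10);
            html += '<span style="font-size:' + (10 + count) + 'px">' + name + '</span> ';
        }
        document.getElementById("cloudPanel").innerHTML = html;
    }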
The JavaScript client was an exercise in easy programming once the basic AJAX framework was in place. All that was required was two pieces of code. One was Nicholas Zakas' excellent cross-browser AJAX library, zXml. Unfortunately, I discovered too late that it also included cross-browser implementations of XSLT and XPath. Oh well. Maybe next time.
The second element was the HTTPRequest object wrapper class. HTTPRequest is the JavaScript object used to make requests of HTTP servers. It is implemented differently in different browsers and client application frameworks. zXml makes it much easier to have HTTPRequest work correctly in different browsers. Managing multiple connections to the web server, though, was difficult. Since I wanted the AJAX code to be asynchronous, I kept running into concurrency problems. The solution was a wrapper for the HTTPRequest object to assist in managing connections to the web server and encapsulate some of the more redundant code that popped up along the way. Easy enough to do in JavaScript, and it made the code less error prone too! After that it was all SMOP (a Simple Matter of Programming). Adding new functions is also easy as pie. I have a dozen ideas for improvements but all the core functions are working well.
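For the curious, a bare-bones version of that wrapper looks something like the sketch below. It is not my actual class, and it skips zXml entirely, falling back on the raw browser objects (XMLHttpRequest, or ActiveXObject on older Internet Explorer). The idea is simply that each request gets its own object and its own completion callback, which sidesteps the concurrency problems you hit when several asynchronous calls share one request object.
    // Minimal request wrapper: one request object per call, result handed to a callback.
    function ServerRequest(url, onSuccess, onError) {
        var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                        : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4) {              // request complete
                if (xhr.status == 200) {
                    onSuccess(xhr.responseText);    // XHTML fragment from the PHP tier
                } else if (onError) {
                    onError(xhr.status);
                }
            }
        };
        xhr.open("GET", url, true);                 // true = asynchronous
        xhr.send(null);
    }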
The basic architecture is simple. A web page provides basic structure and acts as a container for the interactive elements. It's pretty simple XHTML. In fact, if you look at the source, there's almost nothing to it. There are three DIV sections with named identifiers. These represent the three interactive panels. Depending on user interaction, the HTTPRequest helper objects are instantiated and make a request of the server. The server runs the requested PHP code, which returns XHTML fragments that are either for display (such as the TagCloud itself) or represent errors. The wrapper objects place them in the appropriate display panels. Keep in mind, it is possible to write a completely different web page with small JavaScript coding changes or even just changes to the static XHTML.
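Wiring a panel to the server then takes only a few lines. Again, this is a sketch built on the ServerRequest wrapper above; the tagcloud.php endpoint and the panel IDs are invented names, and the real program does more error checking.
    // Refresh the tag cloud panel with an XHTML fragment generated by the PHP tier.
    function refreshTagCloud(filterTag) {
        var url = "tagcloud.php?tag=" + encodeURIComponent(filterTag || "");
        new ServerRequest(url,
            function (fragment) {
                document.getElementById("cloudPanel").innerHTML = fragment;    // display panel
            },
            function (status) {
                document.getElementById("statusPanel").innerHTML =
                    "Request failed with HTTP status " + status;               // error panel
            });
    }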
The system has all the advantages of web applications with an interactive interface. No page refreshes, no long waits, no interface acrobatics. It's easy to see why folks like Google are embracing this methodology. There's a lot I could do with this if I had more time to devote to programming but Hey! it's only a hobby.
At the very least, I have a very useful information management tool. Finding important files has become much easier. One of the nice aspects of this is that I only bother to tag important files, not everything. It's more efficient to bake bread when you have already separated the wheat from the chaff. It's also good to eat my own cooking and find that it's pretty good.
Labels: development, information management, web development
Friday, October 06, 2006
ILM - The Oliver Stone Version
I've noticed that you don't hear much about ILM anymore. A few articles in InfoStor maybe. What news you do hear is mostly of restructurings or closings at small startup companies in the space. It seems that many ILM plays are moving into other markets or selling narrow interpretations of their technology under a different guise. I've pondered this turn of events and have collected a number of theories as to what has happened to ILM. So grab your popcorn and read "Tom's ILM Conspiracies."
It Was All Hooey To Begin With
One of the most popular theories is that ILM was a load of horse hockey from the start. The story goes this way:
- Big companies find core technology doesn't sell as well as it used to
- They quickly come up with a gimmick - take a basket of old mainframe ideas and give them a three letter acronym
- Hoist it on stupid and unsuspecting customers
- Sit back and laugh as the rubes buy the same stuff with a new facade
This is a variation of the "lipstick on a pig" theory that says ILM was only other stuff repackaged. It was all a scam perpetrated by evil marketing and sales guys to make their quota and get their bonuses.
Unfortunately, there are several holes in this theory. First, a lot of the companies in the space are new ones. While they take the lead from the big ones, they don't get technology and products from the big companies. They actually had to go out and make new technology not just repackage old stuff. The same is true for the big companies. Many had to develop or acquire entirely new technology to create their ILM offerings.
I also don't think that IT managers are as stupid as this theory would make them out to be. Your average manager with a college degree and a budget to manage rarely buys gear just because the salesman said it was "really important". So the marketing ploy argument falls down.
Good Idea. Too Bad It Didn't Work
Also known as a "Day Late and a Dollar Short. This theory says that ILM was a great idea but just too hard to implement or build products around. There is some truth to this one. It is hard to develop process automation products that don't require huge changes in the IT infrastructure. ILM is especially susceptible to this since it deals with rather esoteric questions such as "what is information? , "how do I define context?", and "what is the value of information?". Packaging these concepts into a box or software is not trivial.
It probably didn't help that a lot of ILM product came out of the storage industry. Folks in that neck of the woods didn't have a background in business process automation, the closest relative to ILM.
Put another way - making ILM products is hard, and a lot of companies found it too hard to stay in the market.
The Transformer Theory or Morphing Into Something Bigger
ILM. More than meets the eye! It's funny if you say it in a robotic voice and have seen the kids' cartoon and toys from the 80's. But seriously folks, the latest theory is that ILM hasn't disappeared, it's simply morphed into Information Management. This line of thought goes like this:
- ILM was only a start, a part of the whole
- It was successful to a large degree
- Now it is ready to expand into a whole new thing
- Get more money from customers
This is one of those sort of true and not true ideas. It is true that ILM is part of a larger category of Information Management. So is information auditing and tracking, search, information and digital asset management, document and records management, and even CAS. That doesn't mean that ILM as a product category has gone away. Not everyone needs all aspects of Information Management. Some folks only need to solve the problems that ILM addresses.
So, while ILM is now recognized as a subcategory of Information Management, it has not been consumed by it. This theory does not explain the relative quiet in the ILM world.
A twist on this theory is that ILM has simply gotten boring. No one wants to talk about it because it is pretty much done. That's boloney! Most vendors are only just starting to ship viable ILM products and IT has hardly begun to implement them. Nope. Put the Transformers toys and videos away.
Back and To The Left. Back and To The Left.
ILM was assassinated! Just as it looked like ILM would become a major force in the IT universe, evil analysts (who never have anything good to say about anything) and slow-to-market companies started to bad mouth it. It was a lie, they said. It was thin and there was nothing there. Unlike the "Hooey" theory, assassination assumes that there was something there but it was killed off prematurely by dark forces within our own industry.
That's a bit heavy, don't you think? Sure, there are naysayers and haters of any new technology - heck! I can remember someone telling me that the only Internet application that would ever matter was e-mail - but there are also champions. When you have heavy hitters like EMC and StorageTek (when they existed) promoting something, it's hard to imagine that a small group of negative leaning types could kill off the whole market.
Remember, there were a lot of people who hated iSCSI and said it would never work. Now, it's commonplace. Perhaps, like iSCSI, it will just take a while for ILM to reach critical mass. It is by no means dead. So that can't explain the dearth of noise.
It was always about the process anyway
They have seen the light! Finally, all is clear. ILM was about the process. The process of managing information according to a lifecycle. A lifecycle based on and controlled by the value of the information. No products needed, so nothing to talk about.
Sort of. Okay, many people have heard me say this a hundred times. It's The Process, Stupid. However, designing the process and policies is one thing. Actually doing them is something else entirely. ILM relies on a commitment by a business to examine their needs, create processes, translate these into information policies, and then spend money on products to automate them. Just as it's hard to build a house without tools, it's hard to do ILM without tools. Software primarily but some hardware too.
It's good that we have woken to the fact that ILM is about process and policy. That doesn't explain why there isn't more news about tools that automate them.
The "No Time To Say Hello - Goodbye! I'm Late!" Theory
Also known as "That Ain't Working'. That's The Way You Do It" effect.. This theory postulates that IT managers simply have more important things to do. Who has time for all that process navel gazing? We have too many other things to do (sing with me "We've got to install microwave ovens.") and simply don't have the time to map out our information processes and design policies for ILM. I'm sure that's true. I'm sure that's true for almost every IT project. It's a matter of priority. However, we do know that lots of people are struggling with information management issues, especially Sarbannes-Oxley requirements. If that wasn't true, the business lobby wouldn't be expending so much energy to get SOX eliminated or watered down.
There is a kernel of truth here. If IT doesn't see ILM as a priority, it will die on the vine. I just don't think that ILM is such a low priority that it explains the lack of positive news.
Everything is Fine Thanks. No, Really. It's Okay.
What are you talking about, Tom? Everything is peachy keen! We're at a trough in the product cycle and that makes us a bit quiet. No. Um. Customers are too busy implementing ILM to talk about it. That's the ticket. No wait! They are talking about it but no one reports it anymore. It's too dull. That's because it is so successful.
Nice try.
Not nearly enough IT shops are doing ILM, this is the time in the cycle when products (and press releases) should be pouring out, and the trade press will still write about established technology. Just look at all the articles about D2D backup and iSCSI, two very well established and boring technologies (and I mean that in only the most positive way). I will admit that ILM is no longer the flavor of the month, but you would think that you'd hear more about it. We still hear about tape backup. Why not ILM?
My Theory
Or non-theory really. My feeling is that ILM turned out to be too big a job for small companies. They couldn't put out comprehensive products because it was too hard to make and sell these products. Complete ILM takes resources at the customer level that only large companies have. Despite trying to develop all-encompassing solutions, many have fallen back on doing small parts of the puzzle. The companies that are still in the game are either big monster companies like EMC and IBM, implementing total ILM solutions, or small companies that do one small piece of the information management process. Classification software and appliances are still pretty hot. No one calls them ILM anymore because it is more narrow than that.
So ILM is not dead, just balkanized. It is a small part of a larger market (Information Management) that has been broken down into even smaller components such as information movement and classification. ILM is not dead or transformed. Just hiding.
Friday, September 22, 2006
Service-oriented Architectures: Taking large scale software to the next level
Distributed, dynamically bound systems are not new. The Internet is an example of a highly distributed system where resources are made available over open protocols. Back in the 90's there were an awful lot of service bus architectures as well. These usually involved a big piece of infrastructure such as an object broker. Both approaches have shortcomings. The Internet doesn't have enough programming heft. You have to augment it with all types of other objects that are loaded into clients. Object brokers place restrictions on programming languages and are very complex and expensive to implement.
A service-oriented architecture, on the other hand, combines concepts from both. You have loosely coupled resources, protocol simplicity, language independence, and ease of implementation. This affords an opportunity to virtualize an enterprise application. In turn, this virtualization makes development easier and more robust. Scaling up applications becomes much simpler when the components are not tightly tied together. Need a new function? Add a new callable service and publish it. Need to change something? Change one service without recompiling the whole application.
.NET does this too but on too small a scale. When you consider systems with thousands of functions, working over the Internet or other long distance networks, accessing components becomes a serious problem. In an SOA, you don't need to request the processing object, just that it perform the function that you need. This is also much less bandwidth intensive.
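To put the idea in code terms, here is a rough sketch (in JavaScript, to match the earlier examples, and with an invented /services/ URL and operation name) of a caller asking for a function by name. Nothing in the call identifies the object, server, or language that actually does the work; that is the loose coupling SOA is after.
    // Invoke a named service operation over HTTP; the caller knows the contract, not the implementation.
    function callService(operation, params, onResult) {
        var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                        : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {
                onResult(xhr.responseText);   // whichever component answered, the caller doesn't care
            }
        };
        xhr.open("POST", "/services/" + operation, true);   // hypothetical service endpoint
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.send(params);                                    // e.g. "customerId=42"
    }

    // Ask for the function you need, not the processing object that provides it.
    callService("getCustomerRecord", "customerId=42", function (result) {
        // display or process the result here
    });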
The hang-up with SOA is the underlying infrastructure. While SOA makes the application layer easier to manage, the components of the SOA are too often tightly bound to physical infrastructure such as servers and storage devices. To make SOA truly efficient, we need to virtualize and abstract this layer too. Servers, network resources, storage, and system-level applications also need to become services. This way, applications can request system resources as they need them and only receive what they need. That's very efficient. If a service is running under a light load, it would be good to have it only ask for lower processing and network resources. That frees resources for other, heavier users. Rather than have administrators analyze and make these changes manually, dynamically allocating only the resources that a component needs leads to better use of available resources. By negotiating with those resources, every component gets mostly what it needs rather than an all-or-nothing scenario.
SOAs are an evolution, not a revolution, but it's a good evolution. It gives system architects and administrators an opportunity to have a more efficient and more easily manageable system. Let's get the hardware under the same control. Hardware, software, it's all just resources that need to be reallocated as needs change.
Friday, August 18, 2006
Fibre Channel Switches - The Ranks Are Getting Thin!
Now that Brocade is buying McData (which bought CNT some time ago) the number of FC switch vendors is getting to be quite small. Three to be exact. Once the deal closes, all that will be left is Brocade, Cisco, and QLogic. Sure, there are some smaller players that will insist they are FC switch vendors but they really are iSCSI switch makers with a bit of Fibre Channel tossed in. None of them make the range of product that the FC Three do nor do they make director grade switches. Everyone else, including the big storage vendors, buy from these three companies.
Is this good? I always feel that more competition is better. It drives innovation and lowers cost. From the point of view of the eventual consumer (mainly IT shops) this is a bad thing. With fewer vendors selling FC switches, prices may rise and product development slow. Costs more, less filling. I can only imagine that the arrogance factor will increase a bit too.
From a product perspective this isn't a happy place either. Both companies have almost total overlap of products, including directors, mid-level switches, and small switches. You could pick through the products and find small differences, but nothing that will give the combined company a huge competitive advantage. Fewer overall product choices for consumers is not good, and it raises the risk of technology monoculture issues. With so much of the market concentrated in one company - Brocade - the chance of a bug affecting a large number of companies is one that needs consideration.
Which brings us to the "why". Where is the value of the merger or acquisition? Truthfully, Brocade gets nothing dramatic from McData. Granted, many feel that the Brocade director is not nearly as good as the McData one. McData also has some pretty good mainframe connectivity capabilities, mostly ESCON and FICON. That's not the biggest chunk of the market, so it's not enough to buy a company the size of McData. Perhaps the reason is that they got so excited at the prospect of swinging to a profit that they simply had to go out and spend that money! Woo hoo! Keep these guys away from the Home Shopping Network.
The best reasons for Brocade to do the deal are to eliminate a competitor that sometimes gives it fits in certain accounts (such as EMC) and to bulk up. That suggests that Cisco is hurting both companies. With its breadth of products, enormous, well-developed channel, and sheer scale, Cisco is pushing these guys around a bit. Or at least scaring them. That makes this look like a defensive move rather than a strategic one.
Maybe the deal makes sense financially. I'll leave that to the stock analysts. Investors seem to be voting this one with their feet. Brocade's stock plunged right after the announcement and hasn't recovered despite a good earnings announcement. That's confidence for you. While I don't think customers will like it in the long run, Cisco will likely approve. It gives them one less target to worry about. They can now focus on crushing only one competitor. Sweet for them but not for the rest of the industry or IT consumers.
Monday, August 07, 2006
Linux Breaks Apart Under the Open Source Model
I first worked with Linux back in the mid-90s. At the time there was only one real distribution, the Slackware distro, and it was a serious chore to install. It required manual configuration that was much more difficult than your average UNIX box let alone Windows or DOS.
Back then, as we experimented with Linux, one thing became abundantly clear - it was a one-size-fits-all approach and that didn't work. The truth is, operating system variations proliferate because different missions have different needs. A server OS is different than a desktop OS. A secure server is a totally different animal altogether. The requirements of multimedia development, programming, and games all vary widely. Linux in the old days really wasn't designed for that.
To be fair, Linux was (and technically still is) just the kernel. Diversity tended to be across operating systems then instead of within them. Desktop applications? Windows. Server applications? UNIX and maybe Windows NT. File server? Windows NT and Novell for the die hards. Multimedia and graphics? No question about it, go Mac.
This is in sharp contrast to the OS world of today. Starting with the fragmentation of UNIX in the 1980's, we now experience so many variants of operating systems that the mind boggles. The latest list of Vista variations shows six different types, and that doesn't include the Microsoft server operating systems or versions of Windows for embedded and mobile applications. Windows Server 2003 also has a bunch of variations, such as the Storage Server, and Longhorn is expected to be completely modular. Just trying to figure out which future Windows to use makes my head hurt.
It should then come as no surprise that the once unified Linux now ships in more distributions (a fancy word for versions) than there are mints in an Altoids tin. The Distrowatch website lists a top 100 distributions. That implies that there are more than 100 distributions! I can believe it. There actually are hundreds of distributions. While many are simply different packaging, most are specialized distributions aimed at increasingly narrow markets. Talk about slicing the baloney thin. There are special multimedia distros like Gentoo, desktop-oriented ones like Ubuntu, server versions like Red Hat Enterprise, and many more. Some are designed for lower-end machines, such as Damn Small Linux, and a few, like Slackware, seem to be heirloom distributions. More to the point, there are dozens of entries in each category, even relatively narrow ones like embedded real-time operating systems.
Ultimately, this is a yin-yang situation. While being able to find an operating system variant that suits a very narrow need is attractive, supporting that many operating systems drains resources away from the core system. It is also pretty confusing. Just trying to compare this many versions of an OS can be daunting. It's no wonder that only a small number of distributions make up the majority of installations, with a few additional ones staking out majority claims only in special niches.
Altogether, this is what happens with open source. Everyone wants their own flavor or feels like tinkering with it. Before you know it, variations proliferate like zebra mussels. Even fairly obscure new technology like VoIP fractures quickly under the open source model. There are lots of SIP servers out there and a bunch of variations on one implementation called Asterisk. Great stuff but there's no control.
It is clear that there should be variations on the Linux theme, but a limited number. One for servers, one for desktops, one for embedded, etc. The very nature of the Linux and open source world makes this unlikely. Unfortunately, the proliferation of Linux distros will weigh down on Linux, hurting it in the long term.
The fracturing of the Linux world into hundreds of variations is a side effect of the open source movement. In the end one of two things will happen. Either most of these distributions will fade away as the programmers get bored with it or Linux will eventually fail altogether, paving the way for more years of the Windows hegemony. I wonder if Bill Gates and Steve Ballmer are sitting down with a glass of champagne right now toasting Linux.
Back then, as we experimented with Linux, one thing became abundantly clear - it was a one size fits all approach and that didn't work. The truth is, operating systems variations proliferate because different missions have different needs. A server OS is different than a desktop OS. A secure server is a totally different animal altogether. The requirements of multimedia development, programming, and games all vary widely. Linux in the old days really wasn't designed for that.
To be fair, Linux was (and technically still is) just the kernel. Diversity tended to be across operating systems then instead of within them. Desktop applications? Windows. Server applications? UNIX and maybe Windows NT. File server? Windows NT and Novell for the die hards. Multimedia and graphics? No question about it, go Mac.
This is in sharp contrast to the OS world of today. Starting with fragmentation of UNIX in the 1980's, we now experience so many variants of operating systems, that the mind boggles. The latest list of Vista variations shows six different types and that doesn't include the Microsoft server operating systems or versions of Windows for embedded and mobile applications. Windows Server 2003 also has a bunch of variations, such as the Storage Server, and Longhorn is expected to be completely modular. Just trying to figure out which future Windows to use makes my head hurt.
It should then come as no surprise that the once unified Linux now ships in more distributions (a fancy word for versions) then there are mints in a Altoids tin. The Distrowatch website lists a top 100 distributions. That implies that there are more than 100 hundred distributions! I can believe it. There actually are hundreds of distributions. While many are simply different packaging, most are specialized distributions aimed at increasingly narrow markets. Talk about slicing the baloney thin. There are special multimedia distros like Gen Too, desktop-oriented ones like Ubuntu, server versions like Red Hat Enterprise, and many more. Some are designed for lower end machines, such as Damn Small Linux, and a few, like Slackware, seem to be heirloom distributions. More to the point, there are dozens of entries in each category, even relatively narrow ones like embedded real-time operating systems.
Ultimately, this is a ying-yang situation. While being able to find an operating system variant that suits a very narrow need is attractive, supporting that many operating systems drains resources away from the core system. It is also pretty confusing. Just trying to compare this many versions of an OS can be daunting. It's no wonder that only a small number of distributions make up the majority of installations with a few additional ones staking our majority claims only in special niches.
Altogether, this is what happens with open source. Everyone wants their own flavor or feels like tinkering with it. Before you know it, variations proliferate like zebra mussels. Even fairly obscure new technology like VoIP fractures quickly under the open source model. There are lots of SIP servers out there and a bunch of variations on one implementation called Asterix. Great stuff but there's no control.
It is clear that there should be variations on the Linux theme, but a limited number. One for servers, one for desktops, one for embedded, etc. The very nature of the Linux and open source world makes this unlikely. Unfortunately, the proliferation of Linux distros will weigh Linux down, hurting it in the long term.
The fracturing of the Linux world into hundreds of variations is a side effect of the open source movement. In the end, one of two things will happen. Either most of these distributions will fade away as the programmers get bored with them, or Linux will eventually fail altogether, paving the way for more years of the Windows hegemony. I wonder if Bill Gates and Steve Ballmer are sitting down with a glass of champagne right now toasting Linux.
Friday, July 21, 2006
CA and XOSoft
In the past week or so I have had a lot of people ask me what I thought about the recent acquisition of XOSoft by CA. Overall, the questions range from the overarching "Why did CA buy XOSoft?" to the more pointed "Isn't XOSoft worried about CA's reputation?" Here's my two-second analysis.
- CA Wins - CA needs to fill out their product line. With CDP and Replication technology from XOSoft, CA has one of the most complete data protection product lines in the industry. CA customers will soon have a complete, well integrated basket of data protection tools to draw from. CA also gets a willing partner and doesn't have to pay a premium needed to acquire an unwilling one.
- XOSoft Wins - Besides the obvious monetary gain for XOSoft founders and investors, they now get access to big company resources and the entire range of data protection and security products that CA offers. Let's face it, CDP is becoming a feature. XOSoft would have had to acquire someone (a small or distressed company, with the obvious problems associated with that) or would have become a niche player catering to the "I hate the big guys" crowd. There's not much of a future in that.
- CA's Reputation - This is still a problem. Everyone thinks of CA as having questionable (at best) business practices and slow, inattentive, and non-responsive product development. While that's changing, it will be a while before CA can change the industry's perception of the company. From CA's point of view, bringing energetic young companies into the fold helps this along. For the folks at those companies, it means having to manage an issue they have never dealt with.
- XOSoft Customers - Come on now. You didn't really think these guys were going to stay independent forever. You can now look forward to getting a well integrated data protection and security suite rather than cobbling one together on your own. It was inevitable and is beneficial in the long run.
Ultimately, it was best for XOSoft to take the deal with a known partner rather than wait on the sidelines until all the other CDP companies were gone. If they had done that, they would have been faced with limited choices and probably a lower selling point. If they had continued to stay independent, they would have been pushed to the back of the room, forever a small niche player with limited growing room. For CA, they get a quality purchase that fills out their product line, making them much more competitive. Sounds like a good deal to me.
As to the CA rep, remember what Timon says in The Lion King: "You have to put your past behind you." At some point the industry has to give CA a chance to move beyond its past indiscretions. The XOSoft acquisition is a step in that direction.
Riding That Train...
Riding the same train as my previous post on intellectual property, I turn my attention to those most reviled of creatures, The Patent Troll.
To many, Patent Trolls - those who buy patents only to generate licensing fees from them - are a form of life only slightly above parasitic worms. I don't understand the rancor. These aren't spammers or treet purveyors (all lower case, don't be upset, Armour and Hormel). These folks simply bought an asset and are trying to get some money from it. If I bought a car and rented it out to people, would folks throw rocks at me? They might, but not because of the car. This is no different.
When you read the stories of people complaining about Patent Trolls, two themes emerge.
- The complaining folks want something they did not work for, namely someone else's asset
- The complainers did not do their homework to see if there is a patent for the widget they are building
In other words, it's their fault they are in legal hot water.
The most common charge levied at the so-called Patent Troll is that they squelch innovation. Really? It would seem to me that it is in the patent holder's best interest to license the patent as widely as possible. Otherwise, they won't make much money from it. The more times they license the patent, the more money they make. It is also in the Patent Troll's best interest to do this without litigation. Paying lawyers costs money. A lot of money. That cuts into the profit they can expect to make from licensing, in essence raising the cost of deriving value from the asset. Why bother to do a thing like that?
The reason they bother is that some people don't want to pay what the patent holder wants. Think about this a minute. The chief complaint is that the Patent Troll won't sell their asset for what certain people are willing to pay for it. Keep in mind, I did not say that no one is willing to pay that price. If no one would pay what the Patent Troll is asking, the Troll would price itself out of the market and make nothing. Since that isn't happening, someone is willing to pay for it at the price the Troll wants.
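To put some toy numbers on the licensing-versus-litigation math above (every figure here is hypothetical, made up purely to illustrate the point that legal costs eat the value of the asset):

```python
# Hypothetical, made-up numbers purely for illustration -- not from any real patent case.
broad_licensing = {"licensees": 40, "fee": 50_000, "legal_costs": 200_000}
litigation_route = {"licensees": 3, "fee": 1_500_000, "legal_costs": 4_000_000}

def net_return(scenario):
    # Net value derived from the patent: total license fees minus legal spend.
    return scenario["licensees"] * scenario["fee"] - scenario["legal_costs"]

print(net_return(broad_licensing))    # 1,800,000: license widely, litigate rarely
print(net_return(litigation_route))   # 500,000: fewer, bigger deals, but the lawyers take the difference
```

Same asset, very different return, which is why a rational licensor would rather sign deals than sue.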
Now, this only applies to legitimate patents. Companies that claim patents on public work, such as Linux perhaps, don't fall into this category. Patent holders who knowingly lie in wait for someone to develop products and then immediately litigate are scum. Companies that buy patents so that no one can use them are hurting the economy and the technology industry. However, if you buy an asset and simply want to be paid for its use, then you are not a troll. You are a capitalist.
Monday, July 10, 2006
The Intellectual Property Dilemma
The lifeblood of any technology venture is intellectual property, usually called IP. IP is what a tech venture owns. One look at a modern technology company and you realize the following:
- They don't have factories - Manufacturing is almost universally outsourced, much of it overseas
- Brands are transient or nonexistent - Other than Apple with its "i" and "e" products (which is becoming tiresome), most technology companies don't have strong product brand recognition. The other exception to the rule is Microsoft with its Windows and Office franchises, but even those are slipping. Most people have stopped referring to Windows as anything but the version ("Are you running XP or 2000?")
- Products change rapidly - Product lifecycles in many categories, especially PCs and consumer electronics, are as short as six months. A technology company cannot hope to milk the same product for years on end like Procter & Gamble can with Tide.
All of this adds up to a need to develop and protect core technology. Technology drives innovation, products, and sales, not the other way around. The more walls you put around your technology, in order to prevent copying, the bigger your competitive advantage.
Herein lies the dilemma. The more you try and protect technology, the more difficult it is to work with others and the more restrictive the environment becomes for customers. If you open up technology so that others can use it - in order to develop the ecosystem your technology needs to thrive - you lose control over it. Over time, what was once a competitive advantage becomes commonplace and free. The tech company loses control over key technology and can no longer derive as much value from it.
The other problem is that too tight a control can actually inhibit innovation and usefulness. Let's face it, there are some features that are important to customers that you may not wish to or cannot develop. If others can, then everyone benefits. It's like holding Jello. Squeeze too tight and it squishes through your fingers.
One solution is licensing. You can tie select partners, or anyone who wants to have a relationship with you, to agreements that restrict what they can and cannot do. That leaves you in the driver's seat. Unfortunately, these agreements are not always enforceable overseas. This is especially true in the developing world, where IP and contract laws are not yet fully realized. There are also a number of places where different cultural views of property exist. These work against the enforcement of agreements and basic IP rights.
You can sometimes use technology to protect your technology. The whole Digital Rights Management (DRM) space is about using technology to protect IP. However, DRM cannot easily discriminate between legitimate and illegitimate uses of IP. Two prime examples of DRM technology gone awry are the limitations placed on downloadable music and the silly way that Microsoft makes your computer phone home to reinstall your software. The restrictions placed on legitimate use anger customers and inhibit the spread of the product.
The open source world thinks it has the ultimate solution: give the technology away. This is a great way to develop core, commodity infrastructure. However, even open source relies on licensing, the GPL primarily, to make sure that there is no misuse. The open source license is, in this respect, just a different type of wrapper like proprietary licensing or DRM. As long as the wrapper can be enforced it works fine. The chief advantage of open source licensing is that by being less restrictive, there is less incentive to actually break it. There is also less advantage to be had from the IP. This is why you rarely see open source used for complete applications. The exception is community developed products like Firefox where the profit motive doesn't exist, and even that's changing.
So, where's the solution? Open source, DRM, restrictive licensing? Hardly any hold up in all environments. Either there are onerous restrictions on customers or it is impossible to truly protect the IP worldwide. However, I think I have a solution - social pressure. That's right, simple morality. If the world community responds with disgust and disdain to pirates, the pirates cannot thrive. Instead of suing customers or sending the FBI and Interpol after would-be pirates, shame the pirates' customers. Expose them and shame them! My guess is if the RIAA had simply called the parents of the big music sharers and said "do you know what your kid is doing?" the big-time distribution of downloaded music would have stopped sans all the negative publicity. What CEO wants to have his name in the paper with THIEF slapped across it? No one I know.
Of course there will still be outliers. There are always immoral people who will do what they want. Shame works here too. Others will not want to do business with wrongdoers. If you don't think Microsoft's reputation as a ruthless killer of small companies hurts it, then you haven't been looking at their stock price. Social conventions are very powerful tools.
So, as a community, let's resolve to gently inform people as to why we have IP rights and what their duty is. Bind us, not to a faceless legal document that practically no one understands except the lawyers, but to other people. Even in the developing world this can work, since they presumably want to do business with Western countries. We have a strong cultural prohibition against piracy and should not do business with those who don't share it, regardless of cost considerations. If we make this clear and live up to our ideals, the whole issue of how to protect IP will finally find some balance.
Friday, July 07, 2006
Will EMC Get Indigestion from Gobbling Up RSA?
Look! Up in the sky. It's a storage hardware company. It's a software conglomerate. No! It's EMC.
The big news last week in the storage world had nothing to do with storage. EMC announced that it is acquiring security systems outfit and pioneer RSA Security for $2.1B. Zowie, that's a lot of money to spend. The fact that EMC was willing to spend that much money on RSA tells me two things. One, EMC realizes that the storage business is not enough to sustain it as a major player. Obviously, the Documentum and VMWare acquisitions were not a fluke. It also shows that the primary legs of a systems company these days are data protection, information management, and security. It kinda suggests what IT is spending its money on, doesn't it...
I find this tremendously gratifying of course. About 18 months ago, as I was writing my book, I suggested just such a model at Storage Networking World. Unfortunately, having realized that this was the model that was going to stick, I ended up changing most of the first chapter of my book, adding a couple of chapters and generally rearranging things. It was worth it but took extra time. If anyone would like to see that presentation, send me an e-mail. Or buy the book. But, enough about me.
The acquisition, at least from a product and technology perspective, makes sense for EMC. With VMWare, RSA, and Documentum, they now have a critical mass in products that secure the IT environment. Couple that with their traditional data protection products and you have a powerhouse that can lock down storage, server, and network systems, as well as the data in them.
The server piece presents a tricky problem. Servers are becoming, or have become depending on your point of view, a fairly low margin business with lots of rivals. If EMC were to acquire a server vendor, they could end up spending a lot of money while adding very little to their bottom line. However, a line of servers would give them a complete toolbox. It's a tough call. So far, the strategy has been to bulk up in software and that looks to be a winning solution. Frankly, I would not be all that surprised to see them jettison the storage hardware business to someone who really needs the help like IBM. EMC could then focus on developing software assets like data protection tools. The problem is that the storage hardware business still generates the bulk of their revenue. That's like crack, a very hard habit to kick.
So what's in the future for EMC? As I rub the crystal ball I see a couple of things. One, they could pick up one of the dozens of new data protection software companies, such as XOSoft or Avamar. Most of these companies have gone about as far as they could on their own. Any of them would give EMC some more depth in its upcoming fight with Symantec.
It is conceivable that they could go for a company that is big and broad but struggling. CA comes to mind. It has a very large software portfolio, decent mainframe software, but is in serious trouble. If it were me, I would start talking to them, do my due diligence, but wait until they get even deeper in trouble. Keeps the cost down.
So, my hat is off to Tucci and company. I like the RSA acquisition and think it will benefit EMC and EMC's customers tremendously. Good luck EMC folks!
Thursday, June 22, 2006
I'm Running Out of Network Connections
Over the past year I have been adding all kinds of neat widgets, gadgets, AJAX-driven pages, and other Web 2.0 stuff to my working environment. Google, which used to be a simple search page, now displays my calendar, news, stock quotes, and other information, all updating automatically. Each piece of the page is its own little AJAX application and operates independently. Same goes for my desktop widgets courtesy of Yahoo!, formerly known as Konfabulator. My Google desktop, which used to be about searching for files on my hard drive, now has a bunch of little software applications called gadgets. These gadgets either float free like Yahoo! Widgets or attach to the "sidebar" docking bar. Microsoft Vista (when it finally comes out) is also supposed to have some similar little applications.
This has created an incredibly rich and useful desktop environment. All of these apps allow me to do the little things I used to have to go to a web page for or load some monstrosity of an application. It has also created a problem I've never encountered before on a desktop machine - I'm running out of network connections.
There are lots of other applications that want to get to the network too. Practically every piece of software wants to check for updates (although I turn this off for most applications). Add on all the communications most of us use now, including VoIP/Skype, IM, and e-mail, and you have major contention for network resources. There are limitations to the number of TCP/IP sessions that Windows allows for each network connection. The more small apps I have trying to get to the Internet or even my own network, the more contention there is for those connections. Network adapters, especially the cheap ones they put in most desktop computers, also have limitations. Even if those limits are high, there is still limited bandwidth to work with, and building up and tearing down network connections takes CPU and network adapter resources.
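If you want to see who is actually holding all those sockets on your own machine, a quick tally of TCP connections per process tells the story. This is just a sketch: it assumes the third-party psutil package, which is my choice rather than anything from the post, and it may need elevated privileges on some systems.

```python
# Tally open TCP connections per process to see which apps are hogging them.
# Assumes the third-party psutil package (pip install psutil); may require
# elevated privileges on some operating systems.
from collections import Counter
import psutil

counts = Counter()
for conn in psutil.net_connections(kind="tcp"):
    if conn.pid is not None:
        counts[conn.pid] += 1

for pid, n in counts.most_common(10):
    try:
        name = psutil.Process(pid).name()
    except psutil.NoSuchProcess:
        name = "?"
    print(f"{name:<30} pid={pid:<8} tcp connections={n}")
```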
Now, I might be an ubergeek who wants to have all of these little doodads loaded all the time, but most of these are becoming normal features of the desktop environment. Couple this with the judicious use of AJAX on web sites, and even the average person will soon find themselves running out of a precious resource that they didn't know was limited or even there. In a sense, we are back to the days when most of us would get those cryptic messages from Windows because we were running out of memory. Fine for the computer savvy of the world but mystifying to the average Joe.
Now, some of the problem is the applications themselves. When they encounter an overloaded network connection, they act like something has gone terribly wrong. Rather than wait for resources to become available, they spill out an error message in techno code. The upshot is that normal users of computers may start to see them as more harm than good and shy away from them.
Better application design would also help. These various desktop scripting programs should include a network management capability that takes this into account. Small app designers should also work out a better scheme for building and releasing network connections. Some applications are constantly building and tearing down connections, which is hard on a system.
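As a sketch of what better-behaved could look like, the snippet below does the two things just described: it reuses one pooled, keep-alive connection instead of opening a fresh one for every request, and it backs off and retries when the network is busy rather than throwing a cryptic error at the user. It assumes the widely used requests library; the URL, timeouts, and retry counts are placeholders of my own, not anything from the post.

```python
# Sketch of a better-behaved update checker: one pooled keep-alive connection,
# plus polite retries with backoff instead of an instant error dialog.
# Assumes the third-party requests library; URL and limits are placeholders.
import time
import requests

UPDATE_URL = "https://example.com/api/updates"   # placeholder endpoint

def check_for_updates(session, retries=3, backoff=2.0):
    for attempt in range(retries):
        try:
            resp = session.get(UPDATE_URL, timeout=5)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            # Network busy or briefly unavailable: wait and try again
            # instead of surfacing a cryptic error to the user.
            time.sleep(backoff * (2 ** attempt))
    return None   # give up quietly and try again on the next cycle

with requests.Session() as session:   # the session reuses one connection for many requests
    updates = check_for_updates(session)
```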
For my part, I'm going to try an additional Ethernet card and spread the load out a bit. Most folks can't do that. A little more discipline may go a long way to ensuring that this new approach to software doesn't die on the vine.
Monday, June 05, 2006
Apple Nixes Overseas Call Center
I just saw a news report on CNET that Apple has nixed the call center it was planning in India (Apple hangs up on India call center). It just shows that Apple knows what they are doing. For them, brand is everything, and you can't protect that overseas. There is so much dissatisfaction amongst consumers with overseas call centers that they would have to be nuts to send calls to India. Mind you, this is nothing against India. It houses one of the great cultures of the world.
The reasons not to put a call center overseas are twofold. One, there are cultural differences, and these differences are amplified when there is stress. Few things are more stressful for anyone than customer service or technical support issues. You start out on the wrong foot and in fight mode. If you run up against a cultural misunderstanding or lack of integration with other functions (see my post on Buy.com below), you go from frustrated to downright angry in less time than it takes an Indy car to get to 60 MPH.
More important, the key reason for sending calls overseas is cost. I've heard (and made) the argument that it is about better round the clock service, faster response times, yada yada. Bull! It's to drive the cost per call from dollars to pennies. Understandable from a company perspective. Save a few pennies while performing the same function. However, once you start down that road, once you treat customer service as a function to be done as cheaply as possible, the whole operation starts to act that way. Soon enough, it's all about the costs and not the customer. Customers are treated as an inconvenience that has to be tolerated not embraced. Very short term thinking.
I've got to wonder why they were trying this in the first place. Perhaps Apple was setting this up to handle calls from India. Possible but not probable. Even if that is the case, it would have been inevitable that some otherwise bright person would have thought to send American calls to India. That would have been a disaster.
Now, if you're Apple, and your whole business is based on cool factor, you can't afford to look like any other schlocky consumer electronics company. People pay more for Apple products than for comparable ones because they get treated well. This drives tremendous loyalty in their customer base. I can't count how many people would rather choke on a chicken bone than give up their Mac.
Sending calls to a cheap overseas call center jeopardizes the ability to connect to customers and keep them part of the big loyal Apple family. Otherwise, who would pay extra for iPods and iTunes songs, or Macs for that matter? You buy Apple because you feel they connect with you, the average (yet hip) person. If you have a problem and you find yourself talking to someone overseas who doesn't get you as a person, then all that goes away and Apple dies. Steve Jobs knows this from Apple's previous near death experience. Everyone at Apple knows this.
Apple consistently comes out on top for customer satisfaction. Why mess with that when it's so important to sales and high margins? They probably felt they had to do it to stay competitive. Thank goodness someone inside realized that not doing it was how they would stay competitive.
Thursday, June 01, 2006
VoIP Has a Way To Go But It's On The Way
I've been watching the Vonage IPO fiasco with great interest. The company is becoming memorable as one of the worst IPOs in history. It opened to lackluster interest and dived almost immediately. Ever since, it has continued to trade in a narrow but ever decreasing band. Today it sits just a bit over $12, down roughly 25% from its opening. Yowzer!
Vonage deserves it, in a way. They don't make money. Instead they lose tremendous amounts of money. That's the problem with an IPO. Once public, you're not judged on your potential, concept, or technology. Just on your numbers. Their numbers stink, so there you have it.
My hope is that all VoIP doesn't get painted black because of this. Perhaps VoIP will be judged by the Skype experience instead. Perhaps, but life and business are cruel. Memories are short and folks usually remember the last stupid thing they heard. That's too bad because VoIP is a truly revolutionary technology. It is already transforming the way we communicate and holds the promise of finally bringing about communication convergence.
The hold up is that VoIP, unlike the PC or Internet, is really a bunch of technologies wired together and pushed to their limits. That means that the various technologies don't always play nice together. You have broadband networking (one set of providers), usually a bunch of SIP servers, and the traditional phone system (another set of providers) that all must work together. This creates all kinds of interface and provisioning problems. The result is that the call quality can vary from better than a land line to worse than a bad cell phone connection.
I've been experiencing this first hand over the course of the last month. I decided to finally drop my traditional landline in favor of a VoIP provider. The benefits were certainly there. What I got was:
- costs less than half of traditional phone service;
- network services like caller id and voice mail for free;
- long distance in North America that is free and international calling that's cheap; and
- did I mention it's half the cost of traditional service?
I also get the warm and fuzzy feeling that my expensive broadband connection is being used for something other than low bandwidth e-mail or small stuttering renditions of 30 year old TV shows. Welcome Back Kotter! It's like you never left.
There are tradeoffs however. VoIP is not as plug and play as vendors make it appear. Hooking up the adapter to the network is a no brainer but getting it to work right is not. A couple of calls to tech support at least got the connection stable enough to use.
I'm still trying to find a way to get the call quality to be consistently good. I'm making progress and new friends in my provider's tech support department (as well as a few enemies I think). I say "progress" because call quality is no longer consistently bad. It's now inconsistently good or bad, depending if you are a "glass is half empty or half full" type. I still experience drop outs, echoes, and static. Just not all the time. So, sometimes the connection is good and sometimes it's lousy.
The problem seems to be a misunderstanding between my ISP and VoIP provider. The latency in my ISP's network is more than the VoIP system can handle. I don't usually notice it since Internet applications are engineered for high latency and are more concerned with bandwidth. VoIP is apparently more like storage and sensitive to latency. This is the killer problem that must be overcome. If VoIP needs low network latency, or even just a predictable latency, then it will have problems in the very SOHO market it targets. Cable ISPs don't guarantee quality of service, especially to a home. DSL providers don't guarantee QoS. Yet VoIP needs a guaranteed minimum QoS. That's a problem that needs fixing before they lose the market.
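For what it's worth, you can get a rough read on whether a broadband link is voice-friendly by timing a series of small round trips and looking at both the average latency and the jitter. The sketch below uses repeated TCP connects as a crude stand-in for real voice traffic, and the 150 ms and 30 ms thresholds are common rules of thumb, not numbers from my ISP or VoIP provider.

```python
# Rough check of whether a link's latency and jitter look VoIP-friendly.
# TCP connect times stand in for real voice round trips; the 150 ms and
# 30 ms thresholds are common rules of thumb, not provider specifications.
import socket
import statistics
import time

HOST, PORT = "example.com", 443   # placeholder target host
SAMPLES = 20

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass                                   # connect, then close immediately
    rtts.append((time.perf_counter() - start) * 1000)   # milliseconds
    time.sleep(0.5)

avg = statistics.mean(rtts)
jitter = statistics.pstdev(rtts)
print(f"average latency: {avg:.1f} ms, jitter: {jitter:.1f} ms")
if avg > 150 or jitter > 30:
    print("Marginal for voice: expect drop-outs and echo.")
else:
    print("Latency looks fine; the quality problems are probably elsewhere.")
```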
All in all, I'm quite happy with the VoIP experience. It has dramatically cut my costs while giving me services that I never could have afforded before. Time will tell but I'm betting that VoIP providers will work out the call quality problems. It just might take awhile.