Tom Petrocelli's take on technology. Tom is the author of the book "Data Protection and Information Lifecycle Management" and a natural technology curmudgeon. This blog represents only my own views and not those of my employer, Enterprise Strategy Group. Frankly, mine are more amusing.
Thursday, September 03, 2009
EMC Pacmans Kazeon. Mmmm. Good.
More news on the storage M&A front, and once again it's about EMC. EMC announced that it is acquiring Kazeon, a self-avowed "leader in eDiscovery" tools vendor. Stripped of all the eDiscovery hoopla, Kazeon makes a decent indexing and classification rules engine. In that space, it is a very useful thing indeed.
What Kazeon does is catalog and index your data and run the information past a rules engine to generate additional metadata. Again, that's good, but only as good as the rules. They also layer on some analytics, which are all the rage in eDiscovery. Analytics are also only as good as the data and metadata and are, in my opinion, overemphasized in the eDiscovery market. But that's just me...
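For those who haven't seen this kind of engine, here is a minimal sketch of what "run the information past a rules engine" means in practice. It's my own illustration in JavaScript, not Kazeon's design: each rule inspects a file's indexed attributes and, when it matches, contributes extra metadata.

```javascript
// Illustrative only, not Kazeon's actual API: each rule looks at a crawled
// file and, on a match, adds a classification tag to its metadata.
const rules = [
  { name: "pii",      test: (f) => /\b\d{3}-\d{2}-\d{4}\b/.test(f.content),    tag: "contains-ssn" },
  { name: "contract", test: (f) => /agreement|indemnif/i.test(f.content),      tag: "legal-contract" },
  { name: "stale",    test: (f) => f.lastModified < Date.parse("2004-01-01"),  tag: "retention-review" }
];

function classify(file) {
  // Start from the metadata the crawler already captured, then let rules add to it.
  const metadata = { path: file.path, owner: file.owner, tags: [] };
  for (const rule of rules) {
    if (rule.test(file)) metadata.tags.push(rule.tag);
  }
  return metadata;
}
```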
Kazeon is a hand-in-glove fit for EMC. For many years now, EMC has looked to get past selling just hardware and has wanted to sell systems that store and manage data. That's a great strategy since it creates new value while sticking to its knitting. Kazeon is another value-added tool that EMC can add to its toolbox.
The Kazeon acquisition also gives them some eDiscovery street cred. They have been trying to play in the eDiscovery sandbox for years, mostly through their Documentum offerings. Nothing wrong with that, since the majority of eDiscovery work is about workflows anyway. However, automated tools for tracking data are important not only to discovery during litigation but also to ensuring compliance with in-house data retention rules. And by retention I mean destruction. But you knew that, didn't you...
The dirty secret about data retention is that no one can really comply with their own internal rules without knowing where and what all their data is. Knowing where all your data is turns out to be a really hard problem to solve. That's where Kazeon comes in. They create catalogs of vast amounts of data that allow you to better comply with discovery rules, preservation and legal holds, and internal data retention policies.
So, Kazeon is an obviously good thing but why is it good for EMC? Actually, there are two (probably more) reasons why this works so well. First, it adds value. If I buy tons of EMC storage, the Kazeon/EMC products will help me to know what I have on it. Second, those catalogs of information and metadata need, you guessed it, more storage. It's the reason Documentum was such a good deal for EMC. It lets you get more value from your stored data and makes you store more data. A twofer for EMC.
EMC will now be able to deliver, by itself, one of the most comprehensive information management systems available. By combining Documentum, Kazeon, and all the other tools EMC has at its disposal, plus hardware, they will be able to deliver an information management solution that will make lawyers squeal with delight.
That's not to say it's perfect. Kazeon can't help you if someone dumps their files onto a flash drive or emails a smoking gun document to their Gmail account. Smartphones and PDAs are also a challenge that Kazeon will not help with. Still, they hit all the high notes and do better than what most companies do - which is nothing!
As an aside, Kazeon also has an intellectual property (IP) management component to their systems. IP management and eDiscovery are very similar problems – you need to know what data you have where, in order to comply with rules and regulations. EMC has often touted Documentum as an IP management tool. They haven't gotten too far with that since it takes so much effort to set up Documentum to do IP management. Unless you are already committed to Documentum across the company, there are better out of the box IP solutions. Kazeon will give them some more heft in that space. It will allow EMC to automate many of the sticky tracking and classification tasks associated with IP management, especially in preventing leakage. It's not there yet, but getting better.
I don't know if EMC is full yet after eating up so many companies. Kazeon is quite a tasty and healthy morsel for them though. It makes good, strategic sense. I wonder if they left room for dessert.
Monday, July 20, 2009
The Incredible Shrinking Communication
It seems that we are constantly inventing shorter ways to communicate. Note that I didn't say faster or more efficient, just shorter. The Internet especially seems to want to help us shorten the length of what we read. In the age of print, books and pamphlets dominated alongside newspaper and magazine articles. While radio and television started the process of condensing communication, it has accelerated dramatically since the Internet became more ubiquitous. Our attention spans shrink and so does what we read.
Of course, the perceived attention span shrinkage may be a symptom, not a cause. As we have less time to devote solely to reading, we crave shorter forms that give us what we need most in the smallest amount of time possible. We still want longer-form writing when we have the time. Reading a book on the beach is the ultimate summer pleasure. Other times, we barely have time to check Facebook. Consequently, we now have a hierarchy of written communication. It starts off long and detailed and ends in microblogging, which is incredibly short – haiku short – and entirely lacking in detail.
Books provide deep understanding. If you want to become expert at something, books are a good place to start. Articles don't go as deep as books but the longer format allows you to become knowledgeable about a great many things in a short amount of time.
Unfortunately for the magazines and newspapers that typically publish articles, blogs are superseding them. Blogs have two key advantages – instant distribution and easy publishing. Instead of waiting hours or even months to get something in print, a blog gets your "article" out there right away. And anyone can publish a blog. No wrangling with editors and publishers. No pesky fact checkers. That, of course, is the weakness of the blog. As a reader you don't always know if you are getting facts, opinion, spin, or outright falsehood. Blogs are killing newspapers and magazines and I worry that the truth will die with them. Disclosure: I always present this blog as opinion and nothing more. Don't believe everything you read. Fight the power!
Microblogging and status messages on services like Facebook are quickly becoming the way that many people broadcast information. Short, instantaneous bursts of information, microblogging leaves little room for understanding or explanation. In terms of depth of knowledge they are at the shallow end of the pool. But this is what we want or need. We want to know a little something about everything but don't have the time to read hundreds of books, newspapers, or articles. It's kind of like an information buffet. You take a taste of this and that so that you can see what you like.
As recent events in Iran have shown, microblogging is a very powerful medium. Anyone can crank out a Tweet from a cell phone and have it be published before authorities even know it's there. It's hard to censor in those circumstances. Once again - Fight the power!
Perhaps in the future communication will get so short that no one will say anything at all. I could live with that. It would certainly cut down on the information overload if there was no information. I doubt very much that's where we will end up. But every time I predict we are at the floor, we push right through it.
Still, with SMS limited to 160 characters and Twitter limited to 140, I can't imagine how much smaller we could go. Perhaps we will need to write in glyph-based languages like Chinese or Ancient Egyptian, where more information is contained in each character.
Of course, many times there is beauty in simplicity and in an economy of words. In that vein I offer you this haiku:
Like the bird in spring
Sitting in the tallest tree
I must Tweet today
Thursday, January 03, 2008
Getting My Due
Now, there are lots of software packages to track expenses for you. Some are expensive, some cheap, some even free. The failure of all of them is the inability to record an expense with your computer off or when the package is not loaded. In other words, when you are actually incurring the expense, you likely don't have access to the software. I probably could buy something for my cell phone or purchase a PDA to do this but I'm doomed if I drop it in the toilet.
Enter stage left: Xpenser. Xpenser is a neat little web service, available for free, that allows you to record your expenses and then organize and track them. Neat! Even better, you can interact with Xpenser via e-mail and even IM.
What makes Xpenser really cool is its connection with Jott, the previously blogged-about voice transcription service. So I can now record expenses from IM (via cell phone or laptop), e-mail (via same), or voice. I call Jott, tell it to send a jott to Xpenser and - voila - the expense is recorded on the site.
As with all things Jott, it's not perfect. The way Jott pronounces Xpenser (which is supposed to sound like ex-pen-ser) is comical. It sounds vaguely Chinese when the X and P are pronounced together. Still, it's functional and getting better.
So now I can record and manage my expenses anywhere I am through whatever means is convenient. This is the real promise of the next wave in Internet services. The tethering of the 'Net is over and it is now time to truly go mobile. The 'Net is not its own place anymore. It's where I am wherever that may be.
Friday, March 30, 2007
Stikkits: Cool Technology You Can Use
Stikkit, however, does way more than your typical notepad application. It includes a to-do list, calendar, and bookmarking system. Again, other services have these as well. They are not as well integrated, but they do exist. What sets Stikkit apart is that everything is derived from the sticky note, called a stikkit. The Value of N software analyzes what is written in the note for clues that tell it how to manage the content of the note. For example, if you say "remind me" in the note, it will send you a reminder via e-mail. If you say "Order Books Tomorrow" it will create a calendar entry for "Order Books" with tomorrow's date. It can recognize e-mail addresses, hyperlinks, and other typical clues. There is a simple and natural way to list items as to-do list entries that can be checked off. The stikkit itself is then listed in the To-Do section as well.
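To make that concrete, here is a rough sketch of the content-analysis idea. It's my own guess in JavaScript, not the actual Stikkit parser, and the real thing is surely far richer:

```javascript
// Hypothetical sketch: scan the text of a note for clues and turn them
// into actions (reminders, calendar entries, contacts).
function interpretNote(text) {
  const actions = [];
  if (/remind me/i.test(text)) actions.push({ type: "email-reminder" });

  const dateWord = text.match(/\b(today|tomorrow)\b/i);
  if (dateWord) {
    const when = new Date();
    if (dateWord[1].toLowerCase() === "tomorrow") when.setDate(when.getDate() + 1);
    actions.push({ type: "calendar-entry", date: when.toDateString() });
  }

  const email = text.match(/[\w.+-]+@[\w.-]+\.\w+/);
  if (email) actions.push({ type: "contact", address: email[0] });

  return actions;
}

// "Order Books Tomorrow. Remind me." -> a calendar entry dated tomorrow plus a reminder.
console.log(interpretNote("Order Books Tomorrow. Remind me."));
```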
What is remarkable is that bookmarks, to-dos, tags, and calendar entries are not treated as separate items. The stikkit drives everything. At the very least, you always have the original note, which can contain much more information than a typical calendar entry. In fact, a stikkit can be interpreted as a calendar entry while portions inside it are treated as to-dos.
In the same manner as the Ask.com MyStuff system, you can create a bookmark for a website. However, to paraphrase Fred in the movie "Valley Girl", it's not what they do, it's the way they do it that makes the difference. You can save a link on your toolbar to create a stikkit. If you click on it while on a website, it pops up a stikkit with the URL embedded in it and categorized as a bookmark. You can add to the stikkit just like any other stikkit. The software will also analyze the content of the page and provide a ready-made list of tags for the stikkit. Neat!
Which brings us to yet another great feature – tagging. You can tag in a number of ways, including explicit tags (say "tag as..." in your stikkit) or by letting the software figure it out from the content. It is another example of the content-driven features that make Stikkit so useful. At this time explicit tagging is the only reliable way to tag a stikkit, but I'm betting that will change.
But wait! There's more! Stikkit contains collaboration features, again driven by the content of the stikkit. Make a note in the stikkit to share it with someone, or reference them by Stikkit user name or e-mail address, and copies of the stikkit are made available to them. You can even send reminders and assign tasks this way.
A simple, familiar interface belies so much power. There are some features (like peeps) that I haven't even explored yet. Stikkit almost seems to understand what I want it to do without me telling it explicitly. Now that's automation. I'm hoping they can succeed. More likely they will be consumed by some big on-line company. In a way that would be good since this functionality would then become even more widespread. I would love to see this in office productivity applications or CRM packages. Hey Microsoft! You listening?
Next up on the Value of N landscape, an e-mail assistant with some similar features. Called IwantSandy, it promises to look at the content of your e-mail and automate the process of setting events, managing your address book, and organizing e-mail. That may prove to be a truly killer app.
Wednesday, February 21, 2007
Managing Metadata
Everywhere I turn, I hear more about metadata. It seems that everyone is jumping on the metadata bandwagon. For those of you who have never heard the term, it is data about data. More precisely, metadata describes data, providing the context that allows us to make it useful.
Bringing Structure to Chaos
Organizing data is something we do in our heads but that computers are pretty poor at. It is natural for a human being to develop schemas and categories for the endless streams of data that invade our consciousness every moment we are awake. We can file things away for later use, delete them as unimportant, or connect them with other data to form relationships. It is an innate human capability.
Not so with computers. They are literal in nature and driven by the commands that we humans give them. No matter how smart we think computers are, compared to us organics, they are as dumb as rocks.
Metadata is an attempt to give computers a brain boost. By describing data, we are able to automate the categorization and presentation of data in order to make it more meaningful. In other words, we can build schema out of unstructured data. Databases do this by imposing a rigid structure on the data. This works fine for data that is naturally organized into neat little arrangements. For sloppy situations, say 90% of the data in our lives, databases are not so useful.
Metadata Is All Around Us
We are already swimming in metadata. All those music files clogging up our hard drives have important metadata associated with them. That's why your iPod can display the name, artist and other important information when you play a song and iTunes can build playlists automatically. Your digital camera places metadata into all of those pictures of your kids. Because of metadata, you can attach titles and other information to them and have them be available to all kinds of software. Internet services use metadata extensively to provide those cool tag clouds, relevant search responses, and social networking links.
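If you've never peeked at it, this is roughly the kind of record that rides along with a song or a photo. The field names here are illustrative, not any particular tag format:

```javascript
// Illustrative only: descriptive fields that travel with media files.
const songMetadata = {
  title: "So What",
  artist: "Miles Davis",
  album: "Kind of Blue",
  genre: "Jazz",
  year: 1959
};

const photoMetadata = {
  camera: "Canon PowerShot",     // which device took the picture
  taken: "2007-02-10T14:32:00",  // timestamp embedded by the camera
  title: "Kids at the park",     // added later by a person
  tags: ["family", "park"]       // descriptive labels used by software and services
};
```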
Businesses have a keen need for metadata. With so many word processor documents, presentations, graphics, and spreadsheets strewn about corporate servers, there needs to be a good way to organize and manage them. Information Lifecycle Management assumes the ability to generate and use metadata. Advanced backup and recovery also uses metadata. Companies are trying to make sense out of the vast stores of unstructured data in their clutches. Whether it's to help find, manage, or protect data, organizations are increasingly turning to metadata approaches to do so.
Dragged Down By The Boat
So, we turn to metadata to keep us from drowning in data. Unfortunately, we are starting to find ourselves drowning in metadata too. A lot of metadata is unmanaged. Managing metadata sounds a lot like watching the watchers. If we don't start to do a better job of managing metadata, we are going to find out an ugly truth about it – it can quickly become meaningless. Just check out the tag cloud on an on-line service such as Technorati or Flickr. They are so huge that they're practically useless. I'm a big fan of tag clouds when they are done right. The ability to associate well-thought-out words and phrases with a piece of data makes it much easier to find what you want and attach meaning to whatever the data represents.
The important phrase here is "well thought out". A lot of metadata is impulsive. Like a three-year-old with a tendency to say whatever silly thought comes into their head, a lot of tags are meaningless and transient. Whereas the purpose of metadata is to impart some extended meaning to the data, a lot of metadata does the opposite. It creates a confused jumble of words that sheds no light on the meaning of the data.
The solution is to start to manage the metadata. That means (and I know this is heresy that I speak) rules. Rules about what words can be used in what circumstances. Rules about the number of tags associated with any piece of data. Rules about the rules basically. It makes my stomach hurt but it is necessary.
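To make "rules" a little less abstract, here is a sketch of what tag governance might look like in software. It is purely illustrative; the vocabulary and limits are made up:

```javascript
// Hypothetical tag-governance check: enforce a controlled vocabulary
// and a cap on how many tags one item may carry.
const allowedTags = new Set(["information", "guidelines", "metadata", "marketing", "legal"]);
const maxTagsPerItem = 5;

function validateTags(tags) {
  const problems = [];
  if (tags.length > maxTagsPerItem) problems.push("too many tags (" + tags.length + ")");
  for (const tag of tags) {
    if (!allowedTags.has(tag.toLowerCase())) {
      problems.push('"' + tag + '" is not in the controlled vocabulary');
    }
  }
  return problems; // an empty array means the tags comply with the rules
}
```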
I don't expect this discipline from Internet services. It runs counter to the “one happy and equal family” attitude that draws people to these services. For companies this is necessary as they implement metadata driven solutions. Unfortunately, it means guidelines (tagged as “information”, “guidelines”, “metadata”, or something) and some person with a horrible, bureaucratic personality to enforce them. Think of it as a necessary evil, like lawmen in the old West.
For the most part, companies will probably start to manage metadata when it is already too late, when they are already drowning in the stuff. There is an opportunity to still avoid the bad scenario though. Metadata-based management of unstructured data is still pretty new. Set up the rules and guidelines now. Enforce the rules and review the tags and how they are used regularly. Eventually, there will be metadata analysis software to assist you but in the meantime, put the framework in place. The opportunity is there to do it right from the beginning and avoid making a big mistake. Use metadata to create value from your data rather than confuse things further.
Thursday, December 14, 2006
In The Wiki Wiki Wiki Wiki Wiki Room
The advantage of the Wiki is that it is very easy to write and edit entries. Formatting those entries, while a bit unconventional, is also pretty easy. This yields highly attractive, easy to navigate online documents.
Deploying a Wiki application, such as MediaWiki, the software behind Wikipedia, is also incredibly easy, assuming you have a working LAMP or WAMP stack operating. I've worked with all types of collaborative systems, including Lotus Notes and a number of knowledgebases, and few are as easy to use as MediaWiki and none are as easy to install.
Some time ago, I installed and started using a Wiki for tracking storage companies, my bread and butter. That in many ways is a conventional use of a Wiki. Think of it as building an encyclopedia of storage companies. Nothing Earth shattering there.
I also started using my Wiki for development projects. I build my own applications from time to time to provide myself something useful, try out technology that people are buzzing about, and simply amuse myself. On my latest project I used a Wiki to help me design the application and capture my decisions along the way. What I found was a product manager's dream application.
Anyone who has had to build product specs, PRDs, etc. knows what a pain they can be. They are dynamic, require input from lots of often-unwilling people, and whole sections usually need to be written by specialists. In other words, they require collaboration, the stock-in-trade of a Wiki. The same goes for the design documents that engineers use.
As time has gone on, I have been able to add and expand sections, eliminate others, and make changes as needed. All without losing the thoughts behind them. As is typical with web applications, you can link to other documents or sections of documents with ease, including outside ones. If there were engineers writing the technical design elements, they could simply write them in. If a change occurred to a feature requirement, everyone would know immediately.
Contrast that with the way this is usually done. Dozens of Word documents tossed at the product manager who has to somehow integrate them all without getting something terribly wrong. Playing cut and paste with e-mails and other documents, trying to pull them together into a unified whole but never having a sense of the entire project. Gack! Makes me choke just thinking about it.
Heck, by the end of this, I'll almost have a design manual on how to write this type of application, useful to anyone who would want to do the same in the future. Development and knowledge capture all at once.
This is the killer app for product managers. Using a Wiki, you could pull together requirements documents in half the time and tie them back to the design documents and anything else associated with the product. Tech support would love what comes out of this if QA used it too. And it costs practically nothing to deploy. The software is free, as is most of the infrastructure.
Free and transformative. What could be better?
Wednesday, October 11, 2006
Eating My Own Cooking
Over the past few months I've been doing a fair bit of writing about open source software and new information interfaces such as tag clouds and spouting to friends and colleagues about Web 2.0 and AJAX. All this gabbing on my part inspired me to actually write an application, something I haven't done in a long while. I was intrigued with the idea of a tag cloud program that would help me catalog and categorize (tag) my most important files.
Now, you might ask, "Why bother?" With all the desktop search programs out there you can find almost anything, right? Sort of. Many desktop search products do not support OpenOffice, my office suite of choice, or don't support it well. Search engines also assume that you know something of the content. If I'm not sure what I'm looking for, the search engine is limited in its usefulness. You either get nothing back or too much. Like any search engine, desktop search can only return files based on your keyword input. I might be looking for a marketing piece I wrote but not have appropriate keywords in my head.
A tag cloud, in contrast, classifies information by a category, usually called a tag. Most tagging systems allow for multidimensional tagging wherein one piece of information is classified by multiple tags. With a tag cloud I can classify a marketing brochure as "marketing", "brochure" and "sales literature". With these tags in place, I can find my brochure no matter how I'm thinking about it today.
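In data terms, multidimensional tagging is nothing fancier than letting one file carry several labels at once. A tiny sketch of my own, not any particular product:

```javascript
// One file, many tags: the brochure can be found under any of its labels.
const taggedFiles = [
  { path: "docs/brochure_2006.odt", tags: ["marketing", "brochure", "sales literature"] },
  { path: "docs/roadmap.ods",       tags: ["marketing", "planning"] }
];

// Find everything tagged a certain way, however I happen to think of it today.
function findByTag(tag) {
  return taggedFiles.filter((f) => f.tags.includes(tag)).map((f) => f.path);
}

console.log(findByTag("sales literature")); // ["docs/brochure_2006.odt"]
```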
Tag clouds are common on Web sites like Flickr and MySpace. It seemed reasonable that an open source system for files would exist. Despite extensive searching, I've not found one yet that runs on Windows XP. I ran across a couple of commercial ones but they were really extensions to search engines. They stick you with the keywords that the search engine gleans from file content but you can't assign your own tags. Some are extensions of file systems but who wants to install an entirely different file system just to tag a bunch of files?
All this is to say that I ended up building one. It's pretty primitive (this was a hobby project after all) but still useful. It also gave me a good sense of the good, the bad, and the ugly of AJAX architectures. That alone was worth it. There's a lot of rah-rah going on about AJAX, most of it well deserved, but there are some drawbacks. Still, it is the only way to go for web applications. With AJAX you can now achieve something close to a standard application interface with a web-based system. You also get a lot of services without coding, making multi-tier architectures easy. This also makes web-based applications more attractive as a replacement for standard enterprise applications, not just Internet services. Sweet!
The downsides: the infrastructure is complex and you need to write code in multiple languages. The latter creates an error-prone process. Most web scripting languages have syntax that is similar in most ways, but not all. They share the C legacy, as do C++, C#, and Java, but each implements the semantics in its own way. This carries forward to two of the most common languages in the web scripting world, PHP and JavaScript. In this environment, it is easy to make small coding mistakes that slow down the programming process.
Installing a WAMP stack also turned out to be a bit of a chore. WAMP stands for Windows/Apache/MySQL/PHP (or Perl), and provides an application server environment. This is the same as the LAMP stack but with Windows as the OS instead of Linux. The good part of the WAMP or LAMP stack is that once in place, you don't have to worry about basic Internet services. No need to write a process to listen for a TCP/IP connection or interpret HTTP. The Apache Web Server does it for you. It also provides for portability. Theoretically, one should be able to take the same server code, put it on any other box, and have it run. I say theoretically because I discovered there are small differences in component implementations. I started on a LAMP stack and had to make changes to my PHP code for it to run under Windows XP. Still, the changes were quite small.
The big hassle was getting the WAMP stack configured. Configuration is the Achilles' heel of open source. It is a pain in the neck! Despite configuration scripts, books, and decent documentation, I had no choice but to hand-edit several different configuration files and download updated libraries for several components. That was just to get the basic infrastructure up and running. No application code, just a web server capable of running PHP which, in turn, could access the MySQL database. I can see now why O'Reilly and other technical book publishers can have dozens of titles on how to set up and configure these open source parts. It also makes evident how Microsoft can still make money in this space. Once the environment was properly configured and operational, writing the code was swift and pretty easy. In no time at all I had my Tag Cloud program.
The Tag Cloud program is implemented as a typical three tier system. There is a SQL database, implemented with MySQL, for persistent storage. The second tier is the application server code written in PHP and hosted on the Apache web server. This tier provides an indirect (read: more secure) interface to the database, does parameter checking, and formats the information heading back to the client.
As an aside, I originally thought to send XML to the client and wrote the server code that way. What I discovered was that it was quite cumbersome. Instead of simply displaying information returned from the server, I had to process XML trees and reformat them for display. This turned out to be quite slow given the amount of information returned, and just tough to code right. Instead, I had the server return fragments of XHTML, which were integrated into the client XHTML. The effect was the same but coding was much easier. In truth, PHP excels at text formatting and JavaScript (the client coding language in AJAX) does not.
While returning pure XML makes it easier to integrate the server responses into other client applications, such as a Yahoo Widget, it also requires double the text processing. With pure XML output you need to generate the XML on the server and then interpret and format the XML into XHTML on the client. It is possible to do that fairly easily with XSLT and XPath statements, but in the interactive AJAX environment this adds a lot of complexity. I've also discovered that XSLT doesn't always work the same way in different browsers, and I was hell-bent on this being cross-browser.
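Here is a rough sketch of the difference from the client's point of view. The element names are made up for illustration, not the actual Tag Cloud schema:

```javascript
// With an XHTML fragment the browser does the work; with raw XML the client
// has to walk the tree and rebuild the presentation itself.
function showFragment(panel, xhttp) {
  // Server already formatted the markup; just drop it into the page.
  panel.innerHTML = xhttp.responseText;
}

function showXml(panel, xhttp) {
  // Server sent data only; the client must reformat it for display.
  const doc = xhttp.responseXML;
  const items = doc.getElementsByTagName("tag"); // hypothetical element name
  let html = "<ul>";
  for (let i = 0; i < items.length; i++) {
    html += "<li>" + items[i].getAttribute("name") + "</li>";
  }
  panel.innerHTML = html + "</ul>";
}
```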
The JavaScript client was an exercise in easy programming once the basic AJAX framework was in place. All that was required was two pieces of code. One was Nicholas Zakas' excellent cross-browser AJAX library, zXml. Unfortunately, I discovered too late that it also includes cross-browser implementations of XSLT and XPath. Oh well. Maybe next time.
The second element was a wrapper class for the XMLHttpRequest object. XMLHttpRequest is the JavaScript object used to make requests of HTTP servers. It is implemented differently in different browsers and client application frameworks; zXml makes it much easier to have XMLHttpRequest work correctly across browsers. Managing multiple connections to the web server, though, was difficult. Since I wanted the AJAX code to be asynchronous, I kept running into concurrency problems. The solution was a wrapper for the XMLHttpRequest object to assist in managing connections to the web server and encapsulate some of the more redundant code that popped up along the way. Easy enough to do in JavaScript, and it made the code less error-prone too! After that it was all SMOP (a Simple Matter of Programming). Adding new functions is also easy as pie. I have a dozen ideas for improvements, but all the core functions are working well.
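To give a flavor of what such a wrapper does, here is a simplified sketch. It uses the standard XMLHttpRequest object directly rather than the zXml cross-browser helpers, and it illustrates the approach rather than reproducing my actual code:

```javascript
// Simplified sketch: each request gets its own XMLHttpRequest instance and
// its own callbacks, so overlapping asynchronous calls don't trample each
// other's state.
function ServerRequest(url, onSuccess, onError) {
  this.send = function (params) {
    const xhttp = new XMLHttpRequest();           // one connection per request
    xhttp.open("POST", url, true);                // true = asynchronous
    xhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhttp.onreadystatechange = function () {
      if (xhttp.readyState !== 4) return;         // wait until the response is complete
      if (xhttp.status === 200) onSuccess(xhttp.responseText);
      else if (onError) onError(xhttp.status);
    };
    xhttp.send(params);
  };
}
```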
The basic architecture is simple. A web page provides the basic structure and acts as a container for the interactive elements. It's pretty simple XHTML. In fact, if you looked at the source, it would seem like almost nothing is there: three DIV sections with named identifiers, representing the three interactive panels. Depending on user interaction, the request helper objects are instantiated and make a request of the server. The server runs the requested PHP code, which returns XHTML fragments that are either for display (such as the tag cloud itself) or represent errors. The wrapper objects place them in the appropriate display panels. Keep in mind, it is possible to write a completely different web page with small JavaScript coding changes or even just changes to the static XHTML.
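Wiring that to the page then takes only a few lines per panel. Something along these lines, building on the wrapper sketch above (the element ID and the PHP endpoint name are made up for illustration):

```javascript
// Hypothetical wiring: fetch the cloud from the PHP tier and drop the returned
// XHTML fragment into one of the named panels.
const cloudPanel = document.getElementById("tagCloudPanel");   // one of the three DIVs
const request = new ServerRequest(
  "tagcloud.php",                                              // illustrative endpoint name
  function (fragment) { cloudPanel.innerHTML = fragment; },    // display the XHTML fragment
  function (status)   { cloudPanel.innerHTML = "<p>Error " + status + "</p>"; }
);
request.send("action=cloud");
```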
The system has all the advantages of web applications with an interactive interface. No page refreshes, no long waits, no interface acrobatics. It's easy to see why folks like Google are embracing this methodology. There's a lot I could do with this if I had more time to devote to programming but Hey! it's only a hobby.
At the very least, I have a very useful information management tool. Finding important files has become much easier. One of the nice aspects of this is that I only bother to tag important files, not everything. It's more efficient to bake bread when you have already separated the wheat from the chaff. It's also good to eat my own cooking and find that it's pretty good.
Friday, October 06, 2006
ILM - The Oliver Stone Version
It Was All Hooey To Begin With
One of the most popular theories is that ILM was a load of horse hockey from the start. The story goes this way:
- Big companies find core technology doesn't sell as well as it used to
- They quickly come up with a gimmick - take a basket of old mainframe ideas and give them a three letter acronym
- Hoist on stupid and unsuspecting customers
- Sit back and laugh as the rubes buy the same stuff with a new facade
This is a variation of the "lipstick on a pig" theory that says ILM was only other stuff repackaged. It was all a scam perpetrated by evil marketing and sales guys to make their quota and get their bonuses.
Unfortunately, there are several holes in this theory. First, a lot of the companies in the space are new ones. While they take the lead from the big ones, they don't get technology and products from the big companies. They actually had to go out and make new technology not just repackage old stuff. The same is true for the big companies. Many had to develop or acquire entirely new technology to create their ILM offerings.
I also don't think that IT managers are as stupid as this theory would make them out to be. Your average manager with a college degree and a budget to manage rarely buys gear just because the salesman said it was "really important". So the marketing ploy argument falls down.
Good Idea. Too Bad It Didn't Work
Also known as a "Day Late and a Dollar Short. This theory says that ILM was a great idea but just too hard to implement or build products around. There is some truth to this one. It is hard to develop process automation products that don't require huge changes in the IT infrastructure. ILM is especially susceptible to this since it deals with rather esoteric questions such as "what is information? , "how do I define context?", and "what is the value of information?". Packaging these concepts into a box or software is not trivial.
It probably didn't help that a lot of ILM product came out of the storage industry. Folks in that neck of the woods didn't have a background in business process automation, the closest relative to ILM.
Put another way - making ILM products is hard, and a lot of companies found it too hard to stay in the market.
The Transformer Theory or Morphing Into Something Bigger
ILM. More than meets the eye! It's funny if you say it in a robotic voice and have seen the kids' cartoon and toys from the '80s. But seriously, folks, the latest theory is that ILM hasn't disappeared, it's simply morphed into Information Management. This line of thought goes like this:
- ILM was only a start, a part of the whole
- It was successful to a large degree
- Now it is ready to expand into a whole new thing
- Get more money from customers
This is one of those sort of true and not true ideas. It is true that ILM is part of a larger category of Information Management. So is information auditing and tracking, search, information and digital asset management, document and records management, and even CAS. That doesn't mean that ILM as a product category has gone away. Not everyone needs all aspects of Information Management. Some folks only need to solve the problems that ILM addresses.
So, while ILM is now recognized as a subcategory of Information Management, it has not been consumed by it. This theory does not explain the relative quiet in the ILM world.
A twist on this theory is that ILM has simply gotten boring. No one wants to talk about it because it is pretty much done. That's baloney! Most vendors are only just starting to ship viable ILM products and IT has hardly begun to implement them. Nope. Put the Transformers toys and videos away.
Back and To The Left. Back and To The Left.
ILM was assassinated! Just as it looked like ILM would become a major force in the IT universe, evil analysts (who never have anything good to say about anything) and slow-to-market companies started to bad mouth it. It was a lie, they said. It was thin and there was nothing there. Unlike the "Hooey" theory, assassination assumes that there was something there but it was killed off prematurely by dark forces within our own industry.
That's a bit heavy, don't you think? Sure, there are naysayers and haters of any new technology - heck! I can remember someone telling me that the only Internet application that would ever matter was e-mail - but there are also champions. When you have heavy hitters like EMC and StorageTek (when they existed) promoting something, it's hard to imagine that a small group of negative leaning types could kill off the whole market.
Remember, there were a lot of people who hated iSCSI and said it would never work. Now it's commonplace. Perhaps, like iSCSI, it will just take a while for ILM to reach critical mass. It is by no means dead. So that can't explain the dearth of noise.
It was always about the process anyway
They have seen the light! Finally, all is clear. ILM was about the process. The process of managing information according to a lifecycle. A lifecycle based on and controlled by the value of the information. No products needed, so nothing to talk about.
Sort of. Okay, many people have heard me say this a hundred times. It's The Process, Stupid. However, designing the process and policies is one thing. Actually doing them is something else entirely. ILM relies on a commitment by a business to examine their needs, create processes, translate these into information policies, and then spend money on products to automate them. Just as it's hard to build a house without tools, it's hard to do ILM without tools. Software primarily but some hardware too.
It's good that we have woken to the fact that ILM is about process and policy. That doesn't explain why there isn't more news about tools that automate them.
The "No Time To Say Hello - Goodbye! I'm Late!" Theory
Also known as "That Ain't Working'. That's The Way You Do It" effect.. This theory postulates that IT managers simply have more important things to do. Who has time for all that process navel gazing? We have too many other things to do (sing with me "We've got to install microwave ovens.") and simply don't have the time to map out our information processes and design policies for ILM. I'm sure that's true. I'm sure that's true for almost every IT project. It's a matter of priority. However, we do know that lots of people are struggling with information management issues, especially Sarbannes-Oxley requirements. If that wasn't true, the business lobby wouldn't be expending so much energy to get SOX eliminated or watered down.
There is a kernel of truth here. If IT doesn't see ILM as a priority, it will die on the vine. I just don't think that ILM is such a low priority that it explains the lack of positive news.
Everything is Fine Thanks. No, Really. It's Okay.
What are you talking about, Tom? Everything is peachy keen! We're at a trough in the product cycle and that makes us a bit quiet. No. Um. Customers are too busy implementing ILM to talk about it. That's the ticket. No wait! They are talking about it but no one reports it anymore. It's too dull. That's because it is so successful.
Nice try.
Not nearly enough IT shops are doing ILM, this is the time in the cycle when products (and press releases) should be pouring out, and the trade press will still write about established technology. Just look at all the articles about D2D backup and iSCSI, two very well established and boring technologies (and I mean that in only the most positive way). I will admit that ILM is no longer the flavor of the month, but you would think that you'd hear more about it. We still hear about tape backup. Why not ILM?
My Theory
Or non-theory, really. My feeling is that ILM turned out to be too big a job for small companies. They couldn't put out comprehensive products because it was too hard to make and sell these products. Complete ILM takes resources at the customer level that only large companies have. Despite trying to develop all-encompassing solutions, many have fallen back on doing small parts of the puzzle. The companies that are still in the game are either big monster companies like EMC and IBM, implementing total ILM solutions, or small companies that do one small piece of the information management process. Classification software and appliances are still pretty hot. No one calls them ILM anymore because they are narrower than that.
So ILM is not dead, just balkanized. It is a small part of a larger market (Information Management) that has been broken down into even smaller components such as information movement and classification. ILM is not dead or transformed. Just hiding.