Distributed, dynamically bound systems are not new. The Internet is an example of a highly distributed system where resources are made available over open protocols. Back in the '90s there were an awful lot of service bus architectures as well, usually built around a big piece of infrastructure such as an object broker. Both approaches have shortcomings. The Internet doesn't have enough programming heft on its own; you have to augment it with all types of other objects loaded into clients. Object brokers restrict your choice of programming language and are complex and expensive to implement.
A service-oriented architecture, on the other hand, combines concepts from both: loosely coupled resources, protocol simplicity, language independence, and ease of implementation. This affords an opportunity to virtualize an enterprise application, and that virtualization makes development easier and more robust. Scaling up applications becomes much simpler when the components are not tightly tied together. Need a new function? Add a new callable service and publish it. Need to change something? Change one service without recompiling the whole application.
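To make "add a new callable service and publish it" concrete, here is a minimal sketch using JAX-WS, the standard Java web-service API of that era. The service name, its one business function, and the endpoint URL are all illustrative assumptions, not part of any particular product:

import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A minimal sketch of a new callable service. The name and logic are
// illustrative; the point is that publishing new functionality does
// not require recompiling the rest of the application.
@WebService
public class CreditCheckService {

    // One self-contained business function exposed over the wire.
    public boolean isCreditworthy(String customerId, double amount) {
        // Placeholder logic; a real service would consult a data store.
        return amount < 10000.0;
    }

    public static void main(String[] args) {
        // Publish the service at an (assumed) endpoint URL. Clients bind
        // to the published contract, not to this class or this server.
        Endpoint.publish("http://localhost:8080/credit", new CreditCheckService());
        System.out.println("CreditCheckService published.");
    }
}

Changing the service means redeploying this one endpoint; everything else that calls it keeps working against the same contract.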
.NET does this too, but on too small a scale. When you consider systems with thousands of functions, working over the Internet or other long-distance networks, accessing components becomes a serious problem. In an SOA, you don't request the processing object itself, just that it perform the function you need. That is also much less bandwidth-intensive.
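Here is a hedged sketch of the client side of that idea, under the same assumptions as the example above: the caller holds no remote object reference, only a contract and an endpoint, so just the request and the result cross the wire. The QName values are assumptions, and the hand-written interface stands in for one that would normally be generated from the published WSDL:

import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

public class CreditCheckClient {
    public static void main(String[] args) throws Exception {
        // The client knows only the service contract and its address;
        // these URL and QName values are assumptions for this sketch.
        URL wsdl = new URL("http://localhost:8080/credit?wsdl");
        QName name = new QName("http://service.example/", "CreditCheckServiceService");
        Service service = Service.create(wsdl, name);

        // Ask the service to perform the function. Where and how it runs
        // is the provider's concern; only the answer comes back.
        CreditCheck port = service.getPort(CreditCheck.class);
        System.out.println(port.isCreditworthy("C-1001", 2500.0));
    }
}

// Local interface describing the remote contract (an assumption; in
// practice it is generated from the service's published WSDL).
@WebService
interface CreditCheck {
    boolean isCreditworthy(String customerId, double amount);
}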
The hang-up with SOA is the underlying infrastructure. While SOA makes the application layer easier to manage, the components of the SOA are too often tightly bound to physical infrastructure such as servers and storage devices. To make SOA truly efficient, we need to virtualize and abstract this layer too. Servers, network resources, storage, and system-level applications also need to become services. That way, applications can request system resources as they need them and receive only what they need, which is very efficient. If a service is running under a light load, it should ask for less processing power and network bandwidth, freeing resources for other, heavier users. Rather than having administrators analyze and make these changes manually, dynamically allocating only the resources a component needs leads to better use of what's available. By negotiating with those resources, every component gets most of what it needs rather than facing an all-or-nothing scenario.
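No standard API for that kind of negotiation existed, so the following is a purely hypothetical sketch. ResourceBroker, ResourceRequest, and Lease are invented names standing in for whatever a virtualized infrastructure layer might expose:

// A hypothetical sketch of a component negotiating for resources.
// None of these types are a real API; they illustrate the pattern
// of asking for what you need and accepting what is granted.
public class AdaptiveService {

    public interface ResourceBroker {
        // The broker may grant all, part, or a renegotiated version
        // of the request, returned as a lease.
        Lease negotiate(ResourceRequest request);
    }

    public static class ResourceRequest {
        public final int cpuShares;
        public final int networkMbps;
        public ResourceRequest(int cpuShares, int networkMbps) {
            this.cpuShares = cpuShares;
            this.networkMbps = networkMbps;
        }
    }

    public static class Lease {
        public final int cpuShares;
        public final int networkMbps;
        public Lease(int cpuShares, int networkMbps) {
            this.cpuShares = cpuShares;
            this.networkMbps = networkMbps;
        }
    }

    private final ResourceBroker broker;

    public AdaptiveService(ResourceBroker broker) {
        this.broker = broker;
    }

    // Under light load, request a small footprint and free capacity for
    // heavier users; under heavy load, scale the request up. Getting a
    // partial lease back still beats the all-or-nothing model of
    // dedicated servers.
    public Lease adjustForLoad(double requestsPerSecond) {
        ResourceRequest want = (requestsPerSecond < 100.0)
                ? new ResourceRequest(2, 10)
                : new ResourceRequest(16, 100);
        return broker.negotiate(want);
    }
}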
SOAs are an evolution, not a revolution, but they're a good evolution. They give system architects and administrators an opportunity to build a more efficient and more easily manageable system. Let's get the hardware under the same control. Hardware, software, it's all just resources that need to be reallocated as needs change.