Loose coupling – Winning strategy for hardware, software and processes

With SDDC (Software Defined Data Center) and SDN (Software Defined Networking) coming to the fore these days, it is a good time to look at exactly why they are getting serious focus, and which particular qualities make them a winning strategy.

I’ve mentioned the term “loosely coupled systems” among my peers for quite a while, and it has finally begun to sink in. I still get asked regularly by people at all different levels and areas of IT exactly what it means.

For a lot of people these are well-known concepts, but for others they fall into the buzzword category and don’t get the focus they deserve.

What is Loose Coupling?

With computer systems (I use this general term because it covers both hardware and software), we have interconnections which create the overall system. Huh? Don’t worry, that is a strange sentence to work out the meaning of. What it essentially means is that the parts of the system (aka sub-systems) together make up the “system” that we use.

An example is a web site. The web application can have a web server front end, a data layer where the content is stored, and a networking layer which connects the outside world in and connects the web server to the database server. Each of these layers connects to the others with loose coupling. Requests are created and completed without creating a persistent tunnel. Different software can be injected at each layer as long as a connector (which many know as a driver) exists for the next layer.
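To make that concrete, here is a minimal sketch of the connector idea in code. The class and function names are hypothetical and purely for illustration: the web layer only depends on a small connector interface, so the data layer behind it can be swapped without touching the front end.

```python
# A minimal sketch of loose coupling between a web layer and a data layer.
# All names here are hypothetical, purely for illustration.

from abc import ABC, abstractmethod


class ContentStore(ABC):
    """The connector ("driver") contract the web layer codes against."""

    @abstractmethod
    def fetch(self, page_id: str) -> str:
        ...


class PostgresStore(ContentStore):
    def fetch(self, page_id: str) -> str:
        # In a real system this would issue a SQL query over a short-lived connection.
        return f"<html>content for {page_id} from Postgres</html>"


class DocumentStore(ContentStore):
    def fetch(self, page_id: str) -> str:
        # A different back end can be injected with no change to the web layer.
        return f"<html>content for {page_id} from a document database</html>"


def render_page(store: ContentStore, page_id: str) -> str:
    # The web layer only knows about the connector interface, not the implementation.
    return store.fetch(page_id)


if __name__ == "__main__":
    print(render_page(PostgresStore(), "home"))
    print(render_page(DocumentStore(), "home"))  # swapped layer, same request path
```

Swapping PostgresStore for DocumentStore changes nothing upstream, and that is the loose coupling at work.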

What is Tight Coupling?

One example of a tightly coupled system that many of us are facing right now is VMware View. If you are running VMware View 4.7 on VMware vSphere 4.x or 5.0 through vCenter 5.0, you are all good. The challenge comes when you want to move to a newer vCenter version. If you migrate to vCenter 5.1 to update your vSphere to 5.1, you have one major issue: VMware View 4.7 is not supported on the 5.1 platform.

So within the very connected ecosystem of VMware products, they have created a tightly coupled system. Because of the tight coupling, there is a limitation on the way the systems can be updated. To many, this is where Microsoft has presented real challenges. As the OS and applications become more inter-dependent, the tight coupling increases, which makes for a nightmare when you want to upgrade in parts and not necessarily the whole end-to-end environment.

At the same time, my VMware example could be described as a situation where the systems are coupled, so to speak, but the lack of API compatibility creates an interoperability issue. The dependencies have increased, and version management becomes a nightmare unless you can bring all of the systems up to new revisions in tandem.

Another example could be a .NET environment which uses specific .NET 3.5 methods that are not compatible with SQL 2012, or rather, SQL 2012 cannot accept connections from the 3.5 application code. Extra work will have to be done to connect these environments, and in some cases, where the gaps between software builds are large enough, it may not be possible.
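For contrast, here is what the tightly coupled version of the earlier sketch looks like, again with hypothetical names: the application reaches directly into one specific client library and version, so the two pieces can only move in lock-step.

```python
# A contrasting sketch of tight coupling. Names are hypothetical.

class LegacyDbClientV35:
    """Stands in for a client library pinned at an old protocol version."""

    PROTOCOL = "3.5"

    def query(self, sql: str) -> str:
        return f"[protocol {self.PROTOCOL}] {sql}"


class Application:
    def __init__(self) -> None:
        # The dependency is hard-coded; there is no connector interface to swap.
        self.db = LegacyDbClientV35()

    def get_report(self) -> str:
        return self.db.query("SELECT * FROM reports")


# Upgrading the database server to one that rejects protocol 3.5 now forces a
# change to Application itself; the two pieces can only be upgraded together.
if __name__ == "__main__":
    print(Application().get_report())
```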

What is the deal with APIs?

Every vendor that approaches me to discuss their product gets the same question from me: “Do you expose your API for me to interact with?” Why is this an important question? In the past (and still today), many systems are treated as black-box apps which only provide access to the customer through a proprietary interface like a web front end, a GUI or a command line interface (CLI).

An API (Application Programming Interface) provides a programmatic way to interact with the system. This allows us to read, write and manage the content of the system, and with a published API you can now extend that source system into any number of your own internal systems. APIs also allow other vendors to jump on board and leverage the methods offered by the source vendor.
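As a hedged illustration (the endpoint, token and field names below are made up), this is the kind of thing a published API makes possible: pulling the same data a GUI would show and feeding it into your own tooling, with no human clicking through screens.

```python
# A sketch of consuming a vendor's published API instead of its GUI.
# The endpoint URL, token and field names are hypothetical placeholders.

import json
import urllib.request

API_BASE = "https://vendor.example.com/api/v1"   # hypothetical endpoint
TOKEN = "replace-with-a-real-api-token"          # hypothetical credential


def list_inventory() -> list:
    req = urllib.request.Request(
        f"{API_BASE}/inventory",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # The same data a GUI would show, but now it can flow into your CMDB,
    # monitoring or reporting systems automatically.
    for item in list_inventory():
        print(item.get("name"), item.get("status"))
```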

An example would be VMware’s VAAI (vStorage APIs for Array Integration), which provides other vendors with a way to speak directly to the underpinnings of vSphere storage and get the most out of the baked-in features.

Read more on VAAI here: http://www.vmware.com/products/datacenter-virtualization/vsphere/storage-api.html

REST is best!

REST (REpresentational State Transfer) is an architecture with specific constraints that ensure its standardization, and by using the HTTP verbs (GET, PUT, POST, DELETE) it provides a common method to address any resource. When we refer to RESTful methods, it is a guarantee that the system behaves according to a well-known set of rules. Writing connectors and scripts for RESTful APIs is a saving grace for developers. It doesn’t guarantee a future-proof way to address that system, but it is much more likely that the target vendor will maintain that standard addressability and that any changes happen underneath the covers.
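As a quick sketch of what that standard addressability looks like in practice, here is a full create/read/update/delete cycle against a hypothetical RESTful endpoint, using the Python requests library. The host and resource paths are made up for illustration.

```python
# RESTful addressing in a nutshell: the same resource URL, manipulated with
# the standard HTTP verbs. The host and paths below are hypothetical.
# Assumes the requests library is installed (pip install requests).

import requests

BASE = "https://api.example.com/v1"

# POST: create a new virtual machine record
vm = requests.post(f"{BASE}/vms", json={"name": "web01", "cpus": 2}).json()

# GET: read it back by its resource URL
print(requests.get(f"{BASE}/vms/{vm['id']}").json())

# PUT: update the whole resource
requests.put(f"{BASE}/vms/{vm['id']}", json={"name": "web01", "cpus": 4})

# DELETE: remove it
requests.delete(f"{BASE}/vms/{vm['id']}")
```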

You need to understand these concepts

Quite simply, this is where things are going. I realize that my major work lately has been away from code and admin processes, concentrating instead on bringing concepts to life with people, process and technology, and preparing environments for the next step. It is quite a nice evolution, but breaking through to that next step is where the real challenge lies.

I recommend that you conceptually understand cloud topology and, more importantly, get a handle on the methodologies that create working cloud environments. Even if you aren’t in a development shop, get your hands on some DevOps reading and put those practices into place wherever you can. Every step you take now is a great step forward for your career and for your technology organization.

Don’t be in the laggards category on the technology adoption curve because it is much more difficult to accelerate once these systems are at a point of maturity.

[Image caption: “Hmmm…this doesn’t seem like a loosely coupled system”]
