Setting up a Slack WebHook to Post Notifications to a Team Channel

If ChatOps is something you’ve been hearing a lot about, there is a reason. Slack is fast becoming the de facto standard in what we are calling ChatOps. Before we go all out into making chatbots and such, the first cool use-case I explored is enabling notifications for different systems.

In order to send any notifications to Slack, you need to enable a WebHook. This is super easy, but it made sense to walk through a quick example so that you can see the flow yourself.

Setting up the Slack Webhook

First, log in to your Slack team in the web interface. From there we can open up the management view of the team to get to the apps and integrations. Choose Additional Options under the settings icon:

You can also get there by using the droplets in the left-hand pane and selecting Apps and Integrations from the menu:

Next, click the Manage button in the upper right portion of the screen near the team name:

Select Custom Integrations, and from there click the Incoming WebHooks option:

Choose the channel you want to post to and then click the Add Incoming WebHooks Integration button:

It’s really just that easy! You will see a results page with a bunch of documentation, including your new WebHook URL:

Other parts of the documentation also show you how to configure some customizations, and even include an example cURL command that demonstrates how to post using the new WebHook integration:

If you go to a command line where the cURL command is available, you can run the example command and you should see the results right in your Slack UI:
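If you would rather call the WebHook from a script than from the command line, here is a minimal sketch in Python using the requests library. The webhook URL below is a placeholder; substitute the one from your own results page.

```python
# Minimal sketch: post a message to a Slack Incoming WebHook.
# The URL is a placeholder -- use the WebHook URL from your own integration.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXX"

response = requests.post(WEBHOOK_URL, json={"text": "Hello from my new Incoming WebHook!"})
print(response.status_code, response.text)  # Slack replies with "ok" on success
```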

There are many other customization options, such as which avatar to use and how the message text is formatted. You can get at the WebHook any time under the Incoming WebHooks area within the Slack admin UI:

Now all you have to do is point whatever script or function you want to send notifications from at the WebHook, and you are off to the races.
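As an illustration, a small helper like this hypothetical sketch could be dropped into an existing script. The payload fields (text, username, icon_emoji, channel) are the ones described in the Incoming WebHooks documentation; overrides such as the username or avatar only take effect if your integration settings allow them.

```python
# Hypothetical helper: wrap the WebHook post so any script can send a notification.
import requests

def notify_slack(webhook_url, message, username=None, icon_emoji=None, channel=None):
    """Post a notification to a Slack Incoming WebHook, with optional overrides."""
    payload = {"text": message}
    if username:
        payload["username"] = username      # override the posting name
    if icon_emoji:
        payload["icon_emoji"] = icon_emoji  # e.g. ":robot_face:" avatar override
    if channel:
        payload["channel"] = channel        # e.g. "#ops-alerts"
    resp = requests.post(webhook_url, json=payload)
    resp.raise_for_status()
    return resp.text  # "ok" on success

# Example (placeholder URL): call this from a backup job or monitoring check.
# notify_slack("https://hooks.slack.com/services/...", "Nightly backup completed",
#              username="backup-bot", icon_emoji=":floppy_disk:")
```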




Top vBlog Voting 2017 – Supporting Community Bloggers

Every year we are seeing more and more community contributors in the blogging ecosystem. My own work here at DiscoPosse.com and through my role at Turbonomic in the community has been so enjoyable to be a part of because of the support that I continue to receive from readers and peers in many tech communities.

Eric Siebert has been hosting the Top vBlog voting for years, and it has grown from a handful of participants to a veritable must-read list that covers every aspect of virtualization, networking, scripting, and more. This year I am honoured to be among the contributors listed and am also very proud to have Turbonomic sponsor the voting.

My blog is listed in the voting under my name (just search for DiscoPosse) and my podcast (GC ON-Demand) is also in the running for best podcast.

I would greatly appreciate a vote if you feel that I’m providing content that is valuable, and of course, please extend your votes to all of the great IT community who surrounds us all. For those who know the work that Angelo (@AngeloLuciani), Melissa (@vMiss33) and I do with Virtual Design Master, you will know that many of the participants are also in the voting.

Your support of our amazing blogger and podcast community is always appreciated.  Thank you!

Vote here for this year’s event: http://vsphere-land.com/news/voting-now-open-for-top-vblog-2017.html




MSPOG – Accepting the Reality of Multiple Single Panes of Glass

You probably dread the phrase as much as I do. We hear it all the time on a sales call or a product demo: “this is the single pane of glass for you and your team”. The problem is that I’ve been working in the industry a long time and have been using a lot of single panes of glass…at the same time. Many of my presentations have been centered around the idea that we must embrace the right tool for the right task, and not try to force everything through one proverbial funnel because the reality is that we cannot do everything with any single product.

For this reason, it’s time to embrace MSPOG: Multiple Single Panes of Glass

Many Tools, Many Tasks, One Approach

Using a unified approach to something is far more important than requiring a single product to do it. I’m not saying that you should just willy-nilly glue together dozens of products and accept it. What I am saying is that we have to dig into the core requirements of any task that we perform and think about things in a very Theory of Constraints (ToC) way. Before we even dive into some use-cases, think about what we are taught as architects: use the requirements to define the conceptual, logical, and then physical solution, all the while understanding and making our decisions based on risks and constraints.

If you have a workflow that contains two or three different processes within it, you may be able to use a single tool for all of them. What if one of those processes is best solved with a different tool? This becomes a question of the requirements. Is it a risk if we embrace a second tool? More importantly, is it a risk or a constraint to use a single tool? This is the big question we should be asking ourselves continuously.

Imagine a virtual machine lifecycle process. We need to spawn the VM from a template, give it a network address, deploy an application into it, and then make sure it is continuously managed by a patch management and configuration management system. I know that you’re already evaluating how we should do this at the physical level by saying “use Ansible!” or “use Puppet!” or “use vRealize Automation!”. Stop and think about what the process is from end-to-end.

Our constraints here are that we are using a VMware vSphere 6.5 hypervisor, a Windows Server 2016 guest, and NGINX with a Ruby on Rails application within the guest.

  1. Deploy a VM from template – You can do this with any number of tools. Choose one and think about how we move forward from here
  2. Define IP address – We can use vRO, vRA, Puppet, Chef, or any number of tools. You can even do some rudimentary PowerCLI or other automation once the machine is up and running
  3. Deploy your application – App deployment can be done with something like Chef, Puppet, or Ansible, as well as the native vRO and vRA with some care and feeding
  4. Patch management – Now we get more narrow. Most likely, you are going to want to use SCCM for this one, so this is definitely bringing another pane of glass in
  5. Configuration management – Provided you use SCCM because of the Windows environment, you can use that as well for configuration management…but what about the nested applications and configurations such as websites and other deeper node-specific stuff? Argh!!! (A rough sketch of this end-to-end flow follows the list.)
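To make the idea concrete, here is a rough, hypothetical Python sketch of a single workflow that calls out to whichever tool owns each step. Every function name is a placeholder; in practice each would wrap a vRA/vRO API call, an Ansible or Puppet run, a PowerCLI script, or an SCCM task.

```python
# Hypothetical sketch of "one approach, many tools": a single workflow whose
# steps are each owned by a different product behind the scenes.

def deploy_vm_from_template(template, vm_name):
    """Step 1: clone the VM from a template (vRA, vRO, PowerCLI, ...)."""
    ...

def assign_ip_address(vm_name):
    """Step 2: set the guest IP (vRO/vRA workflow, Puppet, Chef, or in-guest script)."""
    ...

def deploy_application(vm_name, app_definition):
    """Step 3: install NGINX and the Rails app (Ansible, Chef, Puppet, ...)."""
    ...

def register_patch_management(vm_name):
    """Step 4: enroll the Windows guest with SCCM for patching."""
    ...

def enforce_configuration(vm_name, desired_state):
    """Step 5: ongoing config management for the node and its nested app config."""
    ...

def provision(vm_name):
    # One workflow on top, multiple single panes of glass underneath.
    deploy_vm_from_template("win2016-template", vm_name)
    assign_ip_address(vm_name)
    deploy_application(vm_name, app_definition={"web": "nginx", "app": "rails"})
    register_patch_management(vm_name)
    enforce_configuration(vm_name, desired_state="baseline")
```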

Even if you came out of the bottom of those 5 steps with just two tools, I would still suggest reevaluating, because you may have overshot the capabilities of those two tools. It is easy to see that if we start narrowing to a single pane of glass approach, we are now jamming square blocks into round holes just to satisfy our supposed need to use a single product.

What we do need to do is look for the platforms within that subset of options that have the widest and deepest set of capabilities, to ensure we aren’t stacking up too many products to achieve our overall goals.

The solution: a Heads-Up Display for your Single Pane of Glass

Automate in the background and display in the foreground. We need to think more about having the proverbial single pane of glass be a visible layer on top of the real-time activity that is happening underneath. Make your toolkit a fully featured solution as a whole, with a focus on how you can do as much as possible within each product. Also, reevaluate regularly. I can’t even count how many times I’ve been caught out by using something a specific way, only to find out in a later version that the functionality was extended and I was using a less desirable, or even deprecated, method.

There is a reason that we have a mainframe at the centre of many large infrastructure shops. You wouldn’t tell them to shed their mainframe just to deploy all their data on NoSQL, right? That would be lunacy. Let’s embrace our Multiple Single Panes of Glass and learn to create better summary screens to annotate the activity. This way we also train ourselves to automate under the covers and trust the layers underneath.

I, for one, welcome our Multiple Single Panes of Glass.


Image source:  https://hudwayglass.com



Why I Aeropress Coffee but Automate Everything Else

Many of my presentations start with me quoting the Rule of Three. Then I tell you three things about myself:

  1. I’m lazy
  2. I despise inconsistency
  3. Did I mention I’m lazy?

The reason that these are important things to know is that being lazy is a fundamental reason why I have leapt into automation from early on in my career.

Being the Right Kind of Lazy

The word lazy can sound like a bad thing. In the case of automation, it is a good thing. Clarence Bleicher of Chrysler was once quoted in the early days of the company as saying:

“When I have a tough job in the plant and can’t find an easy way to do it,” Mr. Bleicher said, “I have a lazy man put on it. He’ll find an easy way to do it in 10 days. Then we adopt that method.”

That pretty much sums it up. Laziness in the sense of not wanting to do repetitive, mundane tasks is the kind of laziness we are aspiring to here. Not the lie-down-and-do-nothing kind of lazy, as tempting as that is.

There was a key realization in my first year of work: when I saw a way to make something faster or more efficient by taking safe and appropriate shortcuts, I took them. When I made the leap into a technology career, it didn’t take long to find the shortcuts. That was the whole idea of technology, after all!

I drive a Stick Shift and Aeropress my Coffee

The reason that I personally do a lot of automation is so that I can choose to put the additional time I have toward removing other technical debt, or even just enjoying myself. Part of the fun dichotomy of many of the very pro-automation technologists is that a lot of us tend to also be huge coffee enthusiasts. I’m talking about the hand-grind, slow-steep, Aeropress, nearly scientific recipe kind of coffee people. Shouldn’t a lazy, automation-oriented person try to eliminate the time being spent on that effort? Ahhhh…there is the interesting part.

Automation needs to be in service of the goal. The goal is quality. I could increase my output by putting in a Keurig or a Nespresso, or some kind of automated espresso machine. That rolls up to additional cost, and then there is the quality of the taste. My choice is to take the lower cost to get a handcrafted taste that I know I enjoy. I have also done the math and realize that buying the machine may amortize over the long run, but I can also use my Aeropress on road trips and such. That is the consistency target I choose.

My choice to drive a manual transmission was primarily about cost, and secondarily about the enjoyment of it. That is really all there is to it. The time and effort savings of having an automatic impact my personal enjoyment of the experience, and I don’t necessarily feel that trade is worth it.

Knowing and Measuring your Goal

How we define quality is as important as how we achieve it. Without a tangible way to measure the results of automation and the net effect on quality, we can end up just acquiring more technical debt, or spending time on tasks that don’t remove constraints at the right level. Just like with my Aeropress and my manual transmission, I have chosen my measurement of where I can achieve quality through automation so that I can attack other constraints.

Personal coffee taste is somewhat intangible. The time you spend deploying servers to the cloud and running patch management routines and other repetitive tasks is very tangible. Between time and quality, the effort to automate many operational tasks pays off rather quickly. Having had a background in desktop support at the onset of my enterprise IT career, I quickly created scripts and processes to avoid doing those repetitive tasks. I put all of that on a server, and then all I needed to do manually was connect to the server. Voila!

Measure your quality in time, in cost, or sometimes in those intangible ways such as personal enjoyment of the work. If you are spending hours each week doing repetitive work, you could spend a little time automating it and then put the time you gain back in the following weeks into new work and more exciting tasks.
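As a simple illustration of measuring in time, here is a back-of-the-napkin calculation with entirely made-up numbers; plug in your own to see when an automation effort pays for itself.

```python
# Illustrative payback math: a weekly task that takes 3 hours by hand versus a
# one-time 10-hour automation effort. All numbers are placeholders.
manual_hours_per_week = 3       # time the repetitive task costs today
automation_effort_hours = 10    # one-time cost to script it away
residual_hours_per_week = 0.25  # time still spent watching the automation

weekly_savings = manual_hours_per_week - residual_hours_per_week
weeks_to_break_even = automation_effort_hours / weekly_savings
hours_back_first_year = weekly_savings * 52 - automation_effort_hours

print(f"Break-even after {weeks_to_break_even:.1f} weeks")
print(f"Roughly {hours_back_first_year:.0f} hours back in the first year")
```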

Measure always, and not just for you, but for your team and your organization. Then you can sit back and make a nice Aeropress coffee while you watch your automation work happening for you.




Turbonomic Technical Poster Goodness

As a long-time fan of the technical posters that came out of the vCommunity and the PowerShell community, I was very happy to have a chance to work with Rene Van Den Bedem (aka @VCDX133) on something fun at Turbonomic. Rene and I teamed up to craft the first official Turbonomic technical poster, for Turbonomic version 5.9, which you can download in PDF format right from the Green Circle Community.

Big thanks to Rene for all that he has done to help my team with this, and of course for all of his continued support of many community efforts across all of our shared IT communities.

Click the handy dandy link here to go to the site and get your own Turbonomic Technical poster!




The Need for IT Operations Agility: Lessons of WannaCry

There is little doubt that the news of ransomware like the recent WannaCry (aka Wcry, WannaCrypt) outbreak taking hold in critical infrastructure hits home with every IT professional. The list of those affected by any ransomware or critical vulnerability is made even more frightening when it means the shutting down of services that could literally affect people’s health, as the NHS is experiencing.

Would it be any different if it were a small hardware chain? What if it was a bank? What if it was your bank, and your money was now inaccessible because of it? The problem just became very real when you thought about that, didn’t it?

Know Your (Agile) Enemy

Organizations are struggling with the concept of more rapid delivery of services. We often hear that the greatest enemy of many products is the status quo. It becomes even more challenging when we have bad actors who are successfully adopting practices to deliver faster and to iterate continuously. We aren’t talking Lorenzo Lamas and Jean-Claude Van Damme kind of bad actors, but the kind who will lock down hospital IT infrastructure, putting lives at risk, in search of ransom.

While I’m writing this, the WannaCry ransomware has already evolved and morphed into something more resilient to the protections that we had thought could prevent it from spreading or taking hold in the first place. We don’t know who originally wrote the ransomware, but we do know that in the time we have been watching it, it has been getting stronger. As quickly as we thought we were fighting it off by reducing the attack surface, new variants appeared that worked their way around those defenses.

The Risks of Moving Slowly

Larger organizations are often wrestling with the risks of moving quickly with things like patching and version updates across their infrastructure. There are plenty of stories about an operating system patch or some server firmware that was implemented on the heels of its release, only to find out that it took down systems or impacted them negatively in one way or another. We don’t count or remember the hundreds or thousands of patches that went well, but we sure do remember the ones that went wrong. Especially when they make the news.

This is where we face a conundrum. Many believe that having a conservative approach to deploying patches and updates is the safer way to go. Those folks view the risk of deploying an errant patch as the greater worry versus the risk of having a vulnerability exposed to a bad actor. We sometimes hear that because something sits in the confines of a private data center with a firewall at the ingress, the attack surface is reduced. That’s like saying there are armor-piercing bullets, but we just hope that nobody who comes after us has them.

Hope is not a strategy. That’s more than just a witty statement. That’s a fact.

Becoming an Agile IT Operations Team

Being agile on the IT operations side of things isn’t about daily standups. It’s about real agile practices, including test-driven infrastructure and embracing platforms and practices that let us confidently adopt patches and software at a faster rate. A few key factors to think about include:

  • Version Control for your infrastructure environment
  • Snapshots, backups, and overall Business Continuity protections
  • Automation and orchestration for continuous configuration management
  • Automation and orchestration at all layers of the stack

There will be an onslaught of vendors using WannaCry as part of their pitch to help drive up the value of their protection products. They are not wrong to leverage this opportunity. The reality is that we have been riding the wave of using hope as a strategy. When it works, we feel comfortable. When it fails, there is nobody to blame except those of us who have accepted moving slowly as an acceptable risk.

Having a snapshot, restore point, or some quickly accessible clone of a system will be a saving grace in the event of infection or data loss, but there are practices that need to be wrapped around it. The tool is not the solution on its own; it enables us to create the methods that turn it into a full solution.

Automation and orchestration are needed at every layer. Not just for putting infrastructure and applications out to begin with, but for continuous configuration management. There is no way that we can fight off vulnerabilities using practices that require human intervention throughout the remediation process. The more we automate, the more we can build recovery procedures and practices to enable clean rollbacks in the event of a bad patch as well as a bad actor.
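As a purely illustrative sketch of that idea, here is what an automated patch run with a built-in escape hatch might look like. Every helper below is hypothetical and would wrap whatever your hypervisor, patch tool, and monitoring stack actually expose.

```python
# Hypothetical sketch: snapshot first, apply the patch, verify health, and roll
# back automatically if verification fails. A bad patch becomes a short detour
# instead of an outage.

def snapshot(host):
    """Take a restore point (hypervisor snapshot, backup job, image clone...)."""
    ...

def apply_patch(host, patch_id):
    """Push the patch via SCCM, WSUS, Ansible, or your tool of choice."""
    ...

def health_check(host) -> bool:
    """Run smoke tests: services up, ports answering, application responding."""
    ...

def rollback(host, restore_point):
    """Revert to the snapshot taken before the change."""
    ...

def patch_with_rollback(host, patch_id):
    restore_point = snapshot(host)
    apply_patch(host, patch_id)
    if not health_check(host):
        rollback(host, restore_point)
        return False
    return True
```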

Adapting IT Infrastructure to be Disposable

It’s my firm belief that we should have disposable infrastructure wherever possible. That also means we have to enable operations practices that let us lose portions of the infrastructure, whether by accident, incident, or on purpose, with minimal effect on the continuation of production services. These disposable IT assets (software and hardware) enable us to create a fully automated, full-stack infrastructure, and to protect it and provide resilience with a high level of safety.

We all hope that we won’t be on the wrong side of a vulnerability. Having experienced it myself, I changed the way that I approach every aspect of IT infrastructure. From the hardware to the application layers, we have the ability to protect against such vulnerabilities. Small changes can have big effects. Now is always the time to adapt to prepare for it. Don’t be caught out when we know what the risks are.