Introducing the Cloud-Monolithic Computing Foundation

Raise your hands if you started the virtualization journey using oversized virtual machines <raises hand>. Let’s face it, we are in the process of making the same mistakes in the cloud that we did with early virtualization. It’s ok.

As many of the world’s leading open source and closed source traditional vendors lean forward to embrace the cloud, we can see that some of them are not going to get the memo.

bill-lumbergh

“Yeah if you could just go ahead and refactor everything on Docker and use microservices, that would be great.” Bill Lumbergh

The Infrastructure Over Apps Mistake

Why are we using the cloud? There are numerous reasons that drove us to the public cloud as a potential solution. They include:

  1. Changing from a capital to an operational expense model
  2. Utilizing self-service capabilities and APIs
  3. Leveraging scale-out capabilities to distribute applications
  4. Reducing infrastructure costs through commoditized cloud pricing

Well, we have the first two covered out of the gate. The challenge is that when we get to number three, we hit the first hurdle. A number of organizations that are looking toward the cloud as a new place to host infrastructure are finding out that their staff are simply re-platforming VMs from their current hypervisor to the cloud.

The problem is that the lift-and-shift methodology has become the most common way for folks to test out the cloud. The impact is a skewed result that will most likely look like a failed implementation. The implementation did fail, but not from the infrastructure perspective; it failed because the infrastructure was switched without thinking in the context of the application.

So, when these organizations that wanted to adopt scale-out infrastructure attempt their early migrations, they also hit a nasty surprise with point number four. The commoditized pricing that is available in the cloud quickly turns to sticker shock if workloads are sized like traditional virtual machines.

Only You Can Stop Forest Fires…and V2C

The practice of V2C (Virtual to Cloud) is only viable if you deploy very small instances. Even then, my recommendation is to always take a build-from-source approach and ensure that the only thing traversing the environments is data, as you back up and restore to the various locations where the data is required.

only-you

Having done the V2C testing myself, I can speak from experience that it is a terrible approach. Remember that cloud installations should be optimized for the thinnest possible underlying OS layer, reduced services, reduced I/O, and the smallest possible footprint so that you can select small flavors for your cloud instances.

aws-pricing

That 4 vCPU machine with 16 GB of RAM that ran in your vSphere environment without much issue is now going to run you a healthy $180 per month. That’s one of the reasons that AWS shows all of its pricing in hours. Illustrated by the hour ($0.252/hour), the pricing looks much more attractive. Hourly pricing is very effective when you are quiescing instances during lulls in utilization, but let’s be honest about how often that happens…not often.
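To put some numbers behind that, here is a quick back-of-the-napkin calculation in Python. The $0.252/hour figure is just the example rate quoted above; actual AWS pricing varies by region, instance type, and over time.

```python
# Rough monthly cost math for the example above. The hourly rate is the
# figure quoted in this post, not a current AWS price.
HOURLY_RATE = 0.252            # USD/hour for the example 4 vCPU / 16 GB instance
HOURS_PER_MONTH = 24 * 30      # always-on, roughly 720 hours

always_on = HOURLY_RATE * HOURS_PER_MONTH
print(f"Always-on: ${always_on:.2f}/month")            # ~ $181: the sticker shock

# The hourly model only pays off if you actually quiesce instances during lulls,
# e.g. 12 hours a day, weekdays only (~260 hours per month).
business_hours_only = HOURLY_RATE * (12 * 5 * 52 / 12)
print(f"Business hours only: ${business_hours_only:.2f}/month")  # ~ $66
```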

The Cloud-Native Computing Foundation may need a twin group created to tackle how to deal with monolithic cloud deployments. It’s happening already. The hope is that as organizations practice more with cloud infrastructure, they will realize all of the benefits that cloud can offer by adapting their deployments to suit the newly adaptive and agile infrastructure.




AWS Dedicated – Reducing Noisy Neighbors and More

It’s pretty interesting to see the back and forth as folks discuss the merits of cloud over bare-metal, and vice versa. Something that signaled a real nod to the value of bare-metal as an asset in the cloud was the recent announcement that AWS will be offering dedicated EC2 hosts. If AWS is the be-all and end-all, as some may say, this was clearly an indicator that bare-metal is necessary for a variety of reasons.

Keeping out the Noisy, and Potentially Naughty Neighbors

Say what you will about AWS and application performance, but the reality is that bare-metal does provide more consistent performance in many, if not most, cases. It isn’t that AWS can’t meet most requirements, but there are some workloads that just run better on dedicated instances. AWS already knows this and has been providing dedicated instances to tackle some of these needs.

The dedicated instance offering tackles both performance and security, and it has become a popular destination for many AWS users. Perhaps that was the start of the migration to a new plan which opened the door to the upcoming dedicated hosts feature.
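For context, dedicated tenancy is just a placement option when launching an instance. Here is a minimal boto3 sketch, assuming placeholder values for the region, AMI, and instance type:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # example region

# Launch an instance with dedicated tenancy so it lands on single-tenant hardware.
# The AMI ID and instance type below are placeholders; substitute your own.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},
)
print(response["Instances"][0]["InstanceId"])
```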

Dedicated hosts take it to the next level of isolation by ensuring that the entire EC2 host is accessible only to your tenant environment. Offering a number of features over and above dedicated instances, dedicated hosts have the familiar look of many on-premises virtual environments:

host-table
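Under the hood, working with a dedicated host is a two-step flow: allocate the host, then target it when launching instances. A minimal boto3 sketch, again with placeholder values for the Availability Zone, instance type, and AMI:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # example region

# Step 1: allocate an entire physical host to your account.
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    InstanceType="m4.large",         # the host is sized for this instance type
    Quantity=1,
    AutoPlacement="on",              # allow matching launches to land here automatically
)
host_id = host["HostIds"][0]

# Step 2: launch an instance pinned to that specific host.
ec2.run_instances(
    ImageId="ami-12345678",          # placeholder AMI
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```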

While some have leaned towards AWS offering an on-premises product for enterprise data centers, my opinion is that Amazon will keep adding enterprise-friendly features like dedicated hosts to create a smoother path for new customers to move into the cloud in slightly more iterative steps from their traditional legacy environments.

Licensing Savings using Dedicated Hosts

Let’s face it, the lawyers haven’t caught up with technology. Or maybe it is just that they want to stick it to customers by continuing to charge what many refer to as the virtualization tax. Running many products in virtualized environments requires licensing the entire host to support the guests running on it. It’s most commonly found in what many call the Oracle “parking garage” licensing, where you must license every possible host that an Oracle instance could run on. This is a big driver for organizations to seek out dedicated hosts to maintain compliance.

Speaking of compliance, there is also a whole lineup of regulatory challenges introduced by hosting on multi-tenant public cloud environments. Luckily, AWS and other public cloud providers have been good about capturing that demographic in many ways with compliance certifications. Dedicated hosts are the most certain way to ensure that your workloads maintain compliance. Keeping track of it is a whole different challenge, one which needs more than just a dedicated host to solve.

Should you Dedicate Instances and Hosts?

Pricing will be different. Requirements will be different. The important thing to look at with any technology decision is the requirements that drive the need to choose one provider, product, platform, or process over another.

I applauded the decision when Rackspace launched their OnMetal service, and I am giving the same praise here. Amazon is well-placed to provide this next style of service to its current and potential customers. The doors open a little wider to financial services and health services clients who may now be able to make the AWS cloud part of their compliant IT infrastructure. Dedicated hosts are a great place to start at the very least.




Thinking Like the Bad Actors and Prioritizing Security

Assume you’ve been breached. Period.

The reason that I start there is that I’ve learned from practice that we have to work on the assumption that our systems have been violated in one way or another. This is important because we have to start with a mindset to both discover the violation and prevent it in the future.

Who is it that has breached our systems? Well, we have a fun name for them…

Bad Actors

bad-actor

Hey, I like Kirk too, but you have to admit…he’s not really a good actor

No, not the kind that you see in SyFy remakes of popular movies, but the ones that have been infiltrating your infrastructure for nefarious purposes. Bad actors are those who have the single-minded purpose of breaching your security and then either doing something inside the environment or taking something back out.

All too often we hear about breaches long after they have happened. I’m a big fan of Troy Hunt’s web site Have I Been Pwned? It’s a helpful resource, and a reminder of just how important it is that we understand that bad actors exist and are pervasive in the world of internet-connected resources.
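If you want that check to be part of a routine rather than an occasional visit to the site, Have I Been Pwned also exposes an API. Here is a minimal sketch using the requests library; note that the current (v3) API requires an API key, and the endpoint and header names shown should be verified against the site’s documentation:

```python
import requests

def check_breaches(account: str, api_key: str) -> list:
    """Return known breaches for an account via the Have I Been Pwned v3 API."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-demo"},
        timeout=10,
    )
    if resp.status_code == 404:
        return []                    # no known breaches for this account
    resp.raise_for_status()
    return resp.json()
```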

Bad actors love the internet of things. Just imagine how much simpler it is to access resources when they are interconnected and internet-accessible. Physical security is the first place to look, and the concern runs all the way up the stack to the application layers. Using your mobile to access your bank site when you’re in Starbucks? Not a good idea. Seem paranoid to say that? That’s what every bad actor hopes you say.

Assume security is failed. Assume you’ve been breached. The next step comes with how you plan and prepare to discover and recover.

White Hat (aka Ethical) Hacking

Just under a year ago, I attended the BSides Delaware event. This was a very interesting opportunity to go outside of the normal conference circuit that I am used to attending. I would liken BSides to the VMUG of security, where DefCon is the VMworld. These are great events, and they touch on every aspect of security from application, to network, to physical, and even security of yourself, including self-defense tactics.

One thing that you learn about hacking is that it takes a hacker to find and prevent a hacker. White Hat hacking has been a practice for many years, and it is an important part of the security and networking ecosystem. If you aren’t already engaging an organization to help with penetration testing or some form of security analysis, you absolutely should.

The same skills that drive the bad actors have been embraced by white hat hackers to provide a positive result from that experience. We use real users to provide UX guidance, so it only makes sense that we should use the same methodology for our security strategy.

Make Security Part of Infrastructure Lifecycle

Whether it’s your application lifecycle or your infrastructure deployment, security and automated testing should definitely be a part of the workflow. I was lucky to have a great conversation on my Green Circle Live! podcast recently with Edward Haletky.

gc-live-episode-5

We chatted about how there is a fundamental flaw in both the home and the data center. The whole podcast is a must-listen if you ask me, and I encourage folks to rethink security as something that should be top of mind, not an afterthought.

There are lots of bad actors out there. I prefer to keep them in the movies and out of my data, how about you?




Adding Yourself to Your Own Twitter Lists

This may seem like a rather simple task, but apparently it is one that the team at Twitter in charge of the web UI decided wasn’t needed. I was creating a Twitter list for the #vDM30in30 group and realized that I was unable to add myself to the list. Huh?!

First, let’s start with why a Twitter list is helpful:

  • Organizes a single group of folks so you can view all timelines
  • Tracks membership that is visible to everyone on Twitter or privately if you’d like
  • Can be subscribed to
  • Can be embedded into a web page

Those are great reasons when we run events like Virtual Design Master or #vDM30in30, where we want to be able to gather the Twitter accounts of people who are contributing content. It’s a great way to bring focus to work that may otherwise have slipped by in your main Twitter timeline with so much happening out there.

Twitter Web UI for Lists

Firstly, the Twitter web UI for lists seems backward to me. You can view lists on your profile using the https://www.twitter.com/YOURPROFILE/lists URL. In my case, it’s https://twitter.com/discoposse/lists to view all of my lists.

Thanks to the magic of RESTful URLs, I also know that I can use the slug name to view the list directly. For the “#vDM30in30 2015” list, Twitter assigned “vdm30in30-2015” as the HTTP friendly slug: https://twitter.com/discoposse/lists/vdm30in30-2015

twitter-lists-view

The RESTful goodness extends to members, too: you can see them with this URL: https://twitter.com/discoposse/lists/vdm30in30-2015/members

twitter-lists-members

Cool, right? So, it should be super easy to add new members right here from this page. (SPOILER ALERT: It isn’t!!)

Twitter seems to think that you should be adding users in the user view. For some reason, this seems entirely reversed to me. I want to see a list view and add dozens of users. Instead, I have to search for each user:

user-lists-add

And from there, check off the list of choice for each user:

user-lists-membership

If I were managing the process for a large list of users, this would be unruly to say the least. This is reminiscent of the “there has to be a better way!!” infomercials.

BetterWay

TweetDeck to the Rescue!

I’m a TweetDeck for Mac user, so luckily there IS a better way. Just open TweetDeck and click the Add Column plus icon, which brings up the column dialog. Choose Lists:

TweetDeck-add-column

You’ll see your available lists to choose from:

TweetDeck-pick-list

Just highlight one, which brings it up in the right-hand pane and presents a nifty Edit button:

TweeDeck-list-edit

Now we have the list of members and a search dialog:

list-view

Type your Twitter username into the dialog. This works for anyone you want to add, but the point was that we normally can’t add ourselves to the list:

list-user-search

Click the + icon in the user dialog and it will become a green check mark. You will also see your user show up in the right-hand pane as a list member now:

list-add-user

There really was a better way.

The End Result!

After toiling over the UX failings of the Twitter web UI, I now have a way to add myself to lists, and to easily add more users to lists thanks to TweetDeck. Many other third-party clients provide the same ease of use, so flavor to taste with the Twitter client of your choice.
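And if you prefer scripting over clicking, the same operation is available through Twitter’s REST API (lists/members/create). Here is a minimal sketch using tweepy, with placeholder credentials; the API surface has shifted over the years, so treat this as a starting point rather than a guaranteed recipe:

```python
import tweepy

# Placeholder credentials: substitute your own app keys and access tokens.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Add a member (yourself included) to a list by its slug.
api.add_list_member(
    slug="vdm30in30-2015",
    owner_screen_name="discoposse",
    screen_name="discoposse",
)
```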

Happy Twitter Listing, and here is the embedded list for your pleasure 🙂

Tweets from https://twitter.com/discoposse/lists/vdm30in30-2015