Thinking Like the Bad Actors and Prioritizing Security

Assume you’ve been breached. Period.

I start there because experience has taught me that we have to work on the assumption that our systems have been violated in one way or another. That assumption matters because it forces a mindset of both discovering the violation and preventing the next one.

Who is it that has breached our systems? Well, we have a fun name for them…

Bad Actors


Hey, I like Kirk too, but you have to admit…he’s not really a good actor

No, not the kind that you see in SyFy remakes of popular movies, but the ones that have been infiltrating your infrastructure for nefarious purposes. Bad actors are those with the single-minded purpose of breaching your security, either to do something inside the environment or to take something back out.

All too often we hear about breaches long after they have happened. I’m a big fan of Troy Hunt’s website Have I Been Pwned? It’s a helpful resource, and a reminder of just how important it is that we understand that bad actors exist and are pervasive in the world of internet-connected resources.

Bad actors love the internet of things. Just imagine how much simpler it is to access resources when they are interconnected and internet accessible. Physical security is the first place to look, but the threat surface runs all the way up the stack to the application layers. Using your mobile to access your bank site when you’re in a Starbucks? Not a good idea. Does that seem paranoid? That’s exactly what every bad actor hopes you’ll say.

Assume security has failed. Assume you’ve been breached. The next step is how you plan and prepare to discover and recover.

White Hat (aka Ethical) Hacking

Just under a year ago, I attended the BSides Delaware event. It was a very interesting opportunity to go outside the normal conference circuit I am used to attending; if DefCon is the VMworld of security, BSides is its VMUG equivalent. These are great events, and they touch on every aspect of security, from application, to network, to physical, and even security of yourself, including self-defense tactics.

One thing that you learn about hacking is that it takes a hacker to find and prevent a hacker. White hat hacking has been a practice for many years, and it is an important part of the security and networking ecosystem. If you aren’t already engaging an organization to help with penetration testing or some form of security analysis, you absolutely should.

The same skills that drive the bad actors have been embraced by white hat hackers to provide a positive result from that experience. We use real users to provide UX guidance, so it only makes sense that we should use the same methodology for our security strategy.

Make Security Part of Infrastructure Lifecycle

Whether it’s your application lifecycle, or your infrastructure deployment, security and automated testing should very definitely be a part of the workflow. I was lucky to have a great conversation on my Green Circle Live! podcast recently with Edward Haletky.


We chatted about how there is a fundamental flaw in both the home and the data center. The whole podcast is a must-listen if you ask me, and I encourage folks to rethink security as something that should be top of mind, not an afterthought.

There are lots of bad actors out there. I prefer to keep them in the movies and out of my data, how about you?




SDN challenges – “You can keep your networking gear. Period.”

You may recall a statement regarding some big U.S. legislation that gave us the forever-quoted phrase: “You can keep your insurance. Period.” It caused quite a ruckus in the insurance industry, for providers and customers alike, because it was found to be untrue.

So just imagine a similar situation about to come up in the enterprise networking environment. With Software Defined Networking (SDN) being the hottest buzzword and the most aggressively marketed paradigm shift in recent months, we are about to hit a crossroads where adoption may leave many customers taking on unexpected costs, despite being pitched a similar line: that SDN will simply run as an overlay, and you can keep your existing networking hardware.

Let’s take a look at three particular challenges that companies face as they evaluate SDN, weigh the costs against the benefits, and figure out how it relates to existing infrastructure.

Challenge 1 – No reduction of ports

This is one of the most common misconceptions around SDN. The idea that ports will be reduced is unfounded, because the number of uplinks into host systems, virtualized or not, will remain the same. If anything, we will have more uplinks as scale-out commodity nodes are used in the data center to spread workloads around.

The reduction in ports will happen as a result of the migration to higher speed ports like 40GbE and up, but the consolidation level will be limited for physical endpoints. SDN is a great enabler for creating and leveraging overlay networks and making physical configuration less of a factor in the logical design of the application workloads.

In order to get the savings on per-port utilization, the move to 40GbE and higher ports will trigger the rollover of existing hardware and expansion to new physical networking platforms. In other words, you need to change your existing hardware. Hmmm…that wasn’t in the original plan.

Another interesting shift in networking is the new physical topology, with ToR (Top of Rack) switches connected to a centralized core infrastructure. The leaf-spine design is seeing wider use and continues to prove itself as an ideal way to separate workloads and provide effective physical isolation, which carries other benefits as well.
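As a back-of-envelope illustration of how a leaf-spine design trades port counts against bandwidth (the port counts below are hypothetical, not tied to any particular switch), the oversubscription ratio of a leaf falls directly out of the physical layout:

```python
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to spine-facing bandwidth on a leaf
    switch. 1.0 means non-blocking; higher means contention under full load."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 10GbE server ports, 4 x 40GbE uplinks to the spine.
print(oversubscription(48, 10, 4, 40))  # 3.0 -> a 3:1 oversubscribed fabric
```

Note how moving the uplinks to higher-speed ports, rather than shrinking the server-facing port count, is what changes the ratio, which lines up with the point above about port reduction.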

Challenge 2 – Policy-based delivery requires policies

This is the business process part that can add a real challenge for some organizations. Putting a policy-based framework into place is only truly going to add value when you have business policies that can leverage it. Many CRM and Service Desk implementations fail because of the lack of adoption which stems from a lack of understanding of existing processes.

Many organizations are having difficulty adapting to cloud implementations because cloud is a very process-oriented practice. As more and more companies move to embrace cloud practices, the move toward SDN will become more natural. There is much more awareness now about where effort is needed to make SDN deployments successful.

Challenge 3 – Your physical gear doesn’t support your SDN platform

Other than the previous limitations where we mentioned the port speed issues for higher consolidation levels, there is also the issue of firmware and software capability on existing ASIC hardware. As an example, you can use Cisco ACI as your SDN product of choice, but if you are running all Cisco Catalyst equipment I have some bad news for you. (*UPDATE 11/21*: Thanks to @jonisick for the tip that there are smaller physical investments to allow the use of ACI. It is not a full rip and replace, but more some additional hardware to augment the current deployment in most cases).

There will be a barrier to entry for many SDN products because there are requirements for baseline levels of hardware and firmware to support the enhancements that SDN brings. This will be less of an issue in a few years I am sure, but for right now the move to embrace an SDN architecture may be held back by the need to upgrade physical hardware to prepare.

Have No Fear! SDN will work…No seriously, it will

While these scenarios may be current, realistic barriers to the adoption of an SDN platform, we are also dealing with hardware and software lifecycles that are becoming shorter and more adaptive.

The hardware platforms you are running today will inevitably be upgraded, extended, or replaced within a reasonable time frame. During that time we will also see a shift in the way we manage and deploy networking inside organizations. This fundamental shift in process will align with wider acceptance of SDN platforms, which are sometimes regarded as accessible only to agile organizations.

What SDN brings to us is really the commoditization of the underlying physical hardware platforms. Not necessarily the reduction of quality or cost of the hardware, but the commoditization of its role in the networking architecture.

What is important for us all as technologists is that we are prepared for the arrival of these new products and methodologies. We have a responsibility to stay ahead of the curve as much as possible to get to the real benefit of SDN which is to enable agility for your business.




DevSecOps – Why Security is Coming to DevOps

With so many organizations making the move to embrace DevOps practices, we are quickly highlighting what many see as a missing piece to the puzzle: Security. As NV (Network Virtualization) and NFV (Network Function Virtualization) are rapidly growing in adoption, the ability to create programmable, repeatable security management into the development and deployment workflow has become a reality.

Dynamic, abstracted networking features such as those provided by OpenDaylight participants, Cisco ACI, VMware NSX, Nuage Networks and many others, are opening the doors to a new way to enable security to be a part of the application lifecycle management (ALM) pipeline. When we see the phrase Infrastructure-as-Code, this is precisely what is needed. Infrastructure configuration needs to extend beyond the application environment and out to the edge.
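As a sketch of what infrastructure-as-code can look like for security (the rule format and names here are invented for illustration, not any vendor’s API), firewall policy can live as data in the repository and be validated by a pipeline step before anything is deployed:

```python
# Hypothetical policy-as-code: security rules expressed as data, versioned
# alongside the application and checked by a CI step before deployment.
RULES = [
    {"name": "web-in", "src": "any", "dst": "web-tier", "port": 443, "action": "allow"},
    {"name": "db-in", "src": "app-tier", "dst": "db-tier", "port": 5432, "action": "allow"},
    {"name": "default", "src": "any", "dst": "any", "port": None, "action": "deny"},
]

def validate(rules):
    """Return a list of policy violations; an empty list means the build passes."""
    errors, names = [], set()
    for rule in rules:
        if rule["name"] in names:
            errors.append(f"duplicate rule name: {rule['name']}")
        names.add(rule["name"])
        if rule["action"] not in ("allow", "deny"):
            errors.append(f"{rule['name']}: unknown action {rule['action']}")
        if rule["action"] == "allow" and rule["src"] == "any" and rule["dst"] == "any":
            errors.append(f"{rule['name']}: allow any-to-any is forbidden")
    return errors

print(validate(RULES))  # [] -> policy is clean, pipeline proceeds
```

The payoff is that a change to the edge goes through the same review and testing workflow as a change to the application.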

NFV: The Gateway to DevSecOps

Network virtualization isn’t the end-goal for DevSecOps. It’s actually only a minor portion. Enabling traffic for L2/L3 networks has been a major step in more agile practices across the data center. Both on-premises and cloud environments are already benefitting from the new ways of managing networks programmatically. Again, we have to remember that data flow is really only a small part of what NV has enabled for us.

Moving further up the stack to layers 4-7 is where NFV comes into play. From a purely operational perspective, NFV has given us the same programmatic, predictable deployment and management that we crave. Using common configuration management tools like Chef, Puppet, and Ansible for our regular data center management is now extensible to the network. This also seems like it is the raison d’être for NFV, but there is much more to the story.

NFV can be a confusing subject because it gets clouded as being L2/L3 management when it is really about managing application gateways, L4-7 firewalls, load balancers, and other such features. NFV virtualizes these features and moves them closer to the workload, and since these functions sit closest to the workload, they are also a natural place to enforce security.

NV and NFV are Security Tools, not Networking Tools

When we take a look at NV and NFV, we have to broaden our view to the whole picture. All of the wins that are gained by creating the programmatic deployment and management seem to be mostly targeting the DevOps style of delivery. DevOps is often talked about as a way to speed application development, but when we move to the network and what we often call the DevSecOps methodology, speed and agility are only a part of the picture.

The reality is that NV and NFV are really security tools, not networking tools. Yes, that sounds odd, but let’s think about what it is that NV and NFV are really creating for us.

When we enable the programmatic management of network layers, we also enable some other powerful features which include auditing for both setup and operation of our L2-L7 configurations. Knowing when and how our entire L2-L7 environments have changed is bringing great smiles to the faces of InfoSec folks all over, and with good reason.
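To sketch why that auditing win falls out naturally (the configuration keys below are invented for illustration), once network state is just data, answering “what changed between these two points in time” becomes a simple diff of snapshots:

```python
def diff_config(before, after):
    """Compare two configuration snapshots (dicts of setting -> value) and
    return an audit-friendly list of additions, removals, and changes."""
    changes = []
    for key in sorted(set(before) | set(after)):
        if key not in before:
            changes.append(("added", key, after[key]))
        elif key not in after:
            changes.append(("removed", key, before[key]))
        elif before[key] != after[key]:
            changes.append(("changed", key, (before[key], after[key])))
    return changes

old = {"vlan10": "web", "acl-web": "allow 443"}
new = {"vlan10": "web", "acl-web": "allow 443,8080", "vlan20": "db"}
print(diff_config(old, new))
# [('changed', 'acl-web', ('allow 443', 'allow 443,8080')), ('added', 'vlan20', 'db')]
```

Pair each diff with a timestamp and a change-ticket ID and you have exactly the trail that makes InfoSec smile.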

East-West is the new Information Superhighway

Well, East-West traffic in the data center or cloud may not be a superhighway, but it will become the most traffic-heavy pathway over the next few years and beyond. As scale-out applications become the more common design pattern, more and more data will travel between virtualized components behind the firewall, on nested, virtual networks.

There are stats and quotes on the amount of actual traffic that will pass in this way, but needless to say it is significant regardless of what prediction you choose to read. This is also an ability that has been accelerated by the use of NV/NFV.

Whatever the reasons we attach to how DevSecOps will become a part of the new data center and cloud practice, it is absolutely coming. The only question is how quickly we can make it part of the standard operating procedures.

Just when you thought you were behind the 8-ball with DevOps, we added a new one for you. Don’t worry, this is all good stuff and it will make sense very soon. Believe me, because I’ll be helping you out along the journey. 🙂




How about some exCLUSive Cisco news?

With technology event season rapidly approaching, it is time to get your planning sorted for which exciting conventions, events, and community gatherings to join. As a Cisco Champion, I have a particular wish this year that I would love to fulfill: attending Cisco Live in San Francisco, which happens May 18-22.

Unfortunately, it’s not in the cards for me this year, but that doesn’t mean I can’t get you all excited about what’s happening in the Cisco world around the event! Let’s put the US in CLUS 🙂

And I kind of teased with the headline, but in case you didn’t catch the pun, it is not exclusive, but exCLUSive 😉

Do UC what I see?

It’s no secret that Unified Communications is a feature platform for Cisco, so it should also be no surprise about the new goodies coming out of the Cisco camp as we head into the second quarter of the year.

The video conference experience has come a long way luckily.


Collaboration is the key to success in so many ways for modern businesses, and for people in their personal lives. If you aren’t already connected to your colleagues through collaborative tools and technology, there are inevitably things coming that will enable better collaboration, a more capable remote workforce, and ultimately more closeness for people.

A lot of great info was released today by Rowan Trollope (@RowanTrollope on Twitter), as discussed in his article here: http://blogs.cisco.com/collaboration/creating-the-next-generation-of-collaboration-experiences/

Product announcements leading up to CLUS

There are some really cool products that are a part of the collaborative tools Cisco is promoting at the event. You can catch up on some of those here:

All work and no play? Not at a Cisco event!

I’ve done my time on some stages in the past and even played a few corporate gigs, but let me tell you that when Cisco puts on a party, they tend to go a bit bigger 🙂

Perhaps you’ve heard of a fellow by the name of Lenny Kravitz?

Yes...that Lenny Kravitz


Or perhaps his friends who will also be there, a little group called Imagine Dragons?


Convinced yet?

More than just a show

One of the tenets of the Cisco Champion program, and of Cisco as an organization, is to support collaboration and sharing of information. As a consumer of the services, a blogger who has intimate access to products and engineers, and as a lover of technology in general, I can’t say enough how positive the big event experience can be.

I’ve attended a number of events ranging from one to five days, and the content people come away with, along with the great social collaboration at Cisco Live, makes it an unparalleled experience for a customer, a partner, or just a die-hard technologist like myself.

March 14th Early Bird deadline!!

If you want to save up to a cool $300 off your event ticket, you can sign up before March 14th (yup…just about 48 hours left!) here: http://www.ciscolive.com/us/registration-packages/

If you get a chance to go, make sure you tell them that @DiscoPosse sent you 🙂





Why it is always, and never, the year of VDI, but network virtualization is here to stay

You’ve all heard it: The Year of VDI. It has been the mantra at the launch of each calendar year ever since Citrix and VMware gained significant adoption. But why is it both true and false at the same time?

Desktop versus Server Virtualization

Server virtualization has taken hold in an incredible fashion. Hypervisors have become part of everyday datacenter deployments. Whatever the flavor, it is no longer necessary to justify the purchase of a product like VMware vSphere or Microsoft Hyper-V. And for those who have already embraced open source alternatives, KVM, Xen, and the burgeoning OpenStack ecosystem are joining the ranks as standard step-1 products when building and scaling a datacenter.

Server virtualization just made sense. We have 24-hour workload potential because of a 24/7/365 usage scenario, plus backups, failover technologies, and BCP needs.

Desktop Virtualization is a good thing

The most commonly quoted reason for desktop virtualization is the cost of managing the environment. In other words, the push to move towards VDI is about policy based management of the environment. Removing or limiting the variables in desktop and application management makes the overall management and usage experience better. No arguments there.

So why hasn’t it hit? One powerful reason is the commoditization of desktop hardware. In the 70s it cost thousands of dollars to purchase basic desktop hardware. Throughout the 80s, 90s, and 2000s the price plummeted, to the point where corporate desktops are now available for $300-$500 and are amortized over two- or three-year cycles.

And now the CFO has their say

The impetus to use VDI to save money on desktop hardware went away. We now have thin desktops that cost nearly the same as full physical desktops. There is no doubt that this has slowed the uptake of VDI in a strong way. When it comes to putting together our annual expenses, the driver has to be strong to make the shift.
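A hedged back-of-envelope (every number below is an assumption; plug in your own) shows why the CFO conversation got harder once thin clients stopped being dramatically cheaper:

```python
def endpoint_spend(device_cost, per_seat_backend, seats):
    """Total endpoint spend over one amortization cycle: device cost plus any
    per-seat back-end cost (VDI servers, storage, licensing)."""
    return seats * (device_cost + per_seat_backend)

seats = 500
# Assumed figures: a $400 corporate desktop vs. a $300 thin client that also
# carries $250/seat of back-end infrastructure for the VDI stack.
physical = endpoint_spend(400, 0, seats)
vdi = endpoint_spend(300, 250, seats)
print(physical, vdi)  # 200000 275000
```

With numbers like these, VDI has to win on management and policy rather than on hardware spend, which is exactly the argument that follows.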

Next up is the classic “Microsoft Tax”. While we may reduce the cost somewhat at the hardware layer, we are still bound to the desktop consumer’s need for a Microsoft OS and software. There is a reason we don’t even talk about Linux on the desktop anymore: if people are ready for Linux, they will just use it. There are, however, millions of software consumers who require Microsoft tools. That’s just a fact.

So as we enter 2014 and the analysts and pundits tout the new DaaS (Desktop-as-a-Service) revolution, we still have to be realistic about how much impact it will have on the overall marketplace. I don’t doubt that it will continue to gain footing, but nowhere near the level of adoption that server virtualization produced.

A Patchwork Quilt

In my opinion, we have already gone down a parallel timeline on policy-based desktop management. With Microsoft SCCM, LanDesk, and a number of other imaging and application-packaging tools already in many organizations, there is less of a need to make the shift toward VDI. There are great use cases for it, to be sure, but it will be a difficult battle to siphon away the physical desktop processes that have served us well up to now.

Patch management and application delivery can do a lot toward providing the policy-based management that we are told is the prime objective of many VDI products. I’m a big proponent of VDI myself, but I am also realistic about how much of the overall market it has already cut into, and how much more it will.

So, is this the fate of network virtualization?

Network Virtualization is costly, but that’s OK

So now we have an interesting shift in the market again. Network virtualization has gone from a project in the labs of Stanford to a real, market-ready product, with many vendors putting their chips on the table.

Not only are ASIC producers like Cisco and Juniper Networks coming forward with solutions, but VMware with their purchase and integration of Nicira to produce VMware NSX has created a significant buzz in the industry. Sprinkle in the massive commitment from open source producers with OpenFlow and Open vSwitch and there is undoubtedly a real shift coming.

2015 will be the year of Network Virtualization

In 2014 we will see a significant increase in the understanding and adoption of network virtualization tools and technologies. With the upcoming GA release of Cisco ACI and more adoption of open source solutions in the public and private cloud, we will definitely see a growth in the NV adoption.

Image source: http://blogs.vmware.com/networkvirtualization/2013/08/vmware-nsx-network-operations.html


Remember, NV isn’t about reducing physical network hardware. It is about reducing the logical constraints and increasing the policy and security integration at the network layers. Server virtualization has laid the groundwork to create a perfect pairing really.

When does NV become the standard in networking deployment?

This is the real question we need to ask. As all of the analysts pore over the statistics and lay out what the landscape looks like, we as architects and systems administrators have an important task to deal with: Making NV work for us.


In my mind, network virtualization is a powerful, enabling technology. We have come a long way in a short time in the evolution of networking: from vampire taps to the upcoming 100GbE hardware in a couple of decades is pretty impressive. Now we can fully realize the value of the hardware sitting on the datacenter floor by extending it with the virtualization tools and techniques that gave us exponential gains in productivity and efficiency at the server level.

It’s coming to us one way or another, so I say that we dive in and do something wondrous together.

Who’s in for the ride? Count me in!




Cisco UCS Platform Emulator on VMware Workstation

For those who want to try out the Cisco UCS Platform in a test system, you have a great option available to you which is the Cisco UCS Platform Emulator. This can be downloaded and run using VMware as your host. You can use VMware vSphere, or you can also run this on the very popular and powerful VMware Workstation.

Not only can you test out the UI and run through configuration scenarios, but you can actually import and export configuration files to and from your existing UCS environment. How cool is that, right?

On top of the features supported in the previous version, you get a number of fresh new features in the new version 2.2 (1bPE1).

newfeatures

The download of the UCS Platform Emulator is available in OVA format, or as a set of VM files. For our example here, I have used the VM files download which is appropriate for VMware Workstation. If you wish to use on VMware vSphere, you can download the OVA instead and use the Deploy OVF Template… option in the vSphere Client or vSphere Web Client.

Getting the Cisco UCS Platform Emulator

As new versions have become available, Cisco has kindly updated the platform emulator to match the currently supported version that ships with their versatile UCS environment. I’ve been running the emulator for a while with the previous version, but now is the right time to provide a quick ground-up walk through of deploying the new version.

Main download page: https://communities.cisco.com/docs/DOC-37827

Docs and VM download page: https://communities.cisco.com/docs/DOC-37897 (Requires Cisco.com username)

The download files are shown right on the page, and once you log in with your Cisco.com credentials, you will see the VM file link:

download-file

Make sure that you download the User Guide while you are there to keep handy for reference. As mentioned above, you can also download the OVA file using the link at the bottom of the file selection.

UPDATED: Chris Wahl (@ChrisWahl) did a great video to show the download process. The video is shown below and the original post is here: http://wahlnetwork.com/2014/01/11/download-ucs-platform-emulator/

Installing the Cisco UCS Platform Emulator with VMware Workstation

Once you unpack the ZIP file, you will have the VM files, which include everything you need to load into your VMware platform.

file-list

For our VMware Workstation deployment, you can simply copy the files into your VMware Workstation virtual machines folder, which is usually C:\Users\your-username\Documents\Virtual Machines.

Just copy the root folder (UCSPE) to the appropriate destination:

folder

Now, you can open the folder and right-click on the UCSPE.VMX file, then select Open with VMware Workstation:

right-click

Now you will see your newly added machine in your VMware Workstation console:

vmw-newmachine

It’s almost too easy 🙂

Next you have to just power on the machine and you will see the console show the progress as it boots up. NOTE: When the machine is first powered on, you will see a “press any key to continue” in the console. Just click into the console and press a key to trigger the boot up.

The first boot will take some time as the emulator unpacks and installs. Don’t worry, it will come up in due time and future boot ups will be much quicker.

Once the boot process is completed, you will see the console at the login prompt:

ready-for-login 

You will see the login information on the screen. Write it down just in case, or take a screenshot (or just bookmark this page). You will also see the all-important UCS UI address at the top of the console, which is 192.168.79.147 in my instance.

Using your internet browser, you can open up the web UI at the address provided:

web-ui

Using the links in the web UI, you can launch the UCS Manager or the KVM Manager with these buttons. You can also click the Emulator Settings option in the left-hand pane to display the configuration screen for the overall UCS Platform Emulator environment:

emulator-settings

Adding virtual devices to your UCS is as simple as dragging and dropping from the catalog of products up to the Stash; you will then see them in your overall configuration platform.

ucs-inventory

With this kind of flexibility, you can see all of the features, functions, and limitations of each of the physical components of a UCS system without having to have bare metal hardware in the data center.

If you click through the various links on the left hand pane, you will see a significant number of configuration options. There are as many options for our virtual UCS as there are in a physical UCS platform with every possible configuration.

Fabric Interconnect

fabric-interconnect

Database Persistence

This option lets us choose if we have our UCS Platform Emulator go to factory reset on each boot up, or to preserve the configuration for multiple boots for longer term use:

database-persistence

High Availability

You can even configure the HA options:

high-availability

Single Wire Management

single-wire

Direct Connect Rack

direct-connect

Startup Config URL

You can point the startup configuration URL at an existing configuration, or at the local URL:

startup-config-url

Hardware Catalog

How would you like an unlimited shopping cart of Cisco hardware for the UCS platform at your fingertips? Guess what comes with the UCS Platform Emulator? 🙂

hardware-catalog

UCS Manager

The UCS Manager will require Java to be installed (boooooo!), as will the KVM Manager. For users of Google Chrome, the UCS Manager will launch a download of the Java launcher and you can initiate it by clicking the downloaded file.

The usual array of Java warnings has to be accepted to get into the interface of course:

java-warning

Once you are presented with the login screen, you can use the username of config and a password of config which is the default administrative user:

ucsmanager-login

Once you’re logged in, you will be in the UCS Manager interface:

ucsmanager-mainscreen

Inside the UCS Manager screen you can click around to your heart’s content and fully manage your virtual UCS platform.

KVM Manager

The KVM Manager is accessed right in the UCS web UI. The login screen needs the same credentials (username config, password config), but we also need to select the {native} domain from the drop-down list:

kvm-login

After you are logged in, you will see the main KVM Manager screen:

kvm-main

It’s rather unexciting because we are looking at it right after the first launch, with nothing in our UCS platform yet.

More Tools and Help

In the left hand pane of the web UI there are a number of links to tools and help files for the UCS environment.

ucs-help

Accessing with the PuTTY SSH Client

Rather than using your virtual console in VMware Workstation, you can also launch a remote shell session over SSH. I’m a fan of PuTTY myself, and the configuration is super easy. Just launch PuTTY and type in the IP address that your UCS Platform Emulator is on:

putty

Remember from your virtual console that there are two users provided for login. The important distinction is that one user (User: config, Password: config) is only for console access, while the second (User: cliuser, Password: cliuser) can be used locally on the console and through an SSH session.

Once you accept the SSH fingerprint in PuTTY (only happens at the first connection) you will be able to log in using the cliuser account:

cliuser

You’re all set now for remote management of your UCS Platform Emulator just as if it was a real UCS environment.
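If you’d rather confirm the emulator’s SSH service is up before launching PuTTY, SSH servers send an identification banner as soon as you connect. Here is a minimal check using Python’s standard library (the emulator IP is whatever your console shows; 192.168.79.147 is just my instance):

```python
import socket

def read_ssh_banner(host, port=22, timeout=5.0):
    """Open a TCP connection to an SSH endpoint and return the banner line
    the server sends first (e.g. 'SSH-2.0-...'). Raises OSError on failure."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        return sock.recv(256).decode("ascii", errors="replace").strip()

# Example: read_ssh_banner("192.168.79.147") should return an SSH-2.0 banner
# if the emulator is listening; a refused connection means it is not up yet.
```

A banner back means the CLI is ready for the cliuser login described above.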

Powering Down and Restarting the UCS Platform Emulator

The links to manage the power state of your virtual UCS environment are in the web UI by clicking the Restart tab in the left hand pane:

restart

Each of the options has a button to ensure that you confirm the restart, plus there are settings options to save the configuration.

Go forth and be awesome!

Now it’s up to you how you would like to configure and test drive your system. You can use VMware Workstation snapshots to protect the machine just like any other virtual machine, which adds some extra safety as you drill around in the virtual UCS system. However, with the ability to do a factory reset and export configurations, you may never need the native VMware Workstation protection.

Even if you aren’t running on Cisco UCS today, this is a great way to take a look at the logical configuration of the system and how you can administer it via the CLI and the web UI.

Happy UCSing!