Tuesday, March 3rd was another great day for community as the Boston CNCF Meetup group arrived at the Turbonomic offices on Boylston Street in Boston for an evening of discussions and learning. I was happy to be in town for the event and to watch my colleague, Meng Ding, present the Turbonomic Lemur project.
Given the challenges of Coronavirus, it was actually quite a good turnout; I know that a lot of folks are re-evaluating their meetup and travel policies. Big thanks go out to Chris Graham and Asena Hertz from the Turbonomic team for pulling this together.
Introducing Turbonomic Lemur
The Turbonomic Lemur project is a packaged set of open source tools, including Grafana, Kiali, and Jaeger, plus a light version of Turbonomic, which allows for visualization of the dependencies and resourcing of the entire containerized stack. The project was created in response to the number of requests we hear out in the community about how challenging these tools can be to configure and deploy, and the fact that even with them deployed, there is no context for how the data relates to the actual applications.
Meng kicked off the event with a demo of what Lemur is and what it looks like when deployed. Having a single interface for all of the tooling, with easy ways to interact with the products, was definitely a hit with the folks in the audience. Since I’ve been contributing to the documentation and some community engagement work for Lemur, it was especially exciting to see that people really loved it as a graphical tool…but then the real fun began!
Lemur on the big screen!
The next part of the demo featured the Lemurctl command line tool, and this really got some folks excited about the potential. Meng was able to easily spin up commands that showed the different resources, utilization, and, most importantly, the dependencies of everything from the app inside the container, through the container, pod, and node, all in a single CLI. There was a literal “wait, you can see all that and the dependencies in a single CLI command?!” from someone in the audience. So cool!!
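To give a feel for the kind of session Meng ran, here is a rough sketch. Note that the subcommand names below are my illustrative assumptions, not documentation; check the Lemur GitHub repository for the actual lemurctl syntax.

```shell
# Illustrative sketch only: the lemurctl subcommands shown here are
# assumptions for illustration, not taken from the documentation.
# Consult the Lemur GitHub repository for the real syntax.

# Guard so this exits cleanly on machines without lemurctl installed.
command -v lemurctl >/dev/null 2>&1 || { echo "lemurctl not installed"; exit 0; }

# Show the full dependency chain (app -> container -> pod -> node) in one view.
lemurctl get supplychain

# Show resources and utilization for the running containers.
lemurctl get containers
```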
Listening and Learning
Meng and all of the folks there are big fans of any community interaction because we can all learn so much from each other. Tech and business do not exist in bubbles and learning from the lessons of others can really save time and effort for our own projects and deployments of new products and ideas. Meng had a keen audience gathering after his presentation as you can see here:
There is much more planned for Lemur, and for my own sharing of information about it. If you’re keen to get involved or to learn more, jump into the GitHub repository for Lemur and take a look. I’ll be posting some demo resources here on my blog in the coming weeks as I do more exploration and documentation to help folks get the most out of it and make it as easy as possible to build and deploy.
Meng and the entire engineering team who built it deserve some big kudos. They’ve made three particularly challenging open platforms deployable in a single package. That’s a massive time and resource saver.
Thanks to everyone again for supporting the CNCF meetup in Boston and the Lemur project. Looking forward to sharing much more very soon!
Welcome Project Nautilus! Running Containers Natively with VMware Fusion on macOS
There is a lot of hype around containers. There is also a lot of truth in what’s ahead for the industry as containers are becoming important parts of many new applications. VMware has just released what they dubbed Project Nautilus. Shout out to Michael Roy (a fellow Canadian) who has been doing wicked cool work in the Fusion/Workstation product line. I’ve been lucky enough to work with Michael on some other projects in the past doing some VMware product design research as a customer.
To run Nautilus and vctl, the new VMware command line tool for managing containers in Nautilus/Fusion, you need to be running at least macOS 10.14 at the time of this writing.
Downloading the new Fusion tech preview is easy…click here!
NOTE: You can run the GA version of Fusion alongside the new 20H1 tech preview, which is very cool. One restriction: you can only run one tech preview edition at a time, so you may need to uninstall and reinstall as more recent preview builds come out.
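Once the tech preview is installed, the basic vctl workflow looks roughly like the sketch below. This is a sketch based on the tech-preview tooling at the time of writing; syntax may change between preview builds, so treat the exact subcommands as assumptions and run vctl with no arguments on your own install to confirm.

```shell
# Sketch of a basic vctl session in the Fusion 20H1 tech preview.
# Subcommand syntax may change between preview builds; treat these as
# assumptions and check vctl's built-in help on your install.

# Guard so this exits cleanly on machines without the tech preview.
command -v vctl >/dev/null 2>&1 || { echo "vctl not found; install the Fusion tech preview"; exit 0; }

vctl system start             # start the container runtime (the PodVM backend)
vctl pull nginx               # pull an image from Docker Hub
vctl run -d --name web nginx  # run a container, isolated in its own PodVM
vctl ps                       # list running containers
vctl stop web                 # stop the container
vctl system stop              # shut the runtime back down
```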
Most importantly, I can finally put this amazing cartoon of my youth into play as part of an article. Virtual fist bump to all the other folks who grew up watching some cartoons like the classic below featuring the Nautilus 🙂
Why Nautilus over other local native container platforms?
You have a few different options for kicking the tires on containers in a local development environment: Minikube, Minishift, or a small implementation of a tool like Rancher. Each has its own merits, and your choice will depend on your bigger-picture plans for deploying and managing Kubernetes in production.
If you’re holding out K8s hope for VMware’s Project Pacific, which will provide a K8s-native endpoint inside an upcoming vSphere release, this is probably a good way to see how the product roadmap and command line tools will play out. The bonus of running Fusion is that you can also use other local virtualized environments for development in the same nifty tool.
What is interesting about Project Nautilus is how close it is to the model that Project Pacific is said to have in store. The new deployment pattern of the underlying tooling is described as a “very special, ultra-lightweight virtual machine-like process for isolating the container host kernel from the Host system. We call that process a PodVM or a ‘Native Pod’,” as shown in the Fusion blog post on the release.
More of my time is leaning towards containerization with Nomad, OpenShift, and Kubernetes, so this new tool is definitely going to be part of the testing I’m sharing here on the blog in the coming months. If you’ve got any questions on how to get it working, I hope to be a good resource for you (or can connect you to someone who is).
If you want to have your voice heard, this is your chance. Being an early adopter of the tools also gives you a chance to influence the results of the early development. Happy installing and watch for more next week with my Getting Started with Nautilus and vctl guide.
Ask the Expert: The Hyperconnected Data Center – My Interview with Yvonne Deir of CoreSite
I am super happy to bring you another great BrightTALK interview that I was a part of recently at AWS re:Invent 2019, with CoreSite’s Yvonne Deir, Strategic Director of Sales. We had a chance to discuss where the hyperconnected data center is headed in 2020, covering the core challenges (get it…core challenges…zing!) faced by organizations and technology teams as cloud adoption grows but latency risks impact the ability to get the best value and performance from cloud infrastructure in hybrid deployments.
CoreSite is tackling how next-generation data centers will power digital transformation, the impact of 5G and the IoT, and we even touch on the classic on-premises vs cloud data center debate.
AWS Outposts – Hybrid Cloud that Will Define the Next Generation of Cloud Consumption
We now have AWS Outposts as a generally available service which you can order today. Outposts is an AWS-built and AWS-owned rack using AWS hardware and software to deliver their services within your data center. The Outposts solution also easily taps into existing AWS logical constructs, including security and networking (VPC), for easy integration with the rest of your infrastructure.
You will use your native AWS tools (console, CLI, SDK) and presumably deployment tools (CloudFormation, Terraform, Chef, etc.) because Outposts uses the same underlying APIs and constructs as other cloud-hosted AWS infrastructure.
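In practice, that means the same call you already use to launch an instance in a Region works against an Outpost; you simply target a subnet that lives on the Outpost. A minimal sketch with the standard AWS CLI, where all the IDs are placeholders and a subnet is assumed to have already been created on the Outpost:

```shell
# Minimal sketch: launching an EC2 instance onto an Outpost with the
# standard AWS CLI. The AMI and subnet IDs are placeholders; this assumes
# a subnet homed to the Outpost has already been created.

# Guard so this exits cleanly where the AWS CLI is not configured.
command -v aws >/dev/null 2>&1 || { echo "AWS CLI not installed"; exit 0; }

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --subnet-id subnet-0123456789abcdef0
```

The only Outposts-specific part is the subnet: because it is homed to the Outpost, the instance lands on the rack in your data center rather than in a Region.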
“With AWS Outposts, customers can extend the AWS experience on-premises for a truly consistent hybrid cloud experience” – quote from promotional video
I’m super happy to see this new offering hit the production market. From conversations I’ve had with a variety of sources at re:Invent, the uptake is already strong and only going to increase now that the product is in GA.
Let’s begin with the most important question about AWS Outposts…
Why is AWS Bringing Outposts to You?
The answer is rather simple: we aren’t moving to AWS fast enough. Slower-than-hoped migration to and adoption of AWS-native infrastructure opens the door for two important issues which Andy Jassy and the AWS team need to remove:
Competitive landscape – VMware and Microsoft aim to keep workloads on-premises and sell cloud-like infrastructure offerings to retain customer account control and workload ownership. Azure Stack is a full cloud platform that aligns the closest to Outposts and also comes with OEM hardware managed by Microsoft and third-party partners.
Inertial continuum – Less access to true cloud infrastructure, usually due to data access and latency, means that old practices continue and incumbent providers will sell on the comfort of how you’ve done things until now.
Despite the growing use of other architectures for hybrid deployments, it just makes sense that the AWS model of deployment and shared responsibility for the platform will lead many organizations to begin an aggressive journey to making their world cloud-native on the leading cloud platform.
What’s In the Box?
Much like Brad Pitt’s character in Seven, many of us systems and enterprise architects want to know something…
Here is what’s available in the current GA release as of December 2019, as part of the AWS re:Invent announcement:
EC2 (Elastic Compute Cloud) – EC2 compute options based on some pre-configured combinations (see catalog section below)
EBS (Elastic Block Storage) – GP2 is the only storage tier being offered at the moment, but that will inevitably change as adoption increases and customer feedback drives the addition of other storage tiers
ECS (Elastic Container Service) – The leading container platform on AWS now easily deployable in your own data center
EKS (Elastic Kubernetes Service) – K8s the easy way for those who don’t want to deal with the challenge of K8s deployment and administration
RDS (Relational Database Service) – Currently offering PostgreSQL and MySQL in preview form
EMR (Elastic Map Reduce) – Big Data goodness including support for Apache Spark, Hadoop, HBase, Presto, Hive, and other Big Data Frameworks
Outposts is also much more than just the initial deployment. This is a new operational model: AWS owned, customer operated, and partner managed. You can opt to manage updates and upgrades through a few methods, including partners and AWS services teams.
The locations available for deployment today with the GA launch include North America (United States), Europe (All EU countries, Switzerland, Norway), and Asia Pacific (Japan, South Korea, Australia).
The Outposts Catalog
My favorite time of year used to be getting the Sears Christmas Wish Book in September which previewed the goodies you can ask Santa for the upcoming Christmas season. Now you can bust open the AWS Outposts catalog and get something for the person who has everything!
You can see in the image above that there are pre-defined configurations of instance types and storage capacity. This is one of the interesting things about AWS: as elastic as the Elastic Compute Cloud is, you are fixed in how much of each host hardware configuration they will allow you to use.
There is no oversubscription option here, as you may have enjoyed with VMware, which is also why many are still looking at VMware Cloud on AWS with VMware Cloud Foundation locally as their hybrid option. The same goes for Azure and the on-premises Azure Stack options.
Does it Scale?
It’s also worth noting that purchasing AWS Outposts is not quite as on-demand as other solutions. Outposts is purchased on a 3-year term that covers the EC2 and EBS capacity across the full period, with no upfront, partial upfront, and all upfront payment options. Other services used (e.g. RDS, EKS, ECS) are billed on-demand, along with any data egress and data transfer charges, which align with current AWS data transfer pricing.
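To make the three payment options concrete, here is a toy calculation. The $216,000 term price and the 50% partial-upfront split are invented numbers purely for illustration; actual Outposts pricing depends entirely on the configuration you order.

```shell
# Toy illustration of the three Outposts payment options over a 3-year term.
# The $216,000 price and the 50% partial-upfront split are invented numbers,
# not actual AWS pricing.
total=216000
months=36

no_upfront_monthly=$(( total / months ))     # whole cost spread across the term
partial_upfront=$(( total / 2 ))             # assume half paid at signing
partial_monthly=$(( total / 2 / months ))    # remainder spread monthly
all_upfront=$total                           # one payment, no recurring charge

echo "No upfront:      \$${no_upfront_monthly}/month for ${months} months"
echo "Partial upfront: \$${partial_upfront} now, then \$${partial_monthly}/month"
echo "All upfront:     \$${all_upfront} once"
```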
Scaling of AWS Outposts allows up to 16 racks to be treated as a single capacity pool. The AWS team has already noted that future scaling is hopefully going to allow thousands of racks to be spanned as a single capacity pool. Even the most aggressive AWS consumer today should be satisfied with those numbers.
Big Goals for Outposts Revenues and Adoption
The end goal is a significant amount of revenue (relative to the current $0 for on-premises) as part of the ongoing AWS revenue stream, potentially a double-digit percentage within 5 years according to some conversations on the re:Invent show floor. There should be little doubt that within its first 12 months of operation, AWS hybrid infrastructure will prove to be the leader compared to any alternatives.
It will be interesting to see how AWS reports adoption numbers in future earnings calls and through the AWS Summits in 2020. This is also a massive opportunity for services and support partners through the APN (AWS Partner Network) to be a part of deployment and design of AWS Outposts solutions for their customers.
My opinion is that shipping this offering ahead of the secondary VMC on AWS variant, which is still in beta, also signifies that AWS is once again reminding everyone who is leading the charge on tomorrow’s hybrid cloud. That said, don’t doubt the power of VMware ecosystems needing to stay VMware-native, which gives a big opportunity to use AWS Outposts running VMware Cloud Foundation to extend a near-zero-touch hardware option for operating your hybrid cloud.
It’s Not About the Hardware
Let’s dial it back for a moment to remind everyone what AWS Outposts is and is not. It’s built on hardware, but that is simply the packaging for the real thing being sold: AWS is delivering a methodology. This is a way for customers to use cloud-native and native cloud infrastructure to rebuild, replatform, and relocate their applications, workloads, and data to AWS.
Sub-10ms latency connections using Direct Connect to AWS Outposts and AWS Local Zones will open the doors to more data-intensive applications being able to live in AWS infrastructure. If you bring the data, the rest of the workloads will follow.
You bring the data center and uplinks, AWS brings the rest. A beautiful pairing.