Running PowerShell Core using Docker

More and more of the Microsoft ecosystem is making its way into open source platforms. One of the very interesting products coming from the Microsoft camp lately is the PowerShell Core platform which is now ported to run on multiple underlying operating system environments.

I covered the process to install the Mac OSX version, which is very cool, but let’s take the abstraction one level higher and look at running PowerShell Core inside a Docker container.

The first thing you’ll want to do is head on over here to make sure you’re running the nifty Docker Toolbox for your laptop or desktop environment if you haven’t already got Docker available to use.

Running your first PowerShell Core container

The commands here may seem a little too easy, but that’s by design. The containerized implementation makes deploying and using PowerShell Core super easy!

Let’s launch our first container with the docker run -it microsoft/powershell command, which will kick up a new container based on the image published in the public Docker Hub under the Microsoft organization. The -it flags mean that we are launching the container in interactive mode with a terminal attached.
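As a quick sketch, the whole launch is a single command (the image name is as published on Docker Hub at the time of writing):

```shell
# Pull the image (on first run) and start an interactive PowerShell Core container
docker run -it microsoft/powershell
```

Once the image finishes downloading, you should land at a PowerShell prompt running inside the container.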

That gets you up and running inside the PowerShell environment.

NOTE: There is still limited functionality compared to the full PowerShell on Microsoft environments. This is something that is changing with each release as the community and Microsoft themselves contribute towards more features.

Exiting and re-entering the container

Getting out of the container is as easy as typing exit at the prompt. This will bring you back out to the local environment. That gives us an interesting situation: the container is still present, but it is stopped. If you run the same command as before, it actually launches an entirely new container.

We need to do three things in order to get back in to the same container:

  1. find the ID of the existing container
  2. start the container using that ID
  3. attach to the container

First, let’s check the containers to find out the ID of the one we want using the docker ps -a command:

Use the docker start [CONTAINER-ID] command where [CONTAINER-ID] is the ID you see in your console:

Use the same ID and attach to the now active container with the docker attach [CONTAINER-ID] command:
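Putting the three steps together, the round trip looks something like this (the container ID shown is purely illustrative; substitute the one from your own console):

```shell
# 1. List all containers, including stopped ones, to find the ID
docker ps -a

# 2. Start the stopped container (substitute your own ID)
docker start 3f2a9c1b7d44

# 3. Attach your terminal to the now-running container
docker attach 3f2a9c1b7d44
```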

That is all there is to it! Each time you exit, the container will automatically stop because we don’t need to keep it running in the background. There are other ways to keep it running, but that is for another blog post 🙂

Removing the container is as simple as running the docker rm [CONTAINER-ID] command, where [CONTAINER-ID] is the ID we used before to attach to the existing container.
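The cleanup is a quick sketch as well (again, substitute your own container ID):

```shell
# Remove the stopped container; docker rm refuses a running one unless forced
docker rm 3f2a9c1b7d44

# Verify it is gone
docker ps -a
```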

Installing and Using Docker Toolbox for Mac OSX and Windows

One of the most compelling reasons to run Docker on your local machine is the speed at which you can deploy and build lab environments. As a huge fan of Vagrant, I love the ability to spin up environments such as the sandbox labs I’ve been using for a long time with Vagrant and VirtualBox.

Switching to Docker as an option for many of my quick labs has also meant the same ability to run as an abstraction on top of my laptop so that I don’t end up in dependency hell with development libraries and underlying infrastructure needs that quickly begin to conflict as I do more testing and development.

Installing Docker Toolbox on Mac OSX or Windows

The best way to get started is to run the Docker Toolbox platform which deploys a Docker environment with popular and important Docker tools including:

  • docker-engine
  • docker-compose
  • docker-machine
  • Kitematic

Navigate over to the Docker Toolbox download page to get your appropriate version:

Rather than document the steps on a continuously changing set of screens, I recommend that you follow the installation process for the tools you want using the guides provided by Docker here:

Once you’ve installed it, you can kick the tires on Docker by running your first Docker Hello World test container using the docker run hello-world command:
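The smoke test is a single command:

```shell
# Downloads the hello-world image on first run, then launches a container from it
docker run hello-world
```

If the image is not cached locally, Docker pulls it from Docker Hub first, then the container prints a short greeting beginning with “Hello from Docker!” and exits.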

You can see that the container image was not available locally, so a download process started and then the container was launched. As long as you see a success message like that, you’re in business!

We will be using this as a baseline for a lot of other examples in the blog. As usual, this is meant to emulate a basic Docker configuration and does not really reflect a multi-node deployment with overlay networking. The goal is to be able to quickly and easily launch containers using Docker Engine for a number of admin tasks that can replace what we may have been doing inside dedicated workstations or sandbox virtual machines in the past.

Open Source Does Not Equal Vendor-Agnostic

One of the most common incorrect associations we see in the IT industry is that an open source project is vendor-agnostic. It is a false equivalence, and one that really confuses what the open source ecosystem is all about. I don’t like to sound negative about any company’s approach to using open platforms, but when I see folks claiming to be vendor-agnostic, I have to correct a few assumptions.

Open Code versus Open Source

Open source, like DevOps, is a methodology. At the Interop event this year in Las Vegas, I quoted a very frank comment from an open source panel: “putting your project on GitHub does not make it an open source community”. It may sound harsh, but remember that there is a difference between open code and open source.

Running an open source project with multiple contributors from disparate vendors and individuals is an entirely different thing than just dropping your code onto GitHub. This is why I describe the plugins that my team creates for open source projects as “fully open and available to be contributed to on GitHub” at the moment. We are publicly contributing to schedulers for Kubernetes and Mesosphere, but contributions come entirely from our internal engineering team for now. That is the reason I choose to word my description a little differently.

VMware and the Photon team are working on the Photon OS and Photon Controller projects. Those are hosted on GitHub with the ability to accept contributions from external contributors. The question remains whether they are truly running an open source community when someone submits a pull request for something that may take the projects off the predetermined path as defined by VMware.

Currently, there are 30 contributors to Photon OS and 33 contributors to Photon Controller, most of whom work for VMware. It’s natural for this to happen at the start of a project. The real judge of the openness and vendor-agnostic nature comes with the shift of contributors over time towards a larger community of participants.

Go Fork Yourself – Vendor-Agnostic over Open Source

Docker has now reached an interesting point where the challenge of being an open source project has begun competing with the origins of the container ecosystem juggernaut they have become. There is a lot of talk in the industry about other organizations forking the current Docker project in order to bring it back in line with what many community contributors need in order to fulfill their technical requirements.

Pull requests are being refused by the Docker project because they are beginning to compete with Docker the company. This is where we see an example of the split between open source as a methodology and vendor-agnostic as a truism.

When a single vendor, or even a group of vendors on a project, steer the direction away from what contributing members of the community are working towards, it loses its true openness. This is what I mean when talking about the difference between open source and open code.

Nearly Nothing is Agnostic

The reality of any project is that it is rarely successful through complete altruism. OpenStack is one project I would point to as truly maintaining a vendor-agnostic, open source ecosystem, and Cloud Foundry is also succeeding at this. Despite massive contributions to the projects from corporate entities, the overall ecosystem remains fully open and serves the needs of the community of consumers that use these platforms.

OpenStack is not without its own challenges around technical direction, as I’ve written about before in a post I dubbed TrumpStack. Every ecosystem will feel these pains as there are choices to make around direction on features and functions.

I applaud every company that contributes to OSS, even if it is only to publicly present their software for open contribution. I just want to make sure that they represent the intentions honestly. This is the reason that we need to take it with a grain of salt when someone who represents a vendor says that they support OSS (Open Source Software) and are vendor-agnostic. One does not equal the other.

Deploying a Docker Sandbox using Vagrant and VirtualBox

Even with the addition of more client-side tools for running Docker, it still requires installing development tools that may impact your local environment. Plus, I’m a fan of using VirtualBox to make sandbox environments that I can spin up and tear down as needed. This means that I never have to worry about version conflicts with development languages and dealing with package management on my Macbook or on my Windows machine. Every machine will use the same recipe.

Deploying the VirtualBox Docker Sandbox Environment

We are assuming you’ve read the initial post and that you have already installed Vagrant, VirtualBox, and Git. Once you are up to that point and ready to go, we just need to pull down the code to deploy our Docker sandbox using git clone first:
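A sketch of the clone step follows; the repository URL below is a placeholder, so substitute the actual URL given in the initial post:

```shell
# Clone the sandbox code (replace the URL with the one from the original post)
git clone https://github.com/<your-account>/virtualbox-docker-sandbox.git
```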


Next, change directory into the folder cd virtualbox-docker-sandbox and then run the vagrant up command:
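Those two steps together:

```shell
# Move into the cloned repo and build the sandbox VM from its Vagrantfile
cd virtualbox-docker-sandbox
vagrant up
```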


That’s going to run for a while, and you will see after about 10 minutes that you’re back at the prompt. Now we can use vagrant ssh dockersandbox to get onto the console and confirm our environment:
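Connecting is one command, using the machine name defined in the Vagrantfile:

```shell
# SSH into the Vagrant-managed VM named dockersandbox
vagrant ssh dockersandbox
```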


You’ve got the nested instance running, and as you may have read in the file from the code we cloned, we have installed docker as well as docker-compose and docker-machine, which will help us test the waters on a few different Docker features:
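From inside the sandbox you can confirm the toolchain is present; each of these simply prints a version string if the install succeeded:

```shell
# Confirm each tool is installed and on the PATH
docker --version
docker-compose --version
docker-machine --version
```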


Now you can start up some Docker containers without having touched your local machine at all, which ensures a clean sandbox deployment that can be quiesced using the vagrant suspend dockersandbox command:


To bring the environment back up, just run the vagrant resume dockersandbox command.
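The suspend/resume cycle looks like this:

```shell
# Pause the VM, preserving its in-memory state on disk
vagrant suspend dockersandbox

# Later, pick up exactly where you left off
vagrant resume dockersandbox
```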

Running Docker as Sudo

One thing to note is that we deployed the Docker runtime as root, which means we must always run the Docker commands with sudo. This is a best practice in general. If you try to run docker pull or any other docker command without sudo, you will see errors about the daemon not being available:


Run sudo docker pull nginx instead of docker pull nginx and you will see a marked difference in the results:
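Side by side, the failing and working forms look like this (the exact error text varies by Docker version):

```shell
# Without sudo: the client cannot reach the daemon's socket and errors out
docker pull nginx

# With sudo: the pull proceeds and the image layers download
sudo docker pull nginx
```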


Happy Dockering!