EC2Instances.info – A Handy Interactive Guide to AWS EC2 Instance Sizing and Pricing

One of the most challenging aspects of the AWS ecosystem is navigating the pricing and sizing options for EC2 instances. Luckily, there is a rather nifty tool out there, created by a community member and hosted on GitHub, which you can find at http://ec2Instances.info

The ec2Instances.info site lets you dig around in all of the different configuration options, including (at the time of this blog):

  • EC2 Instance types by region
  • Reserved Instance options
  • RDS Instance types (also at http://rdsinstances.info)
  • Pricing for On-Demand licenses such as Windows and SQL Server
  • Hourly/Daily/Weekly/Monthly/Yearly pricing detail

You can also see and contribute to the code directly on GitHub by visiting the source repository.

This is a very helpful resource that you should bookmark for reference. The project is maintained by 53 contributors (at the time of this blog) and has well over 1,000 stars on GitHub.

You can see from the column selector that there is a lot of potential data to show.

Big thanks go out to Garret Heaton for putting this together and sharing it out with the community.  Nicely done!




Attaching Turbonomic to your AWS Environment

While it’s a seemingly simple task, I wanted to document it quickly and explain one of the very cool things about attaching your Turbonomic instance to AWS. For the latest release of the TurboStack Cloud Lab giveaway, we wanted to move further up the stack to include the AWS cloud as a target.

Even without the TurboStack goodies, you can already attach to AWS and get back great value in a few ways.  Let’s see the very simple steps to attach to the cloud with your control instance.

Attaching to an AWS Target

First, log in to your server UI with administrative credentials that will allow you to add a target.  Go to the Admin view and select Target Configuration from the workflows panel:

[Screenshot: 01-add-target]

Click on the Add button and enter the following details:

  • Address: aws.amazon.com
  • Username: Your AWS Access Key ID
  • Password: Your AWS Secret Access Key
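The Username and Password fields take an AWS IAM access key pair. If you don’t already have one, a quick sketch of generating a key pair from the AWS CLI would look like this (the turbonomic user name is just a hypothetical example; use an IAM user with permissions appropriate for your environment):

    aws iam create-user --user-name turbonomic        # hypothetical dedicated user
    aws iam create-access-key --user-name turbonomic  # returns an AccessKeyId and SecretAccessKey

The AccessKeyId and SecretAccessKey values in the response go into the Username and Password fields, respectively.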

[Screenshot: 02-aws-target-type]

Next, click the Add button in the Pending Targets area below the form, then press the Apply button at the very bottom. That will take you to the next step, which validates your configuration:

[Screenshot: 04-added-validation]

Now that you are validated, you will begin to see data populating as it is discovered from AWS. The discovery cycle runs every 10 minutes by default, and as each entity is discovered, it is polled asynchronously from then on for continuous gathering of instrumentation.

In your Inventory view, you will see the addition of AWS entities under the Datacenters, Physical Machines, Virtual Machines, and Applications sections:

[Screenshot: 05-aws-inventory]

If you expand one of the Datacenters, you will see that it is defined by Regions (example: ap-northeast-1) and then underneath that, you can expand to see the Availability Zones represented as Hosts:

[Screenshot: region-az]

Let’s expand Applications and Virtual Machines, where you can see the stitching of the entities across all of the different entity types:

[Screenshot: 06-aws-consumption-path]

You can see that we have a Virtual Machine (EC2 instance) named bastion, which also has an Application entity, and you can see that it consumes resources from the us-east-1a AZ, with an EBS volume in the same AZ.

You can also see the cumulative totals under the Virtual Machines list to get a sense of how many instances you have running across the entire AWS environment. The running instances are counted in brackets at the end of each region listing. How cool is that?! As someone who constantly forgets about test instances running across numerous regions, this has been a saving grace for me.
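If you ever want a similar per-region count of running instances straight from the AWS CLI, a quick sketch (assuming the CLI is installed and configured with your credentials) would be:

    # Count running EC2 instances in every region
    for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
      count=$(aws ec2 describe-instances --region "$region" \
        --filters Name=instance-state-name,Values=running \
        --query 'length(Reservations[].Instances[])' --output text)
      echo "$region: $count running"
    done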

You can also use the Physical Machines section to view each region. When you drill down into the PMs section, you will see the AZ listings underneath.

[Screenshot: pm-counters]

That’s all it takes to get your AWS environment attached. In future posts, we will dive into other use cases, including application-level discovery and much more that you can do on the AWS public cloud.




Unable to Delete Empty Elastic Beanstalk S3 Bucket

For those who are doing AWS work across different projects, you will most likely store something on S3 (Simple Storage Service), such as templates and logs. Each AWS service has the ability to write its configuration and logs to S3, and this is usually part of the setup wizard.

Sometimes the permissions set by the AWS wizard may leave you with some challenges.  A common and simple example is when using AWS Elastic Beanstalk.  When you clear out an Elastic Beanstalk configuration, the S3 bucket is left behind because it is not deleted as part of the removal process.

Normally, you would just select the bucket, empty it, and delete it. Here is what happens instead. First, select your bucket:

[Screenshot: s3-eb-bucket-name]

Once it is selected, we then choose the Delete Bucket option from the Actions button:

[Screenshot: 01-s3-delete-bucket-button]

Then we are disappointed by seeing this error message:

[Screenshot: 03-s3-delete-bucket-error]

Access Denied?! That shouldn’t be the case. I’m using an account that does have elevated privileges, and I have even attempted it using the root-level account for my entire AWS environment. NOTE: It’s not recommended to use the root account, but I did try it to prove the point.

Fixing the S3 Bucket Access Denied Issue

The issue is a simple one as it turns out.  Open up the properties for the bucket and click the Edit bucket policy button:

[Screenshot: 04-s3-edit-bucket-policy-button]

When the bucket is created by the system, it is created with a specific bucket policy that has been set to deny the s3:DeleteBucket action:

[Screenshot: 05-s3-deny-perms]
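For reference, the generated policy contains a statement along these lines (the bucket name and account ID below are placeholders; your Sid and ARN will differ):

    {
        "Effect": "Deny",
        "Principal": {
            "AWS": "*"
        },
        "Action": "s3:DeleteBucket",
        "Resource": "arn:aws:s3:::elasticbeanstalk-us-east-1-123456789012"
    }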

That’s a safety measure so that we don’t accidentally remove the contents which could be driving an active Elastic Beanstalk configuration.  Change the Deny effect to Allow in the JSON editor and save the policy:

[Screenshot: 06-s3-allow-perms]

Once you’ve saved the policy, go ahead with the Delete bucket process under the Actions menu again, and this time you will see a much more appropriate response: a Done result in the results window.

[Screenshot: 07-s3-delete-bucket-success]
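If you prefer the command line, the same cleanup can be done with the AWS CLI once the policy allows it (same placeholder bucket name as above):

    aws s3 rm s3://elasticbeanstalk-us-east-1-123456789012 --recursive   # empty the bucket
    aws s3 rb s3://elasticbeanstalk-us-east-1-123456789012               # remove the bucket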

This is one of those oddities around saving ourselves from ourselves by making sure we don’t accidentally delete things.  Sometimes we really do want to delete stuff 🙂




Run the AWS Shell Quickly and Easily in a VirtualBox Instance Using Vagrant

Because we like the Martha Stewart pre-baked oven version of things, this is the shortcut to running the AWS CLI without having to dirty up your own local environment.

Why use a Sandbox?

While developing entirely on your local machine is handy, there can often be configuration issues and conflicts with other development libraries. In my earlier post, you saw how to configure a basic sandbox environment. I’m assuming that you’ve already got the following installed, as documented in that post:

  • Git client
  • Vagrant
  • VirtualBox

This is how to deploy and configure the AWS Shell environment using a sandbox server in just a couple of simple steps!

Clone the GitHub Repo

From your command line, you can just run git clone https://github.com/discoposse/virtualbox-aws-shell-sandbox.git to pull down all of the Vagrant code:

[Screenshot: git-clone]

Next, we change into the directory and run vagrant status to confirm that the configuration is ready to run:

[Screenshot: cd-vagrant-status]

We can see that it says not created for the machine, so let’s run vagrant up and get this party started!

[Screenshot: vagrant-up]
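For copy-and-paste convenience, the whole sequence up to this point is just the following (the directory name matches the repo name by default):

    git clone https://github.com/discoposse/virtualbox-aws-shell-sandbox.git
    cd virtualbox-aws-shell-sandbox
    vagrant status   # the machine should show as "not created"
    vagrant up       # build and provision the sandbox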

Once the process is completed, you will see this message:

[Screenshot: vagrant-done]

What’s in the GitHub Repo to make AWS CLI work?

The basic build of a sandbox machine was documented in my previous post, and the secret sauce for this one is really quite easy. The reason I like this approach is that it gives me a super simple deployment of a purpose-built machine to test out AWS CLI work. All with two simple statements:

sudo apt-get install -y python-pip
sudo pip install aws-shell

Yes, it is just that easy.
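Under the hood, the Vagrantfile essentially wires that install script into a shell provisioner. A minimal sketch of what that looks like (this is not the repo’s exact file; the base box and details here are assumptions):

    Vagrant.configure("2") do |config|
      config.vm.define "awssandbox" do |node|
        node.vm.box = "ubuntu/trusty64"   # assumed base box
        node.vm.provision "shell", inline: <<-SHELL
          apt-get update
          apt-get install -y python-pip
          pip install aws-shell
        SHELL
      end
    end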

Once the machine completes the installation, you are ready to log in to the console and give it a try. We do this by running vagrant ssh awssandbox:

[Screenshot: vagrant-ssh]

Et voila! You are now ready to start up the AWS shell.

Configuring and Running AWS Shell

The environment is all installed. Now we need to run the AWS shell, which will do some basic configuration at first launch:

[Screenshot: aws-shell-first-launch]

After the console comes up (it takes about 30 seconds to build the autocomplete cache), we have to configure it to use our AWS credentials.

As we start to type the configure command, you can see the autocomplete kick in:

[Screenshot: configure-autocomplete]

The three pieces of information you need to configure the AWS shell are your Access Key, your Secret Access Key, and the Region name you are working within. Just like with the web client, you have to choose a default region to explore. That can be changed again at any time by re-running the configure command:

[Screenshot: aws-shell-configured]
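The exchange looks roughly like this (keys redacted, and your region will vary):

    aws> configure
    AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
    AWS Secret Access Key [None]: ****************************************
    Default region name [None]: us-east-1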

We are all set with our configuration, and you can start typing away on the AWS shell commands. There is an autocomplete function on everything, and you can even scroll up and down when the autocomplete suggestions come up:

[Screenshot: command-autocomplete]

For example, we can list the regions by using the ec2 describe-regions command, which outputs a JSON response of the available regions:

[Screenshot: ec2-describe-regions]
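Trimmed down, the response has roughly this shape:

    aws> ec2 describe-regions
    {
        "Regions": [
            {
                "Endpoint": "ec2.us-east-1.amazonaws.com",
                "RegionName": "us-east-1"
            },
            ...
        ]
    }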

We can also use the ec2 describe-instances command to list any active instances in this region under our account:

[Screenshot: ec2-describe-instances]

The output will span a couple of screens in the prompt window, but luckily we can scroll up or down and copy the content out of the shell to evaluate in a text editor if we so desire.
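If the full listing is more than you need, the --query option from the regular AWS CLI works here as well; for example, this sketch trims the output down to just instance IDs and their states:

    aws> ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]'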

There are literally hundreds of commands, but the main thing we wanted to see here was how to just get the basic configuration up and running so that you can start exploring.

To exit, you can use F10 or Ctrl-D to go back to the Linux shell prompt.

Now you have your AWS shell environment ready to save and reuse as needed, without having to install anything in your local environment. Another advantage is that you can snapshot this machine, run it on any platform that supports VirtualBox, and not worry about versioning or OS-level dependencies at all.
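When you are done, the usual Vagrant lifecycle commands apply, so you can checkpoint the configured sandbox and bring it back whenever you need it (the snapshot name is arbitrary):

    vagrant halt awssandbox                            # power the sandbox off
    vagrant snapshot save awssandbox clean-aws-shell   # checkpoint the configured state
    vagrant up awssandbox                              # bring it back later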