Migrating MySQL to AWS RDS Aurora

Let’s just say that you have a standalone MySQL instance that you want to put on something more resilient. You’ve got a few choices on how to do that, and Amazon Web Services RDS using Aurora DB is a great place to host it. Here are the steps I’ve taken to migrate from a Digital Ocean one-click WordPress instance to running the data on Aurora DB.

Things to think about during this transition include:

  • Single AZ (Availability Zone) or Multi-AZ deployment
  • RDS instance size (price and performance will matter)

One of the great things about AWS is that you can scale dynamically to meet your needs.  There is always a tradeoff (price/performance/resiliency) in your architecture, but that’s a different discussion that we can have in another post.

Cost and performance of operating RDS

AWS makes it super easy to run infrastructure, but my shift from $10 a month on Digital Ocean to a Multi-AZ RDS instance is a choice of performance over cost. It’s a tradeoff that I chose to make. Make sure that you are fully aware of the implications of your database hosting choice.

Prerequisites:

  • AWS account
  • AWS RDS Cluster configured
  • Root credentials for source and target databases

Migrating MySQL to RDS Aurora DB using mysqldump

The full instructions as provided by AWS are here, but these are my quick notes on the transition to prove out that it works as simply as AWS says it does.

First, find out your current RDS cluster endpoint address by going to your RDS console:

01-rds-cluster

We can see that in this case, there is a writer endpoint and a second reader endpoint. We will use the writer endpoint to migrate the data:

02-cluster-nodes
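If you prefer the command line, the same information is available through the AWS CLI. Here’s a minimal sketch, assuming the CLI is installed and configured with access to the cluster; the cluster identifier mytargetdb is a placeholder for your own:

# Show the writer (Endpoint) and reader (ReaderEndpoint) addresses for the cluster
aws rds describe-db-clusters \
  --db-cluster-identifier mytargetdb \
  --query 'DBClusters[0].[Endpoint,ReaderEndpoint]' \
  --output table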

I’m using the root account on both the source and target, so make sure you have the credentials for both instances to be able to do the same.

The export/import one-liner is as follows. Replace the CAPITALIZED sections with the appropriate information:

mysqldump -u root -pSOURCEPASSWORD --databases SOURCEDATABASE --single-transaction --compress --order-by-primary | mysql -u root -pTARGETPASSWORD --port=3306 --host=mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com
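Before moving on, it doesn’t hurt to confirm that the dump actually landed on the target. A quick sanity check, using the same placeholder values as above:

# Confirm the database exists on the Aurora cluster and that its tables came across
mysql -u root -pTARGETPASSWORD --host=mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com -e "SHOW DATABASES;"
mysql -u root -pTARGETPASSWORD --host=mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com -e "SHOW TABLES FROM SOURCEDATABASE;"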

Once you’ve created the database by populating it from the source data, you have to create a user and allow access to the database. Launch the MySQL client to attach to your target database:

mysql -u root -pTARGETPASSWORD --host=mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com

Now you can create the user and give the appropriate admin privileges on the database needed. Replace the CAPITALIZED sections with the appropriate information:

grant all privileges on YOURDATABASE.* to 'YOURUSER'@'%' identified by 'YOURPASSWORD';
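One caveat worth noting: the IDENTIFIED BY clause inside GRANT was deprecated in MySQL 5.7 and removed in MySQL 8.0, so on an Aurora cluster running a newer engine version you may need to create the user first and then grant the privileges. A sketch of that variant, using the same placeholders:

# On newer MySQL-compatible engine versions: create the user, then grant on the database
mysql -u root -pTARGETPASSWORD --host=mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com -e "
  CREATE USER 'YOURUSER'@'%' IDENTIFIED BY 'YOURPASSWORD';
  GRANT ALL PRIVILEGES ON YOURDATABASE.* TO 'YOURUSER'@'%';
  FLUSH PRIVILEGES;"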

Once you’ve done that, simply point your application towards the new database using the configuration file. For a WordPress database connection, this is found in your wp-config.php file in the root folder of your site.
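For reference, here is a rough sketch of that change on a one-click style install, assuming wp-config.php lives at /var/www/html/wp-config.php (adjust the path for your own site, and back the file up first); the endpoint value is the same placeholder used above:

# Keep a backup copy, then swap the DB_HOST value for the Aurora writer endpoint
cp /var/www/html/wp-config.php /var/www/html/wp-config.php.bak
sed -i "s/define( *'DB_HOST'.*/define('DB_HOST', 'mytargetdb.cluster-uniquename.us-east-1.rds.amazonaws.com');/" /var/www/html/wp-config.php

You would make the same kind of edit to DB_NAME, DB_USER, and DB_PASSWORD if those values change as part of the move.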

I know it works because you’re reading this on my site, which was transferred from an all-in-one WordPress deployment on Digital Ocean and is now running on RDS inside AWS.




Taking a DNS Domain for a Drive on AWS Route53

Every once in a while I revisit a lot of the IT assets I’ve got on the go. I use GoDaddy for a lot of my domain services, and despite some challenges, it has done me well so far. One of the things that I do find limiting is the lack of a programmatic way to manage DNS for a GoDaddy domain.

They have openly stated that they want you to use the DNS manager portal that they offer, and it does work quite well. That only works if a human touches every resource you spin up, though. And as you can imagine, I prefer to automate all the things!

It is worth noting that running a zone on Route53 is not free. It will cost you for the basic zone hosting, plus some cost for queries if you get a significant number of them. There is a relatively simple Route53 cost calculator that Amazon provides here which can help give you an idea of what to expect for cost.

For this post, I assume that you’ve already got an AWS account. We are going to get right to the good stuff and set up our DNS domain to migrate away from GoDaddy. This could also work for whomever you host your DNS with, but obviously the instructions would vary slightly.

Creating a Route53 DNS Zone

Let’s start with a very simple switch from a basic DNS zone away from GoDaddy as my example. First, we need to go to the AWS Route53 site:

route-53-setup

Click on the Get Started Now button, and you’ll be brought to a fresh page with a handy Create Hosted Zone button:

route-53-new-hosted-zone

Now, we will be able to name our zone and select the type. You have the option of a Public Hosted Zone or a Private Hosted Zone for Amazon VPC. In this case, I will set up my zone for a public DNS zone:

new-zone

As we set up the new hosted zone, you will see all of the new settings, including the NS (name server) records and the SOA (Start of Authority) record:

route-53-new-ns
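If you would rather script this part, the hosted zone can also be created from the AWS CLI. This is just a sketch; example.com, the caller reference, and the zone ID are all placeholders, and it assumes the CLI is already configured:

# Create the public hosted zone (caller reference just needs to be unique)
aws route53 create-hosted-zone --name example.com --caller-reference "zone-setup-$(date +%s)"

# List the name servers assigned to the new zone (zone ID is a placeholder)
aws route53 get-hosted-zone --id Z1234567890ABC --query 'DelegationSet.NameServers'

The second command returns the four name servers assigned to the zone, which are the same values we are about to plug into GoDaddy.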

In the GoDaddy DNS Manager for the domain, I will edit my name server settings, which brings up the default GoDaddy-hosted configuration:

godaddy-zone-before

Let’s change it from Standard to Custom and add our NS record information from the Route53 environment. Route53 assigns four name servers to the zone, and it’s worth entering at least three or four of them because you never know when a name server could go sideways on you:

godaddy-update-ns

Save the changes and confirm that they look as you expect:

godaddy-zone-after

Now that we have that configured, we can create a new record in the zone, which will let us host a server. I’ll use an A record for the first test. I’m leaving the parameters at their defaults and customizing only the name and IP address; the sample is an IPv4 address using the default Simple routing policy:

create-a-record
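The CLI equivalent of that record creation looks roughly like this; the zone ID, record name, and IP address are placeholders for your own values:

# Create the same A record from the command line
aws route53 change-resource-record-sets --hosted-zone-id Z1234567890ABC --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "test.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}'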

The last thing we have to do is wait for the update to propagate. Luckily, DNS changes tend to spread rather quickly; this could take as little as a few minutes, but for complete propagation around the world we should assume it could take up to 24 hours as an upper limit.

Using a simple ping command, we can see now that the record is updated:

ping-record
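A dig query is another handy check, and you can point it at one of the Route53 name servers directly to confirm the record is being served even before your local resolver’s cache catches up; the record name and name server below are placeholders:

# Check through your normal resolver, then directly against a Route53 name server
dig +short A test.example.com
dig +short A test.example.com @ns-123.awsdns-45.com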

This is a great lead-in to some future posts where I’ll show how to do the same thing programmatically using the Route53 API. That is the real reason I began the exercise: I was looking to add publicly registered DNS records into a server deployment workflow.




Double-Take Move – vSphere to Hyper-V migration

For anyone who has had to move virtual guests from a VMware vSphere environment to a Microsoft Hyper-V environment, you know the challenges that are involved.

Now, if you read my blog you’ll find it surprising that I’d be moving content from vSphere to Microsoft Hyper-V, since I haven’t highlighted this as a need before. That’s true for my experience, but I’m dabbling in some multi-hypervisor work lately, mostly around KVM and Hyper-V, to prepare for some proof-of-concept testing.

Double-Take Move for the Datacenter

Since I’m a Double-Take customer already, it seemed like the first place to go for a commercial tool to get this task underway. I’ve tried the P2V export, import, and Microsoft migration tools, but no matter which way I tried, it was very manual and very error-prone.

If you know me, the only thing I like manual is my car transmission. This was another reason to dive into a product to get the task done: it answered a specific need for limited downtime and complexity.

Double-Take Move is also attractive because it does a test drive to ensure that the migration will complete successfully. This feature in Double-Take Availability has been a life saver, and I use it for my BCP testing regularly.

Integration with System Center 2012

The bigger plan should definitely involve full integration with the Service Catalog, so it’s a nifty feature of Double-Take that it plugs directly into Microsoft System Center 2012 to create your migration request.

By connecting the DT servers into System Center 2012, we can use the Orchestrator workflows to trigger the migrations.

how-it-works

There’s a video here that gives an overview of the process, which will be helpful.

Double-Take Move to the Cloud

If on-premises to the cloud is your challenge, you’re in luck too. Double-Take Move also migrates from VMware vSphere and Microsoft Hyper-V to Microsoft Azure, which is a great option for near-zero-downtime migrations.

I have this next on my planning list to test out, so I’ll post separately on how that process goes. In the meantime, you can view the Vision Solutions videos on each platform migration process at their site:

private-to-cloud

There is a whitepaper available to fully describe the features that you can download here: http://www.visionsolutions.com/webforms/WPD-Move-Dynamic.aspx?CampaignId=701600000005yg6&WhitePaper=WP_Migration_E.pdf

The full datasheet on the System Center integration is also available here: http://www.visionsolutions.com/downloads/Product-Sheets-E/DS_DTMove-MS_System%20Center_E.pdf

For your specific fit, it is probably ideal to contact the folks at Vision Solutions directly to ensure that you choose the best product licensing and get the most out of your Double-Take experience. I’ve used their professional services team for a previous engagement and I really stand behind what they can do.

Hopefully this will be helpful if you are preparing to evaluate the effort and cost for migrating from one virtualization platform to another. I can tell you the reduction in time will be felt in a very positive way by your IT Operations staff.

I’ll have more info soon because of some really cool things coming up in 2014 from the Vision Solutions team so if you want to find out more, reach out to me (eric – at – discoposse – dot – com) and find me on Twitter of course @DiscoPosse and I’ll do my best to answer your questions and connect you with the experts to get you where you need to be!