
Migrating websites from dedicated cpanel servers to Rackspace Cloud



As many in the web design industry know, if you're in the web design or development business, you're also de facto in the hosting/domain business. Some web companies choose to pass that role on to the client, advising them where to buy hosting. Other web design agencies get a reseller account with a control-panel host (e.g. Plesk or cPanel) such as HostGator or Bluehost. That was our story many years ago: we got a reseller account at a cPanel host. What we ran into there was a lack of control. We weren't able to install the PHP packages to suit our needs and had to dance around server config such as upload limits. One day we had a huge fiasco with a shared IP address that was directly linked into a newsletter mailing, and we couldn't get the IP address back. That stress was enough for us and shared cPanel hosting.

So we moved to dedicated hosting. That brought a whole other set of challenges. We had to learn all about full administration of a Linux server. For a while we continued to use cPanel, but eventually we got frustrated with cPanel's limitations too and just went to manual configuration. When we surveyed clients, we found that the majority didn't even use the control panel and some found it a hassle. So to make changes to hosting config, clients would just submit tickets and we'd take care of it. We did find a very small percentage that still wanted a control panel, though, so bringing that back somehow was always in the back of our minds.

Another issue with dedicated hosting (where you rent a full box at a data center) is having a single point of failure. We had single-disk servers to start with and then felt it would be better to move to RAID systems. We had RAID systems with Layered Tech, and that was a nightmare. They seemed to just continuously reuse old hard drives. We had a major Windows server crash three times in six months. That was enough for us there. We also had dedicated servers with Server Beach. Their support was better, but unfortunately we had more hardware failures on RAID 5 than we did on single-disk servers.

Hardware was the main aspect that prompted us to investigate cloud hosting. In Cloud Sites, serving load is spread across hundreds of servers; if one goes down, it can easily be replaced without downtime. With dedicated servers we had to know and depend on our backup mechanisms as if our lives depended on them. If a server went down, it meant 48 hours of completely reloading all files and sites onto a replacement. So being able to rest a bit more peacefully in regards to downtime was alluring to us.

Cloud hosting would also reintroduce a limited control panel for those clients who wanted one, while still giving us the ability to manage advanced settings at a higher level. Even though we had staff in house who knew how to administer Linux servers, having a control panel also meant that folks in admin could set up or delete accounts if needed.

How to migrate from Linux/cPanel to Rackspace

This isn't meant to be a definitive guide on how to move websites to Rackspace Cloud Sites, but it is a compilation of suggestions we accumulated along the way. It's also fairly high-level rather than detailed; you need a base of knowledge to do this. If you can't make these suggestions work, you may want to hire a more experienced technician to do it for you.

To us it seemed this would be a phased migration.

  • Step One: Migrate all DNS zones to Rackspace AS THEY ARE. This means you repoint your hosting company's name servers to Rackspace's nameserver IPs, and all your clients' DNS will then be served from Rackspace, BUT this is before you've moved ANY of the sites over. The DNS will actually point all of the domains BACK to your current cPanel/dedicated servers.
  • Step Two: After that, migrate all the actual site files and databases.
  • Step Three: Switch the A record for each website over from your old dedicated server IP to the Rackspace IP, essentially taking the site live.

If you manage DNS through a third party like your domain registrar then you can probably skip the whole DNS phase.

Migrating DNS

If you have a bare dedicated server or a cPanel dedicated server, chances are one of your dedicated servers is also serving as your primary name server. If you're running cPanel, you're probably using BIND (even if you don't know it); if you don't have cPanel, then you know you're running BIND (because you have to manually edit named.conf to add domains).

So why do the DNS first? We use ns1/2.ourhostingcompany.com as our name servers. We have hundreds of domains registered. We've registered most of them, but 10-20% are registered by the clients themselves. They are all pointed at ns1/2.ourhostingcompany.com, though. We control that domain, and we get to set the IP addresses of the name servers for ns1/2.ourhostingcompany.com. With the move to Rackspace, we could spend countless hours updating all the domains to point to ns1/2.stabletransit.com, or we could just leave them and reroute the traffic to Rackspace's nameservers.

We actually wanted to migrate all the website files and databases first and then the DNS. The first way we tried this essentially failed. We went into our named.conf and made our nameserver an open forwarder (we set forward {any;}). The plan was to move the site files and database and make sure the website was working at Rackspace; then, after we'd confirmed each site had successfully migrated, go into its zone file on our name server and say "forward all DNS requests for this domain to Rackspace." This failed because Rackspace's name servers thought they were authoritative while our name servers still thought they were authoritative, so there was confusion.

The next approach was to ensure all our domain zones were replicated at Rackspace and then completely repoint ns1/2.ourhostingcompany.com to the IP addresses of Rackspace's nameservers. And THEN we'd move the sites.

Getting ready

This was a huge task. Again, we have hundreds of accounts. The first thing we had to do was get someone to manually set up a client in Rackspace for each client we had. Rackspace, if you're reading this, please create an API that lets you programmatically add clients and websites to Rackspace Cloud Sites; it would have sped things up for us massively. Having our admin staff do this in their spare time probably took 2-3 weeks. They would create the client, then create the website under the client, then create the database, the database username, and the database password, and even take note of the IP address and hostname for the database. Most of our sites have databases, so we needed to do this.

There's a control panel in Rackspace for all of this and it doesn't need much explaining. Once all that groundwork was laid we had to figure out how to transfer all the custom records in our DNS server to Rackspace's nameserver.

For example, some of our clients have their MX records set to Google Apps mail, and some use paid third-party mail services. Some clients have custom CNAMEs, etc. We didn't want human beings to have to sit there and manually copy and paste thousands of domain record entries from our nameserver over to Rackspace's control panel (it's slow if you're doing this type of repetitive task). Luckily, Rackspace DOES have a DNS API. It's not hugely documented.

The actual code to migrate DNS from BIND to Rackspace via the API

The CoolGeex code provides the rackDNS.php class and also a sample.php showing how to interact with the API. Using the sample.php, we wrote our own script to do what we wanted. The CoolGeex code didn't work immediately for us: we found there was some issue with curl and HTTPS. We had to set a curl parameter to ignore HTTPS certificate problems and then it worked fine.

The idea was essentially to go through each domain, delete the default zone created by Rackspace, and replace it with our zone file. Of course, if you run BIND, all your zone files are located in /var/named or /var/named/chroot/var/named. The problem is that Rackspace only accepts FQDN-format zone files as input, so if some of your zone files are relative, they won't work. E.g., BIND will let you point multiple domains at one zone file and just say

A xx.xx.xx.xx
sub CNAME whatever.domain.com ;subdomain

But Rackspace needs absolute zone files like this:

domain.com A xx.xx.xx.xx
sub.domain.com CNAME whatever.domain.com

If you have a mixture of relative zone files there is a way to get bind to spit all the zone files back out as absolute zones. Here's a link on how to do that: http://ubuntuforums.org/showthread.php?t=903651

Basically it creates all your nameserver's domain zones in one file. You can then use PHP to loop through those zones and work with the Rackspace DNS API.
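To illustrate what that conversion does, here's a minimal sketch (our own, not from the forum thread) that qualifies relative owner names with the zone origin. It assumes simple "owner TYPE rdata" lines; real zones with $ORIGIN, TTL columns, or blank-owner continuation lines are better handled by BIND itself via the forum method above.

```shell
# expand_zone: append the origin to any relative owner name (illustrative only)
expand_zone() {
  origin="$1"   # e.g. domain.com
  awk -v origin="$origin" '
    /^[[:space:]]/ { print; next }            # blank owner (repeat) left as-is
    NF == 0 || $1 ~ /^;/ { print; next }      # pass blanks/comments through
    {
      if ($1 == "@") $1 = origin "."                # @ means the zone apex
      else if ($1 !~ /\.$/) $1 = $1 "." origin "."  # qualify relative names
      print
    }'
}
# e.g.: expand_zone domain.com < relative.zone > absolute.zone
```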

The second issue we had was account IDs. If you create a sub-client in RS and then a domain under that client, connecting to the DNS API and asking "show me all the domains" will only show the ones in YOUR account, not sub-client domains. To see sub-client domains, you have to connect to the API using the sub-client's ID number. There's more info on this topic at the donetexec link above. So consequently, what we needed was another file that mapped all our domains to client IDs at Rackspace.

Once we had these two pieces we could have a PHP script:

  1. Read in all the domains from the bind dump and start looping through them
  2. For each domain connect to the dns api using the client's id
    1. Wipe what RS had there by default
    2. Load our zone file

I've provided the code that we ran with the coolgeex rackDNS.php.

<?php
/**
 * Rackspace DNS PHP API sample.php
 * @author Alon Ben David
 * @copyright CoolGeex.com
 */
require_once "rackDNS.php";

$rs_user = 'YOURRSUSERNAME';
$rs_api_key = 'YOURRSAPIKEY';

//first load all the codes we have for domains
$codes = array();
require "codes.php"; //fills $codes["domain.com"] = CLIENTID; (sample below)

//now lets load the zones into an array
//zones.dump is the absolute-format dump of all zones from bind
$tempstring = file_get_contents("zones.dump");
$temp = explode(";", $tempstring);

//now lets rework the array with the key as domains
$zones = array();
foreach($temp AS $zone) {
    //get the domain name from the first bit
    $domain = substr($zone,0,strpos($zone," "));
    $domain = str_replace(array(" ","\t","\n","\r"),"",$domain);
    $domain = substr($domain,0,-1); //strip the trailing dot
    if ($domain != "") { $zones[$domain] = $zone; }
}

$domainsdone = array();
$domainsmissingcode = array();
$domainserror = array();
if ( isset( $_COOKIE["domainsdone"] ) ) { $domainsdone = explode("|",$_COOKIE["domainsdone"]); }
if ( isset( $_COOKIE["domainsmissingcode"] ) ) { $domainsmissingcode = explode("|",$_COOKIE["domainsmissingcode"]); }
if ( isset( $_COOKIE["domainserror"] ) ) { $domainserror = explode("|",$_COOKIE["domainserror"]); }

//now lets loop through the zones and see which domains we have codes for.
//if we have a code, clear all records in that zone and load the zone file we have
$count = 0;
$log = "";
foreach($zones AS $domain => $zone) {
    if( isset($codes[$domain]) AND !in_array($domain,$domainsdone) AND !in_array($domain,$domainserror) ) {
        //we've got the code, now lets do this.
        //establish connection
        $dns = new rackDNS($rs_user,$rs_api_key,'US',$codes[$domain]); //($user, $key, $endpoint = 'US') $endpoint can be UK or US
        //get the domains for this code
        $tempdomains = $dns->list_domains(50,0);
        $domainslist = $tempdomains['domains'];
        $domainID = false;
        $domainName = "";
        //loop through the domains for this account and ensure we grab the right info for the right domain
        foreach($domainslist AS $theirdomain) {
            if (strtolower($theirdomain['name']) == strtolower($domain)) {
                $domainID = $theirdomain['id'];
                $domainName = $theirdomain['name'];
            }
        }
        //if $domain and $domainName match, delete the default zone RS created
        if( strtolower($domain) == strtolower($domainName) ) { $delete = $dns->delete_domains($domainID); }
        sleep(5); //wait a bit because deleting takes a while
        //now lets import it according to our zone
        $import = $dns->domain_import($zone);
        if($import['status']=='COMPLETED' OR $import['status']=='RUNNING') { $domainsdone[] = $domain; }
        else {
            $domainserror[] = $domain;
            //collect debug output; we echo it after setcookie() so headers aren't already sent
            $log .= "Working with ".$domain." whose client has a code of ".$codes[$domain]." and whose ID is ".$domainID."\n\r";
            $log .= "All of their domains are:\n\r".print_r($domainslist,true);
            $log .= "This is the result of the delete:\n\r".print_r($delete,true);
            $log .= "This is the result of the import:\n\r".print_r($import,true);
        }
        $count++;
        if ($count >= 500) break;
    } elseif (!in_array($domain,$domainsmissingcode) AND !in_array($domain,$domainsdone) AND !in_array($domain,$domainserror) ) {
        $domainsmissingcode[] = $domain;
    }
}

setcookie("domainsdone", implode("|",$domainsdone), time()+86400);
setcookie("domainsmissingcode",implode("|",$domainsmissingcode), time()+86400);
setcookie("domainserror",implode("|",$domainserror), time()+86400);

echo '<html><head><meta http-equiv="refresh" content="30"></head><body>';
echo "<pre>";
echo $log;
echo "Domains that are done:\n";
print_r($domainsdone);
echo "\n";
echo "No codes for these domains:\n";
print_r($domainsmissingcode);
echo "Error occurred when importing zones for these domains:\n";
print_r($domainserror);
echo "</pre></body></html>";
Note that you will have to modify this to suit your setup. I'm not going to provide a copy of our domain dump from BIND because that is a very standard file format, but here is a sample of how we loaded the "codes" (the client ID numbers corresponding to each domain):

$codes["domain1.com"] =123123;
$codes["domain2.com"] =123124;

We wrote the script to be executed in a browser, but it could be run by cron. We wrote it to process X domains per run and then stop, and to meta-refresh so it kept running until all the domains had been processed. We decided to store the list of processed domains in a cookie; we just didn't want to mess around with writing files to the server or setting up a database for this one function.

One thing to note here: RS still gave errors on importing some of our domains even though they were in absolute format. What we found was that TTL times below 300 seconds would be rejected. So where we'd set TTLs on our server very low, those domain imports failed until we manually updated the domain dump file with higher TTLs. RS DNS imports failed for other reasons too. We modified the rackDNS.php file to return more detailed error results from the RS DNS API to help diagnose this; it doesn't do that by default. See "show detail" in the RS DNS API documentation for more info.
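The manual TTL fix-ups can be automated. Here's a hedged sketch that raises any per-record TTL below 300 seconds in the dump; it assumes "owner TTL IN TYPE rdata" lines with the TTL in column 2, so adjust it if your dump's layout differs.

```shell
# raise_low_ttls: bump TTLs under 300s up to 300 (column 2 assumed to be TTL)
raise_low_ttls() {
  awk '$2 ~ /^[0-9]+$/ && $2 + 0 < 300 { $2 = 300 } { print }'
}
# e.g.: raise_low_ttls < zones.dump > zones.fixed
```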

Now that DNS had been migrated, what we essentially had was a complete mirror of our DNS server at Rackspace. All the domains still pointed back to our current hosting server, not to Rackspace itself. We then logged into our registrar, went to "register nameserver" and updated the IP addresses of our nameservers to Rackspace's nameserver IPs instead of ours. Everything kept running without a hitch.

Step two: move the files, databases and update config files

This part should be a little easier to understand for most folks who have done a bit of hosting reselling. The simple process is to use FTP to upload the sites, use MySQL to upload the databases, and then update config files so the sites at RS connect to RS's MySQL servers.

We also chose to script this into one step. If you run cPanel there is some consistency in the way your accounts are set up; even if you don't, you likely have some consistency.

This is the basic shell script we used:

#!/bin/sh
#push an account to rackspace: files, database, etc.
#NOTE THAT YOU have to update your server paths correspondingly
if [ -z "${11}" ]; then
echo 'Please supply arguments after the script name in the following order'
echo '$1 = name of the home folder locally'
echo '$2 = username at rackspace'
echo '$3 = password at rackspace'
echo '$4 = the domain name'
echo '$5 = the local database'
echo '$6 = the remote database hostIP'
echo '$7 = the remote database username'
echo '$8 = the remote database password'
echo '$9 = the remote database name'
echo '$10 = the user number at rackspace'
echo '$11 = the remote database hostNAME'
exit 1
fi
#dump the database
mysqldump -uroot -pYOURMYSQLROOTPASS $5 > /home/$1/$5.sql
#upload the database
mysql -h$6 -u$7 -p$8 $9 < /home/$1/$5.sql
#compress everything
cd /home/$1/public_html/
tar cvf /home/$1/thesite.tar *
#upload it
ncftpput -u $2 -p $3 ftp2.ftptoyoursite.com /$4/web/content /home/$1/thesite.tar
#we need to make sure a 'decompress' php file makes it up
ncftpput -u $2 -p $3 ftp2.ftptoyoursite.com /$4/web/content /home/PATHTOWHEREYOURESTORINGTHISFILE/unpack.php
#now run it so the site is decompressed (this also removes the tar and index.html file)
wget http://$4.php5-22.dfw1-1.websitetestlink.com/unpack.php


You need to install ncftpput on your server for this to work. Try yum install ncftp (the package that provides ncftpput) or Google it.

The script needs you to supply all the details as command line parameters. So we kept a spreadsheet of all the websites with columns for things like FTP username, password, database name, MySQL hostname, etc.
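Since everything is passed positionally, that spreadsheet can be exported to CSV and looped over. This is a sketch under assumptions: sites.csv, its column order, and the script name push-to-rs.sh are all placeholders for your own setup.

```shell
# run_migrations: read CSV rows on stdin and invoke the migration script with
# the eleven arguments in the order the script expects.
run_migrations() {
  while IFS=, read -r home rsuser rspass domain localdb dbip dbuser dbpass dbname usernum dbhost; do
    "${MIGRATE_SCRIPT:-./push-to-rs.sh}" "$home" "$rsuser" "$rspass" "$domain" \
        "$localdb" "$dbip" "$dbuser" "$dbpass" "$dbname" "$usernum" "$dbhost"
  done
}
# usage: run_migrations < sites.csv
```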

At first we just did an ncftpput -R, which would recursively upload all the files one by one. We then found this wasn't working too well for speed, so we moved to uploading a TAR file. We got a little creative and wrote a PHP file that we could also upload to the hosting space at Rackspace and then trigger remotely from the script using wget. That file basically unpacks the tar at Rackspace.


<?php
//unpack.php - uploaded alongside the tar and triggered via wget
exec("rm index.html",$out,$retval);
exec("tar --keep-old-files -xvf thesite.tar",$out1,$retval1);
exec("rm thesite.tar",$out2,$retval2);


If the site is large, the tar fails to fully unpack within the 30-second timeout that Rackspace allows. No worries though: that's what the --keep-old-files parameter is for. Keep hitting unpack.php with wget, and each run of the untar will get further, finally unpacking all the files. Some large sites took 11-12 tries to fully unpack, but they did work.

Also note the line in the script that starts with "wget". You'll have to edit that line to whatever websitetestlink.com address RS generates for you. You could make the whole domain a command line parameter.
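Since wget returns success on every hit even when the untar isn't finished, the simplest automation is just to run it a fixed number of times. A small generic helper (our own sketch, not part of the original script) keeps that reusable:

```shell
# repeat_cmd: run the given command N times (we pointed it at wget + unpack.php)
repeat_cmd() {
  n="$1"; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@"
    i=$((i + 1))
  done
}
# e.g. (URL is a placeholder):
# repeat_cmd 15 wget -q -O /dev/null "http://domain.php5-22.dfw1-1.websitetestlink.com/unpack.php"
```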

As I noted in the subheading above, the third part of this phase is to update all the config files. It's likely you're running a lot of WordPress, maybe some Joomla, Magento, Drupal, etc. We got creative and actually included the config file edits in the migration script itself.

Here's some sample code that edits a Joomla configuration.php (caution, your mileage may vary):


cp /home/$1/public_html/configuration.php /home/$1/public_html/tmp/configuration.php
#edit configuration.php
sed -i "/\$log_path/s/'.*'/'\/mnt\/stor3-wc1-dfw1\/MASTERRSACCOUNTID\/${10}\/$4\/web\/content\/logs'/" /home/$1/public_html/tmp/configuration.php
sed -i "/\$tmp_path/s/'.*'/'\/mnt\/stor3-wc1-dfw1\/MASTERRSACCOUNTID\/${10}\/$4\/web\/content\/tmp'/" /home/$1/public_html/tmp/configuration.php
sed -i "/\$host/s/'.*'/'${11}'/" /home/$1/public_html/tmp/configuration.php
sed -i "/\$user/s/'.*'/'$7'/" /home/$1/public_html/tmp/configuration.php
sed -i "/\$password/s/'.*'/'$8'/" /home/$1/public_html/tmp/configuration.php
sed -i "/\$db /s/'.*'/'$9'/" /home/$1/public_html/tmp/configuration.php
#upload config files because we customized those and need to overwrite remote old with new
ncftpput -u $2 -p $3 ftp2.ftptoyoursite.com /$4/web/content /home/$1/public_html/tmp/configuration.php


We're using sed to edit the config file on the fly and then upload the version that is correctly configured for RS. Note that we put this at the end of the script after all the other steps are done. There's probably a more graceful way to do the sed, but we did what we could. This approach could be adapted to the config format of whatever open source script you're running.
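Before pointing the sed patterns at real config files, it's worth sanity-checking them on a single fake line. The hostname below is a placeholder, not a real RS value:

```shell
# Try the $host substitution pattern on a sample configuration.php line
line="public \$host = 'localhost';"
echo "$line" | sed "/\$host/s/'.*'/'mysql1.example.websitetestlink.com'/"
```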


The last thing to do after all this was simply to test that the sites work. Just visit the websitetestlink in your browser. The thing about this whole process is that you'll hit various snags along the way. For example, with the DNS migration, various zones got stuck and we had to fix them. Then with transferring files, sometimes the write permissions on configuration.php would be wrong and we'd have to fix that. It's just troubleshooting at that point.

A special note on SSL. Some sites, of course, will require SSL. There's no need to generate a new CSR from Rackspace and get a new cert; just copy the certificate files from the old server. You'll have to do a bit of research to find out where they are. First, find your httpd.conf. On a cPanel server it may be difficult to know where that actually is (it's been a while since I've worked with cPanel, so I can't recall exactly); a common place is /etc/httpd/conf/httpd.conf. Once you're in that file, check whether the virtual host entry for the domain is there. Otherwise, check at the bottom of the file for the include statement; it's generally going to include a block of files like /etc/httpd/conf/vhosts/*.conf. Once you find the actual vhost file, look in it for an entry for "SSLCertificateKeyFile". Note the location of that file, open it, and copy and paste the key over to Rackspace. Then also copy and paste the "SSLCertificateFile" contents over. Note that when you save the SSL cert at Rackspace, based on the methodology described in this guide, it WILL take the site live immediately, because it updates the A record for the domain.
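That hunt through the vhost file can be shortened with grep. Here's a hedged sketch that lists the key/cert paths from a vhost file so you know which files to open and copy into the RS control panel:

```shell
# ssl_paths: print the file paths from SSLCertificateFile / SSLCertificateKeyFile
ssl_paths() {
  grep -Ei '^[[:space:]]*SSLCertificate(Key)?File' "$1" | awk '{ print $2 }'
}
# e.g.: ssl_paths /etc/httpd/conf/vhosts/domain.com.conf
```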

Beware of .htaccess. If your site (Joomla, Magento, etc.) is using SEF URLs with an htaccess rewrite, you MUST set the RewriteBase in the htaccess file. If you don't, the site will not show. You may also find it extremely valuable to add the following values to the end of the htaccess file; they will let your clients upload larger files via their admin areas.

php_value upload_max_filesize 50M
php_value post_max_size 50M
php_value max_execution_time 200
php_value max_input_time 200

We actually automated this too by tagging this code to the very end of our migration scripts: 


sed -i "s/# RewriteBase/RewriteBase/g" /home/$1/public_html/.htaccess
echo "
php_value upload_max_filesize 50M
php_value post_max_size 50M
php_value max_execution_time 200
php_value max_input_time 200" >> /home/$1/public_html/.htaccess
ncftpput -u $2 -p $3 ftp2.ftptoyoursite.com /$4/web/content /home/$1/public_html/.htaccess


Beware of include paths! If you have code in your old sites with something like include('/home/whatever/blah/file.php'), that will fail. Not only that, PHP error reporting is disabled for this type of error, so the error is not shown to you. I'm not sure why, but I'm thinking it's a security measure: maybe they don't want people to be able to probe server root paths and publicly reveal which paths exist. If you have a blank site, check the includes.


So once the site is working, it's time to switch it over. Because we created placeholders for all the sites in the RS control panel to start with, then moved all the current DNS zones over to RS via the API, then repointed the hosting company's DNS servers to RS's nameserver IP addresses, RS's nameservers are still pointing back to the old hosting servers. The only thing left is to go into RS's control panel, into the client's website, and update the A record back to the default it would have been before we overwrote it with the custom zone entries (which, again, point back to the old dedicated servers). So HOW do we know what the IP is at RS?

Do a dig on the websitetestlink.com URL supplied in the General tab of the website settings. If you dig that, it will give you the correct A record. If you also have an FTP CNAME for the domain, you may want to point that to ftp2.ftptoyoursite.com.

Don't forget parked domains! When you migrate the sites, don't forget to also migrate the alias/addon domains. Putting an alias domain on a website at Rackspace is a bit weird. See if you can figure out how to do it; if not, chat support will tell you. I'm not writing here how it's done because I just want to make the point that this part of the process is a bit funky at RS.

If you migrated your aliased domains to RS before migrating the sites, RS will not let you reset their A records through the control panel; to repoint the alias domains over to RS, you have to use the "donotexec" toolkit above. It's easy to forget aliased domains, but clients will call up at some point asking why their other domains aren't working.

If you refrained from actually visiting theclientsdomain.com in your browser, you can go to it now and it SHOULD direct you to the client's site at RS so you can test further. The reason is that if you haven't hit the client's domain recently, you don't have their IP cached locally, so your computer/router will go grab the fresh IP. If you're not sure, open your hosts file (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/Mac) and "hardcode" the new RS IP in for the client's domain, then visit it. Your computer will then definitely be showing the client's site at RS so you can test it. If you're really unsure, make a change to the site via FTP or websitetestlink.com and see if you can see the change.
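For reference, a hosts-file override is one line per IP: the new Rackspace IP (placeholder below; use whatever your dig on the websitetestlink URL returned) followed by the hostnames.

```
xx.xx.xx.xx   theclientsdomain.com www.theclientsdomain.com
```

Remember to remove the line after testing, or you won't be seeing what the rest of the world sees.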

The last thing you want is to do a switchover and then get a call from the client that their site is, in actual fact, down.


I hope some of you have found this guide helpful. I wrote it because migrating from a dedicated server to a cloud environment can be a real bear. What makes it difficult is the control panels and the lack of APIs to speed up the process; you have to do so much work manually. If you were migrating from one dedicated server to another, it would just be a matter of copying all the files and doing general config. If this helped you, please feel free to comment below.
