deployment, cloud, rabbitmq, rackspace-cloud, rackspace

Queuing in Rackspace Cloud


I've been using EC2 for deployment all along, and now I want to give Rackspace a try. My application has to be scalable, so I use RabbitMQ as the main queuing system. Actions on the front end can generate a very large number of jobs that need to be executed, which I want to queue somewhere.

Given the expected load profile of the application, it makes sense to use scalable infrastructure like the Rackspace cloud. Now I am wondering where it would be best to queue the jobs. Queuing them on the front-end servers means the number of front-end servers can only be scaled back down once the queues are processed, which is a waste of resources: once the peak load on the front end is over, we want to scale the front end down and scale up the machines that process the queue items.

If we queue them on the database server, we add load to a single machine that, in the current setup, is already the most likely bottleneck. How would you design this?
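The trade-off described above, namely front-end servers that only enqueue versus a separately scaled worker pool that drains the backlog, can be sketched in miniature. This is an illustrative stand-in only: Python's in-process `queue.Queue` plays the role that a durable RabbitMQ queue would play across machines, and the job/handler names are made up.

```python
import queue
import threading

# Stand-in for the broker. In production this would be a durable
# RabbitMQ queue, so the front-end fleet and the worker fleet can
# be scaled up and down independently of each other.
jobs = queue.Queue()
results = []
lock = threading.Lock()

def front_end(n):
    # The front end only enqueues; it never waits for job completion,
    # so these servers can be scaled down as soon as traffic drops.
    for i in range(n):
        jobs.put(i)

def worker():
    # Workers drain the backlog; add more of these while the queue is deep.
    while True:
        try:
            job = jobs.get(timeout=0.5)
        except queue.Empty:
            return
        with lock:
            results.append(job * 2)  # placeholder for real work
        jobs.task_done()

front_end(10)
pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()
for t in pool:
    t.join()
print(len(results))  # prints 10 once every queued job is processed
```

The point of the split is that the queue, not either fleet, holds the pending work, so neither tier has to stay scaled up on the other's behalf.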

Is there any built-in queuing service for Rackspace, something like Amazon SQS?


Solution

  • They don't have anything like SQS but there are a few good services that you may be able to take advantage of:

    Cloud Files

With the Akamai CDN you can push all your static content right out to your clients. (I'm on the Gold Coast in Australia, and Cloud Files public content reaches me from a server in Brisbane: 13 ms vs. 250 ms ping times to US servers.) Because distance hurts download speed, that means faster downloads for your users, plus absolutely no clogging of the pipes on your web server during the Christmas rush.

    The way I use it is:

    1. I create a Cloud files container; this gets a unique hostname.
    2. I create a CNAME DNS record (for example: cdn.supa.ws) pointing to that unique hostname.
    3. I use cloudfuse to mount the directory both on my cloud server and on my home linux box.
    4. Then just copy or upload files straight to that directory, then serve them from http://cdn.yourdomain.com
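The steps above can be sketched as a config-plus-commands fragment. The `username`/`api_key` keys are cloudfuse's documented credentials-file format, but the values, mount point, and container name here are all placeholders, so adapt them to your own setup:

```shell
# ~/.cloudfuse -- credentials file read by the cloudfuse FUSE driver
#   username=myrackspaceuser
#   api_key=0123456789abcdef

# Mount Cloud Files; each container appears as a top-level directory.
mkdir -p /mnt/cloudfiles
cloudfuse /mnt/cloudfiles

# Copy static assets into the container from step 1, then serve them
# via the CNAME from step 2 (e.g. http://cdn.yourdomain.com).
cp -r ./static/* /mnt/cloudfiles/mycontainer/
```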

    Load balancers as a service

http://www.rackspace.com/cloud/cloud_hosting_products/loadbalancers/ - Basically a bunch of Zeus load balancers that you can use to push requests to your back-end servers. Cool because they're API-programmable, so you can scale on the fly and add more back-end servers as needed. They also have nice weighting algorithms, so you can send more traffic to certain servers if needed.
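For instance, registering an extra weighted back-end node is a single API call against the load balancer. The request body below is a sketch from memory of the Cloud Load Balancers API shape (`POST` to `/loadbalancers/{loadBalancerId}/nodes`), and the address and weight are placeholders, so verify the exact fields against the current API docs:

```json
{
  "nodes": [
    {
      "address": "10.180.1.31",
      "port": 80,
      "condition": "ENABLED",
      "weight": 5
    }
  ]
}
```

Because it's just an HTTP call, your scaling scripts can add nodes the moment a new Cloud Server finishes booting.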

    Internal VLAN

    I would recommend using the 'internal IPs' (10.x.y.z) (the eth1 interface) for message queuing and DB data between Cloud Servers as they give you a higher outgoing bandwidth cap.

    Outgoing Bandwidth (speed) caps:

• 256 MB RAM - 10 Mb/s eth0 - 20 Mb/s eth1
• 512 MB RAM - 20 Mb/s eth0 - 40 Mb/s eth1
• 1 GB RAM - 30 Mb/s eth0 - 60 Mb/s eth1
• 2 GB RAM - 40 Mb/s eth0 - 80 Mb/s eth1
• 4 GB RAM - 50 Mb/s eth0 - 100 Mb/s eth1
• 8 GB RAM - 60 Mb/s eth0 - 120 Mb/s eth1
• 15.5 GB RAM - 70 Mb/s eth0 - 140 Mb/s eth1

eth1 is called an internal VLAN, but it is shared with other customers, so it's best to firewall off your eth1 as well as your eth0: for example, only allow MySQL connections from your own Cloud Servers, and if you have really sensitive data, maybe use MySQL over SSL, just in case :)
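That firewalling can be sketched with iptables. The `10.180.4.0/24` subnet is a placeholder for whatever internal range your own Cloud Servers actually sit on; ports 3306 and 5672 are the standard MySQL and AMQP (RabbitMQ) ports:

```shell
# Allow MySQL and RabbitMQ on eth1 only from our own servers' subnet,
# then drop everything else arriving on the shared internal VLAN.
iptables -A INPUT -i eth1 -p tcp --dport 3306 -s 10.180.4.0/24 -j ACCEPT
iptables -A INPUT -i eth1 -p tcp --dport 5672 -s 10.180.4.0/24 -j ACCEPT
iptables -A INPUT -i eth1 -j DROP
```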

    MySQL as a service

    There is also a MySQL as a service private beta. I haven't tried it yet, but looks like it has a lot of potential coolness: http://www.rackspace.com/cloud/blog/2011/12/01/announcing-the-rackspace-mysql-cloud-database-private-beta/