Wednesday, December 15, 2010

Chef and encrypted data bags.

As part of rolling out Chef at the new gig, we had a choice: stand up and maintain our own Chef server, or use the Opscode platform. From a cost perspective, the 50-node platform was pretty much break-even with standing up another EC2 instance of our own. The upshot was that I wouldn't have to maintain it.

However, part of due diligence was making sure everything was covered from a security perspective. We use quite a few hosted/SaaS tools, but this one carried the biggest potential security risk: dealing with sensitive data such as database passwords and AWS credentials. The Opscode platform as a whole is secure. It makes heavy use of SSL, not only for transport-layer encryption but also for authentication and authorization. That wasn't a concern. What was a concern was what would happen if a copy of our CouchDB database fell into the wrong hands or a "site reliability engineer" situation happened. That's where the concept of "encrypted data bags" came from for me.

Atlanta Chef Hack Day

I had the awesome opportunity to stop by the Atlanta Chef Hack Day this past weekend. I couldn't stay long and came in fairly late in the afternoon, but I happened to arrive right as @botchagalupe (John Willis) and @schisamo (Seth Chisamore) brought up encrypted data bags. Of course, Willis proceeded to turn around and put me on the spot. After I explained the above use case, we all threw out some ideas, but I think everyone came to the conclusion that it's a tough nut to crack with a shitload of gotchas.

Before I left, I got a chance to talk with @sfalcon (Seth Falcon) about his ideas. He totally understood the use case and mentioned that other people had asked about it as well; he had a few ideas, but nothing stood out as the best way.

So what are the options? I'm going to list a few here, but first I want to talk a little about the security domain we're dealing with and what inherent holes exist.

Reality Checks

  • Nothing is totally secure.

          Deal with it. Even though it's a remote chance in hell, your keys and/or data are going to be decrypted somewhere at some point in time. The kind of information we need to read, unfortunately, can't go through a one-way hash like MD5 or SHA because we NEED to know what the data actually is. I need that MySQL password to hand to my application server so it can talk to the database. That means the data has to be decrypted, and while it's being decrypted and used, it's going to exist somewhere it can be snagged.

  • You don't need to encrypt everything

          You need to understand what exactly needs to be encrypted and why. Yes, there's the "200k winter coats to troops" scenario, and every bit of information you expose provides additional material for an attack vector, but really think about what you need to encrypt. Application database account usernames? Probably not. The passwords for those accounts? Yes. Consider the "value" of the data you're considering encrypting.

  • Don't forget the "human" factor

          So you've got this amazing library worked out, you've added it to your cookbooks and you're only encrypting what you really need to encrypt. Then some idiot puts the decryption key on the wiki, or the master password is 5 alphabetical characters. As we often said when I was a kid, "Smooth move, Ex-lax".

  • There might be another way

          There might be another way to approach the issue. Make sure you've looked at all the options.


Our Use Case

So understanding that, we can narrow our focus a bit. Let's use our application's database password as the use case because it's simple enough. It's a single string.

Now in a perfect world, Opscode would encrypt each CouchDB database with customer-specific credentials (say, an organization-level client cert) and discard the credentials once you've downloaded them.

That's our first gotcha - what happens when the customer loses the key? All that data is now lost to the world.

But let's assume you were smart and kept a backup copy of the key in a secure location. There's another gotcha inherent in the platform itself - Chef Solr. If the entire database is encrypted, then unless Opscode HAS the key, they can't index the data with Solr, and all those handy searches you're using in your recipes to pull in all your users are gone. Now you'll have to manage the map/reduce views yourself and deal with the performance impact wherever you don't have one of those views in place.
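
For context, this is the kind of recipe search that's at stake (Chef's search DSL; the "users" data bag is the one I mention later for SSH pubkeys):

search(:users, "*:*").each do |u|
  # create an account, drop off a pubkey, etc. for each user item
end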


So that option is out. The Chef server has to be able to see the data to actually work.


What about a master key? That has several problems.


You have to store the key somewhere accessible to the client (e.g. in the client's chef.rb or in an external file that your recipes can read to decrypt those data bag items). That raises two questions (a sketch of recipe-side decryption follows the list):

  • How do you distribute the master key to the clients?
  • How do you revoke the master key and how does that affect future runs? See the previous question - how do you then distribute the updated key?
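
To make that concrete, here's a minimal sketch of what recipe-side decryption with a locally stored key might look like. This is not an official Chef API - the key path, the item layout and the helper name are all assumptions:

require 'openssl'
require 'base64'
require 'digest/sha2'

# Decrypt one data bag value using a master key stored on the node.
def decrypt_value(ciphertext_b64, iv_b64, key_path = '/etc/chef/master_key')
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.decrypt
  cipher.key = Digest::SHA256.digest(File.read(key_path).strip)
  cipher.iv = Base64.decode64(iv_b64)
  cipher.update(Base64.decode64(ciphertext_b64)) + cipher.final
end

# Hypothetical usage in a recipe, assuming the item stores base64
# ciphertext and IV fields:
# item = data_bag_item('secrets', 'mysql')
# mysql_password = decrypt_value(item['password'], item['password_iv'])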


I'm sure someone just said "I'll put it in a data bag" and then promptly smacked themselves in the head. Chicken - meet Egg. Or is it the other way around?


You could have the Chef client ASK you for the key (remember Apache SSL startups where the startup script required a password?). Yeah, that sucked.

Going the Master Key Route

So let's assume that we want to go this route and use a master key. We know we can't store it with Opscode because that defeats the purpose. We need a way to distribute the master key to the clients so they can decrypt the data. So how do we do it?

If you're using Amazon, you might say "I'll store it in S3 or on an EBS volume". That's great! Where do you store the AWS credentials? "In a data ba...oh wait. I've seen this movie before, haven't I?"


So we've come to the conclusion that we must store the master key somewhere ourselves, locally available to the client. Depending on your platform, you have a few options:

  • Make it part of the base AMI
  • Make it part of your kickstart script
  • Make it part of your VMware image


All of those are acceptable, but they don't deal with updating/revocation. Creating new AMIs is a pain in the ass, and you have to update all your scripts with new AMI IDs when you do. Golden images are never golden. Do you really want to rekick a box just to update the key?

Now we realize we have to make it dynamic. You could make it part of a startup script in the AMI, the first boot of the image or the like. Essentially, "when you start up, go here and grab this key". Of course, now you've got to maintain a server to distribute the information, and you probably want two of them just to be safe, right? Now we're spreading our key around again.

This is starting to look like an antipattern.


But let's just say we got ALL of that worked out. We have a simple, easy way for clients to get and maintain the key. It works, your data is stored "securely" and you feel comfortable with it.

Then your master key gets compromised. No problem, you think. I'll just use my handy update mechanism to update the keys on all the clients and...shit...now I've got to re-encrypt EVERYTHING and re-upload my data bags. Where the hell is the plaintext of those passwords again? This is getting complicated, no?


So what's the answer? Is there one? Obviously, if you were that hypersensitive to the security implications, you'd just run your own server anyway. You still have the human factor, and backups can still be stolen, but that's an issue outside of Chef as a tool. You've just moved the security up the stack a bit: now you've got to secure the Chef server itself. But can you still use the Opscode platform? I think so. With careful deliberation and structure, you can reach a happy point that allows you to still automate your infrastructure with Chef (or some other tool) and host the data off-site.

Some options

Certmaster


Certmaster spun out of the Func project. At its base, it's essentially an SSL certificate server. It's another thing you have to manage, but it can handle all the revocation and distribution issues.

Riak


This is one idea I came up with tonight. The idea is that you run a very small Riak instance on all the nodes that require the ability to decrypt the data. Every node is part of the same cluster, and this can all be easily managed with Chef. The cluster would probably have a single bucket containing the master key. You get the fault tolerance built in, and you can pull the key as part of your recipe using basic Chef resources. Resource utilization on the box should be VERY low for the Erlang processes, though you'll have a bit more network chatter as the intra-cluster gossip goes on. Revocation is still an issue, but it's VERY easily managed since an update is a simple HTTP PUT. And while the data is easily accessible to anyone who can get access to the box, you should consider yourself "proper f'cked" if that happens anyway.
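
As a rough sketch of what pulling the key from that local Riak node might look like (the bucket and key names are made up, and the URL assumes Riak's default HTTP port):

require 'net/http'
require 'uri'

# Fetch the master key from the local Riak node during the Chef run.
def master_key
  Net::HTTP.get(URI.parse('http://127.0.0.1:8098/riak/secrets/master_key'))
end

# Rotating the key is a single HTTP PUT to the same location, e.g.:
# curl -X PUT -H 'Content-Type: text/plain' \
#   -d 'new-key-material' http://127.0.0.1:8098/riak/secrets/master_key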


But you still have the issue of re-encrypting the data bags should that need to happen. My best suggestion is to store the encrypted values in a single data bag and add a rake task that handles the encryption (and re-encryption) for you. That way you minimize the impact of something that simply shouldn't need to happen very often.
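
Something like this hypothetical rake task - the file names, data bag layout and the encrypt_value helper (the counterpart of the decrypt sketch above) are all assumptions:

require 'json'

desc "Encrypt plaintext secrets and upload them as a data bag item"
task :encrypt_secrets do
  # secrets.json is a hypothetical local plaintext file, kept OUT of version control
  plaintext = JSON.parse(File.read('secrets.json'))
  encrypted = { 'id' => 'passwords' }
  plaintext.each { |name, value| encrypted[name] = encrypt_value(value) }
  File.open('data_bags/secrets/passwords.json', 'w') do |f|
    f.write(JSON.pretty_generate(encrypted))
  end
  sh 'knife data bag from file secrets data_bags/secrets/passwords.json'
end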


Another option is to still use Riak but store the credentials themselves (as opposed to a decryption key) and pull them in when the client runs. The concern I have there is how that affects idempotence - would it cause the resource to run every single time simply because Chef can't checksum the content properly? You could probably get around that with a file on the filesystem that tells Chef to skip the update using "not_if".
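
For instance, something like this guard (the URL and file paths are illustrative):

# Only pull the credentials if we haven't already written them out.
execute 'fetch-db-credentials' do
  command 'curl -s http://127.0.0.1:8098/riak/secrets/db_password > /etc/myapp/db_password'
  not_if { ::File.exist?('/etc/myapp/db_password') }
end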


Wrap Up


As you can see, there's no silver bullet here. Right now I have two needs: storing credentials for S3/EBS access and storing database passwords. That's it. We don't use passwords for user accounts at all. You can't even use password authentication with SSH on our servers. If I don't have your pubkey in the users data bag, you can't log in.

The AWS credentials are slowly becoming less of an issue. With the Identity and Access Management (IAM) beta, I can create limited-use keys that can only do certain things and grant them access to specific AWS products. I can make it part of node creation to generate that access programmatically. That still leaves the database credentials issue, though. For that, I'm thinking that the startup script for an appserver, for instance, will just have to pull the credentials from Riak (or whatever central location you choose) and update a JNDI string. It spreads your configuration data out a bit, but these things shouldn't need to change too often, and with a properly documented process you know exactly how to update them.

One side effect of all this is that it starts to break down the ability to FULLY automate everything. I don't like running the knife command to do things; I want to be able to programmatically do the same things knife does from my own scripts. I suppose I could simply popen and run the knife commands, but shelling out always feels like an anti-pattern to me.
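
The chef gem does ship the REST client that knife itself uses, so something along these lines should be possible (a sketch - the organization URL, client name and key path are placeholders, and the exact API varies between Chef versions):

require 'chef'
require 'chef/rest'

# Talk to the Chef server directly, using the same signed-header
# authentication knife uses, instead of shelling out to it.
rest = Chef::REST.new('https://api.opscode.com/organizations/myorg',
                      'myclient', '/etc/chef/client.pem')
item = rest.get_rest('data/secrets/passwords') # fetch a data bag item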


I'd love some feedback on how other people are addressing the same issues!


Thursday, December 2, 2010

Automating EBS Snapshot validation with @fog - Part 2

This is part 2 in a series of posts I'm doing - you can read part 1 here.

Getting started

I'm not going to go into too much detail on how to get started with Fog. There's plenty of documentation on the github repo (protip: read the test cases) and Wesley a.k.a @geemus has done some awesome screencasts. I'm going to assume at this point that you've at least got Fog installed, have an AWS account set up and have Fog talking to it. The best way to verify is to create your .fog yaml file, start the fog command line tool and start looking at some of the collections available to you.
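
If you haven't created it yet, the .fog file is just a small YAML document in your home directory; a minimal one looks something like this (the values are placeholders):

:default:
  :aws_access_key_id: YOUR_ACCESS_KEY_ID
  :aws_secret_access_key: YOUR_SECRET_ACCESS_KEY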

For the purpose of this series of posts, I've created a small script that you can use to spin up two EC2 instances (m1.small) running CentOS 5.5, create four 5GB EBS volumes and attach them to the first instance. In addition to the fog gem, I also have awesome_print installed and use it in place of prettyprint. This is, of course, optional, but you should be aware of it.

WARNING: The stuff I'm about to show you will cost you money. I tried to stick to minimal resource usage but please be aware you need to clean up after yourself. If, at any time, you feel like you can't follow along with the code or something isn't working - terminate your instances/volumes/resources using the control panel or command-line tools. PLEASE DO NOT JUST SIMPLY RUN THESE SCRIPTS WITHOUT UNDERSTANDING THEM.

The setup script

The full setup script is available as gist on github - https://gist.github.com/724912#file_fog_ebs_demo_setup.rb

Things to note:

  • Change the key_name to a valid key pair you have registered with EC2
  • There's a stopping point halfway down after the EBS volumes are created. You should actually stop there and read the comments.
  • You can run everything inside of an irb session if you like.

The first part of the setup script does some basic work for you - it reads in your fog configuration file (~/.fog) and creates a connection object you can work with (AWS). As I mentioned earlier, we're creating two servers - hdb and tdb. HDB is the master server - say, your production MySQL database. TDB is the box that will be doing the validation of the snapshots.
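
That setup looks roughly like this (a sketch - Fog.credentials reads ~/.fog for you, and the connection constructor has changed a bit across fog versions, so check your version's README):

require 'fog'

# Create a Compute connection to AWS; the rest of the script talks to
# EC2 through this AWS object.
AWS = Fog::AWS::Compute.new(
  :aws_access_key_id => Fog.credentials[:aws_access_key_id],
  :aws_secret_access_key => Fog.credentials[:aws_secret_access_key]
)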

In the Fog world, there are two big concepts - models and collections. Regardless of cloud provider, there are typically at least two models available - Compute and Storage. Collections are data objects under a given model; for instance, in the AWS world you might have servers, volumes, snapshots or addresses under the Compute model. One thing that's nice about Fog is that, once you establish your connection to your given cloud, most of your interactions are the same across cloud providers. In the example above, I've created a connection with Amazon using my credentials and used that Compute connection to create two new servers - hdb and tdb. Notice the options I pass in when I instantiate those servers (there's a sketch of this right after the list):

  • image_id
  • key_name
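
A sketch of the instantiation (the AMI ID is a placeholder - use a CentOS 5.5 AMI you trust and a key pair you've registered):

hdb = AWS.servers.create(
  :image_id => 'ami-xxxxxxxx', # placeholder CentOS 5.5 AMI
  :key_name => 'your-keypair'
)

tdb = AWS.servers.create(
  :image_id => 'ami-xxxxxxxx',
  :key_name => 'your-keypair'
)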

If I wanted to make these boxes bigger, I might also pass in 'flavor_id'. If you're running the above code in an irb session, you'll see the server's attributes dumped when you instantiate it. Not all of the fields may be populated, depending on how long it takes Amazon to spin up the instance. For instance, when you first create 'tdb', you'll probably see "state" as pending for quite some time. Fog has a nice helper method on all models called 'wait_for'. In my case I could do:

tdb.wait_for { print "."; ready?}

And it would print dots across the screen until the instance is ready for me to log in. At the end, it tells you how long you spent waiting. Very handy. You have direct access to all of the attributes via the instance objects 'tdb' and 'hdb'; you can use 'tdb.dns_name' to get the DNS name for use in other parts of your script, for example. In my case, after the server 'hdb' is up and running, I want to create the four 5GB EBS volumes and attach them to the instance:
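
Here's a sketch of that step - the device names and size follow the text, though the exact code in the gist may differ slightly:

%w[/dev/sdi /dev/sdj /dev/sdk /dev/sdl].each do |device|
  volume = AWS.volumes.new(
    :availability_zone => hdb.availability_zone, # must match the instance
    :device => device,
    :size => 5 # GB
  )
  volume.server = hdb # bind to the instance; save creates and attaches it
  volume.save
end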

I've provided four device names (sdi through sdl) and I'm using the "volumes" collection to create them (AWS.volumes.new). As I mentioned earlier, all of the attributes for 'hdb' and 'tdb' are accessible by name. In this case, I have to create my volumes in the same availability zone as the hdb instance. Since I didn't specify where to create it when I started it, Amazon has graciously chosen 'us-east-1d' for me. As you can see, I can easily access that as 'hdb.availability_zone' and pass it to the volume creation section. I've also specified that the volume should be 5GB in size.

At the point where I've instantiated the volume with '.new', it hasn't actually been created yet. I want to bind it to a server first, so I simply set the volume.server attribute equal to my server object. Then I 'save' it. If I were to log into my running instance, I'd probably see something like this in the 'dmesg' output now:

sdj: unknown partition table

sdk: unknown partition table

sdl: unknown partition table

sdi: unknown partition table

As you can see from the comments in the full file, you should stop at this point and set up the volumes on your instance. In my case, I used mdadm to create a RAID0 array from those four volumes, then formatted it, made a directory and mounted the md0 device there. If you look, you should now have an additional 20GB of free space mounted on /data. Here I might make this the data directory for MySQL (which is the case in our production environment).

Let's just pretend you've done all that. I simulated it with a few text files and a quick 1GB dd. We'll consider that the point in time that we want to snapshot from. Since there's no actual constant data stream going to the volumes, I can assume for this exercise that we've just locked MySQL, flushed everything and frozen the XFS filesystem.

Let's make our snapshots. In this case I'm going to use Fog to do the snapshots, but in our real environment we're using the ec2-consistent-snapshot script from Alestic. First, let's take a look at the state of the hdb object.

The 'block_device_mapping' attribute now consists of an array of hashes, each one a subset of the data about a volume attached to the instance. If you aren't seeing this, you might have to run 'hdb.reload' to refresh the state of the object. To create our snapshots, we're going to iterate over the block_device_mapping attribute and use the 'snapshots' collection to make them:
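
Roughly like so (a sketch - the hash keys follow EC2's camelCase naming as fog exposes them, but double-check against your fog version):

snapshots = hdb.block_device_mapping.map do |mapping|
  AWS.snapshots.create(
    :volume_id => mapping['volumeId'],
    :description => "demo snapshot of #{mapping['deviceName']}"
  )
end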

One thing you'll notice is that I'm being fairly explicit here. I could shorthand and chain many of these method calls, but for clarity, I'm not.

And now we have 4 snapshots available to us. The process is fairly instant, but sometimes it can lag. As always, you should check the status via the .state attribute of an object to verify that it's ready for the next step.

That's the end of Part 2. In the next part, we'll have a full fledged script that does the work of making the snapshots usable on the 'tdb' instance.

Automating EBS Snapshot validation with @fog - Part 1

Background

One thing that's very exciting about the new company is that I'm getting to use quite a bit of Ruby, and also the fact that we're entirely hosted on Amazon Web Services. We currently leverage EBS, ELB, EC2, S3 and CloudFront for our environment. The last time I used AWS in a professional setting, they didn't even have Elastic IPs, much less EBS with snapshots and all the nice stuff that makes it viable for a production environment. I did, however, manage to keep abreast of changes using my own personal AWS account.

Fog

Of course the combination of Ruby and AWS really means one thing - Fog. And lots of it.

When EngineYard announced its sponsorship of the project, I dove headlong into the code base and spent what time I could trying to contribute code back. The half-assed GoGrid code in there right now? Sadly, some of it is mine. Time is hard to come by these days. Regardless, I'm no stranger to Fog, and when I had to dive into the environment and start getting it documented and automated, Fog was the first tool I pulled out. When the challenge of verifying our EBS snapshots came up (we're currently at a little over 700), I had no choice but to automate it.

Environment

A little bit about the environment:

  • A total of 9 EBS volumes are snapshotted each day
  • 8 of the EBS volumes are actually RAID0 MySQL data stores across two DB servers (4 disks on one, 4 disks on the other)
  • The remaining EBS volume is a single MySQL data volume
  • The filesystem is XFS, and backups are done using the Alestic ec2-consistent-snapshot script (which currently doesn't support tags)

The end result of this is to establish a rolling set of validated snapshots. 7 daily, 3 weekly, 2 monthly. Fun!

Mapping It Out

Here was the attack plan I came up with:

  • Identify snapshots and groupings where appropriate (RAID0, remember?)
  • Create volumes from the snapshots
  • Create an m1.xlarge EC2 instance to test the snapshots
  • Attach the volume groups to the test instance
  • Assemble the array on the test instance
  • Start MySQL using the snapshotted data directory
  • Run some validation queries using some timestamp columns in our schema
  • Stop MySQL, unmount the volume, stop the array
  • Detach and destroy the volumes from the test instance
  • Tag the snapshots as "verified"
  • Roll off any old snapshots based on retention policy
  • Automate all of the above!

I've got lots of code samples and screenshots, so I'm breaking this up into multiple posts. Hopefully part 2 will be up some time tomorrow.