Tuesday, September 13, 2011

Relocating again.

FYI, I'm relocating my blog back to my own domain. All the new posts will go there and eventually I'll get everything pulled in from here.

New Old Place is here: http://blog.lusis.org

Thursday, July 21, 2011

Monitoring Sucks - Watch your language

The following post is a recap of what was discussed in the 07/21/11 #monitoringsucks irc meeting

Before I get any further, I just want to thank everyone who attended, either in virtual person or in spirit. There have been so many awesome discussions going around this topic since we started. I am truly privileged to interact with each and every one of you. I struggle to count myself a peer and I can only hope that I provide something in return.

I mentioned to someone today that I’m literally sick of the current landscape. I consider the current crop of monitoring solutions to be technologically bankrupt. The situation is fairly untenable at this point.

I just installed (after having a total loss of an existing Zenoss setup) Nagios again. I’m not joking when I say that it depressed the everliving hell out of me. The monitoring landscape simply has not kept up with modern developments. At first it was mildly frustrating. Then it was annoying. Now it’s actually a detriment.

Now that we have that out of the way….

Darmok and Jalad at Tanagra

Communication is important. Like Picard and Dathon, we’ve been stranded on a planet with shitty monitoring tools and we’re unable to communicate about the invisible threat of suck because we aren’t even speaking the same language. I say event, you hear trigger, I mean data point. So the first order of business was to try and agree on a set of terms. It was decided that we would consider these things primitives. Here they are:

Please read through this before jumping to any conclusions. I promise it will all become clear (as mud).


Metric: a numeric or boolean data point

The data type of a metric was something of a sticking point. People were getting hung up on data points being various things (a log message, a “status”, a value). We needed something to describe the origin. The single “thing” that triggered it all. That thing is a metric.

So why numeric OR boolean? It was pretty clear that many people considered, and rightly so I would argue, that a state change is a metric. A good example given by Christopher Webber is that of a BGP route going away. Why is this a less valid data point than the amount of disk space in use or the latency from one host to another? Frankly, it’s not.

But here’s where it gets fuzzy. What about a log message? Surely that’s a data point and thus a metric.

Yes and No. The presence of a log message is a data point. But it’s a boolean. The log message itself?


Context: metadata about a metric

Now metadata itself is a loaded term but in this scope, the “human readable” attributes are considered context. Going back to our log example. The presence of the log message is a metric. The log message itself is context. Here’s the thing. You want to know if there is an error message in a log file. The type of error, the error message text? That’s context for the metric to use in determining a course of action.

Plainly speaking, metrics are for machines. Context is for humans. This leads us to….


Event: a metric combined with context

This is still up in the air but the general consensus was that this was a passable definition. The biggest problem with a group of domain experts is that they are frequently unable to accept semantic approximation. Take the discussion of an Erlang spawned process:

  • It’s sort of like a VM on a VM
  • NO IT’S NOT.
  • headdesk

The fact that an Erlang spawned process has shades of a virtual machine is irrelevant to the domain expert. We found similar discussions around what we would call the combination of a metric and its context. But where do events come from?


Resource: the source of a metric

Again, we could get into arguments around what a resource is. One thing that was painfully obvious is that we’re all sick and tired of being tied to the Host and Service model. It’s irrelevant. These constructs are part “legacy” and part “presentation”.

Any modern monitoring thought needs to realize that metrics no longer come from physical hosts or are service states. In the modern world, we’re taking a holistic view of monitoring that includes not only bare metal but business matters. The number of sales is a metric but it’s not tied to a server. It’s tied to the business as a whole. The source of your metrics is a resource. So now that we have this information - a metric, its context and who generated it - what do we do? We take….


Action: a response to a given metric

What response? It doesn’t MATTER. Remember that these are primitives. The response is determined by components of your monitoring infrastructure. Humans note the context. Graphite graphs it. Nagios alerts on it. ESPER correlates it with other metrics. Don’t confuse scope here. From this point on, whatever happens is decided by a given component. It’s all about perspective and aspects.
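Putting the primitives together, a single event might look something like this. This is just an illustrative Ruby sketch - the field names are mine, not anything we agreed on:

```ruby
require 'json'

# A hypothetical wire format tying the primitives together.
# Field names here are illustrative, not a proposed standard.
event = {
  :resource  => "appserver-01.example.com",    # the source of the metric
  :metric    => true,                          # boolean data point: "error present in log"
  :context   => "ERROR timeout talking to db", # human-readable metadata
  :timestamp => 1311276000
}

# Components consume the same event from different perspectives:
# a grapher reads :metric, an alerter reads :metric and :context,
# a human reads :context.
puts JSON.generate(event)
```

Machines act on `:metric`, humans read `:context`, and nothing here assumes the resource is a host.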

Temba, his arms wide

I’m sure through much of that, you were thinking “alerting! graphing! correlation!”. Yes, that was pretty much what happened during the meeting as well. Everyone has pretty much agreed (I think) at this point that any new monitoring systems should be modular in nature. As Jason Dixon put it - “Voltron”. No single system that attempts to do everything will meet everyone’s needs. However, with a common dictionary and open APIs you should be able to build a system that DOES meet your needs. So what are those components? Sadly this part is not as fleshed out. We simply ran out of time. However we did come up with a few basics:


Collection: getting the metrics

It doesn’t matter if it’s push or pull. It doesn’t matter what the transport is - async or point-to-point. Somehow, you have to get a metric from a resource.
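As a sketch of how transport-agnostic this can be, here's a toy Ruby collector where the transport is just a callable - it could wrap HTTP, AMQP, or a plain socket. Every name here is made up for illustration:

```ruby
# Transport-agnostic collection: the collector only knows how to produce
# a (resource, metric, context) tuple and hand it to *some* transport.
class Collector
  def initialize(resource, &transport)
    @resource  = resource
    @transport = transport   # any callable: push, pull-response, whatever
  end

  def emit(metric, context = nil)
    @transport.call(:resource => @resource, :metric => metric, :context => context)
  end
end

sent = []
# Here the "transport" is just a block that appends to an array.
c = Collector.new("web-01") { |event| sent << event }
c.emit(0.93, "load average, 1 minute")
```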

Event Processing: taking action

Extract the metric and resource from an event. Do something with it. Maybe you send the metric to another component. Maybe you “present” it or send it somewhere to be presented. Maybe you perform a change on a resource (restarting a service). Essentially, this is the decision engine.
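A toy decision engine might look like this in Ruby. The routing rule and handler names are invented for illustration - the point is that the processor routes, it doesn't graph or page:

```ruby
# A minimal decision engine: inspect the event, decide, forward to
# whatever components registered interest. Handlers could wrap a pager,
# a grapher, another processor - the engine doesn't care.
class Processor
  def initialize
    @routes = Hash.new { |h, k| h[k] = [] }
  end

  def on(decision, &handler)
    @routes[decision] << handler
  end

  def process(event)
    # Toy rule: a truthy boolean metric means something is wrong.
    decision = event[:metric] ? :alert : :record
    @routes[decision].each { |h| h.call(event) }
    decision
  end
end

engine = Processor.new
alerts = []
engine.on(:alert) { |e| alerts << e[:context] }
engine.process(:metric => true, :context => "disk full on db-01")
```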


Presentation

While you might be thinking of graphing here, that’s a type of presentation. You know what else is presentation? An email alert. Stick with me. I know what’s going through your head. No..not that…the other thing.


Analytics

This is pretty much correlation. We didn’t get a REAL solid definition here but everyone was in agreement that some sort of analytics is a distinct component.

The “other” stuff

As I said, we had to kind of cut “official” things short. There was various discussion around Storage and Configuration. Storage I personally can see as a distinct component but Configuration not so much. Configuration is an aspect of a component but not a component itself.

Logical groupings

Remember when I said I know what you’re thinking? This is what I think it was.

You can look at the above items and from different angles they look similar. You might say sending an email feels more like event processing than presentation, and you’d probably be right. By that token, drawing a point on a graph is technically processing an event. The fact is many components have a bit of a genetic bond. Not so much parent/child or sibling but more like cousins. In all honesty, if I were building an event processing component, I’d probably handle sending the email right there. Why send it to another component? That makes perfect sense. Graphing? Yeah, I’ll let Graphite handle that, but I can do service restarts and send emails. Maybe you have an intelligent graphing component that can do complex correlation in-band. That makes sense too.

I’m quite sure that we’ll have someone who writes a kickass event processor that happens to send email. I’m cool with that. I just don’t want to be bound to ONLY being able to send emails because that’s all your decision system supports.

Shaka, when the walls fell

Speaking personally, I really feel like today’s discussion was VERY productive. I know that you might not agree with everything here. Things are always up for debate. The only thing I ask is that at some point, we’re all willing to say “I know that this definition isn’t EXACTLY how I would describe something but it’s close enough to work”.

So what are the next steps? I think we’ve got enough information and consensus here for people to start moving forward with some things. One exercise, inspired by something Matt Ray said, that we agreed would be REALLY productive is to take an existing application and map what it does to our primitives and components. Matt plans on doing that with Zenoss since that’s what he knows best.

Let me give an example:

Out of the box, Nagios supports Hosts and Services, which map pretty cleanly to resources. It does not only collection but also event processing and presentation. It supports not only metrics but also context (“Host down” is the boolean metric; “Response timeout” is the context). Through something like pnp4nagios, it can support different presentations. It has a very basic set of analytics functionality.

Meanwhile Graphite is, in my mind, strictly presentation and deals only with metrics. It does support both numeric and boolean metrics. It also has basic resource functionality but it’s not hardwired. It doesn’t really do event handling in the strict sense. Analytics is left to the human mind. It certainly doesn’t support context.

I’d love to see more of these evaluations.

Also, I know there are tons of “words” that we didn’t cover - thresholds, for instance. While there wasn’t a total consensus, there was some agreement that some things were attributes of a component but not primitives themselves. It was also accepted that components themselves would be primitives. Your correlation engine might aggregate (another word) a group of metrics and generate an event. At that point, your correlation engine is now a resource with its own metrics (25 errors) and context (“number of errors across application servers exceeded acceptable limits”) which could then be sent to an event processor.
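Here's a rough Ruby sketch of that correlation-engine-as-resource idea. The threshold and names are made up:

```ruby
# A correlator consumes raw events and, past a threshold, emits a *new*
# event that names the correlator itself as the resource. The downstream
# event processor can't tell (and shouldn't care) that this "resource"
# is another monitoring component rather than a host.
class Correlator
  THRESHOLD = 20  # illustrative

  def initialize
    @errors = 0
  end

  # Returns a new aggregate event once the threshold is crossed, else nil.
  def consume(event)
    @errors += 1 if event[:metric] == :error
    return nil unless @errors > THRESHOLD
    { :resource => "correlator-1",
      :metric   => @errors,
      :context  => "number of errors across application servers exceeded acceptable limits" }
  end
end

c = Correlator.new
result = nil
25.times { result = c.consume(:resource => "app-03", :metric => :error) }
```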

That’s the beauty of the Voltron approach and not binding a resource to a construct like a Host.

Special note to the Aussies

I’m very sorry that we couldn’t get everyone together. I’ve scheduled another meeting where we can start from scratch just like this one, or build on what was discussed already. I’m flexible and willing to wake up at whatever time works best for you guys.

Thanks again to everyone who attended. If you couldn’t be there, I hope you can make the next one.

Tuesday, July 12, 2011

Monitoring Sucks - Round 2 - FIGHT


Recently there’s been more and more attention to the whole ##monitoringsucks campaign. People are writing blog posts. Code is being written. Discussions are being had. Smart people are doing smart things.


I figure it’s time to get the band back together for another official irc session and stop flooding infra-talk with random rants about where things stand. As much as I hate to even consider it, I think having something of an agenda would keep it from turning into a bitch fest.

I’m open to any ideas that people have.

Jordan Sissel brought up some things that might be nice to have:

  • use cases
  • primitives

I’d also love to get input from others before we settle on the details. This one is going to be a bit more “formal” than the first session. Don’t let that turn you off. Everyone has something valuable to contribute.


I can’t believe I’m actually suggesting this part but I’d also love to walk away with something that people can work with. It’s not like people will have homework or a task list.

This is more of a “this is what we would like to see, if you want to write some code it’s a decent place to start”.

International flavor

I’m also keen on finding a time when we can get as many folks as possible to contribute. This may not be possible with that whole rotation of the earth thing but we’ll see.

Suggestions, Comments and Rude Remarks

If you’ve got anything you’d like to see or contribute on the planning side, drop me a message on IRC, email or twitter and I’ll include it (if possible) when I draft up the final “notes” beforehand.

Sunday, June 5, 2011

Why Monitoring Sucks

Why Monitoring Sucks (and what we're doing about it)

About two weeks ago someone made a tweet. At this point, I don't remember who said it but the gist was that "monitoring sucks". I happened to be knee-deep in frustrating bullshit around that topic and was currently evaluating the same effing tools I'd evaluated at every other company over the past 10 years or so. So I did what seems to be S.O.P for me these days. I started something.

But does monitoring REALLY suck?

Heck no! Monitoring is AWESOME. Metrics are AWESOME. I love it. Here's what I don't love:

  • Having my hands tied with the model of host and service bindings
  • Having to set up "fake" hosts just to group arbitrary metrics together
  • Having to collect metrics twice - once for alerting and another for trending
  • Only being able to see my metrics in 5 minute intervals
  • Having to choose between a shitty interface with great monitoring or shitty monitoring with a great interface
  • Dealing with a monitoring system that thinks IT is the system of truth for my environment
  • Perl (I kid...sort of)
  • Not actually having any real choices

Yes, yes I know:

You can just combine Nagios + collectd + graphite + cacti + pnp4nagios and you have everything you need!

Seriously? Kiss my ass. I'm a huge fan of the Unix pipeline philosophy but, christ, have you ever heard the phrase "antipattern"?

So what the hell are you going to do about it?

I'm going to let smart people be smart and do smart things.

Step one was getting everyone who had similar complaints together on IRC. That went pretty damn well. Step two was creating a github repo. Seriously. Step two should ALWAYS be "create a github repo". Step three? Hell if I know.

Here's what I do know. There are plenty of frustrated system administrators, developers, engineers, "devops" and everything under the sun who don't want much. All they really want is for shit to work. When shit breaks, they want to be notified. They want pretty graphs. They want to see business metrics alongside operational ones. They want to have a 52-inch monitor in the office that everyone can look at and say:

See that red dot? That's bad. Here's what was going on when we got that red dot. Let's fix that shit and go get beers

About the "repo"

So the plan I have in place for the repository is this. We don't really need code. What we need is an easy way for people to contribute ideas. That plan is partially underway. There's now a monitoringsucks organization on Github. Pretty much anyone who is willing to contribute can get added to the team. The idea is that, as smart people think of smart shit, we can create a new repository under some unifying idea and put blog posts, submodules, reviews, ideas..whatever into that repository so people have an easy place to go get information. I'd like to assign someone per repository to be the owner. We're all busy but this is something we're all highly interested in. If we spread the work out and allow easy contribution, then we can get some real content up there.

I also want to keep the repos as light and cacheable as possible. The organization is under the github "free" plan right now and I'd like to keep it that way.

Blog Posts Repo

This repo serves as a place to collect general information about blog posts people come across. Think of it as hyper-local delicious in a DVCS.

Currently, by virtue of the first commit, Michael Conigliaro is the "owner". You can follow him on twitter and github as @mconigliaro

IRC Logs Repo

This repo is a log of any "scheduled" irc sessions. Personally, I don't think we need a distinct #monitoringsucks channel but people want to keep it around. The logs in this repo are not full logs. Just those from when someone says "Hey smart people. Let's think of smart shit at this date/time" on twitter.

Currently I appear to be the owner of this repo. I would love for someone who can actually make the logs look good to take this over.

Tools Repo

This repo is really more of a "curation" repo. The plan is that each directory is the name of some tool with two things in it:

  • A README.md as a review of the tool
  • A submodule link to the tool's repo (where appropriate)

Again, I think I'm running point on this one. Please note that the submodule links APPEAR to have some sort of UI issue on github. Every submodule appears to point to Dan DeLeo's 'critical' project.

Metrics Catalog Repo

This is our latest member and it already has an official manager! Jason Dixon (@obfuscurity on github/twitter - jdixon on irc) suggested it so he gets to run it ;) The idea here is that this will serve as a set of best practices around what metrics you might want to collect and why. I'm leaving the organization up to Jason but I suggested a per-app/service/protocol directory.

Wrap Up

So that's where we are. Where it goes, I have no idea. I just want to help wherever I can. If you have any ideas, hit me up on twitter/irc/github/email and let me know. It might help to know that if you suggest something, you'll probably be made the person responsible for it ;)


It was our good friend Sean Porter (@portertech on twitter), that we have to thank for all of this ;)


Update (again)

It was kindly pointed out that I never actually included a link to the repositories. Here they are:


Friday, May 20, 2011

On Noah - Part 4


This is the fourth part in a series on Noah. Part 1, Part 2 and Part 3 are available as well

In Part 1 and 2 of this series I covered background on Zookeeper and discussed the similarities and differences between it and Noah. Part 3 was about the components underneath Noah that make it tick.

This post is about the "future" of Noah. Since I'm a fan of the Fourcast podcast, I thought it would be nice to do an immediate, medium and long term set of goals.

Immediate Future - the road to 1.0

In the most immediate future there are a few things that need to happen. These are in no specific order.

  • General
    • Better test coverage ESPECIALLY around the watch subsystem
    • Full code comment coverage
    • Chef cookbooks/Puppet manifests for doing a full install
    • "fatty" installers for a standalone server
    • Documentation around operational best practices
    • Documentation around clustering, redundancy and HA/DR
    • Documentation around integration best practices
    • Performance testing
  • Noah Server
    • Expiry flags and reaping for Ephemerals
    • Convert mime-type in Configurations to make sense
    • Untag and Unlink support
    • Refactor how you specify Redis connection information
    • Integrated metrics for monitoring (failed callbacks, expired ephemeral count, that kind of stuff)
  • Watcher callback daemon
    • Make the HTTP callback plugin more flexible
    • Finish binscript for the watcher daemon
  • Other
    • Finish Boat
    • Finish NoahLite LWRP for Chef (using Boat)
    • A few more HTTP-based callback plugins (Rundeck, Jenkins)

Now that doesn't look like a very cool list but it's a lot of work for one person. I don't blame anyone for not getting excited about it. The goal now is to get a functional and stable application out the door that people can start using. Mind you I think it's usable now (and I'm already using it in "production").

Obviously if anyone has something else they'd like to see on the list, let me know.

Medium Rare

So beyond that 1.0 release, what's on tap? Most of the work will probably occur around the watcher subsystem and the callback daemon. However there are a few key server changes I need to implement.

  • Server
    • Full ACL support on every object at every level
    • Token-based and SSH key based credentialing
    • Optional versioning on every object at every level
    • Accountability/Audit trail
    • Implement a long-polling interface for inband watchers
  • Watcher callback daemon
    • Decouple the callback daemon from the Ruby API of the server. Instead the daemon itself needs to be a full REST client of the Noah server
    • Break out the "official" callback daemon into a distinct package
  • Clients
    • Sinatra Helper

Also during this period, I want to spend time building up the ecosystem as a whole. You can see a general mindmap of that here.

Going into a bit more detail...

Tokens and keys

It's plainly clear that something which has the ability to make runtime environment changes needs to be secure. The first thing to roll off the line post-1.0 will be that functionality. Full ACL support for all entries will be enabled and it can be set at any level in the namespace, just the same as Watches.

Versioning and Auditing

Again, for all entries and levels in the namespace, versioning and auditing will be allowed. The intention is that the number of revisions and audit entries are configurable as well - not just an enable/disable bit.

In-band watches

While I've lamented the fact that watches were in-band only in Zookeeper, there's a real world need for that model. The idea of long-polling functionality is something I'd actually like to have by 1.0 but likely won't happen. The intent is simply that when you call, say, /some/path/watch, you can pass an optional flag in the message stating that you want to watch that endpoint for a fixed amount of time for any changes. Optionally, a way to subscribe to all changes over long-polling for a fixed amount of time is cool too.
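To make the semantics concrete, here's a rough local simulation in Ruby of how such a long-poll watch might behave. The real thing would be an HTTP request against the Noah server; this only sketches the watch-for-a-fixed-window idea, and every name here is mine:

```ruby
require 'timeout'

# Watch a path in a (here, in-memory) store for up to `timeout_secs`
# seconds; return any changed values seen inside that window.
def watch(store, path, timeout_secs)
  seen = store[path]
  changes = []
  begin
    Timeout.timeout(timeout_secs) do
      loop do
        current = store[path]
        if current != seen
          changes << current
          seen = current
        end
        sleep 0.01
      end
    end
  rescue Timeout::Error
    # window expired - return whatever we saw
  end
  changes
end

store = { "/some/path" => "v1" }
t = Thread.new { sleep 0.05; store["/some/path"] = "v2" }
result = watch(store, "/some/path", 0.2)
t.join
```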

Agent changes

These two are pretty high on my list. As I said, there's a workable solution with minimal tech debt going into the 1.0 release but long term, this needs to be a distinct package. A few other ideas I'm kicking around include allowing configurable filtering on WHICH callback types an agent will handle. The idea is that you can specify that this invocation only handles HTTP callbacks while this other one handles AMQP.

Sinatra Helper

One idea I'd REALLY like to come to fruition is the Sinatra Helper. I envision it working something like this:

    require 'sinatra/base'

    class MyApp < Sinatra::Base
      register Noah::Sinatra

      noah_server "http://localhost:5678"
      noah_node_name "myself"
      noah_app_name "MyApp"
      noah_token "somerandomlongstring"

      dynamic_get :database_server
      dynamic_set :some_other_variable, "foobar"
      watch :this_other_node
    end


The idea is that the helper allows you to register your application very easily with Noah for other components in your environment to know about. As a byproduct, you get the ability to get/set certain configuration parameters entirely in Noah. The watch setting is kind of cool as well. What will happen is that if you decide to watch something this way, the helper will create a random (and yes, secure) route in your application that watch events can notify. In this way, your Sinatra application can be notified of any changes and will automatically "reconfigure" itself.

Obviously I'd love to see other implementations of this idea for other languages and frameworks.

Long term changes

There aren't so much specific list items here as general themes and ideas. While I list these as long term, I've already gotten an offer to help with some of them so they might actually get out sooner.

Making Noah itself distributed

This is something I'm VERY keen on getting accomplished and would really consider it the fruition of what Noah itself does. The idea is simply that multiple Noah servers themselves are clients of other Noah servers. I've got several ideas about how to accomplish this but I got an interesting follow up from someone on Github the other day. He asked what my plans were in this area and we had several lengthy emails back and forth including an offer to work on this particular issue.

Obviously there are a whole host of issues to consider. Race conditions in ordered delivery of Watch callbacks (getting a status "down" after a status "up" when it's supposed to be the other way around) and eventual consistency spring to mind first.

The general architecture idea that was offered up is to use NATS as the mechanism for accomplishing this. In the same way that there would be AMQP callback support, there would be NATS support. Additional Noah servers would only need to know one other member to bootstrap and everything else happens using the natural flows within Noah.

The other part of that is how to handle the Redis part. The natural inclination is to use the upcoming Redis clustering but that's not something I want to do. I want each Noah server to actually include its OWN Redis instance "embedded" and not need to rely on any external mechanism for replication of the data. Again, the biggest validation of what Noah is designed to do is using only Noah itself to do it.

Move off Redis/Swappable persistence

If NATS says anything to me, it says "Why do you even need Redis?". If you recall, I went with Redis because it solved multiple problems. If I can find a persistence mechanism that I can use without any external service running, I'd love to use it.


If I were to end up moving off Redis, I'd need a cross-platform and cross-language way to handle the pubsub component. NATS would be the first idea but NATS is Ruby only (unless I've missed something). ZeroMQ appears to have broad language and platform support so writing custom agents in the same vein as the Redis PUBSUB method should be feasible.

Nanite-style agents

This is more of a command-and-control topic but a set of high-performance specialized agents on systems that can watch the PUBSUB backend or listen for callbacks would be awesome. This would allow you to really integrate Noah into your infrastructure beyond the application level. Use it to trigger a puppet or chef run, reboot instances or do whatever. This is really about bringing what I wanted to accomplish with Vogeler into Noah.

The PAXOS question

A lot of people have asked me about this. I'll state right now that I can only make it through about 20-30% of any reading about Paxos before my brain starts to melt. However, in the interest of proving myself the fool, I think it would be possible to implement some Paxos-like functionality on top of Noah. Remember that Noah is fundamentally about fully disconnected nodes. What better example of a network of unreliable processors than ones that never actually talk to each other? The problem is that the use case for doing it in Noah is limited enough that it's probably not worth it.

The grand scheme is that Noah helps enable the construction of systems where you can say "This component is free to go off and operate in this way secure in the knowledge that if something it needs to know changes, someone will tell it". I did say "grand" didn't I? At some point, I may hit the limit of what I can do using only Ruby. Who knows.

Wrap up - Part 4

Again with the recap

  • Get to 1.0 with a stable and fixed set of functionality
  • Nurture the Noah ecosystem
  • Make it easy for people to integrate Noah into their applications
  • Get all meta and make Noah itself distributed using Noah
  • Minimize the dependencies even more
  • Build skynet

I'm not kidding on that last one. Ask me about Parrot AR drones and Noah sometime.

If you made it this far, I want to say thank you to anyone who read any or all of the parts. Please don't hesitate to contact me with any questions about the project.

Wednesday, May 18, 2011

On Noah - Part 3


This is the third part in a series on Noah. Part 1 and Part 2 are available as well

In Part 1 and 2 of this series I covered background on Zookeeper and discussed the similarities and differences between it and Noah. This post is discussing the technology stack under Noah and the reasoning for it.

A little back story

I've told a few people this but my original intention was to use Noah as a way to learn Erlang. However this did not work out. I needed to get a proof of concept out much quicker than the ramp up time it would take to learn me some Erlang. I had this grandiose idea to slap mnesia, riak_core and webmachine into a tasty ball of Zookeeper clonage.

I am not a developer by trade. I don't have any formal education in computer science (or anything for that matter). The reason I mention this is to say that programming is hard work for me. This has two side effects:

  • It takes me considerably longer than a working developer to code what's in my head
  • I can only really learn a new language when I have an itch to scratch. A real world problem to model.

So in the interest of time, I fell back to a language I'm most comfortable with right now, Ruby.

Sinatra and Ruby

Noah isn't so much a web application as it is this 'api thing'. There's no proper front end and honestly, you guys don't want to see what my design-deficient mind would create. I like to joke that in the world of MVC, I stick to the M and C. Sure, APIs have views but not in the "click the pretty button" sense.

I had been doing quite a bit of glue code at the office using Sinatra (and EventMachine) so I went with that. Sinatra is, if you use the sheer number of clones in other languages as a measure, a success for writing API-only applications. I also figured that if I wanted to slap something proper on the front, I could easily integrate it with Padrino.

But now I had to address the data storage issue.


Previously, as a way to learn Python at another company, I wrote an application called Vogeler. That application had a lot of moving parts - CouchDB for storage and RabbitMQ for messaging.

I knew from dealing with CouchDB on CentOS5 that I wasn't going to use THAT again. Much of it would have been overkill for Noah anyway. I realized I really needed nothing more than a key/value store. That really left me with either Riak or Redis. I love Riak but it wasn't the right fit in this case. I needed something with a smaller dependency footprint. Mind you Riak is VERY easy to install but managing Erlang applications is still a bit edgy for some folks. I needed something simpler.

I also realized early on that I needed some sort of basic queuing functionality. That really sealed Redis for me. Not only did it have zero external dependencies, but it also met the needs for queuing. I could use lists as dedicated direct queues and I could use the built-in pubsub as a broadcast mechanism. Redis also has a fast atomic counter that could be used to approximate the ZK sequence primitive should I want to do that.

Additionally, Redis has master/slave (not my first choice) support for limited scaling as well as redundancy. One of my original design goals was that Noah behave like a traditional web application. This is a model ops folks understand very well at this point.


When you think asynchronous in the Ruby world, there's really only one tool that comes to mind, EventMachine. Noah is designed for asynchronous networks and is itself asynchronous in its design. The callback agent itself uses EventMachine to process watches. As I said previously, this is simply using an EM friendly Redis driver that can do PSUBSCRIBE (using em-hiredis) and send watch messages (using em-http-request since we only support HTTP by default).


Finally I slapped Ohm on top as the abstraction layer for Redis access. Ohm, if you haven't used it, is simply one of the best, if not the best, Ruby libraries for working with Redis. It's easily extensible, very transparent and frankly, it just gets the hell out of your way. A good example of this is converting some result to a hash. By default, Ohm only returns the id of the record. Nothing more. It also makes it VERY easy to drop past the abstraction and operate on Redis directly. It even provides helpers to get the keys it uses to query Redis. A good example of this is in the Linking and Tagging code. The following is a method in the Tag model:

    def members=(member)
      member.tag! self.name unless member.tags.member?(self)
    end

Because Links and Tags are a one-to-many across multiple models, I drop down to Redis and use sadd to add the object to a Redis set of objects sharing the same tag.
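A minimal sketch of what that drop-down looks like - the key layout here is hypothetical, not Noah's actual schema:

```ruby
# Illustrative only: every object carrying a tag is SADDed into one Redis
# set per tag, so a one-to-many grouping across models is a single set op.
def tag_object(redis, tag_name, object_key)
  redis.sadd("noah:tag:#{tag_name}", object_key)
end

def objects_with_tag(redis, tag_name)
  redis.smembers("noah:tag:#{tag_name}")
end
```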

It also has a very handy feature which is how the core of Watches is done. You can define hooks at any phase of Redis interaction - before and after saves, creates, updates and deletes. The entire Watch system is nothing more than these post hooks formatting the state of the object as JSON, adding metadata and PUBLISHing the message to Redis with the Noah namespace as the channel.
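In outline, the post-save hook reduces to something like this sketch - the field names, channel layout and metadata are assumptions, not Noah's actual payload:

```ruby
require "json"

# Hypothetical sketch of the post-commit publish at the core of Watches:
# serialize the object's state as JSON, add metadata, and PUBLISH it on a
# channel under the Noah namespace.
def publish_change(redis, klass, name, attributes, action)
  message = JSON.generate(
    "action"     => action,        # "create", "update" or "delete"
    "class"      => klass,
    "name"       => name,
    "data"       => attributes,
    "updated_at" => Time.now.to_i  # example metadata
  )
  redis.publish("//noah/#{klass}/#{name}", message)
  message
end
```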

Distribution vectors

I've used this phrase with a few people. Essentially, I want as many people as possible to be able to use the Noah server component. I've kept the Ruby dependencies to a minimum and I've made sure that every single one works on MRI 1.8.7 up to 1.9.2 as well as JRuby. I already distribute the most current release as a war that can be deployed to a container or run standalone. I want the lowest barrier to entry to get the broadest install base possible. When a new PaaS offering comes out, I pester the hell out of anyone I can find associated with it so I can get deploy instructions written for Noah. So far you can run it on Heroku (using the various hosted Redis providers), CloudFoundry and dotcloud.

I'm a bit more lax on the callback daemon. Because it can be written in any language that can talk to the Redis pubsub system and because it has "stricter" performance needs, I'm willing to make the requirements for the "official" daemon more stringent. It currently ONLY works on MRI (mainly due to the em-hiredis requirement).

Doing things differently

Some people have asked me why I didn't use technology A or technology B. I think I addressed that mostly above but I'll tackle a couple of key ones.


0mq

The main reason for not using 0mq was that I wasn't really aware of it. Were I to start over and still be using Ruby, I'd probably give it a good strong look. There would still be the question of the storage component, though. There's still a possible place for it that I'll address in part four.


NATS

This was something I simply had no idea about until I started poking around the CloudFoundry code base. I can almost guarantee that NATS will be a part of Noah in the future. Expect much more information about that in part four.


MongoDB

You have got to be kidding me, right? I don't trust my data (or anyone else's, for that matter) to a product that doesn't understand what durability means when we're talking about databases.

Insert favorite data store here

As I said, Redis was the best way to get multiple pieces of required functionality from a single product. Why does a data storage engine have a pubsub messaging subsystem built in? I don't know off the top of my head, but I'll take it.

Wrap up - Part 3

So again, because I evidently like recaps, here's the takeaway:

  • The key components in Noah are Redis and Sinatra
  • Noah is written in Ruby because of time constraints in learning a new language
  • Noah strives for the server component to have the broadest possible set of distribution vectors
  • Ruby dependencies are kept to a minimum to ensure the previous point
  • The lightest possible abstractions (Ohm) are used.
  • Stricter requirements exist for non-server components because of flexibility in alternates
  • I really should learn me some erlang
  • I'm not a fan of MongoDB

If you haven't guessed, I'm doing one part a night in this series. Tomorrow is part four which will cover the future plans for Noah. I'm also planning on a bonus part five to cover things that didn't really fit into the first four.

Tuesday, May 17, 2011

On Noah - Part 2


This is the second part in a series on Noah. Part 1 is available here

In part one of this series, I went over a little background about ZooKeeper and how the basic Zookeeper concepts are implemented in Noah. In this post, I want to go over a little bit about a few things that Noah does differently.

Noah Primitives

As mentioned in the previous post, Noah has 5 essential data types, four of which are what I've interchangeably referred to as either Primitives or Opinionated models. The four primitives are Host, Service, Application and Configuration. The idea was to map some common use cases for Zookeeper and Noah onto a set of objects that users would find familiar.

You might detect a bit of Nagios inspiration in the first two.

Host: Analogous to a traditional host or server. The machine or instance running the operating system. Unique by name.
Service: Typically mapped to something like HTTP or HTTPS. Think of this as the listening port on a Host. Services must be bound to Hosts. Unique by service name and host name.
Application: Apache, your application (Rails, PHP, Java, whatever). There's a subtle difference here from Service. Unique by name.
Configuration: A distinct configuration element. Has a one-to-many relationship with Applications. Supports limited mime typing.

Hosts and Services have a unique attribute known as status. This is a required attribute and is one of up, down or pending. These primitives would work very well integrated into the OS init process. Since Noah is curl-friendly, you could add something globally to init scripts that updates Noah when your host is starting up or when some critical init script starts. If you were to imagine Noah primitives as part of the OSI model, these are analogous to Layers 2 and 3.
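As a sketch of that init-script idea in Ruby, here's what a boot-time status update might look like. The endpoint path and payload shape are my assumptions, not Noah's documented API:

```ruby
require "json"
require "net/http"
require "uri"

# Build the JSON body for a host status update. The status values are the
# three the primitives actually allow; the payload shape is illustrative.
def host_status_payload(name, status)
  raise ArgumentError, "bad status" unless %w[up down pending].include?(status)
  JSON.generate("name" => name, "status" => status)
end

# PUT the status to a hypothetical /hosts/:name endpoint at boot or shutdown.
def announce_host(noah_url, name, status)
  uri = URI("#{noah_url}/hosts/#{name}")
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.put(uri.path, host_status_payload(name, status),
             "Content-Type" => "application/json")
  end
end
```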

Applications and Configurations are intended to feel more like Layer 7 (again, using our OSI model analogy). The differentiation is that your application might be a Sinatra or Java application that has a set of Configurations associated with it. Interestingly enough, you might choose to have something like Tomcat act as both a Service AND an Application. The aspect of Tomcat as a Service is different than the Java applications running in the container or even Tomcat's own configurations (such as logging).

One thing I'm trying to pull off with Configurations is limited mime-type support. When creating a Configuration in Noah, you can assign a format attribute. Currently 3 formats or types are understood:

  • string
  • json
  • yaml

The idea is that, if you provide a type, we will serve the content back to you in that format when you request it (assuming you request it that way via HTTP headers). This should allow you to skip parsing the JSON representation of the whole object and instead use the data directly. Right now this list is hardcoded; I have a task open to change that.
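A minimal sketch of that negotiation step - the format-to-content-type mapping and the JSON fallback are my assumptions, only the three format names come from the list above:

```ruby
# Hypothetical mapping from a Configuration's format attribute to the
# content type served back on request.
CONFIG_FORMATS = {
  "string" => "text/plain",
  "json"   => "application/json",
  "yaml"   => "text/x-yaml"
}.freeze

def content_type_for(format)
  # fall back to the full JSON representation for unknown formats
  CONFIG_FORMATS.fetch(format, "application/json")
end
```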

Hosts and Services make a great "canned" structure for building a monitoring system on top of Noah. Applications and Configurations are a lightweight configuration management system. Obviously there are more uses than that but it's a good way to look at it.


Ephemerals, as mentioned previously, are closer to what Zookeeper provides. The way I like to describe Ephemerals to people is a '512 byte key/value store with triggers' (via Watch callbacks). If none of the Primitives fit your use case, the Ephemerals make a good place to start. Simply send some data in the body of your post to the url and the data is stored there. No attempt is made to understand or interpret the data. The hierarchy of objects in the Ephemeral namespace is completely arbitrary. Data living at /ephemerals/foo has no relationship with data living at /ephemerals/foo/bar.

Ephemerals are also not browseable except via Linking and Tagging.

Links and Tags are, as far as I can tell, unique to Noah compared to Zookeeper. Because we namespace against Primitives and Ephemerals, there existed the need to visualize objects under a custom hierarchy. Currently Links and Tags are the only way to visualize Ephemerals in a JSON format.

Tags are pretty standard across the internet by now. You might choose to tag a bunch of items as production or perhaps group a set of Hosts and Services as out-of-service. Tagging an item is a simple process in the API. Simply PUT the name of the tag(s) to the URL of a distinctly named item with tag appended. For instance, the following JSON PUT to /applications/my_kick_ass_app/tag will tag the Application my_kick_ass_app with the tags sinatra, production and foobar:

{"tags":["sinatra", "production", "foobar"]}

Links work similarly to Tags (including the act of linking) except that the top-level namespace is replaced with the name of the Link. The top-level namespace in Noah for the purposes of Watches is //noah. By linking a group of objects together, you will be able (not yet implemented) to perform operations such as Watches in bulk. For instance, if you wanted to be informed of all changes to your objects in Noah, you would create a Watch against //noah/*. This works fine for most people, but imagine you wanted a more multi-tenant-friendly system. By using links, you can group ONLY the objects you care about and create the watch against that link. So //noah/* becomes //my_organization/* and only changes to items in that namespace will fire for that Watch.

The idea is also that other operations outside of setting Watches can be applied to the underlying object in the link as well. The name Link was inspired by the idea of symlinking.

Watches and Callbacks

In the first post, I mentioned that by nature of Noah being "disconnected", Watches were persistent as opposed to one-shot. Additionally, because of the pluggable nature of Noah Watches and because Noah has no opinion regarding the destination of a fired Watch, it becomes very easy to use Noah as a broadcast mechanism. You don't need to have watches for each interested party. Instead, you can create a callback plugin that could dump the messages on an ActiveMQ Fanout queue or AMQP broadcast exchange. You could even use multicast to notify multiple interested parties at once.

Again, the act of creating a watch and the destination for notifications is entirely disconnected from the final client that might use the information in that watch event.

Additionally, because of how changes are broadcast internally to Noah, you don't even have to use the "official" Watch method. All actions in Noah are published post-commit to a pubsub queue in Redis. Any language that supports Redis pubsub can attach directly to the queue and PSUBSCRIBE to the entire namespace or a subset. You can write your own engine for listening, filtering and notifying clients.

This is exactly how the Watcher daemon works. It attaches to the Redis pubsub queue, makes a few API calls for the current registered set of watches and then uses the watches to filter messages. When a new watch is created, that message is like any other change in Noah. The watcher daemon sees that and immediately adds it to its internal filter. This means that you can create a new watch, immediately change the watched object and the callback will be made.
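The filtering step described above can be sketched as a glob match over channel names. The glob semantics and function names here are assumptions for illustration, not the daemon's actual implementation:

```ruby
# Does a registered watch pattern (e.g. "//noah/*" or "//my_organization/*")
# cover an incoming pubsub channel? Treats "*" as a wildcard.
def watch_matches?(pattern, channel)
  regex = Regexp.new("\\A" + Regexp.escape(pattern).gsub('\*', ".*") + "\\z")
  !!(channel =~ regex)
end

# The daemon-side filter: given the current registered watches, pick the
# ones an incoming message should fire.
def watches_to_fire(watches, channel)
  watches.select { |w| watch_matches?(w, channel) }
end
```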

Wrap up - Part Two

So to wrap up:

  • Noah has 5 basic "objects" in the system. Four of those are opinionated and come with specific contracts. The other is a "dumb" key/value store of sorts.
  • Noah provides Links and Tags as a way to perform logical grouping of these objects. Links replace the top-level hierarchy.
  • Watches are persistent. The act of creating a watch and notifying on watched objects is disconnected from the final recipient of the message. System A can register a watch on behalf of System B.
  • Watches are nothing more than a set of filters applied to a Redis pubsub queue listener. Any language that supports Redis and its pubsub queue can be a processor for watches.
  • You don't even have to register any Watches in Noah if you choose to attach and filter yourself.

Part three in this series will discuss the technology stack under Noah and the reasoning behind it. A bit of that was touched on in this post. Part four is the discussion about long-term goals and roadmaps.