Patrick Debois was kind enough to comment on my previous post and asked some very good questions. I thought they would fit better in a new post instead of a comment box so here it is:
I read your post and I must say I'm puzzled on what you are actually achieving. Is this a CMDB in the traditional way? Or is it an autodiscover type of CMDB, that goes out to the different systems for information? In the project page you mention à la mcollective. Does this mean you are providing GUI for the collected information? Anyway, I'm sure you are working on something great. But for now, the end goal is not so clear to me. Enlighten me!
Good question ;) I think it sits in an odd space at the moment because it tries to be flexible and by design could do all of those things. Mentioning MCollective may have clouded the issue, but it was more of a nod to similar architectural decisions - using a queue server to execute commands on multiple nodes.
My original goal (outside of learning Python) was to address two key things. I mentioned these on the Github FAQ for Vogeler but it doesn't hurt to repost them here for this discussion:
- What need is Vogeler trying to fill?
Well, I would consider it a “framework” for establishing a configuration management database. One problem with something like a CMDB is that, in trying to meet every individual need, it tends to overcomplicate. One thing I really wanted to do was avoid forcing you into my model and instead provide ways for you to customize the application.
I went the other way. Vogeler, at its core, provides two things – a place to dump “information” about “things” and a method for getting that information in a scalable manner. By using a document database like CouchDB, you don’t have to worry about managing a schema. I don’t need to know what information is actually valuable to you. You know best what information you want to store. By using a message queue with some reasonable security precautions, you don’t have to deal with another listening daemon. You don’t have to worry about affecting the performance of your system because you’re opening 20 SSH connections to get information or running some statically linked off-the-shelf binary that leaks memory and eventually zombies (Why hello, SCOM agent!).
In the end, you define what information you need, how to get it and how to interpret it. I just provide the framework to enable that.
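To make the schemaless point concrete, here's a rough sketch of what a node's document might look like. This is plain Python with no CouchDB connection, and every field name here is invented for the example - Vogeler doesn't dictate any of these keys:

```python
import json

# Hypothetical document for one node. Vogeler doesn't impose these keys;
# you decide what "information" about which "things" is worth storing.
node_doc = {
    "_id": "web01.example.com",
    "facter": {"operatingsystem": "CentOS", "memorysize": "8.00 GB"},
    "rpms": ["httpd-2.2.15", "openssl-1.0.0"],
}

# A document database just takes the JSON as-is: no schema migration
# needed when you decide to track something new tomorrow.
node_doc["jboss_jmx_ports"] = [9010, 9011]

print(json.dumps(node_doc, indent=2))
```

The point is that adding a new kind of data is just adding a new key to the document - nobody has to approve an ALTER TABLE first.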
So to address the question:
If we're splitting semantic hairs, yes, it's probably more of a configuration database than a configuration MANAGEMENT database. Autodiscovery, though not in the traditional sense, is indeed a feature. Install the client, stand up the server-side parts and issue a facter command via the runner, and you instantly have all the information facter understands about your systems in CouchDB, viewable via Futon. I could probably easily write something that scanned the network and installed the client, but I have a general aversion to anything that sweeps networks that way. More than likely, you would install Vogeler when you kicked a new server and manage the "plugins" via puppet.
I hope that makes sense. Vogeler is the framework that allows you to get whatever information about your systems you need, store it, keep it up to date and interpret it however you want. That's one reason I'm not providing a web interface for reporting right now. I simply don't know what information is valuable to you as an operations team. Tools like puppet, cfengine, chef and the like are great, and I have no desire to replace them, but you COULD use this to build that replacement. That's also why I ship facter as an example plugin with the code. I don't want to rewrite facter; it just provides a good starting tool for getting some base data from all your systems.
Let's try a use case:
I need to know which systems have X rpm package installed.
You could write an SSH script, hit each box and parse the results or you could have Vogeler tell you. Let's assume that the last run of "package inventory" was a week ago:
vogeler-runner -c rpms -n all
The architecture is already pretty clear. The runner pushes a message onto the broadcast queue, all clients see it ('-n all' means all nodes online) and in turn push their results onto another queue. The server pops the messages and dumps them into the CouchDB document for each node. You could then open Futon, or a custom interface you wrote, and run the CouchDB design doc that does the map/reduce for that information. You have your answer.
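To show how the answer falls out of the map/reduce step, here's a simulation of the "map" half in plain Python over a few invented node documents. In real CouchDB you'd write this as a JavaScript map function inside a design doc; the documents, the 'rpms' key and the package names below are all made up for the sketch:

```python
# Simulated CouchDB "map" step for: which nodes have package X installed?
docs = [
    {"_id": "web01",   "rpms": ["httpd-2.2.15", "openssl-1.0.0"]},
    {"_id": "db01",    "rpms": ["postgresql-8.4", "openssl-1.0.0"]},
    {"_id": "cache01", "rpms": ["memcached-1.4.5"]},
]

def map_fn(doc):
    # emit(package_name, node_id) for every installed package
    for pkg in doc.get("rpms", []):
        name = pkg.split("-")[0]  # crude name/version split for the demo
        yield name, doc["_id"]

# "Query" the view: every node that emitted the key 'openssl'
nodes_with_openssl = sorted(node for doc in docs
                            for key, node in map_fn(doc)
                            if key == "openssl")
print(nodes_with_openssl)  # -> ['db01', 'web01']
```

The reduce step (if you need one) would live in the same design doc; for a simple "which nodes" question, the map output alone answers it.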
Now let's try something of a more complicated example:
I need to know what JMX port all my JBoss instances are listening on in my network.
Well, I don't provide a "plugin" for you to get that information, a key for you to store it under in CouchDB or a design doc to parse it by default. But I don't need to. We take the Nagios approach: you define the command that returns that information. A shell script, a Python script, a Ruby script – whatever works for you. All you need to tell me is what key you want to store it under and something about the basic structure of the data itself. Maybe your script emits JSON. Maybe it emits YAML. Maybe it's a single string. Maybe you run multiple JBoss instances per machine, each listening on a different JMX port (as opposed to aliasing IPs and using the standard one). I'll take that and create a new key with that data in the Couch document for that system. You can peruse it with a custom web interface or, again, just use Futon.
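For instance, such a "plugin" could be nothing more than a script that prints JSON to stdout. Everything below - the instance names, the ports and the key name - is hypothetical, just to show the shape of the contract:

```python
#!/usr/bin/env python
# Hypothetical Vogeler plugin: report the JMX port of each local JBoss
# instance. The instances are hard-coded here; a real script would parse
# `ps` output or each instance's configuration instead.
import json

instances = {
    "jboss-app":   9010,  # invented instance names and ports
    "jboss-batch": 9011,
}

# Print JSON to stdout; Vogeler stores it under whatever key you chose,
# e.g. "jboss_jmx_ports", in that node's CouchDB document.
print(json.dumps({"jboss_jmx_ports": instances}))
```

Swap the body for a shell one-liner or a Ruby script and the contract is the same: emit the data, name the key, and the framework does the rest.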
Does that help?