In Lieu of Coal, Naughty Children May Be Given M4 Files
EXPERIMENTAL CODE AHEAD
Last week, I wrote about the somewhat experimental Throwable::X. This week, I'll be talking about the much more experimental DNS::Oterica. It's been unchanged for a while, but this article is meant to tantalize more than to offer a warrantied product.
Better Living Through Macros
Like nearly any other internet business, my company has to manage DNS records, and plenty of them, for a number of reasons, not the least of which is our heavy reliance on MX records and email transports to get work done. With on the order of 200 host names (A records) alone to manage, the programmers of old had very wisely decided that maintaining thousands of lines of tinydns configuration was madness. They very wisely decided to automate the generation of these files. They very wisely decided to generate output based on the jobs performed by actual hosts. Then they went and decided to use M4.
dns/data.m4 was not a well-loved file, and with good reason. Here's a brief sampling of some of its contents:
    # create tinydns-data "+" or "=" lines
This is probably some of the clearest m4 found in the file. I won't show you anything worse, because nobody wants his or her Friday ruined by something like this:
    define(DEFAULT_MAIL, `
So this file was a relic of days gone by, produced by programmers who had long since moved on to other ventures. Every shop has this kind of system: it works, its custodians can make very basic adjustments to it, but nobody dares make significant changes. Unfortunately, these kinds of systems usually get built very early in a product's lifecycle, to solve very important problems that are not within the business's core competency. That is: we were not a company that existed to manage DNS records, but we needed something to manage them as soon as possible.
The problem is that eventually the business will grow past this kind of solution, and if it can't be maintained, it will have to be rewritten. There may be plenty of ways to make that take longer and to keep these systems more maintainable, but their eventual retirement and replacement, over and over, seems inevitable.
The time for our system to die was at hand, and before we could replace it, we had to understand it.
Decomposing data.m4
Hosts in the M4 file were macros. That meant that when you said, "XYZ sends public mail," m4 would eventually say, "XYZ's IP appears as reverse DNS for outbound-1.example.com." Other macros were more complex, like the REDIRECTING_VIRTUAL_DOMAIN macro, which set up quite a few records for both forward and reverse DNS lookups. All these macros would be passed the host macro, which stood in for a number of identifying attributes of the host.
In fact, once you got past a lot of the weird M4-ness of the file, it was trying to say some pretty simple things, like this:
All hosts have a primary name.
Hosts might have IP addresses for their name.
Hosts might have extra names that go to the same address.
Hosts might belong to service groups, like MX or webservers.
Then there was a second layer of logic that said "a host turns into a bunch of tinydns lines based on its names and IP addresses" and "service groups turn into a bunch of tinydns lines based on the hosts in them." The only thing that really varied was what it meant to be in a given group.
So, there were two kinds of entities to consider: hosts and groups. Hosts were easy, since they're all pretty much the same. We put our host definitions into YAML files, like this one:
    hostname: a-lb-mx-fastnet
Files like this sit in a directory organized however the sysadmins want. Ours, for example, puts each server in one file, along with any Solaris zones running on that server. The tool doesn't care. It just reads in a bunch of YAML documents and turns them into Host objects.
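That loading code isn't shown here; it amounts to a directory walk, something like this sketch, where the file layout and the Host->new call are stand-ins rather than the tool's real interface:

    use strict;
    use warnings;
    use YAML::XS ();

    # Hypothetical loader: every YAML document found under $dir becomes a Host.
    sub load_hosts {
      my ($dir) = @_;

      my @hosts;
      for my $file (glob("$dir/*.yaml")) {
        open my $fh, '<', $file or die "can't read $file: $!";
        my $yaml = do { local $/; <$fh> };

        # One file may hold several documents: a server plus its Solaris zones.
        for my $doc (YAML::XS::Load($yaml)) {
          push @hosts, Host->new($doc);    # stand-in constructor
        }
      }

      return @hosts;
    }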
When every host is loaded in, the hosts can be turned into DNS data lines with their as_data_lines method:
    sub as_data_lines {
($self->rec is a record generator, so that we can generate diagnostic output rather than tinydns config, as needed.)
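Fleshed out, the method looks roughly like the sketch below; the host accessors (fqdn, aliases, interfaces) and the rec->a call are illustrative guesses, not the exact interface:

    sub as_data_lines {
      my ($self) = @_;

      my @lines;

      # Every name the host answers to gets an A record for each of its IPs.
      for my $name ($self->fqdn, @{ $self->aliases }) {
        for my $ip (@{ $self->interfaces }) {
          push @lines, $self->rec->a({ name => $name, ip => $ip });
        }
      }

      return @lines;
    }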
This is pretty straightforward. Sure, it's longer than the corresponding M4 found in our defq macro, but it's a heck of a lot clearer. That said, it's also pretty boring. This code might solve a problem, but it doesn't solve the more interesting, difficult problem we have: composing the behavior found in each service group -- here, called families.
In our YAML document above, we put the host into several families, one of which was com.example.mx, which we implement in a class seen below. We'll go through it in chunks:
    package DNS::Oterica::NodeFamily::ExampleMX;
It's just a Moose class, subclassing the abstract NodeFamily class. (There ended up being boring reasons that NodeFamily couldn't be a role.) The name corresponds to the family name used in the config files.
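The skeleton of that opening chunk looks something like this; everything past the package line is an illustrative guess rather than the verbatim class:

    package DNS::Oterica::NodeFamily::ExampleMX;
    use Moose;
    extends 'DNS::Oterica::NodeFamily';

    # This is the string that host YAML documents use to join the family.
    sub name { 'com.example.mx' }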
    has mx_nodes => (
The family keeps track of all the MX nodes that it's seen; there's an mx_nodes attribute on the object, and every time it adds a node, it records the node and gives it a new name. The first is mx-1.example.com, the second mx-2.example.com, and so on.
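Fleshed out, that chunk might look like the following; the attribute details, the add_node hook (assuming the base class provides add_node), and the add_alias helper are all illustrative guesses:

    has mx_nodes => (
      is      => 'ro',
      isa     => 'ArrayRef',
      default => sub { [] },
    );

    # Guessed hook: when a host joins this family, remember it and give it
    # the next name in the sequence mx-1.example.com, mx-2.example.com, ...
    after add_node => sub {
      my ($self, $node) = @_;

      push @{ $self->mx_nodes }, $node;
      my $i = @{ $self->mx_nodes };
      $node->add_alias("mx-$i.example.com");    # stand-in method
    };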
    augment as_data_lines => sub {
Then, when we're ready to turn the family into configuration, we... wait, what?? What's augment? Rather than a lengthy explanation, here's a snippet of the method definition from the base NodeFamily class:
    sub as_data_lines {
We're going to return a list of lines. We create start and end comments, and get the rest by calling inner. inner is the other end of augment. When we call inner, the augment blocks get called in subclasses. With "normal" method calls, we'd end up entering the subclass's method first, and it would decide how (and whether) to call the superclass's method. Here, we're working backward, in a sense: first the superclass method is begun, then any subclasses may contribute to the output, and then the superclass is finalized.
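Since augment and inner are stock Moose, a tiny standalone example may make the flow clearer; the class names and the sample record here are made up for demonstration:

    package Family;
    use Moose;

    sub as_data_lines {
      my ($self) = @_;
      return (
        "# begin family",
        inner(),                 # runs any augment block supplied by a subclass
        "# end family",
      );
    }

    package MXFamily;
    use Moose;
    extends 'Family';

    augment as_data_lines => sub {
      my ($self) = @_;
      return "+mx-1.example.com:10.0.0.1";
    };

    package main;
    print "$_\n" for MXFamily->new->as_data_lines;
    # prints:
    #   # begin family
    #   +mx-1.example.com:10.0.0.1
    #   # end family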
This avoids any need to worry about calling the superclass method in subclasses, which can then write the simplest thing needed. Unfortunately, you can't call augment in a role. To get that kind of method composition, you may have to wait for a future Advent posting!
A Triumph of the Moose
In the end, we replaced literally hundreds of thousands of lines of M4 with 42 very short class definitions -- our node families -- and one file per physical server. The new system can be the target of regression tests, either of individual groups or of the whole thing. Most of the time, the sysadmins can just edit simple YAML documents. Occasionally, node families need updating, but they're all very simple Perl classes with only one or two methods of any note.
Best of all, nobody needs to learn M4.