Autodeploying Servers - A Proof of Concept

A couple of years ago, during the brief pause in the middle of a semester, I figured that some time spent re-thinking how we deploy remote servers might be worth the effort. Fortunately we have sharp sysadmins who like challenges.

Our network is run by a very small group that has to cover a rather large state, 7x24. Driving a 600 mile round trip to swap some gear out isn't exactly fun, so we tend to be very cautious about what we deploy & where we deploy it. I've always had a strong preference for hardware that runs forever, and my background in mechanical sorts of things tells me that moving parts are generally bad. So if I have a choice, and if the device is a long way away, I'll pick a device that boots from flash over an HDD any time.

That ties in with my thoughts on installing devices in general: we ought to design & build to the 'least bit' principle, minimizing the software footprint, file system permissions, ports, protocols, etc., as much as possible while still maintaining the required functionality.

Anyway, at the time, we'd had Snort IDS's out-state for quite a while, and we were thinking that they needed a hardware swap. But I wasn't enthused about installing more spinning media 5 hours from home.

So that led to a proof-of-concept.
  • How close could we come, with no money, to making an IDS that had no moving parts?
  • How minimal could the software footprint be?
  • How would we reliably upgrade, patch and maintain the remote IDS's without travel?
  • Could we make the remote IDS's 'self deploying' (with no persistent per-device configuration)?
  • Could we run an Intel PC headless (no keyboard/monitor) and manage it via 9600,n,8,1?

Knoppix

We started by taking a look at the CD-bootable Knoppix. At the time, it didn't like being run headless. There was no access to the serial port early in the boot process, so if it didn't boot, we wouldn't know why. That doesn't work too well for remote devices, but fortunately there was enough prior work in that space to get us a bootable Knoppix that lit up the serial port early enough to make us happy. Booting from flash would have been better, but a spinning CD is way closer to 'no moving parts' than a spinning HDD, and we didn't have any no-cost flash-bootable computers lying around.
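For reference, pointing a Linux boot CD at the serial port mostly comes down to a couple of boot-loader and init entries. A minimal sketch for 9600,n,8,1 - the file names and getty details here are illustrative, not our exact configs:

```
# isolinux.cfg -- have the boot loader and kernel use ttyS0 as the console
SERIAL 0 9600
DEFAULT knoppix
LABEL knoppix
  KERNEL vmlinuz
  APPEND initrd=miniroot.gz console=ttyS0,9600n8

# /etc/inittab -- then run a login getty on the same port once we're up
T0:23:respawn:/sbin/agetty -L 9600 ttyS0 vt100
```

The `console=ttyS0,9600n8` kernel parameter is what gets you boot messages early enough to see why a remote box didn't come up.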

It was still way too fat to fit the 'least bit' principle, and there was still no easy way of maintaining the Snort binaries and configs without touching each server. I didn't want to have to do that. So we stripped Knoppix down to something reasonably sized, convinced it to look at the serial port instead of the keyboard, burned a new image, and took a look at how to maintain Snort.

We had just prototyped a process that used SVN to manage and deploy the binaries for a J2EE application, so why not look at SVN for Snort? Check the Snort RPMs, compiled binaries, libraries and rules into SVN, and when they change, deploy from SVN to the PC acting as the IDS. Snort writes logs, but it can log to a remote database instead, so persistent, writable storage really isn't necessary. The spinning media problem more or less goes away. So far, so good.
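The repository layout we had in mind was simple - something along these lines, with one directory per sensor (the names and versions here are hypothetical):

```
sensors/
  00e018a32f10/          # one directory per sensor, keyed on its MAC address
    snort-x.y.z.rpm      # the Snort package ...
    pcre-x.y.rpm         # ... and its library dependencies
    snort.conf           # that sensor's config
    rules/               # the current ruleset
```

Check a new RPM or rule file into the sensor's directory, and the sensor picks it up the next time it deploys itself.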

Self Deploy

A long time ago, Cisco made routers that were smart enough to use RARP/BOOTP & tftp to find a config. That meant that if the router, brand new and unconfigured, were attached to a network and found a tftp server with a 'router.conf' on it, the router would boot that config. That made it possible to buy a new router, ship it directly to a remote site, and with no router-savvy person on site, install the router unconfigured, have it 'self configure' and become a useful router. All you needed was an upstream network device (like the router on the near side of the T1) that was smart enough to feed a couple of bits of information to the new, unconfigured router, and the router would 'self deploy'.

So I wanted our new toys to 'self deploy'. Send a new, unconfigured, generic PC to a remote site, plug it in and have it learn enough from the network to figure out who it is and what it's supposed to do.

With a minimal Knoppix-like boot/run CD in hand that talked to its serial port in a reasonable manner, we figured that the network could feed the server enough information to make it useful, like an IP address and gateway. But getting Snort and related bits onto the new server needed some more thought. If we put Snort on the CD, we'd always have to burn new CD's when Snort needed updating, and we'd still have to manage the constantly changing Snort rules. That would require either lots of CD burning or persistent writable storage.

What we came up with was this: put the Snort packages in SVN at the home base, add an SVN client to the boot CD, and install Snort and its dependencies from SVN on the fly on each boot. The remote server would boot from CD, learn who it was from the network (DHCP), figure out where its SVN repository was (either DNS or DHCP options), look up its own MAC address in the SVN repository, and check out whatever it saw there. If what it saw was Snort binaries, it checked them out & installed them.

Boot -> Learn -> Install -> Run.
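A sketch of that flow in shell - the repository host, interface name and install step are hypothetical, but the MAC-address-to-repository-path mapping is the interesting part:

```shell
#!/bin/sh
# Sketch of the boot-time "learn" step: derive this sensor's SVN URL
# from its NIC's MAC address. Host and layout are assumptions.
SVN_BASE="svn://svn.example.edu/sensors"   # learned via DNS or a DHCP option

mac_to_url() {
    # Normalize a MAC like 00:0C:29:3A:5B:7F to 000c293a5b7f and
    # build the URL of that sensor's directory in the repository.
    mac=$(echo "$1" | tr -d ':' | tr 'A-F' 'a-f')
    echo "${SVN_BASE}/${mac}"
}

# On the real sensor we'd read the MAC from the interface, e.g.:
#   MAC=$(cat /sys/class/net/eth0/address)
# then check out and install whatever the repository holds for it:
#   svn checkout "$(mac_to_url "$MAC")" /opt/sensor
mac_to_url "00:0C:29:3A:5B:7F"   # -> svn://svn.example.edu/sensors/000c293a5b7f
```

Everything the sensor needs to know about itself lives in the repository, keyed on a hardware address it already has - nothing persistent on the box.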

To upgrade the boot image, we'd still have to ship a CD to the remote site. But that doesn't happen too often. To upgrade Snort, we'd check a new version of Snort into the master SVN repository at the home base & reboot the remote servers. They'd learn about it on bootup. If we wanted to tweak the config on a sensor, we'd check a new config into SVN and reboot the remote.
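From the home base, an upgrade would amount to something like this (hosts and file names illustrative):

```
# check the new package into the sensor's directory in SVN ...
svn import snort-new.rpm \
    svn://svn.example.edu/sensors/00e018a32f10/snort-new.rpm -m "Snort upgrade"
# ... then bounce the sensor; it re-learns and reinstalls on the way up
```

No travel, no per-box state to babysit - the repository is the single source of truth.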

Pretty simple.

And - if we didn't want the remote to be a Snort sensor anymore, we could check some other bits into SVN, reboot the remotes, and they'd happily check out, install & run the new bits. Turn it into a web server? Just check in new binaries & reboot the remote.

It worked.

But then real projects came along, so we never deployed it.