Thursday, December 11, 2014

The Password? You Changed It, Right?

Right at this moment, a swarm of little password guessing robots is trying for your router's admin accounts. Do yourself a favor and check your logs right away. Also, our passwords are certainly worth a series of conferences of their own.

As my Twitter followers may be aware, I spent the first part of this week at the Passwords14 conference in Trondheim, Norway. More about that later; suffice it for now to say that the conference was an excellent one, and my own refreshed Hail Mary Cloud plus more recent history talk was fairly well received.

But the world has a way of moving on even while you're not looking. When I finally found a few moments to catch up on my various backlogs while waiting to board the plane for the first hop on the way back from the conference, a particular sequence stood out in the log extracts from one of the Internet-reachable machines in my care:


Dec 9 19:00:24 delilah sshd[21296]: Failed password for invalid user ftpuser from 81.169.131.221 port 37404 ssh2
Dec 9 19:00:25 delilah sshd[6320]: Failed password for invalid user admin from 81.169.131.221 port 38041 ssh2
Dec 9 19:00:26 delilah sshd[10100]: Failed password for invalid user D-Link from 81.169.131.221 port 38259 ssh2
Dec 9 19:03:53 delilah sshd[26709]: Failed password for invalid user ftpuser from 83.223.216.46 port 43261 ssh2
Dec 9 19:03:55 delilah sshd[23796]: Failed password for invalid user admin from 83.223.216.46 port 43575 ssh2
Dec 9 19:03:56 delilah sshd[12810]: Failed password for invalid user D-Link from 83.223.216.46 port 43833 ssh2
Dec 9 19:06:36 delilah sshd[14572]: Failed password for invalid user ftpuser from 87.106.66.165 port 52436 ssh2
Dec 9 19:06:37 delilah sshd[427]: Failed password for invalid user admin from 87.106.66.165 port 53127 ssh2
Dec 9 19:06:38 delilah sshd[28547]: Failed password for invalid user D-Link from 87.106.66.165 port 53393 ssh2
Dec 9 19:14:44 delilah sshd[31640]: Failed password for invalid user ftpuser from 83.223.216.46 port 35760 ssh2


Yes, you read that right. Several different hosts from widely dispersed networks, trying to guess passwords for the accounts they assume exist on your system. One of the user names is close enough to the name of a fairly well known supplier of consumer and SOHO grade network gear that it's entirely possible that it's a special account on equipment from that supplier.

Some catching up on sleep and attending to some high priority tasks later, I found that activity matching the same pattern turned up in a second system on the same network.

By this afternoon (2014-12-11), it seems that all told a little more than 700 machines have come trying mostly what look like various manufacturers' names and a few other usual suspects. The data can be found here, with roughly the same file names as in earlier episodes: the full list of attempts on both hosts, the same with the rather tedious root-only sequences removed, hosts sorted by number of attempts, users sorted by number of attempts, a CSV file of hosts by number of attempts with first seen and last seen dates and times, and finally hosts by number of attempts with a listing of each host's attempts. Expect updates to all of these at quasi-random intervals.
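Summaries like the ones linked above boil down to simple text processing. A minimal sketch (not the actual script behind those files) of counting attempts per host from sshd log lines might look like this, with sample data inlined; in practice you would read /var/log/authlog:

```shell
# Count failed-login attempts per host from sshd "Failed password" lines.
log='Dec  9 19:00:24 delilah sshd[21296]: Failed password for invalid user ftpuser from 81.169.131.221 port 37404 ssh2
Dec  9 19:03:53 delilah sshd[26709]: Failed password for invalid user ftpuser from 83.223.216.46 port 43261 ssh2
Dec  9 19:14:44 delilah sshd[31640]: Failed password for invalid user admin from 83.223.216.46 port 35760 ssh2'

printf '%s\n' "$log" |
    awk '/Failed password for invalid user/ { print $(NF-3) }' |   # the IP field
    sort | uniq -c | sort -rn
```

The same pipeline with the user name field instead of the address field produces the users-by-frequency variant.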

The pattern we see here is quite a bit less stealthy than the classic Hail Mary Cloud pattern. In this sequence we see most of the hosts trying all the desired user names only a few seconds apart, and of course the number of user IDs is very small compared to the earlier attempts. But there seems to be some level of coordination - the attackers move on to the next target in their list, and at least some of them come back for a second try after a while.

Taken together, it's likely that what we're seeing is an attempt to target the default settings on equipment from a few popular brands of networking gear. It's likely that the plan is to use the captured hosts to form botnets for purposes such as DDoS attacks. There is at least one publicly known incident that has several important attributes in common with what we're seeing: Norwegian ISP and cable TV supplier GET found themselves forced to implement some ad hoc countermeasures recently (article in Norwegian) in a timeframe that fits with the earliest attempts we've seen here. I assume similar stories will emerge over the next days or weeks, possibly with more detail than what's available in the digi.no article.

If you're seeing something similar in your network and you are in a position to share data for analysis similar to what you see in the files referenced above, I would like to hear from you.



A conference dedicated to passwords and their potential replacements.

Yes, such a thing exists. All aspects of passwords and their potential replacements have been the topics of a series of conferences going back to 2011. This year I finally had a chance to attend the European one, Passwords14 in Trondheim, Norway December 8-10.

The conference has concluded, but you can still find the program here, and the video from the live stream is archived here (likely to disappear for a few days soon, only to reappear edited into more manageable chunks of sessions or individual talks). You'll find me in the material from the first day, in a slightly breathless presentation (58 slides for 30 minutes of talking time), and my slides, with links to data and other material, are available here.

Even if you're not in a position to go to Europe, there is hope: there will be a Passwords15 conference for the Europe-challenged in Las Vegas, NV, USA some time during the summer of 2015, and the organizers are currently looking for a suitable venue and time for the 2015 European one. I would strongly recommend attending the next Passwords conference; both the formal talks and the hallway track are bound to supply enlightening insights and interesting ideas for any reasonably security oriented geek.

Now go change some passwords!

I'll be at at least some of the BSD themed conferences in 2015, and I hope to see you there.

Saturday, October 25, 2014

The Book of PF, 3rd Edition is Here, First Signed Copy Can Be Yours

Continuing the tradition Michael Lucas started with his Absolute OpenBSD, 2nd edition auction, I will be auctioning off the first signed copy of The Book of PF, 3rd edition.

Updated - the ebay auction has concluded, final bid was US $3,050.00 - see below
 
Today I took delivery of two boxes full of my author copies of The Book of PF, 3rd edition. They are likely the first to arrive in Norway as well (a few North Americans received their copies early last week), but of course that is somewhere in the range of hard to impossible to verify.

Anyway, here is the long anticipated with-book selfie:


(larger size available here)

The writing process and the subsequent editing and proofing steps that you, dear reader, will know to appreciate took significantly longer than I had expected, but this edition of the book has the good luck to become available just before the release of OpenBSD that it targets. My original plan was to be in sync with the OpenBSD 5.5 release, but to nobody's surprise but mine the process took longer than I had wanted it to.

As regular readers will know already, the main reason this edition exists is that from OpenBSD 5.5 on, we have a new traffic shaping system to replace the more than 15-year-old experimental ALTQ code. The book is up to date with OpenBSD 5.6 (early preorderers have received their disks already, I hear), and while it gives some hints on how to migrate to the new queues and priorities system, it also notes that ALTQ is no longer part of OpenBSD as of version 5.6.
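For a flavor of what that migration looks like, here is an assumption-laden sketch of the new post-5.5 queueing syntax in pf.conf, not lifted from the book; the interface name and bandwidth figures are made up for illustration:

```conf
# New-style (OpenBSD 5.5+) queue definitions replacing an ALTQ setup
queue main on egress bandwidth 20M
queue def parent main bandwidth 18M default
queue ssh parent main bandwidth 2M

# Assign interactive ssh traffic to its own queue
match out on egress proto tcp to port ssh set queue ssh
```

The book covers the details, including the priorities system that complements these bandwidth queues.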

And of course there have been various improvements in OpenBSD since 2010 and version 4.8, which were the year and version referenced in the second edition. You will see updates reflecting at least some of those changes in various parts of the book.

Even if you're not on OpenBSD at all, this edition is an improvement over previous versions: we've taken some care to include information relevant to FreeBSD and NetBSD as well, and where there are significant differences between the systems, the text and examples note them.

It could have been tempting to include specific references to Apple's operating system as well, but I made a decision early on to stay with the free systems. I have written something about PF and Apple, but not in the book -- see my Call for Testing article How Apple Treats The Gift Of Open Source: The OpenBSD PF Example for a few field notes.

But now for the main item. For this edition, for a limited time only, there will be a

Book of PF Auction

You have a chance to own the first author signed copy of The Book of PF, 3rd edition.

The auction is up at http://www.ebay.com/itm/The-Book-of-PF-3rd-ed-signed-by-the-author-First-Copy-signed-/321563281902? - I'll look into extending the auction period; for some odd reason the maximum offered was 10 days. If your bid is not the successful one, I strongly urge you to make a direct donation of the same amount to the OpenBSD Foundation instead.

I've signed the book, and will fill in the missing spaces once we have the name and amount:




UPDATE 2014-10-26 01:00 CEST: Whatever it was that stopped ebay from listing the auction was resolved, and the auction is up at the link above.

The first signed copy, and incidentally also the first copy my wife picked out of the first box we opened, will come with this inscribed in my handwriting on the title page:

FOR (your name)
Winner of the 2014 Book of PF Auction
Thank you for Supporting OpenBSD with your
(CAD, USD or EUR amount) donation

Bergen, (date), (my signature)

That's just for your reference. My handwriting is not a pretty sight at the best of times, and when you, the lucky winner, receive the book, it's entirely possible that you will not be able to decipher the scrawls at all.

If you think your chances of actually winning are not worth considering, please head over to the OpenBSD donations or orders page and spend some of your (or your boss') hard earned cash!

My speaking schedule has not been set for the upcoming months, but there is a reasonable chance I'll attend at least a few BSD events in the near future. See you there!

UPDATE 2014-11-26: The auction concluded on November 4th, with Bill Allaire as the successful bidder. He paid via PayPal (as you almost inevitably will on an Ebay auction) immediately, and I sent the signed book to him two days later.

As the lady at the post office said, the package took about a week to turn up in Bill's mailbox. But it took 21 days before PayPal finally made the funds available to me, and after a bit of wrestling with the possibly very intuitive (to someone else) PayPal interface, I transferred the amount to the OpenBSD Foundation today. PayPal of course racked up fees both incoming and outgoing, to such a degree that if the fees we paid there are considered at all competitive, whoever coined the phrase "the giant vampire squid" to describe US banks must have been trying desperately to make the traditional US banks sound all nice and cuddly.

For the unsuccessful bidders, I urge you to head over to the OpenBSD Foundation's Donations page and make a donation equal to your highest bid.

Tuesday, August 12, 2014

Password Gropers Take the Spamtrap Bait

We have just seen a new level of password gropers' imbecility. Or, Peak Stupid.

Regular readers of this column know that I pay attention to my logs, as any sysadmin worth his or her salt would. We read logs, or at least skim summaries, because in between the endless sequence of successfully delivered messages, users who logged in without a hitch and web pages served, unexpected things turn up from time to time. Every now and then, the unexpected log entries lead to discoveries that are useful, entertaining or even a bit bizarre.

In no particular order, my favorite log discoveries over the years have been
  • the ssh bruteforce password guessers, a discovery that in turn led to some smart people developing smart countermeasures that shut each one of them down, automatically.
  • the faked sender addresses on spam, leading to bounces to non-existent users. Once again, thanks to the OpenBSD developers, I and others have been feeding those obviously fake addresses back to spamdb, slowing spammers down and generating publishable blacklists.
  • and never forgetting the relatively recent slow, distributed ssh bruteforcers, also known as "The Hail Mary Cloud", who in all likelihood expected to avoid detection by spreading their access attempts over a long time and a large number of hosts, with a level of coordination that was only detectable by reading the logs in detail.
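The automatic countermeasures referred to in the first item are, in OpenBSD terms, typically built on PF's state-tracking options. A generic sketch, with the limits chosen purely for illustration:

```conf
# Rate-limit inbound ssh and banish abusers to a blocked table
table <bruteforce> persist
block quick from <bruteforce>
pass in on egress proto tcp to port ssh \
    keep state (max-src-conn 15, max-src-conn-rate 5/3, \
        overload <bruteforce> flush global)
```

A host that opens more than 15 simultaneous connections, or more than 5 in 3 seconds, lands in the bruteforce table and is blocked from then on.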

After the Hail Mary Cloud cycle of attempted attacks, I thought I'd seen it all. 

But of course I was wrong.  Now we have more likely than not found evidence of another phenomenon which by perverse coincidence or fate appears to combine features of all these earlier activities.

Early in July 2014, my log summaries started turning up failed logins to the pop3 service (yes, I still run one, for the same bad reasons more than a third of my readers probably do: inertia).

At our site (which by now serves mainly as my personal lab in addition to serving a few old friends and family), pop3 activity besides successful logins by the handful of users who still use the service had been quite low for quite a while. And if we for now ignore the ssh bruteforcers who somehow never seem to tire of trying to log in as root, my log summaries had been quite boring for quite a while. But then the failed pop3 logins starting this July were for the unlikely targets admin, sales and info, user names that have never actually existed on that system, so I just kept ignoring the irregular and infrequent attempts.

My log summaries seemed to indicate that whoever is in charge of superonline.net (see the log summary) should tighten up a few things, but then it's really not my problem.

But then on July 18th, this log entry presented itself:

Jul 18 17:01:14 skapet spop3d[28606]: authentication failed: no such user: malseeinvmk - host-92-45-149-176.reverse.superonline.net (92.45.149.176)

That user name malseeinvmk is weird enough that I remembered adding it as part of one of the spamtrap addresses at one point. I had to check:

$ grep malseeinvmk sortlist
malseeinvmk@bsdly.net

which means yes, I remembered correctly. That user name is part of one of the more than twenty-eight thousand addresses on my traplist page, added at some point between 2006 and now to my spamdb database as a spamtrap, and as such known to be non-deliverable. So I started paying a bit more attention, and sure enough, over the next few days the logs (preserved here) were still turning up more entries from the traplist page. We would see local parts (usernames) only, of course, but grep would find them for us.

Now, trying to log on to a system with user names that are known not to exist sounds more than a little counterproductive at the best of times. But from my perspective, the activity was in fact quite harmless and could be potentially fun to watch, so I adjusted my log rotation configuration to preserve the relevant logs for a little longer than the default seven days.
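On OpenBSD, that adjustment is a one-line change in /etc/newsyslog.conf. A sketch with illustrative values only; check your system's actual stock entry before copying anything:

```conf
# /etc/newsyslog.conf excerpt: keep 31 rotated copies instead of the
# default 7, rotating weekly (every 168 hours), compressed
# logfile              mode count size when flags
/var/log/maillog       640  31    *    168  Z
```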

Coming back to peek at the results every now and then, I noticed fairly soon that the attempts in almost all cases were concentrated in the first two minutes past every full hour. There are a couple of hour-long periods with attempts spread more or less evenly a few minutes apart, but the general pattern is anything from one to maybe ten attempts in the :00:00-:02:00 interval.
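Spotting that clustering is again a small text-processing exercise: pull the minutes field out of the syslog timestamps and count. A sketch with fabricated sample lines:

```shell
# Histogram of minutes-past-the-hour for failed pop3 logins
log='Jul 18 17:00:14 skapet spop3d[28606]: authentication failed: no such user: aaa - example.net (192.0.2.1)
Jul 18 18:01:22 skapet spop3d[28607]: authentication failed: no such user: bbb - example.net (192.0.2.2)
Jul 18 19:41:02 skapet spop3d[28608]: authentication failed: no such user: ccc - example.net (192.0.2.3)'

printf '%s\n' "$log" |
    awk '{ split($3, t, ":"); print t[2] }' |   # the MM of HH:MM:SS
    sort | uniq -c
```

Run against the real preserved logs, the counts pile up in the 00 and 01 buckets.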

The pattern was not exactly the Hail Mary Cloud one, but similar enough in that the long silent intervals could very well be an attempt at hiding in between other log noise.

But that returns us to the question, Why would anybody treat a list of known to be non-existent user names as if they actually offered a small chance of access?

They could be trying to weed out bad entries. One possible explanation is that whoever is at the sending end is trying to weed out the bad addresses in a long list that may or may not be of the wanted quality. If an attempted login gives an indication of whether the user exists, it might be worth trying.

They could be thinking the list is all good, and they want access. Brute force password guessing is not limited to ssh. We will explore this option further in a bit.

This could be an elaborate joke. The Hail Mary Cloud got passing mention in some mainstream press, and there are people out there who might be able to pull this off just for the hell of it.

Let's put each of those hypotheses to the test.

First, when you try to log in to the service, do you get any indication whether the user you attempt to log in as exists?

Here's what it looks like when a valid user logs in:

$ telnet skapet.bsdly.net pop3
Trying 213.187.179.198...
Connected to skapet.bsdly.net.
Escape character is '^]'.
+OK Solid POP3 server ready
USER peter
+OK username accepted
PASS n0neof.y3RB1Z
+OK authentication successful

At this point, I would have been able to list or retrieve messages or even delete them. But since my regular mail client does that better than I do by hand, I close the session instead:

quit
+OK session ended
Connection closed by foreign host.

And in case you were wondering, that string is not my current password, if it ever was.

Next, let us compare with what happens when you try logging in as a user that does not exist:

$ telnet skapet.bsdly.net pop3
Trying 213.187.179.198...
Connected to skapet.bsdly.net.
Escape character is '^]'.
+OK Solid POP3 server ready
USER jallaballa
+OK username accepted
PASS Gakkazoof
-ERR authentication failed
Connection closed by foreign host.

Here, too, the user name is tentatively accepted, but the login fails anyway without disclosing whether the password was the only thing that was invalid. If weeding out bad entries from the list of more than twenty-eight thousand was the objective, they're not getting any closer using this method.

Unless somebody actually bothered to compromise several hundred machines in order to pull off a joke that would be funny to a very limited set of people, the inescapable conclusion is that we are faced with would-be password guessers who
  • go about their business slowly and in short bursts, hoping to avoid detection
  • base their activity on a list that was put together with the explicit purpose of providing no valid information

If throwing semi-random but 'likely' user names and passwords at random IP addresses in slow motion had monumental odds against succeeding, I'm somewhat at a loss to describe the utter absurdity of this phenomenon. With trying to sneak under the radar to guess the passwords of users that have never existed, I think we're at the point where the Internet's bottom-feeding password gropers have finally hit peak stupidity.

More likely than not, this is the result of robots feeding robots with little or no human intervention, also known as excessive automation. Automation in IT is generally a good thing, but I have a feeling somebody is about to discover the limits of this particular automation's usefulness.

I hate to break it to you, kids, but your imaginary friends, listed here, never actually existed, and they're not coming back.

And if this is actually a joke that has somebody, somewhere rolling on the floor laughing, now is a good time to 'fess up. There is still the matter of a few hundred compromised hosts to answer for, which may be a good idea to clear up as soon as possible.

As usual, I'll be tracking the activities of the miscreants and will refresh these resources at semi-random intervals as long as the activity persists:

At the rate they're going at the moment, we could be seeing them hang on for quite a while. And keep in mind that the list generally expands by a few new finds every week.

Update 2014-08-19: The attempts appear to have stopped. After some 3798 access attempts by 849 hosts trying 2093 user IDs, the last attempt so far was

Aug 17 22:40:58 skapet spop3d[20058]: authentication failed: no such user: jonatas-misbruke - host-92-45-135-208.reverse.superonline.net (92.45.135.208)

I take this as an indication that these access attempts are at least to some extent monitored, and with those numbers of attempts with a total of 0 successes, any reasonably constructed algorithm would have found reason to divert resources elsewhere. We can of course hope that some of the participating hosts have been cleaned up (although nobody actually wrote me back about doing that), and of course you can't quite rule out the possibility that whoever runs the operations reads slashdot.

But then again, the fact that the pop3 password gropers have moved on from my site should not lead you to believe that your site is no longer a target.

Update 2014-08-21: I spoke too soon about the pop3 password gropers giving up and moving on. In fact, only a few hours after the last update (in the early morning CEST hours of August 20th), two more attempts occurred:

Aug 20 05:07:41 skapet spop3d[2943]: authentication failed: no such user: info - 94.102.63.160
Aug 20 05:31:58 skapet spop3d[15882]: authentication failed: no such user: info - host-92-45-150-182.reverse.superonline.net (92.45.150.182)


before another long silence set in and I decided to wait a bit before making any new announcements.

But at just after ten in the morning CEST on August 21st, the slow and distributed probing with usernames lifted from my spamtraps page resumed. At the time of this writing, the total number of attempts has reached 3822, with 856 participating hosts and attempts on 2103 distinct user names. I'll publish refreshed files at quasi-random intervals, likely once per day if not more frequently.

Update 2014-09-01: Statistics improved, estimates revised.

A couple of days ago, I tweeted:
which, based on some quick calculations at the time, seemed a reasonable number. Now we've reached the end of the first full month of this extended incident, and it's time to present some preliminary statistics, lightly polished.

For the estimated end date, which will be loosely based on the rate of attempts at new user names, the calculation is as follows: On the spamtraps page, roughly half the addresses belong to domains whose primary MX is not skapet.bsdly.net (we're secondary or further out). Stripping those addresses from our total, we're left with 14809 possible usernames. Of course, we have no idea when the list they're working from was sucked in, but this is our latest data. Next, by the time I started writing this update, a total of 2523 usernames had been attempted. We're now well into the 57th day, so dividing 2523 usernames by 57, we get a little more than 44 usernames per day on average. Dividing the total number of usernames by our average per day, we get approximately 334 days to try them all, which means that the first attempt on the last username in the list is likely 277 (334 - 57) days in the future. This means that at the current rate, it will be early June 2015 before they run out of usernames. How long they stick around will likely depend on their strategy for selecting passwords to match.
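That back-of-the-envelope calculation can be redone mechanically; a sketch using shell integer arithmetic, which lands a couple of days off the hand-calculated figures because of truncation:

```shell
usernames=14809   # trap addresses with skapet.bsdly.net as primary MX
tried=2523        # distinct user names attempted so far
days=57           # days into the incident

per_day=$(( tried / days ))              # roughly 44 user names per day
total_days=$(( usernames / per_day ))    # roughly 336 days for the full list
remaining=$(( total_days - days ))       # roughly 279 days to go
echo "$per_day per day, roughly $remaining days left"
```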

If you want to see some more detail on the attackers' progress, here are some data I generated while actively procrastinating and putting off other tasks with a defined and fixed deadline:

A graph of attempts per day, based on this .csv file and massaged via LibreOffice in this spreadsheet:



(larger .png here). I also tried feeding LibreOffice this .csv file with per hour data, but getting the data graphed in any sensible manner seems to require more effort than I'm inclined to put into that particular task.

Hosts participating with first and last seen dates: Possibly more useful is this .csv file, which has participating hosts in the same order as the List of compromised hosts participating but expanded to comprise the fields Attempts,Host,User Names,First Seen,Last Seen, where the last two are dates in formats that your spreadsheet should be able to parse and use as a sort key with only minor efforts. The host with the most attempts as of the last dump was only first seen on August 29th, while several of the top 10 have been with us since some time in July. And of course, at the other end of the scale, quite a few have made only one attempt to date. I'll be updating the data at semi-random intervals, likely at least once a day. Raw data and generated files can be found here, along with scripts used and even a few temporary files. The data can be used for any purpose as long as proper attribution is included. See the Hail Mary Cloud Data page for details, and the Hail Mary Cloud overview article for background information.

Update 2014-09-04: A separate effort detected. Perceptive readers will have noticed that the data now includes traces of activity from hosts in the dynamic.adsl.gvt.net.br domain starting on September 2, 2014, which deviates far enough from the general distributed pattern that it likely represents a separate, if not overly successful, effort.

These four (so far) hosts have tried relatively long sequences of attempts at single user names at a time, targeting what I assume are 'probable' user names for that part of the world, 22 user IDs so far.  None of those user IDs expand to valid addresses in our domains, and just because I can, I've now added these user IDs to the spamtraps page as presumed members of our nxdomain.no domain. If you feel those hosts should be removed from the data for purposes of your own analyses, it's fairly easy to strip them away. In fact, a simple

$ grep -v dynamic.adsl.gvt.net.br filename

where filename is either the raw authentication log or one of the extracted files that include hostnames, will remove them from sight.

Update 2014-12-10: Data from this incident as well as some others have been included in my Passwords14 presentation, PDF version available too, here.

Update 2015-08-24: Since this article was originally posted, there have been several shorter episodes of pop3 groping activity, but mostly very low volume and short-lived enough not to be really worth noting. However, at the very end of July 2015 the intensity increased slightly, and I've preserved the log entries starting July 27 in a convenient place - raw log data, overview .csv file, user names by frequency and finally hosts by frequency.

A very preliminary analysis of the data says we have a total of 268 hosts attempting access at least once, and apparent coordination (several hosts processing the same user name for a short while before moving on to the next) in several long sequences. We also see long sequences of one single host attempting one user name or an assortment of user names, and while we have a number of 'likely' user names (including my first name) as attempted user IDs, we also see an assortment of user IDs apparently lifted from the spamtraps page.

If you're seeing something like this in your logs, I'm interested in hearing from you. And if you're responsible for one or more of the systems that appear in those logs, you have a bit of cleaning up to do.


Book of PF News: All ten chapters and the two appendixes have been through layout and proofing; front and back matter is still a work in progress, and an expected arrival date for physical copies is still not set. Those most eager to read the thing could try early access or preorder.

It's almost certain that physical copies will not be available at EuroBSDCon in Sofia, but if you make it to one of my sessions there, a special discount code worth 40% off both the ebook and print version will be disclosed to attendees. (And I'm told it's active already, so if you guess what it is, you can use it.)

(And of course, as this blog post shows, the book came out and was well received, at least in some quarters.)

Update 2018-05-10: It appears that my spamtraps have entered a canon of sorts. On our freshly configured imapd, this happened: a few dozen login attempts to spamtrap IDs, earning of course only a place in the permanent blocks along with the popflooders, as described in this article.

Monday, May 12, 2014

Have you changed your password lately? Does it even matter?

Does enforced password change at set intervals actually enhance security? I want to hear your opinion and your reasoning.

All sites are bound to have some collection or other of rules regarding passwords. In most cases, the rules dictate some level of complexity or at least length, some sites have requirements for various classes of characters involved, and in most if not all cases, site administrators implement some kind of mechanism for making you change your password at intervals.

At some places I've worked, I've been part of setting those parameters, and at others I've done my best to comply. The alternative being, of course, having my access to systems that were in fact crucial to my job blocked.  I can sympathise with policies that require some level of password complexity.

But coming up with a good, complex, password or passphrase that is at the same time both hard to guess and possible to remember is not easy. In fact, whenever I've been subject to a regime that requires password change at short enough intervals that I remember the last one, I've spent considerable energy in the grace period from the 'your password is about to expire' warning trying to come up with a good password or passphrase.

The way out has almost always been to figure out the minimum complexity the regime requires, and in some cases pinpointing the amount of difference needed between two succeeding passwords or passphrases.
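One hedged way out of the "invent something new again" cycle is to generate rather than invent. This sketch joins four random fragments; random lower-case strings stand in for dictionary words here, since word list locations vary between systems:

```shell
# Generate a random four-part passphrase from /dev/urandom
passphrase=$(tr -cd 'a-z' < /dev/urandom | fold -w 6 | head -4 | tr '\n' '-')
passphrase=${passphrase%-}   # drop the trailing separator
echo "$passphrase"
```

Whether the result clears a given site's complexity rules is, of course, another matter entirely.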

So what features of a password regime do actually improve site security? Is enforcing frequent password changes such a feature? I offer this poll, where I want your honest opinion:


In your honest, qualified opinion, do frequent and enforced password changes
  


Please also give your opinions in the comments.

In other news, I'm still taking questions for my BSDCan tutorials (see the Upcoming Talks panel (top right in the big screens version) or the post BSDCan Tutorials: Please Help Me Improve Your Experience for further details). I look forward to seeing some of you in Ottawa. Depending on how the a la carte sessions work out, similar sessions may be on offer at upcoming conferences. Stay tuned for developments.

Thursday, April 3, 2014

BSDCan Tutorials: Please Help Me Improve Your Experience

A good tutorial should sound to passersby much like an intense but amicable discussion between colleagues.
 
In a little over a month, I'll be heading out to Ottawa to attend BSDCan 2014. I've been a regular at BSDCan since 2006, attending every year since except 2008 -- I wanted to go that year too, but other business (actually the business of getting out of a company I'd helped build) kept blocking my preparations, even though I had a fresh book out, with the first edition of The Book of PF published in late 2007.

But I've kept coming back after that, and I've almost always given the PF tutorial at BSDCan. This year I'm branching out a bit to give two separate sessions:

Building the Network you need with PF, the OpenBSD Packet filter

and

Transitioning to OpenBSD 5.5

Both sessions have been allocated 3-hour slots, and they share another characteristic: I've invited my attendees to send me an email about what they're interested in learning during the tutorial. The main reason I do that is that I want to improve the experience for you, my prospective tutorial attendee.

Let me give you a bit of background. I've been giving the PF tutorial in various forms quite a few times over the years, and at one point I'd accumulated enough useful PF material that writing a book about PF seemed to be a natural next step. There was always some material that did not quite fit the book format, but a lot stayed at least for a while in my tutorial slides.

Over time I ran into a bit of trouble with the fact that BSDCan tutorials are always 'half day' or 3 hours. My collection of slides and notes has tended to expand over time, partly as a function of more experience, and partly due to the sad fact that the other BSDs have been slow to adopt any post-OpenBSD 4.5 syntax changes and other innovations. At some conferences and events I've done the PF session as a sometimes slightly overfull full-day event, depending on the number of questions and amount of other interaction.

But this time around it's three hours only, and I think that's quite an opportunity to improve the experience. I have more than enough material, but I've found that I usually know next to nothing about the people who will attend the sessions, and people's backgrounds vary enough that it's sometimes hard to find common ground, or even to pick out at short notice which parts of the material actually fit the group. I've had groups where some attendees had barely used any BSD at all alongside OpenBSD developers who committed updates to man pages during the session in response to my slides and remarks, and most levels in between.

So with a strict limit on time, I would very much like to tailor the event specifically to the people who will be attending and who have something of an idea of what they want to learn. Please send me that email (to tutorial@bsdly.net); I will probably end up updating the dense mass of slides anyway, and after the session they will be put in the usual place for browsing at your leisure.

If the format works, it's likely I'll try the short and tailored approach again at future events.

But there's more. As it says in the Transitioning to OpenBSD 5.5 tutorial description, OpenBSD has been the source of a number of BSD innovations over the years, and OpenBSD 5.5 has several noteworthy improvements: time_t is now a 64-bit value so time will not wrap anytime soon, we have a new traffic shaping system wrapped into PF and a clear path to replacing the once-experimental but now aging ALTQ, signify(1)-signed install sets and packages, and quite a few more bits. The release page is filling out nicely at the moment.

It's likely that those changes alone could be made to fill a 3 hour tutorial slot nicely, but once again I would very much like to shape the session to fit the needs of the people who are planning to attend, so it's likely that more general 'what to look out for when switching to OpenBSD' style material will be useful too. And if you haven't already, your experience will be much improved if you prepare a bit. The OpenBSD FAQ and the website in general is a valuable resource, and Michael W. Lucas' Absolute OpenBSD, 2nd edition is a very good source of information.

This year's BSDCan will be the first time I do the Transition to OpenBSD N.m session (unless I do get a rehearsal run organized with some locals), but it's likely I'll try again at later events for whatever is the just released or soon to be released OpenBSD version. Things are shaping up nicely for OpenBSD 5.6 at the moment, but the details of that future release will be out of scope for the Ottawa session. So please do send me an email (to transition@bsdly.net) if you plan to attend the session, and I will do my best to tailor the tutorial to your needs.

For both sessions, my ambition is to have the tutorial sound like an intense, but amicable discussion among colleagues. I look forward to seeing you in Ottawa.

Update 29 April 2014: A few people have asked, and I answer: Even if you're not able to attend the BSDCan session, you're of course welcome to send me questions to indicate what you would like to see covered in the tutorials and in the slides I'll publish afterwards. I won't give a firm promise to cover every question, but I'm happy to hear from you.

In other things, the manuscript for the third edition of The Book of PF -- whose main reason to exist is the new traffic shaping system -- is complete, going through the various editing steps, and will be available at a yet to be determined date in 2014. I will post updates here and through Twitter, G+ and other channels once more detailed information is available.

If you are unable to attend BSDCan, all is not lost: the EuroBSDCon 2014 conference is still accepting submissions for papers and tutorials, so if you have an interesting BSD-related topic you want the world to know about, please drop us a line (or even better, a title, abstract and short biographical description) at submission@eurobsdcon.org (full disclosure: I'm on the program committee). This year's conference is set in beautiful Sofia, Bulgaria in late September.

Thursday, February 27, 2014

Yes, You Too Can Be An Evil Network Overlord - On The Cheap With OpenBSD, pflow And nfsen

Have you ever wanted to know what's really going on in your network? Some free tools with surprising origins can help you to an almost frightening degree.

One question I get a lot (or variants that end up being very close) is, "How do you keep up with what's happening in your network?". A close cousin is "how much do you actually know about your users?".

The exact answer to both can have legal implications, so before I proceed to the tech content, I'll ask you to make sure you understand the legal framework you will be working under with respect to any regulatory requirements or other legal limits as they apply to monitoring in general and your users' privacy in particular before you proceed to setting up a monitoring infrastructure. Legalisms can be tiring to a techie, but illegality can bite you really really hard.

Now for the tech side of things: of course I do network monitoring and have a few favorite tools. This article has been brewing, for some values of brewing, for quite a while. While I was collecting notes and anecdotes, last (Northern hemisphere, 2013) summer yielded news stories that showed more pervasive surveillance than most had even imagined, operated by a three letter US government agency, and writing about the relatively benign techniques in my favorite toolbox became less appealing for a while.

But the questions about how to really get to know your network are still relevant to networking practitioners, so I'll let you in on a few not really secret facts about how it's done. Of course all of the things I describe here are easier if you're using OpenBSD, but then you probably knew that fact about our favorite operating system already.

OpenBSD has traditionally had an impressive suite of networking tools, and as we know every release brings new enhancements and sometimes brand new tools for us to make use of.



Enter pflow(4), Yet Another Network Pseudo Device

The NetFlow protocol was invented at Cisco in the early 1990s. It's designed to collect traffic metadata, where the basic unit of reference is the flow, defined as the source and destination IP address pair, the matching source and destination port for protocols that use them, the protocol identifier, time started and ended, number of packets sent, number of bytes sent, and a few other fields that have varied somewhat over the NetFlow versions.

Flows are unidirectional, and a TCP connection will typically consist of a pair of flows, one in each direction. For contexts where you do not need to store the content of the traffic, this is the data you want. A multi-gigabyte file transfer, once it concludes, will produce a netflow record that takes up only on the order of a few hundred bytes, much the same as the almost dataless name service request that probably preceded it.

On OpenBSD, various netflow sensors and collectors had been available for a while when the new network pseudo device pflow(4) debuted in OpenBSD 4.5. As you would expect on OpenBSD, pflow is tightly associated with PF, and collecting data from an OpenBSD machine (typically a gateway) involves adding the state option pflow to PF rules that you want to collect Netflow data for, much like you would pick rules for logging with log or log (all) options. To wit, a rule for collecting pflow data would look something like this:

pass out log inet proto tcp from <clients> to port $email keep state (pflow)

But then generating pflow data proved so enormously useful in a lot of contexts that the OpenBSD 4.5 release also included a set state-defaults option that would apply to all rules in the rule set unless specifically exempted. You guessed it, the most popular set in a number of PF shops became

set state-defaults pflow

more or less overnight after the OpenBSD 4.5 release.
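
After editing pf.conf, reloading the rule set so the new state options take effect is the usual pfctl one-liner:

```sh
# load the updated rule set from the default configuration file
doas pfctl -f /etc/pf.conf
```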

Once you have reloaded your rule set with the pflow option in place, you are generating pflow data (in this case, for any traffic that matches a pass rule in the rule set). But to actually get the data to somewhere you can study them, you need to set up both a sensor and collector. The sensor is the pflow interface, which you configure via ifconfig commands, or for a permanent configuration, in the /etc/hostname.pflow0 interface configuration file. The /etc/hostname.pflow0 on the gateway closest to me right now looks like this:

flowsrc 213.187.179.198 flowdst 192.168.103.252:9995
pflowproto 10

which means, essentially, that any pflow data generated will be sent with a source address of 213.187.179.198 to the collector we hope is listening at 192.168.103.252, UDP port 9995. Every flow is recorded and sent to the collector. The pflowproto 10 part means we use pflow protocol version 10, the latest one with all the newest bells and whistles, which is only well supported on OpenBSD 5.5 or newer.
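
For a one-off test without editing /etc/hostname.pflow0, the same configuration can be done by hand with ifconfig (the addresses here are the ones from the example above; substitute your own), and OpenBSD's tcpdump can listen on the pflow interface to confirm that flow records are actually being exported:

```sh
# create and configure the sensor interface by hand
doas ifconfig pflow0 create
doas ifconfig pflow0 flowsrc 213.187.179.198 flowdst 192.168.103.252:9995 pflowproto 10

# watch decoded flow records leave the sensor
doas tcpdump -n -i pflow0
```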


The Collector

Up to this point, you are free to choose any collector at all, or for that matter, let your pflow sensor send data endlessly into the void. In The Book of PF I spend quite a bit of time explaining netflow via Damien Miller's excellent flowd, mainly because it's damned fine software and very well suited for the purpose, but here I'll go the lazy route and show you the tool I actually use: nfsen, which comes out of the OpenBSD package system with a usable web interface as a front end to nfcapd and a host of related tools.

Do take some time to click that nfsen reference, the documentation there is quite usable and provides better illustrations than what I can offer at the moment.

Installing nfsen on OpenBSD is, as expected, as simple as can be. On an otherwise normally configured OpenBSD system, the single command

$ doas pkg_add nfsen

will get you most of the way there. Do read the package readme as the messages instruct you to. Basically, you will need to edit the configuration file /etc/nfsen.conf. Adding data sources is likely the only thing you will need to do at first; look for the stanza that looks like this:

%sources = (
    'upstream1'    => { 'port' => '9995', 'col' => '#0000ff', 'type' => 'netflow' },
    'peer1'        => { 'port' => '9996', 'IP' => '172.16.17.18' },
    'peer2'        => { 'port' => '9996', 'IP' => '172.16.17.19' },
);

Here you add the sources you have configured earlier. I give all my sources a distinct color (picking among the CSS-style RGB values you youngsters probably know by heart but old farts like me always have to look up), IP address, type and port, so it's easier to tell them apart.

Then you run a perl script to configure the package, start httpd, start the nfsen package (and add it to the pkg_scripts= line in your /etc/rc.conf.local so it will start at next reboot too).
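
In outline, and assuming the rc.d script names the packages install on a recent OpenBSD (check the package readmes for the exact names and the location of the configure script on your system), that sequence looks something like this:

```sh
# start the daemons now; nfsen's web front end needs php_fpm as well
doas /etc/rc.d/php_fpm start
doas /etc/rc.d/nfsen start

# and have them start at boot, via the pkg_scripts line in /etc/rc.conf.local:
# pkg_scripts="php_fpm nfsen"
```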

That's all there is to it. Soon the web interface will start filling in the graphs, and you can point and click your way around address ranges, time ranges and a host of other parameters. You will find that every connection you specified in your configuration is indeed logged, and you have all the metadata you asked for.

After a while you will start appreciating that nfsen displays the command line version of your point and click choices, so you have a better starting point for those wrinkles in the data that are not easily or at all accessible via the web interface.
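
Those command lines are nfdump invocations you can run and refine by hand on the collector. A hypothetical example (the profile path, time window and address here are made up for illustration; nfsen's data normally lives under its profiles-data directory):

```sh
# list flows to or from one host during a given hour,
# reading from the collector's stored flow files
nfdump -R /var/www/nfsen/profiles-data/live \
    -t 2014/02/20.14:00-2014/02/20.15:00 \
    'host 192.0.2.17'
```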


The All-Seeing Eye Of The Evil Network Overlord

You can tell just who, or at least which IP addresses, interacted with each other when, how much data was transferred, and to or from what services or ports. It stands to reason that in most jurisdictions there are rules about how data of this kind is to be handled and secured. Make sure you deal properly with the data you collect, staying within whatever limits apply to you. But within those limits, here's your chance to be an evil network overlord. Use it wisely.

Netflow data has been used for a number of things. In his very readable book Network Flow Analysis, Michael W. Lucas relates a story about how they pinpointed the source of entry for a Windows worm into a corporate network using netflow data. I've found netflow to be very useful in a number of contexts myself (as briefly mentioned in the earlier DDOS article, and using netflow data to charge for metered access is not unheard of either), but the most striking example I've seen did not involve an attack, merely an intermittent network nuisance that occasionally cost insane amounts of money.

The setting was this: A couple of years ago, I was a relatively new hire in a large corporation that provides IT services of various kinds to, among others, an almost equally sized financial firm. In one part of the financial firm there was a place where trades involving dollar values larger than most of us can imagine were made using a telnet interface to something else, and the 80 by 25 character displays were at times not moving at all. Trades were lost because the tiny packets did not arrive on time.

By the time I joined the company, the regular network crew that took care of that particular arm of the financial firm had been unsuccessfully trying to debug and fix the disruptions for quite a while. A call went out for help, and I proposed setting up a Netflow collector much like what I described earlier in the article.

The proposed budget was pretty close to nothing at all besides my time, so I got the go-ahead. The OpenBSD part of the configuration was done inside half an hour, and after peeking at Michael's book I even fished out the right sequence for the Cisco wranglers to input in their gear so useful data started arriving.

Then came the long wait. Graphs were accumulating, and after a while I would put several weeks' graphs on top of each other and hold them up to a light source. They matched perfectly. I could tell when people started arriving at work, I could tell when trading started in various cities, I could see the dip for lunch breaks, and the traffic peak for the nightly backups was easy to identify.

But the source of the random network disruption did not turn up in the overall data volumes.

After a few weeks, I asked the local IT support to send me an email as soon as possible when disruptions occurred, with the name and/or IP address of the computers seeing disruption. Soon after, the first messages started arriving. I used the nfsen web interface to search the data around the reported times, looking at the IP ranges in question. At first, nothing really stood out. There was no sudden increase in data transferred at my sensors.

But then it occurred to me that the overall data volume was not necessarily the problem, so I started looking at hosts in the likely address range by number of flows (as in, number of open connections). That was all it took. Going back over a handful of reports, I noticed that on every occasion, for a few minutes one particular IP address stood out. For a very short time, a few days every week, one host on the network owned essentially all flows that passed by my sensors. No other host came even close.
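
That kind of per-host flow count is exactly what nfdump's statistics options are for. Something like the following (again with hypothetical path and time window) lists the ten IP addresses that owned the most flows in the interval, regardless of how few bytes each flow moved:

```sh
# top ten IP addresses in the interval, ordered by number of flows, not bytes
nfdump -R /var/www/nfsen/profiles-data/live \
    -t 2014/02/20.14:00-2014/02/20.14:15 \
    -s ip/flows -n 10
```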

It turned out that the machine was used to generate some rather heavy duty reports, collecting data from a large number of data sources. My guess is that the reporting software was one of those things that started small and grew over time, and after a few years it became a marked liability, simply because it was connected to the same switch that the traders were using, and reports were generated during trading hours.

I wrote up my report with graphs taken from nfsen (since destroyed and anyway not for public consumption, ever), and recommended that they find a way to move the report generator off to a separate location, perhaps even one with better connectivity to important data sources. I think they took that advice and acted upon it, but I suppose I'll never know for sure.

If you're interested in network traffic monitoring in general and NetFlow tools in particular, you could do worse than pick up a copy of Michael W. Lucas' recent book Network Flow Analysis. Michael chose to work with the flow-tools family of utilities for the book, but he does an outstanding job of explaining the subject in both theory and practical applications. What you read in Michael's book can easily be transferred to other toolsets once you get a grip on the matter.

I've focused mainly on OpenBSD here, but netflow sensors exist or should exist for essentially anything that has a TCP/IP stack. And nfsen works well on Linux and other Unix-like systems, too, I've heard tell.


As I write this I'm still working on the third edition of The Book of PF. The third edition came to be mainly because of changes introduced in OpenBSD 5.5, and the plan we're working towards is to have the book ready in time for the release.

BSDCan: I will be at BSDCan again this year, offering two tutorials (see the Upcoming Talks panel at top right). More details will follow later, but these sessions will be designed mainly from input I receive from prospective attendees, and so will be critically dependent on your input, even more so than in earlier years. See you there!


Update 2014-03-01: Thanks to Sebastian Benoit for pointing out that configuring pflow with flowproto 10 is really only well supported on OpenBSD 5.5 and newer.

Update 2014-04-27: PF tables vs html tags sometimes does not end well. Fortunately fixable.

Update 2015-10-25: For running nfsen with the OpenBSD httpd (and possibly others), you likely will be happier if you add php_fpm (which the nfsen package pulls in as a dependency) to the pkg_scripts variable in your /etc/rc.conf.local, much like this:

pkg_scripts="php_fpm nfsen" 

Discovered the hard way, one could say: only after a power outage broke the serenity of my lab's nfsen installation did I notice that the web server would only spit out 500 internal server error messages.

Update 2015-10-30: Several correspondents have asked whether NetFlow export is doable on various proprietary products. The answer is in most cases yes, but terminology may vary. On Cisco products you can be fairly sure to find the terms NetFlow and IPFIX, while I discovered today that Citrix Netscaler, for reasons of their own, entirely masks the feature behind the term AppFlow. For other products, check the documentation for the obvious keywords.

Sunday, February 2, 2014

Effective Spam and Malware Countermeasures - Network Noise Reduction Using Free Tools

In order to keep you entertained while I work on a new edition of The Book of PF, I dug deep in the archives for material you might enjoy reading. Here, for your weekend reading pleasure, is a minimally edited version of my malware article, originally written for a BSDCan presentation (also presented at BLUG and UKUUG events):

A certain proprietary software marketer stated in 2004 that by 2006 the spam and malware problems would be solved.  This article explores what really happened, and presents evidence that in the free software world we are in fact getting there fast, and having fun at the same time. We offer an overview of principles and tools with real life examples and data, and cover the almost-parallel evolution of malware and spam and effective counter-measures.  We present recent empirical data interspersed with examples of practical approaches to ensuring a productive, malware and spam free environment for your colleagues and yourself, using free tools.  The evolution of content scanning is described and contrasted with other methods based on miscreants' (and their robot helpers') behavior, concluding with a discussion of recent advances in greylisting and greytrapping with an emphasis on those methods' relatively modest resource demands.

(Updated 2016-12-13, see the addendum at the end)

Malware, virus, spam - some definitions

In this article we will be talking about several varieties of the mostly mass produced nuisances we as network admins need to deal with every day. However, you only need to pick up an IT industry newspaper or magazine or go to an IT subject web site to see that there is a lot of confusion over terms such as virus, malware and for that matter spam. Even if a large segment of the so called security industry does not appear to put a very high value on precision, we will for the sake of clarity spend a few moments defining the parameters of what we are talking about.

To that end, I've taken the time to look up the definitions of those terms at Wikipedia and a few other sources, and since the Wikipedia definitions agree pretty well with my own prejudices I will repeat them here:
  • Malware or Malicious Software is software designed to infiltrate or damage a computer system without the owner's informed consent.

  • A computer virus is a self-replicating computer program written to alter the way a computer operates, without the permission or knowledge of the user.

  • Another common subspecies of malware is the worm, commonly defined as “a program that self-propagates across a network exploiting security or policy flaws in widely-used services” (This definition is taken from a 2003 paper, Weaver, Paxson, Staniford and Cunningham: “A Taxonomy of Computer Worms”.)

  • The term zombie is frequently used to describe computers which are under remote control after a successful malware or manual attack by miscreants.

  • Spamming is the abuse of electronic messaging systems to send unsolicited, undesired bulk messages. While the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media [ … ]

You will notice that I have left out some parts at the end here, but if you're interested, you can look up the full versions at Wikipedia. And of course, if you read on, much of the relevant information will be presented here anyway, if possibly in a slightly different style and supplemented with a few other items, some even of distinct practical value. But first, we need to dive into the past in order to better understand the background of the problems we are trying to solve or at least keep reasonably contained on a daily basis.

A history of malware

The first virus: the Elk Cloner

According to the Wikipedia 'Computer Virus' article, the first computer virus to be found in the wild, outside of research laboratories, was the 1982 "Elk Cloner", written by Rich Skrenta, then a teenager in Southern California.

The virus was apparently non-destructive, its main purpose was to annoy Skrenta's friends into returning borrowed floppy disks to him. The code ran on Apple II machines and attached itself to the Apple DOS system files.

Apple DOS and its single user successors such as MacOS up to System 9 saw occasional virus activity over the following years, much like the other personal systems of the era which all had limited or no security features built into the system.

The first PC virus: the (c)Brain

It took a few years for the PC world to catch up. The earliest virus code for PCs to be found in the wild was a piece of software called (c)Brain, which was written and spread all over the world in 1986. (c)Brain attached itself to the boot sector on floppies. In contrast to quite a number of PC malware variants to follow, this particular virus was not particularly destructive beyond the damage done by altering the boot sectors.

Like most of the popular personal computer systems of the era, MS-DOS had essentially no security features whatsoever. In retrospect it was probably inevitable that PC malware blossomed into a major problem.

With system vendors unable or unwilling to rewrite the operating systems to eliminate the bugs which let the worms propagate, an entire industry grew out of enumerating badness. (The origin of the term enumerating badness is uncertain, but most frequently attributed to Marcus Ranum, in the must-read, often cited web accessible article “The Six Dumbest Ideas in Computer Security”. It's fun as well as useful and very readable.)

To this day a large part of the PC based IT sector remains dedicated to writing malware and producing ever more elaborate workarounds for the basic failures of the MS-DOS system and its descendants. Current virus lists typically contain signatures for several hundred thousands of variants of mainly PC malware.


The first Unix worm: The Morris Worm

Meanwhile in the Unix world, with its better connected and relatively well educated user base, things were relatively peaceful, at least for a while. The peace was more or less shattered on November 2, 1988, when the first Unix worm, dubbed the Morris worm, hit Unix machines on the early Internet. This was both the first replicating worm in a Unix environment and the first example of a worm which used the network to propagate.

More than 20 years later, there is still an amazing amount of information on the worm available on the net, including what appears to be the complete source code to the worm itself and a number of analyses by highly competent people. It's all within easy reach of your favorite search engine, so I'll limit myself to repeating the main points. Some of the Morris worm's characteristics will be familiar.
  1. It was system specific. Even though there are indications that the worm was intended to run on more architectures, it was in fact only able to run successfully on VAXes and sun3 machines running BSD.

  2. It exploited bugs and sloppiness. Like pretty much all of its successors, the Morris worm exploited bugs in common programs, such as a buffer overflow in fingerd, used the commonly enabled debug mode in sendmail - which allowed remote execution of commands - along with a short dictionary of likely passwords.

  3. It replicated and spread. Once the worm got in, it started the process of spreading. Fortunately, the worm was designed mainly to spread, not to do any damage.

  4. It led to denial of service. Unfortunately, the worm code itself had a bug which made it more efficient at spreading itself than its author had anticipated, and caused a large increase in network traffic, slowing down Internet traffic to a large number of hosts. Some hosts worked around the problem by disconnecting themselves from the Internet temporarily. In one sense, it may have been one of the earliest Denial of Service incidents recorded.
The worm was estimated to have reached roughly 10% of the hosts connected to the Internet at the time, and the most commonly quoted estimate of an absolute number is "around 6,000 hosts".

The event was quite stressful for, by today's standards, a very small group of people. In retrospect, it is probably fair to say that the episode mainly served to make Unixers in general aware that there was a potential for security problems, and developers and sysadmins set out to fix the problems.

Microsoft vs the internet

The final components to form the current mess arrived on the scene in the second part of the 1990s when Microsoft introduced modern networking components to the default setup of their PC system software which came preinstalled on consumer grade computers. This happened at roughly the same time that several office type applications started shipping with their own fairly complete programming environments for macro languages.

Riding on the coattails of the early 1990s commercialization of the Internet, Microsoft started real efforts to interface with the Internet in the mid 1990s. Up until some time in 1995, Internet connectivity was an optional extra to Microsoft users, mainly through third party stacks and frequently through hard to configure dial-up connections.

Like the third party offerings, Microsoft's own TCP/IP stack was an optional extra -- downloadable at no charge, but not installed by default until late editions of Windows 3.11 started shipping with the TCP/IP stack installed by default.

However, the all-out assault, and their as-good-as claims to have invented the whole thing, came only after a largely failed attempt at getting all Windows 95 users to sign up to the all-proprietary, closed-spec, dial-in Microsoft Network, which was in fact the first to use the name and the MSN abbreviation. The original Microsoft Network service did have some limited Internet connectivity; anecdotal evidence indicates that simple email transmissions to Internet users and back could take several days each way.

As luck or misfortune would have it, by the time Microsoft's Internet adventure started, several of their applications had been extended to include application macro programming languages which were pretty complete programming environments.

In retrospect we can confidently state that malware writers adapted more quickly to the changed circumstances than Microsoft did. The combination of network connectivity, powerful macro languages and applications which were network aware on one level but had not really incorporated any important security concepts and, of course, the sheer number of targets available proved quite impossible to resist.

The late 1990s and early 2000s saw a steady stream of network enabled malware on the Microsoft platform, sometimes with several new variants each day, and never more than a few weeks apart. A semi-random sampling of the more spectacular ones includes Melissa, ILOVEYOU, Sobig, Code Red and Slammer; some were quite destructive, while others were simply very efficient at spreading their payload.

They all exploited bugs and common misconfigurations much like the Morris worm had done a decade or more earlier. Greg Lehey's June 2000 notes on one of the more pervasive worms is still worth reading. (See Greg Lehey: Seen it all before?, Daemon's Advocate, The Daemon News ezine, June 2000 edition) The description is one of many indications that by 2000, malware writers had learned to mine the data in their victims' mail boxes and contact lists for useful data.

During the same few years, Microsoft's stance also developed somewhat. Their traditional response had been “We do not have bugs”; they then moved gradually to releasing patches and 'hot fixes' at an ever increasing rate, and finally to a regime of a monthly “Patch Tuesday” in order to introduce some predictability to their customers' workday.

Characteristics of modern malware

Back in the day, the malicious and destructive software got all the attention. From time to time a virus, worm or other malware would grab headlines for destroying people's systems, in one case even overwriting system BIOSes of a common variety of PCs. I have no real numbers to back this up, but one likely theory is that during the early years malware writers may have been mainly youthful pranksters and the odd academic, and getting attention may have been the main motivator.

In contrast, modern malware tries to take over your system without doing any damage a user or less attentive system administrator would notice. Typical malware today delivers its payload which then proceeds to take control of your computer - turning it into a zombie, usually to send spam, to infect other computers, or to perform any function the malware writer's customer needs to be done by remote control.

There is ample evidence that once machines are taken over, installed malware is likely to record users' keystrokes, mine the file systems for financial and identification data, and of course any sort of remote controlled network activity such as participation in attacks on specific networks. There is also anecdotal evidence to suggest that a significant subset of online casino players are in fact remote controlled game playing robots running on compromised computers.

Spam - the other annoyance

The first spam message sent is usually considered to be a message sent via ARPANET email in 1978, from a marketing representative at the Digital Equipment Corporation's Marlboro site. According to much-repeated anecdotes the message was sent to "every Arpanet address on the west coast" of the USA. (See Reflections on the 25th Anniversary of Spam, by Brad Templeton). The message announced a demo of the then new and exciting DEC20 line of computers and the TOPS-20 operating system, and like many of its successors showed signs of the sender's incompetence - the list of intended recipients was longer than the mail application was able to accept, and the list overflowed into the message itself.

The message was well intended, but the reaction was overwhelmingly negative, and unsolicited commercial messages appear to have been close to non-existent, at least by modern standards, for quite a while after this particular incident.
The spam problem remained largely dormant until the commercialization of the Internet started in the early 1990s. By then, email spam was still close to non-existent, but unsolicited commercial messages had started appearing on the USENET news discussion groups.

In 1994, there were several incidents involving messages posted to all news groups the originators were able to reach. The first incident, in January, involved a religious message, followed a few weeks later by a message hawking the services of a US law firm. At the time this would have meant that several thousand unrelated discussion groups received the same message, crossposted or repeated.

The spam problem is sometimes cited as a major part of the reason why USENET declined in readership in favor of web forums, but in fact the USENET spam problem was largely solved within an impressively short time. Countermeasures by USENET admins, including the USENET Death Penalty (kicking a site off the USENET), cancelbots (automatic cancelling of articles which meet or exceed set criteria) and various semi-manual monitoring schemes were largely, if not totally, effective in eliminating the spam problem.

However, with an increasing Internet user population, the number of email users grew faster than the number of USENET users, and spammers largely turned their attention back to email towards the end of the 1990s. As we mentioned earlier, mass mailed messages were found to be effective carriers of malware.

Spam: characteristics

The two main characteristics of spam messages have traditionally been summed up as: A typical spam run consists of a large number of identical messages, and the content of the messages tend to form recognizable patterns. In addition, we will be looking at some characteristics of spammer and malware writer behavior.
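The first of those characteristics can be illustrated with a toy sketch: near-identical message bodies grouped by hashing a normalized copy of each body. The function name and the threshold are invented for the illustration; real spam-run detection is considerably more involved.

```python
import hashlib
from collections import Counter

def spam_run_candidates(messages, threshold=3):
    """Group messages by a hash of their whitespace- and case-normalized
    body; bodies repeating at least `threshold` times are candidates
    for a spam run."""
    counts = Counter(
        hashlib.sha256(" ".join(body.split()).lower().encode()).hexdigest()
        for body in messages
    )
    return {digest for digest, n in counts.items() if n >= threshold}
```

A hash set like this is also roughly how distributed checksum clearinghouses recognize bulk mail across many recipient sites.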

Into the wild: the problem and principles for solutions

The ugly truth

In order to understand how malware propagates, we need to recognize a few basic truths about people, programming and the code we produce and consume. Some groups, such as the OpenBSD project, have turned to code audits, motivated by what can be summed up as the following two clauses:
  1. All non-trivial software has bugs
  2. Some of these bugs are exploitable
Even though we all wish we were perfect and never made any mistakes, it is a fact of life that even highly intelligent, well educated, mentally balanced and well disciplined people do occasionally make mistakes.

The code audits, sometimes described as a process of reading the code like the Devil reads the Bible, concentrate on finding not only individual errors, but also recognizing patterns of the errors programmers make, and have turned up and eliminated whole classes of bugs in the source code audited.

For more information on the goals and methods of these code audits, see the OpenBSD Project's Security page and Damien Miller's AsiaBSDCon 2007 presentation, available from the OpenBSD project's Papers and presentations page.

The code audits also led to the creation of a few exploit mitigation techniques, which are the subject of the next section.

Fighting back, on the system internals level

The code audits spearheaded by the OpenBSD project led to the realization that even though we can become very good at eliminating bugs, we should always consider the possibility that we will not catch all bugs in time. We already know that some of the bugs in our code can be used or exploited to make the system do things we did not intend, so making it harder for a prospective attacker to exploit our bugs may be worthwhile. The OpenBSD project coined the term exploit mitigation for these techniques (The techniques described here are covered in far more detail in Theo de Raadt's OpenCON 2005 presentation Exploit Mitigation techniques, as well as the more recent Security Mitigation Techniques: An update after 10 years, also by Theo de Raadt.)

I will cover some of these techniques briefly here:
  1. Stack smashing/random stack gap:
    Several types of buffer overflow exploits depend critically on the fact that in most architectures, the stack and consequently the buffer under attack starts at a fixed position in memory. Introducing a random-sized gap at the top of the stack means that jumping to the fixed address the attackers 'know' contains their code kills a large subset of these attacks. The buggy program is likely to crash early and often.

  2. W^X: memory can be eXecutable XOR Writable
    Some bugs are possible to exploit because it is possible to have memory which is both writable and executable. Implementing a sharp division involved some subtle surgery on how binaries are constructed, initially at a slight performance cost; that performance was later won back through optimization, and any attempts at writing to eXecutable memory now fail. Once again, buggy software fails early and often.

  3. Randomized mmap(), malloc()
    One of the more ambitious bits of work in progress is to introduce randomization in mmap() and malloc(). Like the other features we have touched on here, it has been eminently useful in exposing bugs. Flaws which just lead to random instabilities or odd behavior are much more likely to break horribly with randomized memory allocation.

  4. Privilege separation
    One classic problem which has proved eminently exploitable is that programs have tended to run effectively as root, with more privileges than they actually need once they've bound themselves to the reserved port. Some simple programs were easy to rewrite to drop privileges and execute their main task with only the privileges actually needed. Other, larger daemons such as sshd needed to be split into several processes, some running in chroot, some bits retaining privileges, others running at minimum privilege levels.
If it is not already obvious, one important effect of implementing these restrictions has been that these changes in the system environment have exposed bugs in a lot of software. For example, Mozilla's Firefox was for some time known to crash a lot more often on OpenBSD than almost anywhere else. However, the fixes for the exposed bugs tend to make it back into the various projects' main code bases.
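The random stack gap described in point 1 above can be illustrated with a toy simulation. The addresses, the gap range and the function name are invented for the sketch and have nothing to do with any real implementation:

```python
import random

def exploit_succeeds(gap):
    """Toy model: the attacker jumps to the address their shellcode would
    occupy if the stack started at its traditional fixed base (gap == 0).
    Any nonzero random gap shifts the real buffer, and the jump misses."""
    fixed_guess = 0x7fff0000           # the address the attacker 'knows'
    actual_buffer = 0x7fff0000 - gap   # where the buffer really ended up
    return fixed_guess == actual_buffer

# Without a gap the guess always works; with a random gap it essentially never does.
hits_fixed = sum(exploit_succeeds(0) for _ in range(100))
rng = random.Random(42)
hits_random = sum(exploit_succeeds(rng.randrange(1, 1 << 20)) for _ in range(100))
```

The attacker's fixed guess now only works when the gap happens to be zero, which is exactly why the buggy program crashes instead of handing over control.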

Content scan

Virus scanners One of the first ideas security people hit upon when faced with files which could be carriers of something undesirable was to scan the files for specific kinds of content. Early content scanners were pure virus scanners which ran on MS-DOS and scanned local file systems for known bad content such as the byte sequences equal to known malware.

Over time as the number of known bad sequences grew, the technology to do hashed lookups was introduced. At present the total number of known malware signatures is estimated to exceed 200,000. Makers of most malware scanning products issue updates on an as-needed basis; in recent times this has meant several signature updates per day.
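As a toy illustration of hashed signature lookups, the sketch below slides a fixed-size window over the data and looks each chunk's digest up in a set. The 'malware' bytes are an invented placeholder, and real scanners use far more elaborate matching:

```python
import hashlib

# Toy signature database: SHA-256 digests of known bad byte sequences.
# The 'malware' here is an invented placeholder, not a real sample.
BAD_BYTES = b"fake-malware-payload"
signatures = {hashlib.sha256(BAD_BYTES).hexdigest()}

def scan(data, sigs=signatures, window=len(BAD_BYTES)):
    """Slide a window over the data; report a hit if any chunk's digest
    appears in the signature set."""
    return any(
        hashlib.sha256(data[i:i + window]).hexdigest() in sigs
        for i in range(len(data) - window + 1)
    )
```

The hashed lookup keeps the per-chunk test cheap even as the signature set grows, which is the point of the technique.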

Spam filters were at first close cousins to the brute-force signature or substring lookup based virus packages. However, packages such as the freeware, Perl-based SpamAssassin soon introduced rule-based classification systems. The rule evaluation model SpamAssassin uses assigns weights to individual rules, allowing for site specific adjustments. Modern evaluation tools typically contain rules to evaluate both the message bodies and the message header information in order to determine the probability that a message is spam.

Another feature of modern filtering systems is that they are either built around or employ as optional modules various statistics based classification methods such as Bayesian logic, the Chi-Square method, Geometric and Markovian Discrimination. The statistics based methods are generally customized via training, based on a corpus of spam and legitimate mail collected by the site or user.
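A minimal sketch of the Bayesian-style token scoring such filters build on might look as follows. The smoothing, the token handling and the function names are all assumptions made for the illustration, not any particular filter's actual implementation:

```python
import math
from collections import Counter

def train(corpus):
    """corpus: iterable of (tokens, is_spam) pairs. Returns a per-token
    spam probability estimate with simple Laplace smoothing."""
    spam, ham = Counter(), Counter()
    for tokens, is_spam in corpus:
        (spam if is_spam else ham).update(tokens)
    return {
        tok: (spam[tok] + 1) / (spam[tok] + ham[tok] + 2)
        for tok in set(spam) | set(ham)
    }

def spam_score(tokens, probs):
    """Combine token probabilities in log-odds space; unknown tokens
    are treated as neutral (probability 0.5)."""
    logodds = sum(
        math.log(probs.get(t, 0.5)) - math.log(1.0 - probs.get(t, 0.5))
        for t in tokens
    )
    return 1.0 / (1.0 + math.exp(-logodds))
```

Training on a site's own corpus is what makes this approach adapt to local mail patterns, for better and for worse.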

As the lists of signatures have grown to include an ever larger number of entries and have been supplemented with the more involved statistical calculations, content scanning has developed into one of the more resource intensive computations most of us will encounter.

The comedy of our errors: Content scanning measures and countermeasures


Even with such a formidable arsenal of tools at our disposal, it is important to keep in mind that all the methods we have mentioned have a nonzero error rate. Once you are done with setting up your filtering solution, you will find that care and feeding will include compensating for problems caused by various errors.

In a filtering context, our errors fall into two categories: false negatives, where our system fails to recognize data as undesirable even when it is, and false positives, where the system mistakenly classifies data as undesirable. Here is a sequence of events which illustrates some of the problems we face when we rely on content evaluation:

Keyword lookup: Matching on specific words which were known to be more common in unwanted messages than others was one of the early successes of spam filtering software. The other side soon hit on the obvious workaround - misspelling those keywords slightly, for a short time shrouding the message behind the likes of V1AGR4 or pr0n. Again the countermeasures were fairly obvious; soon all content filtering products included regular expression substring match code to identify variations on the key words.
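A minimal sketch of such a substring match for keyword variations follows; the substitution table is invented for the illustration, not drawn from any real ruleset:

```python
import re

# Map letters to small classes of common look-alike substitutions.
# The table is invented for the illustration, not a real ruleset.
SUBS = {"a": "[a4@]", "e": "[e3]", "i": "[i1!]", "o": "[o0]"}

def obfuscation_pattern(word):
    """Build a regex that tolerates look-alike characters and single
    separator characters between letters (v-i-a-g-r-a, V1AGR4, ...)."""
    return re.compile(
        "".join(SUBS.get(c, re.escape(c)) + r"\W?" for c in word.lower()),
        re.IGNORECASE,
    )

pat = obfuscation_pattern("viagra")
```

Each rule like this covers one keyword and its look-alikes, which is why filtering products ship whole libraries of them.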

Word frequency and similar statistics As the text analysis tools grew ever more accurate thanks to statistical analysis, the other side hit on the obvious countermeasure of including largish chunks of unrelated text in order to make the message appear as close as possible to ordinary communications to the content scanners. The text could be either semi-random strings of words or fragments of web accessible text, as illustrated by this example:

Spam message containing random text

Hidden in there is a very short sequence of characters which describes what they are trying to sell. At times we see messages which appear not to have any such payload, just the random text. It is not clear whether these messages are simply products of errors by inept spamware operators or, as some observers have speculated, if they are part of a larger scheme to distort the statistical basis for content scanners.

Text analysis vs graphics So it became obvious that we were getting rather good at scanning text, and the other side made their next move. The illustration below shows an example of a stock scam, all text really, but promoted via an embedded graphic, along with a semi-random chunk of text grabbed from somewhere on the web:

This stock scam text is actually a picture
The text-as-picture messages spurred the development of optical character recognition (OCR) plugins for content scanning antispam tools, and a few weeks later text-as-picture spams started coming with distorted backgrounds, as seen in this example:

This could make you think they're selling flowers



All of these examples were taken from messages I have received, the last one in November 2006 when the various tools were not yet perfectly tuned to get rid of those specific nuisances. Newer SpamAssassin plugins such as FuzzyOcr are making good progress in identifying these variants, at the cost of some processing power.

Recent innovations in spam content obfuscation include carrying as little content as possible, such as a one-word subject line followed by a message body with at most half a dozen words in addition to a URL, as well as the re-emergence of ASCII art, as illustrated here:

Spam with ASCII art

The figure displays the main message content. The main message as well as the web site URL are rendered as ASCII art, followed by apparently random text. The message came with enough spam characteristics that the filtering system awarded this particular message a spamassassin score of 8.3, well into the 'likely but not definitely spam' range.

The sequence is certainly not unique, and we should probably expect to see similar mini arms races in the future. One obvious consequence of the ever-increasing complexity in content filtering is that mail handling, once a reasonably straightforward and undemanding activity, now requires serious number crunching capability. And it bears repeating that you should expect a non-zero error rate in content classification.

Behavioral methods

Up to this point we have looked at what we can achieve first by making any bugs in our operating system or applications harder to exploit, and next what can be done by studying the content of the messages once we've received them or while our mail transfer agent is processing the DATA part. From what we have seen so far, it is fairly obvious that the other side is trying to hide their tracks and avoid detection.

Spammers lie This shows even more clearly if we study their behavior on the network level. The often repeated phrase "Spammers lie, cheat and steal" at least to some extent proves to be rooted in reality when we study spam and malware traffic.

Forged headers Spammers may or may not be truthful when describing the wares they are promoting, but we can be more or less certain that they do their very best to hide their real identities and use other people's equipment and resources whenever possible. Studying the message headers in a typical spam message, we can expect to find several classes of forged headers, including but not limited to the Received:, From: and X-Mailer: headers. Perhaps more often than not, the apparent sender as taken from the From: header has no connection whatsoever to the actual sender.

Sender identification Some such discrepancies are easy to detect, such as when a message arrives from an IP address range radically different from the one you would expect when performing a reverse DNS lookup based on the stated sender domain. Traditional Internet standards do not in fact define a standard method for determining whether a given host is a valid mail sender for a given domain.

However, by 2003 work started on extensions to the SMTP protocol incorporating checks for domain versus IP address mismatches. After a sometimes confusing process with attempts at formalizing workable standards, these ideas were formalized into two competing and somewhat incompatible methods, dubbed Sender Policy Framework (SPF) and Sender ID respectively, one championed by a group of independent engineers and researchers, the other originating at Microsoft.

The initial hope that the differences and incompatibilities would be resolved was further dashed in April 2006 when the two groups chose to formulate separate RFCs describing their experimental protocols (The relevant RFCs are RFC 4406 and RFC 4407 for the Microsoft method, which describe the Sender ID protocol and the Purported Responsible Address (PRA) algorithm it depends on respectively, and RFC 4408 for SPF.).

The world fortunately chose SPF and moved on to further work involving signing outgoing messages (DKIM) and finally the umbrella specification DMARC which builds on SPF and DKIM and adds its own wrapper, all to be stuffed into DNS TXT records for the sender domain. Expect more rants along these lines from here to follow.

Blacklists Once a message has been classified as spam, recording the IP address the message came from and adding the address to a list of known spam senders is a relatively straightforward operation. Such lists are commonly known as blacklists, which may in turn be used in blocking, tarpitting or filtering.
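A toy model of such a blacklist with expiring entries follows; the class name and the default lifetime are invented for the sketch, and the clock is injectable so the behavior is easy to test:

```python
import time

class Blacklist:
    """Toy blacklist whose entries expire after a fixed lifetime,
    roughly the way spamd drops entries after 24 hours."""
    def __init__(self, lifetime=24 * 3600, clock=time.time):
        self.lifetime = lifetime
        self.clock = clock
        self.entries = {}

    def add(self, addr):
        self.entries[addr] = self.clock()

    def __contains__(self, addr):
        added = self.entries.get(addr)
        if added is None:
            return False
        if self.clock() - added > self.lifetime:
            del self.entries[addr]   # expired; drop the stale entry
            return False
        return True
```

Expiry matters: as noted below, stale list data is one of the main ways blacklists cause false positives.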

Greylisting Possibly as a consequence of their using other people's equipment for sending their unwanted traffic, spam and malware sender software needs to be relatively lightweight, and frequently the SMTP sending software does not interpret SMTP status codes correctly.

This can be used to our advantage, via a technique which became known as greylisting. Greylisting as a technique was presented in a 2003 paper by Evan Harris. The original Harris paper and a number of other useful articles and resources can be found at the greylisting.org web site. Even though Internet services are offered with no guarantees and are usually described as 'best effort' services, a significant amount of effort has been put into making essential services such as SMTP email transmission fault tolerant; in practice, the 'best effort' comes as close as makes no difference to a perfect record for delivering messages.

The current standard for Internet email transmission is defined in RFC5321, which in section 4.5.4.1, "Sending Strategy", states
"In a typical system, the program that composes a message has some method for requesting immediate attention for a new piece of outgoing mail, while mail that cannot be transmitted immediately MUST be queued and periodically retried by the sender."
and
"The sender MUST delay retrying a particular destination after one attempt has failed. In general, the retry interval SHOULD be at least 30 minutes; however, more sophisticated and variable strategies will be beneficial when the SMTP client can determine the reason for non-delivery."
RFC 5321 goes on to state that
"Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days."
(unfortunately retaining the vague language in the sections relevant to greylisting unchanged from earlier versions. This means that the validity of the practice is still a matter of interpretation according to the updated RFC. The event was not widely reported; my own column on the subject is available at http://bsdly.blogspot.com/2008/10/ietf-failed-to-account-for-greylisting.html)

But the main points still stand: After all, delivering email is a collaborative, best effort thing, and the RFC states clearly that if the site you are trying to send mail to reports it can't receive anything at the moment, it is your DUTY (a MUST requirement) to try again later, after an interval which is long enough that your unfortunate communication partner has had a chance to clear up whatever was the problem.
 
The short version is, greylisting is the SMTP version of a white lie. When we claim to have a temporary local problem, the temporary local problem is really the equivalent of “my admin told me not to talk to strangers”. Well behaved senders with valid messages will come calling again later, but spammers have no interest in waiting around for the retry, since it would increase their cost of delivering the messages. This is the essence of why greylisting still works. And since it's really a matter of being slightly pedantic about following accepted standards, false positives are very rare.
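The greylisting decision itself boils down to a lookup keyed on the (source IP, sender, recipient) tuple. The sketch below is a minimal illustration with an arbitrarily chosen minimum delay, not any particular implementation:

```python
import time

class Greylister:
    """Track (source IP, sender, recipient) tuples and answer with an
    SMTP temporary failure until a retry arrives after the minimum
    delay. The 300 second default is an arbitrary choice."""
    def __init__(self, min_delay=300, clock=time.time):
        self.min_delay = min_delay
        self.clock = clock
        self.first_seen = {}

    def check(self, ip, sender, recipient):
        key = (ip, sender, recipient)
        now = self.clock()
        first = self.first_seen.setdefault(key, now)
        if now - first >= self.min_delay:
            return "250 ok"
        return "451 temporary local problem - try again later"
```

A well behaved sender retries after the RFC-mandated interval and passes; a fire-and-forget spam engine never comes back and never gets a 250.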

Greytrapping The final advance in spam fighting so far is greytrapping, a technique pioneered by Bob Beck and the OpenBSD team as part of the spamd almost-but-not-quite SMTP daemon. This technique makes good use of the fact that the address lists spammers routinely claim are verified as valid, deliverable addresses are in fact anything but.

With a list of greytrap addresses which are not expected to receive valid mail, spamd adds IP addresses which try to deliver mail to the greytrap addresses to its local blacklist for 24 hours. Blacklisted addresses are then treated to the tarpit, where their SMTP dialog receives responses at a rate of one byte per second.
The intention, and to a large extent the actual effect, is to shift the load back to the sender, keeping them occupied with a very slow SMTP dialogue. We will return to this in a later section.
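The greytrap logic can be sketched in a few lines; the class name and the trap address in the usage example are invented for the illustration:

```python
class Greytrapper:
    """Toy greytrap: a delivery attempt to a bait address that should
    never receive mail gets the source IP blacklisted from then on."""
    def __init__(self, trap_addresses):
        self.traps = set(trap_addresses)
        self.blacklist = set()

    def rcpt_to(self, source_ip, recipient):
        """Record the attempt; return True if the source is (now) blacklisted."""
        if recipient in self.traps:
            self.blacklist.add(source_ip)
        return source_ip in self.blacklist
```

Once an address is on the blacklist, every later delivery attempt from it, to any recipient, is treated to the tarpit described above.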


Combined methods and some common pitfalls

It is worth noting that products frequently use some combination of content scan and network behavior methods. For example, spamassassin incorporates rules which evaluate message header contents, using SPF data as one factor in determining a message's validity, while at the same time using locally generated Bayesian token data to evaluate message contents.

We have already touched on the danger of false positives and the main downside of content filtering, and it is worth noting the possible downsides and pitfalls which come with the behavior based methods too.

The inner workings of proprietary tools are generally secret, but one particularly bizarre incident involving Microsoft's Exchange Hosted Services reveals at least some of the inner workings of that particular product. All available evidence indicates that their system treats substring match based on a phishing message to be a valid reason to block or “quarantine” messages from a domain, and that their data do not expire. The incident is chronicled by a still puzzled network administrator at this site.

Header mismatches While most simple header mismatch checks are reliable, the one important criticism of SPF and Sender ID is that the schemes are incompatible with several types of valid message forwarding, another that the problem of roaming users on dynamic IP addresses who still need to send mail has yet to be solved.

Blacklists The ways blacklists are generated, maintained and used are almost too numerous to list here. The main criticism and pitfalls lie in the way the lists are generated and maintained. Some lists have tended to include entire ISP networks' IP ranges as “known spam senders” in an attempt to force ISPs to cancel spammers' contracts. Another recurring complaint is that lists are less than actively maintained and may include out of date data. Both can lead to false positives and legitimate mail lost. Unfortunately, some popular blacklists have at times been abused and employed as instruments in personal vendettas. For those reasons, it always pays to check a list's maintenance policy and its reputation for accuracy before using a list as sufficient reason to reject mail.

Greylisting Even valid senders will experience a delay in delivery of the initial message. The length of the delay varies according to a number of factors, some of which are not under the greylister's control. A more serious issue is that some large sites do not necessarily perform the delivery retries from the same IP address as the one used for the initial attempt. A large enough pool of possible sending hosts and a sufficiently random retry pattern could lead to delivery timeout. Whitelisting the sites in question may be a temporary workaround, however with greylisting entering the mainstream it is expected that the problem of random redelivery will decrease and hopefully disappear entirely.

Greytrapping The only known risk of using greytrapping to date is that the backscatter of “message undeliverable” bounce messages resulting from spam messages sent with one of your trap addresses as apparent sender may cause mail servers configured to send nondelivery messages to enter your blacklist. 

This will cause loss or delayed delivery of valid mail if the backscattering mail server needs to deliver valid mail to your site. How often, if at all, this happens depends on several semi-random factors, including the configuration policies of the other sites' mail servers.

A working model

Where do we fit in?

Unix sysadmins find themselves in an in-between position of sorts. We can never totally rule out that our systems are vulnerable, but malware which will actually manage to exploit a well run UNIX system is rarely seen in the wild, if at all.

A well run system means that best practice procedures are applied to system administration: we do not run unnecessary services, we install any security related updates, we enforce password policies and so on.

However, we more likely than not run services for users who run their main environment on vulnerable platforms. Malware for those platforms typically spreads via email, which is quite likely one of the services we handle.

We'll take a look at email handling, then move on to some productive uses of packet filtering (aka firewalls) later.

Setting up a mail server

Back when SMTP email was designed, the main emphasis was on making as sure as possible, without actually making hard guarantees, that mail would get delivered to the intended recipient. As we have seen, things get a little more complicated these days. The main steps to configuring the mail service itself are as follows:
  1. Choose your MTA
    BSDs generally come with sendmail as part of the base system. For our sites we have chosen to use exim for several reasons. Despite its human-readable configuration files, it offers enormous flexibility, and on FreeBSD users will find that the package message offers a screenful of help to configure your mail service to do spam and malware filtering during message receipt.

    The main point is that your mail transfer agent needs to be able to cooperate with external programs for filtering. Most modern MTAs do; the other popular choices are postfix or sendmail, and in recent times OpenSMTPD, which is developed as part of the OpenBSD project, has been showing great promise.

  2. Consider setting up your mailserver to do greylisting
    All the early greylisting implementations and several of the options in use today were written as optional modules for mail transfer agents. If, for example, you will not be using PF anywhere, using spamd (which we will be covering in more detail later) is not really an option, and you may want to go for an in-MTA option, such as a sendmail milter like greylist-milter or a postfix policy server such as postgrey.

    In some environments, the initial delay in delivery of the first message may be undesirable or downright unacceptable; in such cases, the option of greylisting is unfortunately off the table.
    We feel your pain.


  3. Choose your malware scanner
    There are a number of malware scanners available, some free, some proprietary. The favorite seems to be the one we chose, clamav. clamav is GPL licensed and conveniently available through the package system on your favourite BSD.

    The product appears to be actively maintained with frequent updates of both the code itself and the malware signature database. Once it is installed and configured, clamav takes care of fetching the data it needs.

    Signature database update frequency appears to be on par with competing commercial and proprietary offerings.

  4. Choose your spam filtering

    Spam filtering is another well populated category in the BSD package systems. Several of the free offerings such as dspam and spamassassin are very complete filtering solutions, and with a little care it is even possible to combine several different systems in a sort of cooperative whole.

    We chose a slightly simpler approach and set up a configuration where messages are evaluated by spamassassin during message receipt. spamassassin is written mainly in perl, shepherded by a very active development team and is very flexible with all the customizability you could wish for.

Once all those bits have been configured and are running, any messages with malware in them are silently discarded with a log entry of the type

2007-04-08 23:39:17 1Haf6Q-000M6I-Cd => blackhole (DATA ACL discarded recipients): This message contains malware (Trojan.Small-1604)

Messages which do not contain known malware are handed off to spamassassin for evaluation. spamassassin evaluates each message according to its rule set, where each matching rule tallies up a number of points or fractions of points, and in our configuration, the very clear cases are discarded:

2007-04-08 02:39:35 1HaLRE-000Kq0-3P => blackhole (DATA ACL discarded recipients): Your message scored 116.0 Spamassassin points and will not be delivered.

The messages which are not discarded outright fall into two categories:

Clearly not spam A large number of rules are in play, and for various reasons valid messages may match one or several of the rules. We chose a definitely not spam limit which means that messages which accumulate 5 spamassassin points or less are passed with only a X-Spam-Score: header inserted.

The interval of reasonable doubt Messages which match a slightly larger number of rules are quite likely to be spam, but since they could still conceivably be valid, we change their Subject: header by prepending the string *****SPAM***** for easy filtering. The result ends up looking like the illustration below to the end user:

Likely spam message, tagged for filtering
Mainly for the administrator's benefit, a detailed report of which rules were matched and the resulting scores is included in the message headers.

Detailed spam scores for a likely spam message:


        Content analysis details:   (10.0 points, 5.0 required)
        pts rule name              description
        ---- ---------------------- --------------------------------------------------
        0.8 EXTRA_MPART_TYPE       Header has extraneous Content-type:...type= entry
        1.0 HTML_IMAGE_ONLY_28     BODY: HTML: images with 2400-2800 bytes of words
        0.0 HTML_MESSAGE           BODY: HTML included in message
        2.0 RCVD_IN_SORBS_DUL      RBL: SORBS: sent directly from dynamic IP address
        [62.31.124.248 listed in dnsbl.sorbs.net]
        1.3 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
        [Blocked - see <http://www.spamcop.net/bl.shtml?62.31.124.248>]
        3.1 RCVD_IN_XBL            RBL: Received via a relay in Spamhaus XBL
        [62.31.124.248 listed in sbl-xbl.spamhaus.org]
        1.7 RCVD_IN_NJABL_DUL      RBL: NJABL: dialup sender did non-local SMTP
        [62.31.124.248 listed in combined.njabl.org]
X-Spam-Flag: YES
Subject: *****SPAM***** conservatively enrichment

This means you have real data to work with for any fine tuning you need to do in your local customization files, and for valid senders who for some reason trigger too many spam characteristics, you may even whitelist using regular expression rules. Optional spamassassin plugins even offer the possibility of automated feedback to hashlist sites such as Razor, Pyzor and DCC - a few scripts will go a long way, and the spamassassin documentation is in fact quite usable.
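The three-way disposal described above (pass with a score header, tag the subject, discard outright) can be sketched as follows; the discard threshold is an assumed value for the illustration, not the one used on our servers:

```python
def dispose(score, subject, pass_limit=5.0, discard_limit=100.0):
    """Three-way disposal: deliver untouched, tag the Subject: header
    for easy filtering, or discard outright. The discard limit here is
    an assumed value."""
    if score <= pass_limit:
        return ("deliver", subject)
    if score >= discard_limit:
        return ("discard", subject)
    return ("deliver", "*****SPAM***** " + subject)
```

Keeping the middle band deliverable but tagged is what preserves the end user's ability to rescue the occasional false positive.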
Performing content scanning during message receipt means you run the risk of having mail delivery to your users stop if one of your content scanner services should happen to crash.

For that reason it can be argued that since content scanning, as opposed to greylisting, does not have to be performed during message receipt, it should be performed later. Server or end user processes can for example be set up to do filtering on user mail boxes, using tools such as procmail or even filtering features built into common mail clients such as Mozilla Thunderbird or Evolution.

Now of course all of this content scanning adds up to rather extensive computation, well into what we until quite recently would have considered “serious number crunching”. The next section presents some recent advances which will most likely lighten the load on your mail handlers.


Giving spammers a harder time: spamd

The early days of pure blacklisting

As content filtering grew ever more expensive, several groups started looking into how to shift the burden from the recipient side back to the spammers. The OpenBSD project's spamd is one such effort, intended to integrate with OpenBSD's PF packet filter. Both PF and spamd have been ported to other BSDs, but here we will focus on how spamd works on OpenBSD in the present version.

The initial version of spamd was introduced in OpenBSD 3.3, released in May 2003. The basic idea was a tarpitting daemon which would produce extremely slow SMTP replies to hosts in a blacklist of known spammers. Known spammers would have their SMTP dialog dragged out for as long as possible, with the spamd at our end serving its part of the dialog at a rate of one byte per second.

spamd was designed to operate independently, with no direct interactions with your real mail service. Instead, it integrates with any PF based packet filtering you have in place, and frequently runs on the packet filtering gateway. Typical packet filtering rules to set up the redirection to spamd looked something like this with the PF syntax of the time:


table <spamd> persist
table <spamd-white> persist
rdr pass on $ext_if inet proto tcp from <spamd> to { $ext_if, $int_if:network } \
            port smtp -> 127.0.0.1 port 8025
rdr pass on $ext_if inet proto tcp from !<spamd-white> to { $ext_if, $int_if:network } \
            port smtp -> 127.0.0.1 port 8025


Here the table definitions denote lists of addresses: <spamd> stores the blacklist, while addresses in <spamd-white> are not redirected. (See e.g. this article for a more recent configuration example - details of how spamd works have changed over the years.)
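For reference, with the rdr-to syntax introduced in OpenBSD 4.7, an equivalent modern ruleset might look something like the following sketch, along the lines of the examples in the spamd documentation. The egress interface group and the nospamd table are conventions you would adapt to your own setup:

```
table <spamd-white> persist
table <nospamd> persist file "/etc/mail/nospamd"
pass in on egress proto tcp from any to any port smtp rdr-to 127.0.0.1 port spamd
pass in on egress proto tcp from <spamd-white> to any port smtp
pass in log on egress proto tcp from <nospamd> to any port smtp
```

With PF's last-match semantics, hosts in the whitelist tables fall through to the real mail service while everything else is redirected to spamd.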

Note: Since this was originally written, the uatraps list was unfortunately retired from service and is no longer available. See the update notes at the end of the article for some more information.

Blacklists and corresponding exceptions (whitelists) are defined in the spamd.conf configuration file, using a rather straightforward syntax:

all:\
        :uatraps:whitelist:
uatraps:\
        :black:\
        :msg="SPAM. Your address %A has sent spam within the last 24 hours":\
        :method=http:\
        :file=www.openbsd.org/spamd/traplist.gz
whitelist:\
        :white:\
        :method=file:\
        :file=/etc/mail/whitelist.txt

Updates to the lists are handled via the spamd-setup program, run at intervals via cron.
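A typical root crontab entry for this might be the following; the interval is a matter of local taste:

```
# Fetch and load updated black- and whitelists once an hour
0 * * * * /usr/libexec/spamd-setup
```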

spamd in pure blacklisting mode was apparently effective in wasting known spam senders' time, to the extent that logs started showing a sharp increase in the number of SMTP connections dropped during the first few seconds.

Introducing greylisting

Inspired by the early in-MTA greylisters (see the discussion of greylisting earlier), spamd was enhanced to include greylisting functions in OpenBSD 3.5, which was released in May 2004. The result was a further reduction in load on the content filtering mail handlers, and OpenBSD users and developers have found spamd's greylisting to be so effective that from OpenBSD 4.1 on, spamd greylists by default. Pure blacklisting mode is still available, but requires specific configuration options to be set.

A typical sequence of log entries in verbose logging mode illustrates what greylisting looks like in practice:

Oct  2 19:55:05 delilah spamd[26905]: (GREY) 83.23.213.115: <gilbert@keyholes.net> -> <wkitp98zpu.fsf@datadok.no>
Oct  2 19:55:05 delilah spamd[26905]: 83.23.213.115: disconnected after 0 seconds.
Oct  2 19:55:05 delilah spamd[26905]: 83.23.213.115: connected (2/1)
Oct  2 19:55:06 delilah spamd[26905]: (GREY) 83.23.213.115: <gilbert@keyholes.net> -> <wkitp98zpu.fsf@datadok.no>
Oct  2 19:55:06 delilah spamd[26905]: 83.23.213.115: disconnected after 1 seconds.
Oct  2 19:57:07 delilah spamd[26905]: (BLACK) 65.210.185.131: <bounce-3C7E40A4B3@branch15.summer-bargainz.com> -> <adm@dataped.no>
Oct  2 19:58:50 delilah spamd[26905]: 65.210.185.131: From: Auto lnsurance Savings <noreply@branch15.summer-bargainz.com>
Oct  2 19:58:50 delilah spamd[26905]: 65.210.185.131: Subject: Start SAVlNG M0NEY on Auto lnsurance
Oct  2 19:58:50 delilah spamd[26905]: 65.210.185.131: To: adm@dataped.no
Oct  2 20:00:05 delilah spamd[26905]: 65.210.185.131: disconnected after 404 seconds. lists: spews1
Oct  2 20:03:48 delilah spamd[26905]: 222.240.6.118: connected (1/0)
Oct  2 20:03:48 delilah spamd[26905]: 222.240.6.118: disconnected after 0 seconds.

Here we see how greylisted hosts connect and disconnect within a second or two, while the blacklisted host gets stuck for 404 seconds, which is roughly the time it takes to exchange the typical SMTP dialog one byte at a time until the DATA part starts and the message is rejected back to the sender's queue. It is worth noting that spamd by default greets new correspondents one byte at a time for the first ten seconds before sending the full 451 temporary failure message.

The graph below is based on data from one of our greylisting spamd gateways, illustrating clearly that the vast number of connection attempts are dropped within the first ten seconds.

Number of SMTP Connections by connection length

The next peak, at approximately 400 seconds, represents blacklisted hosts which get stuck in the one-byte-at-a-time tarpit. The data in fact includes a wider range of connection lengths than is covered here; however, the frequency of connection lengths significantly longer than approximately 500 seconds is too low to graph usefully. The extremes include hosts which appear to have been stuck for several hours, with the outlier at 42,673 seconds, very close to a full 12 hours.
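To get from raw spamd logs to numbers you can graph, you first need to tally connection lengths. The following is a minimal sketch in Python - not part of the original toolchain - which keys on the "disconnected after N seconds" format shown in the log excerpt above. The sample lines are taken from that excerpt, and the bucket boundaries are illustrative:

```python
import re

# Sample spamd log lines from the excerpt above; real input would be the
# relevant lines from wherever your syslog puts spamd's output.
LOG = """\
Oct  2 19:55:05 delilah spamd[26905]: 83.23.213.115: disconnected after 0 seconds.
Oct  2 19:55:06 delilah spamd[26905]: 83.23.213.115: disconnected after 1 seconds.
Oct  2 20:00:05 delilah spamd[26905]: 65.210.185.131: disconnected after 404 seconds.
"""

def bucket_lengths(log):
    """Tally spamd connection lengths into the buckets discussed above."""
    counts = {"0-9s": 0, "10-499s": 0, "500s+": 0}
    for match in re.finditer(r"disconnected after (\d+) seconds", log):
        seconds = int(match.group(1))
        if seconds < 10:
            counts["0-9s"] += 1      # greylisted hosts, dropped almost at once
        elif seconds < 500:
            counts["10-499s"] += 1   # mostly blacklisted hosts in the tarpit
        else:
            counts["500s+"] += 1     # the rare long-haul outliers
    return counts

print(bucket_lengths(LOG))
```

Feeding a full log through this gives you the per-bucket counts that the graphs in this article are based on, with finer-grained buckets as needed.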

Effects of implementation: Protecting the expensive appliance

Users and administrators at sites which implement greylisting tend to agree that they get rid of most of their spam that way. However, real world data which show with any reasonable accuracy the size of the effect are very hard to come by. People tend to just move along, or maybe their frame of reference changes.

For that reason it was very refreshing to see a message with new data appear on the OpenBSD-misc mailing list on October 20, 2006 (See Steve Williams' October 20th, 2006 message to the OpenBSD-misc mailing list).

In that message, Steve Williams describes a setting where the company mail service runs on Microsoft Exchange, with the malware and spam filtering handled by a McAfee WebShield appliance. During a typical day at the site, Williams states, "If we received 10,000 emails, our Webshield would have trapped over 20,000 spam" - roughly a two to one ratio in favor of unwanted messages. The appliance was however handling spam and malware with a high degree of accuracy.

That is, it was doing well until a new virus appeared which the WebShield did not handle, and Williams' users were once again flooded with unwanted messages. Putting an OpenBSD machine with a purely greylisting spamd configuration in front of the WebShield appliance had dramatic effects.

Running overnight, the WebShield appliance had caught a total of 191 spam messages, all correctly classified. In addition, approximately 4,200 legitimate email messages had been processed, and the spamd-maintained whitelist had reached a size of roughly 700 hosts.

By the metrics given at the start of Williams' message, he concludes that under normal circumstances the unprotected appliance would have had to deal with approximately 9,000 spam or malware messages. In turn this means that the greylisting eliminated approximately 95% of the spam before it reached the content filtering appliance. This is in itself a telling indicator of the relative merits of enumerating badness versus behavior-based detection.

spamdb and greytrapping

By the time the development cycle for OpenBSD 3.8 started during the first half of 2005, spamd users and developers had accumulated significant amounts of data and experience on spammer behaviour and spammer reactions to countermeasures.

We already know that spam senders rarely use a fully compliant SMTP implementation to send their messages. That's why greylisting works. Also, as we noted earlier, not only do spammers send large numbers of messages, they rarely check that the addresses they feed to their hijacked machines are actually deliverable. Combine these facts, and you see that if a greylisted machine tries to send a message to an invalid address in your domain, there is a significant probability that the message is a spam, or for that matter, malware.

Consequently, spamd had to learn greytrapping. Greytrapping as implemented in spamd puts offenders in a temporary blacklist, dubbed spamd-greytrap, for 24 hours. Twenty-four hours is short enough to not cause serious disruption of legitimate traffic, since real SMTP implementations will keep trying to deliver for a few days at least. Experience from large scale implementations of the technique shows that it rarely if ever produces false positives, and machines which continue spamming after 24 hours will make it back to the tarpit soon enough.

One prime example is Bob Beck's "ghosts of usenet postings past" based traplist, which rarely contains fewer than 20,000 entries. The reason we refer to it as a “traplist” is that the list is generated by greytrapping at the University of Alberta. At frequent intervals the content of the traplist is dumped to a file which is made available for download and can be used as a blacklist by other spamd users. The number of hosts varies widely and has been as high as almost 200,000.

The peak number of 198,289 entries was registered on Monday, February 25th 2008, at 18:00 CET.

The diagram here illustrates the number of hosts in the list over a period of a little more than two years.

Hosts in the uatraps list - active spam sending hosts

At the time this article was originally written (mid-March 2008), the list typically contained around 100,000 entries. While still officially in testing, the list was made publicly available on January 30th, 2006. To my knowledge the list has yet to produce any false positives, and it was available from http://www.openbsd.org/spamd/traplist.gz.

Note: Since this was originally written, the uatraps list was unfortunately retired from service and is no longer available. See the update notes at the end of the article for some more information.

Setting up a local traplist to supplement your greylisting and other blacklists is very easy, and is straightforwardly described in the spamd and spamdb documentation.

Anecdotal evidence suggests that a limited number of obviously bogus addresses such as those which have already been seen in spamd's greylisting logs or picked from Unknown user messages in your mail server logs will make a measurable dent in the number of unwanted messages which still make it through.
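As a sketch, seeding and inspecting the local traplist with spamdb might look like the following. The first address is the bogus recipient seen in the greylist excerpt earlier; substitute addresses harvested from your own logs:

```
# Add a known-bogus address in one of our domains as a greytrap target
spamdb -T -a wkitp98zpu.fsf@datadok.no

# List trap addresses and any hosts currently trapped
spamdb | grep "^TRAPPED"
```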

Some limited ongoing experiments started in July 2007 (See the blog post http://bsdly.blogspot.com/2007/07/hey-spammer-heres-list-for-you.html and followups) indicate that publishing the list of greytrap addresses on the web has interesting effects. After a spike in undeliverable bounce messages to non-existent addresses, we began adding backscatter addresses in our own domains to the local greytrap list and publishing the greytrap addresses on a web page that was referenced with a moderately visible link on the target domains' home pages.

The greytrap list quickly grew to several thousand entries (the list, with some accompanying explanation, is maintained at bsdly.net, fed by what appears to be several different address generating operations with slightly different algorithms and patterns). See the field notes at http://bsdly.blogspot.com/2007/11/i-must-be-living-in-parallel-universe.html for some samples. Addresses would typically start to appear in our greylist dumps (and occasionally in mail server logs) as intended recipients for messages with From: addresses other than <> within days of being added to the published list.

The net effect of a sizeable list of published greytrap addresses is both a higher probability of detecting spam senders early and further worsening spammers' effective hit rate by lowering the quality of their address lists.

Incremental spamd improvements

One of the main overall characteristics of the changes implemented in the most recent OpenBSD release is that they tend to be what users and developers see as sensible, best practice compliant defaults.

Typical of the sensible defaults theme is the decision to have spamd run in greylisting mode by default. This change was implemented in OpenBSD 4.1, released May 1st, 2007.

Sites with several mail exchangers and corresponding spamd instances will appreciate the synchronization feature for greylisting databases between hosts, also an OpenBSD 4.1 improvement.
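On OpenBSD the synchronization partners are given via spamd's -y (listen) and -Y (target) options. An /etc/rc.conf.local sketch for one of the gateways could look like this; the interface and peer name are placeholders:

```
# Sync greylist updates: listen on em0, push updates to the peer gateway
spamd_flags="-v -y em0 -Y mx2.example.com"
```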

Sites and domains with several mail exchangers with different priorities have seen that spammers frequently attempt to deliver to secondary mail exchangers first. As a consequence, the greytrapping feature was extended to detect and act on such out of order mail exchanger use.

Conclusion

The main conclusions are that the free tools work, and that by using them intelligently you can actually make a difference.

If our goal is to achieve relative peace and quiet in our own networks so we get our real work done, there are real advantages in stopping undesirable traffic as early as possible, and stopping most of it at the perimeter is actually doable.

All the tools we have studied are open source. The open source model, which is closely related to the peer review style of development seen in academic research, produces effective, high quality tools which truly make your life easier. The often repeated argument that development in the open would make it easier for the other side to develop countermeasures does not match our experience. If anything we see that development in the open means that ideas get exposed to real world conditions quickly, exposing the less robust approaches in ways that closed development is apparently unable to match.

The data I presented earlier as graphs seem to indicate that our efforts have some effect. The number of greytrapped hosts appears to stabilize at a higher level over time. This could be taken as an indicator that the number of compromised machines is rising, but could equally well be interpreted to mean that spammers and malware senders need to try harder now that effective countermeasures are becoming more widely deployed.

By studying our adversaries' behavior patterns we have trapped them, and we may just be starting to win.

Resources

  1. Slides for this talk
  2. Nicholas Weaver, Vern Paxson, Stuart Staniford and Robert Cunningham: A Taxonomy of Computer Worms
  3. Marcus Ranum: The Six Dumbest Ideas in Computer Security, September 1, 2005
  4. Greg Lehey: Seen it all before?, Daemon's Advocate, The Daemon News ezine, June 2000 edition
  5. Brad Templeton: Reflections on the 25th Anniversary of Spam
  6. The Morris Worm 18th anniversary site
  7. Sender Policy Framework
  8. Theo de Raadt: Exploit mitigation techniques and the update,  Security Mitigation Techniques: An update after 10 years
  9. Peter N. M. Hansteen: Firewalling with PF tutorial
  10. Peter N. M. Hansteen: The Book of PF, No Starch Press 2014 (first edition 2007, second edition 2010)
  11. Peter N. M. Hansteen: That Grumpy BSD Guy, blog posts
  12. Bob Beck: PF, it is not just for firewalls anymore and OpenBSD spamd - greylisting and beyond
  13. Barbie: Understanding Malware
If you enjoyed this article, please consider buying OpenBSD merchandise or donating to the project either directly or via the OpenBSD Foundation.




Update 2016-12-13: A discussion on the OpenBSD-misc mailing list led me to review this article, and I found a couple of points I would like to add:

The uatraps greytrapping-based blacklist was unfortunately retired from service in May 2016 and is no longer available. My own bsdly list, another greytrapping-generated list, stabilized at a slightly higher number of entries after uatraps disappeared, and for that reason I chose to replace the uatraps reference with that one in a common example. Another good source of information is the example spamd.conf in your /etc/mail directory if you're running OpenBSD. And if you want to explore a useful, if slightly unusual, way to use routing protocols, take a look at Peter Hessler's BGP-spamd initiative.

The OpenBSD-misc discussion also touched on expected connection lengths for the SMTP traffic spamd captures, and I realized I could possibly shed some further light on the issue. The graph earlier in the article was based on logs covering approximately 1 million connections.

Even with slightly different log rotation settings between them, the still existing logs on the three gateways described in the Voicemail Scammers piece contained data on some 8,145,183 connections, graphed below.

 

As with the earlier graph, I chose to limit the data to the 0 to 1,000 seconds interval. The general pattern seems unchanged: the vast majority of connections drop during the first few seconds, but the data includes some outliers, all the way out to the one that hung on for a whole 63,423 seconds (17 hours, 37 minutes and 3 seconds) - which of course makes the full data impossible to graph.

Click on the image to get the raw size, and if you like you can download the spreadsheet or the CSV version. The full data runs into the gigabytes, but if you want to take a peek for research purposes, please contact me via the various conventional means. 

Update 2016-12-14: Added a paragraph mentioning DKIM and DMARC, which were either preliminary specifications or not yet thought up at the time this was originally written.