RE: RBL-type BGP service for known rogue networks?
Do you think that the car thief scenario comes into play here? Maybe an alarm system wont *really* keep a determined thief from stealing a car, but isn't he more likely to move onto something easier? And, yes, I do understand the mentality of the "bigger challenge". But, I've been able to identify the true source of a forged packet and filter it knowing that they could switch to attacking from another IP. However, I think only once or twice out of thirty or so incidents over the past few years have they come back in anytime soon from anywhere else. Karyn -----Original Message----- From: jlewis@lewis.org [mailto:jlewis@lewis.org] Sent: Thursday, July 06, 2000 2:35 PM To: Dan Hollis Cc: nanog@merit.edu Subject: Re: RBL-type BGP service for known rogue networks? On Thu, 6 Jul 2000, Dan Hollis wrote:
1) Someone sets up server X on company Y's network and starts rooting sites.
2) Company Y, once notified, refuses to shut down server X, even when it's been CONFIRMED that server X is indeed rooting sites.
3) Company Y has a HISTORY of such attacks and refuses to take any action.
tin.it obviously fits all 3 criteria and thus would be blackholed. It might not get them to change their behaviour, but at least people who subscribe to the blackhole list wouldn't be rooted by tin.it customers.
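On the subscriber side, a blackhole feed of this kind ultimately reduces to null routes. A hypothetical sketch of one such entry (the prefix is documentation space, not an actual tin.it netblock, and the exact syntax depends on your router):

```
! Hypothetical IOS fragment: discard all traffic destined for a
! prefix learned from the blackhole feed. 192.0.2.0/24 is an
! illustrative example, not a real offender.
ip route 192.0.2.0 255.255.255.0 Null0
```

Note this only drops traffic *to* the listed network; stopping attacks sourced *from* it would take inbound filtering as well.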
Except that any good script kid has root on numerous boxes. Just blocking a well-known site full of rooted boxes probably won't do much good, since they crack and scan from random boxes all over the world as they root them.

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*  |  I route
 System Administrator          |  therefore you are
 Atlantic Net                  |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
On Thu, 6 Jul 2000, Karyn Ulriksen wrote:
Do you think that the car thief scenario comes into play here? Maybe an alarm system won't *really* keep a determined thief from stealing a car, but isn't he more likely to move on to something easier?
It didn't stop mosthateD, who missed his day in federal court because he was in jail for burglary of a house and auto theft. This kid is the one who gets labeled the most infamous hacker of recent times, and all he did was deface a few websites and talk trash on IRC. But that's another story entirely.

Seriously though, your argument is a decent one. Hardening systems and networks against attack is a great idea, and it's been talked about many, many times before. It never seems to catch on, though. How many OSes out there are proactive about their security? It's certainly not Windows, and to a large extent it's not the people building the userland side of Linux systems, though there are two or three projects designed to build a secure Linux userland. The OpenBSD project seems to do a hell of a job building secure systems, as do the NetBSD folks, and to some extent I'm sure FreeBSD benefits from that work as well, though I am less familiar with their security structure. A really nice bunch of guys, though.

As a prime example, when I started running Linux back in 1994, just about every Linux installation came with almost every service in inetd.conf turned on and no TCP wrappers. Lots of boxes were installed with exploitable services, and there was no distribution security list announcing that a newer version or a fix was available. You were on your own to watch Bugtraq and keep abreast of what might come up, and there were a lot of people who did not read Bugtraq to keep up with security issues. Legacy boxes from installations several generations back can still be found in production with exploitable holes. And it wasn't just the various Linux distributions of the time: Solaris, SCO, HPUX, IRIX, and BSD/OS had similar problems, though the commercial vendors at least had a system for dealing with these issues.
But as most of us know, it was not uncommon for vendors to sit on bug reports until the floodgates were open and the exploit code was widely available and being used before things were fixed. Still, you had recommended patch clusters you could install that fixed a decent chunk of problems, just like we have today.

And my, how times have changed. Or do they really change at all? While most vendors are pretty good at dealing with security issues now, they still occasionally sit on bugs until they cannot be sat on any longer. With the emergence of the Linux start-ups, we've seen individual distributions develop their own security groups to handle problem software, but that's reactive security. Distributions no longer come with the mentality that you're generating a "do-everything" server, which was a big step in improving the security of Linux: you can now pick what kind of install you want and prune or add software to tailor the distribution as you like. However, what happens to the NT admin who's just been given his first copy of RedHat and installs everything on the new web server, because he may need something and doesn't want to have to mess with it once it's installed? Now he's running BIND, sendmail, ftpd, telnetd, and lord knows what else on his web server, and that certainly increases the chance that one of those services can be broken to compromise the system.

What the Internet needs is proactive security. Unfortunately, that's not in the business model of a lot of people, from the multi-million dollar .com start-ups to the mom and pop ISP. Even in places where it is in the business model, there isn't necessarily adequate clue available to understand what's needed to effectively implement a security policy. This is the reason we still have places with firewalls getting hacked.
Each new security product positions itself as the alpha and omega of computer security, and there are a lot of business folks out there who believe the marketing slicks. Unfortunately, security is a continual process, not one or two actions. More importantly, there is no magic "security in a box" out there; no one product is a panacea.

I believe it really boils down to two root problems: lack of clue and laziness. Most intrusions can be stopped by following the generally accepted security basics set forth by just about any organization from SANS to your vendor. Turn off any service you don't need. (This includes IIS. Given its security history, you really don't need it.) Simply turning off services like RPC or BIND on machines that don't need them to function will stop most script kiddies in their tracks, and it certainly limits your vulnerability to outside attacks. Then you just have to check yourself by watching your vendors' lists or Bugtraq for problems with what you are running. This one step is included in just about every vendor or interest group's security checklist, and it's ignored more often than not.

I've seen many people make the mistakes I talk about; I've made many of them myself when I first started out in this field, and even over the years. And for all the good that free software has done (I am a firm believer in free software), it has also allowed lots of people to write some of the worst code imaginable. While it's plausible that there is peer review going on, who's to say the people reviewing the source have any better idea? It seems that the majority of source code audits are being done by the gray/black hats, with a few stunning white-hat examples like the L0pht and the OpenBSD project out there.

Furthermore, it's a modern-day Dodge City out there.
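The "turn off what you don't need, then check yourself" advice can be spot-checked from the outside. A minimal sketch, assuming you just want to see which well-known service ports still answer on a host (the port list and host here are illustrative, not a complete audit):

```python
# Hypothetical self-audit sketch: probe a handful of well-known
# service ports on a host and report which ones are answering.
import socket

# Example services often left on by default installs (illustrative list).
SERVICES = {21: "ftp", 23: "telnet", 111: "rpcbind", 513: "rlogin"}

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                found.append(port)
        finally:
            s.close()
    return found

if __name__ == "__main__":
    for port in open_ports("127.0.0.1", SERVICES):
        print("port %d (%s) is listening -- do you need it?" % (port, SERVICES[port]))
```

Anything this reports as listening that the machine doesn't need is a candidate for removal from inetd.conf or the boot scripts.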
People are roaming free with "weapons," firing them at will, with very little chance of prosecution unless you can claim a decent amount of damage, the investigation succeeds in tracking down the culprit, and all the evidence is handled properly. The unwritten rule is that if you're talking less than six figures in damage or the theft of national security secrets, the FBI doesn't really want to hear from you. It's not because they don't care; it's just that they already have too much to do. And I'm sure it's hard to keep talent on an FBI salary when the .com world is willing to pay so much for talented people. I don't expect a Wyatt Earp to come along anytime soon and tame things, and with the global nature of the Internet, it's quite possible that it may never happen. You stand a better chance of getting a judgement in civil court than relying on the penal system, especially when dealing with anything outside the US.
And, yes, I do understand the mentality of the "bigger challenge". But I've been able to identify the true source of a forged packet and filter it, knowing that they could switch to attacking from another IP. However, I think only once or twice out of thirty or so incidents over the past few years have they come back anytime soon from anywhere else.
Karyn
Sorry for the soapbox dissertation.

__
joseph
Here's what I'm implementing in order to a) dynamically disallow hosts/nets which are causing me problems, and b) ensure that -my- customers aren't causing problems for anyone else:

http://www.cisco.com/warp/public/cc/pd/sqsw/sqidsz/
http://www.cisco.com/univercd/cc/td/doc/pcat/nerg.htm

There are similar commercial products like the Network Flight Recorder from www.nfr.net, as well as Snort - www.snort.org - which is a freeware product.

The things I like about the Cisco solution are its tight integration with their routers and its scalability. I can set up this system so that upon detection of an inbound or outbound attack (tuning it to avoid false positives is key), it automagically - or with the click of a mouse, for purposes of manual oversight - rewrites the ACLs on designated routers so as to disallow the offending traffic. It's a scalable solution, so I can deploy as many sensor boxes as are necessary and implement a hierarchy of 'director' machines to run them all. I can dump all the logging into Oracle, with the forensic benefits that implies.

This rewriting of ACLs on the fly is called "shunning" in Cisco terminology, and it can be done on a per-host or per-network basis, as one would expect. In fact, Cisco routers may be used as 'sensors' themselves, at the cost of a bit of CPU overhead. I haven't experimented with using a router in this way yet, but plan on doing so in the near future. If it doesn't impact performance too much, I could probably avoid having to set up SPAN ports for use by the dedicated 'sensor' boxes, as well as the host ports required for 'sensor'-to-'director' communications. Since the core of my network is MPLS running on Catalysts with NFFC II cards, the processing overhead for running extensive ACLs is pretty low.
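At its core, shunning reduces to turning a list of offending sources into deny entries ahead of a catch-all permit. A hypothetical sketch of that step (the ACL number, helper name, and addresses are made up for illustration; the actual Cisco Secure IDS generates and pushes the rewritten ACL to the router itself):

```python
# Hypothetical sketch: build Cisco extended-ACL lines that "shun"
# a set of offending source addresses. ACL number 150 is an
# arbitrary example, not a product default.
def shun_acl(addresses, acl=150):
    """Return ACL lines denying each address, ending in permit-any."""
    lines = ["access-list %d deny ip host %s any" % (acl, ip) for ip in addresses]
    # Without this, the ACL's implicit deny would drop all traffic.
    lines.append("access-list %d permit ip any any" % acl)
    return lines

if __name__ == "__main__":
    # Example offenders drawn from documentation address space.
    for line in shun_acl(["203.0.113.7", "198.51.100.22"]):
        print(line)
```

The ordering matters: the deny entries must precede the catch-all permit, since IOS evaluates ACLs top-down and stops at the first match.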
Whilst I'm nowhere near the size of a Verio or an Exodus, I should think that a system such as this, coupled tightly with the routing/switching infrastructure, could go a long way towards freezing out the hax0rs and script-kiddies as we all wait to enter the IPv6 Promised Land. And it also avoids the pitfalls involved in tinkering with the functionality of BGP, etc.

Is anyone else out there using an intrusion detection system in this manner? Any suggestions or comments would be greatly appreciated.

--
-----------------------------------------------------------
Roland Dobbins <rdobbins@netmore.net> // 818.535.5024 voice
participants (3):
- Joe Shaw
- Karyn Ulriksen
- Roland Dobbins