Saturday, March 24, 2007

Online Predators – A Security Risk to Our Homes and Families

I am going to take a break from enterprise information security and talk about computer security on the home front for a bit. The security aspects of online predators, children, and the Internet are yet again getting a huge amount of publicity, and are worth discussing. According to a recent news article in Denver, “Police crack down on Internet predators,” police are using online chat rooms to lure predators into situations where they think they are going to meet a child for sex – and then arresting them. The article goes on to list the names and personal information of these worthless scum for all to see.

First of all – Good on the cops and law enforcement agencies nationwide who are cracking down on these worthless animals that prey on our kids. Bad on the liberal morons who are criticizing this effort and saying that these people getting caught are victims of entrapment. The predators are making the conscious decision to pursue their uncontrolled urges online. The cops are just acting as the decoys for the predators to go after instead of the predators going after our kids. One predator going after a decoy means that one less kid is becoming the next victim. Kind of like why we use “honey pots” on our corporate networks – to give the bad guys something to attack so as to keep them distracted, and so that they won’t attack our real servers, right?

Now – in my opinion, there are two parts to the solution for deterring would-be predators. The first is the work already being done by our law enforcement agencies, as cited in the article. Shows like Dateline’s “To Catch a Predator” with Chris Hansen are giving high visibility to these pathetic people, showing these perverts getting busted publicly and exposing them for who they really are. Chris Hansen and John Walsh (“America’s Most Wanted”) are two of my biggest heroes. They are making a difference, and are truly positive forces in our society today. Good job, guys – you are two of the true heroes of our time.

The other part of the solution is that parents need to be more proactive in protecting children from these online perverts – and, in fact, in protecting children from their own inability to protect themselves. Children are immature, lack experience, and just don’t have the knowledge and logical-thinking tools developed yet to deal rationally with these types of situations. This is through no fault of the children themselves – that’s part of being a child, right? Many will argue that parents should not censor their children’s activities. There is a fine line between censorship and protection. True, children can indeed think for themselves on many issues. But their thoughts are often not logically constructed, and tend to be rather impulsive at times. Of course, I could say the same for many adults! Children often do not know any better, believe what they are told – and these animals have become so good at disguising themselves that it is easy for a child to be deceived. Children think that they are hidden behind the anonymity of the Internet, and often feel very uninhibited when chatting online. They then get pulled into the webs spun by these scum bags.

Parents don’t need to hover over their children’s shoulders every minute that they are on the computer to be good parents. Rather, they can take some very easy technical and low-tech steps to protect their children’s Internet usage. All they have to do is be a little proactive and put a few safeguards in place to show their children that they care about them.


Enforce Internet Hours:

Much of what the experts will tell you about keeping your children from venturing into dangerous waters on the Internet has to do with not allowing them to be up all hours of the night chatting. Even if you have the family computer in a common area as suggested, how do you monitor usage late at night when you are already in bed? If you have broadband service, you can use your router to specify hours of operation. Even if you have only one computer (and think you don't need a router), people have heard me say over and over that you need to have one of these routers anyway - for the other security measures that they offer, such as firewall protection. I am harping on people yet AGAIN to get one because the broadband router can also help you protect the people who use the computer, not just the data on the computer. Most broadband routers allow you to set hours of operation for all computers or only certain specified ones. The computer will still work as it normally would - allowing your children to print, access files on another computer, and do their homework. Whether they should be up all hours of the night doing it is your concern, but at least the Internet access will be turned off. If you have multiple computers, you can limit Internet hours on some, but not necessarily all. Many times I am in my office late at night researching something (during a bout of insomnia) and need the Internet to be accessible. But the kids can't use my computer for fear of death - or at least my strong password gets in the way :)
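To picture what the router is doing behind that settings page, here is a minimal sketch in Python. It is purely illustrative - the computer names and hours are assumptions, and a real router enforces this in its own firmware rather than with a script like this:

```python
from datetime import time

# Illustrative per-computer schedule; names and hours are made up.
ALLOWED_HOURS = {
    "kids-pc":   (time(7, 0), time(21, 0)),    # 7 AM to 9 PM only
    "office-pc": (time(0, 0), time(23, 59)),   # effectively always on
}

def internet_allowed(computer, now):
    """True if this computer may reach the Internet at clock time `now`."""
    start, end = ALLOWED_HOURS.get(computer, (time(0, 0), time(23, 59)))
    return start <= now <= end

print(internet_allowed("kids-pc", time(23, 30)))    # False - past bedtime
print(internet_allowed("office-pc", time(23, 30)))  # True - insomnia research
```

The key point is that only the Internet gate is closed on a schedule; the computer itself, and printing and file sharing on the home network, keep working.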


Use Parental Controls:

Just like the V-Chip on your television, your broadband router can help you sign up for and put parental controls in place. You can allow only content that is appropriate for your family, protecting them from questionable material - everything from pornography to web sites that contain hidden malicious code. These sites are also often used for phishing and other identity theft scams. Much of the discussion about the dangers of online predators centers on the idea that children are often lured to seemingly innocent web sites or chat rooms, but are then exposed to all kinds of things that can lead to, among other things, identity theft - theirs and yours. By signing up for a parental controls service, you leverage the fact that the service keeps its definitions up to date and monitors for the many new dangerous sites that pop up, so you don't have to worry about constant upkeep. You can also specify your own list of prohibited web sites using your router's built-in functions.


Use Protection Software:

There is also a wide variety of software packages out there that will allow you to permit and restrict the web sites your children can visit. NetNanny is one such product. There are many others - the Internet Filter Review web site provides a wealth of info, as well as software comparisons. Many of these packages let you prevent access to suspicious web sites, monitor chat room and email activities, and even send you alerts about suspicious activities that are taking place.

Even the more sophisticated personal firewall software can restrict application access to the Internet. ZoneAlarm, for example, lets you allow or disallow Internet access for any application of your choosing. If you feel your children's usage of their favorite chat program has gotten out of hand or is suspicious, simply turn off access, talk to them about it, and then come up with a strategy for safer usage.

For those of you who use Comcast broadband Internet service, McAfee Personal Firewall comes to you free of charge. Although I have been a ZoneAlarm fan for many years, I now use McAfee because it is free with my current service. The McAfee product provides a very robust set of features to protect you and your system from harmful activities.


Upgrade to Windows Vista:

The parental controls features of Windows Vista allow parents to control more tightly what their children use the Internet for, and when. Parents can set hours for computer use, set sites as off-limits or even limit browsing to only a few sites, and monitor what sites their children are viewing. It is easy to confuse this with censorship, but we are talking about children, after all. It is (in my humble opinion) the parent's job to keep children from things that will hurt them or bring liability for illegal activities onto the parents. Vista allows for a more granular setting of computer restrictions. The other thing I personally like is that the parental controls block what you specify, but give a reason why - letting the kids know that you are taking an active interest in their computer activities.


A Low Tech Approach to Web Site Access Prevention:

Within your computer is a low-tech way to prevent it from accessing questionable web sites and sites that host chat rooms: the HOSTS file. When you type in a web site address or click a link in your web browser, you have just told your computer you want to visit an address somewhere on the web. We as humans think in terms of plain English names, like www.wflinn.com or www.google.com. Our computers, however, think in terms of addresses known as Internet Protocol (IP) addresses. An IP address takes the form 192.168.1.1. For instance, what you know as www.wflinn.com is actually located at address 66.226.64.9. When you type in the plain English name, your computer has to do what is known as "name resolution" to find out which IP address it needs to go to. The HOSTS file is the file your computer looks to first to find the IP address of a web site's location. If it doesn't find a suitable address in the HOSTS file, it goes out to what is known as a Domain Name Services (DNS) server to get the address. Therefore, if you put an entry into your HOSTS file telling your computer the address of a specific site, it will look no further for the address.

So - you fake your computer out by telling it that the address of a questionable web site is 127.0.0.1. The address 127.0.0.1 is a special address: it is the loop-back address of your own computer. Regardless of what address your Internet Service Provider assigns you, your computer's internal address is always 127.0.0.1. When you tell the HOSTS file that the address of a questionable web site, such as www.myspace.com, is actually 127.0.0.1, your web browser will try to go to that address, fail to find a web server there, and simply display the same error you get when you try to go to a web site that doesn't exist. I'm not necessarily trying to pick on MySpace, by the way - but it has been singled out lately as one of the most popular sources that online predators look to for victims, so I have chosen to outright block all access to that site from all of my computers.
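To make the lookup order concrete, here is a rough sketch in Python of the "HOSTS file first" behavior described above. The sample entries and the fallback marker string are illustrative only; your real HOSTS file (on Windows it lives under the system32\drivers\etc folder) follows the same one-IP-then-names-per-line format:

```python
# Illustrative HOSTS file contents; lines starting with # are comments.
SAMPLE_HOSTS = """
127.0.0.1    localhost
# Block a questionable site by pointing it back at your own machine:
127.0.0.1    www.myspace.com
"""

def resolve(name, hosts_text=SAMPLE_HOSTS):
    """Return the IP for a host name, checking HOSTS entries before DNS."""
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()   # strip comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        if name in names:
            return ip                        # HOSTS wins; DNS is never asked
    return "dns-lookup-needed"               # fall through to a real DNS query

print(resolve("www.myspace.com"))  # 127.0.0.1 - loops back, site blocked
print(resolve("www.google.com"))   # dns-lookup-needed - normal resolution
```

A name with a HOSTS entry never makes it to the DNS server, which is exactly why the 127.0.0.1 trick works.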

This method, by the way, is also an easy way to prevent all those annoying advertising pop-ups in your web browser. There are many web sites where you can obtain entries to copy and paste into your HOSTS file - so you don't have to do the research and type them all yourself. The good news is that this method is easy, costs nothing, and works very well. The bad news is that it must be updated, and if your kids are computer savvy, they can find this file and erase the entries to give themselves back access to the web sites you have blocked.


Summing it all up:

The Internet has exploded into a virtually unlimited resource for finding things and getting information. Unfortunately, it has also brought out the worst in some people. A recent news article noted that most of these online predators wouldn't be able to carry out their abhorrent behaviors without a computer and access to the Internet. It was interesting when one young girl in the article said that parents tell kids not to talk to strangers and such - all things related to being safe outside the home. But now the Internet has brought certain dangers inside the home, where they can affect your whole family.

There are many ways to protect your kids, from outright prohibition of certain things, to allowing access to everything, but helping them make wise choices. As I said, I am not going to get into this whole debate about what is and isn't censorship and invasion of privacy - that's up to you as parents to decide for yourselves. I will, however, tell you that you can use technology to help enforce your choices, and I encourage you to explore and use the various technologies at your disposal to do so. Not only will you be ensuring more safety for your family, but you will be adding to your overall computer security posture as well.

See my article on my web site from last year for a repeat of this information, with images to help you configure the items mentioned in this article:

http://www.gonzosgarage.net/computers/archive0506.html


Thought for the day: Stupid people suck, but worthless predator scum suck even more!



Thursday, March 22, 2007

When “Smart” People Make Stupid Security Decisions

Warning: Here’s the deal – I have had a week consisting of four “Mondays” in a row. Bad drivers and stupid people have been working my last nerve, so I gotta vent! This is an angry rant about stupid people. If you are a stupid person and you are easily offended, then you should turn away now. Maybe go play on a porn site for a while. Either that, or get some brains and rational thought, and you can join us for some intelligent conversation.

Here’s why I’m angry - I read an interesting article recently that highlights the folly of allegedly “smart” people who show their information security ignorance and make stupid decisions, not understanding even the most fundamental technologies and the reasoning behind information security requirements. Then, when someone with intimate technical knowledge of the issues and how to solve them steps in, that person is instantly rebuffed for even daring to mention the problems. I have experienced this type of thing my whole working life: I see people go through college, get a degree in underwater basket weaving, then somehow get into the pipeline to become managers. Either that, or they drink their way through college, become lawyers or doctors, buy beemers, and act like spoiled children the rest of their lives. I had to laugh when I read the following line in this article:


“The attitude among the legal staff was, ‘This is my computer and my network; you’re just a computer janitor.’”


To give a quick synopsis of the article – there are a bunch of attorneys in a District Attorney’s office (city unknown). These lawyers are the very buffoons who created an environment with a wide open network, wide open access to data, and confidential data exposed to anyone on the network (and possibly outside it) who wanted it. Additionally, malware and peer-to-peer applications were installed on numerous (most) computers throughout the office. When a network support person in the IT department mentioned the dangers of this environment, he/she was presented with numerous roadblocks – arguments from lawyers rationalizing how their activities (mostly music file sharing via Napster) were acceptable. Lawyers, after all, are great at making an argument to support ANY position, no matter how lame or morally wrong it may be. It appears from this article that they expended great energy to make their attitude toward information security seem justifiable instead of facing the fact that they were putting their network and data at grave risk. Essentially, non-technical people were allowed to dictate the standards for technical systems, all because they didn’t want to be inconvenienced and have their toys taken away. The network support person was later fired for being insubordinate to his/her “betters.” In other words – he/she told these cry babies how it is and what it would take to fix it, and they didn’t like it. Need I remind you – this was allegedly a District Attorney’s Office. I sure wouldn’t want to be that District Attorney when the network gets breached and the data gets stolen, maybe even distributed through the peer-to-peer sharing network. Notice that I didn’t say “if,” I said “when,” because it is going to happen unless they fix it – quick, fast, and in a hurry. What a story that would be in the national news!
Of course it wouldn’t be the first time a top lawyer was found to be criminally negligent of something, now would it?

That is why this article seemed to call out to me – I hear of, and even see, the same thing every day. The attitude that:


“Your computer security mumbo-jumbo is fine for everyone else, but don’t you dare inconvenience ME!”


It’s all about “ME” and it’s all about the fact that these people are so very important that inconveniencing them would be the most heinous crime committed against humanity.

And this “ME” attitude is coming from people with master’s degrees, doctorates, professional status, and high power positions. Seems the richer they are, the more spoiled and whiny they are. The lawyers in this article are perfect examples. But not only are these types of people complaining about security that keeps them from playing with their toys on the corporate network, some managers these days are complaining about security measures that are revealing large numbers of vulnerabilities and security problems. It’s not even that there are problems that need to be fixed – it is that the numbers are making them look bad. It’s all about the numbers, and it’s all about looking bad. No thought is given to the fact that they look bad because they ARE bad. If they want to look good, then why not just fix the underlying problems? Is that so hard?

(This is the part where I rant about the bad drivers) This is the same population of people, no doubt, who are claiming the roadways as their own as they carelessly drive their beemers with no regard for others. While keeping a cell phone glued to their heads, they are then complaining that the speed limits and laws of common sense are keeping them from totally owning the road for themselves. In fact just today, one of these morons couldn’t find a parking spot at our building, so they parked their car in the motorcycle parking – how stupid is that? Justice was served – the campus police slapped a parking ticket right on that Mitsubishi. Hope the laziness was worth it. (Bad driver rant completed).

In many cases, it all comes down to this:


“Your security reports are making me look bad, so my management is giving me heat and withholding my budget until I fix the problems. So why don’t you come up with a way to make me not look so bad?”


They will try to rationalize how the data needs to be collected a different way so that the numbers (of problems) look better. My answer to that: rather than waste so much time and energy trying to manipulate numbers to make yourself look good, why not just fix the problems and actually be good – for real? Manipulating numbers and hiding vulnerability problems is one way to make things look fixed, but taking real action will actually fix them. But, as one of my graduate professors often said: “Figures don’t lie, but a liar sure figures.”

Another clever issue evasion strategy: the smoke screen. When faced with data that clearly shows that their area has problems, management will ask irrelevant questions and demand explanations in order to throw off or divert the effort. In many cases they have no idea what they are asking, and they often look like jack asses because their questions expose their glaring ignorance of information security concepts. These activities will often tie up security professionals for days while they make every effort to explain the justification for valid and relevant security measures. Security people shouldn’t have to do this – it is a waste of time and keeps them from the business of keeping networks secure. Security professionals shouldn’t have to agonize over how to explain something so simple to allegedly intelligent people. It is more like explaining to your small kids why they can’t run down the hall with scissors.

But time after time, these people want to send us off to find an answer that will appeal to their twisted sense of logic. It may not be the right answer, and it may not be the one that is actually going to solve the problems. This is what an acquaintance of mine refers to as a “find me a rock” exercise. Someone will tell you to go find a rock, and when you bring one back, they say: “No! That isn’t the kind of rock I wanted! Go find me another one.” These types of senseless tactics are meant to waste other people’s time and buy the stupid people some time to think up another excuse. And these people are making decisions! Wow – no wonder so many companies are in trouble.

OK – so let’s bite the bullet and see what it will take to do something about this. In the case of the lawyers in the story above, or even the situations I have described here, it is going to take some work - a lot of work - up front. It is going to take a huge amount of effort and many staff hours in the beginning. But the interesting thing I have found is that if a methodical plan is put into place, and some reasonable time given to remediate the problems, they will eventually get fixed or at least minimized to a tolerable level. If some well-spent time is dedicated up front toward attacking the problems, then the rest of the effort simply becomes a continual maintenance routine. If there are a lot of security problems, it is a matter of prioritizing them in order of severity, tackling the most serious first, cleaning up the rest, then putting a plan in place to keep them under control.

New security issues will always come up as new attacks are discovered, and patches from vendors are released. But if the bulk of the serious issues are already taken care of, then tackling these new issues will be a fairly simple exercise.

But in order for any of this to work, people’s attitudes toward information security have got to change. IT people are not janitors, the computers and network that people in the work place are using do NOT belong to the workers, and these are not toys simply put in place for their enjoyment. Being negligent about information security can get people in trouble – big trouble. So before a plan is put in place to tackle the technical issues, perhaps a plan should be put in place to teach security awareness. Teach people why security is so important, how to be secure, and how they will be held accountable for non-compliance. The touchy feely attitudes have got to give way to terminating buffoons who refuse to comply. If you were a CEO, and your employees continually put your company’s finances, data and reputation at risk, just how long would you put up with it?


My closing Thoughts:

Computer Janitor – indeed! On my last tax return I reported income from salary and earned military pension in the $$$,$$$ range (six figures, for you folks who didn’t get it). Many of my colleagues are pulling down similar salaries, and they are so far from being janitors – to make a statement such as that, or even to think such a thing, is just so wrong. I don’t know too many janitors who make that much money and have post-graduate educations. But I see all too many instances where otherwise smart, educated people feel and behave just that way – they feel that the equipment and resources they use on the job don’t belong to anyone but them, and that the IT people are just there to help when they can’t figure out how to copy a document from one folder to another, or their mouse isn’t doing the little “clicky” thing like it should. Heaven help anyone who inconveniences these poor babies by telling them that they can’t run Napster unabated on the corporate wire. Give me a break! Maybe there is a lot of validity to Nick Burns’ (Saturday Night Live) attitude toward users. Automatic drink holder giving you problems today?

Ooops – gotta run. Time to get out the Swiffer and get after those viruses. And by the way… You’re Welcome!!!

Reference: “When Lawyers Use Napster At Work” (Anonymous, InfoWorld, 2/27/07)


  • What do you call 350 lawyers resting at the bottom of the sea? A good start!

  • Stupid people – you can’t live with them, and there are only so many of them that you can cut up and stick in an ice chest.

  • Hey – my rat terrier is smarter than your CEO.

  • Hey you in the beemer – hang up and drive!

  • There is an epidemic in America - Fools! (Mr. T)

Monday, March 19, 2007

Why are Some Software Vendors So Security Unaware?

It seems odd to me that software vendors release products that have vulnerabilities and then do nothing to patch them. In fact, in some cases patching the host operating system breaks certain of these errant applications, and the remedy from the software vendor is to put the original, vulnerable file right back in its place. For example, a security patch is released by the operating system vendor. The minute it is applied, a third-party application that relies on the patched files breaks. Instead of releasing a patch for its own product, the vendor relies on a “self repair” method that just restores the previous, vulnerable versions of the files that were supposed to be fixed.

Clearly, the software vendors are not talking to each other. Or they just don’t care that they aren’t fixing their applications to keep up with the threats. Either way, these companies are causing more work for IT department security people, and they are putting systems at risk. In Part 2 of my series on investigating false positives and other security anomalies, I discussed just such an instance - where a manual, self researched, and self developed fix had to be applied because the software vendor had no intention of fixing their product. This was clearly a case where the vendor did not care that they were injecting vulnerabilities into my environment. Good thing I'm not mentioning who it is here, eh?


Related Links:


Investigating False Positives and Other Security Anomalies Part 2

In Part 1 of this series, I talked about investigating vulnerability scan results where the scanner alerted on something and further investigation revealed that the vulnerability was a leftover file from an upgrade. For example, the computer was upgraded from Microsoft Office XP to Office 2003. As far as Windows/Microsoft Updates and the enterprise patch management system are concerned, the computer is running Office 2003, completely patched for the installed software. An in-depth investigation was performed, which involved going into the scanner session logs and finding out which file caused the scanner to alert on the vulnerability. Indeed, it turned out to be a leftover file from Office XP that Office 2003 doesn’t even use. Renaming or removing the file fixes the vulnerability, and Office continues to work normally – so all fixed, right? After all, it was a pretty straightforward fix: we knew that a Microsoft product was upgraded, the new Microsoft product didn’t clean up after the old version, and a vulnerability was left on the box. The solution of renaming an old Office file seemed logical, and one thing was clearly related to the other.


Not so fast! Let’s move on to the next type of scenario in the investigative process, one that is a little more difficult to troubleshoot. The vulnerability scanner alerts on something that experience showed was easily remediated by renaming or removing a file. The vulnerability was related to a leftover file, and getting rid of it resolved the vulnerability – for the time being. Later on, the computer is scanned again and the same vulnerability has returned. Nothing had changed. Nothing new was installed, and the same versions of the Office software are still on the machine. So let’s take a more in-depth look at this type of scenario and see what happened.


Scenario 3: A scan is run, and the now much discussed vulnerability related to MS Office products has appeared on several computers. The previously developed fix of renaming or removing the vulnerable leftover file proves successful. Later, these same computers are scanned again. Many of them show that the vulnerability has been successfully remediated, but on a few of them, the vulnerability has reappeared. Investigation into the scan session logs shows that the previously renamed vulnerable file is again the culprit causing this vulnerability to appear. Physical inspection of the file system on the target computers verifies that the renamed file is still in its renamed form, but now another copy of the original vulnerable file is on the box. One interesting thing is noted about these computers: they all have something in common – they all have a piece of third-party software (not Microsoft software) installed. The software title and vendor are not important here, and I don’t want to be accused (or worse) of name-calling on the Internet, so I won’t get into that here.


Further in-depth troubleshooting reveals that renaming the vulnerable file again, followed by an immediate scan, shows the vulnerability remediated. Now for the next step: verifying that all of the software works. MS Office works fine, the corporate email client works fine, as do the web browser and other normally used applications. The computer is scanned, and the machine is still clean of the vulnerability. Since all of the computers with this problem had another piece of software in common, that particular application is tested last. The application in question is started up and produces an error: a DLL file is corrupt or missing, and the user is prompted to insert the original software CD for the application. This is done, and the software repairs itself. The application now runs normally. Another scan reveals that the vulnerability is present once again. Looking at the folder on the computer where the vulnerable file resides, we see that sure enough the renamed file is still there, but the original vulnerable file has returned.

In this case, it is clear that another piece of software (not from Microsoft) is related to, and interacting with, the native Microsoft files of an MS Office installation. Not sure what to make of this, I called the vendor’s tech support, which revealed that the suspect Microsoft DLL may be used by their software, but they were not sure – it would have to be investigated further with their software developers. There are known versions of the DLL file that are not vulnerable, so the hypothesis was that replacing the offending DLL with a non-vulnerable version would fix the problem. Replacing it with a non-vulnerable version allows the software to operate normally and error free. A re-scan of the computer now shows that it is vulnerability free as well.
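One way to keep an eye out for the returning file between full scans is a simple hash check. The sketch below is hypothetical - the file name is made up, and the hash in the list is actually the MD5 of an empty file, standing in for the hash of a real vulnerable DLL version:

```python
import hashlib
import os

# Stand-in: MD5 of an empty file. A real list would hold the hashes of
# the actual known-vulnerable DLL versions.
KNOWN_VULNERABLE_HASHES = {
    "d41d8cd98f00b204e9800998ecf8427e",
}

def file_md5(path):
    """MD5 of a file's contents, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def vulnerable_file_returned(folder, filename="example.dll"):
    """True if a known-vulnerable copy of the file has reappeared."""
    path = os.path.join(folder, filename)
    if not os.path.exists(path):
        return False
    return file_md5(path) in KNOWN_VULNERABLE_HASHES
```

Hashing the contents, rather than trusting the file name or version stamp, is what catches the "self repair" quietly restoring the old file.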

Note: As of this writing, the software company in question has no intention of fixing this vulnerability in their software. I was in communication with them today and the tech support person I spoke with stated that the company will not be releasing a patch for this product - it is Microsoft's problem, evidently. This brings up the issue that a piece of third party software is latching onto a known application (Microsoft Office) for its functionality, and the vendors are not keeping up on the security ramifications of their software installing known vulnerabilities onto a computer.


Investigations Start with Patch and Scan Testing Process:

It is quite clear from the events discussed in the two parts of this article that a proactive strategy for patching and scanning is in order. Such a strategy ensures that vulnerability scanning is built in to the patch testing process so that 1) patches are verified as being applied and as not having adverse effects on the system, and 2) the vulnerabilities that the patch is meant to target are actually being remediated. Testing the patches as they are received will ensure that they apply properly and do not break applications. A follow-up deployment of the patches to a pilot group will then give them more rigorous testing in a real environment, and allow IT staff to clear up any problems quickly before deploying to the full production environment. Once this is done, a follow-up scan on those same pilot computers will verify whether or not the applied patch mitigated the vulnerability. If it did, then the desired goal was achieved. If it did not, then it is time for an investigative process to find out whether 1) the patch is not doing its job, or 2) the scanner is alerting on a false positive condition. This process allows scanner alert anomalies to be discovered as soon as possible, and a fix to be developed before the scanner hits the full production environment.
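The decision point after the pilot re-scan can be sketched as a tiny piece of triage logic. This is purely illustrative - the status strings are assumptions, not part of any real scanner's output:

```python
def triage(patch_applied, scanner_alerts):
    """Classify a pilot machine after patching and re-scanning."""
    if patch_applied and not scanner_alerts:
        return "remediated"      # patch did its job; goal achieved
    if patch_applied and scanner_alerts:
        # Patch is on but the scanner still complains: either the patch
        # is not doing its job, or the scanner hit a false positive.
        return "investigate"
    return "patch-failed"        # patch never applied cleanly

print(triage(True, False))   # remediated
print(triage(True, True))    # investigate
print(triage(False, True))   # patch-failed
```

The middle case is the one this whole series is about: only an investigation can tell a broken patch apart from a false positive.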


It is important to note that testing patches and developing vulnerability remediations can be tricky, in that hidden causes are sometimes not found right away. This was evident when scenario 4, as described above, brought to light newly discovered problems in a situation that was thought to be previously resolved. For this reason, it is important to carefully choose the users who will be in the pilot group for the second phase of patch testing. They should be fairly computer-savvy users who know how to properly respond to error messages and how to carefully document any problems they run into. This group will know that such errors may occur and won’t fly off the handle when they do; they will calmly notify their IT support staff rather than panicking and clicking through all the error messages before the IT staff has had a chance to see them and work the issues. Having said all that, let’s take a look at the chronological steps that would take place in this whole testing and investigative process.


The Steps (in chronological order):

  1. The new patches are released from the vendor and the new cycle of patch and scan testing begins.
  2. Non-production machines in a lab and/or virtualized environment are scanned and verified clean of all vulnerabilities before patch testing begins.
  3. All discovered vulnerabilities are remediated on the designated test machines before patch testing begins. Those that cannot be remediated are documented with the reason why they cannot be resolved (i.e., false positive, etc.).
  4. The new patches are first tested on the non-production machines in lab or virtualized environment.
  5. All applications on the lab machines are tested for proper operation, verifying that no errors occur on the machines.
  6. The scanner profile is verified to have the proper checks for the latest patches and other newly discovered conditions.
    • Note: This often happens after the new patches are released, and it can sometimes take a few days for the new scanning profiles to be configured on the scanner. However, steps 1 – 5 can be performed prior to the new scanner profiles being configured. Step 7 and beyond, however, are dependent on the scanner being configured to look for the new patches that are being tested in this phase.
  7. A test scan is performed on the lab machines to verify that they are free of vulnerabilities. Any vulnerabilities found are investigated and resolved.
  8. Patches are deployed to the designated pilot group of production users.
  9. The designated pilot users are to use their computers for a pre-determined testing period. Three days to one week is recommended for this testing period.
  10. A sample of this pilot group is selected for another verification scan, and the scanner is run against these machines to verify that the machines are clean of the vulnerabilities that the new patches were meant to mitigate.
    • Note: This step can be done concurrently with the operational testing period described in step 9.
    • Any vulnerability conditions that are related to the new patches that exist as a result of this scan are investigated, documented, and solutions determined.
  11. The new patches are deployed to the remainder of the production machines.
  12. Full scan of the production environment is run.
    • Note: The full scan of the production environment to look for the new patches should take place only after allowing sufficient deployment time. This will vary depending on the size and geographical diversity of the organization.
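The verification loop at the heart of the steps above (steps 7 and 10) boils down to a set comparison: which of the vulnerabilities the patch batch was meant to fix still show up in the follow-up scan? A minimal sketch, with hypothetical MS07-xxx bulletin identifiers:

```python
def verify_patch_cycle(targeted_vulns, post_patch_findings):
    """Compare the vulnerabilities a patch batch was meant to fix
    against what the follow-up scan still reports.

    Anything still reported after patching goes to the investigation
    queue: either the patch isn't doing its job, or the scanner is
    alerting on a false positive condition."""
    remediated = targeted_vulns - post_patch_findings
    outstanding = targeted_vulns & post_patch_findings
    return {
        "remediated": sorted(remediated),
        "needs_investigation": sorted(outstanding),
    }
```

Findings unrelated to the current patch batch are deliberately left out of both buckets; they belong to the regular remediation process, not this cycle's verification.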


Wrapping It All Up:

Having a standardized, methodical approach to patching and scanning helps give more structure to the whole process. Using a checklist, whether the one above or a locally developed one, helps ensure that testing is performed properly. It is easy to overlook things, and very easy to be led down an incorrect path when investigating the types of situations mentioned in this series. It is important to use several different tools and analyze the similarities and differences in the information that each of the tools provides.


So the lesson learned in this whole exercise is that IT staffs should be less prone to jumping on the “False Positive” bandwagon, and more inclined to using research and investigative techniques to find out what is really happening. Don’t rely on just one analysis tool or set of data to make a conclusion. Security is hard work, and often involves many steps to get it right. Overlooking even a single vulnerability by claiming that it is a false positive gets it off your to-do list, but doesn’t actually clear it up – your machines are still vulnerable. If the bits are on the box, you MUST remediate. Calling it a false positive when it is not does not constitute a valid remediation strategy.

Use some industry respected assessment tools, come up with a good (consistent) methodology, search for clues, and above all else – do some research and investigation! As a line from the movie Apollo 13 goes – “Work the problem! Don’t make it worse by guessing!” Guessing that it is a false positive is a dangerous habit to get into.

Sunday, March 04, 2007

Investigating False Positives and Other Security Anomalies Part 1

Find vulnerabilities on the computers on your network, apply a patch, and all done, right? Well, maybe, and maybe not. Part of any good security program includes using a variety of tools to assess the risks in your environment. Specifically, I am talking about the periodic vulnerability assessments that are performed on the desktop and server computers in your network. Let’s assume you are an all-Windows shop for the moment. On the most fundamental level, you get this risk assessment done for you every time you visit the Windows Updates site on the Internet. The Windows Update site uses a scanning engine to determine what is installed on your computer, what the most current patching levels are, and whether or not your computer has those patches. The same goes for your antivirus software – you are given the latest updates based on the most currently known threats and whether or not you currently have the definitions for those threats (in this case, viruses).

Unfortunately, Windows Updates and your antivirus software aren’t the final and definitive answer about the status of your computer’s security level. The same is true for any other single security assessment tool. In a large enterprise environment, it is often necessary to use a variety of tools to assess whether or not you are protecting your systems. The information from one can often be used to validate or refute the information from the others. In other words, having multiple tools gives you a system of “checks and balances” in determining the total picture. Having multiple tools is also a good way to aid in investigations concerning the validity of security assessment information.

The purpose of this article is to define a few types of vulnerability assessment tools, define the types of vulnerability indications, and discuss situations where investigation may reveal a different story than what was originally thought to exist. It is important to understand that vulnerability scanning and other assessments aren’t necessarily straightforward or cut and dried insofar as the information they can tell you about what vulnerabilities exist. You often have to rely on your skills as an investigator to uncover the real story and figure out how to truly remediate the situation.

First some definitions: It is useful to understand the types of vulnerability situations that may be incurred at any given time. The true existence of a vulnerable item or configuration must be known in order to remediate or mitigate the vulnerability. It is also important to understand the different types of tools used to obtain vulnerability information. To get an idea what I mean by that, take a look at the following definitions:

Patch Management System: A system, usually centrally managed, that is used to assess patch statuses on end systems, determine which patches are applicable, and which patches need to be applied based on patching levels of the target system. Such a system can then be used to deploy the patches to the end nodes, and return a follow-on assessment of whether or not the patch successfully applied. Windows/Microsoft Updates and Microsoft Baseline Security Analyzer (MBSA) are fundamental examples of this. Large enterprise environments may choose to use Microsoft WSUS or SMS, or third-party tools such as Shavlik, Ecora, or PatchLink. These types of systems usually look at what patches are needed based on what operating system and software are installed.

Vulnerability Scanner: These types of tools are a bit different in scope and purpose from patch management systems. Whereas patch management systems look for needed patches based on what is installed and operating on the system, a vulnerability scanner usually looks deeper for the existence of certain files and registry entries, whether or not the files and settings are actually used. Scanners can also look for other vulnerable configurations, such as too many admin users on the box, passwords that don’t expire, and similar items. Vulnerability scanners typically have scanning profiles that are based on CVE data and other vulnerability definitions. These profile definitions usually tell the scanner to look for the existence of certain file versions and date stamps that are known to be vulnerable. It doesn’t matter whether these files are actually in use or are just leftover files from an upgrade. If they exist, the computer is vulnerable. As the saying often heard in the security community goes: “If the bits are on the box, you MUST remediate.”
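The "look for known-vulnerable file versions" behavior described above can be sketched as a simple version comparison. The version numbers in the test are hypothetical; a real scanner profile would carry the first-fixed build number from its CVE-based definitions:

```python
def parse_version(version_string):
    """Turn '10.0.2619.0' into (10, 0, 2619, 0) so components compare
    numerically; naive string comparison would say '10' < '9'."""
    return tuple(int(part) for part in version_string.split("."))

def is_vulnerable_version(file_version, first_fixed_version):
    """Flag the file when its version predates the first fixed build,
    regardless of whether the file is actually in use -- this mirrors
    the 'bits on the box' behavior of a vulnerability scanner."""
    return parse_version(file_version) < parse_version(first_fixed_version)
```

Note the deliberate absence of any "is this file in use?" check; that difference from a patch management system is exactly what surfaces leftover files from upgrades.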

True Positive: This means that a vulnerability item has been found, and it is correctly identified as existing on the computer. This is what is commonly referred to as a “known/known” situation.

True Negative: One or more vulnerabilities that were being tested for were not found on the target machine, and they were correctly ruled out from existing on the machine. Again – this is a situation of “known/known” data.

False Positive: A vulnerability was detected on the target, but investigation reveals that it was incorrectly identified. The danger here is that the finding may be ignored in the future, even if the same detection later reveals a true positive situation. The other danger is that something appears to be a false positive, but investigation reveals another condition that caused the false positive to occur. Is this, then, really a false positive? More on that later.

False Negative: A vulnerability condition exists on the target machine, but the vulnerability assessment tools failed to identify it as being present. This is the most dangerous of all situations – your nodes are vulnerable, but you don’t even know it.
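The four definitions above amount to a confusion matrix: the scanner's report crossed against the ground truth that your investigation establishes. A minimal sketch:

```python
def classify_finding(scanner_reported, actually_vulnerable):
    """Map a scanner result against the ground truth established by
    investigation into the four outcomes defined above."""
    if scanner_reported and actually_vulnerable:
        return "true positive"
    if scanner_reported and not actually_vulnerable:
        return "false positive"
    if not scanner_reported and actually_vulnerable:
        return "false negative"  # the most dangerous case
    return "true negative"
```

The catch, of course, is that `actually_vulnerable` is never handed to you; establishing it is exactly the investigative work this series is about.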

For the reasons alluded to in the definitions above, it is necessary to use a combination of these tools to get a true picture of an end node’s vulnerability status. Multiple tools that agree on vulnerability information can leave you with a pretty good level of confidence about whether or not vulnerabilities exist. If the tools don’t agree, however, then that should immediately cause you to launch an investigation to determine the true status. Unfortunately, the false negative situation may still exist. As mentioned before, this is truly the most dangerous situation where your vulnerability status is concerned, because you really “don’t know what you don’t know.” And your tools may simply all be in agreement that a vulnerability does not exist when it really does. Fortunately, this particular scenario in which all tools agree on a false negative is very rare.

False Positive? For an example of how a seemingly “false positive” situation exists that is worthy of further investigation, consider the following scenarios:

Scenario 1: A computer on the centralized patch management system shows that it is completely patched and up to date. Even a visit to the Windows Updates site reveals that the computer is up to date – no critical patches are offered. A later scan with a vulnerability scanner reveals that a vulnerability item exists on the computer. An in-depth investigation includes using multiple tools to verify the vulnerability – the patching tools once again all say that the computer is patched, and the vulnerability scanners again all say that it is vulnerable. Further investigation leads to a review of the session logs generated when the vulnerability scanner performed its scan. The logs reveal that a particular DLL file exists in the \Common Files\Office10 folder of the computer. You immediately say “Wait a minute!” because you know that the computer is now running MS Office 2003, which uses the Office11 folder for its files. The real story is that the computer was upgraded from MS Office XP to MS Office 2003, and the vulnerable DLL file is just a leftover. Here we have a seemingly false positive, but really and truly, the bits are on the box – the box is vulnerable. “If the bits are on the box, you must remediate.”

In the case above, testing revealed that simply renaming the file removed the vulnerability. The file is renamed rather than deleted, by the way, in case doing so breaks an application and a roll-back is needed for troubleshooting. If it turns out that the file is indeed needed by some other program, then a non-vulnerable version can be obtained from the vendor and the vulnerability will still be resolved. Exploits often target a file of a specific name, and if the file is not found (even when it has merely been renamed), the exploit won’t be effective.
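A rename-instead-of-delete remediation like the one described above is easy to script so that roll-back is always available. A minimal sketch; the backup suffix is an arbitrary choice:

```python
import os

BACKUP_SUFFIX = ".vulnerable_bak"  # arbitrary suffix; any unused one works

def quarantine_file(path):
    """Rename a vulnerable leftover file instead of deleting it, so the
    change can be rolled back if an application turns out to need it."""
    backup = path + BACKUP_SUFFIX
    os.rename(path, backup)
    return backup

def rollback(backup):
    """Restore the original file name from its quarantine backup."""
    original = backup[: -len(BACKUP_SUFFIX)]
    os.rename(backup, original)
    return original
```

Keeping the backup path as the return value makes it easy to log exactly what was renamed, which matters when the pilot group starts reporting application errors days later.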

Scenario 2: Your patch management system says that your computers are all the way patched, good to go (hint: all the way patched for the current software installation). Your vulnerability scanner then says that the computers are vulnerable for several MS Office patches. You double check, and sure enough, the patch management system says that those patches are applied. A visit to Windows Updates offers you no critical patches. So you dig out your MBSA tool and do a few scans. MBSA even says that those particular patches are applied. But wait a minute: you look further into the MBSA scan data and it reveals that a service pack for MS Office has not been applied. You apply the service pack. You perform another vulnerability scan – the Office patches are still needed. Now you go to the Windows Updates site and your centralized patch management system, and wouldn’t you know it? Those patches called out by the vulnerability scanner are now needed. You apply the patches and you are now clean.

How did this happen? Remember: I gave you the hint in the beginning of the scenario that the computers were patched for the software that was currently installed. Installing that MS Office service pack significantly changed that installation and the old patches that were installed only applied to a computer that did not have the latest service pack. There were newer versions of those same patches that applied to the latest service pack. You applied these new patches, and then all of your assessment tools show that you are now clean and fully up to date.
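The underlying mechanism in scenario 2 is that the same security fix ships as a different patch build for each service-pack level, so the set of "needed" patches changes the moment the service pack is applied. A minimal sketch with hypothetical patch identifiers:

```python
# Hypothetical mapping of service-pack level to the patch build that
# applies at that level; real data comes from the vendor's bulletins.
PATCH_BY_SP = {
    "office_xp_sp2": "KB873378-sp2",
    "office_xp_sp3": "KB873378-sp3",
}

def required_patch(sp_level):
    """The same fix ships as a different patch per SP level, which is
    why applying a service pack re-opens previously 'patched' items."""
    return PATCH_BY_SP[sp_level]

def is_patched(sp_level, installed_patches):
    """True only if the patch matching the CURRENT SP level is present."""
    return required_patch(sp_level) in installed_patches
```

Note that `is_patched` takes the service-pack level as an input: a machine can flip from "patched" to "unpatched" without a single installed patch changing, purely because the SP level moved.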


Wrapping it All Up:

Vulnerability assessments are a vital part of your security program, and often involve using multiple tools to be effective. One tool will act as a system of “checks and balances” against each of the others. It is easy to get lulled into a false sense of security when relying on only one tool – it is even possible that all of your tools together won’t find all of the problems. Remediation is not always straightforward either. Applying the patches doesn’t fix everything, and sometimes your system is vulnerable to things that can’t be patched. In some cases, simply applying a service pack drastically changes the picture.

It is often necessary to rely on investigative skills and go in directions that are seemingly irrelevant. But by relying on multiple tools, performing sound investigations, and keeping up with due diligence, it is possible to minimize risk on your network and keep the threats somewhat in check. Remember: you will never eliminate risk completely. You can only hope to minimize it. But by having valid data from a variety of sources, you can make prudent risk assessments and ensure that your environment is as secure as possible.



On to Part 2


Additional Resources:

Microsoft Baseline Security Analyzer: http://www.microsoft.com/technet/security/tools/mbsahome.mspx

Vulnerability Scanners Explained:
http://www.windowsitpro.com/Article/ArticleID/43888/43888.html

Free Vulnerability Scanning Tools:
http://netsecurity.about.com/od/vulnerabilityscanners/Free_Vulnerability_Scanning_Software.htm

Retina Single Audit Scanners:
http://www.eeye.com/html/resources/downloads/audits/NetApi.html

Foundstone Free Scanning Tools:
http://www.foundstone.com/

The Dirty Dozen: 12 Ways to Kill False Positives:
http://www.bcs.org/server.php?show=ConWebDoc.9384