Filtering by Category: Tech

Is Big Data Analytics for Security really just SIEM 2.0?

A lot of companies are now touting big data analytics that will find badness your other security tools miss.  Almost every pitch I see includes something very much like the SIEM funnel from 2005, and the assertion tends to be that these solutions would have bubbled those two or three compromised Target servers up to the top.  It will be interesting to see what this segment of the security market looks like in two years. Those of us who bought into the SIEM market found out that a SIEM is very needy in terms of man-hours.  There are several reasons for this:

  • Getting all the data feeds going (no small feat in most companies)
  • Building network and asset models
  • Creating rules, dashboards, reports, etc
  • Tuning out false positives
  • Troubleshooting the inherent performance problems of processing hundreds of millions of events per day

In short, you spend a lot of time up front getting the system going and ongoing effort keeping it running.
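To make the "creating rules" and "tuning out false positives" items concrete, here is a toy sketch of a threshold-based correlation rule.  The event format, the rule, and the threshold are all invented for illustration; this is not any vendor's rule language, and picking the right threshold per environment is exactly the tuning work I'm talking about.

```python
from collections import defaultdict

def failed_login_alerts(events, threshold=5):
    """Fire an alert when a source IP reaches `threshold` failed logins.

    `events` is a list of (source_ip, event_name) tuples standing in
    for normalized SIEM events.
    """
    counts = defaultdict(int)
    alerts = []
    for source_ip, event_name in events:
        if event_name == "failed_login":
            counts[source_ip] += 1
            if counts[source_ip] == threshold:   # fire once per source
                alerts.append(source_ip)
    return alerts

events = ([("10.0.0.5", "failed_login")] * 6
          + [("10.0.0.9", "failed_login")] * 2)
print(failed_login_alerts(events))   # only the noisy source trips the rule
```

Multiply that by hundreds of rules, each needing its own threshold per network, and the man-hour problem becomes obvious.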

These new products want all your SIEM data plus many other sources like DNS queries and netflow.  Think about that volume of data for a second.

Here are the typical promises for today's big data security platforms:

  • No rules to create
  • Zero (or very few) false positives
  • Scalability - these systems will not be over-run with data
  • Little to no analyst time required to find badness
  • Full kill chain visibility for compromises

I suspect the majority of these systems will fall short of the promises laid out above.  However, I do believe that a couple of years from now we will have some great tools that make us wonder whether we really need that SIEM.  Oh, and by the way, this is not a "SIEM is dead" post by any means.  There will almost always be a place for SIEM in SOC environments.

The "P Squared" security strategy - Procrastination Pays

There has been a lot in the press about the Heartbleed vulnerability lately. If you want more details on the vulnerability itself, read Troy Hunt's article entitled "Everything you need to know about the Heartbleed SSL bug".   Great, well-rounded article. What does Heartbleed have to do with procrastination, you ask?  Well, if you'd done what a lot of companies do, you'd have ignored old software and let it sit there.  Had you done that with OpenSSL, you're probably good!   You would not be scrambling to get emergency maintenance windows and having meetings with the CIO about the risk of rolling out half-tested OpenSSL patches versus taking the time to test them thoroughly.  You would not be carefully crafting a message explaining to your users that you were vulnerable and that they need to change their passwords.

No, had you followed what I call the "P2 security strategy" (aka Procrastination Pays), you'd be chillin' like Bob Dylan.  You would be able to tell the CIO, "We're good.  That vulnerability does not affect us at all.  Tight security is how we roll."  You'd proudly tell your users that their data is safe with you because you were not susceptible to that latest bug in the Wall Street Journal.   Damn, it feels good to be a gangsta.

Realize this, though: you traded this one highly visible vulnerability for several others.  It's just that those vulnerabilities did not make the media circuit.   Your CIO does not even know the weaknesses exist.   There was one very similar (though less severe) OpenSSL vulnerability in early 2012, which is just about the time the Heartbleed bug was introduced.  So you're probably susceptible to a very similar attack, but nobody knows it.

The moral of the story is this: keep your stuff up to date, because you'll have to pay the piper eventually.

How did Target get hacked?

If you follow the news, you know that Target got hacked to the tune of at least 110 million credit card numbers (and some PINs) lost.  But how did it happen?  Hardly anyone is asking or answering that question.  You can find plenty of articles that tell you what happened once the attackers got in:  memory scraping on the POS devices and servers, a Russian teenager, famous coders, etc.

My question is different.  How did they get in initially?  I think as an industry we focus too much effort on what happens after the attackers get in.  Do not misunderstand me.  We absolutely need to scope the breach and determine what happened and what was stolen or changed.

We need to spend more time, money, and technology on understanding exactly how the compromises are being made.  I was just talking with another security professional recently who was telling me which versions of Java the current variants of Zeus were exploiting.  Guess what?  Zeus doesn't exploit anything.  It does not take computers over.  Zeus is just a piece of software that can be installed by anyone with administrative access to the computer.  It is what some people call "Stage 2" malware.  In the Cyber Kill Chain, this would be the Install phase.

There is a whole world of what we call "Stage 1" malware.  Some of these software packages are also called "exploit kits," as the space has gotten pretty commercial.  Ones that come to mind are Blackhole, Cool, and Phoenix, among others.  There are custom exploit tools as well.    In the Cyber Kill Chain, I'm talking about the Exploit phase.

The problem with what I'm asking is that it is not easy to find out how computers got exploited.  There are very few tools on the market that give you visibility into Stage 1 malware.  FireEye and Mandiant (now one company) make tools that help.  Most anti-virus vendors really focus on Stage 2 malware.  In other words, they are looking for the malware that makes the news, like Zeus and others.

Typically, Stage 1 malware (the real exploit) is deleted from the box after its job is done and the Stage 2 malware (Zeus, BlackPOS, etc.) is installed.  That's why it is hard to determine how the computer got "infected".

If we take the money we would spend on that latest silver bullet security product and double down on visibility and process, we can really cut down on large intrusions like the ones at Target and now Neiman Marcus.

Here is a list of action items off the top of my head.  I'd like to drill into these in later posts.

  • Build visibility into networks and computers
  • Design an ecosystem to capture that visibility
  • Make it easy to search and narrow down events by time
  • Have your users send you anything they feel is suspicious
  • Determine what exactly got exploited by analyzing both the events and the user input
    • You need people to do this:  Analysts
    • This is where you get some of your best threat intelligence, by the way
  • Measure and track the exploits seen on your network
  • Research what vulnerable pieces of software are hit most often on your network
  • Uninstall that vulnerable software OR put a lot of rigor around patching those vulnerable applications
  • Feed current threat intelligence (not lists of 90000 bad IP addresses) into your detection platform
  • Measure time to detect and remediate exploits and work hard to lower that time.
  • Look for data leaving your company.   Show that to management.  Often.
  • Demonstrate the tie between the trend of exploits and data leaving the building.
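The "measure time to detect and remediate exploits" item above is very automatable once incidents carry timestamps.  A minimal sketch; the record format and the sample times are made up for illustration:

```python
from datetime import datetime

# Hypothetical incident records: (compromised, detected, remediated)
incidents = [
    ("2014-01-02 08:00", "2014-01-02 20:00", "2014-01-03 08:00"),
    ("2014-01-05 09:00", "2014-01-05 10:00", "2014-01-05 18:00"),
]

def hours_between(start, end):
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Mean time to detect and mean time to remediate
mttd = sum(hours_between(c, d) for c, d, _ in incidents) / len(incidents)
mttr = sum(hours_between(d, r) for _, d, r in incidents) / len(incidents)
print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```

Track those two numbers over time and you have a trend line for management.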

There are companies and vendors that get this and are working hard to solve the problem as stated above.  Other companies just want to sell you "signature update" subscriptions on an annual basis and are not really interested in solving the problem wholesale.  The companies most interested in selling subscriptions are short-sighted, because there will always be a better mouse no matter how good we make the mousetraps!

Just remember:  Stage 1 malware (aka the exploit) and kick-butt Incident Response are where the money is.  If the bad guys cannot get access to the computer, they cannot install their cool botnet or memory-scraping software.  And even when they are successful, they will not have time to stage these large hacks (a la Target, TJX, Sony, etc.) if the Incident Response team kicks them out quickly.

Until next time...


MIRcon 2013 Day One overview

Richard Bejtlich (CSO) kicked off Mandiant's MIRcon 2013.  He talked briefly about the past year, including overviews of two public incidents, and introduced Kevin Mandia (CEO). The two incidents Bejtlich described are:

Kevin Mandia talked about the evolution of computer incidents, from both the attacker and defender perspectives, from when he started in security (1993) until today.  Good stuff.  He even talked a little about ASIM (Automated Security Incident Measurement), the old Air Force system that we used back in the 1990s.  I remember when they started rolling those out.  If memory serves, it was around 1996 or so.

Two points that Kevin made stood out in my mind:

  • Defenders should align vertically the way attackers tend to do.  Bottom line:  share with like-minded folks in your industry or sector.  He noted that attackers tend to organize by sector and build expertise on similar companies.
  • We need to reduce containment times down to ten minutes.  Yes, that is aggressive.  Is it fast enough?  Kevin's simple answer:  Yes.  :-)

Grady Summers hosted a panel of folks who do real-world response for Mandiant.  Keeping with my theme of twos, there were a couple of things I took away from this panel:

  • Trends show a decrease in the use of malware for maintaining persistence in organizations.   What this really means is that attackers are obtaining and using legitimate user credentials much more often.  This really raises the complexity for incident responders.
  • Attackers are using legitimate sites to get around domain blocking or blacklisting.   Sure, this one is in the weeds, but it struck a chord with me.  The primary example they mentioned was using Google Translate or Babelfish to fetch C2 instructions from online discussion forums.  That is beautiful in its simplicity and effectiveness.  Our global economy requires language translation from time to time.    Another reason it struck me, after the fact, is that Mike Siko blogged about this almost two years ago and I missed it!  Bottom line:  make sure your web proxy is configured to look for URLs inside GET requests.
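The "URLs in GET requests" check can also be approximated offline against proxy logs.  A sketch under assumptions: the hostnames are hypothetical, and the heuristic (a full URL embedded in another URL's query string deserves a look) is mine, not from the panel:

```python
import re
from urllib.parse import urlparse, parse_qs, unquote

def embedded_urls(request_url):
    """Return full URLs hiding inside another URL's query string."""
    found = []
    for values in parse_qs(urlparse(request_url).query).values():
        for value in values:
            found += re.findall(r"https?://[^\s&]+", unquote(value))
    return found

# Hypothetical proxy-log entry: a translation service asked to fetch a forum
req = ("http://translate.example.com/translate"
       "?sl=ru&u=http%3A%2F%2Fforum.example.net%2Fthread%2F42")
print(embedded_urls(req))   # ['http://forum.example.net/thread/42']
```

Run something like this over a day of proxy logs and eyeball what your users are asking third-party sites to fetch on their behalf.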

There are two main tracks for the conference:  Management and Technical.  I attended sessions from both tracks and will put out some thoughts from those sessions if anyone is interested.

Former FBI director Robert Mueller gave a late-afternoon keynote.  His talk warrants a dedicated blog post.  Lots of wisdom from him.  It was a great way to close the day.

What Makes a Good Vulnerability Management Program?

This is a question I have been pondering and talking about lately.  Full disclosure:  I'm not the best person to answer it, as I have been primarily and deliberately on the Threat side of security for a while now.  My experience has been that a lot of vulnerability programs are really just network or application scans with a compliance piece bolted on.   Having said that, here's a vulnerability program according to me, with some ideas thrown in from recent conversations with others. A vulnerability program needs the following steps:

  • Understand your environment
  • Identify vulnerabilities
  • Validate
  • Prioritize
  • Remediate (or accept or mitigate)
  • Report on all of the above.
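The steps above amount to a small state machine per finding.  As a minimal illustration (the state names simply mirror the list; none of this is from a real product):

```python
# States mirror the program steps; "accepted" and "mitigated" are the
# alternatives to remediation from the list above.
STATES = ("identified", "validated", "prioritized",
          "remediated", "accepted", "mitigated")

class Finding:
    def __init__(self, host, title):
        self.host, self.title, self.state = host, title, "identified"

    def advance(self, state):
        if state not in STATES:
            raise ValueError(f"unknown state: {state}")
        self.state = state

def report(findings):
    """'Report on all of the above': count findings per state."""
    summary = {s: 0 for s in STATES}
    for f in findings:
        summary[f.state] += 1
    return summary

f1 = Finding("web01", "outdated Apache")              # hypothetical finding
f1.advance("validated")
f2 = Finding("db02", "weak SNMP community string")    # hypothetical finding
print(report([f1, f2]))
```

Even a spreadsheet with these six columns beats a raw scanner export.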

If you're not careful though, it can easily turn into a compliance program rather than a security program.   Let me explain.

Vulnerability Management for compliance

A lot of organizations start by picking a tool to scan the network looking for known bad stuff.  These tools generally have a reporting engine and might even have a way to flag false positives and track remediation.  Most everybody realizes there are false positives and filters out a few glaring exceptions.  Once that is done, they agree that the list of things left is bad and must be fixed.  It becomes a routine, the metrics get better, and they may even approach zero.  Man, that looks great in the monthly operations review.  It looks like a very mature program, and it is.  The question is:  are they more secure?  Absolutely.   Every organization should get to this place.

Once your network hits a certain size, you have to be more selective about where to scan.  We have data centers that literally take tens of hours to finish a full network scan.   There are all kinds of shortcuts you can take, like doing a ping sweep first or focusing on a particular service (HTTP, SMTP, etc.).   You'll start to miss things, by design.  You may have critical services that run on non-standard ports, or boxes with host-based firewalls that don't answer pings.  Now what?  This is the point where a lot of organizations start to flesh out the reporting, show progress, and advertise their program's maturity.
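One reason those shortcuts miss things: a host behind a host-based firewall may drop pings yet happily accept TCP connections on a non-standard port.  A minimal sketch of a TCP connect probe, demonstrated against a throwaway local listener (and obviously, only probe networks you are authorized to scan):

```python
import socket

def port_open(host, port, timeout=1.0):
    """True if a TCP connection succeeds; no ICMP involved at all."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: a throwaway listener on an OS-assigned (i.e. non-standard) port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))   # True: found without answering a ping
listener.close()
```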

The scenario above will take care of what I'm starting to call "commodity vulnerabilities":  pretty well-known weaknesses that no network should have.

Now enter this whole APT world.  I'm pretty sure most professional attackers know what commercial IDS and vulnerability scanners look for.  These attackers likely have the major commercial tools in their labs.  That is their baseline.  It's their starting point.

Vulnerability Management for security

Let's look at what the next steps could be, using the list of steps above.  This is where pen testing or "red teams" and different tools come into play.

Understand your environment

Go out to the business and have conversations about what data is important.  What data can have the biggest impact on your business if it gets into the wrong hands?  People generally know.  Sometimes they don't.  Start the conversation, build trust, and people will open up.

Start your search for vulnerabilities as close to the important data as possible and work outwards from there.   Remember also to protect administrators, their credentials, and the hosts they use just like the data repositories.

Identify vulnerabilities

This step can start with the commercial vulnerability scanner from the compliance scenario above.  It cannot stop there, though.  You can use slightly more offensive commercial tools like Metasploit to get your feet wet.  Then probe systems manually:  look at header data coming back from servers, try to modify URLs of web applications, etc.  Write scripts to help you.   Automate as much as you can and no more.  One trick here is to keep it simple enough that you can produce metrics.
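As a tiny example of "look at header data coming back from servers," the HTTP Server header alone often leaks product and version.  A sketch using only the Python standard library; the commented-out hostname is a placeholder, not a real target:

```python
from http.client import HTTPConnection

def server_banner(host, port=80, timeout=5):
    """Return the HTTP Server header, which often leaks product/version."""
    conn = HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server", "(not disclosed)")
    finally:
        conn.close()

# banner = server_banner("intranet.example.com")   # placeholder host
```

Loop that over your web servers and diff the results week to week; surprises in that list are findings.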

Validate the vulnerabilities

Once you have vulnerabilities identified, you have to validate them.  An analyst with a solid technical understanding of the vulnerabilities is required here.

Is it possible to cause a denial of service?  Can you actually get data?  In other words, make sure you can state the issue in plain language and that your assessment is accurate.  This sounds obvious, but I've seen commercial scanners report a particular instance of Apache as out-of-date, only to go to the server and find the current version of Apache installed.  Tools lie sometimes.  If you do not validate the vulnerabilities, eventually your team's credibility will suffer.  That's bad news for a vulnerability program.  It's a hard enough sell when people trust you.

Prioritize the vulnerabilities

This really is a key step and a lot of people mess it up.  Again, an analyst with technical chops is required.

Obviously, a denial of service is less important than the ability to execute arbitrary code, right?  Well, not if the denial of service is on your main customer support site.   These are the kinds of judgments that are difficult to put into a tool.
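You can at least make that judgment explicit instead of leaving it in an analyst's head.  A deliberately simple sketch; the weighting scheme and the numbers are invented for illustration, not a standard:

```python
def priority(severity, criticality):
    """Technical severity (0-10, CVSS-like) times business criticality (1-5)."""
    return severity * criticality

# Invented findings: note the "lesser" DoS outranks the RCE once the
# business importance of the asset is factored in.
findings = [
    ("RCE on an isolated lab box",    9.8, 1),
    ("DoS on the main support site",  5.3, 5),
]
for name, sev, crit in sorted(findings, key=lambda f: -priority(f[1], f[2])):
    print(f"{priority(sev, crit):5.1f}  {name}")
```

The point is not the formula; it is that the analyst's reasoning gets written down where it can be argued with.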


Now that you've got real vulnerabilities and know which ones you want to tackle first, what are you going to do with them?  You have three options:

  • Accept
  • Mitigate
  • Remediate

Accepting the vulnerability is just that.  In pretty much any compliance framework, the management of an organization has the flexibility to simply ignore something as long as they document it.  This officially puts an issue out to pasture.  This is where your best bet is to put on your consulting hat.  Do not get emotionally attached to your advice if you see that senior leadership is going to accept the risk.  It's their business, not yours.  Learn FIDO:  Forget It and Drive On.  There are other vulnerabilities to fix.

Mitigating the vulnerability is putting something in place to try to prevent it from being exploited even though the actual weakness still exists.  For example, you could have a vulnerable web server, leave it unpatched, and just put a host-based firewall in front that only allows traffic to that web server from known, trusted hosts.   If someone is able to exploit one of those known, trusted hosts, you're out of luck.  But it's better than nothing.

Remediating the vulnerability is getting that weakness out of your network, as far as you know.  The cardinal rule of programming I learned over twenty years ago is:  there is always one more bug.  Still, this is the most secure route.  To get here, you need to learn how to explain the vulnerability in a way that resonates with the business owners and support staff of the systems.

Report on progress

This is the part where you toot your own horn.  More importantly, this is what builds your team's credibility over time.  The most effective programs combine vulnerability reporting with threat data from their network.  That can show that you're getting owned because you have not fixed problems or it can show the attacks being thwarted because you plugged the holes.  Either way, it shows you have a handle on your space.

The reporting will also help strategically with lessons learned, drive future technology deployments, and help define security standards such as baselines.   Tactically, the output of your vulnerability management program will aid in making intrusion detection more focused.  If you know where your weaknesses are, it would be a good idea to tighten up detection in those areas, if possible.

Be as specific as you can about the following:

  • What versions of a particular application are attacks targeting?  Contrast this to the application versions on your network.
  • Did you see data leaving your network owing to one of these vulnerabilities?  If so, show samples of this data if at all possible.  This makes it personal.
  • What parts of the business are affected by successful attacks?  This can help target user education programs.
  • What percentage of your company has a certain vulnerability?  Rarely is 100% of your company vulnerable.  Show where some parts are patched.  This gives you a foot in the door to patching everything else.
  • What progress is being made on remediation?  Everyone loves good news and hopefully this is good news.
  • How long does remediation take?  The Ops folks love to look at MTTR over time.

Set targets for your number based metrics and remember that Zero is a valid target.  Do you really want to advertise a goal that is more than zero successful attacks against your company's assets?  At the same time, 100% compliance on patching may not be achievable.  Do not let the perfect be the enemy of the good.

Well, that's my version of a vulnerability program for now.  I'd love to hear thoughts, suggestions, corrections, etc.

Wyman's Security Bites - Your daily security newsletter

Please check out my online security newsletter.   There is a link to it in the top menu of the blog as well; just click on News at the top.  These are news articles from my constant stream of open source intelligence about IT security and management issues.  Mostly security. There are two editions of the newsletter daily:  morning and evening.  The morning edition comes at 1200 GMT and the evening edition comes at 0000 GMT.  To save you the time zone math, that is 0800 and 2000 US East Coast time.

I firmly believe a security professional needs daily input on what is going on in the world.  This may have come from my military background.  Sure, we had closed sources of information, but pretty much everywhere I went, CNN or something similar was playing in the background.  The reason is simple:  closed sources of information will eventually lead to a closed mind about what is happening, as well as what is possible.  The world evolves very quickly if you're not paying attention!

So, subscribe to my online newsletter today, or at least get your own stream of external information to keep you informed on security events around the world.

My Security Mantra - "Nothing sells tires like nails."

If you've talked with me much about security, then you've heard the phrase, "Nothing sells tires like nails."   I use it all the time.  It's been written on my whiteboard in the office for a long time.  But why? I grew up near an old country store.  The guy running the store made his money selling tires.  He attracted customers, mostly farmers, by having a good supply of snacks along with a few seats around a heater in the back of the store.  Everyone knew everyone.  Small town.  The farmers would sometimes give the storekeeper a hard time.  They would playfully accuse him of throwing debris on the road out in front of the store when business was slow.  He would jokingly respond by saying, "Nothing sells tires like nails."

I heard this phrase many times in my youth.  As I got older I found this principle applies to many things and especially to security.

How many times have you bought tires because they were cool or sexy?  Probably not many, unless you are really into cars in some way.  Most people buy tires for one of two reasons:

  • A blowout  - Think security Breach
  • Failed inspection - Think Compliance

Security is pretty much the same way.  Companies tend to spend more on security when they get hacked or they fail an audit.  This may sound sad, but it is true.  Companies manage risk and spending in many areas.  Security is just one of them.

Security folks would do well to keep this in mind when both running and justifying a security program.   No manager wants a big fail on their resume.  Be truthful, realistic, and keep in mind why companies typically decide to spend money on security (or anything really).  This will help you scope the type of information you report up the management chain.

ArcSight Rule Testing Tip

One thing I learned today is patience when testing an ArcSight rule against old events.  This has always been sort of voodoo to me.  Maybe everyone knows this already, but I thought I'd share. When testing a new rule using the "Test" button (see image to left) in the rule definition window, there is a behavior difference.   It essentially uses an Active Channel and inserts your test rule into that stream of events.

So, it looks like any other active channel, with one exception.  The channel will speed through messages (examples below) such as "Percentage Complete", telling you it is retrieving events, and may even show a message saying "No data matches this query".    This does not necessarily mean your rule does not work.

Just let the Active Channel sit there for a few minutes and you may get results.  Apparently, when ESM says it is done, it is not always done.

This has tripped me up many times and I finally figured out it was lying to me today when my ADHD kicked in.  I came back to the channel later and had results.

So, I learned something new today after seven years of using ESM.   Sort of embarrassing, but hopefully it can help you test your rules better before putting them into production.

Until next time... Wyman

Custom Email Notifications with ArcSight ESM

Email notifications from your SIEM can be very useful, especially if you have a small team.  The default in ArcSight ESM is to dump every event field into an email.   While better than nothing, the format is hard to read and forces you to search for the information you need to work the event.  Enter custom notification templates. Disclaimer:  this works as of ESM 4.5.  I have not tested it on ESM 5.0.  I would love to hear your experiences!

There are a few moving parts, and I'll go through each below.

  • Rule Actions- You need at least two rule actions.  See screen shot below for an example.
    • Send Notification - Set AckRequired as you wish.   The NotificationMessage is what will ultimately be the subject line of the email.  Resource is where you pick what group of users the email will go to.  This will have to be a group.
    • Set Event Field Actions - This is the secret sauce.  You can technically pick any field name.  I typically use one of the flexString fields to avoid a conflict of that field being used in some other way.  For agentSeverity, it really is up to you and does not affect the email.

  • Notification conf file - This file lives in $ARCSIGHT_HOME/config/notification/, is named Email.vm, and contains the logic that decides which template to use when sending the notification.  In a clean install of ESM, it has only one option, the default template I mentioned at the top of the post.  It makes decisions based on a particular field; in this example, it is the flexString1 field shown in the screen shot above and the code snippet below.  The decisions are evaluated top to bottom using Velocity, so make sure any custom entries are above the default entry.  Add an entry like the one below to the file for each custom template you have.  The #parse line names the Notification Template File described in the next section.
#if($introspector.getDisplayValue($event, "flexString1") == "malware")
#parse ("Custom-Email-Malware.vm")
  • Notification Template File - This is the file that will literally be a template for the body of the email.  Remember, the subject line of the email is set in the Rule Action.    For this example, the template file name is "Custom-Email-Malware.vm".  The file format again uses Velocity and is pretty straightforward once you have seen one.  See the example below.
Description: $introspector.getDisplayValue($event, "name")
Event Time:  $introspector.getDisplayValue($event,"endTime")

User Name:  $introspector.getDisplayValue($event,"sourceUserName") 
IP Address: $introspector.getDisplayValue($event,"sourceAddress") 
Host Name:  $introspector.getDisplayValue($event,"sourceHostName")
Location:   $introspector.getDisplayValue($event,"sourceZoneName")

Target Port: $introspector.getDisplayValue($event,"targetPort")
Event Count: $introspector.getDisplayValue($event,"baseEventCount")

Extra Information (where applicable)

Description of Event
This computer appears to be infected with malware 

Why this is Important
Malware can take complete control of a computer remotely.

Next Steps
Start the malware infected procedure on this computer.

Note that all the field names in both the config file and the template file start with a lowercase letter, have no spaces, and capitalize each word except the first.  This can bite you if you are not careful.
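If that camelCase convention bites you often, a tiny helper can derive the field name from a display name.  This mapping is just my own convenience sketch, not anything ArcSight provides:

```python
def field_name(display_name):
    """Derive the likely field name: lowercase first word, no spaces."""
    words = display_name.split()
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])

print(field_name("Source User Name"))   # sourceUserName
print(field_name("Flex String 1"))      # flexString1
```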

Good luck!  If you have questions, comments, or suggestions, please leave comments below and I'd be glad to help.

What should I learn to get into IT security?

Several people have asked me over the years some variation of "What should I learn if I want to do IT security work?" This is a hard question to answer without knowing your goals and interests.   However, most people I talk to have only one very high-level goal:  get into security.

Here's what has been most beneficial for me on the technical side in no particular order.

- Operating systems:  Not just Windows versus UNIX, but how they work.  Learn about IPC, pipes, what really happens at boot time, how things get started at boot, etc.   You can bet that attackers know how systems boot inside and out.  It's called maintaining persistence.  Also, Linux != UNIX.  Yes, it is a UNIX variant, but I have seen "Linux gurus" get totally lost when trying to show someone something on Solaris.  Don't be that person.  Learn at least one distro/flavor of Linux, learn BSD, and take a look at Solaris.  They are different yet the same.  Do not try to figure out the differences during an incident.

- Learn how to interpret logs:  Seriously.  Make for darn sure you know where the logs are on every OS and application you touch.  Look at the logs every day and after making any OS or application changes.  How did that change affect the logs?  This may be the difference between making a timely intrusion detection and an attacker having free rein on your network for months.  When something serious has gone wrong, I am amazed how many people reply, "I have not looked yet," when I ask them what's in the logs.

- Networking fundamentals:  Beyond the three-way handshake and the default gateway.  How do network connections make their way up each layer of the network stack on an OS?  How does a given program bind a network port in order to accept connections?  Know subnet masking inside and out.  Make for darn sure you can make sense of tcpdump output.  Learn the structure of the basic protocols:  HTTP, DNS, SMTP, FTP, IRC, etc.
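On the subnet masking point, Python's standard ipaddress module is a quick way to check your mental math:

```python
import ipaddress

net = ipaddress.ip_network("10.20.30.0/27")
print(net.netmask)        # 255.255.255.224
print(net.num_addresses)  # 32 addresses in a /27
print(ipaddress.ip_address("10.20.30.25") in net)   # True
print(ipaddress.ip_address("10.20.30.40") in net)   # False
```

Do it by hand first, then verify; the goal is to not need the module during an incident.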

- At least two scripting languages:  One portable language like Perl or Python, plus some unix shell (bash, csh, etc.).  Get Cygwin and play with it if you have not already.  Think tool building and quick and dirty text parsing.
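As an example of quick and dirty text parsing, here is pulling the top talkers out of a few made-up syslog-style lines:

```python
from collections import Counter

log = """\
Jan 12 03:14:01 host sshd[811]: Failed password for root from 203.0.113.9 port 52100
Jan 12 03:14:03 host sshd[811]: Failed password for root from 203.0.113.9 port 52102
Jan 12 03:15:44 host sshd[902]: Failed password for admin from 198.51.100.4 port 40112
"""

# Count failed logins per source IP; the field after " from " is the IP.
ips = Counter(line.split(" from ")[1].split()[0]
              for line in log.splitlines() if " from " in line)
for ip, count in ips.most_common():
    print(f"{count:4d}  {ip}")
```

Ten minutes of this kind of scripting answers questions that would otherwise sit in a queue for a tool purchase.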

- At least one compiled language:  C is a good choice.  C++ or C# would be a good second for any GUI work, but C will suffice.  Visual Basic or similar if you must.  Again, think tool building.

- Learn basic unix (and Cygwin) utilities like the back of your hand:  sed, awk, grep, sort, uniq.  These will save you one day while one of your coworkers is still working on some fancy formula in Excel.

- Databases and Web Development:  SQL.  You'll need it for tool building if nothing else.  Learn PHP while you're at it; SQL and PHP go together like peas and carrots.  PHP could easily be swapped out for AJAX, Ruby on Rails, etc.  The point is to learn what it takes to get data out of the database and onto the screen of someone across the network.  This is literally where the money is.  Think e-commerce, online banking, etc.
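To get a taste of the SQL side without standing up a full stack, Python's built-in sqlite3 works fine; the schema and data here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 19.99), ("bob", 5.00), ("alice", 7.50)])

# The kind of query an e-commerce page runs before rendering a report
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
for customer, total in rows:
    print(customer, total)
```

Swap the in-memory database for a real one and the SQL carries over unchanged; that is the whole point of learning it.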

But, wait, Wyman.  What about Firewalls, Intrusion Detection, Virtual Private Networks, Identity Management, Data Loss Prevention, Security Information and Event Managers, etc?  I want to do security!  They will come.  Wax on, wax off.   Trust me.

Build a foundation on what makes your computer and the Internet work.  Only then are you adequately prepared to start defending it.  Otherwise, you'll see something in your IDS and have no clue if that is normal or malicious.   You do not have to be a guru at any of the items above, but be average in all of them and you're way, way ahead of the game.

I'd love to hear other thoughts and suggestions.

P2P .... what P2P?

Here is a summary of an email thread between our team and an end user today:

  • Security: You have P2P software installed. This is the third time we told you.
  • User: I don't know what you're talking about. I didn't do it, nobody saw me do it. Can't prove a thing.
  • Security: How about all these movies you have been downloading. Here is the file listing including time stamps.
  • User: Oh, those. Oh, right. It was not P2P; the movies came from email. Sorry about that. It will never happen again.

This is fairly typical. People just want their tunes and movies, man.

The trouble is two-fold:

  1. Bad guys use the same protocols to get data about your company.
  2. Set up one of these P2P clients wrong, and your HR person has just shared employee data out to the Internet.