Using Splunk to Catch Pesky Employees Outsourcing Their Job

Andrew Valentine published a Case Study over at the Verizon Business Security Blog titled “Pro-active Log Review Might Be A Good Idea,” which details an incident where an employee working for a “U.S. critical infrastructure company” was found to have outsourced his own job to a Chinese consulting firm. Here’s a quick snippet from the Case Study:

“As it turns out, Bob had simply outsourced his own job to a Chinese consulting firm. Bob spent less than one fifth of his six-figure salary for a Chinese firm to do his job for him. Authentication was no problem, he physically FedExed his RSA token to China so that the third-party contractor could log-in under his credentials during the workday. It would appear that he was working an average 9 to 5 work day.”

The complete Case Study can be found here:

http://securityblog.verizonbusiness.com/2013/01/14/case-study-pro-active-log-review-might-be-a-good-idea/

Now I don’t think I was alone in laughing when I read this story. However, after reading some reactions on Twitter and in various other places on the web, I realized I was laughing for a different reason: this is so easy to prevent. Almost two years ago now I created a method that will trigger an alert when this exact type of incident occurs, and it took less than a couple of hours’ work using a log management tool named Splunk (bundled with the Google Maps Splunk App) and access to two types of log files.

First off, to specifically address the RSA SecurID side of things as mentioned in the article, there’s an easy way to identify “odd” events coming from your SecurID appliances. Some quick background: for me it all started with the RSA breach of March 2011, which primarily affected their SecurID product line. This was obviously a high-profile event that made a lot of organizations cringe and caught the eye of executives worldwide. The standard questions were being thrown around: “What’s the risk?”, “What controls are in place?” and so on. As I listened to the conversations I thought to myself, “The data is there, why not use it?” Something the organization mentioned in the Verizon blog post probably should have thought of, too.

RSA SecurID appliances generate SNMP traps for a wide number of the events that occur on the appliance. Knowing this, I pointed the SNMP traps at my Splunk instance and started creating reports. Within a weekend I had built a dashboard-driven application for Splunk that produces over 25 reports and an interactive search form. Using this application I now had complete visibility into the events occurring on my appliance and could alert based on thresholds, geographic location, time of day, specific users, and so on.
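To give a flavor of what one of those alerts could look like, here’s a minimal sketch (the index, sourcetype, matched string, and threshold below are assumptions; they depend entirely on how your SecurID traps are parsed in your environment):

index=securid sourcetype=snmp_trap "*authentication failed*" | stats count by user | where count > 5

Scheduled over, say, the last hour, a search like this would flag any user racking up more than five failed authentications.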

More details can be found in my blog post here, and the application itself can be found on Splunkbase here.

Now let’s get back to the main focus of this post: how to use Splunk, and some readily accessible logs, to create reports and alerts on employees who may be outsourcing their job, as noted in the Verizon Case Study.

So what did I have to do? It’s simple. First I went in search of the proper data to assist me:

  1. VPN logs from a Cisco ASA
  2. Badge Access logs from a Diebold physical security system

After procuring the data, I used Splunk to ingest and index it, giving me a tool which could easily search the data, build reports, and potentially alert based on the searches I created.
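As a rough sketch of that plumbing, an inputs.conf along these lines gets the two sources in (the file path, port, and sourcetype names here are placeholders for illustration):

[monitor:///var/log/badge/access.log]
index = badge_access
sourcetype = diebold_badge

[udp://514]
index = main
sourcetype = cisco_asa

Point the ASA’s syslog output at the UDP input, drop the badge system’s log export where the monitor stanza can see it, and Splunk handles the indexing.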

The first thing I wanted to identify was any WebVPN logins from outside of the Province. Simple enough, right? Right. So I wrote a Splunk search like so:

index=main eventtype=asa-webvpn-login | geoip IPAddr | geonormalize | search IPAddr_region_name!="ON"

Voila! I now have a list of all the WebVPN logins that originated from outside of the Province of Ontario (yeah, that big one in Canada, eh). I could now schedule this search to run in real-time and email me an alert any time it triggers, providing the raw results of the offending event(s). Note that I could easily change the search so that it notifies on logins from outside the Province or Country, or isolate it to a specific location. You know we’re all watching those Chinese IPs 🙂
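For instance, isolating to a single country is just a different filter on the geoip output (the country field name below follows the same naming pattern as the region field above, but verify it against your version of the Google Maps App):

index=main eventtype=asa-webvpn-login | geoip IPAddr | geonormalize | search IPAddr_country_name="China"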

The next thing I wanted to identify was any WebVPN login followed by a badge event at an entrance by the same user, signaling they were entering the building. As we all know, normal behavior (99% of the time) dictates that when you go to the office you do not need to VPN into the office network, so this is obviously an event that warrants investigation. Why not alert on it? Awesome, let’s do that. Since my badge access logs are accessible by the same Splunk instance as my VPN logs, it was as simple as writing the following Splunk search:

index=badge_access "* In *" | eval loginid=lower(substr(FIRSTNAME,1,1)+LASTNAME) | search loginid!="" | append [ search index=main eventtype=webvpn-started OR eventtype=webvpn-terminated | eval loginid=lower(webvpn_username) | transaction loginid maxevents=2 startswith="*WebVPN session started." endswith="*WebVPN session terminated:*" | fields + loginid ] | sort + _time | transaction loginid startswith="* In *" endswith="* WebVPN session started." | eval timebetween=round(((duration/60)/60),2) | where timebetween<8

And there we go! We now have a search that returns events grouped by loginid where a WebVPN login was followed by a Badge “In” event (the corresponding loginid entering the office) within the defined timeframe (timebetween). Now keep in mind the previous search we did purely on WebVPN logins by geographic area; that same geoip command can be used here to narrow down the WebVPN logins by City, Province/State, Country, etc., further eliminating false positives.
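As a sketch of that refinement (assuming the WebVPN events carry the same IPAddr field as the first search), the subsearch would simply gain the geographic filtering before the eval, with the rest of the search unchanged:

search index=main eventtype=webvpn-started OR eventtype=webvpn-terminated | geoip IPAddr | geonormalize | search IPAddr_region_name!="ON" | eval loginid=lower(webvpn_username) | ...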

Since it’s not as straightforward as our last search, let’s break down, command by command, what the search actually does:

index=badge_access "* In *" – This will simply search the badge access logs for any event where someone physically enters an area.

eval loginid=lower(substr(FIRSTNAME,1,1)+LASTNAME) – The eval command here takes the FIRSTNAME and LASTNAME from the badge event(s) and creates a new lowercase field named loginid, composed of the first letter of FIRSTNAME and the entire LASTNAME.

search loginid!="" – This now matches only the badge event(s) where the loginid field is not null, because we want to eliminate badge events that have no corresponding user information.
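To make these first three steps concrete, here’s a purely hypothetical badge event (the real Diebold format will differ):

2013-01-14 09:02:11 Door="Main Entrance" Badge In FIRSTNAME="John" LASTNAME="Smith"

The "* In *" filter matches it, Splunk’s automatic key=value extraction picks up FIRSTNAME and LASTNAME, and the eval produces loginid=jsmith.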

append – This command executes a subsearch, which is the search enclosed by [ ], and appends its results to the results of the preceding search.

search index=main eventtype=webvpn-started OR eventtype=webvpn-terminated – This is the beginning of our subsearch. It searches the main index for any log events matching the eventtype webvpn-started OR webvpn-terminated, so only events where a WebVPN session was started or terminated are included.
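For reference, the raw ASA syslog messages these eventtypes would typically map to look roughly like this (message IDs quoted from memory, so verify them against your ASA documentation; the group, user, and IP are invented):

%ASA-6-716001: Group <employees> User <jsmith> IP <198.51.100.23> WebVPN session started.
%ASA-6-716002: Group <employees> User <jsmith> IP <198.51.100.23> WebVPN session terminated: User Requested.

These are where the “WebVPN session started.” and “WebVPN session terminated:” strings used later in the search come from.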

eval loginid=lower(webvpn_username) – The eval command here creates a lowercase field named loginid (as we did before) based on the value of the webvpn_username field from our subsearch.

transaction loginid maxevents=2 startswith="*WebVPN session started." endswith="*WebVPN session terminated:*" – Using the transaction command we can group related events by their common field, loginid. The events are grouped with the first event being the one that matches the “startswith” string and the last being the one that matches the “endswith” string; the “maxevents” parameter limits each group to two events.

fields + loginid – This command tells the subsearch to only return the loginid field previously created.

sort + _time – This command sorts the results by time in ascending order (oldest to newest).

transaction loginid startswith="* In *" endswith="* WebVPN session started." – Once again, we’re grouping events based on the loginid field. This time we’re looking for groups that start with a badge “In” event and end with a WebVPN session being started.

eval timebetween=round(((duration/60)/60),2) – Using the eval command here we create a field named timebetween, which is the number of hours between the first and last event in each group (the transaction’s duration field is in seconds, so a duration of 5400 seconds becomes timebetween=1.5).

where timebetween<8 – Using the where command, only the resulting events with a time difference of less than 8 hours will be displayed. That’s typical for a work day, but this value can be changed to whatever you want.

Now you may be thinking: what about reporting or alerting on the reverse of this scenario, where a Badge Event happens followed by a WebVPN login? No problem, one simple change is all that’s needed; just change the following line:

sort + _time

to

sort – _time

And now the events will be grouped with a Badge Event first, followed by a WebVPN login, within the defined timebetween that’s been set.

And that’s it! Take these searches, save them, and schedule them to run on whatever predefined schedule you see fit, and alerts can be produced and emailed automatically. Problem solved… for now!
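If you’d rather manage the schedule in configuration than in the UI, a savedsearches.conf stanza along these lines does the job, using the geographic search as the example (the stanza name, cron expression, and recipient address are placeholders; the attribute names are standard Splunk savedsearches.conf settings):

[WebVPN Logins Outside Ontario]
search = index=main eventtype=asa-webvpn-login | geoip IPAddr | geonormalize | search IPAddr_region_name!="ON"
enableSched = 1
cron_schedule = */30 * * * *
dispatch.earliest_time = -30m
actions = email
action.email.to = secops@example.com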

As always I’d love to hear any feedback, comments, questions or suggestions… hell, I even welcome flames, because I know this method is not totally perfect, but it does allow for some sleep at night.

Happy Hunting!