Cyber Threat Intelligence – Lesson One: Know Thyself

Posted by Michael Paulie
Security operations centers run SIEMs, log analyzers, and custom alerting over hundreds or thousands of log feeds, churning through millions of lines of data every second while correlating, analyzing, and searching for indicators of compromise and anomalies.  Through all that noise, what you're still not likely to be able to tell is whether that last connection to your website was legitimate or possibly malicious.

Enter honeypots

There are various types of honeypots suited to specific scenarios such as detection, deflection, and even deception; the type I will discuss here is used to detect attempted unauthorized use of systems.  These honeypots are deliberately vulnerable systems deployed to observe malicious behavior.  There's usually no need to advertise them, and normal customer and business activity will never interact with them because they are not connected to any business process.  In this configuration there are no false positives: every connection to the honeypot is a possible threat, because no one should be trying to connect to it.
Honeypots can be deployed internally on your network, for example on the same network segment as employees or users, or deployed facing the internet to mimic production systems such as an ecommerce site.  The benefit of deploying internally is observing possible insider threats or compromised systems that may be trying to spread malware across the network.  The benefit of deploying honeypots externally, where they're reachable from the internet, is gaining insight into, and quickly visualizing, your current cyber threats, current attack vectors, and possible exposure.

Deploying honeypots can be very simple, and the information they provide becomes useful almost immediately.  They can be general purpose and low interaction, to get a feel for the network- and application-based attacks directed at your organization, or higher interaction, set up to mimic your production systems.
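To make "low interaction" concrete, here is a minimal sketch of the idea: listen on a few commonly probed ports, log every inbound connection as a JSON event, and immediately drop the connection. The port list, log filename, and event fields are illustrative choices of mine, not a reference implementation.

```python
import json
import socket
from datetime import datetime, timezone

def connection_record(src_ip, src_port, dst_port):
    """Build a JSON-serializable event for a single inbound connection."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "src_ip": src_ip,
        "src_port": src_port,
        "dst_port": dst_port,
        "event": "honeypot_connection",
    }

def listen(port, logfile="honeypot.log"):
    """Accept connections on `port` and log each one; serve no real content."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, sport) = srv.accept()
        with open(logfile, "a") as f:
            f.write(json.dumps(connection_record(ip, sport, port)) + "\n")
        conn.close()  # drop immediately: low interaction, nothing to exploit

# To deploy (runs forever, so not executed here), start one listener per port:
#   import threading
#   for p in (22, 23, 80):
#       threading.Thread(target=listen, args=(p,), daemon=True).start()
#   threading.Event().wait()
```

Since nothing legitimate should ever touch these ports, every line in the log is worth investigating.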

Practical applicability

This is not theory: if you deploy a honeypot that is accessible via the internet, it will get probed, prodded, and attacked.  Some of this will be generic scanning by malicious campaigns looking for specific vulnerable systems, and some will be directed specifically at your organization, giving you a personalized view of targeted attacks. Either way it provides invaluable, actionable threat intelligence that can be used immediately to reduce your cyber risk. Deployed internally to mimic a file server, for example, a honeypot can identify possible malicious insiders, or worse, reveal malware on your network attempting to spread.

As a personal project, I used virtual server hosting services to deploy three honeypots for a year in data centers in New York, Los Angeles, and the Netherlands.  These honeypots were internet facing and nondescript, meaning nothing labeled them as belonging to any person or organization. The server names were just a jumble of characters, and there wasn't even a DNS record associated with them, just an IP address and an internet connection. Even so, the intelligence they provided on current attack campaigns and targeted services and applications would be invaluable to any security operation.  In a future article I plan to review the data and provide a tutorial on how to set up your own honeypots, along with Splunk to easily visualize and analyze the data they provide.

Take Action

It's time to use the threat intelligence your honeypots are providing and get the most out of them.  One use case is feeding the source addresses into your blacklist; or, if you want to find out how effective your current blacklist feed is against the latest known malicious IP addresses, compare the source IPs collected by the honeypots to the blacklist.  As mentioned earlier, a honeypot deployed internally will help identify insider threats and compromised machines.  Another use case is feeding your SIEM with the honeypot data to provide context. Yes, I know, another feed; however, the security industry is taking notice, and honeypots are increasingly becoming part of a holistic security program.  Companies such as LogRhythm are building into their products the functionality to automate and contextualize honeypot data to identify compromised credentials and protect against zero-day malware.
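The blacklist-coverage comparison above is just set arithmetic. A quick sketch, using made-up documentation-range IPs rather than real feed data:

```python
def blacklist_coverage(honeypot_ips, blacklist_ips):
    """Return (fraction of honeypot-observed attackers already blacklisted,
    set of attacker IPs the blacklist missed)."""
    observed = set(honeypot_ips)
    missed = observed - set(blacklist_ips)
    covered = 1 - len(missed) / len(observed) if observed else 1.0
    return covered, missed

# Hypothetical data: source IPs seen hitting the honeypot vs. the current feed.
seen = ["198.51.100.7", "203.0.113.5", "192.0.2.44", "203.0.113.9"]
feed = ["203.0.113.5", "203.0.113.9", "198.51.100.200"]

coverage, missed = blacklist_coverage(seen, feed)
print(f"Blacklist covers {coverage:.0%} of observed attackers; missed: {sorted(missed)}")
```

The missed set is exactly what you'd feed back into the blacklist, and the coverage percentage over time is a simple measure of how current your feed really is.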

I’m a big advocate of honeypots; they are a valuable piece of any security puzzle, providing intelligence at the network level.  Insight into other attack vectors, such as phishing, may be just as important or more important to your organization, and every security program should be customized and prioritized to fit its needs.
[Read more]

Know Your Risk: Lessons From The JPMorgan Chase Breach

Posted by Michael Paulie

On Tuesday, information emerged about the 2014 JPMorgan Chase breach, in which data on approximately 80 million customers was stolen.  From the details in the indictment of the accused perpetrators, fourteen other firms in the financial services sector were also targeted (although not all are confirmed to have been breached), including ETrade, Scottrade, the Wall Street Journal, TD Ameritrade, Fidelity Investments, Dow Jones, and a Boston-based mutual fund firm.

The long and short of it is this: customer data, not including the attributes normally associated with higher value like SSNs and account numbers, was easily monetized through criminal activity, online casinos, and pump-and-dump stock manipulation schemes, generating millions of dollars.  Criminals targeted customers to get them to purchase stocks whose prices were artificially inflated and made to appear to be continually rising.  The stocks were then dumped for a profit, sending their value down the drain and leaving the investors at a loss.

That being said, the first lesson is that your customer and other non-public data, even without the high-value bits, is worth much more than most companies' valuations suggest. This data will continue to be targeted as cybercrimes like these evolve, and it deserves more protection.

The indictment also revealed the method the criminals used to obtain some of the data, which provides the second lesson.  The attackers involved were not just outsiders looking to get in; they appeared to be veterans of the financial industry.  They used their customer, merchant, and third-party vendor accounts, as well as multiple shell accounts and identities they created, to footprint, find, and exploit vulnerabilities in these institutions.

While insider threats are usually well vetted and vendor/third-party risk is currently a popular topic, how often have the possible threats and risks from your own customers been reviewed?  The lesson here: paying customers can be attackers.  Perform penetration tests and application vulnerability scans from their perspective, and ensure least-privilege access.

Lesson three: unpatched vulnerabilities should be taken more seriously.  Heartbleed was a very high-profile vulnerability that affected just about every SSL service running on every device.  Everyone in the IT and information security communities understood the vulnerability and the exposure, and was quick to patch.  However, some organizations took days and even weeks to completely patch for Heartbleed after its public announcement, and there is supporting evidence that the criminals were successfully gaining access to these systems during that time.

Does your organization underestimate the danger of being exposed to a vulnerability for even a short period of time, or does it understand that a breach could have taken place and hunt for indicators of compromise?  This is a question of culture and security mindfulness: accepting that even the smallest exposure can result in the worst-case scenario.

Additional attack methods described in the indictment included brute-forcing passwords and social-engineering credentials to the Scottrade and ETrade networks.  These types of attacks should rarely succeed if appropriate policies and access controls are in place, such as two-factor authentication and policies for account lockout and password complexity.
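Why does an account lockout policy blunt brute forcing? Because it caps online guesses at a handful per account. A minimal sketch of the mechanism (the class name and threshold are illustrative, not taken from any particular product):

```python
from collections import defaultdict

class LoginGuard:
    """Toy account-lockout policy: lock an account after repeated failed
    logins, turning an online brute-force attack from thousands of
    guesses into just a few before the account is frozen."""

    def __init__(self, threshold=5):
        self.threshold = threshold          # failures allowed before lockout
        self.failures = defaultdict(int)    # consecutive failures per user
        self.locked = set()

    def attempt(self, user, password_ok):
        """Process one login attempt; returns 'ok', 'failed', or 'locked'."""
        if user in self.locked:
            return "locked"                 # even the right password won't help
        if password_ok:
            self.failures[user] = 0         # success resets the counter
            return "ok"
        self.failures[user] += 1
        if self.failures[user] >= self.threshold:
            self.locked.add(user)
            return "locked"
        return "failed"
```

A real implementation would also add lockout duration, alerting, and audit logging, but even this skeleton shows why brute forcing a properly configured account "should rarely succeed."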

The events at JPMorgan Chase might be a glimpse into the future of cybercrime, which is just a piece of a puzzle in a larger criminal enterprise. This was "hacking to support a diversified criminal conglomerate," Manhattan U.S. Attorney Preet Bharara said.  "Fueled by their hacking, the defendants’ criminal schemes allegedly generated hundreds of millions of dollars in illicit proceeds."
Share your thoughts below...


Photo by Alex Proimos  / CC BY
[Read more]

Will you be fired for clicking on the wrong email?

Posted by Michael Paulie

It's very possible that at some point in the near future, opening an attachment or clicking on a link in a phishing email will get you terminated.  Everyone is now responsible for security at the organization they work for; I think that's something we can all agree on, even if some colleagues still don't give it a second thought.  Security awareness training has long been part of most organizations' security programs, to help employees detect and report security incidents; however, only recently has the human factor played such a direct role in incidents and data breaches.

Social engineering is nothing new, but the well-crafted phishing email, based on current events and information about the target and designed to get the person to click a link or open an attachment, has become very prevalent over the past few years.  As more and more companies and government agencies are infiltrated by attackers who gain a foothold via these well-crafted phishing emails, they're finding out their awareness programs are not effective, or that people simply aren't vigilant enough in their daily activities against these types of attacks.

This week Paul Beckman, CISO of the Department of Homeland Security, discussed how federal employees with security clearances are failing email phishing tests “that look blatantly to be coming from outside of DHS.”  Beckman noted that those who fall for the emails, in some instances entering their credentials after following the links, are required to take additional security training.  He made headlines with his proposal to revoke the security clearances of repeat offenders, stating that those employees “have clearly demonstrated that you are not responsible enough to responsibly handle that information.”

Beckman has said what many heads of information security departments have been thinking for a long time, and, like a concerned parent, is wondering how much punishment needs to be dished out to change the behavior of their colleagues.  If Beckman revokes security clearances, will those employees still be able to perform their job responsibilities?  Maybe not, if the job required the clearance in the first place.  The next logical step is a demotion to a position that doesn't require the clearance, or termination of the employee altogether.

Email phishing tests have been around for a while, but they may be just the tip of the iceberg for human security testing.  A new tool, AVA, created by Laura Bell, CEO of SafeStack, performs social engineering tests designed to use as much information about the target as possible.  This includes trawling social media and connecting to internal systems to learn about things like reporting lines. It can use this information, for example, to send text messages that look like they're from your boss, asking you to execute a task outside of standard approval controls.  If you got a frantic text from your boss, would you send that wire transfer or execute a production change?  Those are some of the more psychological and situational tests AVA can run against an organization's human risk.

Is this the future, vulnerability scanners for people?  Should such testing be required for government employees with security clearances, or for employees in your organization with access to critical assets and information?  What can be done to make security awareness stick? These are the questions being asked in the search for ways to reduce the risk.  I’m a proponent of testing those with security clearances or access to critical data, because that responsibility comes with the job and must be upheld every day.  However, this may have long-term effects, such as stressed-out employees worried about real and simulated spear-phishing attempts, or a reduction in staff because of failed tests.

I don’t know whether this behavior can be changed through training and the consequences of demotion and termination, or whether we will ever be able to fully manage the human risk from well-crafted phishing and applied psychology.  One thing is certain, though: we are the weakest link, and the ones carrying out the attacks know it.

Photo by Matthias Ripp  / CC BY
[Read more]

General Computer & Application Controls, A Primer

Posted by Michael Paulie

The following guide is meant to be a primer on general computer controls and general application controls. It is by no means a complete guide to these controls, but rather aims to provide a foundation to build upon.

Businesses rely on technology to operate and achieve their goals, and with all technology come risks. To mitigate those risks, every IT system and environment requires controls to keep the system and its data secure, maintain continuous operations, and reduce the chance of errors in data processing and transactions.  These controls are commonly called general computer controls and general application controls, and they ensure IT systems function reliably and as management intended.

In information security and information technology audits, most things boil down to the CIA triad: Confidentiality, Integrity, and Availability.  The blend of general and application controls in every system comprises the measures put in place to support the CIA triad and ensure IT systems can be relied on to sustain business operations.  We also test to ensure the controls are functional, effective, and compliant with policies and procedures.

Compliance is another reason general computer and application controls are important: regulations such as HIPAA, SOX, and GLBA, as well as PCI-DSS compliance, require attestation of the effectiveness of certain general computer and application controls.

General Computer Controls

General controls are the controls applied over the IT infrastructure of a system.  Without them, unauthorized changes may occur, users with privileged access may go unnoticed, measures to keep systems available may not be taken, and data may be accessible to unauthorized users.  General controls form the control environment that ensures these risks are appropriately mitigated.  The following list is a high-level description of the controls you should expect to see in just about every IT system.

Along with best practices, all of the following general computer controls should be tested against documented company policies, procedures, and standards.

Physical security

Controls should be in place to ensure physical access is limited and controlled (ID badges, locks, man-traps, guards), fire suppression systems are in place, and power systems are adequate.  I won't go into more detail here because, most of the time, the systems under review are located in a data center, which should have its own review performed.

Change and patch management

Generally speaking, changes to a system, including installing patches, should be performed in accordance with change management policies and procedures, with proper approvals and separation of duties. System owners should ensure they are made aware of patches, especially security patches, in a timely manner, and should evaluate them based on criticality and risk to the organization.

Performance monitoring and capacity management

Key performance indicators (KPIs) should be monitored based on a system's function and criticality, with automated alerting for timely response to any problems.  Monitoring should also cover capacity management of resources such as disk, processor, memory, bandwidth, and license usage.  This should be done periodically to ensure the capacity of the system supports its current and projected usage.
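As a small illustration of the capacity side, the check below reports how full a filesystem is and whether it crosses an alerting threshold; the 80% threshold is an arbitrary example, and real monitoring would be an agent or scheduled job rather than a one-off script.

```python
import shutil

def check_disk_capacity(path="/", warn_pct=80.0):
    """Return (percent of disk used, whether an alert should fire)
    for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)          # (total, used, free) in bytes
    pct_used = usage.used / usage.total * 100
    return pct_used, pct_used >= warn_pct

pct, alert = check_disk_capacity("/")
print(f"Disk at {pct:.1f}% used; alert={alert}")
```

The same pattern (measure, compare to threshold, alert) extends to CPU, memory, bandwidth, and license counts.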

Backup & recovery and high availability

IT systems should have their data and configuration backed up appropriately to support recovery of the system in the event of a disaster or loss of data.  If the system has an associated Recovery Point Objective (RPO), the maximum amount of data loss, measured in time, that the business can afford, backup jobs should be scheduled accordingly.  For example, if a system has an RPO of 1 hour, data must be backed up at least every hour.  For high availability, based on risk, systems and infrastructure should be configured to be highly available in support of business continuity.  Examples include clustered or fail-over system configurations, redundant network connections, disk arrays, etc.
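When testing this control, the question is mechanical: is every gap between consecutive backups within the RPO? A sketch of that check, with a made-up hourly schedule as sample data:

```python
from datetime import datetime, timedelta

def meets_rpo(backup_times, rpo):
    """True if no gap between consecutive backups exceeds the RPO,
    i.e. at most `rpo` worth of data could be lost between backups."""
    times = sorted(backup_times)
    return all(later - earlier <= rpo
               for earlier, later in zip(times, times[1:]))

# Hypothetical schedule: six hourly backups tested against a 1-hour RPO.
base = datetime(2015, 11, 12, 0, 0)
hourly = [base + timedelta(hours=h) for h in range(6)]
print(meets_rpo(hourly, timedelta(hours=1)))
```

In a real audit you would pull the actual job history from the backup system and look for gaps, including missed or failed jobs, not just the configured schedule.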

Security configuration, administration, and access review

Security configuration within IT systems covers how users are authenticated (centrally or locally), secure transport of authentication traffic (e.g. Secure LDAP, Kerberos, SSH, SSL/TLS), password policies, audit logging, etc. Systems should comply with documented security standards and be analyzed based on risk.

Security administration concerns the granting and removal of access to systems. Access should be based on least privilege, with the use of groups or roles where applicable. There should be a separation of duties between those who grant access and the users with privileged access.

Security access reviews should be performed periodically to verify that user access within the system is appropriate and that access has been granted or removed in a timely manner.  There should be a separation of duties here as well, between the reviewers of access and the grantors of access.
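The core of an access review is a diff between what the system actually grants and what was approved. A sketch, with hypothetical account names standing in for a real extract:

```python
def access_review(current_access, approved_access):
    """Compare access actually granted on a system to the approved
    entitlement list, and return the discrepancies to investigate."""
    current = set(current_access)
    approved = set(approved_access)
    return {
        # granted on the system but never approved: possible control failure
        "unapproved": sorted(current - approved),
        # approved but not granted: stale approvals or missed provisioning
        "not_provisioned": sorted(approved - current),
    }

# Hypothetical data: accounts pulled from the system vs. the approved list.
current = ["alice", "bob", "svc_backup", "mallory"]
approved = ["alice", "bob", "svc_backup", "carol"]
print(access_review(current, approved))
```

Real reviews also compare entitlement *levels* (role, group, privilege) per account, not just account existence, but the set-difference pattern is the same.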

General Application Controls

Application controls center on the accuracy and validity of data as it is processed through a system.  The objective is to ensure that data is accurate and approved when input to a system, processed, and output.  It's very important to walk through and understand the process flow and the flow of data when reviewing general application controls.  To properly test them, you should have a full understanding of the flows and know where and when specific controls should apply.

Controls for input data include the validity and approval of the input, its accuracy, its completeness, and management overrides. In many cases there should be segregation of duties between the initiation and approval of transactions.  For the processing of transactions, there should be controls in place that record every transaction and check for completeness and accuracy.  There are many ways to test transactions and calculations, including re-performance through the use of computer-aided audit techniques (CAATs).
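Re-performance simply means recomputing what the application claims it computed and flagging any differences. A toy CAAT over a hypothetical transaction extract (field names and figures are invented for illustration):

```python
def reperform_totals(transactions):
    """Independently recompute each line total (qty x unit price)
    and return the transactions whose reported totals don't match."""
    exceptions = []
    for t in transactions:
        expected = round(t["qty"] * t["unit_price"], 2)
        if expected != t["reported_total"]:
            exceptions.append({**t, "expected_total": expected})
    return exceptions

# Hypothetical extract: transaction 2's reported total is wrong.
txns = [
    {"id": 1, "qty": 3, "unit_price": 9.99, "reported_total": 29.97},
    {"id": 2, "qty": 5, "unit_price": 4.50, "reported_total": 25.00},
]
print(reperform_totals(txns))
```

In practice you would run this over the full population, or a statistically chosen sample, and investigate every exception back to source documents.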

The same controls apply to output data: they should ensure its completeness and accuracy.  A couple of additional controls apply to output data: error reporting, and security over the new data wherever it has been output.  No matter where the data is stored, in a file or in a database, controls should be in place to ensure only authorized users have access.

Final Thoughts

General computer and application controls can quickly become a very involved topic, and I have deliberately not covered everything in this guide, including controls over operations and over systems development, acquisition, and maintenance. Nor have I covered approaches to testing the controls, such as compliance (attribute) and substantive testing. The purpose, however, was to provide a solid overview and foundation that can easily be built upon and translated to multiple environments.  Feel free to leave your comments below, happy auditing!

Photo by Faramarz Hashemi   / CC BY
[Read more]

Cybersecurity: Why every employee is a target, and what you can do about it

Posted by Michael Paulie

As of late, you've probably been hearing about various types of cyber attacks, and the methods used to execute them, in connection with the ever-mounting number of security and data breaches at large organizations.  Advanced Persistent Threats (APTs), spear phishing, and whaling are just some of the more sophisticated ways threat agents launch attacks, specifically targeting employees and vendors to gain access to corporate and government networks.

Social media sites like Facebook and LinkedIn offer a wealth of information that is used to select and target individuals for social engineering, phishing, and malware attacks.  From information in your profile, such as your job title, department, and company, attackers can extrapolate the type and extent of your access to data and systems within their target organization, and begin launching attacks against YOU.  Examples include an email or in-app message about a job you might be interested in, with a description attachment that executes malware, or a message containing links to articles in your field that direct you to a website attempting to execute malware.

Zero-day vulnerabilities, which are as yet unknown and have no patch, have always been a risk, but the attacks exploiting them have become increasingly sophisticated. You may have noticed web browsers on your home computers complaining about Adobe Flash needing to be updated frequently over the past couple of months; this was due to recent zero-day vulnerabilities in the Adobe software.  In an effort to compromise government and U.S. financial services employees, attackers compromised third-party advertising servers and used them to serve up malicious advertisements, which exploited Adobe Flash and Internet Explorer in the browsers of visitors to the sites displaying the ads.  The end result of this targeted attack was malware installed on home, work, and government workstations that sent data back to the attackers.

Threat agents are not only attacking the front door but targeting those who have the keys and stealing them.

What you can do about it.

1. Limit public information on social media and be cautious of who you let in your social media network.  Use privacy settings to limit the information for people you don’t know but wish to network with.  If settings are unavailable, limit the information provided on your job and responsibilities.

2. Think before you click. Links and attachments in email, on social media, and advertising are often the way computers become compromised. If it looks suspicious, even if you know the source, it’s best to delete it.

3. Passwords: we're stuck with them until there's a better, more secure method of authentication.  I'm sure you've heard this before: make passwords long and complex, and change them regularly.  Use two-factor authentication if available.  Also, use different passwords, especially between work and personal accounts, in case one is compromised.

4. Keep software updated.  Having the latest updates to your anti-virus software, operating system, and web browser is one of the best defenses against viruses and malware.  Use the option to have these automatically update.

5. Using public Wi-Fi is like talking on the phone in the middle of a crowd: assume everyone can hear your conversation.  Limit the type of usage, and if you must use it, be sure to use a secure VPN.

6. BYOD: a convenience and a security risk. Using your personal device for work means what you do at home or on other networks can put your corporate network at risk.  The best thing to do is keep your work and personal devices separate.

These measures will not only protect your organization but also yourself from identity theft and fraud.

Photo by Gianni Dominici  / CC BY
[Read more]