Harvesting Value from Open Data

On one side, we’re talking about Data Privacy, User Privacy, and the legality of surveillance itself; at the same time, there is data that is supposed to be public information, easily accessible to human beings as well as computers, so that value can be processed and extracted from it.

Just to set the context for this whole topic, here is a very interesting and extremely powerful use case: a dashboard created by the Open Data analytics company Appallicious, which is being billed as a solution that pairs local disaster-response resources with open data, and offers citizens real-time developments and status updates.

@jasonshueh has an interesting post on GovTech about the methods that can be used to harvest value from Open Data repositories, along with more such use cases.

The Sunlight Foundation is a Washington, D.C.-based non-profit advocacy group promoting open and transparent government. According to the foundation’s California Open Data Handbook, data must first be both “technically open” and “legally open.”

  • Technically open: [data] available in a machine-readable standard format, which means it can be retrieved and meaningfully processed by a computer application.
  • Legally open: [data] explicitly licensed in a way that permits commercial and non-commercial use and re-use without restrictions.
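To make the “technically open” point concrete, here is a minimal sketch of why machine-readable formats matter. The dataset below is hypothetical (facility names and coordinates are made up), but because it is standard CSV, a computer application can retrieve and meaningfully process it in a few lines:

```python
import csv
import io

# A hypothetical snippet of "technically open" data, in the machine-readable
# CSV form a city open-data portal might publish.
raw = """facility,category,latitude,longitude
Main Library,shelter,37.7793,-122.4157
Mission Pool,water,37.7599,-122.4148
"""

# Standard format means standard tooling: filter for disaster shelters.
shelters = [row for row in csv.DictReader(io.StringIO(raw))
            if row["category"] == "shelter"]
print(shelters[0]["facility"])  # -> Main Library
```

The same data published as a PDF scan would be “open” in name only; no dashboard like the Appallicious one could be built on top of it this easily.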

I think Junar is doing some interesting work in this area, and I especially liked these lines by Diego May, co-founder and CEO of Junar, in the article:

What we see today is that the real innovation is not necessarily coming from hackathons, but now it’s about working with companies or entrepreneurs to solve problems

The University of Massachusetts Boston and the Fraunhofer Society in Berlin are also doing some great research in this space.

This (Open Data Analytics), and the relevance of security within it, is going to be one of the most interesting areas in the Data Analytics space.

NetFlow-based security tool for Incident Response

Charles Herring of Lancope has a short but interesting post on how NetFlow data can be leveraged for Incident Response purposes.

He says

The collection and analysis of network metadata, such as NetFlow, is an effective way to identify advanced attacks, insider threats or data exfiltration.

There are three major features/activities required of an effective NetFlow management tool:

  • Deduplication of flows, to remove redundant information
  • Directionality, to determine the relationship between flow endpoints
  • Robust querying capabilities
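The first two of these activities can be sketched very simply. This is purely illustrative (it is not Lancope’s implementation, and the lower-port-is-the-server heuristic is just one common assumption), but it shows the idea of collapsing duplicate exports and both directions of a conversation into a single record:

```python
# A flow record here is (src_ip, src_port, dst_ip, dst_port, proto).

def normalize_direction(flow):
    """Heuristic: treat the endpoint with the lower port as the server,
    so both halves of a conversation map onto the same key."""
    src_ip, src_port, dst_ip, dst_port, proto = flow
    if src_port < dst_port:
        return (dst_ip, dst_port, src_ip, src_port, proto)
    return flow

def deduplicate(flows):
    """Multiple routers may export the same flow; keep one copy each."""
    seen, unique = set(), []
    for flow in flows:
        key = normalize_direction(flow)
        if key not in seen:
            seen.add(key)
            unique.append(key)
    return unique

flows = [
    ("10.0.0.5", 52113, "93.184.216.34", 80, "tcp"),  # client -> server
    ("93.184.216.34", 80, "10.0.0.5", 52113, "tcp"),  # same flow, reverse leg
    ("10.0.0.5", 52113, "93.184.216.34", 80, "tcp"),  # duplicate export
]
print(len(deduplicate(flows)))  # -> 1
```

Once flows are deduplicated and direction-normalized like this, the third requirement, robust querying, becomes a matter of indexing the resulting conversation keys.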

There is a Part 2 coming up soon, which will focus on the Analytics aspects of this.

Title Image courtesy: jimjansen.blogspot.com

Identifying actionable threat intelligence

Ran Mosessco from Websense Security Labs has a very interesting post on solving a key issue every Security Analyst in a SOC (Security Operations Center) faces: the overwhelming number of security alerts (even after correlation), also called attack indicators, that an analyst has to acknowledge and investigate.

Actionable threat intelligence is buried deep within terabytes of seemingly interesting but irrelevant data. Plausible deniability, false positives, lack of traceability and attribution, skillful attackers, adaptation of warfare techniques, and the like only add to the confusion. How does one bubble up prioritized, actionable threat intelligence in an automated fashion from the depths of the data morass?

This approach is still at a nascent stage; it requires further study before we arrive at an implementable solution. But I think this is a good place to start, and the following lines capture the way forward accurately:

With attacks becoming more advanced and sophisticated each day, combining big data engineering, unsupervised machine learning, global threat intelligence and cybersecurity know-how is required to deal with them in a timely, automated and efficient manner.
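As a toy illustration of the unsupervised idea above (this is my own minimal sketch with made-up numbers, not Websense’s method), one can score alert sources by how far they deviate from the population and bubble the outliers to the top of the analyst’s queue:

```python
from statistics import mean, stdev

# Hypothetical alert counts per host over some window.
alerts_per_host = {"hostA": 4, "hostB": 5, "hostC": 3, "hostD": 61}

mu = mean(alerts_per_host.values())
sigma = stdev(alerts_per_host.values())

# Hosts more than one standard deviation above the mean get prioritized,
# without any signature or label telling us hostD is interesting.
prioritized = sorted(
    (h for h, n in alerts_per_host.items() if (n - mu) / sigma > 1.0),
    key=lambda h: -alerts_per_host[h],
)
print(prioritized)  # -> ['hostD']
```

Real systems combine many features and far richer models than a z-score, but the principle of letting the data itself surface the anomalies, rather than an analyst triaging every alert, is the same.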


This topic is one of my key focus areas professionally, and so I will be writing more about it here. 

Title Image credit: communities.websense.com

Analysis of China-based APT “Deputy Dog” by FireEye and Microsoft TI teams

FireEye has just released an interesting report on the obfuscation techniques used by the China-based APT “Deputy Dog”. The FireEye TI (Threat Intelligence) team reportedly found suspicious activity on Microsoft’s TechNet site early last year, which appeared to be related to BLACKCOFFEE, a malware supposedly employed by the same group in China.

In late 2014, FireEye Threat Intelligence and the Microsoft Threat Intelligence Center discovered a new Command-and-Control (CnC) obfuscation tactic on Microsoft’s TechNet web portal—a valuable web resource for IT professionals.

The threat group took advantage of the ability to create profiles and post in forums to post encoded C2 for use with a variant of the malware BLACKCOFFEE. This technique can make it difficult for network security professionals to determine the true location of the CnC, and allow the CnC infrastructure to remain active for a longer period of time. TechNet’s security was in no way compromised by this tactic.

Here is a representation of the technique by the FireEye team:


This is a really smart way for the attacker to fetch and use the C&C IP address, and detecting this communication is going to be tricky and interesting, so expect adversaries to use such obfuscation techniques more often.
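To make the general technique concrete, here is a hypothetical sketch. The delimiter strings and the base64 encoding below are illustrative placeholders, not the actual BLACKCOFFEE markers or scheme; the point is only how a C2 address can hide in plain sight on a legitimate forum page:

```python
import base64

# An attacker-controlled profile page on an otherwise benign site,
# with the C2 address encoded between known (hypothetical) delimiters.
encoded_ip = base64.b64encode(b"203.0.113.7").decode()
profile_html = "<p>Just another forum profile bio.</p>##X##" + encoded_ip + "##Y##"

def extract_c2(page, start_tag="##X##", end_tag="##Y##"):
    """What the implant does: find the delimiters, decode what's between."""
    begin = page.index(start_tag) + len(start_tag)
    end = page.index(end_tag, begin)
    return base64.b64decode(page[begin:end]).decode()

print(extract_c2(profile_html))  # -> 203.0.113.7
```

From a defender’s viewpoint the malware’s traffic looks like an ordinary visit to a trusted site, which is exactly why the dead-drop page, not the network connection, is the tell.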

The FireEye team has also shared the Indicators of Compromise for this on GitHub, which will come in very handy for tuning our detection rules.

The importance of security in IoT

Wikipedia’s definition of IoT is:

The Internet of Things (IoT) is the network of physical objects or “things” embedded with electronics, software, sensors and connectivity to enable it to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

To put it in even simpler words, IoT depicts a world where objects communicate seamlessly with each other, and with humans too.

IoT is such an important area of focus today that there is even a search engine for IoT, found here, which provides a geographical index of where things are, who owns them, and how and why they are used.

The below graph (Courtesy Verizon DBIR 2015) shows the scale of growth of IoT devices in the next five years.

B2B Internet of Things connections, 2011 to 2020 (forecast)

There was a funny definition of “Big Data” trending on Twitter recently, and I found it to be quite true. Big Data has become one of the most popular terms used by IT professionals, businesses, product companies and individuals who have anything to do with data or information. But only a few actually understand the concept and use the relevant tools in the right places; product companies have been using “Big Data” as a key piece of marketing jargon.

Similarly, “IoT” is becoming one of the most widely used terms in the tech and non-tech industries. There are conferences held on IoT, marketing initiatives running in full swing in this domain, and every company is in a rush to introduce products in this category.

The following infographic captures the already prevalent impact of IoT on our lives (Image source: http://cdn.xenlife.com.au):

Impact of IoT in our daily lives

But very few people, companies and institutions are actually spending time and effort in understanding the big picture, and studying and discussing the larger implications of IoT on the industry, our daily lives, and our society as a whole, and building products and solutions around them.

The International Journal of Computer Trends and Technology is one such venue that has been publishing research in this area. Its paper An Algorithmic Framework Security Model for Internet of Things is a definite read; it describes one approach that can be used to understand and implement IoT technologies without affecting the security, privacy and integrity of information.

These lines set the context for the whole situation, and the paper:

The biggest role researchers are obliged to undertake is to find and advance the best algorithms for enhancing secure use of Internet of Things especially cutting across different application environments.

The basis of coming up with a security model for Internet of things (IoT) is on the understanding of the source of concern from the functionality modalities of Internet of Things. The functional modalities hereby refer to the different application environments where IoT are applicable, such as health, agriculture, retail, transport and communication, the environments both virtual and physical as well as many other potential areas of application depending on classifications employed at the point of discussions at hand.

Given also the possibilities that IoT have, to extend beyond present applications, especially enabled by emerging technologies in mobile and wireless computing, the scope of concerns from such a web of connectivity, should not be focused in defined areas but should have a broader scope.

The paper handles this issue in the following order:
  1. A world with IoT in place
  2. Problems with the situation
  3. Where should security start – the modalities involved – Lampson’s Access Matrix
  4. Augmented Approach Model for IoT Security – theoretical design
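Lampson’s Access Matrix, which the paper takes as its starting point, is simple enough to sketch directly. The subjects, objects and rights below are hypothetical examples of my own (a connected health scenario), chosen only to show the structure:

```python
# Rows are subjects (users, apps, devices), columns are objects,
# and each cell holds the rights that subject has over that object.
access_matrix = {
    ("patient_app", "glucose_sensor"): {"read"},
    ("physician", "glucose_sensor"): {"read", "configure"},
    ("vendor_cloud", "glucose_sensor"): {"read"},
}

def allowed(subject, obj, right):
    """Reference-monitor check: is this right present in the cell?"""
    return right in access_matrix.get((subject, obj), set())

print(allowed("physician", "glucose_sensor", "configure"))    # True
print(allowed("patient_app", "glucose_sensor", "configure"))  # False
```

The hard part for IoT, and what the paper’s augmented model tries to address, is that the set of subjects and objects is enormous, dynamic, and spans many application environments, so the matrix cannot be statically enumerated the way this toy version is.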

AAM is a good place to start; however, an area that will require further research is how the interaction between the augmented IoT applications can be controlled, because code from numerous, possibly untrusted, users and applications will be placed in the same security domain, which raises security and integrity concerns.

IoT security is a vast topic, and this is just the tip of the iceberg, with a lot of nuances still unknown to us. I shall be writing more about this topic. There is no doubt about the potential of IoT in our lives; it is going to be one of humanity’s biggest creations this century. For us to realise its true potential, we must learn from our mistakes of the last two decades of developing software without considering security as a design principle; the numerous cyber security breaches and incident reports of recent times are indicators of the impact of that omission. And the repercussions of security compromises in IoT technologies can be far reaching, as IoT touches various levels of our social, economic and political lives.

Here is a picture showing one such scenario (Image source: http://spectrum.ieee.org/)

IoT: We can’t hide

IoT is the future of technology beyond 2020, and it is one of the key tools for realizing the United Nations Millennium Development Goals; building security principles into IoT technologies is going to be instrumental to its usefulness to humanity.


Title Image courtesy: http://www.cmswire.com

Microsoft to end Patch Tuesday fixes

Microsoft recently showed, during its Ignite 2015 conference, some of the new security mechanisms embedded in Windows 10, which also mean a change in software update cycles, reports @iainthomson of The Register.

Terry Myerson, head of the Windows Operating System division, took a shot at Google’s approach (or lack thereof) in his keynote last week:

Google takes no responsibility to update customer devices, and refuses to take responsibility to update their devices, leaving end users and businesses increasingly exposed every day they use an Android device.

Google ships a big pile of [pause for effect] code, with no commitment to update your device.

The article reports:

Myerson promised that with the new version of Windows, Microsoft will release security updates to PCs, tablets and phones 24/7, as well as pushing other software “innovations,” effectively putting an end to the need for a Patch Tuesday once a month.

And,

On the data protection side, Brad Anderson, veep of enterprise client and mobility, showed off a new feature in preview builds today: Microsoft’s Advanced Threat Analytics (ATA). This tries to sense the presence of malware in a network, and locks down apps to prevent sensitive data being copied within a device…

Using Azure, administrators can choose to embed metadata in files so that managers can see who read what document, when, and where from. If a particular user is trying to access files they shouldn’t, an alert system will let the IT manager know.

Well, controls like these have been around for some time, but mostly implemented through third-party products, so it’s interesting to see Microsoft building these capabilities into the operating system itself.

Microsoft’s decision to release patches whenever they are ready is definitely a move in the right direction, and is in line with what Apple has been doing with Mac OS for quite some time.

Title Image Courtesy: blog.kaspersky.com

Microsoft’s HTTP.sys vulnerability – MS15-034

Just last week, Microsoft patched a critical vulnerability that affects the Windows HTTP stack, which, if exploited by an attacker sending a specially crafted HTTP request, could give the adversary the ability to execute arbitrary code in the context of the System account.

Background
For those who aren’t aware already, the HTTP listener in Microsoft IIS is implemented as a kernel-mode device driver called the HTTP protocol stack (HTTP.sys). IIS uses HTTP.sys for the following tasks:
  • Routing HTTP requests to the correct request queue.
  • Caching of responses in kernel mode.
  • Performing all text-based logging for the WWW service.
  • Implementing Quality of Service (QoS) functionality, which includes connection limits, connection timeouts, queue-length limits, and bandwidth throttling.
Vulnerability
The problem here stems from HTTP.sys not safely handling the Range header in an HTTP request. The Range header is used to fetch part of a file from a server, which is handy for resuming downloads. If the upper bound of the range is set to an absurdly large value, the Windows kernel can be made to crash.
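The widely circulated, non-destructive check sends a Range upper bound of 18446744073709551615 (the maximum 64-bit value) and inspects the response. The sketch below shows the idea; the response-classification strings follow the commonly published test, the rest is my own illustrative wrapper, and it should of course only be run against servers you are authorized to test:

```python
import http.client

# Oversized Range value that triggers the faulty handling in HTTP.sys.
TEST_RANGE = "bytes=0-18446744073709551615"

def classify(status, body):
    """Interpret the server's reaction to the oversized Range header."""
    if status == 416 or "Requested Range Not Satisfiable" in body:
        return "likely vulnerable"
    return "likely patched or not IIS"

def check(host):
    """Send the benign probe request and classify the response."""
    conn = http.client.HTTPConnection(host, timeout=5)
    conn.request("GET", "/", headers={"Host": host, "Range": TEST_RANGE})
    resp = conn.getresponse()
    return classify(resp.status, resp.read().decode(errors="replace"))

print(classify(416, "Requested Range Not Satisfiable"))  # -> likely vulnerable
```

Note that the crash variant of the exploit uses a slightly different range; the probe above is the safe test, not the denial-of-service one.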
I found these two articles quite useful, while researching this vulnerability.
Exploits found
Two exploits have been discovered in the wild as of this post: one to test whether a server is vulnerable, and one that crashes it. Mattias Geniar of hosting solutions provider Nucleus claims to have tracked down one of these exploits, and he covers it in good detail here.

Patch released
Microsoft has released a patch as part of their last Patch Tuesday advisory.
The vulnerability has been assigned a reference and is further described here.

Detecting zero day attacks
Software and hardware are bound to have bugs in them, because they are written by human beings! The best way to detect exploits of these bugs/vulnerabilities is to take a holistic approach to setting up an intrusion detection solution. One of the effective frameworks for thinking about cyber defense is the Cyber Kill Chain, originally created by Lockheed Martin. It is a very interesting framework, and I shall talk about it in more detail in a later post. Briefly, as per this framework, every attack has a sequence of stages that an adversary performs to accomplish his/her mission.
Cyber Kill Chain - Attack Stages
As per the framework, exploiting a vulnerability is only one part of the whole attack sequence, called the Exploitation stage. So by having detection mechanisms tuned to detect anomalies at the different stages of a cyber attack, we gain the capability to break the sequence at the pre- and post-exploitation stages as well, thus increasing the chances of detecting zero-day attacks.
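The idea can be sketched as a simple mapping of detection signals onto kill-chain stages. The stage names follow Lockheed Martin’s paper; the example signals are hypothetical:

```python
# The seven stages of the Cyber Kill Chain, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

# Hypothetical signals observed on the network. Note there is nothing
# at the exploitation stage: the zero-day itself went unseen.
observed = {
    "delivery": ["email with macro-enabled attachment"],
    "command_and_control": ["beaconing to rare external domain"],
}

# Any stage that fires still exposes the intrusion and lets a
# defender break the chain before the mission completes.
detected_stages = [s for s in KILL_CHAIN if observed.get(s)]
print(detected_stages)  # -> ['delivery', 'command_and_control']
```

This is exactly the zero-day argument: the exploit is invisible to signatures, but the delivery before it and the C2 traffic after it are not.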

To quote from the Lockheed Martin paper:
Using a kill chain model to describe phases of intrusions, mapping adversary kill chain indicators to defender courses of action, identifying patterns that link individual intrusions into broader campaigns, and understanding the iterative nature of intelligence gathering form the basis of intelligence-driven computer network defense (CND). Institutionalization of this approach reduces the likelihood of adversary success, informs network defense investment and resource prioritization, and yields relevant metrics of performance and effectiveness. The evolution of advanced persistent threats necessitates an intelligence-based model because in this model the defenders mitigate not just vulnerability, but the threat component of risk, too.

Hence, I believe that in order to succeed in the race against cybersecurity adversaries, who use zero-day exploits and vulnerabilities to accomplish their missions, enterprises must evolve from signature- and discrete-event-based detection to a holistic, Cyber Kill Chain-based intrusion detection framework.

This is a very interesting topic, and I will be talking more about it in my forthcoming posts on this blog.

Title Image Courtesy: slashgear.com

Dos and Don’ts with Document Embedded Objects

Phishing is a form of online identity theft in which fraudsters trick Internet users into submitting personal information to illegitimate web sites.
The word ‘phishing’ is a neologism created as a homophone of fishing, owing to the similarity of using fake bait in an attempt to catch a victim (hence the picture I have used in this post).
Phishing scams are usually presented in the form of spam or pop-ups and are often difficult to detect. Once fraudsters obtain your personal information, they can use it for all types of identity theft, putting your good credit and good name at risk. One of the most widely used phishing techniques is email spoofing, in which the attacker sends a legitimate-looking email to a victim, containing links to websites that are malicious or controlled by the attacker. Email is also the most widely used delivery mechanism for the attack payload or the exploit itself. (I shall talk about delivery mechanisms and the larger Cyber Kill Chain concept in a later post.)
These emails can also contain attachments like Word documents, spreadsheets, PDF files, etc. Embedding objects within these attachments is one of the easiest ways of delivering the payload, because embedding objects is something we IT professionals also use frequently for legitimate reasons, and attackers leverage this to their advantage.
As Amanda Stewart of FireEye says in her recent post on their blog:
Phishing emails are one of the most common delivery mechanisms for malware authors. The attachments in those phishing emails have a variety of payloads. Well-known delivery methods include: exploiting vulnerabilities in the document program (e.g., doc, xls, rtf), using macros, or embedding user-clickable objects that drop payloads. Out of all these methods, embedding objects in the document is considered a “gray area” because both IT professionals and malware authors use this technique.
 
In the post, she also talks in detail about the Dos and Don’ts when embedding objects within documents.
Dos
 
  • If you must send someone an installation executable or even a form helper program, compress the executable in a password protected ZIP file, where the password is not easily guessable. Using a standardized strong password limits access to users or employees that need to access the program.
  • Educate your employees to not click on objects in documents without first confirming the source email address.
  • Enforce content filtering on web and email to prevent employees receiving executable files from the Internet.
  • Remove admin/local admin privileges to prevent employees installing new and unknown software onto devices.
  • Consider Advanced Threat Prevention technologies that can examine emails for sophisticated multi-stage droppers that evade detection of all email security gateways today. 
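The content-filtering recommendation above can be sketched in a few lines. This is a hypothetical, simplified gateway check of my own (real products also unpack archives and inspect file magic bytes rather than trusting filenames):

```python
from email.message import EmailMessage

# File types that execute directly when double-clicked.
BLOCKED_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".ps1"}

def has_blocked_attachment(msg):
    """Quarantine-worthy if any attachment carries an executable extension."""
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False

# Build a sample phishing-style message with an executable attached.
msg = EmailMessage()
msg["Subject"] = "Invoice"
msg.set_content("Please see attached.")
msg.add_attachment(b"MZ...", maintype="application",
                   subtype="octet-stream", filename="invoice.exe")

print(has_blocked_attachment(msg))  # -> True
```

Filename-based filtering alone is easy to evade, which is exactly why the list also recommends password-protected ZIPs for legitimate executables and advanced threat-prevention tooling for everything else.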
 
Here is the link to her post; a must read for IT Admins, and also for Security Analysts and Incident Responders: https://www.fireeye.com/blog/threat-research/2015/04/dos_and_don_ts_with.html
Picture courtesy: http://www.cyberoam.com

Digital Intelligence – Whitepaper by GCHQ’s Former Director

David Omand was the Director of GCHQ from 1996 to 1997, and the UK’s security and intelligence coordinator from 2000 to 2005. If you don’t know already, the Government Communications Headquarters (GCHQ) is a British intelligence and security organisation responsible for providing signals intelligence (SIGINT) and information assurance to the British government and armed forces.
He has just published a new paper, “Understanding Digital Intelligence and the Norms That Might Govern It.” The paper does present the government’s perspective on the whole internet governance topic, one which has gained a great deal of significance and attention after Edward Snowden’s revelations, but it is definitely an interesting read.
Executive Summary:
This paper describes the nature of digital intelligence and provides context for the material published as a result of the actions of National Security Agency (NSA) contractor Edward Snowden. Digital intelligence is presented as enabled by the opportunities of global communications and private sector innovation and as growing in response to changing demands from government and law enforcement, in part mediated through legal, parliamentary and executive regulation. A common set of organizational and ethical norms based on human rights considerations are suggested to govern such modern intelligence activity (both domestic and external) using a three-layer model of security activity on the Internet: securing the use of the Internet for everyday economic and social life; the activity of law enforcement — both nationally and through international agreements — attempting to manage criminal threats exploiting the Internet; and the work of secret intelligence and security agencies using the Internet to gain information on their targets, including in support of law enforcement.
 
He suggests that the norms applicable to digital intelligence must broadly cover the following, which is definitely reassuring:
  • There must be sufficient sustainable cause
  • All concerned must behave with integrity
  • The methods to be used must be proportionate
  • There must be right authority
  • There must be reasonable prospect of success
  • Necessity
The full paper is available here:
Picture courtesy: www.cigionline.org