Cybersecurity in 2017 and going forward…

2017 has come to an end, and it’s time to reflect on the year gone by and look at what is in store for us, the cybersecurity professionals, in 2018.

To start with, let’s look at some of the major security events and incidents of 2017. The following are five security and data breaches that made headlines all over the globe:

Equifax

This breach was publicly disclosed in September this year, and it is truly vast: the stolen data included Social Security and driver’s license numbers of some 143 million US consumers. Credit card numbers and other personally identifying information were also compromised for a smaller number of consumers. With this sensitive data now exposed, the operational impact on businesses is that many organizations, including banks, that rely on that data to prove the identity of online users may need to implement additional, expensive and cumbersome authentication procedures.

Apparently, the attack vector was a simple one: the cybercriminals leveraged CVE-2017-5638, a critical remote code execution vulnerability in Apache Struts 2. Ironically, this wasn’t a zero-day; a patch for the vulnerability had been available since March this year.

Yahoo

Although the attack occurred, or at least began, in 2013 (the same year Target was hit by its own breach), it only came to light this year, when parent company Verizon announced in October that every one of Yahoo’s 3 billion accounts was compromised in 2013. That’s more than three times the initial assessment made last year. Beyond the massive size of the attack, what is astonishing is that it remained largely hidden for so many years. It makes me wonder: how many other huge attacks have occurred that we still don’t know about?

Uber

Last month, Uber CEO Dara Khosrowshahi revealed that two hackers broke into the company in late 2016 and stole personal data, including phone numbers, email addresses and names, of 57 million Uber users. Among those records were 600,000 driver’s license numbers of the company’s drivers. Instead of disclosing the breach, as the law requires, Uber paid the hackers $100,000 to conceal the fact that a breach had occurred. Why is this attack significant?

  • The vast number of records compromised
  • The extortion angle: this was data theft followed by a ransom demand to keep it quiet, rather than a classic ransomware infection (ransomware being the most widely used attack vector in 2017)
  • The company paid the attackers (and thus encouraged the illegal industry), and,
  • Nobody at such a large company disclosed the breach.

Shadow Brokers leak of NSA/CIA files

In 2013, a mysterious group of hackers calling itself the Shadow Brokers stole a few disks full of National Security Agency secrets. Since the beginning of 2017, they have been dumping these secrets on the internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while putting sophisticated cyber-weapons in the hands of anyone who wants them. This hack is significant because, with all this information now in the hands of cybercriminals, we are already seeing smaller groups commit crimes that used to be limited to well-funded, state-sponsored attackers. The level of sophistication among attackers took a giant leap forward.

WannaCry

Enough has been said and written about WannaCry, the ransomware outbreak that defined the year. It plagued hundreds of thousands of machines in massive global cyberattacks. The widespread impact of WannaCry can be attributed to the NSA losing control of its key hacking tools to the Shadow Brokers group: the leaked EternalBlue exploit enabled attackers to install backdoors that spread the ransomware from machine to machine across the globe.

A key outcome of, and lesson from, these incidents has been organisations shifting their focus to incident/breach detection and response. More importantly, the need for automation in these two areas, powered by machine learning techniques, has gained a lot of momentum.

Looking back through 2017, there has been significant progress in the effective use of machine learning techniques for detecting and responding to cyber attacks. The following examples demonstrate this.

I have broken these examples into two categories: the offensive side (the cybercriminal’s perspective) and the defensive side (the security architect’s and incident analyst’s perspective).

Developments in Offensive security

Attackers have started leveraging machine learning more actively to improve their attacks. There is not much evidence of this in the breaches I called out above, so I will pick a few examples from the recently held Black Hat USA conference.

One Black Hat talk, Bot vs Bot: Evading Machine Learning Malware Detection, explored how adversaries could use ML to figure out what ML-based malware detection mechanisms were “looking” for. They could then create malware that avoided those traits and thus evaded detection. Another talk, Wire Me Through Machine Learning, investigated how spammers might improve the success rate of their phishing campaigns by leveraging ML to improve their phishing emails.
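To make the evasion idea concrete, here is a minimal, hypothetical sketch (my own illustration, not the tooling from the talk): a surrogate classifier is trained on toy binary malware features, and a flagged sample is greedily perturbed, one feature flip at a time, until the classifier calls it benign.

```python
# Hypothetical sketch of ML-vs-ML evasion: greedily flip binary features on a
# "malicious" sample to lower a surrogate detector's malicious score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for a matrix of binary file attributes (imports, strings, ...)
X = rng.integers(0, 2, size=(1000, 20))
y = (X[:, 0] & X[:, 3]).astype(int)           # pretend two features drive "malicious"

surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

sample = X[y == 1][0].copy()                  # a sample the detector flags
for _ in range(20):
    if surrogate.predict_proba([sample])[0, 1] < 0.5:
        break                                 # classifier now says benign
    # flip whichever single feature lowers the malicious score the most
    scores = []
    for j in range(len(sample)):
        trial = sample.copy()
        trial[j] ^= 1
        scores.append(surrogate.predict_proba([trial])[0, 1])
    sample[int(np.argmin(scores))] ^= 1

print("evaded:", surrogate.predict([sample])[0] == 0)
```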

At DEF CON, researchers shared how to “weaponize machine learning (humanity is overrated anyway)”. They introduced a tool called DeepHack, an open-source AI that hacks web applications. Meanwhile, ML was an underlying theme in many other talks that weren’t directly about it. It’s clear that cybersecurity researchers and attackers alike are leveraging ML and AI to speed up and improve their projects.

Developments in Defensive security

1. Let’s start with the Equifax breach. As mentioned earlier, the attackers exploited the Apache Struts Jakarta Multipart Parser vulnerability, CVE-2017-5638. In this particular case, we could apply various anomaly detection techniques. Some examples include suspicious process/service activity anomalies (e.g., a rare process/MD5 for a given user or host), suspicious network activity anomalies (e.g., traffic to rare domains), and suspicious Tomcat web server access anomalies. Such anomaly detection rules can be a starting point for a machine-learning-based intrusion detection tool; a minimal sketch of one such rule follows.
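As a toy illustration of the first rule type, here is a sketch that scores how unusual a process is for a given host (the field names and events are illustrative assumptions):

```python
# A minimal "rare process for host" anomaly score: the surprise
# (self-information) of seeing a given process on a given host.
import math
from collections import Counter

events = [
    {"host": "web01", "process": "tomcat.exe"},
    {"host": "web01", "process": "tomcat.exe"},
    {"host": "web01", "process": "whoami.exe"},   # unusual on a web server
]

counts = Counter((e["host"], e["process"]) for e in events)
per_host = Counter(e["host"] for e in events)

def rarity(host, process):
    """Higher score = rarer (more anomalous) process for this host."""
    p = counts[(host, process)] / per_host[host]
    return -math.log2(p)

for (host, proc) in counts:
    print(host, proc, round(rarity(host, proc), 2))
```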

2. Detecting web application attacks: Web applications are a primary target for cybercriminals, as these applications are mostly exposed to the internet and, in many cases (as also seen in the Equifax attack), are not effectively configured to prevent exploitation of web application vulnerabilities. One of the most widely used web application attacks is SQL injection. There are many methods of detecting it without depending on signature-based systems, using only machine learning algorithms. One such approach, described in detail in this white paper, identifies SQL injection payloads by the attributes of their HTTP parameters, using a Bayesian classifier. Such methods can be considerably more effective than traditional web application firewalls.
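The gist of that approach can be sketched in a few lines: a Bayesian classifier over character n-grams of HTTP parameter values (the training strings below are toy examples, not the paper’s dataset):

```python
# A hedged sketch of the idea: Naive Bayes over character n-grams of
# HTTP parameter values, trained on toy benign/malicious examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

benign = ["alice", "id=42", "page=3", "q=shoes", "name=bob"]
malicious = [
    "' OR '1'='1", "1; DROP TABLE users--",
    "admin'--", "1 UNION SELECT password FROM users",
]
X = benign + malicious
y = [0] * len(benign) + [1] * len(malicious)

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-grams
    MultinomialNB(),
).fit(X, y)

print(clf.predict(["q=books", "x' OR 1=1--"]))   # expect [0 1]
```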

3. A deep learning approach to network intrusion detection: In the last two years, there have been many developments in using conventional machine learning algorithms to build network intrusion detection systems (NIDS). These tools are essentially classifiers that differentiate normal traffic from anomalous traffic. Such NIDSs perform a feature selection task, extracting a subset of relevant features from the traffic dataset to enhance the classification results. Feature selection helps eliminate the possibility of incorrect training by removing redundant features and noise.

More recently, however, deep learning based methods have been successfully applied in audio, image and speech processing. These methods aim to learn a good feature representation from a large amount of unlabelled, unstructured data, and subsequently apply those learned features to a limited amount of labelled data in a supervised classification task. The labelled and unlabelled data may come from different distributions, but they must be relevant to each other. Thus, by combining signals from unlabelled, unstructured data with labelled, structured (log) data, we can significantly improve our chances of detecting anomalous behaviour, and in turn a real cyber incident. Here is an interesting white paper that describes one such system in detail.
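A toy sketch of the reconstruction-error idea: a bottlenecked network is trained only on (synthetic) normal traffic and then flags records it fails to reconstruct. The feature values here are stand-ins, not real flow features.

```python
# An autoencoder-style anomaly detector: a bottlenecked MLP trained to
# reproduce normal traffic features; high reconstruction error = anomalous.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(2000, 10))        # stand-in for flow features
scaler = StandardScaler().fit(normal)
Xn = scaler.transform(normal)

ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=500,
                  random_state=0).fit(Xn, Xn)     # learn to reconstruct input

def anomaly_score(x):
    x = scaler.transform(x)
    return np.mean((ae.predict(x) - x) ** 2, axis=1)   # reconstruction error

attack = rng.normal(4, 1, size=(5, 10))           # shifted "attack" traffic
print(anomaly_score(normal[:5]))                  # low errors
print(anomaly_score(attack))                      # noticeably higher errors
```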

4. Detecting WannaCry using machine learning: Ransomware has exploded in the past two years, as software programs with names like Locky and WannaCry infect hosts in high-profile environments on a weekly basis. From power utilities to healthcare systems, ransomware indiscriminately encrypts all files on the victim’s computer and demands payment (usually in cryptocurrency, like Bitcoin). Conventional detection techniques routinely fail, as new variants of these malware families are released daily, even hourly. One potentially useful anti-ransomware tool that uses machine learning is ShieldFS, created by a group of researchers from Politecnico di Milano and Trend Micro and presented at Black Hat 2017. The key to this technique is applying machine learning to operating-system-level file access patterns.

Implemented as a Windows filesystem filter running in the kernel, ShieldFS isn’t a filesystem proper; instead, it adds functionality to the underlying filesystem. Two of the most common challenges in machine learning are feature engineering (coming up with a list of “features” describing the input) and feature selection (figuring out which of those features productively contribute to generating the correct answer). Feature engineering in ShieldFS seemed straightforward to me, since many of the features were simple counts of the event types the filter observed, such as directory listings and writes. The researchers were also fortunate that so many of the features showed obvious qualitative differences between malicious and benign programs, making feature selection a high-confidence process as well.

Using binary inspection (“static analysis”), they were able to supplement results based on operation statistics (“dynamic analysis”). The team implemented a multi-tiered machine learning model that preserves long-term trends while still reacting to new behavioural patterns. And thanks to a copy-on-write policy, if a process started to exhibit ransomware behaviour, they could kill it and restore all the affected copies. The system detected ransomware with a 96.9% success rate, and even in the remaining 3.1% of cases the original content was still preserved, so 100% of encrypted files could be restored. That is unheard of in the world of signature-based malware detection tools.
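A hedged sketch of the classification step (my illustration of the idea, not the actual ShieldFS implementation; the feature names are assumptions):

```python
# Classify processes as ransomware-like from counts of filesystem operations
# observed in a time window, in the spirit of ShieldFS's dynamic features.
from sklearn.ensemble import RandomForestClassifier

# [dir_listings, files_read, files_written, files_renamed, avg_write_entropy]
benign_procs = [
    [5, 40, 3, 0, 0.4],
    [2, 10, 1, 0, 0.3],
    [8, 25, 5, 1, 0.5],
]
ransomware_procs = [
    [60, 500, 480, 450, 7.9],    # mass reads/writes, high-entropy output
    [45, 300, 290, 280, 7.8],
]
X = benign_procs + ransomware_procs
y = [0] * len(benign_procs) + [1] * len(ransomware_procs)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

suspect = [[50, 400, 390, 370, 7.7]]             # new process under observation
if clf.predict(suspect)[0] == 1:
    print("ransomware-like: kill process, restore copy-on-write shadows")
```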

How will 2018 turn out?

Based on the cybersecurity events we saw in 2017, here are some of the trends to watch out for in 2018. Though not intended to be a comprehensive overview, the following are some of the key areas in cybersecurity that will undoubtedly shape the security conversations of 2018.

GDPR

Data privacy and data security have long been considered two separate missions with two separate and distinct objectives. All that will change in 2018. With serious global regulations kicking into effect, especially in Europe, and with regulatory responses to data breaches increasing, organizations will build new data management frameworks centered on controlling data: who sees what data, in what state, and for what purpose. 2018 will prove that cybersecurity without privacy is a thing of the past.

Ransomware will continue to dominate

Ransomware will continue to represent the most dangerous threat to organizations and end users. The number of new ransomware families will continue to increase, and authors will focus more on mobile devices, implementing new evasion techniques that make these threats even more efficient and difficult to eradicate.

New ransomware-as-a-service platforms will become available on the dark web, making it very easy for wannabe crooks to arrange their own ransomware campaigns.

IoT, a privileged target of hackers

During 2017, botnets recruited over 122,000 IP cameras for DDoS attacks, and IoT attacks on wireless routers virtually shut down the internet for several hours in a day. Baby and pet monitors, medical devices, and dozens of other gadgets were hacked. Although we are a long way from securing the IoT, these incidents served as a wake-up call, and many organizations have added IoT security to their agendas and are talking seriously about securing it moving forward.

Critical infrastructure to include Social media too

Until recently, social media was just a fun way to communicate and stay up to date with friends, family and the latest viral videos. But along the way, as we also began to follow various influencers and use Facebook, Twitter and other platforms as curators for our news consumption, social media became inextricably linked with how we experience and perceive our democracy.

The definition of critical infrastructure, previously limited to areas like power grids and seaports, will likely expand to include internet social networks. While a downed social network will not prevent society from functioning, these platforms have the ability to shape public opinion and influence elections, making their security essential to preserving our democracy. Protecting them from cyberattacks will become an utmost necessity.

 

Standardised hacking techniques

In 2018, more threat actors will adopt plain-vanilla toolsets designed to remove any tell-tale signs of their attacks. As mentioned earlier, this can partly be attributed to the NSA and CIA toolkits now available to rookies, thanks to the Shadow Brokers.

For example, we will see backdoors sporting fewer features and becoming more modular, creating smaller system footprints and making attribution more difficult than ever.

And as accurate attribution becomes more challenging, the door opens to even more ambitious cyberattacks and influence campaigns from nation-states and cybercriminals alike.

Cryptocurrencies

The rapid and sustained increase in the value of some cryptocurrencies will push crooks to intensify their fraudulent activities against virtual currency schemes.

Cybercriminals will continue to use malware to steal funds from victims’ computers or to deploy hidden mining tools on compromised machines.

Another perspective: cryptocurrencies including Bitcoin, Ethereum, Litecoin and Monero maintain a total market capitalisation of well over $1 billion, which makes them an ever more appealing target for hackers as their market value increases. Several hacks against Ethereum have temporarily dropped its value in the past few years. So there is a fair chance that, in 2018, a major hack against one of these cryptocurrencies will damage public confidence.

Artificial Intelligence as a double-edged sword

Across the board, more criminals will use AI and machine learning to conduct their crimes. Ransomware attacks will be automated, and bank theft will be conducted by organized gangs using machine learning to attack in more intelligent ways. Smaller groups of criminals will be able to cause greater damage by using these new technologies to breach companies and steal data.

At the same time, as I mentioned above, research and practical applications of machine learning and AI in detecting and responding to cyber attacks are improving month over month. Large enterprises will turn to AI to detect and protect against sophisticated new threats. AI and machine learning will enable them to increase their detection rates and dramatically decrease the false alarms that so easily lead to alert fatigue and to incident responders missing real threats, resulting in significantly reduced MTTD (mean time to detect) and MTTR (mean time to respond).

 

Thanks for reading.

I look forward to your comments and points of view as well.

Title image courtesy: https://www.askideas.com

Visualising the performance of Machine learning models

Evaluating the performance of machine learning models using metrics like accuracy, precision and recall is straightforward, but visualising that performance has never been easy. Ben Bengfort at District Data Labs has developed a Python library for this purpose, called Yellowbrick.

It definitely looks interesting. Our very own Charles Givre shows how this package can be used in his blog.

It's definitely worth a read.
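As a quick illustration, here is a minimal sketch of the kind of visualiser Yellowbrick provides (the dataset and model choices are mine, not from Charles's post):

```python
# Visualise a classifier's per-class precision/recall/F1 as a colour-coded
# report using Yellowbrick's ClassificationReport visualizer.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

viz = ClassificationReport(RandomForestClassifier(random_state=0))
viz.fit(X_train, y_train)      # fit the wrapped estimator
viz.score(X_test, y_test)      # compute metrics on held-out data
viz.show()                     # 'poof()' in older Yellowbrick releases
```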

Machine Learning and EU GDPR

In this post, I share my thoughts on the impact of using machine learning to conduct profiling of individuals in the context of the EU General Data Protection Regulation (hereafter referred to as the GDPR). My analysis is based specifically on Article 22 of the regulation (which can be found here), covering the “automated processing and profiling of data subjects” requirement.

One of the arguments I discuss is that, though using machine learning for profiling (of users/consumers, hereafter referred to as ‘data subjects’) may complicate data controllers’ compliance with their obligations under the GDPR, it may at the same time lead to fairer decisions for data subjects: human intervention in classifying data or people is flawed and subject to various factors, whereas machines can strip out much of the subjectivity and bias that humans introduce.

Lawful, Fair and Transparent

One of the fundamental principles of EU data protection law is that personal data must be processed lawfully, fairly and in a transparent manner.

GDPR’s definition of ‘processing’ is as follows:
‘any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction’

‘Profiling’ is a subset of automated processing, and GDPR defines it as:
‘the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’.

Now, let’s analyse the three key tenets of the GDPR requirement: personal data must be processed lawfully, fairly and transparently.

Lawfulness

If we break down the GDPR’s definition of ‘profiling’ in the context of machine learning, the following are the three key elements of the process:

  • Data collection
  • Model development
  • Decision making

The outcome of these steps is that machine learning is used for:

  • Automated data processing for profiling purposes
  • Automated decision making, based on the profiles built

Data collection

The regulation says that the collection of personal data should comply with the data protection principles and that there must be a lawful ground for processing this data. This means that personal data should only be collected for specified, explicit and legitimate purposes, and should not subsequently be processed in a manner incompatible with those purposes.

A machine learning algorithm may build a profile of a subject based on data provided by the ‘data controller’, by a third party, or by both. Many organisations use cloud computing services for these activities, as the process may require significant computational and storage resources. Depending on the nature of the business/application/use case, this processing may take place locally on the data controller’s machines while a copy of the data is also sent to the cloud to continue the dynamic training of the algorithm.

Elaborating on the ‘lawfulness’ of this profiling: an individual’s personal data are not only processed to create descriptive profiles about them, but also to check against predefined patterns of normal behaviour and to detect anomalies. This stage of profile construction will be subject to the GDPR rules governing the processing of personal data, including the legal grounds for processing this data.

An interesting point to note is that the final text of Article 22 of the GDPR refers to a ‘data subject’ and not a ‘natural person’. This could be interpreted to mean that the protection against solely automated decision-making might not apply if the data processed are anonymised. In other words, if profiling does not involve the processing of data relating to identifiable individuals, the protection against decisions based on automated profiling may not be applicable, even if such decisions may impact a person’s behaviour or autonomy. Moreover, as Article 22 seems to apply only to the profiling of individual data subjects and not groups, the question arises whether data subjects are protected against decisions that have significant effects on them but are based on group profiling.

This can be an issue because, if inferences about individuals are made based on characteristics shared with other members of a group, there may be a significant number of false positives or false negatives. A good example of this ‘anonymised’ data collection for machine learning applications is Apple’s approach, which it refers to as ‘differential privacy’.
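A toy sketch of the core differential privacy mechanism, the Laplace mechanism, is below (illustrative parameters; certainly not Apple’s actual implementation):

```python
# The Laplace mechanism: add calibrated noise to an aggregate statistic so
# that no individual's contribution is identifiable from the released value.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise scaled to sensitivity/epsilon."""
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

# e.g., "how many users typed this emoji today" without exposing any one user
print(round(dp_count(1234)))
```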

Decision making

When it comes to decision-making based on the ‘processing’ of personal data described above, does ‘automated individual decision-making’ only cover situations where a machine makes decisions without any involvement by human actors? Probably not, in most situations, as some human intervention is likely to occur at some point in the automated decision-making process. And so I think the scope of the protection is broader than wholly automated decision-making. That said, for this protection to become inapplicable, the human intervention would have to be actual and substantive, i.e. humans would have to exercise real influence on the outcome of a particular decision-making process.

In addition, the GDPR does not specify whether the decision itself has to be made by a human or whether it could potentially be made by a machine. Nevertheless, as I mentioned above, it is highly likely that one or more humans will be involved in designing the model, training it with data, and testing the system incorporating machine learning.

Legal impact

Another important element is that the decision has to produce legal effects on, or similarly significantly affect, the data subject. Examples include the automatic refusal of an online credit application or e-recruitment practices without human intervention. The effects can be material and/or immaterial, potentially affecting the data subject’s dignity, integrity or reputation. The requirement that the ‘effects’ be ‘legal’ means that the decision must be binding or must create legal obligations for the data subject.

Potential consequences of non-compliance

It is important to bear in mind that data controllers who violate the rights of data subjects under Article 22 shall ‘be subject to administrative fines up to 20,000,000 EUR, or in the case of an undertaking, up to 4 % of the total worldwide annual turnover of the preceding financial year, whichever is higher’. In the face of potential penalties of this magnitude, and considering the complexities of machine learning, data controllers may have apprehensions about using the technology for automated decision-making in certain situations. Moreover, data controllers may insist on contractual arrangements with providers in the machine learning supply chain that contain very specific provisions regarding the design, training, testing, operation and outputs of the algorithms, as well as the relevant technical and organisational security measures to be incorporated.

Fairness

Let’s now turn to the meaning of ‘fairness’ in the context of using machine learning either to carry out automated processing, including profiling, or to make automated decisions based on such processing. Whether personal data will be processed fairly may depend on a number of factors. Machine learning processes may be biased towards producing the results pursued by the person who built the model. The quantity and quality of the data used to train the algorithm, including the reliability of its sources and labelling, may also have a significant impact on the profiles constructed.
For example, an indirect bias may arise where the data relate to a minority group that has been treated unfairly in the past, in such a way that the group is underrepresented in specific contexts or overrepresented in others. Similarly, in a hiring application, if fewer women have been hired previously, data about female employees might be less reliable than data about male employees.

So the point is this: the reliability of machine learning for automated decision-making will depend on the techniques and the training data used. Further, machine learning techniques often perform better when the training data is large (more data about data subjects) and the variance is widely spread. However, this may collide with the data minimisation principle in EU data protection law, a strict interpretation of which is that ‘the data collected on the data subject should be strictly necessary for the specific purpose previously determined by the data controller’.

And so it is very important that data controllers decide, at the time of collection, which personal data they are going to process for profiling purposes. They will then have to provide the algorithm with only the data strictly necessary for the specific profiling purpose, even if that leads to a narrower representation of the data subject and possibly a less fair decision for him or her.

Transparency

Machine learning algorithms may be based on very different computational learning models. Some are more amenable to letting humans trace how they work; others may operate as a ‘black box’. For example, where a process utilises a decision tree, it may be easier to generate a human-readable explanation of how and why the algorithm reached a particular conclusion, though this very much depends on the size and complexity of the tree. The situation may be very different for neural-network-style algorithms, such as deep learning. This is because the conclusions reached by neural networks are ‘non-deductive and thus cannot be legitimated by a deductive explanation of the impact various factors at the input stage have on the ultimate outcome’.
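For instance, a small decision tree can be dumped as human-readable rules almost mechanically; here is a brief sketch using scikit-learn’s rule-export helper (the dataset choice is mine, purely for illustration):

```python
# A small illustration of the explainability gap: a decision tree's logic can
# be exported as human-readable rules, something with no direct equivalent
# for a deep neural network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
# Prints rules of the form:
# |--- petal width (cm) <= 0.80
# |   |--- class: 0
# ...
```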

This opacity of machine learning techniques might affect a data controller’s obligation to process a data subject’s personal data in a transparent way. Whether personal data are obtained directly from the data subject or from an indirect source, the GDPR imposes on the data controller the obligation, at the time when personal data are obtained, to provide the data subject with information regarding:

‘the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.’

Does this mean that whenever machine learning is used to conduct profiling, the data controller must provide information regarding the existence and type of machine learning algorithms used? If so, what does the term ‘logic’ refer to, and what would constitute ‘meaningful information’ about that logic? Does ‘logic’ refer to the dataset used to train the algorithm, or to the way the algorithm itself works in general, for example the mathematical/statistical theories on which the design of the algorithm is based? What about the criteria fed into the algorithm, the variables, and the weights attributed to those variables? And how does this relate to the roles of the different service providers forming part of the machine learning supply chain? All of these are important clarifications to be sought.

Given all these complexities, it is clear that transparency might not be the most appropriate way of seeking to ensure legal fairness; rather, compliance should be verified, for instance, through the use of technical tools: for example, tools that test for bias with respect to a particular attribute (such as the use of race in credit decisions), or a requirement that a certain class of analysis be applied for certain decisions. This might also be achieved by testing the trained model against a number of ‘discrimination testing’ datasets, or by assessing the actual outcomes of the machine learning process to prove that they comply with the lawfulness and fairness requirements.
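As an illustration of the outcome-testing idea, here is a minimal, hypothetical sketch of a disparate impact check over a model’s decisions (the data and the 80% threshold are illustrative conventions, not a legal standard):

```python
# Compare a model's positive-decision rates across groups (demographic
# parity) and flag a low disparate impact ratio (the informal "80% rule").
def selection_rates(decisions, groups):
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model's accept/reject outputs
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", ratio)
if ratio < 0.8:                               # common rule-of-thumb threshold
    print("warning: model may unfairly disadvantage one group")
```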

Conclusion

According to Article 22 of the GDPR, data subjects have a right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them. When data controllers use machine learning to carry out automated processing, including profiling of data subjects, they must comply with the requirement of lawful, fair and transparent processing. This may be difficult to achieve because of the way machine learning works and/or the way it is integrated into a broader workflow, which might involve data of different origins and reliability, specific interventions by human operators, and the deployment of machine learning products and services, including ‘Machine Learning as a Service’ offerings (provided by Amazon, Google, Microsoft and others).

In order to be compliant, data controllers must assess how using machine learning to carry out automated processing affects the different stages of profiling and the level of risk to data subjects’ rights, as well as how the data controller can produce evidence of compliance for the regulator and the data subject. Even in cases where automated processing, including profiling, is permitted by law, data controllers still have to implement appropriate measures to protect data subjects’ rights. The underlying objective of the GDPR is that a decision significantly affecting a person cannot simply be based on a fully automated assessment of his or her personal characteristics. However, as I called out at the very beginning of this post, in the context of machine learning it might in some cases be more beneficial for data subjects if a final decision is based on an automated assessment, as it is devoid of the prejudices induced by human intervention.

Whether a decision about us is being made by a human or by a machine, right now the best we can hope for is that such a decision, one that can produce legal effects or significantly affect us in any manner, will be as fair as humans can be. And eventually, we, as machine learning practitioners, must aim to build models whose decisions are far fairer than humans can be.

This takes into account that machines may soon be able to overcome the limitations of human decision makers and provide us with decisions that are demonstrably fair. Indeed, in some contexts it may already make sense to replace the current model, whereby individuals can appeal to a human against a machine decision, with one where individuals would have a right to appeal to a machine against a decision made by a human!

Well, that sounds a bit weird, doesn’t it? Has the time for Skynet to take over planet Earth finally arrived?

I am sure that many of the questions we machine learning enthusiasts and practitioners have about the implications of the GDPR will eventually be answered after the regulation becomes enforceable in May 2018. We will also see interesting changes to how machine learning models are designed and applied, especially in the context of personal data processing.

Title image courtesy: LinkedIn.com

AI powered Cyber Security startups

Artificial Intelligence (AI) and Machine Learning have become mainstream these days, but at the same time they are among the most used (and abused) terms of the last 2-3 years.

Last year’s Gartner hype cycle report (the 2016 Hype Cycle for Emerging Technologies, shown below) illustrates this trend clearly.

[Image: Gartner 2016 Hype Cycle for Emerging Technologies]

Why do we need AI in cybersecurity?

The biggest challenge in the cybersecurity threat management space today is the ability (or lack thereof) to effectively detect cyber attacks. One of the key levers in making detection work is reducing the dependency on the human element across the entire threat management lifecycle:

  • Be it the detection techniques (signatures, patterns, and for that matter ML models and their hyperparameters), or,
  • The incident “response” techniques:
    • involving human security analysts to analyse the detections, or,
    • human security administrators to remediate/block the attacks at the network or system level

Introducing automation and cognitive methods in each of these areas is the only way forward if we want to take the adversaries head-on. Numerous articles, presentations and whitepapers have been published on why machine learning (ML) and AI will play a key role in addressing the cyber threat management challenge.

In my pursuit of understanding how AI can be used effectively in the cybersecurity space, I have come across products developed by some of the leading startups in this domain. In this blog post, I attempt to share my thoughts on 10 of these products, chosen primarily on their market cap/revenue, IP (intellectual property) potential, and any reference materials available about their successful detections so far.

Note:

  • I have tried to cover as much breadth as I can in terms of products falling under the various domains of cybersecurity (network detection, UEBA, application security and data security), so there is a good chance I have missed some contenders in this area. AI in cyber is a rapidly growing field, and I hope to cover more ground in the coming months.
  • These products are listed below in no particular order.

Let’s get started.

1. PatternEx

Founded 2013, San Jose, California
https://www.patternex.com/
@patternex

PatternEx’s Threat Prediction Platform is designed to create “virtual security analysts” that mimic the intuition of human security analysts in real time and at scale. The platform reportedly detects ten times more threats with five times fewer false positives compared with approaches based on machine learning anomaly detection. Using a technology called “Active Contextual Modeling”, or ACM, the product synthesizes analyst intuition into predictive models. These models, when deployed across global customers, can reportedly learn from each other and achieve a network effect in detecting attack patterns.

Active Contextual Modeling facilitates communication between the artificial intelligence platform and the human analyst. Raw data is ingested, transformed into behaviors, and run through algorithms that surface rare events for an analyst to review. After investigation, the analyst attaches an appropriate label to each event. The system learns from these labels and automatically improves detection efficacy. Data models created through this process are flexible and adaptive: event accuracy is continuously improved, and historic data is retrospectively analyzed as new knowledge is added to the system.

Training the AI happens when it presents a set of alerts to human analysts, who review the alerts and classify them as attacks or not. Each label the analyst applies trains a supervised learning model that automatically adapts and improves. This trained AI is an interesting concept: it attempts to simulate a security analyst, helping the system improve its detections over time.
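Sketched below is my rough interpretation of such an analyst-in-the-loop cycle as an uncertainty-sampling loop; this is an assumption about the general shape of the technique, not PatternEx’s actual implementation:

```python
# An uncertainty-sampling loop: the model surfaces its least certain alerts,
# an "analyst" labels them, and the model is retrained on the growing set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # alert feature vectors
truth = (X[:, 0] + X[:, 3] > 1).astype(int)      # hidden ground truth

labeled = list(range(30))                        # a small initial label set
model = LogisticRegression().fit(X[labeled], truth[labeled])

for _ in range(5):
    proba = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)            # near 0.5 = least certain
    ask = [i for i in np.argsort(uncertainty) if i not in labeled][:10]
    labeled.extend(ask)                          # "analyst" supplies labels
    model.fit(X[labeled], truth[labeled])

print("labels used:", len(labeled))
```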

PatternEx was founded by Kalyan Veeramachaneni, Uday Veeramachaneni, Vamsi Korrapati, and Costas Bassias.

PatternEx has received funding of about $7.8M so far.

2. Vectra Networks

Founded 2011, USA
http://www.vectranetworks.com/
@Vectra_Networks

Vectra Networks’ platform is designed to identify cyber attacks as they are happening, as well as what the attacker is doing. Vectra automatically prioritizes the attacks that pose the greatest business risk, enabling organizations to quickly decide where to focus their time and resources. The company says the platform uses a next-generation compute architecture and combines data analytics and machine learning to detect attacks on every device, application and operating system. To do this, the system uses the most reliable source of information: network traffic. Logs only provide low-fidelity summaries of events that have already been seen, not what has been missed, and endpoint security is easy to compromise during an active intrusion.

The Vectra Networks approach to threat detection blends human expertise with a broad set of data science and machine learning techniques. This model, known as Automated Threat Management, delivers a continuous cycle of threat intelligence and learning based on cutting-edge research, global learning models, and local learning models. With Vectra, all of these different perspectives combine to provide an ongoing, complete and integrated view that reveals complex multistage attacks as they unfold inside your network.

They take an interesting approach, using both supervised and unsupervised ML models to detect cyber attacks. A “global learning” element uses supervised ML algorithms to build models that detect generic and newly known attack patterns, while a “local learning” element uses unsupervised ML algorithms to learn the local norms of an enterprise and then detect deviations from those norms.
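Here is a rough sketch of how a globally trained supervised detector and a locally trained anomaly detector might be combined (again, my interpretation of the public description, not Vectra’s code):

```python
# Combine a "global" supervised model (trained on labeled attacks from many
# sites) with a "local" unsupervised model (trained on this site's traffic).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

X_global = rng.normal(size=(1000, 6))
y_global = (X_global[:, 0] > 1.5).astype(int)    # stand-in attack label
global_model = RandomForestClassifier(random_state=0).fit(X_global, y_global)

X_local = rng.normal(size=(500, 6))              # this enterprise's traffic
local_model = IsolationForest(random_state=0).fit(X_local)

def triage(x):
    known = global_model.predict_proba(x)[:, 1]  # known-pattern score
    novel = -local_model.score_samples(x)        # higher = more anomalous
    return known, novel

known, novel = triage(rng.normal(2, 1, size=(3, 6)))
print("known-attack scores:", known.round(2), "novelty scores:", novel.round(2))
```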

Vectra Networks has received funding of about $87M so far and has seen very good traction in the enterprise threat detection space, where ML models are proving considerably more effective than conventional signature/pattern-based detections.

3. Darktrace

Founded 2013, UK
https://www.darktrace.com/
@Darktrace

Darktrace is inspired by the self-learning intelligence of the human immune system; its Enterprise Immune System technology iteratively learns a pattern of life for every network, device and individual user, correlating this information in order to spot subtle deviations that indicate in-progress threats. The system is powered by machine learning and mathematics developed at the University of Cambridge. Some of the world’s largest corporations rely on Darktrace’s self-learning appliance in sectors including energy and utilities, financial services, telecommunications, healthcare, manufacturing, retail and transportation.

Darktrace has a set of products that use ML and AI to detect and block cyber attacks:

DarkTrace (Core) is the Enterprise Immune System’s flagship threat detection and defense capability, based on unsupervised machine learning and probabilistic mathematics. It works by analyzing raw network data, creating unique behavioral models for every user and device, and for the relationships between them.

The Threat Visualizer is Darktrace’s real-time, 3D threat notification interface. As well as displaying threat alerts, the Threat Visualizer provides a graphical overview of the day-to-day activity of your network(s), which is easy to use, and accessible for both security specialists and business executives.

Darktrace ICS retains all of the capabilities of Darktrace in the corporate environment, creating a unique behavioral understanding of the ‘self’ for each user and device within an industrial control system’s network, and detecting threats that cannot be defined in advance by identifying even subtle shifts in expected behavior in the OT space.

Darktrace Antigena is capable of taking a range of measured, automated actions in the face of confirmed cyber-threats detected in real time by Darktrace. Because Darktrace understands the ‘pattern of life’ of users, devices, and networks, Darktrace Antigena is able to take action in a highly targeted manner, mitigating threats while avoiding over-reactions. It basically performs three steps, once a cyber attack is detected by the DarkTrace Core:

  • Stop or slow down activity related to a specific threat
  • Quarantine or semi-quarantine people, systems, or devices
  • Mark specific pieces of content, such as email, for further investigation or tracking

DarkTrace has received funding of about $105M so far.

4. StatusToday

Founded 2015, UK
http://www.statustoday.com/
@statustodayhq

StatusToday was founded by Ankur Modi and Mircea Danila-Dumitrescu. It is a SaaS-based, AI-powered insights platform that understands human behavior in the workplace, helping organizations ensure security, productivity and communication.
Through patent-pending AI that understands human behavior, StatusToday maps out human threats and key behavior patterns internal to the company.

In a nutshell, the product collects user activity log data from various IT systems, applications and servers, and even from everyday cloud services like Google Apps or Dropbox. After collecting this metadata, the tool extracts as many functional parameters as possible and presents them in easily understood report graphs. I suspect they use a link analysis model to plot the relationships between all these user attributes.

The core solution provides direct integrations with Office 365, Exchange, CRMs, company servers and G Suite (upcoming) to enable a seamless, no-effort Technology Intelligence Center.

StatusToday has been identified as one of the UK’s top 10 AI startups by Business Insider, TechWorld, VentureRadar and other forums in the EU region.

Status Today has received funding of about $1.2M so far.

5. Jask

Founded 2015, USA
http://jask.io/
@jasklabs

Jask aims to use AI to solve the age-old problem of the tsunami of logs fed into SIEM tools, which then generate the events, alerts and other indicators that security analysts face every day: a never-ending flood of unknowns that forces analysts to spend their valuable time sorting through indicators in the endless hunt for real threats.

At the heart of this is their product Trident, a big data platform for real-time and historical analysis over an unlimited amount of stored security telemetry. Trident collects this data directly from the network and complements it by fusing in other data sources, such as threat intelligence (through STIX and TAXII), to provide context about real threats. Once Trident identifies a sequence that indicates an attack, it generates SmartAlerts, which give analysts the full picture of an attack and let them spend their time on real analysis instead of an endless hunt for the attack story.

They have really interesting blog posts on their site, which are worth a read.

Jask has received funding of about $2M so far.

6. Fortscale

Founded 2012, Israel
https://fortscale.com/
@fortscale

Fortscale uses a machine learning system to detect abnormal account behavior indicative of credential compromise or abuse. The company was founded by security engineers from the Israel Defense Forces’ elite security unit. The product’s key ability is to rapidly detect and eliminate insider threats: from rogue employees to hackers with stolen credentials, Fortscale is designed to automatically and dynamically identify anomalous behaviors and prioritize the highest-risk activities within any application, anywhere in the enterprise network.

Behavioral data is automatically ingested from SIEM tools and enriched with contextual data; multi-dimensional baselines are created autonomously, and statistical analysis reveals any deviations, which are then captured in SMART Alerts. All of this can be viewed and analysed in the Fortscale console.

Fortscale was named a Gartner Cool Vendor (2016) in the UEBA, Fraud Detection and User Authentication category.

More info about the product can be found here.

Fortscale has received funding of about $40 million so far.

7. Neokami

Founded 2014, Germany & USA
https://www.neokami.com/
@neokami_tech

Neokami attempts to tackle a very important problem we all face today: keeping track of where all our own, and an enterprise’s, sensitive information resides. Neokami’s CyberVault uses AI to discover, secure and govern sensitive data in the cloud, on premise, or across physical assets. It can also scan images to detect sensitive information, using highly optimized NLP for text analytics and convolutional neural networks for image analytics.
In a nutshell, Neokami uses a multi-layer decision pipeline: it takes in a data stream or files and performs pattern matching, text analytics, image recognition, n-gram modelling and topic detection, using ML methods such as Random Forests, to learn user-specific sensitivity over time. After this analysis, a percentage sensitivity score is generated and assigned to the data, which can then be picked up for further analysis and investigation.
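A toy sketch of what such a sensitivity-scoring pipeline might look like (my assumption about its general shape, not Neokami’s implementation):

```python
# Score documents for sensitivity by combining regex pattern hits with
# n-gram text features, feeding both into a random forest.
import re
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b",      # SSN-like
            r"\b\d{16}\b",                 # card-number-like
            r"(?i)confidential"]

def pattern_hits(doc):
    return sum(bool(re.search(p, doc)) for p in PATTERNS)

docs = ["lunch menu for friday",
        "CONFIDENTIAL: merger terms attached",
        "customer SSN 123-45-6789 on file",
        "team offsite photos"]
labels = [0, 1, 1, 0]                      # analyst-provided sensitivity labels

vec = TfidfVectorizer(ngram_range=(1, 2))
X_text = vec.fit_transform(docs).toarray()
X = [[*row, pattern_hits(d)] for row, d in zip(X_text, docs)]

clf = RandomForestClassifier(random_state=0).fit(X, labels)

new_doc = "please treat as confidential - card 4111111111111111"
x_new = [[*vec.transform([new_doc]).toarray()[0], pattern_hits(new_doc)]]
print("sensitivity score: %.0f%%" % (100 * clf.predict_proba(x_new)[0, 1]))
```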

Some key use cases Neokami tackles are isolating PII to meet regulations such as GDPR and HIPAA, discovering a company’s confidential information and intellectual property, scanning images for sensitive information, and protecting information in Hadoop clusters, the cloud, endpoints and mainframes.

Neokami was acquired by Relayr in February this year, having received $1.1 million in funding from three investors.

8. Cyberlytic

Founded 2013, UK
https://www.cyberlytic.com/
@CyberlyticUK

Cyberlytic calls itself the ‘intelligent web application security’ company. Its elevator pitch: it provides advanced web application security, using AI to classify attack data, identify threat characteristics and prioritize high-risk attacks.

The founders have had a stint with the UK Ministry of Defence, where the product was first used and has since supported critical cybersecurity research projects in the department.

Cyberlytic analyzes web server traffic in real time and determines the sophistication, capability and effectiveness of each attack. This information is translated into a risk score that prioritizes incident response and helps prevent dangerous web attacks. The underlying ML models adapt to new and evolving threats without requiring the creation or management of firewall rules. The key to their detection is a patented ML classification approach, which appears to be more effective at detecting web application attacks than conventional signature/pattern-based detection.

Cyberlytic is a combination of two products: the Profiler and the Defender. The Profiler provides real-time risk assessment of web-based attacks by connecting to the web server and analyzing web traffic to determine the capability, sophistication and effectiveness of each attack. The Defender is deployed on web servers and acts on the Profiler’s assessment, blocking web-based cyber attacks before they reach critical web applications or the underlying data layer.

Cyberlytic has also been gaining a lot of attention in the UK and the EU: Real Business, an established publication in the UK, named Cyberlytic one of the UK’s 50 most disruptive tech companies in 2017.

Cyberlytic has received funding of about $1.24 million.

9. harvest.ai

Founded 2014, USA
http://www.harvest.ai/
@harvest_ai

Harvest.ai aims to detect and stop data breaches by using AI-based algorithms to learn the business value of critical documents across an organization, offering what it describes as an industry-first ability to detect and stop data breaches. In a nutshell, Harvest.ai is an AI-powered advanced DLP system with UEBA capabilities.

Key features of their product MACIE include:

  • It uses AI to track intellectual property across an organization’s network, including emails and other content derived from IP.
  • MACIE understands the business value of all data across a network and whether it makes sense for a given user to be accessing certain documents, a key indicator of a targeted attack.
  • MACIE can automatically identify the risk to the business of data being exposed or shared outside the organization, and remediate based on policies in near real time. It not only classifies documents but can identify true IP matches, protecting an organization’s sensitive documents, whether they concern technology, brand marketing campaigns or the latest pharmaceutical drug.
  • MACIE not only detects changes in a single user’s behavior; it has the unique ability to detect minor shifts across groups of users, which can indicate an attack.

Their blog has some interesting analysis of recent APT attacks and how MACIE detected them. Definitely worth a read.

Harvest.ai has received funding of about $2.71 million so far, and, interestingly, was acquired by Amazon in January this year for a reported $20 million.

10. Deep Instinct

Founded 2014, Israel
http://www.deepinstinct.com/
@DeepInstinctSec

Deep Instinct focuses on the endpoint as the pivot point for detecting and blocking cyber attacks, and thus falls under the EDR category. Something has been going on in Israel for the last few years: many cybersecurity startups (Cybereason, Demisto, IntSights, etc.) are being founded there by ex-IDF engineers, and a good portion of these startups have to do with Endpoint Detection and Response (EDR).

Deep Instinct uses deep learning to detect unknown malware in real time, just by analysing the raw bytes of the binaries the system picks up. The software runs efficiently on a combination of central processing units (CPUs) and graphics processing units (GPUs), using Nvidia’s CUDA software for running non-graphics workloads on graphics chips. The GPUs reportedly enable the company to do in a day what would take a CPU three months.

I couldn’t find enough documentation on their website to understand how this deep learning system actually works, but the site has a link to register for an online demo, so it must be worth a try.

They are also gaining a lot of attention in the EDR space, and NVIDIA selected Deep Instinct as one of the five most disruptive AI startups this year.

Deep Instinct has raised $50 million so far, from Blumberg Capital, UST Global, CNTP, and Cerracap.

Model Evaluation in Machine Learning

One of the most important activities for a data scientist is to measure and optimize the prediction accuracy of the machine learning models one has built. Though there are various approaches to doing this, they can be grouped into three major steps.

Sebastian Raschka, the author of the bestselling book “Python Machine Learning” and a Ph.D. candidate at Michigan State University developing new computational methods in the field of computational biology, has published an excellent article describing these steps.

In a nutshell, he breaks down the evaluation process into three main steps:

  1. Data generalisation – ensure that the training data and the test data have good variance and a fair proportion of the various classes. This can be achieved with a couple of techniques:
    • Stratification
    • Cross-validation – k-fold or bootstrap
    • Hold-out method – training set, hold-out set, test set
    • Bias-variance trade-off
  2. Algorithm selection – picking the algorithm best suited to the use case at hand
  3. Model selection (see the sketch after this list)
    • Hyperparameter tuning – cross-validation techniques
    • ‘Model parameters’ belong to models, while ‘hyperparameters’ (also called tuning parameters) belong to algorithms; for example, the depth of the trees in a random forest
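As a brief illustration of steps 1 and 3, here is a minimal scikit-learn sketch: a stratified hold-out split plus k-fold cross-validation for generalisation, and a grid search over a random forest’s tree depth for hyperparameter tuning (the dataset choice is mine):

```python
# Stratified splitting and k-fold cross-validation for evaluation, plus a
# cross-validated grid search over a hyperparameter (tree depth).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)            # stratified hold-out split

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"max_depth": [2, 4, 8, None]}, cv=cv)
search.fit(X_train, y_train)                     # hyperparameter tuning

print("best depth:", search.best_params_["max_depth"])
print("cross-validated accuracy: %.3f" % search.best_score_)
print("held-out test accuracy: %.3f" % search.score(X_test, y_test))
```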

Sebastian has put together a detailed three-part tutorial where he goes into the details of each of these steps:

These are great reads for anyone having a tough time picking the right model for their ML project, or having difficulty measuring its efficiency and accuracy.

Title Image courtesy: biguru.wordpress.com