AI-powered Cyber Security startups


Artificial Intelligence (AI) and Machine Learning have become mainstream these days, but at the same time they are also among the most used (and abused) terms of the last 2-3 years.

Last year’s Gartner hype cycle report (2016 Hype Cycle for Emerging Technologies – shown below) shows this trend clearly.

[Figure: Gartner 2016 Hype Cycle for Emerging Technologies]

Why do we need AI in Cybersecurity?

The biggest challenge in the cybersecurity threat management space today is the effective “detection” of cyber attacks – or rather, the lack of it. One of the key levers in making “detection” work is reducing the dependency on the “human” element across the entire threat management lifecycle:

  • Whether it is the detection techniques (signatures, patterns, and for that matter ML models and their hyper-parameters), or,
  • The incident “response” techniques:
    • involving human security analysts to analyse the detections, or,
    • human security administrators to remediate/block the attacks at the network or system level

Introducing automation and bringing cognitive methods into each of these areas is the only way forward to take the adversaries head-on. And there have been numerous articles, presentations and whitepapers published on why Machine Learning (ML) and AI will play a key role in addressing the cyber threat management challenge.

In my pursuit of understanding how AI can be used effectively in the cybersecurity space, I have come across products developed by some of the leading startups in this domain. In this blog post, I attempt to share my thoughts on 10 of these products, chosen primarily based on their market cap/revenue, IP (intellectual property) potential, and any reference material available about their successful detections so far.

Note:

  • I have tried to cover as much breadth as I can, in terms of covering products falling under various domains of cybersecurity – network detection, UEBA, application security and data security – so there is a good chance I have missed some contenders in this area. AI in cyber is a rapidly growing field, and I hope to cover more ground in the coming months.
  • These Products are listed below in no particular order.

Let’s get started.

1. PatternEx

Founded 2013, San Jose, California
https://www.patternex.com/
@patternex

PatternEx’s Threat Prediction Platform is designed to create “virtual security analysts” that mimic the intuition of human security analysts in real time and at scale. The platform reportedly detects ten times more threats with five times fewer false positives compared with approaches based on Machine Learning-Anomaly Detection technology. Using a new technology called “Active Contextual Modeling” or ACM, the product synthesizes analyst intuition into predictive models. These models, when deployed across global customers, can reportedly learn from each other and achieve a network effect in detecting attack patterns.

The process of Active Contextual Modeling (ACM) facilitates communication between the artificial intelligence platform and the human analyst. Raw data is ingested, transformed into behaviors, and run through algorithms to find rare events for an analyst to review. After investigation, the analyst attaches an appropriate label to each event. The system learns from these labels and automatically improves detection efficacy. Data models created through this process are flexible and adaptive. Event accuracy is continuously improved, and historic data is retrospectively analyzed as new knowledge is added to the system.

The AI is trained by presenting a set of alerts to human analysts, who review the alerts and define them as attacks or not. The analyst applies a label to each alert, which trains a supervised learning model that automatically adapts and improves. This is an interesting concept – a trained AI that attempts to simulate a security analyst, helping the system improve its detection over time.
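To make this concrete, here is a minimal sketch of such an analyst-in-the-loop workflow (my own illustration, not PatternEx’s actual implementation; the data and the “analyst” function are invented): an unsupervised detector surfaces rare events, the analyst labels them, and a supervised model is retrained on the accumulated labels.

```python
# Minimal analyst-in-the-loop sketch (illustrative only, not PatternEx's ACM).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)

def analyst_review(event):
    """Hypothetical analyst: label an event as attack (1) or benign (0)."""
    return int(event.sum() > 8)                 # stand-in for human judgement

labeled_X, labeled_y = [], []
classifier = RandomForestClassifier(random_state=0)

for cycle in range(5):                          # each iteration ~ one review cycle
    behaviors = rng.normal(size=(5000, 10))     # today's feature vectors derived from raw logs
    detector = IsolationForest(random_state=0).fit(behaviors)
    scores = detector.score_samples(behaviors)  # lower score = rarer event
    rare_idx = np.argsort(scores)[:20]          # the 20 rarest events go to the analyst
    for i in rare_idx:
        labeled_X.append(behaviors[i])
        labeled_y.append(analyst_review(behaviors[i]))
    if len(set(labeled_y)) > 1:                 # need both classes before training
        classifier.fit(np.array(labeled_X), np.array(labeled_y))
        print(f"cycle {cycle}: retrained on {len(labeled_y)} analyst labels")
```

Each pass through the loop adds analyst-confirmed labels, so the supervised model keeps improving on exactly the kinds of events the detector keeps surfacing.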

PatternEx was founded by Kalyan Veeramachaneni, Uday Veeramachaneni, Vamsi Korrapati, and Costas Bassias.

PatternEx has received funding of about $7.8M so far.

2. Vectra Networks

Founded 2011, USA
http://www.vectranetworks.com/
@Vectra_Networks

Vectra Networks’ platform is designed to instantly identify cyber attacks while they are happening, as well as what the attacker is doing. Vectra automatically prioritizes attacks that pose the greatest business risk, enabling organizations to quickly make decisions on where to focus their time and resources. The company says the platform uses a next-generation compute architecture and combines data analytics and machine learning to detect attacks on every device, application and operating system. And to do this, the system uses the most reliable source of information – network traffic. Logs only provide low-fidelity summaries of events that have already been seen, not what has been missed. Likewise, endpoint security is easy to compromise during an active intrusion.

The Vectra Networks approach to threat detection blends human expertise with a broad set of data science and machine learning techniques. This model, known as Automated Threat Management, delivers a continuous cycle of threat intelligence and learning based on cutting-edge research, global learning models, and local learning models. With Vectra, all of these different perspectives combine to provide an ongoing, complete and integrated view that reveals complex multistage attacks as they unfold inside your network.

They have an interesting approach of using supervised and unsupervised ML models to detect cyber attacks. A “Global Learning” element uses supervised ML algorithms to build models that detect “generic” and “new known” attack patterns. A “Local Learning” element uses unsupervised ML algorithms to learn the local norms of an enterprise and then detect deviations from those norms.
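A rough sketch of how such a combination could look (my own toy illustration, not Vectra’s implementation; the features, labels and thresholds are invented):

```python
# Rough sketch of combining a "global" supervised model (known attack patterns)
# with a "local" unsupervised model (deviations from an enterprise's own norms).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(7)

# Global model: trained offline on labelled traffic from many environments.
global_X = rng.normal(size=(2000, 6))
global_y = (global_X[:, 0] + global_X[:, 3] > 2).astype(int)   # toy "known attack" label
global_model = RandomForestClassifier(random_state=0).fit(global_X, global_y)

# Local model: learns this enterprise's own traffic norms, no labels needed.
local_baseline = rng.normal(size=(2000, 6))
local_model = IsolationForest(contamination=0.01, random_state=0).fit(local_baseline)

def score_flow(flow):
    """Flag a flow if it matches a known attack OR deviates from local norms."""
    known_attack = global_model.predict(flow.reshape(1, -1))[0] == 1
    local_anomaly = local_model.predict(flow.reshape(1, -1))[0] == -1
    return known_attack or local_anomaly

suspect = rng.normal(size=6) + 3               # an unusual flow
print("alert:", score_flow(suspect))
```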

Vectra Networks has received funding of about $87M so far, and has seen very good traction in the Enterprise Threat Detection space, where ML models are a lot more effective than conventional signature/pattern based detections.

3. Darktrace

Founded 2013, UK
https://www.darktrace.com/
@Darktrace

Darktrace is inspired by the self-learning intelligence of the human immune system; its Enterprise Immune System technology iteratively learns a pattern of life for every network, device and individual user, correlating this information in order to spot subtle deviations that indicate in-progress threats. The system is powered by machine learning and mathematics developed at the University of Cambridge. Some of the world’s largest corporations rely on Darktrace’s self-learning appliance in sectors including energy and utilities, financial services, telecommunications, healthcare, manufacturing, retail and transportation.

DarkTrace has a set of products, which use ML and AI in detecting and blocking cyber attacks:

DarkTrace (Core) is the Enterprise Immune System’s flagship threat detection and defense capability, based on unsupervised machine learning and probabilistic mathematics. It works by analyzing raw network data, creating unique behavioral models for every user and device, and for the relationships between them.

The Threat Visualizer is Darktrace’s real-time, 3D threat notification interface. As well as displaying threat alerts, the Threat Visualizer provides a graphical overview of the day-to-day activity of your network(s), which is easy to use, and accessible for both security specialists and business executives.

Darktrace ICS retains all of the capabilities of Darktrace in the corporate environment, creating a unique behavioral understanding of the ‘self’ for each user and device within an Industrial Control System’s network, and detecting threats that cannot be defined in advance by identifying even subtle shifts in expected behavior in the OT space.

Darktrace Antigena is capable of taking a range of measured, automated actions in the face of confirmed cyber-threats detected in real time by Darktrace. Because Darktrace understands the ‘pattern of life’ of users, devices, and networks, Darktrace Antigena is able to take action in a highly targeted manner, mitigating threats while avoiding over-reactions. It basically performs three steps, once a cyber attack is detected by the DarkTrace Core:

  • Stop or slow down activity related to a specific threat
  • Quarantine or semi-quarantine people, systems, or devices
  • Mark specific pieces of content, such as email, for further investigation or tracking

DarkTrace has received funding of about $105M so far.

4. StatusToday

Founded 2015, UK
http://www.statustoday.com/
@statustodayhq

StatusToday was founded by Ankur Modi and Mircea Danila-Dumitrescu. It is a SaaS based AI-powered Insights Platform that understands human behavior in the workplace, helping organizations ensure security, productivity and communication.
Through patent-pending AI that understands human behavior, StatusToday maps out human threats and key behavior patterns internal to the company.

In a nutshell, this product collects user activity log data from various IT systems, applications, servers and even everyday cloud services like Google Apps or Dropbox. After collecting this metadata, the tool extracts as many functional parameters as possible and presents them in easily understood reports and graphs. I think they use a link-analysis ML model to plot the relationships between all these user attributes.
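To illustrate the link-analysis idea (my own toy sketch with invented data; StatusToday’s actual approach is not documented here), one could build a user-to-resource graph from the activity logs and flag users whose connectivity looks unusual:

```python
# Toy link-analysis sketch (assumed approach, invented data): build a
# user-to-resource graph from activity logs and flag unusually connected users.
import networkx as nx

activity_log = [
    ("alice", "crm"), ("alice", "mail"), ("bob", "mail"),
    ("bob", "fileserver"), ("carol", "crm"), ("carol", "mail"),
    ("carol", "fileserver"), ("carol", "hr-db"), ("carol", "payroll"),
]

g = nx.Graph()
g.add_edges_from(activity_log)                 # users and resources become nodes

users = {u for u, _ in activity_log}
degrees = {u: g.degree(u) for u in users}      # distinct resources each user touched
mean = sum(degrees.values()) / len(degrees)

for user, deg in degrees.items():
    if deg > 1.5 * mean:                       # crude "unusually connected" threshold
        print(f"{user} accessed {deg} resources (mean {mean:.1f}) - worth a look")
```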

The core solution provides direct integrations with Office 365, Exchange, CRMs, Company Servers and G-Suite (upcoming) to enable a seamless no-effort Technology Intelligence Center.

StatusToday has been identified as one of UK’s top 10 AI startups by Business Insider, TechWorld, VentureRadar and other forums, in the EU region.

Status Today has received funding of about $1.2M so far.

5. Jask

Founded 2015, USA
http://jask.io/
@jasklabs

Jask aims to use AI to solve an age-old problem: the tsunami of logs fed into SIEM tools, which then generate the events, alerts and other indicators that security analysts face every day – a never-ending flood of unknowns that forces analysts to spend their valuable time sorting through indicators in an endless hunt for real threats.

At the heart is their product Trident, which is a big data platform for real time and historical analysis over an unlimited amount of stored security telemetry data. Trident collects all this data directly from the network and complements that with the ability to fuse other data sources such as threat intelligence (through STIX and TAXII), providing context into real threats. Once Trident identifies a sequence that indicates an attack, it generates SmartAlerts, which analysts can use to have the full picture of an attack, also allowing them to spend their time on real analysis instead of an endless hunt for the attack story.

They have really interesting blog posts on their site, which are worth a read.

Jask has received funding of about $2M so far.

6. Fortscale

Founded 2012, Israel
https://fortscale.com/
@fortscale

Fortscale uses a machine learning system to detect abnormal account behavior indicative of credential compromise or abuse. The company was founded by security engineers from the Israeli Defense Force’s elite security unit. The product’s key ability is to rapidly detect and eliminate insider threats. From rogue employees to hackers with stolen credentials, Fortscale is designed to automatically and dynamically identify anomalous behaviors and prioritize the highest-risk activities within any application, anywhere in the enterprise network.

Behavioral data is automatically ingested from SIEM tools and enriched with contextual data; multi-dimensional baselines are created autonomously, and statistical analysis reveals any deviations, which are then captured in SMART Alerts. All of this can be viewed and analysed in the Fortscale Console.
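A rough sketch of this baseline-and-deviation idea (my own illustration with invented numbers, not Fortscale’s implementation):

```python
# Illustrative sketch: build per-user baselines from SIEM events and flag
# statistically unusual activity as a "smart alert".
import numpy as np

# invented example: daily count of file-share accesses per user, last 30 days
history = {
    "alice": np.random.default_rng(1).poisson(20, 30),
    "bob":   np.random.default_rng(2).poisson(5, 30),
}
today = {"alice": 24, "bob": 60}               # bob suddenly accesses far more shares

for user, counts in history.items():
    mean, std = counts.mean(), counts.std() + 1e-9
    z = (today[user] - mean) / std             # deviation from the user's own baseline
    if z > 3:                                  # > 3 standard deviations above normal
        print(f"SMART alert: {user} z-score {z:.1f} (today={today[user]}, baseline {mean:.1f})")
```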

Fortscale was named a Gartner Cool Vendor (2016) in the UEBA, Fraud Detection and User Authentication category.

More info about the product can be found here.

Fortscale has received funding of about $40 million so far.

7. Neokami

Founded 2014, Germany & USA
https://www.neokami.com/
@neokami_tech

Neokami attempts to tackle a very important problem we all face today – keeping track of where all of our, and an enterprise’s, sensitive information resides. Neokami’s CyberVault uses AI to discover, secure and govern sensitive data in the cloud, on premise, or across physical assets. It can also scan images to detect sensitive information, as it uses highly optimized NLP for text analytics and Convolutional Neural Networks for image data analytics.
In a nutshell, Neokami uses a multi-layer decision pipeline: it takes in a data stream or files and performs pattern matching, text analytics, image recognition, N-gram modelling and topic detection, using ML methods like Random Forests, to learn user-specific sensitivity over time. After this analysis, a sensitivity score (%) is generated and assigned to the data, which can then be picked up for further analysis and investigation.
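Here is a toy sketch of what such a multi-stage sensitivity-scoring pipeline could look like (my own illustration; the patterns, keywords and weights are invented, and the real product reportedly also uses NLP, CNNs and Random Forests):

```python
# Sketch of a multi-stage sensitivity-scoring pipeline (illustrative only).
import re

PATTERNS = {                                   # stage 1: cheap pattern matching
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "ssn":   r"\b\d{3}-\d{2}-\d{4}\b",
    "card":  r"\b(?:\d[ -]?){13,16}\b",
}
SENSITIVE_TERMS = {"confidential", "salary", "patient", "contract"}  # stage 2: keyword/topic cues

def sensitivity_score(text: str) -> float:
    """Return a 0-100% sensitivity score for a piece of text."""
    hits = sum(bool(re.search(p, text)) for p in PATTERNS.values())
    terms = sum(t in text.lower() for t in SENSITIVE_TERMS)
    # weighted combination standing in for a trained model's probability
    score = min(1.0, 0.3 * hits + 0.15 * terms)
    return round(100 * score, 1)

doc = "CONFIDENTIAL: patient record for j.doe@example.com, SSN 123-45-6789"
print(sensitivity_score(doc), "% sensitive")   # -> high score, route for review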

Some key use cases Neokami tackles are: isolating PII to meet regulations such as GDPR, HIPAA, etc., discovering a company’s confidential information and intellectual property, scanning images for sensitive information, and protecting information in Hadoop clusters, cloud, endpoints or mainframes.

Neokami was acquired by Relayr in February this year, and has received $1.1 million in funding so far, from three investors.

8. Cyberlytic

Founded 2013, UK
https://www.cyberlytic.com/
@CyberlyticUK

Cyberlytic calls itself the ‘intelligent web application security’ product. The elevator pitch is that they provide advanced web application security using AI to classify attack data, identify threat characteristics and prioritize high-risk attacks.

The founders have had a stint with the UK Ministry of Defence, where the product was first used and has since been used to support critical cybersecurity research projects in the department.

Cyberlytic analyzes web server traffic in real time and determines the sophistication, capability and effectiveness of each attack. This information is translated into a risk score to prioritize incident response and prevent dangerous web attacks. The underlying ML models adapt to new and evolving threats without requiring the creation or management of firewall rules. The key to their detection is their patented ML classification approach, which appears to be more effective at detecting web application attacks than conventional signature/pattern based detection.

Cyberlytic is a combination of two products – the Profiler and the Defender. The Profiler provides real-time risk assessment of web-based attacks, by connecting to the web server and analyzing web traffic to determine the capability, sophistication and effectiveness of each attack. The Defender is deployed on web servers and acts on the assessment performed by the Profiler, blocking web-based cyber-attacks before they reach critical web applications or the underlying data layer.
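To illustrate the risk-scoring idea (my own sketch, not Cyberlytic’s patented classification approach; the requests and labels are invented), a simple classifier over character n-grams of web requests could produce a risk score for triage:

```python
# Illustrative sketch: classify web requests and turn the class probability
# into a risk score for prioritising incident response.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

requests = [
    "GET /index.html",
    "GET /products?id=42",
    "GET /products?id=42 UNION SELECT username,password FROM users",
    "POST /login user=admin'-- ",
    "GET /search?q=<script>alert(1)</script>",
    "GET /about.html",
]
labels = [0, 0, 1, 1, 1, 0]                    # 1 = attack, 0 = benign (toy labels)

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vec.fit_transform(requests)
clf = LogisticRegression().fit(X, labels)

new = ["GET /products?id=42 OR 1=1", "GET /contact.html"]
risk = clf.predict_proba(vec.transform(new))[:, 1]
for req, r in zip(new, risk):
    print(f"risk {r:.0%}  {req}")              # prioritise the highest-risk requests
```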

Cyberlytic has also been gaining a lot of attention in the UK and EU region; Real Business, an established publication in the UK, has named Cyberlytic as one of the UK’s 50 most disruptive tech companies in 2017.

Cyberlytic has received funding of about $1.24 million.

9. harvest.ai

Founded 2014, USA
http://www.harvest.ai/
@harvest_ai

Harvest.ai aims to detect and stop data breaches by using AI-based algorithms to learn the business value of critical documents across an organization – what it describes as an industry-first capability. In a nutshell, Harvest.ai is an AI-powered advanced DLP system with UEBA capabilities.

Key features of their product MACIE include:

  • Use AI to track intellectual property across an organization’s network, including emails and other content derived from IP.
  • MACIE understands the business value of all data across a network and whether it makes sense for a user to be accessing certain documents, a key indicator of a targeted attack.
  • MACIE can automatically identify risk to the business of data that is being exposed or shared outside the organization and remediate based on policies in near real-time. It not only classifies documents but can identify true IP matches to protect sensitive documents that exist for an organization, whether it be technology, brand marketing campaigns or the latest pharmaceutical drug.
  • MACIE not only detects changes in a single user’s behavior, but also has the unique ability to detect minor shifts in groups of users, which can indicate an attack.

Their blog has some interesting analysis of recent APT attacks, and how MACIE detected them. Definitely worth a read.

Harvest.ai has received funding of about $2.71 million so far, and interestingly, they were acquired by Amazon in January this year, reportedly for $20 million.

10. Deep Instinct

Founded 2014, Israel
http://www.deepinstinct.com/
@DeepInstinctSec

Deep Instinct focuses on the endpoint as the pivot point in detecting and blocking cyber attacks, and thus falls under the category of EDR. There is something going on in Israel: for the last few years, many cybersecurity startups (Cybereason, Demisto, Intsights, etc.) have been founded by ex-IDF engineers, and a good portion of these startups have to do with Endpoint Detection and Response (EDR).

Deep Instinct uses deep learning to detect unknown malware in real time, just by analysing the raw bytes of the binary picked up by the system. The software runs efficiently on a combination of central processing units (CPUs), graphics processing units (GPUs) and Nvidia’s CUDA software for running non-graphics software on graphics chips. The GPUs enable the company to do in a day what would take three months on a CPU.
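In the same spirit, here is a minimal sketch of a deep learning model that works directly on raw bytes (a MalConv-style architecture, assuming TensorFlow/Keras; this is my own illustration and not Deep Instinct’s actual architecture):

```python
# Minimal sketch of a deep learning model over raw binary bytes (MalConv-style).
# NOT Deep Instinct's actual architecture; toy data stands in for real binaries.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

MAX_LEN = 4096                                 # bytes per sample (truncated/padded)

model = keras.Sequential([
    keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(input_dim=257, output_dim=8),   # byte values 0-255 plus a padding token
    layers.Conv1D(64, kernel_size=16, strides=8, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),           # P(malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# toy data: random byte sequences standing in for benign/malicious binaries
X = np.random.randint(0, 256, size=(128, MAX_LEN))
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))         # malware probability for one file
```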

I couldn’t find enough documentation on their website to understand how this deep learning system actually works, but their website has a link to register for an online demo. So it is definitely worth a try.

They are also gaining a lot of attention in the EDR space, and NVIDIA has selected Deep Instinct as one of the 5 most disruptive AI startups this year.

Deep Instinct has raised $50 million so far, from Blumberg Capital, UST Global, CNTP, and Cerracap.


Thoughts on Union Bank hack


It was recently reported in the media that Union Bank, one of the leading public sector banks in India, was hacked last year (July 2016). Funds to the tune of about $171 million were siphoned off, and a seven-country hunt had to be spearheaded at the top levels of government to reverse the theft.

Though the events involved in the breach itself are interesting and need a detailed analysis, what caught my attention is how the Bank managed to track the trail of the fund transfer to the last mile and how quickly it recovered every single penny that was stolen, within a week’s time.

Gopika Gopakumar and Leslie D’Monte of Live Mint have the best analysis report of this incident, I’ve seen so far. 

I highly recommend their report. 

I have taken some excerpts from their report and shared my thoughts on them. Let’s get straight to how the hackers got into the Bank’s systems:

Phishing e-mails were sent to 15 email IDs. “Three people reported to IT security that the email was suspicious. The other Union Bank employees were “technically-savvy” persons. They noticed that although the email address said @rbi.org.in, it had an attachment that was a zip file. Within the zip file, there was a dot (xer) file and not a dot pdf file, which is why they reported it as suspicious”

I am curious to know how legitimate the RBI email ID used here was – if it was a real RBI domain and a valid RBI email address, then this is a matter of larger concern, as it raises questions about RBI’s email system having been hacked before this incident. This requires much more serious investigation.

If you look at this sequence of events from a Cyber Kill Chain perspective, this is a successful demonstration of “Delivery” followed by “Exploit & Installation”.
Then the malware, once downloaded onto one system, started spreading across the Bank’s network and eventually onto the Bank’s servers, demonstrating a successful “Internal Recon” followed by “Lateral Movement”.

To me, this looks like a classic case of externally originating exploit attempt, followed by internal recon and lateral movement. Though it is easier said than done, I feel that a good security anomaly detection system would have been able to flag this off, considering the sequence of events revealed by this report – pre and post exploit. Also, I am curious to know what were the Intrusion and Anomaly detection tools and techniques the bank had deployed, which failed to detect these events occurring within the bank’s internal systems and network. 

So, if the Bank didn’t detect these patterns while they were occurring, how did the Bank discover this anomaly? Thanks to SWIFT’s (Society for Worldwide Interbank Financial Telecommunication) daily reconciliation report, as Live Mint goes on to report:

“When a bank does a SWIFT transaction during the day, they typically get a reconciliation report the next day and all the corresponding banks send them the “end-of-the-day balance” report the following morning.

When Union Bank got it from the originating bank, they saw a difference of $170 million and that alerted them because of one mistake—the hackers deleted the six entries they had made.”

This is an interesting revelation of how the SWIFT system actually tracks transaction anomalies, and I am sure the system is a lot more sophisticated. But what the hackers did appears utterly dumb to me – deleting their transaction logs whilst leaving the funds debit logs unchanged!

Coming to the recovery of the funds itself, and where it took the Bank a few extra days:

“One tricky negotiation was with the Taiwanese government with which India doesn’t have diplomatic ties, particularly as a court order was needed to secure the banking reversal instruction. However, with some pushing from U.S. officials, the entire $171 million was traced.”

It is commendable to see how the bank worked with the Indian government agencies, including CERT-In and RBI, and other international banks in getting the money back in a few days. This entire episode is worth a case study on how other national and international banks should mobilise the right tools, people, and government and inter-country legal processes for executing an effective cybersecurity incident response procedure.

The CEO of SWIFT India acknowledged the impact of cyber threats on the banking industry, and thanks to the various guidelines laid out by the RBI (Reserve Bank of India), there appears to be good momentum amongst the public and private sector banks in India in implementing cyber security controls to thwart such threats.

“Cyber threat is real and is growing.” According to him, the pace of digitization that we have seen in the last decade – and at an even more accelerated pace now – requires the same level of investment on the cyber side as well. The regulator (RBI), he added, has introduced regulations requiring a CISO (chief information and security officer) to report directly to the board. There is also a customer security programme where “we are now mandating 27 controls, of which 16 are mandates and 11 are advisory. If you don’t have 16, we will start reporting to the regulator.”

Closing thoughts:

Though the incident report of this breach will never be made public – and it shouldn’t be – the most important learning from this incident, for other banks and the cyber security community, would be to know what controls worked and what didn’t:

  • both technical controls – the intrusion detection tools/techniques that worked, didn’t work, or could have worked (if the bank didn’t have them – for example, Machine Learning based threat detection tools, which can detect new/unknown threat patterns a lot more efficiently than traditional systems), and
  • non-technical controls (security awareness initiatives amongst the bank’s employees, the processes and SLAs established between the Bank and CERT-In, RBI and the legal departments (Cyber Vigilance committee), and cross-border relations with other nations).

Finally, the fact that caught my attention and made me read more about the Union Bank hack – the recovery of the stolen funds. Kudos to the collaborative effort between the officials from Union Bank, CERT-In and RBI in not only investigating and tracking the trail of the money flow, but also recovering every cent of the theft, in 6 days. Great work!

One of my friends in the cyber security industry posed a very logical question to me: if Google can keep track of where I am going, what and where I am eating, what I am watching and what I am reading – in spite of me being out in the general public domain, with Google merely using the open internet to track all this – why is an enterprise/organisation still unable to track the use of its own resources and assets by its entities (users, machines, devices), within a network that the organisation has provisioned and controls?

Machine Learning talks in RSA Con 2017


The RSA Conference is one of the most widely attended security conferences in the world, and the 2017 edition, held in San Francisco, concluded just about 10 days ago.

There were close to 20 presentations this time around using Machine Learning (referred to as ML from here on) in detecting/preventing cyber attacks of various kinds. In this post I share my take on, and a summary (detailed in some cases) of, the top 10 talks on ML.

Some of these talks, especially research projects, require a detailed discussion and analysis, but I’ve tried to do justice to them by keeping my summary as detailed as possible. I plan to dive deeper into some of these topics, in the future.

Note: I have included a link to the original Talk (presentation or video) wherever I could find them, so do check them out.

1. A Vision for Shared, Central Intelligence to Ebb a Growing Flood of Alerts

Dan Plastina, who heads Threat Protection at Microsoft, gave a talk on striking a balance between using ML in threat detection and in the incident management/orchestration process, using a linked graph and chat bots in a “SecOps Console” to better manage the growing flood of security alerts. What I found interesting in this talk is the mention of a whole gamut of Microsoft products, many of which are familiar to us, like AD, Office and Azure Security Center. But I couldn’t tell whether Dan was also referring to an IR orchestration tool that Microsoft has built or has in its roadmap. Also, I see that R is being tightly integrated into various Microsoft products.

An interesting talk indeed, and here is the link to the original talk.

2. Advances in Cloud-Scale Machine Learning for Cyber-Defense

Another talk from Microsoft; this one by Mark Russinovich, the CTO for Microsoft Azure. This one was quite a deep dive into how Microsoft uses ML in detecting cyber attacks on the Azure platform. My quick notes below:

  • He started off with some metrics:
    • More than 10,000 location-detected attacks (detected/reflected attacks) – I didn’t understand what exactly he meant here.
    • 1.5 mil compromise attempts deflected
  • Red team and Blue team kill chain – it was interesting to see how each of the blue team’s “responses” is mapped to the red team’s malicious action stages
    • Attack disruption shows execute stage before move stage
  • Their “supervised” learning approach enables detection with minimal FP – this is an interesting claim
  • “Attack disruption” requires us to think of ML beyond detection
  • He also covered the properties of a successful ML solution – adaptable, explainable, actionable, results in successful detection
  • Framework for a successful detection – honestly, this is one of the best and simplest visual representations/explanations of what an ML-based solution should look like. He also talks about two case studies where IPFIX data is used as a training set and malware is detected using a combination of rules and ML (a rough sketch of that rules-plus-ML idea follows this list)
  • He then goes deeper into case study 2, where he talks about the algorithms and compares fingerprint-based detection to behaviour-based detection.
  • Triage incidents not alerts – very valid point
  • In a nutshell – attack disruption means to shorten blue team kill chain
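The case study details aren’t reproduced here, but the “rules plus ML” idea mentioned above can be sketched roughly as follows (my own illustration; the flow fields, rule and labels are invented): cheap rules prune the obviously mundane traffic, and a classifier scores whatever is left.

```python
# Sketch of "rules + ML" on flow-like (IPFIX-style) records.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# columns: bytes_out, bytes_in, duration_s, dst_port
flows = rng.integers(1, 10_000, size=(1000, 4))
labels = (flows[:, 0] > 8000).astype(int)      # toy ground truth: big exfil-like flows

clf = GradientBoostingClassifier().fit(flows, labels)

def triage(flow):
    bytes_out, bytes_in, duration, dst_port = flow
    if dst_port in (80, 443) and bytes_out < 1000:   # rule: mundane web traffic, drop early
        return "ignore"
    score = clf.predict_proba([flow])[0, 1]          # ML score for everything else
    return f"score={score:.2f}"

print(triage([50, 2000, 3, 443]))              # pruned by the rule
print(triage([9500, 100, 600, 4444]))          # scored by the model
```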

The Video to the original talk is available here.

3. Combatting Advanced Cybersecurity Threats with AI and Machine Learning 

This one was by Andrew B. Gardner, Head of Symantec’s ML Program. My notes below:

  • Interesting perspective shared here, but a bit high level.
  • He starts off with comparing AI & ML and how they differ in cyber – interesting point about the use of ML in cybersecurity, rather than AI, for various reasons:
    • complex sequential data
    • not human intuitive (logs)
    • labels are expensive (scarce)
    • closed research models
  • Typical use of ML in cyber today: collect data sets > training algorithms > build a model > updated classifiers > ingested to another “threat detector”
  • Though the advantages of using ML in cybersecurity are clear, Andrew poses an interesting argument around its disadvantages:
    • dependency on data (quality, completeness), and system
    • adversaries also have access to ML
  • ML at Symantec
    • some interesting approaches shown, about optimizing models – True positive to false positive ratios (ROC) and how to optimize them
    • use of string scoring services – Charlatan

Link to the original talk is here.

4. Automated prevention of ransomware with Machine learning and GPOs

This talk was by Rod Soto (Security Researcher at Splunk) and Joseph Zadeh (Security Data Scientist at Splunk). My notes below:

  • Rod and Joseph started with some key aspects of detecting ransomware in the “new age” – behavioural modeling, unsupervised ML, anomaly detection and leveraging big data
  • Use of Aktaion tool kit for building the detection system
    • Take PCAPs of known (labeled) exploits and known (labeled) benign behavior and convert them to bro format
    • Convert each Bro log to a sequence of micro behaviors (machine learning input)
    • Compare the sequence of micro behaviors to a set of known benign/malicious samples using a Random Forest Classifier (a toy version of this classification step is sketched after this list)
    • Derive a list of indicators from any log predicted as malicious
    • Pass the list of IOCs (JSON) to a GPO generation script
  • Key is to focus on delivery of exploit (in addition to using system specific and call back specific behaviours) – following key steps were covered:
    • training a model (Random forest algorithm used in this case), to detect exploit delivery, using known malicious indicators
    • tuning the hyper parameters – risk factor, age, session time, entropy, etc.
    • model classifier built with 6 trees
    • the model will start generating output that separates signal from noise (they use the Splunk MLTK in this case)
    • link it to GPO scripts to automate the response procedures via PowerShell (active defense)
  • Training set and test data used in the demo include datasets from Contagio, DeepEnd Research, Ransomware samples with some call back and file system level indicators, labelled benign http user traffic (anonymized bluecoat logs)
  • The talk then ends with a PoC demo of this whole workflow
  • Summary: ML + GPO = Active Defense
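Here is a toy version of the classification step referenced above (my own illustration, not the actual Aktaion code; the micro-behavior features and thresholds are invented): a Random Forest labels sessions, and indicators from “malicious” sessions are emitted as JSON for a GPO-generation script to consume.

```python
# Toy micro-behavior classification step (not the actual Aktaion code).
import json
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# invented micro-behavior features: [requests/min, avg URL entropy, exe downloads, new domains]
X_train = rng.normal(loc=[5, 3.0, 0, 1], scale=1.0, size=(500, 4))
X_train[:50] += [30, 1.5, 3, 10]               # 50 "exploit delivery"-like sessions
y_train = np.array([1] * 50 + [0] * 450)

clf = RandomForestClassifier(n_estimators=6, random_state=0).fit(X_train, y_train)  # 6 trees, as in the talk

session = {"host": "10.0.0.12", "domain": "bad-cdn.example", "features": [33.0, 4.4, 2, 9]}
if clf.predict([session["features"]])[0] == 1:
    iocs = json.dumps({"block_domain": session["domain"], "isolate_host": session["host"]})
    print("IOCs for GPO script:", iocs)        # a downstream script turns this into policy
```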

Link to the original talk here.

5. Big Metadata: Machine Learning on Encrypted Communications

This one was by Jennifer Fernick and Mark Crowley, Security Researchers from University of Waterloo. My notes below:

  • This is derived from a research project, and was a very interesting session where not just the application of ML in cybersecurity was discussed, but also the inverse – security in the computational functions of ML
  • In this talk Jennifer and Mark talk about
    • ML research in cyber security – applying ML to problems in cybersecurity
      • using ML in cyber security
      • cybersecurity for ML – adversarial ML – study of ML systems in adversarial environments, where an attacker might train the system in hopes of modifying its behaviour to allow for an attack
      • a mid way – secure ways of computing ML functions
    • Candidate problems depend on information sources
    • Metadata – how can we use metadata for building the training set, while keeping privacy concerns intact?
    • ML 101 – a crash course
    • Their work in the field, and
    • Future direction
  • In the “security for ML” topic, there were some very interesting concepts presented – secure multi-party computation, privacy-preserving data mining, homomorphic encryption, differential privacy. All of these are deep mathematical and computational fields in themselves and definitely require intensive reading. And so I am going to stop at that!
  • In the “ML in cybersecurity” topic, some fundamental questions were called out – what problem am I trying to solve
    • securing my learning data?
    • learning my security data?
  • On “ML 101” topic, they give an excellent crash course on ML and how to use it in cybersecurity
    • use of clustering (unsupervised learning) and classification (supervised learning)
    • system design and algorithm choices
  • Their work in ML – use of ML on encrypted data – analysing private and public communication networks to detect anomalies
  • I have to confess I found this talk to be the most difficult to thoroughly grasp, as the talk was research oriented and definitely calls for an in depth reading on each of the sub-topics covered. A great presentation indeed!

Link to the original talk here.

6. Applied Cognitive Security: Complementing the Security Analyst

This one was by Vijay Dheap, Program Director, Cognitive Security at IBM.

  • This talk was primarily about IBM’s Cognitive Security product, built on Watson and their QRadar Security Intelligence platform, and how it can help a security analyst better detect, analyse and respond faster to security incidents.
  • The presentation was high level and didn’t get into the details of how Cognitive Security with IBM Watson actually works. For ex., what algorithms are used, and what are the typical hyper parameters, and how they are used in conjunction with contextual feeds (vulnerability, asset, identity, behaviour) to detect security incident more effectively.
  • The presentation did cover one case study with a botnet use case, but didn’t reveal much information (at least some indication) of the inner workings of how ML and Watson’s AI detected this incident.
  • A good “high level” talk over all.

Link to the original talk here.

7. Dealing with Millions of Anomalies

This one was by Chris Larsen, Threat Researcher with Symantec

  • The talk was about detecting malicious traffic, by using ML (anomaly detection), and TI data
  • His first approach to handling the issue of picking “interesting anomalies” out of millions of anomalies is to pick “One Hit Wonders” and “One Day Wonders”, and then investigate them further using various attributes (IP address details, ports used, whether they are DGA domains, etc.)
  • Once these “interesting anomalies” are filtered out, they are run against good TI to pick out the most probable malicious traffic (see the sketch after this list).
  • Summary: good TI is the key, and a good place to start is TI that has malware/attack “family” context and industry/vertical/geo context.
  • Definitely an interesting talk with real world examples like using IOC data for Angler and Magnitude exploit kits, to filter out “most probable” malicious traffic, and then drilling further down from there.
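A sketch of the “one hit wonder” filtering idea (my own illustration with invented data): find domains seen only once, or on only one day, then cross-check the survivors against TI.

```python
# Sketch of "one hit wonder" / "one day wonder" filtering plus a TI cross-check.
import pandas as pd

log = pd.DataFrame({
    "day":    ["mon", "mon", "tue", "tue", "tue", "wed"],
    "domain": ["cdn.example", "qx7a.bad-kit.example", "cdn.example",
               "mail.example", "zzz1.bad-kit.example", "cdn.example"],
})
threat_intel = {"qx7a.bad-kit.example", "zzz1.bad-kit.example"}   # e.g. exploit-kit family IOCs

counts = log.groupby("domain").agg(hits=("day", "size"), days=("day", "nunique"))
one_hit_wonders = counts[(counts.hits == 1) | (counts.days == 1)]

suspicious = [d for d in one_hit_wonders.index if d in threat_intel]
print("anomalies:", list(one_hit_wonders.index))
print("TI matches worth investigating:", suspicious)
```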

There is a video of Chris’s talk available here. Definitely worth watching.

8. Machine Learning: Cybersecurity Boon or Boondoggle

This one was by Dr. Zulfikar Ramzan, CTO of RSA.

  • The talk starts at an elementary level, covering the fundamentals of ML and its use in Cyber security.
  • But towards the end, Zulfikar covered some very interesting facts/tips/best practices while using ML in cyber security. For ex.:
    • The importance of the ROC (Receiver Operating Characteristic) curve when making a trade-off between true positive and false positive classifications (a quick illustration follows this list).
    • ML (in this case unsupervised) is only helpful in detecting bad “actions”, not bad “intent”, and thus ends up calling out a lot of legitimate “unusual actions” as “bad/malicious”.
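A quick illustration of that true-positive/false-positive trade-off, using scikit-learn to compute an ROC curve over some invented detection scores:

```python
# Illustration of the TP/FP trade-off with an ROC curve (invented scores).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])        # 1 = actual attack
y_score = np.array([0.1, 0.2, 0.3, 0.35, 0.6, 0.4, 0.55, 0.7, 0.8, 0.9])  # model scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: catch {t:.0%} of attacks at {f:.0%} false positive rate")
# picking the threshold = choosing where on this curve the SOC can afford to operate
```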

Link to the original talk here.

9. Applied Machine Learning: Defeating Modern Malicious Documents

This one was by Evan Gaustad, Sr. Manager, CSIRT – Target.

  • The talk basically starts with typical vulnerabilities exploited in Microsoft Office (Macros), and some examples of the attack lifecycle using malicious documents itself
  • Evan then gets into the details of the project he has been working on, where he used supervised ML (classification) to detect malicious documents. There is a video recording of his talk here, and I strongly recommend it. He covers a lot of details of how the model and its classifier actually works, with examples.

There is a video of Evan’s talk available here. It’s a must watch.

10. An Introduction to Graph Theory for Security People Who Can’t Math Good

This one was by Andrew Hay, CISO, Data Gravity.

  • Though this talk didn’t actually cover how ML is used in detecting/preventing cyber attacks, it was a great crash course on graph theory (for the non-mathematicians amongst us), and how it can be extremely useful in visualising an attack lifecycle
  • Application of Graphs in security context
    • incident response – use of Google’s Fusion tables to visually represent the communication/interactions between user and entity in a security incident
    • actor tracking – detecting the source of a phishing campaign – using the IOCs available, use Maltego (CE)
  • What was interesting in this talk was how easy it is to build a visual representation of the interactions; however, it can get far too complicated to interpret if the dataset and its “vertices” (nodes) and “edges” (connections) are chosen badly (a small example follows below).

The link to the original talk is available here.
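To give a flavour of the idea, here is a small example (my own, with invented incident data): vertices are users, hosts and IOCs, edges are observed interactions, and a simple centrality measure highlights the node that ties the campaign together.

```python
# Small graph-theory example in the spirit of the talk (invented incident data).
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("phish@bad.example", "alice"), ("phish@bad.example", "bob"),
    ("phish@bad.example", "carol"), ("alice", "workstation-17"),
    ("workstation-17", "198.51.100.7"), ("bob", "workstation-22"),
])

central = nx.degree_centrality(g)              # fraction of nodes each vertex connects to
actor = max(central, key=central.get)
print("most connected vertex:", actor)         # -> the phishing sender
print("degree centrality:", round(central[actor], 2))
```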

 

Thanks for reading through my point of view on RSA Con USA 2017. I hope I was able to provide a byte-sized (mega!) summary of some of the most interesting talks at this year’s conference.

PS: Do subscribe to this blog, to get notified the moment I publish my next post.

Model Evaluation in Machine Learning


One of the most important activities for a Data Scientist to perform is to measure and optimize the prediction accuracy of the Machine Learning models one has built. Though there are various approaches to doing this, they can be grouped into three key steps.

Sebastian Raschka, the author of the bestselling book “Python Machine Learning” and a Ph.D. candidate at Michigan State University developing new computational methods in the field of computational biology, has published an excellent article describing these steps.

In a nutshell, he breaks down the evaluation process into three main steps:

  1. Data generalisation – ensure that the training data and the test data have good ‘variance’ and a fair proportion of the various classes. This can be achieved through a couple of techniques:
    • Stratification
    • Cross validation – k fold or bootstrap
    • Hold out method – training data set, hold out data set, test data set
    • Bias variance trade-off
  2. Algorithm selection – picking the right algorithm that is best suited for the use case in hand
  3. Model selection
    1. Hyper-parameter tuning – cross-validation techniques (see the sketch after this list)
    2. ‘Model parameters’ belong to the model itself, while ‘hyper-parameters’ (also called tuning parameters) belong to the algorithm; for example, the depth of the trees in a Random Forest
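A minimal example tying steps 1 and 3 together (illustrative only, using scikit-learn): a stratified hold-out split, plus stratified k-fold cross-validation for hyper-parameter tuning.

```python
# Stratified hold-out + stratified k-fold CV for hyper-parameter tuning.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
# stratified hold-out: train/test keep the same class proportions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

param_grid = {"max_depth": [3, 5, None], "n_estimators": [50, 100]}   # hyper-parameters
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=StratifiedKFold(n_splits=5),            # k-fold CV on the training data only
)
search.fit(X_train, y_train)

print("best hyper-parameters:", search.best_params_)
print("held-out test accuracy:", round(search.score(X_test, y_test), 3))
```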

Sebastian has put together a detailed 3 part tutorial where he goes into the details of each of these steps:

 

These are great reads for anyone who is having a tough time picking the right model for their ML project, and also having difficulty measuring its efficiency and accuracy.

 

 

Title Image courtesy: biguru.wordpress.com

Donald Trump Lost Most of the American Economy in This Election

A very interesting and shocking revelation indeed:

“The divide is economic, and it is massive. According to the Brookings analysis, the less-than-500 counties that Clinton won nationwide combined to generate 64 percent of America’s economic activity in 2015. The more-than-2,600 counties that Trump won combined to generate 36 percent of the country’s economic activity last year.”

Jim Tankersley, writing for The Washington Post. Do read on. 

Data Scientist’s take on the US Election results


It would be an understatement to say that the outcome of the recently concluded US election has been a shocker for many people in the US and across the world. Also, as called out by various media outlets, these results have indicated the failure of the political polling/predictive analytics industry, and of the power of data and data science.

In this post I share my thoughts on this matter.
From a Data Science perspective, there are two possibilities for why the predictions were so far off the mark:
a) The predictive models were wrong
b) The data used in the models was bad
Let’s look at both of these possibilities in detail.

a) Predictive Models were wrong

  • There is an adage, widely accepted in the statistics world, that “All models are wrong”. The reasoning behind this stand is that ‘data’ beats ‘algorithms’, and that models are only as good as the data used to validate them. But in this particular case, the models used have been in use for polling predictions for decades, and it’s not clear to me what went wrong with the models in this case.
  • Having said that, there is definitely some interesting work published in the last few weeks that shows the use of inference and regression models in understanding the outcome of this election. Here is a whitepaper published by professors in the Dept. of Statistics at Oxford University. To summarize the paper:

We combine fine-grained spatially referenced census data with the vote outcomes from the 2016 US presidential election. Using this dataset, we perform ecological inference using distribution regression (Flaxman et al, KDD 2015) with a multinomial-logit regression so as to model the vote outcome Trump, Clinton, Other / Didn’t vote as a function of demographic and socioeconomic features. Ecological inference allows us to estimate “exit poll” style results like what was Trump’s support among white women, but for entirely novel categories. We also perform exploratory data analysis to understand which census variables are predictive of voting for Trump, voting for Clinton, or not voting for either. All of our methods are implemented in python and R and are available online for replication.

b) Data used in the models was bad

  • Not everyone will be open about their opinion, especially if that opinion is not aligned with the general consensus among the public. And such opinions are usually not welcome in our society. A recent example of this is Mark Zuckerberg reprimanding employees for stating that “All Lives Matter” on a Black Lives Matter posting inside the Facebook headquarters. So there is a good chance such opinions wouldn’t have made it through to the dataset being used in the models.
  • Groupthink also played a major role in adding to the skewed dataset. When most of the media and journalist agencies were predicting a landslide victory for Hillary over Trump, only a courageous pollster would contradict the widely supported and predicted poll results. And so this resulted in everybody misreading the data.
  • Incomplete analysis methods, which only used traditional data collection methods like surveys and polls, instead of also using important signals from social media platforms, especially Twitter. The candidates’ social media engagement with voters was a dataset that was definitely ignored, in spite of social media analysts sounding the alarm that the polls were not reflecting the actual situation on the ground in the pre-election landscape. Clinton outspent Trump on TV ads, set up more field offices, and sent staff to swing states earlier, but Trump simply leveraged social media better to both reach and grow his audience, and he clearly benefited from that old adage, “any press is good press.”

To summarize…

Data science has limitations

  • Data is powerful only when used correctly. As I called out above, biased data played the biggest spoilsport in this election’s predictions
  • Variety of data is more important than volume. It is all the rage these days to collect as much data as possible, from various sources – Google and Facebook are leading examples. As I called out above, depending on different data sets, including social media, could have definitely helped in getting the predictive models closer to reality. Simply put, the key is using the right “big data”.

Should we be surprised?

  • To twist the perspective a little bit, if we look at this keeping in mind how probabilistic predictions work, the outcome shouldn’t surprise us (a quick back-of-the-envelope calculation follows below). For example, if I said “I am 99% sure that it’s going to be a sunny day tomorrow”, and you offered to bet on it at odds of 99 to 1, I might say “I didn’t mean it literally; I just meant it will ‘probably’ be a sunny day”. Will you be surprised if I tossed a coin twice and got heads both times? Not at all, right?
  • This New York Times article captures the gist of what actually went wrong with the use of data and probabilities in this election very well. I think the following lines say it all:

The danger, data experts say, lies in trusting the data analysis too much without grasping its limitations and the potentially flawed assumptions of the people who build predictive models.

The technology can be, and is, enormously useful. “But the key thing to understand is that data science is a tool that is not necessarily going to give you answers, but probabilities,” said Erik Brynjolfsson, a professor at the Sloan School of Management at the Massachusetts Institute of Technology.

Probabilistic prediction is a very interesting topic, but it can also be very misleading if the probabilities are not presented correctly (a 70% chance of Clinton winning versus Trump’s 30% chance).

I shall delve deeper into this in a follow-up post…
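As a quick back-of-the-envelope check (my own arithmetic, not from the article): two heads in a row has probability

```latex
P(\text{two heads}) = \left(\tfrac{1}{2}\right)^{2} = 0.25
```

which is in the same ballpark as the roughly 30% chance the forecasts gave Trump – so an outcome of that likelihood actually happening should not, by itself, shock us.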

 

 


Title image courtesy: http://www.probabilisticworld.com

RStudio v1.0 is out


RStudio finally moved out of “beta” status last week, and the first official production version is now available. This is great news for all of us who use RStudio as the primary IDE for R programming.

Check out this link for the release history of RStudio and all the changes it has gone through over the last 6 years.

Some of the major new features added in this release are:

  • Support for R Notebooks, a new interactive document format combining R code and output. It’s similar to (but not based on) Jupyter Notebooks, in that an R Notebook includes chunks of R code that can be processed independently (as opposed to R Markdown documents, which are processed all at once in batch mode).
  • GUI support for the sparklyr package, with menus and dialogs for connecting to a Spark cluster, and for browsing and previewing the available Spark Dataframe objects.
  • Profiling tools for measuring which parts of your R code are consuming the most processing time, based on the profvis package.
  • Dialogs to import data from file formats including Excel, SAS and SPSS, based on the readr, readxl and haven packages.

Check out the official blog for more information about this release.