Packet capture: How important is it for cyber security?

Our Solution Architect here at KedronUK, Chris Booth, shares his thoughts on the question, packet capture: how important is it for cyber security?

Historically, packet capture has been a tool for troubleshooting complex problems where other information sources are not providing enough detail. Some enterprises have deployed permanent packet capture solutions within data centres but the investment required in storage to provide even short-term data retention deterred many interested users. With 10Gbps (or faster) backbones commonly in use, a busy network will generate Petabytes of data on a weekly basis. Analysing this vast amount of data to provide meaningful insights is also challenging.

However, over the past two to three years a wave of new vendors has seen many businesses investigate traffic-based tools, with Gartner naming this sector Network Traffic Analysis (NTA). NTA tools use machine learning to automate the analysis of the captured data (be that flow records like NetFlow or raw wire-data) and from this detect and alert on anomalous traffic and events. These data feeds should include both North-South (to/from the Internet) and East-West (internal) traffic.
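As a rough illustration of what such automated analysis does at its simplest, the sketch below baselines per-host traffic volumes from NetFlow-style records and flags statistical outliers. This is a hypothetical helper using a basic z-score check; real NTA products apply far richer models across many traffic features.

```python
import statistics

def flag_anomalous_hosts(flow_records, threshold=3.0):
    """Flag hosts whose total bytes sent deviate strongly from the norm.

    flow_records: list of dicts like {"src": "10.0.0.1", "bytes": 1200}.
    A z-score over per-host byte totals only illustrates the idea of
    baselining traffic and alerting on outliers.
    """
    totals = {}
    for rec in flow_records:
        totals[rec["src"]] = totals.get(rec["src"], 0) + rec["bytes"]
    values = list(totals.values())
    if len(values) < 2:
        return []  # not enough hosts to establish a baseline
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all hosts behave identically; nothing stands out
    return [host for host, total in totals.items()
            if (total - mean) / stdev > threshold]
```

A host quietly exfiltrating data would show up here as a volume outlier even if each individual flow looked benign.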

Whilst attackers will try to hide their presence once a device has been compromised, they have to traverse a network to scan for targets, access resources, attack and/or exfiltrate data. Therefore the network can be seen as the “source of truth”, as it provides empirical evidence.

Enterprise Management Associates (EMA) has recently released a report entitled “Unlocking High Fidelity Security (2019)”. The majority of respondents to the report were IT managers or directors at SME-sized companies (1,000 to 4,999 employees).

Key findings in the report include:

  • Although it depends on the type of attack, 60% believed network data is the better source of data for the earliest detection of a breach (compared to endpoint data).
  • The report identifies metadata as a new class of data. Metadata is not the full packet, but the most useful parts, along with additional supporting information which can be deduced from the contents of the packet. For instance, an IP address extracted from a packet can then be geo-located. 65% of respondents identified that metadata is “very valuable” in assisting with investigations, with a further 14% marking it as “extremely valuable”. Metadata can also offer benefits from a retention perspective – by not storing the entire packet, the “lookback” window can be much bigger.
  • Enterprises that were using packet data had the highest confidence that they were detecting threats at the reconnaissance stage of the “Kill Chain”.
  • The report concludes: “While network packets do not contain all of the information needed to complete an investigation, the fact that 99% of daily activities cross the network makes it easy to understand why companies feel they have a heightened sense of awareness. They can detect issues faster than businesses relying on perimeter, systems, application, and authentication logs”.
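To illustrate the metadata idea in the findings above, the hypothetical helper below pulls just the address, protocol, and length fields out of a raw IPv4 header: a compact record that can be retained far longer than the full packet, and whose IP addresses can then be geo-located in a later enrichment step.

```python
import struct
import socket

def ipv4_metadata(packet: bytes) -> dict:
    """Extract lightweight metadata from a raw IPv4 packet.

    Keeps only the fields typically stored as metadata (addresses,
    protocol, total length) rather than the full payload. Field
    layout follows RFC 791: byte 9 holds the protocol number,
    bytes 12-15 and 16-19 the source and destination addresses.
    """
    if len(packet) < 20:
        raise ValueError("truncated IPv4 header")
    total_length, = struct.unpack("!H", packet[2:4])
    protocol = packet[9]
    src = socket.inet_ntoa(packet[12:16])
    dst = socket.inet_ntoa(packet[16:20])
    return {"src": src, "dst": dst, "protocol": protocol,
            "length": total_length}
```

Storing a few dozen bytes of metadata per packet instead of the packet itself is what makes the much bigger “lookback” window affordable.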

Packet data solutions can also provide useful insight for network teams, as they can determine a range of metrics such as round-trip times and potential TCP issues like zero windows.
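As a small sketch of the zero-window case, the hypothetical check below reads the advertised receive window straight out of a raw TCP header; a commercial tool would do this at line rate across millions of segments.

```python
import struct

def is_zero_window(tcp_segment: bytes) -> bool:
    """Return True if a raw TCP segment advertises a zero receive window.

    The 16-bit window field sits at bytes 14-15 of the TCP header.
    A receiver advertising window 0 is telling the sender to pause,
    which often points to an overloaded receiving application rather
    than a network problem.
    """
    if len(tcp_segment) < 20:
        raise ValueError("truncated TCP header")
    window, = struct.unpack("!H", tcp_segment[14:16])
    return window == 0
```

Spotting a burst of zero-window segments from one server lets a network team redirect the investigation to the application tier before anyone blames the network.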

How can KedronUK help?

KedronUK can assist organisations looking to deploy NTA technology for security and/or performance requirements. Our vendor partnerships include both flow data and packet data based NTA solutions, allowing us to pragmatically discuss and demonstrate the benefits and value of these tools.

Phil Simms

Account Executive

Aligning your technical and business requirements with the right network, application and security management solution.

Call us today on 01782 752 369
KedronUK, Kern House, Stone Business Park, Stone, Staffordshire ST15 0TL

How to achieve “multi-cloud” monitoring

Continuing our focus on the challenge of performance monitoring in the cloud, David Hock, Director of Research at Infosim, discusses their approach to “Multi-Cloud Monitoring”. We interviewed David and this is what he had to say.

You discuss Multi-Cloud monitoring. What do you mean by that term?

Sourcing services from different cloud service providers, e.g. Azure, Amazon, Office 365, requires monitoring service availability, performance, and utilization across the different platforms and technologies.

Multi-clouds require cross-silo monitoring of services across servers, storage, distributed and regional networks, virtual systems, containers, multiple clouds, app/web STMs, and enterprise applications and infrastructure.

Organisations need to ensure hybrid services, built on legacy infrastructures and systems as well as multi-provider cloud services, through cross-provider, cross-technology, and cross-silo monitoring that their operations staff can still handle.

They also need to be able to track issues, internal and external, where they occur and in a reasonable time.

When most people think about monitoring infrastructure in the cloud, they think about the tools native to the cloud providers’ own solutions. Why do you think an enterprise should consider a third-party solution such as StableNet?

Cloud service provider tools usually only work within that provider’s own cloud, not across providers. You need a holistic view of your utilized services, not a large number of individual fragments that your operations team has to correlate manually.
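The correlation step David describes can be sketched very simply: given per-provider status feeds, pivot them into one view keyed by service. The provider names and statuses below are illustrative, not any vendor's real API output.

```python
def merge_provider_metrics(provider_feeds):
    """Combine per-provider metric feeds into one holistic view.

    provider_feeds: dict mapping a provider name (e.g. "azure",
    "aws") to a dict of {service_name: status}. The result is keyed
    by service, so one lookup shows how every provider reports it.
    """
    combined = {}
    for provider, services in provider_feeds.items():
        for service, status in services.items():
            combined.setdefault(service, {})[provider] = status
    return combined
```

Even this toy pivot shows the value of the cross-provider view: a service reported healthy by one provider and degraded by another is visible in a single record instead of two dashboards.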

StableNet combines the “passive” monitoring of reading out data from the APIs provided by the respective cloud providers with “active” probing that tests different parameters externally.

Here are two examples:

Amazon, Azure & Co do “natively” provide information about the CPU usage, the memory usage, the number of disk writes, etc. but not about the actual services running in the virtual machine/Cloud instance. If you want to know how many emails you sent/calendar entries you have/etc., you can ask the Cloud API. If you want to measure how long sending and receiving an email takes or how long adding a contact takes, you need to do external probing – StableNet offers this and combines both worlds/approaches.
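The “active” half of that combination boils down to timing a real end-to-end transaction from outside the provider. The sketch below is a hypothetical generic probe, not StableNet's implementation: the callable stands in for whatever transaction the probe exercises, such as sending a test email.

```python
import time

def probe_latency(operation, repeats=3):
    """Time an end-to-end operation the way an active probe would.

    `operation` is any callable performing the real transaction.
    Runs it several times and returns the best observed latency in
    seconds, which reduces one-off scheduling noise.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        operation()
        best = min(best, time.perf_counter() - start)
    return best
```

The point is that this number is measured from the user's side of the service, which no provider-side API counter can give you.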

If your Cloud provider offers you certain RAM, CPU, etc., the APIs help you to check the actual usage. However, if you want to measure the actual SLA, i.e., whether your service runs smoothly or not, you need external measurements.

How does StableNet differ from other third-party monitoring solutions for Cloud monitoring?

StableNet addresses and ensures hybrid services monitoring, based on legacy infrastructures and systems as well as multi-provider cloud services, via cross-provider, cross-technology, and cross-silo monitoring in a way your operations staff can still handle.

In particular, you do not need yet another tool and graphical user interface, but can combine the data in your existing monitoring using well-known interfaces for integration.

If a customer has an end-to-end enterprise application service which may consist of elements of public, private and on-premise infrastructure, is StableNet able to understand the interrelationship between these components?

StableNet tools such as the StableNet Service Analyzer and the highly automated StableNet Network Service Analyzer provide analysis capabilities to model and track cross-provider, cross-technology, and cross-silo infrastructure constellations.
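One way to picture such cross-silo modelling is as a dependency graph over components, where a failure is traced back to the deepest failed dependency. The sketch below is a toy model of the idea behind automated root cause analysis, not StableNet's actual algorithm.

```python
def root_causes(dependencies, failed):
    """Find likely root causes among failed components.

    dependencies: dict mapping each component to the components it
    depends on. A failed component whose dependencies are all
    healthy is a candidate root cause; failures that sit on top of
    another failed dependency are treated as downstream symptoms.
    """
    failed = set(failed)
    return sorted(
        c for c in failed
        if not any(dep in failed for dep in dependencies.get(c, []))
    )
```

So if an application, its database, and the underlying network all alarm at once, only the network, the component with no failed dependency beneath it, is surfaced as the place to start.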

Furthermore, well-known StableNet technologies like automated root cause analysis, derived measurements, and dynamic rule generation can also be used to combine different measurement sources across hybrid cloud, network, and other domains.
