Business visibility in the new business normal

For organisations of all stripes and sizes, the COVID-19 pandemic has been a challenge. Whether you’ve benefited from a huge increase in online sales or been swamped with customer service calls, whether you have a desk-based team all happily working from home or delivery drivers out on the road at all hours, your IT infrastructure will have played a critical role in keeping your business up and running – and so will your IT team. 

Maintaining good end-user experiences throughout this period was – and in most cases still is – key to the longer-term sustainability of your enterprise. Where IT performance, availability and security issues aren’t dealt with quickly and efficiently, the result can be huge reputational damage or lost business. 

If you don’t have visibility of remote workers when it comes to productivity, engagement, access, network security and a host of other areas, how do you know things are functioning as they should be? There may be problems you’re not aware of, and if you are aware you may be struggling to find the root cause.

Such problems are compounded by the fact that your IT team are working remotely themselves, with a proportion off sick at any given time. At the same time, the department is being relied on to support other staff with their IT needs, while likely firefighting demand and downtime issues. 

For 15 years now, we’ve been vendor-independent consultants specialising in application and network performance monitoring, so we know which software will give you the visibility you need to track your business metrics. Our solution architects are able to identify new ways to use tools you already have to tackle new problems, as well as recommending tools that will integrate with your existing infrastructure. 

There are numerous examples of software that can be deployed quickly and remotely to provide better visibility of your key data. To give just two, you can use:

  • Ixia’s Hawkeye to run tests from end-user laptops or mobile devices, quickly understanding how each user’s home environment is affecting access to key business services (see the sketch after this list for the general idea). 
  • Instana to gain one-second granularity into application performance and issues, supporting key containerised applications. This SaaS platform provides immediate visibility of key web applications such as e-commerce. 
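
To make the endpoint-testing idea concrete, here is a minimal, vendor-neutral sketch of the kind of synthetic test such tools automate at scale: timing an HTTP request to a key business service from an end-user machine. The service URLs are placeholders, and this illustrates the general technique only – it is not any vendor’s API.

```python
# Generic endpoint-test sketch (illustrative only; URLs are placeholders).
import time
import urllib.request

SERVICES = {
    "intranet": "https://intranet.example.com/health",
    "webshop": "https://shop.example.com/",
}

def probe(url: str, timeout: float = 5.0) -> dict:
    """Time a single HTTP GET, mimicking a basic synthetic transaction."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception as exc:
        return {"url": url, "ok": False, "error": str(exc)}
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {"url": url, "ok": status == 200, "latency_ms": round(elapsed_ms, 1)}

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(name, probe(url))
```

Run from a handful of home workers’ machines, even a simple test like this quickly shows whether a problem is specific to one user’s environment or common to all.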

We offer support to deploy these solutions quickly and easily, with remote configuration and installation or even technical staff on-site where necessary. By exploring what the most helpful solutions might be for your circumstances, you can use data to your advantage to make better business decisions now and improve your monitoring capability permanently.

You can find out more about some of the solutions we provide in our guide to getting visibility fast. To talk about your specific monitoring needs, call us on 01782 752 369.

Kirsty Jones

Marketing and Brand Development Lead

Spreads the word further and wider about how we can help connect and visualise your IT Ops and Sec Ops data.

IT monitoring for the remote age

We don’t have a magic wand to make everything better. We don’t have medical training or a vaccine in development. We don’t even have a sewing machine to whip up scrubs like some of the heroes out there.

What we do have is a very particular set of skills. 

Over recent weeks, our technical team have been putting their expertise to use to help IT departments across the UK who are battling hard to maintain vital networks and services while working from home themselves. Many have needed to make rapid changes to their infrastructure, a process that we have been able to make quicker and safer. 

With ongoing projects as well as new challenges arising from the global pandemic, we’ve continued to support clients including large NHS trusts, critical national infrastructure, and organisations in the financial services sector. 

One NHS trust was already aware of issues with their patient-facing IT services, which are obviously more important than ever at the moment. Part of our role was gathering and analysing critical data to get to the root cause of an issue involving multiple service providers.

We’ve deployed a security analytics solution for another large NHS trust in a project that was underway before the crisis hit. At a time when rapid changes are being made to infrastructure, it’s essential that those changes can be made safely and our health service protected from opportunistic criminals. The solution we’ve put in place applies AI and machine learning to all data traversing their network to keep security threats at bay.

One organisation we work with was experiencing a problem where, after a massive increase in the number of staff working remotely, many of these staff were unable to connect to the network. Our technical team was able to troubleshoot the issue using software the client already had in place to find and address the root cause. 

Another financial services sector client, to whom we’ve provided security solutions in the past, identified a performance issue. They were lacking critical visibility of the infrastructure being used by their home workers. We arranged a free extended trial of the software they needed, which we were also able to remotely install and configure on their behalf. 

We’re not saying all of this to blow our own trumpets or make a fast buck. Like many companies, we’ve found ourselves in the slightly awkward – but also positive – position where our expertise and the solutions we offer can actually help with real problems you’re facing right now. 

As vendor-independent consultants, we know which software will give you the visibility you need in a range of complex environments, including Citrix and SAP. We know how to recommend tools that will integrate best with your existing set-up. We can offer support with remote configuration and installation, and even supply technical staff on-site if necessary. We are more than happy to apply our knowledge to help you get the most from the software you already have. 

Download our guide to getting visibility fast to find out more about some of the solutions we provide.

If you would like to discuss a specific issue with your critical data monitoring, call us on 01782 752 369.

Phil Simms

Account Executive

Aligning your technical and business requirements with the right network, application and security management solution.

Who monitors the monitoring and manages the management?

Chris Booth, Solution Architect at KedronUK, considers how organisations and enterprises can get the most out of their Network Monitoring/Management System.

As an enterprise management consultancy, we might be biased, but we believe that for large or enterprise-size organisations, network monitoring is critical. A Network Management/Monitoring System (NMS) needs to provide functions such as proactive alerting, metrics to aid troubleshooting (e.g. root cause analysis), configuration management and reporting. When considering which tool to purchase, factors such as functionality, cost and scalability will often drive the decision-making process.

However, it is also important to consider who will monitor and manage the NMS itself. This could be in the form of general administration and day-to-day support to ensure it is available and performing as expected. There is little point in having an NMS which isn’t working properly and is thus unable to deliver timely alerts about infrastructure problems – the initial investment in the NMS has effectively been wasted. 

The NMS might also have some form of High Availability (HA), so data replication and integrity need to be monitored to ensure it can correctly fail over to an alternative data centre when needed. On a more occasional basis, there will be upgrades to the NMS and its supporting infrastructure (e.g. an SQL database back-end). So, to answer the question in the title of this post: KedronUK’s Technical Team. 

We monitor and manage the NMS for a number of our customers, removing this workload from the IT operations team and making use of our extensive product knowledge. 

We know, for example, when a particular version upgrade needs to be staged via an intermediate point release to ensure a smooth upgrade. In fact, we monitor the products we provide to our customers with the same products! Each service/support package is bespoke to the customer’s requirements, but typically includes system governance and regular calls with our Technical Team to offer advice on best practice or to discuss any open support tickets. A contracted package from KedronUK helps fix the TCO and maximise the ROI of the NMS.

The quality of the data in the NMS can also become a challenge. When multiple team members can add, edit or remove devices, adherence to standards can become somewhat variable. For instance, when a new network switch is onboarded, is it named correctly and tagged with a location to help identify the datacentre, rack or office? 

In response to this, KedronUK has developed a bespoke web portal which integrates with the NMS tools we offer, such as Infosim StableNet and SolarWinds. The portal provides data validation rules and drop-down choices to remove free-form text input and the impact it has on data quality. Role-Based Access Control (RBAC) is also supported, allowing a junior team member to onboard devices but not delete them. Administrative tasks can thus be controlled, ensuring the NMS data meets the defined standards.
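
As an illustration of the kind of rule such a portal enforces, here is a minimal sketch in Python. The naming convention and site codes below are invented for the example; a real deployment would encode whatever standards are defined for your estate.

```python
# Sketch of a device-name validation rule (convention invented for illustration).
import re

# Hypothetical standard: <SITE>-<LOCATION>-<ROLE><NN>, e.g. "LON-DC1-SW01"
NAME_PATTERN = re.compile(
    r"^(?P<site>[A-Z]{3})-(?P<loc>[A-Z0-9]{2,4})-(?P<role>SW|RTR|FW)(?P<idx>\d{2})$"
)
VALID_SITES = {"LON", "MAN", "BHM"}  # drop-down choices instead of free text

def validate_device_name(name: str) -> list[str]:
    """Return a list of problems; an empty list means the name is compliant."""
    errors = []
    match = NAME_PATTERN.match(name)
    if not match:
        errors.append(f"'{name}' does not match <SITE>-<LOC>-<ROLE><NN>")
        return errors
    if match.group("site") not in VALID_SITES:
        errors.append(f"unknown site code '{match.group('site')}'")
    return errors

print(validate_device_name("LON-DC1-SW01"))   # [] -> compliant
print(validate_device_name("switch_london"))  # flagged as non-compliant
```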

For more information, contact us here.

Chris Booth

Solutions Architect

Listens to your problems, then identifies the best tools and products to build solutions.

Packet capture: How important is it for cyber security?

Our Solution Architect here at KedronUK, Chris Booth, shares his thoughts on the question: packet capture – how important is it for cyber security?

Historically, packet capture has been a tool for troubleshooting complex problems where other information sources do not provide enough detail. Some enterprises have deployed permanent packet capture solutions within data centres, but the investment in storage required to provide even short-term data retention has deterred many interested users. With 10Gbps (or faster) backbones commonly in use, a busy network will generate petabytes of data on a weekly basis. Analysing this vast amount of data to provide meaningful insights is also challenging.

However, over the past two to three years a wave of new vendors has prompted many businesses to investigate traffic-based tools, with Gartner naming this sector Network Traffic Analysis (NTA). NTA tools use machine learning to automate the analysis of the captured data (be that flow records such as NetFlow, or raw wire data), and from this detect and alert on anomalous traffic and events. These data feeds should include both North-South (to/from the Internet) and East-West (internal) traffic.
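
To illustrate the underlying idea in a deliberately simplified form (this is not any vendor’s algorithm), the sketch below flags a host whose outbound volume deviates sharply from its own recent baseline – the kind of per-entity baselining NTA tools automate across thousands of hosts and far richer traffic features.

```python
# Toy flow-based anomaly detection: z-score against each host's own baseline.
from statistics import mean, stdev

# Hypothetical outbound bytes per host per 5-minute interval
history = {"10.0.0.5": [1200, 1350, 1100, 1280, 1330],
           "10.0.0.9": [800, 820, 790, 810, 805]}
current = {"10.0.0.5": 1310, "10.0.0.9": 52_000}  # 10.0.0.9 suddenly spikes

def is_anomalous(baseline: list[int], value: int, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(value - mu) / sigma > threshold

for host, value in current.items():
    if is_anomalous(history[host], value):
        print(f"ALERT: {host} outbound volume of {value} bytes is anomalous")
```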

Whilst attackers will try to hide their presence once a device has been compromised, they have to traverse a network to scan for targets, access resources, attack and/or exfiltrate data. Therefore the network can be seen as the “source of truth”, as it provides empirical evidence.

Enterprise Management Associates (EMA) has recently released a report entitled “Unlocking High Fidelity Security (2019)”. The majority of respondents were IT managers or directors at SME-sized companies (1,000 to 4,999 employees).

Key findings in the report include:

  • Although it depends on the type of attack, 60% believed network data is the better source for the earliest detection of a breach (compared with endpoint data).
  • The report identifies metadata as a new class of data. Metadata is not the full packet, but its most useful parts, along with supporting information that can be derived from the packet’s contents – for instance, an IP address extracted from a packet can then be geo-located (a minimal sketch of this idea follows the list). 65% of respondents identified metadata as “very valuable” in assisting with investigations, with a further 14% marking it “extremely valuable”. Metadata can also offer benefits from a retention perspective – by not storing the entire packet, the “lookback” window can be much bigger.
  • Enterprises that were using packet data had the highest confidence that they were detecting threats at the reconnaissance stage of the “Kill Chain”.
  • The report concludes: “While network packets do not contain all of the information needed to complete an investigation, the fact that 99% of daily activities cross the network makes it easy to understand why companies feel they have a heightened sense of awareness. They can detect issues faster than businesses relying on perimeter, systems, application, and authentication logs”.
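
As a rough illustration of the metadata idea, the sketch below (using the open-source scapy library) keeps a compact per-packet summary rather than the full payload. The capture filename is a placeholder, and geolocate() stands in for a real GeoIP lookup such as a MaxMind database.

```python
# Metadata extraction sketch: summarise packets instead of storing them whole.
from scapy.all import rdpcap
from scapy.layers.inet import IP

def geolocate(ip: str) -> str:
    return "unknown"  # placeholder: swap in a real GeoIP lookup here

records = []
for pkt in rdpcap("capture.pcap"):  # placeholder capture file
    if pkt.haslayer(IP):
        records.append({
            "ts": float(pkt.time),
            "src": pkt[IP].src,
            "dst": pkt[IP].dst,
            "bytes": len(pkt),                  # size only, payload discarded
            "src_geo": geolocate(pkt[IP].src),  # derived supporting info
        })

print(f"{len(records)} compact metadata records kept; full payloads discarded")
```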

Packet data solutions can also provide useful insight for network teams, as they can determine a range of metrics such as round-trip times and potential TCP issues like zero windows.
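
For instance, spotting TCP zero-window packets (a receiver advertising that its buffer is full) takes only a few lines with scapy; again, the capture filename is a placeholder.

```python
# Scan a capture for TCP zero-window packets, ignoring resets (RSTs also
# carry a zero window but signal something different).
from scapy.all import rdpcap
from scapy.layers.inet import IP, TCP

for pkt in rdpcap("capture.pcap"):  # placeholder capture file
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        tcp = pkt[TCP]
        if tcp.window == 0 and not tcp.flags.R:
            print(f"zero window: {pkt[IP].src}:{tcp.sport} -> "
                  f"{pkt[IP].dst}:{tcp.dport} at t={pkt.time}")
```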

How can KedronUK help?

KedronUK can assist organisations looking to deploy NTA technology for security and/or performance requirements. Our vendor partnerships include both flow data and packet data based NTA solutions, allowing us to pragmatically discuss and demonstrate the benefits and value of these tools.

Phil Simms

Account Executive

Aligning your technical and business requirements with the right network, application and security management solution.

How to achieve “multi-cloud” monitoring

Continuing our focus on the challenge of performance monitoring in the Cloud, David Hock, Director of Research at Infosim, discusses their approach to “multi-cloud monitoring”. We put our questions to David, and this is what he had to say.

You discuss multi-cloud monitoring – what do you mean by that term?

Sourcing services from different cloud service providers (e.g. Azure, Amazon, Office 365) requires monitoring service availability, performance and utilisation across the different platforms and technologies.

Multi-cloud environments require cross-silo monitoring of services spanning servers, storage, distributed and regional networks, virtual systems, containers, multiple clouds, app/web STMs, and enterprise applications and infrastructure.

The aim is to assure hybrid services – built on legacy infrastructures and systems as well as multi-provider cloud services – via cross-provider, cross-technology and cross-silo monitoring, in a way your operations staff can still handle.

You also need to be able to track issues, internal and external, where they occur and within a reasonable time.

When people think about monitoring infrastructure in the cloud, they often think of the tools native to each cloud provider’s platform. Why do you think an enterprise should consider a third-party solution such as StableNet?

Cloud provider tools usually work only within that provider’s platform, not across providers. You need a holistic view of the services you consume, not a large number of individual fragments that your operations team has to correlate manually.

StableNet combines “passive” monitoring – reading out data from the APIs provided by the respective cloud providers – with “active” probing, which tests different parameters externally.

Here are two examples:

Amazon, Azure & Co do “natively” provide information about the CPU usage, the memory usage, the number of disk writes, etc. but not about the actual services running in the virtual machine/Cloud instance. If you want to know how many emails you sent/calendar entries you have/etc., you can ask the Cloud API. If you want to measure how long sending and receiving an email takes or how long adding a contact takes, you need to do external probing – StableNet offers this and combines both worlds/approaches.

If your cloud provider offers you a certain amount of RAM, CPU, etc., the APIs help you check the actual usage. However, if you want to measure the actual SLA – i.e. whether your service runs smoothly or not – you need external measurements.
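
A minimal sketch of the two approaches side by side, with placeholder endpoints rather than any real provider SDK: the “passive” function reads figures the provider already exposes, while the “active” one times the operation the user actually experiences.

```python
# Passive vs active cloud monitoring sketch (all endpoints are placeholders).
import json
import time
import urllib.request

def passive_metrics(api_url: str) -> dict:
    """Read utilisation figures the provider already exposes (CPU, RAM...)."""
    with urllib.request.urlopen(api_url, timeout=10) as resp:
        return json.load(resp)  # e.g. {"cpu_pct": 41, "mem_pct": 63}

def active_probe(service_url: str) -> float:
    """Measure what the user experiences: end-to-end response time in ms."""
    start = time.perf_counter()
    urllib.request.urlopen(service_url, timeout=10).read()
    return (time.perf_counter() - start) * 1000

# Usage (placeholder endpoints):
# print(passive_metrics("https://api.cloud.example.com/vm/123/metrics"))
# print(f"mail responded in {active_probe('https://mail.example.com'):.0f} ms")
```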

How does StableNet differ from other third-party monitoring solutions for Cloud monitoring?

StableNet addresses and assures the monitoring of hybrid services based on legacy infrastructures and systems as well as multi-provider cloud services, via cross-provider, cross-technology and cross-silo monitoring, in a way your operations staff can still handle.

In particular, you do not need yet another tool and graphical user interface; you can combine the data into your existing monitoring using well-known integration interfaces.

If a customer has an end-to-end enterprise application service which may consist of elements of public, private and on-premises infrastructure, is StableNet able to understand the interrelationships between these components?

StableNet components such as the StableNet Service Analyzer and the highly automated StableNet Network Service Analyzer provide analysis tools to model and track cross-provider, cross-technology and cross-silo infrastructure constellations.

Furthermore, well-known StableNet technologies such as automated root cause analysis, derived measurements and dynamic rule generation can also be used to combine different measurement sources across hybrid cloud, network and more.

Phil Simms

Account Executive

Aligning your technical and business requirements with the right network, application and security management solution.

The development of NetFlow – time to look again?

Over a decade ago now, KedronUK were extremely successful providing customers with one of the first complete NetFlow solutions, NetFlow Tracker, developed by an Irish company called Crannog Software (later acquired by Fluke). 

It was almost a traffic analysis revolution: to analyse traffic flows, network engineers had been reliant on wire-data “sniffer” packet capture technologies, and NetFlow, although it had been around for a little while, had not really taken off.

Crannog developed a very intuitive, cost-effective NetFlow collector and reporting solution that was extremely powerful. A customer that previously had to deploy costly probes to report on the make-up of their routed traffic could now do this centrally using the routers they already had deployed. Happy days!

Other network device vendors followed suit, and it became possible to monitor cFlow, sFlow, jFlow and the emerging standard IPFIX. Like packet inspection, NetFlow was, and still is, used for both management and security use cases.

In time, SNMP- and packet-based vendors realised they were in a great position to add this functionality to their existing portfolios, and this probably, in truth, started the unified management trend.

So, standalone NetFlow for performance, although still very much around, was slowly replaced by tools that included it as just one of their data sources. This happened with mixed results, however, with some vendors nailing it and others having some fairly obvious limitations, such as the need to aggregate data immediately, which seriously weakened the troubleshooting use case.

From a network management standpoint, the big difference in the early days was that NetFlow was all about ‘accounting data’ – the who, what, where and when.

Where customers wanted to understand how fast and how delayed traffic was, packets were still king. The other limitation was that NetFlow only covered routed (layer 3) traffic, so you still needed packet insight to understand local traffic performance.
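
That ‘accounting data’ really is compact. As a rough illustration, here is a bare-bones sketch of a NetFlow v5 collector in Python, assuming the commonly used (but not mandatory) UDP port 2055; the who, what and where sit right there in each 48-byte record.

```python
# Bare-bones NetFlow v5 collector sketch: parse headers and flow records.
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")                # 24-byte v5 header
RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")  # 48-byte v5 flow record

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # common NetFlow port; routers export here

while True:
    data, addr = sock.recvfrom(8192)
    version, count, *_ = HEADER.unpack_from(data, 0)
    if version != 5:
        continue  # this sketch handles NetFlow v5 only
    for i in range(count):
        rec = RECORD.unpack_from(data, HEADER.size + i * RECORD.size)
        src, dst = socket.inet_ntoa(rec[0]), socket.inet_ntoa(rec[1])
        octets, sport, dport, proto = rec[6], rec[9], rec[10], rec[13]
        print(f"{src}:{sport} -> {dst}:{dport} proto={proto} bytes={octets}")
```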

It’s safe to say that for many, taking the above into account, NetFlow has become a tick in the box when looking at a Network Management or SIEM tool, and not a primary focus.

We think that view has been changing in recent years, with developments in technologies such as NBAR2, CBQoS and flow-generation probes, and improvements in the built-in intelligence of NetFlow-led products that deliver AI and anomaly detection.

We recommend you revisit your current NetFlow capabilities and see whether you’re getting everything you could from this valuable data. The previous limitations no longer always apply, and we often find ourselves recommending a more powerful flow solution to our customers, integrated via standard APIs into their existing monitoring stack.

Kirsty Jones

Marketing and Brand Development Lead

Spreads the word further and wider about how we can help connect and visualise your IT Ops and Sec Ops data.