Streamlining Efficiency Part Two: A Dive into our Latest Automation Project

In Part One, we discussed how many organisations are hesitant to disrupt existing technical workflows, even though those workflows may now be extremely inefficient. This mindset can be a barrier to progress and innovation, potentially costing time and money.

While certain processes may appear functional on the surface, they may still harbour inefficiencies or limitations that could be addressed through automation. By challenging the status quo and being open to change, organisations can uncover hidden opportunities for improvement and unlock new levels of efficiency and effectiveness.

In this second part of our mini blog post series, we will explore how even seemingly “unbroken” processes can benefit from automation, demonstrating the value of a proactive approach to innovation, as highlighted by a recent automation project with a manufacturing company in an industry dating back to the 19th century.

An open-minded, collaborative approach is essential for successful automation projects, especially in network management. By focusing on objective evaluations of workflows and processes, stakeholders can identify areas for improvement and implement solutions effectively. This approach was exemplified in a recent automation project carried out by KedronUK for a manufacturing company, where a clear focus on process evaluation led to significant improvements.

In the first part of this series, we outlined the project’s focus, highlighting three main elements:

1. Mitigating Ticket Proliferation in IT Service Management
2. Workflow Automation for Manual Tasks
3. Efficient Ticket Generation for the NG Firewall Platform

Let’s dive into the second focus area:

Workflow Automation for Manual Tasks

Many of us perform tasks regularly that we know could be more efficient, but the old saying comes to mind: “I don’t have time to stop to get on my bike,” or in other words, “I’m so busy with the task at hand that I don’t have time to stop and improve the process.” Double, or even triple, entry is a common example of this inefficiency.

I was talking with a new potential customer today, who manages a network supporting over 30,000 employees. When asked if they find themselves double-entering data between their CMDB and the NMS platform, they responded as expected: “Integration would be great, but we just don’t have that level of automation, and we don’t seem to be able to find the time to implement it.”

Our existing manufacturing company customer faced a similar situation. They used one system for ordering, another for CMDB, another for discovery and management, and another for ITSM, with users manually entering information at each layer. Additionally, vital information often wasn’t migrated between systems due to the manual workload. Such data would be extremely useful in the event of an incident or service assessment.

While this scenario can be politically sensitive, the benefits of a consistent, up-to-date flow of data from source to end-point, without the bottleneck of manual entry, are clear.

The first step was to identify what data was needed, where it was needed, and what the entry point for that data was. The goal was for an alarm produced by the NMS platform to automatically generate a ticket in the ITSM platform, with all the details required to triage the ticket, such as:

• Hostname
• IP Address
• Serial Number of all FRU components
• Services affected
• Service Status
• Device Criticality
• Asset Value
• End of Life Status
• Site Information
• Rack and U location
• Site access details
• Service Contract Status
• Support Contact Numbers
• SLA information
• Root cause Alert detail (what’s wrong)

As can be deduced from the list, the source of each data point was spread across several different platforms.

The second step was to identify the unique identifier between platforms to link the data sets. For example, the unique identifier between the ordering platform and the CMDB was the device serial number, and between the CMDB and the IPAM system, it was the hostname.
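
As a rough illustration of that linking step, the sketch below joins records from the ordering platform, the CMDB, and the IPAM system using the serial number and hostname as keys. The dictionaries and field names are hypothetical stand-ins for exports from the real systems; in the actual project this linking was performed by TOTUUS® data connectors rather than a standalone script.

```python
# Illustrative only: link records across platforms using shared identifiers.
# The dictionaries stand in for exports from the real systems.

ordering = {"SN12345": {"purchase_order": "PO-9917", "asset_value": 4200}}
cmdb = {"SN12345": {"hostname": "edge-sw-01", "site": "Stone DC", "criticality": "High"}}
ipam = {"edge-sw-01": {"ip_address": "10.20.30.1"}}

def build_device_record(serial: str) -> dict:
    """Merge the data held against one device in each platform."""
    record = {"serial_number": serial}
    record.update(ordering.get(serial, {}))    # ordering platform keyed on serial number
    record.update(cmdb.get(serial, {}))        # CMDB keyed on serial number
    hostname = record.get("hostname")
    if hostname:
        record.update(ipam.get(hostname, {}))  # IPAM keyed on hostname
    return record

print(build_device_record("SN12345"))
```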

Using TOTUUS®, a data framework solution with capabilities such as CMDB and ETL, we configured several data connector APIs. The customer CMDB was configured to send new or updated device information to TOTUUS®. A listening (PUSH) connector (DCx) was configured in TOTUUS® to receive this data, ensuring secure communication with tokenised URLs.

Upon receiving data from the customer CMDB, the PUSH DCx matched the data internally with a unique identifier, updated the local CMDB, and executed secondary DCx’s to connect to other platforms for additional data augmentation.
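
The internals of a TOTUUS® DCx aren’t reproduced here, but the general pattern of a listening (PUSH) connector can be sketched: an HTTP endpoint whose URL contains a secret token, which validates the token, matches the incoming record on a unique identifier, updates a local store, and then kicks off secondary lookups. Everything below, including the Flask route, the token, and the helper names, is an assumption for illustration only.

```python
# Minimal sketch of a "listening" (PUSH) connector, assuming a Flask endpoint.
# The tokenised URL, field names, and helper functions are hypothetical.
from flask import Flask, request, abort, jsonify

app = Flask(__name__)
EXPECTED_TOKEN = "replace-with-generated-token"  # placeholder secret embedded in the URL
local_cmdb: dict[str, dict] = {}                 # stand-in for the local CMDB store

def run_secondary_connectors(record: dict) -> None:
    """Placeholder for secondary connector calls that augment the record from other platforms."""
    pass

@app.route("/dcx/push/<token>", methods=["POST"])
def receive_cmdb_update(token: str):
    if token != EXPECTED_TOKEN:                  # reject callers without the tokenised URL
        abort(403)
    payload = request.get_json(force=True)
    serial = payload.get("serial_number")        # unique identifier used for matching
    if not serial:
        abort(400)
    local_cmdb.setdefault(serial, {}).update(payload)  # create or update the local record
    run_secondary_connectors(local_cmdb[serial])       # augment from other platforms
    return jsonify({"status": "accepted", "serial_number": serial})
```
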
StableNet® (the NMS platform) by InfoSim was configured to regularly discover additional asset information against the TOTUUS® CMDB. Another TOTUUS® DCx was configured to extract relevant information from StableNet® after asset discovery, such as device end-of-life details and configuration policy status.

Finally, a TOTUUS® DB Object was configured, allowing the customer CMDB to pull back detailed information of interest and keep its own records up to date.

With all required data in one place, an alarm script in StableNet® was configured to augment alarm details and send the necessary information to the ITSM platform, automatically creating an ITSM ticket with the required level of detail.
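
The StableNet® alarm-script mechanism itself is product-specific, so the sketch below only shows the general shape of this final step: take a root-cause alarm, look the device up in the consolidated data, and post a ticket carrying the triage fields to the ITSM platform’s REST API. The endpoint URL, payload schema, and response shape are all assumptions.

```python
# Illustrative alarm-to-ticket step; the ITSM URL and payload schema are assumptions.
import requests

ITSM_URL = "https://itsm.example.com/api/tickets"  # hypothetical ITSM REST endpoint

def raise_ticket(alarm: dict, device: dict) -> str:
    """Combine the root-cause alarm with the consolidated device record and raise a ticket."""
    payload = {
        "summary": f"{device['hostname']}: {alarm['message']}",
        "hostname": device["hostname"],
        "ip_address": device.get("ip_address"),
        "serial_number": device.get("serial_number"),
        "criticality": device.get("criticality"),
        "site": device.get("site"),
        "support_contact": device.get("support_contact"),
        "root_cause_detail": alarm["message"],
    }
    response = requests.post(ITSM_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["ticket_id"]  # assumed response shape
```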

This engagement resulted in data being entered once, at its point of origin, and automatically collected and passed to secondary systems, augmented en-route, and collated at its destination. This reduced errors, data bottlenecks, and time spent understanding required information during incidents, significantly reducing MTTR.

In the final segment of this blog series, we’ll continue exploring facets of network management automation, with efficient ticket generation for the NG Firewall Platform.

If you would like to discuss an Automation or Consolidation project, please contact phil.swainson@kedronuk.com.

Phil Swainson

Head of Technology

Responsible for the KedronUK portfolio, including in-house product development.


Streamlining Efficiency: A Dive into our Latest Automation Project

The age-old adage “if it ain’t broke, don’t fix it” often serves as a deterrent to embracing automation. Many organisations are hesitant to disrupt existing technical workflows, especially if they’ve proven effective, if not efficient, over time. However, this mindset can also be a barrier to progress and innovation, in some cases costing time and therefore money.

While certain processes may appear functional on the surface, they may still harbour inefficiencies or limitations that could be addressed through automation. By challenging the status quo and being open to change, organisations can uncover hidden opportunities for improvement and unlock new levels of efficiency and effectiveness.

This blog post will explore how even seemingly “unbroken” processes can benefit from automation, demonstrating the value of taking a proactive approach to innovation, as highlighted by a recent automation project with a manufacturing company in an industry that dates back to the 19th Century.

“By focusing on objective evaluations of workflows and processes, stakeholders can identify areas for improvement and implement solutions effectively”.

An open-minded, collaborative approach is essential for successful automation projects, especially in network management. By focusing on objective evaluations of workflows and processes, stakeholders can identify areas for improvement and implement solutions effectively. This approach was exemplified in a recent automation project carried out by KedronUK for a manufacturing company, where a clear focus on process evaluation led to significant improvements.

Initially, ten workflows were identified for evaluation.

The goal was to:

• Identify the stakeholders involved in each workflow.
• Thoroughly understand the current workflows.
• Quantify the time and effort involved in each workflow.
• Assess the feasibility of automating each workflow.

Each of the ten workflows was assessed and categorised for feasibility, efficiency benefit, and cost versus benefit. It was found that the initial ten workflows fell into the following three main categories:

1. Mitigating Ticket Proliferation in IT Service Management:
There was a need to address the issue of an excessive number of tickets being generated in the IT Service Management Platform. Streamlining and refining the ticketing process would be pivotal in enhancing overall operational effectiveness.

The Network Operations Centre (NOC) team were finding it very difficult to stay on top of the 10,000-plus tickets being generated through existing integrations.

2. Workflow Automation for Manual Tasks:
An automation opportunity was identified within manual workflows to eliminate redundancy associated with repetitive tasks. This included expediting the onboarding of new devices and the cessation process for existing devices. By automating these procedures, the aim was to enhance efficiency, reduce errors, and accelerate the overall pace of operations.

3. Efficient Ticket Generation for the NG Firewall Platform:
Automating the process of ticket generation for the Next Generation Firewall platform, with threat intelligence, to ensure a swift and accurate response to detected threats and assessments. This would involve integrating automation solutions that expedite the identification, logging, and resolution of issues on the platform, ultimately contributing to a more responsive and agile operational environment.

Let’s look at these in turn:

Mitigating Ticket Proliferation in IT Service Management
Three tiers, or components, were involved in raising ITSM tickets for the organisation, each of which already had a degree of automation implemented. The results, however, had become unmanageable, with approximately 10,000 tickets per month being raised for the 24×7 NOC team to triage and close. This equated to roughly 14 tickets per hour around the clock.

Upon investigation, it was discovered that a significant portion of these tickets were duplicates or repetitions of similar events, leading to a staggering 70% increase in ticket volume. The existing automation had become inadequate, exacerbating the issue rather than resolving it.

The first tier, the Network Management tool, had root cause calculation capabilities but was configured to forward all alarms, without root cause, to the second tier—a Network Management tool with integrations to the third tier, the ITSM platform. While this setup seemed promising in theory, it proved ineffective in practice, as evidenced by the overwhelming volume of tickets inundating the NOC team.

The immediate and pressing question arose: Why wasn’t a tool equipped with root cause analysis capabilities being fully leveraged? The answer, though somewhat surprising, revealed that the second-tier solution possessed the capability to filter—not correlate—for alarms tagged with a root cause from the first tier. Furthermore, the business had decided to only address root cause incidents of specific types via the proactive team in the ITSM, with the remainder managed by the Business as Usual (BAU) Team through reports. Consequently, this criterion was also added to the filter.

This setup meant that, regardless of the configurations in tier one, tier two would only forward what it was configured to, resulting in the decision to send everything from tier one to tier two.

We recommended reversing this logic, making the more capable tier one tool the one with the intelligence to determine what to send after calculating root cause. This approach would leave tier two with the straightforward task of merely forwarding what it receives. Additionally, this approach simplifies future configuration changes, as there is only one tool to configure.
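
To make the recommendation concrete, the fragment below sketches the reversed logic: tier one only forwards alarms that are root causes and that match the proactive team’s criteria, and tier two simply passes on whatever it receives. The alarm fields and criteria are placeholders, not the customer’s actual configuration.

```python
# Sketch of the reversed forwarding logic; alarm fields and criteria are placeholders.

PROACTIVE_TYPES = {"device_down", "interface_down", "power_supply_failure"}

def tier_one_should_forward(alarm: dict) -> bool:
    """Tier one applies the intelligence: only root causes of the agreed types go forward."""
    return alarm.get("is_root_cause", False) and alarm.get("type") in PROACTIVE_TYPES

def tier_two_forward(alarm: dict, send_to_itsm) -> None:
    """Tier two no longer filters or correlates; it simply forwards what it receives."""
    send_to_itsm(alarm)

# Example flow
alarms = [
    {"type": "device_down", "is_root_cause": True},
    {"type": "interface_flap", "is_root_cause": False},  # suppressed at tier one
]
for alarm in alarms:
    if tier_one_should_forward(alarm):
        tier_two_forward(alarm, send_to_itsm=print)
```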

This change resulted in a 33% reduction in the number of alarms sent to tier two, all of which matched the proactive team’s criteria. However, the number of ITSM tickets remained roughly the same.

In delving into the root cause of the ticket surge, we examined a month’s worth of ticket data. Our analysis revealed a trend: a substantial number of tickets were being closed by the proactive team, marked as acceptable within business utilisation thresholds. Moreover, we observed a proliferation of seemingly duplicate incidents, where multiple tickets were processed and closed by the team, referring to existing open tickets.

The investigation yielded two significant recommendations. Firstly, we proposed fine-tuning the tier one management platform to trigger alarms based on business utilisation thresholds, which notably curtailed the number of utilisation-related ITSM tickets.

Secondly, we investigated the issue of apparent ticket duplication for identical incidents. We uncovered a limitation within the tier two platform: its ticket-raising process lacked an update mechanism. When a condition resolved to its KPI, an “OK” notification was issued from tier one to tier two. Tier two would then close the incident locally without updating the ITSM, so recurrent breaches generated new tickets. This gap was attributed to the business requirement for all tickets to be closed manually.

A solution was needed to update open tickets with both the “OK” notification and recurrent breaches. However, we hit dead ends with the tier two solution’s capabilities and with the ITSM platform team, due to a reluctance to alter logic, so we redirected our focus to tier one. Leveraging its capability to interface directly with the ITSM tier, bypassing tier two, we achieved the required ticket creation and update process.
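
The essence of the fix is a create-or-update pattern keyed on the alarming device and condition, so that repeat breaches and “OK” notifications land on the existing open ticket instead of spawning new ones. The sketch below is a generic illustration of that pattern under stated assumptions: the ticket key, statuses, and in-memory store are illustrative, and the “closed manually” business rule is reflected by marking the ticket rather than closing it.

```python
# Generic create-or-update pattern for ITSM tickets; keys and statuses are illustrative.

open_tickets: dict[tuple, dict] = {}  # stand-in for a lookup of open tickets in the ITSM

def handle_alarm_event(device: str, condition: str, state: str, detail: str) -> dict:
    """Create a ticket on first breach, update it on repeats, annotate it on 'OK'."""
    key = (device, condition)                        # identifies one logical incident
    ticket = open_tickets.get(key)
    if ticket is None:
        ticket = {"device": device, "condition": condition, "worklog": [], "status": "open"}
        open_tickets[key] = ticket                   # first breach: create
    ticket["worklog"].append(f"{state}: {detail}")   # repeat breach or OK: update the same ticket
    if state == "OK":
        ticket["status"] = "pending manual closure"  # business rule: tickets are closed manually
    return ticket

handle_alarm_event("edge-sw-01", "cpu_utilisation", "BREACH", "CPU at 96%")
handle_alarm_event("edge-sw-01", "cpu_utilisation", "OK", "CPU back below threshold")
```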

Overall, monthly tickets saw a remarkable 77% reduction, plummeting from 10,000 to 2,300. This significant improvement allows the team to allocate more resources to incident resolution rather than ticket deduplication. Furthermore, the business is now evaluating the business case for the tier two solution, with potential cost reductions on the horizon.

As can be seen, the insights gained from our analysis led to recommendations aimed at optimising processes and enhancing productivity. In the upcoming segments of this blog series, we’ll continue our dive into additional facets of network management automation with a look at Workflow Automation for Manual Tasks followed by Efficient Ticket Generation for the NG Firewall Platform.

If you would like to discuss an Automation or Consolidation project, please contact phil.swainson@kedronuk.com.

Phil Swainson

Head of Technology

Responsible for the KedronUK portfolio, including in-house product development.


Breach & Attack Simulation: UK Market Report

In today’s digital age, businesses must be proactive in protecting their sensitive data and networks from cyber threats. One way to do this is through the use of breach and attack simulation (BAS) tools. BAS tools are designed to test the resilience of a company’s cybersecurity policies and procedures by simulating real-world cyber-attacks. This allows businesses to identify vulnerabilities and weaknesses in their systems before a malicious actor can exploit them. However, many business leaders may be unsure of the differences between breach and attack simulation, vulnerability scanning, and penetration testing.

Vulnerability scanning is the process of identifying and assessing vulnerabilities in a company’s systems and networks. This is typically done using automated tools that scan for known vulnerabilities and provide a report on any that are found. Penetration testing, on the other hand, goes one step further by actively attempting to exploit vulnerabilities in a company’s systems and networks. This is done by a team of ethical hackers who simulate real-world attacks to identify and assess the effectiveness of a company’s cybersecurity defences.

BAS takes a different approach by simulating real-world cyber-attacks in a controlled environment. This allows businesses to test their cybersecurity policies and procedures in a realistic scenario and identify any gaps or weaknesses. One of the challenges when deploying BAS is knowing how to deploy it within different customers’ unique technical architectures, to test all the critical security policies. Kedron provides this expertise as part of their service along with ongoing support and review. This means customers get the benefit of a delivered managed service but without the higher costs of a total outsource arrangement.

Kedron offers the ThreatSim product from Keysight, a market leading BAS solution, as part of their service. Many experts in the field, such as Gartner and Forrester, have stated that Breach and Attack Simulation is essential for enterprise security teams. Gartner states that “BAS solutions are essential for enterprise security teams to test the effectiveness of their security controls and identify vulnerabilities that need to be prioritized for remediation.” Forrester notes that “BAS has emerged to provide an attackers view, with deeper insights into vulnerabilities, attack paths, and weak/failed controls, making it an essential tool for any enterprise security team looking to proactively identify and remediate vulnerabilities before they can be exploited by attackers.”

In conclusion, breach and attack simulation is an important tool that should be used in addition to vulnerability scanning and penetration testing. It allows businesses to test their cybersecurity policies and procedures in a realistic scenario and identify vulnerabilities before they can be exploited.

Read our recent Survey Report, produced in partnership with Keysight Technologies, to learn more about how KedronUK and ThreatSim can help your business with BAS services.

Phil Swainson

Head of Technology

Responsible for the KedronUK portfolio, including in-house product development.


New Partnership with Allegro Packets!

Who are Allegro Packets and when was the company established?

Allegro Packets was formed in 2007 by Klaus Denger, a serial tech entrepreneur. Based in Leipzig, Germany, his mission was to provide affordable, fast, and easy-to-use insight into network issues. This led to a range of 4th-generation network performance management solutions.

How did Allegro Packets and Kedron come together?

Kedron was identified as a partner who could add to Allegro’s channel-only focus, and previous experience of the two management teams working together made for a good fit. Kedron’s customer-first ethos fitted perfectly with Allegro’s, whose continued development of the solution is based on customer feedback; 90% of all development is done this way, with regular user feedback days.

What gap is Kedron filling for Allegro Packets?

Kedron brings real benefit as a true value-added reseller. Years of experience in the network performance management field have led to a wealth of expertise that could immediately see the benefit of the Allegro range. From small portable solutions to large enterprise installations, Kedron has the staff and project management skills to ensure a successful deployment.

What can Allegro Packets bring to Kedron?

When the initial solution was created, the first pillar was performance. 3rd-generation systems captured all the packets and then extracted them for analysis. This had two problems: first, extracting those packets and mining through everything captured takes time; second, capturing and storing all those packets requires huge drive arrays. 4th-generation Allegros overcome this with real-time analysis that allows users to go straight to the issues, and only the packets of interest then need to be stored. This leads to the second pillar: affordability. Less storage means lower cost, bringing performance management back to sensible budget levels with superb ROI. The third pillar, ease of use, comes from a simple, intuitive L2-7 menu system with a top-down view, meaning issues can be found quickly and easily. Add that the software is the same on a large data centre appliance as it is on the portables, and an easy hybrid monitoring and ad-hoc environment can be created without learning two sets of software.

Phil Swainson, Head of Technology at KedronUK says: “We’ve found that customers managing enterprise networks are struggling to find a network performance management tool focused on packets that can handle the demands of high-speed, high-bandwidth networks, while not breaking the bank with excessive storage requirements. The unique way Allegro Packets solutions work means that network managers and IT Ops managers can get the information they need without having to search petabytes of data.”

To find out more about Allegro Packets, please contact us or get in touch with our sales team at sales@kedronuk.com.

Phil Swainson

Head of Technology

Responsible for the KedronUK portfolio, including in-house product development.


Network Management, a Simple Truth: Garbage In, Garbage Out

I’m often ridiculed for how frequently I use the expression ‘garbage in, garbage out’; in fact, many of my colleagues know when I’m about to say it and mouth it back to me. So let’s get it out of the way now: when it comes to commissioning, or onboarding, an NMS, I believe the administrator’s mantra should be GARBAGE IN, GARBAGE OUT!

An NMS comes into its own when alerting on an incident, providing the location of a root cause, or allowing you to obtain data for a report; if the commissioning and maintenance of the system has been sloppy, then you will get what you deserve.

With the best will in the world, however, unless data input is managed and controlled against defined best practices, users will often resort to ‘minimal input’ to achieve their goal. This is often exacerbated by new and/or inexperienced users being tasked with commissioning or on-boarding, which tends to happen a little while after a new NMS is first installed, when the experienced engineers have moved on to the next project.

During the initial commissioning phase of a new NMS, time and emphasis are placed on the quality of the input; only later is this repetitive task handed to less experienced users armed with ‘Local Work Instructions’ (LWIs). LWIs provide a guideline of best practice but little if any governance of it. Add human error to this, and a company quickly moves to the ‘garbage in, garbage out’ situation. There, I said it again.

Usually at the outset of deploying a new NMS platform, great emphasis is placed on the presentation and structure of the solution:

  • Will device grouping be geographical, service, business unit, device type or the like?
  • Will augmented data be required from additional sources, such as address or Geo data, service contracts and SLA details?
  • What device tags will be required for filtering and reporting?
  • Which user groups will have which roles and visibility?
  • What views will be created and who will have visibility?

Defining these, however, is only the first step; enforcing them, moving forward, is entirely another.

Why, though, is this so important? Carefully crafted reports, for example, with filters that include inventory items based on specific attributes, will only continue to include all the required inventory if those attributes are completed correctly by your users. It is very common to see systems that looked complete and fit for purpose when initially commissioned end up far from fit for purpose just six months later.

Additionally, understanding what’s missing from an NMS continues to present a major challenge to administrators: how do we know what we don’t know?

For well over a decade, KedronUK has worked in conjunction with some of the UK’s leading network management companies, using best-of-breed network analysis and management tools and platforms, and in the process we have defined best practices for utilising them. In recent years, our customers have increasingly called for automated processes and procedures to enforce these defined practices.

We have therefore developed our ‘Commissioner Portal’ to define and enforce configured inputs, reducing human error and ‘free-form’ inputs to a minimum. It empowers inexperienced users to on-board devices exactly as administrators define, presenting network management tools with a clean set of data.

The Commissioner Portal allows for the definition of associated data sets to be created in advance which are simply selected by the end user rather than having to be input at the point of inventory insertion each time. This could be a data set related to a physical site location for example. The data set for a Site can be as extensive as required but does not require re-entry every time an inventory item is added to that site.
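
The Commissioner Portal itself is a KedronUK product and its internals aren’t shown here, but the idea of enforcing configured inputs and reusing predefined data sets can be sketched generically: the on-boarding step only accepts values from defined lists, and site-level details are attached from a data set selected by name rather than retyped each time. All field names and allowed values below are illustrative assumptions, not the portal’s actual schema.

```python
# Illustrative enforcement of configured inputs during device on-boarding.
# Allowed values and the site data set are hypothetical examples.

ALLOWED_DEVICE_TYPES = {"router", "switch", "firewall"}
ALLOWED_CRITICALITY = {"Low", "Medium", "High"}

SITE_DATA_SETS = {  # defined once in advance, then selected by the end user
    "Stone DC": {"address": "Kern House, Stone Business Park", "access": "24x7 badge access"},
}

def onboard_device(hostname: str, device_type: str, criticality: str, site: str) -> dict:
    """Reject free-form input and attach the predefined site data set."""
    if device_type not in ALLOWED_DEVICE_TYPES:
        raise ValueError(f"Unknown device type: {device_type}")
    if criticality not in ALLOWED_CRITICALITY:
        raise ValueError(f"Unknown criticality: {criticality}")
    if site not in SITE_DATA_SETS:
        raise ValueError(f"Site '{site}' has no predefined data set")
    return {"hostname": hostname, "type": device_type, "criticality": criticality,
            "site": site, **SITE_DATA_SETS[site]}

print(onboard_device("edge-sw-01", "switch", "High", "Stone DC"))
```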

With a set of predefined reports, the user is able to see at a glance the current status of their onboarding attempts and ‘see’ what is missing from the NMS when compared to the commissioning database.

We have recently made the decision to combine the Commissioner Portal with the KedronUK TOTUUS solution. The Commissioner Portal now becomes a module of the already extensive TOTUUS solution, with its Data Connectors (DCX) providing automation of Inventory updates from 3rd party systems and flat file locations, along with automated data normalisation, allowing for seamless commissioning of an NMS solution from 3rd Party systems or element managers.

If any of the above strikes a chord with you, please get in touch with us here.

Phil Swainson

Head of Technology

Responsible for the KedronUK portfolio, including in-house product development.
