Visibility Leads to Knowledge
Friday, March 17, 2017 @ 11:03 AM gHale
Learn How to Boost Process Performance, Fix Hidden Network Issues, and View Alerts for a Growing Number of Cybersecurity Threats
Knowledge is king in today's industrial manufacturing environment, as decisions need to occur with 20/20 vision and no blind spots. What worked yesterday may not work today and surely won't work tomorrow.
Network visibility allows manufacturers to discover issues that impact process integrity and performance, fix hidden problems in their networks and protect critical systems against cyber threats.
Having the right technology allows the user to understand the network and know what is happening, or to have the data to analyze what happened.
Take the December 2016 attack on a remote power transmission facility in Kiev, Ukraine, which shut down the remote terminal units (RTUs) that control circuit breakers and caused a power outage for about an hour.
Unlike a 2015 cyberattack that knocked out 27 power distribution operation centers across the country and affected three utilities in western Ukraine, the December 2016 attack hit Pivnichna, an electrical transmission-level substation.
Investigators were able to go in and retrieve logs to start the forensics process.
Analyzing and understanding data points from the logs allowed a research team at UkrEnergo, the national power company that oversees the Pivnichna substation and others, to conclude the incident was part of a bigger, ongoing attack throughout the country.
Enhancing Network Performance
While the technology was able to cull details from the network and analyze an attack, the same technology can also provide data that can ensure the process is running in top working order.
Any network is a collection of connected devices that can communicate with one another over a common transport or communication protocol. Communication can mean the transfer of data among users or instructions between nodes in the network, such as computers, mobile devices, output devices, management elements, servers, routing and switching devices.
Networks have evolved from flat architectures with only a handful of elements. Now networks are more complex and growing faster, with smarter and more robust technologies such as cloud, wireless, remote users, VPN, IoT, and mobile devices, among others.
Despite the ongoing technological evolution, one factor remains: the need for network monitoring. There is a need for faster monitoring speed and increased data granularity, allowing for greater insight into network performance. That deeper level of monitoring lets network administrators know what is going on in their network, be it WAN, LAN, VoIP, and other connections, or nodes like switches, routers, firewalls, servers, and client systems.
Along those lines, it is very easy to talk about the Industrial Internet of Things (IIoT), Big Data analytics, and cloud coverage, and they are important, but the reality is manufacturers today are not jumping headfirst into all those technologies; they are dipping their toes in the water and going slowly. While those big trends will eventually become de rigueur in the industry, that is way down the road. Right now, users need to focus on keeping their systems up and running and gaining more productivity, while ensuring a secure environment. All that is happening while manufacturers are losing more workers to retirement amid a general lack of engineers coming into the industry. That means network monitoring tools are becoming as important as a scalpel is to a surgeon. Visibility into what is happening and what is on the network can mean the difference between a big profit and a significant loss.
The goal of network monitoring is to collect network statistics such as data usage, latency, errors, discards, and CPU, memory, and disk values. Those statistics are collected via non-intrusive polling.
The issue is knowing the network and having awareness and a logging capability for events occurring at all times. That can all happen once the user develops a solid understanding of what the network should look like. By running the network monitoring tool over time, data points are collected and stored to prove and show just how the network is operating.
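As a minimal sketch of that idea in Python (the `poll_counters()` function here is a hypothetical stand-in for a real non-intrusive SNMP poll, and the values are simulated): sample device counters at a fixed interval, store the data points, and derive a baseline of what "normal" looks like.

```python
import random
import statistics
from collections import deque

def poll_counters():
    """Hypothetical non-intrusive poll; a real tool would query SNMP counters."""
    return {"if_util_pct": random.uniform(20, 40), "errors": random.randint(0, 2)}

class BaselineMonitor:
    """Collect samples over time and summarize what 'normal' looks like."""
    def __init__(self, window=288):           # e.g. one day of 5-minute polls
        self.samples = deque(maxlen=window)

    def record(self):
        self.samples.append(poll_counters())

    def baseline(self):
        utils = [s["if_util_pct"] for s in self.samples]
        return {"mean_util": statistics.mean(utils),
                "stdev_util": statistics.pstdev(utils)}

monitor = BaselineMonitor()
for _ in range(100):                          # simulate 100 polling cycles
    monitor.record()
print(monitor.baseline())
```

With the baseline stored, any later sample can be judged against the learned mean and spread rather than a fixed threshold.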
The user needs more meaningful, business-related data to be able to make real-time decisions. It is not just a case of knowing there is a high-utilization component on the network, but why is that happening? What is causing it? Is there any deeper insight into it? What is the good business decision to make? Do I need to upgrade the network? Am I investing too much money in this portion of my network? How do I know that? By having deeper analytics into the network, a user can make decisions that can potentially save costs or steer them in a different direction where they may need to invest in other areas. That can all happen by knowing exactly how the network is performing, understanding the trends, and knowing the hotspots.
That can occur by monitoring and visualizing processes, grabbing a baseline with a degree of granularity that allows the user to understand what is going on at all times. Non-intrusive, real-time mapping, monitoring, and visualization provide immediate insights for faster troubleshooting and remediation of IT and operational issues without any impact on industrial processes.
Network monitoring allows for:
• Incident management that automatically aggregates multiple alerts and messages into incidents, using an intelligent approach to problem solving. That allows an operator to manage a network in an understandable way.
• Customizable, portable dashboards that simplify and streamline the standardization of corporate policy, security monitoring, and operational reporting across plants.
• Comparison of a process at different times to understand and visualize changes in the ICS environment and place everything in the proper context.
• Optimized performance through faster response times and near-instantaneous answers.
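The first point, alert aggregation, can be sketched roughly like this, assuming each alert carries a device name and a timestamp (all names here are illustrative, not any vendor's API): alerts on the same device within a short window fold into a single incident, so an operator sees one item instead of a flood.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    device: str
    alerts: list = field(default_factory=list)

def aggregate(alerts, window=300):
    """Group (device, timestamp, message) alerts into incidents.

    Alerts for the same device within `window` seconds of that incident's
    most recent alert are merged into it; anything else opens a new incident.
    """
    incidents = []
    open_by_device = {}
    for device, ts, msg in sorted(alerts, key=lambda a: a[1]):
        inc = open_by_device.get(device)
        if inc and ts - inc.alerts[-1][0] <= window:
            inc.alerts.append((ts, msg))
        else:
            inc = Incident(device, [(ts, msg)])
            incidents.append(inc)
            open_by_device[device] = inc
    return incidents

alerts = [("sw1", 0, "link flap"), ("sw1", 30, "high errors"),
          ("rtr2", 10, "cpu high"), ("sw1", 900, "link flap")]
print(len(aggregate(alerts)))  # four alerts collapse into three incidents
```

Real products use richer correlation (topology, root cause), but the time-window grouping above is the core of the idea.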
The numbers back that up: 54 percent of U.S. manufacturers say they lack a unified view of what's happening on the plant floor, according to research by the Aberdeen Group consulting firm. In addition, with a conglomeration of legacy control systems permeating plant floors, users find them more expensive to maintain and operate with each passing quarter, which is why manufacturers are investing in "standardizing production processes across their network of factories to create better visibility, coordination, and orchestration," according to research from IDC.
The need for better operational efficiency stands out as a major pressure for all companies. In another Aberdeen study, 40 percent of respondents cited operating costs that are too high, 30 percent cited reduced budgets, 30 percent are challenged by managing multiple datasets, and 18 percent said increasing supplier lead times present issues.
Overall, 55 percent of respondents said too many operational inefficiencies are a huge challenge for their organization. Additionally, operational intelligence survey respondents feel pressured by expectations for real-time, critical decision-making across operations, the Aberdeen research said.
Let’s face it, networks are more complex now. They are more difficult because users have to deal with different technologies, software, devices, and protocols. People don’t just want to know if a device is up or down. They want more: deep packet inspection and application performance monitoring (APM). They need to know what is happening on the network and with applications, end-to-end from client to server.
Having agent technology sitting out on the network, and a system that talks to those agents and sends data to the proper places locally or globally, is vital today. An agent in a particular area can perform tests, collect metrics, and feed the data back. That gives operators more visibility into their network, not just the bits and bytes; they get the deep dive into the network.
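A toy sketch of that agent model, with in-process threads standing in for remote agents and a queue standing in for the network transport back to the collector (the metric names and values are invented):

```python
import queue
import threading
import time

def agent(name, out_q, samples=3):
    """Hypothetical agent: runs local tests and reports metrics upstream."""
    for i in range(samples):
        out_q.put({"agent": name, "latency_ms": 5 + i})  # stand-in measurement
        time.sleep(0.01)

def collector(q, expected):
    """Drain the transport and hand the metrics to analysis/storage."""
    return [q.get() for _ in range(expected)]

q = queue.Queue()
threads = [threading.Thread(target=agent, args=(n, q)) for n in ("siteA", "siteB")]
for t in threads:
    t.start()
for t in threads:
    t.join()
metrics = collector(q, 6)    # two agents x three samples each
print(len(metrics))
```

In a real deployment the queue would be a network protocol and the collector a central server, but the shape of the data flow is the same.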
SME to Generalist
Having that detail at a user’s fingertips ends up being paramount as operators and engineers retire and technology becomes a more robust aspect of every working environment. That also means the era of the specialized “subject matter expert” is giving way to the idea of engineers becoming generalists.
Whether it is IT or OT, an engineer can’t be just a Cisco engineer, a Juniper expert, a process engineer, or a system administrator. They have to have a generalist skill set. They are looking after systems, networks, and applications, and they need some programming skills as well.
As companies downsize and automate, they are looking to short-circuit, circumvent, and quicken processes. People and technology cannot be an island anymore. Instead, one big driver is to allow for a more open and interoperable environment able to integrate and talk to other things.
That kind of environment also has the potential for attackers to jump in and potentially pilfer or damage a manufacturing enterprise. Visibility into what should or should not be on the network becomes vital.
Along those lines, 295 critical infrastructure attacks were reported last year to the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), a division of the U.S. Department of Homeland Security, according to its annual report. But everyone knows that is just the tip of the iceberg.
In addition, consulting company Enterprise Strategy Group published results of a survey of critical infrastructure providers that showed dramatic increases in the number of attacks: 68 percent of respondents said they experienced one or several cybersecurity incidents over the past two years; 36 percent said cybersecurity incidents led to a disruption of operations; and 66 percent of cybersecurity experts at critical infrastructure providers believe the threat landscape is more dangerous today than it was two years ago.
The security story is simple: It is not a matter of if a manufacturer will suffer an attack, but when. That means understanding the network, and what the network looks like, becomes even more important. An ICS network, while complex, does not have a constantly changing, dynamic set of protocols or software; it remains fairly static. That is good news when it comes to viewing the network: If something appears out of the ordinary, it should cause suspicion.
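That static quality makes a simple allow-list approach workable. A sketch, with hypothetical flow records: learn the set of conversations seen during normal operations, then flag anything that falls outside it.

```python
def baseline_talkers(flows):
    """Learn the set of (src, dst, protocol) conversations seen during normal ops."""
    return {(f["src"], f["dst"], f["proto"]) for f in flows}

def suspicious(flows, baseline):
    """Flag any conversation not in the learned baseline -- on a mostly
    static ICS network, anything new deserves suspicion."""
    return [f for f in flows if (f["src"], f["dst"], f["proto"]) not in baseline]

# Invented example traffic: a PLC talking to an HMI, a historian feed, and
# then an unexpected outbound HTTP conversation.
normal = [{"src": "plc1", "dst": "hmi1", "proto": "modbus"},
          {"src": "hist1", "dst": "hmi1", "proto": "opc"}]
base = baseline_talkers(normal)
later = normal + [{"src": "plc1", "dst": "198.51.100.7", "proto": "http"}]
print(suspicious(later, base))  # only the unexpected HTTP conversation
```

On an IT network this would drown in false positives; on a static OT network it is a usable first filter.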
While ensuring a secure network on the OT side remains a difficult task, there are reports that help may be on the way.
A SANS Institute survey found 46 percent of respondents have job responsibilities that cover both IT and OT. With more people holding joint IT and OT responsibilities, there will be more intelligent hands on deck to handle configuration compliance monitoring, regular security assessments, and threat intelligence engines, all of which most operations engineers cannot, or do not want to, do. Having tools with the potential to protect mission-critical control networks is essential.
No Need to Reinvent the Wheel
In the OT industry, there is a growing use of IT protocols. In one case, companies’ controller kits supported SNMP. SNMP, a protocol used to manage IT equipment, traditionally ended up used in the IT space. However, OT practitioners now see it as an effective management protocol, and they are embracing it. It is easy to build into products, and it is then simple to build applications to manage and monitor it.
The driver really is analyzing the OT data, which can improve the efficiency of the business. By moving the data into the IT environment, with its mature analytics tools, users can add an extreme amount of value to the business: they can pull data and get real-time visibility, metrics, data analytics, real-time reporting, real-time alerting, and thresholding.
In the OT space, yes, there are platforms that alert when things are going wrong, but to make good business decisions you need historical data to work with. You have to see what the trends are and what happened before, so you can make a clear decision about what will potentially happen in the future.
Forecasting, analytics, and anomaly detection are extremely important, and the IT space has a maturity there that OT does not. By gluing the two together, you can have the best of both worlds: the robustness of the OT space, with the hardware and systems already in place, plus the ability to make clearer decisions by taking the data stored on a server, analyzing it, and building a trend to find efficiencies or to learn about and improve OT issues.
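As a small illustration of the kind of trend-building meant here (the utilization series is invented), a least-squares line fit over stored historical samples can project where a link is heading, turning raw stored data into a forecast:

```python
import statistics

def linear_trend(series):
    """Least-squares slope and intercept over evenly spaced historical samples."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = statistics.mean(series)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(series)) \
            / sum((x - x_mean) ** 2 for x in range(n))
    return slope, y_mean - slope * x_mean

def forecast(series, steps_ahead):
    """Extrapolate the fitted line `steps_ahead` samples past the end."""
    slope, intercept = linear_trend(series)
    return intercept + slope * (len(series) - 1 + steps_ahead)

util = [40, 42, 41, 44, 45, 47, 46, 49]  # weekly utilization samples, percent
print(round(forecast(util, 4), 1))       # projected utilization a month out
```

A real deployment would use more robust models, but even this simple fit answers the business question in the text: Is this link trending toward an upgrade, or not?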
Visibility allows for greater knowledge to make clear and profitable decisions today and well into the future.