Benefits of Virtualization

Friday, June 3, 2016 @ 01:06 PM gHale


Visibility into Network Allows for Cost Savings, Reduced Downtime
A manufacturing executive’s goals are simple in the end: Increase profits while reducing costs. Of course, reaching those goals is incredibly complex and involves a multitude of steps and procedures.

One of those steps to cut costs and improve productivity is moving to a virtual environment. Growth in virtualization in manufacturing is continuing as more end users are taking advantage of the cost benefits it brings to the table.

But in a tight market with a reduced workforce, it is even more imperative to keep a weather eye on the network to make sure it is running properly and no outsiders are peering in, trying to gather, or steal, vital information.

The goal is, of course, to keep the network up and running by eliminating any unplanned downtime. That is where network monitoring comes into play as a strong tool to alert end users and keep them aware of nuances and changes going on in a network. In short, increase network visibility.

Virtualization is growing in all industries, especially manufacturing, and the numbers seem to back that up: One industry snapshot says the global storage virtualization market will enjoy a compound annual growth rate of more than 24 percent from 2015 to 2019, according to Technavio’s market research.

As a little background on the benefits of virtualization from a hardware perspective, virtualization makes it possible to run more applications on the same hardware, which translates into cost savings. If you buy fewer servers, you incur lower capital expenditures and maintenance costs.

Advantages to Virtualization
— Reduced deployment and upgrade costs
— Reduced impact and frequency of OS and hardware upgrades
— Hardware reduction
— Improved availability (reduced downtime via hot standby)
— Easy snapshots in a virtualized environment, resulting in faster recovery

In addition, virtual machines can be centrally managed and monitored, allowing a manufacturer to more easily achieve greater process consistency across the enterprise. Benefits include ease of continuous process improvement, greater agility, and less training burden as employees transfer, get promoted, retire, or leave the company.

Manufacturing software such as a Manufacturing Execution System (MES) has a long lifecycle, given the upfront time and cost to implement as well as the ongoing training required. Oftentimes, manufacturers avoid making changes to their software configurations as a way to reduce risk. By separating software from hardware updates, a virtual IT environment can ease the management lifecycle of software and OS updates. Hardware purchases can also occur on a regular or scheduled basis, resulting in greater consistency in system specifications.

Virtualization: Greater Cost Savings
As mentioned, virtualization is definitely growing on the industrial, or OT, side. Let’s face it: Automation’s gains over the past decade have come from the ability to connect business systems to the plant floor, drive factories based on orders received, and collect data out of the plant to analyze and improve performance. Knowing all that, end users are deriving great cost savings by virtualizing PCs onto fewer physical servers.

Virtualization Brings Savings
Whirlpool Corporation knew it had to go virtual. It just made sense.

After a merger with Maytag, officials discussed how virtualization would ease managing multiple computer systems.

If a network of two physical servers hosts four or more virtual servers, along with management tools, the network administrator can easily balance the load between the two physical servers. The administrator could cluster the virtual servers on one physical server so the other could be taken down for maintenance, and roll out new virtual workstations or servers in half the time it would take with physical equipment.

With virtualized servers in place, a business can become more agile while its technology is streamlined. Virtualization helped Whirlpool’s IT infrastructure become more manageable after the Maytag merger.

In the merger, Whirlpool moved from Maytag’s old data center in Newton, Iowa, to Whirlpool’s location in Kalamazoo, Michigan. Whirlpool had VMware Converter in place, which converts physical servers into virtual machines. The tool allowed a seamless move of 52 systems and 13 terabytes of Maytag’s data to Kalamazoo, saving around $2 million in migration costs. The move allowed Whirlpool to adopt a virtualization-first policy while spending less time managing older physical servers.

The top benefit is cost savings, and another is that manufacturers protect themselves against hardware failure. Downtime is lower because you can build clusters of PCs that add redundancy you would not have if each application or server were running on its own dedicated piece of hardware.

Along those lines, as manufacturers increase their network dependency, they add more network-connected devices, all of which will need a monitoring tool.

In a complex, cloud-style cluster of virtual machine servers, there is a need for monitoring and configuration tools.

CapEx-OpEx Spending Down
Virtualizing servers in a manufacturing environment does return considerable cost savings in terms of the amount of equipment the user needs to purchase. Capital expenditures (CapEx) drop when moving to a virtual environment, and operational expenditures (OpEx) fall over time as well: The less hardware bought up front, the less there is to replace in three years when the warranties run out.

By going the virtualization route, a manufacturer can buy two physical servers and run five or six virtual servers on each. If one physical server fails, the five or six virtual servers on the other machine continue to run, which reduces downtime in a failure for a business with mission-critical applications. That reduced downtime can lead to an increase in productivity, which leads to making more product, which could mean higher profitability.

Another aspect, in an industry always looking to squeeze more and more out of its processes, is that virtualization can also reduce staffing costs. Traditionally, a manufacturer may have internal IT staff, but it would need less staff to manage a platform that has been virtualized. In addition, a staff that works on a virtualized platform has a cross-pollinated skill set: Rather than one person focused on the network and another on the servers, there are simply people skilled on the virtualized platform.

One more aspect often overlooked is that virtualization allows a manufacturer to scale up or down according to market demands.

Monitoring More Interfaces
Along the lines of scalability, there are tools that can monitor up to 600,000 interfaces from a physical server and there are some right now monitoring up to 900,000 interfaces from a virtual server.

While that may sound like quite a bit, it is possible to treat each interface separately. That allows the user to pull metrics independently of the device, run reports, and show metrics around how much traffic is occurring on an interface, how many errors are occurring, and whether the interface has a connection or not. Also, one benefit of treating each interface separately is that, when troubleshooting a network, it is easier to find exactly which port or physical connection to go to in order to find a faulty cable or a device that has a problem.
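A minimal sketch of what treating each interface separately can look like in practice, assuming the monitoring tool already exposes per-interface counters for traffic, errors, and link state; the data structure, field names, and error threshold below are hypothetical and not taken from any specific product.

from dataclasses import dataclass

@dataclass
class InterfaceSample:
    device: str            # e.g. "Router One"
    name: str              # e.g. "Interface Two"
    rx_bits_per_sec: float
    tx_bits_per_sec: float
    error_count: int       # errors counted since the previous poll
    link_up: bool

def flag_problem_interfaces(samples, error_threshold=10):
    """Return per-interface findings so a technician knows exactly
    which port or physical connection to check."""
    findings = []
    for s in samples:
        if not s.link_up:
            findings.append((s.device, s.name, "no link - check cable or device"))
        elif s.error_count > error_threshold:
            findings.append((s.device, s.name, "errors climbing - possible cable fault"))
    return findings

# Example: two interfaces polled from the same device
samples = [
    InterfaceSample("Router One", "Interface One", 12e6, 8e6, 0, True),
    InterfaceSample("Router One", "Interface Two", 0.0, 0.0, 42, True),
]
for device, name, note in flag_problem_interfaces(samples):
    print(f"{device} / {name}: {note}")

Because each finding carries the device and interface name, the report points straight at the port to check rather than just saying something is wrong somewhere on the network.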

Network monitoring is a solid tool, but its mission is to report on what it sees from the interface’s perspective. The interface will say whether it is suffering from a physical fault, potentially a cable fault or a misconfiguration, or whether there is too much traffic going across the interface at the time, which could be an indication someone is using the network inappropriately.

It also allows the user to create alerts across any metric. As an example, it would be possible to write a custom alert that goes to the administrator if there is a potential cable fault; the alert could say there is a potential cable fault on Router One, Interface Two. It can be smart enough to give that kind of feedback. It is also possible to bring together multiple metrics from an interface, look at bandwidth and errors at the same time, and send them in one email. The administrator could then look at it and say it does not look like a bandwidth issue but rather a fault with the device, because packets are being dropped on the interface.
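As a rough illustration of combining multiple interface metrics into one notification, the sketch below checks both bandwidth use and packet drops and builds a single message; the function name, thresholds, and metric values are assumptions for the example, not any particular monitoring product’s alerting API.

def build_interface_alert(device, interface, utilization_pct, drops_per_min,
                          util_limit=80.0, drop_limit=100):
    """Combine bandwidth and error metrics into one alert body."""
    lines = []
    if utilization_pct > util_limit:
        lines.append(f"Bandwidth high: {utilization_pct:.0f}% of link capacity")
    if drops_per_min > drop_limit:
        lines.append(f"Packets dropped: {drops_per_min}/min - possible cable or device fault")
    if not lines:
        return None  # nothing to report for this interface
    subject = f"Potential fault on {device}, {interface}"
    return subject, "\n".join(lines)

alert = build_interface_alert("Router One", "Interface Two",
                              utilization_pct=35.0, drops_per_min=250)
if alert:
    subject, body = alert
    print(subject)
    print(body)  # low bandwidth plus high drops points to a device fault, not congestion

Seeing both numbers side by side in one email is what lets the administrator conclude the problem is a fault rather than a bandwidth issue.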

Setting a Baseline
As mentioned before, the issue is all about knowing the network and understanding what is going on. That can happen once the user develops a baseline of what the network should look like. Then they can identify discrepancies.

By running the network monitoring tool over time, it is possible to determine average usage. If something unusual happened on the network, the administrator would get a report saying this interface normally sees 30 megs of traffic. Knowing the baseline, it would then be possible to set an alert that sends a notification if the traffic gets to 50 or 60 megs.
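A minimal sketch of how such a baseline-driven alert could work, assuming traffic samples have been collected over time; the averaging approach and the roughly-double-the-baseline factor are illustrative choices, not fixed rules from any product.

from statistics import mean

def baseline_alert(history_mbps, current_mbps, factor=2.0):
    """Compare current traffic with the historical average for an interface.

    history_mbps: past samples for this interface (e.g. weeks of polling data)
    factor: alert when traffic exceeds the baseline by this multiple
    """
    baseline = mean(history_mbps)
    threshold = baseline * factor
    if current_mbps > threshold:
        return (f"Interface normally sees ~{baseline:.0f} megs of traffic; "
                f"currently at {current_mbps:.0f} megs (threshold {threshold:.0f})")
    return None

history = [28, 31, 30, 29, 32, 30]   # illustrative samples, roughly a 30-meg baseline
message = baseline_alert(history, current_mbps=61)
if message:
    print(message)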

The goal is to keep the history as long as the user wants. A baseline history that goes back years can also help in designing the network of the future. In doing that, you could say, “last year we were using 30 megs, but in the last three months we have been at 50 megs.” That may not trigger any alerts, but it could allow the manufacturer to do capacity planning and say, “if we keep growing at this rate, we are going to have to replace our equipment to handle the new bandwidth requirement in this area of the network.”

Defense in Depth Tool
Whether it is a virtual environment or a traditional physical network, in today’s Internet-connected manufacturing environment network monitoring becomes another strong tool in a solid defense-in-depth program.

From a security perspective, the goal is to ensure the network stays up as much as possible, which minimizes downtime and maximizes operational return.

The biggest risk on a network is something that impacts the business, like a distributed denial of service (DDoS) attack or someone trying to find an exploit by running scanners across the network. By monitoring NetFlow traffic, it is possible to report on unusual activity on unknown ports and provide that information in real time as it is happening. NetFlow is a feature on Cisco routers that provides the ability to collect IP network traffic as it enters or exits an interface.
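As a hedged illustration of how flow data can surface scanning behavior, the sketch below assumes flow records have already been exported and parsed into simple (source, destination, port) tuples; the record format and the many-distinct-ports threshold are assumptions for the example, not a description of any vendor’s NetFlow collector.

from collections import defaultdict

def detect_port_scans(flows, port_threshold=100):
    """Flag sources that touch an unusually high number of distinct ports,
    a common signature of someone running scanners across the network.

    flows: iterable of (src_ip, dst_ip, dst_port) tuples parsed from flow records
    """
    ports_by_source = defaultdict(set)
    for src, dst, port in flows:
        ports_by_source[src].add((dst, port))
    return {src: len(targets) for src, targets in ports_by_source.items()
            if len(targets) >= port_threshold}

# Illustrative flows: one host probing many ports on a single destination
flows = [("10.0.0.9", "10.0.1.5", p) for p in range(1, 200)]
flows += [("10.0.0.2", "10.0.1.5", 443)]
for src, count in detect_port_scans(flows).items():
    print(f"{src} touched {count} distinct destination/port pairs - investigate")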

Because it is possible to report on things connected or disconnected on the network, if mission-critical connections go down, the network monitoring tool can alert the user the second it receives an outage notice from a device. The tool can inform the administrator of a fault before they realize it has happened on the network.

A case in point: If the administrator is not aware of what is occurring on the network, he or she can go back, look at the history of the network, and understand the baseline. The same goes for unusual traffic.

It is possible to classify traffic based on defined rules: Say port 80 and port 443 are normal protocols on the network and classify them as web traffic, put port 3389 for remote desktop connections in a different category as a normal business application, and consider anything outside of that unknown. The alert can then say that if more than 20 percent of network traffic is unknown at any time, the administrator should know about it so they can start to investigate. That type of alert will tell them about things they were not prepared for or even looking for, because it will end up classified in that unknown category.
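A minimal sketch of that classification-and-threshold idea: The port-to-category mapping and the 20 percent figure come straight from the text above, while the byte counts, data layout, and function name are hypothetical.

# Classify traffic by destination port and alert when "unknown" grows too large.
KNOWN_PORTS = {
    80: "web",
    443: "web",
    3389: "business (remote desktop)",
}

def unknown_traffic_share(flows):
    """flows: iterable of (dst_port, byte_count) pairs from the monitoring tool."""
    total = 0
    unknown = 0
    for port, nbytes in flows:
        total += nbytes
        if port not in KNOWN_PORTS:
            unknown += nbytes
    return (unknown / total) if total else 0.0

flows = [(80, 500), (443, 150), (3389, 50), (8081, 200), (6667, 100)]
share = unknown_traffic_share(flows)
if share > 0.20:
    print(f"Unknown traffic is {share:.0%} of the total - investigate")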

OT Growth Coming
Network monitoring has been a staple in the IT industry for a decade or so, and the manufacturing automation industry is just now starting to pick up on the benefits that visibility brings.

Right now, OT networks tend to be a lot smaller than IT networks, but that is in the process of changing, especially with the looming shift to the Industrial Internet of Things (IIoT). When that happens, and there are industry pundits saying it will happen en masse sooner rather than later, the number of network-connected devices is going to mushroom. Most likely, within two years network monitoring will be as important in this industry as it already is in IT.

It seems pretty simple: A boost in visibility will allow everyone from engineers to C-level leaders to make informed decisions in the new era of manufacturing.