Intelligent Networks Need More Visibility

Thursday Nov 16th 2017 by Arthur Cole

Eventually, the enterprise may move toward data-driven networks that are managed by AI, and those networks will need better monitoring.

Automating network infrastructure requires a lot of trust: trust in the systems you’ve deployed, trust in the policies you’ve established, and trust in your ability to reassert manual control should things go wrong. But just as nations employ the “trust, but verify” mantra when dealing with critical issues, so too should the enterprise when it comes to critical applications and services.

Verifying network performance, however, has grown a lot more complicated in the past decade. The advent of software-defined architectures and scale-out cloud and IoT infrastructure, as well as the speed at which workflows and virtual network deployments take place these days, makes it all the more imperative that organizations adopt increasingly intelligent management stacks. So in a way, intelligence tends to feed off of itself. The smarter our systems and devices become, the smarter the network must be in order to maintain acceptable service levels.

This is causing some experts to look past the software-defined network (SDN) and even the emerging intent-based network (IBN) toward the data-driven network. In the words of networking consultant Terry Slattery, this is when big data gathered from multiple points throughout the network is collected in real time and run through an external analytics engine to provide insight into everything from security and traffic management to optimization and architectural development. A key step in achieving this level of performance is replacing the already outdated Simple Network Management Protocol (SNMP) with newer solutions like the Advanced Message Queuing Protocol or various message-oriented middleware stacks like RabbitMQ and ZeroMQ. In this way, the enterprise gains the ability to publish data streams to which users can subscribe (a technique known as pub-sub) and use to instrument their own network environments.
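To make the pub-sub idea concrete, here is a minimal sketch using the pyzmq bindings for ZeroMQ: one process publishes telemetry samples under a topic, and any number of analytics consumers subscribe to the topics they care about. The topic name, device names and metric fields are illustrative placeholders, not part of any vendor's schema.

```python
# Minimal publish/subscribe telemetry sketch using pyzmq (illustrative only).
import json
import time
import zmq

def run_publisher(endpoint="tcp://*:5556"):
    """A collector that streams interface counters as JSON messages."""
    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind(endpoint)
    while True:
        sample = {"device": "edge-sw-01", "ifname": "eth0",
                  "rx_bps": 128_000, "tx_bps": 96_000, "ts": time.time()}
        # Prefix each message with a topic so subscribers can filter on it.
        pub.send_multipart([b"telemetry.interface", json.dumps(sample).encode()])
        time.sleep(1)

def run_subscriber(endpoint="tcp://localhost:5556"):
    """An analytics engine that consumes only the topics it subscribes to."""
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect(endpoint)
    sub.setsockopt(zmq.SUBSCRIBE, b"telemetry.")  # subscribe by topic prefix
    while True:
        topic, payload = sub.recv_multipart()
        sample = json.loads(payload)
        print(topic.decode(), sample["device"], sample["rx_bps"])
```

The same shape applies to an AMQP broker such as RabbitMQ; only the transport and exchange/queue plumbing change, while the publish-and-subscribe relationship between collectors and analytics stays the same.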

Companies like Cisco, Arista and Veriflow are already implementing remote data collectors in their networking solutions as a means to move beyond mere traffic monitoring to enable deep-dive analysis of multiple operating metrics. Cisco’s Tomer Dichterman says the main difference between modern automation and that of the past is that the scale and complexity of the data ecosystem have grown so vast that the old methods of configuring the command line interface (CLI) and deploying Tool Command Language (TCL) scripts can no longer cope. Instead, the enterprise must embrace automation at the Application Programming Interface (API) level, which offers the ability not only to craft highly customizable networks but also to implement automatic detection and even pre-emptive correction capabilities to ensure optimal connectivity at all times.
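As a rough illustration of what API-level automation looks like compared with pasting CLI commands, the sketch below polls a controller's REST API for interface health and administratively shuts down a flapping port. The base URL, endpoints, token and JSON fields are hypothetical placeholders; real controllers (Cisco DNA Center, Arista eAPI and so on) each expose their own schemas.

```python
# Hypothetical API-driven remediation: find flapping interfaces and shut them down.
# Endpoints, fields and credentials are placeholders, not a real vendor API.
import requests

BASE = "https://controller.example.net/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def flapping_interfaces(device_id, max_transitions=5):
    """Return interfaces whose link state changed too often in the last hour."""
    resp = requests.get(f"{BASE}/devices/{device_id}/interfaces",
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return [i for i in resp.json()["interfaces"]
            if i["link_transitions_last_hour"] > max_transitions]

def shut_interface(device_id, ifname):
    """Set the interface administratively down via the API."""
    resp = requests.patch(f"{BASE}/devices/{device_id}/interfaces/{ifname}",
                          headers=HEADERS, json={"admin_state": "down"},
                          timeout=10)
    resp.raise_for_status()

for iface in flapping_interfaces("edge-sw-01"):
    shut_interface("edge-sw-01", iface["name"])
```

The point is less the specific calls than the fact that detection and correction become ordinary programs that can run continuously, something a library of hand-maintained TCL scripts struggles to match at scale.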

But how well can technologies like machine learning actually deal with the challenges of modern network management? If the aim is to improve on detection-prevention-analysis-response (DPAR) models, the outlook is pretty good, according to Leon Adato and Destiny Bertucci of SolarWinds, but only if a few best practices are employed. For one thing, intelligent automation stacks require broad visibility across the entire IT spectrum — everything from bandwidth consumption and disk array performance to database actions and web server connectivity. As well, the human operators of these systems (yes, they will still be necessary) will need more training in the data sciences and the DevOps model of IT management.
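The "detection" step in a DPAR-style loop can start very simply. The toy sketch below flags bandwidth samples that sit far outside a rolling baseline of recent history; the window size and threshold are arbitrary and chosen purely for illustration, not drawn from any SolarWinds product.

```python
# Toy detection step: flag bandwidth samples far outside the rolling baseline.
from collections import deque
from statistics import mean, stdev

class BandwidthDetector:
    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)  # recent samples, e.g. bits/sec
        self.threshold = threshold           # tolerated deviations from the mean

    def observe(self, sample):
        """Return True if the sample looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(sample - mu) > self.threshold * sigma:
                anomalous = True
        self.history.append(sample)
        return anomalous
```

Production systems replace the statistics with trained models and feed in far more signals, but the requirement Adato and Bertucci stress holds either way: the detector is only as good as the breadth and quality of the metrics it sees.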

Of course, new platforms are at the ready to streamline the deployment of network intelligence into legacy infrastructure. ThousandEyes has expanded its Path Visualization system with a module called Device Layer, which strives to boost application and service performance by adding device discovery and health data to network management workflows. The aim is to help IT teams identify root causes rapidly by mapping network topologies and analyzing detailed interface metrics. In this way, organizations can control scaled-out networks from a single point, even as those networks extend into the cloud and beyond.
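The article doesn't detail how Device Layer works internally, but the general shape of topology-driven root-cause isolation can be sketched as follows: walk the dependency graph outward from a degraded service and collect the devices whose interfaces report errors. The topology and counters below are invented for illustration only.

```python
# Rough sketch of topology-driven root-cause isolation (invented data).
topology = {                 # service/device -> upstream devices it depends on
    "web-app": ["lb-01"],
    "lb-01": ["core-sw-01"],
    "core-sw-01": ["edge-rtr-01"],
    "edge-rtr-01": [],
}

interface_errors = {         # per-device error counters from the collectors
    "lb-01": 0,
    "core-sw-01": 0,
    "edge-rtr-01": 4_212,
}

def likely_root_causes(service):
    """Breadth-first walk from the affected service toward devices with errors."""
    suspects, queue = [], list(topology.get(service, []))
    while queue:
        device = queue.pop(0)
        if interface_errors.get(device, 0) > 0:
            suspects.append(device)
        queue.extend(topology.get(device, []))
    return suspects

print(likely_root_causes("web-app"))   # -> ['edge-rtr-01']
```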

One of the best things about intelligent networking is that even though it represents a paradigm shift in IT management, it can be implemented on legacy infrastructure relatively easily. Once it has learned how to improve today’s environment, it has the capacity to change processes on a more fundamental level going forward, all while adapting to new topologies, new service requirements and new business models.

Once these systems are empowered with the visibility tools to acquire the data needed to assess network operations, the biggest challenge won't be managing the network but figuring out how to leverage it for the biggest gain in data productivity.

Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.
