As the old saying goes, there is more than one way to skin a cat. Bad analogies aside, we all know there’s more than one way to get a desired result. This is true in almost every endeavor we undertake, and for telecom professionals, gaining an understanding of a customer’s satisfaction or dissatisfaction with a service’s performance is no exception. There are efficient, cost-effective ways to get information distilled and presented, and there are less evolved ways. There is, however, one truism: “The customer is king” (or queen, princess, or prince), and an in-depth understanding of the “customer experience” is key to keeping the lifeblood of a service provider flowing: engaged customers.
So, what are we talking about when we say customer experience? Let’s look at it in the context of network intelligence and service assurance. Network intelligence can stand alone and serve a wide variety of applications, but it can also be part of a workflow, or the starting point for identifying and solving issues: a means to an end in operations, engineering, and planning solutions. Think customer care, where a condensed, intelligently organized dashboard gives support staff and call responders details on the customer’s activities. This could be anything from a quantified score of their experience, to a breakdown of where the subscriber had good and/or bad experiences, to information on their access to the network and services, their ability to download, upload, and reach websites, and which applications they have been using, when, and in what volumes. The Quality of Experience (QoE) is the subscriber’s happiness quotient or score. Getting to a QoE score seems simple enough: you grab data, organize it, keep what you need, discard the rest, and throw it at a user interface. What’s so hard about that?
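To make that concrete, here is a minimal sketch of how per-subscriber indicators might be distilled into a single QoE score. The metric names, normalization, and weights are illustrative assumptions, not a standard or any vendor’s actual formula.

```python
# Hypothetical sketch: boiling per-subscriber quality indicators down to
# one QoE score. Metric names and weights are illustrative, not a standard.

def qoe_score(metrics: dict, weights: dict) -> float:
    """Weighted average of normalized indicators (each in 0..1), scaled to 0..100."""
    total_weight = sum(weights.values())
    return round(100 * sum(metrics[k] * w for k, w in weights.items()) / total_weight, 1)

subscriber = {
    "attach_success": 0.99,    # fraction of successful network attaches
    "download_norm": 0.80,     # throughput normalized against the plan tier
    "upload_norm": 0.70,
    "web_reachability": 0.97,  # fraction of page loads that completed
}
weights = {"attach_success": 3, "download_norm": 2, "upload_norm": 1, "web_reachability": 2}

print(qoe_score(subscriber, weights))
```

The weighting is the interesting design choice: a failed attach hurts the score more than a slow upload, reflecting which experiences customers actually notice.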
Understanding customer behaviors
Let’s dig a bit further. Understanding customer behavior and deriving performance and quality indicators requires information and data. And more and more of that data is encrypted, which makes it gibberish to a protocol stack unless a ciphering key is available, and those keys are not shared in the public domain. So how can we see what a customer is doing if we cannot distinguish what the traffic is? Enter Artificial Intelligence (AI), or Machine Learning (ML) to be more exact.
There are different types of ML: supervised, unsupervised, and reinforcement learning. They differ in how they use data, what state the data is in, and what the desired outcome needs to be. In the example of encrypted network traffic, the ML algorithm does two things. First, it classifies the traffic so the system knows what the data relates to; second, it pulls the relevant aspects out of the data to calculate KPI and QPI statistics. A classification engine will use ML to build a pattern of what an application looks like in the traffic feed; this is called signature matching. Once you have a traffic signature, ML can match it against what it has “learned” from the network feeds and label the traffic. Once labeled, all manner of things can be done, including KPI and QPI measurements, as well as QoE scoring and other CEM and SOC functions. The slicing and dicing of information is a visualization extravaganza.
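The signature-matching idea can be sketched in a few lines: since the payload is encrypted, a supervised model learns from flows whose application is already known and labels new flows by their metadata alone. The features, training data, and nearest-centroid matcher below are simplified illustrations; production classifiers are far more sophisticated.

```python
import math

# Illustrative sketch: labeling encrypted flows from metadata alone.
# Payloads are unreadable, but packet sizes and timing still form a
# recognizable "signature" per application. Here each signature is the
# centroid of known example flows; a new flow gets the nearest label.

def centroid(rows):
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(labeled_flows):
    """labeled_flows: {app_name: [feature_vector, ...]} -> learned signatures."""
    return {app: centroid(rows) for app, rows in labeled_flows.items()}

def classify(signatures, flow):
    """Return the application whose learned signature is nearest to this flow."""
    return min(signatures, key=lambda app: math.dist(signatures[app], flow))

# Features: [mean packet size (bytes), mean inter-arrival gap (ms), up/down ratio]
training = {
    "video_streaming": [[1200, 8, 0.05], [1350, 6, 0.04]],
    "web_browsing":    [[600, 40, 0.30], [550, 55, 0.25]],
    "voip":            [[160, 20, 0.95], [180, 22, 1.00]],
}
signatures = train(training)
print(classify(signatures, [1280, 7, 0.05]))  # big packets, tight timing
```

Once a flow carries a label, the downstream KPI, QPI, and QoE machinery can treat it like any cleartext application.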
This is the business of business intelligence based on network data. It not only helps those in the operational area of a CSP, but can also be used by engineering to better understand growth and capacity requirements, service performance, trends, and a number of other important items. It also helps planning to better prepare the next service or technology evolution by understanding how past technology launches went.
Getting the most out of the data
But there’s more to understanding a customer experience than network data alone; we did say earlier “information and data.” Getting to the root cause of an issue, and identifying those impacted by it, takes some doing. This is where service assurance lives. This layer houses in-depth, sophisticated tools to track, trace, and organize packets and sessions, and to identify all manner of requests, responses, data transport protocols, and inter- and intra-conversations in a sea of network elements and virtual functions. Service assurance tools are fed information from a mediation function, which aggregates, sorts, correlates, extracts, transforms, and loads data from the many sources available: network traffic tapped and groomed through aggregators and Network Packet Brokers, mediated feeds from probes that may monitor legacy traffic, and external inputs from other OSS and BSS systems, like billing information, CRM, and anything else you can think of. An open mediation layer with APIs and wide-ranging support is key to enriching the service assurance layer.
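The extract-transform-load flow of such a mediation function can be sketched as below. The feed names, field names, and schema are hypothetical stand-ins for whatever a real mediation layer would normalize.

```python
# Illustrative sketch of a mediation function: extract records from several
# heterogeneous feeds, transform them into one common schema, and load the
# result keyed by subscriber for the assurance layer. Feeds and fields are
# hypothetical.

def extract(feeds):
    for feed_name, records in feeds.items():
        for rec in records:
            yield feed_name, rec

def transform(feed_name, rec):
    # Normalize each source's fields to a shared shape.
    if feed_name == "probe":
        return {"imsi": rec["subscriber"], "bytes": rec["octets"], "src": feed_name}
    if feed_name == "billing":
        return {"imsi": rec["imsi"], "plan": rec["plan"], "src": feed_name}
    return None  # unknown source: dropped here; real systems would quarantine it

def load(records):
    store = {}
    for rec in records:
        store.setdefault(rec["imsi"], []).append(rec)
    return store

feeds = {
    "probe":   [{"subscriber": "001-01-12345", "octets": 52_431}],
    "billing": [{"imsi": "001-01-12345", "plan": "unlimited-5g"}],
}
normalized = [transform(f, r) for f, r in extract(feeds)]
enriched = load(rec for rec in normalized if rec)
print(len(enriched["001-01-12345"]))  # both sources land under one subscriber key
```

The point of the shared key is that probe traffic and billing context, which arrive from entirely different systems, end up attached to the same subscriber.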
When the industry says CDR and xDR, it usually means a record of activity relating to a user’s session: not just network-exposed packets organized in some random manner, but a “correlated” record built around an identifier and enriched with as much additional information as the system can muster. This is how you gain efficiency and drive down the mean time to resolve issues. It is also the basic building block of all those applications we discussed with respect to network intelligence.
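A toy version of that correlation step looks like this: scattered events sharing a session identifier are stitched into one record, then enriched from an external source. The interface names, event names, and CRM fields are illustrative only.

```python
from collections import defaultdict

# Illustrative sketch: correlating scattered session events into one
# enriched xDR keyed on a common identifier. Interfaces, events, and the
# CRM enrichment fields are hypothetical.

events = [
    {"session_id": "s42", "iface": "N11", "event": "session_create",  "t": 0.0},
    {"session_id": "s42", "iface": "N3",  "event": "first_packet",    "t": 0.12},
    {"session_id": "s42", "iface": "N11", "event": "session_release", "t": 9.8},
]
crm = {"s42": {"msisdn": "+15550100", "plan": "unlimited-5g"}}  # external source

def build_xdrs(events, crm):
    xdrs = defaultdict(lambda: {"events": []})
    for ev in sorted(events, key=lambda e: e["t"]):
        rec = xdrs[ev["session_id"]]
        rec["events"].append((ev["t"], ev["iface"], ev["event"]))
        rec["duration"] = ev["t"] - rec["events"][0][0]
    for sid, rec in xdrs.items():
        rec.update(crm.get(sid, {}))  # enrich with non-network context
    return dict(xdrs)

xdr = build_xdrs(events, crm)["s42"]
print(xdr["duration"], xdr["plan"])
```

The enrichment is what turns a packet trace into something a care agent can use: the record now says who the subscriber is and what they pay for, not just what their packets did.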
Getting notifications of events from network elements is a good source of additional information. In 5G, we see a container-based Service-Based Architecture whose service-based interfaces (SBI) make notifications and event subscriptions from the virtual functions available. Using this information to further enrich the CDR and xDR affords an even deeper understanding of what is happening to the user.
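The subscribe/notify pattern behind those event subscriptions can be sketched in-process. This is a deliberately simplified stand-in, not the real 3GPP REST API: a consumer registers interest in an event type, and the network function pushes matching notifications to it.

```python
# Illustrative sketch of the subscribe/notify pattern used by 5G
# service-based interfaces: a consumer registers a callback for an event
# type and the network function pushes notifications to it. This is an
# in-process stand-in, not a real 3GPP API.

class NetworkFunction:
    def __init__(self):
        self._subs = {}      # event type -> list of subscriber callbacks
        self._next_id = 0

    def subscribe(self, event_type, callback):
        self._next_id += 1
        self._subs.setdefault(event_type, []).append(callback)
        return f"sub-{self._next_id}"  # subscription id returned to the consumer

    def emit(self, event_type, payload):
        for cb in self._subs.get(event_type, []):
            cb(payload)

xdr_extra = []               # extra context to fold into the session's xDR
smf = NetworkFunction()      # stand-in for, e.g., a session management function
smf.subscribe("pdu_session_release", xdr_extra.append)
smf.emit("pdu_session_release", {"session_id": "s42", "cause": "user_requested"})
print(xdr_extra[0]["cause"])
```

In a real deployment the callback would be an HTTP endpoint and the payload a 3GPP-defined notification body, but the shape of the exchange is the same: subscribe once, receive context continuously, enrich the xDR as it arrives.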
The good kind of Service Assurance
So are all Network Intelligence and Service Assurance systems built in an open way, with all the latest practices for building relationships quickly and integrating software elements together? Well, yes and no.
The least desirable ones are not. They use proprietary hardware and closed software that lock the customer into costly upgrades and forklift changes that delay deployments, hamper innovation, and stifle progress.
The good ones, those offering the greatest benefits, are built to do all the things we discussed earlier. Built on containers and microservices, beyond simple virtualized functions, they use fewer resources and deliver more information, faster and more intelligently, than their classmates. They are vendor agnostic and provide end-to-end views of sessions across a multi-vendor environment (remember the sea of network elements and functions; these are not all from the same vendor).
Operationally the best solutions allow for non-stop integration and deployment cycles with no service interruptions. They upgrade easily and automatically. They are based on a common infrastructure that simplifies hardware and software management. This open environment means that even when they are not the incumbent solution, they can easily include legacy system information and make the most of previous investments.
RADCOM Service Assurance is a fully automated and containerized solution for 5G. By taking data from multiple sources and correlating them to deliver smart insights, RADCOM Service Assurance delivers visibility from the RAN to the core. To learn more about RADCOMizing your network, click here.
This blog post may contain forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. To read more about forward-looking statements please click here.