IT leaders recognize that migrating applications to Hybrid Multi-Cloud (HMC) environments is a critical aspect of digital transformation, delivering tangible operational benefits such as increased business agility, scalability on-demand, and OPEX savings. However, this migration requires multi-year planning and coordination, underpinned by a Telemetry and Network Automation solution that insulates the business from changes in the underlying technology to facilitate a smooth transition from private data centers to HMC environments.
Anuta ATOM is a vendor-agnostic, multi-domain, multi-cloud network automation solution that includes low-code automation, real-time analytics, and closed-loop automation. In this webinar replay, customers of Anuta Networks share best practices for building smart, predictable, and responsive networks for your current and future infrastructure needs.
In this on-demand webinar you will learn:
- How to build massive-scale telemetry and analytics using model-driven telemetry
- How to transform into a proactive network operations team with closed-loop automation
- How to deploy quickly with low-code automation
- How to ensure quality, achieve compliance and drive down operational costs for immediate ROI
- Best practices for multi-domain, multi-cloud orchestration and analytics
Transcript:
Introduction to Hybrid Multi-Cloud
Steve: 00:00 So, for those of you here in the US or North America, good morning; if anybody’s calling in from outside North America, I guess it would be good afternoon or good evening. Welcome to today’s ONUG webinar, titled “Model-Driven Telemetry and Low-Code Automation Power Responsive Networks for Hybrid Multi-Cloud Migration.” It’s a very interesting and relevant topic for the ONUG community. I’m Steve Collins, and I serve as the working group CTO for ONUG, which means I’m very involved in the work going on in several different areas related to defining a set of use cases and supporting requirements for various aspects of hybrid multi-cloud migration. This topic is directly relevant to the issues the ONUG community is dealing with, so it should be an interesting webinar. Let’s move on to the next slide. The agenda is that we’re going to talk about hybrid multi-cloud deployments.
And the good news is we have a couple of Anuta Networks customers online who are going to talk about their experience in this area. You’re going to learn a little bit about what Anuta Networks does, what their solution is all about, and the operational experience the two customers have had with the product. We will have time for Q&A at the end, hopefully about ten minutes or so. If you have questions, feel free to ask them at any time: just go to the Zoom dashboard, go to the Q&A tab, and enter your question. We will track those as they come in, and then we’ll have a session at the end. Let’s move to the next slide. I just want to set the stage here. If we look at hybrid multi-cloud infrastructure, this is the type of system and software we’re all going to be migrating to, whether it’s the service providers or the enterprises; the leading web scalers are already there. And I think everybody recognizes the power of this type of infrastructure and all the benefits that will accrue over time.
But the flip side is that it also introduces a lot of complexity, and that manifests itself in a number of different dimensions. There’s the fundamental issue of scale; the fact that everything is very multi-layer in nature and crosses multiple domains; and a high degree of distribution of workloads across systems and networks. Of course, everything has been disaggregated into various hardware and software components, different planes, and different layers. And the whole thing is very dynamic.
So, there’s a lot of complexity, and that translates into cost. That’s what this webinar is all about: how can we tackle this complexity? One of the real fundamental keys here is automation. Next slide. I think at the end of the day, we’re all going to be looking for a model for hybrid multi-cloud automation that follows what I have in this diagram. We’ve got systems, we’re going to instrument them, we’re going to pull metrics from that instrumentation using telemetry mechanisms, collect them, put them into a repository, and perform analytics on that data, hopefully in real time, so that we can monitor the state of the infrastructure.
04:09 And if there’s a change to that state, we can feed that into an orchestration system, and the orchestration system can react appropriately, which usually translates into some type of control or reconfiguration of the underlying infrastructure. This is the model we want to work towards, and I think we’ll work towards it in different aspects of the infrastructure, whether at the application level, the network level, the systems or ITOps level, and even the security level. Ultimately, this model is going to be pervasive across the entire infrastructure. With that stage-setting, all of this is directly relevant to what we’re going to be discussing on this call. Let’s move on to the next slide. I’d like to introduce the panel today: we have Peter Juffernholz, VP of Virtualized Network Services at Tata Communications; Matt Wilson, Director of Network Engineering at Neustar; and Praveen Vengalam, VP of Engineering at Anuta Networks. The format is that both Peter and Matt will talk about the challenges they faced in their environments, what drove them to look at automation solutions, and what their experience has been and what they’ve learned deploying Anuta Networks; Praveen will participate in that discussion as well. Let’s move on to the next slide. At this point, I’m going to hand it over to Peter and let him describe what he’s done at Tata Communications. Peter?
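To make the closed-loop model in this diagram concrete, here is a minimal sketch in Python of the collect-analyze-act cycle Steve describes, assuming a simulated metric source; the function names (fetch_metric, reconfigure) and the CPU threshold are illustrative stand-ins, not any vendor’s actual API.

```python
import random
import time

BASELINE_CPU = 80.0  # percent; an assumed threshold for this sketch


def fetch_metric(device: str) -> float:
    """Stand-in for a telemetry collector (SNMP, gNMI, syslog-derived...)."""
    return random.uniform(50.0, 100.0)  # simulated; wire to a real source


def reconfigure(device: str, reason: str) -> None:
    """Stand-in for an orchestration action, e.g. steering traffic away."""
    print(f"orchestrator: remediating {device} because {reason}")


def control_loop(devices: list[str], cycles: int = 3, interval_s: float = 1.0) -> None:
    for _ in range(cycles):
        for device in devices:
            cpu = fetch_metric(device)      # telemetry into the repository
            if cpu > BASELINE_CPU:          # analytics: detect a state change
                reconfigure(device, f"cpu {cpu:.1f}% > {BASELINE_CPU}%")
        time.sleep(interval_s)              # in practice, event-driven


control_loop(["edge-1", "edge-2"])
```

In a real deployment the fetch step would be a telemetry subscription rather than a poll, but the shape of the loop, detecting a state change and handing it to an orchestration hook, is the same.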
Tata Communications – Business Value and Network Automation Challenges
Peter: 05:56 Yes, thank you, Steve. Hello, this is Peter Juffernholz at Tata Communications, and for those who are not familiar with Tata Communications, I wanted to give you a very quick overview. These days we describe ourselves as a digital transformation provider for enterprise services, against the backdrop of all those hybrid cloud and connectivity moves highlighted earlier and the changes in our industry. Our target is to assist our customers in their digital transformation journey across various aspects, everything from network through security and UCC services, etc., that customers require to stay relevant and to have a competitive edge: deploying technology that is agile and future-proof, and de-risking the onboarding of new technology for customers as much as we can. We have a history of being an internet service provider and MPLS service provider for many years, and have done so on a global scale.
We are an Indian company; however, we have significant assets outside of India, employ people across the globe, and have a global customer base, with slightly more revenue actually achieved outside of India than in India. We have serviced global MNCs for a very long time, 15 years or so, with everything from connectivity to SD-WAN services, security, etc. More recently, our engagement with Anuta goes back almost four years, and I think the deployment has now been in place for three years. We onboarded Anuta to help us with hybrid managed services in general and SD-WAN in particular, specifically our SD-WAN Prime service, which allows for path selection and congestion management options on standard Cisco routers. That was a project we built in parallel to Cisco IWAN, with the advantage of not requiring any overlay tunnels on the MPLS side and limiting the overhead transmitted on the network to basically BGP communities and some other protocols that are not very heavy.
Since then, we have onboarded additional SDN technologies: we have a Versa offering in the market, and we are also in the process of launching a Cisco-based service for the Viptela variant of the product. Now, the key issue that drove automation for us in those days was to cut down on delivery times and to reduce error rates. Delivery times, as in working not just in a multi-network and multi-service environment, but configuring complex scenarios on routers: we had up to, I think, four hundred individual CLI commands that had to go into routers to get them ready for our service. So the time to deliver service, and the possibility of errors, was relatively high.
Yes, you can script a lot, but mastery of those scripts was limited to the people who actually wrote them, and it still depended on manually injecting certain information for the proper configuration. That was the onset of the discussions we had on how we could automate this and build from there with Anuta’s help. We went through the diligence process, looked at different options, and did a market survey of vendors in that field. Anuta was ultimately the choice we came to because of the flexibility of their platform, and because Anuta as a company, and the team we worked with, were very outcome-focused and very skilled in their work. I will speak about that a little bit more, I think, as we go forward.
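To illustrate the kind of templating that replaces those hand-maintained scripts, here is a minimal sketch using the Jinja2 library; the template fragment and site variables are invented for illustration and are far simpler than a real CPE service with hundreds of CLI lines.

```python
from jinja2 import Template  # pip install jinja2

CPE_TEMPLATE = Template("""\
hostname {{ hostname }}
interface {{ wan_if }}
 description WAN uplink
 ip address {{ wan_ip }} {{ wan_mask }}
router bgp {{ asn }}
 neighbor {{ peer_ip }} remote-as {{ peer_asn }}
 neighbor {{ peer_ip }} send-community
""")

# Per-site variables: the only thing an operator has to supply by hand.
site = {
    "hostname": "cpe-site-042",
    "wan_if": "GigabitEthernet0/0",
    "wan_ip": "198.51.100.2",
    "wan_mask": "255.255.255.252",
    "asn": 64512,
    "peer_ip": "198.51.100.1",
    "peer_asn": 64513,
}

print(CPE_TEMPLATE.render(**site))  # repeatable CLI, identical for every site
```

The point is that the per-site knowledge lives in a reviewed data structure rather than in whoever wrote the script, which is exactly the mastery problem Peter describes.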
Steve: 10:58 Over to you, Matt, I think.
Neustar – Challenges in Automating the DDoS Mitigation Network
Matt: Yeah, hi. This is Matt Wilson from Neustar. I suspect a lot of folks haven’t really heard of us; we’re a company with a suite of products that help our customers guide, grow, and protect their service offerings and lines of business. I sit on the protect side of the business, where we offer a suite of services around DDoS protection, which is my expertise, as well as DNS services, both managed authoritative and recursive DNS. If you step back several years, we were in the process of really transforming our DDoS business. We were going from being a small network with just a couple of terabits of bandwidth, about four nodes globally, and fairly straightforward, not overly complex infrastructure, and we made the decision to get ahead of the market. We wanted to really differentiate ourselves; we wanted to grow our network.
So, we went out and built a 14-node DDoS network that is now at 11 terabits per second. In the process, we also drastically changed the architecture, in order to enable us to more rapidly support new product development, new forms of connecting with our customers, and additional features. That was really the goal of the entire project. One of the things we identified very early on was that the scale of this was going to become problematic. It was one thing when you had basically four or five routers to manage; you could script a fair amount, and it really wasn’t overly complex. Fast forward to the new architecture, where we have multiple vendors and platforms, the same across all 14 nodes but with different capabilities and different levels of bandwidth at every single node, and we needed a more effective way to manage it.
And so, for us, I think very similar to Peter, this was very much about being able to provide a standardized set of services, so that we could cut down on human error and deploy very quickly. We looked at a number of solutions; we had scripting as well, and we looked at doing more scripting. One of the things we really wanted was something that could maintain the state of the network. I wanted this entire platform to be something that was watching the network, keeping track of what every single device was supposed to be doing and what its configuration was supposed to look like, and that could give us the difference between what we expected and what was actually there. And as we deploy things, we’re talking about deploying things globally, so we’re talking about having the ability to push changes at a service level.
And when I say at a service level, I mean I want to push a configuration for one customer, or push a brand-new customer out to all 14 nodes simultaneously, have the exact same actions take place, and then be able to verify and double-check that exactly what got put in there is exactly what I expected; and if not, to be able to roll back, at a very detailed level, exactly what happened. We’re constantly making changes in the DDoS world: customers adapting and responding to DDoS attacks, tweaking settings, things like that. We needed the ability to not have to roll back an entire router configuration and then realize it took out things I wanted to keep; I wanted to be able to go back, with transactional records of exactly what got done, and roll back just the transaction I might have made two hours, or two weeks, or two months before. So, we were looking for a solution that would enable us to do that. We looked at all sorts of automation platforms, service automation platforms, things like Ansible Tower, along those lines. For us, they didn’t meet what we were looking for, because, again, I needed something that natively maintained state.
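A minimal sketch of that transactional model, assuming hypothetical device-access helpers (push_lines, remove_lines) that are commented out here; it only shows the bookkeeping that makes per-service rollback possible without restoring a whole router configuration.

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class Transaction:
    service: str
    device: str
    added: list[str]  # the config lines this one change introduced
    txn_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])


LEDGER: dict[str, Transaction] = {}


def apply_change(service: str, device: str, lines: list[str]) -> str:
    # push_lines(device, lines)  # hypothetical: send the config to the device
    txn = Transaction(service, device, lines)
    LEDGER[txn.txn_id] = txn      # transactional record of what got done
    return txn.txn_id


def rollback(txn_id: str) -> None:
    txn = LEDGER.pop(txn_id)
    # remove_lines(txn.device, txn.added)  # hypothetical: undo only this change
    print(f"rolled back {txn.service} on {txn.device}: {txn.added}")


# Push one customer's change, then undo just that transaction later.
txn = apply_change("customer-blue-ddos", "node-07",
                   ["class-map match-any BLUE", " match access-group 150"])
rollback(txn)
```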
At about the same time we were looking and starting our deployment and implementation, Anuta came out with ATOM. So we’ve been in the process of identifying all the use cases on the data side of this, being, very similar to Peter, on the service provider side: having visibility into what’s happening, knowing exactly what’s going on in our infrastructure, and being able to do that within the same platform where we’re grabbing all the pieces of data. Streaming telemetry is in it, and the data really runs the gamut: we have log data, we have streaming telemetry, we have a lot of different types of data coming off our different boxes. Being able to pull all of that into one platform has been really valuable in assisting our automation, and we’re starting to automate some of our responses based on that data.
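As a sketch of what pulling those disparate feeds into one place can look like, here is a minimal normalization layer in Python; the event fields and the sample payloads are invented for illustration.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Event:
    device: str
    source: str   # "syslog" | "telemetry" | "log" ...
    name: str
    value: Any


def from_syslog(line: str) -> Event:
    # e.g. "edge1 %BGP-5-ADJCHANGE neighbor 10.0.0.1 Down"
    device, *rest = line.split()
    return Event(device, "syslog", "message", " ".join(rest))


def from_telemetry(sample: dict) -> Event:
    # e.g. {"node": "edge1", "path": "if/in-octets", "val": 123456}
    return Event(sample["node"], "telemetry", sample["path"], sample["val"])


# Both feeds now produce the same shape, so one responder can key off both.
print(from_syslog("edge1 %BGP-5-ADJCHANGE neighbor 10.0.0.1 Down"))
print(from_telemetry({"node": "edge1", "path": "if/in-octets", "val": 123456}))
```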
Steve: 17:16 Yep, thank you, Matt, that’s an excellent overview, and thank you, Peter, as well. At this point, let’s bring Praveen in. Praveen, why don’t you provide a little more background on Anuta Networks and ATOM, and then we’ll segue into Matt and Peter talking about their deployment experience.
Introduction to Anuta Networks and Anuta ATOM Overview
Praveen: 17:43 Sure, hi, thanks for joining the webinar. This is Praveen, VP of Engineering at Anuta Networks. We’ve been in business helping large enterprise customers, large SPs, and MSPs with their automation and orchestration needs, and the main product we deliver is ATOM. It delivers assurance, telemetry, and orchestration for multi-vendor networks across multiple domains: it could be data center, MPLS, or WAN core; it’s pretty much an open platform that can be deployed in any domain, and it is multi-vendor. As we have seen from the descriptions from both Matt and Peter, one is an MSP and the other is an enterprise, so they place different requirements on the automation platform, and that’s actually what we will discuss on the next slide. This slide summarizes the major capabilities of ATOM: catering to these different domains, with different vendors and multiple touch points, and being able to orchestrate a service at a granular level and also at a higher, service level. We need to provide a lot of mechanisms to our customers where we help them describe what they want, and we can help achieve the automation.
And apart from being able to automate, we also need to ensure that whatever is automated stays deployed. That’s where we observe all the configuration changes happening in the infrastructure, and we can ensure that any accidental changes are rolled back. On top of that, we need to look into some of the performance aspects of the infrastructure, do some baselining, and then take remediation actions. So there are a lot of lifecycle events that have to be taken care of. At the same time, since this automation software is going to be in the main path of the business, it is a main business enabler: the software itself has to be highly resilient, it has to scale, and it has to be open, providing APIs so that we can tie into the different ecosystem partners out there.
And ecosystem integration is fairly important, because we do not expect ATOM to be doing everything out there: we are going to be required to integrate with IP address management solutions, NMS solutions, SDN controllers, and SD-WAN controllers. To that end, it’s very important for the platform to be open; it should be able to seamlessly integrate with multiple different endpoints. All of that becomes imperative for the software.
So, if you see from left to right, the lifecycle starts pretty much with onboarding a device: being able to understand what the device capabilities are, being able to seed that infrastructure element with configuration, and even doing some of the routine maintenance activities, like the image management we provide in ATOM. Then, once the devices are onboarded, we take care of the Day-1 to Day-N activities, like being able to deploy services. Those could be one-time, stateless activities, like an SNMP refresh or a security patch, or they could be stateful scenarios; both Peter and Matt have described services that are more stateful in nature, where they onboard an external customer and push a service, and the service has a life of its own.
In those scenarios, we have a very rich way to declare intent: a YANG model-driven service model that can support updates, deletes, and ongoing maintenance of the service. The services that we deploy in ATOM also have to be available via APIs so that northbound self-service portals can drive them; that’s why we have a very rich API to ATOM. And once these services are deployed, we will constantly be listening to the underlying infrastructure for SNMP traps, syslog, and telemetry, and if there are any anomalies, we can define baselines and emit alerts. From there, the alerts can be tied to remediation actions; in some scenarios the alerts are just emails, and in some scenarios they are posted as notes on a Slack channel.
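Before moving on to remediation, here is a minimal sketch of what driving a YANG-modeled service over a RESTCONF-style REST API can look like, using the requests library; the URL, the YANG module name, and the payload are hypothetical, not ATOM’s actual schema.

```python
import requests

ATOM_URL = "https://atom.example.com/restconf/data"  # hypothetical endpoint

SERVICE = {
    "l3vpn:l3vpn-service": {  # hypothetical YANG module and service model
        "name": "customer-blue",
        "sites": [{"device": "pe1", "vlan": 120, "prefix": "10.1.0.0/24"}],
    }
}

session = requests.Session()
session.headers["Content-Type"] = "application/yang-data+json"

# POST creates the service intent; updates and deletes map to PATCH and
# DELETE on the same resource, which is what gives the service a lifecycle
# of its own that a northbound portal can drive.
resp = session.post(f"{ATOM_URL}/services", json=SERVICE, timeout=30)
resp.raise_for_status()
```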
But eventually, some of the alerts need to trigger a remediation action, where we can go in and invoke a workflow. ATOM has a very rich workflow engine that is BPMN-compliant, and it also supports low-code automation, where we can define the MOPs: say a BGP neighbor is flapping, an interface is flapping, or there is high utilization and I want to steer the traffic to a secondary link. All those kinds of remediation actions can be defined, and ATOM can take care of executing them fairly well. All in all, it’s a highly scalable, open, and customizable platform. And as we saw, there are multiple vendors at play in both customers’ scenarios, and different kinds of endpoints: networking devices, NMS solutions, SD-WAN solutions. We have been fairly successful in catering to diverse deployment options, from a single site to across multiple geographic locations. And that’s where I would like to pause.
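A minimal sketch of that alert-to-remediation mapping, in the spirit of the MOP examples Praveen gives; the action bodies are placeholders, not BPMN workflow definitions.

```python
from typing import Callable

REMEDIATIONS: dict[str, Callable[[dict], None]] = {}


def remediation(alert_type: str):
    """Register a handler for one alert type, MOP-style."""
    def register(fn: Callable[[dict], None]) -> Callable[[dict], None]:
        REMEDIATIONS[alert_type] = fn
        return fn
    return register


@remediation("bgp-neighbor-flap")
def clear_session(alert: dict) -> None:
    print(f"clearing BGP session {alert['neighbor']} on {alert['device']}")


@remediation("link-high-utilization")
def steer_traffic(alert: dict) -> None:
    print(f"steering traffic off {alert['interface']} to the secondary link")


def handle(alert: dict) -> None:
    action = REMEDIATIONS.get(alert["type"])
    if action:
        action(alert)  # invoke the remediation workflow
    else:
        print(f"no MOP for {alert['type']}: alert only (email or Slack)")


handle({"type": "bgp-neighbor-flap", "neighbor": "10.0.0.1", "device": "pe1"})
```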
Steve: 23:33 Yeah. So, with that background on Anuta and ATOM, let’s turn it over to Matt again and let him talk about some of his actual deployment experience.
Neustar – Experience Deploying Anuta ATOM
Matt: 23:46 Yeah, so for us, when we started down this process, one of the things we identified pretty quickly was that we really needed this entire platform to be very fault-tolerant. We’re talking about nodes all over the world, with a fair amount of latency between certain nodes. So we chose a bit of a multi-tiered model, where we had agents in every single node, as well as multiple platforms on the back end, because we needed the redundancy on the back end; we had multiple redundant agents in each node able to talk to those. That way, we had a fair amount of failover resiliency.
One of the things we also realized is that we were engineering some of this a little bit on the fly. So when we first started with Anuta, we stepped back and realized we needed to better identify what we were looking for. Where Praveen talked about YANG modeling, we had to sit down and better understand exactly what each service model needed to look like; we had to understand what a service model was in the first place.
So, we sat down and did a big analysis of our infrastructure: what are the granular steps we take? What are the things we choose to do? We identified the ones most critical for us, in order to enable us to go to market with what we wanted to do, and then we had a big list of future items as well. We prioritized and worked on all those first-priority ones, and Anuta has been great working with us to tweak them as we go. We identified that sometimes our requirements were not the clearest or the most well defined, so we’d sit back and tweak them, using the professional services.
We have been able to fix some of the little nuances as we move forward, and frankly, as we change our services, we have to tweak these things a little bit too. So it’s been a very valuable tool for us. We are also able to use it as an ad hoc tool for the SOC: while we wrote service models for the most critical and important things we knew we needed, we’ve also worked with them on some direct ad hoc tooling, with basic APIs and UIs around access to certain devices, and that has also proven to be really valuable.
And we’re just starting to hit the tip of the iceberg on the telemetry and analytics side as well. But all in all, it’s been a pretty pleasant experience. It’s not without its complexities, as you can imagine anything like this is, but they make it pretty easy to work with, and straight off the bat the deployment was pretty straightforward. We got it testing in our lab, where we have a full lab for everything we do, and we were able to simulate some of those long-distance links and make sure we were handling high-latency connections and failover scenarios. For us, the deployment went well because we were able to identify this stuff very early on and move it forward.
Steve: 27:52 Thanks, Matt. We’ll talk a little bit more about what you learned from this whole experience shortly. Let’s flip it over to Peter and let him talk about his deployment experience next.
Tata Communications – Reducing CPE Onboarding Time with Anuta ATOM
Peter: 28:08 Yeah, certainly. From our requirements and wish list at the onset, it was all about being able to automate the delivery of CPEs and service activation, doing so in greenfield and brownfield scenarios with zero-touch provisioning, back-end OSS integration, and scalability, as we are a service provider with thousands, or tens of thousands I should say, of customer sites deployed. We also needed the ability to verify the work that happened and to discover operational failures and so forth. And as Matt also pointed out, we needed the ability to roll back certain scenarios where failures arise, and to do that in a granular fashion, so that we know what went wrong during a migration or a customer onboarding, with the ability to either stop the transactions or roll them back, on only the devices that failed or on the entire network, depending on how we assess the impact.
So, auto-discovery was also a key piece for us, because we needed it for our brownfield scenarios. We needed to read all the configs and do a true-up between what should be there and what is there, and we needed to make sure we could deploy and roll back at scale, with all transactions logged for auditing purposes. Again, as a service provider we have an obligation to our customers to maintain the integrity of our services internally and ensure that no unauthorized access happens. That also led to the need to detect out-of-process changes, so that we can get alarms on them and roll back changes that were possibly made while troubleshooting through the CLI; we can then either accept into the system what’s on record with Anuta, or push the previous configuration back onto the device after operations were made. All of this is automated and tied to escalations internally.
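Here is a minimal sketch of that true-up in Python, using difflib from the standard library; the device-access and reconciliation helpers are left as commented, hypothetical calls, since the decision (accept the change into the record, or roll it back) is the point.

```python
import difflib


def true_up(device: str, recorded: str, running: str) -> None:
    diff = list(difflib.unified_diff(
        recorded.splitlines(), running.splitlines(),
        fromfile="recorded", tofile="running", lineterm=""))
    if not diff:
        return  # the device matches the config of record
    print(f"out-of-process change detected on {device}:")
    print("\n".join(diff))
    # Escalation then decides which way to reconcile:
    # accept_into_record(device, running)   # hypothetical helper
    # push_config(device, recorded)         # hypothetical helper: roll back


true_up("cpe-17",
        "hostname cpe-17\nntp server 10.0.0.5",
        "hostname cpe-17\nntp server 192.0.2.9")
```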
Our other criterion was to cut down provisioning time; zero touch comes into that, but also provisioning times in general, coming from complex scripting, individual configuration, rework, and so on. Our modest goal was to get below twenty minutes per device; in reality, in the months after we did the Anuta onboarding, we came out at five minutes on average, and that’s a number that still holds true. On occasion it’s even faster. That has to do with the ability to get it right the first time, so basically no rework, and to model the scripts; it’s very intuitive once you start working on it and building it, and you have the ability to configure the models so that we can tailor them and simplify them enough to roll them out on an entire network. We also had a very strict timeline when we started, because during the screening we had invested a lot of time working with different vendors, and we had large, pressing projects coming our way.
And we wanted to make sure we could deliver the entire platform, as per our requirements, within ninety days, and Anuta beat that hands down: we were ready, up and running, in about fifty-one or fifty-two calendar days, give or take. That was absolutely stunning, and it was only possible because the team had already started working on the setup during our negotiations and began strategizing as soon as we first engaged with them. So that was a very nice delivery, and subsequently any additional work that has been executed was also quite bang on time and always to our satisfaction. We have had a very good experience on the deployment side as well.
So, last but not least, what did all of that do for us? It simplified our lives on the service delivery side, and it allows us now to look forward and integrate more with the rest of the systems: integration with our back-end systems, with our inventory, with IPAM, for example, with the portal services, and with FMS and PMS systems, on a per-user level. That means that for the multiple customers we service, each customer sees the feeds that are relevant to them, and we are able to build that on the Anuta platform with relative ease. The example I want to throw out is a customer deployment of about a hundred and fifty greenfield sites that we did with a networking customer of ours, where, once the devices were deployed on site, we were able to push all the configurations in a single push in eighteen hours, in two phases essentially, to bring the devices up and running, integrate them, and get them ready for service. That would otherwise have cost us over a hundred engineering hours, probably with rework, etc. It would have been a much, much more difficult project for us prior to orchestration through Anuta.
Steve: 34:28 Peter, is there anything you’d add in terms of lessons learned through all of this process?
Tata Communications – Lessons Learned
Peter: 34:34 Well, yeah, there are a couple of things. One is definitely to choose your vendor wisely. Don’t be taken aback or too cautious because it’s a smaller or newer company; if you make the right choice, it pays off in flexibility, agility, and the quality of work that comes out of it. So do your due diligence early, both on the vendor side and on the details of what you’re trying to achieve, because there are lots of promises out there on automation platforms, lots of vaporware. I’m sorry, slideware I wanted to say: some people put a lot of things and capabilities on slides. It really comes down to making sure those are capabilities that are really built, and not a two-million-dollar project plan with professional services, etc., because in the end it’s all tailor-made. That would be my suggestion, and also: ready the team to work with YANG. On the networking side we have probably crossed the chasm by now, and people are quite familiar with it, but it’s definitely a necessity for anything we’re going to do going forward in networking.
Steve: 35:59 Great. Hey Matt, how about yourself? What would you add in terms of lessons learned from this whole process?
Neustar – Lessons Learned
Matt: Yeah, I think one of the most important things we figured out is that you need to sit down and really think through the problems you’re trying to solve. Not at a really simplistic level, like “I need automation,” but truly: what is it you’re trying to solve? What are the limitations of the team you have, the team that’s going to be expected to use these things? Doing that sort of analysis is what really drove us towards this type of orchestration solution, as opposed to the standard scripting we were headed towards. Once you understand the true problems you’re trying to solve, you need to get very granular about how you solve them: what are the steps, what do these models need to look like, what sort of data do you want, and how frequently do you want it?
And what would be the ultimate response? It’s really critical to think about what that roadmap might look like within your organization: I’m starting here, here are the baby steps I can take, but ultimately this is where it’s going to get us. Because you have to think really long term; for us, what drove us to this type of solution was thinking about where we wanted to be in two to three years. There were maybe slightly easier ways to start right away, but their growth and capabilities would have really stopped within a few months, and they didn’t ultimately solve the long-term issue we were shooting for.
Steve: 37:46 Great. Well, thank you, Peter, and thank you, Matt. At this point I’d like to turn it back over to Praveen to provide a summary and a final perspective on the ATOM solution. Move to the next slide, please.
Anuta ATOM Summary
Praveen: 38:10 Yeah, as we learned from the problems both Matt and Peter alluded to, we’re looking at a requirement where it has to be a platform approach. It cannot be a cookie-cutter kind of solution; it has to cater to different kinds of use cases, it has to scale, and there are going to be multiple vendors and different kinds of endpoints. They could be devices, IP address management solutions, or NMS systems; there could be lots of different things, including some of the endpoints being deployed on-premises versus on a public or private cloud. Across all of this, we have to do config discovery, we have to understand what configurations are deployed in a brownfield environment, and we have to do reporting, remediation, assurance, and automation. All of these things come into the picture, all while ensuring that we are an open platform.
Providing APIs also has to come in. ATOM delivers on all of these requirements and all of these promises. And again, we are looking at two customers that are fairly disparate: one is an MSP catering to end enterprise customers, where the equipment is owned by the end enterprise and Tata Communications provides managed services; the other is an enterprise that behaves like a service provider, but in a slightly different scenario. We’ve been fairly successful catering to these two distinct sets of requirements, and we’re happy to update the team on this one.
Q&A
Steve: 39:46 Great, thank you. I think we’ve come to the point, if you go to the next slide, where we can do some Q&A. If anyone has questions for Praveen, Matt, or Peter, you can go to your Zoom dashboard or client, click on Q&A, and enter your question, and we’ll try to get those teed up. One thing I wanted to ask, Praveen, this would be one for you: you talked about multi-vendor support, and you mentioned you support a wide range of vendors, I think it was forty-five on that last slide. But if you currently don’t support a vendor, what’s the process, and how long does it take for you to incorporate support into your platform?
Praveen: 40:38 So, typically it takes anywhere from two to four weeks, depending on the type of vendor or the type of equipment we’re looking at. And in some scenarios, our customers do it themselves: our SDK provides complete access for our customers to do it, or for our SI partners to do it as well. So we are fairly flexible in that regard.
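For a sense of what a vendor-driver SDK can look like, here is a minimal sketch; the class and method names are invented for illustration and are not Anuta’s actual SDK.

```python
from abc import ABC, abstractmethod


class VendorDriver(ABC):
    """One subclass per vendor/OS keeps the platform itself vendor-agnostic."""

    @abstractmethod
    def get_config(self, device: str) -> str: ...

    @abstractmethod
    def apply_config(self, device: str, config: str) -> None: ...


class AcmeOSDriver(VendorDriver):  # a hypothetical newly onboarded vendor
    def get_config(self, device: str) -> str:
        return f"! running-config of {device}"  # stand-in for a real call

    def apply_config(self, device: str, config: str) -> None:
        print(f"pushing {len(config)} bytes to {device} via the AcmeOS API")


driver: VendorDriver = AcmeOSDriver()
driver.apply_config("acme-sw-01", driver.get_config("acme-sw-01"))
```

Adding a vendor then means writing one new driver against a fixed interface, rather than touching the rest of the platform, which is consistent with the two-to-four-week estimate above.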
Steve: 41:01 Great. Another question, in terms of the ATOM deployment itself: what type of underlying infrastructure is required? For instance, do you need Kubernetes to operate ATOM?
Praveen: 41:19 Right. So, to provide fault tolerance and deployment features like rolling upgrades, and some of those advanced capabilities for higher uptime, resiliency, horizontal scale, and fault isolation, we had to go with a Dockerized environment where Kubernetes is required. For the remote collector nodes that are going to be deployed on-premises, closer to the enterprise, Kubernetes is not mandatory; but for the central infrastructure, we do expect the software to be deployed on a Kubernetes-based platform.
Steve: 41:56 And the reason for that is primarily scale and resiliency?
Praveen: 42:02 Scale, resiliency, and rolling upgrades are typically the drivers.
Steve: 42:06 Great. Another question, related to software-defined networking and SD-WAN. If you’re in an environment where the customer already has an SD-WAN solution deployed, with a specific controller as part of that operation, essentially what level of automation does that provide, and how does ATOM fit in there? Would I still need to deploy ATOM in that environment?
Praveen: 42:36 So, we do have scenarios where the end-to-end orchestration is going to require, say, doing some IP address management. It’s not that the SDN or SD-WAN controllers cannot do their part; it’s just that from an overall process point of view, there is more than just automation of the endpoint. There’s going to be a lot of resource reservation and some bookkeeping we have to do; sometimes we have to mediate with ServiceNow. And in some scenarios, our customers actually have multiple SD-WAN controllers, and we cannot have a northbound self-service portal directly making API calls into each of these endpoints. That’s where we provide the mediation between the SD-WAN or SDN controllers and the self-service portals. We have done integrations with these controllers already, and we do see the need; our customers also agree with this so far, Steve.
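A minimal sketch of that mediation layer: one northbound entry point that does the bookkeeping (an IPAM reservation) and then fans out to whichever controller owns the site; the controller mapping, helper names, and payload here are all hypothetical.

```python
CONTROLLER_FOR_SITE = {"branch-1": "versa", "branch-2": "viptela"}  # assumed


def reserve_ip(site: str) -> str:
    return "10.20.30.1/30"  # stand-in for a real IPAM reservation call


def provision(site: str, bandwidth_mbps: int) -> None:
    wan_ip = reserve_ip(site)                  # resource reservation first
    controller = CONTROLLER_FOR_SITE[site]     # route to the owning domain
    payload = {"site": site, "wan_ip": wan_ip, "bw_mbps": bandwidth_mbps}
    print(f"sending {payload} to the {controller} controller API")


# The northbound portal calls one API; the mediation layer does the rest.
provision("branch-1", 100)
```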
Steve: 43:34 Great. Do you integrate with ServiceNow?
Praveen: 43:43 Yes, we do, especially on the service catalog front. Typically, that’s where the northbound clients are expected to invoke the services; that’s a typical area where we’re expected to integrate, and we have done that integration already.
Steve: 44:00 Okay. And with respect to pricing, what’s the pricing model for an ATOM solution? Essentially, what are the key drivers for dimensioning and pricing?
Praveen: So, the pricing model is based on the number of devices, and it’s fairly simple in that sense. It doesn’t matter what type of device it is, just the number of devices, and then which features you want to use within ATOM, like orchestration or telemetry; so there are a couple of big building blocks, and those two factors together decide the price.
Next Steps
Steve: 44:42 Let me circle back to both Peter and Matt here, and I think you both alluded to this, but I’ll start with Peter: what do you envision as the next steps in the process for you at Tata with Anuta and ATOM?
Peter: 44:59 Yeah, okay. So, we have incrementally implemented additional features along the way since our first use cases and deployments. We intend to have basically all of our managed CPE services ultimately discovered and managed through Anuta; that’s the scale piece. We are currently looking at options to use Anuta also for multi-SD-WAN-vendor integration, something Praveen just addressed as well. Those are the things we’re looking at, and also basically building around it: integration with the other services we have, for fully automated delivery and activation, and integration with the rest of the systems that are not yet rolled in, like IP address management and so forth, so that we can ultimately deliver devices, and instances I should say, because I think I forgot to mention that we also deploy on cloud instances, in a process that is basically automated end to end, through service activation.
Steve: 46:18 Thank you, Peter. Matt, how about yourself? What do you see as the next steps?
Matt: 46:23 Yeah. So, for us, we’re actually experimenting with what I sort of call SD-WAN, or SDN, light, if you will. We’re doing a lot of custom things on the routing side, taking a lot of the complexity of how you provide the DDoS service off of the devices. As we do that, what we’re really looking at is: how do I integrate that with the management capabilities and the analytics I can get out of Anuta? How do I blend these things together, and what do those types of interactions look like? What makes sense for us to automate, and what makes sense more to notify on? We’re really starting to work through a lot of that next-step operational integration of both the SDN side of this and the config management, orchestration, and analytics that Anuta gives us.
Steve: 47:34 Great. So, a question for all three of you, and you may each want to provide your perspective: if you had to compare and contrast Anuta ATOM with the competitive alternatives, whether those are solutions from other vendors or open-source software solutions, how do you stack up Anuta ATOM against those alternatives? That could be on a feature basis, on a performance and scale basis, or even on a pricing model basis. Praveen, maybe you want to go first, but I’d also be interested in hearing Matt’s and Peter’s perspectives.
Praveen: 48:23 Sure. So, Anuta ATOM is a true multi-vendor platform, and along with that we cover pretty much all the end-to-end scenarios: we don’t ask our customers to go and deploy a different product for workflow, a different product for telemetry, and a different product for orchestration. In that sense, we are a much more integrated platform, and much more multi-vendor. We are also a bit futuristic, in the sense that we provide fault tolerance and our software itself is Docker microservices-based. So we have some competition in specific areas, but broadly, we are a much more integrated platform for complex deployments.
Steve: 49:07 Peter and Matt, you obviously made a decision. Just curious how you’d respond.
Peter: 49:14 Yeah, I think Praveen has already touched on that. Vendor independence is also important, and true, unbiased support for a set of multiple vendors is key in our environment, because many of the competitors are aligned with one or another vendor’s solution and, while using open standards, don’t really implement all of the components in an open and transparent fashion, or they require certifications from third-party vendors, etc. None of that is required with Anuta, and Anuta has taken on a lot of that work for us in general, without any restrictions on which vendors we can onboard and without making life difficult for everybody in the process.
Steve: 50:18 Matt, anything you would add?
Matt: 50:21 No, I mean, I think we were pretty similar. We looked at a lot of the vendor-specific platforms that were out there, we looked at things like OpenDaylight, and we looked at a number of different types of solutions. It was the true multi-vendor support: the whole idea that if it’s something you can log into and issue commands to, or that has an API, or any mechanism by which you can push configurations to it, then Anuta can interact with it. For us, that was exceptionally powerful, because we knew that going forward, as we grew our platform and added services and capabilities, we didn’t want to be hamstrung by decisions we had made three years back, along the lines of “well, this is our management platform, so we’re stuck implementing solutions that play well in that environment.” With Anuta we don’t really have to consider that; we know it’s a proper, fully vendor-agnostic solution that can interact with pretty much any vendor you have. That was super powerful for us, because as we were moving this service along, building this new DDoS platform, we knew we were going to be adding new things and new vendors. So it was really powerful to have that flexibility, setting us up for future flexibility from day one.
Wrap-Up
Steve: 51:55 Excellent. Well, at this point I’d like to thank both Peter and Matt for their contributions. This has been a great perspective on automation in general and Anuta ATOM in particular. Why don’t we move to the next slide, Praveen, and we can wrap this up.
Praveen: 52:21 Yeah, for further information on ATOM and some of the case studies, you can find these at the URLs shown. You can always reach out to info@anutanetworks.com for further information, and we’re also on LinkedIn.
Steve: 52:36 Great. And if you go to the next slide: we have our ONUG Spring 2019 event coming up shortly, actually just in a couple of weeks or so, down in Dallas. You can come to the conference, meet with Anuta Networks there, and participate in the event. There’s a way to register for a thirty percent discount if you follow that link. I find the ONUG events to be a very stimulating environment if you’re in the position of looking at how you migrate your applications and your infrastructure from the world we’ve all gotten comfortable living in for the last ten or twenty years to this new world of hybrid multi-cloud deployments.
So, it’s a great event in that respect, as you can interact with your peers, and it’s a good way to take the pulse of the industry: what the current state of the art is, and who the leading vendors are in terms of implementing solutions along these lines. I think with that, we’ve come to the end of today’s program. Again, you can circle back with the folks at Anuta if you have more questions or interest in their products and solutions. And again, I’d like to thank Praveen, Matt, and Peter for today’s event. Thank you all.
Peter: 54:04 Thank you, Steve.
Praveen: 54:07 Thank you.
Steve: 54:08 Thanks, everybody.