
The Q&AI Podcast
From navigating the ethical complexities of AI to leveraging AI in use cases spanning industries like healthcare, education, and security, The Q&AI delivers actionable insights that empower you to make more informed decisions and drive more strategic innovation. In each episode, Juniper Networks’ Chief AI Officer, Bob Friday, and other guest hosts engage with a range of industry experts and AI luminaries to explore the AI topics that matter most to your business.
We’d love to hear what you think! Your ratings and reviews help others discover The Q&AI and keep us inspired. Catch up on all past episodes and learn more about the podcast by visiting juniper.net/QandAIpodcast
The Q&AI Podcast
Network, Talk to Me: The Future of Conversational Interfaces
In this episode of The Q&AI Podcast, host Bob Friday welcomes Juniper's Yedu Siddalingappa, Navraj Pannu, and Shirley Wu to discuss the evolution and future of the Marvis® AI Assistant. The latest advancements allow users—from helpdesk engineers to CIOs—to gain faster, more accurate insights and even generate dynamic dashboards.
The team also highlights how agentic workflows help reduce hallucinations by incorporating iterative validation and reflection. Finally, they discuss the ultimate vision for the assistant: moving toward a self-driving network and transforming how users manage and interact with their IT infrastructure.
-----
Key points covered:
Enhancements to the Marvis AI Assistant, including natural language interaction with network data using GenAI and implementation of agentic workflows for iterative, intelligent problem-solving.
How Marvis has evolved, transitioning from structured query interfaces (pre-2018) to natural language and LLM-powered interactions, shifting from static tools to dynamic, AI-driven troubleshooting and insights.
How the agentic framework uses multiple specialized agents to simulate human-like troubleshooting, enabling reflection, validation, and multi-perspective analysis to reduce hallucinations.
The impact for customers, such as significant reductions in trouble tickets (90%+) and mean time to resolution (95%+), as well as expanding use cases from network engineers to IT managers and CIOs.
The future of Marvis, moving toward a self-driving network with automated actions while empowering users to generate dynamic dashboards and make data-driven decisions.
-----
Where to find Yedu Siddalingappa?
LinkedIn - https://www.linkedin.com/in/yedu/
Where to find Navraj Pannu?
LinkedIn - https://www.linkedin.com/in/navraj-pannu-746359177/
Where to find Shirley Wu?
LinkedIn - https://www.linkedin.com/in/shirley-wu-ab32171/
-----
Keywords
IT organizations, Complexity, Team resources, Network operations, Marvis, Virtual network assistant, Conversational interface (CI), Natural language processing (NLP), Queries, Troubleshooting, GenAI capabilities, Network operations improvement
-----
To stay updated on the latest episodes of The Q&AI Podcast and other exciting content, subscribe to our podcast on the following channels:
Apple Podcasts - https://podcasts.apple.com/us/podcast/the-q-ai-podcast/id1774055892
Spotify - https://open.spotify.com/show/0S1A318OkkstWZROYOn3dU?si=5d2347e0696640c2
YouTube - https://www.youtube.com/playlist?list=PLGvolzhkU_gTogP5IBMfwZ7glLp_Tqp-C
We hope you enjoyed this episode!
Bob: Hello and welcome to another episode of Q&AI. Today we have the honour of being joined by the illustrious Marvis product management and engineering team. Joining us from Singapore is Yedu, and in the studio with me are Navraj and Shirley. Today we're going to be talking about the Marvis conversational interface. Yedu, let's start with you. Gen AI and agentic frameworks are all the talk of the town, and I know that last year you announced Gen AI public doc search. Maybe we'll start with this: what are you working on now, and what are you announcing with Marvis CI 2.0?
Yedu: Hey, thanks, Bob, for having me. This year we are focusing on two enhancements to Marvis, our virtual network assistant. Number one, we are enhancing Marvis to effectively interface with all the different data sources in the Mist cloud using Gen AI capabilities. What this enables is a move from querying to conversing. Now any user, irrespective of technical background, can extract insights from the network.
So that's number one. Secondly, we are incorporating agentic workflows into several of the Marvis functions. As you know, with agentic workflows we are moving away from a linear process to a more iterative one. It involves reflection, and it involves tapping into different data sources and different tools.
Now, this ability of Marvis to interface with different data sources will supercharge the agentic implementation. At the end of the day, the benefit for end users is more effective and accurate responses, validated and iterated on by our agentic framework.
Bob: Okay, so if I understand you right, last year was all about Gen AI on Juniper public docs, helping customers get answers out of public documents. This year it's all about using Gen AI to help customers get answers from their actual network data. Is that correct?
Yedu: Yes, that's a good summary, Bob.
Bob: Okay, Shirley, you've been with Mist almost from the beginning, and Marvis CI came out in 2018. What have really been the technology changes from when you first started building Marvis CI back in 2018 to what we're doing now with Marvis CI 2.0?
Shirley: Yes, Bob, that's a really good question. Actually, when I joined Mist in 2018, Marvis already had a virtual network assistant, a really useful tool that let users interact with it using a structured, SQL-like language to get the data they needed for troubleshooting. But in 2018, we realized we wanted to change how users interact with Marvis. We wanted to use natural language.
They could ask questions about their data to do troubleshooting, and also ask questions about Marvis, the product, and the Wi-Fi solutions we provide. Generally, there were two types of challenges we were facing. First, with natural language, how can we understand what the user wants? Second, once we understand the user's question and intention, how can we get an accurate response for the user? But remember, this was 2018,
before LLMs existed. At that time there were quite a lot of challenges. We had to use limited training data plus synthetic training data to build our own models for intent classification and entity extraction, to identify the network features customers were asking about. On top of that, we had to build very static, hard-wired troubleshooting tools to answer questions about problems in the network, and integrate different API search capabilities to provide answers about their data. We also built our own homegrown RAG solution to address public-doc-related questions.
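The pre-LLM pipeline Shirley describes (intent classification plus entity extraction over limited training data) can be sketched at a very high level. This is a hypothetical, rule-based illustration of the idea, not Juniper's actual models:

```python
# Hypothetical sketch of a pre-LLM conversational pipeline:
# intent classification plus entity extraction over a small
# hand-built vocabulary (illustrative only, not Juniper's code).

INTENT_KEYWORDS = {
    "troubleshoot": ["troubleshoot", "problem", "failing", "slow"],
    "doc_search": ["how do i", "what is", "documentation"],
}
KNOWN_ENTITIES = {"client", "ap", "switch", "site", "vlan"}

def classify_intent(question: str) -> str:
    """Map a question to a coarse intent via keyword matching."""
    q = question.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return intent
    return "unknown"

def extract_entities(question: str) -> list:
    """Pull out known network-feature terms from the question."""
    words = question.lower().replace("?", "").split()
    return [w for w in words if w in KNOWN_ENTITIES]

print(classify_intent("Why is my client slow on this site?"))
print(extract_entities("Why is my client slow on this site?"))
```

A real 2018-era system would have used trained classifiers rather than keyword lists, but the shape of the problem is the same: decide intent first, then identify which network entities the question is about.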
That was before large language models existed. But now large language models have taken over the world. A lot of the problems we were facing before don't exist anymore, and on top of that, large language models enable everybody to build agents. Agents can take everything to the next level. First of all, the same two problems I talked about earlier still exist. We need to clearly understand the user's intention.
Large language models really help us limit the scope of that challenge. The second problem is, how can we get responses to the user's question faster? We have data stored everywhere in the cloud. As long as we have the data, how can we map the user's question to the data we have and come to a conclusion? That is the challenge we are facing and the innovation we are working on.
Bob: Okay, so it sounds like Marvis in 2018 was really good at understanding language but not really good at generating it, and that's what ChatGPT and large language models really brought: a voice for Marvis. But the other thing you mentioned, and maybe Navraj can talk to this, is that large language models brought natural language generation but also gave us a tool for actually solving problems. We talk about Gen AI and agentic workflows, so maybe fill the audience in a little. How are you actually leveraging Gen AI beyond giving Marvis a voice, to help get customers' questions answered?
Navraj: So, we talked about many different aspects. One of the important points is that we have all of this diverse data within the cloud, whether it be documents, network state, or databases with both long- and short-term data. For all of this, LLMs provide a tremendous amount of capability. They know the language of our databases, so we can ask questions and get complex queries we haven't even thought of but that our customers want.
Before, with these static languages we developed, we would have to program for every question a customer might ask. That wasted time. Now, with all of this power, the agents can probe all of these diverse data sets, get the answer for us, summarize it, and validate it. That's the power of an agentic framework, with each individual agent having an LLM to answer its question.
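The shift Navraj describes, from hand-coding a query for every question to letting an LLM generate queries against a known schema, might look roughly like this. Here `call_llm`, the schema, and the canned translation are all placeholders for illustration, not Juniper's implementation:

```python
# Hypothetical sketch: an LLM translates a natural-language question
# into a query against a known schema, instead of engineers hand-coding
# a new query for every possible question. `call_llm` is a placeholder.

SCHEMA = "clients(mac, ap, rssi, site), aps(name, site, uptime)"

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted model here.
    # We fake one canned translation so the sketch is runnable.
    if "lowest rssi" in prompt.lower():
        return "SELECT mac, rssi FROM clients ORDER BY rssi ASC LIMIT 5"
    return "SELECT * FROM clients LIMIT 10"

def question_to_query(question: str) -> str:
    """Build a schema-aware prompt and ask the model for a query."""
    prompt = (
        f"Schema: {SCHEMA}\n"
        f"Translate to SQL: {question}"
    )
    return call_llm(prompt)

print(question_to_query("Which clients have the lowest RSSI?"))
```

The design point is that the schema travels in the prompt, so new question types need no new code, only data the model can see.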
Bob: And Yedu, maybe we'll come back to you, since you're working with customers on a daily basis. With these conversational interfaces, we're moving people from the CLI paradigm to dashboards, to where natural language interfaces are now becoming the way customers get information out of, and communicate with, their networks. Any great customer stories of how you see our customers changing how they operate their networks using this conversational interface technology?
Yedu: Bob, we have several customers who have given public testimonials on how they were able to reduce trouble tickets by more than 90% and reduce mean time to resolution by more than 95%. Marvis CI has played a big role in all of these outcomes. When we talk about Marvis CI users, mostly we see network operators: engineers and helpdesk people. They are the top users of Marvis CI, and it's the primary interface these users go to whenever there is a problem in the network.
What we have observed is that the troubleshoot function is one of the top use cases within Marvis: troubleshoot client, troubleshoot a network device, troubleshoot site. Those are some of the most-asked questions. So far, the main use case has been troubleshooting.
But as we progress toward the agentic framework and enable Marvis to understand and interface with the data, we can expand the use cases beyond troubleshooting and doc search. There is another persona of users within the IT team, for example IT managers or CIOs. They are more interested in high-level data. Their core functions are capacity planning, resource optimization, or network performance optimization.
For those, they need to make data-driven decisions, and they need filtered insights gathered from historical performance data. Now, with this new capability, Marvis will also be able to serve this new persona. So far, the top users have been on the troubleshooting side, but going forward we will have more and more different personas benefiting from Marvis CI.
Bob: Yeah, you bring up a good point. And Navraj, if you think back, when we did the public doc Gen AI search, that was what they call a one-shot thing, right? We basically found the correct documents and fragments, and we used the large language model to summarize them.
Now we're moving into these more agentic workflows, where we're actually using Gen AI to troubleshoot problems. Maybe we can start with the audience on what an agent is. What is a basic Gen AI agent? We'll talk a little about the difference between Gen AI agents and agentic workflows, but maybe we'll start with: what's an agent, if I go look at your code?
Navraj: Well, I define an agent as something that solves a particular problem, and that's one agent within a whole framework. So we can think of one agent solving problems where the information lives in our documentation. A second agent could handle a user who wants to know something about their network state over a long period of time. But maybe they want the two combined, and that's what agents can do.
So, for example: what is the information our switch is giving us, and what exactly does it mean? We can provide the information and summarize it, but also point to the document discussing it for further details, summarizing all the important metrics they want so they know the exact definitions. That's a very simple agent we've constructed. But remember, we have all of these diverse data sets within our cloud, and we can grow our agentic framework even more to tackle any problem a customer may ask.
Bob: Yeah, you know, Shirley, when I look at the YouTube videos, when people say the words Gen AI agent, it usually means there's a large language model and maybe some tools or API function calls. But when I hear about agentic workflows, it usually means we have multiple agents trying to solve problems. Any good examples where you're starting to use multiple agents to solve not just a conversational interface problem but an actual networking problem?
Shirley: Yeah, Bob, currently the team is actively exploring and building a troubleshooting agent. Like I said, in the earlier generation we had already accumulated enough APIs and knowledge about how our customers troubleshoot actual problems. Now we take those human troubleshooting steps and develop agents in our software, so that we can simulate the actual CS/QA engineer's journey when they tackle customer support tickets. For example, we are learning from all the existing troubleshooting support tickets.
From there, we learned the steps our CS/QA engineers took: check the data in our cloud, come to a conclusion about the next step, identify the root cause, and then provide a recommendation. Starting from that, we captured our domain knowledge. This is the part that current large language models, trained on all the internet data, don't have, because this is Juniper Mist's unique domain experience, domain knowledge that doesn't exist out on the internet. This is our proprietary knowledge. We had to build our own solution to teach our LLM to understand the troubleshooting flow specifically for Juniper Mist devices.
Bob: Okay, so you're starting to use both customer data and historical data to help troubleshoot and solve networking problems. Now, Yedu, I'll come back to you, because you can't say large language models without saying hallucination. So maybe tell our audience and customers: how do we make sure these large language models are not hallucinating? I always tell people, kids and new employees sometimes do things that make you wonder, what the heck were they thinking? Hallucinations. But how are we making sure Marvis doesn't hallucinate?
Yedu: Yeah, I believe this is where the agentic workflow comes into play. In an agentic workflow, we incorporate reflection and iteration. It repeatedly checks the response for accuracy, and it also iterates by looking at the problem from different perspectives. For example, if we take any troubleshooting workflow with a normal network engineer, it is not a linear process, right?
The first step is data collection, and then we look at the problem from different perspectives and slowly try to eliminate possibilities one by one. It's a back-and-forth process. That's exactly what this agentic framework brings to the complete troubleshooting process. It's one great way to eliminate hallucinations, because there is a constant review going on, a constant feedback mechanism.
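The reflect-and-iterate loop Yedu describes can be sketched as a generate-critique cycle. Both the drafting and critiquing agents here are stubs standing in for LLM-backed components; the function names and data are hypothetical:

```python
# Hypothetical sketch of a reflect-and-iterate loop: a draft answer is
# checked against collected telemetry by a critic, and revised until the
# critic accepts it or a retry limit is hit. Stubs stand in for LLMs.

def draft_answer(question, data, feedback):
    # Stub generator: the first draft ignores the data (a "hallucination");
    # a revision prompted by feedback actually uses the data.
    if feedback is None:
        return "The AP is down."
    return f"The AP is up; the client is failing {data['failing_step']}."

def critique(answer, data):
    # Stub critic: flag the answer if it contradicts collected data.
    if "AP is down" in answer and data["ap_status"] == "up":
        return "Telemetry shows the AP is up; re-check the client path."
    return None  # answer is consistent with the data

def answer_with_reflection(question, data, max_iters=3):
    feedback = None
    for _ in range(max_iters):
        answer = draft_answer(question, data, feedback)
        feedback = critique(answer, data)
        if feedback is None:
            return answer  # validated against the data
    return answer

data = {"ap_status": "up", "failing_step": "DHCP"}
print(answer_with_reflection("Why can't this client connect?", data))
```

The point of the loop is exactly the feedback mechanism described above: a first-pass answer that contradicts the collected data gets caught and revised rather than returned to the user.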
Bob: Yeah, so that's the back-and-forth iteration between the agents, reviewing their work and making sure they got the right answer. So maybe we'll wrap it up, Yedu. One last question: what's got you most excited when you look forward at Marvis Conversational Interface 2.0? What's going to change the world right now?
Yedu: Yeah, in the next version of the Conversational Interface, users will be able to do more. From the current set of functions, troubleshoot, doc search, device search and so on, they can move to talking to data. Now they can interface with any network data. This also opens up use cases for personas beyond operators. Right now we mostly see network engineers and helpdesk staff using Marvis CI for operations, monitoring, and troubleshooting.
Now, with this new capability of Marvis CI, we can see other personas, like managers and CIOs. They will be able to churn out dynamic dashboards with insights. Whether it's a long-term or short-term data store, they can pull insights from anywhere. They can create dashboards on the fly, which helps in their normal functions, whether it's capacity planning or performance optimization. They will be able to make data-driven decisions based on these dynamically created dashboards.
Navraj: Actually, Bob, I wanted to add to that as well. Beyond the possibilities Yedu mentioned, something we're also very passionate about is actually creating a self-driving network. We have all this information about how issues were resolved in customer tickets, so we know when it will be possible to automatically create an action. Turning a recommendation into an action is yet another agent we can build. And that's really the flip side of what we've been discussing: allowing the customer to rely on the system completely as we build their confidence in it.
Bob: Yeah, I would have to agree with you, Navraj. For me personally, these agentic workflows are the key to actually getting to that Uber, Waymo, self-driving network experience. Shirley, for us Star Trek fans, remember the talking computer? What's got you excited about Marvis Conversational Interface 2.0? How close are we to the Star Trek talking computer?
Shirley: So, yes, Bob, that's a dream. We started this journey in 2018, but like I said before, that was before LLMs existed, which was really quite a challenge. Today it's a totally different level of game. We have a lot of capabilities that large language models provide and a lot that agentic frameworks provide. The team is excited, and there's really active dev work on the prototype across the company. Hopefully, we are going to have some customer-facing features available later this year.
Bob: Okay, well, Shirley, Navraj, Yedu, I want to thank you. I can't wait to get my hands on Marvis Conversational Interface 2.0. I want to thank the audience for joining us on this episode, and we look forward to seeing you on the next episode of Q&AI.