Okey dokey. I think we can get started now. I'd just like to welcome everybody here today. Thank you for joining us on this lovely Wednesday afternoon, and good morning or good evening, wherever you are. I'm Andy, the principal marketing manager at Paligo, and I'm just going to do a little housekeeping before we begin. If you cannot stay for the entire webinar with us today, don't worry: we will send everyone the recording afterwards, and you can also access it at paligo.net/webinars. Throughout the presentation, you will see on your right a question and answer (Q&A) box. Please submit any questions you have for our presenters, and we will do our best to get to them at the end. Because we have quite a large audience today, we may not get to all of your questions individually, and we have a hard stop at the end of the hour. Any questions we don't get to, we will follow up on one to one via email afterwards, from either Kapa or Paligo, whoever is best suited. So without further ado, I will let the presenters from Kapa and Paligo introduce themselves. If you have any other questions, pop them in the chat and I'll respond to them. Enjoy the show. Brilliant. Thank you, Andy. So I'm Daniel Nyberg. I lead the marketing team at Paligo, and have done so for the past four years. For those of you who are not familiar with Paligo: Paligo is a structured content authoring tool with content delivery, and the primary use case we see with our customers is in technical documentation. It's for knowledge base content, support documentation, instructions for use, standard operating procedures, and many, many more. With that, I'm going to hand over to my colleague Rasmus for further introductions. Yes. Hello. My name is Rasmus Peterson, and I'm responsible for strategy and partnerships here at Paligo. And next up is Silvio. Yeah. My name is Silvio Perez. I am the tech writer at Paligo. Cool.
And I can round us out. Hey, everyone. I'm Emil, one of the cofounders and CEO of kapa.ai. Super excited to join today. Essentially, at Kapa, we work with lots and lots of wonderful companies that have lots of technical documentation, much of it hosted on Paligo, and turn that into AI assistants that users use to onboard faster, to deflect support tickets, and so on. I'm super, super excited to be here. Great. Thanks, Emil. It's super exciting to have guests and not only Paligo people at the webinar, and really fun to have two companies, and individuals from both, who are genuinely into technical documentation and AI. So let's get going here. Rasmus, handing over to you. Yes. Big topic here, but AI adoption is accelerating in documentation. When we talk to our customers and look at what's happening in the market, these are obviously very exciting times. There's a lot happening, but it's also connected to the history of our space. What some people are saying is that AI is a catalyst for making technical documentation, or documentation in general, part of the product experience. And we don't really agree with that statement, because for us, technical documentation has always been a key part of a very good product experience. What we believe AI might actually do in this context is emphasize and clarify that technical docs are a very important piece of delivering a really good product experience. So we think AI will act as a catalyst to show that more clearly. We're also seeing a big change in user expectations when it comes to consuming technical documentation. Customers more and more expect to interact with any type of technical documentation and get a conversational answer. Historically, it's always been about trees and search, and trying to figure out the right way to find the answer to the question you have.
Nowadays, very quickly, users are coming to expect a conversational answer. It might be that they want to access the trees and search after they've interacted with the conversational answer via the AI, but they expect that to be the first line of interface, so to speak. We also hope and believe, because this is very important for us and we are technical documentation nerds to a large degree at Paligo, that AI will accelerate the importance of technical docs inside organizations, and that this will put them higher on the agenda. We will talk in the following slides a little bit about why we think that is the case, but we truly believe this will be something that drives it, and we think that's the development we would really like to see. We also see that AI is shifting a little how organizations are structured around this. For example, I talked to a former colleague of mine who works at a Danish software company, and they have reorganized their whole company and created something they call the customer experience organization, which used to be user experience, and which she is head of. They have put support, customer success, and even their technical documentation team inside that organization. And this is really what we would like to see: that technical docs become part of the customer experience, because they are such a key part of it. So next slide, please. Yeah. So what is AI changing, and what isn't it? Well, I think the main thing that AI changes is how users consume technical documentation. They're expecting answers directly from the AI, drawn from the technical documentation.
But what it doesn't change is the need to have an accurate source behind that, because otherwise the answer isn't really worth much. So we see that governance and reviews, which are things we've always emphasized when it comes to technical documentation, are becoming even more important with AI. Because in a way, AI is putting a magnifying glass on the quality of your technical documentation. Whether you have good documentation or, vice versa, not-that-great documentation, the AI puts the magnifying glass on it, and it becomes much more visible whether it's actually good or not. So that's a key piece: AI holds up the magnifying glass. But really, it's the same things we have always emphasized: you need good, structured documentation to be able to give the right answer. Another thing we will see, which might affect technical documentation, just as a side note, is that with AI, coding will speed up software developers. We're already seeing it everywhere, which means it will be easier to add a lot of functionality to products. And the more functionality you add to a product, if you're talking software, for example, the more complicated it generally tends to be, which in itself increases the need for technical documentation to really help you understand all of that functionality. So for us, a lot of what is happening now in AI just emphasizes the need for, and the importance of, really good technical documentation. Okay, next one, please. Yeah. So, the foundation for a successful AI implementation. Obviously, this could probably be its own webinar, several hours of it, to be sure. But we believe, because this is really at the core of what we do, that high-quality structured content is the most important input you can give an AI if you want it to give you, or your customers, good answers.
And accuracy, in terms of the quality of the answer you're getting, is very hard to achieve with a prompt alone. Obviously there are probably prompt gurus out there who can get very high accuracy, but then the prompt becomes very, very long or very, very specific. We believe that if you have very good structure and ownership of the content being fed to the AI, then the accuracy goes up, and hence the trust in that chatbot goes up. Something that's always been incredibly important for us at Paligo, because it was the key to why Paligo is such an effective tool, is consistent structure, consistent terminology, and a lot of metadata added to the content. That has always been key for Paligo, but we also see that it clearly gives much better AI answers if you have your data structured in that way. One thing we also see is that integrations are important, meaning you want the AI to integrate very well with the content you have published from Paligo, like Kapa does, for example. One thing we will focus more on going forward is improving how we integrate, and hopefully being able to pass along even more of the metadata we already have inside the tool when we publish the content. So this is a focus area for Paligo going forward. A way to state it is that AI rewards technical documentation teams that have taken seriously the job of creating very good, well-structured technical documentation. Anything we have said historically that you need to do for good technical docs, the AI really rewards that work. Okay. Next one. So, why structured content changes everything. Well, we believe that structured content gives the AI context for the content: not only what it says, but what is actually inside it.
That means the AI has much more data to work from and can give a better answer. One thing, for example, that we've always focused on is content in topics, or smaller chunks. That has always been really useful for you when you use Paligo, and still is, but we also see that that type of chunking, smaller bits and pieces of content, also produces better AI answers. We believe that unstructured content forces an AI to guess more, to fill in the blanks, while structured content gives the AI accuracy and enhances trust. Another aspect of why structured content can change things: I was listening to a podcast where they were interviewing a VC from the famous venture capital firm Sequoia, and they were discussing how you will buy B2B software in the future. Basically, what they're saying is that they foresee AI agents evaluating and recommending what software you will buy. And in that setup, and this wasn't me saying it, it was the partner at Sequoia, she was saying that for an AI agent evaluating software, the technical documentation or product documentation would be a much bigger part of the process than it is for the human equivalent, because the agents are going to read the data and evaluate based on what's available. So product documentation would be a much bigger part of the evaluation process for any software. Yes, next one. Yeah, super interesting, Rasmus. Just as a comment, could you even say, I mean, the flip side of this discussion would be that unstructured content and low-quality documentation would actually become a liability for a company. Of course, we are promoting structured content, so we really love our own setup.
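To make the chunking point a bit more concrete, here is a minimal, illustrative sketch of what topic-based chunks with metadata look like to a retrieval system. All field names, titles, and values are invented for the example; this is not Paligo's or Kapa's actual data model.

```python
# Illustrative sketch: topic-sized chunks keep their own context (title,
# version, audience), so a retriever can match a question to a
# self-contained unit instead of guessing from a wall of text.

def chunk_by_topic(topics):
    """Turn authored topics into retrieval units that keep their metadata."""
    chunks = []
    for topic in topics:
        chunks.append({
            "text": topic["body"],
            "title": topic["title"],            # context the AI can cite
            "product_version": topic["version"],
            "audience": topic["audience"],
        })
    return chunks

# Hypothetical authored topics, as a structured tool might export them.
topics = [
    {"title": "Installing the CLI", "body": "Run the installer...",
     "version": "2.1", "audience": "admin"},
    {"title": "Resetting a password", "body": "Open Settings...",
     "version": "2.1", "audience": "end-user"},
]

chunks = chunk_by_topic(topics)

# Metadata lets a retriever narrow the candidates before text matching:
admin_chunks = [c for c in chunks if c["audience"] == "admin"]
```

The point of the sketch is the filter on the last line: with unstructured content, that narrowing step simply isn't possible, and the model has to guess.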
But yes, there is definitely a risk that you get more hallucinations, the AI inventing things or filling in the blanks more, and that in turn lowers the trust in the answers you get. Something we've seen is that in B2C you can accept that: if you're using the chatbot just for your own personal use, it's often fine. But if you use it in B2B or inside an organization, the need for very, very good, accurate answers is much higher. Cool. So, in summary: structured and trusted content wins in the age of AI. That's what we believe, definitely. And we're also seeing it in our own use of Kapa, and from our customers, who are asking how they can better integrate their structured content into their chatbots. Go ahead if anything was... yep. Sorry. Go ahead, Emil. No, I was just saying, I couldn't echo that more. If anything, that's a perfect jumping-off point. Maybe I can say a little more about our worldview in terms of documentation and AI. The one-liner, right, and you also said it, Rasmus, is the cheeky saying with all of these models: garbage in, garbage out. Even as the models get better, that just proves true time and time again. The more structured the content, the more effort that goes into good technical writing, the better the quality of the AI agent you build on top of it. Even with all the craziest context windows and trickery and all the stuff you do, everything comes back to the underlying content. Yeah. And maybe this also kind of introduces... I would like to start this part of the presentation by introducing how we think about the future.
I really like that you brought along, Rasmus, the Sequoia partner's view, because essentially what you're saying is that how people interact with technical products, whether internal or external, is changing. It's no longer just your users manually going to the documentation and clicking through pages and so on; people expect to be able to chat to an AI assistant. At the same time, your employees, your solutions engineers, your field engineers, your customer support reps, they're no longer expecting to go ping colleagues for answers either. They expect to chat to some sort of AI and get an answer right away. And there's a whole new emerging category of folks engaging with your product and your content: AI agents themselves. Right? ChatGPT and agents that sit on top. I know, for example, Emil, that the Kapa chatbot is very popular with our customer success team inside Paligo. They use it a lot. That's really cool to hear. It's really common. And I guess we'll get back to who this is helpful for a little later in the presentation. But to tie this back to the main point of this page: this essentially means that technical writers, and folks that own documentation, have never had more power within their companies, because you are now delivering, essentially, the training material for how your users, your employees, and AI agents will interact with your product. That's the main point I want to convey here. But to use it as a jumping-off point: Daniel, if you want to jump to the next slide, I promise I'll keep this brief so we can get to the meat of this, but here's a brief intro to what Kapa essentially is.
Essentially, at Kapa, what we do is work with companies that have lots of technical documentation to turn it into AI assistants, for this exact purpose. This is a quick screenshot showcasing the platform. You can imagine, once you train an AI assistant on your content and it's live for your users, the first thing you want to know is: how many questions are people asking? How are people reacting to it, and so on? I can explain some of this in a little more detail, but Daniel, if you jump to the next slide, perhaps the most helpful thing to do now would be to take a step back and ask: what do we mean by an AI assistant that sits on top of your technical documentation? This is a mental model we've found works quite well for outlining how tools like Kapa have evolved over the last couple of years as the models have gotten better. Taking a step back: if your goal is to create an AI assistant on top of technical documentation, the first thing you want is to make it really easy to connect to your content, whether that's public documentation, internal documentation, or at times even less structured content, because that can complement it quite well. If you pair structured documentation with, say, support tickets, that can serve as really, really good grounding for an AI assistant that knows about your product. Once you've set that up, the next question becomes: where are your users and employees asking questions? A lot of folks still go to the documentation site, so you want to make it really easy to deploy an AI assistant on there. That's something we offer at Kapa; we'll talk about it in a bit. It's a super simple script tag to get an AI assistant live.
If you ask me where things get really exciting, it's on MCP deployments and custom builds, because if you index all of this technical documentation, you can start to build really, really cool experiences, like in-product agents and so on, where you can query this data and get answers directly in your product. The third pillar of a system like Kapa is essentially a good answer engine. We'll get to this in a second. If you build an AI agent on top of your documentation, it's pretty easy to spin up a prototype. But the second it starts to give bad answers, or makes something up, or confuses versions or complex terms, people lose trust. So from our side, we've worked rigorously for more than three years now, together with folks like yourselves and OpenAI and n8n and Nokia, to really understand how these models should behave, and we've designed our own evaluation suite. That means whenever we get early access to a new OpenAI model, we can very quickly determine whether it's better for technical documentation, because sometimes it's not. Sometimes these models are only good at writing code, or only good at other kinds of use cases. And then the final part of something like Kapa, and I think this really is maybe the killer use case: it turns out that once you have an AI assistant live, as we've seen on the Paligo site, people ask lots and lots of questions. People really engage with this, and there's so much more meaning and intent in the kinds of questions you get from a system like this than in what you were able to see from simple keyword search before. So you can do some really cool things to understand how to improve your documentation, meaning you don't need to have the perfect docs to get started. Maybe if you jump to the next slide, Daniel. Yeah. I think this is such a hot topic.
We talk about this all the time; I think Daniel and Rasmus and I have had conversations about this before, too. But where does something like RAG come in? RAG being short for retrieval-augmented generation. The way to think about RAG right now, and I'm going to butcher this quote, is a bit like the saying about democracy: it's terrible, but it's the best system we've been able to come up with so far. RAG has some parallels there. RAG has lots of limitations, but the goal of RAG is to build an AI assistant or chatbot on top of your knowledge sources that doesn't make stuff up. Because with RAG, whenever someone asks a question, say, about Paligo, you don't just rely on a model that might have been trained a couple of years or months ago on older versions of the docs, and how would it know anything about internal documentation, for example? Instead, you create a vector database and do lots of fancy agentic LLM stuff: you take the question, reformulate it, retrieve relevant content, and ultimately you get an answer grounded in your docs. And RAG matters a lot because it's the best way we have today of delivering a system like this. It's really good because RAG, as it says here on the slide too, builds systems, if it's set up correctly, with answers you can actually trust, with sources that are explicitly cited, so you can see what docs the model is referring to. And it provides auditability. So you as a technical writer can go into a platform like Kapa to understand: hey, what parts of our technical documentation are actually getting used? Where could we go back into Paligo to maybe change the way we structure the content?
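For readers who want the RAG pattern Emil describes spelled out, here is a minimal sketch of the retrieve-then-ground loop. It uses hand-written toy vectors instead of a real embedding model, and invented document snippets; it is an illustration of the pattern, not Kapa's implementation.

```python
# Minimal RAG sketch: index chunks with vectors, retrieve by similarity,
# then build a prompt that grounds the model in the retrieved sources.
import math

def cosine(a, b):
    """Similarity between two vectors (stand-in for real embedding search)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# 1. Index: each doc chunk is stored with an embedding vector (toy values).
index = [
    {"text": "To publish HTML, open the layout and click Publish.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Topics can be reused across publications.",           "vec": [0.1, 0.9, 0.2]},
]

def retrieve(question_vec, k=1):
    """2. Retrieve: rank chunks by similarity to the question."""
    ranked = sorted(index, key=lambda c: cosine(question_vec, c["vec"]), reverse=True)
    return ranked[:k]

def build_grounded_prompt(question, question_vec):
    """3. Generate: constrain the model to answer only from retrieved chunks."""
    sources = retrieve(question_vec)
    prompt = "Answer ONLY from these sources:\n"
    prompt += "\n".join(c["text"] for c in sources)
    prompt += f"\n\nQuestion: {question}"
    return prompt  # in a real system, this prompt goes to an LLM

# The question vector here is hand-picked to resemble the publishing chunk.
prompt = build_grounded_prompt("How do I publish?", [1.0, 0.0, 0.1])
```

The grounding happens in step 3: because the prompt contains only the retrieved, cited chunks, the answer can be audited back to specific documentation.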
Where could we go back into Paligo to think about how we optimize our metadata, so a model is able, potentially in the future, to pick something like this up? Cool. Maybe we jump to the next slide. And I believe this is my last one, so you won't have to hear me talk for much longer. No worries, Emil. No. No. No. It's good. Final thing I'll say, and this very much relates to the previous slide: you now live in a world where you can have a system that's grounded in your sources. Well, what happens if a user asks a question, let's say about Paligo, on the documentation site, to a model that has access to your docs, and there actually isn't any documentation to answer that question? It's actually pretty simple: in that case, the AI should say, "I don't know." And when the AI says "I don't know," what you get is a really, really interesting case of trust building. What we hear from our users is that the most important feature of Kapa is its ability to just say "I don't know," because this is what a standard LLM out of the box, or a quick prototype, will not do. And it turns out it's arguably the biggest feature, too. Because when it says "I don't know," you as a writer can begin to look at these groups of uncertain questions and work with them. We've productized this in the Kapa platform: here are the top coverage gaps, where you could actually go and write more content. That's really, really interesting, because now you have live, real-time feedback based on what your users are actually doing, what they're trying to do with your product. It gives you, the documentation team, and I imagine you, Rasmus, as a strategy leader, insight into what you could think about building in the future to enable your users to do what they're trying to do.
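The "I don't know" behavior can be sketched as a simple refusal gate on retrieval confidence. The threshold, scores, and example questions below are all invented for illustration; real systems decide this in more sophisticated ways.

```python
# Illustrative refusal gate: if no retrieved chunk is similar enough to
# the question, refuse to answer and log the question as a coverage gap
# for the documentation team. Threshold value is made up for the example.

CONFIDENCE_THRESHOLD = 0.75
coverage_gaps = []  # questions the docs could not answer

def grounded_answer(question, retrieved):
    """retrieved: list of (chunk_text, similarity_score) pairs."""
    best_score = max((score for _, score in retrieved), default=0.0)
    if best_score < CONFIDENCE_THRESHOLD:
        coverage_gaps.append(question)   # feeds the writers' backlog
        return "I don't know."
    return f"Based on the docs: {retrieved[0][0]}"

# A well-covered question answers; an off-topic one refuses and is logged.
ans = grounded_answer("Does the product support branching?",
                      [("Branching lets you...", 0.91)])
unknown = grounded_answer("Can it brew coffee?",
                          [("Branching lets you...", 0.12)])
```

The interesting output is not the answer but the `coverage_gaps` list: that is the live feedback loop into what to write next that Emil describes.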
That's a quick one from my side. I think if we go to the next slide, the jumping-off point here is that you guys actually brought some of these coverage gaps from your own Paligo deployment. Yeah. We did. We did. And I just want to put in a quick reflection here before we get into the next slide. Being a marketer, being in digital marketing for the past ten years, I've spent much of my time trying to understand what keywords people use to find the site I'm marketing, right? And it's ninety percent guesswork and ten percent tooling that's also guessing. With tools like Kapa, you actually get to see exactly what questions people are asking your documentation. I know it's not a perfect analogy, but having that insight removes all that guesswork: you fully understand what people are asking for, and whether you are answering it well, like thumbs up, thumbs down, or not answering it at all. That sort of coverage gap, and I'm not sure what Silvio thinks, but from my point of view, having this in digital marketing would be such a killer application, throughout any industry. I think it's super interesting. And just like Emil said, this is actually a screenshot from Paligo. This is from Q4: it's Kapa running on Paligo, and these are the coverage gaps identified in the previous quarter on our documentation. Here's an excellent way for Silvio, as a writer, to understand his users: he can connect these analytics, revealing all the questions people are asking, and make decisions on what content to develop, based on evidence of what people are actually asking, with no guesswork. So it drives a proactive content strategy, uncovering the coverage gaps and the poorly answered or low-quality questions.
And you know, I don't know, Silvio, is this a good tool... I'll get into some of that when I get to my slides, Daniel. Okay. Good. I think it's super interesting, and the fact that we've been running Kapa on our own documentation for six months shows it really works very well. So I'm going to shift over, zooming out from AI and tech docs and Kapa as such, and just briefly touch on how Paligo and Kapa fit into the AI workflow. I think we've covered it well already, but it allows a bit of repetition: Paligo lets you prepare that structured content for AI delivery and for delivery to your users. You use structured authoring, you get quality control with workflows, and you feel confident in the quality of the docs you build with Paligo. And like we talked about, AI readiness starts with this foundation: structured content, metadata-rich content models, and safe automation for confident answers. With Paligo, that governance is applied across all your teams and all your outputs. And I think Rasmus touched on this earlier as well, and Emil: remember, all AI engines will scrape your knowledge site. If your docs or your knowledge site are public, they will be scraped, the AI engines will be aware of your content, and your users will search for answers on how your product works. Right or wrong, if they can't get answers from your documentation, they will go to Claude, they will go to Gemini, they will go to ChatGPT. And if the answers are poor there as well, or if the answers are hallucinated or inconsistent, the product experience will be poor. So Paligo gives you that structured truth, and Kapa gives you answers you can trust, very quickly. And now, Silvio, over to you. You're muted, Silvio. Perfect. Can you hear me now? Yes. Okay. So I guess I'm here to talk more about the practical side of things: implementing Kapa into Paligo.
And really, it's been very, very straightforward, I'd say. Once you create an account, you make a widget in the account. From that widget, you can download a script that you then add to your layout in Paligo. Now, the script contains a bunch of parameters, like name, color, sample questions, and so on. That means not only can you have different values for different layouts, but the people who have access to the layouts can all change those parameters themselves if they want, so you don't have to go to your Kapa admin to get them changed. Now, I want to highlight two features. You see them on the slide here as well, and some of them you guys have already alluded to. The first one is language. I thought this was pretty awesome and impressive when I tested it for the first time: you can basically ask Kapa questions in any language, or probably not any language, but a lot of supported languages, regardless of whether or not you've localized your content. Users then get an answer in that same language, which is a huge help for some of our users. I remember our Japanese customers wanting to access content in Japanese, and now they can, in a way. The only drawback is that when you haven't actually localized your content, the links provided will still point to the English content. But that's something you can fix at a later date; at least now they have access to it in their own language. The second thing is what Daniel already talked about: the analytics. The reason they're so powerful is that queries are grouped into semantic clusters. So rather than giving me numbers on a keyword or set of keywords, which don't always mean anything to me, it groups them into meaningful themes. Which brings us to the next slide: the changing role of the tech writer in an organization.
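As an aside, the semantic grouping Silvio mentions can be pictured with a toy sketch. Real systems cluster on embeddings; here an invented keyword-to-theme map stands in for that step, and all theme names and questions are made up for the example.

```python
# Toy sketch of grouping raw user questions into themes, in the spirit of
# the semantic clusters described above. A keyword map replaces the real
# embedding-based clustering for illustration.

THEMES = {
    "taxonomies & filtering": ["taxonomy", "taxonomies", "filter", "filtering"],
    "publishing":             ["publish", "output", "layout"],
}

def cluster(questions):
    grouped = {theme: [] for theme in THEMES}
    grouped["uncategorized"] = []
    for q in questions:
        lowered = q.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                grouped[theme].append(q)
                break
        else:  # no theme matched this question
            grouped["uncategorized"].append(q)
    return grouped

groups = cluster([
    "How do I set up a taxonomy for my portal?",
    "Can I filter topics by audience?",
    "How do I publish to PDF?",
    "What does the moon weigh?",
])
```

The value for a writer is in the grouped view: a theme with many detailed questions signals where to restructure or expand content, which a flat keyword count cannot show.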
Now, if you look at how tech writing has changed over the past few years, you can see a clear shift from procedural writing to more of an editorial or architectural role, if you want. I remember starting twenty years ago, and back then it was really important to be able to write procedures, you know: step one, two, three, four, etc. That's not the case anymore, at least not to the same extent. We're becoming people who structure content rather than necessarily writing it. And the Kapa analytics can be a great tool in this, I've found, at least. They provide a gauge of user importance on a thematic level: I can track what topics, or clusters of topics, people are interested in. And as Daniel already mentioned, it serves up where there are gaps in your documentation. So not only can I as a tech writer see, okay, I have to fill in this information here, but I can also see how I can more strategically structure my content to serve not only the users better but also the AI, so that it can give better answers. As an example, if people search for the word "taxonomy," it can mean a lot of things. But if it's grouped into a thematic cluster like "taxonomies and filtering," with a bunch of very detailed questions attached that people have asked Kapa, I can do something with that and restructure my content in a way that makes it easier for people to find. So ultimately, the technical writer's work becomes more a matter of strategically structuring and delivering content. It's not just writing, it's not just grammar; you get another level of insight into how to use taxonomies, metadata, and conditional content. And that's basically it for me. Perfect. Thank you, Silvio. Okay. So, moving on and looking at some real outcomes teams are seeing. Here, our idea is to do a little Q&A. I'm putting you in the hot seat, Emil, and asking some questions that I'm interested in.
So, when you ask your customers what benefits they're seeing from Kapa, what is the most common answer you get? That's a really, really good question. The short answer is that about three come up pretty consistently: faster user onboarding, a reduction in support tickets, and more visibility for the writing team to understand where to focus. Maybe I'll break these down one by one and give some examples. For faster user onboarding and user self-serve, it's really cool to see. I can see in the chat we've got Joyce Fee; she leads docs over at Redpanda, an insanely fast-growing unicorn developer tool. I think Joyce has also just left a comment saying that for their users, Kapa is wildly popular; it's used a ton on their documentation site. And I believe it's seventy-seven percent: Joyce mentioned seventy-seven percent of users rate answers as either very helpful or helpful. So as a user, you spend less time worrying about the complexities of a product like Redpanda and more time actually using it and onboarding. From a support perspective, I recently spoke with a head of support at a large, publicly listed developer tool company. They reported that after starting to use Kapa, they saw almost an eighty percent decrease in documentation-related support tickets: the types of tickets that would normally come in and be answered with a link to the docs. So for them, that's been a huge win. Support folks can spend more time working on the complex cases and delivering more proactive support, as opposed to the boring link-over-to-the-docs type of support that consumed a lot of their time. Then finally, from a writer's perspective, you now have insight into where to spend your time and focus from another data source.
Of course, you had the interactions with engineers and customers before, as Silvio also called out. My favorite hard stat here is the Logitech team. They use Kappa on their quite technical B2B product sites. And they've actually reported a thirty percent increase in the number of articles they wrote last year with Kappa versus the previous year without it. Because now their folks don't have to spend as much time on all the manual, small support cases; they had a bit of a dual role. They can actually spend time producing more content, training the AI, ensuring it has better answers, cleaning up existing documentation. That's a slightly longer answer, but those are the three cases. So basically, in that last part, the tech support needs are getting kind of crowdsourced. Right? You're really getting what they need, what's missing, and what they want added to the already existing documentation. So that's awesome. Yeah. It's like a continuous marketing survey, really. Yeah. Exactly. You get fed the answers to your questions without even asking them. Awesome. So switching over a little bit: do you think customers are starting to understand, or already understand, what a dream team a structured content tool like Paligo and a great, specifically focused AI chatbot like Kappa make if you combine the two? I think people are slowly beginning to realize it. I'm so excited, because we've now partnered together on working with a couple of folks. And generally, we see that when folks deploy Kappa on top of a system that really encourages and prioritizes structure, like Paligo, it will just perform better out of the gate. We can actually see this in the numbers. I should probably have dug them up here. But one thing we track pretty aggressively is the uncertainty rate. 
So that's the rate of questions that come in that don't get an answer. And we just see that when you're using a system like Paligo out of the gate, you're way more likely to have a bot that's more certain. Because, to your earlier point, Rasmus, the more structure a system like Paligo gives you, the more metadata, the more hierarchy it has, the better the models will work with it, the better the chunking will be, the better recall will be through retrieval, and so on. So I think it's still probably early, but at least we're seeing it in the data. Perfect. The last one is: what do you say to users who still think it's hard to apply or use an AI tool for their B2B needs? Good question. I mean, I think: keep trying it. Keep having a high bar, but we're just seeing it on the internal engineering side here at Kappa. Right? This is a bit of a different use case, but what's possible now versus two months ago, with systems like Claude Code and some of these new breakthrough models, is so wild. If you had a bad case of an AI saying something wrong a year ago, give it another shot. These models keep getting wildly better, and people keep getting better at figuring out how to build tools on top of them. Perfect. Thank you, Emil. Okay. So my last slide here is a little bit about preparing your documentation for AI readiness. And it's almost a conclusion of what we've already said. We have always promoted that, when it comes to technical documentation, you invest time in structured topics and metadata, because that has historically paid off in a tool like Paligo in terms of reuse and other productivity gains. But we're seeing the investment you've made in those things really pay off for AI too, because then you get an AI that really performs well. So make that investment in the things that have been important; they will still be important in the age of AI. 
I think another thing that's important is to think in questions, not in pages. And Q&A's importance is going to increase as we go forward, because this is what the AI is looking for: questions and answers. And I know that's not only true for technical or product documentation, but definitely something Daniel and I have discussed in terms of the marketing of Paligo and similar. So again, think about the questions-and-answers part of your technical documentation. One thing that's always difficult, but the more you can do it, the more gain you will get, is to try to align different teams on terminology and sources. And that goes for the tech support team, your product teams, your dev teams. The more you can do that, and the more the technical documentation team can be at the forefront and drive that process of aligning terminology across the company or the organization, the better. Another thing, and this is just repeating, but again: let's go back to real user questions. Put the priority on the questions that the user has. And obviously this is where Kappa really shines, as we've talked about, to make sure you answer the right questions. Because you can always try to guess what the questions will be, but in the end, those crowdsourced, so to speak, questions will be the ones the customers really want answered. And lastly, a bit of a no-brainer, but it's always good to repeat: make sure you put AI in your doc strategy. Not necessarily at the center of the doc strategy, because the stuff that was important in your doc strategy before is still going to be important. But don't make it a side project. Don't make it something that you're doing on the side and adding on top. Make sure it's inside your doc strategy, and create your docs to also be consumed and visualized by AI. 
So those would be, very briefly, some recommendations on what you should do. Yes. Good. Thanks, Rasmus. And now, Emil, your final take. I'll keep this insanely brief. Trust is everything in these systems. Whenever you're looking at a tool, or thinking about docs and AI strategy, really look for tools that can give citations. Look for tools that ground in your documentation and understand how they should use your metadata and the structure that you've spent so long really nailing down in a system like Paligo. These models can really make use of this. I think the second thing is that analytics are really, really important. It's by actually engaging with this content that you can get insights: Daniel, to your point, for keywords; Rasmus, for strategy and product direction; Silvio, for your documentation and where to continue writing more docs. Analytics are really important. And then the third and final thing I'll say is: don't be afraid to launch these things even if you feel like your docs aren't in a perfect AI-ready state. Any good AI system that sits on top of your docs will be able to work with almost any doc set to get started, and then it'll help you figure out how your content structure should evolve. So when you're doing your work and editing your docs, you can guide that based on real information. Good conclusion there, Emil. Thank you. And this is the final slide; I just want to send this one away. With Paligo and Kappa, you get a strong foundation for your knowledge. Paligo lets you build a structured source of truth for your users, for your customers, and for AI. And with Kappa, you get those customized answers, the trust in the answers, that your audience is looking for. So get out there, review your current content foundation, and identify whether a high-impact use case like Kappa is applicable for you. 
And with that, I just want to open up for questions. I see there are a lot of questions from Andy here. But first, before questions, big thanks to Rasmus, to Emil, to Silvio for this super interesting discussion. And, Andy, welcome back. Let's dig into the questions. Yes, we have a very lively Q&A section. And folks, just so you know, we do have a hard stop at the hour. But keep submitting your questions; we will gather them and get back to them in time if we don't get to them right now. We'll do our best. So we're going to start with Martin: how do video tutorials relate to the AI tech doc future? I mean, don't customers as of twenty twenty-six tend to prefer quick video tutorials to the text information in manuals and HTML helps? I can start here, and then I would love to hear the perspective of the rest of the team as well. I think it will become easier to generate video tutorials. Maybe some of the AI video generation tools are still a little too crude to do that well, but over time they will improve, which means that one option would be to create video tutorials based on the technical documentation. But again, that goes back to having proper, accurate, structured technical documentation; without it, that video tutorial will probably not be that great. And I think that's going to be one of several different ways you consume technical documentation, one among the others that are also available. Alright, any other commentary, or shall I? Yeah, I can reflect on that from my side. I think video tutorials still have a valid place in documentation. 
But to the point of what we've been discussing: when you have a very specific question, or a very specific answer that you're looking for, and there's no obvious video tutorial for that topic, or it might be there but it's six minutes in and you have to scroll your way into the video, the more you use AI tools, the more aware you are that the answer is probably faster in Claude or in a tool like Kappa. So I think video tutorials, from my point of view, are still very applicable. But looking at how I use tooling today, in marketing for example, I tend to look for specific answers, and for the ones that cross systems, with integrations and such, I will look for them using AI tools. All right. The next question is from, let me just go down, okay, from Tim: is there a recommended way of chunking Paligo content? The XML part is a lot of tokens, which makes it harder to store the content in the vector database, in my experience. I don't know if this is too technical a one, but, Silvio, I'll throw you under the bus for this. Oh, yeah. Thank you. No, this is way too technical to get into here; we can't provide a full answer. The one thing I can say is that I did not do anything with our content before we implemented Kappa on our side. So we didn't have to restructure anything there or do something with chunking, et cetera. But there is more to say about that, definitely. Alright. And, Tim, we'll get back to you with a more detailed answer later, since it would take up quite a bit to answer here. From Bilal: how do we get our content audited to see if it is AI ready? Sure, if helpful, I'm more than happy to take this one. It's super hard to give a general answer. We usually just say: just try it. Connect it to a system like Kappa. Ask it ten, twenty, thirty questions. Try a few support questions. Try a few questions you would expect a user to ask, and see how it performs. 
The worst that'll happen is it'll say "I don't know" a lot, but even then you already have some guidance as to where to start. That's also how we approach it when folks reach out to us. We say, hey, AI tools are new, we recognize this. So we'll let you try Kappa for free for two weeks on your content, whether that's internal or external, just so you can get comfortable and see if your content is AI ready enough to a point where you'd feel comfortable exposing this to users. I guess I'll extend this offer here: if that's of interest, just shoot me an email, or go and sign up on Kappa, and we'll take it from there. Yeah. Just on that last question, I think that content audit, like you say, Emil, you have to do it yourself. There are no proven models. In my experience, ingestion of content into AI engines, once you point it in the right direction, is very quick. It's consumed and ingested within hours, and you can start testing and trying out how well your content is ready for AI. Great. So from Susan: how do you identify high-impact AI use cases? Do you have any general rules? Is this for you again, maybe, Emil? Yeah. I think it really depends on what kind of business you have. If you have documentation, chances are someone's reading it. So the first step I would take is to start there: add some sort of AI chat interface that sits directly on top of your documentation. A common extension after that, as Joyce from Redpanda was nice enough to mention in the comments, is to lean into support. That's a common secondary use case or deployment. Another one we see that gets a lot of interest is creating an AI agent that sits on top of this, as you can do with MCP and some other protocols. But start with docs and then expand from there. Alright. 
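Emil's audit approach, ask the bot a batch of realistic questions and look for "I don't know," can be sketched in a few lines. This is an illustrative sketch, not something from the webinar: the `ask` function below is a hypothetical stand-in for whatever Q&A endpoint you deploy, stubbed here with canned answers so the scoring logic is runnable as-is.

```python
# Hypothetical sketch of a docs "AI-readiness" audit. `ask` is a stub
# standing in for a real Q&A endpoint; replace it with an actual API call.
def ask(question):
    canned = {
        "How do I publish to HTML5?": "Open Publish and choose HTML5...",
        "Does it support SSO?": "I don't know.",
    }
    # Anything not covered by the docs falls back to an uncertain answer.
    return canned.get(question, "I don't know.")

def audit(questions, unsure_marker="i don't know"):
    """Return the share of questions the bot could not answer,
    plus the list of misses to prioritize in the docs backlog."""
    misses = [q for q in questions if unsure_marker in ask(q).lower()]
    return len(misses) / len(questions), misses

rate, gaps = audit([
    "How do I publish to HTML5?",
    "Does it support SSO?",
    "Can I export to PDF/A?",
])
# `gaps` is exactly the guidance Emil describes: where to start writing.
```

Run internally for a couple of weeks, this kind of loop gives you the uncertainty rate mentioned earlier as a single trackable number.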
From Mary Jo Trepani: do you have recommendations for how to prepare for AI agents, for example using particular elements, and if so, which elements to use? I mean, this goes back to having a good structure in your technical documentation in general. I think giving that specific a recommendation here is very hard without seeing the specific documentation. We do have a solution engineering team that also has information architects on it, so we have the ability to go in, help, and do an evaluation of how well the structured content is done and what type of taxonomies or other things you can use in your documentation. But to give general recommendations, at least from my side, is quite difficult. I don't know, Silvio, if you can say anything? Yeah. Nothing specific, just that the more information you put in there, the better. If you label a GUI term as a GUI term, or a GUI label in our case, it tells the AI agent interpreting that information exactly what it is, what it's used for, whether it's a parameter, and things like that. So the more structure you can give to your content, the better, and the more an AI agent can do with it. Alright. We might have time for one, maybe two more. Jackie: we create our content in Paligo but publish everything through Salesforce. Are there any options available for us, considering this environment? Yeah, I mean, I know Salesforce has options in terms of AI agents. I do not know if Kappa actually works off information published by Salesforce, but as long as it's publicly available documentation, I assume you can make sure that that's the defined content it's using and use it from there. Right, Emil? Yep, nailed it. We also have prebuilt Salesforce integrations to pull it off directly there. But usually, scraping it publicly does a slightly cleaner job. 
This relates back to the XML question as well. To be honest, from our side, we try not to care too much what the original format is. We just want to look at the format as it's presented to the user in the end, how it's rendered, and then we'll find a way to crawl that with the platform. That also ensures that what the AI surfaces is how it'll be rendered to the user, and you can backlink insight into that as well. Alrighty. We'll go for one more, from Carrie: Emil said you can refine an unready content structure based on the feedback from the AI system. What sort of things would you look for to guide your feedback? Great question. I think the quick thing to look for is cases where the model just says "I don't know." That is the total eighty-twenty. So put a system like this live, try it internally for a couple of weeks just to see what kinds of questions people are asking and where the model is not confident. Make a tweak to your documentation if there are one or two questions that come up a lot that the model is unable to answer. And then after that, just release it into the wild. The worst that'll happen is the model will say "I don't know," and then you can work from there. By the way, I just love this "I don't know" kind of answer, because, one, it shows confidence, and two, it really builds trust. So that's where we're so aligned in our thinking between Kappa and Paligo. Really like that. A hundred percent. It's awesome. Alright. Well, we're at the hour now, and we have a lot of questions that we have not gotten to in this session. However, don't worry: we have logged the name and email address for every question that was submitted. So over the next couple of days, we will reach out with a fuller answer to everyone we didn't get to. And thank you so much for joining us. 
If you only made part of the presentation, or you want a colleague to see this, the recording will be sent out to all registrants. It will also be uploaded and available at paligo dot net slash webinars in the next couple of days, so don't worry. And if you have any other questions that you'd like to submit at a later date, contact at paligo dot net is how you can reach us. And if there are any questions for Kappa, we will forward them along to our friends. So thank you, everyone. Thank you, Emil, Daniel, Rasmus, and Silvio. And hopefully we will get together again sometime. This was a ton of fun. Thanks for organizing. Thank you. Bye. Excellent. Bye. Bye bye.
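As a footnote to Tim's chunking question, which the panel deferred: a common approach, shown here as an illustrative sketch rather than anything Paligo or Kappa prescribes, is to split a topic-based XML export per section, strip the markup so each chunk spends its token budget on prose instead of tags, and keep the section title as retrieval metadata. The element names below assume a DocBook-like structure and are hypothetical; adapt them to your actual export.

```python
# Hypothetical sketch: chunk a DocBook-like XML export by section,
# keeping the title as metadata and dropping tags to save tokens.
import xml.etree.ElementTree as ET

SAMPLE = """
<article>
  <section><title>Taxonomies</title>
    <para>Use taxonomies to classify topics for filtering.</para>
  </section>
  <section><title>Filtering</title>
    <para>Filters narrow publication output by metadata.</para>
  </section>
</article>
"""

def chunk_sections(xml_text):
    """Yield one (title, plain_text) pair per section element."""
    root = ET.fromstring(xml_text)
    for section in root.iter("section"):
        title_el = section.find("title")
        title = title_el.text if title_el is not None else ""
        # itertext() flattens nested elements to their text content,
        # so the embedded chunk contains prose only, no markup.
        text = " ".join(t.strip() for t in section.itertext() if t.strip())
        yield title, text

chunks = list(chunk_sections(SAMPLE))
# Each chunk now carries its section title as retrieval metadata.
```

One section per chunk keeps chunks aligned with the topic boundaries the authors already chose, which echoes the panel's point that structured, hierarchical source content tends to chunk and retrieve better.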
