Video: Discover your AI maturity stage (and what to do next) | Duration: 3352s | Summary: Discover your AI maturity stage (and what to do next) | Chapters: Welcome to Templafy (4.4s), Content Strategy Creation (23.245s), Introducing Templafy Talks (77.695s), AI Maturity Overview (168.025s), AI Maturity Stages (233.33s), AI Maturity Assessment (346.34s), AI Strategy Assessment (421.815s), AI Strategy Implementation (566.525s), Focused AI Implementation (670.845s), Operationalizing AI Effectively (785.315s), Assessing AI Pilots (953.47s), Data Readiness Challenges (1119.7s), AI Instruction Importance (1329.905s), AI Integration Strategies (1768.71s), AI Adoption Trends (1854.83s), Widespread AI Skills (1939.375s), AI Implementation Challenges (2073.78s), AI Governance Challenges (2434.065s), AI-Powered Document Creation (2797.255s), Contact for Demo (3240.87s), Concluding Thoughts (3265.64s)
Transcript for "Discover your AI maturity stage (and what to do next)": Welcome everyone to Templafy Connect. What we really wanna create here is an opportunity for everybody to get to know one another, get to know people you work with at Templafy, get to know other clients at Templafy, share ideas, have fun. Hello, everyone, and welcome to Templafy Talks. As we go through creating a presentation, we validate the user input. We create the content strategy for the deck. Let's see how it works. I'm just gonna copy the prompt as we see it here and see how well the product actually performs. What I've really enjoyed is, one, learning more about the product and the new capabilities. Two, seeing real examples from real customers that are using it, understanding how they're using it. And the third thing is really getting hands-on with a real demo. Hello, everyone, and welcome to Templafy Talks, the first Templafy Talks of 2026. I hope you enjoyed that little look back at 2025. I'm your host, as always, John Taniakos, and we've got a lot in store for you in 2026. No spoilers yet, but keep in touch on LinkedIn. We'll make sure we give you little previews as time goes on. Today, we're gonna be discussing AI maturity and what that means for organizations. Now, folks new to Templafy Talks, welcome. For those of you that want to watch previous Templafy Talks, visit our website. I think one of my colleagues is gonna post in the chat where to find them. For those of you that have visited us before, welcome back. Now, as always, I'll start with a quick guide to the house rules. This webinar is powered by Goldcast, and as you know, you can use the chat to interact with us. This is live. And to test the chat right now, usually, when I do this, I say, where in the world are you?
I'm gonna do that, but today, given the themes, I'm gonna ask you where you're from and what your favorite AI tool is, or the one you use the most, and I'm gonna post mine. I'm from Copenhagen today, and I use the Templafy AI assistant most in the work I do. So your turn. Tell me where in the world you are and what tool you use. Awesome. Very exciting. Okay. So today's topic, as I mentioned, is AI maturity. Now, the AI world is extremely fast moving, and few companies actually stop to review where they currently are or even where they're going. To help you get a good idea of your AI maturity, we're gonna break it into five digestible parts, and as I mentioned on LinkedIn, and as you would have seen in the introduction via email, this is going to be interactive. So the topics we'll discuss today are why your maturity matters, how data factors in, the level of AI expertise for your users and across the business, how your AI deployments work, and, of course, governance. You'll be happy to hear that I am not alone in this. I'm actually gonna be joined by two guests from Templafy today. The first is a regular at this point on Templafy Talks. He's our esteemed co-founder, Christian Lund. Welcome back. Thank you. And for the first time, this man is a member of our account management team, our sales team here in Copenhagen. He's here to share insights from the many, many discussions he's had with customers. And this was actually his idea. So, Oliver Goodmason, welcome to Templafy Talks. Great. Glad to be here. Awesome. Thank you. So, yeah, exciting times. This is live, so lots can happen, which is really exciting. I have seen a couple of Templafy mentions among the most-used AI tools, so that's exciting. Christian. Yes. When we talk about AI maturity, what do we mean, and what stages do you think companies are currently at? Yeah.
What we mean by AI maturity is where our customers, or the market in general, are on that journey. Because I think a while back, everyone agreed that it's something you have to have. You have to get to some level of maturity; not using it isn't really an option, at least for the types of companies we work with. In enterprises, that's just a prerequisite by now. And a lot of businesses are currently trying to figure out what that actually means and what they can use it for, which has been a big thing on its own. And we're trying to figure out if they're actually able to achieve that and get there. And to do that, you need to look at the maturity stages, and I think we'll be discussing that a little bit more in terms of what that means, and inputs and data and structure and governance and all sorts of other stuff. So it's around that area. Awesome. Yeah. Exactly. And I think that's something that we're gonna go through today. It's not about answering everybody's questions or, you know, solving all the problems. It's essentially about considering things that they may not have done before in their organization. And this brings us to why you thought this would be a great idea for a Templafy Talks: you work with a lot of our customers, and you meet many different people, from prospects to existing customers. So how differently do they view maturity? I think that about a year ago, when we really started banging the drum for AI capabilities in Templafy, it became apparent that there's a massive gap between how users see themselves and their maturity. Some folks are really far along and have built multiple models and RAG models, and they've trained all sorts of different things, and they're really, really far along. But they sat there thinking, yeah, we're barely scraping the surface. We don't really have a structure yet. We're not that mature.
While other customers had maybe done a pilot of some sort, maybe a Copilot pilot, and thought they were very advanced. So it was really difficult to find a way to connect with these different people because we were not really on the same page. So we thought, what if we had a kind of maturity model to figure out where we can deliver the most value to you when we talk to you, and how we resonate with each other? So that's kind of why I thought this was a great idea: to figure out how we together assess the right starting point for a conversation so we can deliver something that matches your expectations. Awesome. So why, Christian, did we choose document and presentation work as a lens for assessing AI maturity in this case? Well, first of all, honestly, because that's what we know. It's been our business for many, many, many years, so we know a lot about that. But there's a deeper purpose to it as well, why you can use that as a general way to assess, because it's one of those areas where you have a user on one end that typically has requests or things they want to achieve, going through similar types of processes, but at scale. So in companies with a 100,000 people, everyone is doing it. And when you put AI into the mix of that, you kind of need to look at those areas, like, for example, business documents, where you have those types of scenarios where a lot of people are doing similar types of things. And then it's a good place to look when you wanna assess where, and to what extent, AI can actually help through the process they're looking at, at scale. You also have the other scenarios where you can put AI to very specific tasks for very specific people inside an organization, and we think that's interesting as well.
But to really assess companies in a bigger picture, at a greater scale, it's good to look at these things happening at scale that really affect all employees across organizations. Awesome. Well, thank you. Just for the viewers, that is Christian Lund, not Isabella. Isabella is our behind-the-scenes person, and I just saw her name pop up when Christian was talking. So, you know, this is the magic of live entertainment. So we're gonna go to our first poll now. On the right hand side, you'll see a pop-up. I'm gonna open it. And the first question we want to ask, and remember, our frame here is just around documents and presentations and how we work there. If you click, you'll see it now where the chat is. You'll see a poll with a little red dot above. You click that, and we have our questions. So how would you describe your organization's AI strategy? And, again, what I'd like you to do is tally up your points, because at the end, we'll give you a score, or you give yourself a score, and we'll go from there. Okay. So the first one is around strategy. When organizations say they have an AI strategy, Christian, what usually sits behind that statement, and what is often missing? Everything and nothing, a little bit like what Oliver was alluding to before. I don't think we meet any customers today that would say they don't have an AI strategy, because there is definitely consensus that this is something you need to get your head around if you want to be in business five years from now. So it's just a prerequisite to have that. But what it actually means can be very, very different, from "now we need to figure out what our strategy should be" to the much more advanced scenarios where you actually have a very, very full understanding of what you are trying to achieve with AI.
Typically, that means assessing your business as is, everything that you used to do before: how is that affected by now applying AI into the mix, doing that deep analysis of what it means, and then really assessing where AI, as a new capability that didn't exist before, could actually be very helpful, and then building the strategy around that to really make it happen. So we see everything in between, and it's also part of the discussions we like to have with our customers and everyone we meet: to just agree on, let's figure out where you are currently. And there are definitely extra stages, new stages, other places we can help you go to, but it doesn't really make sense to go far too early. So we like to take the discussion and meet our customers where they are and start there, and be helpful in sharing learnings from what we've seen similar types of companies doing at similar stages, how they got to the next step, and how we can be helpful in different ways. Yeah. And coming back to the customer lens, Oliver, how do you help leaders connect and form a strategy? Because you had a really good point there at the start when you said some people you speak to think they're further ahead than they are, and others don't. So how do you help support them and guide them on the strategy piece? I don't see a problem in testing the waters. We're very familiar with that approach, and we try to guide them in figuring out what works. But in general, I would say at least identify a core use case in the business. That could be how we go to market, how we do a pitch deck, how we build internal presentations: finding something in the process that takes a lot of time for the business, and focusing on that.
Because if we do that really, really wide approach where we just enable something and then everyone just goes at it, everyone gets disappointed. So it's about figuring out, is there a specific thing we can focus on, then be really, really good at that, and then build on it? And it doesn't have to be a big team or for everyone. It could be for 50 people in the organization, or it could be even smaller. It's just about having some success along the way. Because I think we were all there two years ago when Microsoft went on stage and presented Copilot as this great new thing. And we all know that what we saw back then was the ambition, but not necessarily where the technology was. But it was the aiming point that everyone has been looking towards for the past two years. The technology is kind of there now, but it hasn't been for a long time, so we've had a lot of experience with customers saying, hey, this is not what we were expecting, not just with Templafy, but with many AI tools all over the place. So it is very much about focusing on specific use cases and trying to be really good at those. That's my suggestion. And, Christian, what takes a customer from being able to start, to operationalizing their AI rather than just talking about it? Yeah. As with everything in business, I would say it starts with what you're trying to achieve, and asking those questions. Because, obviously, one thing we saw a couple of years ago, and still to some extent, but less and less, was, you know, the whole excitement and the hype around, oh, it's amazing that AI can do all these things. It does a lot for me in seconds. And what was missing for a long, long time was actually that question: that's great, but what are you trying to achieve with it? So, really starting to get closer to those answers.
As an example from our world, it's fantastic if you can build a deck or a presentation that used to take three hours to build in twenty seconds. It's great with the speed, but you have to have the other axis following as well, which is the quality, as an example. So as you get closer to the real use cases, you start questioning as well whether AI is even the right tool to use. And if it is the capability you wanna use, then you have to match what you're trying to achieve. So, you know, when companies are starting to do that and looking, again as an example, at specific document types like pitch decks or proposals, you know what good looks like. And if AI is the new solution that would help you get there much faster, you kind of have to get the other things as well. It has to come with similar accuracy to what you used to have, at least as high quality, and in a way that is much more repeatable and under much more control, typically. So that's when you start asking the other questions on how do I then make AI work towards that. And, speaking to many, many customers and in general looking at what's happening in the market, that seems to be the big transition going on right now: AI, the models themselves, have matured very, very quickly in terms of what they're able to do, but they're also very comparable. And what seems to be missing a lot, especially from a business side, is how to put a lot of orchestration around it, to get it closer to what you're actually trying to achieve in a consistent manner. So those instructions to the agents that don't only rely on a user-given input seem to be a big discussion point in many, many businesses right now, also outside of just document generation.
That's kind of the next big thing for mature companies: you have to make sure that it actually helps you do what you're trying to achieve. Otherwise, it's not a help. It's just a liability. Thank you. So I've closed the poll, as you can see. I'm gonna share the results now. So the winner in this case, with 48%, is: we're rolling out pilots, but without a central strategy. It goes to your point, Oliver. You talked about customers and prospects you're talking to trying things out, and there's nothing wrong with that. But from a strategy perspective, what should they be looking at? How could they be framing their pilots around specific goals? Well, to what I just talked about, obviously, they are trying out these different pilots in different areas for a reason. So they are doing the right things. They're trying to identify pockets in the business that can use the help of AI. So I'm not surprised by these answers. They fit quite well with the assessments I do myself with customers: yeah, you are doing what you can right now, because there is probably not a central strategy overarching everywhere. So I'm not saying it's a bad answer just because you scored two. That's probably where most businesses are right now. But as I said before, if a customer can tell me why they're doing it, like, what is it we're trying to solve? Is it because the output is bad right now, or is it because it's too slow, we can't produce fast enough? If they can tell me that, then we're already a very, very long way. Yeah. I can chip in as well. It's almost been a little bit annoying to watch some of these pilots going on, because what's been lacking is what you're actually trying to achieve, as was said before. And it continues to be a problem. If you don't know what you're chasing, how can you then put an assessment after it?
So it's really, again, back to figuring out: you come from a place, you need to go to a new place. If you want to achieve this, what is actually required to get there? And to a large extent, it's been AI for the sake of AI. AI has been the big word, which is actually wrong. You still have the same problems, but you now have new capabilities to potentially solve them. And it's mostly about looking at where it is I can apply these things that I wasn't able to do before. Those are the pilots that typically go best, because they're attached to dollar amounts ultimately, things you can actually measure. If I do this, I will get better results, save time, reduce risk, or generate revenue. And it's always been like that. That's not new. But it got lost a little bit because the AI word just popped up as the most important thing. I think that is changing now, which is good to see. Thank you. I'm gonna open our second poll now, and this is around how data is managed across your organization. So you can read those. I'm not gonna read the list out. But why does data readiness become such a blocker for AI and document workflows specifically? Well, it's because this is totally not new. The whole garbage in, garbage out analogy is just, you know, so obvious here. What I like to do is take AI out of the mix for a second and ask, what would you do if you didn't have AI? If you ask a person to do something, and you don't have the foundation with the right data to work with, the right content to work with, the right direction to work with, then you don't get good results. And what AI is doing is essentially replacing a lot of that. So it's the same. You know?
If you don't have good data structures for it to source from and to do assessments on the back of, then, of course, you don't get good results. And it's back to the problem of being impressed with the speed. Obviously, one thing we've seen a lot is that AI is typically so eloquent at putting things together that look right, but aren't necessarily. So it's about making sure that there is actually a very good data structure where you can source your information from, not only data, actually, but also content, best practices, things you know work really well, and then combining that with the ability to make sure that when there is an intent from a user to do something, you orchestrate, or you instruct, the AI to actually go and source its information from those areas where you trust the output. If you don't do that, you can't win, at least when you get to the deeper business use cases where you have to rely on the output in a consistent way. Fantastic. And so, Oliver, what's the biggest misconception organizations have about how ready they are with their data at the moment? Well, I think there's a lot of data that is not AI-ready yet. And that's okay. I mean, you don't need your entire output to be AI generated and consumed with AI yet. That's maybe a big misconception: that if we're going to adopt AI into our workflows, then the entire thing is just going to be pitch perfect with AI. There are probably going to be some workflows that still require a human, luckily for all of us, and some where AI maybe later will do a better job consuming that data. So when you pick a partner to work with for AI, figure out if that partner is able to extract some of your data. That could be your foundation for the formatting, or how your visual identity looks.
It could be specific understandings of what good looks like in your business: how you communicate, what's your tone of voice, how you present your products. So there are definitely some areas that are a lot easier for AI to consume right now, because you can instruct AI, you can give it documents, you can give it PDFs to read from, and other areas that are far harder. And if we try to go all the way at the beginning, then it's ultimately kinda gonna fail a bit. So a lot of my role is to guide folks on what our ambition is going to look like, how we're going to make it a success, and what we should strive for later. Fantastic. And you talked a lot there about instructing AI. If a customer at the moment doesn't have data that's well structured, what practical tips could you give them, Christian, around how best to structure that data, how to use that data to inform models, etcetera? Yeah. There are actually two sides to your question, because one thing is to make sure that you have, I would say, trustable data, that you're actually comfortable with it. And there's a ton of things there; that's probably a deep dive of its own, how to do that. But then there's the other thing that is actually easier to get to, which is more how you instruct. Because, again, one of the problems that we've been seeing is that there's been too much reliance on a person, a user, needing to be really good at putting in a prompt and then sending that directly to a model or to an agent. And there's so much getting lost in the translation in between. So we've been using an analogy.
If you take the best model that is now available out there, of every model that is, and you give it a user prompt with 10 words, and you compare the output from that with a very dedicated brief, a long instruction, almost a recipe, given to an agent that is much less smart, maybe number 100 on that list, you'll still get a lot better results from the second one. So what has really been underestimated for quite a while is the instruction piece. The user has an intent. The business picks up that intent: I understand what you're trying to do, but I'm gonna put a little bit of guardrails on it and extrapolate what you're saying into what is essentially an instruction to agents, so we get much better quality in the output, and we get much, much more consistent outputs as well. So that piece is equally important as the data itself: being able to really instruct this AI. AI is kinda dumb. It has extreme knowledge about everything that has ever happened, at least looking back, but it's not necessarily smart enough to do that assessment itself. It kinda needs some help in the middle, especially if you take it into enterprise requirements, where you have both the quality and the consistency requirements on the output. Anything to add? No. Perfect. Okay. So we'll go back to that user intent and users themselves in a moment, but I do want to share the results of the poll. So, again, it looks very, very similar in the middle. I think the largest one that we've got, with the most votes, is: we have clear data governance policies in place. But what doesn't have any votes, which I think is really important, is: real-time data systems enable AI-driven decisions. That was the five-point answer. That was, you know, the top one we were looking for, and nobody has that in place. What does that say to you, Christian? Maybe it's something to look at. There you go. Awesome. Okay.
So I'm gonna stop sharing this poll, if I can find where the stop sharing is. There we go. Good. Proof that it's live. So the next one is: what does AI expertise look like at your organization? You know the drill by now. While we're talking about expertise, Christian, how important is expertise and broad AI literacy across an organization? To touch on your point again, the user is the person in control. How much do they need to know? Yeah. Starting from the user, what they need to know, I would say, is almost the least possible. Early on, there was this idea that we need to turn everyone into a prompt engineer and so on. And we even saw it, I think, originally with Copilot. The idea probably was that, you know, the pilot to the Copilot would be the user. And I think they've changed that perspective a little bit, because it's too big a task. Right now, what we're seeing is, again, a lot more users focusing on their destination. What are you trying to achieve? Give me your intent. That's kind of the level of expertise they need, which is great because it hasn't changed. You just need to tell us what you need to achieve. That's it. And when you then go to the other end, where you have more of the administration, I would say the deep expertise on the AI is typically also not that necessary. What you need to understand is, of course, what you have available, what AI can do for you, and, not least, again, connecting what users are trying to achieve to what you wanna orchestrate. So it's mostly about continuing to stay very close to the business intent and making sure that that's where you have your focus. Mhmm. And then, of course, you build that into understanding how AI can be helpful. You will, of course, need the deeper expertise in some of the more mechanical parts of AI.
But to achieve good business results, I actually think the other side is very underestimated as the place where you should be putting your focus. It's back to classic business knowledge of what you're trying to achieve and how to put that best in place, but now you have a new partner to work with, which is AI, and you need to put that in play in the best way. So it's a new task, but it's not necessarily deep knowledge of AI. That makes sense. You're an end user of AI tools. Obviously, we have a lot available to us. Which ones do you use the most? How do you find the enablement for them? And what advice would you give to organizations that have the same tools? Well, I'm a user of Templafy, so I mostly speak on behalf of what Templafy makes available to me. I have a thing with writing emails fast and then spending too much time rephrasing them afterwards, because, well, maybe the sentence was a bit weird, or, well, it came across too blunt. And for that specifically, as Christian talked about, not everyone has to be an expert in prompting. If we can help end users speak better to AI by predefining prompts, that's one thing Templafy does. So I use that a lot. I use it to rephrase my sentences, to make them sound clearer to the recipient. Also, improving my text before sending, so that's something that's been around forever. Like, something like Grammarly that's been here for, I don't know, many years. I also use that a lot to rephrase my emails. So that's email. And then for product questions, security questions for the solutions I'm selling: in the old days, I would have to reach out to a security person at Templafy and get them to explain X, Y, Z to me, while now all the information has been uploaded and consumed into an AI model. So we don't have to go back and forth internally to get the answers, and we can work a lot quicker. So that's some of the things I use daily.
And then I use it a lot to produce outlines for my presentations or for my proposals. I'm not expecting, like, the fully fledged proposal that does everything for me yet. I'm expecting the right structure so I can add my own words and value into that. But just having that structure, saving maybe 20% of the time, well, if I do that a few times a week, that's maybe a few hours I get back, and I can focus on other things that need my attention. Sometimes actually building the structure is the part that takes the most time or consideration. It's a hassle for me. Yeah. I mean, I'm not a PowerPoint expert, quite the opposite, so it helps me get the structure right. Yeah. Awesome. And so, Christian, what's one behavior that shows AI skills are becoming part of the culture rather than just isolated? One behavior. Whether it's a behavior or not, it's more what we're seeing: the most forward-moving businesses that are very successful with this are the ones that are putting AI tools in the hands of their users that require almost no behavioral changes at all. That's the best example. So they're starting to use it without almost even noticing, replicating what they would be doing in their private way of working. One thing that happened a couple of years ago that just dramatically changed a lot of things around software in general was ChatGPT. Because when that came around, incredibly fast, a lot of people adopted it, and they started using natural language to get further. So a big transition in technology that we've seen, including ourselves, is to just do that, have that be the starting point, to allow people to continue what they do inside of the tools that other vendors are providing, but then, obviously, providing more specific or use-case-specific outputs, like, in our case, business documents.
But you sort of latch on to the behavioral change that has just happened in the world when you start doing that. So there's a general behavioral change, and the businesses that are doing this best are the ones that are providing those types of tools to the users that allow them to continue what they were doing already. Great stuff. Should we take a look at the poll? I'm excited to see this one. Okay. So we had the five options in this case. No in-house AI skills currently: one vote there. Well, you definitely need to get in touch. A small team with basic AI knowledge; a dedicated AI/ML team growing expertise; and then AI skills are widespread across the organization, 22%. And at the top tier, it was 11% with an AI innovation culture existing in every department. So the one to drill down on for me in this case is: AI skills are widespread across the organization. I think it goes to the point you just made about not having AI tools for the sake of AI tools, but having them built into the processes people are already using. And one thing that sticks out for me these days is that there are a lot of solutions out there that make promises that they can go from a prompt to a presentation. But it's not a presentation as we're used to, right? It would be, for example, in a different format, or a PDF format. One thing that we're focusing on a lot is our ability to, you know, reduce the level of expertise needed, because we're outputting a PowerPoint, something everybody has known and used for the last thirty years or so. So I think that's a really important point to get across from the skill side of things. What do those results say to you, with regards to widespread being the fourth? Yeah. It's kind of distributed equally across. So to me, that says that there is just a change going on, with different levels of maturity.
So people are definitely latching on to this. And when it's widespread, or scattered maybe, if I understand that well, it's because people are using it. We were speaking before about which businesses are doing best around AI, and the ones that are really good at it are putting these tools in the hands of their users in the right way, one that doesn't require them to learn a lot of new stuff. But from the user and employee side, there is definitely an opportunity as well to just jump into this pool and figure out what it's all about. And that motion is definitely happening, with early adopters just leaning in. At some point in time, that means you have a little bit of scatter, because people are using it for different purposes in different areas. And then, as a business, you need to figure out if that is going on, and how to consolidate it onto better tools that would actually help you achieve your business goals. So I think it's actually good news, and it shows there's a general AI maturation going on, and different businesses are at different stages. Awesome. So our next topic is around implementations and deployments. We're going to pose the question: where are you with your AI deployment? Oliver, we hear about so many different AI pilots that fail or don't make it into production, even when the technology itself works. How does that frame in your world? You obviously speak to a lot of customers that are with competitors, or sort of similar tools to ours. What makes the difference between a pilot that fails and an implementation? I think it's a bit back to the thing we talked about earlier with testing the waters, trying to figure out: is this something that will resonate with the end user?
And we've seen that end users have a lot more power now, because leadership is investing heavily into this new AI world, but the people that have to consume it, make use of it, and return the value are the end users. That's where all this end user testing is going on, as opposed to the earlier days, when there was much more of a leadership approach. There'd be a team around it to build a business case, and we would then move into implementation. Now, AI companies are more eager to get those tools out there and get them tested, so you will naturally see a lot more things fail. But it only fails if leadership deems it a failure. We also see some big AI companies out there where the testing just goes on and on and on until it becomes a success. And I think Copilot is a great example. I talk to tons of customers who have implemented Copilot, and they all have great ambitions, but what we can do right now is maybe transcribe meetings and emails, and help with some research and conversational UI, and that's great. So we have been successful in 10% of what we want to achieve at the end. It's just a very, very long, drawn-out pilot. So I think it's natural that a pilot will fail if we haven't been good enough at tying a use case or a value to what we're trying to achieve. And if I had known six months ago, in some of the first tests we did, what I know now, it would have been totally different, because I would have been much better suited to helping and training my peers at getting real value, or at least tying value to a use case. And also, we're an enterprise solution. We've always been an enterprise solution. That's always been our market. And I think one of the things that we come up against quite often is shadow IT. Mhmm. Solutions that promise a certain result, but they do it at a very small scale, and they do it without the guardrails.
So then when it comes to approvals and those things, sometimes the teams have been using these specific tools for months. And, obviously, therein lies the risk. So what separates those kinds of AI use cases from the enterprise ones, the ones that really succeed? Yeah. First of all, the reason I actually like the shadow IT idea, just as a technology person, is that those tools are typically tied to something you actually are trying to achieve as a user. You pull them off the shelf from somewhere because there's something you couldn't do if you didn't have them, and you sort of can't wait for that process. That's not great for the... anarchy is never great for business. It might be great for music. But that's sometimes the reason for it happening, which is really good to some extent. Then, as a business, you need to figure out why people are asking for it, which is typically the question that isn't asked in these big assessments that go on, because it's more like: I put something on your desktop, and you have to tell me if you like it. But you forget to ask against which use case. What are we trying to achieve with it? So it's actually less about whether you like it and more about whether it works against what you're trying to achieve. Once you know the shadow IT is actually there, I think it's a good source for figuring out where you then want to consolidate onto tools that you can also live with as a business. There's a reason why users pick these tools off the shelf and start using them, but there's also a reason why you want to put guardrails around things, because you carry a lot of risk with some of those tools. But it's a good place to look if you actually want to get close to use cases and start solving things in a better way than shadow IT does today, while still not leaving the users on their own.
That makes a lot of sense. Oliver, a question for you. You mentioned before that you work on these pilots. How do you help customers build a business case? What sort of questions are you asking in order for them to help you? Because obviously you want to prescribe, almost, a success criterion. How do you go about that? Well, speaking to shadow IT, and users picking things off the shelf because they have a problem to fix right now: say an everyday end user maybe does not care about the brand, not as much as the brand team does, at least. And maybe they don't care as much about the data integrity of everything, because they'll figure that out afterwards. So they have different expectations, because it's easier to have AI solutions do something for you if you don't need guardrails. If you can just say, whatever, I'll take whatever I get, then pretty much anyone could build an AI solution that could do that. So it's about helping them figure out what is actually important to their business. Is formatting important? Is data integrity important? Is it important that it's the same output no matter who the user is in the organization, that we get the same thing out of it? So I spend a lot of time talking to customers about what is actually important for them in an AI tool. Maybe it's not our solution, but at least that has helped guide them down that path. Awesome. Should we take a look at the results? Interesting to see this one. Okay. So 50% of all of the voters said they have standardized AI platforms enterprise-wide, which is fantastic. There's a request in the chat. Exactly, that was a question for the chat, as Oliver alluded to. I'd love to hear about some of the solutions you do have access to that are available enterprise-wide. Exciting. So our final poll, and one that I might argue is the most important, is around how you manage AI governance. It's probably the biggest challenge.
I think it's something that we've spent, well, in your case, the best part of fifteen years discussing. Why is document generation one of the first places that governance issues arise? It sits in the word: documents, documentation. It's how you document work as an information worker. So it has to be accurate and right. And for that reason, there's a balance. If you let people go rogue, you get to certain results. If you put more control and guardrails around it, you typically get more consistency, and you can trust the output better, which removes a lot of risk. So the governance piece itself is, I would say, the necessary evil, so to speak, to get to where everyone wants to get to: work faster, generate more revenue. But especially in enterprises, that doesn't happen if you also put risk on yourself at the same time. So specifically for business documents, because of the magnitude, the millions and millions of documents that are produced to actually deliver business outputs and document all sorts of stuff, it just has to be accurate. AI is extremely capable of supporting that motion and doing it a lot quicker, and it's actually also very capable, if you do it right, of putting those guardrails and governance around it. So there are definitely things to do with it, but that's why we think it's an important place. Using AI to generate documents that are supposed to be used for business, you just have to do that well. It's a prerequisite. Give us an example of some of the guardrails they should be putting around them. Yeah. So guardrails, on the simple end, could be company standards. Simple things, like making sure it's at least using our brand, it has the right logo, and maybe it meets some prerequisites on disclaimers and the legal stuff that has to be there.
That's typically the starting point, as a prerequisite. But then it goes from there, from being a template to being actual document content. Once you start adding content, you have to make sure that it's accurate, dependable, and repeatable, that you can trust the information that actually goes into the documents, to the extent that you can send them to a recipient on the other side who might be outside of the company. And everything in between, there are just so many things to look at. Again, AI can play a very significant role in it, but you have to put that orchestration and those guardrails around it to do it. And if we have time, I think Oliver can show something about it. Yeah. I think we'll wrap up the poll. I'll share the final results, and then, if you want to, we can take a look at some things. So the results, as they stand, on how you manage AI governance: very, very close, to be honest. Again, just like one of the previous polls, almost equal across the board. There's one that says, we help set the industry AI governance standards. I'd love to hear from you and how you're doing that. And then there's 7% with no AI governance framework at all. So for the final one here, I'm going to stop sharing and actually open the final poll, which is: how much did you score? Here's how it works; I'm actually just going to read these out for you. You've got your maturity stages here. If you've got five to eight points, you're an open observer, which means AI is mostly theoretical with no formal strategy. If you've got nine to 12, AI is being tested through pilots and POCs, maybe in isolated teams, but with no kind of unified strategy. If you've got 13 to 16, AI is guided by a documented strategy and supported by governance and expertise. If you've got 17 to 20, it's embedded in mission-critical processes, delivering measurable ROI.
And if you've got 21 to 25, you're essentially legends, and AI is a core competitive advantage driving innovation. Now, don't worry if you haven't kept score. One of the follow-ups from today is that you're going to receive an email containing the questions you've just had, and a lot more information that Oliver has helped collate, with more detail around each of the sections. So I think you'll find that very interesting. But remember, your maturity stage isn't a label by any stretch. It's simply a starting point, and the goal isn't to catch anybody else up. It's just about understanding how to move forward deliberately. So, Oliver, would you like to show us something practical? Yeah. I mean, for the folks that want to hang around, I talked a bit about what I see customers using Templafy for in regards to AI. So I'm just going to share my screen and give you a quick demo. Mhmm. Ready to go. Thank you. So I'm just on a PowerPoint slide. What I want to show you here: you can see on the right side we have a thing called the AI assistant with Templafy. What this basically gives a company is the ability to help everyone in the company prompt in the same way. The model behind it, that's up to you. You can decide exactly how that model looks. But for, you could say, the less mature customers, something like this is really tangible, because it helps people prompt the same way. So I have a slide. I have some information there. I can mark the text, I can click on this little star icon on the right side, and I can say, hey, rephrase this for me. Give me an alternative way of saying the same thing, just like I said before in my emails, where I have trouble sounding precise. So I can rephrase it to get an alternative wording, or I could have also translated it directly, or improved it, and vice versa.
So think of this as a way for your company to control how everyone has the same narrative when they use AI to prompt, instead of just chatting away. Because some people in a chat interface will do very short prompts, and others will do very elaborate prompts. Most will do short ones. Having this kind of control and tone of voice in there helps a lot on a business level. So that's where we usually lean in at the beginning with customers. If we take it a step further, I have another example. I'm writing John an email. Maybe John had some questions for me about Templafy's security credentials. Instead of having to go to my colleagues in security, I could instead just write: hey, John, here's an overview of our security credentials. I could mark the text, and I could find the specific prompt called security question. Behind this button, there's a long instruction and a lot of information about how Templafy deals with security. I can press the security question prompt, and it will pull information from a specific model that has all the information about Templafy. So I no longer have to converse back and forth. I can get that output back in real time, post it into an email, and get it over to John. You're welcome. Thank you. Now, the last example I'm going to give today is a little bit more spicy. It's the latest addition to Templafy, and it's what we call our document agents, or prompt to presentation. This is a way for users to get a presentation output without needing a specific template already built. So it's especially good for bespoke documents that don't exist yet. So I could say: hey, I am selling Templafy's document agents to John Incorporated. Help me build a pitch deck, max eight slides. It's going to ask me some questions, because it's an assistant; that makes the result better, but I'm just going to say no, just create it. So now it's going to start working out the layouts and the outline for me as a user.
But if I had actually been in a real scenario, I would have maybe taken some time to talk a bit more with the agent. It was asking me things like: what are the main benefits I want to highlight for John? Who will be the audience? Is it the decision makers? Is it the end users? Who am I conversing with? And what are the challenges that John mentioned? So obviously, it's trying to do its best to give me an actual outcome and not just AI blurb. Yeah. It's like your new favorite colleague. If you're working with a colleague and you want to get them to build a presentation, the more information you give them, the better the outcome, obviously. I mean, I've seen scenarios where you write the top line and you expect an outcome. Who in the real world would expect an outcome without all these questions? So Templafy has built this to be instructed to ask those kinds of follow-ups. Now it has given me an outline. This is Templafy's way of saying: okay, here's how it would do it in the eight slides I asked for. I can expand it. It has given me a suggestion for how it will format it. And the good thing is that it will use your brand. The admins that maintain Templafy maintain the brand and the look and feel. I could then press on a slide and say, actually, I want to... well, some call it remix; we just call it a different layout, so you can pick the different layout to instruct your model on. But let's say I'm happy with this. You can see how much it has actually taken out of it. Because I wrote pitch deck, because I wrote document agent, and I wrote John Incorporated, it already knows so much about what I want to achieve, because we've built that on the back side. So now we have all this layout, and we have all this information, and we can create it. And now it's going to consume the data, the input.
It's not going to be perfect, because I haven't given it that much, but it's going to give you an idea of what we can actually achieve if we started conversing a lot more. And while it's building, it's taking about a minute, maybe. Do you have something to add? Well, one thing which is always difficult to explain about these things is what actually goes on under the hood. As you're saying, there are a lot of instructions. And since you wrote pitch deck in that one, it actually starts to try to identify whether there is an agent underneath that is purpose-built for that. So if a company has, like we do, a very deliberate way that we want to do pitch decks, it's going to use that one, which would tell it: when we do this, please do it in this exact way. Use this model to do it. Source your information from here. Make sure you look through the best-practice pitch decks we've done in the past, all sorts of things we do. That is actually the grounding of the result that you're seeing here. So there are a lot of instructions going on behind the scenes. Yep. But the users don't need to know that. They don't need to know. But that's the reason why you can get to pretty solid outputs. While we're waiting, just before it presents, we've got a question from Lynn, and she's answered how her firm sets industry AI standards: we focus on structured, risk-managed, and compliant AI governance frameworks, designed to move organizations from experimentation to practical, secure, and ethical AI implementation. Our approach emphasizes balancing innovation with regulatory compliance, a principle we call protecting at the pace of AI. And I did get this specific response from doing a web search for this. AI-generated, of course. Thank you, Lynn. Enjoyed that. Yeah. Over to you. Yeah. I mean, we're kind of at the end here. Personally, I'm actually quite happy with the layout.
I think it looks like it has done a good job representing Templafy's brand. Although I gave it barely any input, it has understood what I'm trying to sell. It knows what the value of what I'm trying to sell is, and what we're trying to guardrail against. So, again, I'm not going to send this out right now. I'm not just going to consume and send. I'm probably going to read through it a few times and tailor it to John Incorporated, but it's still giving me a really good starting point. Perfect. Well, Oliver, if people in our audience want to learn more and see more about our document agents, where do they go? To me, personally. To you personally. Yeah. I mean, if you want to give Oliver all of the leads and get a personalized demo with Oliver, I would highly recommend it. But you can visit our website and find the demo section, and you'll get in touch with one of our sellers. You'll be lucky to get Oliver. Our final results of our poll, and this was: how did you score? I think these are interesting. The majority are just cautious experimenters, around 40%. Nobody is an AI innovator, even if Lynn would like to think she is. Thank you, Lynn. So before we sign off, I want to say thank you all for joining the first Templafy Talks of the year. No matter where you land on the spectrum, it's fantastic, and the opportunity is the same. Essentially, we didn't want to answer any big questions today. We wanted to make you think and ask the questions yourselves internally. As I mentioned, you're going to get a follow-up email from us containing the AI survey. You can take that and learn more about where you are on the scale. If you want to have a one-on-one chat with anyone at Templafy to talk more about your AI maturity model, or understand where we're going, or how document agents work, get in touch with us today. Christian, do you have any final words before we go?
Just want to say thanks to everyone for joining, and we look forward to hopefully much deeper discussions with all of you. Oliver? Nothing to add. Nothing to add. Okay then. From myself, Oliver, Christian, and Isabella behind the scenes, and everyone at Templafy, thank you for joining Templafy Talks, and we'll see you on the