🤖 AI Across The Product Lifecycle, Ep. 9

MBSE, Testing, and Agentic Data Integration — with Dalus and Quix

Michael Finocchiaro · 57 min read
Guests: Dalus & Quix

Episode Summary

The episode titled "MBSE, Testing, and Agentic Data Integration — with Dalus and Quix" delves into the intersection of model-based systems engineering (MBSE), test data management, and artificial intelligence in manufacturing. The podcast features Michael Rosam from Quix, a company focused on test data management for industrial organizations, and Sebastian Völkl from Dalus, creators of modern MBSE software that integrates requirements management, architecture simulation, and system design in one platform. Both companies are leveraging AI to enhance their offerings and support the evolving needs of their clients building complex hardware systems in industries such as aerospace, defense, and automotive.

Key insights shared by the guests include the importance of agility for startups in navigating the competitive landscape dominated by established players. Quix emphasizes its ability to serve niche markets with tailored solutions, while Dalus focuses on targeting fast-moving small organizations that are increasingly seeking modern tools over traditional enterprise vendors. Both companies also highlight the transformative potential of AI, particularly large language models (LLMs), in revolutionizing their operations and providing greater value to customers.

For PLM and engineering professionals, the episode underscores the critical role of integrating data management with advanced modeling techniques to drive innovation and efficiency. It emphasizes the strategic importance of choosing a focused market segment and leveraging AI to stay competitive, while maintaining customer-centric approaches that prioritize their unique needs over standardized solutions.


Full Transcript

Michael Finocchiaro

Hi guys. This is Michael Finocchiaro on the AI Across the Product Lifecycle podcast. I'm joined by two amazing entrepreneurs, Michael Rosam of Quix and Sebastian Völkl of Dalus. They're gonna explain what they're doing and we're gonna talk about, as usual, AI and how it's impacting their... their success and helping their customers. I don't know, Michael, you want to start?

Michael Rosam

Yeah, hey, Michael. Good to be here. Thanks for having us. So I'm the founder and CEO of a company called Quix. We are five years old, just over, actually getting close to six now. We're based in the UK. And we focus on test data management. We came out of the McLaren Formula One team. In 2020, we built a data platform. It's used by F1 teams and other industrial organizations, mainly in Europe. And yeah, we help our clients to collect up all of their test data, all the measurement data, all the configuration data from physical and virtual tests, and consolidate it into a single source of truth, which obviously accelerates analytics, validation, verification, and enables things like machine learning and AI.

Michael Finocchiaro

Very cool. And Sebastian, can you tell us about your parcours and about Dalus?

Sebastian Völkl

Sure, yeah. Nice to meet you guys. I'm calling in here from California, San Francisco. Originally from Germany, but my co-founder and I met in the US and then started the company, Dalus. And we're about two years into the journey now, so a little less. But yeah, we are building Dalus, which is basically a modern model-based systems engineering software. And model-based systems engineering can include a lot of things, but the core pillars we are focusing on are requirements management, system architecture, and simulation, combining these in one environment. And yeah, we are mostly used by companies building complex hardware systems. So this can include aerospace and defense, but also robotics, automotive, et cetera. But those are the two industries where we see it used the most.

Michael Finocchiaro

How big are your teams? I forgot to ask this to some of the other startups. How big are your two teams respectively?

Michael Rosam

We're around 20 people.

Michael Finocchiaro

That's a pretty good size. Well, then again, it's five years old, so that's great. And how about you, Sebastian?

Sebastian Völkl

Yeah, we're still starting out. We have a few interns here and there, but it's mostly just still me and my co-founder in the early stages.

Michael Finocchiaro

That's awesome. So, you know, the title of the podcast is AI Across the Product Lifecycle. And I like to get an idea of what's happening in the real world in AI by asking amazing startups like you guys that are using it in the trenches every day. I usually start by saying, you know, AI, of course, has existed for several decades in some form or another, but we all lived through this revolution in 2023 with the advent of LLMs and ChatGPT. And it would be nice to know whether the two of you, when that happened, when we suddenly had this OpenAI stuff that we had never heard of before, were you guys really bullish about it? Did you say, like, this is going to change everything? Or maybe you were a bit, let's see, because, you know, it could just be fluff and maybe it's just going to fade out. What was your opinion back, you know, two, three years ago when this whole revolution started? Either of you can take that one.

Michael Rosam

Personally? Yeah, super bullish. As soon as I first tried it, you could just see what it could do. It was unbelievable. So I'm very bullish. I don't think that's the common view though in my team. I have three co-founders and it's taken a bit of time for everybody to warm up to it. But now it's like another level, the things that we're doing internally with AI to develop our product. Really, everybody's on board that it's a game changer.

Michael Finocchiaro

And did you see it as a game changer for everything, for the development process as well as the actual products you could build now that you couldn't build before?

Michael Rosam

I think it's taken a lot longer to see where it could impact the end user and the product. That's coming now though, we understand that. In terms of an internal accelerator for us, it's been very huge. That came quite quickly, that it was helping engineers to build software faster. It's helping people in product and marketing to analyze lots of customer calls and interviews faster. So I would say it started internally, and now the user base is catching up as well. So what they can do with it is also catching up.

Michael Finocchiaro

Is that the same for you? Were you also really bullish on it, Sebastian?

Sebastian Völkl

Yeah, I mean, that's kind of like one of the things when you live in San Francisco is like, there's like nothing else to talk about, right? That's like the only topic, which, you know, can be good or bad depending on how you see it.

Sebastian Völkl

But yeah, you go outside the door here and you go in a cafe, and like 80% of the people in the cafe are building an AI startup right now. So you're certainly living in a bubble, or, you know, we don't know if it's a bubble, but certainly everybody's doing it here. And once you go outside the city, once I go back to, say, Germany and talk to my friends and whatever, actually no one knows the latest whatever model that just got released two days ago. There are different, different worlds. I think, yeah, like, I think

Michael Finocchiaro

They're not following MeetGhost then, right? Because MeetGhost tells us every time a new model comes out.

Sebastian Völkl

Um, but I would say one of the benefits is, I think I see on a daily basis how it is implemented in a lot of industries, how all these companies that are growing here are changing how different industries work, overcoming these adoption barriers that have been there for one or two years. And especially, I think, if you're talking about AI in engineering, and specifically more like hardware engineering. Because in the software engineering world, I mean, it's clearly there already, right? It's obvious. Even if you talk to, you know, the best engineers in the world in the software industry, it's like, okay, they're all making use of the AI tools to help accelerate their work. But if you talk to most of the industries outside of that, it's just not there yet. So it's just like, okay, when will these other jobs or industries catch up to the software engineering world? Because they're just like,

Michael Rosam

please catch up to this.

Sebastian Völkl

natively adopted this first. And hardware, in fact, is just always moving slower, including the software for hardware. So it's kind of like, how long does it need to catch up? But it will for sure come. It's just a question of when.

Michael Finocchiaro

Yeah, very cool. And so obviously you guys are building pretty amazing products and we want to learn a little bit about those. And what's really interesting too is to sort of separate the hype from the reality in terms of how the developers on your teams are using AI to build these awesome applications. You know, is it just in the scaffolding? Is it also building test data? What kinds of ways are you and the developers using it on a daily basis to build Dalus and Quix respectively?

Michael Rosam

Well, we're a software platform, so obviously most of my 20 staff are software engineers of some sort. And so they're using it every day to build the product. I think it's just, you know, literally using Claude CLI mainly, actually, over here. There's been quite a lot of iteration with different tools. We've been using Cursor, we've been using... Yeah, exactly, Windsurf, you name it.

Michael Finocchiaro

Windsurf.

Michael Rosam

And in the end, people just love having the Claude Code CLI in your terminal working with you. And I think it's taken a bit of time for engineers to get a workflow and how it works best. But there's been a lot of sharing and collaboration amongst the team about how to get the best out of it for specific use cases. And now we just, we couldn't be without it. I mean, I'm basically...

Michael Finocchiaro

Hmm.

Michael Rosam

Like you can look at the credit consumption of Claude CLI and

you know, you can see who's really nailing it and who's maybe, maybe not quite. And it's almost like performance review time, like, who's burnt the most quota? Well, because actually I posted on LinkedIn recently, we were deploying to an enterprise account which had zero-trust requirements, and we had to build something basically to help us deploy our software.

Michael Finocchiaro

Ha ha ha.

Michael Rosam

And the engineers built that in a couple of weeks and it was tens of thousands of lines of code. It's probably a whole business in its own right building this solution to help us deploy our software. And an engineer just built it on his own in like seven days. It's insane. So yeah, it's really cool.

Michael Finocchiaro

Cool, okay, well, Sebastian, you're in the heart of the whole thing, so you're gonna give us like, what's the latest, coolest tool, right? I'll bet you're gonna tell us, Claude, that is so 2025, man.

Sebastian Völkl

Yeah. Yeah, actually I moved, I moved apartment about a year ago, but before, I lived like one block away from where the people who started Cursor are, where the Cursor office is. So, you know, talking about everything being so close to each other. But we know the people who founded Cursor and we met them, and it's a very great team and I think a great product. That's what we use mostly at our company. And yeah, I mean, I can, you know, underline the things Michael was saying, and

Michael Finocchiaro

Wow, very cool.

Sebastian Völkl

in our case, there are situations where we have maybe a smaller thing, even a smaller feature, a smaller bug, whatever, that we learn about in the morning. And in the afternoon, we come back to the customer and say, you know, it's now live. You can go back in, we implemented it. Something that just wouldn't be possible a few years ago. And also, we try to make use of that, right? I think one of the conversations you're probably having a lot, Michael, is: there are these big enterprises in engineering tools, and there are these startups coming up. How can you win as a startup? Well, by making use of these tools, because the enterprises usually have a hard time bringing these tools into the company, letting everybody use them, debating which tool should we use, which model, you know, is this fine, can we trust it? And I think as a startup, we can just move fast and try it out on a daily basis. We can take this to our advantage, and that's certainly what we do. So we can move fast and build things really quickly, implement things quickly, get feedback. And AI just really helps with product development on that side.

Michael Finocchiaro

I would guess that it also makes you more competitive with respect to the big three, because you're a lot more agile, right? I mean, people are still waiting for these companies to come out with GA agentic stuff because everything's still in beta, and you guys are like on top of that, right?

Sebastian Völkl

Yeah. Yeah.

Michael Rosam

Yeah, definitely. I mean, in the work we do, so like data platforms, our customers are stuck between a bit of a rock and a hard place, right? The rock is those big three you mentioned. You can go to Siemens or Dassault, you can get a data platform, but they're pretty rigid. You know, they don't really bend to your internal workflows as an R&D organization. And by the way, every R&D organization has got different workflows. So that means most customers are dissatisfied. And they're kind of vendor-locked, data schemas and things like that. The hard place, on the other side, is you can build it yourself. So you get your internal IT team, or maybe you go to a software house and they build your custom solution. But AI and Quix are coming right in the middle, because with AI, you can build custom solutions so fast that it's like having a vendor-backed, production-ready data platform that you can completely customize in a matter of weeks or months. Whereas literally you're into years and years of customization if you go with either of those two options: you're either two years into a build-out of a DIY solution or two years into a digital transformation program with a big heavy vendor software. But now Quix is a framework, really, for building your test data platform. So the framework plus AI means you get a custom solution in a fraction of the time. That's a completely game-changing value proposition for any organization that, you know, is doing research and development and engineering and needs to manage data.
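
A rough, hypothetical sketch of that "framework plus AI" split: the framework owns a stable base class that validates records, and the only piece produced per customer system is a thin field mapping. All names here are invented for illustration and are not Quix's actual API.

```python
# Sketch only: the framework owns BaseConnector, while the small
# per-system piece (here, a field map) is what an AI agent would generate.

class BaseConnector:
    """Framework-owned: enforces the canonical record shape."""
    REQUIRED = ("test_id", "timestamp", "signal", "value")

    def ingest(self, raw_rows):
        records = [self.transform(row) for row in raw_rows]
        for rec in records:
            missing = [k for k in self.REQUIRED if k not in rec]
            if missing:
                raise ValueError(f"record missing fields: {missing}")
        return records

    def transform(self, row):
        raise NotImplementedError


class FieldMapConnector(BaseConnector):
    """Customer-specific: the part that is cheap to generate per source system."""
    def __init__(self, field_map):
        self.field_map = field_map  # canonical name -> source column

    def transform(self, row):
        return {canon: row[src] for canon, src in self.field_map.items()}


# Example: a test rig that exports columns named differently per site.
connector = FieldMapConnector({
    "test_id": "run",
    "timestamp": "t",
    "signal": "channel",
    "value": "reading",
})
records = connector.ingest(
    [{"run": "R1", "t": 0.0, "channel": "ride_height", "reading": 41.2}]
)
```

The point of the split is that the generated part stays small and reviewable: an agent only has to produce the field map, while the framework keeps enforcing the canonical record shape.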

Michael Finocchiaro

Absolutely. Very cool. Have you had a similar experience, Sebastian?

Sebastian Völkl

Yeah, I mean, you know, pretty similar boat. Maybe even a bit trickier. I think when it comes to test data, everybody's pretty much aligned that we should do it, it makes a lot of sense. In the systems engineering and MBSE world, some people are still trying to understand what's even the whole idea of it, the purpose of it. So I think a lot of experimentation is even needed on the product side: how should things work, how should things look. So we have the benefit of just trying out things really quickly, seeing what works for customers and what doesn't, and then just implementing those things. And with respect to the bigger vendors, they've just pushed things for like 30 years, and now everything is there, but there's so much there that you don't even know what you want to use in it. And that creates all of those complications. And I think we can focus on the easy things, the basic things that are the right things, and that makes it way more intuitive. So again, coming back to this: quickly adopting new changes, quickly listening to the customer, and getting rid of things. Sometimes we just take things out of the tool again, because we see they're actually not really useful. So that is also part of that whole ideation process of pushing things out and seeing what works. Yeah.

Michael Finocchiaro

And, like, refactoring as well, refactoring it to get... yeah. There's actually a comment: Vedant Joshi said, great convo guys. I believe that the biggest challenge right now is whether to do stuff by AI or to do stuff for AI. I'm not sure where to go with that. It's not really a question, but it's an interesting proposition. It makes me want to ask you guys: are you using AI in the product from a user interface point of view, or from a plumbing point of view, or maybe a combination of the two? Is it more of an agentic idea or more just advanced machine learning? I mean, how is it actually being used in Dalus, for example?

Sebastian Völkl

Yeah, there are multiple ways. We have a lot of non-agentic ways to work with AI in our software. You know, talking about extracting data out of documents into the software, right? That has usually been a very manual process, and even if you did a more deterministic export before, it usually left out a few things. The latest LLMs are just pretty good at these kinds of tasks. So those are non-agentic ways of using AI, like getting data out of documents into the platform. Then we also have an interface for users to basically... The way I always phrase it: if you view the model as one big chunk of data, like your requirements and your architecture and your simulation data and whatever, all of this is to some degree just a big chunk of data that is very machine-readable. Sometimes it's very hard for humans to read all of that, right? So being able to just query that database through an LLM and retrieve information, it sounds simple, but having this all in one place and getting context is a big thing. And then in our platform, we have ways to manipulate the data with AI. And that's where it's more the agentic process, right? So we can add, delete, change requirements by prompting it, or by uploading documents and telling it to do things, or creating the system architecture automatically based on existing requirements, or creating certain types of Python scripts based on your architecture and your requirements. All of this is more about manipulating the data. So what I'm saying is, there are non-agentic ways you can make use of AI and there are agentic ways of using AI. But coming back to this, we're just trying to see what works the best, and then keeping the things that people actually use and get benefit from, and getting rid of those things where it might not be the best use case. That's how we do it at our company.
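
The "model as one big chunk of machine-readable data" idea can be sketched in a few lines. This is illustrative Python only; the schema (requirements, blocks, traces) is invented here and is not Dalus's actual data model. The function just gathers the slice of the model relevant to one requirement, which is the kind of context you would hand to an LLM for querying.

```python
# Toy MBSE model: requirements, architecture blocks, and traceability links
# stored as one plain data structure (all IDs and texts are invented).
model = {
    "requirements": {
        "REQ-12": "Battery pack shall stay below 45 C at peak load.",
        "REQ-15": "Vehicle mass shall not exceed 1500 kg.",
    },
    "blocks": {
        "BLK-COOLING": "Liquid cooling loop",
        "BLK-BATTERY": "Battery pack",
    },
    "traces": [  # (requirement, satisfied-by block)
        ("REQ-12", "BLK-COOLING"),
        ("REQ-12", "BLK-BATTERY"),
        ("REQ-15", "BLK-BATTERY"),
    ],
}

def context_for(req_id):
    """Collect one requirement plus every linked block, as LLM prompt context."""
    lines = [f"{req_id}: {model['requirements'][req_id]}"]
    for req, blk in model["traces"]:
        if req == req_id:
            lines.append(f"  satisfied by {blk}: {model['blocks'][blk]}")
    return "\n".join(lines)

ctx = context_for("REQ-12")
```

Because the whole model lives in one linked structure, answering "what touches REQ-12?" is a walk over the traces rather than a hunt across separate tools; an LLM layer would sit on top of exactly this kind of extracted slice.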

Michael Finocchiaro

Anything to add Michael?

Michael Rosam

Yeah, no, that's really cool. We deal a lot with data movement, so we have to build data integrations. As I mentioned before, every organization you go to has got different systems in place for testing, for simulation, for everything really. And so what you'll find is lots of custom integrations to get data in and out of the platform. So we built an agentic workflow which helps the engineer building the integration to build the custom integration from a base. What it really does is it embeds our domain expertise as software engineers working on industrial data systems into an AI agent to build the connection. And so you can just get it done, and it normally returns a result which is working within a few minutes. So that's taking kind of days or more, maybe weeks of...

human effort to make an integration, write the code, and collapsing that down to, yeah, no time at all. So that's one example. Another one which is coming up is querying. So engineers

Michael Finocchiaro

And all the trial and error that entails, right?

Michael Rosam

often want to ask specific questions of very large data sets. And that is a complicated thing to do today. So I was with one of my ex-colleagues at McLaren Automotive this week, and he was saying how, when they were developing a high-performance vehicle program, the key attribute was lap time at the Nürburgring. They were designing some suspension components and they wanted to ask basic questions like: has a certain model of McLaren car ever bottomed out around the Nürburgring? Answering that question is really difficult today, because you have to go and get all the test data sessions you've ever run, look at them, make some queries on each one, and figure out what the ride heights were and things like that. But what we're hopeful for, what we're looking at, is that that'll just become a natural language query, and what would have taken the engineer weeks of analyzing historical data will take seconds to pull a result. Which just lets the organization, the team, move forward with setting even design requirements. It's back to the requirement definition Sebastian mentioned, because it's always a full loop. It's a closed circle. You develop your requirements, you develop and test your systems, and then you go back to your requirements and validate them, and then you might have to go around that many times. And in that are always data-driven questions which are actually, today, very hard to answer.
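
Once all sessions sit in one consolidated store, the Nürburgring question reduces to a one-line scan. A toy sketch, with an invented data layout and an invented 20 mm "bottomed out" threshold, of what a natural-language layer might compile the question down to:

```python
# Invented consolidated session store: car, track, and the minimum ride
# height recorded in each session (values are made up for illustration).
sessions = [
    {"car": "750S", "track": "Nurburgring", "min_ride_height_mm": 38.0},
    {"car": "750S", "track": "Silverstone", "min_ride_height_mm": 24.0},
    {"car": "Artura", "track": "Nurburgring", "min_ride_height_mm": 19.5},
]

def ever_bottomed_out(car, track, threshold_mm=20.0):
    """True if any recorded session for this car/track dipped below threshold."""
    return any(
        s["min_ride_height_mm"] < threshold_mm
        for s in sessions
        if s["car"] == car and s["track"] == track
    )
```

The hard part in practice is not this filter; it is getting every session into the single source of truth so the natural-language front end only has to translate the question into this kind of query.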

Michael Finocchiaro

That makes me wonder, like, are you guys training your own LLMs on your own specialties? Or, well, I know Leo AI created an LMM, he calls it the Large Mechanical Model, where he put all this mechanical engineering knowledge into an LLM. Or are you guys saying, no, you're not training on your own data, it's more just having an agent that understands, that has a RAG with all that knowledge in it, basically, right?

Michael Rosam

Yeah.

Michael Finocchiaro

More of this.

Michael Rosam

Yeah,

Sebastian Völkl

Yeah.

Michael Rosam

For us, no, we're not. I mean, our data is not our data. It's our customers' data. So it's so incredibly sensitive. In fact, most of our customers host our product on their own infrastructure, so we actually never even see their data. So no, that is not within scope, I don't think. You know, asking the data questions, developing systems, that is kind of where we're looking.

Michael Finocchiaro

Right. Exactly.

Sebastian Völkl

Yeah, same. We thought about it. But I think we came to the conclusion that, again, I would love to train a custom model based on the very unique data that's been put in our platform, but we can't do that. So that's not an option. And my opinion is that if you actually wanted to do your own custom model, you would pick the latest, maybe one of the open models, whatever, and fine-tune that. And even if that's slightly better than whatever the state-of-the-art model is, you wait three months. You spend I don't know how many thousands of dollars on creating this custom model, and you spend however many months giving it data and whatever. And then three or six months later, the latest lab releases a new whatever model, and that will be better than your custom fine-tuned model. So all that work was kind of useless. And I think that's just what we need to expect, those models getting better. Fine-tuning or creating your own custom model just makes sense if you can actually have access to non-public data and are able to train on that. I think that's the only

Michael Finocchiaro

Qwen 3 or whatever.

Sebastian Völkl

way of actually having a better, you know, fine-tuned custom model. Again, in our case, we're not allowed to do that. We are not doing that. We don't make use of any of our customers' data. In other industries, you might be able to do it. But yeah, that's kind of the way, at least, I think about it.

Michael Finocchiaro

No, that makes a lot of sense actually. I remember one of Mauro's comments was, the problem with these LLMs is they're trained on Reddit, and Reddit isn't exactly an engineering repository. That being said, I've looked at a lot of the frameworks that the big three and other vendors are using to describe AI and its phases, and one that I thought was really good was PTC's, which has this framework of advise, assist, and automate. So advise is basically: bring the data together and answer a question. Assist is: okay, maybe give the user three or four options, and before an action is actually done by the agent, there's a human in the loop. And then of course, automate is where there's a whole workflow that is 100% AI. There's just a human maybe 10 steps later, or after it's been iterated 10 times,

there's human oversight. So where are you guys in that? Are you still at the advise level? Are you doing some stuff with assist? Maybe there are some things you found where you can actually automate more, you know, end-to-end steps of a workflow?

Michael Rosam

Yeah, I mean, the connectors thing, it does automate it, but you always end up with the code. AI, as we're seeing, is a really great accelerator for an expert. So I don't think you would currently want to say, hey, develop me a brake system for my new car, and let it just do the job. Like, you need to be an expert in, you know, brake control systems

to review the code, and you need to understand what it does. So like, yeah, I think it's more of an accelerator. It's an assistive technology.

Michael Finocchiaro

More on the advise side then, and maybe assist for like one task or something. Is that the same for you, Sebastian?

Michael Rosam

Yeah, yeah, there's some things that you could automate, but it would have to be like non safety critical systems, obviously, and yeah.

Michael Finocchiaro

You're right. Same for you, Sebastian, about the same.

Sebastian Völkl

Yeah, I think one of the unique things about the job of a systems engineer is, you know, he can be a subject matter expert in maybe one area, but one of the unique things about the job is to really understand a lot of things a little bit, right? Like if he's working on a complex system and not on just one subsystem, he needs to make sense of

Michael Finocchiaro

the architecture.

Sebastian Völkl

all the different subsystems, how they're connected to each other, understand what the requirements mean. So what we see on the advise side is really that they can understand the general system better, even on the topics where they're not subject matter experts. And we do see, I mean, we publish demos about that, being able to keep a process where you keep the human in the loop but still send AI to the front and be like, can you try to figure this out? And you see what it comes back with, and then you look over it: does this make sense, yes or no? So there's this flow. Sometimes people come up with use cases and present them to me, and I'm surprised, because I couldn't even think of doing that myself. People are looking at, say, a requirement failing. In our tool, again, everything is connected with each other. So you could trace back, or ask the model, why is this requirement failing? And it could show you: oh, you know, because of certain restrictions it has, and it points you back to the analysis. And in the analysis, there was a calculation in the Python script that was poorly done, it was wrong. It would trace all this back and be like, yeah, maybe you should change this. And you shouldn't blindly say yes, you should always check: oh, does this actually make sense? But it can help you get to this point faster. So there are a lot of use cases where it can go beyond just advise; it can actually do things for you. But as Michael said, I'm not trying to push for a "hey, just let the AI do everything, accept everything without looking over it." I disagree with that strongly.
I keep telling people, sometimes you don't even want to use it at all. For super mission-critical things, you know, actually try to use your brain to think about certain problems, and don't try to offload everything to AI. So...
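
The requirement-failure traceback described here is easy to picture as a walk over the model's links. A hypothetical sketch (the IDs, limits, and script name are all invented, not Dalus's schema):

```python
# Toy traceability: a requirement is verified by an analysis, and the
# analysis records which script produced its number (all values invented).
analyses = {
    "AN-7": {"script": "thermal_margin.py", "result": 48.2, "limit": 45.0},
}
verifies = {"REQ-12": "AN-7"}  # requirement -> analysis that verifies it

def explain_failure(req_id):
    """Walk requirement -> analysis -> script, and say why it fails (or passes)."""
    an_id = verifies[req_id]
    an = analyses[an_id]
    if an["result"] <= an["limit"]:
        return f"{req_id} passes via {an_id}"
    return (f"{req_id} fails: {an_id} computed {an['result']} "
            f"against limit {an['limit']} (see {an['script']})")
```

An LLM layer would sit on top of a chain like this, phrasing the explanation and pointing the engineer at the offending script, while the engineer still judges whether the fix makes sense.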

Michael Finocchiaro

Hmm.

Sebastian Völkl

It really depends on the use case, but it certainly can be very powerful.

Michael Finocchiaro

It is sort of an intellectual trap, right? Every time it tells you how brilliant you are, and would you like me to do this other thing, you're kind of delegating even more of your own life to these tools, right? It also makes... like, I was thinking about these things and thinking that ultimately, well, we don't want an agent, we want agents, plural, right? Maybe there could even be an adversarial agent who's just basically trained to say: I don't believe you, prove it, you know? How are you guys getting around that particular aspect? The most annoying aspect in our very deterministic engineering world is hallucinations and probability, right? Because these are probabilistic engines and not deterministic ones. So in your two worlds, which demand an enormous amount of precision, whether that's the testing or the model-based systems engineering, how are you working around this issue of hallucination and the probabilistic nature of AI in a very deterministic engineering world? Maybe a long question, sorry, and maybe it needs some thinking, but I think it's still a relevant one, because I think that's where a lot of people get stuck on the AI thing: I don't understand how I can use it in a deterministic way.

Michael Rosam

Well, I think that's fair. I think that's where the expertise comes in. So you're basically creating a workflow with the agent that does a job. And, you know, when you have expertise in that job to be done, then you're able to basically craft the workflow and understand whether it's being done right or not. I often make a very simple analogy. I'm not a natural writer, but I do post on LinkedIn. Now, if I ask an AI to create me a LinkedIn post, it's not very good. I actually write most of mine myself. But I've got colleagues in content marketing and they're very expert in writing. And so they're able to get a much better result out of an AI when they say, create a LinkedIn post, because they're just putting in so much extra workflow. There are control parameters and steps, do this and then that. And it all comes from their domain expertise in writing. So it's the same for engineering. If you have domain expertise in an area of engineering, you can basically create a skill, if you like, a workflow. Yeah, a workflow that's gonna work for you. And so it takes quite a lot of time to tweak your workflows, but you can get to a good place.
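
The "control parameters and steps" idea can be made concrete with a tiny prompt builder. This is a hypothetical sketch; the parameter names are invented, and a real expert workflow would encode much richer domain knowledge:

```python
# Sketch: domain expertise encoded as explicit control parameters and
# ordered steps wrapped around a raw request, instead of a one-line prompt.

def build_prompt(task, tone="technical", max_words=120, steps=()):
    parts = [f"Task: {task}", f"Tone: {tone}", f"Length: at most {max_words} words"]
    for i, step in enumerate(steps, 1):
        parts.append(f"Step {i}: {step}")
    parts.append("Only answer after completing every step.")
    return "\n".join(parts)

prompt = build_prompt(
    "Draft a LinkedIn post about our zero-trust deployment tool",
    steps=("List three concrete facts from the launch notes",
           "Draft the post using only those facts",
           "Check the draft contains no unverifiable claims"),
)
```

The value is that the expert's process (gather facts, draft, verify) becomes an explicit, reusable structure rather than something re-typed from scratch each time.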

Michael Finocchiaro

Prompt engineering. Is that the same way you're doing it, Sebastian?

Sebastian Völkl

Mmm. Yeah, I mean, a few obvious things, like garbage in, garbage out, right? That's kind of what you said: if you give it bad prompts, bad questions, whatever, or it doesn't have a lot of context on your system, you will not really get great results. When it comes to multi-agent processes, I think yes, that's where the future is leaning. And you as a

Michael Finocchiaro

Yeah, of course.

Sebastian Völkl

user, I'm more like an orchestrator, trying to navigate these different agents in different directions. But yeah, again, there are a lot of things where you might not actually want to use them, and it might make more sense to do things yourself. The one thing that's maybe more particular is: what are the things where you already know how the end result should look, but it would just take you a long time to manually come to that result? In our case, a lot of this in the systems engineering world is coming up with the system architecture and creating it. The state of the art, unfortunately, is you go to someone who has an engineering degree, a systems engineering degree, who works as a systems engineer, and you look over his shoulder, and he's working in these legacy tools, spending 70% of his time making block diagrams. Is that a good use of his skill set? Is this time spent wisely? You would probably say no; he should probably be making decisions, doing different kinds of trade studies. That's an engineering task, not spending 70% of the time creating block diagrams. And I think that's where, like you said, you automate these processes and steps where everybody already knows what they want, but it just takes a long time to get there. That's where I see it used, where it can actually be helpful, where you can actually trust the output, or at least look over it. And then over time, you can create 10, 20, 30 at the same time, and you just look over them and accept whichever version is the best one. You run them simultaneously, so you have more data to look at. But yeah, step by step.
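
Sebastian's pattern, generate many candidates cheaply and let the engineer review only a ranked shortlist, can be sketched like this. Everything here is hypothetical: the "architectures" are toy records and the scoring rule is invented, but the shape (propose N, score deterministically, keep the top few for human review) is the technique he describes.

```python
import random

# Hypothetical generator: in practice an AI would propose system architectures;
# here each candidate is just a record with a component count and a mass.
def propose_candidates(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    return [
        {"name": f"arch-{i}",
         "components": rng.randint(5, 20),
         "mass_kg": rng.uniform(10.0, 50.0)}
        for i in range(n)
    ]

def score(candidate: dict) -> float:
    """Deterministic figure of merit the engineer defines up front:
    fewer components and lower mass are better (higher score)."""
    return -(candidate["components"] * 2 + candidate["mass_kg"])

def shortlist(candidates: list[dict], keep: int = 3) -> list[dict]:
    # Rank all candidates; the engineer reviews only the top few and accepts one.
    return sorted(candidates, key=score, reverse=True)[:keep]

for c in shortlist(propose_candidates(20)):
    print(c["name"], c["components"], round(c["mass_kg"], 1))
```

The human stays in the loop at exactly the step that matters: choosing among a handful of pre-vetted options instead of drawing every diagram by hand.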

Michael Finocchiaro

That made me think it might be a bit ironic, then: the more we use AI, the more we'll be able to hand those mundane tasks to the AI, and that will give us more time to innovate. But since we've spent all that time doing those mundane tasks, will we be able to switch over to the innovation side, given we're so used to doing the mundane stuff? Do you think there'll be an aha moment in engineering where all that mundane stuff is delegated to an AI, and now I can really become this creative engineer rather than someone who's just doing the plumbing 70% of the time, as you said?

Michael Rosam

I guess that's the optimism of any technology revolution, right? They all free us up to do higher-value things, so yeah, I suppose that's the direction of travel. It's the same as when something like the washing machine was invented: people were washing clothes with their bare hands and spending all day doing that, and now they don't; they're free to do something else. So yes, we should be abstracted away from mundane day-to-day admin. I mean, we must all be experiencing that already. I certainly am. There's just a lot of stuff you would do on a day-to-day basis that you don't have to do anymore. You just pass that over and spend your time on higher-value work.

Michael Finocchiaro

Do you feel you're already doing that today? That you're already being able to focus on the higher value stuff?

Michael Rosam

Yeah, just on the desktop, you know, as day-to-day business, there are lots of things where you'd have to review documents or do review analysis, and some of that you can pass off and just get summaries back. It's very powerful.

Michael Finocchiaro

Do you have that same experience, Sebastian?

Sebastian Völkl

Yeah, I mean, it might sound harsh, but I think this can make or break companies over the next, maybe not six to twelve months, but two to three-plus years. You always see it. If I talk about it from a customer perspective, with AI adoption you can compare smaller companies, startups, and more innovative companies versus bigger enterprises and slow-moving organizations that are obviously taking longer, and you just see the direct impact it has. So I think that's a culture question, a company-culture question: how does leadership decide to implement it? There are some CEOs, even of big enterprises, who try to speed it up and say, now everybody uses AI in our company. Or the CEO of the company doesn't really trust it and says, no, let's wait. But waiting for it... I talk to companies that don't allow any AI in their organization at all. Sometimes it's security issues, sometimes it's other topics. I think they're just going to get overrun, even by small organizations, in the next few years.

Michael Finocchiaro

Yeah.

Sebastian Võlkl

So I think it's an important time. If I were a leader in a bigger organization, I'd try to make it very practical and not just have consultants come in with fancy slides that talk about this AI thing. Sure, that makes you feel good, but it doesn't change any processes in your organization. One of the best ways I've seen it actually implemented is spinning up a lab environment, an R&D environment, where you can test new processes and tools really quickly and see whether they work for you before you try to adopt them across a big organization. I've seen that work very well: you have a two-to-three-month period of testing out new tools in a smaller version of your organization, then you bring the results up to leadership and say, we got these benefits out of it, and I think we should do this for our whole organization. That's something I've seen work pretty well for driving adoption even in bigger organizations.

Michael Finocchiaro

Sara Varakumar asks: with the rapid rise of industrial AI across industries, how do you see the role of CAD and PLM professionals evolving, and what skills matter? This is a question I actually want to ask you guys. What should the next generation of engineers be looking at to stay relevant in this agentic future we're heading towards? Because a lot of these entry-level engineering jobs will be done by agents. So how do engineers stay relevant? Of course, there's the argument that when we went from horses to cars, there were more engineering jobs, even if the guys riding the horses didn't have much work anymore. But how do you see that happening? What would your advice be to the people 10, 15, 20 years younger who are just starting to think about becoming engineers?

Sebastian Völkl

Yes.

Michael Rosam

Yeah, well, focus on the fundamentals. I mean, first-principles engineering, I think, is the most important, so you learn those deep skills. Maybe you go deeper into some of the fundamentals of physics and applied engineering physics. Learn coding alongside; lots of engineers are doing this anyway, but you can really turbocharge yourself these days with fundamental knowledge. I was talking to a young engineer recently at Warwick University, and he described the old T-model: the top of the T is focusing on computer science, and the deep part goes down into physics. I think that's about right, because at the end of the day, AI is more or less a computer science and data science paradigm, and then you've got the physics, and the intersection of the two is very, very powerful.

Michael Rosam

The other thing unlocked by AI is that it will stop you doing all the admin. I was a mechanical engineer myself and would spend some time developing a part, some time doing structural FEA testing on it. Most of the time, though, you would be doing administrative things: reviewing drawings, managing part versions in bill-of-material systems, making sure the person in the cost engineering team understands what you're designing and how they're going to cost it, talking to purchasing, talking to your suppliers. Most of my time was not spent engineering vehicles. So really AI should

Michael Finocchiaro

Hmm.

Michael Rosam

take a lot of that away and let you focus on where you have the most value.

Michael Finocchiaro

And Sebastian, it must be an even more relevant question for all these kids in San Francisco, right? Do I just become an AI guy and work for the next startup in the bubble, or do I stick with being an engineer?

Sebastian Völkl

Yeah, I think specifically, if you're actually an engineer, I agree with the idea of learning the basics and becoming good at your job. Take CAD, for example. There might be CAD-assisting AI here and there, but if you're a new grad, or you're an engineer asking, should I learn to do CAD? Yes, you should. You should become good at it before you think about using AI for it, and then you can think about how it can assist your work. Maybe this conversation will be different five years from now, but for the next few years, you're in good shape if you're actually good at working in whatever CAD tool and designing things. I do very much agree with the idea of learning the basics of software engineering too. That doesn't mean you need to become an expert in software engineering as well, but get the basics so you can actually build things; that's the unique time you're in, right? You can build your own internal tools really quickly if you want to. Instead of going to us, or to any tool vendor, maybe you can just build a plugin yourself, build whatever thing yourself, to assist your personal workflow with software. And hopefully it's not, you know, that term that's being used, vibe coding, which I don't really like, because to me it assumes you don't have a basic understanding of software: you just prompt it to generate code you don't understand, and you'll hit the issues sooner or later. So yeah, getting a basic sense

Michael Finocchiaro

Hmm.

Sebastian Völkl

of software and learning a bit on the side, I think, is something helpful. But become good at your craft. I think one of the unique things in engineering, hardware engineering especially, is that it's much more of a craft you need to be an expert in, unlike, I don't know, finance, where you work in Excel sheets every day and it's less clear what you actually need to be good at, data wrangling or whatever. In all of the hardware engineering tasks, be good at your craft, be good at that, before you think about using AI. And then you can use it.

Michael Finocchiaro

It makes me think that, in terms of software engineering, it's really more like the DevOps stack that's the key, right? Understanding the minimum of DevOps, how you get software from an idea to actually running software, is one thing we weren't taught in engineering school. At least I wasn't taught it 40 years ago. Well, it didn't even exist back then; the acronym didn't exist anyway. So.

Michael Finocchiaro

This is awesome. Just to close out this section before we talk a little more broadly about the customers you work with: how has your opinion of AI changed between the bullish beginnings of 2023 and now, as we head towards 2026? Has it become more nuanced? Are there things where you're like, AI will continue to do this, and other places, like you were just saying, Sebastian, where maybe this conversation expires in three or four years because by then we'll have moved somewhere else with it? Just wondering what your opinion is now, in late 2025.

Michael Rosam

Still very bullish. We've moved beyond experimentation; people are using it every day, although we're still learning how to work with it for lots of different use cases and workflows. Some things work well, others less so, and it's only going to get better. It's already a familiar companion: I've got the app running on my desktop all the time, dipping into it constantly, and there are other workflows that run automatically. So really, it's just there as a companion to my life now. More bullish on it than ever.

Michael Finocchiaro

How about you, Sebastian? I guess you don't have much of a choice, sitting where you're sitting, right?

Sebastian Völkl

Yeah. Yeah, I mean, I can't imagine not using it. I think everybody was surprised by it; if you could have actually predicted it, you'd be a billionaire right now from investing in the right stocks. So no one actually predicted the right outcomes. I think it was interesting how fast it overcame that hallucination period. A few years back it was really bad and it hallucinated a lot. Now it's actually useful, and with the latest state-of-the-art models you think, sure, I trust you a lot, and hopefully you can give me the right sources if I ask for them. So it overcame that period really quickly. The one ask I have for the AI industry is to build better open-source models.

Michael Finocchiaro

Yeah.

Sebastian Völkl

We have this at our company as well: we need to do on-premise deployments and obviously can't use the closed-source models because of that. There's just a certain quality reduction we're currently seeing in our AI and our software between the closed-source models and the open-source models. So I'm always hoping for better open-source models to be released that get to at least similar, or even better, results. But I think that's just timing; there's nothing that speaks against it from a physics perspective or whatever. It's just timing and resources. So yeah, this will continue to improve. Probably not as radically as it has been; we're certainly not going to see a curve that stays linear or exponential, it will be more like a curve that slows down a little bit over time because of the...

Michael Finocchiaro

The hype, right?

Sebastian Völkl

We've basically used all the data available. All the latest models are trained on additional, artificially created training data, right? We've already used up all of the training data on the internet, so how do you get more? There are a lot of interesting startups here in San Francisco building these job platforms to interview experts and gather additional data. The next billion-dollar companies built here are the companies creating training data for these labs.

Michael Finocchiaro

Yeah.

Sebastian Völkl

So that's a very interesting new wave of startups in SF, just creating expert training data so the models get better over time. So it will be slower, but it will certainly keep getting better.

Michael Finocchiaro

So in the last part of the conversation, I'd like to talk a little bit about your experiences putting your software into the real world, because companies are relatively reluctant to talk about where they are; they might not look very modern, and like you even said, some companies don't even allow AI inside. I usually think of company data maturity on a spectrum of one to five: at one, they're still using Excel and email; at five, it's fully agentic, adaptive digital twins, completely autonomous. Basically nobody's at five, or maybe somebody's at 4.1, but there's probably one company like that on the entire planet, whereas everybody else, depending on the industry, is somewhere between one and three. So the first part of the question is: do you agree that companies are still at a relatively low data maturity, despite the fact that we've got technology that could enable a much higher level? And the second part: when you put in a Dalus or a Quix, is there a bit of an aha moment where the customer says, if I actually fixed all my data governance problems, I would get so much more out of this tool, it would be so much more amazing? So the implementation of a Dalus or a Quix becomes a catalyst for change, a catalyst for transforming the company into something far more digitally mature.

Michael Rosam

Yeah, I mean, we're selling to enterprise-size engineering companies in Europe, and the data maturity is somewhere around one, maybe between zero and one. The state of the art is files in folders, and those file-folder systems are everywhere across the whole organization. So test data... I mean, imagine an organization with

Michael Finocchiaro

God. Oof.

Michael Rosam

a couple of hundred engineers working in Simulink and MATLAB, and what they're doing whenever they press run or save is saving the data onto their local hard drive. And yeah, of course, test campaigns with cars going round tracks, and the results being CSVs emailed to people, and then some time later being uploaded into a repository somewhere. I think it really

Michael Finocchiaro

Really? There's no repositories? Jeez.

Michael Rosam

strikes me because I've also worked quite a lot in, let's call it, the modern data stack, where people talk about CRM sales data and business intelligence data. That whole ecosystem is so incredibly mature. There are lots of companies at the four or five level where the data flows: it's in Snowflake or Databricks, they've got their BI stack, they've got analysts and data engineers building data transformation pipelines, and the reports are all available. It's really good. But when you come to engineering, where I find the data is so, so valuable and could unlock so much potential, it's just so far behind. The good news is that companies are really investing in it at the moment, so there are a lot of initiatives. There are a lot of challenges to be solved; it's really hard. Data is not easy. It's a really, really hard domain to get your data right,

Michael Rosam

to get it integrated, to get it cleaned, to have reliable data pipelines, data quality, data governance. This is a really, really big challenge, especially when you're coming from so far behind. But companies are trying to catch up now, which is why we are where we're at and growing very quickly.
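
One concrete piece of the data-quality and governance work Michael lists is a deterministic gate in front of the pipeline that quarantines bad rows instead of letting them flow downstream. A minimal sketch, with invented column names and thresholds:

```python
# Illustrative data-quality gate for engineering test data. The required
# columns and the plausibility range are assumptions, not any real schema.
REQUIRED = ("run_id", "sensor", "value")

def check_row(row: dict) -> list[str]:
    """Return a list of issues; an empty list means the row is clean."""
    issues = []
    for col in REQUIRED:
        if row.get(col) in (None, ""):
            issues.append(f"missing {col}")
    v = row.get("value")
    if isinstance(v, (int, float)) and not (-1e6 < v < 1e6):
        issues.append("value out of plausible range")
    return issues

def split_clean(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    clean, quarantined = [], []
    for row in rows:
        (quarantined if check_row(row) else clean).append(row)
    return clean, quarantined

rows = [
    {"run_id": "r1", "sensor": "temp", "value": 21.5},
    {"run_id": "", "sensor": "temp", "value": 22.0},       # missing run_id
    {"run_id": "r2", "sensor": "pressure", "value": 9e9},  # implausible reading
]
clean, bad = split_clean(rows)
print(len(clean), len(bad))  # 1 clean row, 2 quarantined
```

The same pattern scales up: each check is cheap and deterministic, and the quarantine bucket becomes the worklist for the data-governance effort.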

Michael Finocchiaro

Have you had that same experience, Sebastian, with Dalus?

Sebastian Völkl

Yeah, maybe slightly better, but similar. We aim to change the workflow of fast-moving, smaller organizations, startups and scale-ups, and they're usually a bit more modern, from an organizational perspective, than the enterprises. But again, coming back to the current state of the art in systems engineering: you have traditional systems engineering, which is document-based, versus model-based, and it's still 90-percent-plus classic, document-based systems engineering in these organizations. Whether that's in Excel, or information lying around in dozens of diagrams and requirement sheets and testing data, whatever, it's still very document-based. A big challenge is: how do you make this transition happen without needing to spend millions of dollars hiring 50 people to come into your org and do it? How can we use maybe automated ways of transferring this data to other platforms and make use of that? So I think it's more like a 1 to 2, and often just a 1. Very document-based, basically; very few organizations are actually doing any model-based things in their systems engineering division.

Michael Finocchiaro

But when they bring in Dalus, does that change? Is there a... Because you guys are turning all of that into data.

Sebastian Völkl

Yeah, there are very few quantitative results in the systems engineering world when it comes to adopting MBSE. There are a few case studies here and there from the last 20 years, but nothing really usable, or very few things that are usable. So one of the first things we started doing when we get into organizations is to really create a plan: okay, what are the actual results you want to achieve, in quantified terms? What are the metrics we want to track over the next three months, alongside qualitative surveys? And after the three months, we come back and see: how many issues did we detect early, now that we've switched over from the document-based approach? How fast can I now get to certain results and information? How does everybody feel? And we see. I hope we can release a lot of these publicly soon; right now it's all still in the works and we can't really talk about it yet. But I think when we get it out, that will be the most data points the systems engineering world has probably seen in terms of results over the last 30 years, because there's really nothing there yet, and I think we can get there soon. That's what I would love to do. I want to see actual results, I want to see the metrics: how much does it actually save the organization, how much less cost do they have, et cetera, by adopting MBSE? Instead of, yeah, we're doing MBSE now, where no one actually knows what that means. If you at least have metrics and results, you have a very good reason for doing it. That's where I'm trying to get to.
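
The measurement plan Sebastian describes (pick metrics up front, measure before and after the trial period, report deltas) is simple to mechanize. A sketch with invented metric names and numbers:

```python
# Hypothetical before/after metrics for an MBSE pilot. The names and values
# are made up for illustration; in practice they come from the trial itself.
baseline = {"issues_found_early": 4, "hours_to_answer": 16.0, "rework_loops": 5}
after    = {"issues_found_early": 11, "hours_to_answer": 3.5, "rework_loops": 2}

def deltas(before: dict, after: dict) -> dict:
    """Percent change per metric; positive means the value went up."""
    return {
        k: round(100.0 * (after[k] - before[k]) / before[k], 1)
        for k in before
    }

for metric, pct in deltas(baseline, after).items():
    print(f"{metric}: {pct:+.1f}%")
```

Whether a positive delta is good depends on the metric (more early issues found is good; more rework loops is not), which is exactly why the metric set has to be agreed before the pilot starts.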

Michael Finocchiaro

Very cool. We're almost out of time, and I've really loved this conversation. It's been fascinating; I've learned a lot. Thank you, guys. Do you think we'll still be having this conversation in two or three years? Do you think the entire game is going to change? What does your crystal ball say engineering will look like? It seems to change every three to six months now, but if you were to look at the crystal ball, where do you think we'll be in the next two to three years? Other than you guys both being unicorns exiting for billions of dollars, of course. Other than that, right.

Michael Rosam

Yeah, well, the hope is more data-driven engineering, more automation supporting more engineers to do their jobs. I think it's going to take a long time to fix the data; we know data is the lifeblood of AI, so it's going to take a number of years just to fix the data layer. And then, in parallel, of course, all the models are coming along, and you'll find specific physics-based models that can be plugged into the data warehouses engineering organizations are developing in order to accelerate R&D. So I'm still bullish. I think it's going to take longer in engineering R&D than we expect, but it's coming.

Sebastian Völkl

Yeah, maybe in five years we can press the button and the finished product comes out at the end. That may just take some more time.

Michael Finocchiaro

Ha ha. Maybe I should phrase the question differently. You guys are both startups, and we still have 18 to 20 billion dollars going to three companies and not to the other 360 I've identified. So does that change? Is there a shift where SMBs, and even large enterprises, look to startups of various sizes to solve specific problems that the big three cannot solve? Do you think there's a transformation there? Or do the big three just buy everybody and we're back to the big three again?

Michael Rosam

I mean, one of the fears is... I believe AI can accelerate the monopolistic tendencies of markets. If you look at Azure, AWS, Google Cloud, they dominate the cloud market, right? There's no opportunity for anybody to come and compete with them; they own the distribution channel to the clients. So if somebody who's a customer of Azure wants AI, they get Copilot.

Michael Rosam

And already dominating the market, they just get to reinforce and entrench their position, and of course they've got the resources to go and invest in that as well. So startups have a big fight, but what we do have on our side is agility. We have the ability to go into niche areas that might be underserved. We also have the ability to do what's right for our customers, not just what fits our product and services and gets imposed on them because we've got the strongest voice in the room. As a startup, we come from a perspective of understanding what customers do and don't want, and we're trying to help them get what they do want, not just what's currently available on the market, so they have a choice. The only other real challenge is making more customers aware that there's a new choice on the market, and that's where we put all of our efforts: channel partnerships, distribution, messaging, marketing. Podcasts, here we are. Yeah.

Michael Finocchiaro

Mm. Podcasts. How about you, Sebastian?

Sebastian Völkl

Yeah, I think, you know, pick your market segment and try to win that, right? Trying to be specific: it doesn't make sense for me right now to go to the US defense primes and convince them to move away from those big tool vendors. That doesn't work, at least for now, but that's not our goal. Our goal is to go with the small organizations that are moving fast. If you

Michael Finocchiaro

That won't work.

Sebastian Völkl

I can pick an example here: even if you look at who is winning these contracts right now, moving away from tool vendors to the industry itself, you see a new, upcoming, smaller but faster-moving organization beating one of the primes, and apparently that's why they're winning more contracts. Would they rather go with a more modern tool vendor than with the enterprise ones? Yeah, probably. So in every head-to-head fight among the small, fast-moving organizations, we win against the bigger tool vendors, and that's what we're trying to focus on and where we're trying to provide value. I think it's important for a startup to pick whatever you think you're best positioned to win, and not try to win everything, because that's not possible. Let the others have the rest of the market, where obviously a lot of money is made, but let's see how that changes over the next five years. Will they still make as much money in five years as today? So the bet on our side is to move with the winners of our respective customer segments.

Michael Finocchiaro (1:00:08)

Awesome. Well, I wanted to thank you guys very much for taking the time and joining the podcast. I hope you enjoyed it too.

Michael Rosam (1:00:16)

Yeah, it's been great. Thanks a lot for having us, Michael.

Sebastian Völkl (1:00:17)

Yeah, thanks for the invite.

Michael Finocchiaro (1:00:19)

It was really awesome talking to both of you, and we'll be back next week. We have Thready and Up to Parts and a couple of other really awesome startups; I think we'll have both the CTO and the CEO on. So stay tuned for more. Again, big thanks to Sebastian and to Michael. Best of luck with Quix and Dalus, and we'll be back with another podcast. Thank you, guys.

Sebastian Völkl (1:00:43)

Thank you.

Michael Rosam (1:00:45)

Bye bye.

Share