🤖 AI Across The Product Lifecycle, Ep. 27

Physics Has a ChatGPT Moment — Special Edition with Vinci

Michael Finocchiaro · 41 min read
Guests: Hardik Kabaria (Vinci), Andy Fine (Fine Physics Consortium)

Episode Summary

The episode delves into the intersection of physics and artificial intelligence within engineering software, focusing on the advancements made by Vinci. Hosted by Michael Finocchiaro, the podcast features Hardik Kabaria from Vinci, a company that has developed a foundational model for physics intelligence, and Andy Fine, an expert in simulation with the Fine Physics Consortium. The discussion centers around the capabilities of physics-based models versus surrogate data approaches, emphasizing the importance of true physics in solving complex engineering problems.

Two key insights emerge from the conversation: first, Hardik Kabaria highlights that their physics intelligence layer enables engineers to ask detailed questions about physical phenomena and receive precise answers at manufacturing resolution, leveraging differential equations as a core component. Second, Andy Fine cautions against overly broad claims regarding foundational AI models capable of solving all types of physics problems, advocating for application-specific solutions.

For PLM and engineering professionals, the episode underscores the significance of adopting physics-based approaches over surrogate data in simulation tools to ensure accurate and reliable results. It also emphasizes the potential of integrating AI with fundamental physical principles to revolutionize product development processes, offering a glimpse into the future where physics and AI collaborate seamlessly to drive innovation.


Full Transcript

Michael Finocchiaro

And we're live. This is Michael Finocchiaro. We're on a special edition of the AI Across the Product Lifecycle podcast today, and I have two guests. I have Hardik of Vinci, who's going to tell us all about Vinci's amazing physics solutions, which are basically an OpenAI moment, I think, for simulation, as well as my friend Andy Fine, who runs the Fine Physics Consortium and is an expert with tons of experience in simulation. So first, maybe the two of you can introduce yourselves. I'll let you go first, Hardik.

Hardik Kabaria

Well, thanks for having me. My name is Hardik. My background is in physics and geometry software, and I'm co-founder and CFO of Vinci, and we'll tell you more about what we are building at Vinci.

Michael Finocchiaro

Andy?

Andy Fine

Appreciate it. So I'm Andy Fine. I've been in the engineering software industry for about 25 years, with a background in fluid dynamics and engineering. I'm based over in the US, and I run a technology consortium for next-generation engineering software startups.

Michael Finocchiaro

Awesome. Well, let's get started. When we've talked before, Hardik, you talked about physics intelligence. What does that mean in the context of what you're doing at Vinci?

Hardik Kabaria

I'll start by defining the problem. So what are we doing? We are building a foundation model for physics. And I'm sure you've heard that from many, many people, so it's probably not new; we are not the first to tell you. But we've actually done it. It's shipped. It's in production with tier-one hardware companies. And the way the model's capability manifests itself is through the product, which is what we call the physics intelligence layer. It enables engineers to ask questions about physics of all kinds, say heat transfer, and it gives answers at manufacturing resolution. The only limitation is the compute capacity, the compute on which the software is deployed.

Michael Finocchiaro

Yeah, I think you were telling me that you're really passionate about differential equations, and as long as it's a differential equation, you can solve it, right?

Hardik Kabaria

Yes, exactly. Everything in the universe is governed by the laws of physics. And as I like to think about it, the laws of physics are not owned by anybody. They're not owned by one company. They're the same for Apple and ASML and TSMC, and for one person starting to build something new. So they are constant, they are universal. Physicists and mathematicians who came before us have expressed those laws as differential equations. But it's something as simple as energy balance, heat generated equals heat dissipated, and momentum balance, stress balance, and so on. Those are the things we have built into the model to enable the answers for physics questions.
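The "laws of physics as differential equations" idea is concrete enough to sketch. Below is a deliberately tiny illustration, not anything from Vinci's stack: steady-state 1D heat conduction, -k T'' = q, discretized with finite differences and solved directly with the Thomas algorithm. The grid size, conductivity, and heating values are made-up example numbers.

```python
# Steady-state 1D heat conduction: -k * T'' = q, with T(0) = T(L) = 300 K.
# A minimal finite-difference sketch (illustrative only; Vinci's actual
# formulation is not public).
n, L = 51, 0.01              # grid points, domain length (m)
k, q = 150.0, 1e8            # conductivity (W/m*K), volumetric heating (W/m^3)
dx = L / (n - 1)

# Tridiagonal system from -k * (T[i-1] - 2*T[i] + T[i+1]) / dx^2 = q
lo = [-k / dx**2] * n        # sub-diagonal
di = [2 * k / dx**2] * n     # diagonal
up = [-k / dx**2] * n        # super-diagonal
rhs = [q] * n
di[0] = di[-1] = 1.0         # boundary rows become identity rows
lo[0] = lo[-1] = up[0] = up[-1] = 0.0
rhs[0] = rhs[-1] = 300.0     # fixed boundary temperatures (K)

# Thomas algorithm: forward elimination, then back substitution
for i in range(1, n):
    w = lo[i] / di[i - 1]
    di[i] -= w * up[i - 1]
    rhs[i] -= w * rhs[i - 1]
T = [0.0] * n
T[-1] = rhs[-1] / di[-1]
for i in range(n - 2, -1, -1):
    T[i] = (rhs[i] - up[i] * T[i + 1]) / di[i]

# Analytic peak is 300 + q*L^2/(8k), about 308.3 K at the midpoint
print(f"peak temperature: {max(T):.1f} K")
```

Note the property the conversation keeps returning to: a solve like this is deterministic and checkable, since plugging T back into the discrete equation should reproduce the source term q.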

Michael Finocchiaro

So Andy, you know, from the other simulation vendors you've talked to, and I've talked to quite a lot, it seems like there are a lot of companies that are using surrogate data rather than physics to do these calculations. Obviously Hardik is taking the more physics-based approach. What's your read on that?

Andy Fine

So it's interesting, because you've got the companies doing physics and you've got the companies doing surrogate models. There is a lot of information out there. There's also a lot of what I call fluff out there, information that can be somewhat misleading. So a question for Hardik: what do you see as definitions in this industry that are kind of misleading or too broad, and messages coming out there that don't really make complete sense to you from a physics or a discrete engineering perspective?

Hardik Kabaria

Yeah, I think it's good to define what it means for a model to work and also how it's useful. So let's start with something simple, right? You ask a physics question about how heat transfer happens in an object. Whether I ask it or Michael asks it, we should all get the same answer. Whether we ask it today or tomorrow, we should get the same answer. So that means the nature of the solution has to be deterministic. That's the first part. The second is: if these things are governed by laws of physics, that means we should be able to check it. Did we really solve the physics equation? Did we really balance the energy? It should be automatic, in the sense that engineers are interested in understanding how this physics matters for their part, so they should be able to bring in a prompt, which in this industry means geometry, material properties, and environmental conditions. And lastly, it has to work out of the box. If the user has to do training at the time of their usage, it's not really a foundation model or an out-of-the-box solution, because it completely changes who the user is. Then the user is somebody who has to worry about the data. Does the data have the coverage? Does it apply to my situation? They have to do data versioning. It creates a whole different perspective. So for us, a physics intelligence layer powered by a foundation model means it's deterministic, it is solver-grounded, it's as accurate as any other high-fidelity solver you could use to solve the problem, it is automatic, and there is no fine-tuning or training required from the user's perspective. And I know this is a mouthful. But in other words, we are all engineers, and if we don't define it well, we know it's not going to work.

Andy Fine

Okay, so with that, it's interesting you say we can do everything a high-fidelity solver can do and so on, which is absolutely amazing. But there are a lot of companies out there saying the same thing: we can do everything, we can predict everything, and AI is wonderful, it can do everything including cut your hair. But a lot of times, it doesn't work. So from your point of view, as a highly qualified engineer who's built this company, and we understand you've got some great clients already, which we won't go into: what do you see as the minimum bar for actually being credible in this category? What makes sense to be credible from more of a business perspective? If I'm the CEO or VP of engineering at a tier-one company, how do I know that what you're telling me is right, and what's credible behind that?

Hardik Kabaria

That's a great question, because that's the first thing we end up discussing when we talk to a potential customer. I can have a big white paper or a NeurIPS paper, but those kind of don't mean anything. The businesses care about just one thing and one thing only: does it help me? Hardware companies care about creating an amazing, ambitious product faster. That applies to memory companies, semiconductor companies, robotics companies, right? So when we talk to those executives, those engineers, they care about how this tool can help them. That's where the rubber meets the road, and that's the true test. That's what we live by. It doesn't matter what's inside the box. It doesn't matter how big or small the model is. We go in and we prove it on your benchmark data, on something as complex as the thing you are trying to create. Maybe you are trying to create a design that has nanometer-level features in a centimeter-scale domain. Can we solve that problem for you, out of the box? We do that, and you establish trust. Compare us to what you think is the ground truth. And we have seen both levels of ground-truth comparison. The first one is: I have my favorite solver, can you match that? Because we are all hardware companies that have been doing physics, there has been tooling around. So that happens. And then it goes to the next level: can you help us solve the problem at a higher fidelity than what was possible with my existing tool? And we have done that as well. But to answer your question: if we cannot help you solve that in the first meeting, then it's not really a solution that you will use, no matter how I describe it.

Michael Finocchiaro

So are you proposing to step in, in place of, say, an Ansys that the community has today? Where does Vinci sit in the workflow of the engineer? Like, I'm designing a rack of servers, I'm designing whatever, because I think that's the case you and I had talked about. Where does it sit in the workflow of that engineer?

Hardik Kabaria

Yeah, I think the workflow is super important, because this is where the shape of the product comes from. For us it's not just the raw capability, which is to say we have a model that does something. It's very important to understand what engineers want to bring in, what kind of data they want to bring in, and how they want to consume the output produced by a physics model. So for us, we started by understanding where the design file comes from. It often comes from upstream software. For semiconductors, there are file formats like OAS files and GDS files; for PCBs, Gerber files; for mechanical components, STEP files. So we directly digest those. But those are geometries; they don't have material properties. So we also digest those. Then you care about environmental conditions, or, as physicists say, boundary conditions. So we digest that information in various file formats. And then we produce an output. So the goal of the product is data in, data out, and we digest the data that engineers can already export from their existing systems. So we're not creating yet another detour; it's rather a drag-and-drop situation. It makes the workflow easy to adopt, which is extremely important. If we don't do that, no matter how good the capability is, the adoption is going to be minimal.

Andy Fine

When you're looking at validating different applications, particularly with thermodynamics and you've got fluid dynamics in there as well, for our audience who don't have

Hardik Kabaria

Mm-hmm.

Andy Fine

the kind of super geeky background that some of us do. Fluid dynamics is governed by, effectively, something called the Reynolds number, which is a function of the speed of the flow and the size of the object you are studying. Now, there are upper limits where the physics of fluid dynamics completely changes. So, for example, the cooling airflow in my laptop is going to be completely different physics from the airflow around a 747. So...
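Andy's laptop-versus-747 contrast can be made concrete with the standard formula Re = ρUL/μ. The flow speeds and length scales below are rough, illustrative guesses, not measured values:

```python
# Reynolds number Re = rho * U * L / mu, the quantity Andy mentions that
# separates flow regimes. All numbers below are rough illustrative guesses.
def reynolds(rho, velocity, length, mu):
    """Dimensionless Reynolds number of a flow."""
    return rho * velocity * length / mu

mu_air = 1.8e-5   # dynamic viscosity of air (Pa*s)

laptop = reynolds(1.2, 2.0, 0.005, mu_air)   # ~2 m/s through a 5 mm channel
jumbo = reynolds(0.4, 250.0, 60.0, mu_air)   # cruise: thin air, 250 m/s, ~60 m fuselage

print(f"laptop cooling: Re ~ {laptop:,.0f}")   # hundreds: laminar regime
print(f"747 at cruise:  Re ~ {jumbo:,.0f}")    # hundreds of millions: fully turbulent
```

The many-orders-of-magnitude gap is the reason the same model cannot casually cover both applications, which is exactly the point of Andy's question.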

Hardik Kabaria

Absolutely.

Andy Fine

My question to you, Hardik: where do you see your sweet spot where your physics works, where are you focused at the moment, and what are the applications you're seeing where you can really excel? Because I'm assuming you're not going to be doing the aerodynamics of a jumbo jet right now.

Hardik Kabaria

I mean, as a startup it's very important to focus. There are so many kinds of physics, like you just described, so you've got to pick one thing, the one thing that you want to succeed at. So we picked heat transfer problems. And then we thought, you know, we've got to succeed at something amazingly well, so we narrowed it to heat transfer problems in semiconductors and electronics. For that we built a model that works out of the box to the extent we have spoken of, but underneath it is based on the underlying physics equations. So we started with that, and then we extended to thermo-mechanical problems. So we extended to a different physics now, which allows you to do a heat transfer problem coupled with mechanical deformation. And that leads itself into fluid dynamics, which, like you said, happens around the mechanical part. You might have a cooling channel, or you might have a cold plate, or you might have airflow. So that's the place where we operate today. But where we go from here is wherever there are laws of physics operating on a part; that's on our roadmap. We're not going to go and build a ginormous model that nobody can use first; that's not the style of our company. The style of our company is that what we have built is already creating value for users, giving them answers for the physics much faster and at a higher fidelity than they could access before.

Andy Fine

Thanks. So to bring that together: you're doing conduction, radiation, and convection, the three methods of heat transfer, all within your physics engine. And just to explain to the audience, who may be a little bit non-technical, what Hardik also said about deformation: if you imagine you're passing electric current or heat through something, it changes the material, and the material can flex or warp and so on. That's purely from the conduction aspect, is that right?

Hardik Kabaria

Yes, but actually the phenomenon governing it is elastostatics. So now you have Hooke's law, or elasticity equations, that govern how these parts deform. That happens because you have heterogeneous materials; specifically, the coefficient of thermal expansion varies, which changes the thermal strain, and the thermal strain makes the part deform. Now, before we go deeper into the physics, let's talk about the application. This is an extremely important area for semiconductors and electronics, because chip packages and memories are getting bigger and bigger in dimension with smaller and smaller features. You have nanometer-level features in a centimeter-scale domain just in the semiconductor, and the boards are even bigger. During the manufacturing process, as well as in operation, these parts deform, and they deform so much that it creates a yield as well as a reliability problem. So this is technically a coupled phenomenon that is of interest to the companies and product teams creating very complex semiconductor and electronics parts. It applies everywhere there is a printed circuit board, as well as everywhere there are semiconductor components. And it's pretty hard to find parts that don't have a PCB or semiconductor components.
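The coupling Hardik describes can be written in two lines: a temperature swing ΔT produces thermal strain ε = α·ΔT, and a CTE mismatch between bonded materials produces stress on the order of E·Δα·ΔT. A back-of-the-envelope sketch with typical textbook values for silicon and copper (illustrative numbers, not from Vinci):

```python
# Thermoelastic coupling in miniature: thermal strain and CTE-mismatch stress.
# Values are typical textbook properties, chosen for illustration only.
dT = 100.0           # reflow-to-room temperature swing (K)
alpha_si = 2.6e-6    # CTE of silicon (1/K)
alpha_cu = 17e-6     # CTE of copper (1/K)
E_cu = 120e9         # Young's modulus of copper (Pa)

eps_si = alpha_si * dT                                # thermal strain in Si
eps_cu = alpha_cu * dT                                # thermal strain in Cu
mismatch_stress = E_cu * (alpha_cu - alpha_si) * dT   # order-of-magnitude estimate

print(f"Si strain: {eps_si:.2e}")
print(f"Cu strain: {eps_cu:.2e}")
print(f"mismatch stress ~ {mismatch_stress / 1e6:.0f} MPa")  # ~173 MPa
```

Stresses of that order across a heterogeneous stack are what drive the warpage and the yield problems discussed next.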

Andy Fine

So I just want to bring that together again for our slightly non-technical audience. What Hardik has just described is a multi-physics model for semiconductors. This is something that's been a holy grail for a lot of the physics software producers for a number of years: actually combining thermodynamics, fluid dynamics, and structural dynamics, bringing it all together. It's also adding, as you've just said, Hardik, something called DFM, design for manufacturing (DFAM, by contrast, is design for additive manufacturing, and this isn't just additive): constraining what you design to what can be manufactured. So you're looking at the whole process here, which is really impressive.

Hardik Kabaria

Exactly. Yeah, because that's where, in our mind, the problem comes in, right? Let's say I have designed an amazing ASIC and somebody else designs a server board, which, as you can imagine, everybody's interested in for AI inference, whether it's on their laptop, in consumer parts, or in a data center. And then you ship your design to electronic manufacturing service providers. This is a problem a lot of you can relate to. And now the boards are going to deform so much

Michael Finocchiaro

Mm.

Hardik Kabaria

that your electronic manufacturing service provider is going to tell you that this is going to have a very bad yield. Not reliability, a yield problem, something that means you cannot even ship to the market fast enough. That is the problem that our software today can enable you to understand at every phase of your design. Whether it's the architectural stage, meaning you don't really have a design yet, you're drawing boxes; we are working with customers who are operating at that level. Then there is a pre-silicon phase, where I have some level of design, up to a design that I'm ready to ship out to my manufacturer. And lastly, an integration phase, because every hardware product is basically an assembly. Even for boards, even for a consumer product, you have so many different components coming from so many different companies. And at the end, if the whole thing doesn't work together, we won't have our phones working for us.

Michael Finocchiaro

I guess you could also add in electromagnetics, right? Because then you start to have maybe interference from radio frequencies or whatever. Sorry, I was thinking about that phone example you were using, and about another of Andy's companies, Nullspace, and what they were doing. What kinds of ROI are your customers seeing? I mean, obviously you're able to detect problems before they become a problem. So the ROI calculation must actually be rather astronomical for these customers, right?

Hardik Kabaria

Yeah. I mean, they want to derive information about the physics at every stage. Just think about it as: hey, I am an engineer and I'm trying to guide my semiconductor team on how to change the design. I've got myself an extremely complex design. Just in terms of the raw data, the design files could be 10 gigabytes; I have 10 gigabytes' worth of 3D geometry data. I want to run an analysis, as accurate an analysis as I can, so I can tell my team: here is where the heat transfer or thermal bottlenecks are, so they can do something about it. Now, that's a very basic loop that we can all appreciate. The faster you run that loop, the better the explorations your team will do and the better the products you ship. With our software, a single engineer has been able to do thousands of analyses, each a problem worth millions of degrees of freedom, in a matter of a day. So you can see the exploration has gone up by an order of magnitude, and hence they are digesting that data and analyzing it. So it's not just about the person creating the prompt or, you know, asking the physics question; it's also about the people trying to consume it. Right? If you perform an analysis on a 10-gigabyte design file, you can imagine the data generated is not a two-by-two plot. It's a lot of data. So we also have to create analytics that can be digested by an even ten-times-broader set of designers, who are going to try to understand: how do I make sense of this so that I can take action? So that's the platform-level benefit that we are providing for our users.

Andy Fine

I'm going to throw a spanner in the works; this is what I do. In terms of deep learning: I spent a few years in the deep learning world. One of the fundamental questions that any engineer is going to ask is: is what I'm looking at correct? And that comes down to uncertainty quantification, which is a key part of any deep learning system, and a particular question you have to ask of

Hardik Kabaria

Go for it.

Andy Fine

foundational models. How are you addressing uncertainty? For the audience again: this is the model telling you, okay, I'm 85% certain this is right, I'm 90% certain this is right; the model telling you how confident it actually is in its answers, if that makes sense.

Hardik Kabaria

Correct. No, it totally makes sense. So first, for any model development, you have to develop data sets and benchmarks. Before you even go to the users, you convince yourself as a team that you're on the right path and you have a good amount of data coverage. So I'll just start there: we are training a model on 45 terabytes' worth of physics simulation data, so extremely wide coverage, with parts from semiconductor components, PCBs, robotic arms, you name it. And we create a very good benchmark data set that is tested against, based on the type of things we know our customers are interested in. So every time a release goes out, there is a very thorough set of checks applied. That's stage one, all the internal checks we do. At the same time, it's impossible to say that I have a transformer model that is always going to give full coverage, like you just said, Andy. There will be some level of inaccuracy. And inaccuracies in engineering can creep in in very interesting ways. It's very easy to create a model that can predict a temperature field. It's much harder to create a model that predicts the temperature field and the heat flux, the derived quantity, well. It gets even worse if you are going to integrate the derived quantity: now the error is going to stack up. So the technology base we have created is self-correcting, because the laws of physics are known. We check for it, we correct for it, and we present the answer. Let me take it from the reverse side: we are the only company that will solve a 300-million-degrees-of-freedom problem in 40 seconds and give you a residual norm showing we solved the problem to 1e-10. Complete transparency in the things we provide. So actually, you were asking about uncertainty quantification; I'm sort of raising the bar. We'll give you a residual norm at the time of inference: how well we solved the problem. You don't like it? You shouldn't accept the answer.

Andy Fine

A residual at the time of inference. So inference is the model making a prediction, obviously. What do you mean by residual? Because depending on what your background is, a residual can mean a multitude of things. Can you just clarify that a little?

Michael Finocchiaro

Mm.

Hardik Kabaria

Yes, I'll describe what it means for us, and then of course we can go into detail. When somebody solves a physics equation, if you have a solution for the physics equation, you sort of plug it in and you're supposed to get zero, right? If you have a perfect answer to your physics equation, you plug it into the equation and you get zero. Now, zero is pretty hard to get, because we are all doing some level of floating-point arithmetic at machine precision.

Michael Finocchiaro

Mm-hmm.

Hardik Kabaria

For a really good solution, you are going to start getting 1e-10, 1e-12; those are starting to get to machine precision for floating point. We provide that information. Hey, you asked us to solve a heat transfer equation on this part; at the time of inference, we solved the problem, and here is the residual. You asked us to solve a thermoelasticity equation on this large board, how does the part deform; we solved the problem, here's the residual. We create complete transparency in terms of the solution we provide. The goal is not to create a black-box solution; it's rather the reverse, depending on the level of interest the user has. That's true for LLMs too. Today I may ask a question, hey, help me understand what a foundation model for physics is, and I might get a different answer than Andy, if he asks the question with an extremely detailed problem. So in the world of AI, the answer lies in the question: how much detail do you want? You are able to get that answer. And we are taking the same approach with the product. We'll have users who don't care about it, who are going to ask the question in an abstract way: here is my rough design, I'm going to change the material, I'm going to change the power map, give me an idea how much delta I'm going to get. But we'll also have principal engineers who say, no, I'm going to check everything you did, I want to make sure the answer is right before I guide my team to change the design of the memory, because it will have non-trivial impact and we cannot be on the edge. So: complete transparency, down to the residual norm that we publish at the time of inference.
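Hardik's residual can be stated precisely: for a discretized problem A x = b, the residual is r = b - A x, and its norm is near machine zero exactly when x solves the equations. A tiny hypothetical 2x2 sketch; in practice the matrix would come from the discretized heat-transfer or elasticity equations:

```python
import math

# "Residual at inference": plug the predicted solution x back into the
# discrete physics equation A x = b and report ||b - A x||. A hypothetical
# 2x2 stand-in for a discretized physics operator.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def residual_norm(A, x, b):
    """||b - A x||_2: near machine zero iff x actually solves the system."""
    r = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
    return math.sqrt(sum(ri * ri for ri in r))

x_exact = [1.0 / 11.0, 7.0 / 11.0]   # true solution of this 2x2 system
x_guess = [0.1, 0.6]                  # an approximate "model prediction"

print(f"exact solution residual: {residual_norm(A, x_exact, b):.1e}")  # machine zero
print(f"guessed solution residual: {residual_norm(A, x_guess, b):.1e}")  # about 1e-01
```

The check is cheap relative to the solve, which is what makes "publish the residual with every answer" a practical transparency policy.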

Michael Finocchiaro

It's pretty awesome, especially when you talk about the 300 million degrees of freedom. It seems like solutions like what you're talking about with Vinci, but also, I remember Brad and Sisket at Istari Digital did a webcast about two weeks ago about how they're also expanding the art of the possible into billions of degrees of freedom, almost an infinite design space, which is just kind of mind-blowing for me. That's sort of the OpenAI moment: our design space is no longer constrained to two to three variables. You're talking about 300 million of them, right? I think you even quoted me a figure, something like a trillion equations in 27 hours. Some insane number.

Hardik Kabaria

Yeah, I mean, yes, our software has enabled an inference that involved 1.2 trillion degrees of freedom. And that inference ran in 24 hours. So yes, it is massively scalable.

Michael Finocchiaro

Ha! It's insane.

Andy Fine

I have a question: what in the hardware world made this possible for you to do? Because I'm going back 20 years to when I was doing CFD. I mean, the 32...

Hardik Kabaria

Nanometers. So, which type of hardware do we run on? I'll answer both questions. First, where does this appear? It appears in every semiconductor company. Why? You have nanometer-level features on a centimeter-scale domain. Just performing a manufacturing-resolution simulation would need trillions of degrees of freedom: an extremely small feature size on a really large domain, a factor of 10 to the power 6 between the feature size and the domain size.
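Hardik's 10^6 ratio is easy to verify, and cubing it shows why manufacturing resolution implies trillions of unknowns before any mesh adaptivity. The 10 nm feature size below is an illustrative choice:

```python
import math

feature = 10e-9   # assume a 10 nm feature size (illustrative)
domain = 1e-2     # a 1 cm domain, as in Hardik's example

ratio = domain / feature
print(f"feature-to-domain ratio: 10^{round(math.log10(ratio))}")  # 10^6

# A uniform 3D grid at feature resolution (the worst case, with no adaptivity):
cells = ratio ** 3
print(f"uniform 3D grid: {cells:.0e} cells")  # 1e+18 cells, which is why even
# aggressive coarsening away from fine features still leaves trillions of DOF
```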

Michael Finocchiaro

Yeah, okay. It's like an ant on a huge table or something.

Hardik Kabaria

It's way worse than that, but yes. So that's why. And second, to answer Andy's question about which hardware we run our product on: our product needs at least 80 gigabytes of GPU memory to run. Yes, our models are not small; they cannot be inferred on CPU. We need a GPU. It works on both Nvidia and AMD: H100 or higher on the Nvidia side, MI300 or higher on the AMD side. To answer that particular prompt, it took one H200 node, which has eight cards, running for 24 hours to perform that analysis.

Michael Finocchiaro

Well, I'm just curious too, because I've talked to a couple of companies that are doing quantum. They're already working on the quantum machines we have today, which are still relatively small and young, right? But will quantum also be a massive game changer for you? Would your stuff run on that? I know it's a different kind of coding, but with quantum you can sort of simultaneously solve thousands, millions, an infinite number of equations. It just makes me think about the possibilities here.

Hardik Kabaria

I actually think we are so focused on shipping, and today's hardware is GPUs, so that's what we're focused on. As a startup, you have to figure out the aperture, how to make an impact, and then grow from there. That's our style. Quantum technology, I believe, will become real, and when it does, we'll figure out how to adapt to it in a production-scale environment. But no, we haven't looked into quantum computing today. GPUs are changing and iterating extremely fast, so what we are focused on developing is GPU-native code. That means that as the next generations of GPUs keep coming, which is going to keep happening from NVIDIA and AMD and a handful of others, our code runs automatically on those next-generation devices and creates a benefit for the user.

Andy Fine

I'd add some insider information on that without breaking anything. The quantum realm sounds like something from a Marvel movie right now, but quantum computing on the atomic scale is absolutely fascinating. If you consider the waitlist there was for these high-end GPUs recently, and quantum is that much harder to build, you can imagine what the waitlist is going to be for those computers. So we're a little while off that, I think. Building natively for GPUs is certainly the way things are going at the moment. I just want to clarify a couple of things that you said about the compute requirements and the hardware requirements. Are you saying that you need this 80 gig...

Hardik Kabaria

GPU.

Andy Fine

Yeah,

Hardik Kabaria

For inference, yes. Our models are heavy. They provide an extremely high throughput benefit, but they are heavy, and hence it requires 80 gigabytes of GPU memory at the time of inference. But the throughput it creates is insane: on a single card, you can run 10,000 simulations, 10,000 analyses, in 24 hours. So yes.
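The quoted throughput figures are worth a quick sanity-arithmetic pass (using only the numbers stated in the conversation):

```python
# Sanity arithmetic on the throughput claims quoted in the conversation.
analyses_per_day = 10_000          # quoted: 10,000 analyses on one card in 24 h
seconds_per_day = 24 * 3600
per_analysis = seconds_per_day / analyses_per_day
print(f"{per_analysis:.2f} s per analysis")  # 8.64 s

dof, solve_seconds = 300e6, 40.0   # quoted: 300 million DOF solved in 40 s
print(f"{dof / solve_seconds:.1e} DOF per second")  # 7.5e+06 DOF/s
```

Roughly nine seconds per multi-million-DOF analysis is the number behind the "order of magnitude more exploration" claim earlier in the episode.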

Andy Fine

Mm-hmm. Okay.

Hardik Kabaria

But all of this depends on the complexity of the prompt. These numbers will look different for different users and different benchmarks; folks who are looking at analyses of a different scale will get different performance. The inference time for a prompt can be pretty long, too. We have had cases where a user created a prompt whose inference ran for seven days. So it scales depending on the complexity and the compute that is available.

Michael Finocchiaro

So what are the first two or three wedge use cases for Vinci? Is it going to be along the lines of: I'm designing a new server, I'm designing a new chip? What are the first two or three wedge cases?

Hardik Kabaria

Yeah, it's definitely those. I'm designing a new semiconductor component; I'm designing boards. The boards can be for consumer products, in laptops and phones, through the server racks that are going to sit in data centers, to avionics boxes and control systems in robots, whether in defense applications or battery systems: wherever there is heat transfer, that's starting to open up for us. And in terms of industries, we are talking to memory companies, foundries, fabless companies, electronic manufacturing service providers, as I said, some battery and automotive companies, all the way to a steel plant. A steel plant has a furnace, and a furnace is a heat transfer problem governed by the same equations, like Andy said earlier. They also care about understanding how the plant is processing steel and whether they want to change anything about the process parameters based on what is going on, so they can better understand and better control their systems.

Michael Finocchiaro

So who are the buyers? This sounds rather technical, like when you're going in, you've got to find the engineering department or the geeks who are really trying to solve these problems.

Hardik Kabaria

I mean, that part isn't new. This is definitely an engineering tool for engineers, right? And, like Andy mentioned earlier, they all care about trust, but verify. And that absolutely is the only way it should be, because in hardware you cannot do a patch fix. Once the product is out, the product is out, right? So the bar is pretty high. That also means the teams we interact with sit under

You know, you would have a VP of hardware, or there would be fellows in the semiconductor industry. Similarly, people who own product performance. They care about creating a methodology that enables them to ship a better product faster. That's the team we interact with, like any other engineering tool that serves an engineering organization.

Andy Fine

Following on from that, you mentioned a few different use cases. You said at the beginning, we've got to have focus; we've got to focus somewhere specifically. So you no doubt have current benchmarks and current validation cases. Obviously, we're not going to talk about any current customers, because that's all confidential. But you have validated use cases you're going after now, where you can go into a client, into a prospect, and say: look, this is the use case we've got, this is the proof we can do it, and this is how we're going to measure the risk of failure. And I'm going to mention the word risk here because, at the end of the day, I think any software, any simulation product, is a risk management tool. How can we reduce the risk of failure once we actually start building this stuff? So what is your,

Hardik Kabaria

Yeah. Sure.

Andy Fine

your ultimate focus, and what are you really going for right now?

Hardik Kabaria

Yeah, I mean, maybe the same thing said differently: if you have a heat transfer problem, we are reaching out to you. And actually, a lot of engineers are reaching out to us as well, so there is both inbound and outbound across the industry. It definitely started in semiconductor electronics; we started with that. But it's actually expanding a bit on its own, as there are thermal engineers everywhere. There are thermal engineers doing these types of analysis for battery companies, thermal engineers doing this type of analysis for defense companies, and thermal engineers at manufacturing plants and electronic manufacturing service providers, like I said. So today, the place where we definitely have a wedge is if you hire or have a thermal engineer on your team.

Which is not specific to an industry, though that was definitely a good starting point. Because if you think about it, even today, helping the world design better, higher-performing semiconductors faster is a problem a lot of people are working on. That's the place where we can create value.

Michael Finocchiaro

Is there also an idea that one of the other optimizations you're trying to accomplish is reducing the amount of energy required for this stuff to work? I mean, we were talking about warping in manufacturing, but I suppose energy consumption is also a mega concern, right? Because GPUs burn a lot of cycles.

Hardik Kabaria

Absolutely, yes. A lot of teams are trying to increase the power in the semiconductor part so they can make better, faster-performing devices. But on the downside, that is met with: hey, you create an amazing ASIC, a high-power ASIC, but the ASIC, the XPU, CPU, GPU, whatever, eventually will be integrated on a board. And the board is what we interact with when we do inference, training, whatever. And if you cannot create cooling for that chip, you will have a problem downstream: either you will have throttling, or you will have some sort of reliability problem. So there is a real push and pull, a tension, between creating a high-performing device and making sure all the power the device is going to generate, you can cool. And cooling itself has its own power consumption problem, at the data center level or the device level.

Andy Fine

Can you just define throttling for us and what you mean in this sense? Throttling is clearly a failure mode within semiconductors. Can you define what it is and what actually happens there?

Hardik Kabaria

Yeah, so let's say you have a high-performing CPU, and it is generating power as somebody creates a workload that requires some calculation; this applies to CPUs, GPUs, everything. And there is a thermal range in which these devices operate meaningfully, successfully. In steady state, heat generated equals heat dissipated; that's the law of physics. Now, if you cannot dissipate heat fast enough, through a heat sink, through a cooling channel, through a cold plate, then the temperature within the semiconductor component keeps increasing. At some point it hits a limit, and the chip automatically throttles its own performance. Those mechanisms are built in because the semiconductor industry cares about creating parts that last a long time in their environment. And you can imagine this can go crazy. What if you have a chip inside a car? There are so many semiconductor components in a car. Cars operating in Arizona, cars operating in Alaska: things are going to be different. That's way more complex than a data center. A data center you can control; you can't control an autonomous car, which is a physics AI model operating on the street. In different environmental conditions, everything is generating heat, and you don't control it. You are going to have an issue with the performance at which inference happens. So when I see a car, I think there is a big data center rolling down the street, and it has a cooling problem that directly affects the inference that is doing the driving.
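Hardik's description maps onto a simple lumped-capacitance heat balance: power in, heat out through a thermal resistance, and a firmware cutback when temperature hits a limit. A minimal sketch with purely illustrative numbers (not any real device, and not Vinci's model):

```python
# Lumped-capacitance sketch of thermal throttling: a chip dissipates
# power P through thermal resistance R_th to ambient; when its
# temperature reaches T_max, it cuts power until it cools back down.
# All parameter values are illustrative only.

def simulate(p_full=100.0, t_amb=25.0, r_th=0.8, c_th=50.0,
             t_max=95.0, throttle_factor=0.5, dt=0.1, steps=20000):
    """Explicit-Euler integration; returns (final_temp, throttle_events)."""
    temp = t_amb
    throttled = False
    events = 0
    for _ in range(steps):
        power = p_full * (throttle_factor if throttled else 1.0)
        # Heat balance: C_th * dT/dt = P_in - (T - T_amb) / R_th
        temp += dt * (power - (temp - t_amb) / r_th) / c_th
        if not throttled and temp >= t_max:
            throttled = True       # chip protects itself by slowing down
            events += 1
        elif throttled and temp < t_max - 5.0:  # hysteresis band
            throttled = False
    return temp, events
```

With these numbers, the unthrottled steady-state temperature (T_amb + P * R_th = 105 °C) exceeds the 95 °C limit, so the device oscillates between full power and throttled operation, which is exactly the performance loss Hardik describes.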

Andy Fine

Would you say that throttling problem is the same as thermal runaway for EVs and so on? A battery pack has a certain temperature window it can operate in. Are we talking about the same family of failure modes in battery packs as well?

Hardik Kabaria

Exactly. So at a certain point it starts degrading; that's what happens in a semiconductor. In a battery it goes into a runaway problem that feeds itself until you cannot control it. And this affects even optical devices. I'm sure we are all hearing about going from copper to optical fiber to transfer data, known as co-packaged optics. So yes, this is how interesting the physics gets. We created an alternative that says: okay, now we do data transfer through optical fiber, and it doesn't generate heat because it's not copper, right? But it has a different problem: an even tighter temperature range in which it operates. So imagine you have an optical fiber sitting next to your CPU or GPU, which is generating heat. If you cannot dissipate heat fast enough from that neighboring component, it affects the data rate of the thing supplying the data. So it's becoming even more complex. This is what I was alluding to: you have multiple hardware companies creating parts that eventually have to come together in a system. And if the system doesn't work together, we do not get the benefit we care about at the end, whether it's for my phone, my laptop, or the inference that eventually gets performed in a data center.

Michael Finocchiaro

Where does Vinci run? Does it run on premises? How do you protect the customer's IP? Because obviously Intel is not going to want AMD to get that information. How do you protect the IP, and where does it run?

Hardik Kabaria

Yes, we deploy Docker containers on the customer's computing infrastructure. That means they can say, I want to use it on Azure, AWS, Google, my hyperscaler; that's fine. We ship Docker containers as over-the-air updates that go to them. We also have users who say, no, no, no, it's going to run in our own data center on a bare-metal server. And as you can imagine, companies in this space, in hardware generally, are so sensitive about their designs, as they should be; that's the thing they create. And hence we ship behind their firewall, ensuring their IT team and their legal team are completely in sync with what we are doing.

Michael Finocchiaro

So really, they're not running in the cloud per se. You're actually deploying inside their cloud or inside their own infrastructure.

Hardik Kabaria

Inside their cloud, it might still be the same hyperscaler, right? So it doesn't matter to us which cloud service provider you choose. We give you a hardware spec saying, hey, this is the type of hardware we need, and you can make those machines available on the hyperscaler or in your own data center. We have both flavors; we have seen usage both ways.

Michael Finocchiaro

So Andy, you're someone who knows this industry far better than even I do. What's your take? You've listened to Hardik for 40 minutes now, and you've talked to all these other vendors. How would you convince a skeptical practitioner to use Vinci?

Andy Fine

This is always the fun bit, because I'm still an engineer at heart, so I'm skeptical about absolutely everything. So I always say I need to be convinced of it myself before I can actually put forward a solution.

Michael Finocchiaro

Yeah

Hardik Kabaria

I think that is the right answer: convince yourself. And that's precisely what I tell my users. My benchmark does not mean anything; my benchmark is good enough for me to have this meeting with you. Your benchmark matters a whole lot. Let us enable you and show you on your benchmark. That's when you establish trust; there is no other way. Otherwise, we go away. We are not ready for

Andy Fine

I think you've hit the nail on the head there. You want to establish trust before anything else; I think that's absolutely right, especially going back to what I said: there's a lot of fluff out there in the AI world, and fluff is the most polite term I can think of. So, Hardik, for the customers where you're deployed, what is your path to adoption? You've said you go to the engineer and say, okay, try it on your model.

It should work on your model. But that's just the initial step; your foot's in the door, you're doing something there. What is your path to adoption and deployment with your customers?

Hardik Kabaria

Correct, yes. So we like to get attached to hardware programs, because at the end, that's how these teams operate: they actually have a problem. Our style of engagement is, yes, we'll do a trial period, but during that trial we are going to help you figure out whether this is useful to you on a program you are working on right now. And if the answer is no, then we are just going to become another rock in your pack, so we actually walk, because then we haven't identified a problem that you need a solution for. So it cuts both ways. On the other side, when it is useful, we get attached, and then you rely on us. That's it. That's the type of environment and culture we create when we engage with a customer, with their team, with a sponsorship that says: we are trying to solve this problem. So understanding the problem the engineers have, why the thermal problem is important for their hardware program, is the most important part of the work we do. After that, it's physics we all understand.

Michael Finocchiaro

Well, we're almost at time. Before we go, Hardik, I wanted to ask: what's the next step for the engineering leaders listening to this podcast, other than calling you? What's the next step for them?

Hardik Kabaria

Well, I'll take a different route. I think we have been able to teach language to AI with an enormous amount of data, and it has done creative tasks for us. Even when I ask it to write code, it generates different code for me; it might generate different code for Andy. Physics, by contrast, is extremely universal. I would go so far as to say that heat transfer was the same for the dinosaurs, and it's going to be the same long after us. It's physics; it's as simple as that. So when we start teaching physics to the AI model, it enables us to understand everything around us: the parts we are going to design today, the parts we are going to design tomorrow, and it also operates on nature-made parts. So I think the possibilities of what we can do with the raw capability of physics and AI are going to be mind-boggling. I don't think any of us can put an envelope on it and say, this is what it will do. I think it's going to surpass all of our expectations. That's the future I'm super excited about, and we are just going to be part of pushing that envelope. Let's teach AI physics, the real physics.

Michael Finocchiaro

Yeah, I like how last year at GTC, Jensen was framing a lot of things around the AI factory of the future. And I feel it's pretty awesome that you guys are already putting that brick in place, the physics brick of the AI factory of the future, with Vinci. Really cool. Go ahead.

Andy Fine

I'm going to put one spanner in the works there. Physics is extremely complex, and depending on the physics you're simulating, it gets more and more complex. So being application-specific, yes, the AI can solve the physics for this problem or that problem. But any notion that there is a generalist foundational AI that can do everything: I'm sorry, that's simply a false statement. But as Hardik says, he's focused on the semiconductor... Okay, today. I'll give you that: today.

Hardik Kabaria

Today, today, this is the place where I'll challenge you. Today. Yes, it's not happening. Yes, it hasn't happened yet. And I believe in the optimistic

Michael Finocchiaro

Yeah

Hardik Kabaria

part, it's gonna happen. But this is the place where you and I can have a beer bet.

Andy Fine

I think we can have a bet, and I would say once quantum comes on board, once we can compute at the atomic level and do that many operations per second, then I think we'll be there.

Michael Finocchiaro

Hahaha.

Hardik Kabaria

Do you just put a time on it?

Michael Finocchiaro

When we get the 150-qubit machine, right?

Hardik Kabaria

Now you're just arguing time axes. In my mind, we've got to take small steps; that's my style. If we cannot do the first thing, the next thing isn't going to happen. And yes, I think we are starting to build on it, and that's the exciting part.

Andy Fine

I think that's a fantastic approach. Start small, know what you know, and then build from that. I think that's a great approach.

Hardik Kabaria

Awesome.

Michael Finocchiaro

Thank you very much, Hardik. Thank you very much, Andy, also for your perspective. And we'll see you on the next episode. And thanks, everybody. Bye-bye.

Andy Fine

Thanks Peter.

Hardik Kabaria

Thanks for having us.
