Will there be an Official PLECS AI Agent in the Future?

Agents rule the world :joy:

Hi @yang, thanks for the question!

It’s an interesting idea. The term AI agent can mean many different things (from simple assistants that help with documentation/search to more autonomous systems that help execute workflows), so I’m curious what you had in mind when you wrote it? Are you thinking of:

  • something that helps navigate PLECS documentation / examples?
  • a tool that could help automate simulation tasks or parameter sweeps?
  • something that could generate or suggest models / scripts for common tasks?
  • or something else entirely?

Just trying to understand the use-case you’re picturing.

Hi, thanks for following up!

What I have in mind is an agent that can actively participate in the simulation workflow. Ideally, I’d like to be able to describe a converter in natural language and have it built automatically—for example, “build a peak current-mode controlled buck converter with the following LC parameters…”—and then run parameter sweeps using natural language commands. That way, I can tightly integrate the design process with simulation without manually running each case to compare results.

I’d also like the agent to assist in writing simulation scripts—for instance, generating a positive/negative sequence impedance sweep script for a three-phase inverter to perform custom advanced testing.

Also, I hope the agent can take part in the entire HIL testing process. After the model is built, the agent would automatically deploy it to the HIL system, run the desired test sequence, and manage the whole process without human intervention.

Finally, I’ve been thinking about whether PIL (Processor-in-the-Loop) could make a comeback. In the context of AI-powered automated testing, which matters more—real-time performance or accuracy? It seems to me that PIL could potentially deliver more accurate results than HIL, and with AI running tests 24/7 without human supervision, real-time constraints become less of a concern.

These are some of the ideas I’ve been thinking about—hope they don’t sound too naive!

Thank you for your interesting insight. I agree that some of the mentioned use cases are well suited to some sort of AI support, although I am not entirely sure whether this is something PLECS itself should provide. Let’s go through it step by step.

What I have in mind is an agent that can actively participate in the simulation workflow. Ideally, I’d like to be able to describe a converter in natural language and have it built automatically—for example, “build a peak current-mode controlled buck converter with the following LC parameters…”

The main question that arises here is: where should PLECS obtain the data for training such a system? PLECS provides demo models, but these are far from representing real, fully implemented industrial circuits. Even if all openly available schematics were used for training, the most sophisticated and optimized designs are typically not open access.

Furthermore, if the model is trained only on already available circuits, the output will essentially be an average of existing solutions. In that case, it is questionable whether such a system could truly surpass the current state of the art.

In my opinion, this is rather a task for individual companies. They could develop an internal agent trained on their own past designs. This would help engineers implement previously successful concepts and avoid repeating mistakes made in earlier projects.

…and then run parameter sweeps using natural language commands. That way, I can tightly integrate the design process with simulation without manually running each case to compare results.

I’d also like the agent to assist in writing simulation scripts—for instance, generating a positive/negative sequence impedance sweep script for a three-phase inverter to perform custom advanced testing.

Also, I hope the agent can take part in the entire HIL testing process. After the model is built, the agent would automatically deploy it to the HIL system, run the desired test sequence, and manage the whole process without human intervention.

PLECS and the RT Box already provide APIs via XML-RPC or JSON-RPC, which can be accessed, for example, using Python. Personally, I trained an LLM on the PLECS documentation (PDF Version or Online Version). Since modern LLMs are very strong at generating Python code, this significantly accelerates scripting and automation tasks.
In addition, you can use an LLM tool that is already approved and verified within your company, instead of relying on a built-in solution inside PLECS.
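As a minimal sketch of what such Python-driven automation can look like: the snippet below connects to the XML-RPC server of PLECS Standalone and runs an inductance sweep. The model name, the `L` model variable, and the sweep range are placeholders; the endpoint and the `plecs.simulate` call with a `ModelVars` struct follow the pattern described in the PLECS manual, but check the manual for your version before relying on it.

```python
import xmlrpc.client

def sweep_values(start, stop, n):
    """Evenly spaced sweep points, inclusive of both endpoints."""
    step = (stop - start) / (n - 1)
    return [start + i * step for i in range(n)]

def run_sweep(model="buck_demo", port=1080):
    # PLECS Standalone must be running with its XML-RPC server enabled
    # (the default port shown here is an assumption; see the manual).
    server = xmlrpc.client.ServerProxy(f"http://localhost:{port}/RPC2")
    results = []
    for ind in sweep_values(100e-6, 500e-6, 5):  # hypothetical model variable "L"
        opts = {"ModelVars": {"L": ind}}
        # simulate() returns the simulation results as a struct.
        results.append(server.plecs.simulate(model, opts))
    return results

if __name__ == "__main__":
    run_sweep()
```

An LLM that knows this API pattern can generate the whole sweep script from a one-sentence prompt, which is exactly the acceleration mentioned above.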

In general, I would argue that PLECS should focus on providing a powerful and comprehensive API that allows users to define and control simulations programmatically. Many functions are already accessible, but some capabilities are still missing (for example, drawing a circuit entirely through commands).
Adding a built-in AI agent directly into PLECS, on the other hand, is something I consider rather unlikely.

Hi, thanks for the reply! Based on my current understanding of LLMs, I’m on the same page as you.

Regarding your point about training an LLM on the PLECS documentation—how did you go about that? Could you give me a high-level overview of the process?

I hope to join your discussion. I have used both the XML-RPC and JSON-RPC interfaces—the former with Python and the latter with MATLAB—to extract data for training corresponding neural network models. However, I encountered a common issue: as the number of samples increases, the time required to obtain results from single/batch simulations becomes quite long (on a personal PC). My solution was to develop my own time-domain model of the converter. Based on this, I have a few questions I’d like to ask:

  1. How can I use the LLM you have trained? I couldn’t find this in the user manual.

  2. For purposes requiring batch data training, could PLECS provide a simplified simulation model to speed up the process? Alternatively, could it offer standalone loss or magnetic component modules? Users could then build their own time-domain models and call PLECS’ loss/magnetic modules to obtain the necessary data.

  3. Given the powerful capabilities of current models, could PLECS develop a dedicated model to answer user questions and help them set up simulation models more quickly? I understand this would involve costs and could be offered as a separately subscribable feature.

These are some of my thoughts, which I wanted to share with you all.

So far, it’s quite simple. We created a bot in ChatGPT and uploaded the documentation as a PDF. Since the bot is aware of the documentation, finding the correct commands is much faster, and it becomes even more efficient as you use it, learning from your previous code.

I am extremely wary about AI but agree 100% with your analysis. What I think you describe is an expertly designed introspective and user-programmable interface that drives a superb extensible simulation engine. The latter part is of course already there; the first part just needs some real, not artificial, intelligence. The main problem might be that the introspection and extensibility parts clash with the objectives of commercial programs. In principle FMU solves this problem, but what I have seen to date is that it buries the basic functionality under layers of formal canine excretion.

Up until now, I’ve found that PLECS uses a pure text-based format for model description. This means LLMs can manipulate simulation files just as easily as they refactor code. My initial tests show massive potential: I fed the model some simple PLECS examples, and then it successfully generated a SEPIC circuit and an LC oscillator with parameter sweep functionality. Aside from a few minor component orientation issues, the simulation results were spot-on on the first try.
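To illustrate why a plain-text model format is so convenient for this kind of manipulation, here is a toy sketch of patching a parameter value with ordinary string tools. The `Name "..."` / `Value "..."` syntax shown is an assumption about the file format for illustration only; inspect a real `.plecs` file from your installation before using anything like this.

```python
import re

def set_parameter(model_text, name, new_value):
    """Replace the quoted value of a named component in PLECS-like text.

    Assumes entries of the (hypothetical) form:
        Name "L1" ... Value "100e-6"
    Real .plecs files should be inspected first.
    """
    pattern = re.compile(
        r'(Name\s+"%s".*?Value\s+")[^"]*(")' % re.escape(name),
        re.DOTALL,
    )
    return pattern.sub(r"\g<1>%s\g<2>" % new_value, model_text, count=1)

snippet = 'Component { Name "L1" Value "100e-6" }'
print(set_parameter(snippet, "L1", "220e-6"))
# → Component { Name "L1" Value "220e-6" }
```

The same property is what lets an LLM emit or refactor an entire model file as plain text, just as it would refactor source code.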

Hi, I don’t know how FMU works internally, but I would like to know your perspective on the comparison between PLECS’ pure-text storage format and FMU. Do you think PLECS’ format is inherently better suited to Large Language Models (LLMs)?

Have you tried parallel simulation? I believe that increasing speed should not come at the expense of model accuracy; otherwise, wouldn’t the resulting data be compromised?

I only had a brief look at the FMU documentation. It is very tough reading, almost legalese. The PLECS text file descriptions are easy to read, but there is some encoded binary that I don’t understand immediately, and the keywords seem to be undocumented. Most of it can be deduced, but I wouldn’t want an AI to guess on that level for something this important. Therefore the formally specified FMU interface is probably the safer one for an AI to handle. I wish you luck.

PS: I don’t see any FMU examples in PLECS 5.0 yet? I guess it should be easy to let MATLAB’s FMI drive a PLECS-exported model?

Thank you for this interesting discussion so far.

Using an LLM to generate a PLECS file directly is something that I would consider a possible workflow. As mentioned above, PLECS Standalone files are text-based and relatively simple for an LLM to interpret (and partly also for humans).

As a quick test, I took our most basic boost converter demo model, uploaded it to ChatGPT, and asked it to optimize the model for 250 W output power. Even though I accidentally wrote “buck converter” in my prompt, ChatGPT corrected me, derived the relevant equations, and explained the design steps. It then generated a modified PLECS Standalone file which ran successfully and delivered 250 W at the output.

Note that this test was performed using the browser version of ChatGPT without any specific training or customization.

TL;DR: Using LLMs for model generation is definitely a possible workflow.
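For reference, the sizing involved in such a test is simple enough to verify by hand against the generated file. Assuming, for the sake of illustration, a 24 V input and 48 V output (the demo model’s actual ratings may differ), the ideal CCM boost relations give:

```python
def boost_operating_point(v_in, v_out, p_out):
    """Ideal CCM boost converter: duty cycle, load, and input current
    for a target output power (losses neglected)."""
    duty = 1 - v_in / v_out       # D = 1 - Vin/Vout
    r_load = v_out ** 2 / p_out   # R = Vout^2 / Pout
    i_in = p_out / v_in           # ideal average inductor current
    return duty, r_load, i_in

duty, r_load, i_in = boost_operating_point(24.0, 48.0, 250.0)
print(f"D = {duty:.2f}, R = {r_load:.2f} ohm, Iin = {i_in:.2f} A")
# D = 0.50, R = 9.22 ohm, Iin = 10.42 A
```

A quick check like this is a good habit whenever an LLM hands back a modified model file.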

FMUs, on the other hand, are meant to exchange models (Model Exchange) or even solvers (Co-Simulation) between different simulation tools. Using them as a mechanism for AI-based model generation is not something I would expect to see.

If you are interested in FMUs, have a look at this forum post: FMU workflow in PLECS

As @yang mentioned, parallel simulation would be the correct approach. The important point is to avoid transferring too much data and instead focus on exchanging only key parameters at specific points in the simulation. With this in mind, you should be able to speed up your simulation roughly in proportion to the number of available CPU cores.
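As a rough sketch of what such a parallel sweep could look like from the Python side: the `run_case` body below is a stub standing in for a call into PLECS (in practice each worker would open its own RPC connection, which is an assumption about the setup, not verified behavior), and only a scalar key metric is returned to keep inter-process data transfer small.

```python
from concurrent.futures import ProcessPoolExecutor

def run_case(inductance):
    """Stub for one simulation run. In practice this would connect to
    PLECS via XML-RPC and simulate with ModelVars = {"L": inductance},
    then reduce the waveforms to one or two key figures of merit."""
    # Placeholder "result": only a key metric, not full waveforms.
    return {"L": inductance, "metric": inductance * 1e6}

def parallel_sweep(values, workers=4):
    # One process per case, up to the number of available CPU cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_case, values))

if __name__ == "__main__":
    print(parallel_sweep([100e-6, 220e-6, 330e-6, 470e-6]))
```

Whether this scales linearly with core count depends on keeping the per-case result small, as noted above.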

If you are still experiencing issues, feel free to send me your model directly.

So, will PLECS support this model-generation workflow in the future? Maybe even offer some official skills for it. This would really be a game changer!

Okay, I’ll give it a try. Thanks!