How to use 'make' or 'external build system' for C-Script?

I am using some C libraries for embedded systems, particularly ‘SCS’ and ‘OSQP’, which normally need a ‘make’ file to link all the dependencies.

In PLECS, the C-Script block does not seem to provide access to a terminal or an external build system.

How can I implement this in PLECS? Is there a similar way to do it as you would in a computer terminal?

You cannot modify the PLECS make behavior. You can include external libraries directly as C code, which works well for simple situations. See Using header file in C-script - #3 by Marco_Guerreiro for an example. However, for more complex situations I would recommend the DLL block, which gives you the most flexibility for compiling your code and linking against external libraries.

DLL is definitely the way to go. I’ve used OSQP in PLECS and it was horrible, had to include source files by trial and error, very difficult to debug, and the C-script block is… lacking.

The DLL approach, on the other hand, is much better. You can even debug your code line by line if you want.

I have a setup that in fact uses two DLLs: one DLL that just talks to PLECS and to a TCP server, and another DLL that is the TCP server and runs my controller. “Why so complicated?” you might ask… well, I’ve found that this scheme gives you pretty much infinite possibilities. A TCP server is just… a TCP server, and you can run it anywhere, which means your controller could be anything. And PLECS won’t know or care, so you don’t have to change anything on the PLECS side (except for the IP of your server, of course). Some possibilities you have are:

  • You’ve tested your controller and wonder how long it would take to execute it on your microcontroller? Just have your microcontroller run the TCP server, and just like that you have a closed-loop system, with PLECS as the plant and your microcontroller running your control algorithm

  • You want to test some nonlinear MPC with different solvers but you’re not sure if it will even work and don’t want to go through the pains of C? Just run the TCP server in Python and run your controller there, where it will be much easier to test your ideas, then decide on the solver and just do the final step in C

  • You’re experimenting with SoCs and want to accelerate your MPC algorithm with the FPGA, but don’t want to test it straight on the hardware and don’t have a $$$ hardware-in-the-loop platform? Just have the processor of your SoC run the TCP server, relay data to the FPGA and happy closed-loop FPGA testing. Hardware-in-the-loop is just simulation anyways

I must admit there is one notable issue with this method: it can be slow, because PLECS exchanges data with your controller every sampling period. Personally, however, I’ve always found the simulation time tolerable.

I’ve found this scheme so useful that I’ve created a small library to take care of this PLECS/DLL/TCP server connection, you can find more info here.

There’s an example in the library that uses make to build the PLECS DLL and the controller. It should work without problems, as long as the library path contains no spaces.

For simple stuff using make to build your controller works, but if you want to go down the MPC path, I would recommend using CMake. I don’t have an “official” example for this case, but I have attached an “unofficial” example here.m (142.1 KB)
(just rename the extension to zip to access the contents).

I can’t guarantee you’ll get it to work right away, as you’ll need to install additional Python packages that I’ve created and that are not on PyPI (plecsutil, pyctl), but at least you can have a look at how you could get it to work. My workflow for this situation is:

  • Use Python to generate your C code, build the controller application with CMake, run PLECS, grab the results, make pretty plots

After your first Python script works, running simulations with different controller parameters (e.g. weighting factors), plant parameters, etc., is just one for loop away.


This is great Marco, thanks for sharing this with the community! I look forward to experimenting with the tool you have developed.

Hi Marco, your work is genius! I wonder if your framework supports other communication protocols like UART? TCP is hard to implement on the TI C2000 or STM32, I think.

Hi Yang,

I’ve tried UART with the C2000, but I would not recommend it because it is very slow.

It is not difficult to implement a TCP server on the C2000. We’ve done it, and it works better than UART. All you need is an Ethernet shield like Wiznet’s W5500 or similar. And depending on the STM32 you have, it might already have Ethernet on chip, which makes it even simpler. If not, you just need an Ethernet shield.

I’m currently working on a project with C2000 that uses the W5500 to provide the Ethernet interface. If you want, you can have a look at the code here. The biggest headache with the C2000 is the 16-bit-char issue that can mess up some data transfers, but we took care of that in the SPI write/read functions.

If you have an STM instead, you basically have to replace the c2000_w5500.c/h files with stm32_w5500.c/h files and replace the C2000 SPI functions with the STM ones.

You are a real expert on this, Marco! I will try your solution later! Thanks a lot!

Hi Marco! I have successfully run your buck example model on my Windows PC! When running the controller and the PLECS simulation on the same PC, the simulation is very fast, just like a normal simulation. But I ran into problems when trying to make two PCs work together.

I connected the two PCs with a cable; the link speed reported by the PC is 1 Gbit/s. I expected the simulation to finish quickly, just like on one PC. However, it runs very, very slowly: it takes over 3 minutes to finish a 0.1 s simulation. To investigate, I used Wireshark to monitor the TCP connection, and I found that there is always a 50 ms delay in every communication step. The DLL block runs at 20 kHz, so in every 0.1 s of simulation, over 200 seconds are wasted just because of the 50 ms delay.

So Marco, have you run into this problem on a Windows PC before? I’ve been confused by this all day :weary_face: and hope you can give me some advice.

This figure shows the capture in Wireshark; you can see there’s always a 50 ms delay.

Hello Yang,

I have no idea where this delay is coming from, maybe there’s something off in the way I’m using sockets to exchange the data.

I’ve tried running here in two different computers to see if it would be slow too, but I got into some technical issues and was not able to do the test. I’ll try again next week and let you know.

But in general, PIL will be slow. When I do PIL, I keep my simulation time as short as possible, sometimes in the millisecond range.

Hi, Marco!

After several days of trying, I finally figured out how to solve the 50ms delay issue. For testing on Windows, we need to set the TcpAckFrequency registry value to 1 on both PCs, as mentioned in this Microsoft article. This change significantly improves simulation speed!

New registry entry for controlling TCP ACK behavior - Windows Server | Microsoft Learn

Additionally, I used your buck converter example as a testbench to compare the speed of different communication protocols:

  • For TCP, I kept your original simulation setup (except the time span, I set it to 1s)

  • For serial communication, I used the officially provided PIL block in PLECS (which, although deprecated, still runs without errors)

To optimize serial performance, I used an RS422 module and a high-speed USB-serial converter supporting baud rates up to 12 Mbaud. Here are the results for a 1-second simulation:

  • TCP 100M between 2 PCs: 1m 35s

  • TCP 1000M between 2 PCs: 22s

  • Serial 115200 baud between PC and 280039: 2m 20s

  • Serial 750000 baud between PC and 280039: 50s

Next I will check your socket implementation to see if there are ways to make it faster, and I will try TCP communication on my 28377 board with the W5300.

Marco, could you please run your buck example between two PCs to verify my test results? I’d appreciate it if you could help confirm my findings. I think both your OPiL and the PLECS PIL block present a highly viable solution for applications with low switching frequencies, such as IGBT-based inverters. It enables us to do software development and functional verification through PIL first, before committing to hardware testing. By the way, if PIL is fast enough, is HIL still necessary? The only additional thing HIL tests is the peripherals, and I think those can be tested carefully before power-up.

Best regards

Hey Yang,

I still wasn’t able to run the two-PC test here, as I’m facing some technical issues. But I’m trying…

I’m really glad you were able to speed things up. Maybe you can create an issue in the OPiL repository and contribute to the code if you want.

Regarding the test you’ve done, I’m really surprised that TCP 100M and serial 115200 are only a minute apart. I suppose the C2000 + W5500 will be slower than TCP 100M between two PCs, since the Wiznet shields are 100 Mbit/s, I think, plus there’s the C2000 SPI signalling and processing overhead. But still, it would be interesting to see how long it takes.

About HiL, you’ve mentioned something that I often think about. In my opinion, HiL is just a much faster simulation. Like you said, the only additional thing that HiL tests is the peripherals, but in my opinion there are other reliable ways to test peripherals, and this is not something you need to test every time. Also, HiL won’t give you the switching and measurement noise that you’d get with the real converter, so you can’t really test any filters you might have; HiL is, in my opinion, just a fast simulator. PiL has the advantage that it is much cheaper, and you can have more complex simulation models without limitations. Also, with PiL you can even debug your code line by line while running a closed-loop test, which I don’t think is possible with HiL.

Would you mind telling me what you’re trying to achieve with PiL? Do you want to check the execution time of your controller, or are you checking whether your C code is correct? Because if you’re just checking your C code, I would say you don’t need the controller hardware at all; testing it as in the buck example already ensures that your C code is correct.

Hi Marco,

After spending several days learning PIL technology, I’ve realized that the communication speed mentioned in papers, such as baud rate or TCP link speed, is not the most important factor. In PIL simulation, data needs to be transferred over 20,000 times per second. Based on my tests, if the communication speed is sufficiently high, the pure communication time accounts for less than 10% or even 1% of the total simulation time. The most significant factor is actually the delay introduced by the communication process in PC drivers and applications.

For serial communication on a Windows PC, there is always an IOCTL_SERIAL_WAIT_ON_MASK period, which takes at least 2 ms for each data transfer. This means that if the control task executes 20,000 times in one second, over 40 seconds would be spent just on this WAIT event, while the entire simulation only lasts 50 seconds. Moreover, most serial drivers on Windows don’t even allow this setting to be configured. In such cases, you might end up waiting over 30 ms for each data transfer, making the simulation extremely slow.

As for TCP communication on a Windows PC, there is a similar configuration issue to IOCTL_SERIAL_WAIT_ON_MASK. The solution was shared in my previous post. Additionally, inspired by AI, I discovered that setting TCP_NODELAY when opening the socket can significantly speed up the simulation! Today, I tested the buck example again, and with 100M TCP communication, the simulation took only 6 seconds to complete 1 second of simulation time! Just add a few lines of code to both the hostCommSockOpenConnection and targetCommSockOpenConnection functions:

/* Disable Nagle's algorithm so small per-sample packets are sent immediately */
int flag = 1;
if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (const char*)&flag, sizeof(flag)) == SOCKET_ERROR)
{
    LogError(("setsockopt TCP_NODELAY failed with error code %d", WSAGetLastError()));
}

I’m interested in PIL because I find it very challenging to implement control-mode transitions in simulation. For example, consider a grid-connected inverter that can also operate in island mode. The transition between grid-connected control and island control is difficult to realize using block diagrams. As you mentioned, SIL simulation can help with this transition. But what I really want is to test my control strategies on a real 5 kW inverter. PIL lets me verify not only the control logic but also protection, communication, and other functionality. The same code can then be applied directly to a real inverter design. Now I can confidently say that using TCP for PIL makes the simulation very fast, almost like a normal simulation, and I believe there’s no need for HIL testing just for software verification.

The only code I added to your OPIL framework is the snippet above. Since I’m not very familiar with Git workflows, you can easily add it yourself; it’s simple but makes a significant improvement. Once you fix your technical issues, could you please help me check my tests? I would be very grateful.

Best regards

Hey Marco!

I’m very, very, very excited to tell you that I’ve successfully run a PIL simulation based on TCP communication and the official PLECS PIL block on my 28377 board with the W5300! I tested it with your buck example, and it takes only 18 s to run a 1 s simulation! That’s much faster than I expected! Therefore, I’m not going to integrate your OPIL framework into my work for now, but I believe your OPIL could be even faster, because I think the official framework doesn’t set TCP_NODELAY and its protocol is more complex than yours. As a reminder, it took only 6 s to run a 1 s simulation in the two-PC test.

If you run into any challenges on your C2000+W5300 work, just let me know! I’m happy to help in any way I can!

[GIF attachment: PixPin_2025-11-21_00-43-29]

(Only a 4 MB GIF is allowed to be uploaded lol, but I promise this GIF is real-time: about 2 s to run a 0.1 s simulation.)

Hi yang,

I’m having some difficulties running the two-PC example, so I gave up. But I did another test instead: I used one PC to run PLECS, and my Pynq-Z2 board as the controller. This board already supports Gigabit Ethernet.

Then I tried several configurations of your TCP_NODELAY flag and ran the buck example for 0.1 seconds. Here are the results:

| Target TCP_NODELAY | Host TCP_NODELAY | Run time |
|--------------------|------------------|----------|
| 0                  | 0                | ~500 s   |
| 1                  | 0                | ~500 s   |
| 0                  | 1                | ~113 s   |
| 1                  | 1                | ~0.9 s   |

So, indeed, your suggestion made the execution more than 500x faster! Thank you so much for investigating this and sharing with everybody.

I’m glad you were able to get it to work!

Since you’re using the official PIL block, can you choose serial communication, or is it only possible to use TCP? I’m just curious about the speed difference…

Just one last comment, based on my own experience. When I’m working on a new control strategy that I’m implementing in C, I always use PIL as in the original buck example: the controller runs on the same computer as PLECS. C code is just C code, so if it works on the computer, it will work on your microcontroller as well. The control algorithm should run independently of the environment (simulation or hardware). In this C code I also implement protections, but they are separate from the control algorithm, because protection in the hardware might be different. What I always do, however, is set a flag when the protection has tripped, so the control algorithm knows what to do; then, when I implement protection in the hardware, I set this same flag in the hardware-specific implementation.

Then, after the algorithm is working, I run PIL with the actual controller, just to check whether the execution time on the microcontroller meets the real-time requirement. If so, it is ready to test on the hardware.

The reason I work this way is that PIL is slower (although you just made it 500x faster), and whenever you build new code and have to flash it to the controller, it takes much longer than simply building and running the code on your computer. Just keep these things in mind.

Thanks a lot for the collaboration and happy PIL testing :slight_smile:

Hi Marco, your test results align exactly with what I observed earlier. Setting TCP_NODELAY on the PC significantly improves the speed!

Regarding the official PIL block, you can choose from four different communication options, including TCP and serial. As I mentioned in previous posts, I’ve already tested serial communication on the 280039 + RS422: at a 750k baud rate, it takes about 50 s to simulate 1 s of operation. Meanwhile, TCP over a 100M network on the 28377 + W5300 takes around 18 s for the same 1 s simulation. At this point, raw speed is no longer the bottleneck; the main limitations are communication delays and protocol processing.

You’re right that the SIL method can test code logic, but I believe similar tasks can also be accomplished using code-generation tools like the PLECS Coder. It can even generate a full PIL validation environment, allowing you to test execution time on a DSP without manual coding.

However, the key advantage of the official PIL block is that it enables you to directly apply your code to a real converter—without rebuilding the code or even changing a single line! In most projects and academic papers, we’re required to perform HIL simulations or build actual converters to validate our theories. I’m sure you’re familiar with this process. But HIL simulators are expensive and often in short supply, requiring long waiting times. Building a real converter is also challenging and time-consuming, especially for Master’s and PhD students. If we can confidently verify that our code is correct through PIL, it would significantly reduce workload and boost team morale. PIL makes this possible!

By the way, do we really need real-time simulation for converter design? Although PIL doesn’t achieve real-time performance, our tests show it’s fast enough for practical use. For example, if a converter produces 20k switching events per second, PIL can simulate about 1k events per wall-clock second. For a 50/60 Hz inverter, that means observing about two fundamental cycles per second. As a digital replica of the real converter, PIL simulation is fast enough to analyze dynamic behavior when the actual converter fails a test. In practice you may still need to pause and zoom in for detailed waveforms. A ratio of 0.9 s of wall time per 0.1 s of simulated time is quite workable!

These are my thoughts on PIL simulation. I’d love to hear your comments, Marco!