IGBT with Diode: influence of switching times on conduction losses

I am simulating a specific voltage source inverter topology built with IGBT transistors.

Conduction losses and switching losses are calculated as in the PLECS examples and YouTube tutorials, using the Periodic Average and Periodic Impulse Average blocks.

The transistors are controlled by a hysteresis controller.

When I reduce the width of the hysteresis band, the transistors switch more often. This increases the switching losses, but the conduction losses stay at the same level.
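For context, here is a minimal sketch of why a narrower band means faster switching, assuming a single buck-type leg driving an inductive load. The exact expression depends on the topology and operating point, but the inverse proportionality between switching frequency and band width holds; all component values below are purely illustrative:

```python
def hysteresis_switching_frequency(v_dc, v_out, inductance, band):
    """First-order estimate of the switching frequency of one hysteresis-
    controlled leg driving an inductive load: the current ripple equals the
    hysteresis band, so a narrower band gives a proportionally higher f_sw."""
    duty = v_out / v_dc                          # average duty cycle
    return v_dc * duty * (1.0 - duty) / (inductance * band)

# Example: 400 V DC link, 2 mH filter inductor, output near half the bus voltage.
for band in (2.0, 1.0, 0.5):                     # hysteresis band in amperes
    f = hysteresis_switching_frequency(400.0, 200.0, 2e-3, band)
    print(f"band = {band:.1f} A -> f_sw ~ {f/1e3:.0f} kHz")
```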

In practice, switching is not instantaneous: there are turn-on and turn-off times that are specific to the transistor and depend on the gate resistance and the gate drive voltage.

This means that more frequent switching leaves the transistor in a static state (fully on or fully off) for a shorter fraction of each switching period.
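To put a rough number on that, here is a first-order sketch (plain Python, outside PLECS) of how much of the ideal on-interval remains a true static on-state once the transition times are subtracted. The assumption that each transition is taken entirely out of the on-interval, and all numeric values, are illustrative guesses, not datasheet data:

```python
def corrected_conduction_loss(p_cond_ideal, f_sw, t_on, t_off, duty=0.5):
    """Scale the ideal conduction loss by the fraction of the on-interval that
    is still a static on-state after subtracting the transition times.
    Assumes every turn-on/turn-off transition is taken entirely out of the
    ideal on-interval (worst case for the conduction loss reduction)."""
    t_on_interval = duty / f_sw                  # ideal on-time per switching period
    t_static = max(t_on_interval - (t_on + t_off), 0.0)
    return p_cond_ideal * t_static / t_on_interval

# Example: 10 W ideal conduction loss, 1 us total transition time (t_on + t_off),
# compared at 20 kHz and 100 kHz effective switching frequency.
for f_sw in (20e3, 100e3):
    p = corrected_conduction_loss(10.0, f_sw, t_on=0.5e-6, t_off=0.5e-6)
    print(f"f_sw = {f_sw/1e3:.0f} kHz -> corrected conduction loss ~ {p:.2f} W")
```

The point of the example is that the correction is negligible at low switching frequency but becomes significant once f_sw * (t_on + t_off) is no longer small compared to the duty cycle.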

The standard PLECS IGBT with Diode model does not incorporate these switching times, so an increase in switching frequency does not cause a decrease in conduction losses.

There is an IGBT with Limited di/dt model, but with that one it is not possible to simulate the switching losses based on the thermal library model.

My question: is it possible to simulate the switching losses of the IGBT with Diode model while also accounting for the influence of the switching times on the duration of the static states?
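If there is no built-in way, one possible workaround (my own post-processing idea, not a PLECS feature) would be to export the gate signal, estimate the effective switching frequency of the hysteresis controller by counting transitions, and then scale the probed conduction losses with the correction factor sketched above. A minimal sketch of the frequency estimate, with a synthetic gate signal standing in for the exported trace:

```python
import numpy as np

def average_switching_frequency(t, gate):
    """Estimate the average switching frequency of a hysteresis-controlled
    gate signal from a simulation trace (time vector and 0/1 gate values).
    One switching period contains one turn-on and one turn-off transition."""
    edges = np.flatnonzero(np.diff(gate.astype(int)) != 0)
    duration = t[-1] - t[0]
    return 0.5 * edges.size / duration           # transition pairs per second

# Example with a synthetic, irregular gate signal (assumption, not PLECS data):
t = np.linspace(0, 1e-3, 20001)
gate = (np.sin(2 * np.pi * 25e3 * t) + 0.3 * np.sin(2 * np.pi * 3e3 * t)) > 0
print(f"estimated f_sw ~ {average_switching_frequency(t, gate)/1e3:.1f} kHz")
```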

The switching loss map must already be extracted with the right IGBT/diode pair. Look at the ETH documentation; there is good literature from Kolar on this.