While using the thermal model of a SiC switch, I noticed that a parameter 'deadtime' was automatically created. Perhaps it has been defined in the XML file; correct me if I am wrong. The issue is that the dead time defined in that tab had no effect on any losses at all.

I am trying to study the effect of dead time on the losses of my model. Right now, I apply the dead time manually to my switching pulses. However, a dead time of 0 always gives me the minimum losses, which is unexpected: I expect the losses to first go down as the dead time increases and only rise again beyond a certain point. Please tell me how I should approach this study of the impact of dead time on my losses.
Thank you!
Perhaps you can post your model and thermal description so I can better understand your observations.
Some vendor models have a dead-time parameter that the user can enter in the component mask. Infineon CoolSiC models come to mind, but there may be others. The Infineon models use the dead-time to scale values in the Eon and Eoff tables, normalized to a nominal dead-time, and likely assume a non-zero dead-time since there are other effects that would occur if the dead-time were actually zero. I cannot speak to the assumptions in these models, but you can easily see the formulas used.
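As a rough illustration of how such a model might apply the parameter (this is a sketch, not Infineon's actual formula; the linear scaling law, the nominal dead time, and the coefficient `k` are all assumptions), the lookup result could be scaled by the deviation from a nominal dead time:

```python
# Hypothetical sketch of how a vendor thermal model might scale switching
# energies with dead time. The linear scaling law and all numbers below
# are assumptions for illustration, not the vendor's actual formula.

NOMINAL_DEADTIME = 500e-9  # s, assumed nominal dead time the tables refer to

def scaled_eon(eon_table_value, deadtime, k=0.1):
    """Scale a looked-up Eon value by the deviation from nominal dead time.

    eon_table_value : Eon from the lookup table at the operating point (J)
    deadtime        : actual dead time applied in the circuit (s)
    k               : assumed sensitivity coefficient (per unit deviation)
    """
    deviation = (deadtime - NOMINAL_DEADTIME) / NOMINAL_DEADTIME
    return eon_table_value * (1.0 + k * deviation)

# Example: Eon = 1.2 mJ from the table, 300 ns actual dead time
print(scaled_eon(1.2e-3, 300e-9))  # slightly below the tabulated value
```

The actual formulas in the model's thermal description may differ; inspecting them there is the reliable way to see what is being scaled and how.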
Generally speaking, you can also see the dead-time impact on 3rd quadrant conduction losses: conduction with the gate off (through the body diode) incurs much higher losses than conduction with the gate on (through the channel), so minimizing the dead-time should reduce the overall losses. A rough numerical sketch of this effect follows below.
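As a back-of-the-envelope estimate (all device parameters below are illustrative placeholders, not values from any specific datasheet), the extra conduction loss accrued during the dead time grows linearly with its length, which is consistent with zero dead time giving the minimum losses in a lookup-table loss model:

```python
# Back-of-the-envelope estimate of extra third-quadrant conduction loss
# during dead time. All device parameters are illustrative placeholders.

f_sw    = 50e3    # switching frequency (Hz)
i_load  = 20.0    # current carried during dead time (A)
v_sd    = 4.0     # assumed body-diode forward drop, gate off (V)
r_ds_on = 25e-3   # assumed channel on-resistance, gate on (ohm)

def extra_deadtime_loss(t_dead):
    """Average extra power lost during dead time vs. channel conduction (W).

    Assumes two dead-time intervals per switching period; during each, the
    current flows through the body diode instead of the channel.
    """
    p_diode   = v_sd * i_load        # instantaneous diode conduction loss
    p_channel = r_ds_on * i_load**2  # instantaneous channel conduction loss
    return 2 * f_sw * t_dead * (p_diode - p_channel)

for t_dead in (0.0, 100e-9, 500e-9, 1e-6):
    print(f"t_dead = {t_dead*1e9:6.0f} ns -> extra loss = "
          f"{extra_deadtime_loss(t_dead):.3f} W")
```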