Discretization Done Right: From Continuous Design to Digital Control

You have designed your controller in the continuous domain, it works in simulation, and you are ready to deploy. When you enable code generation on a subsystem, PLECS discretizes its continuous blocks using Forward Euler. It does this both for code generation and for offline simulation. Forward Euler is the only method that avoids algebraic loops in a continuous block diagram, but it comes with two drawbacks: it can go unstable if the sample time is too large relative to your circuit time constants, and even when stable, its frequency response diverges from the continuous design at higher frequencies.

The solution is to handle discretization yourself using the General Bilinear Transformation (GBT), a single framework that covers all common methods through one parameter \alpha. This article presents a systematic methodology for deriving closed-form discrete coefficients directly from H(s), illustrated with three examples: a low-pass filter, a PI controller, and a proportional-resonant (PR) controller. Tustin (\alpha = 0.5) is typically the best choice: it is the only value that is both unconditionally stable and second-order accurate. Values of \alpha > 0.5 are also unconditionally stable but only first-order accurate, and are worth considering only when intentional numerical damping is needed.

Accompanying PLECS models to experiment with can be downloaded here:

The s- and z-Domains

The link between the continuous and discrete domains comes from how exponential signals behave under sampling. A continuous-time system \dot{x}(t) = s\, x(t) has the solution x(t) = x(0)\,e^{st}. Sampling with period T gives

x[k] = x(0)\,e^{skT} = x(0)\left(e^{sT}\right)^k

A continuous-time pole at s therefore maps to a discrete-time pole at

z = e^{sT}

z is the shift operator: multiplying a signal X(z) by z advances it by one sample.

For a stable system, all poles must lie strictly inside the unit circle, |z| < 1. An unstable system has poles outside, |z| > 1, and poles on the boundary |z| = 1 produce sustained oscillations. This follows directly from the mapping: a stable continuous pole has \operatorname{Re}(s) < 0, which gives |e^{sT}| = e^{\operatorname{Re}(s)T} < 1, consistent with |z| < 1.
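A quick numerical illustration (pure Python, with hypothetical pole and sample-time values) confirms the mapping:

```python
import cmath, math

T = 1e-4                                  # sample period [s], illustrative
s = complex(-500.0, 2 * math.pi * 100)    # stable continuous pole, Re(s) < 0

z = cmath.exp(s * T)                      # exact pole mapping z = e^{sT}

# |z| = e^{Re(s)*T}, so a stable continuous pole always lands inside the unit circle
assert abs(abs(z) - math.exp(s.real * T)) < 1e-12
assert abs(z) < 1
```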

The General Bilinear Transformation (GBT)

The mapping z = e^{sT} is exact, but rational approximations are needed to convert H(s) into a computable H(z).

A Taylor series expansion of the exponential gives a way to build such approximations:

z = e^{sT} \approx 1 + sT + \frac{(sT)^2}{2!} + \frac{(sT)^3}{3!} + \ldots

Using only the first two terms gives the Euler methods.

Forward Euler:

z \approx 1 + sT

Backward Euler:

z \approx \frac{1}{1 - sT}

Alternatively, a Padé approximation for the exponential term can be utilized. The Padé [1/1] approximation leads to the Tustin (or Bilinear) Transformation:

z \approx \frac{1 + sT/2}{1 - sT/2}

These approximations can be combined into the General Bilinear Transformation (GBT):

z \approx \frac{1 + sT(1-\alpha)}{1 - sT\alpha}
\alpha          Method
\alpha = 0      Forward Euler
\alpha = 0.5    Tustin
\alpha = 1      Backward Euler
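The three special cases can be checked in a couple of lines (pure Python sketch, arbitrary illustrative values of s and T):

```python
# The GBT mapping z(s) reduces to Forward Euler, Tustin, and Backward Euler
# for alpha = 0, 0.5, and 1 (illustrative values, not from any specific design).
def gbt_z(s, T, alpha):
    return (1 + s * T * (1 - alpha)) / (1 - s * T * alpha)

s, T = complex(-100.0, 300.0), 1e-3

assert abs(gbt_z(s, T, 0.0) - (1 + s * T)) < 1e-12                    # Forward Euler
assert abs(gbt_z(s, T, 1.0) - 1 / (1 - s * T)) < 1e-12                # Backward Euler
assert abs(gbt_z(s, T, 0.5) - (1 + s * T / 2) / (1 - s * T / 2)) < 1e-12  # Tustin
```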

The two Euler methods are first-order accurate: the global error accumulated over a fixed interval is O(T). Tustin (\alpha = 0.5) is the exception: the symmetric weighting of the previous and current samples causes the first-order error terms to cancel exactly, giving second-order accuracy O(T^2). For any other value of \alpha, the GBT is first-order accurate. This makes Tustin not merely one choice among equals but the unique best approximation within the GBT family.

Relation to the θ-method. In the numerical analysis and finite-element literature, the same family is known as the θ-method, with update rule y_k = y_{k-1} + T\bigl[\theta\,f(t_k,\,y_k) + (1-\theta)\,f(t_{k-1},\,y_{k-1})\bigr], where \theta plays the same role as \alpha. The two names come from different communities arriving at the same construction independently; the methods are identical.

The Integrator

The integrator is the most basic building block of filters and controllers. Using the GBT, it discretizes to:

\frac{1}{s} \approx T \cdot \frac{\alpha \cdot z + (1-\alpha)}{z - 1}

Graphical Interpretation

All discretization methods approximate the integral as a sum of rectangles of width T. The methods differ only in which sample value is used for the height of each rectangle.

General GBT splits each step of width T into two partial rectangles: one of width (1-\alpha)\cdot T using the previous sample x[k-1], and one of width \alpha \cdot T using the current sample x[k]:

y[k] = y[k-1] + T(1-\alpha)\cdot x[k-1] + T\alpha\cdot x[k]

Forward Euler (\alpha = 0) uses the previous sample x[k-1] as the rectangle height. The area added each step is a left-aligned rectangle. Because the input is always one step old, this method introduces a delay and can become unstable if the sample time is too large relative to the filter time constant.

y[k] = y[k-1] + T \cdot x[k-1]

Backward Euler (\alpha = 1) uses the current sample x[k], producing a right-aligned rectangle. This method is unconditionally stable, but tends to over-estimate the integral for rising signals and under-estimate it for falling ones.

y[k] = y[k-1] + T \cdot x[k]

Tustin (\alpha = 0.5) splits each step equally between the previous and current sample, producing two rectangles of width T/2 with heights x[k-1] and x[k]. As the first figure shows, these two rectangles together occupy exactly the same area as the trapezoid formed by connecting x[k-1] and x[k] with a straight line, shown in the second figure. The two representations are geometrically identical; the trapezoidal view simply makes the approximation more intuitive. Tustin is the most accurate of the three methods: it is unconditionally stable and its approximation error is second-order in T, compared to first-order for the Euler methods.

y[k] = y[k-1] + \frac{T}{2} \cdot \bigl(x[k-1] + x[k]\bigr)
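The accuracy difference between the three update rules is easy to see numerically. The sketch below (pure Python, illustrative step count) integrates x(t) = \cos t over one period and compares each result against the exact integral \sin t:

```python
import math

def integrate(alpha, T, n):
    """GBT integrator: y[k] = y[k-1] + T(1-alpha)*x[k-1] + T*alpha*x[k], x = cos."""
    y, err = 0.0, 0.0
    for k in range(1, n + 1):
        y += T * (1 - alpha) * math.cos((k - 1) * T) + T * alpha * math.cos(k * T)
        err = max(err, abs(y - math.sin(k * T)))   # error vs. exact integral
    return err

n = 100
T = 2 * math.pi / n            # one period of cos(t) in n steps

e_fe = integrate(0.0, T, n)    # Forward Euler
e_be = integrate(1.0, T, n)    # Backward Euler
e_tu = integrate(0.5, T, n)    # Tustin

# Tustin's second-order accuracy makes it far more accurate at the same T
assert e_tu < e_fe / 10 and e_tu < e_be / 10
```

At this step size the Tustin error is roughly two orders of magnitude smaller than either Euler error, consistent with its O(T^2) accuracy.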

The discrete integrator can also be drawn as follows:

Discretization of a Low-Pass Filter

Consider a first-order continuous low-pass filter with time constant \tau:

H(s) = \frac{1}{1 + \tau s}

A natural first step is to replace every integrator \frac{1}{s} in the block diagram with its discrete GBT equivalent. For the low-pass filter, which is built from a gain \frac{1}{\tau} feeding into an integrator in a unity-feedback loop, this gives the block diagram shown below.

For \alpha \neq 0, the GBT integrator output depends on the current input x[k]. In the feedback loop of the filter, that input is a function of the output y[k] itself, so y[k] appears on both sides of the update equation, forming an algebraic loop. Computing y[k] would require solving an implicit equation at every step, which cannot be expressed as a simple sequential update and is not feasible on a microprocessor.

The only exception is Forward Euler (\alpha = 0): the GBT integrator then uses only the previous sample x[k-1], so the current output depends only on stored state and the loop disappears. This is why PLECS uses Forward Euler (\alpha = 0) by default when discretizing a continuous block diagram for code generation.

To support all values of \alpha, work directly with the discrete transfer function rather than the continuous block diagram. The goal is to choose a discrete structure in z^{-1} whose input-output behavior matches H(s) and is free of algebraic loops by construction (every feedback path must pass through at least one z^{-1} delay). For a first-order filter with one pole and no finite zeros, the natural choice is a first-order IIR section:

H(z) = \frac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1}}

This is the structure implemented in PLECS as a discrete transfer function block. To find coefficients a_1, b_0, b_1 that reproduce H(s) for a chosen \alpha, express z^{-1} as a function of s using the GBT and substitute into H(z):

z \approx \frac{1 + sT(1-\alpha)}{1 - sT\alpha} \quad\Longrightarrow\quad z^{-1} = \frac{1 - sT\alpha}{1 + sT(1-\alpha)}
\frac{b_0 + b_1 z^{-1}}{1 + a_1 z^{-1}} \xrightarrow{z^{-1}\,=\,f(s)} \frac{(b_0+b_1) + sT\,[(1-\alpha)\,b_0 - \alpha\,b_1]}{(1+a_1) + sT\,[(1-\alpha) - \alpha\,a_1]}

Matching this against H(s) = \dfrac{1}{1+\tau s} term by term:

              Continuous H(s)    Discrete H(z) after GBT
Numerator
  s^0         1                  (b_0 + b_1)\,/\,(1 + a_1)
  s^1         0                  T\left[(1-\alpha)\,b_0 - \alpha\,b_1\right]/(1+a_1)
Denominator
  s^0         1                  1
  s^1         \tau               T\,\dfrac{(1-\alpha) - \alpha\,a_1}{1+a_1}

(Both sides are normalized so that the denominator s^0 term is 1, i.e., all discrete entries are divided by 1 + a_1.)

Solving gives the three closed-form coefficients:

\boxed{a_1 = \frac{T(1-\alpha) - \tau}{T\alpha + \tau}} \qquad \boxed{b_0 = \frac{T\alpha}{T\alpha + \tau}} \qquad \boxed{b_1 = \frac{T(1-\alpha)}{T\alpha + \tau}}
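As a sanity check, the closed-form coefficients can be evaluated for any \alpha and tested against properties that must hold (pure Python, illustrative values of \tau and T):

```python
def lpf_coeffs(tau, T, alpha):
    """Closed-form GBT coefficients for H(s) = 1/(1 + tau*s)."""
    d = T * alpha + tau
    a1 = (T * (1 - alpha) - tau) / d
    b0 = T * alpha / d
    b1 = T * (1 - alpha) / d
    return a1, b0, b1

tau, T = 1e-3, 1e-4            # illustrative: tau = 10*T
for alpha in (0.0, 0.5, 1.0):
    a1, b0, b1 = lpf_coeffs(tau, T, alpha)
    # DC gain H(z=1) must equal the continuous DC gain of 1
    assert abs((b0 + b1) / (1 + a1) - 1.0) < 1e-12
    # pole at -a1 inside the unit circle for these parameters
    assert abs(a1) < 1
```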

The discrete filter has a single pole at p = -a_1. For stability this pole must lie strictly inside the unit circle, |{-a_1}| < 1, which gives a constraint on the sample time:

0 < T < \frac{2\tau}{1 - 2\alpha} \qquad (\text{only applies for } \alpha < 0.5)

For \alpha \geq 0.5 (Tustin and Backward Euler) the constraint is always satisfied, making these methods unconditionally stable regardless of sample time. For \alpha < 0.5 (including Forward Euler), the filter is only stable when T is small enough relative to \tau. For Forward Euler (\alpha = 0) this reduces to T < 2\tau.

The Bode plots below compare the frequency response of the three methods against the ideal continuous filter at two sample times. When \tau = 100 \cdot T, all three methods track the continuous response closely in the passband. Differences appear only near the Nyquist frequency, where the discrete approximation of z = e^{sT} breaks down:

  • Forward Euler (\alpha = 0) rolls off earlier and accumulates more phase lag than the continuous filter. The magnitude dips sharply before Nyquist because the forward approximation z = 1 + sT maps the frequency axis onto the vertical line through z = 1 rather than onto the unit circle; only the disk |1 + sT| < 1 of the left half-plane maps inside the unit circle.
  • Tustin (\alpha = 0.5) matches the continuous response most closely. The bilinear mapping wraps the entire imaginary axis onto the unit circle, so no frequency is aliased, only warped, and the warping is small at low frequencies.
  • Backward Euler (\alpha = 1) introduces a magnitude bump and phase lead near Nyquist. This mirrors the Forward Euler behavior in the opposite direction: the backward approximation maps the entire left half-plane into the disk |z - 1/2| < 1/2, pulling the pole deeper inside the unit circle and flattening the roll-off, so the filter attenuates less near Nyquist than the continuous design.

When \tau = T/2, the sample time sits exactly on the Forward Euler stability boundary T = 2\tau. The pole of the discrete filter lands on the unit circle at z = -1, so the filter is marginally stable: instead of rolling off, the magnitude response rises toward an infinite peak at the Nyquist frequency, visible in the plot as the blue curve growing without bound.

Tustin and Backward Euler remain well-behaved because \alpha \geq 0.5 makes them unconditionally stable. Regardless of how large T is relative to \tau, their poles stay inside the unit circle. Both still approximate the continuous response reasonably in the passband, though the Tustin frequency warping becomes noticeable at these coarse sample rates.

Tustin has a useful additional property: because it maps the entire imaginary axis bijectively onto the unit circle, every continuous frequency corresponds to exactly one discrete frequency, related by a nonlinear but invertible tangent compression. This means the corner-frequency shift introduced by the mapping can be corrected exactly by prewarping the time constant before computing the coefficients. The prewarped value \tau_{\text{pw}} is chosen so that after the Tustin mapping the corner frequency lands at the target \omega_0 = 1/\tau:

\omega_{\text{pw}} = \frac{2}{T}\tan\!\left(\frac{\omega_0 T}{2}\right), \qquad \tau_{\text{pw}} = \frac{1}{\omega_{\text{pw}}}

The coefficients are then computed with \tau_{\text{pw}} in place of \tau. No other GBT method supports this correction, because Forward Euler and Backward Euler do not map the imaginary axis onto the unit circle. At high sample rates the tangent is nearly linear and \tau_{\text{pw}} \approx \tau, so prewarping has negligible effect; it only matters when the corner frequency is a significant fraction of the Nyquist frequency. Prewarping is a tradeoff: pinning the corner frequency exactly comes at the cost of greater phase deviation from the continuous-time model at lower frequencies. The Bode plot below shows both effects.
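A numerical check of the prewarping claim (pure Python, deliberately coarse illustrative sampling): with \tau_{\text{pw}} in place of \tau, the Tustin filter's gain at the design corner \omega_0 = 1/\tau lands exactly on the continuous corner gain 1/\sqrt{2}:

```python
import cmath, math

tau, T = 1e-3, 2e-4            # coarse sampling: corner at a sizable fraction of Nyquist
w0 = 1 / tau

# prewarp the time constant, then form the Tustin (alpha = 0.5) coefficients
w_pw = 2 / T * math.tan(w0 * T / 2)
tau_pw = 1 / w_pw
d = 0.5 * T + tau_pw
a1 = (0.5 * T - tau_pw) / d
b0 = 0.5 * T / d
b1 = b0

# evaluate H(z) on the unit circle at the design corner frequency w0
z = cmath.exp(1j * w0 * T)
H = (b0 + b1 / z) / (1 + a1 / z)

# the corner gain is pinned exactly at 1/sqrt(2), as for the continuous filter
assert abs(abs(H) - 1 / math.sqrt(2)) < 1e-9
```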

Verification in PLECS

The discretized filter can be implemented directly in PLECS using a Discrete Transfer Function block, with the coefficients b_0, b_1, and a_1 derived above entered for the chosen \alpha.

To verify the frequency response, PLECS provides a Frequency Response Analysis tool (FRA). A subtlety arises in how the perturbation is connected: the continuous perturbation signal is first passed through a ZOH before entering the discrete transfer function block, and the FRA reference input must be taken from the ZOH output rather than the raw perturbation. This ensures the reference and the response share the same sampling characteristic, so the ZOH effect divides out and the measured transfer function reflects only the discrete filter response.

The block diagram for this measurement setup is shown below.

The PLECS model can be downloaded here.

Using MATLAB/Octave built-ins. The coefficients above can also be computed directly using c2d, avoiding the manual coefficient matching. Passing 'prewarp' with the corner frequency performs the Tustin transformation with the prewarping applied automatically:

w0 = 1/tau;                                  % corner frequency to pin
H_z = c2d(tf(1, [tau, 1]), T, 'prewarp', w0);
[num, den] = tfdata(H_z, 'v');
% b0 = num(1), b1 = num(2), a1 = den(2)

Discretization of a PI Controller

Consider a PI controller with proportional gain K_p and integral gain K_i:

H(s) = K_p + \frac{K_i}{s} = \frac{K_p s + K_i}{s}

The PI controller has a parallel structure: a proportional path and an integrator path acting on the same input. Applying the GBT substitution to the integrator and collecting terms that act on the same sample gives a natural starting point for the discrete structure. The \alpha-weighted current-sample tap of the GBT integrator acts on x[k] in the same time step as the proportional gain K_p, so these two contributions merge into a single gain K_1; the delayed contribution through z^{-1} is kept as a separate gain K_2, giving:

H(z) = \frac{K_1 + (K_2 - K_1)\,z^{-1}}{1 - z^{-1}}

The denominator 1 - z^{-1} comes from the integrator pole. In continuous time the pole is at s = 0, which maps exactly to z = 1 under z = e^{sT}, giving a denominator root at z = 1 and therefore 1 - z^{-1}. Only the two numerator coefficients K_1 and K_2 remain to be determined and, crucially, this structure is the same regardless of \alpha: Forward Euler, Tustin, and Backward Euler all lead to the same H(z) form, differing only in the value of K_1.

This structure is free of algebraic loops by construction. It is also well suited for microcontroller implementation: the output y[k] splits into a direct term K_1 \cdot x[k] and a state term y[k-1] + (K_2 - K_1) \cdot x[k-1]. The state term depends only on values from the previous cycle and can be computed ahead of time. Once a new measurement x[k] arrives, the new output follows with minimal delay. The separation between the direct and state terms also makes output saturation with anti-windup straightforward to implement: the state can be clamped independently of the direct term, preventing integrator windup without additional restructuring.

To find K_1 and K_2, express z^{-1} as a function of s using the GBT and substitute into H(z):

z \approx \frac{1 + sT(1-\alpha)}{1 - sT\alpha} \quad\Longrightarrow\quad z^{-1} = \frac{1 - sT\alpha}{1 + sT(1-\alpha)}
\frac{K_1 + (K_2 - K_1)\,z^{-1}}{1 - z^{-1}} \xrightarrow{z^{-1}\,=\,f(s)} \frac{s\,(K_1 - \alpha K_2) + \dfrac{K_2}{T}}{s}

Matching the s^1 coefficient with K_p and the constant term with K_i gives:

\boxed{K_1 = K_p + \alpha\,T\,K_i} \qquad \boxed{K_2 = T\,K_i}

K_2 sets the area of each integration step; it is simply TK_i. K_1 adds a portion of the current sample weighted by \alpha; when \alpha = 0 it reduces to a pure Forward Euler integral term.

Notice that K_2 = TK_i contains no \alpha: the discretization method does not affect the integral (state) term at all. It only changes K_1, the direct term that acts on the current sample x[k]. All three methods accumulate the integral identically; they differ only in how much of the current input is added directly to the output in the same step.
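An implementation sketch of the direct-term/state-term split described above (pure Python; the gains, limits, and clamping strategy are hypothetical illustrations, not a PLECS reference implementation):

```python
class DiscretePI:
    """PI in the form y[k] = K1*x[k] + state, with state = y[k-1] + (K2-K1)*x[k-1]."""
    def __init__(self, Kp, Ki, T, alpha=0.5, y_min=-10.0, y_max=10.0):
        self.K1 = Kp + alpha * T * Ki      # direct term acting on the current sample
        self.K2 = T * Ki                   # integral increment per step (no alpha)
        self.y_min, self.y_max = y_min, y_max
        self.state = 0.0

    def step(self, x):
        y = self.K1 * x + self.state
        y = min(max(y, self.y_min), self.y_max)          # output saturation
        # precompute next state; clamping it is a simple anti-windup measure
        s = y + (self.K2 - self.K1) * x
        self.state = min(max(s, self.y_min), self.y_max)
        return y

ctrl = DiscretePI(Kp=0.5, Ki=100.0, T=1e-3)
y = [ctrl.step(1.0) for _ in range(5)]
# constant input: first output is K1, then the output ramps by K2 = T*Ki per step
assert abs(y[0] - ctrl.K1) < 1e-12
assert abs(y[1] - y[0] - ctrl.K2) < 1e-12
```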

The discrete PI has a pole at z = 1 for all values of \alpha, making it unconditionally stable regardless of the discretization method or sample time. Different methods do, however, introduce different amounts of phase error at higher frequencies, which affects closed-loop performance when the controller is placed in a feedback loop.

All three methods match the continuous PI closely at low frequencies. Differences grow near Nyquist:

  • Forward Euler (\alpha = 0) accumulates the most phase lag at high frequencies. The magnitude also rolls off more steeply than the continuous PI, so the controller becomes less aggressive near Nyquist.
  • Tustin (\alpha = 0.5) tracks the continuous response most closely, with only a small frequency-warping error near Nyquist.
  • Backward Euler (\alpha = 1) introduces phase lead at high frequencies, making the controller appear more aggressive than intended near Nyquist.

Prewarping is less relevant for the PI controller than for the filter. The PI zero is at \omega_z = K_i / K_p, and a prewarped K_i could in principle be derived the same way as for the filter. In practice, however, the PI zero is intentionally placed well below the crossover frequency, which itself is typically well below Nyquist. At those low frequencies the Tustin frequency compression is negligible, so the shift in the zero location is insignificant.

The PLECS model can be downloaded here.

Discretization of a Proportional-Resonant Controller

A Proportional-Resonant (PR) controller has the following transfer function:

H(s) = K_p + \frac{K_r\,s}{s^2 + 2\zeta\omega_0 s + \omega_0^2}

K_p is the proportional gain. K_r sets the resonant gain. \omega_0 is the tuned frequency (radians per second). \zeta is the damping ratio: in the limit \zeta \to 0 the ideal resonant integrator is recovered, with infinite gain at \omega_0 and zero bandwidth. A small positive \zeta broadens the peak slightly, increasing the bandwidth and yielding a finite peak gain that can be realized in discretized form.

The resonator has a conjugate pair of poles, so the discrete structure must be a second-order IIR section (biquad):

H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}

This structure can be implemented in PLECS with the Discrete Transfer Function block. To find the five coefficients, express z^{-1} as a function of s using the GBT and substitute into H(z):

z \approx \frac{1 + sT(1-\alpha)}{1 - sT\alpha} \quad\Longrightarrow\quad z^{-1} = \frac{1 - sT\alpha}{1 + sT(1-\alpha)}
\frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}} \xrightarrow{z^{-1}\,=\,f(s)} \frac{(b_0+b_1+b_2) + sT\bigl[2(1-\alpha)b_0+(1-2\alpha)b_1-2\alpha b_2\bigr] + \cdots}{(1+a_1+a_2) + sT\bigl[2(1-\alpha)+(1-2\alpha)a_1-2\alpha a_2\bigr] + \cdots}

Matching this against H(s) term by term:

         Continuous H(s)            Discrete H(z) after GBT
Numerator
  s^0    K_p\,\omega_0^2            b_0 + b_1 + b_2
  s^1    2K_p\zeta\omega_0 + K_r    T\bigl[2(1{-}\alpha)b_0 + (1{-}2\alpha)b_1 - 2\alpha b_2\bigr]
  s^2    K_p                        T^2\bigl[\alpha^2(b_0{+}b_1{+}b_2) - \alpha(2b_0{+}b_1) + b_0\bigr]
Denominator
  s^0    \omega_0^2                 1 + a_1 + a_2
  s^1    2\zeta\omega_0             T\bigl[2(1{-}\alpha) + (1{-}2\alpha)\,a_1 - 2\alpha\,a_2\bigr]
  s^2    1                          T^2\bigl[(1{-}\alpha)^2 + \alpha(\alpha{-}1)\,a_1 + \alpha^2 a_2\bigr]

Solving gives the five closed-form coefficients. Define the common denominator \Delta = 1 + 2\alpha T\zeta\omega_0 + \alpha^2 T^2\omega_0^2:

\boxed{a_1 = \frac{-2\bigl[1 + (2\alpha-1)T\zeta\omega_0 - \alpha(1-\alpha)T^2\omega_0^2\bigr]}{\Delta}} \qquad \boxed{a_2 = \frac{1 - 2(1-\alpha)T\zeta\omega_0 + (1-\alpha)^2T^2\omega_0^2}{\Delta}}
\boxed{b_0 = K_p + \frac{K_r\alpha T}{\Delta}} \qquad \boxed{b_1 = K_p a_1 + \frac{K_r(1-2\alpha)T}{\Delta}} \qquad \boxed{b_2 = K_p a_2 - \frac{K_r(1-\alpha)T}{\Delta}}

The denominator coefficients a_1 and a_2 depend only on \omega_0, \zeta, and \alpha: the pole locations are independent of the controller gains K_p and K_r. All three GBT methods share the same H(z) form and differ only in the values of the five coefficients.

For the recommended Tustin method (\alpha = 0.5) the coefficients simplify:

\boxed{a_1 = \frac{2(T^2\omega_0^2 - 4)}{T^2\omega_0^2 + 4T\zeta\omega_0 + 4}} \qquad \boxed{a_2 = \frac{T^2\omega_0^2 - 4T\zeta\omega_0 + 4}{T^2\omega_0^2 + 4T\zeta\omega_0 + 4}}
\boxed{b_0 = K_p + \frac{2K_r T}{T^2\omega_0^2 + 4T\zeta\omega_0 + 4}} \qquad \boxed{b_1 = K_p\,a_1} \qquad \boxed{b_2 = K_p\,a_2 - \frac{2K_r T}{T^2\omega_0^2 + 4T\zeta\omega_0 + 4}}
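These Tustin coefficients can be sanity-checked numerically (pure Python, illustrative 50 Hz design): the DC gain of the biquad must equal H(0) = K_p, and the poles must form a complex pair strictly inside the unit circle:

```python
import math

Kp, Kr = 1.0, 100.0            # illustrative gains
f0, zeta = 50.0, 0.01
w0 = 2 * math.pi * f0
T = 1 / (20 * f0)              # 20 samples per cycle

den = T**2 * w0**2 + 4 * T * zeta * w0 + 4
a1 = 2 * (T**2 * w0**2 - 4) / den
a2 = (T**2 * w0**2 - 4 * T * zeta * w0 + 4) / den
b0 = Kp + 2 * Kr * T / den
b1 = Kp * a1
b2 = Kp * a2 - 2 * Kr * T / den

# DC gain: H(z=1) must match the continuous H(s=0) = Kp
assert abs((b0 + b1 + b2) / (1 + a1 + a2) - Kp) < 1e-9
# complex-conjugate pole pair strictly inside the unit circle
assert a1**2 - 4 * a2 < 0 and 0 < a2 < 1
```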

Stability Analysis

The discrete PR controller has two poles, the roots of z^2 + a_1 z + a_2 = 0. For a resonant controller with \zeta < 1 the discriminant a_1^2 - 4a_2 = 4T^2\omega_0^2(\zeta^2 - 1)/\Delta^2 is negative, so the two poles are always a complex conjugate pair z, z^* with |z|^2 = a_2. Both poles lie on the same circle of radius \sqrt{a_2}, so stability reduces to the single condition a_2 < 1.

Setting \alpha = 0 gives:

a_2^{\text{FE}} = 1 - 2T\zeta\omega_0 + T^2\omega_0^2

The condition a_2^{\text{FE}} < 1 requires T^2\omega_0^2 < 2T\zeta\omega_0, giving the Forward Euler stability limit:

T < \frac{2\zeta}{\omega_0} \qquad (\text{Forward Euler only})

This is a tight constraint because \zeta is small for a narrow resonance. This is why explicit GBT discretization with \alpha \geq 0.5 is preferred over relying on the default Forward Euler substitution that PLECS applies to continuous blocks.

For \alpha \geq 0.5, Tustin and Backward Euler are unconditionally stable for all T > 0, provided \zeta > 0.
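The Forward Euler limit can be demonstrated directly by iterating the biquad recursion (pure Python, an illustrative 50 Hz resonance at 20 samples per cycle, far above the limit 2\zeta/\omega_0 \approx 64\,\mu s):

```python
import math

f0, zeta = 50.0, 0.01
w0 = 2 * math.pi * f0
T = 1 / (20 * f0)              # 1 ms, far above the limit 2*zeta/w0 ~ 64 us

# Forward Euler (alpha = 0) denominator coefficients; Delta = 1 in this case
a1 = -2 * (1 - T * zeta * w0)
a2 = 1 - 2 * T * zeta * w0 + T**2 * w0**2
assert a2 > 1                  # pole radius sqrt(a2) > 1: unstable

# the impulse response of 1/(1 + a1 z^-1 + a2 z^-2) grows without bound
y1, y2 = 0.0, 0.0
peak = []
for k in range(400):
    x = 1.0 if k == 0 else 0.0
    y = x - a1 * y1 - a2 * y2
    y1, y2 = y, y1
    peak.append(abs(y))
assert max(peak[200:]) > 10 * max(peak[:100])
```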

Frequency Response and Prewarping

The plot below compares Backward Euler, Tustin, and Tustin with prewarping for a resonance designed at f_0 = 50\,\mathrm{Hz} with damping ratio \zeta = 0.01, at 20 samples per cycle (T = T_0/20, f_\mathrm{Nyq} = 500\,\mathrm{Hz}). Forward Euler is omitted because its stability limit T < 2\zeta/\omega_0 is violated at this sample rate for any narrow resonance. The left column shows the full frequency range on a logarithmic axis; the right column zooms in linearly over a 6 Hz window around the resonant peak.

Away from the resonance all three methods track the continuous response closely — the broadband proportional gain K_p is reproduced accurately at any practical sample rate. The differences are confined to the narrow band around \omega_0.

Backward Euler introduces numerical damping that broadens and suppresses the resonant peak. At 20 samples per cycle the peak is reduced from 24.6 dB to 5.8 dB and shifts downward to 49.4 Hz. This extra damping is intrinsic to the backward approximation and cannot be eliminated by choosing a finer sample rate.

Tustin (\alpha = 0.5) preserves the full peak height because it maps the imaginary axis bijectively onto the unit circle, introducing no spurious damping. However, the Tustin frequency compression shifts the peak from 50.00 Hz to 49.59 Hz.

Tustin with prewarping corrects the frequency shift exactly. The prewarped natural frequency is chosen so that after the bilinear mapping the peak lands at the design frequency:

\omega_{0,\text{pw}} = \frac{2}{T}\tan\!\left(\frac{\omega_0 T}{2}\right)

The damping ratio \zeta is left unchanged — it is dimensionless and does not participate in the frequency warping. The coefficients are computed with \omega_{0,\text{pw}} in place of \omega_0; the biquad structure is identical. The zoom panel confirms that the prewarped peak sits at exactly 50.00 Hz.
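A numerical check of this claim (pure Python, same illustrative design as above): after prewarping, the biquad evaluated on the unit circle at \omega_0 gives a purely real response equal to the peak gain K_p + K_r/(2\zeta\,\omega_{0,\text{pw}}) of the prewarped prototype:

```python
import cmath, math

Kp, Kr = 1.0, 100.0
f0, zeta = 50.0, 0.01
w0 = 2 * math.pi * f0
T = 1 / (20 * f0)

# prewarp the natural frequency; zeta stays unchanged
w0p = 2 / T * math.tan(w0 * T / 2)

# Tustin biquad coefficients with the prewarped natural frequency
den = T**2 * w0p**2 + 4 * T * zeta * w0p + 4
a1 = 2 * (T**2 * w0p**2 - 4) / den
a2 = (T**2 * w0p**2 - 4 * T * zeta * w0p + 4) / den
b0 = Kp + 2 * Kr * T / den
b1 = Kp * a1
b2 = Kp * a2 - 2 * Kr * T / den

# evaluate the biquad on the unit circle at the design frequency w0
z = cmath.exp(1j * w0 * T)
H = (b0 + b1 / z + b2 / z**2) / (1 + a1 / z + a2 / z**2)

# the response at w0 is purely real: the peak sits exactly on the design frequency
assert abs(H.imag) < 1e-6
assert abs(H.real - (Kp + Kr / (2 * zeta * w0p))) < 1e-6
```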

Using MATLAB/Octave built-ins. The five coefficients can also be obtained directly using c2d, bypassing the manual coefficient matching. The tf object for H(s) can be built in two parts and summed, and c2d with 'prewarp' handles the frequency correction automatically:

H_r = tf([Kr, 0], [1, 2*zeta*w0, w0^2]);   % resonant part
H_s = tf(Kp) + H_r;                         % full PR controller
H_z = c2d(H_s, T, 'prewarp', w0);
[num, den] = tfdata(H_z, 'v');
% b0=num(1), b1=num(2), b2=num(3), a1=den(2), a2=den(3)

The PLECS model can be downloaded here.

Summary

When you implement a discrete transfer function directly in PLECS, you take full control of the discretization. You no longer rely on the automatic Forward Euler substitution. For the low-pass filter, the starting point is a first-order IIR section in z^{-1}, chosen to be free of algebraic loops by construction. For the PI controller, the discrete structure follows directly from its parallel topology: the proportional and integrator paths are discretized separately and then merged into a two-coefficient form. For the proportional-resonant controller, the second-order resonator leads to a biquad with five coefficients.

In each case, the GBT is used to express z^{-1} as a rational function of s. Substituting this into the discrete form recovers an equivalent s-domain expression, and matching coefficients term by term against the known H(s) gives the discrete coefficients in closed form. A single parameter \alpha covers Forward Euler, Tustin, and Backward Euler. The stability analysis follows directly from the pole locations: a sample-time constraint applies for \alpha < 0.5, while \alpha \geq 0.5 gives unconditional stability.

In practice, Tustin (\alpha = 0.5) is the default choice: it is the only value that is both unconditionally stable and second-order accurate. Values of \alpha > 0.5 are also unconditionally stable but revert to first-order accuracy; they can be useful when intentional numerical damping is needed, but at the cost of accuracy.

Tustin also uniquely supports prewarping, a technique that corrects the frequency shift introduced by the bilinear mapping. This is particularly important for the proportional-resonant controller, where even a small shift in the resonant peak away from \omega_0 degrades steady-state tracking performance.
