Exercises Chapter 2, "System Identification"

The solutions presented here are for the exercises:

• 2G.2
• 2G.4
• 2E.2
• 2E.4
• 2E.6
2G.2 Let $\Phi_s(\omega)$ be the (power) spectrum of a scalar signal, defined as in (2.63):

$$\Phi_s(\omega) = \sum_{\tau=-\infty}^{\infty} R_s(\tau) e^{-i\tau\omega} \qquad (2.63)$$

Show that

i. $\Phi_s(\omega)$ is real
ii. $\Phi_s(\omega) \geq 0 \;\; \forall\omega$
iii. $\Phi_s(-\omega) = \Phi_s(\omega)$
2G.2 Solution

i. Expression (2.63) is the Fourier transform of the autocorrelation function. It can be split as

$$\Phi_s(\omega) = \sum_{\tau=-\infty}^{0} R_s(\tau)e^{-i\tau\omega} + \sum_{\tau=0}^{\infty} R_s(\tau)e^{-i\tau\omega} - R_s(0)$$

Substituting $\tau \to -\tau$ in the first sum, we get

$$\Phi_s(\omega) = \sum_{\tau=0}^{\infty} R_s(-\tau)e^{i\tau\omega} + \sum_{\tau=0}^{\infty} R_s(\tau)e^{-i\tau\omega} - R_s(0)$$

According to the properties of the autocorrelation function we know that it is even, which tells us

$$R_s(\tau) = R_s(-\tau)$$
We can therefore rewrite the expression as

$$\Phi_s(\omega) = \sum_{\tau=0}^{\infty} R_s(\tau)e^{i\tau\omega} + \sum_{\tau=0}^{\infty} R_s(\tau)e^{-i\tau\omega} - R_s(0) = \sum_{\tau=0}^{\infty} R_s(\tau)\left(e^{i\tau\omega} + e^{-i\tau\omega}\right) - R_s(0)$$

By definition

$$\cos(\theta) = \frac{e^{i\theta} + e^{-i\theta}}{2}$$

Using this in our function we get

$$\Phi_s(\omega) = 2\sum_{\tau=0}^{\infty} R_s(\tau)\,\frac{e^{i\tau\omega} + e^{-i\tau\omega}}{2} - R_s(0) = 2\sum_{\tau=0}^{\infty} R_s(\tau)\cos(\tau\omega) - R_s(0)$$
which can also be written as

$$\Phi_s(\omega) = R_s(0) + 2\sum_{\tau=1}^{\infty} R_s(\tau)\cos(\tau\omega)$$

Since $R_s(\tau)$ and $\cos(\tau\omega)$ are real by definition, we conclude from this expression that $\Phi_s(\omega)$ is real.

ii. Any matrix of the form $M = LL^*$ is positive semidefinite, since for every vector $s$

$$s^* M s = s^* L L^* s = (L^* s)^*(L^* s) = \|L^* s\|^2 \geq 0$$
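Before applying this to the spectrum, the two forms of $\Phi_s(\omega)$ derived in part (i) can be checked numerically. The sketch below uses an assumed example autocorrelation $R_s(\tau) = 0.5^{|\tau|}$ and truncates the infinite sums:

```python
import cmath
import math

# Assumed example autocorrelation: R_s(tau) = 0.5**|tau| (even, absolutely summable)
N = 60  # truncation point for the infinite sums

def R(tau):
    return 0.5 ** abs(tau)

for w in [0.0, 0.5, 1.0, 2.0, math.pi]:
    # Two-sided definition (2.63): sum over tau of R_s(tau) e^{-i tau w}
    phi = sum(R(t) * cmath.exp(-1j * t * w) for t in range(-N, N + 1))
    # Real cosine form from part (i): R_s(0) + 2 sum_{tau>=1} R_s(tau) cos(tau w)
    phi_cos = R(0) + 2 * sum(R(t) * math.cos(t * w) for t in range(1, N + 1))
    assert abs(phi.imag) < 1e-9            # (i) the spectrum is real
    assert abs(phi.real - phi_cos) < 1e-9  # both forms agree
```

The imaginary parts cancel pairwise between the $+\tau$ and $-\tau$ terms, which is exactly the mechanism the derivation above makes explicit.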
The expression for the spectral density is defined as

$$\Phi_s(\omega) = \sum_{\tau=-\infty}^{\infty} R_s(\tau)e^{-i\tau\omega}$$

where

$$R_s(\tau) = E\,s(t)s(t-\tau) = \sum_{t=-\infty}^{\infty} s(t)s^*(t-\tau)$$

Replacing in the expression for the spectral density,

$$\Phi_s(\omega) = \sum_{\tau=-\infty}^{\infty}\sum_{t=-\infty}^{\infty} s(t)s^*(t-\tau)e^{-i\tau\omega} = \sum_{\tau=-\infty}^{\infty}\sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t}\,s^*(t-\tau)\,e^{i\omega(t-\tau)}$$
If we substitute $\xi = t - \tau$, then we can rewrite the last expression as

$$\Phi_s(\omega) = \sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t} \sum_{\xi=-\infty}^{\infty} s^*(\xi)e^{i\omega\xi} = \underbrace{\sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t}}_{L}\;\underbrace{\left(\sum_{\xi=-\infty}^{\infty} s(\xi)e^{-i\omega\xi}\right)^{\!*}}_{L^*}$$
If we apply the positive-semidefinite argument to this expression,

$$z^* \Phi_s(\omega) z = z^* \left(\sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t}\right)\left(\sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t}\right)^{\!*} z = \left| z^* \sum_{t=-\infty}^{\infty} s(t)e^{-i\omega t} \right|^2 \geq 0$$

This last term tells us that the spectrum will always be greater than or equal to zero, due to the squared magnitude in the final expression:

$$\Phi_s(\omega) \geq 0 \quad \forall\omega$$
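Part (ii) can also be illustrated with finite data: for any record $s(0),\dots,s(N-1)$, the transform of the biased sample autocovariances equals $\left|\sum_t s(t)e^{-i\omega t}\right|^2/N$, which is real and nonnegative. A sketch with assumed random example data:

```python
import cmath
import random

random.seed(0)
N = 64
s = [random.gauss(0, 1) for _ in range(N)]  # assumed example record

def Rhat(tau):
    # Biased sample autocovariance: (1/N) sum_t s(t) s(t - |tau|)
    tau = abs(tau)
    return sum(s[t] * s[t - tau] for t in range(tau, N)) / N

for w in [0.1, 0.9, 1.7, 2.8]:
    phi = sum(Rhat(tau) * cmath.exp(-1j * tau * w) for tau in range(-N + 1, N))
    S = sum(s[t] * cmath.exp(-1j * w * t) for t in range(N))
    assert abs(phi.imag) < 1e-9                    # real
    assert abs(phi.real - abs(S) ** 2 / N) < 1e-6  # equals |L|^2 / N
    assert phi.real >= -1e-9                       # hence nonnegative
```

The double-sum rearrangement in the code is the finite-record analogue of the substitution $\xi = t - \tau$ used above.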
iii. If we use the relation

$$\Phi_s(\omega) = 2\sum_{\tau=0}^{\infty} R_s(\tau)\cos(\tau\omega) - R_s(0)$$

instead of (2.63), we see that $\omega$ only enters through the term $\cos(\tau\omega)$. Since the cosine function is even,

$$\cos(\omega) = \cos(-\omega)$$

this relation implies that

$$\Phi_s(\omega) = \Phi_s(-\omega)$$
2G.4 Let a continuous-time system representation be given by

$$y(t) = G_c(p)u(t)$$

The input is constant over the sampling interval $T$. Show that the sampled input-output data are related by

$$y(t) = G_T(q)u(t)$$

where

$$G_T(q) = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{e^{sT}-1}{s}\,\frac{1}{q-e^{sT}}\,ds$$

Note: a correction for this exercise can be found at http://www.control.isy.liu.se/~ljung/sysid/errata/
2G.4 Solution

If we express the system in terms of the Laplace transform, we have

$$Y(s) = G_c(s)U(s)$$

where $U(s)$ is the Laplace transform of the input held constant over the period $T$ (taken here with unit amplitude):

$$U(s) = \int_0^T u(t)e^{-st}\,dt = -\frac{1}{s}e^{-st}\Big|_0^T = \frac{1-e^{-sT}}{s}$$

Rewriting the expression for $Y(s)$, we will have

$$Y(s) = G_c(s)\,\frac{1-e^{-sT}}{s}$$
To be able to use the discrete transform $G_T(q)$, we first need to take this expression back into the time domain:

$$y(t) = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} Y(s)e^{st}\,ds = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{1-e^{-sT}}{s}\,e^{st}\,ds$$

From this expression we can calculate $G_T(q)$ by applying the definition of the discrete transform, evaluating the impulse response at the sampling instants $t = kT$:

$$G_T(q) = \sum_{k=1}^{\infty} g_T(k)q^{-k} = \sum_{k=1}^{\infty} \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{1-e^{-sT}}{s}\,e^{skT}q^{-k}\,ds$$
The summation only affects the last exponential term, so it can be moved inside the integral:

$$G_T(q) = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{1-e^{-sT}}{s}\sum_{k=1}^{\infty} e^{skT}q^{-k}\,ds$$

Analyzing the summation,

$$\sum_{k=1}^{\infty} e^{skT}q^{-k} = \sum_{k=1}^{\infty}\left(e^{sT}q^{-1}\right)^k$$

If we apply the known geometric-series expression (valid for $|aq^{-1}| < 1$)

$$\sum_{k=0}^{\infty}\left(aq^{-1}\right)^k = \frac{q}{q-a}$$

with $a = e^{sT}$, then

$$\sum_{k=1}^{\infty}\left(e^{sT}q^{-1}\right)^k = \frac{q}{q-e^{sT}} - 1 = \frac{e^{sT}}{q-e^{sT}}$$
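The geometric-series step can be checked numerically; $s$, $T$ and $q$ below are assumed example values chosen so that $|e^{sT}/q| < 1$:

```python
import cmath

s = -0.3 + 2.0j          # assumed point with Re(s) < 0, so |e^{sT}| < 1
T = 0.5
q = cmath.exp(1j * 0.7)  # assumed evaluation point on the unit circle

a = cmath.exp(s * T)
partial = sum((a / q) ** k for k in range(1, 200))  # truncated sum_{k>=1} (e^{sT}/q)^k
closed = a / (q - a)                                # e^{sT} / (q - e^{sT})
assert abs(partial - closed) < 1e-9
```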
Plugging the last expression into the general formula,

$$G_T(q) = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{1-e^{-sT}}{s}\,\frac{e^{sT}}{q-e^{sT}}\,ds$$

which finally leads to the expression

$$G_T(q) = \frac{1}{2\pi i}\int_{s=-i\infty}^{i\infty} G_c(s)\,\frac{e^{sT}-1}{s}\,\frac{1}{q-e^{sT}}\,ds$$
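For the specific choice $G_c(s) = 1/(s+a)$, evaluating the final formula by the residue at $s = -a$ gives the familiar zero-order-hold result $G_T(q) = (1-e^{-aT})/(a(q-e^{-aT}))$. The sketch below cross-checks this against the pulse response obtained by differencing the sampled step response ($a$, $T$ and $q$ are assumed example values):

```python
import cmath
import math

a, T = 1.5, 0.4          # assumed example values
q = cmath.exp(1j * 0.9)  # assumed evaluation point on the unit circle

def step_response(t):
    # Continuous-time step response of G_c(s) = 1/(s+a)
    return (1 - math.exp(-a * t)) / a

# Pulse response of the sampled system: g_T(k) = step(kT) - step((k-1)T),
# since the held input over one interval is a difference of shifted steps
series = sum((step_response(k * T) - step_response((k - 1) * T)) * q ** (-k)
             for k in range(1, 400))
closed = (1 - math.exp(-a * T)) / (a * (q - math.exp(-a * T)))
assert abs(series - closed) < 1e-9
```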
2E.2 Suppose that $\{\eta(t)\}$ and $\{\xi(t)\}$ are two mutually independent sequences of independent random variables with

$$E\eta(t) = E\xi(t) = 0, \qquad E\eta^2(t) = \lambda_\eta, \qquad E\xi^2(t) = \lambda_\xi$$

Consider

$$\omega(t) = \eta(t) + \xi(t) + \gamma\xi(t-1)$$

Determine an MA(1) process

$$v(t) = e(t) + ce(t-1)$$

where $\{e(t)\}$ is white noise with

$$Ee(t) = 0, \qquad Ee^2(t) = \lambda_e$$

such that $\{\omega(t)\}$ and $\{v(t)\}$ have the same spectra; that is, find $c$ and $\lambda_e$ so that $\Phi_v(\omega) \equiv \Phi_\omega(\omega)$.
2E.2 Solution

If we want to achieve

$$\Phi_v(\omega) \equiv \Phi_\omega(\omega)$$

then the Fourier transforms of both autocorrelation functions should be the same:

$$\sum_{\tau=-\infty}^{\infty} R_v(\tau)e^{-i\tau\omega} = \sum_{\tau=-\infty}^{\infty} R_\omega(\tau)e^{-i\tau\omega}$$

This reasoning leads to the fact that both autocorrelation functions should be the same,

$$R_v(\tau) \equiv R_\omega(\tau)$$

or

$$Ev(t)v(t-\tau) \equiv E\omega(t)\omega(t-\tau)$$
Separating the problem into two parts, we first compute

$$Ev(t)v(t-\tau) = E\{(e(t)+ce(t-1))(e(t-\tau)+ce(t-1-\tau))\}$$
$$= Ee(t)e(t-\tau) + cEe(t)e(t-1-\tau) + cEe(t-1)e(t-\tau) + c^2Ee(t-1)e(t-1-\tau)$$

Each term above is an autocorrelation of the white noise $e(t)$ at a shifted lag:

$$R_v(\tau) = (1+c^2)R_e(\tau) + cR_e(\tau+1) + cR_e(\tau-1)$$

Since $e(t)$ is white, $R_e(0) = Ee^2(t) = \lambda_e$ and $R_e(\tau) = 0$ for $\tau \neq 0$. Then the final expression for $R_v(\tau)$ has only the nonzero values

$$R_v(0) = \lambda_e(1+c^2), \qquad R_v(\pm 1) = c\lambda_e$$
To obtain an expression for $R_\omega(\tau)$ we follow the same procedure:

$$E\omega(t)\omega(t-\tau) = E\{(\eta(t)+\xi(t)+\gamma\xi(t-1))(\eta(t-\tau)+\xi(t-\tau)+\gamma\xi(t-1-\tau))\}$$

Expanding, and applying the given data ($\eta$ and $\xi$ are zero mean and mutually independent, so all cross terms between $\eta$ and $\xi$ vanish), we can reduce this to

$$E\omega(t)\omega(t-\tau) = E\eta(t)\eta(t-\tau) + E\xi(t)\xi(t-\tau) + \gamma E\xi(t)\xi(t-1-\tau) + \gamma E\xi(t-1)\xi(t-\tau) + \gamma^2 E\xi(t-1)\xi(t-1-\tau)$$

Applying the same reasoning as before, each term is an autocorrelation of $\eta$ or $\xi$ at a shifted lag, with $R_\eta(0) = \lambda_\eta$, $R_\xi(0) = \lambda_\xi$ and both zero elsewhere:

$$R_\omega(\tau) = R_\eta(\tau) + (1+\gamma^2)R_\xi(\tau) + \gamma R_\xi(\tau+1) + \gamma R_\xi(\tau-1)$$

Then the final expression for $R_\omega(\tau)$ has only the nonzero values

$$R_\omega(0) = \lambda_\eta + \lambda_\xi(1+\gamma^2), \qquad R_\omega(\pm 1) = \gamma\lambda_\xi$$
By the equivalence $R_v(\tau) \equiv R_\omega(\tau)$, matching the lag-0 and lag-1 values gives the two equations

$$\lambda_e(1+c^2) = \lambda_\eta + \lambda_\xi(1+\gamma^2)$$
$$c\lambda_e = \gamma\lambda_\xi$$

Substituting $\lambda_e = \gamma\lambda_\xi/c$ from the second equation into the first yields a quadratic in $c$,

$$\gamma\lambda_\xi c^2 - \left(\lambda_\eta + \lambda_\xi(1+\gamma^2)\right)c + \gamma\lambda_\xi = 0$$

whose two roots have product 1. Choosing the root with $|c| \leq 1$ gives an invertible MA(1) representation, and finally

$$\lambda_e = \frac{\gamma\lambda_\xi}{c}$$
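The matching of lag-0 and lag-1 covariances can be sketched numerically: eliminate $\lambda_e$, solve the resulting quadratic for $c$, and verify that the two MA spectra coincide. The values of $\gamma$, $\lambda_\eta$ and $\lambda_\xi$ below are assumed examples:

```python
import cmath
import math

gamma, lam_eta, lam_xi = 0.6, 1.0, 2.0  # assumed example values

# Lag-0 and lag-1 covariances of omega(t) = eta(t) + xi(t) + gamma*xi(t-1)
r0 = lam_eta + lam_xi * (1 + gamma ** 2)
r1 = lam_xi * gamma

# Matching lam_e*(1 + c^2) = r0 and lam_e*c = r1, eliminating lam_e:
# r1*c^2 - r0*c + r1 = 0; take the root with |c| < 1 (invertible MA(1))
c = (r0 - math.sqrt(r0 ** 2 - 4 * r1 ** 2)) / (2 * r1)
lam_e = r1 / c

# The spectra of v(t) = e(t) + c*e(t-1) and of omega(t) coincide
for w in [0.0, 0.4, 1.1, 2.5, math.pi]:
    phi_v = lam_e * abs(1 + c * cmath.exp(-1j * w)) ** 2
    phi_w = lam_eta + lam_xi * abs(1 + gamma * cmath.exp(-1j * w)) ** 2
    assert abs(phi_v - phi_w) < 1e-9
assert abs(c) < 1
```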
2E.4 Consider the "state-space description"

$$x(t+1) = fx(t) + \omega(t)$$
$$y(t) = hx(t) + v(t)$$

where $x$, $f$, $h$, $\omega$ and $v$ are scalars. $\{\omega(t)\}$ and $\{v(t)\}$ are mutually independent white Gaussian noises with variances $R_1$ and $R_2$, respectively. Show that $y(t)$ can be represented as an ARMA process:

$$y(t) + a_1y(t-1) + \cdots + a_ny(t-n) = e(t) + c_1e(t-1) + \cdots + c_ne(t-n)$$

Determine $n$, $a_i$, $c_i$ and the variance of $e(t)$ in terms of $f$, $h$, $R_1$ and $R_2$. What is the relationship between $e(t)$, $\omega(t)$ and $v(t)$?
2E.4 Solution

Taking the discrete transform of the state-space model:

$$qX(q) = fX(q) + \omega(q), \qquad Y(q) = hX(q) + v(q)$$

$$X(q) = \frac{\omega(q)}{q-f}, \qquad Y(q) = \frac{h\omega(q)}{q-f} + v(q)$$

$$Y(q)(q-f) = h\omega(q) + (q-f)v(q)$$

$$y_{k+1} - fy_k = h\omega_k + v_{k+1} - fv_k$$

What we want to do now is make valid the equivalence

$$h\omega_k + v_{k+1} - fv_k \cong \alpha e_{k+1} + \beta e_k$$
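The input-output relation just derived can be verified by direct simulation of the state-space model; $f$, $h$ and the noise realizations below are assumed example values:

```python
import random

random.seed(1)
f, h = 0.5, 2.0  # assumed example values
n = 200
w = [random.gauss(0, 1) for _ in range(n)]  # process noise omega_k
v = [random.gauss(0, 1) for _ in range(n)]  # measurement noise v_k

x = [0.0] * n
y = [0.0] * n
for k in range(n - 1):
    x[k + 1] = f * x[k] + w[k]
for k in range(n):
    y[k] = h * x[k] + v[k]

# y_{k+1} - f*y_k = h*w_k + v_{k+1} - f*v_k holds sample by sample
for k in range(n - 1):
    lhs = y[k + 1] - f * y[k]
    rhs = h * w[k] + v[k + 1] - f * v[k]
    assert abs(lhs - rhs) < 1e-9
```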
As we see, this is an ARMA process in which the values of $\alpha$ and $\beta$ are still missing; but since the ARMA process is monic, we can take $\alpha = 1$, thus

$$y_{k+1} - fy_k \cong e_{k+1} + \beta e_k$$

If we name the two sides as follows,

$$\xi_k = h\omega_k + v_{k+1} - fv_k, \qquad \zeta_k = e_{k+1} + \beta e_k$$

then to fulfill the equivalence of the processes we need

$$E\xi_k = E\zeta_k = 0, \qquad E\xi_k\xi_k = E\zeta_k\zeta_k, \qquad E\xi_k\xi_{k-1} = E\zeta_k\zeta_{k-1}$$
Replacing the expressions,

$$E\{h\omega_k + v_{k+1} - fv_k\}\{h\omega_k + v_{k+1} - fv_k\} = E\{e_{k+1} + \beta e_k\}\{e_{k+1} + \beta e_k\}$$
$$E\{h\omega_k + v_{k+1} - fv_k\}\{h\omega_{k-1} + v_k - fv_{k-1}\} = E\{e_{k+1} + \beta e_k\}\{e_k + \beta e_{k-1}\}$$

Making the internal operations for $E\xi_k\xi_k = E\zeta_k\zeta_k$ we end up with

$$h^2E\omega_k^2 + Ev_{k+1}^2 + f^2Ev_k^2 = Ee_{k+1}^2 + \beta^2Ee_k^2$$
$$h^2R_1 + R_2 + f^2R_2 = R_e + \beta^2R_e$$

and the expression for $E\xi_k\xi_{k-1} = E\zeta_k\zeta_{k-1}$ gives

$$-fEv_k^2 = \beta Ee_k^2$$
$$-fR_2 = \beta R_e$$
We need to solve the following system of equations for $\beta$ and $R_e$:

$$h^2R_1 + R_2 + f^2R_2 = R_e(1+\beta^2)$$
$$-fR_2 = \beta R_e$$

Getting $R_e$ from the second equation, $R_e = -fR_2/\beta$, and replacing into the first equation:

$$\frac{1+\beta^2}{\beta} = -\frac{h^2R_1 + R_2 + f^2R_2}{fR_2}$$

As we see, the numerator of the right-hand side is always positive; for simplicity we can rewrite the expression as

$$\frac{1+\beta^2}{\beta} = -2C$$
where

$$2C = \frac{h^2R_1 + R_2 + f^2R_2}{fR_2}$$

This gives the quadratic $\beta^2 + 2C\beta + 1 = 0$, so the solution for $\beta$ is

$$\beta = -C \pm \sqrt{C^2 - 1}$$

So finally we can write the ARMA process in the form

$$y_{k+1} - fy_k = e_{k+1} + \left(-C \pm \sqrt{C^2-1}\right)e_k$$

Now we need to check the invertibility of the moving-average part. To obtain a real $\beta$, we first need the square root to be real:

$$C^2 - 1 \geq 0$$
which tells us that $|C| \geq 1$ is required in order to avoid complex values. For $f > 0$ this is automatically fulfilled, since $2C = (h^2R_1/R_2 + 1 + f^2)/f \geq (1+f^2)/f \geq 2$. Another requisite is

$$|\beta| = \left|-C \pm \sqrt{C^2-1}\right| < 1$$

Since the two roots of $\beta^2 + 2C\beta + 1 = 0$ have product equal to 1, one root lies inside the unit circle and the other outside; for $C \geq 1$ the root inside is $\beta = -C + \sqrt{C^2-1}$, and this is the sign we must choose to obtain an invertible moving-average part in the process transfer function

$$\frac{Y(q)}{E(q)} = \frac{q+\beta}{q-f}$$

The pole $f$ is a property of the given system itself: if $|f| > 1$ the system is unstable regardless of the choice of $\beta$. The value of the variance of $e(t)$ will be

$$R_e = \frac{-fR_2}{\beta} = \frac{-fR_2}{-C + \sqrt{C^2-1}}$$
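A numerical sketch of these equations, with assumed example values of $f$, $h$, $R_1$, $R_2$: match the lag-0 and lag-1 covariances, solve $\beta^2 + 2C\beta + 1 = 0$ for the root inside the unit circle, and check the result:

```python
import math

f, h, R1, R2 = 0.5, 1.0, 1.0, 1.0  # assumed example values

r0 = h**2 * R1 + R2 + f**2 * R2    # E{xi_k xi_k}
r1 = -f * R2                       # E{xi_k xi_{k-1}}

C = r0 / (2 * f * R2)              # 2C = (h^2 R1 + R2 + f^2 R2) / (f R2)
beta = -C + math.sqrt(C**2 - 1)    # root of beta^2 + 2C*beta + 1 = 0 with |beta| < 1
R_e = -f * R2 / beta               # variance of e(t)

assert C >= 1
assert abs(beta) < 1                         # invertible choice
assert R_e > 0
assert abs(R_e * (1 + beta**2) - r0) < 1e-9  # lag-0 covariances match
assert abs(R_e * beta - r1) < 1e-9           # lag-1 covariances match
```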
2E.6 Consider the system

$$\frac{d}{dt}y(t) + ay(t) = u(t)$$

Suppose that the input $u(t)$ is piecewise constant over the sampling interval:

$$u(t) = u_k, \qquad kT \leq t < (k+1)T$$

(a) Derive a sampled-data system description for $u_k$, $y(kT)$.
(b) Assume that there is a time delay of $T$ seconds, so that $u(t)$ in the expression is replaced by $u(t-T)$. Derive a sampled-data system description for this case.
(c) Assume that the time delay is $1.5T$, so that $u(t)$ is replaced by $u(t-1.5T)$. Then give the sampled-data description.
2E.6 Solution

(a) The solution to the differential equation, which includes the effects of the input and the initial conditions, is given by

$$y(t) = e^{-a(t-t_0)}y(t_0) + \int_{t_0}^{t} e^{-a(t-\tau)}u(\tau)\,d\tau$$

where $y(t_0)$ is the initial condition on the state variable. Based on this solution, the sampled response is given by

$$y[(k+1)T] = e^{-aT}y[kT] + \int_{\tau=0}^{T} e^{-a(T-\tau)}u(\tau + kT)\,d\tau$$

Since the input is constant over the interval, we can write

$$y[(k+1)T] = Ay[kT] + Bu[kT]$$

where

$$A = e^{-aT}, \qquad B = \int_{\tau=0}^{T} e^{-a(T-\tau)}\,d\tau = \frac{1-e^{-aT}}{a}$$
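The sampled-data description of part (a) can be checked against a brute-force Euler integration of the differential equation over one sampling interval ($a$, $T$, $y_0$ and $u$ are assumed example values):

```python
import math

a, T = 2.0, 0.1  # assumed example values
A = math.exp(-a * T)
B = (1 - math.exp(-a * T)) / a

y0, u = 1.3, -0.7  # y[kT] and the constant input over [kT, (k+1)T)
n = 100_000        # Euler steps
dt = T / n
y = y0
for _ in range(n):
    y += dt * (-a * y + u)  # dy/dt = -a*y + u
assert abs(y - (A * y0 + B * u)) < 1e-4
```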
If we use the Z transform,

$$z(Y(z) - y[0]) = AY(z) + BU(z)$$
$$Y(z)(z-A) = BU(z) + zy[0]$$
$$Y(z) = \frac{B}{z-A}U(z) + \frac{z}{z-A}y[0]$$

or, expressed as a geometric series,

$$y(t) = B\sum_{k=0}^{\infty} A^k u(t-k-1) + A^t y[0]$$
(b) Introducing the time delay into the system,

$$\frac{d}{dt}y(t) + ay(t) = u(t-T)$$

The solution to the differential equation after applying the delay, including the effects of the input and the initial conditions, is given by

$$y(t) = e^{-a(t-t_0)}y(t_0) + \int_{t_0}^{t} e^{-a(t-\tau)}u(\tau-T)\,d\tau$$

where $y(t_0)$ is the initial condition on the state variable. Based on this solution, the sampled response is given by

$$y[(k+1)T] = e^{-aT}y[kT] + \int_{\tau=0}^{T} e^{-a(T-\tau)}u(\tau + kT - T)\,d\tau$$

so we can write

$$y[(k+1)T] = Ay[kT] + Bu[(k-1)T]$$

where

$$A = e^{-aT}, \qquad B = \int_{\tau=0}^{T} e^{-a(T-\tau)}\,d\tau = \frac{1-e^{-aT}}{a}$$
If we use the Z transform (a delay of one sample contributes $z^{-1}U(z) + u[-1]$),

$$z(Y(z) - y[0]) = AY(z) + B\left(z^{-1}U(z) + u[-1]\right)$$
$$Y(z)(z-A) = Bz^{-1}U(z) + Bu[-1] + zy[0]$$
$$Y(z) = \frac{B}{z(z-A)}U(z) + \frac{B}{z-A}u[-1] + \frac{z}{z-A}y[0]$$
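Part (b) can be checked the same way: over $[kT, (k+1)T)$ the delayed input $u(t-T)$ equals the single sample $u[(k-1)T]$, so the recursion $y[(k+1)T] = Ay[kT] + Bu[(k-1)T]$ should reproduce the integrated response (assumed example values):

```python
import math

a, T = 2.0, 0.1  # assumed example values
A = math.exp(-a * T)
B = (1 - math.exp(-a * T)) / a

y0, u_prev = 0.8, 0.6  # y[kT] and the delayed sample u[(k-1)T]
n = 100_000
dt = T / n
y = y0
for _ in range(n):
    y += dt * (-a * y + u_prev)  # u(t - T) = u[(k-1)T] throughout the interval
assert abs(y - (A * y0 + B * u_prev)) < 1e-4
```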
(c) Introducing the time delay into the system,

$$\frac{d}{dt}y(t) + ay(t) = u(t-1.5T)$$

The solution to the differential equation after applying the delay, including the effects of the input and the initial conditions, is given by

$$y(t) = e^{-a(t-t_0)}y(t_0) + \int_{t_0}^{t} e^{-a(t-\tau)}u(\tau-1.5T)\,d\tau$$

where $y(t_0)$ is the initial condition on the state variable. Based on this solution, the sampled response is given by

$$y[(k+1)T] = e^{-aT}y[kT] + \int_{\tau=0}^{T} e^{-a(T-\tau)}u(\tau + kT - 1.5T)\,d\tau$$

Since the total delay is $1.5T$, the delayed input changes value in the middle of the sampling interval: for $0 \leq \tau < T/2$ it equals $u[(k-2)T]$, and for $T/2 \leq \tau < T$ it equals $u[(k-1)T]$. Splitting the integral accordingly,

$$y[(k+1)T] = Ay[kT] + B_1u[(k-2)T] + B_2u[(k-1)T]$$

where

$$A = e^{-aT}, \qquad B_1 = \int_{0}^{T/2} e^{-a(T-\tau)}\,d\tau = \frac{e^{-aT/2}-e^{-aT}}{a}, \qquad B_2 = \int_{T/2}^{T} e^{-a(T-\tau)}\,d\tau = \frac{1-e^{-aT/2}}{a}$$
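With a delay of $1.5T$ the delayed input switches value in the middle of each sampling interval, which splits $B$ into two parts: $y[(k+1)T] = Ay[kT] + B_1u[(k-2)T] + B_2u[(k-1)T]$ with $B_1 = (e^{-aT/2}-e^{-aT})/a$ and $B_2 = (1-e^{-aT/2})/a$. A numerical sketch with assumed example values:

```python
import math

a, T = 2.0, 0.1  # assumed example values
A = math.exp(-a * T)
B1 = (math.exp(-a * T / 2) - math.exp(-a * T)) / a  # integral over [0, T/2)
B2 = (1 - math.exp(-a * T / 2)) / a                 # integral over [T/2, T)

y0, u_km2, u_km1 = 0.4, 1.0, -2.0  # y[kT] and the two delayed input samples
n = 100_000
dt = T / n
y = y0
for i in range(n):
    t = i * dt
    u = u_km2 if t < T / 2 else u_km1  # u(t - 1.5T) switches mid-interval
    y += dt * (-a * y + u)
assert abs(y - (A * y0 + B1 * u_km2 + B2 * u_km1)) < 1e-4
```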