
(1)

Introduction to Sequential Monte Carlo and

Inference in (switching) state space models

A. Taylan Cemgil

Signal Processing and Communications Lab.

30 Nov 2006

(2)

Abstract

This talk will be an introduction to Sequential Monte Carlo methods for filtering and smoothing in time series models. After reviewing basic concepts such as importance sampling, resampling, and Rao-Blackwellization, I will illustrate how these ideas can be applied to inference in switching state space models (using the mixture Kalman filter) and changepoint models (where exact inference is possible). The material is based on the following papers:

• A. Doucet, S. Godsill, and C. Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, no. 3, pp. 197-208, 2000.

• R. Chen and J. S. Liu, "Mixture Kalman filters," J. R. Statist. Soc. B, vol. 62, 2000.

• P. Fearnhead, "Exact and efficient Bayesian inference for multiple changepoint problems," Tech. Rep., Dept. of Math. and Stat., Lancaster University, 2003.

(3)

Outline

• Time Series Models and Inference

• Importance Sampling

• Resampling

• Rao-Blackwellization

• Putting it all together, Sequential Monte Carlo

• Conditionally Gaussian Switching State Space Models

– Mixture Kalman Filter

– Change-point models

(4)

Time series models and Inference, Terminology

In signal processing, applied physics, and machine learning, many phenomena are modelled by dynamical models

x0 x1 . . . xk−1 xk . . . xK

y1 . . . yk−1 yk . . . yK

xk ∼ p(xk|xk−1)   Transition Model
yk ∼ p(yk|xk)   Observation Model

• x are the latent states

• y are the observations

• In a full Bayesian setting, x includes unknown model parameters

(5)

Time series models and applications

• Hidden Markov Models

• (Time varying) AR, ARMA, MA models

• Linear Dynamical Systems, Kalman Filter models

• Switching state space models

• Dynamic Bayesian networks

• Nonlinear Stochastic Dynamical Systems

(6)

Online Inference, Terminology

Filtering: p(xk|y1:k)

– Distribution of current state given all past information – Realtime/Online/Sequential Processing

x0 x1 . . . xk−1 xk . . . xK

y1 . . . yk−1 yk . . . yK

• Potentially confusing misnomer:

– More general than "digital filtering" (convolution) in DSP – but algorithmically related for some models (KFM)

(7)

Online Inference, Terminology

Prediction: p(yk:K, xk:K|y1:k−1)

– evaluation of possible future outcomes; like filtering without observations

x0 x1 . . . xk−1 xk . . . xK

y1 . . . yk−1 yk . . . yK

• Tracking, Restoration

(8)

Offline Inference, Terminology

Smoothing: p(x0:K|y1:K)

– better estimate of past states, essential for learning

Most likely trajectory (Viterbi path): arg max_{x0:K} p(x0:K|y1:K)

x0 x1 . . . xk−1 xk . . . xK

y1 . . . yk−1 yk . . . yK

Interpolation p(yk, xk|y1:k−1, yk+1:K)

fill in lost observations given past and future

x0 x1 . . . xk−1 xk . . . xK

y1 . . . yk−1 yk . . . yK

(9)

Deterministic Linear Dynamical Systems

• The latent variables sk and observations yk are continuous

• The transition and observation models are linear

• Examples

– A deterministic dynamical system with two state variables – Particle moving on the real line, a perfect metronome

sk = (phasek, periodk)ᵀ = [1 1; 0 1] sk−1 = A sk−1

yk = phasek = [1 0] sk = C sk

(10)

Kalman Filter Models, Stochastic Dynamical Systems

• We allow random (unknown) accelerations and observation error

sk = [1 1; 0 1] sk−1 + εk = A sk−1 + εk

yk = [1 0] sk + νk = C sk + νk

(11)

Tracking

s0 s1 . . . sk−1 sk . . . sK

y1 . . . yk−1 yk . . . yK

• In generative model notation

sk ∼ N(sk; A sk−1, Q)
yk ∼ N(yk; C sk, R)

• Tracking = estimating the latent state of the system = Kalman filtering
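For concreteness, a minimal sketch (my own illustration; the values of A, C, Q, R and the horizon are assumed, not from the talk) that simulates this model and runs the Kalman filter predict/update recursion to track the latent (phase, period) state:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition: phase += period
C = np.array([[1.0, 0.0]])               # we only observe the phase
Q = 0.01 * np.eye(2)                     # process noise (random accelerations)
R = np.array([[0.25]])                   # observation noise

# Simulate s_k ~ N(A s_{k-1}, Q), y_k ~ N(C s_k, R)
K_steps, s = 50, np.array([0.0, 1.0])
ys = []
for _ in range(K_steps):
    s = A @ s + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(C @ s + rng.multivariate_normal(np.zeros(1), R))

# Kalman filter: p(s_k | y_{1:k}) = N(m_k, P_k)
m, P = np.array([0.0, 1.0]), 10.0 * np.eye(2)
for y in ys:
    m, P = A @ m, A @ P @ A.T + Q                          # predict
    S = C @ P @ C.T + R                                    # innovation covariance
    Kg = P @ C.T @ np.linalg.inv(S)                        # Kalman gain
    m, P = m + Kg @ (y - C @ m), (np.eye(2) - Kg @ C) @ P  # update

print(m)   # filtered estimate of (phase, period) at the last step
```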

(12)

Kalman Filtering and Smoothing (two filter formulation)

[Factor graph: p(x1) → x1 → p(x2|x1) → x2 → p(x3|x2) → x3 → p(x4|x3) → x4, with observation factors p(y1|x1), p(y2|x2), p(y3|x3), p(y4|x4)]

• Forward Pass (α recursion)

p(y1:K) = ∫ dxK p(yK|xK) ∫ dxK−1 p(xK|xK−1) · · · ∫ dx2 p(x3|x2) p(y2|x2) ∫ dx1 p(x2|x1) p(y1|x1) p(x1)

with α1|0 = p(x1),   αk ≡ αk|k = p(yk|xk) αk|k−1,   αk+1|k = ∫ dxk p(xk+1|xk) αk

• Backward Pass (β recursion)

p(y1:K) = ∫ dx1 p(x1) p(y1|x1) · · · ∫ dxK−1 p(xK−1|xK−2) p(yK−1|xK−1) ∫ dxK p(xK|xK−1) p(yK|xK) · 1

with βK = 1,   βk−1 = ∫ dxk p(xk|xk−1) p(yk|xk) βk

• Smoothing: p(xk|y1:K) ∝ αk βk

(13)

α1|0 = p(x1)

[Figure: prior density over the (Phase, Period) state space]

(14)

α1|1 = p(y1|x1) p(x1)

[Figure: updated density over the (Phase, Period) state space after the first observation]

(15)

α2|1 = ∫ dx1 p(x2|x1) p(y1|x1) p(x1) ∝ p(x2|y1)

[Figure: one-step predictive density over the (Phase, Period) state space]

(16)

α2|2 = p(y2|x2) p(x2|y1)

[Figure: updated density over the (Phase, Period) state space after the second observation]

(17)

α5|5 ∝ p(x5|y1:5)

[Figure: filtering density over the (Phase, Period) state space after five observations]

(18)

Importance Sampling (IS)

Consider a probability distribution with (possibly unknown) normalisation constant

p(x) = (1/Z) φ(x),   Z = ∫ dx φ(x)

IS: Estimate expectations (or features) of p(x) by a weighted sample

⟨f(x)⟩p(x) = ∫ dx f(x) p(x) ≈ Σ_i w̃(i) f(x(i))

(19)

Importance Sampling (cont.)

Change of measure with weight function W(x) ≡ φ(x)/q(x):

⟨f(x)⟩p(x) = (1/Z) ∫ dx f(x) (φ(x)/q(x)) q(x) = (1/Z) ⟨f(x) W(x)⟩q(x)

• If Z is unknown, as is often the case in Bayesian inference,

Z = ∫ dx φ(x) = ∫ dx (φ(x)/q(x)) q(x) = ⟨W(x)⟩q(x)

⟨f(x)⟩p(x) = ⟨f(x) W(x)⟩q(x) / ⟨W(x)⟩q(x)

(20)

Importance Sampling (cont.)

Draw i = 1, . . . , N independent samples from q:

x(i) ∼ q(x)

Calculate the importance weights

W(i) = W(x(i)) = φ(x(i))/q(x(i))

Approximate the normalizing constant

Z = ⟨W(x)⟩q(x) ≈ (1/N) Σ_i W(i)

The desired expectation is approximated by

⟨f(x)⟩p(x) = ⟨f(x)W(x)⟩q(x) / ⟨W(x)⟩q(x) ≈ Σ_i W(i) f(x(i)) / Σ_i W(i) = Σ_i w̃(i) f(x(i))

Here w̃(i) = W(i)/Σ_j W(j) are the normalized importance weights.
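A minimal numerical sketch of this recipe (my own example, not from the talk; the target mixture and the proposal parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Unnormalised target phi(x): a two-component Gaussian mixture (its Z is treated as unknown)
def phi(x):
    return 0.3 * gauss_pdf(x, -2.0, 0.7) + 0.7 * gauss_pdf(x, 4.0, 1.5)

# Proposal q(x): a broad Gaussian covering both modes
q_mu, q_sigma = 1.0, 5.0

N = 20_000
x = rng.normal(q_mu, q_sigma, N)            # x(i) ~ q(x)
W = phi(x) / gauss_pdf(x, q_mu, q_sigma)    # W(i) = phi(x(i)) / q(x(i))
w_tilde = W / W.sum()                       # normalized importance weights

Z_hat = W.mean()                            # Z ~= <W>_q (here phi is already normalised, so Z ~= 1)
mean_hat = np.sum(w_tilde * x)              # <x>_p ~= sum_i w~(i) x(i)
print(Z_hat, mean_hat)
```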

(21)

Importance Sampling (cont.)

[Figure: target φ(x), proposal q(x), and the resulting weight function W(x) = φ(x)/q(x)]

(22)

Resampling

Importance sampling computes an approximation with weighted delta functions

p(x) ≈ Σ_i W̃(i) δ(x − x(i))

In this representation, most of the W̃(i) will be very close to zero and the representation may be dominated by a few large weights.

Resampling samples a set of new "particles"

x(j)new ∼ Σ_i W̃(i) δ(x − x(i))        p(x) ≈ (1/N) Σ_j δ(x − x(j)new)

Since we sample from a degenerate distribution, particle locations stay unchanged. We merely duplicate (, triplicate, ...) or discard particles according to their weight.

This process is also named "selection", "survival of the fittest", etc., in various fields (genetic algorithms, AI, ...).
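A minimal sketch of the resampling step (function and variable names are mine; `particles` is assumed to be an array of shape (N, ...)). Multinomial resampling is the direct translation of the equation above; systematic resampling is a common lower-variance alternative:

```python
import numpy as np

def multinomial_resample(particles, w_tilde, rng):
    """Draw N indices with probabilities w~(i): particles get duplicated or discarded."""
    N = len(particles)
    idx = rng.choice(N, size=N, p=w_tilde)
    return particles[idx]

def systematic_resample(particles, w_tilde, rng):
    """Lower-variance variant: one uniform offset, N evenly spaced pointers into the CDF."""
    N = len(particles)
    positions = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(w_tilde), positions), N - 1)
    return particles[idx]

# After either call, all surviving particles carry the equal weight 1/N.
```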

(23)

Resampling

[Figure: target φ(x), proposal q(x), weight function W(x), and the resampled particles xnew]

x(j)new ∼ Σ_i W̃(i) δ(x − x(i))

(24)

Examples of Proposal Distributions

x → y,   p(x|y) ∝ p(y|x) p(x)

• Prior as the proposal: q(x) = p(x)

W(x) = p(y|x) p(x) / p(x) = p(y|x)

(25)

Examples of Proposal Distributions

x → y,   p(x|y) ∝ p(y|x) p(x)

• Likelihood as the proposal: q(x) = p(y|x) / ∫ dx p(y|x) = p(y|x)/c(y)

W(x) = p(y|x) p(x) / (p(y|x)/c(y)) = p(x) c(y) ∝ p(x)

• Interesting when sensors are very accurate and dim(y) ≫ dim(x).

Since there are many proposals, is there a “best” proposal distribution?

(26)

Optimal Proposal Distribution

x → y,   p(x|y) ∝ p(y|x) p(x)

Task: Estimate ⟨f(x)⟩p(x|y)

• IS constructs the estimator I(f) = ⟨f(x) W(x)⟩q(x)

• Minimize the variance of the estimator

⟨(f(x)W(x) − ⟨f(x)W(x)⟩)²⟩q(x) = ⟨f²(x)W²(x)⟩q(x) − ⟨f(x)W(x)⟩²q(x)   (1)
                               = ⟨f²(x)W²(x)⟩q(x) − ⟨f(x)⟩²p(x)        (2)
                               = ⟨f²(x)W²(x)⟩q(x) − I²(f)              (3)

• Minimize the first term since only it depends upon q

(27)

Optimal Proposal Distribution

• (By Jensen's inequality) The first term is lower bounded:

⟨f²(x)W²(x)⟩q(x) ≥ ⟨|f(x)| W(x)⟩²q(x) = (∫ |f(x)| p(x|y) dx)²

• We will look for a distribution q that attains this lower bound. Take

q(x) = |f(x)| p(x|y) / ∫ |f(x)| p(x|y) dx

(28)

Optimal Proposal Distribution (cont.)

• The weight function for this particular proposal q is

W(x) = p(x|y)/q(x) = (∫ |f(x)| p(x|y) dx) / |f(x)|

• We show that q attains the lower bound:

⟨f²(x)W²(x)⟩q(x) = ⟨ f²(x) (∫ |f(x)| p(x|y) dx)² / |f(x)|² ⟩q(x)
                 = (∫ |f(x)| p(x|y) dx)²
                 = ⟨|f(x)|⟩²p(x|y)
                 = ⟨|f(x)| W(x)⟩²q(x)

• ⇒ There are distributions q that are even “better” than the exact posterior!
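A small numerical check of this perhaps surprising fact (my own toy example, not from the slides): take p(x|y) = N(0, 1) and f(x) = |x|, so the optimal proposal q(x) ∝ |x| N(x; 0, 1) can be sampled by drawing |x| from a Rayleigh(1) distribution and attaching a random sign.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Estimator 1: sample from the posterior p(x|y) = N(0, 1) itself
x_post = rng.standard_normal(N)
est_post = np.abs(x_post).mean()

# Estimator 2: sample from the optimal proposal q(x) ∝ |x| N(x; 0, 1)
x_opt = rng.rayleigh(1.0, N) * rng.choice([-1.0, 1.0], N)
Z = np.sqrt(2 / np.pi)                # normaliser of q, i.e. <|x|> under N(0, 1)
W = Z / np.abs(x_opt)                 # weight W(x) = p(x|y)/q(x)
est_opt = (np.abs(x_opt) * W).mean()  # every term equals Z exactly: zero variance

print(est_post, est_opt)              # both ~ sqrt(2/pi) ~ 0.798, but est_opt has no Monte Carlo error
```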

(29)

A link to alpha divergences

The α-divergence between two distributions is defined as

Dα(p||q) ≡ (1 / (β(1 − β))) (1 − ∫ dx p(x)^β q(x)^(1−β))

where β = (1 + α)/2 and p and q are two probability distributions

• limβ→0 Dα(p||q) = KL(q||p)

• limβ→1 Dα(p||q) = KL(p||q)

• β = 2, (α = 3)

D3(p||q) ≡ (1/2) ∫ dx p(x)² q(x)⁻¹ − 1/2 = (1/2) ⟨W(x)²⟩q(x) − 1/2

Best q (in a constrained family) is typically a heavy-tailed approximation to p

(30)

Examples of Proposal Distributions

[Graphical model: x1 → x2 with observations y1, y2]

p(x1:2|y1:2) ∝ p(y1|x1) p(x1) p(y2|x2) p(x2|x1)

Task: Obtain samples from the posterior p(x1:2|y1:2) = (1/Zy) φ(x1:2)

• Prior as the proposal: q(x1:2) = p(x1) p(x2|x1)

W(x1:2) = φ(x1:2)/q(x1:2) = p(y1|x1) p(y2|x2)

• We sample from the prior as follows:

x1(i) ∼ p(x1),   x2(i) ∼ p(x2|x1 = x1(i)),   W(x(i)) = p(y1|x1(i)) p(y2|x2(i))

(31)

Examples of Proposal Distributions

φ(x1:2) = p(y1|x1)p(x1)p(y2|x2)p(x2|x1)

• State prediction as the proposal: q(x1:2) = p(x1|y1) p(x2|x1)

W(x1:2) = p(y1|x1) p(x1) p(y2|x2) p(x2|x1) / (p(x1|y1) p(x2|x1)) = p(y1) p(y2|x2)

• We sample from the proposal and compute the weight

x1(i) ∼ p(x1|y1),   x2(i) ∼ p(x2|x1 = x1(i)),   W(x(i)) = p(y1) p(y2|x2(i))

• Note that this weight does not depend on x1

(32)

Examples of Proposal Distributions

φ(x1:2) = p(y1|x1)p(x1)p(y2|x2)p(x2|x1)

• Filtering distribution as the proposal: q(x1:2) = p(x1|y1) p(x2|x1, y2)

W(x1:2) = p(y1|x1) p(x1) p(y2|x2) p(x2|x1) / (p(x1|y1) p(x2|x1, y2)) = p(y1) p(y2|x1)

• We sample from the proposal and compute the weight

x1(i) ∼ p(x1|y1),   x2(i) ∼ p(x2|x1 = x1(i), y2),   W(x(i)) = p(y1) p(y2|x1(i))

• Note that this weight does not depend on x2

(33)

Examples of Proposal Distributions

φ(x1:2) = p(y1|x1)p(x1)p(y2|x2)p(x2|x1)

• Exact posterior as the proposal: q(x1:2) = p(x1|y1, y2) p(x2|x1, y2)

W(x1:2) = p(y1|x1) p(x1) p(y2|x2) p(x2|x1) / (p(x1|y1, y2) p(x2|x1, y2)) = p(y1) p(y2|y1)

• Note that this weight is constant, i.e.

⟨W(x1:2)²⟩ − ⟨W(x1:2)⟩² = 0

(34)

Variance reduction

q(x)                              W(x) = φ(x)/q(x)
p(x1)p(x2|x1)                     p(y1|x1)p(y2|x2)
p(x1|y1)p(x2|x1)                  p(y1)p(y2|x2)
p(x1|y1)p(x2|x1, y2)              p(y1)p(y2|x1)
p(x1|y1, y2)p(x2|x1, y2)          p(y1)p(y2|y1)

Accurate proposals

• gradually decrease the variance

• but take more time to compute

(35)

Rao Blackwellization

• Suppose x = (r, s),

r s y

φy(s, r) = p(y|s, r)p(s|r)p(r) ≡ φy(s|r)φy(r)

• and for a given f (r, s) it is possible to compute the following integral (conditioned on r)

f(r) ≡ ∫ ds f(s, r) φy(s|r) = ⟨f(s, r)⟩φy(s|r)

(36)

Rao Blackwellization

• Consider two procedures for estimating I(f) = ⟨f(r, s)⟩φy(r,s)

1. q = q(r, s)

I(f) = ∫ ds dr f(r, s) (φy(r, s)/q(r, s)) q(r, s) ≈ (1/N) Σ_i f(r(i), s(i)) W(r(i), s(i)) = IN(f)

2. qRB = q(r) with f(r) ≡ ⟨f(s, r)⟩φ(s|r)

I(f) = ∫ dr ds f(s, r) φy(s|r) φy(r) = ∫ dr ⟨f(s, r)⟩φ(s|r) φy(r)

     = ∫ dr f(r) (φy(r)/q(r)) q(r) ≈ (1/N) Σ_i f(r(i)) φy(r(i))/q(r(i)) ≡ (1/N) Σ_i f(r(i)) W(r(i)) = INRB(f)

• The Rao-Blackwell theorem says

⟨(I(f) − INRB(f))²⟩ ≤ ⟨(I(f) − IN(f))²⟩

(37)

Rao Blackwellization Example

Suppose the posterior φy(s, r) = φy(r)φy(s|r) = φy(r)N (s; µr, Σr), a conditional Gaussian

[Figure: the conditional density φ(s) and the integrand f(s)]

• IN(f) (with q(s, r) = q(r) φy(s|r))

– Sample: r(i) ∼ q(r),   s(i) ∼ N(s; µr(i), Σr(i))

– Evaluate: IN(f) = (1/N) Σ_i W(s(i), r(i)) f(s(i), r(i))

(38)

Rao Blackwellization Example

Suppose the posterior φy(s, r) = φy(r)N (s; µr, Σr) is a conditional Gaussian

[Figure: the conditional density φ(s) and the integrand f(s)]

• INRB(f)

– Sample: r(i) ∼ q(r)

– Evaluate: f(r(i)) = ∫ ds f(s, r(i)) N(s; µr(i), Σr(i))

– Evaluate: INRB(f) = (1/N) Σ_i (φy(r(i))/q(r(i))) f(r(i)) = (1/N) Σ_i W(r(i)) f(r(i))
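A small numerical sketch of the two estimators (my own toy example): r is a fair coin, s|r ∼ N(µr, σr²), f(s, r) = s², and q(r) is the exact marginal so all weights equal one. The Rao-Blackwellized estimator replaces the sampled s(i) by the exact conditional expectation µr² + σr².

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000   # samples per estimate

def sample_r():
    # q(r) = exact marginal phi_y(r): a fair coin here, so W(r) = 1 for every particle
    return rng.integers(0, 2, N)

def params(r):
    # conditional Gaussian phi_y(s|r) = N(s; mu_r, sigma_r^2)
    mu = np.where(r == 0, 0.0, 3.0)
    sigma = np.where(r == 0, 1.0, 0.5)
    return mu, sigma

def I_N():                            # plain estimator: sample s as well
    r = sample_r()
    mu, sigma = params(r)
    s = rng.normal(mu, sigma)
    return np.mean(s ** 2)            # f(s, r) = s^2

def I_N_RB():                         # Rao-Blackwellized: integrate s out analytically
    r = sample_r()
    mu, sigma = params(r)
    return np.mean(mu ** 2 + sigma ** 2)   # f(r) = <s^2> under phi_y(s|r)

ests = np.array([[I_N(), I_N_RB()] for _ in range(200)])
print(ests.var(axis=0))               # the RB column shows the smaller variance
```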

(39)

Sequential Importance Sampling, Particle Filtering

Apply importance sampling to the SSM to obtain some samples from the posterior p(x0:K|y1:K).

p(x0:K|y1:K) = (1/p(y1:K)) p(y1:K|x0:K) p(x0:K) ≡ (1/Zy) φ(x0:K)   (4)

Key idea: sequential construction of the proposal distribution q, possibly using the available observations y1:k, i.e.

q(x0:K|y1:K) = q(x0) ∏_{k=1}^K q(xk|x1:k−1, y1:k)

(40)

Sequential Importance Sampling

Due to the sequential nature of the model and the proposal, the importance weight function W(x0:k) ≡ Wk admits recursive computation

Wk = φ(x0:k)/q(x0:k|y1:k) = [p(yk|xk) p(xk|xk−1) / q(xk|x0:k−1, y1:k)] × [φ(x0:k−1)/q(x0:k−1|y1:k−1)]   (5)

   = [p(yk|xk) p(xk|xk−1) / q(xk|x0:k−1, y1:k)] Wk−1 ≡ uk|0:k−1 Wk−1   (6)

Suppose we had an approximation to the posterior (in the sense ⟨f(x)⟩φ ≈ Σ_i Wk−1(i) f(x0:k−1(i)))

φ(x0:k−1) ≈ Σ_i Wk−1(i) δ(x0:k−1 − x0:k−1(i))

xk(i) ∼ q(xk|x0:k−1(i), y1:k)   Extend trajectory
Wk(i) = uk|0:k−1(i) Wk−1(i)    Update weight

φ(x0:k) ≈ Σ_i Wk(i) δ(x0:k − x0:k(i))

(41)

Example

• Prior as the proposal density

q(xk|x0:k−1, y1:k) = p(xk|xk−1)

• The weight is given by

xk(i) ∼ p(xk|xk−1(i))   Extend trajectory
Wk(i) = uk|0:k−1(i) Wk−1(i)   Update weight
      = [p(yk|xk(i)) p(xk(i)|xk−1(i)) / p(xk(i)|xk−1(i))] Wk−1(i) = p(yk|xk(i)) Wk−1(i)

However, this scheme will not work on its own, since we blindly sample from the prior. But ...

(42)

Example (cont.)

• Perhaps surprisingly, interleaving importance sampling steps with (occasional) resampling steps makes the approach work quite well!

xk(i) ∼ p(xk|xk−1(i))                              Extend trajectory
Wk(i) = p(yk|xk(i)) Wk−1(i)                        Update weight
W̃k(i) = Wk(i)/Z̃k                                  Normalize (Z̃k ≡ Σ_i Wk(i))
x0:k,new(j) ∼ Σ_{i=1}^N W̃k(i) δ(x0:k − x0:k(i))    Resample, j = 1 . . . N

• This results in a new representation as

φ(x0:k) ≈ (Z̃k/N) Σ_j δ(x0:k − x0:k,new(j))

x0:k(i) ← x0:k,new(j)    Wk(i) ← Z̃k/N
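A minimal sketch of this bootstrap scheme for a generic scalar model (my own illustration; the model functions and parameters below are placeholders): prior as the proposal, weight by the likelihood, resample at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles

# Placeholder scalar model: x_k = 0.9 x_{k-1} + process noise, y_k = x_k + obs noise
def transition_sample(x):
    return 0.9 * x + rng.normal(0.0, 1.0, x.shape)

def likelihood(y, x):
    return np.exp(-0.5 * (y - x) ** 2 / 0.5) / np.sqrt(2 * np.pi * 0.5)

def bootstrap_filter(ys):
    x = rng.normal(0.0, 1.0, N)           # x_0^(i) ~ p(x_0)
    logZ = 0.0
    for y in ys:
        x = transition_sample(x)          # extend trajectories with the prior proposal
        W = likelihood(y, x)              # W_k^(i) ~ p(y_k | x_k^(i)) (incoming weights are uniform)
        logZ += np.log(W.mean())          # accumulate the evidence estimate p(y_k | y_{1:k-1})
        w = W / W.sum()
        x = x[rng.choice(N, size=N, p=w)] # resample: duplicate/discard, weights reset to 1/N
        yield x.mean(), logZ              # filtered mean E[x_k | y_{1:k}] and log p(y_{1:k})

ys = 0.5 * np.arange(20)                  # some arbitrary observations
results = list(bootstrap_filter(ys))
print(results[-1])
```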

(43)

Optimal proposal distribution

The algorithm in the previous example is known as Bootstrap particle filter or Sequential Importance Sampling/Resampling (SIS/SIR).

• Can we come up with a better proposal in a sequential setting?

– We are not allowed to move previous sampling points x(i)1:k−1 (because in many applications we can’t even store them)

– Better in the sense of minimizing the variance of weight function Wk(x).

(remember the optimality story in Eq.(3) and set f (x) = 1).

• The answer turns out to be the filtering distribution

q(xk|x1:k−1, y1:k) = p(xk|xk−1, yk) (7)

(44)

Optimal proposal distribution (cont.)

• The weight is given by

xk(i) ∼ p(xk|xk−1(i), yk)   Extend trajectory
Wk(i) = uk|0:k−1(i) Wk−1(i)   Update weight

uk|0:k−1(i) = [p(yk|xk(i)) p(xk(i)|xk−1(i)) / p(xk(i)|xk−1(i), yk)] × [p(yk|xk−1(i)) / p(yk|xk−1(i))]

            = [p(yk, xk(i)|xk−1(i)) / p(xk(i), yk|xk−1(i))] p(yk|xk−1(i)) = p(yk|xk−1(i))
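For the linear Gaussian model of slide (11), both p(xk|xk−1, yk) and the incremental weight p(yk|xk−1) are available in closed form, so this step can be implemented exactly. A minimal sketch under that assumption (the helper name and signature are mine), using standard Gaussian conditioning:

```python
import numpy as np

def optimal_proposal_step(x_prev, y, A, C, Q, R, rng):
    """One SIS step with q(x_k | x_{k-1}, y_k) = p(x_k | x_{k-1}, y_k) for a linear Gaussian SSM."""
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    Sigma = np.linalg.inv(Qi + C.T @ Ri @ C)          # covariance of p(x_k | x_{k-1}, y_k)
    m = Sigma @ (Qi @ (A @ x_prev) + C.T @ Ri @ y)    # mean of p(x_k | x_{k-1}, y_k)
    x_new = rng.multivariate_normal(m, Sigma)         # extend the trajectory

    # incremental weight u_k = p(y_k | x_{k-1}) = N(y_k; C A x_{k-1}, C Q C^T + R)
    mu_y = C @ A @ x_prev
    S = C @ Q @ C.T + R
    d = y - mu_y
    u = np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2 * np.pi * S))
    return x_new, u
```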

(45)

A Generic Particle Filter

1. Generation:

Compute the proposal distribution q(xk|x0:k−1(i), y1:k).
Generate offspring for i = 1 . . . N:   x̂k(i) ∼ q(xk|x0:k−1(i), y1:k)

2. Evaluate importance weights

Wk(i) = [p(yk|x̂k(i)) p(x̂k(i)|xk−1(i)) / q(x̂k(i)|x0:k−1(i), y1:k)] Wk−1(i),   x0:k(i) = (x̂k(i), x0:k−1(i))

3. Resampling (optional but recommended)

Normalize weights:  W̃k(i) = Wk(i)/Z̃k,   Z̃k ≡ Σ_j Wk(j)
Resample:  x0:k,new(j) ∼ Σ_{i=1}^N W̃k(i) δ(x0:k − x0:k(i)),   j = 1 . . . N
Reset:  x0:k(i) ← x0:k,new(j),   Wk(i) ← Z̃k/N

(46)

Switching state space models - Segmentation - Changepoint detection

(47)

Different names for the same model

• Jump Markov Linear System

• Switching Kalman filter models

• Conditionally Gaussian switching state space models

(48)

Segmentation and Changepoint detection

• Complicated processes can be modeled by using simple processes with occasional regime switches

– Piecewise constant

[Figure: noisy observations of a piecewise constant signal]

(49)

Segmentation and Changepoint detection

– Piecewise linear

[Figure: noisy observations of a piecewise linear signal]

• What is the true state of the process given noisy data ?

• Where are the changepoints ?

• How many changepoints ?

(50)

Conditionally Gaussian Changepoint Model

rk ∼ p(rk|rk−1)   Changepoint flags ∈ {new, reg}

θk ∼ [rk = reg] f(θk|θk−1)  +  [rk = new] π(θk)   Latent state (transition vs. reinitialization)

yk ∼ p(yk|θk)   Observations

r1 r2 r3 r4 r5

θ0 θ1 θ2 θ3 θ4 θ5

y1 y2 y3 y4 y5

(51)

Example: Piecewise constant signal

[Figure: noisy observations of a piecewise constant signal]

θ0 ∼ N(µ, P)
rk|rk−1 ∼ p(rk|rk−1)

θk|θk−1, rk ∼ [rk = 0] δ(θk − θk−1)  (reg)  +  [rk = 1] N(m, V)  (new)

yk|θk ∼ N(θk, R)
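A short sketch that generates data from this piecewise constant changepoint model (the parameter values are illustrative, not from the slide, and the changepoint prior is simplified to an independent Bernoulli):

```python
import numpy as np

rng = np.random.default_rng(3)
K, p_cp = 100, 0.05                    # length and changepoint probability p(r_k = 1)
m, V, R = 5.0, 25.0, 0.5               # reinitialisation mean/variance and observation noise

theta = rng.normal(0.0, np.sqrt(10.0)) # theta_0 ~ N(mu, P)
rs, thetas, ys = [], [], []
for k in range(K):
    r = rng.random() < p_cp            # r_k (here an independent Bernoulli changepoint flag)
    if r:                              # "new": reinitialise the level
        theta = rng.normal(m, np.sqrt(V))
    # "reg": theta_k = theta_{k-1} (delta transition), nothing to do
    y = rng.normal(theta, np.sqrt(R))  # y_k ~ N(theta_k, R)
    rs.append(int(r)); thetas.append(theta); ys.append(y)
```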

(52)

A More General Model

Each segment is modelled by a linear dynamical system

θ0 ∼ N(µ, P)
rk ∼ p(rk|rk−1)

θk ∼ [rk = 0] N(Aθk−1, Q)  (reg)  +  [rk = 1] N(m, V)  (new)

yk ∼ N(Cθk, R)

(53)

Example: piecewise linear regimes

[Figure: noisy observations of a piecewise linear signal]

A = [1 1; 0 1]     Q = [0 0; 0 0]     C = [1 0]

k                  1            2            3            4             5
rk ∼ p(rk)         1            0            0            1             0
θk              (0.0, 1.0)   (1.0, 1.0)   (2.0, 1.0)   (8.0, −0.5)   (7.5, −0.5)
yk ∼ N(Cθk, R)     0.1          0.97         2.03         8.1           7.3

where θk ∼ N(Aθk−1, Q) when rk = 0 and θk ∼ N(m, V) when rk = 1.

(54)

Conditionally Gaussian Switching State Space Model

rk ∼ p(rk|rk−1)   Regime label
θk ∼ p(θk|θk−1, rk)   Latent state
yk ∼ p(yk|θk)   Observations

r1 r2 r3 r4 r5

θ0 θ1 θ2 θ3 θ4 θ5

y1 y2 y3 y4 y5

• ... can model outliers or sensor failures by adding links from r to y

(55)

Example: piecewise quadratic splines (with mirror slopes)

[Figure: observed signal y and the second state component θ(2) over time, with regime switches]

rk ∼ p(rk|rk−1)   Regime label
θk ∼ N(θk; Ark−1→rk θk−1, Qrk−1→rk)
yk ∼ N(yk; Cθk, R)   Observations

(56)

Example: piecewise quadratic splines (with mirror slopes)

[Figure: observed signal y and the second state component θ(2) over time, with regime switches]

A1→1 = [ 1  ∆  ∆²/2
         0  1  ∆
         0  0  1 ]

A1→2 = A2→2 = [ 1  ∆  ∆²/2
                0  1  −∆
                0  0  1 ]

A2→1 = [ 1  0  0
         0  1  0
         0  0  0 ]

Q1→1 = Q1→2 = Q2→2 = 0        Q2→1 = diag(0, 0, σ²)

C = [ 1  0  0 ]

(57)

Example: Audio Signal Analysis

rk|rk−1 ∼ p(rk|rk−1)

θk|θk−1, rk ∼ [rk = 0] N(Aθk−1, Q)  (reg)  +  [rk = 1] N(0, S)  (new)

yk|θk ∼ N(Cθk, R)

A = blkdiag(Gω, G2ω, . . . , GHω)        Gω = ρk [ cos(ω)  −sin(ω) ;  sin(ω)  cos(ω) ]

where 0 < ρk < 1 is a damping factor and C = [ 1 0  1 0  . . .  1 0 ] is a projection matrix.
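One way such a transition and projection matrix pair could be assembled in code (a sketch; the ω, H and ρ values below are placeholders, and a constant damping factor is assumed):

```python
import numpy as np
from scipy.linalg import block_diag

def harmonic_model(omega, H, rho):
    """A = blkdiag(G_omega, G_2omega, ..., G_Homega), C = [1 0 1 0 ... 1 0]."""
    def G(w):
        # damped 2x2 rotation block for one harmonic
        return rho * np.array([[np.cos(w), -np.sin(w)],
                               [np.sin(w),  np.cos(w)]])
    A = block_diag(*[G(h * omega) for h in range(1, H + 1)])
    C = np.tile([1.0, 0.0], H).reshape(1, -1)
    return A, C

# e.g. a 440 Hz fundamental at 16 kHz sampling, 8 harmonics, slight damping
A, C = harmonic_model(omega=2 * np.pi * 440 / 16000, H=8, rho=0.999)
```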

(58)

Audio Signal Analysis

[Figure: audio signal xk (bottom) and the inferred time-frequency / changepoint representation rk (top)]

(59)

Application to music transcription

[Figure: music transcription example]

Cemgil et al. 2006, IEEE TASLP

(60)

Factorial Changepoint model

r0,ν ∼ C(r0,ν; π0,ν)        θ0,ν ∼ N(θ0,ν; µν, Pν)

rk,ν|rk−1,ν ∼ C(rk,ν; πν(rk−1,ν))   Changepoint indicator
θk,ν|θk−1,ν ∼ N(θk,ν; Aν(rk,ν) θk−1,ν, Qν(rk,ν))   Latent state
yk|θk,1:W ∼ N(yk; Ck θk,1:W, R)   Observation

[Graphical model: for each ν = 1 . . . W a chain r0,ν, . . . , rk,ν, . . . , rK,ν over θ0,ν, . . . , θk,ν, . . . , θK,ν; the chains are coupled only through the observations yk, . . . , yK]

(61)

Application: Analysis of Polyphonic Audio

[Figure: audio signal xk and its time-frequency representation, indexed by frequency band ν and time k]

• Each latent changepoint process ν = 1 . . . W corresponds to a “piano key”.

Indicators r1:W,1:K encode a latent “piano roll”

(62)

Sequential Inference Problems

• Filtering

p(θk|y1:k) ∝ Σ_{r1:k} ∫ dθ0:k−1 p(y1:k|θ0:k) p(θ0:k|r1:k) p(r1:k)

• Viterbi path (e.g. Raphael 2001)

(r1:k, θ1:k) = argmax_{r1:k, θ1:k} p(y1:k|θ0:k) p(θ0:k|r1:k) p(r1:k)

• Best segmentation (MMAP)

r1:k = argmax_{r1:k} ∫ dθ0:k p(y1:k|θ0:k) p(θ0:k|r1:k) p(r1:k)

– Each configuration of r1:K encodes one of the 2^K possible models, i.e. segmentations.

• All problems are similar, but MMAP is usually harder because max and ∫ do not commute

(63)

Exact Inference in switching state space models

• In general, exact inference is intractable (NP hard)

– Conditional Gaussians are not closed under marginalization

⇒ Unlike HMM’s or KFM’s, summing over rk does not simplify the filtering density

⇒ Number of Gaussian kernels to represent exact filtering density p(rk, θk|y1:k) increases exponentially

[Figure: growing tree of Gaussian mixture components in the exact filtering density]

(64)

Sequential Inference

• Filtering: Mixture Kalman Filter (Rao-Blackwellized PF) (Chen and Liu 2001)

• MMAP: Breadth-first search algorithm with greedy or randomised pruning, multi-hypothesis tracker (MHT)

(65)

Mixture Kalman Filter

• Particles are conditional Gaussians

ψk(i)(θk; r1:k(i)) ≡ W(r1:k(i)) exp( −(1/2) θkᵀ K(i) θk + θkᵀ h(i) + g(i) )

• Use Rao-Blackwellization to compute the optimal proposal distribution (the marginal filtering density) by Kalman predict and update steps

q(rk|y1:k) = ∫ dθk p(yk|θk) ∫ dθk−1 p(θk|θk−1, rk) p(rk|rk−1(i)) ψk−1(i)(θk−1; r1:k−1(i))

[Graphical model: · · · rk−1(i), rk, · · · over θk−1, θk, · · · with observations yk−1, yk]
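A compact sketch of one mixture Kalman filter step for the scalar piecewise constant changepoint model of slide (51) (variable names and parameter defaults are mine, and a constant switch probability is assumed): each particle carries the Gaussian sufficient statistics of θ, the regime indicator is drawn from its exact conditional, and the weight is multiplied by the marginal predictive likelihood.

```python
import numpy as np

def gauss(y, mu, var):
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mkf_step(particles, y, rng, p_cp=0.05, m=5.0, V=25.0, R=0.5):
    """particles: list of dicts with keys 'mu', 'P' (Gaussian stats of theta) and 'W' (weight)."""
    out = []
    for p in particles:
        # Kalman prediction of theta_k under each regime r_k
        pred = {0: (p['mu'], p['P']),   # reg: theta_k = theta_{k-1}
                1: (m, V)}              # new: theta_k ~ N(m, V)
        prior_r = {0: 1.0 - p_cp, 1: p_cp}
        # exact conditional q(r_k | r_{1:k-1}, y_{1:k}) via the predictive likelihoods
        post = {r: prior_r[r] * gauss(y, mu, P + R) for r, (mu, P) in pred.items()}
        Z = post[0] + post[1]
        r = int(rng.random() < post[1] / Z)
        mu, P = pred[r]
        # Kalman update of theta_k given y_k
        K = P / (P + R)
        out.append({'mu': mu + K * (y - mu),
                    'P': (1.0 - K) * P,
                    'W': p['W'] * Z})   # weight update: p(y_k | y_{1:k-1}, r_{1:k-1})
    return out
```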

(66)

Exact Inference for Changepoint detection?

• Exact inference is achievable in polynomial time/space

– Intuition: When a changepoint occurs, the old state vector is reinitialized

⇒ Number of Gaussian kernels grows only polynomially (see, e.g., Barry and Hartigan 1992, Digalakis et al. 1993, Ó Ruanaidh and Fitzgerald 1996, Gustafsson 2000, Fearnhead 2003)

r1 = 1 r2 = 0 r3 = 0 r4 = 1 r5 = 0

θ0 θ1 θ2 θ3 θ4 θ5

y1 y2 y3 y4 y5

– The same structure can be exploited for the MMAP problem

⇒ Trajectories r1:k(i) which are dominated in terms of conditional evidence p(y1:k, r1:k(i)) can be discarded without destroying optimality
