
Communications technology laboratory

Viterbi-algorithm

– SS 2013 –

Florian Lenkeit
NW1, Room N 2390, Tel.: 0421/218-62395

E-mail: [email protected]

Universität Bremen, FB1
Institut für Telekommunikation und Hochfrequenztechnik

Arbeitsbereich Nachrichtentechnik
Prof. Dr.-Ing. A. Dekorsy

Postfach 33 04 40
D–28334 Bremen

WWW-Server: http://www.ant.uni-bremen.de

Version from May 2013


Contents

1 Introduction
  1.1 Motivation
  1.2 Structure of the experiment
  1.3 Preparation and postprocessing of the experiment

2 Theoretical fundamentals
  2.1 FIR-filtering and convolutional encoding as finite state machine
    2.1.1 Transversal structures
    2.1.2 State transition diagram
    2.1.3 Trellis diagram
  2.2 Trellis decoding
    2.2.1 Disturbance model for the decoding
    2.2.2 Viterbi-metric
    2.2.3 Viterbi-algorithm
  2.3 The Viterbi-algorithm for channel equalization
    2.3.1 Transmission channel in transversal structure (FIR-filter)
    2.3.2 Description of the Viterbi-algorithm (Euclidean metric)
  2.4 The Viterbi-algorithm for the convolutional decoding
    2.4.1 Convolutional encoding
    2.4.2 The metric at convolutional decoding

3 Exercises

4 Experimental run
  4.1 Channel equalization
    4.1.1 Preparations
    4.1.2 Demonstration program VA (binary transmission)
    4.1.3 Error analysis for the VA at QPSK-modulation
    4.1.4 Channel equalization at QPSK-modulation
  4.2 Decoding of convolutional codes
    4.2.1 Convolutional code of rate 1/2
    4.2.2 CC of rate 1/3
    4.2.3 Convolutional decoding
    4.2.4 Soft-input vs. hard-input
    4.2.5 Error structure


1 Introduction

1.1 Motivation

This experiment deals with the Viterbi algorithm (VA), a fundamental building block of modern communication technology. Its applications are manifold:

• Equalization of channels with memory for digitally modulated signals; applied, for example, in mobile phones according to the GSM standard (D- and E-networks)

• Decoding of convolutional codes; also applied in GSM mobile phones (in combination with the example above thus even a concatenation of two VAs) or in telephone-channel modems

• Demodulation of CPM- or TCM-signals (CPM: Continuous Phase Modulation, TCM: Trellis Coded Modulation)

• Speech recognition

The VA is the efficient realization of the optimal solution to the problem of estimating a data sequence x from the disturbed reception sequence

y = f(x, S) + n,

where n denotes a white, Gaussian distributed noise vector, f(·) is the output function of the system and S is the vector of the system memory. A receiver according to the maximum-likelihood principle, i.e. the principle of detection with the smallest possible error probability, computes the maximum over all probabilities that a possibly transmitted data vector xµ results in the reception vector y:

max_µ {P(y|xµ)} → x̂ .

This maximization can be carried out efficiently with the help of the VA. It determines the estimate x̂ of the transmitted data sequence x that leads to this maximum probability.

Within the scope of this experiment we restrict ourselves to two application fields of the VA – channel equalization and the decoding of convolutional codes. These two applications demonstrate that the VA can be used for quite different problems, which can be formulated as follows:

• Channel equalization: The VA is used to remove disturbing intersymbol interference (ISI), in order to ideally obtain the transmission quality of the AWGN channel (AWGN: Additive White Gaussian Noise).

• Decoding: The VA specifically exploits the deliberately added redundancy to decrease the error rate compared with an uncoded transmission.

1.2 Structure of the experiment

A short summary of the most important formulas and relations is given in Section 2 of this description. To improve the understanding, the exercises given in Section 3 have to be solved in advance. The instructions for the execution of the experiment then follow in Section 4.


1.3 Preparation and postprocessing of the experiment

For a successful and fast run of the experiment, you have to master the required theoretical fundamentals. The knowledge of these fundamentals is therefore checked before the start of the experiment. In case of insufficient preparation of the team, you will be excluded from the experiment!

The required theoretical fundamentals were imparted in the lectures

• “Communications Technology”,

• “Digital Signal Processing” as well as

• “Channel Coding”.

Specific parts of these fundamentals, especially of the channel equalization, are described in chapter 14.5 of the textbook [Kam08] and chapter 10.1 of [Pro01]. The basics of channel encoding and of the decoding of convolutional codes are given in chapter 4 of the lecture notes [KW08]. Further literature concerning the Viterbi algorithm is given in the bibliography ([Bos99, Fri96, Fri03, KK01, KK09, Vit67, For72, For73, Pro01]).

Please pay particular attention to the following points:

• Solve the exercises in Section 3 before the date of the experiment and bring your own solutions in written form to the lab. Problems that occurred can be discussed before the start of the experiment. (You will have to present the results on the blackboard.) Only those of you who can present and explain their own solutions are admitted to the lab!

• During the course of the lab you will prepare a (handwritten) protocol containing the developed results regarding the working points in Section 4. To enhance readability you should write in a clean form; the use of a ruler will not decrease the chance of passing the lab. The solutions of the tasks / questions are discussed and checked during the lab. In case all tasks have been explained in a sufficient way, the preparation of a detailed protocol at home is omitted. Thus, by good preparation you can save a lot of work.

• Within this description all questions that have to be answered or problems that have to be explained in the protocol are marked by a continuous number (e.g. VP-1:). This numbering shall be taken over into the protocol completely and in the correct succession to make a clear elaboration easier.

• In case not all tasks are handled within the noted protocol in a sufficient way, you have to work out a detailed protocol at home. This protocol has to contain the obtained results (no repetition resp. "summary" of this description). Also the solutions of the exercises have to be taken over into the protocol! Please work out this protocol with a computer program.

References

[Bos99] M. Bossert. Channel Coding for Telecommunications. John Wiley, 1999.

[For72] G.D. Forney. Maximum likelihood sequence estimation for digital sequences in the presence of intersymbol interference. IEEE Trans. on Information Theory, IT-18:363–378, 1972.

[For73] G.D. Forney. The Viterbi algorithm. Proceedings of the IEEE, 61:268–278, 1973.

[Fri96] B. Friedrichs. Kanalcodierung. Springer, Berlin, 1996.

[Fri03] B. Friedrichs. Error-Control Coding. 2003. Online: http://www.berndfriedrichs.de/.

[Kam08] K.D. Kammeyer. Nachrichtenübertragung. Vieweg+Teubner, Stuttgart, fourth edition, 2008.

[KK01] K.D. Kammeyer and V. Kühn. Matlab in der Nachrichtentechnik. J. Schlembach Fachverlag, Weil der Stadt, 2001.

[KK09] K.D. Kammeyer and K. Kroschel. Digitale Signalverarbeitung: Filterung und Spektralanalyse (mit Matlab-Übungen). B.G. Teubner, Stuttgart, seventh edition, 2009.

[KW08] V. Kühn and D. Wübben. Kanalcodierung. Vorlesungsskript, Universität Bremen, 2008. Online: http://www.ant.uni-bremen.de/teaching/kc/.

[Pro01] J.G. Proakis. Digital Communications. McGraw-Hill, fourth edition, 2001.

[Vit67] A.J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. on Information Theory, IT-13:260–269, 1967.


2 Theoretical fundamentals

2.1 FIR-filtering and convolutional encoding as finite state machine

In this section we examine the structural similarities between a channel with memory (representable as an FIR-filter (FIR: Finite Impulse Response)) and a convolutional encoder.

Channels with memory are not favorable, because the intersymbol interference (ISI) caused by them is hard to combat, especially in the case of a large memory. ISI results from the temporal interlacing of several input symbols into output symbols, which mostly have a much greater index than the input symbols. The number of input symbols on which each output symbol depends is finite for the class of FIR-channels. Channels with memory can make signal detection without equalization impossible – our aim therefore is to undo the undesired effects with a channel equalizer.

Convolutional codes – as a class of channel codes – fulfill the task of adding redundancy to the data to be transmitted in a controlled way, in order to protect them from channel effects, i.e. from transmission errors. By the addition of redundancy the output symbol rate is increased compared to the input symbol rate. The convolutional encoder maps the input symbol sequence onto an output symbol sequence; the number of preceding input symbols that are taken into consideration in this mapping is finite for non-recursive convolutional codes. The task of the convolutional decoder is an error-resistant recovery of the input symbol sequence from the noisy reception symbol sequence.

The process of FIR-filtering (influence of the FIR-channel) and of convolutional encoding can be interpreted as the effect of a Mealy-automaton with Z distinguishable inner memory states ("finite state machine") on the input symbol sequence.

The current output symbol wi depends on the memory state Si of the state machine and on the input symbol xi. It is described by the output function f

wi = f(xi, Si) (1)

of the Mealy-automaton. The same parameters determine the following state Si+1 via the state transition function g according to

Si+1 = g(xi, Si) . (2)

This relation illustrates the recursive concatenation of the memory states and thereby the correlation of neighboring output symbols. The relations (1) and (2) characterize a time-invariant Mealy-automaton according to fig. 1.

fig. 1: FIR-filter and convolutional encoder as Mealy-automaton (state memory Si with output function f(xi, Si) and state transition function g(xi, Si))

For xi and wi we have:

xi ∈ Ax = {X1, X2, X3, . . . , XM},  wi ∈ Aw = {W1, W2, W3, . . . , WMw},

where Ax and Aw represent the input resp. output alphabet. The number of possible input symbols is given by M, whereas we denote the number of possible output symbols by Mw. The distinguishable memory states are described by state numbers:

Si ∈ {1, 2, . . . , Z},

where their number is given by Z = M^ℓ = M^(K−1), with ℓ = K − 1 being the memory length (i.e. the number of memory locations). The parameter K = ℓ + 1 denotes the number of output symbols that are affected by one input symbol (constraint length).
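As an illustration of eqs. (1) and (2) – not part of the original lab code – the state machine view can be written down in a few lines of MATLAB. The concrete output and transition functions below model a small FIR-channel with memory ℓ = 2 and bipolar inputs; all coefficients and sequences are arbitrary example values:

% Mealy-automaton sketch: w_i = f(x_i,S_i), S_{i+1} = g(x_i,S_i)
% example instance (assumption): FIR-channel h = [h0 h1 h2], memory length ell = 2
h = [1.0 0.5 0.25];          % example channel coefficients
x = [ 1 -1 -1  1  1 -1];     % example bipolar input sequence
S = [0 0];                   % state = register contents (x_{i-1}, x_{i-2})
w = zeros(size(x));
for i = 1:length(x)
    w(i) = h * [x(i) S].';   % output function f(x_i,S_i), eq. (1)
    S    = [x(i) S(1)];      % state transition g(x_i,S_i), eq. (2)
end
disp(w)                      % noise-free output sequence of the automaton

With M = 2 and ℓ = 2, this example machine has Z = M^ℓ = 4 distinguishable states.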

2.1.1 Transversal structures

In the case of the FIR-channel as well as in the case of the convolutional encoder, the finite memory can be represented in a non-recursive transversal structure (cf. fig. 2).

fig. 2: Transversal representation of the memory structure of FIR-channels and convolutional encoders

The symbol D represents a memory element whose input signal is taken over into the memory at the transition from step i to step i + 1 (D: delay). The number of memory elements is ℓ = K − 1. If the constraint length K is equal to 1, the corresponding FIR-channel is a so-called "ideal channel", whereas the channel encoder becomes a (trivial) block encoder.

2.1.2 State transition diagram

The sequence of the memory states (. . . , Si−1, Si, Si+1, . . .) of the Mealy-automaton from fig. 1 represents a time-discrete stochastic process. For a fixed input symbol, the following state of the state machine depends only on the previous state. Starting from each state, M transitions (one per possible input symbol) to following states exist. Each state transition involves the creation of an output symbol (of Mw possible ones) according to the output equation (1). A state transition diagram, as shown in fig. 3 for an example with the parameters Z = 3, M = 2 and Mw = 4, graphically illustrates the complete set of state machine equations (2) for all combinations of the arguments Si and xi at the transition from step i to step i + 1.

fig. 3: State transition diagram with Z = 3 states, M = 2 input symbols and Mw = 4 output symbols (each transition labeled Xk/Wm with its input and output symbol)

2.1.3 Trellis diagram

By unrolling the state transition diagram over the counting index i we arrive at the net or trellis diagram of a Mealy-automaton (see fig. 4). This diagram is especially useful for the analysis and presentation of channel equalization and convolutional decoding algorithms.

fig. 4: Trellis diagram of the state transition diagram from fig. 3 with an example path

In the case of the time-invariant Mealy-automaton the trellis diagram is identical from step to step. The transition in the trellis diagram from a starting state Si to the following state Si+1 is called a branch (Si → Si+1). Each branch is labeled with the input symbol which causes the corresponding state transition Si → Si+1. Furthermore, we also mark each branch with the output symbol Wk that is created by the output equation from the input symbol and the register contents of the state machine.

An input symbol sequence (. . . , xi−1, xi, xi+1, . . .) is mapped onto a sequence of branches in the trellis diagram, which is called a path. In fig. 4 a path through the trellis diagram is emphasized. The input symbols that created this path are given underneath the diagram with a grey background. Underneath them the symbols of the output sequence (. . . , wi−1, wi, wi+1, . . .) of the Mealy-automaton are entered, which – degraded by a disturbance during the transmission – can be observed at the receiver side.

2.2 Trellis decoding

Each input symbol sequence (. . . , xi−1, xi, xi+1, . . .) is uniquely represented by a path in the trellis diagram. The detection of the estimated message¹ (. . . , x̂i−1, x̂i, x̂i+1, . . .) from the sequence of the received values (. . . , yi−1, yi, yi+1, . . .) is based on the consideration of the overall received message. This so-called "sequence estimation process" determines the path through the trellis diagram with the greatest probability after receiving the overall received message, thus a-posteriori. As the estimation is carried out over the overall symbol sequence, the term maximum-a-posteriori sequence estimation is used. Under the assumption of equally probable input symbols this method simplifies to a maximum-likelihood sequence estimation (MLSE).

2.2.1 Disturbance model for the decoding

The task of trellis decoding is to infer the input symbol sequence (. . . , xi−1, xi, xi+1, . . .) from the output symbol sequence (. . . , yi−1, yi, yi+1, . . .) that was degraded by disturbances (see fig. 5). For this purpose the receiver needs not only the noisy output sequence but also the output function f(xi, Si) as well as the memory structure (state transition function g(xi, Si)) of the state machine. The disturbance of the output symbols wi is assumed to be white, Gaussian distributed noise (AWGN).

¹ resp. of the input symbol sequence


fig. 5: Disturbance model (the Mealy-automaton (FIR-channel / convolutional encoder) maps xi to wi; AWGN is added, yielding the received values yi)

2.2.2 Viterbi-metric

The aim of ML-decoding (ML: Maximum Likelihood) is to choose that code sequence x corresponding to the reception sequence y which maximizes the conditional probability density p(y|x). If x̂ is the ML-estimate, then for all decoder input symbol sequences (code sequences) xµ the following holds:

p(y|x̂) = max_µ {p(y|xµ)} . (3)

In the case of a memory-free, discrete channel with white noise, the conditional probability density p(y|x) for sequences factorizes into a product of the single conditional probability densities p(yi|xµi):

p(y|xµ) = ∏_i p(yi|xµi) . (4)

As the logarithm is a monotonically increasing function, the maximization does not change if the transition probability is replaced by its (scaled) logarithm. The resulting sum is called the "Viterbi-metric"

L(y|x) = max_µ {Lµ(y|xµ)}
       = max_µ {ln p(y|xµ)}
       = max_µ {∑_i ln p(yi|xµi)}
       = max_µ {∑_i Lµ(yi|xµi)} . (5)

Lµ(yi|xµi) is thereby called the metric increment. It represents a measure for the "similarity" of the trellis branch with the received signal. Due to the maximization in eq. (5), that path is chosen which is most similar to the received sequence.

Nevertheless, for practical reasons we will often calculate the greatest possible similarity as the smallest possible deviation. The metric increment Lµ(yi|xµi) then characterizes the "costs" caused by a branch in the trellis, resp. the (Euclidean) distance of the received signal from the desired signal. These costs are minimized during the Viterbi-decoding, which is equivalent to ML-decoding. In this case, unlike eq. (5), the calculation has to be carried out as a minimization:

L(y|x) = min_µ {∑_i Lµ(yi|xµi)} (6)

2.2.3 Viterbi-algorithm

The Viterbi-algorithm (VA) [Vit67] is the optimal and at the same time an efficient method for estimating the input symbol sequence of a finite state machine from its output symbol sequence observed in white Gaussian noise.

Compared with the MLSE described above, the following considerations lead to the efficient realization in the form of the VA:


• The recursive calculation of the maximum-likelihood functional ("path costs", calculated according to a certain metric, cf. section 2.3.2) keeps the demands on the computational power within limits.

• In each step only one path has to be considered for each of the Z states, which allows a realization with low memory demands.

• Path merging: from a certain point of time (dating back from the current step) on, all Z considered paths merge. Based on this observation it is possible to decrease the memory demand. Even more important is the fact that data can already be output before the end of the decoding of the overall block.

The Viterbi-algorithm is executed as follows:

• For all steps i in the trellis diagram:

  – Calculate for ζ = 1 up to Z·M all metrics L_ζ(y_i|x_{iζ}) corresponding to step i.

  – For all states S_i = 1 up to Z:

    – Add to the metrics L_{S_{i−1}^{(1)}}, . . . , L_{S_{i−1}^{(M)}} of all M previous states S_{i−1}^{(1)}, . . . , S_{i−1}^{(M)} the current branch metrics L_{S_{i−1}^{(∗)}→S_i}(y_i|x_{i∗}) of the branches S_{i−1}^{(∗)} → S_i that lead into the state S_i.

    – Compare the accumulated metrics of all paths that lead into the state S_i.

    – Select that branch S_{i−1}^{(∗)} → S_i into the state S_i for which the sum of previous state metric and branch metric is maximal. If the metrics are equal, a random decision is taken.

    – Store the selected sum of the metrics as the new state metric L_{S_i} = L_{S_{i−1}^{(∗)}} + L_{S_{i−1}^{(∗)}→S_i}(y_i|x_{i∗}) together with the branch S_{i−1}^{(∗)} → S_i.

• After the above calculations have been carried out through the trellis for all steps i, output the data sequence that corresponds to the remaining path.

The above itemized steps are illustrated in section 2.3.2 by means of the specific example of channel equalization.

2.3 The Viterbi-algorithm for channel equalization

2.3.1 Transmission channel in transversal structure (FIR-filter)

As already introduced in Section 2.1, we consider an FIR-channel as a special form of the general transversal structure from fig. 2. The state transition function of the Mealy-automaton is determined by the shift register chain. The i-th output symbol results from the inner product of the coefficients of the channel impulse response f = (f0, f1, . . . , fℓ) with the register contents and xi. With

dµ = (. . . , xi−1, xi, xi+1, . . .)

denoting the µ-th data vector out of all possible data vectors, this operation corresponds to the convolution of the data vector with the channel impulse response.

It is worth noting that each input symbol xi (which often stems from the alphabet Ax = {+1, −1} or {0, 1}) results in an output symbol that can stem from a larger alphabet, as different signal levels can be created by the channel. The number of delay elements is called the memory length ℓ, and the so-called influence length is given by K = ℓ + 1.


fig. 6: Transmission channel in transversal structure (symbol clock model)
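The enlargement of the output alphabet mentioned above can be made concrete with a few lines of MATLAB. The following sketch (illustrative values only, not part of the lab code) lists all noise-free output levels of a first-order FIR-channel for bipolar inputs:

% all noise-free output levels of a first-order channel f = [f0 f1]
% for bipolar inputs d(i), d(i-1) in {+1,-1} (example values)
f = [1.0 0.5];                       % example impulse response (assumption)
A = [ 1 -1];                         % binary input alphabet
z = zeros(length(A));                % rows: d(i), columns: d(i-1)
for k = 1:length(A)
    for m = 1:length(A)
        z(k,m) = f(1)*A(k) + f(2)*A(m);
    end
end
disp(z)   % four distinct levels +-1.5, +-0.5: the output alphabet is larger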

2.3.2 Description of the Viterbi-algorithm (Euclidean metric)

The following explanations essentially summarize the statements of chapter 13.2 of [Kam08] (chapter 6 of the lecture Communications Technology). The equation numbers (13.x.x) referred to in this section relate to the corresponding equations of this book.

The ML-criterion, eq. (13.1.18) and eq. (6),

Lµ = |F dµ − y|² != min (7)

states that from all possible undisturbed input sequences – to be calculated from the convolution matrix F and all possible transmission sequences dµ – that one has to be selected which has the minimal Euclidean distance to the currently considered disturbed reception sequence y. If the functional is formulated not in matrix notation but as a sum,

Lµ(i) = ∑_{ζ=0}^{i} |y(ζ) − f(ζ) ∗ dµ(ζ)|² , (8)

you can easily see the possibility of a recursive calculation of the functional:

Lµ(i) = Lµ(i−1) + |y(i) − [f(ζ) ∗ dµ(ζ)]_{ζ=i}|²  with µ ∈ {0, . . . , M^L − 1} , (9)

where L is the length of the input data sequence and M, as usual, the modulation index of the input data. Nevertheless, not all possible paths in the trellis diagram have to be considered in each step. Of the paths that merge into one state, only the one that corresponds to the minimal functional has to be taken into account further; the other paths have already turned out to be less likely. Thereby the number of considered functionals is reduced substantially. We therefore define the following path costs:

Lµ ⇒ Pν with ν ∈ {0, . . . , M^l − 1} (10)

with the channel order l. For the calculation it is advisable to introduce the transition path costs for the different signal levels zνk

pνk(i) = |y(i) − zνk|² with k ∈ {0, . . . , M − 1} , (11)

so that finally the following recursive calculation rule for the summation path costs results:

Pν+(i) = min_k {Pν−(i−1) + pν−k(i)} . (12)

The dependence of the updated path costs Pν+(i) on the previous path costs Pν−(i−1) results from the corresponding trellis diagram. This relation is illustrated in fig. 7 for the case of a binary transmission over a channel of order 2.

fig. 7: Updating of the summation path costs: Pν+(i) = min_k {Pν−(i−1) + pν−k(i)}
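The recursion (12), together with the traceback along the surviving branches, can be sketched compactly in MATLAB. The following function is purely illustrative (its name, interface and the full-sequence traceback are assumptions and do not correspond to the lab routine viterbi.m):

function d_hat = va_eq_sketch(y, f, A)
% Sketch of a Viterbi equalizer with Euclidean metric, eqs. (11) and (12).
% y: received sequence, f: channel impulse response (row vector, length K),
% A: input alphabet, e.g. A = [1 -1]. Illustrative only - this is NOT the
% lab routine viterbi.m and makes no claim about its interface.
M = numel(A); K = numel(f); ell = K - 1; Z = M^ell; L = numel(y);

% enumerate all register contents; row z = state z: [d(i-1), ..., d(i-ell)]
S = zeros(Z, ell);
for z = 1:Z
    idx = z - 1;
    for m = 1:ell
        S(z, m) = A(mod(idx, M) + 1);
        idx = floor(idx / M);
    end
end

P    = zeros(Z, 1);          % summation path costs; all start states allowed
prev = zeros(Z, L);          % surviving predecessor state per step
sym  = zeros(Z, L);          % input symbol on the surviving branch

for i = 1:L
    Pnew = inf(Z, 1);
    for z = 1:Z                                   % predecessor state (nu-)
        for k = 1:M                               % hypothesised input d(i)
            w  = f(1)*A(k) + f(2:end)*S(z, :).';  % noise-free channel output
            p  = abs(y(i) - w)^2;                 % transition path costs, eq. (11)
            Sn = [A(k), S(z, 1:ell-1)];           % successor register contents
            zn = find(all(abs(S - repmat(Sn, Z, 1)) < 1e-12, 2), 1);
            if P(z) + p < Pnew(zn)                % path cost update, eq. (12)
                Pnew(zn)   = P(z) + p;
                prev(zn,i) = z;
                sym(zn,i)  = A(k);
            end
        end
    end
    P = Pnew;
end

% full-sequence traceback from the state with minimal final costs; in practice
% a decision delay of about 5*ell steps suffices (cf. the rule of thumb below)
[~, z] = min(P);
d_hat  = zeros(1, L);
for i = L:-1:1
    d_hat(i) = sym(z, i);
    z        = prev(z, i);
end
end

For example, d_hat = va_eq_sketch(y, [1 0.5], [1 -1]) would equalize a bipolar sequence y received over a first-order example channel.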

Rule of thumb

Up to now we always assumed the transmission and detection of finite input data sequences of length L. For very long data sequences or a continuous transmission this would result in a large delay, as the VA detects the whole sequence and thereby has to wait for the end of the transmission (the end of the sequence). Nevertheless, practical observations show that all paths merge at a certain point of time, not more than imax time steps back. Thus, at time instant i we can already make a decision about the input data d(i − imax) (as well as all previous data). The rule of thumb for the maximal path merging length in practice is five times the channel memory.

Error rate performance

The Viterbi-detector is the optimal receiver for a channel with memory and additive white Gaussian disturbance – this does not mean that no errors occur at the detection, but only that the smallest possible number of errors occurs. The calculation of the error probability of the Viterbi-detection is very complicated and shall not be derived here (the detailed derivation can be found in [Kam08], ch. 13.3, and [Pro01], ch. 10.1.4). Nevertheless, we shall repeat at this point that the efficiency of the VA strongly depends on the current channel impulse response.

This becomes clear when considering eq. (13.3.24a) from [Kam08], which estimates the error probability for M-ary PSK-transmission (PSK: Phase Shift Keying) and a given channel impulse response:

PS ≈ (1/2) · Kγmin · erfc( √( ld M · Eb/N0 · γ²min ) · sin(π/M) ) . (13)

The influence of the transmission channel is herein given by the so-called S/N-loss factor γ²min:

γ²min = min_e {e∗ F∗ F e} , (14)

which contains the convolution matrix F – and therewith the channel impulse response² f – and only considers those error vectors e which lead to the minimum. An error vector of an individual error event of the length Lf is thereby defined as a vector of the length Lf − l, which describes the divergence of the detected path from the true path in the trellis diagram:

e = [e0, e1, . . . , eLf−l−1]^T (15)

with

eν = (1/dmin) · ( d(i0 + ν) − d̂(i0 + ν) ) . (16)

We do not want to consider the factor Kγmin in (13) in more detail, because especially at large S/N-ratios the error probability is affected much more strongly by the argument of the erfc-function. Its definition is given for the sake of completeness:

Kγmin = ∑_{e|γmin} w(e) · ∏_{ν=0}^{Lf−l−1} m(eν) / M . (17)

The definitions of the Hamming weight w(e) and of the factor m(eν) can be taken from [Kam08], if required.

² power normalization to one is presupposed


Crucial for the determination of the error probability according to eq. (13) is the calculation of the S/N-loss factor γ²min from (14), which for further examinations with regard to worst-case channels shall be transformed into

γ²min = min_e {f∗ R^E_ee f} . (18)

Herein f describes the vector of the channel impulse response, with the elements arranged in reverse order:

f = [f(l), f(l − 1), . . . , f(0)]^T . (19)

The use of the Karhunen-Loève transformation allows the description of the power autocorrelation matrix R^E_ee by means of orthonormal basis vectors in the form

R^E_ee = U Λ U∗ , (20)

where U represents a matrix containing the eigenvectors uν of R^E_ee in its columns and Λ denotes the diagonal matrix with the corresponding eigenvalues λ0, . . . , λl according to eq. (13.3.27) (see also the Singular Value Decomposition (SVD) in chapter 1 of the lecture Advanced Topics in Digital Communications). With the help of this transformation, eq. (18) can alternatively be written as

γ²min = min_e {f∗ U Λ U∗ f} = min_e { ∑_{ν=0}^{l} λν |f∗ uν|² } . (21)

By considering this expression we can conclude that the minimum value γ²min is obtained if f is replaced by the eigenvector uν that corresponds to the smallest eigenvalue λν. Consequently, γ²min is identical to this smallest eigenvalue λν.

With the above derivations two different problems can be formulated:

1. The search for those autocorrelation matrices R^E_ee (according to eq. (20)) with globally minimal eigenvalues leads to certain error vectors e. By means of eq. (21) the corresponding "unfavorable" channel impulse response f, i.e. the worst transmission conditions, and the appertaining S/N-loss factor can easily be determined from these error vectors. The search for the error vectors is mathematically very extensive (especially for channels of higher order); in [Kam08] the channel impulse responses for worst case channels of 1st and 2nd order are given.

2. For a given channel impulse response f those error vectors can be found that fulfill eq. (14). With these error vectors the value of γ²min can be determined in order to estimate the symbol error probability according to eq. (13). At the same time, these error vectors ("error patterns") are those errors which are observed most frequently during the detection.

Example: Worst case channel of 2nd order, QPSK-transmission

According to [Kam08], ch. 13.3.3, overall four worst case channels of 2nd order are known, each leading to an S/N-loss of γ²min = 3.3 dB:

f1 = [α, β(1 + j), jα]^T    f2 = [α, −β(1 + j), jα]^T
f3 = [α, β(1 − j), −jα]^T   f4 = [α, −β(1 − j), −jα]^T , (22)

with α = 0.4680 and β = 0.5301. The corresponding error vectors have length 3:

e1 = a · [1, −(1 + j), j]^T    e2 = a · [1, (1 + j), j]^T
e3 = a · [1, −(1 − j), −j]^T   e4 = a · [1, (1 − j), −j]^T . (23)


To each of these channels fi belong four different error vectors with a ∈ {+1, −1, +j, −j}. For example, channel f1 frequently results in error pattern e1, i.e. the following four error vectors can be observed with high probability:

e1,1 = [1, −1 − j, j]^T    e1,2 = [−1, 1 + j, −j]^T
e1,3 = [j, 1 − j, −1]^T    e1,4 = [−j, −1 + j, +1]^T . (24)

The symbol error probability for QPSK-transmission can thus be estimated through

PS ≈ (3/8) · erfc( √(0.4689 · Eb/N0) ) . (25)
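Since F in eq. (14) is the convolution matrix of the channel, the quadratic form e∗F∗Fe is simply the energy of the convolution of f with e. This allows a quick numerical plausibility check of the numbers above; the following MATLAB lines are an illustrative sketch using the values from eqs. (22) and (23):

% S/N-loss factor for the worst case channel f1 and error vector e1 (a = 1):
% gamma^2 = e'*F'*F*e = ||conv(f,e)||^2
alpha = 0.4680;  beta = 0.5301;
f1 = [alpha; beta*(1+1j); 1j*alpha];     % worst case channel of 2nd order, eq. (22)
e1 = [1; -(1+1j); 1j];                   % minimal error vector, eq. (23) with a = 1
gamma2  = sum(abs(conv(f1, e1)).^2)      % approx. 0.4689, cf. eq. (25)
loss_dB = -10*log10(gamma2)              % approx. 3.3 dB S/N-loss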

Fig. 8 represents the performance drawback, with respect to the symbol error rate, of a second-order worst case channel equalized by the VA in comparison to the ideal channel. Additionally, the (much higher) error probability of a linear equalization with a symbol clock equalizer (T-equalizer) of order n = 31 is shown for comparison.

fig. 8: Symbol error probability for worst case channel conditions (QPSK, worst case channel of 2nd order; PS over Eb/N0 in dB for the ideal channel, Viterbi (analytical), Viterbi (simulation) and a linear equalizer with n = 31)

2.4 The Viterbi-algorithm for the convolutional decoding

In contrast to FIR-channels, where the influence on the transmitted signal is undesired and disturbs an error-free transmission, convolutional coding is deliberately applied before transmission for error protection. In both cases the effect of the convolution is undone with the Viterbi-algorithm – the signal is, in a way, "deconvolved".

2.4.1 Convolutional encoding

Convolutional encoders transform a sequence of input symbols ("information bits") into a sequence of output symbols ("coded bits") by convolving the information bits with a set of generator coefficients. The summation is here executed as a modulo-2 addition. The state transition function of the convolutional encoder (Mealy-automaton) is here also determined by the shift register chain. In contrast to FIR-channels, for which there is only one set of FIR-coefficients, the sequence is here "convolved" with at least (and in fig. 9 exactly) two generator polynomials. At those places where the coefficients of the generator polynomials are equal to zero, no connection exists. The memory length ℓ (number of memory locations) corresponds to the degree of the generator polynomials.

For FIR-channels one output symbol per input symbol is produced, which however can stem from a larger alphabet Ay ⊂ R. For convolutional coding, in contrast, several output symbols per input symbol are produced, depending on the code rate, where the output symbols nevertheless stem from the same alphabet as the input symbols, Ay = Ax = {0, 1}.

fig. 9: Convolutional encoder for a code of rate R = 1/2 with the generator polynomials g0 = (1, 1, 0, 0, 0, . . . , 1, 1) and g1 = (0, 1, 0, 0, 1, . . . , 1, 1)
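The mapping from information bits to code bits described above can be sketched in a few lines of MATLAB. The function below is purely illustrative (its name and interface are assumptions; it is not the lab routine convcod, and tail bits are not appended automatically):

function c = conv_enc_sketch(g, d)
% non-recursive convolutional encoder of rate 1/n (illustrative sketch)
% g: n x (ell+1) matrix of generator coefficients, one polynomial per row
% d: row vector of information bits {0,1} (append tail bits yourself)
[n, K] = size(g);                        % n code bits per info bit, constraint length K
reg = zeros(1, K-1);                     % shift register, memory length ell = K-1
c = zeros(1, n*length(d));
for i = 1:length(d)
    in = [d(i) reg];                     % current input bit plus register contents
    c(n*(i-1)+(1:n)) = mod(g*in.', 2).'; % modulo-2 "convolution" with all generators
    reg = in(1:K-1);                     % shift: the new bit enters the register
end
end

Called with a 2×3 generator matrix, e.g. g = [1 1 1; 1 0 1], this yields a rate-1/2 code with memory ℓ = 2, i.e. two code bits per information bit.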

2.4.2 The metric at convolutional decoding

For the AWGN-channel the binary input alphabet is suitably fixed to Ax = {−1, 1} in order to allow the following simplification. The channel yields the output alphabet Ay = R, see fig. 5. In this case the probability density function of the received value yi for a transmitted xi is:

p(yi|xi) = 1/√(πN0) · e^(−(yi − xi)²/N0) . (26)

Thus, the Viterbi-metric according to eq. (5) becomes

L(y|x) = max_µ {∑_i Lµ(yi|xµi)}
       = max_µ {∑_i ln p(yi|xµi)}
       = max_µ {∑_i ( ln(1/√(πN0)) − yi²/N0 − xµi²/N0 + 2·yi·xµi/N0 )} .

As constants in the sum have no influence on the maximization, the terms ln(1/√(πN0)) and xµi²/N0 can be ignored because of xµi² = 1:

L(y|x) = max_µ {∑_i ( −yi²/N0 + 2·yi·xµi/N0 )} (27)

The term yi²/N0 only depends on i, thus yields the same contribution to each sum and does not affect the maximization over µ. So the metric reduces to a simple scalar product:

L(y|x) = max_µ {∑_i yi·xµi} , (28)

which allows the processing of unquantized reception symbols (soft-decision inputs). Consequently, for the reception sequence y that code sequence x has to be selected which maximizes the sum of the scalar products of its corresponding elements (the correlation metric has to be maximized).

This corresponds to a minimization of the Euclidean distance between the reception sequence and the desired code sequence (according to principle 1.4 from [Fri96]; see also the lecture Channel Coding).
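As a small illustration of eq. (28) – the candidate set, values and variable names below are purely an example, not the lab decoder – the correlation metric can be evaluated directly for a handful of antipodal candidate sequences:

% select the candidate sequence x_mu (rows of X, antipodal +-1) that
% maximizes the correlation metric (28) for a soft received sequence y;
% for equal-energy candidates this also minimizes the Euclidean distance
X = [ 1  1  1  1;
      1 -1 -1  1;
     -1  1 -1 -1];
y = [0.9 -1.2 -0.4 0.7];          % unquantized (soft) received values
Lcorr = X * y.';                  % correlation metric per candidate
[~, mu] = max(Lcorr);
x_hat = X(mu, :)                  % ML decision among the candidates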


3 Exercises

Reminder: Solve these exercises prior to the laboratory in written form and present them at the beginning of the experiment. The tasks are presented on the blackboard by you! Only those students who show their own solutions and are able to explain them are admitted to the execution of the experiment!

In case you have to work out a detailed protocol, the (corrected) solutions have to be attached to this report!

Exercise 1: Parameters of the Viterbi algorithm (Ch. 14.5)

Given are the modulation index M and the length of the channel impulse response K = l + 1. Give in general form

• the number of states Z,

• the number of paths leaving each state or arriving at a state,

• the total number of path transitions in each time instant.

Calculate the corresponding values for the transmission of 8-PSK symbols over a channel of order 2.

Exercise 2: Intersymbol interference (ISI) at QPSK (Ch. 13.2)

Statistically independent, uniformly distributed QPSK-symbols are transmitted over a channel of 1st order with the real-valued channel impulse response

f = [0.8, 0.6]^T .

The QPSK-symbols here stem from the symbol alphabet

d1 = (1 + j)/√2 ,  d2 = (−1 + j)/√2 ,  d3 = (−1 − j)/√2 ,  d4 = (1 − j)/√2 .

a) Depict the symbol clock model of the transmission channel.

b) Exemplarily calculate the resulting signal levels w11 and w42 at the output of the channel for the two input symbol combinations {d(i) = d1, d(i − 1) = d1} and {d(i) = d4, d(i − 1) = d2}, respectively.

c) Depict the signal space diagram at the output of the channel resulting from all possible input symbol combinations.

d) Sketch the trellis diagram and draw the appropriate path into the trellis diagram for the symbol sequence

{d(i)} = { (1+j)/√2, (1−j)/√2, (−1−j)/√2, (−1+j)/√2, (−1−j)/√2 } for i = 0, . . . , 4
{d(i)} = { (−1−j)/√2 } for i < 0, i ≥ 5

Exercise 3: Trellis diagram for a channel of 2nd order (Ch. 13.2)

Consider a BPSK-transmission over a channel whose baseband symbol clock model has an impulse response of length 3. Draw into the corresponding trellis diagram that path which corresponds to the following data sequence:

{d(i)} = {1,−1, 1, 1,−1, 1} for i = 0, . . . , 5

{d(i)} = {−1} for i < 0, i ≥ 6


Exercise 4: Viterbi-detection for a channel of 1st order (Ch. 13.2)

Consider a transmission over a channel with the symbol clock impulse response

f = [1, 1]^T

using unipolar data d(i) ∈ {0, 1}. A superposition of white Gaussian noise is assumed for the transmission.

a) Depict the complete trellis diagram for the transmission of a sequence of length 3. Thereby assume that the channel stores the data value d = 0 at the beginning of the transmission, and that, in order to leave the channel in a power-free state at the end of the transmission, a "0" is sent as the fourth value.

b) Determine the possible signal levels at undisturbed transmission.

c) At the receiver the signal sequence

{y(i)} = {0.9, 1.1, 1.0, 0.8}

is measured. Execute the Viterbi-algorithm for the optimal detection of the transmitted data and give the data sequence which was most probably sent.

Exercise 5: Trellis diagram of a convolutional encoder (Ch. 9 in [Fri96, Fri03])

Create the shift register structure and the state transition diagram of a convolutional encoder with the generator polynomials g0 = (1, 1, 1) and g1 = (1, 0, 1). Determine the function table for the input data x = {1, 0, 1, 1, (0, 0)}, enter the input data as an emphasized path into the trellis diagram and give the output symbols y. Use figures 4 and 9 for orientation. How many bits does the output sequence consist of?

Hint: The last two zeros of the vector x (in the brackets) represent the so-called tail bits, which bring the transversal structure (in this case the convolutional encoder) into a defined final state; strictly speaking, the tail bits do not belong to the input data.

Exercise 6: Convolutional decoding (Ch. 9 in [Fri96, Fri03])

Flip an arbitrarily chosen data bit in the middle of the output sequence of the previous exercise and execute a decoding with the Viterbi-algorithm. Use the Hamming distance for the calculation of the metric (Hamming distance of two bit sequences: number of differing bits). Can the error be corrected?

Hint: The solution of this exercise is voluntary.


4 Experimental run

Within this experiment the understanding of the subject shall also be deepened with the help of software models in the programming environment MATLAB. The VA is, as already mentioned in section 1, applied on the one hand for channel equalization and on the other hand in channel coding.

Used m-files

Some frequently required MATLAB commands are summarized in table 1.

who       list variables in memory
what      list m-files of current directory
size      row and column dimensions
find      find indices of non-zero elements
load      load variables from disk
save      save variables to file
plot      linear x/y plot
semilogy  semi-log x/y plot
subplot   split graph window
title     plot title
grid      draw grid lines
axis      manual axis scaling
zoom      zoom in/out on 2-D plot
zplane    z-plane zero-pole plot
image     displays a matrix as an image

Table 1: Overview of important MATLAB functions.

Furthermore, specific MATLAB routines are required for the execution of the single experimental items; they are listed in table 2. The necessary information concerning the corresponding input syntax as well as the required parameters can be looked up (as always in MATLAB) with the help function.

Ch. 4.1
vitdemo    demonstration of the VA for binary transmission
vitloss    VA analysis program
dig_mod    symbol generation for digital modulation
convfir    convolution for viterbi.m
rausch     creation of a noise vector
viterbi    Viterbi-detector
qpskcomp   comparison of QPSK-symbol vectors

Ch. 4.2
convcod    convolutional encoder for viterbi.m
viterbi    Viterbi-decoder

Table 2: MATLAB m-files to be used.

Hint 1: In order to simplify the execution of the experimental part you get the MATLAB routine vitlab.m, which defines some variables. You should insert all following items resp. the necessary command sequences (some of them are already included) into this file. Thus a program which finally represents the overall experiment is developed during the experimental runs. In order to avoid the execution of the whole m-file it is sensible to either execute only parts of the program using the editor (marking & right mouse button & Evaluate Selection) or to mark certain parts of the program as comments (putting the passages in if 0 ... end).

Hint 2: Within this chapter the marks ⇒ Plot and ⇒ Text-File are used to indicate that you should save the corresponding figure or save the output into a txt-file for your report. Save the figures in the MATLAB-specific format *.fig for further processing with MATLAB at home and/or export the figure into the format required by your word processing system (*.eps for LaTeX and *.emf for Word). You should strictly follow this hint because it saves a lot of time!
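For example (the file names are placeholders):

saveas(gcf, 'vp_figure.fig');             % MATLAB figure for later editing
print(gcf, '-depsc', 'vp_figure.eps');    % EPS export, e.g. for LaTeX
print(gcf, '-dmeta', 'vp_figure.emf');    % EMF export, e.g. for Word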

4.1 Channel equalization

4.1.1 Preparations

To make the description and execution of the experiment easier, the following variables with the given variable names are defined within the MATLAB routine vitlab.m:

NumSym = 1000;   % number of symbols to be transmitted
EsNoh  = 11;     % Es/No "high" in dB
EsNol  = -4;     % Es/No "low" in dB

alphabet = 1/sqrt(2) * [1+j; -1+j; -1-j; 1-j];   % QPSK-symbol alphabet
Snutz = ( sum(abs(alphabet).^2) ) / length(alphabet);

% Symbol clock channel impulse response (CIR) ------------------------
% a) worst case channel of 1st order (see page 577)
Theta = 0;
f1 = 1/sqrt(2) * [1; -exp(j*Theta)];
f1 = f1/sqrt(sum(abs(f1).^2));   % power normalization to "1"
fl1 = length(f1);                % length of the CIR

% b) "normal" (arbitrarily chosen) channel of 2nd order
f21 = [1.0; 0.8*exp(j*0.6*pi); 0.2*exp(j*1.1*pi)];
f21 = f21/sqrt(sum(abs(f21).^2));   % power normalization to "1"
fl21 = length(f21);                 % length of the CIR

% c) worst case channel of 2nd order (see eq. 14.5.64 or eq. (22))
alpha = 0.4680;
beta  = 0.5301;
f22 = [alpha; beta*(1+j); j*alpha];
f22 = f22/sqrt(sum(abs(f22).^2));   % power normalization to "1"
fl22 = length(f22);                 % length of the CIR

By executing the program the variables are defined and the impulse responses and zero plots are depicted according to fig. 10.

fig. 10: Impulse responses and zero plots of the transmission channels used (|f1|, |f21|, |f22| over t/T and the corresponding zeros in the z-plane)

VP-1: Calculate for the two ES/N0-values EsNoh = 11 dB and EsNol = -4 dB the corresponding Eb/N0-values in dB for a QPSK-transmission.

4.1.2 Demonstration program VA (binary transmission)

The demonstration program vitdemo serves here to illustrate fundamental mechanisms of the Viterbi-detection. The call is very simple and requires only the channel impulse response and the ES/N0-ratio in dB as input, e.g. vitdemo(f21,EsNoh). An extract of a trellis diagram is then displayed, in which the path development – and thus the detection of data from a steady state on – can be observed by continuing the path development through pressing any key. The data are chosen at random at the beginning of each call.

On the left side of the figure the "old" summation path costs of each state are given; these are overwritten with the current summation path costs after pressing two keys in each case. The paths develop to the right, where for the moment both paths ending in one state are drawn as broken lines together with their summation path costs. The lower path costs are represented in a light color and are taken over when pressing a key, while the grey path costs are assigned to the dying branch. The decided data are entered from the path merging point on towards the left – these can be compared with the original transmission data, which are given at the lower margin of the figure.

Repeatedly execute several experiments for the two different constellations:

1. Constellation: channel f21 at low noise: vitdemo(f21,EsNoh)
2. Constellation: channel f22 at strong noise: vitdemo(f22,EsNol)

Make several (about 4-8) experiments in each case, write down the maximum path merging length per experiment in a table and finally answer the following questions:

VP-2: Which maximal path merging length can you observe in your experiments for the first and for the second constellation, respectively? What would it be according to the rule of thumb?

VP-3: For which constellation (1 or 2) does the on average greater path merging length occur? What is the reason for this?

VP-4: Which disadvantage results from the fact that a data decision can only be made after the path merging?

VP-5: Observe the development of the path costs in one state for the first constellation. Is it possible that the summation path costs with the Euclidean metric – thus with a positive increment – decrease from one step to the next? Illustrate your explanations by means of a concrete example. ⇒ Plot


Determine by means of the correct and the detected data when a wrong decision occurs for the second constellation.

VP-6: Sketch a corresponding section of the trellis diagram with two previous and three following steps, in which you draw the correct and the estimated path and assign the corresponding data. How many symbols does the path deviation cover? ⇒ Plot

4.1.3 Error analysis for the VA at QPSK-modulation

Using the program vitloss, the S/N-loss factors gam and the error vectors e_vek shall be determined for all three channels:

[gam1, e_vek1 ] = vitloss(f1);
[gam21,e_vek21] = vitloss(f21);
[gam22,e_vek22] = vitloss(f22);

By a systematic search this program yields as output parameters all S/N-loss factors in dB (e.g. gam1) and the appertaining error vectors (e.g. e_vek1) up to length three, sorted in decreasing order of the S/N-loss. As the output parameters have a high dimension, it is sensible to consider in each case only the interesting beginning, e.g. by means of gam22(1:10) or e_vek22(1:10,:).

VP-7: Determine the S/N-loss factor γ²min for all three channels. What does a loss of 0 dB imply and in which case does it occur?

VP-8: Make a table for the channel f22 with the first ten loss factors and the appertaining error vectors (not in MATLAB notation!).

VP-9: Determine through a comparison which index (1 to 4) from eq. (22) corresponds to the channel f22 and which index from eq. (23) belongs to the noted four error vectors for γ²min. Thereto assign the parameter a to each of the four error vectors.

4.1.4 Channel equalization at QPSK-modulation

The examinations for the channel equalization are here executed exclusively with the worst case channel of 2nd order f22. The required MATLAB inputs are as follows and are already included in vitlab.m:

% symbol creation -> "transmission symbols"
sym_o = dig_mod(alphabet,NumSym);

% Transmitter in the steady state with tail bits:
sym_c = convfir(f22,sym_o,alphabet);

% adding noise -> "reception symbols"
sym_r = sym_c + rausch(EsNoh,Snutz,length(sym_c));

% Viterbi-equalization -> "detected symbols"
sym_v = viterbi(sym_r,f22,alphabet);

% comparison of input/output sequence -> error vector, symbol error rate:
[e_vec,SER] = qpskcomp(sym_o,sym_v);

% Determine the error positions and error vectors
e_idx = find(e_vec);
fehlertab = [e_idx real(e_vec(e_idx)) imag(e_vec(e_idx))];
sprintf(' | %3d || %4.1f | %4.1f |\n', fehlertab.');


VP-10: The program vitlab.m contains the code sequence to plot the above created signals appropriately in one signal space diagram. How do you explain that the detected symbols sym_v in the signal space lie exactly on the transmission symbols sym_o, although errors obviously occur during the transmission?

Perform in the above given manner at least six simulations with unchanged ES/N0-ratio and answer the following questions (first read all questions until the end of the section in order to be able, perhaps, to answer them at the same time).

VP-11: Note the occurring symbol error rates in a table, so that at the end you can calculate a mean symbol error rate from all simulations. Give this mean symbol error rate and its calculation.

VP-12: Compare the mean symbol error rate with the value read off from fig. 8. Which S/N-loss compared with the ideal channel can be ascertained? Represent this loss graphically using fig. 8.

The vector e_vec=sym_o-sym_v contains the difference between the transmitted and the detected symbols according to (14). Consequently, it represents the deviation between the detected and the true path in the trellis diagram.

Consider the occurring path deviations. For this purpose represent the magnitude of the error sequence, normalized to the minimal signal space distance, with stem(abs(e_vec)) (extend, if necessary, the signal section to be examined in more detail by means of zoom – ask the tutor if required). The indices corresponding to the error events can be found by e_idx = find(e_vec) and the corresponding error vectors are printed by e_vec(e_idx). (The file vitlab.m already contains the corresponding code sequences and prints the error positions and the corresponding elements of the error vector in a table.)

VP-13: Note all occurring error vectors and their frequency of occurrence by concretely outputting the (complex) values in the command line in the sections referred to, e.g. e_vec(e_idx) (represent them in a table for the report). Which "error patterns" occur very frequently? Does this observation correspond to the theoretical examinations from section 4.1.3 resp. VP-8?

Hint: Prepare two tables! For each experimental run write down the SER in Table A. In the left column of Table B note all different error vectors e_vec(e_idx) and in the right column the number of occurrences over all experimental runs.

Note: You have to prepare two tables! For each experimental run (1, 2, 3, ...) write down the measured SER in Table A. Table B contains two columns, where the first column contains the different occurring error vectors (as row vectors) and the second column contains a tally sheet for the number of observations. Each time an error vector is observed for the first time, it is written into the left column and marked in the tally sheet. If this error vector occurs again in this or in one of the next experiments, a dash is added in the right column. Thus, at the end a table with all different error vectors and the corresponding number of occurrences over all experimental runs is obtained.

4.2 Decoding of convolutional codes

In this section we briefly treat details of the VA concerning convolutional decoding. Continue the m-file from the previous section.

4.2.1 Convolutional code of rate 1/2

Consider a convolutional code (CC) with the generator polynomials g0 = 1 + x + x² and g1 = 1 + x² and encode the data sequence (1 0 1 1 (0 0)). (The last two zeros in the brackets are the so-called tail bits, which are introduced automatically by the convolutional coder convcod.) Therefore enter:


G2 = [[1 1 1];[1 0 1]];
dk = [1 0 1 1];
c2 = convcod(G2,dk)

VP-14: Check the convolution result c2 by calculating it "by hand"! How does the length of the vector c2 come about?

4.2.2 CC of rate 1/3

Repeat the steps from the previous section for a convolutional code with the generator polynomials g0 = 1 + x + x², g1 = 1 + x² and g2 = 1 + x, and encode the data sequence (1 0 1 1 (0 0)). The MATLAB commands for this are:

G3 = [[1 1 1];[1 0 1];[1 1 0]];
dk = [1 0 1 1];
c3 = convcod(G3,dk)

VP-15: Give a formula for the length lc3 of the vector c3 that contains the quantities R (code rate), ld (length of the data vector without tail bits) and ℓ (degree of the generator polynomials).

4.2.3 Convolutional decoding

After the transmission of encoded bits, these are possibly disturbed. The task of the decoding is to infer the data bits despite the deviation of the received (disturbed) data from the correct code bit sequence.

dl = [1 0 1 1 0 1 0 1 1 1];
c2 = convcod(G2,dl);
%***
[c2r,error] = bschan(c2,0.2);   % BSC-channel, 20% bit error probability
error'
error_number_in_codesequence = sum(error)
dl_hat = viterbi(reshape(c2r,2,length(c2)/2),G2);
error_number_in_dl_hat = sum(dl_hat ~= dl)

VP-16: Execute the MATLAB code section from %*** on several times and observe how many errors in the code bit sequence lead to how many errors in the decoded data bit sequence. Does the success of the decoding only depend on the number of errors in the code bit sequence or also on their temporal distribution?

4.2.4 Soft-input vs. hard-input

This section shall illustrate the superiority of soft-input decoding compared with hard-input decoding. A hard-input decoding is required if the received code symbols are available only in hard-decided form (Ac ∈ {−1,+1}). In the case of soft-input decoding, the received code symbols are available to the Viterbi-decoder as finely quantized (Ac ⊂ R) symbols.

The MATLAB commands for this are:

dl = randint(1,10000);
c2 = convcod(G2,dl);
% Es/No = 2 dB
noise = rausch(2,1,length(c2));
c2r = (2*c2-1) + noise;
% VA soft input
dl_hat_soft = viterbi(reshape(c2r,2,length(c2)/2),G2);
soft_errors = sum(dl_hat_soft ~= dl)
% VA hard input
dl_hat_hard = viterbi(round(reshape(c2r,2,length(c2)/2)),G2);
hard_errors_v = (dl_hat_hard ~= dl);
hard_errors = sum(hard_errors_v)

VP-17: Which kind of decoding achieves the lower error rate? How do you explain this?

4.2.5 Error structure

The data bit sequence decoded in the previous section contains errors; the vector hard_errors_v marks them. Illustrate this error vector with fehlbild(hard_errors_v). The represented matrix contains the vector hard_errors_v row by row. Errors are represented as black points.

VP-18: Are the errors after a Viterbi-decoding mainly single errors or burst errors? Increase the noise (Es/N0 = 0 dB) and again observe the error structure (the following MATLAB code is an extract from the previous item):

% Es/No = 0 dB
noise = rausch(0,1,length(c2));
c2r = (2*c2-1) + noise;
% VA hard input
dl_hat_hard = viterbi(round(reshape(c2r,2,length(c2)/2)),G2);
hard_errors_v = (dl_hat_hard ~= dl);
fehlbild(hard_errors_v)