Selçuk Journal of Applied Mathematics, Vol. 5, No. 2, pp. 25-31, 2004

On The Structure and Decoding of Linear Codes with Respect to the Rosenbloom-Tsfasman Metric

Mehmet Ozen¹ and Irfan Siap²

¹Department of Mathematics, Sakarya University, Turkey; e-mail: ozen@sakarya.edu.tr

²Adıyaman Faculty of Education, Gaziantep University, Turkey; e-mail: isiap@gantep.edu.tr

Received: January 11, 2004

Summary. We investigate the structure of linear codes over finite fields with respect to the recently introduced Rosenbloom-Tsfasman metric. Given the generator matrix of a linear code in a special triangular form called the standard form, we give a formula for the spectra of the code. We also give a formula for the number of linear codes of a given length, dimension and minimum distance with respect to the Rosenbloom-Tsfasman metric. Finally, we propose a decoding technique with respect to this new metric.

Key words: Binary linear codes, Rosenbloom-Tsfasman metric, non-Hamming metric, decoding.

2000 Mathematics Subject Classification: 94B05, 94B60

1. Introduction

In this section we give the main definitions for linear codes under both the Hamming and the Rosenbloom-Tsfasman (RT) metric. In the next section, we use the standard form with respect to the RT metric defined in [3] and show how to compute the weight enumerator of a code of given length, dimension and minimum distance. In the last section, we explore the advantages and disadvantages of this new metric and conclude by proposing a decoding method.

1.1. Hamming Metric

Let F_q (or GF(q)) be a finite field of order q. A linear code C of length n over F_q is a vector subspace of V := F_q^n. When q = 2, a code is called a binary code. The elements of C are called codewords. The (Hamming) distance d(u, v) between two vectors u = (u_1, ..., u_n) ∈ V and v = (v_1, ..., v_n) ∈ V is defined by

d : V × V → N_0,   d(u, v) := |{i : u_i ≠ v_i}|,

where N_0 = N ∪ {0} and N is the set of positive integers; d is a metric on V. The minimum distance between distinct pairs of codewords of a code C is called the minimum distance of C and is denoted by d(C) or simply d. Another important notion is the (Hamming) weight of a codeword u, defined by w(u) = |{i : u_i ≠ 0}|, i.e., the number of nonzero entries of u. The minimum weight w(C) of a code C is the smallest weight among its nonzero codewords. We observe that if C is a linear code, then d(C) = w(C).
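These two definitions translate directly into code. A minimal sketch (not part of the original paper), with vectors modelled as Python tuples:

```python
def hamming_weight(u):
    """Number of nonzero entries of u."""
    return sum(1 for x in u if x != 0)

def hamming_distance(u, v):
    """|{i : u_i != v_i}| for vectors of equal length."""
    return sum(1 for a, b in zip(u, v) if a != b)

u = (1, 0, 1, 1, 0)
v = (1, 1, 1, 0, 0)
print(hamming_distance(u, v))  # u and v differ in positions 2 and 4 -> 2
print(hamming_weight(u))       # -> 3
```

For a linear code, d(C) = w(C) follows because d(u, v) = w(u − v) and u − v again lies in C.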

1.2. The Rosenbloom-Tsfasman Metric

The RT metric was first introduced in [4], where some bounds on the minimum distance of linear codes are given. Later, the RT metric for codes over matrices with entries from fields was investigated in [6]. Further, MacWilliams identities with respect to the RT metric are proven in [1] and [5].

Here, we investigate the minimum distance and the weights of codes more closely by taking advantage of their structure.

Let x = (x_1, x_2, ..., x_n) ∈ V. Then

w_N(x) = max{i : x_i ≠ 0} for x ≠ 0,   and   w_N(0) = 0,

is called the RT weight of x. The RT distance ρ(x, y) := w_N(x − y) between x and y is a metric, too. The minimum RT distance of C is defined by d_N(C) = min{ρ(x, y) : x, y ∈ C, x ≠ y}. Also, the minimum RT weight of C is w_N(C) = min{w_N(x) : x ∈ C, x ≠ 0}. Note that in this case d_N(C) = w_N(C) holds too.
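The RT weight and distance can be sketched the same way (a small illustration, not from the paper; indices are 1-based as in the definition, and entries are assumed to be represented as integers 0, ..., q − 1):

```python
def rt_weight(x):
    """max{i : x_i != 0}, 1-based; 0 for the zero vector."""
    return max((i + 1 for i, xi in enumerate(x) if xi != 0), default=0)

def rt_distance(x, y, q=2):
    """rho(x, y) = w_N(x - y), subtraction taken modulo q."""
    return rt_weight(tuple((a - b) % q for a, b in zip(x, y)))

x = (1, 0, 1, 0, 0)
y = (1, 0, 0, 0, 0)
print(rt_weight(x))       # last nonzero entry is x_3 -> 3
print(rt_distance(x, y))  # x - y = (0, 0, 1, 0, 0)  -> 3
```

Note how one disagreement in a late position already makes the RT distance large, which is the phenomenon exploited in the decoding section below.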

2. Spectra of a Linear Code

We recall some of the definitions and results given in [3].

Theorem 1. [3] A linear code of length n and dimension k has a generator matrix of the following form:

(1)
    ( g_{1,1} ... g_{1,s_k−1}  0          g_{1,s_k+1} ... g_{1,s_2−1}  0          g_{1,s_2+1} ... g_{1,s_1} )
    ( g_{2,1} ... g_{2,s_k−1}  0          g_{2,s_k+1} ... g_{2,s_2−1}  g_{2,s_2}  0           ...  0        )
    (   ...         ...        ...          ...             ...                                             )
    ( g_{k,1} ... g_{k,s_k−1}  g_{k,s_k}  0           ...  0                                                )

where n ≥ s_1 > s_2 > s_3 > ... > s_{k−1} > s_k ≥ 1 and g_{1,s_1} = g_{2,s_2} = ... = g_{k,s_k} = 1.

Definition 2. A generator matrix of the form (1) is called a standard form of the generator matrix with respect to the RT metric, or shortly a standard form.

We note that with respect to the RT metric, column permutation is not allowed. Hence the standard form is the most simplified form available.
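For binary codes, a standard form can be reached by Gaussian elimination that pivots on the last (rightmost) nonzero entry of each row rather than the first. The routine below is a sketch of this idea, not the algorithm of [3]:

```python
def rt_standard_form(G):
    """Row-reduce a binary generator matrix so that the last nonzero
    entries (pivots) s_1 > s_2 > ... > s_k are distinct and every other
    row has a 0 in each pivot column, as in form (1)."""
    n = len(G[0])

    def last_nonzero(row):                 # pivot position, -1 for the zero row
        return max((i for i, x in enumerate(row) if x), default=-1)

    pending = [list(row) for row in G]
    finished = []
    while pending:
        pending.sort(key=last_nonzero, reverse=True)
        row = pending.pop(0)
        s = last_nonzero(row)
        if s < 0:                          # linearly dependent row, drop it
            continue
        for other in pending + finished:
            if other[s]:                   # clear the pivot column everywhere else
                for i in range(n):
                    other[i] ^= row[i]
        finished.append(row)
    return finished                        # rows ordered by decreasing pivot

G = [[1, 1, 0, 1],
     [0, 1, 1, 1],
     [1, 0, 1, 0]]
S = rt_standard_form(G)                    # third row is dependent, so rank 2
```

Clearing a pivot column cannot disturb earlier pivots, since every processed row already has zeros there; the loop therefore terminates with distinct, strictly decreasing pivots.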

The very first implication of a standard form is that the last row of the matrix gives the minimum distance of the code.

Theorem 3. [3] If a generator matrix of a linear code C is in standard form as in (1), then the minimum distance is d_N(C) = s_k.

Further, given a standard form of a generator matrix, we are able to give the weights of a linear code with respect to the RT metric. Let A_j = |{c ∈ C : w_N(c) = j}| denote the number of codewords of weight j, 1 ≤ j ≤ n.

Theorem 4. Let C be a linear code of length n and dimension k with a generator matrix of the form (1), and let w_i denote the RT weight of the i-th row of (1). Then

A_{w_i} = (q − 1) q^{k−i},   where 1 ≤ i ≤ k.

Proof. For i = k, A_{w_i} = A_{d_N} = q − 1. For i = k − 1, we consider the combinations of the k-th and (k − 1)-st rows, which give A_{w_{k−1}} = (q − 1)q. Inductively, we reach the assertion.
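Theorem 4 is easy to check by brute force on a small example. The generator below is an assumption for illustration only: a binary matrix already in standard form with pivots s_1 = 5 > s_2 = 4 > s_3 = 3, so the theorem predicts A_5 = 4, A_4 = 2, A_3 = 1:

```python
from itertools import product

def rt_weight(x):
    return max((i + 1 for i, xi in enumerate(x) if xi != 0), default=0)

# a small binary generator in standard form (illustrative assumption)
G = [(1, 0, 0, 0, 1),
     (0, 1, 0, 1, 0),
     (1, 1, 1, 0, 0)]
k, n, q = len(G), len(G[0]), 2

# enumerate all q^k codewords and tally RT weights
spectrum = {}
for coeffs in product(range(q), repeat=k):
    c = tuple(sum(a * g[i] for a, g in zip(coeffs, G)) % q for i in range(n))
    if any(c):
        w = rt_weight(c)
        spectrum[w] = spectrum.get(w, 0) + 1

# Theorem 4: the i-th row, of weight w_i = s_i, accounts for (q-1)*q^(k-i) codewords
for i, row in enumerate(G, start=1):
    w_i = rt_weight(row)
    print(w_i, spectrum[w_i], (q - 1) * q ** (k - i))
```

The tallies sum to q^k − 1, so the k row weights already describe the whole nonzero spectrum.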

Considering a standard form of a generator matrix, we can easily conclude that the minimum RT distance d_N of a linear code of length n and dimension k is less than or equal to n − k + 1. Linear codes that achieve this bound are called maximum distance separable codes.

Corollary 1. [3] Let C be a maximum distance separable code of length n and dimension k, and let A_i denote the number of codewords of weight i in C. Then A_i = (q − 1) q^{i−d_N} for d_N ≤ i ≤ n.

Theorem 5. The number of linear codes of length n, dimension k, and minimum RT distance d_N is

A_{d_N}(n, k) = (n − d_N − 1 choose k − 2).

Proof. The first and the last row of G are fixed: the first row has its pivot at position s_1 = n, and the last row is (g_{k,1}, ..., g_{k,d_N}, 0, ..., 0) with pivot at position s_k = d_N:

        ( ...       ...         ...  ...  1 )
    G = (  ⋮         ⋮           ⋮         )
        ( g_{k,1} ... g_{k,d_N}  0  ...  0  )

The pivots of the remaining k − 2 rows must be placed at distinct positions strictly between d_N and n, i.e., among n − d_N − 1 candidate positions.

As seen from the above matrix, we must choose the k − 2 remaining pivot positions out of these n − d_N − 1 candidates to form a generator matrix in standard form. This can be done in (n − d_N − 1 choose k − 2) ways.

Note that the analogous counting problem for the Hamming metric is still an open and very difficult problem.

3. Decoding

As pointed out, contrary to the Hamming case, we can easily construct a linear code with given parameters. On the other hand, one may ask whether encoding and decoding can be done easily as well. First we consider the example given below.

Example 1.

Let C be a binary code with a generator matrix D :

        ( 1 1 1 1 1 1 0 0 0 0 1 )
        ( 1 0 1 0 1 0 1 0 0 1 0 )
    D = ( 1 1 1 0 0 1 1 0 1 0 0 )
        ( 1 0 1 0 1 1 1 1 0 0 0 )
        ( 1 1 0 1 0 0 0 0 0 0 0 )

Since D is in standard form, C is a binary code of length 11, dimension 5 and minimum RT distance 4. Assume that the codeword c = (1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0) is sent through a noisy channel and the word u = (1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0), containing one error, is received. If we apply maximum likelihood decoding, we are supposed to decode u to the closest codeword in C. Unfortunately, there exist two codewords at the closest RT distance from u, and both of them are far away from the correct one. In this example we see that C cannot correct even one error although its minimum RT weight equals 4. This suggests looking for a different decoding technique.
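The failure can be checked exhaustively. The sketch below enumerates all 2^5 codewords generated by D and measures RT distances from the received word u; the single bit error at position 10 pushes the sent codeword to RT distance 10, while two other codewords sit at distance 7:

```python
from itertools import product

def rt_weight(x):
    return max((i + 1 for i, xi in enumerate(x) if xi != 0), default=0)

D = [(1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1),
     (1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0),
     (1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0),
     (1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0),
     (1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0)]

codewords = set()
for coeffs in product((0, 1), repeat=5):
    codewords.add(tuple(sum(a * g[i] for a, g in zip(coeffs, D)) % 2
                        for i in range(11)))

c = (1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0)   # sent codeword (row 2 of D)
u = (1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0)   # received word, one error at position 10

rho = lambda x, y: rt_weight(tuple(a ^ b for a, b in zip(x, y)))
dists = {cw: rho(u, cw) for cw in codewords}
best = min(dists.values())
print(rho(u, c))                                     # sent word: RT distance 10
print(best, sum(d == best for d in dists.values()))  # two codewords at distance 7
```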

In order to overcome this problem, we propose the following. Given a linear code C of length n, dimension k, minimum RT distance d_N and a generator matrix G in standard form, encode and decode the last n − d_N positions of C using maximum likelihood decoding with respect to the Hamming metric. Then, having assured the correctness of the last n − d_N positions, encode and decode using maximum likelihood decoding with respect to the RT metric. This approach is especially efficient when most errors occur in the first d_N positions, i.e., it is quite efficient for burst errors.

Theorem 6. Let C be a linear code over F_q of length n and dimension k with a generator matrix in standard form as in (1). If the k × s submatrix formed by the first s columns is of full rank and generates a code which corrects t errors, then C corrects t + n − s errors, provided at most t of them occur in the first s positions. Otherwise, if the submatrix restricted to the first s positions is not of full rank, say of rank m < k, then the matrix G can be put in the form

(2)
    ( G_0  G_1 )
    (  0   G_2 ),

where G_0 is a matrix of rank m < k and G_1, G_2 are lower triangular matrices. Assume that G_0 generates a code that corrects t errors and G_2 generates a code that corrects t_2 errors; then C can correct t + t_2 errors.

Proof. First assume that the submatrix restricted to the first s columns, of type k × s, is of full rank and generates a code which corrects t errors. Suppose that a codeword c is sent and a word v is received. If v has t or fewer errors in the first s positions, then the first s positions can be corrected. Since the matrix G is in lower triangular form, given the first s positions the full codeword can be recovered, and hence the codeword c can be computed.

Otherwise, assume that the submatrix is not of full rank, say of rank m < k; then by row operations the matrix G can be put into the form given in (2). Again suppose that the submatrix G_0 generates a code that can correct t errors. Assume that a codeword c = (c_0, c_1) is sent, where c_0 and c_1 are the restrictions of c to the first s and the last n − s positions, respectively, and suppose that a word v = (v_0, v_1) is received. The restriction v_0 can be decoded correctly to c_0 if t or fewer errors occurred in it. Hence, v is decoded as (c_0, v_1). Now let c_1 = Σ g_i^(1) + Σ g_i^(2), where the g_i^(1) and g_i^(2) are rows of the matrices G_1 and G_2, respectively. Since c_0 is decoded correctly, the combination Σ g_i^(1) is determined, and c_1 − Σ g_i^(1) = Σ g_i^(2) must be a codeword of the linear code generated by G_2. Since the number of errors in v_1 is equal to the number of errors in v_1 − Σ g_i^(1), and G_2 generates a code which corrects t_2 or fewer errors, these errors can be corrected.
Example 2.
Let

(3)
        ( 1 0 0 0 0 1 1 1 0 1 1 1 )
    G = ( 0 1 0 0 1 0 1 0 1 1 1 0 )
        ( 0 0 1 0 1 1 0 1 1 1 0 0 )
        ( 0 0 0 1 1 1 1 0 0 0 0 0 )

generate a code C of length 12, dimension 4 and minimum RT distance 7. Since the matrix is in lower triangular form, we can apply Theorem 6 directly. The submatrix corresponding to the first 7 positions is of full rank; moreover, it is a generator matrix of the [7, 4] Hamming code and hence generates a code which can correct one error. Altogether, the code C generated by the matrix G can correct up to six errors.

Example 3. Let

(4)
        ( 1 0 0 0 0 1 1 1 0 1 1 1 0 1 )
        ( 0 1 0 0 1 0 1 0 1 1 1 0 1 0 )
    D = ( 0 0 1 0 1 1 0 1 1 1 0 1 0 0 )
        ( 1 0 1 0 1 0 1 1 0 0 1 0 0 0 )
        ( 0 1 1 0 0 1 1 1 0 1 0 0 0 0 )
        ( 0 0 0 1 1 1 1 0 0 0 0 0 0 0 )

generate a code S of length 14, dimension 6 and minimum RT distance 7. The submatrix restricted to the first 7 positions is not of full rank. Following Theorem 6, we rearrange the rows of the matrix: adding the first and the third rows to the fourth, adding the second and the third rows to the fifth, and reordering the rows as necessary, we obtain

(5)
    ( 1 0 0 0 0 1 1 1 0 1 1 1 0 1 )
    ( 0 1 0 0 1 0 1 0 1 1 1 0 1 0 )
    ( 0 1 1 0 0 1 1 1 0 1 0 0 0 0 )
    ( 0 0 0 1 1 1 1 0 0 0 0 0 0 0 )
    ( 0 0 0 0 0 0 0 1 1 0 0 0 0 1 )
    ( 0 0 0 0 0 0 0 0 0 1 1 1 1 0 )

By applying Theorem 6 to the matrix in (5), we see that a received word can be corrected in the first 7 positions, where G_0 generates a code correcting one error, and in the remaining positions, since G_2 also generates a code which can correct one error. Assume that a codeword c is sent and v = (01110111001100) is received. Decoding the first 7 positions in the linear code generated by G_0 corrects v_0 = (0111011) to c_0 = (0110011). Now, since c_0 = g_3^(0), we compute v_1 − g_3^(1) = (0011100). Decoding this word in the linear code generated by G_2 corrects it to (0011110), which gives c_1 = g_3^(1) + (0011110) = (1001110). Therefore, the codeword which was sent is c = (c_0, c_1) = (01100111001110).
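The two-stage procedure can be traced in code. The block matrices G_0, G_1, G_2 below are taken from (5) as parsed here, and the 14-bit received word is the one of the example; treat both as this sketch's assumptions:

```python
from itertools import product

def combo(coeffs, M):
    """Linear combination of the rows of a binary matrix M."""
    n = len(M[0])
    return tuple(sum(a * row[i] for a, row in zip(coeffs, M)) % 2 for i in range(n))

def nearest(word, M):
    """Brute-force Hamming nearest-codeword decoding in the code generated by M."""
    best = min(product((0, 1), repeat=len(M)),
               key=lambda cf: sum(a != b for a, b in zip(word, combo(cf, M))))
    return best, combo(best, M)

# Block form (2) of Example 3's generator: top rows are (G_0 | G_1), bottom (0 | G_2).
G0 = [(1,0,0,0,0,1,1), (0,1,0,0,1,0,1), (0,1,1,0,0,1,1), (0,0,0,1,1,1,1)]
G1 = [(1,0,1,1,1,0,1), (0,1,1,1,0,1,0), (1,0,1,0,0,0,0), (0,0,0,0,0,0,0)]
G2 = [(1,1,0,0,0,0,1), (0,0,1,1,1,1,0)]

v0, v1 = (0,1,1,1,0,1,1), (1,0,0,1,1,0,0)   # received word, split after position 7

coeffs, c0 = nearest(v0, G0)                # stage 1: correct the head
tail = combo(coeffs, G1)                    # tail contribution of the decoded top rows
y = tuple(a ^ b for a, b in zip(v1, tail))  # must be near a codeword of <G2>
_, w = nearest(y, G2)                       # stage 2: correct the tail
c1 = tuple(a ^ b for a, b in zip(tail, w))
print(c0 + c1)                              # recovered codeword (c_0, c_1)
```

Both stages here are exhaustive searches over at most 2^4 combinations; for larger components any Hamming decoder for the codes generated by G_0 and G_2 could be substituted.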

By interpreting Theorem 6, it is possible to construct an infinite family of codes with particular parameters.

Corollary 2. If there exists a linear code over F_q of length n_0, dimension k and minimum Hamming distance d_0, then for all n_1 ∈ N there exists a linear code over F_q of length n_0 + n_1, dimension k and minimum RT distance n_0. Further, this code can correct up to ⌊(d_0 − 1)/2⌋ + n_1 errors, with the restriction that in the first n_0 positions ⌊(d_0 − 1)/2⌋ or fewer errors may occur.

The advantage of Theorem 6 and Corollary 2 lies in particular in constructing burst error correcting codes in which the initial positions are better protected than the tails of the codewords.

We would like to thank the referees for their valuable remarks and effort, which improved the article's presentation.

References

1. Dougherty S.T. and Skriganov M.M. (2002): MacWilliams Duality and the Rosenbloom-Tsfasman Metric, Mosc. Math. J., Vol. 2, No. 1, pp. 81-97.

2. MacWilliams F.J. and Sloane N.J.A. (1977): The Theory of Error-Correcting Codes, North-Holland Publishing Co.

3. Ozen M., Siap I. and Callialp F.: The Structure of Binary Codes with Respect to the Rosenbloom-Tsfasman Metric (in Turkish), Anadolu University Science and Technology Journal, accepted.

4. Rosenbloom M.Yu. and Tsfasman M.A. (1997): Codes for the m-Metric, Problems of Information Transmission, Vol. 33, No. 1, pp. 45-52.

5. Siap I. (2001): The Complete Weight Enumerator for Codes over M_{n×s}(F_q), Lecture Notes in Computer Science, Vol. 2260, pp. 20-26.

6. Skriganov M.M. (2002): Coding Theory and Uniform Distributions, St. Petersburg Math. J., Vol. 13, No. 2, pp. 301-337.