JOURNAL OF INFORMATION, KNOWLEDGE AND RESEARCH IN ELECTRONICS AND COMMUNICATION ENGINEERING

IMAGE COMPRESSION USING EBPA WITH NEURAL NETWORK

1 PROF. D. B. MANTRI, 2 PROF. S. N. KULKARNI, 3 PROF. S. S. KATRE

1 Asst. Professor and Head, Electronics and Telecommunication Department,

VVP Institute of Engineering and Technology, Solapur

2 Vice Principal and Asst. Professor, Mechanical Engineering Department,

VVP Institute of Engineering and Technology, Solapur

3 Asst. Professor and Head, Electrical and Electronics Department,

VVP Institute of Engineering and Technology, Solapur

ABSTRACT: This paper presents an EBPA-based three-layer neural network for image compression. The proposed technique breaks large images down into smaller windows and eliminates redundant information. The technique belongs to the transform-coding group of image compression methods and employs a neural network trained by an iterative method; the error back-propagation algorithm (EBPA) is used to train the network. A number of experiments have been conducted, and the results obtained, such as the compression ratio and the PSNR of the compressed images, are presented in this paper.

1.0 INTRODUCTION

The transport of images across communication paths is an expensive process. Image compression provides an option for reducing the number of bits needed for transmission and storage. This in turn increases the volume of data that can be transferred in a given time, reduces the memory needed for storage, and lowers the associated cost. Compression has become increasingly important to most computer networks as the volume of data traffic has begun to exceed their transmission capacity.

Artificial neural networks have been applied to many such problems and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data. One such application is image compression. Neural networks seem to be well suited to this particular task. Not only can ANN-based techniques provide sufficient compression rates, but security is also easily maintained, since the data on the communication line is encoded and does not resemble its original form.

The results convey the compression ratio achieved, the quality of the image after decompression, and the PSNR of the reconstructed image with respect to the original image.

2.0 IMAGE DATA COMPRESSION USING NEURAL NETWORK

Although there are many algorithms for data compression, most of them are designed to deal with static data such as text. Video data compression is a difficult problem from an algorithmic point of view. As described, the neural network approach is ideal for a video/image data compression application, because a three-layer network can easily be trained with the back-propagation algorithm to map a set of patterns from an n-dimensional space to an m-dimensional space.

2.2 BACK-PROPAGATION ALGORITHM

2.2.1 Network Selected for Compression

The first step in solving the problem is to find the size of the network that will perform the desired data compression. We would like to select a network architecture that provides a reasonable data-reduction factor while still enabling us to recover a close approximation of the original image from the encoded form. The network used is a feed-forward network of three layers. All connections run from units in one layer to units in the next. The hidden layer consists of fewer units than the input layer and therefore compresses the image. The output layer has the same size as the input layer and is used to recover the compressed image. The network is trained on a set of patterns whose desired outputs are the same as the inputs, using back-propagation of the error measures. Through the back-propagation process the network develops an internal weight coding such that the image is compressed by a ratio of input-layer nodes to hidden-layer nodes equal to four. If we then read out the values produced by the hidden-layer units and transmit those values to the receiving station, we can reconstruct the original image by propagating the compressed values through to the output units of an identical network.

2.2.2 Training the Network

The objective of training the network is to adjust the weights so that applying a set of inputs produces the desired set of outputs. Before the training process starts, all of the weights must be initialized to small random numbers. This ensures that the network is not saturated by large weight values.
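As an illustration, the sketch below (ours, not the authors' implementation) sets up such a three-layer network in NumPy for the 16-4-16 case. The ±0.1 initialization range and the use of a sigmoid as the squashing function are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)          # seeded only for reproducibility

N_IN, N_HID, N_OUT = 16, 4, 16          # 4x4 pixel window, 4:1 input-to-hidden ratio

# weights initialized to small random numbers so the network is not saturated
w_in_hid  = rng.uniform(-0.1, 0.1, size=(N_IN,  N_HID))
w_hid_out = rng.uniform(-0.1, 0.1, size=(N_HID, N_OUT))

def squash(net):
    """Sigmoid 'squashing' function F applied to the NET value of a neuron."""
    return 1.0 / (1.0 + np.exp(-net))
```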

2.2.3 Procedure for Training the Network

1 Apply an input vector to the network and calculate the corresponding output values.

2 Compare the actual output with the correct outputs and determine a measure of the error.

3 Determine in which direction (+ or - ) to change each weight in order to reduce the error.

4 Determine the amount by which each weight has to change.

5 Apply the corrections to the weights.

6 Repeat steps 1 through 5 with all the training vectors until the error for every vector in the training set is reduced to an acceptable value (a minimal driver is sketched below).
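The driver below sketches steps 1 through 6. Here `train_on_pattern` is a hypothetical helper (one forward pass plus one reverse pass, as detailed in sections 2.2.4 to 2.2.6) that returns the squared error for one training vector, and the stopping threshold is an illustrative value, not one from the paper.

```python
ACCEPTABLE_ERROR = 0.01                      # illustrative threshold (assumption)

def train(patterns, train_on_pattern):
    """Repeat steps 1-5 over all training vectors until the total error is acceptable."""
    while True:
        epoch_error = 0.0
        for x in patterns:                   # steps 1-5 for every training vector
            epoch_error += train_on_pattern(x)
        if epoch_error <= ACCEPTABLE_ERROR:  # step 6: stop when the error is small enough
            return
```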

2.2.4 Forward Pass

Step 1 in section 2.2.3 constitutes a forward pass. In the forward pass an input vector X is applied and an output vector Y is produced. The input-target vector pair (X, T) comes from the training set. The calculation is performed on X to produce the output vector Y.

The calculation in a multilayer network is done layer by layer, starting at the layer nearest to the inputs. The NET value of each neuron in the first layer is calculated as the weighted sum of that neuron's inputs. The activation function F then ‘squashes’ NET to produce the OUT value for each neuron in that layer. Once the set of outputs of a layer is found, it serves as the input to the next layer. This process is repeated, layer by layer, until the final set of network outputs is produced.
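In code, the forward pass for the network sketched earlier might look as follows (our notation; `squash`, `w_in_hid` and `w_hid_out` come from the setup snippet above).

```python
def forward(x):
    """Forward pass: NET = weighted sum of inputs, OUT = F(NET), layer by layer."""
    net_hid = x @ w_in_hid               # NET values of the hidden layer
    out_hid = squash(net_hid)            # OUT values of the hidden layer
    net_out = out_hid @ w_hid_out        # hidden outputs feed the output layer
    out     = squash(net_out)            # final network output Y
    return out_hid, out
```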

2.2.5 Reverse Pass

Adjusting the weights of the output layer is straightforward: since a target value is available for each neuron in the output layer, adjusting the associated weights is easy.

The error for an output neuron is

Error = Target − OUT

This is multiplied by the derivative of the squashing function,

F′(NET) = OUT(1 − OUT),

thereby producing the δ value:

δ = F′(NET) × Error

δ = OUT(1 − OUT) × (Target − OUT) …(2.1)

then ‘’ is multiplied by ‘out’, the source neuron for the weight in question. This product is in turn multiplied by a training rate parameter this will give a change in weight

(2.2)

where  = Learning rate

 = the value of 

= the value of OUT in hidden layer

The change in weight must then be added to the old weight to obtain the modified weight:

w_pq(n+1) = w_pq(n) + Δw_pq …(2.3)

where,

w_pq(n) = the value of the weight from neuron p in the hidden layer to neuron q in the output layer at step n (before adjustment),

w_pq(n+1) = the value of the weight at step n+1 (after adjustment),

δ_q = the value of δ for neuron q in the output layer,

OUT_p = the value of OUT for neuron p in the hidden layer.

An identical process is performed for each weight proceeding from a neuron in the hidden layer to a neuron in the output layer.
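Equations (2.1) to (2.3) translate directly into code. The sketch below (an illustration under our assumptions, not the authors' implementation) adjusts the hidden-to-output weights for one training pattern, reusing `forward` from the earlier snippet; `ALPHA` stands for the learning rate α and its value is illustrative.

```python
ALPHA = 0.05                                   # learning rate α (illustrative value)

def adjust_output_weights(x, target):
    """Reverse pass for the output layer, eqs. (2.1)-(2.3)."""
    out_hid, out = forward(x)
    # eq. (2.1): delta = OUT(1 - OUT)(Target - OUT)
    delta_out = out * (1.0 - out) * (target - out)
    # eq. (2.2): delta_w = alpha * delta_q * OUT_p
    dw = ALPHA * np.outer(out_hid, delta_out)
    # eq. (2.3): w(n+1) = w(n) + delta_w
    global w_hid_out
    w_hid_out = w_hid_out + dw
    return out_hid, out, delta_out
```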

2.2.6 Adjusting the Weights of the Hidden Layers

Hidden layers have no target vector, so the training process described above cannot be used. Back-propagation trains the hidden layers by propagating the output error back through the network layer by layer, adjusting the weights at each layer. Equations (2.2) and (2.3) are used for both the output and the hidden layers. For hidden layers, however, δ must be generated without the benefit of a target vector. First, δ is calculated for each neuron in the output layer, as in equation (2.1). It is used to adjust the weights feeding into the output layer and is then propagated back through those same weights to generate a δ value for each neuron in the hidden layer. These values of δ are used in turn to adjust the weights of the hidden layer and are propagated back in the same way to all preceding layers.

Consider a single neuron in the hidden layer just before the output layer. In the forward pass, this neuron propagates its output value to the neurons in the output layer through the interconnecting weights. During training these weights operate in reverse, passing the value of δ from the output layer back to the hidden layer. Each of these weights is multiplied by the δ value of the neuron to which it connects in the output layer. The value of δ needed for the hidden-layer neuron is produced by summing all such products and multiplying by the derivative of the squashing function:

δ_p = OUT_p (1 − OUT_p) Σ_q (δ_q w_pq) …(2.4)

With δ in hand, the weights feeding the hidden layer can be adjusted. For each neuron in a given hidden layer, δ must be calculated and all the weights associated with that layer adjusted. This is repeated, moving back toward the input layer, layer by layer, until all the weights are adjusted.
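The corresponding sketch for the hidden layer applies equation (2.4): the output-layer δ values are passed back through the same weights, summed, and multiplied by the derivative of the squashing function. It continues the earlier snippets (`ALPHA`, `w_in_hid`, `w_hid_out`) and is again an illustration rather than the authors' code.

```python
def adjust_hidden_weights(x, out_hid, delta_out):
    """Reverse pass for the hidden layer, eqs. (2.4), (2.2) and (2.3)."""
    # eq. (2.4): delta_p = OUT_p (1 - OUT_p) * sum_q(delta_q * w_pq)
    delta_hid = out_hid * (1.0 - out_hid) * (w_hid_out @ delta_out)
    # same update rule as eqs. (2.2)-(2.3), now for the input-to-hidden weights
    global w_in_hid
    w_in_hid = w_in_hid + ALPHA * np.outer(x, delta_hid)
    return delta_hid
```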

2.2.7 Momentum

Adding a momentum term to each weight adjustment, proportional to the amount of the previous weight change, enhances the stability of the training process. Once an adjustment is made it is “remembered” and serves to modify all subsequent weight adjustments.

Δw_pq(n+1) = α δ_q OUT_p + η Δw_pq(n) …(2.5)

w_pq(n+1) = w_pq(n) + Δw_pq(n+1) …(2.6)

where η is the momentum coefficient.
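With momentum, the weight change of equation (2.2) is augmented by a fraction of the previous change, as in equations (2.5) and (2.6). The sketch below keeps a per-matrix record of the last update; `ETA` stands for the momentum coefficient η, and its value is illustrative only.

```python
ETA = 0.6                                      # momentum coefficient η (illustrative)
prev_dw_out = np.zeros_like(w_hid_out)         # remembered previous weight change

def adjust_output_weights_momentum(out_hid, delta_out):
    """Eqs. (2.5)-(2.6): new change = alpha*delta*OUT + eta*(previous change)."""
    global w_hid_out, prev_dw_out
    dw = ALPHA * np.outer(out_hid, delta_out) + ETA * prev_dw_out   # eq. (2.5)
    w_hid_out = w_hid_out + dw                                      # eq. (2.6)
    prev_dw_out = dw                                                # remember for next step
```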

3.0 NEURAL NETWORK REQUIREMENTS AND IMPLEMENTATION

3.1 Preprocessing

All the training and test images used have 8-bit resolution and are grey-level images. Since the network is trained on a set of patterns whose desired outputs are the same as the inputs, the pixel intensity values were normalized to the range 0 to 1 by dividing each pixel intensity value by 255. As the initial weights of the network are not known, they are selected randomly and also normalized between 0 and 1.
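A minimal preprocessing sketch, assuming 8-bit grey-level images loaded as NumPy arrays:

```python
def normalize(image_u8):
    """Map 8-bit grey-level pixel intensities (0-255) into the range 0 to 1."""
    return image_u8.astype(np.float64) / 255.0

def denormalize(image_f):
    """Map network outputs in the range 0-1 back to 8-bit pixel intensities."""
    return np.clip(np.round(image_f * 255.0), 0, 255).astype(np.uint8)
```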

3.2 Selection of Training & Test Data

The training data required to train the neural network can be taken from any image, either randomly or in sequential pixel order. Here the standard LEN.128 image is used as the training image, and its patterns are taken in sequential pixel order; 100 patterns are used for training the network. Selecting the training patterns is the major difficulty, and properly matching the training data to the selected network is largely a matter of trial and error. If the training-pattern pixels show very little intensity variation, the quality of the reconstructed image is reduced. Any 128x128 grey-level image can then be used to test the trained neural network and to carry out compression and decompression.
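One way to obtain the 100 sequential training patterns from a 128x128 training image is sketched below, under the assumption of non-overlapping 4x4 windows matching the 16-4-16 network; the variable name for the training image is illustrative.

```python
def sequential_patterns(image, window=4, count=100):
    """Take `count` successive non-overlapping window x window blocks, row by row."""
    patterns = []
    h, w = image.shape
    for r in range(0, h, window):
        for c in range(0, w, window):
            patterns.append(image[r:r + window, c:c + window].reshape(-1))
            if len(patterns) == count:
                return np.array(patterns)
    return np.array(patterns)

# e.g. training_patterns = sequential_patterns(normalize(lena_128x128))
```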

3.3 Compression algorithm

  1. Read the input image for the same network architecture as was used for training.
  2. Read the trained weights and apply them to the input layer nodes.
  3. Obtain the sum of each input layer node-weight combination.
  4. Apply the squashing function to the sum obtained in step 3 for all hidden layer nodes.
  5. Save the output of the hidden layer nodes to a file.
  6. Repeat steps 1 through 5 until the whole image data is compressed (a sketch follows below).
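The compression steps above might be coded as follows (our illustration, reusing `squash`, `w_in_hid` and `normalize` from the earlier snippets, with 4x4 windows assumed; the output file name is illustrative).

```python
def compress(image, window=4):
    """Steps 1-6: feed each image window through the input-to-hidden weights
    and keep only the hidden-layer outputs as the compressed data."""
    compressed = []
    h, w = image.shape
    for r in range(0, h, window):
        for c in range(0, w, window):
            x = image[r:r + window, c:c + window].reshape(-1)   # one input window
            hidden = squash(x @ w_in_hid)                       # steps 3-4
            compressed.append(hidden)                           # step 5
    return np.array(compressed)

# np.save("compressed.npy", compress(normalize(test_image)))    # save to a file
```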

3.4 Decompression algorithm:

  1. Read the compressed data from the file and apply it to the hidden layer.
  2. Take the output weight matrix from the trained weights and use it to connect the hidden layer nodes with the output layer nodes.
  3. Compute the sum of each hidden node-weight combination for all output layer nodes.
  4. Apply the squashing function to obtain the original pixels.
  5. Repeat steps 1 through 4 until all the image data is recovered (a sketch follows below).
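The matching decompression sketch rebuilds the image window by window from the saved hidden-layer outputs; it again reuses `squash`, `w_hid_out` and `denormalize` from the earlier snippets, and the 128x128 size and 4x4 window are assumptions.

```python
def decompress(compressed, size=128, window=4):
    """Steps 1-5: propagate each compressed window through the hidden-to-output
    weights and place the recovered pixels back into the image."""
    image = np.zeros((size, size))
    idx = 0
    for r in range(0, size, window):
        for c in range(0, size, window):
            out = squash(compressed[idx] @ w_hid_out)            # steps 2-4
            image[r:r + window, c:c + window] = out.reshape(window, window)
            idx += 1
    return denormalize(image)
```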

4.0 TRAINING GRAPH AND RESULTS

Proper selection of η and α is essential when training the network. This is done on a trial-and-error basis: taking the broad theoretical range of 0.001 to 1 for both η and α, the network is trained for each candidate pair of values. By referring to the accompanying graph (Graph 1), the η and α that give the maximum PSNR (i.e. the best quality) can then be selected. This is why η and α differ for each neural network selected, as shown in Table (1), where different architectures are used to compare the quality of the reconstructed images.

Architecture / η / α / Compression
16-4-16 / 0.6 / 0.05 / 2 bits/pixel
32-8-32 / 0.8 / 0.02 / 2 bits/pixel
32-4-32 / 0.5 / 0.05 / 1 bit/pixel
16-6-16 / 0.5 / 0.05 / 3 bits/pixel
16-8-16 / 0.56 / 0.045 / 4 bits/pixel

Table (1): Network architectures with their training parameters and compression rates.

The patterns used for training the network are selected sequentially from the LEN.128 grey-level image file.

The training time required increases with the network size, and it also depends on two main factors:

i) the learning rate α, and

ii) the momentum constant η.

For a higher compression rate we used the 32-4-32 architecture, which gives a compression of 1 bit/pixel at the expense of image quality. For a lower compression rate we used the 16-8-16 architecture, which gives a compression of 4 bits/pixel and the best-quality reconstructed image.
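The PSNR values quoted in the results can be computed with the standard definition for 8-bit images, as sketched below (our code, not the authors').

```python
def psnr(original_u8, reconstructed_u8):
    """Peak signal-to-noise ratio in dB between two 8-bit grey-level images."""
    mse = np.mean((original_u8.astype(np.float64) -
                   reconstructed_u8.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```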


Graph (1): Training graph



Original image and reconstructed image (PSNR = 21.559280 dB)
