Image Processing Based Projects

Download Full Project for B.E, B.Tech, BCA, MCA, M.E, M.Tech, B.Sc, M.Sc and Polytechnic

1. An Implementation of Image Enhancement Using Histogram Equalization and Brightness Preserving Bi-Histogram Equalization

ABSTRACT
               Histogram equalization is a contrast enhancement technique in image processing that uses the histogram of the image. However, histogram equalization is not always the best method for contrast enhancement, because the mean brightness of the output image can differ significantly from that of the input image. Several extensions of histogram equalization have been proposed to overcome this brightness preservation challenge. Brightness preserving bi-histogram equalization (BBHE) and dualistic sub-image histogram equalization (DSIHE) divide the image histogram into two parts based on the input mean and median respectively, and then equalize each sub-histogram independently. Histogram equalization is a well-known method for enhancing the contrast of a given image in accordance with the sample distribution: it flattens the density distribution of the resultant image and enhances the contrast as a consequence. In spite of its high performance in enhancing the contrast of a given image, however, global histogram equalization may change the original brightness of an input image, deteriorate visual quality, or introduce annoying artifacts [Woods].
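As a sketch of the two methods compared above (the abstract's implementation is in MATLAB; this is an illustrative NumPy version): classical HE remaps levels through the image's cumulative distribution, while BBHE splits the histogram at the mean and equalizes each half into its own output range so that the overall brightness stays near the input mean.

```python
import numpy as np

def hist_eq(img, levels=256):
    """Classical global histogram equalization on a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size                     # cumulative distribution
    lut = np.round((levels - 1) * cdf).astype(np.uint8)  # old level -> new level
    return lut[img]

def bbhe(img, levels=256):
    """Brightness-preserving bi-histogram equalization: split at the mean,
    equalize the lower half onto [0, mean] and the upper half onto
    [mean+1, levels-1]."""
    mean = int(img.mean())
    out = np.empty_like(img)
    low, high = img <= mean, img > mean
    # lower sub-histogram -> [0, mean]
    h = np.bincount(img[low], minlength=levels)[:mean + 1]
    cdf = np.cumsum(h) / max(h.sum(), 1)
    out[low] = np.round(mean * cdf).astype(np.uint8)[img[low]]
    # upper sub-histogram -> [mean+1, levels-1]
    h = np.bincount(img[high] - (mean + 1), minlength=levels)[:levels - mean - 1]
    cdf = np.cumsum(h) / max(h.sum(), 1)
    lut = (mean + 1 + np.round((levels - mean - 2) * cdf)).astype(np.uint8)
    out[high] = lut[img[high] - (mean + 1)]
    return out
```

Because each sub-histogram is confined to its own side of the mean, BBHE cannot move a dark pixel above the input mean or a bright pixel below it, which is the source of its brightness preservation.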

Click Here to Download FULL  ABSTRACT

2. A LSB Based Steganography for Video Stream with Enhanced Security and Embedding/Extraction

ABSTRACT
               Video steganography deals with hiding secret data or information within a video. In this paper, a hash-based least significant bit (LSB) technique is proposed: a spatial-domain technique in which the secret information is embedded in the LSBs of the cover frames. The eight bits of each secret byte are divided into groups of 3, 3 and 2 bits and embedded into the R, G and B pixel values of the cover frame respectively. A hash function is used to select the position of insertion in the LSB bits. The proposed method is analyzed in terms of both the peak signal-to-noise ratio (PSNR) with respect to the original cover video and the mean square error (MSE) between the original and steganographic files, averaged over all video frames. Image fidelity (IF) is also measured, and the results show minimal degradation of the steganographic video file. The proposed technique is compared with existing LSB-based steganography and the results are found to be encouraging. An estimate of the embedding capacity of the technique in the test video file, along with an application of the proposed method, is also presented.
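The 3-3-2 split described above can be sketched as follows (plain NumPy; the paper's hash-based position selection is not specified here, so the example simply embeds one byte per pixel):

```python
import numpy as np

def embed_byte(pixel_rgb, byte):
    """Hide one 8-bit value in a single RGB pixel: the top 3 bits in R's LSBs,
    the middle 3 bits in G's LSBs, and the bottom 2 bits in B's LSBs."""
    r, g, b = (int(c) for c in pixel_rgb)
    r = (r & ~0b111) | (byte >> 5)            # top 3 bits of the payload
    g = (g & ~0b111) | ((byte >> 2) & 0b111)  # middle 3 bits
    b = (b & ~0b011) | (byte & 0b11)          # bottom 2 bits
    return np.array([r, g, b], dtype=np.uint8)

def extract_byte(pixel_rgb):
    """Reassemble the hidden byte from the LSBs of an RGB pixel."""
    r, g, b = (int(c) for c in pixel_rgb)
    return ((r & 0b111) << 5) | ((g & 0b111) << 2) | (b & 0b11)
```

Each channel changes by at most 7 (R, G) or 3 (B) intensity levels, which is why the measured PSNR degradation stays small.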

Click Here to Download FULL ABSTRACT

3. A High Capacity Steganography Technique for JPEG2000 Compressed Images Using DWT

ABSTRACT
                 Steganography is an information hiding technique in which secret data is hidden in a host medium such as text, image, audio or video. In information hiding terminology, the host medium is termed the cover medium, and after the secret data has been hidden in it, it is termed the stego medium. Images are widely used as cover media for steganography because they have high redundancy. The purpose of a steganography technique is to protect confidential and sensitive information when it is transmitted over a public network. Embedding capacity, security and robustness are the main research targets for a steganography technique.

Click Here to Download FULL ABSTRACT

4. DCT Based Recognition of Human Iris Patterns for Biometric Identification

ABSTRACT
               Iris recognition has been acknowledged as one of the most accurate biometric modalities because of its high recognition rate. The accuracy and reliability of the system make it superior to other existing biometric systems such as face and fingerprint recognition. In this paper, a brief description of biometrics and iris recognition technology is given, along with the steps involved in its implementation. Iris recognition using the 2-D DCT consists of four steps: segmentation, normalization, feature extraction and matching. The segmentation and normalization steps are implemented as the mid-term work of this paper, using the Canny edge detector, the circular Hough transform and Daugman's rubber sheet model; these algorithms help obtain well-segmented and normalized images at high speed. Further, in the feature extraction and matching steps, the 2-D DCT and the Hamming distance will be used to store each iris image as a biometric template. The template is compared by Hamming distance with the other templates stored in a database until a matching template is found; if no match is found, the subject remains unidentified.
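The matching step can be illustrated with the standard fractional Hamming distance over masked binary templates (a generic sketch; the paper's exact template format is not specified):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris templates,
    counting only bits that are valid (unmasked, e.g. not eyelid/eyelash)
    in both codes. 0 means identical, 1 means every valid bit differs."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0            # no comparable bits: treat as a non-match
    return int(((code_a ^ code_b) & valid).sum()) / n
```

A template whose distance to a stored template falls below a decision threshold (Daugman's work uses values around 0.32) is declared a match.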

Click Here to Download FULL ABSTRACT

5. An Implementation of Improved SPIHT Algorithm with DWT for Image Compression

ABSTRACT
              Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Of the many algorithms developed, SPIHT has received a great deal of attention ever since its introduction in 1996. SPIHT is simpler and more efficient than many existing compression techniques: it is a fully embedded codec, provides good image quality and high PSNR, is optimized for progressive image transmission, combines efficiently with error protection, and sorts information on demand, so the requirement for powerful error correction decreases from the beginning to the end of the bit stream. Still, it has drawbacks that need to be removed for its better use, and since its introduction it has undergone many changes from the original version. This paper presents a survey of various improvements to SPIHT in the areas of speed, redundancy, quality, error resilience, complexity, memory requirement and compression ratio.

Click Here to Download FULL ABSTRACT

6. An Efficient Approach For Number Plate Recognition By Neural Networks And Image Processing

ABSTRACT
             As roads become more pervasive and road transport develops rapidly, manual traffic management can no longer keep up with actual needs; the application of microelectronics, communications and computer technology in the transport sector has greatly improved traffic management efficiency, and automatic license plate recognition has been widely applied. Automatic license plate recognition is divided into five modules: pre-processing, edge extraction, license plate positioning, character segmentation and character recognition. The character recognition process consists of three parts: 1) correctly splitting the text region of the image; 2) correctly separating the individual characters; and 3) correctly identifying each single character. Each part is implemented in MATLAB, and finally the license plate of a car is identified; the issues arising in each part are analyzed concretely and handled. The vehicle license plate recognition system as a whole is made up of two parts: license plate positioning, which can be divided into an image pre-processing and edge extraction module and a plate positioning and segmentation module; and character recognition, which can be divided into a character segmentation and feature extraction module and a single-character recognition module.

Click Here to Download FULL ABSTRACT

7. A New Secured And Robust Dual Image Steganography Approach For Encrypted Image 

ABSTRACT
              In the last few years communication technology has improved, which increases the need for secure data communication. Many researchers have therefore devoted much time and effort to finding suitable ways of hiding data. Steganography is a technique for hiding important information imperceptibly: the art of hiding information in such a way that the presence of a hidden message cannot be detected. The process of using steganography in conjunction with cryptography is called dual steganography. This paper elucidates the basic concepts of steganography, its various types and techniques, and dual steganography, and surveys some of the research work done in the steganography field in the past few years. Steganography is a data hiding technique that conceals the very existence of data in the medium, providing secrecy for text or images so that an intended hacker or attacker is unable to sense the presence of the information. The word steganography, derived from Greek, literally means "covered writing".

Click Here to Download FULL ABSTRACT

8. A Novel Method For Medical Image Fusion By Integrating PCA And Wavelet Transform

ABSTRACT
             Image fusion is the technique of merging several images from multi-modal sources with complementary information to form a new image that carries all the common as well as complementary features of the individual images. With the recent rapid developments in imaging technologies, multi-sensor systems have become a reality in fields as wide-ranging as remote sensing, medical imaging, machine vision and military applications.
         Image fusion provides an effective way of reducing this increasing volume of information by extracting all the useful information from the source images. It creates new images that are more suitable for human/machine perception and for further image-processing tasks such as segmentation, object detection or target recognition in applications such as remote sensing and medical imaging. The overall objective is to improve the results by combining the DWT with PCA and non-linear enhancement. The proposed algorithm is designed and implemented in MATLAB using the Image Processing Toolbox. The comparison shows that the proposed algorithm provides a significant improvement over existing fusion techniques.
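The PCA half of the proposed combination can be sketched as follows (the paper's MATLAB pipeline also involves the DWT and non-linear enhancement; here only the PCA weighting step is shown). The fusion weights are the components of the leading eigenvector of the 2x2 covariance of the paired pixel values:

```python
import numpy as np

def pca_fuse(img1, img2):
    """PCA-based fusion of two registered grayscale images: weight each source
    by the corresponding component of the leading eigenvector of the 2x2
    covariance matrix of the (img1, img2) pixel pairs."""
    data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                      # 2x2 covariance of the two sources
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])                 # leading eigenvector
    w = v / v.sum()                         # normalize weights to sum to 1
    return w[0] * img1 + w[1] * img2
```

The source with higher variance (more structure) receives the larger weight, so the fused image is a convex combination biased toward the more informative input.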

Click Here to Download FULL ABSTRACT

9. A Robust Digital Image Watermarking Based On Joint DWT And DCT

ABSTRACT
          Authenticity and copyright protection are two major problems in handling digital multimedia. Image watermarking is a popular method for copyright protection: the discrete wavelet transform (DWT) performs a 2-level decomposition of the original (cover) image, and the watermark image is embedded in the low-low (LL) sub-band of the cover image. The inverse discrete wavelet transform (IDWT) is used to recover the original image from the watermarked image. The discrete cosine transform (DCT) converts the image into blocks of coefficients, which are reconstructed using the IDCT. In this paper we compare watermarking using the DWT and joint DWT-DCT methods, with performance analysis based on the PSNR and the similarity factor between the embedded and recovered watermarks.

Click Here to Download FULL ABSTRACT

10. An Improved Image Compression Using Embedded Zero-Tree Wavelet Encoding And Decoding Technique

ABSTRACT
             Image compression is very important for efficient transmission and storage of images. The embedded zerotree wavelet (EZW) algorithm is a simple yet powerful algorithm with the property that the bits in the stream are generated in order of their importance. Image compression can improve the performance of digital systems by reducing the time and cost of image storage and transmission without significant reduction of image quality. For image compression it is desirable that the chosen transform reduce the size of the resultant data set compared to the source data set. EZW is computationally fast and among the best image compression algorithms known today. This paper proposes a wavelet-based image coding technique. A large number of experimental results show that this method saves many bits in transmission and further enhances compression performance. The paper aims to determine the best threshold for compressing a still image at a particular decomposition level using the embedded zerotree wavelet encoder: the compression ratio (CR) and peak signal-to-noise ratio (PSNR) are determined for threshold values ranging from 6 to 60 at decomposition level 8.
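The two reported measures can be computed as follows. PSNR is standard; the compression-ratio proxy here simply counts the coefficients a threshold zeroes out, which is an assumption for illustration since the real EZW ratio also depends on the zerotree/entropy coding of the surviving coefficients.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between a source image and its
    decompressed version."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def compression_ratio(coeffs, threshold):
    """Crude capacity proxy: total coefficients divided by the number a
    threshold keeps (|c| >= threshold)."""
    kept = np.count_nonzero(np.abs(coeffs) >= threshold)
    return coeffs.size / max(kept, 1)
```

Sweeping the threshold (the paper uses 6 to 60) trades CR against PSNR: a higher threshold zeroes more coefficients, raising CR and lowering PSNR.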

Click Here to Download FULL ABSTRACT

11. A Secure And Robust High Quality Steganography Scheme Using Alpha Channel

ABSTRACT
              Steganography is gaining importance due to the exponential growth of potential computer users communicating secretly over the internet. It can be defined as the study of invisible communication, which deals with ways of hiding the existence of the communicated message. Data embedding is generally applied to communication signals, images, text, voice or multimedia content for copyright, military communication, authentication and many other purposes. In image steganography, secret communication is achieved by embedding a message into a cover image (the carrier) to generate a stego-image (the generated image carrying the hidden message). Steganography is the art and science of communicating in such a way that the presence of a message cannot be detected: the hidden message appears to be (or be part of) something else, such as images, articles, shopping lists, or some other cover text. For example, a hidden message may be written in invisible ink between the visible lines of a private letter. In this paper we propose steganography based on the alpha channel.
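A minimal sketch of the alpha-channel idea, hiding one payload bit in each pixel's alpha LSB while leaving the RGB channels untouched (the paper's exact embedding rule is not given here, so this is an illustrative scheme):

```python
import numpy as np

def embed_alpha(rgba, bits):
    """Hide one payload bit per pixel in the LSB of the alpha channel of an
    (H, W, 4) uint8 RGBA image."""
    out = rgba.copy()
    flat = out.reshape(-1, 4)                 # view: row per pixel, col per channel
    b = np.asarray(bits, dtype=np.uint8)
    flat[:len(b), 3] = (flat[:len(b), 3] & 0xFE) | b   # clear LSB, set payload bit
    return out

def extract_alpha(rgba, n):
    """Read back the first n hidden bits from the alpha LSBs."""
    return (rgba.reshape(-1, 4)[:n, 3] & 1).tolist()
```

Because the visible RGB data never changes, image-quality metrics computed on the color channels are unaffected; the trade-off is that formats without an alpha channel cannot carry the payload.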

Click Here to Download FULL ABSTRACT

12. Eigen Value Based Rust Defect Detection And Evaluation of Steel Coating Conditions

ABSTRACT
                PSNR is one of the most frequently and universally used methods for measuring image quality. In this paper we propose a methodology for assessing the coating condition of bridges from images. The defect recognition algorithm first converts the captured images to grey level; these grey-level images are grouped into defective and non-defective groups and then processed to plot a correspondence map, which measures how well the scene image matches the reference image. A straight line at 45° in the correspondence map indicates no defect in the scene image; in contrast, a nonlinear correspondence map indicates a defect (rust). The nonlinear shape of the grey-level distribution in the correspondence map is analyzed by calculating eigenvalues: two similar images produce a small eigenvalue (approximately zero), whereas it is distinctly large for dissimilar images. The PSNR determines the proportion of rust in the scene image relative to the reference image.
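In its simplest form, the eigenvalue test above reduces to the smaller eigenvalue of the covariance of paired gray levels: identical images put every (reference, scene) pair on the 45° line, so the scatter has no spread perpendicular to it. This is a sketch of the idea; the paper builds the correspondence map from grouped images.

```python
import numpy as np

def dissimilarity_eigenvalue(ref, scene):
    """Smaller eigenvalue of the 2x2 covariance of (reference, scene)
    gray-level pairs: near zero when the correspondence map is the 45-degree
    line (identical images), larger when the scatter spreads off the line."""
    pairs = np.stack([ref.ravel(), scene.ravel()]).astype(float)
    return float(np.linalg.eigvalsh(np.cov(pairs))[0])   # ascending order
```

Rust pixels shift scene gray levels away from their reference values, spreading the scatter off the diagonal and inflating this eigenvalue.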

Click Here to Download FULL ABSTRACT

13. Content Based Image Retrieval System Based On Dominant Color And Texture Features

ABSTRACT
                  The increased need for content-based image retrieval can be found in a number of different domains such as data mining, education, medical imaging, crime prevention, weather forecasting, remote sensing and management of earth resources. This paper presents content-based image retrieval using texture and color features, called WBCHIR (Wavelet-Based Color Histogram Image Retrieval). The texture and color features are extracted through the wavelet transform and the color histogram, and the combination of these features is robust to scaling and translation of objects in an image. The proposed system has demonstrated promising and faster retrieval on the WANG image database containing 1000 general-purpose color images.
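The color-histogram half of the feature set can be sketched as a quantized joint RGB histogram compared by histogram intersection (intersection is an assumed similarity measure for illustration; the paper's exact distance is not specified):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Joint RGB color histogram of an (H, W, 3) uint8 image, quantizing each
    channel to `bins` levels; normalized to sum to 1."""
    q = (img.astype(np.int64) // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())
```

Because the histogram discards pixel positions, the feature is inherently invariant to translation of objects, matching the robustness claim above.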

Click Here to Download FULL ABSTRACT

14. An Implementation of Data Hiding Technique Using LSB Based Audio Steganography

ABSTRACT
      A steganographic method for embedding textual information in an audio signal is presented here. A new fast algorithm is proposed that uses DCT-based audio compression to speed up the audio steganography algorithm. In the proposed method each audio signal is transformed into bits and the textual information is embedded in it. In the embedding process, each message character is first transformed into its equivalent binary form. The last 4 bits of this binary code are taken into consideration, and using the redundancy of the binary code a prefix of either 0 or 1 is applied. Control symbols in binary form are used to identify uppercase letters, lowercase letters, spaces and numbers. Using the proposed LSB-based algorithm, the capacity of the stego system to hide text increases. The performance is evaluated by comparing the output of the proposed strategy with well-known existing algorithms.

Click Here to Download FULL ABSTRACT

15. Audio Noise Reduction of Speech Signal Using Wavelet Transform

ABSTRACT
                   Noise in communication channels is disturbing, and recovering the original signal from the channel without any noise is a very difficult task. This is achieved by denoising techniques that remove noise from a digital signal. Many denoising techniques have been proposed for removing noise from digital audio signals, but their effectiveness is limited. In this paper, an audio denoising technique based on the wavelet transform is proposed. Denoising is performed in the transform domain, and the improvement is achieved by grouping closer blocks. The technique exposes the finest details contributed by the set of blocks while protecting the vital features of each individual block. The blocks are filtered and replaced in their original positions; since the grouped blocks overlap, a different estimate is obtained for every element. The denoising strategy and its efficient implementation are presented in full detail. The implementation results reveal that the proposed technique achieves state-of-the-art denoising performance in terms of both signal-to-noise ratio and audible quality.
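The transform-domain step can be illustrated with one level of the Haar wavelet and soft thresholding of the detail coefficients (a minimal sketch; the paper's block-grouping strategy is beyond this illustration):

```python
import numpy as np

def haar1d(x):
    """One level of the orthonormal 1-D Haar transform (len(x) must be even):
    first half = pairwise sums (approximation), second half = differences (detail)."""
    return np.concatenate([(x[::2] + x[1::2]) / np.sqrt(2),
                           (x[::2] - x[1::2]) / np.sqrt(2)])

def ihaar1d(c):
    """Exact inverse of haar1d."""
    n = len(c) // 2
    out = np.empty_like(c)
    out[::2] = (c[:n] + c[n:]) / np.sqrt(2)
    out[1::2] = (c[:n] - c[n:]) / np.sqrt(2)
    return out

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients, keep the approximation:
    small details (mostly noise) are zeroed, large ones shrunk toward zero."""
    c = haar1d(np.asarray(signal, dtype=float))
    n = len(c) // 2
    d = c[n:]
    c[n:] = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)
    return ihaar1d(c)
```

In practice the threshold is tied to the noise level (e.g. the universal threshold, sigma * sqrt(2 ln N)) and the transform is applied over several levels.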

Click Here to Download FULL ABSTRACT

16. An Efficient Brain Tumor Detection Algorithm Using Watershed And Segmentation Methods

ABSTRACT
                   Image processing is an active research area in which medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for medical diagnosis, and a brain tumor is a serious, life-altering condition. Image segmentation plays a significant role in image processing as it helps in the extraction of suspicious regions from medical images. In this paper we propose segmentation of brain MRI images using the K-means clustering algorithm followed by morphological filtering, which avoids the mis-clustered regions that can inevitably form after segmentation, for detection of the tumor location. We also present a system based on Gabor-filter enhancement and feature extraction using texture-based segmentation and a self-organizing map (SOM), a form of artificial neural network (ANN), to analyze the extracted texture features. The SOM determines which texture features can classify benign, malignant and normal cases. Watershed segmentation is used to separate the cancerous region from the non-cancerous region.
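The K-means step on pixel intensities can be sketched as follows (a minimal 1-D version; the morphological filtering, Gabor enhancement and SOM stages are not shown):

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20, seed=0):
    """Cluster the pixel intensities of a grayscale image with k-means and
    return (label image, cluster centers)."""
    rng = np.random.default_rng(seed)
    x = img.astype(float).ravel()
    centers = rng.choice(np.unique(x), size=k, replace=False)  # distinct seeds
    for _ in range(iters):
        # assign every pixel to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centers
```

On MRI slices, the cluster with the highest center typically corresponds to the bright (suspected tumor) region, which is then cleaned up by the morphological filtering mentioned above.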

Click Here to Download FULL ABSTRACT

17. Real Time Face Detection And Tracking Through Webcam

ABSTRACT
                   Face detection, the task of localizing faces in an input image, is a fundamental part of any face processing system. The aim of this paper is to present a review of the various methods and algorithms used for face detection. Three different algorithms, the Haar cascade, AdaBoost and template matching, are described, and some applications of face detection are included. We also present a methodology for robust face detection in a real-time environment. Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images: it detects facial features and ignores anything else, such as buildings, trees and bodies. Human face perception is currently an active research area in the computer vision community.

Click Here to Download FULL ABSTRACT

18. An Improved Image Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution Technique

ABSTRACT
                 One of the important techniques in digital image processing is image enhancement. Contrast enhancement is used to enhance images for viewing or for further analysis. The main idea behind contrast enhancement techniques is to increase contrast while preserving the original brightness of the image. In this paper a contrast enhancement technique is proposed that first segments the image histogram recursively and then applies gamma correction with a weighting distribution (GCWD). The proposed technique is an improvement over the GCWD technique and aims for better contrast enhancement and brightness preservation.
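The gamma-correction-with-weighting-distribution step can be sketched as follows, following the published AGCWD formulation (the recursive histogram segmentation the paper adds on top is not shown; alpha is an illustrative parameter):

```python
import numpy as np

def agcwd(img, alpha=0.5):
    """Adaptive gamma correction with weighting distribution: smooth the
    histogram with a power-law weight, then apply a per-level gamma equal to
    1 minus the weighted CDF, so dark regions get the strongest lift."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()
    # weighting distribution: compress the dynamic range of the pdf
    w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min() + 1e-12)) ** alpha
    cdf = np.cumsum(w) / w.sum()
    levels = np.arange(256) / 255.0
    lut = np.round(255 * levels ** (1 - cdf)).astype(np.uint8)
    return lut[img]
```

Since the per-level exponent 1 - cdf lies in [0, 1], every output level is at least the input level, so the transform brightens without clipping; the weighting keeps a dominant histogram peak from flattening the curve.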

Click Here to Download FULL ABSTRACT

19. Automated Feature Extraction For Detection of Diabetic Retinopathy In Fundus Images

ABSTRACT
                  Diabetes is a group of metabolic diseases in which a person has high blood sugar. Diabetic retinopathy (DR) is caused by abnormalities in the retina due to insufficient insulin in the body, and delayed detection can lead to sudden vision loss, so diabetic patients require regular medical checkups for effective timing of sight-saving treatment. Automated analysis of diabetic retinopathy is a continuous and stimulating research area: a completely automated screening system can effectively reduce the burden on specialists and save cost as well as time. Noise and other disturbances that occur during image acquisition may lead to false detection, and this is overcome by various image processing techniques. Different features are then extracted, which serve as guidelines to identify and grade the severity of the disease, and based on the extracted features the retinal image is classified as normal or abnormal. In this paper, we present a detailed study of the various screening methods for diabetic retinopathy; many researchers have attempted to improve their accuracy, productivity, sensitivity and specificity.

Click Here to Download FULL ABSTRACT

20. Real Time Implementation of Moving Object Tracking In Video Processing Through Webcam

ABSTRACT
             Real-time object detection and tracking is an important task in various computer vision applications. For robust object tracking, factors such as object shape variation, partial and full occlusion, and scene illumination variation create significant problems. We introduce an object detection and tracking approach that combines Prewitt edge detection and the Kalman filter: the target object's representation and its location prediction, the two major aspects of object tracking, are achieved using these algorithms. Here, real-time object tracking is implemented through a webcam. Experiments show that our tracking algorithm can track a moving object efficiently under object deformation and occlusion and can track multiple objects.
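The location-prediction half can be sketched with a constant-velocity Kalman filter over the per-frame detections (a generic textbook formulation; the process noise q and measurement noise r values are illustrative assumptions, not the paper's tuned parameters):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter for a 2-D point track.
    State: [x, y, vx, vy]; measurements: iterable of (x, y) detections.
    Returns the filtered position estimate for every frame."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt        # constant-velocity transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1  # we observe position only
    Q = q * np.eye(4); R = r * np.eye(2)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                                 # predict state forward
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)       # correct with the detection
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

During occlusion the update step is simply skipped, and the predict step alone carries the track forward, which is how the filter bridges missed detections.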

Click Here to Download FULL ABSTRACT

21. A Contrast Enhancement Technique Using Histogram Equalization Methods On Images

ABSTRACT
            Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. Its basic disadvantage is that it changes the brightness of the image. To overcome this drawback, various HE methods have been proposed; such a method preserves the brightness of the output image but does not give it a natural look. To overcome this problem, the present paper uses multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and has been analyzed using both objective and subjective assessment.

Click Here to Download FULL ABSTRACT

22. Design of Face Recognition System Using Principal Component Analysis

ABSTRACT
           The face is considered one of the most important visual objects for identification. Recognition of a human face is complex and converts the face into a mathematical model. Face recognition is an efficient and sophisticated method for security systems; it is a biometric technology with a wide range of applications such as ATM machines, prevention of voter fraud, criminal identification and human-computer interaction. This paper describes the building of a face recognition system using the principal component analysis (PCA) method. PCA is a method for reducing the data dimension of the image: it breaks the face images into a small set of characteristic feature images. These "eigenfaces" are the principal components of the initial data set of face images. Recognition is done by comparing the input face image with the faces in the data set through distance measures. The face recognition system is developed in MATLAB and recognizes the input face from a set of training faces.
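The eigenface pipeline above can be sketched in a few lines (the paper's system is in MATLAB; this NumPy version computes the principal components via SVD of the mean-centered face matrix):

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: (num_images, num_pixels) matrix, one flattened face per row.
    Returns the mean face and the top eigenfaces (principal components)."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]            # each row is one eigenface

def project(face, mean, eigenfaces):
    """Weights of a face in eigenface space."""
    return eigenfaces @ (face - mean)

def recognize(face, mean, eigenfaces, gallery_weights):
    """Nearest neighbour in eigenface-weight space: returns the index of the
    closest gallery face and the distance to it."""
    w = project(face, mean, eigenfaces)
    d = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(d)), float(d.min())
```

Comparing short weight vectors instead of full images is what makes the method fast; a distance threshold on the best match can additionally reject unknown faces.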

Click Here to Download FULL ABSTRACT

23. Palmprint Recognition And Feature Extraction System Using Gabor Filters

ABSTRACT
                   Palmprint recognition, one of the important branches of biometric technology, is among the most reliable and successful identification methods. In this paper, several existing palmprint recognition algorithms are studied and analyzed, a simple approach to preprocessing and ROI extraction is discussed, and the available databases are analyzed so that the most efficient of them can be used for the development of the proposed system. Biometric systems authenticate identity by measuring physiological and/or behavioral characteristics, so the two main categories of biometrics are 'physiological' and 'behavioral'. The physiological category includes physical human traits such as the palmprint, hand shape, eyes and veins; the behavioral category includes human movements such as hand gestures, speaking style and signature.
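The Gabor feature extractor named in the title can be sketched as the real part of a 2-D Gabor kernel: a Gaussian envelope modulated by a cosine wave at a chosen orientation, which responds strongly to palm lines at that orientation (the parameter values here are illustrative, not the paper's tuned filter bank):

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, wavelength=6.0, gamma=1.0):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a cosine wave
    at orientation theta. Returned zero-mean so flat regions give no response."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    k = g * np.cos(2 * np.pi * xr / wavelength)
    return k - k.mean()
```

A palmprint system typically convolves the ROI with a small bank of such kernels at several orientations and binarizes the responses into a feature code.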

Click Here to Download FULL ABSTRACT

24. An Efficient Approach For Brain Tumor Detection In MRI Images Using Rough Set Theory

ABSTRACT
                A brain tumor is an uncharacteristic growth of brain cells within the brain or the spinal canal. Brain tumor detection is a very challenging problem due to the complex structure of the brain: the exact boundary must be detected for proper treatment by segmenting necrotic and enhanced cells. Magnetic resonance imaging (MRI) is an ideal source that provides exhaustive information about brain anatomy. The aim of this work is to offer a framework for detecting brain tumors in MRI using rough set theory.

Click Here to Download FULL ABSTRACT

Email id: notesplanetprojects@gmail.com



CONTACT US

Prof. Roshan P. Helonde
Mobile / WhatsApp: +917276355704
Email: roshanphelonde@rediffmail.com


Notes Planet Copyright 2018. Powered by Blogger.