Deep Dive into Enhancement of Image Resolution by Binarization

Image segmentation is one of the principal approaches in image processing, and choosing the most appropriate binarization algorithm for each case is an interesting problem in itself. In this paper we compare several binarization algorithms and propose methodologies for their validation. We develop two novel algorithms for determining threshold values for the pixel values of a grayscale image. Performance is estimated on test images using evaluation metrics for the binarization of textual and synthetic images. By using optimal thresholding techniques for binarization, we achieve better image resolution.
Image segmentation is an essential task in the fields of image processing and computer vision. It is the process of partitioning a digital image into a finite number of meaningful regions, locating their boundaries and making the image easier to analyze [8]. The simplest method for image segmentation is thresholding, an important technique in image segmentation, enhancement and object detection. The output of the thresholding process is a binary image in which a gray level value of 0 (black) indicates a pixel belonging to print, a legend, a drawing or a target, and a gray level value of 1 (white) indicates the background. The main complication of thresholding in document applications arises when the associated noise process is non-stationary. Factors that make thresholding difficult include ambient illumination, the variance of gray levels within the object and the background, insufficient contrast, and an object shape and size not commensurate with the scene.
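The global thresholding described above can be sketched in a few lines. The following is a minimal illustrative example, not one of the paper's proposed algorithms; the threshold value 128 is an arbitrary choice for demonstration. It follows the convention above: 0 (black) for foreground, 1 (white) for background.

```python
def binarize(image, threshold):
    """Global thresholding: pixels at or below the threshold become
    0 (foreground: print, drawing, target); pixels above it become
    1 (background), matching the convention described in the text."""
    return [[0 if px <= threshold else 1 for px in row] for row in image]

# A tiny grayscale patch (values 0-255); threshold 128 is illustrative.
gray = [
    [ 30, 200,  45],
    [210,  60, 220],
    [ 50, 240,  70],
]
binary = binarize(gray, 128)  # -> [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

The entire difficulty, of course, lies in choosing the threshold; the fixed value here is exactly what the optimum-thresholding techniques of this paper are meant to replace.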
The lack of objective measures for assessing the performance of thresholding algorithms is another handicap. Many methods have been reported in the literature [10,11,15,19,26,27,28]. Thresholding can extract the object from the background by grouping intensity values according to a threshold value. It can also divide the image into patches, each of which is thresholded with a value that depends on the patch contents. To reduce the effects of noise, common practice is to smooth a boundary before partitioning. The binarization technique is intended to serve as a primary phase in various manuscript analysis, processing and retrieval tasks, so unique manuscript characteristics, such as textual properties, graphics, line drawings and complex mixtures of layout semantics, should be included in the requirements. At the same time, the technique should remain simple while accommodating all the demands of document analysis. The threshold evaluation techniques are adapted to the properties of textual and non-textual areas, with special tolerance for, and detection of, the basic defect types that are usually introduced into images. The outcome of these techniques is a threshold value proposed for each pixel; these values are used by a threshold control module to assemble the final binarization result. The approach is to examine the manuscript image surface in order to decide on the binarization method required. The binarization algorithms are to produce an optimal threshold value for each pixel [28]. We can therefore verify which algorithm is best suited to obtaining the optimum threshold value. Two such algorithms are then discussed in detail. The paper is structured as follows. Section 1 presents the introduction, Section 2 the literature review, Section 3 the problem description, and Section 4 the two algorithms. Section 5 presents the results and discussion, and Section 6 the concluding remarks.
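The idea of producing a threshold value for each pixel, rather than one global value, can be sketched with a simple local-mean rule. This is a generic illustration of per-pixel (adaptive) thresholding, not the paper's method; the neighbourhood radius and bias parameters are assumptions introduced for the example.

```python
def local_mean_threshold(image, radius=1, bias=0):
    """Per-pixel thresholding: each pixel is compared against the mean
    of its (2*radius+1)^2 neighbourhood, clipped at the image border.
    Output follows the paper's convention: 0 = foreground, 1 = background.
    The radius and bias parameters are illustrative, not from the paper."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            neigh = [image[j][i] for j in ys for i in xs]
            t = sum(neigh) / len(neigh) - bias  # per-pixel threshold
            row.append(0 if image[y][x] <= t else 1)
        out.append(row)
    return out

# An isolated dark pixel in a bright surround is picked out as foreground.
patch = [[200, 200, 200], [200, 50, 200], [200, 200, 200]]
result = local_mean_threshold(patch)  # -> [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
```

Because the threshold adapts to the patch contents, this kind of rule tolerates non-stationary noise and uneven illumination better than a single global threshold.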
A number of surveys of thresholding have been conducted. Lee, Chung and Park [15] carried out a comparative analysis of five global thresholding methods and advanced several useful criteria for evaluating thresholding performance. Weszka and Rosenfeld [12] defined several evaluation criteria. Palumbo, Swaminathan and Srihari [24] addressed document binarization by comparing three methods, whereas Trier and Jain [18] used the most extensive comparison basis (19 methods) in the context of character segmentation from complex backgrounds. Sahoo et al. [27] surveyed nine thresholding algorithms and compared their relative performance. Glasbey [5] pointed out the relationships and performance differences between 11 histogram-based algorithms in a broad statistical study. Choosing the most suitable algorithm is not a simple procedure, and the assessment of these algorithms is difficult to substantiate, since there is no objective way to compare their results. Leedham et al. [9] compared five binarization algorithms using precision and recall analysis of the resulting foreground words. He et al. [13] compared six algorithms by evaluating their effect on end-to-end word recognition performance using a commercial OCR engine. Sezgin and Sankur [16] described 40 thresholding algorithms, categorized them according to the information content they use, and measured and ranked their performance in two different image contexts. The problem is that in almost every case these studies use results from subsequent tasks in the document processing hierarchy to estimate the performance of the binarization algorithm. For historical documents, whose quality hinders recognition, and sometimes word segmentation as well, this method of evaluation can prove problematic. On the other hand, we need a different, more dire