Classification of digital cervical images acquired during visual inspection with acetic acid (VIA) is an important step in automated image-based cervical cancer detection. Many algorithms have been developed that classify cervical images by first extracting hand-crafted mathematical features and then applying a learning algorithm to them. Deciding on a suitable feature set and learning algorithm is a complex task. Convolutional neural networks (CNNs), on the other hand, self-learn the most suitable hierarchical features from the raw input image. In this paper, we demonstrate the feasibility of using a shallow CNN to classify patches of cervical images as cancerous or non-cancerous. We used cervix images acquired with an Android device after the application of 3%–5% acetic acid in 102 women. Of these, 42 cervix images belonged to the VIA-positive category (pathologic) and 60 to the VIA-negative category (healthy controls). A total of 275 image patches of 15 × 15 pixels were manually extracted from VIA-positive areas, and we treated these patches as positive examples. Similarly, 409 image patches were extracted from VIA-negative areas and labeled as VIA negative. These image patches were classified using a shallow CNN composed of one convolutional layer, one rectified linear unit (ReLU) layer, one pooling layer, and two fully connected layers. A classification accuracy of 100% was achieved with this shallow CNN.
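The abstract names the layer sequence (convolution, ReLU, pooling, two fully connected layers) but not its hyperparameters. Below is a minimal NumPy sketch of a forward pass through such a shallow CNN on a 15 × 15 RGB patch; the filter count (16), kernel size (3 × 3), pooling window (2 × 2), and hidden width (64) are illustrative assumptions, not values from the paper, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 2-D convolution: x is (H, W, C_in), w is (kH, kW, C_in, C_out)."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # contract the 3x3xC_in window against every filter at once
            out[i, j, :] = np.tensordot(x[i:i + kh, j:j + kw, :], w,
                                        axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    return np.maximum(x, 0)

def maxpool2(x):
    """Non-overlapping 2x2 max pooling; odd borders are cropped."""
    H, W, C = x.shape
    H2, W2 = H // 2, W // 2
    return x[:H2 * 2, :W2 * 2, :].reshape(H2, 2, W2, 2, C).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical parameter shapes (random, untrained):
w_conv = rng.standard_normal((3, 3, 3, 16)) * 0.1   # 16 filters of 3x3x3
w_fc1 = rng.standard_normal((6 * 6 * 16, 64)) * 0.1  # pooled 6x6x16 -> 64
w_fc2 = rng.standard_normal((64, 2)) * 0.1           # 64 -> 2 classes

def forward(patch):
    h = relu(conv2d(patch, w_conv))  # 15x15 -> 13x13x16
    h = maxpool2(h)                  # 13x13x16 -> 6x6x16
    h = h.reshape(-1)                # flatten to 576
    h = relu(h @ w_fc1)              # first fully connected layer
    return softmax(h @ w_fc2)        # VIA-positive vs VIA-negative scores

patch = rng.random((15, 15, 3))      # stand-in for a 15x15 RGB image patch
probs = forward(patch)
print(probs)
```

The output is a two-element probability vector over the VIA-positive and VIA-negative classes; in a real system the weights would be learned by backpropagation on the labeled patches.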
Number of pages: 11
Journal: Critical Reviews in Biomedical Engineering
Publication status: Published - 01-01-2018
All Science Journal Classification (ASJC) codes
- Biomedical Engineering