PORTABLE CAMERA-BASED ASSISTIVE TEXT AND PRODUCT LABEL READING FROM HAND-HELD OBJECTS FOR BLIND PERSONS

Main Article Content

Miss. Nanaware Sonali M.
Prof. Mantri D.B.

Abstract

We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging on hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize text regions within the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels. Text characters in the localized text regions are recognized by off-the-shelf optical character recognition (OCR) software, and the recognized text is output to blind users as speech. A proof-of-concept prototype is evaluated on a dataset collected from 10 blind persons to assess the effectiveness of the system's hardware. We also explore user interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
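The ROI-extraction step described above relies on per-pixel Gaussian background modelling. As a rough illustration of that idea (not the authors' implementation), the sketch below keeps a single Gaussian per pixel rather than a full mixture, updates it online, and flags pixels that deviate beyond a few standard deviations as foreground; the function names and parameters are hypothetical.

```python
# Simplified per-pixel background model in the spirit of
# mixture-of-Gaussians background subtraction. For illustration only:
# a real MoG subtractor maintains K weighted Gaussians per pixel.

def make_model(frame, init_var=225.0):
    """Initialise a [mean, variance] pair per pixel from the first frame.

    `frame` is a flat list of grayscale intensities.
    """
    return [[float(v), init_var] for v in frame]

def update_and_segment(model, frame, alpha=0.05, k=2.5):
    """Update the model with a new frame and return a foreground mask
    (1 = moving/foreground pixel, 0 = background)."""
    mask = []
    for i, v in enumerate(frame):
        mean, var = model[i]
        d = v - mean
        # A pixel is foreground when its deviation exceeds k sigma.
        fg = 1 if d * d > (k * k) * var else 0
        # Adapt the model only on background pixels, so the shaken
        # object stays foreground long enough to define the ROI.
        if not fg:
            model[i][0] = mean + alpha * d
            model[i][1] = var + alpha * (d * d - var)
        mask.append(fg)
    return mask
```

In a full pipeline, the foreground mask accumulated over several frames of shaking would be used to crop the object ROI, which is then passed to text localization and OCR.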

Article Details

How to Cite
Miss. Nanaware Sonali M., & Prof. Mantri D.B. (2021). PORTABLE CAMERA-BASED ASSISTIVE TEXT AND PRODUCT LABEL READING FROM HAND-HELD OBJECTS FOR BLIND PERSONS. JournalNX - A Multidisciplinary Peer Reviewed Journal, 1–4. Retrieved from https://repo.journalnx.com/index.php/nx/article/view/760