PORTABLE CAMERA-BASED ASSISTIVE TEXT AND PRODUCT LABEL READING FROM HAND-HELD OBJECTS FOR BLIND PERSONS
| Author(s) | : | Jay Narayanrao Paul, Dr. V. N. Nitnaware |
| Institution | : | Student of Dept. of ENTC Engineering, Dr. D. Y. Patil School of Engineering Academy, Ambi, Pune |
| Published In | : | Vol. 3, Issue 6 — June 2016 |
| Page No. | : | 406-410 |
| Domain | : | Engineering |
| Type | : | Research Paper |
| ISSN (Online) | : | 2348-4470 |
| ISSN (Print) | : | 2348-6406 |
This paper presents a camera-based product information reader to help blind persons read information about products. The camera acts as the main vision sensor: it captures the product's label image, the image is processed in MATLAB to separate the label from the rest of the scene, the product is identified, and the recognized product information is read aloud via optical character recognition (OCR). We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging on hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method that defines a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians-based background subtraction technique. Within the extracted ROI, text localization and recognition are performed to acquire the text information. To automatically localize text regions in the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an AdaBoost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text is output to blind users as speech. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also evaluated on a dataset collected with ten blind persons to assess the effectiveness of the system's hardware. We explore user-interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
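The motion-based ROI extraction described above can be sketched as follows. This is a minimal illustration, not the authors' MATLAB implementation: it replaces the full mixture-of-Gaussians background model with a single running Gaussian (mean and standard deviation) per pixel, flags pixels deviating by more than k standard deviations as foreground, and runs on synthetic frames standing in for the camera video of a shaken object.

```python
import numpy as np

def extract_object_roi(background_frames, object_frames, k=4.0, min_pixels=50):
    """Simplified motion-based ROI extraction.

    The paper uses a mixture-of-Gaussians background model; this sketch
    keeps a single Gaussian per pixel, estimated from background-only
    frames, and marks pixels in the last frame that deviate by more than
    k standard deviations as moving foreground.
    Returns the foreground bounding box (x, y, w, h), or None.
    """
    stack = np.stack([f.astype(np.float64) for f in background_frames])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1.0   # variance floor for perfectly flat regions
    last = object_frames[-1].astype(np.float64)
    foreground = np.abs(last - mean) > k * std
    if foreground.sum() < min_pixels:   # too little motion -> no reliable ROI
        return None
    ys, xs = np.nonzero(foreground)
    # Bounding box of all foreground pixels defines the region of interest.
    return (xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

# Synthetic demo: a static dark background, then a bright 30x30 "hand-held
# object" shaken in front of the camera.
H, W = 120, 160
background_frames = [np.full((H, W), 30, np.uint8) for _ in range(15)]
object_frames = []
for t in range(15):
    frame = np.full((H, W), 30, np.uint8)
    x, y = 60 + int(5 * np.sin(t)), 40 + int(5 * np.cos(t))  # shaking motion
    frame[y:y + 30, x:x + 30] = 220                          # the object
    object_frames.append(frame)

roi = extract_object_roi(background_frames, object_frames)
```

In the full system, the text localization (AdaBoost over stroke-orientation gradient features), binarization, and OCR stages would then operate only inside the returned ROI instead of the whole frame.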
Jay Narayanrao Paul, Dr. V. N. Nitnaware, “Portable Camera-Based Assistive Text and Product Label Reading from Hand-Held Objects for Blind Persons”, International Journal of Advance Engineering and Research Development (IJAERD), Vol. 3, Issue 6, pp. 406-410, June 2016.