Abstract

This project covers two core speech technologies, speech synthesis and speech recognition, which are supported through the Java Speech API. The project requires a speech engine designed to handle either speech input or speech output; a speech synthesizer and a speech recognizer are both instances of a speech engine. The Java Speech API defines a standard, easy-to-use, cross-platform software interface to state-of-the-art speech technology. Speech recognition gives computers the ability to listen to spoken language and determine what has been said. Conversely, speech synthesis performs the reverse process, producing synthetic speech from text generated by an application, an applet, or a user; it is often referred to as text-to-speech technology. Considerable effort has been devoted to increasing user interactivity with the computer through the mouse and speech synthesis. Operations such as a media player, a text reader, and file search have been implemented to give the application the feel of an operating system for blind users. A GUI has been developed that works similarly to the Windows GUI. Each window contains buttons, each assigned a unique task. When the mouse moves over a button, the synthesizer announces which button the mouse is over and what will happen if the user clicks it. Full use of the mouse's functionality has been incorporated so that the application can be operated with only one or two peripherals.
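The hover-announcement behaviour described above can be sketched as follows. This is a minimal illustration, not the authors' code: the `Synthesizer` interface here is a stand-in for the Java Speech API's `javax.speech.synthesis.Synthesizer`, which requires an external implementation (such as FreeTTS) that is assumed rather than shown, and the button and action names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class HoverAnnouncer {
    /** Stand-in for a JSAPI speech synthesizer (assumed interface). */
    interface Synthesizer {
        void speakPlainText(String text);
    }

    private final Synthesizer synth;

    HoverAnnouncer(Synthesizer synth) {
        this.synth = synth;
    }

    /**
     * Called when the mouse enters a button (e.g. from a Swing
     * MouseListener's mouseEntered): announces the button's name
     * and the action a click would perform.
     */
    void onMouseEnter(String buttonName, String actionDescription) {
        synth.speakPlainText(
            "Mouse is over " + buttonName + ". Click to " + actionDescription + ".");
    }

    public static void main(String[] args) {
        // For demonstration, record announcements instead of speaking them.
        List<String> spoken = new ArrayList<>();
        HoverAnnouncer announcer = new HoverAnnouncer(spoken::add);
        announcer.onMouseEnter("Media Player", "open the media player");
        System.out.println(spoken.get(0));
    }
}
```

In the real application, `Synthesizer` would be obtained from the JSAPI `Central` factory and the handler wired to each button's mouse-enter event.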

Article Details

How to Cite
R. Deepan, S. Ramya, S. Praveen Raj, & V. Manimaran. (2018). Human Computer Interaction for Visually Impaired People. International Journal of Intellectual Advancements and Research in Engineering Computations, 6(2), 1748–1752. Retrieved from https://ijiarec.com/ijiarec/article/view/728