I am designing an installation in which I would like a live stream of any speech heard in the room rendered as text on a screen. It will look like an audio vortex, with text spiralling outwards along a variety of lines. This will be running on a Mac mini.
I am currently trying out ofxSpeech, which seems to return only fully recognized words, and in the Apple Speech Recognition Manager reference I cannot find any way to access the ‘pre-recognized’ (interim) information. ( https://developer.apple.com/library/mac/documentation/Carbon/Reference/Speech_Recognition_Manager/ )
What I am hoping for is something similar to what is being used here: https://speechlogger.appspot.com/en/ (Google Chrome only, at the moment).
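For context, speechlogger appears to be built on Chrome's Web Speech API with interim results enabled, which is what delivers the ‘pre-recognized’ partial hypotheses as they arrive. A minimal sketch of what I've been trying in that direction (the `collectTranscript` helper and the `vortex` element id are my own names, and the recognition part only runs in Chrome):

```javascript
// Concatenate every hypothesis (final and interim) from a
// SpeechRecognition results list into one running string.
// (Helper name is my own, not part of the API.)
function collectTranscript(results) {
  let text = '';
  for (let i = 0; i < results.length; i++) {
    text += results[i][0].transcript;
  }
  return text;
}

// Browser-only wiring (Chrome exposes webkitSpeechRecognition).
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new window.webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening indefinitely
  recognition.interimResults = true;  // deliver partial hypotheses early
  recognition.onresult = (event) => {
    // event.results holds all results so far, interim ones included,
    // so the screen updates while a phrase is still being spoken.
    document.getElementById('vortex').textContent =
      collectTranscript(event.results);
  };
  recognition.start();
}
```

The interim hypotheses arrive while a phrase is still being spoken, which seems to be roughly the responsiveness speechlogger shows.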
Think of it as a stream-of-consciousness transcript of all the audio heard in the room (it doesn’t necessarily have to make sense, but it should be fairly responsive).
I am about to try Google Speech ( https://github.com/gillesdemey/google-speech-v2/ ), following their suggestion of linking it to Google Chrome so that I can get an unlimited number of requests for the duration of the show.
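As I understand that repo, the v2 endpoint is an unofficial API that takes raw FLAC audio via POST and needs a Chromium API key (reportedly rate-limited without the Chrome linkage mentioned above). A sketch of how I'd build the request, with the key and audio file as placeholders and `buildSpeechRequest` being my own helper name:

```javascript
// Assemble the request described in gillesdemey/google-speech-v2.
// apiKey is a placeholder -- a real key comes from the Chromium project.
function buildSpeechRequest(apiKey, lang, sampleRate) {
  const params = new URLSearchParams({
    output: 'json',
    lang: lang,
    key: apiKey,
  });
  return {
    url: `https://www.google.com/speech-api/v2/recognize?${params}`,
    method: 'POST',
    // The endpoint expects FLAC audio with the sample rate declared here.
    headers: { 'Content-Type': `audio/x-flac; rate=${sampleRate}` },
  };
}

// Example (key and file name are placeholders):
const req = buildSpeechRequest('YOUR_API_KEY', 'en-us', 16000);
// The FLAC body would then be posted, e.g.:
//   fetch(req.url, { method: req.method, headers: req.headers,
//                    body: fs.readFileSync('audio.flac') });
```

One thing that worries me for a gallery setting is that this is a request/response API rather than a streaming one, so I would have to chop the room audio into short chunks myself to keep it feeling responsive.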
Is there a different technique I should be considering, though? Is there some part of Apple’s speech developer tools that will give me a pre-recognized stream of speech-to-text elements I can use as a visual representation of the audio spoken in a gallery setting?
Thank you very much for your help in this.