The simple texterface would be a feature for a digital audiobook reader: a slider, scrollbar, or touch panel along the bottom or side of a touch screen that displays the text of an audiobook. As you run your finger along its length, a synthesized voice reads each word aloud and a highlight is applied to it on screen.
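To make that interaction concrete, here is a minimal sketch of the position-to-word mapping, assuming a browser environment with the Web Speech API; the element ids, the sample text, and the highlightWord helper are placeholders, not part of any existing reader app.

```typescript
// Minimal texterface sketch: assumes a browser with the Web Speech API
// and two hypothetical elements, a range input #texterface and a
// text container #book-text. The sample text is a placeholder.
const bookText = "Once upon a time there was a reader who listened with a fingertip.";
const words = bookText.split(/\s+/);
let currentIndex = -1;

const slider = document.getElementById("texterface") as HTMLInputElement;

slider.addEventListener("input", () => {
  // Map the finger's position along the slider to a word index.
  const fraction = Number(slider.value) / Number(slider.max);
  const index = Math.min(words.length - 1, Math.floor(fraction * words.length));
  if (index !== currentIndex) {
    currentIndex = index;
    highlightWord(index);                                  // visual highlight
    speechSynthesis.cancel();                              // stop the previous word
    speechSynthesis.speak(new SpeechSynthesisUtterance(words[index]));
  }
});

// Hypothetical helper: re-renders the text with the current word marked.
function highlightWord(index: number): void {
  const display = document.getElementById("book-text")!;
  display.innerHTML = words
    .map((w, i) => (i === index ? `<mark>${w}</mark>` : w))
    .join(" ");
}
```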
On the other side of the screen could be another slider that can be assigned to incrementally change any of the following: the number of words the first panel is divided into, the volume, pitch, or tone of the voice, the strength or subtlety of automatically selected images that pop up in a corner of the screen to illustrate the meaning of the text, or other settings like that. That panel could be called the metatexterface.
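A rough sketch of how the metatexterface could bind one control to several adjustable parameters; the parameter names and numeric ranges here are only illustrative assumptions.

```typescript
// Metatexterface sketch: a second range input (#metatexterface) whose
// value is assigned to exactly one reading parameter at a time.
type MetaParameter = "chunkSize" | "rate" | "pitch" | "imageStrength";

const settings = { chunkSize: 1, rate: 1.0, pitch: 1.0, imageStrength: 0.5 };
let assigned: MetaParameter = "chunkSize";

const metaSlider = document.getElementById("metatexterface") as HTMLInputElement;

metaSlider.addEventListener("input", () => {
  const t = Number(metaSlider.value) / Number(metaSlider.max); // 0..1
  switch (assigned) {
    case "chunkSize":
      settings.chunkSize = 1 + Math.round(t * 9);   // words per texterface step, 1..10
      break;
    case "rate":
      settings.rate = 0.5 + t * 1.5;                // maps onto SpeechSynthesisUtterance.rate
      break;
    case "pitch":
      settings.pitch = t * 2;                       // maps onto SpeechSynthesisUtterance.pitch
      break;
    case "imageStrength":
      settings.imageStrength = t;                   // how prominent the popup images are
      break;
  }
});
```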
This feature could be built into an application like the Read2Go app for iOS, a digital audiobook reader targeted at students with learning disabilities.
This idea is inspired by a refreshable Braille display that plays a synthesized vocalization of each word as the user runs a finger across its pin-based surface. So if you don't know Braille but you understand English, you can run your finger across the Braille, hear each word, and listen to the whole book that way, and after a couple of books you will know Braille. So why aren't blind kids learning Braille this way? I think it's because the displays cost too much to justify buying one for each student, although prices may be coming down soon. But I think the idea of bundling one kind of learning into another should be liberated from that specific application and applied more generally.
So if you already like listening to books on tape, why couldn't the English words of an audiobook you were listening to be displayed visually in Spanish as you listened? Even though the grammar wouldn't be right, you would be bundling the learning of vocabulary into the process of listening to the story.
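A tiny sketch of that word-for-word display, with a made-up glossary and player callback standing in for a real bilingual word list and a real audio engine.

```typescript
// Vocabulary-bundling sketch: as the English audio plays, the on-screen
// text shows a word-for-word Spanish gloss. The glossary and the
// onWordSpoken hook are hypothetical stand-ins.
const glossary: Record<string, string> = {
  the: "el", dog: "perro", runs: "corre", fast: "rápido",
};

function glossWord(english: string): string {
  const key = english.toLowerCase().replace(/[^a-z]/g, "");
  return glossary[key] ?? english;   // fall back to English when no gloss exists
}

// Hypothetical callback fired each time the player reaches a new word.
function onWordSpoken(english: string): void {
  const display = document.getElementById("book-text")!;
  display.textContent = glossWord(english);
}
```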
Another aspect of this could be slowly morphing the visual mapping of the audio at a rate controlled by a feedback loop into the system: somehow measure how quickly the person is learning the current mapping, then change it at the ideal rate for an increasing amount of learning. So you could slowly change the visual mapping of an audiobook from different fonts at first, to different languages, to a system of images, to any kind of visual mapping of the audio, and then measure how well the person retained it.
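One way that loop could work, as a purely illustrative sketch: treat the morph rate as the output of a simple proportional controller driven by some measured retention score. The quiz mechanism, target level, gain, and bounds are all assumptions.

```typescript
// Feedback-loop sketch: adjust how fast the visual mapping morphs,
// based on a measured retention score (say, the fraction of quick
// spot checks the reader gets right).
let morphRate = 0.01;            // fraction of the mapping changed per chapter
const targetRetention = 0.8;     // aim for roughly 80% correct on spot checks
const gain = 0.05;

function updateMorphRate(measuredRetention: number): number {
  // Retaining more than the target: morph faster; less: slow down.
  morphRate += gain * (measuredRetention - targetRetention);
  morphRate = Math.min(0.2, Math.max(0.001, morphRate));
  return morphRate;
}

// Example: after a chapter where the reader got 9 of 10 spot checks right.
updateMorphRate(0.9);
```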