BMNR6 AAT
“IKnow”: Helping those with hearing impairment
communicate fluently
ABSTRACT
Hearing impairment is one of the most common disabilities
in the world and poses risks in daily life because of
problems in communication. The goal of this study was to
create an assistive technology for hearing-impaired people.
Design needs were summarized from prior literature and
semi-structured interviews. The study promotes a design
that lets hearing-impaired people communicate fluently and
with good social manners. With the help of voice and text
translation, a conversation can proceed, and language
techniques - abbreviation and vowel removal - together with
frequency-based word prediction help keep the conversation
fluent. AR technology is chosen to maintain face-to-face
communication. Although the design is limited by current
technology and could not be tested with a high-fidelity
prototype, it raises a new design requirement: hearing-aid
devices need to maintain fluent, well-mannered conversation
and be able to deal with multiple simultaneous sounds.
To improve the design, future studies should focus on the
placement of the interface so that wearable devices do not
restrict the activities of people with hearing loss.
Five semi-structured interviews with hearing-impaired
people were conducted, and the findings were combined with
prior literature to develop a new prototype. “IKnow”, an
augmented-reality glasses device, translates between voice
and text to help target users communicate fluently with
people without hearing disabilities. Unlike existing devices,
“IKnow” relies on visual rather than audio technology to
help hearing-impaired people. Besides, the device focuses on
solving the manner and fluency problems in communication.
An evaluation was conducted in the form of semi-structured
interviews after an experiment.
INTRODUCTION
Hearing loss is one of the most common disabilities in the
world [1]. 466 million people suffer from disabling hearing
loss, which poses risks in daily life [2].
One of the main impacts of hearing loss is on the person’s
communication ability [2]. Communication difficulty is one
of the causes of poor interpersonal relationships [3] and can
even lead to social isolation [4]. This tendency generates
adverse effects on health and wellbeing [5]. Therefore, it is
worth studying how to improve the communication quality of
hearing-impaired people.
Although the problem can be alleviated by assistive
listening devices (ALDs), augmentative and alternative
communication (AAC), and text telephones or
telecommunications devices (TTY), these measures rarely
solve it completely. The devices cannot handle a variety of
sounds, as in group meetings [1]. Besides, the audience of a
hearing-impaired speaker may find it hard to keep the
conversation fluent because the devices need time to
translate [6]. Another problem is that conversation etiquette
is undermined by the devices: for example, some ALDs
require hearing-impaired listeners to turn an ear toward
others, which is an impolite manner in conversation [7, 8].
Therefore, the aim of this study is to propose a new device
that helps hearing-impaired listeners maintain a fluent
conversation with good manners and recognize who is
speaking when multiple sounds are present.
LITERATURE REVIEW
Many measures have been proposed to maintain communication
between hearing-impaired listeners and normal-hearing
people. The devices can be divided into three categories:
Text telephone or telecommunications devices (TTY)
and phone
In the past, TTY helped people with hearing loss hear and
speak [5]. It comprises a keyboard to type sentences and a
screen to display the conversation [9]. TTY has now largely
been replaced by the phone [10]: spoken words are translated
into text, and what hearing-impaired people want to say can
be input through handwriting or typing [11]. Those whose
speaking ability is intact can rely on this spoken-translation
system to hold a conversation.
TTY works for a short exchange, such as asking the way.
However, it is not convenient in a long conversation,
because typing and translation take a long time [5]. The
audience of a hearing-impaired speaker may feel that the
talk is time-consuming and tiring to follow. Besides, because
the devices cannot distinguish between different sounds, the
translation may become disordered in a group meeting [11].
Assistive listening devices (ALDs)
ALDs amplify the sound coming directly into the ear
[12]. Representative devices are the hearing aid and the
cochlear implant. Some ALDs are also used in large
facilities, such as hearing loop systems, frequency-modulated
(FM) systems, and infrared systems [13].
One advantage of ALDs is that they can separate voice
from a noisy background [14]. However, the effect is limited:
they can only deal with the loudest voice the device detects
[15]. To avoid this condition, speakers are required to say
the listener’s name first and to take turns speaking [16]. On
the other hand, users cannot tell who is speaking. To
communicate with hearing-impaired listeners, speakers are
asked to speak slowly and to pronounce each word clearly
and separately [16]. Another problem is that ALDs require
users to turn an ear toward speakers [15]; the reduced eye
contact generates a feeling of being disrespected.
For these reasons, in a conversation between hearing-
impaired people and people without hearing disability,
pauses arise and the conversation is slower than one
between two people without hearing disability.
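The loudest-voice limitation described above can be illustrated with a toy selection rule (a hypothetical sketch, not the algorithm of any real hearing aid): the device measures the energy of each competing source and amplifies only the strongest one, masking the others.

```python
# Toy sketch of the "loudest voice wins" behaviour described for ALDs.
# The speaker names and the RMS energy measure are illustrative assumptions.

def rms(samples):
    """Root-mean-square energy of a list of audio samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def select_amplified(sources):
    """Given {speaker: samples}, return the speaker the device would
    amplify: simply the one with the highest RMS energy."""
    return max(sources, key=lambda name: rms(sources[name]))

sources = {
    "Alice": [0.1, -0.2, 0.15, -0.1],   # quieter speaker
    "Bob":   [0.5, -0.6, 0.55, -0.4],   # louder speaker
}
print(select_amplified(sources))  # -> Bob; Alice's voice is masked
```

Under this rule a quieter but more relevant speaker is simply lost, which is exactly the group-meeting problem the participants describe later.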
Augmentative and alternative communication (AAC)
Some people with congenital hearing impairment can neither
hear nor speak [17]. The former condition can be addressed
by ALDs and TTY, the latter by AAC.
AAC relies on speech-generating technology to produce
computer-generated voice output. Users input text on
portable electronic devices, and the text is then spoken by a
synthetic voice [18]. Besides text, hand gestures, eye
tracking, head pointing, and joysticks are also accessible
input methods [13]. Spelling and word prediction are also
used to increase conversation speed.
As a communication method, AAC gives people a channel to
express their needs and desires and to take part in decisions
more fully [19]. However, as a single communication
method, AAC cannot help hearing-impaired listeners hear
what others say. Therefore, people with hearing loss who
cannot speak may need to use AAC and ALDs together to
hold a conversation.
Design features
To overcome the problems generated by current devices,
a number of design suggestions have been listed based on
existing literature [Table 1].
The present paper analyzes some accessible assistive
devices and some design needs based on the literature. Then,
five semi-structured interviews were conducted to uncover
the challenges and needs of users. The design was developed
through a user-centered process.
Table 1 Design needs from existing literature
Interaction design needs
Visual simplicity: The interface should not distract speakers and should make it easy to see the translated conversation and who is speaking [11][23]
Operation simplicity: The device should allow easy and fast information entry [15]
Function needs
Different communication methods: The device should translate spoken words into text and convert input into speech [24]
Bridging pauses: The device should handle nonverbal pauses in communication to maintain a fluent conversation [25]
Conversation manner: In a conversation, the speakers should be face to face and have suitable eye contact [25]
Detecting multiple sounds: When multiple sounds are present, the device should discriminate who is speaking [24]
Appearance needs
Outlook: The device should not make hearing-impaired people feel singled out or embarrassed [26]
Facility needs [27]
Sensor: A speaker to output the electronic voice; a receiver to pick up incoming voice
Interaction: A device to input what hearing-impaired people want to say; a screen to show the conversation
Power: A rechargeable battery to keep the device working
RESEARCH
Challenges and needs may be affected by various factors,
including duration of hearing loss, age, quality of life, and
personality [20]. In the research process, how to
communicate with target users is itself a challenge [21,
22], and self-report is a problematic way to collect design
needs [20]. Therefore, interviews were used in the present
paper. The semi-structured interviews involved four
individuals with partial hearing loss and one person with
complete hearing loss, to help understand the challenges and
needs of people with hearing loss.
Face-to-face communication was used in the interviews, and
the following question areas were probed:
1) Which kinds of devices are currently used?
2) What are the challenges when using the devices?
3) What is the preferred way to input and display the
conversation?
4) How do you join a group conversation?
5) What are the challenges to participating in social activities?
A recorder was used to record the interviews. Thematic
analysis and an affinity diagram [Figure 1] were then used to
identify prominent themes: devices, device barriers, social
barriers, and potential opportunities.
From the analysis, the findings were summarized as follows:
How to deal with noise
People with hearing loss feel frustrated in noisy environments
because they cannot hear others clearly and the hearing aid
does not work.
“So that was really disappointed. That’s all public record”
- P4 (partly hearing loss)
“If I could pick up my phone and look at the people spoke
to the words. …… tiring working out what people are
saying and making sure you’ve got it right”
- P3 (partly hearing loss)
Attitude to social contexts
In social meetings, hearing-impaired listeners do not want
to ask others to repeat sentences. To avoid asking for
repetition, some tend to smile and make no comments, even
though they know they are participants in the meeting.
“So I just want to no, we can smile, not making any
comments”
- P3 (partly hearing loss)
Could only hear one voice at a time
The hearing aid can only attend to one person speaking. If
two people speak at the same time, the voices overlap and
the device amplifies the louder one. Therefore, what
hearing-impaired listeners hear is sometimes mixed, and all
they can do is pick out the more relevant voice.
“Whereas two people might have said something the same
time, but you heard you picked up something that just
sounded might be more relevant”
- P4 (partly hearing loss)
Figure 1 Thematic analysis and Affinity diagram
Lip reading
Lip reading is one of the most effective forms of assistance
for people with hearing loss. They use it to tell who is
speaking and to guess what was said when they cannot hear
clearly. With lip reading, they sometimes do not even need
sound.
“…… you use the lip reading to know who you speak, or
you guess, or try to understand what they speak. I mean,
sometimes I don't need sound. ”
- P3 (partly hearing loss)
ESTABLISHED REQUIREMENTS
From the participants, a set of core design needs was
established:
1) The device should let the conversation flow fluently.
2) The device should make the conversation fit
communication manners.
3) The device should show who is speaking.
4) The device interface should not interfere with lip reading.
5) The device should look like an everyday tool and should
not make users feel embarrassed.
6) The device interface should compensate for translation
delay.
7) The device should be usable in noisy conditions.
8) The device should allow the speaking environment to be set.
The design follows the requirements found in the
interviews and combines them with the design needs
summarized from the literature [Table 1].
“IKNOW” PROTOTYPE DESIGN
Based on the design requirements, a wearable device -
“IKnow” [Figure 2] - was designed for voice and text
conversion. It is based on the principle of Google Glass to
display the conversation, and a tracking sensor is used to
trace hand movements [Figure 3].
Figure 2 Idea of glasses
Figure 3 Idea of ring
Idea selection
Initially, both a wearable device and a phone application
were considered. An application, however, captures the
user’s attention and prevents eye contact, which conflicts
with the design need to “make the conversation fit
communication manners”.
Earphones and glasses were the two core forms considered.
Brainstorming and sketching were used to generate initial
ideas. After selection based on the design requirements, a
plan was proposed [Figures 2-3]. Glasses can show the
conversation visually, a function that earphones cannot
achieve. Also, glasses allow users to see the faces of
speakers instead of turning an ear toward them.
Function introduction
Electronic components
An electronic prototype was built with Arduino [Figure 4].
A sound sensor was chosen to capture the conversation. Two
touch controls change the speaking environment and the
volume. An integrated circuit board controls the augmented
reality (AR) display. A 3.7 V, 2.1 Wh integrated lithium cell
provides rechargeable power to the glasses.
Figure 4 Electronic prototype
Input text
“IKnow” allows users to input text by handwriting. Users
wear a ring [Figure 3] containing a tracking sensor that
detects what the user writes. To increase writing speed,
abbreviation and vowel removal [28] are adopted, and
frequency-based word sampling [28] predicts the next word.
Some personal writing features can be pre-installed.
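The two text-entry accelerators mentioned above can be sketched as follows. This is a minimal illustration assuming a vowel-dropping shorthand and a simple bigram frequency table; the actual scheme in [28] may differ.

```python
# Sketch of two text-entry accelerators: vowel removal (shorthand)
# and frequency-based next-word prediction. Both are illustrative
# assumptions, not the exact techniques used by "IKnow".

VOWELS = set("aeiou")

def shorten(word):
    """Keep the first letter and drop all later vowels,
    e.g. 'please' -> 'pls'."""
    return word[0] + "".join(c for c in word[1:] if c not in VOWELS)

def predict_next(prev_word, bigram_counts):
    """Return the most frequent follower of prev_word in the
    bigram table, or None if the word has never been seen."""
    followers = bigram_counts.get(prev_word)
    if not followers:
        return None
    return max(followers, key=followers.get)

bigrams = {"thank": {"you": 42, "god": 3}}   # toy frequency table
print(shorten("please"))               # -> pls
print(predict_next("thank", bigrams))  # -> you
```

In a real system the bigram table would be learned from the user's conversation history, which is also how the "personal writing features" could be pre-installed.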
Fluent conversation
“IKnow” can bridge translation pauses and keep the
conversation flowing. If there are pauses between texts or
the device delays during translation, conjunctions are
inserted [29].
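The pause-bridging idea can be sketched as a simple rule: when the gap before a translated segment exceeds a threshold, the device emits a filler conjunction so the listener perceives a continuous utterance. The threshold value and the filler word below are illustrative assumptions.

```python
# Sketch of pause bridging: insert a filler conjunction when the
# translation lag exceeds a threshold. Values are assumed.

FILLER = "well,"        # illustrative filler conjunction
PAUSE_THRESHOLD = 1.5   # seconds; illustrative

def bridge(segments):
    """segments: list of (text, delay_before_seconds).
    Returns the spoken stream with fillers covering long delays."""
    out = []
    for text, delay in segments:
        if delay > PAUSE_THRESHOLD:
            out.append(FILLER)
        out.append(text)
    return " ".join(out)

print(bridge([("I think", 0.2), ("we should go", 2.0)]))
# -> "I think well, we should go"
```

A production version would vary the filler and cap how often it fires, since a repeated filler would itself sound unnatural.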
Display conversation
AR is used to display the conversation. The projector is
located on the edge of the frame, and users see the
conversation in the interface. The handwriting is also shown
in the interface, so users can check whether they have
expressed themselves correctly. Common and preset phrases
appear on the right side. Gestures are used to edit and alter
preset words and to correct errors [Figure 5].
Figure 5 Gestures
Interface
When “IKnow” is turned on, the welcome page in Figure 6 is
shown. Then, through Bluetooth, the ring connects to the
glasses automatically [Figure 7]. Users can choose whom
they want to talk with by selecting the person directly, and
the glasses will capture that person’s voice [Figure 8]. The
conversation is displayed in front of the user [Figure 9].
Besides, users can set gestures [Figure 10] and special
spelling habits [Figure 11] themselves.
Figure 6 Welcome page
Figure 7 Connect Bluetooth
Figure 8 Choose who they want to talk with
Figure 9 Conversation interface
Figure 10 Gesture set
Figure 11 Spelling set
USER FEEDBACK ON “IKNOW”
Feedback on “IKnow” was gathered from four participants
with partial hearing loss.
Testing setup
The electronic prototype was larger than the printed
prototype. Therefore, the electronic prototype demonstrated
the working principle, while the printed prototype was used
to collect user feedback. Users wore the printed prototype,
and the interfaces were printed out and shown directly to
them. The test lasted 60 minutes in total, with each
participant taking around 15 minutes. A semi-structured
interview was conducted after the experiment to collect
information about the user experience of “IKnow”.
Task
Participants were instructed to carry out the following three
experiments:
1) Input and gesture test: Two participants with speaking
ability stood in front of a screen. They wrote and spoke
out what they wanted to write. What they said was
transcribed into text by the researcher to simulate
automatic input. Some errors were introduced, and
participants had to use gestures to correct them.
2) Noisy environment: Speakers wrote down in advance
what they wanted to say, to simulate the interface.
While speaking, they held the paper in front of them to
simulate the AR display. Three participants took part in
this experiment twice, once using spoken language and
once using writing.
3) Interface test: The interface was built in Sketch and
linked with InVision, and shown on a small laptop.
Participants were asked to use their fingers to control
the interfaces.
Throughout the process, a person read aloud what
participants wrote, simulating the speaker.
Data collected
The experiment fed into the semi-structured interviews.
Feedback mainly came from the interviews, with some
results gathered from observation during the experiment.
The interviews focused on four parts: experience, visual
simplicity, challenges in use, and social interaction.
Observation focused on whether the conversation flowed
fluently and whether users could input gestures easily.
Results
Interface
Participants said the interface was cool and the navigation
was clear. However, one participant noted that the interface
has a dark background and is set to display directly in front
of the eyes, which negatively affects natural viewing; where
the interface should be displayed will be tested in the future.
Besides, one short-sighted participant worried about whether
they could see the interface clearly.
Gestures
Participants agreed that the gestures are useful, but one
suggested that an introduction to the gestures could be shown
in the interface, and another wondered whether their
shorthand writing could be understood by the translator.
Visual simplicity
Participants thought the interface was a bit complex but
acceptable for a technical interface. They found it
worthwhile and said it generated a feeling of living in the
future.
Challenges
The interface overlaps to some extent with the real world.
When users read through conversation texts, they may
ignore real-world conditions, which could be dangerous.
Another challenge is that users do not know whether the
device is reliable; if there were evidence showing the error
rates of “IKnow”, they might try it.
Social interaction
Users felt relieved when using the interface: it translates
what others say, so they do not need to ask for repetition
again and again. However, users have to read during the
conversation, which is an extra workload. One participant
mentioned that when using the interface, he had to focus on
the text completely and had no time to make eye contact
with others.
DISCUSSION
Contribution
“IKnow” shows the conversation on an AR interface,
providing a new approach to fluent conversation. Besides,
compared with a phone application, which requires speakers
to talk into the phone, “IKnow” is a more natural form of
communication. This helps satisfy the psychological need of
people with hearing loss not to feel different from others.
Also, it allows users to have a face-to-face conversation
instead of focusing on a screen or turning an ear toward
other speakers. This is more consistent with conversational
manners and reduces feelings of impoliteness in the
audience of hearing-impaired speakers.
Limitations
“IKnow” is limited by current technology. Some features,
such as detecting multiple sounds, are hard to achieve at
present. The usability test is therefore not comprehensive,
and some functions could not be tested.
The research mainly focused on people with partial hearing
loss. Most of them do not need the text-to-voice function;
they are therefore not familiar with using text to
communicate and may be biased against this feature. Failing
to gain enough target-user approval may lead to less robust
design results.
Also, the time spent on feedback was constrained.
Participants may not have had sufficient time to familiarize
themselves with the device, so the results may not be
reliable.
Future research
Future research should enlarge the participant pool and
extend the testing time. Furthermore, a higher-quality
prototype is suggested. As for function, the specific
placement of the interface, which is shown in front of the
eyes, needs to be considered: it should not block eye contact
or affect lip reading.
CONCLUSION
This research proposes augmented-reality glasses,
“IKnow”, which translate between voice and text to help
target users communicate fluently with people without
hearing disabilities, and evaluates the design in terms of
experience, visual simplicity, challenges in use, and social
interaction.
REFERENCES
[1] Ohlenforst, B., Zekveld, A. A., Jansma, E. P., Wang, Y.,
Naylor, G., Lorens, A., Lunner, T. and Kramer, S. E. Effects
of hearing impairment and hearing aid amplification on
listening effort: A systematic review. Ear and Hearing, 38, 3
(2017), 267.
[2] WHO. Deafness and hearing loss, 2019.
[3] Dalton, D. S., Cruickshanks, K. J., Klein, B. E., Klein,
R., Wiley, T. L. and Nondahl, D. M. The impact of hearing
loss on quality of life in older adults. The Gerontologist, 43,
5 (2003), 661-668.
[4] Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K. and
Mehl, A. L. Language of early- and later-identified children
with hearing loss. Pediatrics, 102, 5 (1998), 1161-1171.
[5] Findlay, R. A. Interventions to reduce social isolation
amongst older people: where is the evidence? Ageing and
Society, 23, 5 (2003), 647-658.
[6] Hirvonen, M. I. and Tiittula, L. M. How are translations
created? Using multimodal conversation analysis to study a
team translation process. Linguistica Antverpiensia, New
Series: Themes in Translation Studies (2018).
[7] Kalra, S. Managing Difficult Conversation. Journal of
Social Health and Diabetes, 6, 02 (2018), 104-105.
[8] Noddings, N. Conversation as moral education. Journal
of Moral Education, 23, 2 (1994), 107-118.
[9] Zafrulla, Z., Etherton, J. and Starner, T. TTY phone:
direct, equal emergency access for the deaf. City, 2008.
[10] Roos, C. and Wengelin, Å. The Text Telephone as an
Empowering Technology in the Daily Lives of Deaf People
- A Qualitative Study. Assistive Technology, 28, 2 (2015).
[11] Abdallah, E. E. and Fayyoumi, E. Assistive
Technology for Deaf People Based on Android Platform.
Procedia Computer Science, 94 (2016), 295-301.
[12] Zanin, J. and Rance, G. Functional hearing in the
classroom: assistive listening devices for students with
hearing impairment in a mainstream school setting.
International Journal of Audiology, 55, 12 (2016), 723-729.
[13] Ifukube, T. Sound-based assistive technology : support
to hearing, speaking, and seeing /Tohru Ifukube. Cham,
Switzerland : Springer, 2017.
[14] Packer, L. Top five assistive listening devices. City,
2015.
[15] Ali, A., Hickson, L. and Meyer, C. Audiological
management of adults with hearing impairment in Malaysia.
International Journal of Audiology, 56, 6 (2017), 408-416.
[16] Hagemeyer, A. L., Friends of Libraries for Deaf, A.
and Library for Deaf, A. Communicating with hearing
people / [produced by Friends of Libraries for Deaf Action
Alice Hagemeyer]. Washington, D.C. : Library for Deaf
Action, Washington, D.C.], 1980.
[17] Adegbiji, W. A., Olajide, G. T., Olatoke, F., Olajuyin,
A. O., Olubi, O., Ali, A., Eletta, P. A. and Aluko, A. A.
Preschool children hearing impairment: Prevalence,
diagnosis and management in a developing country.
International Tinnitus Journal, 22, 1 (2018), 60-65.
[18] Schlosser, R., Shane, H., Allen, A., Abramson, J.,
Laubscher, E. and Dimery, K. Just-in-Time Supports in
Augmentative and Alternative Communication. J Dev Phys
Disabil, 28, 1 (2016), 177-193.
[19] Soto, G. and Clarke, M. T. Effects of a Conversation-
based Intervention on the Linguistic Skills of Children with
Motor Speech Disorders who Use Augmentative and
Alternative Communication. Journal of Speech, Language,
and Hearing Research , 60 pp. 1980-1998. (2017) (2017).
[20] The Lancet. Hearing loss: an important global health
concern. The Lancet, 387, 10036 (2016), 2351.
[21] Newman, W. C., Weinstein, E. B., Jacobson, P. G. and
Hug, A. G. Test-Retest Reliability of the Hearing Handicap
Inventory for Adults. Ear and Hearing, 12, 5 (1991), 355-
357.
[22] Ventry, M. I. and Weinstein, E. B. The Hearing
Handicap Inventory for the Elderly: a New Tool. Ear and
Hearing, 3, 3 (1982), 128-134.
[23] (!!! INVALID CITATION !!! ).
[24] Pichora-Fuller, K. M., Kramer, E. S., Eckert, A. M.,
Edwards, W. Y. B., Hornsby, E. B., Humes, L. L., Lemke,
A. U., Lunner, S. T., Matthen, L. M., Mackersie, L. C.,
Naylor, L. G., Phillips, L. N., Richter, L. M., Rudner, L. M.,
Sommers, L. M., Tremblay, L. K. and Wingfield, L. A.
Hearing Impairment and Cognitive Energy: The Framework
for Understanding Effortful Listening (FUEL). Ear and
Hearing, 37 Suppl 1 (2016), 5S-27S.
[25] Deliang, W. Deep learning reinvents the hearing aid.
IEEE Spectrum, 54, 3 (2017), 32-37.
[26] Daniel Fok, L. S., Mary Beth Jennings, Margaret
Cheesman Universal accessibility and usability for hearing:
Considerations for design. Journal of the Canadian
Acoustucal Association, 35, 3 (2007), 84-85.
[27] Hartley, D., Golding, M., Rochtchina, E., Mitchell, P.
and Newall, P. Use of hearing aids and assistive listening
devices in an older australian population. Journal of the
American Academy of Audiology, 21, 10 (2010), 642-653.
[28] Mackenzie, I. S. Human-Computer Interaction: An
Empirical Research Perspective. Elsevier Science, 2013.
[29] US Patent Issued to Broadcom on Nov. 8 for "Mobile
communication device with game application for use in
conjunction with a remote mobile communication device
and methods for use therewith" (California Inventor). City,
2016.