Hearing Through Vibrations: Exploring
the Effects of Vibrotactile Feedback on
Deaf Individuals’ Emotion Recognition
in Music


Author Keywords
Wearable technology; music; emotion; deaf; SubPac.
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g.,
HCI): Human-centered computing ~ Accessibility
technologies • Human-centered computing ~ Usability
testing
Introduction
Listening to music is a social and therapeutic
experience, as music conveys emotions through pitch,
timbre, frequency, rhythm, and mode (major/minor)
[3], which can encourage self-regulation and mood
enhancement [27]. However, at least 11 million people in the UK are affected by hearing loss [1, 2], of which roughly 900,000 individuals have severe to total (profound) hearing loss, with thresholds above 81 decibels [1, 5], meaning they cannot engage with music in the same way as the hearing population.
Nevertheless, many profoundly deaf individuals
experience music with the help of sign language
interpreters, subtitles [22], hearing aids and/or
cochlear implants [13]. However, these means frequently fail to provide an accurate awareness of all music features and of music’s emotional impact and meaning [17, 13]. Being unable to fully engage with music and related social activities places deaf people at a higher risk of developing mental health problems, such as depression [1]. Additionally, as more people are expected to experience deafness in coming years (e.g. due to an ageing population and exposure to loud music) [1, 2], the need for assistive technologies that convey music through other senses is growing.
New developments in vibrotactile technology have opened up new possibilities for deaf music-lovers to engage with sound [22]. Vibrotactile vests or backpacks like the SubPac M2X (Figure 1), which use subwoofers to transduce sound frequencies into vibrations [31], have become a staple at many concerts and festivals for the deaf, such as Glastonbury’s Deaf Zone or London’s DeafRave [16]. However, because research has focused predominantly on deaf people’s enjoyment of prototype systems, it remains unclear whether the vibrations supplied by commercially available devices can provide deaf individuals with an understanding of music-related emotions. This is the question the present study explores.

Figure 1: Pictures of the SubPac M2X [31].
Literature Review
Emotions and Music
Music can convey emotions, which are defined by
valence (positive/negative) and arousal
(excitement/boredom) [37, 26]. For instance, Vieillard
and colleagues [36] exposed hearing participants to 56
happy, angry, sad and peaceful piano excerpts, each
defined by a particular sound frequency, mode, tempo
and pitch. Likert-scale ratings indicated that happiness and anger both encouraged high arousal, although anger was perceived as negative whereas happiness was perceived as positive. Equally, sad and peaceful music both encouraged low arousal but differed in valence.
However, profoundly deaf individuals cannot derive
emotions from music without external help. Whilst sign
language, subtitles, hearing aids and cochlear implants
convey some of the mentioned music features [13, 22],
many deaf individuals reject cochlear implants in particular, perceiving them as an invasion of deaf culture [20, 30].
Aids and implants also tend to have poor spectral and
frequency resolutions, conveying pitch and timbre
inaccurately [17]. Also, they occasionally cause
uncomfortable experiences with loud sounds,
particularly when human voices are involved [13].
Vibrotactile Devices
Vibrotactile feedback, which is processed in deaf
individuals’ auditory cortices [28], has since been
suggested as a replacement for sound. So far, vibrating
floors have allowed deaf individuals to dance in
synchrony with music with no performance differences
from hearing people [33, 29]. Also, vibrating chairs were
positively received [16, 22] and allowed deaf
filmmakers to edit music into films [4]. Findings by
Araujo and colleagues [3], who explored a vibrating
chair and bracelet, further showed that deaf
participants could identify whether a music excerpt was
related to a video based on the vibrations’ rhythm and
energy.
However, only two studies have at least partially explored whether vibrotactile feedback can convey emotions to deaf users. On the one hand, Wilson and Brewster [37] showed that music-unrelated vibrations applied to hearing participants’ hands resulted in different arousal and valence ratings: high-frequency vibrations of 200-300 Hertz (Hz) were perceived as negative, and long-lasting vibrations (1000 ms) with high amplitudes (intensity) increased arousal.
On the other hand, Karam and colleagues [18]
investigated the effectiveness of a vibrotactile chair,
called “The Model Human Cochlea”. It converts music
features into vibrations via eight coils, aligned at the
chair’s back. Artificially deafened participants were
interviewed and rated their enjoyment, arousal and
valence in response to happy, angry, fearful and sad
music. As expected, they enjoyed happy music
significantly more than angry or fearful music.
However, happiness and sadness were not
distinguishable on the valence-spectrum and arousal
ratings for sadness did not differ from other emotions.
Thus, although some distinctions between music were
possible, participants seemed unable to truly feel
emotions when using the chair.
Research Limitations
Given these contrasting findings and the scarcity of existing research [16], no firm conclusions can yet be drawn about the effectiveness of vibrotactile feedback for conveying emotions. Additionally, the latter two studies involved
hearing participants, limiting their generalizability to
deaf individuals, who tend to have a higher tactile
sensitivity [23] and can be expected to differentiate
music better when exposed to vibrations.
Furthermore, existing studies have not investigated
whether participants were able to recognize specific
emotions after experiencing vibrations, and, thus,
whether emotions were not only subconsciously
experienced but also consciously perceived [36].
Finally, researchers have used varying methods to
explore different and unfinished prototypes rather than
commercially available devices [11]. Consequently,
comparisons between studies/devices are difficult and
provide limited insights into the effectiveness of
available technologies, such as the SubPac M2X, which
is popular among deaf music-enthusiasts and at
concerts/festivals for the deaf [16].
Research Question
Therefore, this study will use the SubPac M2X to
investigate the following question: To what extent can profoundly deaf individuals experience and recognize music-related emotions via vibrotactile feedback? It is expected that:
1. Happy and angry music conditions will receive
higher arousal-ratings than sad and peaceful music.
2. Happy and peaceful music will be rated as more
positive than sad and angry music.
3. Participants will recognize the intended emotion
more often than unintended emotions in each condition
(e.g. recognize anger most frequently in angry music).

The study will also explore the number of correctly
recognized emotions as well as participants’ certainty
across all music conditions. Implications for future
research and design solutions will be proposed.
Method
Design
The study used a within-subjects design. The
independent variable was the music’s conveyed
emotion with four conditions: happy, angry, sad, and
peaceful. The first two dependent variables were
arousal and valence. The third dependent variable was
participants’ emotion recognition with four levels:
happy, angry, sad, and peaceful.
Participants
Using convenience sampling, sixteen profoundly deaf
British-born adults (12 male, 4 female), aged 19 to 63
(mean = 39; SD = 14.08), were recruited via social
media advertisements posted to relevant community sites. All
but one participant wore hearing aids or cochlear
implants. Individuals with vision impairments, epilepsy,
heart problems, anxiety, or sensitivity to vibrations
were excluded from participation.
Materials
The vibrotactile backpack, SubPac M2X, was used to
transduce sound frequencies of 1-200 Hz into vibrations
with a default intensity of 50%. It was connected via a
3.5mm stereo aux cable to a MacBook Air laptop, which
used iTunes to play twenty piano excerpts ranging
between 11 and 16 seconds each. The excerpts were
chosen from Vieillard and colleagues’ [36] study because they were the excerpts recognized correctly most frequently by their participants. Four further excerpts from their study
were used as demonstrations in an introductory
practice session. Happy music was composed in major
mode with medium-high pitch and tempo (92-196
Metronome Markings (MM)) whereas angry music was
played in minor mode with dissonant, out-of-key tunes
and varying tempo (44-172 MM). Sad excerpts were in minor mode with slow tempo (40-60 MM), and peaceful music had a major mode with moderate tempo (54-100 MM). For practice/introductory sessions, a pair of
JBL DuetNC headphones was used.
A questionnaire, adapted from previous research [36, 37], asked participants to circle, for each excerpt, which of the four emotions they had recognized. It also included three 9-point Likert scales per excerpt, assessing participants’ arousal (9 = excited, 1 = bored), valence (9 = positive, 1 = negative), and certainty about the emotion they recognized (9 = certain, 1 = uncertain). Pictures representing each Likert point were adapted from Korres, Jensen and Eid [19]. Participants also filled out an open-ended qualitative questionnaire about their SubPac experience, its strengths and weaknesses, and design recommendations; it is not expanded upon further here.
Procedure
Participants received an information sheet via e-mail and were given a paper copy before signing a consent form on the day of the experiment. The 30-minute study was conducted in a meeting room at University College London, and each participant took part individually. As part of a practice trial, the introductory excerpts were played using the SubPac M2X and headphones. These excerpts were then repeated without headphones, after participants had switched off their hearing aids/cochlear processors, to familiarize them with the device and the study.
Figure 2: Experimental setup.

Participants were then presented with the 20 main music excerpts and asked to fill out the quantitative questionnaire as they listened. Each excerpt was played twice. The researcher, who sat a meter away from the participant, manually controlled the iTunes playlist (Figure 2). A new excerpt was played only after the participant had stopped writing and at least 10 seconds after the end of the previous one, to allow individuals to calm down and to distinguish between excerpts. The excerpts’ order was randomized across participants to control for order and learning effects. Thereafter, participants were presented with the qualitative questionnaire and, finally, verbally debriefed.
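For illustration only, the per-participant randomization of excerpt order described above could be implemented along the lines of the following Python sketch; the excerpt names and the seeding scheme are hypothetical assumptions, since in the study the iTunes playlist order was handled manually by the researcher.

# Illustrative sketch of per-participant randomization of the 20 excerpts.
# Excerpt names and the per-participant seed are assumptions for this example.
import random

excerpts = [f"excerpt_{i:02d}" for i in range(1, 21)]  # 20 main excerpts

def order_for_participant(participant_id: int) -> list:
    """Return an independently shuffled presentation order for one participant."""
    rng = random.Random(participant_id)  # fixed seed per participant for reproducibility
    order = excerpts.copy()
    rng.shuffle(order)
    return order

print(order_for_participant(1))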
Coding
Mean arousal, valence, and certainty values for each
music condition were computed per participant. The
number of correctly recognized emotions and the
frequency with which each emotion was recognized in
each music condition were counted for each participant.
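To illustrate this coding step, the sketch below computes per-participant condition means, correct-recognition counts, and an identification (confusion) table from a hypothetical long-format data table; all column names and values are invented for the example and are not the study's data.

# Minimal sketch of the coding step on dummy data (one row per rated excerpt).
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "condition":   ["happy", "sad", "happy", "sad"],       # intended emotion
    "identified":  ["happy", "peaceful", "angry", "sad"],  # participant's choice
    "arousal":     [7, 3, 6, 4],
    "valence":     [6, 5, 5, 6],
    "certainty":   [7, 5, 6, 6],
})

# Mean arousal, valence, and certainty per participant and music condition.
means = (trials.groupby(["participant", "condition"])
               [["arousal", "valence", "certainty"]].mean())

# Number of correct recognitions per participant and condition.
trials["correct"] = trials["condition"] == trials["identified"]
correct = trials.groupby(["participant", "condition"])["correct"].sum()

# Frequency with which each emotion was identified in each condition.
confusion = pd.crosstab([trials["participant"], trials["condition"]],
                        trials["identified"])
print(means, correct, confusion, sep="\n\n")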
Results
Data Treatment
Arousal and valence ratings were analyzed using a multivariate analysis of variance (MANOVA), as the data met its assumptions: the data were within-subjects, continuous, normally distributed as shown by the Kolmogorov-Smirnov test, and there were more cases in each cell than dependent variables. The data also met the assumptions of linearity and multivariate normality, displayed no multicollinearity, and contained no significant multivariate outliers, as indicated by Mahalanobis distance [32]. Arousal, valence and certainty ratings were also analyzed with individual Repeated Measures (RM) ANOVAs, as they met the additional assumptions of normality and sphericity (shown by Mauchly’s test of sphericity) and did not display significant univariate outliers.
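As a minimal, hedged sketch of the univariate RM ANOVA step, the Python code below runs a normality check and a one-way repeated-measures ANOVA on simulated ratings; the data layout and column names are assumptions for illustration, and Mauchly's test and the MANOVA itself are not reproduced here.

# Sketch of a normality check and one-way RM ANOVA on simulated arousal ratings.
# Layout and values are illustrative only, not the study's data.
import numpy as np
import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
conditions = ["happy", "angry", "sad", "peaceful"]
means = pd.DataFrame([{"participant": p, "condition": c,
                       "arousal": rng.normal(loc=5, scale=1.5)}
                      for p in range(1, 17) for c in conditions])

# Normality per condition (Shapiro-Wilk here; the study reports Kolmogorov-Smirnov).
for c in conditions:
    w, p = shapiro(means.loc[means["condition"] == c, "arousal"])
    print(f"{c}: W = {w:.2f}, p = {p:.3f}")

# One-way repeated-measures ANOVA on mean arousal across the four conditions.
print(AnovaRM(data=means, depvar="arousal", subject="participant",
              within=["condition"]).fit())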
A Friedman test was performed on the emotion recognition values, as the dependent variable was ordinal and one group of participants (within-subjects design) was measured across the four music conditions [24]. Following significant Friedman tests, post-hoc Wilcoxon signed-rank tests were used with a Bonferroni-adjusted alpha of p<.01 to control for Type 1 errors.
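The non-parametric part of the analysis can be sketched as follows; the recognition counts are dummy values, and deriving Z from the two-tailed p-value with r = Z/sqrt(N) is one common convention, stated here as an assumption rather than the exact procedure used in the study.

# Sketch of the Friedman test and Bonferroni-corrected Wilcoxon post-hoc tests
# on dummy per-participant recognition counts (values are invented).
from itertools import combinations
from math import sqrt
from scipy.stats import friedmanchisquare, wilcoxon, norm

counts = {
    "happy":    [3, 4, 2, 3, 5, 3, 2, 4, 3, 3, 2, 5, 4, 3, 3, 2],
    "angry":    [3, 3, 4, 2, 4, 3, 3, 3, 2, 4, 3, 3, 2, 4, 3, 3],
    "sad":      [2, 2, 1, 3, 2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 2, 3],
    "peaceful": [2, 3, 2, 2, 3, 2, 2, 3, 2, 3, 2, 2, 3, 2, 3, 2],
}

# Omnibus Friedman test across the four within-subject conditions.
chi2, p = friedmanchisquare(*counts.values())
print(f"Friedman chi2(3) = {chi2:.2f}, p = {p:.3f}")

# Post-hoc Wilcoxon signed-rank tests with a Bonferroni-adjusted alpha
# (.05 / 6 comparisons, roughly the p<.01 threshold used in the text).
alpha = 0.05 / 6
n = len(counts["happy"])
for a, b in combinations(counts, 2):
    stat, p = wilcoxon(counts[a], counts[b])
    z = norm.isf(p / 2)   # approximate Z from the two-tailed p-value
    r = z / sqrt(n)       # effect size r = Z / sqrt(N)
    print(f"{a} vs {b}: p = {p:.3f} ({'sig.' if p < alpha else 'n.s.'}), r = {r:.2f}")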
Arousal and Valence
A MANOVA demonstrated that there were significant differences in arousal and valence ratings between the four music conditions, F(6,88)=15.74, p<.001, Wilks' λ=.23, ηp2=.52. Means and standard deviations are presented in Table 1, and Figure 3 provides a graphic visualization.

Two univariate post-hoc analyses with RM ANOVAs were conducted for arousal and valence. There was a significant difference in valence ratings between music conditions, F(3,13)=6.53, p=.006, Wilks’ λ=.40, ηp2=.60, namely between happy and angry music (p=.002). However, no significant differences existed between sad and happy (p=1.00), sad and peaceful (p=1.00), happy and peaceful (p=.94), angry and sad (p=.68), or angry and peaceful music conditions (p=.50).

Table 1: Mean (SD) arousal, valence, and certainty ratings when listening to happy, angry, sad, and peaceful music.

Music Condition | Arousal     | Valence     | Certainty
Happy           | 6.40 (1.42) | 6.06 (1.28) | 6.30 (1.39)
Angry           | 7.20 (1.00) | 4.88 (1.15) | 6.05 (1.62)
Sad             | 3.79 (1.35) | 5.61 (1.19) | 6.16 (1.64)
Peaceful        | 4.09 (1.29) | 5.54 (0.96) | 5.66 (1.90)
Figure 3: Each participant’s
mean arousal and valence ratings
for happy (orange), angry (red),
sad (blue) and peaceful (green)
music. The four quadrants
emphasize where ratings of the
same color would be expected.


For arousal, significant differences existed,
F(3,13)=24.88, p<.001, Wilks’ λ=.15, ηp2=.85.
Pairwise comparisons showed significant differences in
participants’ arousal between happy and sad (p=.001),
happy and peaceful (p<.001), angry and sad (p<.001),
as well as angry and peaceful music (p<.001).
However, there were no significant differences between
sad and peaceful (p=1.00) and happy and angry music
conditions (p=.128).

Certainty
A one-way RM ANOVA on participants’ certainty ratings showed no significant differences between the four music conditions in participants’ certainty that the emotion they identified was the one intended by the music, F(3,13)=.75, p=.28, Wilks’ λ=.75, ηp2=.25. Respective means and standard deviations are presented in Table 1.
Identified Emotions Across Music
Table 2: Median number of correct recognitions and of each identified emotion across music conditions.

Music Condition | Correct (Mdn) | Happy | Angry | Sad  | Peaceful
Happy           | 3.00          | 3.00  | 1.00  | 0.00 | 0.00
Angry           | 3.00          | 1.00  | 3.00  | 0.00 | 0.00
Sad             | 2.00          | 0.00  | 0.00  | 2.00 | 2.00
Peaceful        | 2.50          | 0.50  | 0.00  | 1.00 | 2.50
Four Friedman tests were conducted to explore
differences across music conditions in the number of
identified happy, angry, sad and peaceful emotions
(see Table 2 for median (Mdn) values and Figure 4 for a
graphic visualization).
There was a significant difference in the number of
times happiness was identified across all four music
conditions, χ2(3, n=16)=30.07, p<.001. A post-hoc
Wilcoxon signed-rank test demonstrated no significant
differences in the small number of times happiness was
recognized between sad and peaceful music (Z=-.584,
p=.559, r=.07). However, happiness was significantly
more often identified in happy compared to angry
(Z=-3.18, p=.001), sad (Z=-3.44, p=.001, r=.43), and
peaceful music (Z=-3.35, p=.001, r=.42). It was also
more often recognized in angry compared to sad
(Z=-2.81, p=.005, r=.35) and peaceful music
(Z=-2.55, p=.011, r=.32).
There were also significant differences in the number of
times that anger was identified across music conditions,
χ2(3, n=16)=32.71, p<.001. A post-hoc Wilcoxon
signed-rank analysis showed that there were no
significant differences between sad and happy music
(Z=-2.34, p=.02, r=.29), peaceful and happy
(Z=-2.57, p=.01, r=.32), and peaceful and sad music
(Z=.00, p=1.00). However, anger was significantly
more often identified in angry compared to happy
(Z=-2.91, p=.004, r=.36), sad (Z=3.44, p=.001,
r=.43) and peaceful music (Z=-3.45, p=.001, r=.43).
Figure 4: The total number of participants’ identified emotions (happy = orange, angry = red, sad = blue, peaceful = green) across the four music conditions.

A Friedman test on “sadness” identifications across music conditions also demonstrated significant differences, χ2(3, n=16)=24.76, p<.001. A post-hoc Wilcoxon signed-rank test showed that there were no
significant differences between angry and happy
(Z=-1.00, p=.317, r=.13) and, again, peaceful and sad
music (Z=-1.23, p=.218, r=.15). Sadness was
recognized in sad music significantly more often
compared to happy (Z=-3.10, p=.002, r=.39) and
angry music (Z=-3.08, p=.002, r=.39). However, it
was also recognized more often in peaceful compared
to happy (Z=-2.84, p=.005, r=.36) and angry music
(Z=-2.71, p=.007, r=.34).
There were also significant differences in the frequency
with which peacefulness was identified across all music
conditions, χ2(3, n=16)=28.16, p<.001. A respective
post hoc test showed similar results to the previous
analysis on sadness: There were no significant
differences between angry and happy (Z=-.82, p=.414,
r=.10), and peaceful and sad music (Z=-.42, p=.677,
r=.05). However, peacefulness was more frequently
identified in sad than happy (Z=-2.95, p=.003, r=.37)
and angry music (Z=-3.16, p=.002, r=.40). It was also
more often recognized in peaceful compared to happy
(Z=-3.35, p=.001, r=.42) and angry music (Z=-3.32,
p=.001, r=.42).
Correct Recognitions
The number of correct recognitions across all four
music conditions was also examined (see Table 2 and
Figure 4) to explore whether some conditions were
more/less difficult for recognizing emotions than others.
Although the Friedman test indicated significant differences, χ2(3, n=16)=9.04, p=.03, none of the post-hoc Wilcoxon signed-rank comparisons survived the Bonferroni correction, suggesting that such differences might only emerge in larger samples. Thus, participants correctly identified emotions to a similar extent across happy and angry (Z=-.53, p=.60, r=.07), happy and sad (Z=-2.13, p=.033, r=.27), happy and peaceful (Z=-1.29, p=.20, r=.16), sad and angry (Z=-2.11, p=.04, r=.26), sad and peaceful (Z=-.80, p=.42, r=.10), as well as angry and peaceful music conditions (Z=-.99, p=.32, r=.12).
Discussion
Based on recent developments of vibrotactile
technology, which promise to make music more
immersive and inclusive [12], this study investigated
the SubPac M2X’s effectiveness in translating music-
related emotions through vibrations to profoundly deaf
users.
As expected and in line with Wilson and Brewster’s
research [37] but in contrast to Karam and colleagues’
[18] findings, deaf participants rated happy and angry
music as more arousing than peaceful and sad music.
However, the second hypothesis was not entirely met:
Although participants experienced more positive
emotions during happy compared to angry music, they
did not experience emotions of differing valence across
the remaining conditions, similarly to Karam and
colleagues’ research [18]. Therefore, the SubPac M2X
could not make participants truly feel all music
conditions’ intended emotions. The third hypothesis
was also not entirely supported: Although participants
were able to clearly differentiate happy and angry
music and distinguish both from sad and peaceful
excerpts, the latter two could not be told apart.
Exploratory analyses demonstrated no differences
across the four conditions for certainty ratings and the
number of times emotions were recognized correctly.
However, participants were mainly neutral/undecided
concerning the accuracy of their judgment.
Overall, the findings suggest that vibrotactile
technologies, using frequencies of 1-200 Hz, have high
potential for conveying music-related emotions.
However, valence may need to be conveyed or
pronounced more clearly, particularly for music that
encourages low arousal via a slow or moderate tempo
(i.e. sad and peaceful music), to evoke the intended
emotions.
Strengths/Weaknesses
A key strength was the study’s inclusion of participants
with similar degrees of deafness. Also, significant
findings had consistently high effect sizes, which
suggests high validity [7]. Furthermore, large effect
sizes for non-significant findings imply that sad and
peaceful music could be distinguished in larger samples
[24].
Other strengths are the study’s internal validity and
control of extraneous variables. For instance, it used a
popular and commercially available device as well as
validated methods and materials [36, 37]. Due to
randomizing the excerpts’ order, it also controlled for
order and learning effects.
However, the controlled nature of the experiment
reduced its ecological validity. Additionally, based on its
predominant focus on profoundly deaf individuals, who
typically engage with synthesized sounds via hearing
aids or cochlear implants, the study’s findings are not
generalizable to the hard-of-hearing population, who
use naturally occurring sounds alongside vibrations,
and born-deaf individuals who never used aids or
processors and, thus, have different interpretations of
music [34]. Furthermore, by calculating mean arousal,
valence, and certainty values for each condition per
participant, information on the data’s variance was lost,
potentially reducing the RM ANOVAs’ power [32].
Future Directions
Future research could use hierarchical linear models to
avoid the latter limitation [32] and explore the SubPac
M2X’s effectiveness with the above-mentioned
populations, as well as the impact of learning/exposure
to vibrations on emotion recognition. Skin conductance
or heart rate data could also provide better insights into
vibrations’ psychophysiological effects [6, 25, 35].
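As a hedged illustration of the hierarchical (mixed) modelling suggested above, a random-intercept model fitted to trial-level ratings could look like the sketch below; the data are simulated and the column names are assumptions, not the study's dataset.

# Sketch of a random-intercept mixed model on simulated trial-level arousal ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
conditions = ["happy", "angry", "sad", "peaceful"]
rows = [{"participant": p, "condition": c, "arousal": int(rng.integers(1, 10))}
        for p in range(1, 17) for c in conditions for _ in range(5)]
ratings = pd.DataFrame(rows)

# A random intercept per participant keeps trial-level variance in the model
# instead of averaging ratings per condition before analysis.
model = smf.mixedlm("arousal ~ C(condition)", data=ratings,
                    groups=ratings["participant"])
print(model.fit().summary())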
With regard to design, qualitative analyses, not reported here, suggested that additional colors on the vest could represent emotions better [22, 14], particularly as visuals activate deaf individuals’ auditory brain regions [10]: saturated, bright colors along the red-yellow spectrum are intuitively perceived as happy, whereas cyan-blue colors are related to fear [9]. Thermal feedback could also be used: warmth is typically perceived as positive whereas coldness carries negative connotations [15, 38]. Therefore, a combination of color, temperature and vibration could improve music-related emotion recognition [37]. Additionally, if the SubPac M2X transduced frequencies above 200 Hz, it could provide users with a more nuanced understanding of the music [22]. It will be important to include deaf individuals in the design and research of future vibrotactile devices to meet their needs [21, 8].
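Purely as an illustration of how such a color mapping might be prototyped (the specific mapping below is an untested assumption rather than a finding of this study), valence and arousal ratings could be translated into a display color as follows:

# Hypothetical mapping from valence/arousal (1-9 scales) to an RGB color:
# warm hues for positive valence, cool hues for negative, brightness scaled by arousal.
import colorsys

def emotion_colour(valence: float, arousal: float) -> tuple:
    """Return an RGB triple for a given valence and arousal rating."""
    positive = valence >= 5
    # warm band ~0.00-0.15 (red-yellow), cool band ~0.50-0.65 (cyan-blue)
    hue = 0.15 * (valence - 5) / 4 if positive else 0.5 + 0.15 * (5 - valence) / 4
    value = 0.4 + 0.6 * (arousal - 1) / 8  # brighter = more arousing
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)

print(emotion_colour(valence=8, arousal=7))  # bright, warm color (happy-like)
print(emotion_colour(valence=2, arousal=8))  # bright, cool color (fear/anger-like)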
Conclusion
Vibrotactile feedback promises to make music more
accessible to the profoundly deaf community. However,
although participants could distinguish music based on
arousal, they were not certain of their judgments and
struggled to identify the music’s valence. In particular,
they struggled to distinguish between peaceful and sad
music. Therefore, the effectiveness of additional features (e.g. visuals, temperature) needs to be explored.
References
1. Action on Hearing Loss. 2015. Hearing matters. Retrieved April 4, 2019 from https://www.actiononhearingloss.org.uk/how-we-help/information-and-resources/publications/research-reports/hearing-matters-report/.
2. Action on Hearing Loss. 2017. Hearing progress: Update on our search for treatments and cures. Retrieved April 4, 2019 from https://www.actiononhearingloss.org.uk/how-we-help/information-and-resources/publications/biomedical-research/hearing-progress-2017/.
3. Felipe A. Araujo, Fabricio L. Brasil, Allison C. L.
Santos, Luzenildo de Sousa Batista Junior, Savio P.
F. Dutra, and Carlos E. C. F. Batista. 2017. Auris
system: Providing vibrotactile feedback for hearing
impaired population. BioMed Research International
2017, e38: 1-9.
4. Anant Baijal, Julia Kim, Carmen Branje, Frank Russo, and Deborah I. Fels. 2012. Composing
vibrotactile music: A multisensory experience with
the Emoti-chair. In 2012 IEEE Haptics Symposium
(HAPTICS), 509-515.
https://ieeexplore.ieee.org/document/6183839.
5. Artem Boltyenkov. 2015. A healthcare economic
policy for hearing impairment. Springer Gabler
Verlag, Wiesbaden, Germany.
6. Alan Bryman. 2015. Social research methods (4th
ed.). Oxford University Press, Oxford, UK.
7. Jacob Cohen. 1988. Statistical power analysis for
the behavioral sciences (2nd ed.). Lawrence
Erlbaum Associates, Mahwah, NJ.
8. Sheena Erete, Aarti Israni, and Tawanna Dillahunt.
2017. An intersectional approach to designing in
the margins. Interactions 25, 3: 66-69.
9. Nele Dael, Marie-Noelle Perseguers, Cynthia
Marchand, Jean-Philippe Antonietti, and Christine
Mohr. 2016. Put on that colour, it fits your
emotion: Colour appropriateness as a function of
expressed emotion. The Quarterly Journal of
Experimental Psychology 69, 8: 1619-1630.
10. Eva M. Finney, Ione Fine, and Karen R. Dobkins.
2001. Visual stimuli activate auditory cortex in the
deaf. Nature Neuroscience 4, 12: 1171-1173.
11. Marcello Giordano, John Sullivan, and Marcelo M.
Wanderley. 2018. Design of vibrotactile feedback
and stimulation for music performance. In Musical
Haptics, Stefano Papetti and Charalampos Saitis
(Eds.). Springer, Cham, Switzerland, 193-214.
12. Satoshi Hashizume, Shinji Sakomoto, Kenta
Suzuki, and Yoichi Ochiai. 2018. LIVEJACKET:
Wearable music experience device with multiple
speakers. In International Conference on
Distributed, Ambient, and Pervasive Interactions
(DAPI 2018), 359-371. Springer, Cham, Switzerland. https://link.springer.com/chapter/10.1007%2F978-3-319-91125-0_30.
13. Marcia J. Hay-McCutcheon, Nathaniel R. Peterson,
David B. Pisoni, Karen I. Kirk, Xin Yang, and Jason
Parton. 2018. Performance variability on perceptual
discrimination tasks in profoundly deaf adults with
cochlear implants. Journal of Communication
Disorders 72: 122-135.
14. Jessica A. Holmes. 2017. Expert listening beyond
the limits of hearing: Music and deafness. Journal
of the American Musicological Society 70, 1: 171-
220.
15. Hans Ijzerman, and Gün R. Semin. 2009. The
thermometer of social relations: Mapping social
proximity on temperature. Psychological Science
20, 10: 1214-1220.
16. Robert Jack, Andrew McPherson, and Tony Stockman. 2015. Designing tactile musical devices with and for deaf users: A case study. In Proceedings of the International Conference on the Multimodal Experience of Music 2015, 1-7. https://www.eecs.qmul.ac.uk/~andrewm/jack_icmem15.pdf.
17. Nicole T. Jiam, Mickael L. Deroche, Patpong
Jiradejvong, and Charles J. Limb. 2019. A
randomized controlled crossover study of the
impact of online music training on pitch and timbre
perception in cochlear implant users. Journal of the
Association for Research in Otolaryngology, 1-16.
18. Maria Karam, Frank A. Russo, and Deborah Fels.
2009. Designing the Model Human Cochlea: An
ambient crossmodal audio-tactile display. IEEE
Transactions on Haptics 2, 3: 160-169.
19. Georgios Korres, Camilla B. F. Jensen, Wanjoo
Park, Carsten Bartsch, and Mohamad S. A. Eid.
2018. A vibrotactile alarm system for pleasant
awakening. IEEE Transactions on Haptics 11, 3:
357-366.
20. Paddy Ladd. 2003. Understanding deaf culture: In
search of deafhood. Multilingual Matters, Bristol,
UK.
21. Mará I. Laitano. 2016. Developing a participatory
approach to accessible design. International Journal
of Sociotechnology and Knowledge Development 9,
4: 1-11.
22. Suranga Nanayakkara, Elizabeth Taylor, Lonce
Wyse, and Simheng Ong. 2009. An enhanced
musical experience for the deaf: design and
evaluation of a music display and a haptic chair. In
Proceedings of the 27th SIGCHI Conference on
Human Factors in Computing Systems (CHI ’09),
337-346.
https://dl.acm.org/citation.cfm?id=1518756.
23. Donna J. Napoli. 2014. A magic touch: deaf gain
and the benefits of tactile sensation. In Deaf gain:
Raising the stakes for human diversity (1st. ed.),
H-Dirksen L. Bauman and Joseph J. Murray (Eds.).
University of Minnesota Press, Minneapolis, MN, 211-
232.
24. Julie Pallant. 2013. The SPSS survival manual (5th
ed.). Open University Press, London, UK.
25. Nikki S. Rickard. 2004. Intense emotional
responses to music: A test of the physiological
arousal hypothesis. Psychology of Music 32,4: 371-
388.
26. James A. Russell. 1980. A circumplex model of
affect. Journal of Personality and Social Psychology
39, 6: 1161-1178.
27. Thomas Schäfer, Peter Sedlmeier, Christine
Städtler, and David Huron. 2013. The psychological
functions of music listening. Frontiers in
Psychology, 4: 511.
28. Martin Schürmann, Gina Caetano, Yevhen
Hlushchuk, Veikko Jousmäki, and Riitta Hari.
2006. Touch activates human auditory cortex.
NeuroImage 30, 4: 1325-1331.
29. Mina Shibasaki, Youichi Kamiyama, and Kouta
Minamizawa. 2016. Designing a haptic feedback
system for hearing-impaired to experience tap
dance. In Proceedings of the 29th Annual
Symposium on User Interface Software and
Technology (UIST ’16 Adjunct), 97-99.
https://dl.acm.org/citation.cfm?id=2985716.
30. Robert Sparrow. 2005. Defending deaf culture: the
case of cochlear implants. Journal of Political
Philosophy 13, 2:135-152.
31. SubPac. 2019. M2X (Wearable). Retrieved April 5, 2019 from https://eu.subpac.com/products/m2x.
32. Barbara G. Tabachnick, and Linda S. Fidell. 2013.
Using multivariate statistics (6th ed.). Pearson,
London, UK.
33. Pauline Tranchant, Martha M. Shiell, Marcello
Giordano, Alexis Nadeau, Isabelle Peretz, and
Robert J. Zatorre. 2017. Feeling the beat: Bouncing
synchronization to vibrotactile music in hearing and
early deaf people. Frontiers in Neuroscience, 11:
507.
34. Sandra E. Trehub, Tara Vongpaisal, and Takayuki
Nakata. 2009. Music in the lives of deaf children
with cochlear implants. Annals of the New York
Academy of Sciences 1169, 1: 534-542.
35. Marjolein D. van der Zwaag, Joyce H. D. M. Westerink, and Egon L. van den Broek. 2011. Emotional
and psychophysiological responses to tempo,
mode, and percussiveness. Musicae Scientiae 15,
2: 250-269.
36. Sandrine Vieillard, Isabelle Peretz, Nathalie Gosselin, Stéphanie Khalfa, Lise Gagnon, and Bernard Bouchard. 2008. Happy, sad, scary and peaceful musical excerpts for research on emotions. Cognition & Emotion 22, 4: 720-752.
37. Graham Wilson, and Stephen A. Brewster. 2017.
Multi-moji: Combining thermal, vibrotactile & visual
stimuli to expand the affective range of feedback.
In Proceedings of the 2017 CHI Conference on
Human Factors in Computing Systems (CHI ’17),
1743-1755.
https://dl.acm.org/citation.cfm?id=3025614.
38. Graham Wilson, Dobromir Dobrev, and Stephen A.
Brewster. 2016. Hot under the collar: Mapping
thermal feedback to dimensional models of
emotion. In Proceedings of the 2016 CHI
Conference on Human Factors in Computing
Systems (CHI ’16), 4838-4849.
https://dl.acm.org/citation.cfm?id=2858036.2858205.



Word count without references: 3266.