Can you hear me: My Experiences as a Totally-blind and Hard-of-hearing Person, part 2

Okay! Last time, I talked about some of the social issues I’ve experienced due to my hearing challenges. If you’ve read my About Me page, you’ll know that I can’t watch a lot of TV and films, which means my perception of social dynamics might as well be static. I also have a bit of trouble with the cocktail party effect, which is the ability to focus on one particular sound amid a bunch of other sounds.
Anyhow, I wanted to talk a little more about some of the auditory and technical issues I’ve dealt with as well. First, however, I’d like to introduce you to a blind and hard-of-hearing gentleman who is pretty well-known in the blind community. Back in 2014, he wrote a blog article about how living with hearing loss has impacted his life, and what he has done to compensate for it. Now hear this! The surprising thing was that I never knew he had hearing loss in the first place, or that he has the same condition I do.
‘Click, click. Is this on? Can you hear me? Hello? Is this working? Is that too loud? What was that again?’ These are some of the things I’ve either heard other people say to me or asked of them myself. Some of them refer to using something called an FM system, a radio transmitter and receiver pair that operates on a dedicated FM frequency band so that it doesn’t interfere with other broadcasts. The receiver sends the signal to a neck loop, which passes it on to the hearing aid(s) by magnetic induction, much like how a guitar pickup coil works. The part of the hearing aid that picks it up is called the telecoil. Sometimes I’ve used the FM system to spy on people and do some eavesdropping. Although this post from KidsHealth doesn’t mention it, I remember reading stories from other kids about how they took advantage of their systems to warn their classmates when the teacher was coming back into range. This type of magnetic eavesdropping is more common than people realise, so to protect sensitive conversations, people usually go into a Faraday cage.
I first started losing my hearing at the age of seven, though it was barely noticeable at first because I’d had perfect hearing from birth to about age six. Since I was born with a condition that made me prone to developing hearing loss, which we knew about because my brother has the same thing, I was tested by the education service district’s audiology department when I first entered kindergarten. Occasionally, I’d see my primary doctor, or someone at school would bring an audiometer into this small room used for individualised study. It was a big and bulky box with lots of buttons on it. The person running it placed noise-cancelling headphones over my ears and played a series of tones, some of which I remember being at 1,000Hz, or 1kHz. Other times, they would simply insert a small probe into the ear canal and play the tones through it. Whenever I got ear infections, usually in my left ear, I couldn’t hear that tone at 30dB, I think, or maybe lower. They simply asked me to raise the hand corresponding to the ear they were testing whenever I could hear the pure sine wave tone. I also got a tube placed in my left eardrum to treat my otitis media on Tuesday, 9 December 2003.
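If you’ve never heard one of those test tones, they’re easy to synthesise. Below is a minimal Python sketch that generates and plays a one-second pure sine tone at 1kHz. The sample rate and amplitude are illustrative choices on my part; a real audiometer is calibrated so that its output corresponds to precise hearing levels in dB HL, which this toy example doesn’t attempt.

import numpy as np
import sounddevice as sd  # third-party library: pip install sounddevice

SAMPLE_RATE = 44100  # samples per second
FREQUENCY = 1000.0   # 1kHz, one of the standard audiometric test frequencies
DURATION = 1.0       # seconds

# Generate one second of a pure sine wave at 1kHz.
t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
tone = 0.1 * np.sin(2 * np.pi * FREQUENCY * t)  # 0.1 keeps the volume modest

# Play it through the default output device.
sd.play(tone, SAMPLE_RATE)
sd.wait()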
I was always extremely talkative and was frequently dubbed a chatterbox, along with other names I’d rather not write here. I guess that’s why, in later years, I became more afraid of being taunted for something I should’ve been free to do. A lot of people told me I never laughed, but how can you if you don’t know what people are laughing about? They’ve also criticised me for not yelling or making any loud social vocal sounds like grunts, groans, sighs, hoots, yawns, etc. You see, part of the problem with hearing aids is that your voice may sound extremely loud to you but very soft to others. And even with hearing aids on, people can still tell I am hard-of-hearing. Likewise, without hearing aids, you might speak up so you can hear yourself, but that can make some people cringe because you are speaking way too loudly. And, because I come from a Spanish-speaking family, I never got to hear English on a daily basis except through books, reading the internet, and going out. Yes, you could say English isn’t my first language, but it is my primary language: I use it a lot more than Spanish, and I know far more words and vocabulary in it. However, I rarely get humorous comments and sarcasm, because I often can’t tell from the tone of the situation, though this may work differently in writing. So, in elementary school, I went every couple of days to speech and language pathology, either individually or in group sessions, so they could fine-tune my social and communication skills. After all, I’m pretty sure that’s the only reason I went in the first place. We covered things like understanding inflection, how to respond to an upward inflection with a downward one, how to join a conversation when there was a pause, and, more importantly, how to stay on topic so that it wouldn’t sound like you came out of nowhere with a random comment. What I didn’t realise until later, though, was that there were some gendered ways of talking and vocalising in general. That was why I mostly tried, subconsciously, to use an androgynous-sounding voice, because I knew that if I had done anything that felt natural to me, people would not have let me live it down.
Also, I never understood this until recently, but I remember an experience where I was supposed to give an oral presentation about a likely scenario that would occur five or ten years in the future. When it was my turn, I briefly talked about how I wanted to do something that involved Braille music, web design, and flying. At the end of my speech, the guy who facilitated the group thanked me, then said something I couldn’t catch, though I gathered from his tone that he wasn’t very pleased with my performance. That was in early April of 2010. Two years later, in December 2012, I was talking to someone about my fascination with my synaesthesia project, and the person at this party, whom I’d actually met through some mutual friends, told me they could tell how passionate I was because of the enthusiasm in my voice. In reality, it was because I had varied or modulated the inflection of my voice to sound less boring. While I may not have been conscious of it at the time, I am glad that I finally know about it so I can be sure to use it in leadership-related fields. But since my hearing has gotten worse since then, and since I was raised practically on my own, having to facilitate a large workshop would be extremely hard, unless I found a really great support system to help with that.
I got my first hearing aid for my left ear in the summer of 2001, and at that time, I remember experiencing tinnitus that sounded like the buzzing of a fly’s wings, or more like a sawtooth wave, though not as harsh. Some of the tones were around 325Hz, but the one I remember most, which I kept hearing in my left ear, was around 265Hz. It lasted for about four months, and at one point, I thought it dropped by about a semitone. Anyhow, at first I was ecstatic to have the hearing aid, and to find that I could hear things just as well as with my right ear, but soon I didn’t feel comfortable wearing it, mostly because I didn’t want others to know I had one. I only wore it at school; I still had enough hearing in my right ear to not need it at home. If you’ve read my other posts, you may have learned that I was bullied by some blind people for having hearing loss, because I was the only one with it in our little clique. I so badly wished for more students with hearing loss to join us, so that I wouldn’t feel like a minority within a minority all by myself.
In the summer of 2004, it was decided, based on a recent hearing test, to complement my setup with another hearing aid, which meant I had by then developed bilateral hearing loss. It was evident from the audiograms that my right ear was better at perceiving higher-frequency sounds than the left, so whenever I talked to people, I’d turn my head so that my right ear faced them, or I’d sit on the person’s left side. I’ve had some instances of diplacusis. That’s basically when a tone sounds slightly higher or lower in one ear than what you know it to be in the other. For example, if I played a tone of F-sharp4 in my right ear and played that same tone in my left ear, I’d hear a G4 instead. I didn’t know I had perfect (absolute) pitch until long after, let’s say my sophomore year of high school, but back then, this was what I had to work with. Occasionally, I’d wake up with a condition that felt as if my right ear, usually, was ducking the audio coming in. Sometimes I’d get a small headache and hear this strange buzzing tone, like one of those old dial tones at 120Hz, but with lots of high harmonics added. During those episodes, frequencies at the high end of the spectrum were almost imperceptible, and voices ended up sounding tinny. There have been some studies on whether corticosteroids are effective at treating sudden sensorineural hearing loss (SSHL). I suspect it may have been how oversensitive my tiny ear muscles were while I slept. I had a habit of sleeping with earbuds in, so I could listen to various soundscapes while I slept, but maybe my ears thought they were too loud and tried to protect themselves the best way they could. If you’ve ever experienced spontaneous ringing in your ears, this post from 2013 explains that the outer hair cells, which are used to amplify really quiet sounds, tend to vibrate on their own, sometimes causing a feeling of fullness or temporary loss of balance. Fortunately, there is a feedback loop that corrects this problem in a minute or two. About a month ago, I heard this tone increase in volume until it was nearly deafening. It was at around 975Hz, and ten minutes later, everything got tinny again. What was more interesting was that anything sympathetically resonant with 975Hz caused those hair cells to vibrate abnormally.
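To put some numbers on that diplacusis example: in twelve-tone equal temperament, each semitone multiplies the frequency by the twelfth root of two, so hearing G4 instead of F-sharp4 means the left ear was reporting a pitch about 6% higher than the real one. Here is a tiny Python sketch of the arithmetic, assuming the standard A4 = 440Hz tuning.

# Semitone ratio in twelve-tone equal temperament.
SEMITONE = 2 ** (1 / 12)  # about 1.0595

# Standard frequencies, assuming A4 = 440 Hz.
f_sharp_4 = 440 * SEMITONE ** -3  # F#4 is three semitones below A4: ~369.99 Hz
g4 = f_sharp_4 * SEMITONE         # one semitone up: ~392.00 Hz

print(f"F#4 = {f_sharp_4:.2f} Hz")  # the tone actually played
print(f"G4  = {g4:.2f} Hz")         # what my left ear heard instead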
Another interesting phenomenon I noticed was that I could control some of the muscles in my ears, which I later learned were called the tensor muscles, and make a sort of click, click, click sound. It was perceptible enough that if I placed a microphone inside my ear, I could capture it. When I first discovered this, probably when I was five years old, I was afraid of it and thought there was something in my ear causing it. I thought of running away from it, but no matter where I went, it just followed along with me.
Anyhow, I don’t know what happened, but once, probably in the summer of 2006, after I had gotten a tooth extraction, I noticed my hearing dissipating in my left ear if I moved my jaw too far back. I was genuinely afraid of this, and I never told anyone about it, so I don’t know what could’ve caused it. I suspect it might have been due to inflammation of the temporomandibular joints, but since I was on non-steroidal anti-inflammatory drugs, I didn’t feel anything.
Anyhow, socialising got harder and harder as my hearing continued to worsen over time. Crossing streets became troublesome to the point that I needed to solicit assistance all the time, and I’ve had some blind people guilt-trip me into thinking it was my fault I couldn’t hear them when they yelled at me, instead of just using alternative means of communication, like spelling words on demand using the phonetic alphabet. One thing I’ve come to realise is that the more familiar I am with a word or phrase, with its cadence, rhythm, inflection, intonation, prosody, etc., the more easily I can recognise it, even if I don’t hear all the vowels and consonants. Of course, this wouldn’t work for words or phrases I’ve never heard before. It’s like listening to the lyrics of a song: your brain expects to know what is coming ahead. I later learned this phenomenon is called a mondegreen, after “Lady Mondegreen”. That’s why one of my former teachers of the visually impaired gave me a special nickname, so that even if I didn’t recognise his voice, I’d still know who he was.
When I got my first computer in 2007, even though I didn’t have internet then, I still had enough hearing to use the desktop speakers at high volume. I watched some TV shows by pressing my right ear against the TV’s speakers, but I later found a TV with a headphone jack, and that made watching TV shows easier. It wasn’t until late 2009 to early 2010, when my hearing decreased rapidly, especially in my right ear, that I started depending more on my assistive technology to hear my surroundings, even in my own home.
When I first got my own internet through Comcast back in 2010, I was gradually introduced to other blind people on Skype and other platforms, and I learned about audio production and editing using single- and multi-track editors, digital audio workstations, MIDI sequencers, and VST hosts. I didn’t know much about some of the fancier audio equipment people used to make better-quality recordings, though. I had lots of ideas for making audio drama, but I ended up being criticised because people told me my audio was of super-low quality. They never explained how, and I probably should’ve explained how difficult things were with my hearing loss. Alas, I never did. Instead, I continued pressing on, oblivious to some of the artifacts I was likely producing by boosting my onboard sound card’s preamplifier to the maximum so I could monitor myself, among other things. Speaking of monitors, I began relying heavily on anything that acted as one to also behave like a personal sound amplifier or hearable. This is one of the ways I’ve developed interesting and unconventional uses for audio gear. I’ve used headphones as stereo microphones. I didn’t know that in-ear mics existed, like Andrea Electronics’ binaural microphone and headphone combo, or the earbud version. Fortunately, I later got a Pocket Talker Ultra from Williams Sound. I really enjoyed using it because, to other people, it didn’t look like I was using hearing aids. Rather, it looked more like I was listening to music or something. Someone told me that if I got Bose’s new augmented-reality headset or Apple AirPods, I could virtually use hearing aids all the time. Not only have I found monitoring to be of great help in amplifying my surroundings, but it has also helped restore my hearing awareness, so that I was more likely to notice when I mispronounced words or used the wrong intonation, as is common in people who can’t hear themselves well. Of course, people who wear hearing devices all the time, even when they sleep, are likely to develop a lot of earwax over time. Also, I spoke with somebody who said they absolutely hated hearing aids and avoided them like the plague. They said that even if insurance were to pay three to five thousand dollars for a piece of crap, it was still a rip-off when they could easily build a rig for about a thousand dollars and have much better EQ, filters, binaural microphones, and stereo headphones.
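If you’ve never tried self-monitoring, the core of it is just routing a microphone’s input to your headphones with some gain. Here is a minimal Python sketch using the sounddevice library; the gain value is an arbitrary illustration, and a real personal amplifier or hearable would layer compression, equalisation, and feedback suppression on top of something like this.

import sounddevice as sd  # third-party library: pip install sounddevice

GAIN = 4.0  # illustrative amplification factor

def callback(indata, outdata, frames, time, status):
    # Copy the microphone input to the output, amplified.
    # clip() keeps the boosted signal from exceeding full scale and distorting harshly.
    outdata[:] = (indata * GAIN).clip(-1.0, 1.0)

# Full-duplex stream: default input (microphone) to default output (headphones).
with sd.Stream(channels=2, callback=callback):
    print("Monitoring... press Enter to stop.")
    input()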
In the fall of 2011, I was at the top of the heap in my high school career. I initially didn’t think of making anything of it other than doing my school work, but when I learned that our musical theatre department was putting on a production of The Wizard of Oz, one of my favourites of all time, I knew I had to conquer my fear of not being able to do well because of my hearing challenges. And while I didn’t run the soundboard that time, I did help with sound design by gathering sound effects from my archive and mixing them together, and even recording and editing a sound effect of my own. I later got to run the board for Senior Spotlight, after having demonstrated that I had exceptionally good operational skills even if I didn’t possess a technical background in audio fundamentals.
Although a lot of people recommended that I record lectures using a digital voice recorder, there was one particular reason I didn’t often follow through, a huge problem I didn’t learn about until much later. If you remember when I first talked about using an FM system, and if you read the article I linked pertaining to that subject, then you are probably aware that many venues provide assistive listening devices to help negate the effects of ambient noise by sending the sound of the person speaking into the microphone directly to the listener’s ears. This is because, more often than not, sounds with frequencies that decay rapidly are lost in the reverberation or echo of a room, making it virtually impossible to hear the subtleties of a vowel or consonant. This was always a problem I experienced in an auditorium when I couldn’t hear what was being said through a speaker, or even when someone was just talking without one. What I’ve also noticed is that there tend to be some psychoacoustic differences between using headphones and speakers. For instance, if I recorded something and then played it back through speakers, I might hear things I would have missed had I used headphones or earbuds. So, beginning in 2014, I began to look for ways of recording lectures directly. I had one instructor stand in front of a stereo microphone that was hooked up to my computer so I could record what they were saying. One challenge with this kind of direct listening is that since FM and other wireless transmission systems send the microphone’s input into the hearing aid in mono, you also tend to lose any sense of directionality, so if a person were to my left, they would still sound as if they were in the centre. The only exception would be if someone invented a wireless system that used stereo microphones. So, when I ran the show for Senior Spotlight, I was able to use my FM rig to connect to the soundboard, and while I couldn’t hear the performers who were far away from the hanging mics, I could hear when one of them spoke directly into the microphone, and I knew when to play the sounds without the stage manager having to cue me by tapping on my shoulder.
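To illustrate what a mono downmix throws away, here is a small Python sketch. It builds a stereo signal in which a tone is louder in the left channel, an interaural level difference, which is one of the main cues the brain uses to locate a sound, and then averages the two channels into mono, at which point that cue is gone. The numbers are made up purely for illustration.

import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second
tone = np.sin(2 * np.pi * 440 * t)

# A source off to the left: louder in the left ear than the right.
left = 1.0 * tone
right = 0.4 * tone

# What a mono FM transmitter effectively does: average the channels,
# so both ears receive the identical signal.
mono = (left + right) / 2

rms = lambda x: np.sqrt((x ** 2).mean())
print("stereo L/R levels:", rms(left), rms(right))  # unequal: sounds off to the left
print("mono level in both ears:", rms(mono))        # identical in each ear: sounds dead centre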
Anyhow, in 2016, I was eligible for new hearing aids thanks to my insurance plan. These new devices had two microphone capsules with variable pick-up patterns. They wanted to wait until I had completed another tympanogram, audiogram, and speech perception test, all unaided, before configuring them. It was determined that I could not hear anything above 3,100Hz at 85dB in the right ear, and nothing above that frequency, no matter how loud it was, in the left ear. I once had a few bone conduction tests, but I told them that I mostly felt the vibration of the tones rather than heard them, because of the occlusion effect. Speaking of that, some of the older hearing aids used a bar that you would bite on, so that the sounds resonating from it would be transferred that way. Since the hearing aids I received were more modern, it meant that I could now use brand-specific accessories to enhance my listening experience. I now use a ComPilot streamer, which is just like a neck loop, except that it uses a different RF protocol, which makes it unshareable with other hearing aid users. These hearing aids also had sophisticated digital signal processing for equalisation, cut and shelf filters, and even a frequency transposition feature. Imagine I played a tone at B-flat7. To my ears, it would sound like something between B5 and B-sharp5. That feature always threw me off because I didn’t know what was real and what was not, so I had the audiologist set up a programme that would turn it off whenever I wanted to listen to music. Anyhow, it made things like the S sound more like an SH. It was still hard to differentiate between the ee and ooh vowels, though. For example, my friend told me they knew of someone who may have had auditory neuropathy or central auditory processing disorder, and who couldn’t hear the /k/ and /t/ sounds in cat, leaving them to hear only the /ae/ sound with a sharp attack and release. Although this post further explains how these hearing aids work, I couldn’t find one that talked about how lack of exposure to high frequencies could lead to brain atrophy, so some manufacturers are using either a harmonic exciter or some other technique to gradually introduce those high frequencies again. I hope that, using these techniques, we can develop more hearing simulators that can simulate various hearing impairments, the way some goggles are able to simulate blurred vision or what it feels like to be drunk. We could even prepare people for how wearing a cochlear implant full-time might feel.
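For the curious, the simplest way to think about frequency lowering is as dividing frequencies above a cutoff by a fixed ratio. Here is a toy Python sketch of that idea on a pure tone; the cutoff and ratio are numbers I picked so that B-flat7 lands roughly where I described, and real hearing aids use far more sophisticated DSP than re-synthesising a sine wave.

import numpy as np

SAMPLE_RATE = 44100
CUTOFF = 1500.0  # illustrative: only frequencies above this get lowered
RATIO = 3.67     # illustrative divisor

def transpose_tone(freq):
    # Toy model: a pure tone above the cutoff is re-synthesised at freq / RATIO.
    out_freq = freq / RATIO if freq > CUTOFF else freq
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    return out_freq, np.sin(2 * np.pi * out_freq * t)

b_flat_7 = 3729.31  # Hz, assuming A4 = 440 Hz tuning
out_freq, tone = transpose_tone(b_flat_7)
print(f"{b_flat_7:.0f} Hz would be heard as roughly {out_freq:.0f} Hz")  # ~1016 Hz, near B5/C6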
Having said that, earlier this year I heard about a former online academy that prepared blind people for careers in the IT and audio production fields. Sadly, they ran out of funding, but luckily, they released their audio courses. Alternatively, you can go here to learn more about IT, and here to find tutorials on audio engineering and production. When I finally started working on refining my critical listening abilities, I found that I could not hear certain vital characteristics that would’ve helped me determine whether there were problems in my audio, such as aliasing, quantisation noise, artifacts from transcoding, comb filter effects, etc. At least now I know about these concepts, so I can be more aware of them.
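As an example of the kind of artifact I mean, here is a short Python sketch demonstrating aliasing: a tone above the Nyquist frequency (half the sample rate) can’t be represented and folds back down, showing up as a completely different, lower tone. The specific frequencies are just for illustration.

import numpy as np

SAMPLE_RATE = 8000         # deliberately low to make the effect obvious
NYQUIST = SAMPLE_RATE / 2  # 4000 Hz

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second
f_in = 5000.0              # above Nyquist, so it cannot be represented honestly
signal = np.sin(2 * np.pi * f_in * t)

# The tone folds back around Nyquist: 8000 - 5000 = 3000 Hz.
print(f"Nyquist is {NYQUIST:.0f} Hz; a {f_in:.0f} Hz tone aliases to {SAMPLE_RATE - f_in:.0f} Hz")

# Verify by locating the peak in the spectrum.
spectrum = np.abs(np.fft.rfft(signal))
peak = np.fft.rfftfreq(len(signal), 1 / SAMPLE_RATE)[spectrum.argmax()]
print(f"Spectral peak found at {peak:.0f} Hz")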
So, that’s basically my experience with hearing loss, in a nutshell. I do hope that synthetic biologists will further experiment with quail and other birds and reptiles to better understand how epidermal stem cells work, and work on implementing a technique discovered by Oregon State University. I know that a lot of blind people would act indifferent about having their eyesight restored, but they would almost certainly jump at the chance of having their hearing loss cured, assuming they lost it later in life. Of course, there is always going to be big-D Deaf and little-d deaf, the former referring to people who identify with Deaf culture and have learned to embrace it. Someone from the National Federation of the Blind said that there was no such thing as Blind Culture. So, is there such a thing as Deaf-Blind culture? You tell me.
