By MAUREEN TOWEY
November 5, 2017
Seven years ago, when Rachel Kolb was 20, her friends pitched in to help her learn how to hear music. She was born profoundly deaf and had recently received a cochlear implant to give her partial hearing. “They were so gracious — they made a playlist and annotated it: At this time in the song, it’s this instrument coming in,” she recalled. “So I learned to recognize, oh, that’s a piano, because my friend wrote down, ‘At 35 seconds the piano starts to play.’ ”
Rachel is a former Rhodes scholar and current doctoral student whom I met through Peter Catapano, the editor of the Disability series on the Opinion desk. I work as a senior producer in the VR department at The Times, creating both long- and short-form virtual reality videos. Part of my job is constantly looking for stories that will fit our uniquely experiential medium.
Peter introduced me to Rachel after she submitted an essay to him about her experiences of music both before and after receiving the implant. She described music as tactile and visual, not something that you just hear. We thought her story was a great match for the immersive treatment that virtual reality provides. We started to adapt Rachel's article into a storyboard and quickly settled on a VR piece that would be a mix of animation and live action, with narration from Rachel.
When we met Rachel in person, we were excited to see that she had a natural on-camera presence: vibrantly intelligent and self-possessed. When we worked with her, she read our lips, since none of our team members knew American Sign Language (ASL). But lip-reading for more than two hours is tiring, and it is less effective for large groups, so Rachel often uses a sign language interpreter.
During our continuing conversations about the development of the piece, she suggested that we try to find another deaf collaborator for our team. This made a lot of sense: we wanted to keep the piece deaf-accessible, and we knew that more deaf perspectives during the creative process would strengthen the final product. After some research, we found James Merry, an animator who works for the production company Squint/Opera in London.
In person, James is quiet, but his animations are fast and lively. We nudged him toward a style that was loose and hand-drawn. (VR is very high-tech, and we wanted the animations to feel warm and approachable.) Luckily, James had already worked in VR and knew how to adapt his animations to an immersive environment. He was an easygoing collaborator, bringing new ideas to the table during each stage of development.
James does not have a cochlear implant; he uses hearing aids. Like Rachel, he reads lips. The phone is often not an ideal way to communicate with a deaf person, so we used video chat or email to correspond and give creative notes. I went to London to work with James and his colleagues for a few days, and interacting in person helped our process — but I could say that about any collaboration.
“When I work with voice-over, I’m hardly ever able to work out what’s being said from the voice track alone,” James explained. “I can hear when something is being said, but not so much what is being said. So I use a combination of the audio track, the transcript with timings, and timeline markers. The audio waveform, which I can see on the computer screen, helps me to sync things up. If I still can’t work it out, there’s usually a friendly producer nearby who can help me fill in the gaps.”
At one point, before we had built in a crucial sound cue, James threw in some cues himself, with the volume set high enough that he could hear them. When we watched and listened to that version together, the hearing collaborators dove for the volume controls. We all laughed about it, and James apologized, but it taught us a lesson: James's sound cues were the only ones we felt as well as heard. That mattered for our sound design. For the moment when Rachel's cochlear implant is turned on, we aimed for a sound cue that would jolt us physically.
We also wanted an acoustically extraordinary setting. Rachel's op-ed includes a detail about her first experiences hearing live music after getting her implant, so we asked her where those experiences happened and which had the greatest impact on her. The Santa Fe Opera, near her home in New Mexico, was at the top of her list. It generously opened its doors, and the back wall of its open-air stage, so we could film Rachel there against a twilit desert backdrop.
While the animations were being developed, we landed a partnership with the technical wizards at Lytro to create our live-action scenes. Their light field technology can bring an extraordinary amount of depth and detail to a virtual reality image. The biggest VR camera we use regularly at The Times is about the size of a basketball. Lytro's camera is the size of a sumo wrestler. It gathers about 475 times more visual information than we do for a standard VR piece. Lytro's system also features six degrees of freedom (6DoF), which enables the viewer in a headset to move around within the piece. If you are standing in front of a person, you can close the distance to them or step farther away. If you are watching an animation, you can crouch down or swivel to the side to see what the image looks like from a different angle. This freedom of movement increases the sense that you are right there with Rachel. It's what we call "presence" — the magic of VR.
During this process, we were also working closely with Brendan Baker, a sound designer whom I knew from his work on the podcast “Love + Radio,” which is known for its innovative use of sound in storytelling. I thought of him as the mad scientist of the podcast world, which was exactly what we needed to create a sound design that could accurately represent the sounds that Rachel was hearing when she first turned on her cochlear implant. The production company Q Department came in on the back end to spatialize the sound design so the sound would adjust as you moved your head in the headset.
When Rachel got that playlist from her friends, she was trying to do something she hadn’t done before: hear music. In working on this piece, we were trying to do some things we hadn’t done before. We were trying to create a VR piece that was animated, that incorporated new 6DoF technologies and that told Rachel’s story with the depth and sensitivity it deserved. In the VR department, our jobs are the most exciting when we can use a new technology to shine a light on an important story.
There are several ways to experience "Sensations of Sound" and other NYT VR pieces. You can watch through the NYT VR apps on Oculus and Daydream headsets for an immersive experience. You can watch on your phone through the NYT VR app, or by clicking on this link and waving your phone around to explore. Or you can watch on your computer and use your mouse to scroll around in the 360-degree video.