Towards inclusive digital events

Live interpretation during the Climate:Red virtual summit: a multifaceted challenge

Three weeks have passed since the IFRC’s virtual Climate:Red summit came to an end. Plenty of time to reflect on our experience, what worked, and what did not work as well as expected. The event offered live interpretation in all four IFRC official languages, an unusual yet critical feature in the digital space for event organisers, the audience, and interpreters alike.

A few words on Climate:Red

Climate:Red was organised by the IFRC’s Solferino Academy (the IFRC’s hub dedicated to innovation and future foresight) and the Climate Centre, with support from several National Societies. This Red Cross Red Crescent event was innovative in various ways, among which the following are particularly relevant for our topic of interest:

  • OpenLab (a regular partner of the Solferino Academy) developed https://climate.red, a bespoke landing page where participants could browse the schedule, show their interest in particular sessions, and access them.
  • Climate:Red offered around 200 public sessions (keynotes, panels and workshops) over 31 consecutive hours.
  • Seventeen main stage sessions ran on Zoom; most were live-streamed through YouTube, embedded on the event website, and live-interpreted into French, Arabic, English and Spanish. Live interpretation enabled thousands to experience something that is often the prerogative of those who sit in conference rooms.
  • Fourteen out of the seventeen main stage sessions had interpreters using an online bespoke interpretation interface.

It is worth noting that the elements above generated a lot of excitement and high expectations, but also brought their share of scepticism and doubt, a natural reaction when facing novelty.

Digital but human-driven

Digital does not mean without people, and we had a lot of them: an audience of approximately 10,000, dozens of panelists and workshop facilitators, and up to 70 people working behind the scenes, sleeves rolled up, to make the event a success, including fourteen interpreters. This big team collaborated smoothly. That is all the more impressive considering that we relied on tools such as Asana and Slack that most team members had never used before, and that the team represented a rich diversity in gender, age, organisational affiliation, time zone and, perhaps most importantly, skills and competences. We had professionals in agile project management, information management, web development, promotion and advocacy, research, community management, video editing, and much more. Not everybody was digital-savvy; yet, as pioneers, we often went beyond our job descriptions to overcome emerging challenges, including a couple of glitches. We supported each other with a focus on audience and user experience. The interpreters contributed positively to this joint effort.

Interpreting before the digital era

As with many jobs that existed before the rapid growth and adoption of digital innovations, digitalisation is shaking up interpreters’ work. Until February or March 2020, they would travel to venues, meet their peers, join their respective booths (one booth per language, usually with two interpreters per booth), do sound checks, and start interpreting once the show began. Seeing speakers through the soundproof glass that separated them from the conference room helped them deliver a livelier interpretation. In a given booth, the two interpreters usually rotate every 30 minutes, as the job is very tiring. Before going digital, they could talk to one another or use hand signals to coordinate rotations, but also relays – interpreting not from the floor language directly but from an interpretation made into another language (the relay language).

Digital changes interpretation: the learning curve can be steep for event organisers and interpreters alike.

Digital innovations are often short-lived and navigating the offer can be overwhelming. Companies from the tech industry fight a constant battle to stay ahead, while users struggle to adapt to and choose among continually emerging and changing solutions. For video conferences, there are multiple solutions (e.g. Webex, Microsoft Teams, Google Meet, Skype for Business, or Zoom) that do not offer precisely the same options or services but target overlapping customer segments. Not all tools provide the functionalities required for interpretation.

Some solutions have built-in interpretation modules or allow pairing with third-party interpretation tools. Yet humans are still needed to interpret. Artificial Intelligence interpreters could be a solution, but they are neither reliable enough nor “human” enough yet (e.g. see this World Economic Forum post). Metallic-sounding, unnatural voices fail to generate the empathy expected at a public event run by an organisation that holds Humanity as one of its core principles.

The learning curve can be steep for everybody involved, and managing the interpretation service is challenging when the venue is digital. First, organisations, teams and interpreters use or are familiar with different tools and processes, sometimes constrained by internal policies, and keeping track can be challenging. Second, interpreter coordinators are suddenly required to go digital and may lack the necessary digital skills. Relying on IT or tech-savvy professionals is tempting; however, they may not have experience working with interpreters.

Climate.red’s solution

The event offered interpretation for the seventeen main stage sessions. We ran them on Zoom, with facilitators, panelists and interpreters in the “room”.

  • Fourteen of these sessions had interpreters connected to Zoom to listen to the floor and output their interpretation through Climate:Red’s bespoke interpretation tool (see screenshot below). OpenLab developed it, with the support of a professional interpreter, to overcome a Zoom limitation: the software only allows streaming the floor audio channel along with the video, not the interpretation channels, making it impossible for participants to access their preferred language.
  • The bespoke tool only allowed one-way interpretation (interpreters usually work both ways, e.g. English to Arabic and Arabic to English) and did not support relay interpretation. Developing such functionalities would have required far more resources (cost, time, people) than were available. Whenever we expected panelists to speak in more than one language, we had to run interpretation directly in Zoom, through its dedicated module (available only on Pro accounts). Three sessions ran this way, with participants entering the meeting through a Zoom link provided on the website rather than following the session on climate.red.
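To illustrate the constraint above, here is a minimal, hypothetical sketch (not the actual climate.red code; all names are illustrative) of the decision a participant’s browser has to make: since the embedded YouTube stream only carries the floor audio, choosing any other language means muting the embed and playing a separate interpretation stream instead.

```javascript
// Hypothetical sketch: decide what the page should play once a
// participant picks a language. The YouTube embed only carries the
// floor audio, so interpretation must come from a separate stream,
// and the embed must be muted to avoid hearing both at once.
function selectAudio(chosenLanguage, floorLanguage) {
  if (chosenLanguage === floorLanguage) {
    // Floor language chosen: the embedded video's own audio suffices.
    return { muteEmbed: false, interpretationStream: null };
  }
  // Any other language: mute the embed and play the matching
  // interpretation channel delivered by the bespoke tool.
  return { muteEmbed: true, interpretationStream: chosenLanguage };
}
```

This also hints at why unmuted embeds caused double audio for some participants: the two sources are independent, and nothing forces them to be mutually exclusive unless the page enforces it.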

Keys to successfully managing interpreters

Climate:Red had 14 interpreters working in four shifts to cover the entire 31-hour event, plus a dedicated coordinator. The coordination job proved unexpectedly overwhelming, emotionally challenging and caffeine-demanding. To some extent, it felt like working in an emergency context, and it would not have been possible without support from the team, especially a second person to back up coordination. Yet it was worth it, as we may have contributed to defining a new profile in the Red Cross Red Crescent.

Roles and responsibilities

  • Coordinate and provide custom support to 14 interpreters;
  • Make sure interpreters have all the materials and information needed to access the Zoom rooms, get into their virtual booths, and communicate with each other and with the coordinator;
  • Write a custom guide on how to interpret during Climate:Red, including technical requirements and instructions;
  • Support the developers in troubleshooting the bespoke interpretation module;
  • Liaise with the production team;
  • Manage interpretation directly in Zoom (for the three full-Zoom sessions).

Succeeding requires:

  • Speaking to interpreters in terms that they understand;
  • Assessing their proficiency in the use of digital interpretation tools;
  • Building capacity in a short timeframe, enabled by efficiently building trust (reliability, humility and humour help);
  • Taking the blame when technology fails;
  • Being patient and rephrasing when the message doesn’t go through;
  • Getting on one-to-one calls when an interpreter is struggling more than the others;
  • Always keeping in mind that interpreters have different backgrounds (culture, digital literacy) and are familiar with different devices;
  • Not sleeping much. Interpreters can be based in different time zones from the great majority of the production team; for us, it meant that only evenings and nights (sometimes up to 2am) were left for tests and training.

What worked well

  • Participants could follow the conference in the language of their choice (around 3,000 of them in a language other than English);
  • Interpreters used the bespoke tool and understood how to work with us;
  • Leveraging more digital-savvy interpreters to support their peers;
  • Dedicated WhatsApp interpreters groups for each session;
  • Troubleshooting and debugging;
  • Relationship with interpreters;
  • Collective and individual training;
  • Liaison with the production team.

What was challenging or didn’t work as well as expected

  • Back-to-back sessions required interpreters to jump from one session to another. We solved this by having one interpreter from each booth join the next session before the previous one ended. This unusual, on-the-fly solution benefited from our relationship-building efforts.
  • The bespoke interpretation tool only allowed one-way interpretation. Our mixed solution (interpretation on climate.red, but sometimes directly in Zoom) may have confused participants and required much effort to manage interpreters.
  • Defining unambiguous and shared terms for technical and production coordination during the event.
  • When using the bespoke interpretation interface, interpreters had no way of listening to their colleagues’ interpretation, something they usually do for the sake of coherence.
  • Interpretation was delivered on climate.red independently of the YouTube stream’s audio, meaning that if the audience did not mute the original YouTube sound, they would hear both the floor and the interpretation audio.
  • We had to fix a bug after the event started: the custom interpretation tool worked well overall, but some display issues created confusion and stress among interpreters.
  • Surprisingly, Arabic is not among Zoom’s default display languages for interpretation. Custom languages can be added, but this does not work well unless all interpreters use Zoom version 5.2.1 or higher: those who are up to date can use the interpretation module, while those who are not cannot. We had to use Portuguese as a proxy for Arabic.
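The version constraint above is the kind of thing an organiser can check before the event rather than discover during it. A hypothetical sketch (the 5.2.1 threshold comes from our experience; the function itself is illustrative):

```javascript
// Hypothetical sketch: custom interpretation languages only work if
// every interpreter's Zoom client is at least version 5.2.1, so
// compare a reported version string against that threshold.
function supportsCustomLanguages(version) {
  const parts = version.split(".").map(Number);
  const required = [5, 2, 1];
  for (let i = 0; i < required.length; i++) {
    const v = parts[i] || 0; // treat missing components as 0
    if (v !== required[i]) return v > required[i];
  }
  return true; // exactly 5.2.1
}
```

Running such a check against every interpreter’s reported client version during onboarding would flag who needs to update before the custom language becomes unusable mid-session.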

What we could have done better & ideas to explore

  • Finish developing custom tools and run exhaustive tests well before the event, involving the interpreters who will be using them – this means being able to hire interpreters much earlier, which is not always possible;
  • Include the possibility for interpreters to listen to their booth mate, a critical feature for successful interpretation and smooth interpreter handovers;
  • Make sure interpreters are aware and able to adapt to last-minute technical or schedule changes;
  • Manage risk by considering even the most unlikely events (“s**t happens”)
  • Find a way to maintain the interpretation service once the event is over: videos are available on climate.red, but interpretation is no longer available.
  • Be more inclusive by adding closed captions or subtitles in all four languages for people with hearing disabilities.
  • To be more inclusive still, add more languages beyond the four official languages of the IFRC.