Technology for subtitling: a 360-degree turn

Keywords: subtitling, subtitling technology, 360º content, reception study, virtual reality, usability

Abstract

Subtitling has become one of the most important audiovisual translation modes and cannot be studied outside the technological context that makes it possible. At the same time, new audiovisual media, such as 360º videos, are emerging, and the need to subtitle this type of content to make it accessible is evident. This article first reviews existing subtitling technology in order to contextualise the study. It then reviews the main immersive environments (3D, augmented reality and virtual reality) and their implications for subtitling. Since the study focuses on virtual reality, the main challenges of subtitling 360º content are presented. To meet the subtitling needs of this type of video, a first version of a subtitle editor was developed and presented to twenty-seven professional subtitlers, who tested the tool and contributed their opinions and suggestions. The study demonstrates the importance of carrying out usability tests with end users when developing specific software, as well as the challenges that subtitlers face in new audiovisual media such as 360º content.
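
One challenge alluded to above is that, in 360º video, the speaker may lie outside the viewer's current field of view, so a subtitling tool may need to anchor each cue to a position in the sphere and guide the viewer towards the speaker. The following Python sketch illustrates that idea only; the Cue360 structure, its speaker_longitude field and the guidance() helper are illustrative assumptions, not the data model of the editor evaluated in the article.

# Illustrative sketch (not the article's editor): a cue that stores the
# speaker's angular position and a helper that decides whether to add a
# directional arrow when the speaker is off-screen.
from dataclasses import dataclass

@dataclass
class Cue360:
    start: float              # cue start time, in seconds
    end: float                # cue end time, in seconds
    text: str                 # subtitle text
    speaker_longitude: float  # speaker position in degrees; 0 = initial viewing direction

def angular_offset(viewer_longitude: float, speaker_longitude: float) -> float:
    """Signed shortest angle (in degrees) from the viewer's gaze to the speaker."""
    return (speaker_longitude - viewer_longitude + 180.0) % 360.0 - 180.0

def guidance(cue: Cue360, viewer_longitude: float, fov: float = 90.0) -> str:
    """Return the cue text as-is if the speaker is within the field of view,
    otherwise prepend a left/right arrow pointing towards the speaker."""
    offset = angular_offset(viewer_longitude, cue.speaker_longitude)
    if abs(offset) <= fov / 2:
        return cue.text
    return ("→ " if offset > 0 else "← ") + cue.text

if __name__ == "__main__":
    cue = Cue360(start=12.0, end=15.5, text="Look behind you!", speaker_longitude=150.0)
    print(guidance(cue, viewer_longitude=0.0))    # speaker off-screen: arrow added
    print(guidance(cue, viewer_longitude=140.0))  # speaker on-screen: plain text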

Author Biography

Belén Agulló, Universitat Autònoma de Barcelona

Belén Agulló is a predoctoral researcher in the Department of Translation, Interpreting and East Asian Studies at the Universitat Autònoma de Barcelona (UAB). The topic of her PhD is subtitling for the deaf and hard of hearing in virtual reality. She works in the EC-funded H2020 project ImAc (Immersive Accessibility). She is a member of the research group Transmedia Catalonia (http://grupsderecerca.uab.cat/transmedia/node/274). She holds an MA in Audiovisual Translation from UAB. She has taught the Audiovisual Translation course in the Translation and Interpreting degree at UAB (2017-2018/2018-2019). She currently teaches the game localisation specialisation in several audiovisual translation master’s degrees: Universidade de Vigo (Spain), Universitat Pompeu Fabra (Spain), ISTRAD (Spain) and Université de Strasbourg (France). Before starting her academic career, she worked in the game localisation industry for more than five years as a Translator and Reviewer, Project Manager, Consultant and Department Director (Translation and QA), and she was also a co-founder of the company KiteTeam. She has spoken about game localisation around the world, for example at the Game Localization Summit (GDC) in San Francisco and at the Localization World Conference in Berlin and Barcelona. Her research interests include audiovisual translation, game localisation and media accessibility.

Published
28/01/2021
Section
ARTICLES