
Director weekly highlights 25 Mar

Jane Simpson, Nicholas Evans, Outreach

Date: 25 March 2022

Surprise! Nick Evans is in Melbourne today to give a talk, "Mirror or compass?", at Monash. He didn't know he would actually be listening to a delectable array of papers by his former students, and blushing… Here's the introduction from the organisers, Rachel Nordlinger and Alice Gaby.

This festschrift event is timed to correspond neither with Nick’s 65th birthday, nor with his retirement, but we decided to do it anyway so as to leave space and time for many more to come. In this event we have focussed on Nick’s past graduate students who come together today to acknowledge and celebrate the profound impact he has had on our intellectual lives, our careers and our love of linguistics. Nick is a truly inspiring teacher who has infected us all with his curiosity, insight and enthusiasm for linguistic diversity and its interactions with people and cultures. As a mentor and supervisor he is supportive, encouraging, and sometimes a little daunting, but always intellectually stimulating with a lot of fun thrown in.   

Nick, thank you for all that you have done for us and for generations of Australian linguists. We are so lucky to have you. 

Two other items:   

Our heartfelt thanks to all the CoEDL members who volunteered to run the Australian Computational and Linguistics Olympiad this year - ensuring that Australia's high school students have the chance to experiment with languages and hone their analytic skills.

Congratulations to Nick Thieberger on his splendid public lecture "How do we know what we know about the world's languages?", given as the Lansdowne lecture at the University of Victoria (Canada) Humanities Computing and Media Centre on 21 March 2022. Watch here for a brilliant and subtle account of the evolution of making sound recordings discoverable and accessible through archives, and of PARADISEC's role in this.

Nick’s concern for speakers of small languages having equitable access to recordings of their languages leads naturally on to this week’s important piece on the “voice divide”. 

Jane Simpson
Deputy Director


CoEDL Spotlight: Judith Bishop 

Nick Evans introduces Judith Bishop: 

This week our spotlight turns to Judith Bishop, who for most of CoEDL's operation has been the key person in coordinating with our industry partner, Appen, and making that the exceptionally fertile relationship it has been. It has been a special pleasure to work with Judith again through this time, because we have worked together since she was a student back in Melbourne in the 1990s, torn between competing careers as a poet (she has published four anthologies of her own compelling poetry, and translated the poetry of Philippe Jaccottet and René Char) and as a phonetician, with her doctoral thesis on the intonation of Manyallaluk Mayali. She eventually ended up at Appen, where she coordinated and grew significant parts of their language processing program. In the last few years, she has become increasingly interested in the 'voice divide', which she discusses here, and in how language technology can best reckon with the needs of linguistic equity.


CoEDL Partner Investigator Judith Bishop 

The voice divide in human-AI interfaces: towards an inclusive future 

Every AI product with an interface to humans must learn to work with language. The AI language interface has just a few components, directly mimicking the human language processor with which we are familiar: language recognition (hearing/reading words and signs); understanding (meaning/intention); reference (real-world knowledge and relations); and language production (speaking/writing words and signs). Multimodal processing is now adding vision to these processes. In technology as in life, visual recognition and understanding provide support for the interpretation of linguistic expression.

I feel I cannot overstate the future significance of this mimicry. AI is literally rebuilding the human language (and vision) capability from scratch, and it’s doing so in hundreds of languages. One day it may be thousands. Linguists and computational linguists around the globe are helping to create and annotate data for AI models at all points of this stack, bringing their passion for languages, their history and complexity, to the task of understanding variation in language and what computers need to know to handle it.   

In the process, they're doing important work to overcome what I call the voice divide. (When I say voice, I mean that in the broadest sense, including written texts, signing, emotional, gestural and cultural expression.) The voice divide is the digital gulf that's created when people can't be understood by AI devices that process human voices, and so are excluded from the future that's promised by AI – without a say in the matter.

On the far side of the voice divide is every speaker of the ‘long tail’ of thousands of languages for which the commercial incentive to develop interfaces doesn’t (yet) exist and other sources of funding are scarce or non-existent.  

Plus, in all languages, the ethnically and linguistically diverse, the accented, the code-switchers, the elderly and those living with a voice impairment of any kind. For all these people, AI voice technologies may work to some extent – but their poorer performance creates a barrier to interaction, frustration, and the harm of knowing that this shiny new technology – whose lesser performance in itself encodes bias, perpetuating histories of the same – was not made for me.

Some may prefer exclusion from the future being made with AI. But having a preference is predicated on the prior possibility of choice. And AI in all its manifestations – voice assistants, metaverse experiences, voice search, recommendation engines, predictive analyses of health, to name just a few – may become as inescapable as the internet is today for conducting daily life in many regions of the earth.   

As linguists, we need to ask what the future could look like – and what it would mean for AI to be truly inclusive of the richly human variation that exists both in language and in culture. Indigenous peoples may ask with Dr. Hēmi Whaanga, “Is AI inevitable, inescapable, a fait accompli for Indigenous peoples?”. They may ask, “How do we imagine a future with AI that contributes to the flourishing of all humans and non-humans?” And they may answer with Dr. Whaanga: “We need to be part of the dialogue on establishing global principles and standards for the use of AI to ensure that it is not used to perpetuate societal biases, inequalities and global homogenization.”1  

1 All three quotes are from the Indigenous Protocol and Artificial Intelligence position paper (2020). On the perpetuation of bias, see: "'I don't Think These Devices are Very Culturally Sensitive.'—Impact of Automated Speech Recognition Errors on African Americans".
