ANU-CoEDL ZOOM Seminar: Do sequence-to-sequence model generalisations follow linguistic tradition for phonological processes?, Saliha Muradoglu, 9 June
Speaker: Saliha Muradoglu
When: 9 June 2021, 4pm-5pm
Where: via zoom (please email CoEDL@anu.edu.au for zoom link invitation)
Sequence-to-sequence models are considered state-of-the-art for word-formation tasks such as the SIGMORPHON shared tasks on morphological inflection (2016-2021). They are often capable of modelling subtle morphophonological details with limited training data. Despite their success, the opaque nature of neural models makes it difficult to analyse and evaluate the generalisations they produce. To compare the generalisations these models form with those of linguistic tradition, we experiment with phonological processes on a constructed language. We establish that the models can learn 28 different phonological processes of varying complexity. We explore whether the models generalise over linguistic categories such as vowels and consonants, whether they learn a representation of internal word structure, and finally whether they capture more complex phenomena such as rule ordering. We also show that negative evidence is crucial for capturing detailed phonological patterns.
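To illustrate the kind of phenomenon the abstract refers to, here is a minimal sketch (not taken from the talk; the rules and forms are hypothetical examples in the spirit of a constructed language) showing how the ordering of two simple phonological rewrite rules changes the surface form a model would have to learn:

```python
# Hypothetical illustration of rule ordering in phonology.
# Rule A: delete a word-final vowel.
# Rule B: devoice a word-final voiced obstruent.
# Applied in different orders, the same underlying form surfaces differently.

VOWELS = set("aeiou")
VOICED_TO_VOICELESS = {"b": "p", "d": "t", "g": "k", "z": "s"}

def delete_final_vowel(word: str) -> str:
    """Delete a word-final vowel: 'tagu' -> 'tag'."""
    return word[:-1] if word and word[-1] in VOWELS else word

def devoice_final_obstruent(word: str) -> str:
    """Devoice a word-final voiced obstruent: 'tag' -> 'tak'."""
    if word and word[-1] in VOICED_TO_VOICELESS:
        return word[:-1] + VOICED_TO_VOICELESS[word[-1]]
    return word

def apply_rules(word: str, rules) -> str:
    """Apply an ordered sequence of rewrite rules to an underlying form."""
    for rule in rules:
        word = rule(word)
    return word

underlying = "tagu"
# Feeding order: vowel deletion exposes the final /g/ to devoicing.
print(apply_rules(underlying, [delete_final_vowel, devoice_final_obstruent]))  # -> tak
# Opposite order: devoicing applies first and sees no final obstruent.
print(apply_rules(underlying, [devoice_final_obstruent, delete_final_vowel]))  # -> tag
```

A sequence-to-sequence model trained only on input-output pairs must recover such interactions implicitly, which is why the generalisations it forms are worth comparing against the explicit ordered-rule analysis of linguistic tradition.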
Vylomova, E., White, J., Salesky, E., Mielke, S. J., Wu, S., Ponti, E., ... & Hulden, M. (2020). SIGMORPHON 2020 shared task 0: Typologically diverse morphological inflection. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.