<html><head><meta name="robots" content="index,follow"><meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Paul Boersma</title></head><body bgcolor="#FFCCCC"> <table border=0 cellpadding=0 cellspacing=0 width="100%"><tr><td bgcolor="#CCCC00"><table border=4 cellpadding=9 width="100%"><tr><td align=center bgcolor="#000000"><font face="Palatino,Times" size=6 color="#999900"><b> Paul Boersma </b></font></table></table> <table><tr> <td> </td> <td><img alt=Paul src="Paul2021.jpg" width=170></td> <td><dl> <dd>Professor of <a href=/>Phonetic Sciences</a> <dd> <dd><a href="http://www.uva.nl">University of Amsterdam</a> <dd> <dd>E-mail: <a href="mailto:paul.boersma@uva.nl" target="_blank">paul.boersma@uva.nl</a> <dd>Telephone: +31–20–5253864 (secretary). </dl></td> </tr></table> <p><b>Software</b>: <a href="http://www.praat.org">P<font size=-1>RAAT</font>: doing phonetics by computer</a></p> <p><b>Research</b>: <a href="chrono.html">all my writings in chronological order</a></p> <p>My research focuses on showing (by computer simulations) how the production, comprehension and acquisition of the phonetics, phonology and morphology of a language, as well as their change over the generations, can be explained by assuming multi-level representations in the language user’s mind.</p> <p>My most recent papers in this area attempt to do this by using <b>artificial neural networks</b>, in which phonological categories emerge from the phonetic data, sometimes in combination with semantic data:</p> <table cellspacing=5 cellpadding=3> <a name="SubFree"></a> <tr bgcolor=white><td valign=top><font color=red>2022</font> <td valign=top>Paul Boersma, <a href=https://ucjtk.ff.cuni.cz/ustav/lide/zamestnanci/katerina-chladkova/>Kateřina Chládková</a> & <a href=http://www.mq.edu.au/about_us/faculties_and_departments/faculty_of_human_sciences/linguistics/linguistics_staff/dr_titia_benders/>Titia Benders</a>:<br> <a 
href="papers/2022-cjl-BoersmaChladkovaBenders.pdf"><b><font color=green>Phonological features emerge substance-freely from the phonetics and the morphology.</font></b></a> <i>Canadian Journal of Linguistics</i> <a href="https://www.cambridge.org/core/journals/canadian-journal-of-linguistics-revue-canadienne-de-linguistique/article/phonological-features-emerge-substancefreely-from-the-phonetics-and-the-morphology/308FF454E1B3E843F2DBDADB12FBB999"><b>67</b>: 611–669</a> (a special issue on substance-free phonology). <a name="BiPhonNN"></a> <tr bgcolor=white><td valign=top><font color=red>2020</font> <td valign=top>Paul Boersma, <a href=http://www.mq.edu.au/about_us/faculties_and_departments/faculty_of_human_sciences/linguistics/linguistics_staff/dr_titia_benders/>Titia Benders</a> & <a href=/klaas/>Klaas Seinhorst</a>:<br> <a href="papers/2020-jlm-BoersmaBendersSeinhorst.pdf"><b><font color=green>Neural networks for phonology and phonetics.</font></b></a><br> <i>Journal of Language Modelling</i> <a href=https://jlm.ipipan.waw.pl/index.php/JLM/article/view/224><b>8</b>: 103–177</a>. <a name="DBM"></a> <tr bgcolor=white><td valign=top><font color=red>2019</font> <td valign=top>Paul Boersma:<br> <a href="papers/2019-icphs-Boersma.pdf"><b><font color=green>Simulated distributional learning in deep Boltzmann machines leads to the emergence of discrete categories.</font></b></a><br> <a href=https://www.internationalphoneticassociation.org/icphs-proceedings/ICPhS2019/><i>Proceedings of the 19th International Congress of Phonetic Sciences</i></a>, Melbourne, 5–9 August 2019. 1520–1524. </table> <p>Earlier papers attempt to do this by using <b>Optimality Theory</b>, which still has the advantage of being able to work with less toy-like languages. The following paper shows how, in a parallel multi-level model of production, phonological (i.e. “later”) considerations can influence morphological (i.e. 
“earlier”) choices:</p> <table cellspacing=5 cellpadding=3> <a name="TreeTableaus"></a> <tr bgcolor=white><td valign=top><font color=red>2017</font> <td valign=top>Paul Boersma & Jan-Willem van Leussen:<br> <a href="papers/2017-li-BoersmaLeussen.pdf"><b><font color=green>Efficient evaluation and learning in multi-level parallel constraint grammars.</font></b></a><br> <i>Linguistic Inquiry</i> <a href="http://www.mitpressjournals.org/doi/abs/10.1162/ling_a_00247"><b>48</b>: 349–388</a>. [<a href="http://www.mitpressjournals.org/page/policies/authorposting">copyright</a>] </table> <p>Multi-level models allow us to regard the typologies of the world’s languages as <b>emergent</b> rather than as innate or synchronically functionalist. For example, <b>markedness</b> is an emergent property that follows from frequency differences in the learner’s input (and not from innate markedness constraints or from synchronically functionalist faithfulness rankings), and <b>licensing by cue</b> emerges from differences in auditory cue reliability in the learner’s input (and not from innate specific-over-general faithfulness rankings or from synchronically listener-oriented faithfulness rankings):</p> <table cellspacing=5 cellpadding=3> <a name="FaiRan"></a> <tr bgcolor=white><td valign=top><font color=red>2008/03/10</font> <td valign=top> <a href="papers/EmergeFaith.pdf"><b><font color=green>Emergent ranking of faithfulness explains markedness and licensing by cue.</font></b></a><br> <i>Rutgers Optimality Archive</i> <b>954</b>. 
30 pages.<br> <font size=-1 color=gray>Earlier version: <a href="presentations/BoersmaMFM14.pdf"><font color=green>Handout 14th Manchester Phonology Meeting, 2006/05/28</font></a>.</font> </table> <p>As another example, <b>auditory dispersion</b> in inventories of phonemes emerges from the fact that learners use in production the same constraint rankings that have optimized their comprehension (and not from innate markedness constraints or synchronically functionalist dispersion constraints). The following two papers treat the one- and two-dimensional cases, respectively:</p> <table cellspacing=5 cellpadding=3> <a name="EvoCon1"></a> <tr bgcolor=white><td valign=top><font color=red>2008</font> <td valign=top>Paul Boersma & <a href="https://www.fon.hum.uva.nl/silke/">Silke Hamann</a>:<br> <a href="papers/BoersmaHamannPhonology2008.pdf"><b><font color=green>The evolution of auditory dispersion in bidirectional constraint grammars.</font></b></a><br> <i>Phonology</i> <a href="http://dx.doi.org/10.1017/S0952675708001474"><b>25</b>: 217–270</a>.<br> Material: <a href="papers/BoersmaHamannPhonology2008_run.zip">scripts for the simulations and pictures</a>.<br> <font size=-1 color=gray>Earlier version: <a href="papers/EvolutionOfContrast.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>909</b>, 2007/04/17</font></a>.</font><br> <font size=-1 color=gray>Earlier version: <a href="presentations/BoersmaHamannHandout.pdf"><font color=green>Handout OCP 3, Budapest, 2006/01/17</font></a>.</font> <a name="EvoCon2"></a> <tr bgcolor=white><td valign=top><font color=red>2007/04/11</font> <td valign=top> <a href="presentations/VowelSpaceTromsoe2007.pdf"><b><font color=green>The emergence of auditory contrast.</font></b></a><br> Presentation GLOW 30, Tromsø. 24 slides. 
</table> <p>As for the emergence of <b>categories</b> and <b>constraints</b> themselves, that is discussed for Optimality Theory as well as neural networks:</p> <table cellspacing=5 cellpadding=3> <a name="SubFree"></a> <tr bgcolor=white><td valign=top><font color=red>2022</font> <td valign=top>Paul Boersma, <a href=https://ucjtk.ff.cuni.cz/ustav/lide/zamestnanci/katerina-chladkova/>Kateřina Chládková</a> & <a href=http://www.mq.edu.au/about_us/faculties_and_departments/faculty_of_human_sciences/linguistics/linguistics_staff/dr_titia_benders/>Titia Benders</a>:<br> <a href="papers/2022-cjl-BoersmaChladkovaBenders.pdf"><b><font color=green>Phonological features emerge substance-freely from the phonetics and the morphology.</font></b></a> <i>Canadian Journal of Linguistics</i> <a href="https://www.cambridge.org/core/journals/canadian-journal-of-linguistics-revue-canadienne-de-linguistique/article/phonological-features-emerge-substancefreely-from-the-phonetics-and-the-morphology/308FF454E1B3E843F2DBDADB12FBB999"><b>67</b>: 611–669</a> (a special issue on substance-free phonology). <a name="BiPhonNN"></a> <tr bgcolor=white><td valign=top><font color=red>2020</font> <td valign=top>Paul Boersma, <a href=http://www.mq.edu.au/about_us/faculties_and_departments/faculty_of_human_sciences/linguistics/linguistics_staff/dr_titia_benders/>Titia Benders</a> & <a href=/klaas/>Klaas Seinhorst</a>:<br> <a href="papers/2020-jlm-BoersmaBendersSeinhorst.pdf"><b><font color=green>Neural networks for phonology and phonetics.</font></b></a><br> <i>Journal of Language Modelling</i> <a href=https://jlm.ipipan.waw.pl/index.php/JLM/article/view/224><b>8</b>: 103–177</a>. 
<a name="BEH"></a> <tr bgcolor=white><td valign=top><font color=red>2003/02/28</font> <td valign=top>Paul Boersma, <a href=https://www.westernsydney.edu.au/marcs/our_team/researchers/professor_paola_escudero>Paola Escudero</a> & <a href=https://sites.google.com/view/speech-acquisition-lab/rachel-hayes-harb>Rachel Hayes</a>:<br> <a href=papers/ICPhS_751.pdf><b><font color=green>Learning abstract phonological from auditory phonetic categories: An integrated model for the acquisition of language-specific sound categories.</font></b></a><br> <i>Proceedings of the 15th International Congress of Phonetic Sciences</i>, Barcelona, 3–9 August 2003, pp. 1013–1016 (= <i>Rutgers Optimality Archive</i> <b>585</b>). <a name="FunPhon"></a> <tr bgcolor=white><td valign=top><font color=red>1998/09/14</font><br><b>book</b> <td valign=top><a href=papers/funphon.pdf><b><font color=green>Functional phonology: Formalizing the interactions between articulatory and perceptual drives.</font></b></a><br> Ph.D. dissertation, University of Amsterdam, 504 pages.<br> A hardcopy edition is available from the author for free!<br> For more detail on separate chapters, and scripts, see <a href=diss/diss.html>Functional Phonology (1998)</a>. </table> <p>Such simulations make it possible to track languages over the generations (for more, see <a href="soundchange.html">sound change</a>):</p> <table cellspacing=5 cellpadding=3> <a name="CanRai"></a> <tr bgcolor=white><td valign=top><font color=red>2007/10/27</font> <td valign=top>Paul Boersma & <a href=https://www.umass.edu/linguistics/member/joe-pater>Joe Pater</a>:<br> <a href="presentations/CanadianRaisingNELS2007.pdf"><b><font color=green>Constructing constraints from language data: the case of Canadian English diphthongs.</font></b></a><br> Handout NELS 38, Ottawa. 18 pages. 
<a name="EteOpt"></a> <tr bgcolor=white><td valign=top><font color=red>2003</font> <td valign=top> <a href="papers/EternalOptimization_Kluwer.pdf"><b><font color=green>The odds of eternal optimization in Optimality Theory.</font></b></a><br> In D. Eric Holt (ed.): <i>Optimality Theory and language change</i>, 31–65. Dordrecht: Kluwer. [<a href="papers/otcycles.txt"><font color=green>Abstract</font></a>]<br> <font size=-1 color=gray>Earlier version: <a href="papers/otcycles.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>429</b>, 2000/12/13</font></a>.</font> </table> <p>The above papers (at least those written after 2005) rely heavily on the framework of <a href="biphon.html">Parallel Bidirectional Phonology and Phonetics (BiPhon)</a>, i.e. on the idea that you use the same constraint ranking as a listener and as a speaker, and on the parallel multi-level evaluation of your phonology and your phonetics. Here is more information on that subject:</p> <table cellspacing=5 cellpadding=3> <a name="Progrm"></a> <tr bgcolor=white><td valign=top><font color=red>2011</font> <td valign=top> <a href="papers/BiPhon21.pdf"><b><font color=green>A programme for bidirectional phonology and phonetics and their acquisition and evolution.</font></b></a><br> In Anton Benz & Jason Mattausch (eds.), <i>Bidirectional Optimality Theory</i>, 33–72. Amsterdam: John Benjamins.<br> <font size=-1 color=gray>Earlier version: <a href="papers/programme.pdf"><font color=green>Handout LOT Summerschool, June 2006, and Jadertina Summerschool (<i>Rutgers Optimality Archive</i> <b>868</b>), 2006/09/12</font></a>.</font> <a name="LoaKor"></a> <tr bgcolor=white><td valign=top><font color=red>2009</font> <td valign=top>Paul Boersma & <a href="http://user.phil-fak.uni-duesseldorf.de/~hamann/">Silke Hamann</a>:<br> <a href="papers/BoersmaHamannLoans35.pdf"><b><font color=green>Loanword adaptation as first-language phonological perception.</font></b></a><br> In Andrea Calabrese & W. 
Leo Wetzels (eds.), <a href="http://www.benjamins.com/cgi-bin/t_bookview.cgi?bookid=CILT%20307"><i>Loanword phonology</i></a>, 11–58. Amsterdam: John Benjamins.<br> <font size=-1 color=gray>Earlier version: <a href="papers/BoersmaHamannLoans.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>975</b>, 2008/06/15</font></a>.</font><br> <font size=-1 color=gray>Earlier version: <a href="presentations/BoersmaHamannOCP4.pdf"><font color=green>Presentation OCP 4, Rhodes, 2007/01/20</font></a>.</font> <a name="CueCon"></a> <tr bgcolor=white><td valign=top><font color=red>2009</font> <td valign=top> <a href="papers/CueConstraints26.pdf"><b><font color=green>Cue constraints and their interactions in phonological perception and production.</font></b></a><br> In Paul Boersma & Silke Hamann (eds.): <a href="http://www.degruyter.de/cont/fb/sk/detailEn.cfm?id=IS-9783110219227-1"><i>Phonology in perception</i></a>, 55–110. Berlin: Mouton de Gruyter.<br> <font size=-1 color=gray>Earlier version: <a href="papers/CueConstraints.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>944</b>, 2007/11/11</font></a>.</font> <a name="HAspire"></a> <tr bgcolor=white><td valign=top><font color=red>2007</font> <td valign=top><a href=papers/BoersmaHausse2007.pdf><b><font color=green>Some listener-oriented accounts of h-aspiré in French.</font></b></a><br> <i>Lingua</i> <a href=http://dx.doi.org/10.1016/j.lingua.2006.11.004><b>117</b>: 1989–2054</a>.<br> <font size=-1 color=gray>Earlier version: <a href="papers/UneHausse.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>730</b>, 2005/04/13</font></a>.</font> <a name="EvoLex"></a> <tr bgcolor=white><td valign=top><font color=red>2007/07/08</font> <td valign=top> <a href="presentations/BoersmaStanford2007.pdf"><b><font color=green>The evolution of phonotactic distributions in the lexicon.</font></b></a><br> Presentation Workshop on Variation, Gradience and Frequency in Phonology, Stanford. 32 slides. 
<a name="McGur"></a> <tr bgcolor=white><td valign=top><font color=red>2012</font> <td valign=top> <b><font color=green>A constraint-based explanation of the McGurk effect.</font></b><br> In Roland Noske and Bert Botma (eds.), <i>Phonological Architecture: Empirical, Theoretical and Conceptual Issues.</i> Berlin/New York: Mouton de Gruyter. 299–312.<br> <font size=-1 color=gray>Preprint: <a href="papers/McGurk3.pdf"><font color=green>2011/07/03, 11 pages</font></a></font>.<br> <font size=-1 color=gray>Earlier version: <a href="papers/McGurk.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>869</b>, 2006/09/15</font></a>.</font> <a name="ProtPerc"></a> <tr bgcolor=white><td valign=top><font color=red>2006</font> <td valign=top><b><font color=green>Prototypicality judgments as inverted perception.</font></b><br> In Gisbert Fanselow, Caroline Féry, Matthias Schlesewsky & Ralf Vogel (eds.): <i>Gradience in Grammar</i>, 167–184. Oxford: Oxford University Press. [<a href="papers/Prototypicality.txt"><font color=green>Abstract</font></a>]<br> <font size=-1 color=gray>Earlier version: <a href="papers/Prototypicality.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>742</b>, 2005/05/17</font></a>.</font> </table> <p>Most of the papers with simulations utilize the <b><a href="gla/Welcome.html">Gradual Learning Algorithm</a></b> for Optimality Theory, which was defined in the following two papers:</p> <table cellspacing=5 cellpadding=3> <a name="EmpGla"></a> <tr bgcolor=white><td valign=top><font color=red>2001</font> <td valign=top>Paul Boersma & <a href=https://linguistics.ucla.edu/people/hayes/>Bruce Hayes</a>:<br> <a href="papers/BoersmaHayes_li2001.pdf"><b><font color=green>Empirical tests of the Gradual Learning Algorithm.</font></b></a><br> <i>Linguistic Inquiry</i> <a href="http://dx.doi.org/10.1162/002438901554586"><b>32</b>: 45–86</a>. 
[<a href="http://www.mitpressjournals.org/page/pubagreement/ling">copyright</a>]<br> <font size=-1 color=gray>Earlier version: <a href="papers/etgla.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>348</b>, 1999/09/29</font></a>.</font><br> <font size=-1 color=gray>Additional material: the <a href="gla/Welcome.html">GLA web page</a>.</font> <a name="VarOpt"></a> <tr bgcolor=white><td valign=top><font color=red>1997</font> <td valign=top><a href=papers/learningVariation.pdf><b><font color=green>How we learn variation, optionality, and probability.</font></b></a><br> <a href=/Proceedings/IFA-Proceedings.html><i>IFA Proceedings</i></a> <b>21</b>: 43–58.<br> <font size=-1 color=gray>Additional material: <a href=diss/ch15/categ_learning_script.txt><font color=green>Simulation script</font></a>.</font><br> <font size=-1 color=gray>Earlier version: <i>Rutgers Optimality Archive</i> <b>221</b>, 1997/10/12 (incorrect!).</font><br> <font size=-1 color=gray>Also appeared as: chapter 15 of <a href=diss/diss.html>Functional Phonology (1998)</a>.</font> </table> <p>Since 2007 we have routinely checked how the simulations behave if we use Harmonic Grammar instead of Optimality Theory. The following paper gives a convergence proof for the learning algorithm:</p> <table cellspacing=5 cellpadding=3> <a name="HGGla"></a> <tr bgcolor=white><td valign=top><font color=red>2016</font> <td valign=top>Paul Boersma & <a href=https://www.umass.edu/linguistics/member/joe-pater>Joe Pater</a>:<br> <a href="papers/SGAproof71.pdf"><b><font color=green>Convergence properties of a gradual learning algorithm for Harmonic Grammar.</font></b></a> <font size=-1 color=gray>[preprint, 2013/03/13]</font><br> In John McCarthy & Joe Pater (eds.): <a href=https://www.equinoxpub.com/home/harmonic-grammar/><i>Harmonic Serialism and Harmonic Grammar</i></a>, 389–434. 
Sheffield: Equinox.<br> <font size=-1 color=gray>Earlier version: <a href="papers/boersmaPaterHGGLA.pdf"><font color=green><i>Rutgers Optimality Archive</i> <b>970</b>, 2008/05/21</font></a>.</font><br> <font size=-1 color=gray>Additional material: the <a href="gla/Welcome.html">GLA web page</a>.</font> </table> <p><b>Writings by subject</b>:<br> <ul> <li>Optimality-Theoretic (OT) and neural-network (NN) modelling of bidirectional phonology and phonetics and their acquisition and evolution (1989–2021): <ul> <li><a href="soundchange.html">sound change</a> (1989–2020), partly with Silke Hamann, Joe Pater and Klaas Seinhorst <li><a href="gla/Welcome.html">Gradual Learning Algorithm and other OT and HG learning algorithms</a> (1997–2016), partly with Bruce Hayes, Clara Levelt and Joe Pater <li>linguistic processes: <ul> <li>prelexical perception: <ul> <li><a href=categorization.html>categorization</a> (1997–2022), partly with Paola Escudero, Rachel Hayes, Titia Benders and Kateřina Chládková <li><a href=ocp.html>OCP</a> (1998–2003) </ul> <li><a href="lexicon.html">lexicon</a> (word recognition, faithfulness ranking, lexical selection) (1999–2009), partly with Silke Hamann and Diana Apoussidou <li>production: <ul> <li>framework <a href="ListenerOriented.html">listener-oriented production</a> (control loops, probabilistic faithfulness) (1997–2005), partly with Silke Hamann <li>framework <a href="biphon.html">Parallel Bidirectional Phonology and Phonetics (BiPhon-OT)</a> (2005–2017), partly with Diana Apoussidou, Silke Hamann and Jan-Willem van Leussen <li>framework <a href="biphon.html">Parallel Bidirectional Phonology and Phonetics (BiPhon-NN)</a> (2019–2022), with Titia Benders, Kateřina Chládková and Klaas Seinhorst </ul> </ul> <li>representations: <ul> <li><a href="features.html">features</a> with Kateřina Chládková, Titia Benders and Mirjam de Jonge (2011–2022) </ul> <li>paralinguistic tasks: <ul> <li><a href="grammaticality.html">grammaticality and prototypicality</a> (2001–2008), partly with Bruce 
Hayes and Silke Hamann </ul> <li>applications: <ul> <li><a href="loanword.html">loanword adaptation</a> (2000–2009), partly with Silke Hamann <li><a href="PovertyOfTheBase.html">Poverty of the Base</a> (1997–2009) <li><a href="NasalHarmony.html">nasal harmony</a> (1998–2003) <li><a href="metrical.html">metrical phonology</a> with Diana Apoussidou (2003–2004) </ul> </ul> <li>Experimental phonetics and phonology (1993–2019): <ul> <li><a href="praat.html">writings on the Praat program</a> (1993–2014), partly with David Weenink and Ton Wempe <li><a href="methodology.html">methodology</a> (2005–2013), partly with Paola Escudero, Titia Benders and Kateřina Chládková <li>distributional learning with Karin Wanrooij (2013–2015): <a href="chrono.html#ContDist">continuous distributions</a>, <a href="chrono.html#Dist2mon">two-month-old infants</a>, <a href="chrono.html#DistAdult">adults</a>, <a href="chrono.html#DispersionPeaks">confounds</a> <li>statistical learning with Sophie ter Schure (2016): <a href="papers/fpsyg-07-00525.pdf">multimodal</a>, <a href="papers/SchuJungBoe2016ibd.pdf">speech–object coupling with simulations</a> <li><a href="chrono.html#F0F1">the undersampling hypothesis</a> with Kateřina Chládková (2009) <li><a href="chrono.html#BraPEurP">Portuguese</a>, <a href="chrono.html#PerIbe">Spanish</a> and <a href="chrono.html#SweLong">Swedish</a> vowels (2009–2019), with Paola Escudero, Andréia Rauber, Kateřina Chládková, Ricardo Bion and Joppe Pelzer <li><a href="CroatianFolkSinging/Welcome.html">Croatian folk singing</a> with Gordana Kovacic (2003–2006) </ul> <li>Limburgian tone (2002–2018): <ul> <li><a href="limburgian.html">Franconian tonogenesis</a> (2002–2018) <li><a href="limburgian.html#Roermond">synchronic analysis of Roermond</a> (2011) <li><a href="limburgian.html#Westerwald">pitch accent versus tone</a> with Björn Köhnlein (2008) </ul> </ul> <p><b><a href="chrono.html">All writings in chronological order</a></b></p> <p><a 
href="presentations.html">Talks and posters</a></p> </body> </html>