Show simple item record

dc.creator: Mišić, Ksenija
dc.creator: Filipović Đurđević, Dušica
dc.date.accessioned: 2023-11-03T14:03:01Z
dc.date.available: 2023-11-03T14:03:01Z
dc.date.issued: 2022
dc.identifier.uri: http://reff.f.bg.ac.rs/handle/123456789/5135
dc.description.abstract: […] error-driven learning framework (Filipović Đurđević & Kostić, 2021; Mišić & Filipović Đurđević, 2021). Filipović Đurđević and Kostić (2021) introduced the hypothesis that lexical ambiguity could be operationalized via partial overlap of multiple cues/outcomes related to meaning. Their demonstration relied on distributional semantics, namely the co-occurrence of words in context. However, although relying on natural language samples is a powerful approach, it also introduces many complexities that potentially obscure the learning mechanics behind ambiguity effects. Therefore, our aim was to perform ambiguity learning simulations from a more theoretical standpoint by employing a toy-model approach.

Theory informed our data generation process in two ways. First, error-driven learning (Rescorla & Wagner, 1972) offered a mechanism for learning ambiguous words and underscored the importance of cue competition for learning to occur (Hoppe et al., 2022). Second, we relied on psycholinguistic theory for descriptions of ambiguity, namely the distinction between polysemy (multiple related senses) and homonymy (multiple unrelated meanings) (Rodd et al., 2002). In addition to sense/meaning relatedness, we also paid attention to sense/meaning probabilities (Filipović Đurđević & Kostić, 2017). We manipulated the type of lexical ambiguity (unambiguous words, polysemes, homonyms), the balance of sense/meaning probabilities (balanced, unbalanced), and the level of cue competition (low, medium, high).

Data were generated in the following way. We modelled a total of six words: two unambiguous words, a balanced and an unbalanced polyseme, and a balanced and an unbalanced homonym. Each word was represented by one outcome. Cues were created separately for each sense/meaning and were constructed as equal-length strings of arbitrary elements. The ambiguity type was manipulated via cue overlap. Unambiguous words were predicted by a single cue set. Homonyms were predicted by three distinct sets of cues, each representing one meaning. Polysemous words were also predicted by three sets of cues; however, in addition to unique cues for each sense, the sets partially overlapped with one another in order to represent sense relatedness. Each artificial word (outcome) was presented to the network, together with its cues, an equal number of times. The balance of the sense/meaning frequency distribution was manipulated through the frequency of presentation of each cue-outcome pairing. Finally, to introduce more cue competition, we randomly sampled a number of existing cues and appended them to the cue strings of other meanings/senses. By varying the number of appended cues, we varied the intensity of cue competition. The data structure scheme is presented in Figure 1. (A minimal simulation sketch is given after the record below.)

We then compared simulations on two measures: the activation of the outcomes and the learnability (a quantitative description of the learning curves). When cue competition was present, activation decreased in the following order: balanced homonyms, unbalanced homonyms, balanced polysemes, and unbalanced polysemes, with unambiguous words activated the least. This pattern, although expected to be inversely proportional, was directly proportional to the RTs in lexical decision tasks (Filipović Đurđević, 2019; Filipović Đurđević & Kostić, 2021). The learnability measure revealed that homonyms were learned best, followed by polysemes, and then unambiguous words. Nevertheless, the existing relationship suggests that possible modifications of the generated data might lead to better insight into how learning leads to the presence of ambiguity in language.
dc.language.iso: en
dc.relation: info:eu-repo/grantAgreement/MESTD/inst-2020/200163/RS//
dc.rights: openAccess
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.source: Proceedings of the Second International Conference on Error-Driven Learning in Language (EDLL 2022), August 1-3, University of Tübingen, Germany
dc.subject: distributed meanings
dc.subject: distributed senses
dc.subject: error-driven learning
dc.title: Distributed meanings and senses in error-driven learning framework – a proof of concept
dc.type: conferenceObject
dc.rights.license: BY
dc.citation.epage: 13
dc.citation.spage: 12
dc.identifier.fulltext: http://reff.f.bg.ac.rs/bitstream/id/12685/EDLL2022_conferenceProceedings.pdf
dc.identifier.rcub: https://hdl.handle.net/21.15107/rcub_reff_5135
dc.type.version: publishedVersion
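
The abstract above describes a complete toy-model recipe: cue sets per sense/meaning, cue overlap for polysemy, frequency-based (im)balance, appended cues for competition, and error-driven learning. The following minimal Python sketch illustrates that setup, assuming plain Rescorla-Wagner updates; the cue-set sizes, presentation frequencies, competition level, and learning-rate values are illustrative assumptions, not the authors' actual parameters.

# Minimal sketch of the toy-model simulation described in the abstract.
# All concrete numbers below are illustrative assumptions.
import random
from collections import defaultdict

random.seed(1)

def cue_set(prefix, n=5):
    """One sense/meaning: an equal-length string of arbitrary cue symbols."""
    return [f"{prefix}{i}" for i in range(n)]

# Six outcomes (words): two unambiguous words, a balanced and an
# unbalanced homonym, a balanced and an unbalanced polyseme.
senses = {
    "unamb1": [cue_set("u1_")],
    "unamb2": [cue_set("u2_")],
    # Homonyms: three fully distinct cue sets (unrelated meanings).
    "hom_bal": [cue_set("hb_a"), cue_set("hb_b"), cue_set("hb_c")],
    "hom_unbal": [cue_set("hu_a"), cue_set("hu_b"), cue_set("hu_c")],
}
# Polysemes: three cue sets sharing some cues (related senses).
shared_b = cue_set("pb_shared", 2)
senses["pol_bal"] = [shared_b + cue_set(f"pb_{k}", 3) for k in "abc"]
shared_u = cue_set("pu_shared", 2)
senses["pol_unbal"] = [shared_u + cue_set(f"pu_{k}", 3) for k in "abc"]

# Balanced vs. unbalanced sense/meaning probabilities: presentation
# frequency per cue set; every word totals 30 presentations.
freqs = {
    "unamb1": [30], "unamb2": [30],
    "hom_bal": [10, 10, 10], "hom_unbal": [24, 4, 2],
    "pol_bal": [10, 10, 10], "pol_unbal": [24, 4, 2],
}

# Cue competition: append cues sampled from the pool of existing cues.
COMPETITION = 2  # e.g. 0 = low, 2 = medium, 4 = high
pool = [c for css in senses.values() for cs in css for c in cs]
events = []
for word, cue_sets in senses.items():
    for cues, f in zip(cue_sets, freqs[word]):
        extra = random.sample(pool, COMPETITION)
        events.extend([(cues + extra, word)] * f)
random.shuffle(events)

# Rescorla-Wagner (1972): weights change in proportion to prediction error.
ETA = 0.05      # learning rate (alpha * beta)
LAMBDA = 1.0    # asymptote of associative strength
W = defaultdict(float)      # (cue, outcome) -> associative weight
curves = defaultdict(list)  # per-word learning curves ("learnability")

for cues, present in events:
    for word in senses:
        act = sum(W[(c, word)] for c in cues)        # current prediction
        error = (LAMBDA if word == present else 0.0) - act
        for c in cues:
            W[(c, word)] += ETA * error              # error-driven update
    curves[present].append(sum(W[(c, present)] for c in cues))

# Outcome activation given the word's first sense/meaning cue set.
for word, cue_sets in senses.items():
    act = sum(W[(c, word)] for c in cue_sets[0])
    print(f"{word:10s} activation = {act:.3f}")

Running the sketch prints one activation value per word and collects a learning curve per word in `curves`, to which a quantitative learnability description (e.g. a fitted curve parameter) could be applied; the abstract does not specify that fit, so it is left open here.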

