Melbourne School of Psychological Sciences - Research Publications

  • Constructing Word Meaning without Latent Representations using Spreading Activation
    Shabahang, KD ; Yim, H ; Dennis, SJ (Cognitive Science Society, 2022-01-01)
    Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online, during retrieval, to avoid these limitations. We implement our spreading-activation account of word meaning in an associative net: a one-layer, highly recurrent network of associations, called a Dynamic-Eigen-Net, which we developed to address the limitations of earlier associative nets when scaling up to unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word-similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
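    The retrieval-time mechanism the abstract describes can be sketched as activation spreading iteratively through a Hebbian association matrix, with no latent representation ever induced offline. The toy vocabulary, co-occurrence pairs, and update rule below are illustrative assumptions, not the paper's Dynamic-Eigen-Net implementation:

    ```python
    import numpy as np

    # Toy vocabulary with one-hot word vectors (illustrative only; the paper
    # trains on a natural-language corpus, not hand-picked pairs).
    vocab = ["cat", "dog", "bone", "fish"]
    dim = len(vocab)
    one_hot = np.eye(dim)

    # Hebbian outer-product associations from co-occurring word pairs.
    pairs = [("cat", "fish"), ("dog", "bone"), ("cat", "dog")]
    W = np.zeros((dim, dim))
    for a, b in pairs:
        i, j = vocab.index(a), vocab.index(b)
        W += np.outer(one_hot[i], one_hot[j]) + np.outer(one_hot[j], one_hot[i])

    def spread(probe, steps=10):
        """Spread activation through the recurrent net at retrieval time,
        renormalizing after each pass through the associations."""
        x = probe.copy()
        for _ in range(steps):
            x = x + W @ x                 # one pass of spreading activation
            x = x / np.linalg.norm(x)     # keep total activation bounded
        return x

    # Probing with "cat" activates its associates online, during retrieval.
    out = spread(one_hot[vocab.index("cat")])
    ```

    Because the weights are just accumulated co-occurrence associations, a new word pair can be added with a single Hebbian update, which is one way to picture the fast, retraining-free learning the abstract claims.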
  • Beyond Pattern Completion with Short-Term Plasticity
    Shabahang, KD ; Yim, H ; Dennis, SJ (Cognitive Science Society, 2020-01-01)
    In a Linear Associative Net (LAN), all input settles to a single pattern, so Anderson, Silverstein, Ritz, and Jones (1977) introduced saturation to force the system to reach other steady states in the Brain-State-in-a-Box (BSB). Unfortunately, the BSB is limited in its ability to generalize because its responses are restricted to previously stored patterns. We present simulations showing how a Dynamic-Eigen-Net (DEN), a LAN with Short-Term Plasticity (STP), overcomes the single-response limitation. Critically, a DEN also accommodates novel patterns by aligning them with encoded structure. We train a two-slot DEN on a text corpus and provide an account of lexical-decision and judgement-of-grammaticality (JOG) tasks, showing how grammatical bi-grams yield stronger responses than ungrammatical bi-grams. Finally, we present a simulation showing how a DEN is sensitive to syntactic violations introduced in novel bi-grams. We propose DENs as associative nets with greater promise for generalization than the classic alternatives.
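    The BSB dynamics and the role of short-term plasticity sketched in the abstract can be illustrated with a minimal simulation. The stored patterns, the settling rule, and the transient Hebbian boost standing in for STP are all assumptions for illustration, not the paper's DEN formulation:

    ```python
    import numpy as np

    # Two orthogonal bipolar patterns stored by Hebbian auto-association.
    p1 = np.array([1.0, 1.0, -1.0, -1.0])
    p2 = np.array([1.0, -1.0, 1.0, -1.0])
    W = (np.outer(p1, p1) + np.outer(p2, p2)) / 4.0

    def settle(x, W, alpha=0.5, steps=50):
        """BSB dynamics: linear feedback with saturation at the [-1, 1] box walls."""
        for _ in range(steps):
            x = np.clip(x + alpha * (W @ x), -1.0, 1.0)
        return x

    # Saturation yields pattern completion: a noisy probe settles onto p1.
    noisy = p1 + np.array([0.2, -0.3, 0.1, 0.2])
    completed = settle(noisy, W)

    # The fixed weights do not complete a weak *novel* pattern: the net only
    # amplifies the components spanned by the stored patterns.
    novel = np.array([1.0, 1.0, 1.0, -1.0])
    stuck = settle(0.3 * novel, W)

    # A transient Hebbian boost along the probe (a stand-in for STP) makes
    # the novel pattern itself a stable state, beyond pure pattern completion.
    W_stp = W + 0.5 * np.outer(novel, novel)
    kept = settle(0.3 * novel, W_stp)
    ```

    The contrast between `stuck` and `kept` is the point: with fixed weights the response is restricted to stored structure, while the probe-driven weight change lets the net stabilize a pattern it never encoded.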