Multiplex Mental Lexicon Model

Updated 14 November 2025
  • The multiplex model of the mental lexicon is a multi-layer network that represents words as nodes with diverse relations like semantics, phonology, and taxonomy.
  • It employs multiplex measures such as multidegree, closeness, and PageRank to quantitatively link network structure with word acquisition and processing speed.
  • Simulated lexicon growth reveals phase transitions and the emergence of a robust language kernel, highlighting implications for cognitive resilience and clinical interventions.

The multiplex model of the mental lexicon formalizes the mental word-stock as a multi-layer network in which each word is represented as a node and the edges encode relationships of various types—semantic, phonological, taxonomic, feature-based, and more. This paradigm explicitly captures the concurrent, multi-relational structure of lexical knowledge and reveals quantitative patterns in acquisition, processing, and resilience that elude single-layer analyses. Multiplex methodology has yielded insights into the kernels of early linguistic knowledge, explosive transitions in lexical organization, phase-dependent word-learning strategies, and the interplay of core and periphery in maintaining lexical robustness.

1. Mathematical Structure of the Multiplex Mental Lexicon

The multiplex lexicon is defined as a set of $N$ word-nodes replicated across $L$ layers, each corresponding to a specific relation:

$$\mathcal{M} = \{A^{(\alpha)}\}_{\alpha=1}^{L}, \qquad A^{(\alpha)} \in \{0,1\}^{N\times N}$$

where $A^{(\alpha)}_{ij} = 1$ indicates a relation (e.g., semantic association, syntactic co-occurrence, phonological similarity) between words $i$ and $j$ on layer $\alpha$. Node identity is maintained across layers (node-alignment).

To encode both intra-layer and inter-layer relations, the supra-adjacency matrix is constructed as

$$\mathcal{A}_{i\alpha,\,j\beta} = A^{(\alpha)}_{ij}\,\delta_{\alpha\beta} + c\,\delta_{ij}(1-\delta_{\alpha\beta})$$

where $c$ is the interlayer coupling weight and $\delta$ is the Kronecker delta. In block-diagonal multiplexes (typical for cognitive lexicon modeling), $c \to 0$, but for centrality or shortest-path computations, virtual interlayer jumps carry unit or tunable cost.
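
To make the block structure concrete, here is a minimal NumPy sketch that assembles the supra-adjacency matrix from node-aligned layer matrices; the two toy layers and the coupling value are illustrative assumptions, not empirical data.

```python
import numpy as np

def supra_adjacency(layers, c=1.0):
    """Assemble the supra-adjacency matrix from a list of node-aligned
    N x N layer adjacency matrices: intra-layer blocks on the diagonal,
    interlayer couplings of weight c linking each word to its own
    replicas in the other layers (the Kronecker-delta structure above)."""
    L, N = len(layers), layers[0].shape[0]
    A = np.zeros((N * L, N * L))
    for a, layer in enumerate(layers):
        A[a * N:(a + 1) * N, a * N:(a + 1) * N] = layer  # intra-layer block
    for a in range(L):
        for b in range(L):
            if a != b:
                A[a * N:(a + 1) * N, b * N:(b + 1) * N] = c * np.eye(N)
    return A

# Toy, hypothetical layers: 3 words, semantic and phonological relations.
semantic = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
phonological = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
print(supra_adjacency([semantic, phonological]).shape)  # (6, 6): N*L replicas
```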

Layer types empirically instantiated include:

  • Free word associations (e.g., South Florida norms, Edinburgh Associative Thesaurus)
  • Semantic feature sharing (e.g., McRae feature norms)
  • Co-occurrence in child-directed or adult speech (e.g., CHILDES)
  • Synonymy/taxonomy (e.g., WordNet)
  • Phonological similarity (IPA edit distance = 1)
  • Syntax (e.g., dependency co-occurrence)
  • Multimodal (visual, cross-language) extensions for multilingual and multimodal lexica

Each layer is undirected and unweighted in the canonical models, but the formalism extends to weighted edges, directionality, and more elaborate relations such as hyperedges and other higher-order structures (Stella et al., 2022, Stella et al., 2016, Huynh et al., 7 Nov 2025).
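
As an illustration of layer construction, the sketch below builds a small phonological layer using orthographic edit distance as a stand-in for the IPA edit distance used empirically; the word list is hypothetical.

```python
import networkx as nx

def edit_distance_one(w1, w2):
    """True iff w1 and w2 differ by exactly one substitution, insertion,
    or deletion (the phonological-neighbor criterion, here on spellings)."""
    if abs(len(w1) - len(w2)) > 1:
        return False
    if len(w1) == len(w2):
        return sum(a != b for a, b in zip(w1, w2)) == 1
    short, long_ = sorted((w1, w2), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

words = ["cat", "bat", "hat", "cart", "dog"]  # stand-ins for IPA forms
phon_layer = nx.Graph()
phon_layer.add_nodes_from(words)
phon_layer.add_edges_from(
    (u, v) for i, u in enumerate(words) for v in words[i + 1:]
    if edit_distance_one(u, v)
)
print(sorted(phon_layer.edges()))  # e.g., cat-bat, cat-hat, cat-cart, bat-hat
```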

2. Multiplex Network Measures and Their Cognitive Interpretations

Classical graph metrics are generalized as follows:

  • Multidegree and Aggregate Degree: For node $i$, $k_i^{(\alpha)} = \sum_j A^{(\alpha)}_{ij}$; the aggregate is $k_i^{\mathrm{multi}} = \sum_\alpha k_i^{(\alpha)}$. Hubs with high multidegree are empirically shown to be crucial for lexicon connectivity and learning (Stella, 2020).
  • Multiplex Closeness: For minimal path length $d_{i\to j}^{\mathrm{multi}}$ allowing any sequence of layer traversals,

$$c_i^{\mathrm{multi}} = \frac{1}{\sum_{j\neq i} d_{i\to j}^{\mathrm{multi}}}$$

This centrality strongly tracks empirical age-of-acquisition in early childhood (normalized gain up to +160% vs. single-layer baselines) (Stella et al., 2016).
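
A minimal sketch of this measure, assuming cost-free interlayer jumps, so that each step of a multiplex path may use an edge from any layer; this reduces to shortest paths on the edge-union of the layers. With unit-cost jumps, as mentioned in Section 1, one would instead run breadth-first search on the supra-graph.

```python
import networkx as nx

def multiplex_closeness(layers):
    """Multiplex closeness with cost-free layer switches: each step may use
    an edge from any layer, so d^multi is the shortest-path length on the
    edge-union (aggregate) of all layers. Unreachable pairs are ignored."""
    union = nx.Graph()
    for g in layers:
        union.add_nodes_from(g.nodes())
        union.add_edges_from(g.edges())
    closeness = {}
    for i in union.nodes():
        dists = nx.single_source_shortest_path_length(union, i)
        total = sum(d for j, d in dists.items() if j != i)
        closeness[i] = 1.0 / total if total > 0 else 0.0
    return closeness
```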

  • Multiplex (Versatile) PageRank: Stationary solution to

$$\pi = p\,P\pi + (1-p)\,u$$

on the supra-adjacency, where $P$ is the (column-stochastic) transition matrix, $u$ the uniform teleportation vector, and $p$ the damping factor; marginalizing replica scores over layers yields per-word centrality (Stella et al., 2016).
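
A sketch of the corresponding power iteration, reusing `supra_adjacency` from the Section 1 sketch; the dangling-replica handling is a simplifying assumption.

```python
import numpy as np

def versatile_pagerank(A_supra, N, L, p=0.85, tol=1e-10):
    """Power iteration for pi = p*P@pi + (1-p)*u on the supra-adjacency;
    replica scores are marginalized over layers into per-word centrality."""
    M = A_supra.astype(float).copy()
    col_sums = M.sum(axis=0)
    M[:, col_sums == 0] = 1.0          # dangling replicas teleport uniformly
    P = M / M.sum(axis=0)              # column-stochastic transition matrix
    u = np.full(N * L, 1.0 / (N * L))  # uniform teleportation vector
    pi = u.copy()
    while True:
        new = p * (P @ pi) + (1 - p) * u
        if np.abs(new - pi).sum() < tol:
            return new.reshape(L, N).sum(axis=0)  # marginalize over layers
        pi = new
```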

  • Layer Overlap and Redundancy: Interlayer edge overlap

$$\omega_{\alpha\beta} = \frac{\sum_{i<j} A^{(\alpha)}_{ij} A^{(\beta)}_{ij}}{\min\left(L^{(\alpha)}, L^{(\beta)}\right)}$$

where $L^{(\alpha)}$ denotes the number of edges in layer $\alpha$. Low overlap outside the semantic layers demonstrates that the multiplex representation preserves non-redundant information (Stella et al., 2016, Stella et al., 2022).

  • Viability: The concept of AND-connectivity, whereby a set $S$ induces connectivity in all layers simultaneously; the largest viable cluster (LVC) is the maximal such $S$ (Stella et al., 2017, Stella et al., 2022).
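
The LVC can be approximated with the standard fixed-point pruning scheme for mutually connected components; the sketch below assumes node-aligned `networkx` graphs.

```python
import networkx as nx

def largest_viable_cluster(layers):
    """Heuristic for the LVC: repeatedly restrict the node set to the
    largest connected component of each layer's induced subgraph until a
    fixed point, yielding a set connected in every layer simultaneously
    (AND-connectivity)."""
    nodes = set.intersection(*(set(g.nodes()) for g in layers))
    changed = True
    while changed and nodes:
        changed = False
        for g in layers:
            sub = g.subgraph(nodes)
            if sub.number_of_nodes() == 0:
                return set()
            comp = max(nx.connected_components(sub), key=len)
            if comp != nodes:
                nodes = set(comp)
                changed = True
    return nodes
```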

3. Dynamics of Lexicon Growth and Phase Transitions

Stochastic growth models simulate the evolving lexicon as words are “learned” according to empirically driven or algorithmic orderings, subject to layer-specific constraints (Stella et al., 2016):

  • Phonologically constrained assembly: New words must connect via at least one phonological neighbor in the already-learned set; semantic/syntactic features order candidate proposals but do not gate insertion (a minimal sketch follows this list).
  • Mixed heuristics: Orderings combining shortest phoneme-length, frequency, and semantic centrality produce assembly timings with up to 90% overlap with empirical age-of-acquisition (AoA) distributions, while pure length or frequency heuristics deviate markedly.
  • Phase transitions in core emergence: As the lexicon grows, a sharp transition (“explosive percolation”) occurs in the LVC size: an abrupt increase in the set of words simultaneously connected in all layers. Under normative AoA trajectories, this transition occurs at age $\sim 7.7 \pm 0.6$ years, with a jump of $\Delta L_{\mathrm{AoA}} = 420 \pm 50$ words, which is not reproduced in random or most null sequences (Stella et al., 2017).
  • Early childhood word learning: Multiplex closeness best predicts the ordering of the first 120 acquired words (normalized word gain $G \approx 0.25$). An initial phase (VELS: $t \lesssim 40$) is marked by balanced semantic and phonological influence; a later phase (LLS: $t \gtrsim 200$) shifts to local attachment to high-degree association nodes (Stella et al., 2016).
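
A minimal sketch of the phonologically gated insertion rule from the first item above; the proposal order, seed set, and retry policy are illustrative assumptions (the cited models order candidates by mixes of length, frequency, and semantic centrality).

```python
import networkx as nx

def phonologically_gated_growth(order, phon_layer, seed):
    """Assemble a lexicon: candidates arrive in `order` (e.g., ranked by a
    mix of length, frequency, and semantic centrality), but a word is
    inserted only once it has >= 1 phonological neighbor among the words
    already learned; words failing the gate are retried on later passes."""
    learned = set(seed)
    timeline = list(seed)
    pending = [w for w in order if w not in learned]
    progress = True
    while pending and progress:
        progress = False
        for w in list(pending):
            neighbors = phon_layer.neighbors(w) if w in phon_layer else []
            if any(n in learned for n in neighbors):
                learned.add(w)
                timeline.append(w)
                pending.remove(w)
                progress = True
    return timeline  # insertion order, comparable to empirical AoA rankings
```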

4. The Language Kernel and Viable Clusters

The LVC, or “language kernel,” is a subset of words that remain mutually reachable through all layers. Its properties have been extensively characterized:

  • Core structure and properties: Words inside the LVC are, on average, higher in frequency, acquired earlier, more concrete, shorter, and more polysemous; they yield faster lexical decision times and higher naming accuracy in aphasia (Stella et al., 2017, Stella, 2020).
  • Role in processing and resilience: The LVC acts as a mental navigation core, conferring both efficient retrieval (by concentrating high closeness) and robustness (buffering against random or clinical degradation).
  • Fragility to targeted attack: The empirical LVC is robust to random removals but highly susceptible to multiplex centrality-targeted attacks (PageRank, multidegree); loss of ~15–20% of high-centrality words causes explosive LVC fragmentation (Stella, 2020) (see the sketch after this list).
  • Relevance to creativity and clinical performance: Over-reliance on LVC has been linked to lower creativity in semantic fluency, while higher robustness inside the LVC predicts recovery or proficiency in aphasia naming tasks (Stella et al., 2022).
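
The fragility analysis can be sketched by removing words in decreasing centrality and tracking LVC size, reusing `largest_viable_cluster` from the Section 2 sketch; the centrality scores passed in are assumed precomputed (e.g., versatile PageRank).

```python
def targeted_attack_curve(layers, centrality):
    """Remove words in decreasing multiplex centrality and record the LVC
    size after each removal; contrast with the slow degradation observed
    under random removal. Reuses largest_viable_cluster() defined above."""
    order = sorted(centrality, key=centrality.get, reverse=True)
    current = [g.copy() for g in layers]
    sizes = []
    for w in order:
        for g in current:
            if g.has_node(w):
                g.remove_node(w)
        sizes.append(len(largest_viable_cluster(current)))
    return sizes
```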

Table: Summary of Core Lexicon Features in Multiplex Models

| Feature | LVC-Core Words | Non-LVC Words |
| --- | --- | --- |
| Frequency | High | Lower |
| Concreteness | High | Lower |
| Polysemy | Higher ($\approx 9.7$) | Lower ($\approx 3.6$) |
| Lexical decision RT | Shorter (faster) | Longer |
| Naming in aphasia | Higher success | Lower success |

5. Integration with Psycholinguistic, Cognitive, and Clinical Models

Multiplex architectural features enable reconciliation of diverse psycholinguistic phenomena:

  • Superior AoA prediction: Multiplex measures, especially closeness and versatility, consistently outperform any single-layer property in predicting empirical word-acquisition sequences for children (Stella et al., 2016).
  • Medial and suppressive inter-layer effects: Layer interaction can reveal hidden mediation or suppression among features (e.g., free-association mediating semantic-phonological relatedness), with multiplex network distance best predicting reaction times in relatedness judgment tasks (Stella et al., 2022).
  • Bridging vectors and networks: Feature-rich multiplex models (e.g., FERMULEX) combine attribute vectors (frequency, length, polysemy) with topological relations, yielding a small, highly homogeneous language kernel—undetectable by core decomposition or clustering in feature or network space alone (Citraro et al., 2022).
  • Extensions to multimodal and multilingual lexica: Inclusion of visual layers and multilingual identity/synonym links demonstrates that multimodal coupling accelerates the emergence of integrated kernels, enhances cross-language retrieval, and supports educationally relevant gains in translation tasks for heritage/multilingual learners (Huynh et al., 7 Nov 2025).

6. Limitations, Extensions, and Future Directions

Multiplex lexicon modeling presents ongoing methodological and empirical challenges:

  • Layer selection and redundancy: Not all candidate layers supply non-redundant information; structural reducibility metrics and algorithmic compressibility guide optimal model design (Stella et al., 2022, Stella et al., 2016).
  • Higher-order and weighted links: Linguistic phenomena involving multi-word expressions, templates, or graded similarity invite hypergraph or weighted adjacency generalizations.
  • Inference under uncertainty: Real data layers (free-association, corpus co-occurrence, etc.) are subject to noise and missingness; Bayesian reconstruction and joint inference across layers address this limitation.
  • Neurocognitive interface: Embedding multiplex lexical models within broader multiscale brain–mind networks remains open, with implications for cognitive neuroscience and computational psycholinguistics.
  • Clinical application: Resilience and vulnerability patterns identified in multiplex models may inform targeted rehabilitation in language impairment by focusing on core hubs and viable cluster connectivity rather than on frequency or semantic properties alone.

A plausible implication is that future computational and educational interventions will increasingly exploit the full multi-relational structure of the mental lexicon—across modalities, developmental windows, and individual differences—to optimize both human and artificial language learning and recovery.
