Identifying the most suitable base language models for GLM instantiation

Perform comprehensive comparisons among pretrained language models with relative positional encodings, including rotary embeddings, to determine which base model is most suitable for instantiating Graph Language Models (GLMs).
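A natural first screening step is to check which candidate checkpoints expose a relative positional encoding at all. Below is a minimal sketch, assuming the HuggingFace `transformers` config attributes as a proxy (`relative_attention_num_buckets` for T5-style biases, `rope_theta` / `rotary_pct` for rotary variants); the candidate list and the compatibility heuristic are illustrative assumptions, not from the paper.

```python
# Hedged sketch: screen candidate checkpoints by positional-encoding type.
from transformers import AutoConfig

# Illustrative candidates, not an endorsed list from the paper.
CANDIDATES = ["t5-small", "google/flan-t5-base", "EleutherAI/pythia-70m", "gpt2"]

def positional_encoding_type(model_name: str) -> str:
    cfg = AutoConfig.from_pretrained(model_name)
    if hasattr(cfg, "relative_attention_num_buckets"):
        return "T5-style relative bias"          # learned, bucketed bias
    if hasattr(cfg, "rope_theta") or hasattr(cfg, "rotary_pct"):
        return "rotary (RoPE)"                   # rotary positional embedding
    return "absolute/other (likely incompatible with the GLM construction)"

for name in CANDIDATES:
    print(f"{name}: {positional_encoding_type(name)}")
```

A full comparison would then instantiate a GLM from each compatible checkpoint and evaluate on the same graph tasks under matched training budgets.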

Background

GLMs are constructed by adapting transformer LMs with relative positional encodings to operate on graphs while retaining the pretrained weights. Although the experiments use T5, the framework supports other LMs with compatible positional encodings, including rotary embeddings.
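The adaptation hinges on replacing sequence-based relative positions with graph-derived ones, so that the pretrained relative-position machinery can be reused. A minimal sketch follows, assuming shortest-path distances on a Levi graph stand in for relative positions and a simplified T5-style bucketed bias (the paper's local/global GLM variants treat positions more carefully); all function names here are illustrative.

```python
# Hedged sketch: graph distances as relative positions for an additive bias.
import networkx as nx
import torch

def graph_relative_positions(graph: nx.Graph, node_order: list) -> torch.Tensor:
    """Pairwise shortest-path distances between nodes, in token order.

    Unreachable pairs get a large sentinel so they land in the farthest bucket.
    """
    n = len(node_order)
    dist = torch.full((n, n), 1_000, dtype=torch.long)
    lengths = dict(nx.all_pairs_shortest_path_length(graph))
    for i, u in enumerate(node_order):
        for j, v in enumerate(node_order):
            if v in lengths.get(u, {}):
                dist[i, j] = lengths[u][v]
    return dist

def t5_style_bias(distances: torch.Tensor, num_buckets: int = 32,
                  num_heads: int = 8) -> torch.Tensor:
    """Map distances to buckets and look up a learned per-head bias,
    mirroring T5's relative position bias (simplified: linear buckets,
    distances only, whereas T5 uses signed, log-scale buckets)."""
    buckets = distances.clamp(max=num_buckets - 1)
    embedding = torch.nn.Embedding(num_buckets, num_heads)
    bias = embedding(buckets)                   # (n, n, heads)
    return bias.permute(2, 0, 1).unsqueeze(0)   # (1, heads, n, n) for attention

# Toy Levi graph for the triplet (dog, is a, animal).
g = nx.Graph()
g.add_edges_from([("dog", "is a"), ("is a", "animal")])
order = ["dog", "is a", "animal"]
bias = t5_style_bias(graph_relative_positions(g, order))
print(bias.shape)  # torch.Size([1, 8, 3, 3])
```

Because the bias table is the pretrained model's own relative-position embedding in the real construction, swapping sequence offsets for graph distances preserves the learned weights while changing what "distance" means.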

The authors explicitly state that a systematic study to identify the most suitable LM for GLM instantiation remains to be done.

References

Per the paper's limitations section, comprehensive comparisons to determine the most suitable base models for the GLM framework remain for future investigation.

Plenz et al. (2024), "Graph Language Models," arXiv:2401.07105, Limitations section.