Intrinsic transformer mechanisms behind narrative-prompt prediction gains
Investigate whether an intrinsic property of transformer-based language models produces improved predictive accuracy under narrative prompting, independent of OpenAI usage-policy constraints, and, if so, characterize the underlying mechanism, which may involve hallucination processes within the attention layers.
References
Another explanation, though, is that something intrinsic to narrative prompting allows the Transformer architecture to make more accurate predictions, even apart from the confound created by OpenAI's terms of service. This may be related to how hallucinated fabrications arise within the attention mechanisms of the model. But because we studied only the two OpenAI GPT models, we can offer no more than speculation: if that is indeed the case, the terms-of-use confound is always present and cannot be separated from any intrinsic effect.