Test generalization of context-dependent disclosure to closed‑source frontier models

Investigate whether the context-dependent AI-identity disclosure patterns observed in open-weight models also occur in closed-source frontier models under the same professional personas and epistemic probes.

Background

The evaluation focuses on open-weight models to analyze parameter counts and training variations; thus, external validity to closed-source frontier systems is unconfirmed.

Given similar transformer-based pretraining and RLHF paradigms, similar context-dependence is plausible but unverified without direct empirical testing.

References

Whether similar patterns exist among frontier closed-source models requires direct empirical testing.

Self-Transparency Failures in Expert-Persona LLMs: A Large-Scale Behavioral Audit (2511.21569 - Diep, 26 Nov 2025) in Limitations and Future Directions (Discussion)