Project Submission Games Analysis
- Project submission games are formal and empirical frameworks where agents strategically select, submit, and manage projects subject to evaluation rules and resource constraints.
- They are applied across domains such as participatory budgeting, public goods challenges, and online competitions, revealing both computational complexity and tractability under structured settings.
- Automated evaluation platforms and educational tools leverage these models to foster competition, collaboration, and hands-on learning through dynamic scoring and modular system architectures.
A project submission game is a formal and empirical framework for modeling, analyzing, or operating systems in which agents select, submit, and manage projects subject to evaluation, competition, and strategic or collaborative interaction. In academic literature, such games arise in domains including participatory budgeting, threshold public goods problems with project selection, educational settings with game-based submission and evaluation, and automated online competitions. Project submission games make explicit the strategies of proposers, the structure of submission and evaluation (often mediated by voting or judging rules), and the computational or sociological incentives that shape equilibrium outcomes.
1. Project Submission Games in Participatory Budgeting
In participatory budgeting (PB) and multiwinner elections, a project submission game (PSG) is defined by a set of proposers, each controlling a subset of projects, who select which subset to submit for consideration by a fixed voting rule. The formal environment is given by a tuple $(P, V, b)$, comprising a set of projects $P$, a set of voters $V$ each with an approval set $A(v) \subseteq P$, and a budget $b$. The total project set is partitioned into disjoint portfolios $P_1, \dots, P_k$, one owned by each proposer $i$; a proposer's strategy is a choice of $S_i \subseteq P_i$ for submission. The aggregate submission profile is $S = (S_1, \dots, S_k)$, which induces the candidate set $S_1 \cup \dots \cup S_k$ for the election.
The outcome is selected via a voting rule $\mathcal{R}$ (e.g., Basic Approval Voting, Phragmén, or the Method of Equal Shares), which picks a winning subset $W$ of the submitted projects subject to the budget $b$. Each proposer $i$ aims to maximize the total cost of funded projects from their own portfolio:
$$u_i(S) = \sum_{p \in W \cap P_i} \mathrm{cost}(p),$$
where $\mathrm{cost}(p)$ is the cost of project $p$.
A pure Nash equilibrium (NE) is a profile $S = (S_1, \dots, S_k)$ such that for every proposer $i$ and any alternative submission $S_i' \subseteq P_i$, $u_i(S_i, S_{-i}) \ge u_i(S_i', S_{-i})$.
The existence and tractability of NE depend on both the voting rule and the project domain. For arbitrary project costs and general PB, deciding whether a pure NE exists is computationally intractable for the investigated rules (Faliszewski et al., 13 Aug 2025). In the multiwinner setting with unit costs (committee selection), Basic Approval Voting always admits a pure NE, computable in polynomial time, whereas for Phragmén and MES further structural assumptions are needed for tractability.
The best-response problem—finding an optimal subset of projects for a proposer given the others' choices—mirrors this complexity landscape: it is intractable for general PSGs but polynomial for specific structures, such as the PSG/1 variant (each proposer submits exactly one project).
A best-response dynamics algorithm begins with all proposers submitting their full project sets and iteratively updates each proposer to their best-response submission. In practical PB datasets and in PSG/1, convergence to NE is typically rapid.
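A minimal sketch of this dynamic is given below, assuming a simple greedy, budget-respecting variant of Basic Approval Voting and exhaustive enumeration of each proposer's subsets; the data layout and tie-breaking are illustrative assumptions, not the formal definitions from (Faliszewski et al., 13 Aug 2025).

```python
from itertools import combinations

def subsets(portfolio):
    """All candidate submissions of a proposer (exponential; fine for small portfolios)."""
    items = list(portfolio)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def basic_av(submitted, approvals, cost, budget):
    """Greedy approval voting: fund submitted projects by approval count while the budget allows."""
    ranked = sorted(submitted, key=lambda p: -sum(p in ballot for ballot in approvals))
    funded, spent = set(), 0
    for p in ranked:
        if spent + cost[p] <= budget:
            funded.add(p)
            spent += cost[p]
    return funded

def utility(i, profile, portfolios, approvals, cost, budget):
    """Total cost of proposer i's own projects that get funded."""
    submitted = set().union(*profile)
    funded = basic_av(submitted, approvals, cost, budget)
    return sum(cost[p] for p in funded & portfolios[i])

def best_response_dynamics(portfolios, approvals, cost, budget, max_rounds=100):
    """Start from full submissions and iterate best responses until a pure NE is reached."""
    portfolios = [set(P) for P in portfolios]
    profile = [set(P) for P in portfolios]
    for _ in range(max_rounds):
        changed = False
        for i, portfolio in enumerate(portfolios):
            best_u = utility(i, profile, portfolios, approvals, cost, budget)
            best_S = profile[i]
            for S in subsets(portfolio):
                trial = profile[:i] + [S] + profile[i + 1:]
                u = utility(i, trial, portfolios, approvals, cost, budget)
                if u > best_u:
                    best_u, best_S = u, S
            if best_S != profile[i]:
                profile[i], changed = best_S, True
        if not changed:      # no proposer can improve: the profile is a pure NE
            return profile
    return None              # did not converge within the round limit
```

In PSG/1 the inner enumeration collapses to single-project choices, which is consistent with the rapid convergence reported for practical PB datasets.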
Setting | NE Existence Complexity | Best Response Complexity |
---|---|---|
General PB, arbitrary costs | Hard (intractable) | Hard |
Multiwinner, unit costs | Poly. (BasicAV); Hard (others) | Poly. (BasicAV/party-list) |
PSG/1 | Poly. | Poly. |
These findings formalize how, in competition-like settings such as PB, the strategic landscape of project submission depends crucially on the combinatorial structure of the rules and available choices (Faliszewski et al., 13 Aug 2025).
2. Mechanisms of Project Selection in Public Goods Games
Project selection games generalize classical public goods games by allowing agents to preselect a project scale, which sets the threshold for achievement of the group task. A group forms in which cooperators each contribute up to a fixed amount, and the project is completed once the total contribution reaches the chosen scale. Payoffs are conditional on completion:
- If the project is completed, cooperators receive the project reward net of their contribution, while defectors receive the reward without contributing.
- If the threshold is not reached, the project fails and no reward is generated.
Agents adjust their preferred scale through one of two learning mechanisms:
- Mutation: Randomly resample the preferred project scale within its allowed range.
- Imitation: Copy the project scale of the best-performing neighbor.
Strategy propagation is probabilistic and follows the Fermi function:
$$W(s_x \to s_y) = \frac{1}{1 + \exp\!\left[(\Pi_x - \Pi_y)/K\right]},$$
where $\Pi_x$ and $\Pi_y$ are the respective payoffs and $K$ is the environmental noise parameter (Zhong et al., 2019).
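A small sketch of one learning step combining the Fermi adoption rule with the mutation alternative; the noise parameter K, the scale range, and the update interface are placeholder assumptions.

```python
import math
import random

def fermi_adopt_prob(payoff_self, payoff_neighbor, K=0.1):
    """Probability of adopting the neighbor's project scale under the Fermi rule.

    Higher-earning neighbors are imitated with probability close to 1;
    K is the environmental noise (selection intensity) parameter.
    """
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / K))

def update_scale(own_scale, best_neighbor_scale, payoff_self, payoff_neighbor,
                 imitator=True, scale_range=(1.0, 10.0), K=0.1):
    """One learning step: imitators copy probabilistically, mutators resample at random."""
    if imitator:
        if random.random() < fermi_adopt_prob(payoff_self, payoff_neighbor, K):
            return best_neighbor_scale
        return own_scale
    # mutation: randomly resample the preferred project scale within its allowed range
    return random.uniform(*scale_range)
```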
A critical fraction of imitators governs the emergence of cooperation. Below this threshold, mutation dominates, leading to small project scales and suppressed cooperation; above it, imitation drives the population toward larger, more rewarding projects and a higher frequency of cooperation. The equilibrium analysis expresses the cooperators' share in terms of the project-scale gap between cooperators and defectors.
The coevolution of strategies and project scales forms a feedback loop: increased cooperation leads to larger projects, which in turn incentivize further cooperative behavior, manifesting as a higher average project scale and a higher frequency of cooperation.
Practical implications of this model include the suggestion that systems promoting imitation of successful project submissions—rather than random variation—facilitate the selection of optimal project sizes and robust cooperation among participants (Zhong et al., 2019).
3. Online Project Submission and Evaluation Systems
Automated platforms for project submission games have been developed to handle large-scale competitions, enabling seamless code submission, reproducible evaluation, and dynamic leaderboard management (Chen et al., 23 Jul 2025). In such systems, participants receive version-controlled git repositories and push code to submit; the platform, typically comprising a Node.js/React frontend and a MongoDB backend, detects new submissions and dispatches jobs to evaluation servers.
Each evaluation job is executed in an isolated Docker container with strict resource limits, addressing both compatibility and security. The detailed workflow is:
- Detect submission via git commit.
- Schedule an evaluation job.
- Run the job in a Docker sandbox, with optional dependency installation and resource controls.
- Collect and record metrics such as runtime, correctness, output quality.
- Update leaderboards and feedback channels.
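A minimal sketch of the sandboxed evaluation step follows; the container image, mount path, entry point, and resource limits are illustrative assumptions and do not reflect the platform's actual job format.

```python
import subprocess
import time

def run_evaluation(repo_dir, image="python:3.11-slim",
                   cpus="1", memory="2g", timeout_s=600):
    """Run a submission inside an isolated Docker container with resource limits.

    Returns wall-clock runtime, exit code, and captured output for scoring.
    """
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",               # no network access during evaluation
        "--cpus", cpus,                    # CPU quota
        "--memory", memory,                # memory cap
        "-v", f"{repo_dir}:/workspace:ro", # mount the submission read-only
        "-w", "/workspace",
        image,
        "python", "run_benchmark.py",      # entry point is an assumption
    ]
    start = time.time()
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return {"runtime": time.time() - start,
                "exit_code": proc.returncode,
                "stdout": proc.stdout}
    except subprocess.TimeoutExpired:
        return {"runtime": timeout_s, "exit_code": None, "stdout": "", "timed_out": True}
```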
The scoring function for complex multi-metric competitions typically aggregates several normalized metrics (e.g., runtime, correctness, and output quality) into a single ranking score, with rule-specific normalization (Chen et al., 23 Jul 2025).
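A hedged illustration of such an aggregation, assuming min-max normalization and a weighted sum; the metric names, bounds, and weights are placeholders, not the competitions' actual rules.

```python
def normalize(value, lo, hi, higher_is_better=True):
    """Min-max normalize a raw metric into [0, 1]."""
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return score if higher_is_better else 1.0 - score

def aggregate_score(metrics, bounds, weights):
    """Weighted sum of normalized metrics, e.g. runtime, correctness, output quality."""
    total = 0.0
    for name, raw in metrics.items():
        lo, hi, higher_is_better = bounds[name]
        total += weights[name] * normalize(raw, lo, hi, higher_is_better)
    return total

# example: correctness should be high, runtime should be low (all values hypothetical)
score = aggregate_score(
    metrics={"correctness": 0.92, "runtime": 41.0},
    bounds={"correctness": (0.0, 1.0, True), "runtime": (0.0, 600.0, False)},
    weights={"correctness": 0.7, "runtime": 0.3},
)
```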
The architecture supports advanced features: multi-track competitions, cloud resource scaling (e.g., AWS EC2 + Slurm), debug and strict evaluation phases, and integration with external authentication. It has been used in the Grid-Based Pathfinding Competition and the League of Robot Runners.
Component | Role | Benefit |
---|---|---|
Git Host | Submission/versioning | Archival, reproducibility |
Evaluation Server | Job scheduling/execution | Automation, consistency |
Docker | Isolation/resources | Compatibility, security |
Leaderboard | Results dissemination | Transparency, competition |
The system architecture is modular, enabling adaptation to new project submission game types with custom evaluators and user interfaces.
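One way such modularity can be realized is a pluggable evaluator interface; the sketch below is a hypothetical design, not the platform's actual API.

```python
import os
from abc import ABC, abstractmethod

class Evaluator(ABC):
    """Contract a competition track implements to plug into the platform."""

    @abstractmethod
    def evaluate(self, submission_dir: str) -> dict:
        """Run the track-specific benchmark and return named metrics."""

class FileCountEvaluator(Evaluator):
    """Toy stand-in: 'evaluates' a submission by counting its files."""

    def evaluate(self, submission_dir: str) -> dict:
        return {"files": len(os.listdir(submission_dir))}

def dispatch(evaluators: dict, track: str, submission_dir: str) -> dict:
    """Route a new submission to the evaluator registered for its track."""
    return evaluators[track].evaluate(submission_dir)
```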
4. Project Submission Games as Educational Tools
Game-based project submission is also leveraged in educational contexts for both self-assessment and collaborative learning. SpaceRaceEdu exemplifies such an application, featuring teams that compete to launch virtual rockets by completing domain-specific tasks requiring correct answers to multiple question types (multiple choice, numeric, ordering, classification) (Gómez et al., 2 Oct 2024).
Competition arises between teams striving to complete their objectives faster, while intra-team cooperation is essential for division of labor and collective problem-solving. The system emphasizes learning safety by permitting retries after failures, eschewing harsh penalties and promoting a growth mindset.
The automated reporting and progress-tracking interfaces support both students and educators: teachers create or adapt question banks and receive post-game analytics on performance; students use the game for self-assessment, benefiting from immediate feedback and iterative learning mechanisms.
A generic scoring model in such systems takes the form
$$S = \sum_{q} x_q,$$
where $x_q \in \{0, 1\}$ indicates whether question $q$ was answered correctly. Task progression and completion follow algorithmic routines that manage pending tasks, resource accrual, and game objectives.
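A minimal sketch of this scoring together with a simple task-progression step; the resource-accrual rate and the launch threshold are illustrative assumptions, not SpaceRaceEdu's actual parameters.

```python
def score(answers):
    """Number of correctly answered questions: the sum of the 0/1 indicators x_q."""
    return sum(1 for correct in answers.values() if correct)

def play_round(pending_tasks, answers, fuel=0, fuel_per_task=10, launch_threshold=100):
    """Advance team progress: each completed task accrues resources toward the objective."""
    still_pending = []
    for task in pending_tasks:
        if answers.get(task, False):
            fuel += fuel_per_task        # resource accrual for a completed task (assumed rate)
        else:
            still_pending.append(task)   # failed tasks remain pending; retries are allowed
    launched = fuel >= launch_threshold  # hypothetical launch objective
    return still_pending, fuel, launched
```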
These features ensure the game functions as both a project submission tool and an assessment environment, blending elements of competition for motivation and cooperation for skill-building (Gómez et al., 2 Oct 2024).
5. Co-Creative and Board Game Approaches
A pedagogical avenue for project submission games is the co-creation of board games within project management curricula. In these settings, students collaboratively design game-based assignments encapsulating project submission and review challenges (Gkogkidis et al., 2021).
The empirical analysis groups the observed outcomes into two categories:
- Positive characteristics: engagement with knowledge, self-assessment opportunities, enhanced creativity, open communication, and effective peer collaboration.
- Challenges: risk of lack of focus, insufficient structure, and underrepresentation of practice-oriented (versus theoretical) project scenarios.
The approach is qualitative and framework-based, borrowing structurally from problem-based learning cycles. The significance lies in increased student engagement, hands-on understanding of project management dynamics, and improved communication between participants and instructors.
Applications can be designed to maximize these strengths, for instance by integrating targeted focus areas and clearer workshop segmentation, as well as practice-based elements that more closely mimic real project submission challenges.
Dimension | Example Feature |
---|---|
Engagement with Knowledge | Playful application of theory |
Knowledge Assessment | Formulation and testing of tasks |
Creativity/Collaboration | Open-ended design, teamwork |
Structural Challenges | Need for practical, focused design |
This methodology can be generalized for constructing custom project submission games in other educational or professional domains (Gkogkidis et al., 2021).
6. Computational and Strategic Complexity in Project Submission Games
The computational analysis of project submission games highlights the dual nature of the field: while abstract PSGs may have exponentially many strategies and hard equilibrium problems, real-world variants often allow efficient algorithms under domain-specific constraints. Pure NE may fail to exist or be infeasible to find except under restricted settings, such as unit cost projects or single-submission-per-agent scenarios.
Algorithms such as best-response dynamics, exhaustive enumeration (in PSG/1), and hybrid theoretical-empirical heuristics are central in practice. Their effectiveness and convergence trajectories are tied to the game's structure, the voting rule's properties, and underlying resource constraints.
These findings interact with public goods and project selection models, where the balance of mutation-like exploration and imitation-like convergence mechanisms shapes both efficiency and cooperative stability. Optimal institutional design in such games, whether for PB, educational, or competitive settings, depends on the alignment between rule complexity, computational feasibility, and participant incentives (Faliszewski et al., 13 Aug 2025, Zhong et al., 2019).