Graph Condensation for Graph Neural Networks (2110.07580v4)

Published 14 Oct 2021 in cs.LG and cs.AI

Abstract: Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns. To alleviate the concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large, original graph into a small, synthetic and highly-informative graph, such that GNNs trained on the small graph and large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss and design a strategy to condense node features and structural information simultaneously. Extensive experiments have demonstrated the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to approximate the original test accuracy by 95.3% on Reddit, 99.8% on Flickr and 99.0% on Citeseer, while reducing their graph size by more than 99.9%, and the condensed graphs can be used to train various GNN architectures. Code is released at https://github.com/ChandlerBang/GCond.
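Concretely, the gradient matching described in the abstract corresponds to a bilevel objective of roughly the following form, where $\mathcal{S} = (A', X', Y')$ is the small synthetic graph, $(A, X, Y)$ is the original graph, and $D$ is a distance between gradients; the notation here is assumed for illustration rather than quoted from the paper:

$$\min_{\mathcal{S}} \; \mathbb{E}_{\theta_0 \sim P_{\theta_0}} \Big[ \sum_{t=0}^{T-1} D\big( \nabla_\theta \mathcal{L}(\mathrm{GNN}_{\theta_t}(A', X'), Y'),\; \nabla_\theta \mathcal{L}(\mathrm{GNN}_{\theta_t}(A, X), Y) \big) \Big]$$

Matching gradients step by step keeps the model's training trajectory on the synthetic graph close to its trajectory on the original graph, which is why models trained on the condensed data can reach comparable test accuracy.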

Citations (123)

Summary

  • The paper introduces a graph condensation technique that synthesizes a small graph on which GNNs can be trained to accuracy close to that obtained on the full graph.
  • It jointly learns synthetic node features and graph structure by matching the gradients of GNN training on the condensed and original graphs across benchmark datasets (a minimal sketch of this loop follows the summary).
  • The approach significantly reduces storage and training cost, enabling faster training and improved scalability for graph neural networks.
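A minimal, self-contained sketch of such a gradient-matching loop is given below. It is a toy illustration under simplifying assumptions (random stand-in data, a one-layer GCN, a dense sigmoid-parameterized synthetic adjacency); the authors' actual implementation, GCond, differs in important ways (for example, it parameterizes the synthetic adjacency as a function of the synthetic features and samples many model initializations) and is available at the linked repository.

```python
# Toy sketch of gradient-matching graph condensation (hypothetical;
# see https://github.com/ChandlerBang/GCond for the authors' code).
import torch
import torch.nn.functional as F

def match_loss(grads_syn, grads_real):
    """Sum of cosine distances between corresponding gradient tensors."""
    loss = 0.0
    for gs, gr in zip(grads_syn, grads_real):
        gs, gr = gs.flatten(), gr.flatten()
        loss = loss + (1 - F.cosine_similarity(gs, gr, dim=0))
    return loss

class OneLayerGCN(torch.nn.Module):
    """Single graph-convolution layer: A_hat @ X @ W."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w = torch.nn.Linear(d_in, d_out, bias=False)

    def forward(self, adj, x):
        return adj @ self.w(x)

# Random stand-ins for the "real" graph and a much smaller synthetic one.
n, n_syn, d, c = 1000, 50, 16, 4
adj = torch.eye(n)                       # placeholder normalized adjacency
x, y = torch.randn(n, d), torch.randint(c, (n,))

x_syn = torch.randn(n_syn, d, requires_grad=True)              # learnable features
adj_syn_logits = torch.randn(n_syn, n_syn, requires_grad=True) # learnable structure
y_syn = torch.randint(c, (n_syn,))       # fixed synthetic labels

model = OneLayerGCN(d, c)
opt = torch.optim.Adam([x_syn, adj_syn_logits], lr=0.01)

for step in range(100):
    adj_syn = torch.sigmoid(adj_syn_logits)  # dense, differentiable adjacency
    params = list(model.parameters())

    # Gradients of the task loss on the real graph (treated as constants).
    g_real = torch.autograd.grad(
        F.cross_entropy(model(adj, x), y), params)
    # Gradients on the synthetic graph; keep the graph so the matching
    # loss can backpropagate into x_syn and adj_syn_logits.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(adj_syn, x_syn), y_syn), params,
        create_graph=True)

    loss = match_loss(g_syn, g_real)
    opt.zero_grad()
    loss.backward()
    opt.step()
```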

Overview of "Formatting Instructions for ICLR 2022 Conference Submissions"

The document titled "Formatting Instructions for ICLR 2022 Conference Submissions" functions primarily as a structural and typographic guide for authors preparing submissions to the ICLR 2022 conference. It provides prescriptive guidelines to ensure uniformity and coherence in the presentation of scientific work, thereby facilitating a fair and efficient review process.

Document Structure

The paper delineates specific formats and settings that authors must adhere to when preparing their manuscripts. These include directives on document class choice, math operator definitions, and layout specifications such as margins, font sizes, and spacing. In particular, it mandates a specific LaTeX2e document class and provides detailed definitions for certain mathematical operators, ensuring clarity in mathematical expressions, which is crucial for representing computational research.
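For reference, the top of a submission typically looks like the following; the package names reflect the publicly distributed ICLR 2022 template and should be verified against the official style files:

```latex
% Assumed preamble per the public ICLR 2022 template; confirm against
% the style files distributed by the conference.
\documentclass{article}
\usepackage{iclr2022_conference,times}
```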

Title and Author Formatting

The guidance emphasizes a standardized format for titles and authorship. It underscores the provision for author affiliations and addresses while maintaining a clear distinction between primary document content and acknowledgments or additional information, such as author notes and funding, which are to be placed in footnotes.

Abstract Criteria

Requirements for the abstract are laid out precisely, with instructions on indentation, font size, and spacing. The emphasis is on conciseness: the abstract must remain within one paragraph, compelling authors to summarize their research motivation, methodology, and outcomes succinctly.

Technical Specifications

  1. Math Operators: Authors are instructed to define essential mathematical operators such as argmax, argmin, sign, and Tr using LaTeX syntax (see the sketch after this list). This standardization ensures consistency across the manuscripts submitted for review.
  2. Typography: The document also specifies the typographic elements, including the use of small caps for certain headings and specific indentations, contributing to the professional aesthetic of the published material.
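As a concrete illustration of the operator definitions mentioned in item 1, a typical set looks like this; the exact macros in the ICLR style files may differ in detail:

```latex
% Assumed operator definitions via amsmath; the ICLR 2022 template's
% own macro file may differ slightly.
\usepackage{amsmath}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator{\Tr}{Tr}
```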

Practical Implications

By imposing stringent formatting guidelines, the paper ensures that all submissions meet a certain standard, which aids reviewers and editors by allowing them to focus on content rather than presentation inconsistencies. This is particularly important in fields like artificial intelligence and machine learning, where clarity and precision of communication are vital for the dissemination and peer evaluation of complex theories and results.

Speculation on Future Developments

Such detailed formatting instructions point to an increasing tendency towards the automation of document preparation and review processes in AI and computational fields. The consistency they enforce may support the development of automated tools that could eventually assist in preliminary checks of submission formats, easing the workload of both authors and reviewers.

In conclusion, the document serves as an essential compliance guide for authors intending to submit papers to the ICLR conference. It underscores the importance of standardization in academic publishing, which not only enhances the readability of submissions but also facilitates equitable and efficient review processes. As the field continues to evolve, further integration with automated systems for manuscript preparation and review can be anticipated, in line with broader trends in AI and computational research.