Call For Papers

Generative AI systems are approaching a scalability limit in their development. Due to power density constraints, it will soon become infeasible to train large language models with ever-growing parameter counts within a single datacenter. While the industry continues its effort to scale up AI systems, it is becoming necessary to explore scaled-out, globally distributed systems to train and serve generative AI models.

In addition, services based on generative AI require stringent quality-of-service levels to meet user demand. Meeting these requirements calls for systems that combine powerful computing instances residing in cloud platforms with localized edge platforms, i.e., heterogeneous and distributed systems.

These challenges may find solutions in approaches adopted by federated learning systems, in which models are trained across several stakeholders. Yet those systems also face scalability issues when dealing with larger models.

This workshop, initiated in the realm of the Internet Research Task Force (IRTF), aims to discuss the networking challenges raised by the distribution of generative AI workloads at large scale. To that end, we welcome contributions from academic researchers, machine learning system developers, and AI infrastructure providers.

Submitted papers must be at most six (6) pages long, excluding references and appendices, in two-column 10pt ACM format. Authors of accepted submissions are expected to present and discuss their work at the workshop. All submissions will be peer-reviewed, and the review process will be double-blind. Per the anonymity guidelines, please prepare your paper so that it preserves the anonymity of the authors. No information will be shared with third parties.

Please submit your paper using the INET4AI Submission Portal: https://inet4ai25.hotcrp.com.

Important Dates (AoE)
  • Paper submission deadline: July 28, 2025.

  • Paper acceptance notification: September 12, 2025.

  • Camera-ready due: October 8, 2025.

  • Program available online: October 16, 2025.

Contact

If you have any problems or questions about your paper's submission, please contact us via e-mail at: submit@inet4ai.org

The areas of interest include, but are not limited to:

  • Future communication challenges related to the evolution of AI systems (Reasoning models, Mixture of Experts...);
  • Traffic characterization for distributed AI training or inference workloads;
  • Collective communications for the Internet;
  • Network protocol adaptations to scale out AI workloads;
  • Domain-specific transport protocols;
  • Lossy and/or lossless transport protocols for distributed training and inference;
  • Dynamic compression, quantization, and model sparsification;
  • Congestion control methods for GPU-centric communications;
  • Parallelism for heterogeneous and/or distributed AI systems;
  • Impact of in-network processing on network infrastructure and protocols for large-scale distributed AI systems;
  • Large-scale disaggregation of AI training or inference systems;
  • Load balancing and routing for heterogeneous and/or distributed AI systems;
  • KV-cache distribution, towards KV-centric networking;
  • Federated learning;
  • Security and privacy of distributed AI communications;
  • Encryption methods for distributed AI communications;
  • Distributing AI workloads among heterogeneous nodes;
  • Decentralized system designs for AI services;
  • Edge-core distribution of AI workloads.