

WORKING GROUPS

TPC working groups were initially formed during the August 2023 workshop at Argonne National Laboratory in the United States and have continued to refine their focus through the 2024 conference at the Barcelona Supercomputing Center in Spain and subsequent hackathons.

TPC invites proposals for new working groups in areas not already active. Prospective leaders of proposed new working groups should first review the current TPC working groups listed below to avoid duplication. Creation of a TPC working group begins with a “Birds of a Feather” (BOF) session at a TPC meeting.

TPC also invites breakout sessions organized around timely topics, even when no new working group is being proposed.

If you would like to organize a BOF to explore a new working group or as a topical breakout session, please submit a title, abstract, and at least three co-leaders from different organizations. The program committee will review and, subject to space availability, invite the co-leaders to organize a BOF during the parallel breakout sessions at TPC25.

The TPC25 conference will convene a number of hackathons on July 28-29, organized by individual working groups. Additionally, working groups will meet during the parallel breakout sessions on July 30-31. Current working groups are detailed below.

Current Working Groups

The TPC25 conference will convene a number of these working groups in July 2025:

Model Architecture and Performance Evaluation

Developing and comparing scalable model architectures and performance benchmarks for large-scale AI models, ensuring that training and inference run efficiently on modern supercomputers and advanced hardware platforms.

Data, Training Workflows, and Large Language Model Strategies

Addressing data acquisition, cleaning, formatting, and curation for scientific AI projects, and developing robust training workflows that can handle large-scale parallelization, distributed optimization, and domain-specific Large Language Model adaptations.

Skills, Safety, and Trust Evaluation

Identifying methods to assess model reliability, trustworthiness, and safety. This group also explores reducing biases, verifying fairness, enhancing model interpretability, ensuring alignment with ethical guidelines, and improving the reliability of generative models for scientific tasks.

Biology, Biochemistry, and Bioinformatics

Targeting biological and biomedical applications, such as protein structure prediction, drug discovery, genomics, and medical image analysis, and formulating new data sets and techniques that integrate well with large-scale AI approaches.

Outreach and Training

Formulating and delivering educational content, tutorials, and summer schools designed to train a broad community in the methods and tools required to develop and use trillion-parameter models.

Large Language Models for Scientific Software

Investigating how Large Language Models can assist in coding, debugging, optimizing, and maintaining scientific software ecosystems.

Large Language Models for Healthcare

Exploring how large-scale generative models can contribute to medical imaging analysis, clinical decision support, and health informatics while ensuring data privacy, regulatory compliance, and patient safety.

Environment, Ecology, and Climate

Applying large-scale AI models to climate simulation, environmental data analysis, and ecological modeling, providing insights into long-term climate patterns, ecosystem dynamics, and sustainability measures.

Engineering, Energy, and Mechanics

Using AI to model and predict phenomena in fluid dynamics, materials engineering, mechanical design, energy system optimization, and related fields.

Fundamental Physics and Cosmology

Applying generative AI models to astrophysics, particle physics, cosmology, and quantum simulations, helping scientists refine theories, explore parameter spaces, and interpret large observational data sets.

Materials, Chemistry, and Nanoscience

Leveraging AI models to discover novel materials, predict chemical properties, explore molecular dynamics, and accelerate rational material and drug design.
