TPC25

Agenda

Plenary, breakout, hackathon, and tutorial topics and speakers are being finalized by the TPC Steering and Program Committees and will be announced over the coming month. This schedule is provisional and will evolve as additional breakout requests are submitted.

Tuesday, July 29

12:30

Lunch

14:00

Opening Plenary Session

Welcome and TPC Keynote: Scaling Up to GW Data Centers and AI Factories

Rick Stevens, Argonne National Laboratory

Welcome Keynote: HPC and Science: The Need for Hybrid

Thierry Pellegrino, AWS

15:30

Break

16:00

Plenary Session 2

Reinventing HPC: AI Inference, Training, and Other Services

Moderator: Satoshi Matsuoka, RIKEN

Steve Clark, Quantinuum

Wednesday, July 30

9:00

Plenary Session 3

Agentic Systems, Multimodal Data, and Non-LLM Model Architectures

Moderator: Ian Foster, Argonne National Laboratory

Vivek Natarajan, Google
Preeth Chengappa, Microsoft

10:30

Break

11:00

Plenary Session 4

Evaluation of AI Systems: Performance, Skills, Trust

Moderator: Ricardo Baeza-Yates, Barcelona Supercomputing Center

Rio Yokota, Institute of Science Tokyo
Franck Cappello, Argonne National Laboratory
Jiacheng Liu, Allen Institute for AI

12:30

Lunch & Panel Discussion

Industry, Academia, and Government Collaboration: Accelerating Trustworthy AI for Science

Moderator: Karthik Duraisamy, University of Michigan

Ceren Susut, U.S. Department of Energy
Burnie Legette, Intel
Raj Hazra, Quantinuum

Thursday, July 31

16:00

Plenary Session 5

Multimodal Data and Non-LLM Model Architectures

Prasanna Balaprakash, Oak Ridge National Laboratory
Flora Salim, University of New South Wales

Closing Remarks and a TPC Roadmap

Moderator: Charlie Catlett, Argonne National Laboratory

Wednesday, July 30

14:00

Parallel Breakouts A

BOF: Federated Learning at Scale

BOF: Building Foundation Models for the Electric Grid (GridFM)

Life Sciences
AI Models for Biomedicine and Precision Population Health

Model Skills, Reasoning, and Trust Evaluation (EVAL)
Current State

BOF: Inference Services and AI Workloads
Emerging Needs

AI Models for Software Engineering and Development

16:00

Parallel Breakouts B

Data, Workflows, Agents, and Reasoning Frameworks (DWARF)
Autonomous Science Network

BOF: ICICLE AI Institute

Life Sciences
AI for Cancer

Model Skills, Reasoning, and Trust Evaluation (EVAL)
Tools and Frameworks

BOF: Inference Services and AI Workloads
Current Approaches; Projections

Open Slot

Thursday, July 31

9:00

Parallel Breakouts C

Data, Workflows, Agents, and Reasoning Frameworks (DWARF)
Data/Training Workflows

BOF: AI in Decision Sciences

Life Sciences
Agentic Systems for Life Sciences

Model Skills, Reasoning, and Trust Evaluation (EVAL)
Evaluating Reasoning

Model Architecture and Performance Evaluation (MAPE)
Profiling Tools

AI for Scientific Discovery in Materials Science (AI4MS)

11:00

Parallel Breakouts D

Data, Workflows, Agents, and Reasoning Frameworks (DWARF)
Agentic Systems

BOF: Public AI: Policy, Community, and the Future of National Labs

Life Sciences
Drug Discovery

Model Skills, Reasoning, and Trust Evaluation (EVAL)
Skills Evaluation

Model Architecture and Performance Evaluation (MAPE)
Current State of LLM Training

Education and Outreach

14:00

Parallel Breakouts E

AI for Scientific Software

BOF: Energy Efficient HPC for AI Workloads

Open Slot

BOF: LLMs and Reasoning

Model Architecture and Performance Evaluation (MAPE)
Training Challenges for Non-LLMs

Earth and Environment (AI for Digital Earth)

Monday, July 28

9:00

Hackathon Opening Plenary: Introduction to AI for Science

Rick Stevens, Argonne National Laboratory

Over 1.5 days, participants will learn to build and extend AI agents tailored to scientific challenges, using case studies in biology and chemistry. With guidance from mentors and access to NVIDIA and Cerebras compute resources, teams will collaborate on projects such as molecular tool development, protein engineering, and reasoning agents.

The open-format event emphasizes collaborative learning and practical implementation, building on foundational AI concepts from an introductory tutorial session. It is designed for researchers eager to explore agentic systems and apply them to their own scientific work.

Participants will:

  • Gain a working knowledge of agentic system architecture,
  • Learn how to apply agentic methods to domain-specific scientific problems, and
  • Develop prototype tools or agents.

They will collaborate with peers and mentors, access advanced compute resources, and leave with hands-on experience that supports further exploration of AI-accelerated scientific discovery.

Hackathon Team:  
Arvind Ramanathan (Argonne), Miguel Vazquez (BSC),
Mohamed Wahib (RIKEN), Tom Brettin (Argonne).

Session 1: Plenary session with all Tutorial and Hackathon participants: Foundations in AI for Science

11:00

Hackathon Session 2

Building Agentic Systems for Science

Session 2: Intro to Agentic Systems and Use Cases

14:00

Hackathon Session 3

Building Agentic Systems for Science

Session 3: Team Formation and Project Kickoff

16:00

Hackathon Session 4

Building Agentic Systems for Science

Session 4: Hands-On Hacking with Expert Mentorship

Tuesday, July 29

9:00

Hackathon Session 5

Building Agentic Systems for Science

Session 5: Midpoint Sync, Debugging, Breakouts

10:45

Hackathon Session 6

Building Agentic Systems for Science

Session 6: Project Showcases, Wrap-Up Discussion

12:00

Hackathon Closing Plenary

Miguel Vazquez, Barcelona Supercomputing Center

Monday, July 28

9:00

Tutorial Opening Plenary: Introduction to AI for Science

Neeraj Kumar, Pacific Northwest National Laboratory

AI for Science: Foundations and Frontiers is a hands-on tutorial designed to equip researchers with practical skills and conceptual grounding in the application of large-scale AI models to scientific challenges.

The program covers key components of the AI model lifecycle, from distributed strategies for pre-training generative models to fine-tuning techniques for domain-specific tasks using models such as LLaMA-70B and Stable Diffusion.

Participants will also learn to analyze and optimize performance through workload profiling with Paraver, and to build intelligent scientific workflows using Retrieval-Augmented Generation (RAG) and agent-based approaches. The tutorial concludes with real-world case studies across disciplines (biology, climate, physics, chemistry), highlighting lessons learned from deployment and emerging trends such as simulation models and neural-symbolic systems.

Participants will develop a practical understanding of large-scale AI model development, including:

  • Parallelized pre-training strategies and fine-tuning techniques for domain-specific tasks,
  • Analyzing and optimizing AI workloads using profiling tools, and
  • Building Retrieval-Augmented Generation (RAG) pipelines and agent-based workflows,

along with exposure to real-world scientific applications and current research frontiers in AI for science.

Instructors:
Experts from Argonne, ORNL, PNNL, CINECA, BSC, and others.

Evaluation of AI Model Scientific Reasoning Skills is a hands-on tutorial designed to equip researchers with practical skills and conceptual grounding in the application of LLMs to scientific challenges.

Large language models (LLMs) are becoming capable of solving complex problems, presenting an opportunity to leverage them for scientific applications. However, even the most sophisticated models can struggle with simple reasoning tasks and make mistakes.

This tutorial focuses on best practices for evaluating LLMs for science applications. It guides participants through methods and techniques for testing LLMs at basic and intermediate levels. It begins with the fundamentals of LLM design, development, application, and evaluation, with a focus on scientific applications. Participants will also learn complementary methods to rigorously evaluate LLM responses in benchmark and end-to-end scenario settings. The tutorial features a hands-on session where participants use LLMs to solve provided problems.

Participants will learn the principles and approaches for the use of LLMs as scientific assistants and how these can be evaluated with respect to scientific knowledge and reasoning skills, such as:

  • Use cases of LLMs for scientific applications
  • Importance of prompting and performance
  • Basics of LLM evaluation
  • Evaluation of LLMs for science and engineering
  • Advanced evaluation techniques for LLMs in science and engineering
  • Hands-on exercises

Instructors:
Franck Cappello, Sandeep Madireddy, Neil Getty (Argonne), Javier Aula-Blasco (BSC)

Session 1: Plenary session with all Tutorial and Hackathon participants: Foundations in AI for Science

11:00

Tutorial Sessions 2

AI for Science: Foundations and Frontiers

Session 2: Case Studies and Emerging Frontiers in AI for Science

Evaluation of AI Model Scientific Reasoning Skills

Session 2: Use Cases and Basic Evaluation Techniques

14:00

Tutorial Sessions 3

AI for Science: Foundations and Frontiers

Session 3: Parallelization Strategies for Large-Scale Pre-Training

Evaluation of AI Model Scientific Reasoning Skills

Session 3: Advanced Evaluation Techniques

16:00

Tutorial Sessions 4

AI for Science: Foundations and Frontiers

Session 4: Fine-Tuning Techniques: From Theory to Practice

Evaluation of AI Model Scientific Reasoning Skills

Session 4: Hands On

Tuesday, July 29

9:00

Tutorial Sessions 5

AI for Science: Foundations and Frontiers

Session 5: Profiling AI Workloads with Paraver

10:45

Tutorial Sessions 6

AI for Science: Foundations and Frontiers

Session 6: Building RAG-Based Workflows or AI Agents

Agenda at a Glance

Monday, July 28

9:00

Hackathon Opening Plenary

Tutorial Opening Plenary

10:30

Break

11:00

Hackathon Session 2

Tutorial Sessions 2

12:30

Lunch

14:00

Hackathon Session 3

Tutorial Sessions 3

15:30

Break

16:00

Hackathon Session 4

Tutorial Sessions 4

17:30

Break

18:00 – 19:30

Hackathon Gathering

Tuesday, July 29

9:00

Hackathon Session 5

Tutorial Session 5

Exhibition

10:30

Break

Exhibition

11:00

Hackathon Session 6

Tutorial Session 6

Exhibition

12:30

Hackathon Closing Plenary

Break

Exhibition

13:00

Lunch

Exhibition

14:00

Opening Plenary

Exhibition

15:30

Break

Exhibition

16:00

Plenary 2

Exhibition

17:30

Break

Exhibition

18:00 – 19:30

Welcome Reception

Wednesday, July 30

9:00

Plenary 3

Job Fair

Exhibition

10:30

Break

Job Fair

Exhibition

11:00

Plenary 4

Job Fair

Exhibition

12:30

Lunch & Panel Discussion

Job Fair

Exhibition

14:00

Parallel Breakouts A

Job Fair

Exhibition

15:30

Break

Job Fair

Exhibition

16:00

Parallel Breakouts B

Job Fair

Exhibition

Thursday, July 31

9:00

Parallel Breakouts C

10:30

Break

11:00

Parallel Breakouts D

12:30

Lunch & Panel Discussion

14:00

Parallel Breakouts E

15:30

Break

16:00

Closing Plenary Session

Plenaries and Breakouts

are open to all conference attendees.

Tutorials

are open to all conference attendees, for an additional fee.

Hackathons

are open to TPC members and invited guests.

Job Fair

is open to all, July 30. For information on getting a table,

Exhibition Area

is open to all, July 29-30. For information on getting a table,

Plenary, breakout, hackathon, and tutorial topics and speakers are being finalized by the TPC Steering and Program Committees and will be announced over the coming months. Some of TPC’s prior speakers include:

Ian Foster, one of the 10 most cited computer scientists in the U.S. His work in “Grid Computing” began in 1994 and provided many of the underlying principles that were applied a decade later to create cloud computing. His team’s distributed computing infrastructure, Globus, is used by hundreds of computing centers around the world for both traditional scientific HPC computing and for AI workflows.

Rick Stevens, who is responsible for Argonne’s HPC center and a research portfolio of over $500M/year. He has been one of the leaders in the DOE community who laid the intellectual and funding groundwork for the multi-billion-dollar Exascale project and the multi-billion-dollar plan for DOE investment in AI.

Satoshi Matsuoka, Japan’s leading computational scientist, with a portfolio and responsibilities at Japan’s RIKEN national laboratory similar to Rick Stevens’ programs at Argonne. He has won numerous international leadership awards and received an award from the Emperor of Japan for his computational modeling of COVID-19 spread, which saved lives through its use in designing public health policies during the pandemic.
