GLEAM: Graph-based Learning through Efficient Aggregation in Memory
Andrew McCrabb, Ivris Raymond, Valeria Bertacco
Design, Automation, and Test in Europe Conference (DATE) 2025
https://doi.org/10.23919/DATE64628.2025.10992998
184 Words
2025-03-31 09:30 +0200
Abstract
Graph Neural Networks (GNNs) have emerged as a powerful tool for analyzing relationship-based data, such as those found in social networks, logistics, weather forecasting, and other domains. Inference and training with GNN models execute slowly, bottlenecked by the limited data bandwidth between memory and GPU hosts, a result of the many irregular memory accesses inherent to GNN-based computation. To overcome these limitations, we present GLEAM, a Processing-in-Memory (PIM) hardware accelerator designed specifically for GNN-based training and inference. GLEAM units are placed per-bank and leverage the much larger internal bandwidth of high-bandwidth memory (HBM) to handle GNNs' irregular memory accesses, significantly boosting performance and reducing the energy consumption entailed by the dominant activity of GNN-based computation: neighbor aggregation. Our evaluation of GLEAM demonstrates up to a 10x speedup for GNN inference over GPU baselines, alongside a significant reduction in energy usage.
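The neighbor-aggregation step the abstract identifies as the dominant activity can be sketched as follows. This is an illustrative example only, not GLEAM's implementation: the graph layout, function names, and the choice of a mean aggregator are assumptions made for clarity.

```python
def aggregate_neighbors(features, adjacency):
    """Mean-aggregate each node's neighbor feature vectors.

    features:  list of feature vectors, one per node
    adjacency: adjacency list; adjacency[v] = neighbors of node v

    Each lookup features[u] is an irregular, data-dependent memory
    access -- the access pattern that limits GPU memory bandwidth and
    that a per-bank PIM design can serve from inside the memory.
    """
    dim = len(features[0])
    out = []
    for v, neighbors in enumerate(adjacency):
        acc = [0.0] * dim
        for u in neighbors:  # irregular gather across memory
            for i in range(dim):
                acc[i] += features[u][i]
        n = max(len(neighbors), 1)  # avoid division by zero for isolated nodes
        out.append([x / n for x in acc])
    return out

# Tiny 3-node example: node 0 is connected to nodes 1 and 2.
feats = [[1.0, 0.0], [0.0, 2.0], [4.0, 2.0]]
adj = [[1, 2], [0], [0]]
print(aggregate_neighbors(feats, adj)[0])  # mean of nodes 1 and 2: [2.0, 2.0]
```

On a GPU, the inner gather over `features[u]` scatters across the whole feature table; placing this loop next to each memory bank is the idea the abstract describes.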
BibTeX Citation
@INPROCEEDINGS{10992998,
author={McCrabb, Andrew and Raymond, Ivris and Bertacco, Valeria},
booktitle={2025 Design, Automation \& Test in Europe Conference (DATE)},
title={GLEAM: Graph-Based Learning Through Efficient Aggregation in Memory},
year={2025},
pages={1-7},
keywords={Training;Performance evaluation;Energy consumption;Social networking (online);Memory management;Graphics processing units;Weather forecasting;Bandwidth;Graph neural networks;Hardware acceleration},
doi={10.23919/DATE64628.2025.10992998}}