Infrastructure Update: Our new parallel storage system is now online
Our lab’s new parallel storage system officially went online on January 21, 2026, adding 3PB of new capacity for data-intensive research workflows.
With this upgrade, the lab now operates 5PB of parallel storage and 3PB of archive storage, for a total capacity of 8PB. This expansion provides a much stronger foundation for large-scale data management, model development, and long-term research archiving.
We also plan to expand our compute infrastructure before June 2026 with a new 32-GPU cluster, built primarily on RTX Pro 6000 cards with 96GB of memory per GPU. The new cluster will mainly support two major workloads: large language model training and inference, and conventional deep learning model training.
Key points:
- The new parallel storage system went live on January 21, 2026, adding 3PB of new capacity.
- The lab now has 5PB of parallel storage and 3PB of archive storage, for a total capacity of 8PB.
- A new 32-GPU cluster is planned before June 2026, mainly to support LLM training and inference as well as traditional deep learning training.