Pipeline Parallelism

Reference: Huang Y, Cheng Y, Chen D, et al. GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism[C]// 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019.
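
In GPipe, the model is partitioned into consecutive stages placed on different accelerators, and each mini-batch is split into micro-batches so that stage 1 can process micro-batch i while stage 0 starts on micro-batch i+1. Below is a minimal PyTorch sketch of that micro-batching idea, assuming a hypothetical two-stage toy model with made-up layer sizes; it runs on CPU for portability and omits GPipe's activation re-materialization and synchronized backward scheduling.

```python
import torch
import torch.nn as nn

# Two pipeline stages; in a real setup each lives on its own GPU
# (e.g. "cuda:0" and "cuda:1"). "cpu" is used so the sketch runs anywhere.
dev0, dev1 = "cpu", "cpu"
stage0 = nn.Sequential(nn.Linear(512, 1024), nn.ReLU()).to(dev0)
stage1 = nn.Linear(1024, 10).to(dev1)

def pipeline_forward(x, num_microbatches=4):
    # GPipe-style forward: split the mini-batch into micro-batches so the
    # downstream stage can work on one while the upstream stage fills the next.
    outputs = []
    for mb in x.chunk(num_microbatches):
        h = stage0(mb.to(dev0))             # stage 0 (device 0)
        outputs.append(stage1(h.to(dev1)))  # stage 1 (device 1)
    return torch.cat(outputs)

logits = pipeline_forward(torch.randn(32, 512))
print(logits.shape)  # torch.Size([32, 10])
```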

Tensor Parallelism

Reference: Shoeybi M, Patwary M, Puri R, et al. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism[J]. arXiv preprint arXiv:1909.08053, 2020.
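
Megatron-LM shards individual weight matrices across devices. For a column-parallel linear layer, each device holds a slice of the weight's output columns, multiplies it against the replicated input, and the partial results are gathered along the hidden dimension. The sketch below simulates this in a single process, with two tensor shards standing in for two GPUs; the shapes are illustrative assumptions, not Megatron-LM's actual configuration.

```python
import torch

torch.manual_seed(0)
X = torch.randn(8, 512)       # input activations, replicated on every device
W = torch.randn(512, 1024)    # full weight matrix, kept only as a reference
W0, W1 = W.chunk(2, dim=1)    # column shards; one per "device"

Y0 = X @ W0                     # partial result on device 0 in real Megatron-LM
Y1 = X @ W1                     # partial result on device 1
Y = torch.cat([Y0, Y1], dim=1)  # all-gather along the hidden dimension

assert torch.allclose(Y, X @ W, atol=1e-5)  # sharding is exact
```

Megatron-LM pairs such column-parallel layers with row-parallel ones so an entire Transformer MLP block needs only one all-reduce in the forward pass.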

Data Parallelism

Reference: DeepSpeed Team, Majumder R, Wang J. DeepSpeed: Extreme-scale model training for everyone[EB/OL]. Microsoft, 2020. Rajbhandari S, Rasley J, Ruwase O, et al. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models[C]// Proceedings of SC20, 2020.
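
In data parallelism, every worker holds a full model replica, runs forward and backward on its own shard of the batch, and the workers then average their gradients (an all-reduce) before the optimizer step. The sketch below simulates two replicas in one process to show the gradient-averaging step; real systems such as PyTorch DDP or DeepSpeed overlap this communication with the backward pass, and ZeRO additionally partitions optimizer state, gradients, and parameters across workers.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 1)
replicas = [copy.deepcopy(model) for _ in range(2)]  # one replica per "worker"

data, target = torch.randn(8, 16), torch.randn(8, 1)

# Each replica sees only its own shard of the mini-batch.
for replica, x, y in zip(replicas, data.chunk(2), target.chunk(2)):
    loss = nn.functional.mse_loss(replica(x), y)
    loss.backward()

# "All-reduce": average the gradients across replicas, which is what
# DDP/DeepSpeed do over the network after (or during) backward.
for params in zip(*(r.parameters() for r in replicas)):
    avg_grad = torch.stack([p.grad for p in params]).mean(dim=0)
    for p in params:
        p.grad = avg_grad.clone()

# Every replica now steps with identical gradients, keeping weights in sync.
```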