This remarkable achievement became possible thanks to the open and transparent development of this technology as an official MLCommons project, with public Discord discussions; important feedback from Neural Magic, TTA, One Stop Systems, Nutanix, Collabora, Deelvin, AMD, and NVIDIA; and contributions from students, researchers, and even schoolchildren from all over the world via our public MLPerf challenges. Special thanks to cKnowledge for sponsoring our developments and submissions, to One Stop Systems for showcasing the first MLPerf results on the Rigel Edge Supercomputer, and to TTA for sharing their platforms with us so we could add CM automation for DLRMv2 and make it available to everyone.
Since it is impossible to describe all the compelling performance and power-efficiency results achieved by our collaborators in a short press release, we have made them available, along with various derived metrics, at the Collective Knowledge playground, mlcommons@cm4mlperf-results, and this news page. We continue to enhance the MLCommons CM/CK technology to help everyone automatically co-design the most efficient end-to-end AI solutions based on their requirements and constraints. We welcome all submitters to follow our CK/CM automation developments on GitHub and to join our public Discord server if you want to automate your future MLPerf submissions at scale.
See the related HPCwire article about cTuning and our CM/CK technology, and contact Grigori Fursin for more details!