Introduction

ByteMLPerf is an AI accelerator benchmark that evaluates AI accelerators from a practical production perspective, including the ease of use and versatility of software and hardware. ByteMLPerf has the following characteristics:

  • Models and runtime environments are more closely aligned with practical business use cases.
  • For ASIC hardware evaluation, in addition to performance and accuracy, it also measures metrics such as compiler usability and coverage.
  • Performance and accuracy results obtained from testing on the open Model Zoo serve as reference metrics for evaluating ASIC hardware integration.

Category

The ByteMLPerf benchmark is structured into three main categories: Inference, Training, and Micro, each targeting different aspects of AI accelerator performance:

  • Inference: This category is subdivided into two distinct sections to cater to different types of models:

    • General Performance: This section is dedicated to evaluating the inference capabilities of accelerators using common models such as ResNet-50 and BERT. It aims to provide a broad understanding of the accelerator's performance across a range of typical tasks. Vendors can refer to this document for guidance on building a general perf backend; a minimal backend sketch follows this list.

    • Large Language Model (LLM) Performance: Specifically designed to assess the capabilities of accelerators in handling large language models, this section addresses the unique challenges posed by the size and complexity of these models. Vendors can refer to this document for guidance on building an LLM perf backend.

  • Micro: The Micro category focuses on the performance of specific operations, or "ops", that are fundamental to AI computations, such as Gemm, Softmax, and various communication operations. This granular level of testing is crucial for understanding the capabilities and limitations of accelerators at a more detailed operational level. Vendors can refer to this document for guidance on building a micro perf backend; an illustrative Gemm timing sketch appears at the end of this section.

  • Training: Currently under development, this category aims to evaluate the performance of AI accelerators in training scenarios. It will provide insights into how well accelerators can handle the computationally intensive process of training AI models, which is vital for the development of new and more advanced AI systems.
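To make the backend idea concrete, the following is a minimal sketch of what a vendor backend for the general perf track typically wraps: model compilation, prediction, and latency measurement. The class and method names (VendorBackend, compile, predict, benchmark) are illustrative assumptions, not the actual ByteMLPerf interface; vendors should follow the base classes and entry points defined in the guidance documents referenced above.

```python
# Hypothetical sketch of a general perf vendor backend (names are assumptions,
# not the real ByteMLPerf interface; see the vendor guide for the actual API).
import time
from typing import Any, Dict


class VendorBackend:
    """Wraps an accelerator's compiler and runtime behind a common interface."""

    def __init__(self, hardware_type: str):
        self.hardware_type = hardware_type
        self.compiled_model = None

    def compile(self, model_path: str, precision: str = "fp16") -> None:
        # A real backend would invoke the vendor compiler / graph converter here.
        self.compiled_model = {"path": model_path, "precision": precision}

    def predict(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        # A real backend would execute the compiled graph on the accelerator.
        return {"outputs": inputs}

    def benchmark(self, inputs: Dict[str, Any], iterations: int = 100) -> float:
        # Average end-to-end latency in milliseconds over repeated runs.
        start = time.perf_counter()
        for _ in range(iterations):
            self.predict(inputs)
        return (time.perf_counter() - start) * 1000.0 / iterations


if __name__ == "__main__":
    backend = VendorBackend(hardware_type="CPU")
    backend.compile("resnet50.onnx")
    print(f"avg latency: {backend.benchmark({'input': [0.0] * 224}):.3f} ms")
```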

Vendors looking to evaluate and improve their AI accelerators can utilize the ByteMLPerf benchmark as a comprehensive guide. The benchmark not only offers a detailed framework for performance and accuracy evaluation but also includes considerations for compiler usability and coverage for ASIC hardware, ensuring a holistic assessment approach.
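As an illustration of what a Micro-category measurement looks like, the sketch below times a single Gemm op and reports achieved TFLOPS. It is a standalone example assuming PyTorch is available, not ByteMLPerf's actual micro perf harness; the matrix sizes and iteration counts are arbitrary.

```python
# Illustrative single-op (Gemm) micro-benchmark; not the ByteMLPerf harness.
import time

import torch


def time_gemm(m: int, n: int, k: int, iters: int = 50) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32
    a = torch.randn(m, k, device=device, dtype=dtype)
    b = torch.randn(k, n, device=device, dtype=dtype)

    # Warm up so one-time costs (allocation, kernel selection) are excluded.
    for _ in range(5):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = (time.perf_counter() - start) / iters

    # A Gemm performs 2*m*n*k floating-point operations.
    return 2.0 * m * n * k / elapsed / 1e12


if __name__ == "__main__":
    print(f"{time_gemm(4096, 4096, 4096):.2f} TFLOPS")
```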

Vendor List

The ByteMLPerf vendor backend list is shown below.

| Vendor | SKU | Key Parameters | Inference (General Perf) | Inference (LLM Perf) |
| --- | --- | --- | --- | --- |
| Intel | Xeon | - | - | - |
| Stream Computing | STC P920 | Computation Power: 128 TFLOPS@FP16; Last Level Buffer: 8MB, 256GB/s; Level 1 Buffer: 1.25MB, 512GB/s; Memory: 16GB, 119.4GB/s; Host Interface: PCIe 4.0, 16x, 32GB/s; TDP: 160W | Supported | - |
| Graphcore | Graphcore® C600 | Compute: 280 TFLOPS@FP16, 560 TFLOPS@FP8; In-Processor Memory: 900MB, 52TB/s; Host Interface: Dual PCIe Gen4 8-lane interfaces, 32GB/s; TDP: 185W | Supported | - |
| Moffett-AI | Moffett-AI S30 | Compute: 1440 (32x sparse) TFLOPS@BF16, 2880 (32x sparse) TOPS@INT8; Memory: 60GB; Host Interface: Dual PCIe Gen4 8-lane interfaces, 32GB/s; TDP: 250W | Supported | - |
| Habana | Gaudi2 | 24 Tensor Processor Cores, dual matrix multiplication engines; Memory: 96GB HBM2E, 48MB SRAM | Supported | - |

Statement

ASF Statement on Compliance with US Export Regulations and Entity List