Live Virtual Discussion [31 Aug] - Sparsifying YOLOv5 for Better Efficiencies

About

On Tuesday, 8/31/21 at 1 PM ET, Neural Magic is hosting a live discussion on the techniques we used to sparsify YOLOv5 and on ways to deploy it on cheaper, more readily available CPUs, at the edge or in the data center.

Topics

During the session, Mark Kurtz, Neural Magic’s ML Lead, will cover:

  • A general overview of our methodology and how we leveraged the Ultralytics YOLOv5 repository with SparseML’s sparsification recipes to create highly pruned, INT8-quantized YOLOv5 models (a minimal sketch of the recipe workflow follows this list).
  • How you can reproduce our benchmarks using the aforementioned integrations and the tools linked from the YOLOv5 model page.
  • How you can train YOLOv5 on new datasets, replicating our performance on your own data by leveraging the pre-sparsified models in the SparseZoo.
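
For those who want a feel for how a recipe hooks into training ahead of the session, here is a minimal sketch of the SparseML pattern the YOLOv5 integration builds on (the integration's train.py wires this up behind its --recipe flag). The toy model, data, and recipe filename below are placeholders: a real recipe's pruning modifiers target the actual YOLOv5 layer names, so in practice you pair the recipe with the YOLOv5 network itself.

```python
# Sketch of SparseML's recipe-driven training loop, assuming a recent
# SparseML release. The toy model/data stand in for the YOLOv5 network
# and dataloader; the recipe path is the card referenced in this thread.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
train_loader = DataLoader(data, batch_size=16)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Parse the recipe and hand the pruning/QAT schedule over to SparseML.
manager = ScheduledModifierManager.from_yaml("yolov5s.pruned_quantized.md")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(int(manager.max_epochs)):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()  # pruning masks and QAT observers update on schedule

manager.finalize(model)  # remove modifier hooks once training ends
```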

Sign-Up

Sign up for this event with the link here. If you have any questions about the event or specific material you’d like to see, comment on this topic!

Hi, thanks for this great library. I tried the YOLOv5 tutorials with --recipe yolov5s.pruned_quantized.md.
The QAT works, as I can see fake-quantized modules in Netron, but the number of parameters doesn’t change. Am I missing any additional steps to remove those pruned parameters?
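
A note that may help here, offered as a sketch rather than a definitive answer: SparseML's unstructured (magnitude) pruning zeroes weights in place via masks rather than deleting them, so the count reported by model.parameters() is expected to stay the same. The zeros pay off at inference time, after ONNX export, in a sparsity-aware runtime such as DeepSparse. To confirm the pruning actually took effect, you can count the zeroed weights; the checkpoint path below is a placeholder.

```python
# Count zeroed weights to verify pruning, since unstructured pruning keeps
# tensor shapes (and therefore parameter counts) unchanged.
import torch

def print_sparsity(model: torch.nn.Module) -> None:
    total, zeros = 0, 0
    for name, param in model.named_parameters():
        if param.dim() > 1:  # conv/linear weights; skip biases and norms
            layer_zeros = int((param == 0).sum())
            total += param.numel()
            zeros += layer_zeros
            print(f"{name}: {layer_zeros / param.numel():.2%} sparse")
    print(f"overall: {zeros / total:.2%} sparse")

# Usage with a YOLOv5-style checkpoint (path is a placeholder):
# ckpt = torch.load("runs/train/exp/weights/best.pt", map_location="cpu")
# print_sparsity(ckpt["model"].float())
```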