How to Benchmark: typos, images, Colab, correct class remap
LinasKo committed Nov 29, 2024
1 parent d3f6570 commit 853cbc4
Showing 1 changed file with 8 additions and 4 deletions: docs/how_to/benchmark_a_model.md
@@ -3,9 +3,9 @@
comments: true
status: new
---

-# Benchmark a Model
+![Corgi Example](https://media.roboflow.com/supervision/image-examples/how-to/benchmark-models/corgi-sorted-2.png)

-## Overview
+# Benchmark a Model

Have you ever trained multiple detection models and wondered which one performs best on your specific use case? Or maybe you've downloaded a pre-trained model and want to verify its performance on your dataset? Model benchmarking is essential for making informed decisions about which model to deploy in production.

@@ -24,6 +24,8 @@
This guide will show an easy way to benchmark your results using `supervision`.

This guide uses an instance segmentation model, but the same approach applies to object detection and oriented bounding box (OBB) models too.

+A condensed version of this guide is available as a [Colab Notebook](https://colab.research.google.com/drive/1HoOY9pZoVwGiRMmLHtir0qT6Uj45w6Ps?usp=sharing).

## Loading a Dataset

Suppose you start with a dataset. Perhaps you found it on [Universe](https://universe.roboflow.com/); perhaps you [labeled your own](https://roboflow.com/how-to-label/yolo11). In either case, this guide assumes you have a labeled dataset at hand.
@@ -53,7 +55,7 @@
project = rf.workspace("<WORKSPACE_NAME>").project("<PROJECT_NAME>")
dataset = project.version(<DATASET_VERSION_NUMBER>).download("<FORMAT>")
```

-If your dataset is fromUniverse, go to `Dataset > Download Dataset > select the format (e.g. YOLOv11) > Show download code.
+If your dataset is from Universe, go to `Dataset` > `Download Dataset` > select the format (e.g. `YOLOv11`) > `Show download code`.

If labeling your own data, go to the [dashboard](https://app.roboflow.com/) and check this [guide](https://docs.roboflow.com/api-reference/workspace-and-project-ids) to find your workspace and project IDs.
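As a sketch of the full flow (assuming a YOLO-format download and the standard `roboflow` client setup; the API key and `<PLACEHOLDERS>` below are stand-ins you must fill in), the snippet above expands to something like:

```python
from roboflow import Roboflow

import supervision as sv

# Authenticate and pull the dataset; replace the placeholders with your
# own API key, workspace, project, and version number.
rf = Roboflow(api_key="<YOUR_API_KEY>")
project = rf.workspace("<WORKSPACE_NAME>").project("<PROJECT_NAME>")
dataset = project.version(<DATASET_VERSION_NUMBER>).download("yolov11")

# Load the test split into supervision; these paths assume the YOLO
# directory layout that Roboflow produces.
sv_dataset = sv.DetectionDataset.from_yolo(
    images_directory_path=f"{dataset.location}/test/images",
    annotations_directory_path=f"{dataset.location}/test/labels",
    data_yaml_path=f"{dataset.location}/data.yaml",
)
```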

@@ -272,7 +274,7 @@
Let's also remove the predictions that are not in the dataset classes.

remap_classes(
    detections=predictions,
-    class_ids_from_to={27: 0},
+    class_ids_from_to={16: 0},
    class_names_from_to={"dog": "Corgi"}
)
predictions = predictions[
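Since the excerpt cuts off mid-statement, here is a hedged sketch of the kind of filtering that sentence describes; `remap_classes` comes from earlier in the guide, and `dataset_class_ids` below is a hypothetical stand-in for the ids actually present in your dataset:

```python
import numpy as np

# Keep only predictions whose (already remapped) class id exists in the
# dataset; sv.Detections supports indexing with a boolean mask.
dataset_class_ids = [0]  # hypothetical: e.g. 0 == "Corgi"
predictions = predictions[
    np.isin(predictions.class_id, dataset_class_ids)
]
```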
@@ -432,6 +434,8 @@
Even better, the repository is open source! You can see how the models were benchmarked...

In this guide, you've learned how to set up your environment, train or use pre-trained models, visualize predictions, and evaluate model performance with metrics like [mAP](https://supervision.roboflow.com/latest/metrics/mean_average_precision/) and [F1 score](https://supervision.roboflow.com/latest/metrics/f1_score/), and you've gotten to know our Model Leaderboard.
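If you'd like a concrete starting point, a minimal sketch of that evaluation step might look like the following, assuming `predictions` and `targets` are lists of `sv.Detections` collected over the test split:

```python
from supervision.metrics import F1Score, MeanAveragePrecision, MetricTarget

# Compare predictions against ground truth; use MetricTarget.MASKS for
# instance segmentation models.
map_metric = MeanAveragePrecision(metric_target=MetricTarget.BOXES)
map_result = map_metric.update(predictions, targets).compute()
print(f"mAP 50:95: {map_result.map50_95:.3f}")

f1_metric = F1Score(metric_target=MetricTarget.BOXES)
f1_result = f1_metric.update(predictions, targets).compute()
print(f"F1 @ IoU 0.5: {f1_result.f1_50:.3f}")
```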

+A condensed version of this guide is also available as a [Colab Notebook](https://colab.research.google.com/drive/1HoOY9pZoVwGiRMmLHtir0qT6Uj45w6Ps?usp=sharing).

For more details, be sure to check out our [documentation](https://supervision.roboflow.com/latest/) and join our community discussions. If you find any issues, please let us know on [GitHub](https://github.com/roboflow/supervision/issues).

Best of luck with your benchmarking!
