diff --git a/README.md b/README.md
index fe569183e..2aa3cef0f 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@
[![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision)
[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb)
[![gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/Annotators)
-[![discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk)
+[![discord](https://img.shields.io/discord/1159501506232451173?logo=discord&label=discord&labelColor=fff&color=5865f2&link=https%3A%2F%2Fdiscord.gg%2FGbfgXGJ8Bk)](https://discord.gg/GbfgXGJ8Bk)
[![built-with-material-for-mkdocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)
@@ -135,88 +135,88 @@ for path, image, annotation in ds:
- load
- ```python
- dataset = sv.DetectionDataset.from_yolo(
- images_directory_path=...,
- annotations_directory_path=...,
- data_yaml_path=...
- )
-
- dataset = sv.DetectionDataset.from_pascal_voc(
- images_directory_path=...,
- annotations_directory_path=...
- )
-
- dataset = sv.DetectionDataset.from_coco(
- images_directory_path=...,
- annotations_path=...
- )
- ```
+ ```python
+ dataset = sv.DetectionDataset.from_yolo(
+ images_directory_path=...,
+ annotations_directory_path=...,
+ data_yaml_path=...
+ )
+
+ dataset = sv.DetectionDataset.from_pascal_voc(
+ images_directory_path=...,
+ annotations_directory_path=...
+ )
+
+ dataset = sv.DetectionDataset.from_coco(
+ images_directory_path=...,
+ annotations_path=...
+ )
+ ```
- split
- ```python
- train_dataset, test_dataset = dataset.split(split_ratio=0.7)
- test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
+ ```python
+ train_dataset, test_dataset = dataset.split(split_ratio=0.7)
+ test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)
- len(train_dataset), len(test_dataset), len(valid_dataset)
- # (700, 150, 150)
- ```
+ len(train_dataset), len(test_dataset), len(valid_dataset)
+ # (700, 150, 150)
+ ```
- merge
- ```python
- ds_1 = sv.DetectionDataset(...)
- len(ds_1)
- # 100
- ds_1.classes
- # ['dog', 'person']
-
- ds_2 = sv.DetectionDataset(...)
- len(ds_2)
- # 200
- ds_2.classes
- # ['cat']
-
- ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
- len(ds_merged)
- # 300
- ds_merged.classes
- # ['cat', 'dog', 'person']
- ```
+ ```python
+ ds_1 = sv.DetectionDataset(...)
+ len(ds_1)
+ # 100
+ ds_1.classes
+ # ['dog', 'person']
+
+ ds_2 = sv.DetectionDataset(...)
+ len(ds_2)
+ # 200
+ ds_2.classes
+ # ['cat']
+
+ ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
+ len(ds_merged)
+ # 300
+ ds_merged.classes
+ # ['cat', 'dog', 'person']
+ ```
- save
- ```python
- dataset.as_yolo(
- images_directory_path=...,
- annotations_directory_path=...,
- data_yaml_path=...
- )
-
- dataset.as_pascal_voc(
- images_directory_path=...,
- annotations_directory_path=...
- )
-
- dataset.as_coco(
- images_directory_path=...,
- annotations_path=...
- )
- ```
+ ```python
+ dataset.as_yolo(
+ images_directory_path=...,
+ annotations_directory_path=...,
+ data_yaml_path=...
+ )
+
+ dataset.as_pascal_voc(
+ images_directory_path=...,
+ annotations_directory_path=...
+ )
+
+ dataset.as_coco(
+ images_directory_path=...,
+ annotations_path=...
+ )
+ ```
- convert
- ```python
- sv.DetectionDataset.from_yolo(
- images_directory_path=...,
- annotations_directory_path=...,
- data_yaml_path=...
- ).as_pascal_voc(
- images_directory_path=...,
- annotations_directory_path=...
- )
- ```
+ ```python
+ sv.DetectionDataset.from_yolo(
+ images_directory_path=...,
+ annotations_directory_path=...,
+ data_yaml_path=...
+ ).as_pascal_voc(
+ images_directory_path=...,
+ annotations_directory_path=...
+ )
+ ```
diff --git a/docs/how_to/track_objects.md b/docs/how_to/track_objects.md
index 784cb7bf1..1b321e7fe 100644
--- a/docs/how_to/track_objects.md
+++ b/docs/how_to/track_objects.md
@@ -1,5 +1,6 @@
---
comments: true
+status: new
---
# Track Objects
@@ -13,6 +14,8 @@ take you through the steps to perform inference using the YOLOv8 model via eithe
you'll discover how to track these objects efficiently and annotate your video content
for a deeper analysis.
+## Object Detection & Segmentation
+
To make it easier for you to follow our tutorial, download the video we will use as an
example. You can do this using the
[`supervision[assets]`](/latest/assets/) extension.
@@ -21,14 +24,13 @@ example. You can do this using
from supervision.assets import download_assets, VideoAssets
download_assets(VideoAssets.PEOPLE_WALKING)
-download_assets(VideoAssets.SKIING)
```
-## Run Inference
+### Run Inference
First, you'll need to obtain predictions from your object detection or segmentation
model. In this tutorial, we are using the YOLOv8 model as an example. However,
@@ -41,6 +43,10 @@ by obtaining model predictions and then annotating the frame based on these pred
This `callback` function will be essential in the subsequent steps of the tutorial, as
it will be modified to include tracking, labeling, and trace annotations.
+!!! tip
+
+ Both object detection and segmentation models are supported. Try it with `yolov8n.pt` or `yolov8n-640-seg`!
+
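
For reference only (not part of the patch above): a minimal sketch of the kind of `callback` this step describes, assuming the Ultralytics `YOLO` API, `sv.Detections.from_ultralytics`, `sv.BoxAnnotator`, `sv.process_video`, and the `people-walking.mp4` file produced by the asset download. The tutorial's own tabbed code below is the authoritative version.

```python
# Hedged sketch: run per-frame inference and draw boxes, then process a video.
# Model weights and file names are assumptions, not taken from the patch.
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # a segmentation checkpoint could be used instead
box_annotator = sv.BoxAnnotator()

def callback(frame: np.ndarray, _: int) -> np.ndarray:
    # obtain predictions for a single frame and annotate a copy of it
    results = model(frame)[0]
    detections = sv.Detections.from_ultralytics(results)
    return box_annotator.annotate(scene=frame.copy(), detections=detections)

sv.process_video(
    source_path="people-walking.mp4",
    target_path="result.mp4",
    callback=callback,
)
```
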
=== "Ultralytics"
```{ .py }
@@ -89,7 +95,7 @@ it will be modified to include tracking, labeling, and trace annotations.