fix: Fix automatic batch size try catch not handling onnx errors (#123)
* fix: Fix automatic batch size try catch not handling onnx errors

* build: Upgrade version, update changelog
lorenzomammana committed Jun 24, 2024
1 parent 44c0a4f commit 4e07340
Showing 4 changed files with 10 additions and 3 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,12 @@
 # Changelog
 All notable changes to this project will be documented in this file.
 
+### [2.1.11]
+
+#### Fixed
+
+- Fix sklearn automatic batch finder not working properly with ONNX backbones
+
 ### [2.1.10]
 
 #### Fixed
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "quadra"
-version = "2.1.10"
+version = "2.1.11"
 description = "Deep Learning experiment orchestration library"
 authors = [
     "Federico Belotti <[email protected]>",
2 changes: 1 addition & 1 deletion quadra/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "2.1.10"
+__version__ = "2.1.11"
 
 
 def get_version():
3 changes: 2 additions & 1 deletion quadra/utils/classification.py
@@ -601,11 +601,12 @@ def automatic_batch_size_computation(
log.info("Trying batch size: %d", datamodule.batch_size)
_ = get_feature(feature_extractor=backbone, dl=base_dataloader, iteration_over_training=1, limit_batches=1)
except RuntimeError as e:
if "CUDA out of memory" in str(e):
if batch_size > 1:
batch_size = batch_size // 2
optimal = False
continue

log.error("Unable to run the model with batch size 1")
raise e

log.info("Found optimal batch size: %d", datamodule.batch_size)
