
Releases: albumentations-team/albumentations

1.3.0

20 Sep 07:33
2a857a8


Breaking changes

New augmentations

Bugfixes

Minor changes:

1.2.1

12 Jul 13:42
ed7626f


Minor changes

  • A.Rotate and A.ShiftScaleRotate now support a new rotation method for bounding boxes, ellipse. (#1203 by @victor1cea)
  • A.Rotate now supports a new argument crop_border. If set to True, the rotated image will be cropped as much as possible to eliminate pixel values at the edges that are not well defined after rotation. (#1214 by @bonlime) See the sketch after this list.
  • Tests that use multiprocessing now run much faster (#1218 by @Dipet)
  • Improved type hints (#1219 by @Dipet)
  • Fixed a deprecation warning in match_histograms. (#1121 by @BloodAxe)
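
A minimal sketch of both options (the rotate_method parameter name and the argument values are assumptions based on the library documentation):

```python
import albumentations as A

# Rotate images and bounding boxes using the "ellipse" method and crop away
# border pixels whose values are undefined after rotation.
transform = A.Compose(
    [A.Rotate(limit=45, rotate_method="ellipse", crop_border=True, p=1.0)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)
```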

Bugfixes

1.2.0

15 Jun 08:27
fe856c2

New augmentations

  • A.UnsharpMask. This transform sharpens the input image using Unsharp Masking processing and overlays the result with the original image. (#1063 by @zakajd)
  • A.RingingOvershoot. This transform creates ringing or overshoot artifacts by convolving the image with a 2D sinc filter. (#1064 by @zakajd)
  • A.AdvancedBlur. This transform blurs the input image using a Generalized Normal filter with randomly selected parameters. It also adds multiplicative noise to generated kernel before convolution. (#1066 by @zakajd)
  • A.PixelDropout. This transformation randomly replaces pixels with the passed value. (#1082 by @Dipet) A usage sketch follows this list.
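
A minimal sketch combining the new transforms (the probabilities and argument values are illustrative assumptions, not recommended defaults):

```python
import albumentations as A

transform = A.Compose([
    A.AdvancedBlur(p=0.3),                                  # Generalized Normal blur kernel
    A.UnsharpMask(p=0.3),                                   # sharpen via unsharp masking
    A.RingingOvershoot(p=0.3),                              # 2D sinc-filter ringing artifacts
    A.PixelDropout(dropout_prob=0.01, drop_value=0, p=0.3), # randomly replace pixels with drop_value
])
```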

Bugfixes

Minor changes:

1.1.0

04 Oct 09:30
dd0c5db


New augmentations

  • TemplateTransform. This transform allows the blending of an input image with specified templates. (#572 by @akarsakov )
  • PixelDistributionAdaptation. A new domain adaptation augmentation. It fits a simple transform on both the original and reference images, transforms the original image with the transform fitted on it, and then applies the inverse of the transform fitted on the reference image. See the examples of this transform in the qudida repository. (#959 by @arsenyinfo) A usage sketch follows this list.
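
A minimal sketch of both transforms (the file paths and parameter values are illustrative assumptions):

```python
import cv2
import albumentations as A

# Hypothetical template and reference images loaded from disk.
template = cv2.imread("template.jpg")
reference = cv2.cvtColor(cv2.imread("reference.jpg"), cv2.COLOR_BGR2RGB)

transform = A.Compose([
    A.TemplateTransform(templates=[template], p=0.5),
    A.PixelDistributionAdaptation(reference_images=[reference], transform_type="pca", p=0.5),
])
```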

Minor changes:

Bugfixes

1.0.3

15 Jul 10:11
929cbd8
  • Fixed a problem with incorrect shapes in the keypoint and bbox processors after ToTensorV2 (#963)
  • Fixed problems with float values in the YOLO format in edge cases (#958)

1.0.2

09 Jul 10:34
6af3e02
  1. Fixed a YOLO format conversion problem when a bbox is larger than the image by 1 pixel.
    Such YOLO bboxes are now converted to the Albumentations format without denormalization.
    More info in PR: #924
  2. Removed a redundant search for the first and last dual transform #946

1.0.1

06 Jul 13:14
b6b7e34

Added position argument to PadIfNeeded (#933 by @yisaienkov)

Possible values: center, top_left, top_right, bottom_left, bottom_right, with center being the default value.

One possible use case for this feature is object detection, where you need to pad an image to a square but want the predicted bounding boxes to match the bounding boxes of the unpadded image.
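
A minimal sketch of that use case (the target size is illustrative, and the string form of position is an assumption):

```python
import albumentations as A

# Pad to a square while keeping the original image anchored in the top-left
# corner, so box coordinates on the padded image match the unpadded one.
transform = A.Compose(
    [A.PadIfNeeded(min_height=1024, min_width=1024, position="top_left")],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)
```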


1.0.0

01 Jun 11:28
81523ea

Breaking changes

  • The imgaug dependency is now optional, and by default, Albumentations won't install it. This change was necessary to prevent the simultaneous installation of both opencv-python-headless and opencv-python (you can read more about the problem in this issue). If you still need imgaug as a dependency, you can use the pip install -U albumentations[imgaug] command to install Albumentations with imgaug.
  • The deprecated augmentation ToTensor that converts NumPy arrays to PyTorch tensors is completely removed from Albumentations. You will get a RuntimeError exception if you try to use it. Please switch to ToTensorV2 in your pipelines; see the sketch after this list.
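
A minimal sketch of the replacement (the surrounding transforms are illustrative):

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

# ToTensorV2 replaces the removed ToTensor as the last step of a pipeline.
transform = A.Compose([
    A.Resize(256, 256),
    A.Normalize(),
    ToTensorV2(),
])
```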

New augmentations

By default, Albumentations doesn't require imgaug as a dependency. But if you need imgaug, you can install it along with Albumentations by running pip install -U albumentations[imgaug].

Here is a table of deprecated imgaug augmentations and respective augmentations from Albumentations that you should use instead:

Old deprecated augmentation    New augmentation
---------------------------    ----------------
IAACropAndPad                  CropAndPad
IAAFliplr                      HorizontalFlip
IAAFlipud                      VerticalFlip
IAAEmboss                      Emboss
IAASharpen                     Sharpen
IAAAdditiveGaussianNoise       GaussNoise
IAAPerspective                 Perspective
IAASuperpixels                 Superpixels
IAAAffine                      Affine
IAAPiecewiseAffine             PiecewiseAffine
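
A minimal migration sketch based on the table above (the particular transforms and probabilities are illustrative):

```python
import albumentations as A

# Before (deprecated): A.Compose([A.IAASharpen(p=0.5), A.IAAPerspective(p=0.5)])
transform = A.Compose([A.Sharpen(p=0.5), A.Perspective(p=0.5)])
```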

Major changes

  • Serialization logic is updated. Previously, Albumentations used the full classpath to identify an augmentation (e.g. albumentations.augmentations.transforms.RandomCrop). With the updated logic, Albumentations will use only the class name for augmentations defined in the library (e.g., RandomCrop). For custom augmentations created by users and not distributed with Albumentations, the library will continue to use the full classpath to avoid name collisions (e.g., when a user creates a custom augmentation named RandomCrop and uses it in a pipeline).

    This new logic will allow us to refactor the code without breaking serialized augmentation pipelines created using previous versions of Albumentations. This change will also reduce the size of YAML and JSON files with serialized data.

    The new serialization logic is backward compatible. You can load serialized augmentation pipelines created in previous versions of Albumentations because Albumentations supports the old format. A short serialization sketch follows.
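
A minimal sketch of the save/load round trip (the file name is illustrative):

```python
import albumentations as A

transform = A.Compose([A.RandomCrop(256, 256), A.HorizontalFlip(p=0.5)])

# Built-in transforms are stored by class name only; custom transforms keep the full classpath.
A.save(transform, "pipeline.json")
restored = A.load("pipeline.json")
```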

Bugfixes

Minor changes

0.5.2

29 Nov 14:29
aa53526

Minor changes

  • ToTensorV2 now automatically expands grayscale images with the shape [H, W] to the shape [H, W, 1]. PR #604 by @Ingwar.
  • CropNonEmptyMaskIfExists now also works with multiple masks that are provided by the masks argument to the transform function. Previously, this augmentation worked only with a single mask provided by the mask argument. PR #761. A usage sketch follows this list.
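
A minimal sketch of both changes using random data (the shapes and crop size are illustrative):

```python
import numpy as np
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Multiple masks are passed through the `masks` argument.
image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
masks = [np.random.randint(0, 2, (128, 128), dtype=np.uint8) for _ in range(3)]
cropped = A.Compose([A.CropNonEmptyMaskIfExists(height=64, width=64)])(image=image, masks=masks)

# A grayscale [H, W] image is expanded to [H, W, 1] before conversion to a tensor.
gray = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
tensor = A.Compose([ToTensorV2()])(image=gray)["image"]  # shape: [1, 128, 128]
```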

0.5.1

02 Nov 16:53
2e48ce9

Breaking changes

  • API for A.FDA is changed to resemble the API of A.HistogramMatching. Now, both transformations expect to receive a list of reference images, a function to read those images, and additional augmentation parameters. (#734)
  • A.HistogramMatching now uses read_rgb_image as the default read_fn. This function reads an image from the disk as an RGB NumPy array. Previously, the default read_fn was cv2.imread, which read an image as a BGR NumPy array. (#734) See the sketch after this list.
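
A minimal sketch of the updated API (the file names are illustrative):

```python
import albumentations as A

# Both transforms take a list of reference images; read_fn defaults to
# read_rgb_image, which loads them from disk as RGB arrays.
transform = A.Compose([
    A.HistogramMatching(reference_images=["reference_1.jpg", "reference_2.jpg"], p=0.5),
    A.FDA(reference_images=["reference_1.jpg"], beta_limit=0.1, p=0.5),
])
```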

New transformations

  • A.Sequential transform that can apply augmentations in a sequence. This transform is not intended to be a replacement for A.Compose. Instead, it should be used inside A.Compose the same way as A.OneOf or A.OneOrOther. For instance, you can combine A.OneOf with A.Sequential to create an augmentation pipeline containing multiple sequences of augmentations and apply one randomly chosen sequence to input data, as shown below. (#735)
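
A minimal sketch of that pattern (the transforms inside each sequence are illustrative):

```python
import albumentations as A

# One of the two fixed sequences is chosen at random on each call.
transform = A.Compose([
    A.OneOf(
        [
            A.Sequential([A.HorizontalFlip(p=1), A.RandomBrightnessContrast(p=1)]),
            A.Sequential([A.VerticalFlip(p=1), A.GaussNoise(p=1)]),
        ],
        p=1,
    ),
])
```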

Minor changes

  • A.ShiftScaleRotate now has two additional optional parameters: shift_limit_x and shift_limit_y. If either of those parameters (or both of them) is set, A.ShiftScaleRotate will use the set values to shift images along the respective axis. (#735)
  • A.ToTensorV2 now supports an additional argument transpose_mask (False by default). If the argument is set to True and an input mask has 3 dimensions, A.ToTensorV2 will transpose the dimensions of the mask tensor in addition to transposing the dimensions of the image tensor. (#735) See the sketch after this list.
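
A minimal sketch of both options (the limit values are illustrative):

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

transform = A.Compose([
    A.ShiftScaleRotate(shift_limit_x=0.1, shift_limit_y=0.05, p=0.5),  # per-axis shift limits
    ToTensorV2(transpose_mask=True),  # also transpose a 3D mask from HWC to CHW
])
```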

Bugfixes

  • A.FDA now correctly uses coordinates of the center of an image. (#730)
  • Fixed problems with grayscale images for A.HistogramMatching. (#734)
  • Fixed a bug that led to an exception when A.load() was called to deserialize a pipeline that contained A.ToTensor or A.ToTensorV2, but those transforms were not imported in the code before the call. (#735)