🚀 Added
🧠 Your own fine-tuned Florence 2 in Workflows 🔥
Have you been itching to dive into the world of Vision-Language Models (VLMs)? Maybe you've explored @SkalskiP's incredible tutorial on training your own VLM. Well, now you can take it a step further—train your own VLM directly on the Roboflow platform!
But that’s not all: thanks to @probicheaux, you can seamlessly integrate your VLM into Workflows for real-world applications.
Check out the 📖 docs and try it yourself!
Note
This Workflow block is not available on the Roboflow platform - you need to run an inference server on your own machine (preferably one with a GPU).
```bash
pip install inference-cli
inference server start
```
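Once the server is running, you can execute your Workflow against it. A minimal sketch using `InferenceHTTPClient` from `inference_sdk` - the workspace name, workflow ID and image path are placeholders you need to fill in:

```python
from inference_sdk import InferenceHTTPClient

# Point the client at the locally started inference server.
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

# Workspace name, workflow id and image path below are placeholders.
result = client.run_workflow(
    workspace_name="<your-workspace>",
    workflow_id="<your-workflow-id>",
    images={"image": "path/to/image.jpg"},
)
print(result)
```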
🎨 Classification results visualisation in Workflows
The Workflows ecosystem offers a variety of blocks to visualize model predictions, but we’ve been missing a dedicated option for classification—until now! 🎉
Thanks to the incredible work of @reiffd7, we’re excited to introduce the Classification Label Visualization block to the ecosystem.
Dive in and bring your classification results to life! 🚀
🚧 Changes in ecosystem - Execution Engine v1.3.0 🚧
Tip
Changes introduced in Execution Engine v1.3.0 are non-breaking, but we shipped a couple of nice extensions and we encourage contributors to adopt them. Full details of the changes and migration guides are available here.
⚙️ Kinds with dynamic serializers and deserializers
- Added serializers/deserializers for each kind, enabling integration with external systems.
- Updated the Blocks Bundling page to reflect these changes.
- Enhanced `roboflow_core` kinds with suitable serializers/deserializers.
See our updated blocks bundling guide for more details.
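If you maintain your own plugin, the wiring boils down to exposing mappings from kind names to (de)serializer functions. A hedged sketch, assuming the module-level `KINDS_SERIALIZERS` / `KINDS_DESERIALIZERS` dictionaries described in the bundling guide - the kind name and function signatures below are illustrative, so verify against the guide:

```python
# Illustrative plugin __init__.py wiring custom (de)serializers to a kind.
# "my_plugin_kind" and both helper functions are made up for this sketch.
from typing import Any


def serialize_my_kind(value: Any) -> dict:
    # Convert the in-memory representation into a JSON-friendly payload.
    return {"payload": value}


def deserialize_my_kind(parameter: str, value: Any) -> Any:
    # Reverse the transformation when data enters the Workflow.
    return value["payload"]


KINDS_SERIALIZERS = {"my_plugin_kind": serialize_my_kind}
KINDS_DESERIALIZERS = {"my_plugin_kind": deserialize_my_kind}
```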
🆓 Any data can now be a Workflow input
We've added a new Workflows input type - `WorkflowBatchInput` - which is capable of accepting any kind, unlike previous inputs such as `WorkflowImage`. What's even nicer - you can also specify the dimensionality level for `WorkflowBatchInput`, basically making it possible to break down each workflow into single steps executed in debug mode.
Take a look at the 📖 docs to learn more.
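A hedged sketch of how such an input might be declared in a workflow definition - names, kinds and the dimensionality value are illustrative, not a tested definition:

```python
# Illustrative workflow definition declaring a WorkflowBatchInput that
# accepts object detection predictions; steps/outputs omitted for brevity.
WORKFLOW_DEFINITION = {
    "version": "1.0",
    "inputs": [
        {
            "type": "WorkflowBatchInput",
            "name": "predictions",
            "kind": ["object_detection_prediction"],  # any kind is allowed
            "dimensionality": 1,
        },
    ],
    "steps": [],
    "outputs": [],
}
```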
🏋️ Easier blocks development
We got tired of wondering whether a specific field in a block manifest should be marked with the `StepOutputSelector`, `WorkflowImageSelector`, `StepOutputImageSelector` or `WorkflowParameterSelector` type annotation. That was very confusing and effectively increased the difficulty of contributions. Since selector type annotations are required for the Execution Engine to know that block fields define placeholders for data of a specific kind, we could not eliminate those annotations, but we are making them easier to understand - introducing a generic annotation called `Selector(...)`.
`Selector(...)` alone no longer tells the Execution Engine that the block accepts batch-oriented data, so we replaced the old `block_manifest.accepts_batch_input()` method with two new ones:

- `block_manifest.get_parameters_accepting_batches()` - returns the list of params that the `WorkflowBlock.run(...)` method accepts wrapped in a `Batch[X]` container
- `block_manifest.get_parameters_accepting_batches_and_scalars()` - returns the list of params that the `WorkflowBlock.run(...)` method accepts either wrapped in a `Batch[X]` container or provided as stand-alone scalar values
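Below is a minimal sketch of a manifest adopting the new annotation - import paths follow the inference repo layout but may differ between versions, so treat it as a sketch rather than a drop-in block:

```python
from typing import List, Literal

from inference.core.workflows.execution_engine.entities.base import OutputDefinition
from inference.core.workflows.execution_engine.entities.types import (
    IMAGE_KIND,
    Selector,
)
from inference.core.workflows.prototypes.block import WorkflowBlockManifest


class ExampleManifest(WorkflowBlockManifest):
    type: Literal["my_plugin/example@v1"]
    # One generic annotation replaces the four selector types listed above.
    image: Selector(kind=[IMAGE_KIND])

    @classmethod
    def get_parameters_accepting_batches(cls) -> List[str]:
        # Declares that run(...) receives `image` wrapped in a Batch[X].
        return ["image"]

    @classmethod
    def describe_outputs(cls) -> List[OutputDefinition]:
        return []
```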
Tip
To adopt the changes while creating new blocks - visit our updated blocks creation guide.
To migrate existing blocks - take a look at the migration guide.
🖌️ Increased JPEG compression quality
`WorkflowImageData` has a property called `base64_image`, which is auto-generated from the `numpy_image` associated with the object. In the previous version of `inference` the default compression level was 90% - we increased it to 95%. We expect that this change will generally improve the quality of images passed between steps, yet there is no guarantee of better results from the models (that depends on how the models were trained). Details of change: #798
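For intuition, the change is equivalent to raising the quality flag in a JPEG encode like the one below (a standalone illustration, not the actual `inference` internals):

```python
import base64

import cv2
import numpy as np

# Encode a frame as JPEG at quality 95 (the new default) instead of 90,
# then base64-encode it the way base64_image exposes it.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
ok, buffer = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 95])
base64_image = base64.b64encode(buffer).decode("utf-8")
```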
Caution
Small changes in model predictions are expected due to this change, since we may now be passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.
🧠 Change in Roboflow models blocks
We've changed the way Roboflow models blocks work on the Roboflow hosted platform. Previously they used the `numpy_image` property of `WorkflowImageData` as the input to `InferenceHTTPClient` when executing remote calls - which usually meant serialising the numpy image to JPEG and then to `base64`, even though on the Roboflow hosted platform we usually already had a `base64` representation of the image. Effectively we were:

- slowing down the processing
- artificially decreasing the quality of images

This is no longer the case - we now only transform the image representation (and apply lossy compression) when needed. Details of change: #798.
Caution
Small changes in model predictions are expected due to this change, since we may now be passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.
🗒️ New kind `inference_id`
We've diagnosed the need to give semantic meaning to the inference identifiers that are used by external systems as correlation IDs.
That's why we are introducing a new kind - `inference_id`.
We encourage block developers to use the new kind.
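A hedged sketch of declaring a block output carrying the new kind - the `INFERENCE_ID_KIND` symbol and import paths follow the repo's naming conventions but are assumptions here, so double-check against the kinds docs:

```python
# Assumed import paths and symbol name - verify before use.
from inference.core.workflows.execution_engine.entities.base import OutputDefinition
from inference.core.workflows.execution_engine.entities.types import INFERENCE_ID_KIND

# Declares a block output of the inference_id kind, so external systems can
# pick it up as a correlation ID.
outputs = [OutputDefinition(name="inference_id", kind=[INFERENCE_ID_KIND])]
```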
🗒️ New field available in `video_metadata` and `image` kinds
We've added a new optional field to video metadata - `measured_fps` - take a look at the 📖 docs to learn more.
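Inside a block, the new field can be read off the image's video metadata - a hedged sketch, assuming the attribute layout of `WorkflowImageData` and its video metadata model:

```python
# Hedged sketch: reading measured_fps inside a block's run(...) method.
# Attribute names assume the WorkflowImageData / VideoMetadata layout.
from inference.core.workflows.execution_engine.entities.base import WorkflowImageData


def run(image: WorkflowImageData) -> dict:
    metadata = image.video_metadata
    # measured_fps is optional - it may be None when FPS was not measured.
    return {"measured_fps": metadata.measured_fps}
```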
🏗️ Changed
- Disable telemetry when running YOLO world by @grzegorz-roboflow in #800
- Pass webrtc TURN config as request parameter when calling POST /inference_pipelines/initialise_webrtc by @grzegorz-roboflow in #801
- Remove reset from YOLO settings by @grzegorz-roboflow in #802
- Pin all dependencies and update to new versions of libs by @PawelPeczek-Roboflow in #803
- bumping owlv2 version and putting cache size in env by @isaacrob-roboflow in #813
🔧 Fixed
- Florence 2 - fixing model caching by @probicheaux in #808
- Use measured fps when fetching frames from live stream by @grzegorz-roboflow in #805
- Fix issue with label visualisation by @PawelPeczek-Roboflow in #811 and @PawelPeczek-Roboflow in #814
Full Changelog: v0.26.1...v0.27.0