
Add step-wise optimization for the fixed-lag smoother (issue #300) #319

Open
wants to merge 5 commits into devel

Conversation

fabianhirmann
Contributor

This PR adds step-wise optimization for the fixed-lag smoother, addressing the issue with the existing fixed-lag smoother described in #300.

Please provide feedback if this PR is ready to be merged or what is missing. If you have questions, feel free to ask.

@fabianhirmann
Contributor Author

Since I haven't heard any comments on this for a while, I would like to send a short ping to @svwilliams, @carlos-m159 and @ayrton04, as you were mostly the ones who reviewed the last pull requests in this repository.

Contributor

@svwilliams svwilliams left a comment


First, I'd like to reiterate that the design intent of fuse was to allow people to extend the variables, constraints, publishers, etc. from outside the main project. This allows contributors to share their code under different terms, tailor it to specific use-cases, and generally retain control of their own code. If you are interested in releasing the StepWiseFixedLagSmoother from your own repo, I will happily adapt the current FixedLagSmoother class to make derived classes easier to implement.

Second, while most of fuse was designed to be extended, the BatchOptimizer and FixedLagSmoother don't quite live up to the rest. The implementation became complex and a bit unwieldy. Having created a derived FixedLagSmoother, I'd be interested in any design changes or recommendations you have to make life easier in the future.

And finally, I have a few concerns about how the constraints are being selected for each optimization phase. The sensors send Transaction objects to the optimizer. Each transaction may add or remove variables, and add or remove constraints. It is assumed that each sensor is smart enough to perform these operations in a correct and consistent order. If a sensor wants to remove a variable, it will also remove all of the constraints involved with that variable at the same time. A Transaction was intended to be an atomic unit of work. But the way the current PR separates work from different sensors only handles parts of each Transaction, and processing partial transactions could lead to unexpected results.

Looking forward to hearing your thoughts about any of this.

FixedLagSmootherParams::loadFromROS(nh);

// load optimization steps
XmlRpc::XmlRpcValue optimization_steps_raw;
Can you add an include for XmlRpcValue?
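A sketch of the requested include; the header path is my assumption based on the standard ROS 1 `xmlrpcpp` package layout:

```cpp
// Provides XmlRpc::XmlRpcValue (assumed path; some setups use <XmlRpc.h> directly)
#include <xmlrpcpp/XmlRpc.h>
```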

@@ -126,6 +126,8 @@ class FixedLagSmoother : public Optimizer
*/
virtual ~FixedLagSmoother();

virtual void startOptimization();
Can you add a function description comment for startOptimization()?
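A possible description comment, as a sketch; the wording is my assumption, not the maintainers':

```cpp
/**
 * @brief Start the optimization routine.
 *
 * (Hypothetical wording.) Marks the optimizer as running so that queued
 * transactions are processed; typically called once after construction.
 */
virtual void startOptimization();
```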

this);
}

void FixedLagSmoother::startOptimization()
We should probably initialize the optimization_running_ flag to false in the constructor, and then set it to true in the startOptimization() method. Just to adhere to the principle of least surprise.
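The suggested pattern can be sketched as follows; the class and member names are simplified stand-ins for illustration, not the actual fuse implementation:

```cpp
#include <atomic>

// Minimal sketch of the reviewer's suggestion: initialize the flag to false
// in the constructor and flip it to true only when startOptimization() is
// explicitly called, so a freshly constructed smoother is not optimizing.
class StepWiseFixedLagSmoother
{
public:
  // Least surprise: the optimizer is off by default
  StepWiseFixedLagSmoother() : optimization_running_(false) {}

  void startOptimization() { optimization_running_ = true; }

  bool isOptimizationRunning() const { return optimization_running_; }

private:
  std::atomic<bool> optimization_running_;  // atomic: read from the optimizer thread
};
```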

Comment on lines +152 to +168
fuse_core::Transaction::SharedPtr new_transaction_for_loop = new_transaction->clone();

// Keep only the sensor models that will be optimized in this loop
std::unordered_set<fuse_core::UUID> constraints_for_removal;
for (const auto& constraint : new_transaction_for_loop->addedConstraints())
{
if (std::find(types.begin(), types.end(), constraint.source()) == types.end())
{
constraints_for_removal.insert(constraint.uuid());
}
}

// remove all constraints marked for removal
for (auto &&i : constraints_for_removal)
{
new_transaction_for_loop->removeConstraint(i);
}
Here you are making a copy of everything, and then deleting the things you don't want. I have some efficiency questions about this approach.

Also, you are only processing the AddedConstraints set. This means that:

  • All the new variables from all of the new constraints are added during the first iteration. This potentially leaves variables unconstrained during the optimization. Levenberg-Marquardt (L-M) optimization often lets you get away with this, but it is not ideal.
  • If a sensor requested that a previous constraint or variable gets removed, all of the removals are also happening during the first iteration. Again, if a constraint gets removed but its replacement is not added back, you could end up with an under-constrained system during optimization.

The fixed-lag smoother keeps a queue of all of the original Transactions from the sensors. The processQueue() method applies the motion models, then merges all of the pending transactions into a single Transaction so the Graph can be updated in a single operation. My personal recommendation would be to replace the processQueue() call with your own implementation. You could loop through the queue multiple times, selecting only the Transactions from sensors you want to include during a given iteration. That ensures that variables/constraints are added and deleted at the correct times.
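The selection loop described above can be sketched as follows. This is a simplified illustration under my own assumptions: `PendingTransaction`, `selectByStep`, and the string-based sensor names are hypothetical stand-ins, not the actual fuse API, which works with full `fuse_core::Transaction` objects.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical simplification of the pending-transaction queue: each queued
// transaction carries the name of the sensor model that produced it.
struct PendingTransaction
{
  std::string sensor;  // sensor model that sent this transaction
  int payload;         // stand-in for the added/removed variables and constraints
};

// Sketch of the reviewer's suggestion: instead of merging the whole queue at
// once, loop over the queue once per optimization step and select only the
// transactions whose sensor belongs to that step. Each transaction stays
// atomic: it is either fully included in a step or left out, never split
// into partial add/remove sets.
std::vector<std::vector<PendingTransaction>> selectByStep(
  const std::vector<PendingTransaction>& queue,
  const std::vector<std::vector<std::string>>& steps)
{
  std::vector<std::vector<PendingTransaction>> merged(steps.size());
  for (size_t step = 0; step < steps.size(); ++step)
  {
    const auto& sensors = steps[step];
    for (const auto& transaction : queue)
    {
      if (std::find(sensors.begin(), sensors.end(), transaction.sensor) != sensors.end())
      {
        merged[step].push_back(transaction);
      }
    }
  }
  return merged;
}
```

In a real implementation the inner `push_back` would instead merge the selected Transactions into one combined Transaction per step, so the Graph is still updated in a single operation per iteration.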

@fabianhirmann
Contributor Author

Hi @svwilliams,

Thank you for your comments on my pull request. Our intention was indeed to include this step-wise fixed-lag smoother in the public repository so that it is accessible and available to everyone, in the same way that fuse as a public repository has been helpful for us.

Regarding the other comments, I would first like to resolve the issue described in #300, and then we can decide together in which way this implementation is useful for everyone (including us).

I look forward to hearing from you in my issue #300,
Fabian
