The micro-controller built into the sensor has the capability to make the readout smarter than it currently is. While filtering the analog signal from the hall-effect sensor certainly helps to minimize the likelihood of extreme outliers during real-time measurement, it would be nice if there were also built-in tests to flag likely measurement errors or anomalies as they occur.
This approach would likely need to rely on certain assumptions, such as the filament being within some tolerable spec. That is probably a safe assumption (using conservative values), since most printers will start to exhibit defective prints if the filament suddenly goes far out of spec.
The approach to error and anomaly detection is largely dependent on the feed rate (which is unknown to the sensor) and the sensor polling rate (which can be configured).
Errors should be triggered whenever a measurement is (based on statistical tests or otherwise) likely to be far outside the average, perhaps by a configurable threshold; two standard deviations seems like a very reasonable default. Errors should obviously also be triggered if the measured filament diameter exceeds the inner diameter of the bowden tube downstream of the sensor (which should be configurable), or if the filament is detected to have run out (zero diameter, perhaps sustained over a configurable period of time).
Errors should not be triggered in the following circumstances, as they are the whole point of having the sensor in the first place (assuming otherwise fairly smooth transitions between variations in filament thickness):
When the filament suddenly gets thicker (as long as it is not so thick that there would be a clear problem getting it through the tube), which can happen with filament joiners. This is probably a rare condition.
When the filament suddenly gets thinner (for example, if it has been ground down upstream, as with a previous MMU2 loading/unloading failure), the sensor should be capable of detecting the difference and allowing the print to adapt without failing.
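To make the proposal concrete, here is a minimal sketch of the kind of classification logic described above. All names, the struct layout, and the default thresholds are hypothetical; it assumes a rolling mean and standard deviation of recent readings are maintained elsewhere in the firmware:

```c
#include <math.h>

/* Hypothetical status codes for a single diameter reading (mm). */
typedef enum {
    READING_OK,
    READING_OUTLIER,    /* statistically far from recent average */
    READING_TOO_THICK,  /* would not fit through the bowden tube */
    READING_RUNOUT      /* filament appears to be absent */
} reading_status_t;

/* Illustrative detector state/config; real values would be configurable. */
typedef struct {
    double mean;        /* rolling mean of recent readings (mm) */
    double stddev;      /* rolling standard deviation (mm) */
    double sigma_limit; /* e.g. 2.0 standard deviations */
    double max_diam;    /* bowden tube inner diameter (mm) */
    double runout_eps;  /* below this, treat the filament as run out */
} detector_cfg_t;

reading_status_t classify_reading(double d, const detector_cfg_t *cfg)
{
    if (d < cfg->runout_eps)
        return READING_RUNOUT;
    if (d > cfg->max_diam)
        return READING_TOO_THICK;
    if (fabs(d - cfg->mean) > cfg->sigma_limit * cfg->stddev)
        return READING_OUTLIER;
    return READING_OK;  /* gradual thick/thin drift shifts the rolling
                           mean and so stays within the sigma band */
}
```

Note that sustained-but-smooth changes (the joiner and ground-down cases above) shift the rolling mean over a few polls and therefore stop tripping the outlier test, which is exactly the distinction the proposal asks for; a runout would additionally want to persist for a configurable time before erroring.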
I agree, and think it might be possible to derive a feed rate for the width sensor from the step signal to the stepper driver. That ought to be usable to catch outliers, and it might be useful as a jam sensor as well.
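If the sensor's micro-controller could count STEP pulses on that line, the feed-rate estimate itself is trivial. A back-of-the-envelope sketch, where the function name is made up and the steps-per-mm value depends entirely on the extruder in question:

```c
/* Hypothetical feed-rate estimate from counted extruder STEP pulses.
 * step_count:   pulses observed during the window
 * window_s:     length of the counting window, in seconds
 * steps_per_mm: extruder calibration value (machine-specific)
 * Returns filament feed rate in mm/s, or 0 on invalid input. */
double feed_rate_mm_s(unsigned long step_count, double window_s,
                      double steps_per_mm)
{
    if (window_s <= 0.0 || steps_per_mm <= 0.0)
        return 0.0;
    return (double)step_count / steps_per_mm / window_s;
}
```

With a feed rate in hand, the sensor would know how much filament length each polling interval covers, which is the missing input the issue above identifies for tuning the outlier tests.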
I've been trying to think of a way to fit a magnetic encoder to the idler bearing. Tape-player pinch rollers might work: they're around the right size and might be accurate enough to be useful. That said, I think anything I can come up with to fix the idler to the new driven axle shaft would have enough runout to make it unusable as an accurate width sensor, or would at least require some ridiculous mapping. You ought to be able to save a print near-instantly on a jam then, and with a little OLED you could track usage and other stats.
I might order a few rubber pinch rollers from ali and see whether they'd shrink-fit on a dowel or something, whether they'd still be decently round, whether they'd be hard enough not to deform, and whether they'd keep traction...
Outside getting a custom one piece axle made, I'm all ears for ideas for a friction driven idler/axle you could attach a magnet to!