metrology project added

FarStryke21 committed Sep 22, 2024
1 parent 56e3010 commit 3781af8
Showing 6 changed files with 54 additions and 48 deletions.

102 changes: 54 additions & 48 deletions _research/research0.html

@@ -7,90 +7,96 @@

<style>
.image-container {
text-align: center;
margin-bottom: 20px;
}

.responsive-image {
height: auto; /* Maintain aspect ratio */
}
</style>

<p>
The Metrology Project is my main research project at CERLAB. The idea is to develop an automated testing pipeline
for additively manufactured parts. Metal-based 3D printing allows us to print functional parts with complex
topologies with ease. But when those parts are to be used in high-accuracy applications, it is important that they
are free of any defects that may cause failure. Due to the high volume of parts, it is not always feasible to
manually inspect each part that is printed. Additionally, the complex geometries make it more difficult to take
precise measurements of the parts.</p>

<p>
So how do we do it? The solution is, in theory, really simple. Mount a very good sensor at the end of a robot arm,
use that arm to capture observations of the object we are interested in, and stitch all the observations together
to get a reconstruction of the printed part. We can then compare the reconstruction against the original part file
and analyze it for inconsistencies.
</p>
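<p>
As a rough sketch of that comparison step (our illustration, not the pipeline's actual code), the snippet below
computes a per-point deviation map between a registered scan and the CAD mesh using Open3D's raycasting scene.
It assumes the scan has already been aligned to the CAD frame.
</p>

<pre><code>import numpy as np
import open3d as o3d

def deviation_map(scan: o3d.geometry.PointCloud,
                  cad_mesh: o3d.geometry.TriangleMesh) -> np.ndarray:
    """Distance from each scanned point to the CAD surface (assumes the
    scan is already registered to the CAD frame)."""
    scene = o3d.t.geometry.RaycastingScene()
    scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(cad_mesh))
    pts = o3d.core.Tensor(np.asarray(scan.points), dtype=o3d.core.Dtype.Float32)
    return scene.compute_distance(pts).numpy()  # one deviation value per point
</code></pre>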

<h2>Hardware</h2>

<p>
Broadly, our hardware system has four major parts: the specimen we are scanning, the sensor we use to perform
the scans, a 6-DOF robot arm that moves the sensor around and orients it, and a turntable that provides an
additional, redundant DOF to the system. Using a combination of these 6+1 DOFs we can find solutions to the
viewpoint planning problem.
</p>


<div class="image-container">
<img src="/images/research/pose_estimate/main.png" alt="Centered Image" class="responsive-image" style="width: 650px">
<img src="/images/research/metrology/hardware.jpg" alt="Centered Image" class="responsive-image"
style="width: 650px">
</div>
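<p>
To make the redundant DOF concrete, here is a minimal sketch (an illustration of ours, not the actual planner):
if the turntable spins the part by an angle theta, the planned viewpoint, which is defined relative to the part,
rotates with it, so each turntable angle hands the 6-DOF arm a different, possibly more reachable, target.
</p>

<pre><code>import numpy as np

def rotz(theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about the turntable (world z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def candidate_arm_targets(viewpoint_world: np.ndarray, n: int = 36):
    """Enumerate (turntable angle, arm target) pairs for one planned viewpoint.
    Spinning the part by theta rotates the viewpoint with it, so the arm must
    reach the correspondingly rotated pose."""
    for theta in np.linspace(0.0, 2 * np.pi, n, endpoint=False):
        yield theta, rotz(theta) @ viewpoint_world
</code></pre>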

<p>
Currently we use two categories of sensors to capture the scans. In the first category we have a laser profiler,
which captures measurements along a single dimension with a high degree of accuracy; we use it for specific areas
where the accuracy requirement is high. In the second, we have a structured light sensor, which provides a faster
scanning time at a decent accuracy. Each has its pros and cons, but they balance each other out.
</p>


<div class="image-container">
<img src="/images/research/pose_estimate/pipeline.png" alt="Centered Image" class="responsive-image" style="width: 650px">
<img src="/images/research/metrology/sensors.jpg" alt="Centered Image" class="responsive-image"
style="width: 650px">
</div>


<p>
The image below shows a sample of what an observation from our structured light sensor gives us from one
viewpoint. In the top right corner is what the inbuilt camera in the sensor sees during the measurement. We get a
very dense point cloud of the object placed in our area of interest on the turntable, with the sensor capable of
capturing 5,000 point measurements per square centimeter. Combining one such observation with the rest of the set
allows a very dense reconstruction of the object.
</p>


<div class="image-container">
<img src="/images/research/pose_estimate/fpfh.png" alt="Centered Image" class="responsive-image" style="width: 400px">
<img src="/images/research/metrology/observation.jpg" alt="Centered Image" class="responsive-image"
style="width: 650px">
</div>
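<p>
A minimal Open3D sketch of that stitching step (function and argument names are ours for illustration; it assumes
the world-frame sensor pose for each viewpoint is already known from the arm's kinematics):
</p>

<pre><code>import open3d as o3d

def merge_observations(clouds, sensor_poses, voxel_size=0.5):
    """Fuse per-viewpoint scans into one reconstruction.

    clouds: list of o3d.geometry.PointCloud, one per viewpoint
    sensor_poses: list of 4x4 world-from-sensor transforms (numpy arrays)
    """
    merged = o3d.geometry.PointCloud()
    for cloud, pose in zip(clouds, sensor_poses):
        c = o3d.geometry.PointCloud(cloud)  # copy; transform() mutates in place
        c.transform(pose)                   # sensor frame -> world frame
        merged += c
    # Voxel downsampling evens out density where viewpoints overlap.
    return merged.voxel_down_sample(voxel_size=voxel_size)
</code></pre>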



<h2>Software Pipeline</h2>

<p>
Our current pipeline takes the base model of the specimen and the sensor details as its starting input. The first
module solves the viewpoint planning problem and provides a minimal set of feasible viewpoints that allows full
surface coverage of the specimen. An object pose estimator then corrects the planned viewpoints based on the
current positioning of the specimen on the turntable. Our motion planning module solves the inverse kinematics for
the entire 7-DOF chain and manages the hardware control of our setup. Once all the observations have been
captured, we segment out the specimen and register the observations into a single reconstructed point cloud, which
can then be passed on for postprocessing.
</p>


<div class="image-container">
<img src="/images/research/pose_estimate/data.png" alt="Centered Image" class="responsive-image" style="width: 250px">
<img src="/images/research/pose_estimate/pipeline.png" alt="Centered Image" class="responsive-image"
style="width: 650px">
</div>
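<p>
As a skeleton, the flow above might look like this (a dependency-injected sketch of ours; none of the module or
method names come from the actual codebase):
</p>

<pre><code>def run_metrology_scan(cad_model, sensor_spec,
                       planner, pose_estimator, controller, registrar):
    """High-level flow of the pipeline described above (names illustrative)."""
    viewpoints = planner.solve(cad_model, sensor_spec)           # minimal covering set
    offset = pose_estimator.estimate(controller.rough_scan())    # how the part actually sits
    viewpoints = [offset @ v for v in viewpoints]                # correct planned viewpoints
    observations = [controller.capture(v) for v in viewpoints]   # IK over the 7-DOF chain
    segmented = [registrar.segment(obs) for obs in observations]
    return registrar.merge(segmented)                            # single reconstructed cloud
</code></pre>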

<div class="image-container">
<img src="/images/research/pose_estimate/result.png" alt="Centered Image" class="responsive-image" style="width: 400px">
</div>
<h2>Demo</h2>



<div class="image-container">
<img src="/images/research/pose_estimate/solution.png" alt="Centered Image" class="responsive-image" style="width: 600px">
<div class="video-container"></div>
<video width="750px" height="540px" controls>
<source src="/images/research/metrology/demo.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<div class="video-caption">Hardware demo of the scanning pipeline</div>
</div>

Binary file added images/research/metrology/demo.mp4
Binary file added images/research/metrology/hardware.jpg
Binary file added images/research/metrology/observation.jpg
Binary file added images/research/metrology/pipeline.jpg
Binary file added images/research/metrology/sensors.jpg
