diff --git a/_pages/research.md b/_pages/research.md
index af76b96b4d4ff..c5c0b2f0e84ee 100644
--- a/_pages/research.md
+++ b/_pages/research.md
@@ -7,6 +7,21 @@ author_profile: true
 ### Pose Estimation: Understanding what is where?
+
+<img src="..." alt="Project Image 2">
+
+3D pose estimation extends its 2D counterpart by incorporating depth: the goal is to recover the three-dimensional position and orientation of objects in a scene from point clouds acquired with sensors such as LiDAR, stereo cameras, or depth cameras. A central tool in this area is the Point Cloud Library (PCL), an open-source library that provides algorithms for filtering noise, segmenting data, extracting features, and visualizing 3D point clouds. These building blocks have made PCL a cornerstone of applications such as robotics and augmented reality, where spatial understanding of the environment is essential.
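+
+As a small illustration of the kind of preprocessing PCL handles, the sketch below loads a scan, downsamples it with a voxel grid, and estimates surface normals. The file name, leaf size, and search radius are placeholder values rather than settings from any particular project.
+
+```cpp
+#include <pcl/io/pcd_io.h>
+#include <pcl/point_types.h>
+#include <pcl/filters/voxel_grid.h>
+#include <pcl/features/normal_3d.h>
+#include <pcl/search/kdtree.h>
+
+int main()
+{
+  // Load a scan; "scene.pcd" is a placeholder file name.
+  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
+  if (pcl::io::loadPCDFile<pcl::PointXYZ>("scene.pcd", *cloud) == -1)
+    return -1;
+
+  // Downsample with a voxel grid to suppress noise and reduce point density.
+  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
+  pcl::VoxelGrid<pcl::PointXYZ> voxel;
+  voxel.setInputCloud(cloud);
+  voxel.setLeafSize(0.01f, 0.01f, 0.01f);  // 1 cm voxels; tune to the sensor.
+  voxel.filter(*filtered);
+
+  // Estimate surface normals, a common input to feature-based pose pipelines.
+  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
+  ne.setInputCloud(filtered);
+  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
+  ne.setSearchMethod(tree);
+  ne.setRadiusSearch(0.03);  // 3 cm neighbourhood; scene-dependent.
+
+  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
+  ne.compute(*normals);
+
+  return 0;
+}
+```
+
+Normals computed this way typically feed descriptor extraction and registration further down a pose-estimation pipeline.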
+
+More recently, Neural Radiance Fields (NeRF) have reshaped 3D scene reconstruction. NeRF represents a scene as an implicit function, learned by a neural network, that maps a 3D position and viewing direction to density and color; rendering this volumetric representation captures fine detail in both geometry and appearance, and it can synthesize high-fidelity reconstructions and novel views from sparse, unstructured 2D images. This combination of deep learning and implicit scene representation is a promising route to more accurate and efficient 3D pose estimation across a range of domains.
+
 ### Coverage Viewpoint Generation: How to look at something in its entirety?
 ### PointCloud Registration: Leveraging feature descriptors for the best results