Fei Li
🐼 panda

class ContactInformationCard:
    def __init__(self):
        self.dept = "me @ hku"
        self.lab = "APL"
        self.email = "lifei9524@connect.hku.hk"
        self.phone = "+852 5973"

    def flipCard(self):
        print("tap on the card to flip.")

    def closeCard(self):
        print("tap outside to close it.")


I am a PhD student at HKU (◕▿◕)

I grew up in Nanyang, China.

I do research in turbulent flow.

fluid mechanics: turbulence, atmospheric boundary layer flow, signal processing
heat transfer

📜 updates


📚 publications

Turbulence scale and strength analysis in the roughness and inertial sublayers over urban areas: A wind tunnel study

Fei Li, Ruiqi Wang, Guoliang Chen, Ziwei Mo, Naoki Ikegaya, Chun-Ho Liu
[PDF]
Abstract: ...
Correlations among high-order statistics and low-occurrence wind speeds within a simplified urban canopy based on particle image velocimetry datasets

Fei Li, Chiyoko Hirose, Wei Wang, Chun-Ho Liu, Naoki Ikegaya
[PDF]
Abstract: ...
Real-Time 3D Semantic Scene Perception for Egocentric Robots with Binocular Vision
arXiv (02/19/2024)
Khang Nguyen, Tuan Dang, Manfred Huber.
[PDF] | [CODE] | [DEMO]
Abstract: Perceiving a three-dimensional (3D) scene with multiple objects while moving indoors is essential for vision-based mobile cobots, especially for enhancing their manipulation tasks. In this work, we present an end-to-end pipeline with instance segmentation, feature matching, and point-set registration for egocentric robots with binocular vision, and demonstrate the robot's grasping capability through the proposed pipeline. First, we design an RGB image-based segmentation approach for single-view 3D semantic scene segmentation, leveraging common object classes in 2D datasets to encapsulate 3D points into point clouds of object instances through corresponding depth maps. Next, 3D correspondences of two consecutive segmented point clouds are extracted based on matched keypoints between objects of interest in RGB images from the prior step. In addition, to be aware of spatial changes in 3D feature distribution, we also weigh each 3D point pair based on the estimated distribution using kernel density estimation (KDE), which subsequently gives robustness with less central correspondences while solving for rigid transformations between point clouds. Finally, we test our proposed pipeline on the 7-DOF dual-arm Baxter robot with a mounted Intel RealSense D435i RGB-D camera. The result shows that our robot can segment objects of interest, register multiple views while moving, and grasp the target object.
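
The KDE-weighted registration step can be made concrete with a small sketch. This is only my reading of the abstract, not the authors' released code (see [CODE] for that); the function name and the choice to weight each correspondence by the density of its source point are illustrative assumptions.

import numpy as np
from scipy.stats import gaussian_kde

def kde_weighted_rigid_transform(P, Q):
    """Estimate a rotation R and translation t aligning source points P
    to target points Q (both shape (N, 3)), weighting each correspondence
    by a KDE of the source distribution. Illustrative weighted Kabsch."""
    w = gaussian_kde(P.T)(P.T)            # density at each source point
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(axis=0)   # weighted centroids
    mu_q = (w[:, None] * Q).sum(axis=0)
    H = (w[:, None] * (P - mu_p)).T @ (Q - mu_q)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # enforce det(R) = +1
    t = mu_q - R @ mu_p
    return R, t
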
Online 3D Deformable Object Classification for Mobile Cobot Manipulation
ISR Europe 2023 (Stuttgart, Baden-Württemberg, Germany)
Khang Nguyen, Tuan Dang, Manfred Huber.
[PDF] | [CODE] | [DEMO] | [SLIDES] | [TALK]
Abstract: Vision-based object manipulation in assistive mobile cobots essentially relies on classifying the target objects based on their 3D shapes and features, whether they are deformed or not. In this work, we present an auto-generated dataset of deformed objects specific for assistive mobile cobot manipulation using an intuitive Laplacian-based mesh deformation procedure. We first determine the graspable region of the robot hand on the given object's mesh. Then, we uniformly sample handle points within the graspable region and perform deformation with multiple handle points based on the robot gripper configuration. In each deformation, we identify the orientation of handle points and prevent self-intersection to guarantee the object's physical meaning when multiple handle points are simultaneously applied to the mesh at different deformation intensities. We also introduce a lightweight neural network for 3D deformable object classification. Finally, we test our generated dataset on the Baxter robot with two 7-DOF arms, an integrated RGB-D camera, and a 3D deformable object classifier. The result shows that the robot is able to classify real-world deformed objects from point clouds captured at multiple views by the RGB-D camera.
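
For flavor, here is a minimal sketch of Laplacian-based mesh deformation with soft handle constraints. It uses a uniform graph Laplacian and a plain least-squares solve; the paper's procedure additionally handles handle orientation and self-intersection, so treat every name below as an illustrative assumption.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def laplacian_deform(V, edges, handle_ids, handle_pos, w=10.0):
    """Move vertices handle_ids of mesh (V, edges) toward handle_pos while
    keeping the Laplacian (differential) coordinates close to the original.
    V: (n, 3) vertices; edges: list of (i, j); handle_pos: (m, 3)."""
    n = len(V)
    deg = np.zeros(n)
    for i, j in edges:                 # vertex degrees (assumes no isolated vertices)
        deg[i] += 1; deg[j] += 1
    rows, cols, vals = list(range(n)), list(range(n)), [1.0] * n
    for i, j in edges:                 # uniform Laplacian L = I - D^-1 A
        rows += [i, j]; cols += [j, i]
        vals += [-1.0 / deg[i], -1.0 / deg[j]]
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    delta = L @ V                      # differential coordinates to preserve
    m = len(handle_ids)
    C = sp.csr_matrix((np.full(m, w), (np.arange(m), handle_ids)), shape=(m, n))
    A = sp.vstack([L, C])              # shape terms + soft positional constraints
    return np.column_stack([
        lsqr(A, np.concatenate([delta[:, k], w * handle_pos[:, k]]))[0]
        for k in range(3)])
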
Multiplanar Self-Calibration for Mobile Cobot 3D Object Manipulation using 2D Detectors and Depth Estimation
IROS 2023 (Detroit, MI, U.S.)
Tuan Dang, Khang Nguyen, Manfred Huber.
[PDF] | [CODE] | [DEMO]
Abstract: Calibration is the first and foremost step in dealing with sensor displacement errors that can appear during extended operation and off-time periods to enable robot object manipulation with precision. In this paper, we present a novel multiplanar self-calibration between the camera system and the robot's end-effector for 3D object manipulation. Our approach first takes the robot end-effector as ground truth to calibrate the camera's position and orientation while the robot arm moves the object in multiple planes in 3D space, and a 2D state-of-the-art vision detector identifies the object's center in the image coordinate system. The transformation between world coordinates and image coordinates is then computed using 2D pixels from the detector and 3D known points obtained by robot kinematics. Next, an integrated stereo-vision system estimates the distance between the camera and the object, resulting in 3D object localization. We test our proposed method on the Baxter robot with two 7-DOF arms and a 2D detector that can run in real time on an onboard GPU. After self-calibrating, our robot can localize objects in 3D using an RGB camera and depth image.
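
The world-to-image step described here is a standard perspective-n-point problem. A hedged sketch with OpenCV, using synthetic intrinsics and points (the paper's actual pipeline and numbers may differ):

import numpy as np
import cv2

# Hypothetical intrinsics; real values come from the camera.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
# 3D points known from robot kinematics (world frame), in meters.
world_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                      [0.1, 0.1, 0.0], [0.05, 0.0, 0.1], [0.0, 0.05, 0.1]])
# Fabricate 2D detections by projecting with a known pose, then recover it.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.02, -0.01, 0.5])
image_pts, _ = cv2.projectPoints(world_pts, rvec_true, tvec_true, K, None)
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # world -> camera rotation; tvec is the translation
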
ExtPerFC: An Efficient 2D & 3D Perception Software-Hardware Framework for Mobile Cobot
arXiv (06/08/2023)
Tuan Dang, Khang Nguyen, Manfred Huber.
[PDF] | [CODE] | [DEMO]
Abstract: As the reliability of the robot's perception correlates with the number of integrated sensing modalities to tackle uncertainty, a practical solution is needed to manage these sensors from different computers, operate them simultaneously, and maintain their real-time performance on the existing robotic system with minimal effort. In this work, we present an end-to-end software-hardware framework, namely ExtPerFC, that supports both conventional hardware and software components and integrates machine learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We first design our framework to achieve real-time performance on the existing robotic system, guarantee configuration optimization, and concentrate on code reusability. We then mathematically model and utilize our transfer learning strategies for 2D object detection and fuse them into depth images for 3D depth estimation. Lastly, we systematically test the proposed framework on the Baxter robot with two 7-DOF arms, a four-wheel mobility base, and an Intel RealSense D435i RGB-D camera. The results show that the robot achieves real-time performance while executing other tasks (e.g., map building, localization, navigation, object detection, arm moving, and grasping) simultaneously with available hardware like Intel onboard CPUs/GPUs on distributed computers. Also, to comprehensively control, program, and monitor the robot system, we design and introduce an end-user application.
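
One plausible reading of the "2D detection fused into depth" step is sketched below; the helper name and the median-depth heuristic are my assumptions, not the framework's API.

import numpy as np

def localize_3d(bbox, depth, fx, fy, cx, cy):
    """Back-project a 2D detection into a 3D point using a depth image.
    bbox = (x1, y1, x2, y2) in integer pixels; depth in meters, shape (H, W)."""
    x1, y1, x2, y2 = bbox
    u, v = (x1 + x2) // 2, (y1 + y2) // 2     # box-center pixel
    patch = depth[y1:y2, x1:x2]
    z = np.median(patch[patch > 0])           # robust depth inside the box
    # Pinhole back-projection of pixel (u, v) at depth z:
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
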
PerFC: An Efficient 2D and 3D Perception Software-Hardware Framework for Mobile Cobot
FLAIRS-36 (Clearwater Beach, FL, U.S.)
Tuan Dang, Khang Nguyen, Manfred Huber.
[PDF] | [CODE] | [DEMO]
Abstract: In this work, we present an end-to-end software-hardware framework that supports both conventional hardware and software components and integrates machine learning object detectors without requiring an additional dedicated graphics processing unit (GPU). We design our framework to achieve real-time performance on the robot system, guarantee such performance on multiple computing devices, and concentrate on code reusability. We then utilize transfer learning strategies for 2D object detection and fuse them into depth images for 3D depth estimation. Lastly, we test the proposed framework on the Baxter robot with two 7-DOF arms and a four-wheel mobility base. The results show that the robot achieves real-time performance while executing other tasks (map building, localization, navigation, object detection, arm moving, and grasping) with available hardware like Intel onboard GPUs on distributed computers. Also, to comprehensively control, program, and monitor the robot system, we design and introduce an end-user application.
IoTree: A Battery-free Wearable System with Biocompatible Sensors for Continuous Tree Health Monitoring
MobiCom 2022 (Sydney, NSW, Australia)
Tuan Dang, Trung Tran, Khang Nguyen, Tien Pham, Nhat Pham, Tam Vu, Phuc Nguyen.
[PDF] | [CODE] | [DEMO]
Abstract: In this paper, we present a low-maintenance, wind-powered, battery-free, biocompatible, tree-wearable, and intelligent sensing system, namely IoTree, to monitor water and nutrient levels inside a living tree. The IoTree system includes tiny, biocompatible, and implantable sensors that continuously measure the impedance variations inside the living tree's xylem, where water and nutrients are transported from the root to the upper parts. The collected data are then compressed and transmitted to a base station located up to 1.8 kilometers (approximately 1.1 miles) away. The entire IoTree system is powered by wind energy and controlled by an adaptive computing technique called block-based intermittent computing, ensuring forward progress and data consistency under intermittent power and allowing the firmware to execute with optimal memory and energy usage. We prototype IoTree, which opportunistically performs sensing, data compression, and long-range communication tasks without batteries. During in-lab experiments, IoTree obtains accuracies of 91.08% and 90.51% in measuring 10 levels of two nutrients, NH3 and K2O, respectively. When tested with Burkwood Viburnum and White Bird trees in an indoor environment, IoTree data strongly correlated with multiple watering and fertilizing events. We also deployed IoTree on a grapevine farm for 30 days, and the system provided sufficient measurements every day.
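
Block-based intermittent computing is the neat systems trick here: split work into atomic blocks and commit state to non-volatile memory after each one, so a power loss only costs the current block. A conceptual sketch under that assumption (the real firmware is MCU code; the file below merely stands in for non-volatile memory, and all names are mine):

import json, os

STATE = "nv_state.json"   # stand-in for non-volatile memory

def run_blocks(blocks):
    """Run a list of functions as atomic blocks, committing progress after
    each one so execution can resume after an arbitrary power failure.
    Each block takes and returns a JSON-serializable dict."""
    if os.path.exists(STATE):
        state = json.load(open(STATE))   # resume from the last committed block
    else:
        state = {"pc": 0, "data": {}}
    while state["pc"] < len(blocks):
        state["data"] = blocks[state["pc"]](state["data"])  # one atomic block
        state["pc"] += 1
        with open(STATE, "w") as f:      # commit: forward progress is kept
            json.dump(state, f)
    return state["data"]
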


🧩 outreach activities

iPlanter: An Autonomous Ground Monitoring and Tree Planting Robot


[POST] | [CODE] | [DEMO]
Description:


⛏️ resources

All software releases of the above projects can also be found here!