How does the fork-picking robot achieve accurate cargo identification and positioning through sensors?

Publish Time: 2025-11-25
The core of the fork-picking robot's accurate cargo identification and positioning lies in the collaborative work of multiple types of sensors and the integration of intelligent algorithms. Its technology system covers three major dimensions: environmental perception, target recognition, and spatial positioning, forming a complete closed loop from data acquisition to decision execution.

At the environmental perception level, LiDAR acts as the fork-picking robot's "spatial scanner." By emitting laser pulses and receiving reflected signals, LiDAR can construct a real-time 3D point cloud map of the warehouse environment, accurately identifying the locations of shelves, obstacles, and aisles. This non-contact measurement method is unaffected by lighting conditions and can operate stably even in low-light or completely dark environments. Its high-frequency scanning characteristics (thousands of scans per second) dynamically capture environmental changes, such as temporarily stacked goods or moving personnel, providing the robot with real-time obstacle avoidance information. When the robot approaches a target shelf, the LiDAR focuses on the storage area and confirms the cargo's location using a point cloud matching algorithm, with an error range controllable to the centimeter level.
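The point cloud matching step above can be sketched in miniature. The example below is a hypothetical illustration, not the robot's actual algorithm: it estimates how far a pallet has shifted from its expected storage position by aligning the centroid of an observed 2-D point cloud with a stored template (a real system would run full ICP registration on 3-D scan data; all coordinates here are made up).

```python
# Minimal sketch: estimate a pallet's offset from its expected storage slot
# by centroid-aligning an observed point cloud with a stored template.
# Real systems use full 3-D ICP; this shows only the simplest alignment step.

def centroid(points):
    """Centroid of a list of (x, y) points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def estimate_offset(template, observed):
    """Return (dx, dy) translating the template cloud onto the observed one."""
    tx, ty = centroid(template)
    ox, oy = centroid(observed)
    return (ox - tx, oy - ty)

# Template: expected pallet-face corners in the warehouse frame (metres).
template = [(0.0, 0.0), (1.2, 0.0), (0.0, 0.8), (1.2, 0.8)]
# Observed scan: the same corners, pallet shifted 3 cm right and 1 cm forward.
observed = [(0.03, 0.01), (1.23, 0.01), (0.03, 0.81), (1.23, 0.81)]

dx, dy = estimate_offset(template, observed)
print(f"offset: dx={dx:.3f} m, dy={dy:.3f} m")
```

The recovered offset (3 cm, 1 cm) is consistent with the centimeter-level error range described above.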

Vision sensors undertake the crucial task of cargo feature recognition. A vision system composed of industrial cameras and depth cameras can capture the color, shape, texture, and label information of the cargo. Using deep learning algorithms such as convolutional neural networks (CNNs), robots can identify different categories of goods and even distinguish between similar-looking items. For example, in a pharmaceutical warehouse, a vision system can accurately identify barcodes or text information on medicine boxes, avoiding picking errors. Depth cameras measure the distance from objects to the lens to generate 3D models, aiding in determining the stacking status of goods. When goods are partially obscured or tilted, the vision algorithm can use multi-frame image fusion technology to reconstruct the complete outline, improving recognition robustness.
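One concrete piece of the barcode-reading pipeline mentioned above can be shown in code. After the camera decodes a barcode string, the system can reject misreads by verifying the check digit; the sketch below validates an EAN-13 code (the sample number is a standard published example, not tied to any real warehouse item).

```python
# Sketch: validate a decoded EAN-13 barcode via its check digit, so the
# picking system can reject camera misreads before acting on them.

def ean13_is_valid(code: str) -> bool:
    """True if `code` is a 13-digit string with a correct EAN-13 check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weighted sum of the first 12 digits: weight 1 at even 0-based
    # positions, weight 3 at odd positions.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]

print(ean13_is_valid("4006381333931"))   # valid sample code
print(ean13_is_valid("4006381333930"))   # last digit corrupted
```

A single-digit misread flips the check result, which is exactly the failure mode a picking error would otherwise cause.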

Force sensors provide robots with tactile feedback capabilities. Multi-dimensional force sensors installed at the base of the forks can monitor the force applied when the forks contact the goods in real time. At the moment of picking up goods, the sensors can sense the weight and center of gravity of the goods, automatically adjusting the fork height and tilt angle through force feedback algorithms to ensure smooth gripping. For example, when the center of gravity of the goods shifts, the robot will slightly tilt the forks to balance the force and prevent the goods from slipping. In stacking operations, force sensors can also determine whether goods are accurately placed in the target position by detecting changes in contact force, avoiding damage to the shelving due to positioning errors.
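The center-of-gravity adjustment described above can be sketched as a simple moment balance. The following is an illustrative toy, assuming two force readings (one per fork tine), a hypothetical 0.6 m tine spacing, and a made-up proportional gain; a real controller would be tuned to the vehicle.

```python
# Sketch: estimate the load's centre of gravity from two fork-tine force
# readings and derive a small corrective tilt command (all values illustrative).

def cog_offset(force_left, force_right, tine_spacing=0.6):
    """Lateral CoG offset in metres from the fork centreline (+ = right tine)."""
    total = force_left + force_right
    if total == 0:
        return 0.0
    # Moment balance about the centreline between the two tines.
    return (force_right - force_left) / total * (tine_spacing / 2)

def tilt_command(offset, gain=5.0, limit=2.0):
    """Proportional tilt correction in degrees, clamped to +/- limit."""
    return max(-limit, min(limit, -gain * offset))

left, right = 420.0, 380.0   # newtons; load biased toward the left tine
off = cog_offset(left, right)
cmd = tilt_command(off)
print(f"CoG offset {off * 100:.1f} cm, tilt command {cmd:.2f} deg")
```

The sign convention makes the forks tilt toward the heavier side, which is the balancing behaviour the article describes.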

The Inertial Measurement Unit (IMU) keeps the robot's own localization accurate. By integrating accelerometers, gyroscopes, and magnetometers, the IMU can monitor the robot's acceleration, angular velocity, and attitude changes in real time. Combined with absolute positioning data from LiDAR, the IMU uses a Kalman filter algorithm to achieve multi-sensor data fusion, eliminating accumulated errors and improving positioning stability. Especially in narrow aisles or high-density shelving areas, the IMU's assisted localization function can prevent the robot from colliding with shelves due to positioning drift.
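The IMU-plus-LiDAR fusion can be illustrated with a one-dimensional Kalman filter: dead-reckoned IMU displacement drives the prediction (and drifts), and an occasional absolute LiDAR fix corrects it. All noise values and the drift rate below are made-up illustrative numbers, not parameters of any real robot.

```python
# Minimal 1-D Kalman filter sketch: IMU dead-reckoning accumulates drift,
# a periodic LiDAR absolute fix pulls the estimate back (values illustrative).

def kf_step(x, p, u, q, z=None, r=None):
    """One predict (+ optional update) step for a scalar position state.
    x, p: estimate and variance; u: IMU-integrated displacement; q: process
    noise variance; z, r: LiDAR measurement and its noise variance."""
    x, p = x + u, p + q              # predict using the IMU displacement
    if z is not None:
        k = p / (p + r)              # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
    return x, p

x, p = 0.0, 1.0
for _ in range(10):                  # IMU over-reads each 0.1 m step by 5 %
    x, p = kf_step(x, p, u=0.105, q=0.01)
# True position is 1.0 m; a LiDAR fix corrects the accumulated 5 cm drift.
x, p = kf_step(x, p, u=0.0, q=0.0, z=1.0, r=0.02)
print(f"fused position {x:.4f} m, variance {p:.4f}")
```

After the update the estimate returns to within about a millimetre of the true position and the variance collapses, which is the drift-elimination effect described above.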

Ultrasonic sensors, as the last line of defense for near-field safety, detect nearby obstacles around the robot by emitting ultrasonic waves and receiving reflected waves. Their detection range is typically between 0.1 and 5 meters, covering the blind spots of LiDAR. When an obstacle is detected, the ultrasonic sensor immediately triggers emergency braking to avoid a collision. In dynamic environments, ultrasonic sensors and LiDAR complement each other, jointly constructing a comprehensive safety protection network.
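The emergency-braking trigger reduces to a time-of-flight calculation. The sketch below converts an echo's round-trip time into a distance and compares it with a stop threshold; the 0.3 m threshold is an assumed illustrative value, not a figure from the article.

```python
# Sketch: ultrasonic near-field protection. Convert the echo's round-trip
# time of flight to a one-way distance and trigger an emergency stop below
# a safety threshold (threshold value is an illustrative assumption).

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degC
STOP_DISTANCE = 0.3      # m; assumed emergency-stop threshold

def echo_distance(time_of_flight_s):
    """One-way obstacle distance in metres from the round-trip echo time."""
    return time_of_flight_s * SPEED_OF_SOUND / 2

def should_brake(time_of_flight_s):
    """True if the obstacle is inside the emergency-stop zone."""
    return echo_distance(time_of_flight_s) <= STOP_DISTANCE

print(should_brake(0.001))   # echo after 1 ms  -> ~0.17 m: brake
print(should_brake(0.010))   # echo after 10 ms -> ~1.7 m: clear
```

Note that the division by two is what makes the measurement one-way: the pulse travels to the obstacle and back.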

The fusion and decision-making of multi-sensor data rely on the intelligent algorithms of the central control system. By employing algorithms such as Extended Kalman Filter (EKF) or Particle Filter, the system spatiotemporally aligns point cloud data from LiDAR, image information from visual sensors, force feedback from force sensors, and attitude data from IMUs to generate a unified environmental model. Based on this, a path planning algorithm calculates the optimal picking path according to the location of goods and the warehouse layout, while an obstacle avoidance algorithm adjusts the trajectory in real time to ensure the robot completes its task efficiently and safely.
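The path-planning stage that runs on the fused environment model can be illustrated with a toy planner. The sketch below uses breadth-first search on a small occupancy grid (0 = aisle, 1 = shelf); real systems typically use A* or lattice planners on the fused map, and the grid here is invented for the example.

```python
from collections import deque

# Sketch: shortest-path planning on a warehouse occupancy grid
# (0 = free aisle cell, 1 = shelf). BFS stands in for the A*-style
# planners real systems use; the grid layout is made up.

def plan_path(grid, start, goal):
    """Shortest 4-connected path from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # walk back through predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],    # a shelf row with one aisle gap at column 2
        [0, 0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 3))
print(path)
```

Because BFS explores cells in order of distance, the first path to reach the goal is a shortest one; the real-time obstacle-avoidance layer described above would then replan whenever the fused map marks new cells occupied.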

The fork-picking robot constructs a multi-layered, highly redundant perception system through spatial scanning by LiDAR, feature recognition by visual sensors, tactile feedback from force sensors, attitude monitoring by IMUs, and near-field protection by ultrasonic sensors. The various sensors work synergistically in data acquisition, processing, and decision-making, enabling the robot to achieve centimeter-level positioning accuracy and millisecond-level response speed in complex and dynamic warehouse environments, providing reliable technical support for intelligent logistics.