
New system helps self-driving cars predict pedestrian movement

Source: Xinhua| 2019-02-13 08:25:31|Editor: WX

CHICAGO, Feb. 12 (Xinhua) -- Researchers at the University of Michigan (UM) are teaching self-driving cars to recognize and predict pedestrian movements with greater precision by zeroing in on humans' gait, body symmetry and foot placement.

According to a news release posted on UM's website Tuesday, the researchers captured video snippets of humans in motion from data collected by vehicles through cameras, LiDAR and GPS, and recreated them in a 3D computer simulation.

Based on this data, they created a "biomechanically inspired recurrent neural network" that catalogs human movements, with which they can predict poses and future locations for one or several pedestrians up to about 50 yards from the vehicle, roughly the scale of a city intersection.
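The article does not publish the network itself, but the general idea of a recurrent model that encodes a short history of pedestrian positions and then rolls its hidden state forward to emit future ones can be sketched as below. This is a minimal illustration with untrained random weights and made-up dimensions, not the authors' architecture:

```python
import numpy as np

class TinyRNNPredictor:
    """Minimal recurrent predictor: encodes a history of 2D pedestrian
    positions, then decodes future positions one step at a time.
    Weights are random here; a real system would learn them from data."""

    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0, 0.1, (hidden, 2))       # input -> hidden
        self.Wh = rng.normal(0, 0.1, (hidden, hidden))  # hidden -> hidden
        self.Wo = rng.normal(0, 0.1, (2, hidden))       # hidden -> position delta

    def _step(self, h, pos):
        # One recurrent update of the hidden state given the current position.
        return np.tanh(self.Wx @ pos + self.Wh @ h)

    def predict(self, history, n_future):
        # Encode the observed trajectory into the hidden state.
        h = np.zeros(self.Wh.shape[0])
        for pos in history:
            h = self._step(h, pos)
        # Decode: repeatedly predict a displacement and advance.
        pos = np.array(history[-1], dtype=float)
        out = []
        for _ in range(n_future):
            pos = pos + self.Wo @ h
            h = self._step(h, pos)
            out.append(pos.copy())
        return np.array(out)
```

With a trained model, each decoded step would also carry full-body pose, which is where the "biomechanically inspired" priors enter.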

The results show that the new system improves a driverless vehicle's capacity to recognize what is most likely to happen next.

"The median translation error of our prediction was approximately 10 cm after one second and less than 80 cm after six seconds. All other comparison methods were up to 7 meters off," said Matthew Johnson-Roberson, associate professor in UM's Department of Naval Architecture and Marine Engineering. "We're better at figuring out where a person is going to be."

To rein in the number of options for predicting the next movement, the researchers applied the physical constraints of the human body, such as a person's inability to fly and the fastest possible speed on foot.
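One such constraint can be illustrated as a simple post-processing clamp: no predicted step may imply a speed above what a human can run. The speed cap and the function below are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

MAX_SPEED_MPS = 9.0  # assumed cap near elite sprint speed; illustrative only

def clamp_step(prev_pos, pred_pos, dt):
    """Project a predicted position back inside the reachable disk:
    a pedestrian cannot move farther than MAX_SPEED_MPS * dt in one step."""
    prev_pos = np.asarray(prev_pos, dtype=float)
    step = np.asarray(pred_pos, dtype=float) - prev_pos
    dist = np.linalg.norm(step)
    limit = MAX_SPEED_MPS * dt
    if dist > limit:
        step *= limit / dist  # shrink the step onto the reachable boundary
    return prev_pos + step
```

Pruning physically impossible predictions this way shrinks the hypothesis space the network has to cover.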

"Now, we're training the system to recognize motion and making predictions of not just one single thing, whether it's a stop sign or not, but where that pedestrian's body will be at the next step and the next and the next," said Johnson-Roberson.

Prior work in the area typically looked only at still images. It wasn't really concerned with how people move in three dimensions, said Ram Vasudevan, UM assistant professor of mechanical engineering.

By utilizing video clips that run for several seconds, the UM system can study the first half of the snippet to make its predictions, and then verify the accuracy with the second half.
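That split-half evaluation scheme can be sketched as follows, with a constant-velocity extrapolation standing in for the actual network and the median translation error mirroring the metric quoted above. This is an assumed simplification, not the authors' evaluation code:

```python
import numpy as np

def split_half_error(track):
    """Fit a constant-velocity model on the first half of a trajectory,
    predict the second half, and report the median translation error."""
    track = np.asarray(track, dtype=float)
    n = len(track) // 2
    obs, truth = track[:n], track[n:]
    # Mean per-step velocity over the observed half.
    vel = (obs[-1] - obs[0]) / (len(obs) - 1)
    # Extrapolate forward from the last observed position.
    steps = np.arange(1, len(truth) + 1)[:, None]
    preds = obs[-1] + steps * vel
    errors = np.linalg.norm(preds - truth, axis=1)
    return float(np.median(errors))
```

A pedestrian walking in a straight line at constant speed yields zero error under this baseline; the value of a learned model shows up on turning, stopping and accelerating pedestrians.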

"We are open to diverse applications and exciting interdisciplinary collaboration opportunities, and we hope to create and contribute to a safer, healthier, and more efficient living environment," said UM research engineer Xiaoxiao Du.

The study has been published online in IEEE Robotics and Automation Letters, and will appear in a forthcoming print edition.
