Internet of Everything - AI Edge Computing Enabling Tencent Autonomous Vehicles

#ArtificialIntelligence

To seize the development opportunities of intelligent transportation and explore the smart urban transportation systems of the future, Tencent's autonomous vehicles officially launched this March in the Shenzhen Intelligent Connected Transportation Demonstration Zone. Based on intelligent connected vehicle testing and guided by the construction of an autonomous driving ecosystem and a future transportation system, the project focuses on the five key elements of "vehicle, road, cloud, network, and map." By integrating new-generation technologies such as 5G, autonomous driving, and artificial intelligence, it aims to establish a comprehensive ecosystem for the research, development, and testing of future intelligent connected vehicles and to build an internationally leading test base for intelligent connected transportation systems.

This autonomous driving demonstration project was jointly created by the Shenzhen Pingshan District Government, Shenzhen Traffic Control, and Tencent. As one of Tencent's key suppliers in the edge computing field, [Supplier Name, implied] provided leading edge computing product support for the Shenzhen Intelligent Connected Transportation testing project.

Currently, with continuous breakthroughs in and applications of technologies such as 5G, artificial intelligence, and big data, autonomous driving, one of the main application scenarios of artificial intelligence, has become a key focus area for many enterprises. In this era of rapid development in autonomous driving technology, efficient collaboration between intelligent transportation and Vehicle-to-Everything (V2X) is the cornerstone of safe autonomous driving. Traditional driving relies on the human brain and eyes on the road, while autonomous vehicles depend on the collaborative work of artificial intelligence, edge computing, radar, monitoring equipment, and global positioning systems, which together help the vehicle react correctly and in time.

Autonomous vehicles perceive the surrounding traffic environment through onboard sensors and use the perceived information about roads, vehicle positions, and obstacles to determine and execute vehicle steering, speed, and travel routes. To meet these stringent driving conditions, the key is to select an industrial-grade edge computing product that can rapidly process data from sensing units, thereby effectively achieving the goal of safe autonomous driving.
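The loop described above, turning perceived road, position, and obstacle information into steering and speed decisions, can be sketched in a few lines of Python. This is purely illustrative: the `Perception` and `Command` classes, the 50 m headway policy, and the steering gain are assumptions for the sketch, not the actual software running on the BRAV hardware.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Fused snapshot of the environment from onboard sensors (illustrative)."""
    obstacle_distance_m: float   # nearest obstacle ahead, from LiDAR/radar fusion
    lane_offset_m: float         # lateral offset from lane center, from cameras
    ego_speed_mps: float         # vehicle speed from odometry / GNSS

@dataclass
class Command:
    """Actuation request sent toward the drive-by-wire layer."""
    target_speed_mps: float
    steering_rad: float

def decide(p: Perception, cruise_mps: float = 13.9) -> Command:
    """Map perception to speed/steering: slow near obstacles, steer to lane center."""
    # Simple proportional speed policy: full cruise speed with >50 m headway,
    # linear scale-down below that, full stop inside 5 m.
    if p.obstacle_distance_m <= 5.0:
        speed = 0.0
    else:
        speed = cruise_mps * min(1.0, (p.obstacle_distance_m - 5.0) / 45.0)
    # Proportional steering back toward lane center (gain is arbitrary).
    steering = -0.1 * p.lane_offset_m
    return Command(target_speed_mps=speed, steering_rad=steering)

cmd = decide(Perception(obstacle_distance_m=20.0, lane_offset_m=0.5, ego_speed_mps=10.0))
print(cmd)
```

In a real stack this decision step runs continuously against freshly fused sensor data; here a single call stands in for one iteration of that loop.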

The BRAV-7520, designed specifically for AI edge computing, provides strong support for Tencent's autonomous vehicles. The hardware system of autonomous driving is divided into three main parts: perception, decision-making, and control, with positioning, mapping, and prediction as auxiliary parts. The specific hardware composition is shown in the C-V2X Vehicle-Road-Cloud Solution Architecture diagram.

C-V2X Vehicle-Road-Cloud Solution Architecture

The vehicle perceives its own status, such as speed, steering angle, roll, pitch, and heading, along with road-environment data from LiDAR, cameras, and millimeter-wave radar, plus positioning information. The C-V2X roadside units (RSUs) and cloud platform provide beyond-line-of-sight (BLOS) capability: when a vehicle is on the road, its onboard sensors cannot detect conditions beyond their range, so C-V2X devices send and receive the relevant information, allowing the vehicle to learn about traffic conditions ahead.
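The way V2X fills in the beyond-line-of-sight gap can be sketched as merging onboard detections with RSU-relayed reports that lie past the sensor horizon. Everything in this sketch (the `Hazard` class, the 150 m sensor range, the sample reports) is a hypothetical illustration, not an actual C-V2X message format.

```python
from dataclasses import dataclass

SENSOR_RANGE_M = 150.0  # assumed onboard sensing horizon (illustrative)

@dataclass
class Hazard:
    """One reported hazard, from either onboard sensors or the RSU/cloud."""
    distance_ahead_m: float
    description: str

def merge_awareness(onboard: list, v2x: list) -> list:
    """Combine onboard detections with RSU/cloud reports. V2X contributes the
    beyond-line-of-sight portion; reports inside the sensor horizon are assumed
    to duplicate what the vehicle already sees and are dropped."""
    blos = [h for h in v2x if h.distance_ahead_m > SENSOR_RANGE_M]
    return sorted(onboard + blos, key=lambda h: h.distance_ahead_m)

onboard = [Hazard(40.0, "pedestrian at crossing")]
v2x = [
    Hazard(600.0, "accident reported by RSU"),   # invisible to onboard sensors
    Hazard(40.0, "duplicate of onboard detection"),
]
for h in merge_awareness(onboard, v2x):
    print(f"{h.distance_ahead_m:>6.0f} m  {h.description}")
```

The design point is simply that the fused picture extends past the physical sensor range; real deployments use standardized V2X message sets rather than this ad-hoc structure.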

Physical diagram of Edge Computing Product 7302 application in MEC

Currently, the Mobile Data Center (MDC) for L4 and L5 autonomous driving uses a dual CPU+GPU architecture, while the On-Board Unit (OBU) communicator employs an FPGA architecture. The MDC (BRAV-7520-WP) connects to onboard sensors such as LiDAR, millimeter-wave radar, and cameras via a switch, performing deep-learning inference for structured data fusion. Directly connected to the OBU communicator, it perceives roadside and cloud information from the upstream C-V2X network; downstream, it connects over the CAN bus to the vehicle's drive-by-wire system, which enables automatic control of braking, steering, engine start/stop, the transmission, and door/window functions, as well as audio, visual, and haptic warning systems, providing strong support for highly autonomous driving.
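Commands to a drive-by-wire system travel over the CAN bus as fixed-layout frames. The sketch below packs a hypothetical steering request into an 8-byte CAN payload; the CAN IDs, signal scaling, and checksum scheme are invented for illustration, since real layouts come from the vehicle manufacturer's DBC definitions.

```python
import struct

# Hypothetical CAN IDs for the drive-by-wire ECU; real IDs come from the
# vehicle's DBC file and are not public.
STEERING_CMD_ID = 0x2E4
BRAKE_CMD_ID = 0x2E5

def encode_steering_cmd(angle_deg: float, counter: int) -> bytes:
    """Pack a steering request into an 8-byte CAN payload:
    signed angle in 0.1-degree units (big-endian int16), a rolling counter,
    four padding bytes, then a checksum byte equal to the low 8 bits of the
    sum of the first seven bytes."""
    raw = int(round(angle_deg * 10))
    payload = struct.pack(">hB4x", raw, counter & 0xFF)  # 2 + 1 + 4 = 7 bytes
    checksum = sum(payload) & 0xFF
    return payload + bytes([checksum])

frame = encode_steering_cmd(-12.5, counter=3)
print(frame.hex())  # 8-byte payload ready to be placed on the bus
```

With the python-can library, such a payload would then be transmitted as `can.Message(arbitration_id=STEERING_CMD_ID, data=frame)` via `bus.send(...)`; that step is omitted here because it requires real CAN hardware.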

Real-world application of Edge Computing Product 7520-WP in MDC

Edge Computing Product Features: 7302 and 7520-WP

7302:

- Independent air-duct cooling design for CPU and GPU
- Intel™ Kaby Lake-S/Skylake-S Core i3/i5/i7 CPU
- 2× DDR4 2400/2133 MHz SODIMM memory, up to 32 GB
- NVIDIA Turing-architecture MXM GPU module for deep learning, enabling LiDAR-vision fusion data structuring
- 1× DP, 1× HDMI, and 1× VGA for triple integrated displays; 3× DP and 1× HDMI for quad discrete displays
- 3/7× LAN, 4/6× USB 3.0, 3× USB 2.0, 4× COM, 16× DIO, audio
- 2× Mini PCIe (PCIe + USB), 1× M.2 2242 B-Key
- 1× mSATA, 1/2× 2.5" SATA3 drive bays; supports RAID 0, 1
- Intel vPro support for remote management, enhancing maintainability
- High energy-efficiency architecture with optimized, innovative thermal and structural design, yielding a size suited to more installation scenarios
- DC 6–48 V wide-voltage input with short-circuit, overvoltage, overcurrent, and undervoltage protection

7520-WP:

- Intel™ Xeon® E or 9th/8th Gen Core™ processor
- Ultra HD dual 4K; triple independent displays (2× DP, 1× VGA)
- WP model supports an RTX 3080 high-performance GPU for deep learning, enabling LiDAR-vision fusion data structuring
- CAN bus for interfacing with the vehicle's drive-by-wire system
- 3× Gigabit LAN (iAMT capable); optional multi-channel 10 Gigabit fiber-optic cards; dual standard PCIe slots supporting various high-speed expansion cards
- Multi-channel storage: 2× SATA 3.0, 1× M.2 M-Key with NVMe support
- Fanless CPU cooling with efficient air-cooled thermal design for AI/GPU cards
- Shock-absorbing overall structure and mounting, suitable for in-vehicle environments
- Maximum total output of 600 W; provides single-channel 350 W power for a GPU card or dual 75 W AI accelerator cards
- DC 9–55 V wide-voltage input suitable for in-vehicle battery power