A guide to full-stack development of autonomous driving simulation systems based on CARLA and PyTorch


Introduction: the value of autonomous driving simulation and the choice of technology stack

Autonomous driving, one of the most challenging research directions in AI, requires a complete pipeline of "simulation testing - closed-loop verification - real-vehicle deployment". Within that pipeline, a high-fidelity simulation platform provides a safe and efficient environment for algorithm iteration. This article builds an end-to-end autonomous driving system on CARLA (an open-source autonomous driving simulator) and the PyTorch framework, focusing on:

  1. Simulation environment configuration and sensor integration
  2. Expert driving data collection pipeline
  3. Imitation learning model training framework
  4. Safety evaluation metric system
  5. Production-grade model optimization strategies

1. Building the CARLA simulation environment (with code implementation)

1.1 Environment dependency installation

# Create a virtual environment
python -m venv carla_env
source carla_env/bin/activate

# Install core dependencies
pip install carla pygame numpy matplotlib
pip install torch torchvision tensorboard

1.2 Start the CARLA server

# server_launcher.py
import os

# Launch the CARLA server on the Town01 map (Linux launcher script)
os.system('./CarlaUE4.sh Town01 -windowed -ResX=800 -ResY=600')

1.3 Client connection and basic control

# client_connector.py
import carla

def connect_carla():
    client = carla.Client('localhost', 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    return world

def spawn_vehicle(world):
    blueprint = world.get_blueprint_library().find('vehicle.tesla.model3')
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(blueprint, spawn_point)
    return vehicle

# Usage example
world = connect_carla()
vehicle = spawn_vehicle(world)
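For reproducible data collection it is often worth switching the simulator to synchronous mode with a fixed time step; a minimal sketch using CARLA's settings API (the 20 Hz step size is an arbitrary choice):

# Optional: deterministic stepping for data collection
settings = world.get_settings()
settings.synchronous_mode = True       # the client drives the simulation clock
settings.fixed_delta_seconds = 0.05    # 20 simulation steps per second
world.apply_settings(settings)
# Each loop iteration must then advance the world with world.tick()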

1.4 Sensor Configuration (RGB Camera + IMU)

# sensor_setup.py
import carla

def attach_sensors(world, vehicle):
    # RGB camera configuration
    cam_bp = world.get_blueprint_library().find('sensor.camera.rgb')
    cam_bp.set_attribute('image_size_x', '800')
    cam_bp.set_attribute('image_size_y', '600')
    cam_bp.set_attribute('fov', '110')

    # IMU configuration
    imu_bp = world.get_blueprint_library().find('sensor.other.imu')

    # Spawn sensors attached to the vehicle (camera mounted above the hood)
    cam_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
    cam = world.spawn_actor(cam_bp, cam_transform, attach_to=vehicle)
    imu = world.spawn_actor(imu_bp, carla.Transform(), attach_to=vehicle)

    # Register callbacks for incoming sensor data
    cam.listen(lambda data: process_image(data))
    imu.listen(lambda data: process_imu(data))
    return cam, imu
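The process_image callback referenced above is left undefined; a minimal sketch that unpacks a carla.Image into a NumPy array (the recorder instance anticipates the SensorDataRecorder introduced in section 2.1 below):

import numpy as np

def process_image(image):
    # carla.Image.raw_data is a flat BGRA byte buffer
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))[:, :, :3]  # drop alpha -> BGR
    recorder.record_image(array)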

2. Expert driving data acquisition system

2.1 Data logger design

# data_recorder.py
import numpy as np
from queue import Queue

class SensorDataRecorder:
    def __init__(self):
        self.image_queue = Queue(maxsize=100)
        self.control_queue = Queue(maxsize=100)
        self.sync_counter = 0

    def record_image(self, image):
        self.image_queue.put(image)
        self.sync_counter += 1

    def record_control(self, control):
        self.control_queue.put(control)

    def save_episode(self, episode_id):
        # Drain both queues and persist the episode as a compressed array file
        images = []
        controls = []
        while not self.image_queue.empty():
            images.append(self.image_queue.get())
        while not self.control_queue.empty():
            controls.append(self.control_queue.get())

        np.savez(f'expert_data/episode_{episode_id}.npz',
                 images=np.array(images),
                 controls=np.array(controls))
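A sketch of how the recorder might be wired into a collection session (the per-tick call sites are assumptions):

recorder = SensorDataRecorder()

# In the camera callback: recorder.record_image(frame)
# Once per simulation tick, pair the frame with the expert's command:
recorder.record_control(vehicle.get_control())

# At the end of a drive:
recorder.save_episode(episode_id=0)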

2.2 Expert control signal acquisition

# expert_controller.py
import pygame
from pygame.locals import K_UP, K_DOWN, K_LEFT, K_RIGHT

def manual_control(vehicle):
    # Called once per simulation tick; returns the expert's control command
    control = vehicle.get_control()
    # Expert control logic (example: keyboard control)
    keys = pygame.key.get_pressed()
    control.throttle = 0.5 * keys[K_UP]
    control.brake = 1.0 * keys[K_DOWN]
    control.steer = 1.0 * (keys[K_RIGHT] - keys[K_LEFT])  # steer is clamped to [-1, 1]
    return control
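pygame only reports key state once a window exists and its event queue is pumped; a minimal driving loop sketch around manual_control:

import pygame

pygame.init()
pygame.display.set_mode((400, 300))  # a window is required to receive key events

while True:
    pygame.event.pump()  # refresh pygame's internal input state
    vehicle.apply_control(manual_control(vehicle))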

2.3 Data augmentation strategy

# data_augmentation.py
import random

import cv2
import numpy as np

def augment_image(image):
    # Random brightness adjustment in HSV space
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * random.uniform(0.8, 1.2), 0, 255)

    # Random rotation (±5 degrees) about the image center
    angle = random.uniform(-5, 5)
    M = cv2.getRotationMatrix2D((400, 300), angle, 1)
    augmented = cv2.warpAffine(hsv, M, (800, 600))

    return cv2.cvtColor(augmented, cv2.COLOR_HSV2BGR)
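An offline augmentation pass over the recorded episodes might look like this (the paths follow the project structure in section 6):

import glob
import numpy as np

# Hypothetical offline pass: augment every frame of every recorded episode
for path in glob.glob('expert_data/*.npz'):
    data = np.load(path)
    aug = np.stack([augment_image(img) for img in data['images']])
    np.savez(path.replace('expert_data', 'augmented_data'),
             images=aug, controls=data['controls'])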

3. Imitation learning model construction (PyTorch implementation)

3.1 Network architecture design

# model.py
import torch
import torch.nn as nn

class AutonomousDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2),
            nn.ReLU(),
            nn.Conv2d(24, 32, 5, stride=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, 3),
            nn.ReLU(),
            # Pool to a fixed 8x8 grid so the flatten size is independent of input resolution
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten()
        )

        self.fc_layers = nn.Sequential(
            nn.Linear(64 * 8 * 8, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, 3)  # throttle, brake, steer
        )

    def forward(self, x):
        x = self.conv_layers(x)
        return self.fc_layers(x)
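A quick sanity check of the architecture with a dummy 800×600 frame:

# One forward pass to confirm the output shape
model = AutonomousDriver()
dummy = torch.randn(1, 3, 600, 800)
print(model(dummy).shape)  # torch.Size([1, 3])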

3.2 Training framework design

# train.py
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def train_model(model, dataloader, epochs=50):
    model = model.to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for epoch in range(epochs):
        total_loss = 0
        for images, targets in dataloader:
            images = images.to(device)
            targets = targets.to(device)

            outputs = model(images)
            loss = criterion(outputs, targets)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            total_loss += loss.item()

        print(f'Epoch {epoch+1}, Loss: {total_loss/len(dataloader):.4f}')
        torch.save(model.state_dict(), f'checkpoints/epoch_{epoch}.pth')
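Since tensorboard is already in the dependency list, the loss curve can be logged alongside the print statement; a minimal sketch (the run directory name is an arbitrary choice):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('runs/imitation')
# Inside the epoch loop, after computing the average loss:
writer.add_scalar('train/loss', total_loss / len(dataloader), epoch)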

3.3 Data loader implementation

# dataset.py
import glob
import numpy as np
import torch
from torch.utils.data import Dataset

class DrivingDataset(Dataset):
    def __init__(self, data_dir, transform=None):
        self.files = glob.glob(f'{data_dir}/*.npz')
        self.transform = transform

    def __len__(self):
        return len(self.files) * 100  # assume each episode holds 100 frames

    def __getitem__(self, idx):
        file_idx = idx // 100
        frame_idx = idx % 100
        data = np.load(self.files[file_idx])
        image = data['images'][frame_idx].transpose(2, 0, 1)  # HWC to CHW
        control = data['controls'][frame_idx]

        if self.transform:
            image = self.transform(image)

        return torch.tensor(image, dtype=torch.float32) / 255.0, \
               torch.tensor(control, dtype=torch.float32)
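Wiring the dataset into training is then one DataLoader away (batch size and worker count are arbitrary choices):

from torch.utils.data import DataLoader

dataset = DrivingDataset('datasets/expert_data')
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)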

4. Safety assessment and model optimization

4.1 Safety metric definitions

  1. Collision rate: number of collisions per unit distance
  2. Route completion: fraction of episodes that successfully reach the destination
  3. Traffic violation rate: count of violations such as running red lights and crossing lane lines
  4. Control smoothness: rate of change of throttle/brake/steer commands (see the sketch after this list)
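As an illustration of metric 4, a minimal sketch that scores smoothness from a recorded control sequence (the (T, 3) layout matches the recorder above):

import numpy as np

def control_smoothness(controls):
    # controls: (T, 3) array of [throttle, brake, steer] per frame
    # Mean absolute frame-to-frame change; lower values mean smoother control
    return np.abs(np.diff(controls, axis=0)).mean()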

4.2 Evaluation framework implementation

# evaluate.py
def evaluate_model(model, episodes=10):
    metrics = {
        'collision_rate': 0,
        'route_completion': 0,
        'traffic_violations': 0,
        'control_smoothness': 0
    }

    for _ in range(episodes):
        vehicle = spawn_vehicle(world)
        while True:
            # Obtain sensor data
            image = get_camera_image()
            control = model(image)

            # Execute control
            vehicle.apply_control(control)

            # Safety checks (helpers update the metrics dict in place)
            check_collisions(vehicle, metrics)
            check_traffic_lights(vehicle, metrics)

            # Termination condition
            if has_reached_destination(vehicle):
                metrics['route_completion'] += 1
                break

    return calculate_safety_scores(metrics)
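The calculate_safety_scores helper is left undefined above; a hypothetical sketch of how the raw counters might be normalized (episode count and distance tracking are assumptions):

def calculate_safety_scores(metrics, episodes=10, total_km=1.0):
    # Hypothetical normalization; a real evaluation would track distance per episode
    return {
        'collisions_per_km': metrics['collision_rate'] / total_km,
        'route_completion': metrics['route_completion'] / episodes,
        'violations_per_km': metrics['traffic_violations'] / total_km,
        'control_smoothness': metrics['control_smoothness'],
    }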

4.3 Model optimization strategy

  1. Quantization-aware training
# quantization.py
import torch

# Eager-mode quantization-aware training targeting the x86 'fbgemm' backend
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)
# ... fine-tune the prepared model for a few epochs ...
torch.quantization.convert(model, inplace=True)
  2. Control signal smoothing
# control_smoothing.py
class ControlFilter:
    def __init__(self, alpha=0.8):
        self.prev_control = None
        self.alpha = alpha

    def smooth(self, current_control):
        # Exponential moving average: alpha weights the previous command
        if self.prev_control is None:
            self.prev_control = current_control
            return current_control

        smoothed = self.alpha * self.prev_control + (1 - self.alpha) * current_control
        self.prev_control = smoothed
        return smoothed

5. Production environment deployment plan

5.1 Model Export and Loading

# model_export.py
import torch

def export_model(model, output_path):
    # Trace with an example input of the deployed resolution (1x3x600x800)
    traced_model = torch.jit.trace(model, torch.randn(1, 3, 600, 800))
    traced_model.save(output_path)

# Loading example
loaded_model = torch.jit.load('deployed_model.pt')
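A quick smoke test of the exported artifact, using a dummy input of the traced shape (real inputs are preprocessed camera frames):

loaded_model.eval()
with torch.no_grad():
    control = loaded_model(torch.randn(1, 3, 600, 800))
print(control.shape)  # torch.Size([1, 3])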

5.2 CARLA Integrated Deployment

# inference_loop.py
import carla
import torch

def autonomous_driving_loop():
    model = load_deployed_model()
    vehicle = spawn_vehicle(world)

    while True:
        # Acquire sensor data
        image_data = get_camera_image()
        preprocessed = preprocess_image(image_data)

        # Model inference
        with torch.no_grad():
            control = model(preprocessed).squeeze(0).numpy()  # [throttle, brake, steer]

        # Post-process the control signal
        smoothed = control_filter.smooth(control)

        # Execute control
        throttle, brake, steer = smoothed
        vehicle.apply_control(carla.VehicleControl(
            throttle=float(throttle), brake=float(brake), steer=float(steer)))

        # Safety monitoring
        if detect_critical_situation():
            trigger_emergency_stop()

5.3 Real-time optimization techniques

  1. Accelerate inference with TensorRT
  2. Use multi-threaded asynchronous processing
  3. Implement dynamic frame-rate adjustment (see the sketch after this list)
  4. Optimize critical-path code with Cython
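As an illustration of item 3, a hypothetical sketch of dynamic frame-rate adjustment (the step callable and rate thresholds are assumptions):

import time

def adaptive_loop(step, target_fps=20, min_fps=5):
    # Degrade the control rate gracefully when inference falls behind
    period = 1.0 / target_fps
    while True:
        start = time.time()
        step()                                   # one sense-infer-act iteration
        elapsed = time.time() - start
        if elapsed > period and target_fps > min_fps:
            target_fps -= 1                      # fall back to a lower rate
            period = 1.0 / target_fps
        else:
            time.sleep(max(0.0, period - elapsed))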

6. Complete project structure

autonomous_driving_carla/
├── datasets/
│   ├── expert_data/
│   └── augmented_data/
├── models/
│   ├── checkpoints/
│   └── deployed_model.pt
├── src/
│   ├── client_connector.py
│   ├── data_collection.py
│   ├── model.py
│   ├── train.py
│   ├── evaluate.py
│   └── inference_loop.py
├── utils/
│   ├── data_augmentation.py
│   └── control_smoothing.py
└── README.md

Conclusion: The leap from simulation to reality

This article has walked through the full development process of an autonomous driving system on the CARLA + PyTorch stack. Key takeaways:

  1. The simulation environment must accurately reproduce real-world physics and traffic scenarios
  2. Imitation learning depends on high-quality expert data, and data augmentation can significantly improve model generalization
  3. Safety assessment should build a multi-dimensional metric system covering both functional safety and safety of the intended functionality (SOTIF)
  4. Production deployment must balance model accuracy against real-time performance; techniques such as quantization and pruning are crucial

For developers, mastering this tutorial means not only being able to build an autonomous driving prototype quickly, but also gaining a deep understanding of how AI models are engineered into complex systems. From here, advanced directions such as reinforcement learning and multimodal fusion offer promising paths to keep pushing autonomous driving technology forward.