FSD Beta 10.69.1.1 (2022.20.11) Official Tesla Release Notes

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as well as someone driving their own commute, yet in a sufficiently general way that adapts to road changes.
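
The notes don’t describe the fusion mechanism itself. As a rough illustration only, a module of this kind might concatenate a learned embedding of the coarse map attributes with pooled video features; all module names, dimensions, and shapes below are assumptions, not Tesla’s actual architecture:

```python
import torch
import torch.nn as nn

class DeepLaneGuidance(nn.Module):
    """Hypothetical sketch of fusing video-derived features with coarse map
    data (lane counts, lane connectivities). Shapes and names are assumed."""

    def __init__(self, video_dim=256, map_dim=32, fused_dim=256):
        super().__init__()
        # Embed coarse map attributes (e.g. lane count, connectivity flags).
        self.map_encoder = nn.Sequential(
            nn.Linear(map_dim, 64), nn.ReLU(), nn.Linear(64, 64)
        )
        # Project the concatenated video + map features to a common width.
        self.fuse = nn.Sequential(nn.Linear(video_dim + 64, fused_dim), nn.ReLU())

    def forward(self, video_features, map_features):
        # video_features: (batch, video_dim) pooled video-stream features
        # map_features:   (batch, map_dim) coarse map attributes
        map_emb = self.map_encoder(map_features)
        return self.fuse(torch.cat([video_features, map_emb], dim=-1))

guidance = DeepLaneGuidance()
fused = guidance(torch.randn(1, 256), torch.randn(1, 32))  # -> (1, 256)
```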

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. The trajectory planner now independently accounts for the latency from steering commands to actual steering actuation, as well as from accelerator and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive, allowing better tracking and smoothness from the downstream controller while also enabling a more accurate response during harsh maneuvers.
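
One plausible reading of per-channel latency modeling, sketched below: when simulating a candidate plan, each steering command takes effect only after the steering delay, and each accelerator/brake command after its own delay, so the simulated trajectory matches what the vehicle would physically do. The delay values, the kinematic bicycle model, and all constants are illustrative assumptions:

```python
import numpy as np

WHEELBASE = 2.9      # meters (assumed)
STEER_DELAY = 0.10   # hypothetical command-to-actuation delays, seconds
ACCEL_DELAY = 0.15
DT = 0.05            # planner step, assumed

def rollout(state, steer_cmds, accel_cmds):
    """Simulate a candidate plan with independent per-channel actuation
    delays: the steering command issued at step k only takes effect
    STEER_DELAY later, and likewise for accel. state = [x, y, yaw, speed]."""
    steer_lag = int(round(STEER_DELAY / DT))  # 2 steps
    accel_lag = int(round(ACCEL_DELAY / DT))  # 3 steps
    traj = [np.asarray(state, dtype=float)]
    for k in range(len(steer_cmds)):
        # Until a command has worked through its delay, the actuator holds
        # the earliest command (a stand-in for commands already in flight).
        steer = steer_cmds[max(k - steer_lag, 0)]
        accel = accel_cmds[max(k - accel_lag, 0)]
        x, y, yaw, v = traj[-1]
        traj.append(np.array([
            x + v * np.cos(yaw) * DT,
            y + v * np.sin(yaw) * DT,
            yaw + (v / WHEELBASE) * np.tan(steer) * DT,
            v + accel * DT,
        ]))
    return np.stack(traj)

# Example: a 1 s plan at 20 Hz, gentle left arc while accelerating.
traj = rollout([0.0, 0.0, 0.0, 10.0],
               steer_cmds=[0.02] * 20, accel_cmds=[0.5] * 20)
```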

– Improved unprotected left turns with a more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high-speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing an optimizable initial jerk, to mimic the harsh pedal press of a human driver, when required to go in front of high-speed objects. Also improved the lateral profile approaching such safety regions, to allow for a better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region, with better modeling of their future intent.
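
A toy illustration of why an optimizable initial jerk matters: the harder the initial pedal press (higher jerk), the sooner ego clears the crossover region in front of high-speed cross traffic. All numbers here are made up for illustration:

```python
import numpy as np

def time_to_clear(j0, v0=0.0, a_max=4.0, gap=15.0, dt=0.01):
    """Time to traverse a 'gap'-meter median crossover region when
    acceleration ramps up with initial jerk j0 (m/s^3) and saturates
    at a_max. Values are illustrative, not Tesla's."""
    t, v, x, a = 0.0, v0, 0.0, 0.0
    while x < gap:
        a = min(a_max, a + j0 * dt)  # jerk-limited ramp toward a_max
        v += a * dt
        x += v * dt
        t += dt
    return t

# A larger initial jerk mimics a harder pedal press and clears the
# conflict region sooner -- the tradeoff the planner can now optimize.
for j0 in (2.0, 5.0, 10.0):
    print(f"jerk {j0:4.1f} m/s^3 -> clears in {time_to_clear(j0):.2f} s")
```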

– Added control for arbitrary low-speed moving volumes from the Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive, and required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
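
As a minimal sketch of what consuming these outputs might look like, assuming the network emits a per-voxel occupancy probability and a per-voxel velocity vector (thresholds, shapes, and the random placeholder data are invented for illustration):

```python
import numpy as np

# Hypothetical Occupancy Network outputs over a 3D voxel grid:
# occupancy[i, j, k] in [0, 1]; velocity[i, j, k] = (vx, vy, vz) in m/s.
occupancy = np.random.rand(64, 64, 8)
velocity = np.random.randn(64, 64, 8, 3) * 0.5

OCC_THRESH = 0.6   # illustrative occupancy cutoff
SLOW_LIMIT = 2.0   # m/s: "arbitrary low-speed moving volumes"

speed = np.linalg.norm(velocity, axis=-1)
# Voxels that are occupied and slow-moving become control constraints even
# when no cuboid primitive fits the shape and no box detector fires.
slow_moving = (occupancy > OCC_THRESH) & (speed < SLOW_LIMIT)
constraint_voxels = np.argwhere(slow_moving)  # (N, 3) grid indices
```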

– Upgraded the Occupancy Network to use video instead of images from a single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also improved ground truth with semantics-driven outlier rejection, hard example mining, and a 2.4x increase in dataset size.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
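
A sketch of the second stage under common detect-then-regress assumptions: once objects are detected in the dense feature volume, a small head runs once per object, so compute grows with the object count rather than with the spatial grid. Names and dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class PerObjectKinematicsHead(nn.Module):
    """Hypothetical second stage: given one pooled feature vector per
    detected object, regress kinematics (vx, vy, accel, yaw rate).
    Compute is O(number of objects), not O(spatial grid)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 4)
        )

    def forward(self, object_features):   # (num_objects, feat_dim)
        return self.mlp(object_features)  # (num_objects, 4)

# Stage one (not shown) detects objects in the dense feature volume and
# pools a feature vector at each detection; stage two runs only on those.
head = PerObjectKinematicsHead()
kinematics = head(torch.randn(7, 128))  # 7 detected objects -> (7, 4)
```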

– Increased the smoothness of protected right turns by improving the association of traffic lights with slip lanes versus yield signs with slip lanes. This reduces false slowdowns when no relevant objects are present and also improves the yielding position when they are.

– Reduced false slowdowns near crosswalks, through an improved understanding of pedestrian and cyclist intent based on their motion.

– Improved the geometry error of ego-relevant lanes by 34% and of crossing lanes by 21% with a full update of the Vector Lanes neural network. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, the video modules, and the internals of the autoregressive decoder, and by adding a hard attention mechanism that greatly improved the fine positioning of lanes.

– Made the speed profile more comfortable when creeping for visibility, allowing smoother stops when protecting for potentially occluded objects.

– Improved animal recall by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of the presence of traffic controls.

– Improved the accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
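
One way to realize dynamic resolution, sketched with invented parameters: warp the optimizer’s time discretization so knot points cluster around the critical moment, e.g. the predicted stop in front of a crossing object:

```python
import numpy as np

def dynamic_time_grid(horizon=8.0, n=40, focus_t=2.5, boost=4.0, width=1.0):
    """Non-uniform time discretization for trajectory optimization: step
    sizes shrink near focus_t (e.g. the predicted stopping point in a
    crossing scenario) so the optimizer gets finer control there.
    All parameters are illustrative, not from the release notes."""
    t = np.linspace(0.0, horizon, n)
    # Knot density: higher near the critical time, lower elsewhere.
    density = 1.0 + boost * np.exp(-((t - focus_t) / width) ** 2)
    dt = 1.0 / density
    dt *= horizon / dt.sum()  # rescale so the grid spans the full horizon
    return np.concatenate([[0.0], np.cumsum(dt)[:-1]])

grid = dynamic_time_grid()
# Steps near t = 2.5 s come out several times smaller than steps near
# the end of the horizon, concentrating control authority at the stop.
```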

– Increased the recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
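
The loss change can be illustrated as a weighted cross-entropy over decoded lane-topology tokens, with fork tokens upweighted. The vocabulary size, token id, and weight below are placeholders, not Tesla’s values:

```python
import torch
import torch.nn.functional as F

VOCAB_SIZE = 512   # hypothetical lane-topology token vocabulary
FORK_TOKEN = 7     # hypothetical id of the fork token
FORK_WEIGHT = 5.0  # upweight forks: rare but safety-relevant

def lane_token_loss(logits, targets):
    """Cross-entropy over lane-topology tokens with extra loss applied to
    fork tokens, mirroring 'increasing the loss applied to fork tokens'."""
    per_token = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(targets == FORK_TOKEN,
                          torch.full_like(per_token, FORK_WEIGHT),
                          torch.ones_like(per_token))
    return (weights * per_token).mean()

logits = torch.randn(32, VOCAB_SIZE)           # 32 decoded tokens
targets = torch.randint(0, VOCAB_SIZE, (32,))
loss = lane_token_loss(logits, targets)
```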

– Improved velocity error for pedestrians and cyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object detection recall, eliminating 26% of missed detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved the prediction of object future paths in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially at intersections and in cut-in scenarios.
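
A minimal sketch of the idea, assuming a set of candidate future trajectories per object: score each candidate by how consistent its implied initial yaw rate and lateral velocity are with the measured ones, so a hard-turning vehicle makes the turning hypothesis dominate. The Gaussian terms, sigmas, and dictionary keys are assumptions:

```python
def trajectory_log_likelihood(candidate, obs_yaw_rate, obs_lat_vel,
                              sigma_yaw=0.2, sigma_lat=0.5):
    """Score a candidate future trajectory for a tracked object by how well
    its implied initial yaw rate and lateral velocity match the measured
    ones. Gaussian form and sigmas are illustrative assumptions."""
    return (-0.5 * ((candidate["yaw_rate0"] - obs_yaw_rate) / sigma_yaw) ** 2
            - 0.5 * ((candidate["lat_vel0"] - obs_lat_vel) / sigma_lat) ** 2)

# A hard-turning vehicle (high observed yaw rate) now makes the 'turn
# across ego' hypothesis far more likely than 'continue straight'.
straight = {"yaw_rate0": 0.0, "lat_vel0": 0.0}
turning = {"yaw_rate0": 0.5, "lat_vel0": 1.5}
for name, cand in [("straight", straight), ("turning", turning)]:
    print(name, trajectory_log_likelihood(cand,
                                          obs_yaw_rate=0.45,
                                          obs_lat_vel=1.2))
```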

– Improved speed when entering the highway through better handling of upcoming map speed changes, which increases confidence when merging onto the highway.

– Reduced latency when starting from a standstill by taking into account the jerk of the lead vehicle.
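
A toy version of the launch logic: extrapolate the lead vehicle’s speed with a constant-jerk model, so ego can commit to moving as soon as the lead’s acceleration starts ramping rather than waiting for measurable speed. Thresholds are illustrative:

```python
def launch_go_time(lead_speed, lead_accel, lead_jerk,
                   go_speed=0.5, horizon=3.0, dt=0.05):
    """Predict when the lead vehicle reaches go_speed using a constant-jerk
    extrapolation v(t) = v0 + a0*t + 0.5*j0*t^2. Reacting to jerk (a rising
    accelerator) instead of speed alone shaves latency off ego's launch."""
    t = 0.0
    while t <= horizon:
        v = lead_speed + lead_accel * t + 0.5 * lead_jerk * t * t
        if v >= go_speed:
            return t
        t += dt
    return None  # lead not predicted to move within the horizon

# Lead is still at rest, but its acceleration is ramping (positive jerk):
print(launch_go_time(0.0, 0.2, 1.5))  # ~0.7 s, before any speed is visible
```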

– Enabled faster identification of red light runners by assessing their current kinematic state against their expected braking profile.
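
A minimal sketch of that kinematic check, with illustrative thresholds: if stopping at the line would demand more deceleration than a plausible braking profile allows, flag the vehicle early as a likely red-light runner:

```python
def likely_red_light_runner(dist_to_line, speed, max_expected_decel=6.0):
    """Flag a vehicle as a likely red-light runner if stopping at the line
    would require more deceleration than its expected braking profile
    (threshold in m/s^2 is an illustrative assumption)."""
    if dist_to_line <= 0.0:
        return speed > 0.5  # already past the line and still moving
    required_decel = speed ** 2 / (2.0 * dist_to_line)  # from v^2 = 2*a*d
    return required_decel > max_expected_decel

# 20 m/s (~45 mph) at 25 m out needs 8 m/s^2 -- flagged before the line.
print(likely_red_light_runner(25.0, 20.0))  # True
```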

Tap the “Record Video” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short, VIN-associated Autopilot snapshot with Tesla’s engineering team to help make improvements to FSD. You will not be able to view the clip.
