3D Point Cloud Annotation Services

HabileData delivers production-grade 3D point cloud annotation services for autonomous vehicles, robotics, and spatial AI. Our LiDAR annotation team works with KITTI, nuScenes, Waymo, and Argoverse formats, achieving 85%+ 3D IoU on bounding boxes, 88%+ mean IoU on segmentation, and 82%+ MOTA on object tracking across sequential scan frames.

Get started with a free pilot »
85%+
3D Bounding Box IoU
88%+
Segmentation Mean IoU
82%+
MOTA Tracking Score
4+
AV Dataset Formats
70%
Lower Cost vs In-House
72 hrs
Scale-Up Time

Scale AI Development with Accurate LiDAR and 3D Point Cloud Annotation

Most 3D point cloud annotation errors are invisible in 2D rendered views. They only surface when trained models fail to predict correct distances in deployment.

When you outsource 3D point cloud annotation services to a provider without LiDAR labeling expertise, you get cuboids that fit the rendered view but misrepresent true object dimensions. HabileData is an annotation company with teams trained on sensor physics, point cloud rendering, and the specific failure modes of three-dimensional annotation.

01

Cuboids that fit the rendered view but misrepresent true Z-axis dimensions

A poorly trained annotator places a 3D cuboid that fits what they see in the rendered view but misrepresents the object's true dimensions along the Z-axis. Sensor beam angles also create artefacts that look like low obstacles but are not. These errors are invisible in 2D views – they only surface when your model fails in deployment.

  • Z-axis dimension accuracy
  • Sensor beam artefacts
  • Ground plane correction
02

Velodyne VLP-16 and Ouster OS2 need different guidelines – calibrated per sensor

Every project begins with a calibration batch on your specific sensor data – because guidelines for a 16-beam Velodyne VLP-16 at 10 Hz differ from a 128-beam Ouster OS2 at 20 Hz. We measure 3D IoU per batch and enforce ±2° heading direction accuracy on all AV safety-critical projects.

  • Sensor-specific calibration
  • 3D IoU measured per batch
  • ±2° heading accuracy
03

Cross-modal object identity – same ID in camera frame 42 and its LiDAR cluster

For sensor fusion projects combining synchronised LiDAR and camera data, we maintain cross-modal object identity. The vehicle in frame 42 of the camera feed is annotated with the same object ID and consistent 3D dimensions as its corresponding cluster in the LiDAR scan from the same timestamp.

  • LiDAR + camera fusion
  • Cross-modal ID consistency
  • Timestamp-synchronised
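A cross-modal consistency check of this kind can be sketched as follows, assuming (hypothetically) that each modality's annotations are keyed by timestamp and object ID, with (length, width, height) dimensions in metres. The function name and data layout are illustrative, not our delivery tooling:

```python
def check_cross_modal_ids(camera_anns, lidar_anns, tol=0.05):
    """Verify every object ID at each timestamp exists in both modalities
    with matching 3D dimensions (within tol metres).

    Both inputs are hypothetical dicts: {timestamp: {object_id: (l, w, h)}}.
    Returns a list of (timestamp, object_id, reason) discrepancies."""
    errors = []
    for ts, cam_objs in camera_anns.items():
        lidar_objs = lidar_anns.get(ts, {})
        for oid, dims in cam_objs.items():
            if oid not in lidar_objs:
                errors.append((ts, oid, "missing in LiDAR"))
            elif any(abs(a - b) > tol for a, b in zip(dims, lidar_objs[oid])):
                errors.append((ts, oid, "dimension mismatch"))
    return errors
```

An empty result means every camera-frame object has a timestamp-matched LiDAR counterpart with consistent dimensions.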
04

KITTI, nuScenes, Waymo, or custom JSON – configured to your pipeline specification

We deliver in KITTI format for AV perception projects, nuScenes for multi-sensor datasets, Waymo Open Dataset format for Waymo-architecture models, and custom JSON schema for proprietary pipelines. If your pipeline uses a format not listed here, provide the specification and we configure our export to match.

  • KITTI · nuScenes · Waymo
  • Custom JSON schema
  • Spec-matched export
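For reference, a KITTI label file stores one object per whitespace-separated line: type, truncation, occlusion, alpha, 2D bbox, 3D dimensions (h, w, l), 3D location in the camera frame, and rotation_y. A minimal formatter sketch – the helper name and placeholder defaults are illustrative; a real export computes truncation, occlusion, and alpha from the data:

```python
def kitti_label_line(obj_type, bbox_2d, dims_hwl, loc_xyz, rotation_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    """Format one object as a 15-field KITTI label-file line.
    bbox_2d = (left, top, right, bottom) pixels; dims_hwl in metres;
    loc_xyz in camera coordinates; rotation_y in radians."""
    fields = [obj_type, f"{truncated:.2f}", str(occluded), f"{alpha:.2f}"]
    fields += [f"{v:.2f}" for v in bbox_2d]   # 2D bounding box
    fields += [f"{v:.2f}" for v in dims_hwl]  # height, width, length
    fields += [f"{v:.2f}" for v in loc_xyz]   # x, y, z (camera frame)
    fields.append(f"{rotation_y:.2f}")
    return " ".join(fields)
```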
Talk to Our LiDAR Annotation Specialists Today »

3D Point Cloud Annotation Services We Offer

We provide end-to-end 3D point cloud annotation across all major LiDAR data types, sensor platforms, and industry formats. Each service below is delivered with format-specific annotation protocols, three-stage QA validation, and measurable quality benchmarks reported at delivery.

3D Bounding Box Annotation for Object Detection

Three-dimensional bounding boxes capturing centroid position (x, y, z), dimensions (length, width, height), and orientation (yaw, pitch, roll). 85%+ 3D IoU across standard AV classes. Output in KITTI, nuScenes, Waymo, and custom formats with format-specific taxonomy applied.
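To illustrate how 3D IoU is computed, the sketch below handles the axis-aligned case only; production AV evaluation also accounts for cuboid yaw. The function name and (cx, cy, cz, l, w, h) tuple layout are assumptions for this example:

```python
def aabb_iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes,
    each given as (cx, cy, cz, length, width, height)."""
    def bounds(box):
        cx, cy, cz, l, w, h = box
        return (cx - l / 2, cx + l / 2,
                cy - w / 2, cy + w / 2,
                cz - h / 2, cz + h / 2)

    ax0, ax1, ay0, ay1, az0, az1 = bounds(a)
    bx0, bx1, by0, by1, bz0, bz1 = bounds(b)

    # Overlap along each axis (zero if the boxes do not intersect).
    dx = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    dy = max(0.0, min(ay1, by1) - max(ay0, by0))
    dz = max(0.0, min(az1, bz1) - max(az0, bz0))

    inter = dx * dy * dz
    vol_a = a[3] * a[4] * a[5]
    vol_b = b[3] * b[4] * b[5]
    return inter / (vol_a + vol_b - inter) if inter > 0 else 0.0
```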

Point Cloud Semantic Segmentation

Point-level class labeling for full scene understanding covering road, sidewalk, vegetation, building, vehicle sub-types, pedestrian, cyclist, and traffic infrastructure. 88%+ mean IoU. Compatible with SemanticKITTI, nuScenes-lidarseg, and custom schema.
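Mean IoU for point-level segmentation is per-class intersection-over-union averaged over the classes present in the ground truth. A minimal pure-Python sketch, assuming labels arrive as flat per-point sequences:

```python
def mean_iou(pred, gt, classes):
    """Per-class IoU averaged over classes present in the ground truth.
    pred and gt are equal-length sequences of per-point class labels."""
    ious = []
    for c in classes:
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and GT
            ious.append(inter / union)
    return sum(ious) / len(ious) if ious else 0.0
```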

Multi-Frame Object Tracking Annotation

Consistent object track IDs across sequential LiDAR scans through occlusion events, sparse-range transitions, and entry/exit scenarios. 82%+ MOTA across standard AV drive sequences.
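MOTA folds three tracking error types into one score. A minimal sketch of the standard CLEAR MOT formula, with hypothetical parameter names:

```python
def mota(false_negatives, false_positives, id_switches, num_gt_objects):
    """Multi-Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT,
    with counts summed over all frames of the sequence.
    Can go negative when total errors exceed the ground-truth count."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt_objects
```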

LiDAR + Camera Fusion Annotation

Aligned 3D bounding boxes in point cloud space with corresponding 2D boxes in camera frames. Supports nuScenes (6 cameras + LiDAR), Waymo (5 cameras + LiDAR), and Argoverse (7 ring + 2 stereo cameras + LiDAR). Validated through 3D-to-2D projection checks.
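A projection check of this kind can be sketched with a pinhole camera model: project the 3D box corners into the image and take their enclosing rectangle, which can then be compared against the annotated 2D box. Function names are illustrative; a real check uses the dataset's full calibration (extrinsics, distortion):

```python
import numpy as np

def project_points(points_cam, K):
    """Project Nx3 camera-frame points to pixels with a 3x3 intrinsic matrix K."""
    uvw = (np.asarray(K, float) @ np.asarray(points_cam, float).T).T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

def projected_rect(corners_cam, K):
    """Axis-aligned (left, top, right, bottom) rectangle enclosing the
    projected 3D box corners, for comparison against the annotated 2D box."""
    uv = project_points(corners_cam, K)
    return (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())
```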

HD Map and Road Infrastructure Annotation

Road surfaces, lane boundaries, kerbs, traffic signs, signals, poles, and vegetation labeled for HD map generation. Output in Lanelet2, OpenDRIVE, and GeoJSON formats for autonomous driving and smart city applications.

Aerial LiDAR Survey Annotation

Building extraction, tree canopy classification, ground surface identification, and power line detection from airborne LiDAR. Delivered in LAS, LAZ, and GeoJSON formats compatible with CloudCompare, LAStools, PDAL, and ArcGIS.

Industrial 3D Inspection Annotation

Defect detection, surface classification, and quality inspection labeling from structured-light scanners, time-of-flight sensors, and terrestrial LiDAR. Output in E57, PLY, and custom JSON formats for industrial AI pipelines.

Sensor Types and Output Format Support

Category
What we support
Notes
LiDAR sensors
Velodyne VLP-16, VLP-32C, HDL-64E. Ouster OS0, OS1, OS2. Luminar Iris and Hydra. Innoviz InnovizOne. Hesai Pandar. Continental ARS548 (4D imaging radar, supported for fusion projects).
Sensor-specific calibration and beam pattern characteristics accounted for in annotation guidelines.
Input formats
PCD (ASCII and binary), LAS/LAZ, PLY, NumPy .npy/.npz arrays, ROS bag files, HDF5 point cloud files, proprietary binary formats with your spec sheet.
Binary PCD and LAS typically preferred for large datasets. ROS bag extraction on request.
Output formats
KITTI (most AV perception models), nuScenes (multi-sensor datasets), Waymo Open Dataset format, OpenPCDet-compatible, KITTI-tracking format, custom JSON schema.
Heading direction in KITTI rotation_y convention. nuScenes quaternion rotation. Waymo box_3d proto format.
Annotation platforms
Segments.ai, Scale AI 3D, CVAT with 3D mode, Rerun.io for visualisation review, client-specified proprietary platform.
Platform-agnostic. We request a 2-hour platform walkthrough for proprietary tools and are operational within 2 business days.
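To illustrate the orientation conventions noted in the table, converting a single heading angle to a quaternion is a one-liner. This sketch only handles the angle itself and ignores the camera-vs-LiDAR coordinate frame change a real KITTI-to-nuScenes converter must also apply:

```python
import math

def yaw_to_quaternion(yaw):
    """Heading angle (radians) about a single axis as a (w, x, y, z) quaternion.
    nuScenes stores box orientation as a quaternion; KITTI stores one
    rotation_y angle. Illustrative sketch only."""
    return (math.cos(yaw / 2.0), 0.0, 0.0, math.sin(yaw / 2.0))
```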

3D Point Cloud Annotation Success Stories

Annotation of Live Video Streams for Traffic Management and Road Planning


Annotating pre-recorded and live video streams of vehicles produced training data for the machine learning models of a California-based data analytics company, helping it manage traffic more efficiently.

Read full Case Study »
Image Annotation for Swiss Food Waste Assessment Solution Provider


Food images were labelled and categorised so the client could use them as training data for accurate interpretation of visual data.

Read full Case Study »
Annotating Text from News Articles to Enhance the Performance of an AI Model


Captured, validated, and verified information on upcoming and existing construction projects from multilingual, multi-format online publications across Europe and the USA.

Read full Case Study »

Benefits of Outsourcing LiDAR Annotation to HabileData

70% Lower Cost vs. Building In-House

Sensor-Specific Training

LiDAR sensors have different beam patterns, point densities, and artefact profiles. Our annotators are trained on the specific sensor in your dataset – Velodyne VLP-16 annotation guidelines are not the same as Ouster OS2 guidelines. Sensor-specific training prevents the systematic errors that come from applying generic 3D annotation workflows to data the annotator does not understand.

10,000+ Images Annotated Per Day

AV Safety Standards

Autonomous vehicle safety training data requires heading direction accuracy within ±2°. This is the industry standard for AV safety-critical annotation, and we apply it contractually. Every batch includes a heading direction accuracy report alongside the 3D IoU score so your perception team can evaluate the dataset against their safety requirements.
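Checking a ±2° tolerance needs care with angular wraparound – headings of 359° and 1° are 2° apart, not 358°. A minimal sketch with hypothetical function names:

```python
def heading_error_deg(pred_deg, gt_deg):
    """Smallest angular difference in degrees, handling wraparound at 360."""
    d = abs(pred_deg - gt_deg) % 360.0
    return min(d, 360.0 - d)

def within_tolerance(pred_deg, gt_deg, tol_deg=2.0):
    """True if the heading error is within the given angular tolerance."""
    return heading_error_deg(pred_deg, gt_deg) <= tol_deg
```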

95%+ IAA Across All Annotation Types

Sensor Fusion Consistency

If your project involves camera-LiDAR fusion data, we annotate both modalities simultaneously using timestamp-aligned data and maintain consistent object IDs and 3D dimensions across both sensor streams. Cross-modal inconsistency is the most common failure mode in fusion model training data, and avoiding it requires annotating both modalities in the same project pass, not separately.

Scales from 1,000 to 1,000,000+ Items

Scalable Infrastructure

Our 3D annotation pipeline can handle the data volumes that AV programs generate. We scale team capacity within 72 hours of notice and currently manage multiple large-scale AV annotation projects concurrently. Volume estimates and timeline commitments are provided after reviewing a sample of your dataset.

Annotation Guideline Documents

Data Security

AV sensor data often contains imagery of public spaces with identifiable individuals. All LiDAR annotation projects are covered by NDA before any data transfer. Files are transmitted via AES-256 encrypted SFTP. For projects requiring on-premises annotation within your secure environment, we support air-gapped delivery on encrypted media.

Areas of Expertise

Autonomous
Autonomous Vehicles (L2-L4)
3D cuboid annotation for vehicle, pedestrian, cyclist, and motorcycle detection. Ground plane segmentation. Lane boundary polyline annotation for HD map generation. Camera-LiDAR fusion annotation. KITTI, nuScenes, Waymo format.
Robotics
Robotics and Warehouse Automation
Object detection and ground segmentation for robotic navigation. Shelf and inventory point cloud annotation for warehouse management AI. Obstacle annotation for collision avoidance systems.
Geospatial
Geospatial and Infrastructure
Aerial and drone LiDAR annotation for infrastructure inspection, utility corridor mapping, and terrain classification. Building and vegetation segmentation from airborne surveys.
Agriculture
Agriculture and Precision Farming
Crop canopy segmentation from agricultural drone LiDAR. Tree crown delineation and biomass estimation dataset annotation. Terrain analysis for precision irrigation systems.
Mining
Mining and Construction
Excavation volume estimation from point cloud surveys. Equipment and personnel detection on mining sites. Tunnel and underground environment 3D mapping annotation.
Traffic
Smart Cities and Traffic Management
Vehicle and pedestrian flow annotation from roadside LiDAR sensors. Intersection occupancy analysis. Traffic signal state classification in sensor fusion datasets.

What Our Clients Say about HabileData

LiDAR point cloud annotation at scale is where most annotation vendors struggle with quality. HabileData labeled 500,000 frames with 3D bounding cuboids and semantic point-level labels. Their spatial accuracy held within our 5cm tolerance, and the labeling consistency across their team reduced our QA rejection rate to under 3%.
Erik V., Lead Perception Engineer, Self-Driving Technology Company, Sweden
Warehouse navigation requires point cloud annotations of shelving, pallets, and floor markings. HabileData annotated 150,000 indoor scans with instance-level segmentation. Their annotations maintained spatial accuracy even in cluttered, occluded shelf areas where automated pre-labeling tools consistently failed.
Daniel H., VP Data Engineering, Warehouse Robotics Company, USA
City-scale point cloud annotation for our digital twin project covered buildings, roads, vegetation, and utilities. HabileData processed 1.2 million frames with multi-class semantic labels. The annotation consistency across such a large dataset was impressive, and our 3D reconstruction model reached production quality six weeks early.
Sofia R., Research Lead, Urban Planning AI Lab, Spain

LiDAR Point Cloud Annotation: Frequently Asked Questions

What is LiDAR point cloud annotation?

LiDAR point cloud annotation is the process of labeling 3D point cloud data captured by LiDAR sensors with structured metadata – 3D bounding boxes, class labels, instance IDs, and heading directions – so that autonomous vehicle perception, robotics navigation, and geospatial AI models can learn to detect and classify objects in 3D space. Unlike image annotation, which works with a 2D projection, point cloud annotation operates in three-dimensional space using sparse, irregular point data.

What LiDAR sensor data formats does HabileData support?

We accept PCD (ASCII and binary), LAS and LAZ, PLY, NumPy array formats (.npy and .npz), ROS bag files (with extraction), and most proprietary binary formats with a provided spec sheet. We deliver in KITTI format, nuScenes format, Waymo Open Dataset format, OpenPCDet-compatible format, and custom JSON schema. If your format is not listed, send us a sample file and format specification.

What accuracy standard does HabileData apply to AV point cloud annotation?

For 3D cuboid annotation on autonomous vehicle datasets, our SLA targets 3D IoU of 0.78 or higher, measured per delivery batch. For heading direction accuracy on AV safety-critical object classes (vehicles, pedestrians, cyclists), we apply a ±2° angular tolerance – the industry standard for L2 to L4 autonomous vehicle perception systems. Both metrics are reported per batch in the delivery documentation.

Can HabileData annotate camera-LiDAR fusion datasets?

Yes. For sensor fusion projects with synchronised camera images and LiDAR scans, we annotate both modalities simultaneously using your provided camera-LiDAR extrinsic calibration parameters. We maintain consistent object IDs and 3D dimensions across both sensor streams for every timestamp. This cross-modal consistency is what enables your model to learn the correct geometric relationship between the two sensor modalities during training.

How is data security handled for AV sensor data that may contain public imagery?

All LiDAR annotation projects are covered by NDA before any data transfer. Files are transmitted via AES-256 encrypted SFTP or client-provisioned cloud storage. For sensor fusion data containing imagery of public spaces with identifiable individuals, we apply face blurring and licence plate masking on camera images before annotation begins on projects where this is requested. On-premises annotation and air-gapped delivery are available for projects with strict data residency requirements.


Disclaimer: HitechDigital Solutions LLP and HabileData will never ask for money or commission to offer jobs or projects. In the event you are contacted by any person with job offer in our companies, please reach out to us at info@habiledata.com.