Most 3D point cloud annotation errors are invisible in 2D rendered views. They only surface when trained models fail to predict correct distances in deployment.
When you outsource 3D point cloud annotation to a provider without LiDAR labeling expertise, you get cuboids that fit the rendered view but misrepresent true object dimensions. HabileData is an annotation company with teams trained on sensor physics, point cloud rendering, and the specific failure modes of three-dimensional annotation.
Cuboids that fit the rendered view but misrepresent true Z-axis dimensions
A poorly trained annotator places a 3D cuboid that fits what they see in the rendered view but misrepresents the object's true dimensions along the Z-axis. Sensor beam angles create artefacts that look like low objects but are not. These errors are invisible in 2D views – they only surface when your model fails in deployment.
Velodyne VLP-16 and Ouster OS2 need different guidelines – calibrated per sensor
Every project begins with a calibration batch on your specific sensor data – because guidelines for a Velodyne VLP-16 at 10Hz and 16 beams differ from an Ouster OS2 at 20Hz and 128 beams. We measure 3D IoU per batch and enforce ±2° heading direction accuracy on all AV safety-critical projects.
Cross-modal object identity – same ID in camera frame 42 and its LiDAR cluster
For sensor fusion projects combining synchronised LiDAR and camera data, we maintain cross-modal object identity. The vehicle in frame 42 of the camera feed is annotated with the same object ID and consistent 3D dimensions as its corresponding cluster in the LiDAR scan from the same timestamp.
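As a minimal sketch of what that cross-modal linkage looks like in practice – the record fields, frame number, and the `veh_007` ID below are hypothetical illustrations, not our actual annotation schema:

```python
# Hypothetical annotation records for one synchronised timestamp
camera_ann = {"frame": 42, "object_id": "veh_007",
              "bbox_2d": (310, 180, 420, 260)}       # pixels
lidar_ann = {"frame": 42, "object_id": "veh_007",
             "dims_lwh": (4.5, 1.8, 1.6)}            # metres

def cross_modal_consistent(cam: dict, lid: dict) -> bool:
    """Check that the same physical object carries one ID across
    the camera and LiDAR annotations for the same timestamp."""
    return cam["frame"] == lid["frame"] and cam["object_id"] == lid["object_id"]

print(cross_modal_consistent(camera_ann, lidar_ann))  # True
```

A check like this runs per timestamp, so an ID swap in either modality is caught before delivery rather than during model training.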
KITTI, nuScenes, Waymo, or custom JSON – configured to your pipeline specification
We deliver in KITTI format for AV perception projects, nuScenes for multi-sensor datasets, Waymo Open Dataset format for Waymo-architecture models, and custom JSON schema for proprietary pipelines. If your pipeline uses a format not listed here, provide the specification and we configure our export to match.
We provide end-to-end 3D point cloud annotation across all major LiDAR data types, sensor platforms, and industry formats. Each service below is delivered with format-specific annotation protocols, three-stage QA validation, and measurable quality benchmarks reported at delivery.
Three-dimensional bounding boxes capturing centroid position (x, y, z), dimensions (length, width, height), and orientation (yaw, pitch, roll). 85%+ 3D IoU across standard AV classes. Output in KITTI, nuScenes, Waymo, and custom formats with format-specific taxonomy applied.
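The nine-parameter cuboid described above can be sketched as a simple data structure; the class name, field layout, and sample values are illustrative, not a production schema:

```python
from dataclasses import dataclass

@dataclass
class Cuboid3D:
    """Nine-parameter 3D bounding box: centroid, dimensions, orientation."""
    x: float; y: float; z: float                 # centroid position (metres)
    length: float; width: float; height: float   # dimensions (metres)
    yaw: float; pitch: float; roll: float        # orientation (radians)

    def volume(self) -> float:
        return self.length * self.width * self.height

# A car-sized box 12.4 m ahead, rotated ~90 degrees (illustrative values)
box = Cuboid3D(x=12.4, y=-3.1, z=0.9,
               length=4.5, width=1.8, height=1.6,
               yaw=1.57, pitch=0.0, roll=0.0)
print(box.volume())  # ≈ 12.96
```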
Point-level class labeling for full scene understanding covering road, sidewalk, vegetation, building, vehicle sub-types, pedestrian, cyclist, and traffic infrastructure. 88%+ mean IoU. Compatible with SemanticKITTI, nuScenes-lidarseg, and custom schema.
Consistent object track IDs across sequential LiDAR scans through occlusion events, sparse-range transitions, and entry/exit scenarios. 82%+ MOTA across standard AV drive sequences.
Aligned 3D bounding boxes in point cloud space with corresponding 2D boxes in camera frames. Supports nuScenes (6 cameras + LiDAR), Waymo (5 cameras + LiDAR), and Argoverse (7 ring + 2 stereo cameras + LiDAR). Validated through 3D-to-2D projection checks.
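A 3D-to-2D projection check of this kind can be sketched with a standard pinhole camera model; the intrinsic matrix, extrinsics, and test point below are illustrative placeholders, not real calibration values:

```python
import numpy as np

def project_to_image(point_3d, K, R, t):
    """Project a 3D point into pixel coordinates using camera
    extrinsics (R, t) and the intrinsic matrix K. Returns None
    for points behind the camera."""
    cam = R @ np.asarray(point_3d, dtype=float) + t  # world -> camera frame
    if cam[2] <= 0:
        return None
    uvw = K @ cam
    return uvw[:2] / uvw[2]                          # perspective divide

# Illustrative pinhole intrinsics (not from any real calibration)
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

uv = project_to_image([2.0, 1.0, 10.0], K, R, t)
print(uv)  # [840. 460.]
```

The validation then asserts that the projected 3D box corners land inside (or consistently relate to) the annotated 2D box for the same object ID.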
Road surfaces, lane boundaries, kerbs, traffic signs, signals, poles, and vegetation labeled for HD map generation. Output in Lanelet2, OpenDRIVE, and GeoJSON formats for autonomous driving and smart city applications.
Building extraction, tree canopy classification, ground surface identification, and power line detection from airborne LiDAR. Delivered in LAS, LAZ, and GeoJSON formats compatible with CloudCompare, LAStools, PDAL, and ArcGIS.
Defect detection, surface classification, and quality inspection labeling from structured-light scanners, time-of-flight sensors, and terrestrial LiDAR. Output in E57, PLY, and custom JSON formats for industrial AI pipelines.
LiDAR sensors have different beam patterns, point densities, and artefact profiles. Our annotators are trained on the specific sensor in your dataset – Velodyne VLP-16 annotation guidelines are not the same as Ouster OS2 guidelines. Sensor-specific training prevents the systematic errors that come from applying generic 3D annotation workflows to data the annotator does not understand.
Autonomous vehicle safety training data requires heading direction accuracy within plus or minus 2 degrees. This is the industry standard for AV safety-critical annotation, and we apply it contractually. Every batch includes a heading direction accuracy report alongside the 3D IoU score so your perception team can evaluate the dataset against their safety requirements.
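A heading-tolerance check of this kind reduces to comparing yaw angles with 360-degree wraparound handled correctly; a minimal sketch (function names and the 2-degree default are illustrative of the tolerance described above):

```python
def heading_error_deg(annotated_yaw_deg: float, reference_yaw_deg: float) -> float:
    """Smallest absolute angular difference, handling wraparound
    so that 359 degrees vs 1 degree reads as 2 degrees, not 358."""
    diff = (annotated_yaw_deg - reference_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(diff)

def within_tolerance(annotated_yaw_deg: float, reference_yaw_deg: float,
                     tol_deg: float = 2.0) -> bool:
    return heading_error_deg(annotated_yaw_deg, reference_yaw_deg) <= tol_deg

print(heading_error_deg(359.0, 1.0))  # 2.0
print(within_tolerance(359.0, 1.0))   # True
print(within_tolerance(10.0, 14.5))   # False
```

The wraparound handling matters: a naive subtraction would flag a near-perfect annotation at 359 degrees against a 1-degree reference as a 358-degree error.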
If your project involves camera-LiDAR fusion data, we annotate both modalities simultaneously using timestamp-aligned data and maintain consistent object IDs and 3D dimensions across both sensor streams. Cross-modal inconsistency is the most common failure mode in fusion model training data, and avoiding it requires annotating both modalities in the same project pass, not separately.
Our 3D annotation pipeline can handle the data volumes that AV programs generate. We scale team capacity within 72 hours of notice and currently manage multiple large-scale AV annotation projects concurrently. Volume estimates and timeline commitments are provided after reviewing a sample of your dataset.
AV sensor data often contains imagery of public spaces with identifiable individuals. All LiDAR annotation projects are covered by NDA before any data transfer. Files are transmitted via AES-256 encrypted SFTP. For projects requiring on-premises annotation within your secure environment, we support air-gapped delivery on encrypted media.
LiDAR point cloud annotation is the process of labeling 3D point cloud data captured by LiDAR sensors with structured metadata – 3D bounding boxes, class labels, instance IDs, and heading directions – so that autonomous vehicle perception, robotics navigation, and geospatial AI models can learn to detect and classify objects in 3D space. Unlike image annotation, which works with a 2D projection, point cloud annotation operates in three-dimensional space using sparse, irregular point data.
We accept PCD (ASCII and binary), LAS and LAZ, PLY, NumPy array formats (.npy and .npz), ROS bag files (with extraction), and most proprietary binary formats with a provided spec sheet. We deliver in KITTI format, nuScenes format, Waymo Open Dataset format, OpenPCDet-compatible format, and custom JSON schema. If your format is not listed, send us a sample file and format specification.
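As an illustration of one delivery target: the public KITTI label format stores one object per whitespace-separated line (type, truncation, occlusion, alpha, 2D bbox, 3D dimensions h/w/l, location x/y/z, rotation_y). A hedged sketch of an exporter for that layout – the function name and sample values are illustrative:

```python
def kitti_label_line(obj_type, bbox_2d, dims_hwl, loc_xyz, rotation_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    """Format one object as a KITTI label line:
    type truncated occluded alpha x1 y1 x2 y2 h w l x y z rotation_y"""
    fields = [obj_type, f"{truncated:.2f}", str(occluded), f"{alpha:.2f}",
              *(f"{v:.2f}" for v in bbox_2d),   # 2D box in pixels
              *(f"{v:.2f}" for v in dims_hwl),  # height, width, length (m)
              *(f"{v:.2f}" for v in loc_xyz),   # location in camera frame (m)
              f"{rotation_y:.2f}"]              # yaw around camera Y axis
    return " ".join(fields)

line = kitti_label_line("Car", (712.4, 143.0, 810.7, 307.9),
                        (1.89, 0.48, 1.20), (1.84, 1.47, 8.41), 0.01)
print(line)
```

Other targets (nuScenes, Waymo, custom JSON) carry the same underlying cuboid and identity information in different serialisations, which is why a format specification is enough to configure the export.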
For 3D cuboid annotation on autonomous vehicle datasets, our SLA targets 3D IoU of 0.78 or higher, measured per delivery batch. For heading direction accuracy on AV safety-critical object classes (vehicles, pedestrians, cyclists), we apply a plus or minus 2 degree angular tolerance – the industry standard for L2 to L4 autonomous vehicle perception systems. Both metrics are reported per batch in the delivery documentation.
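For intuition, 3D IoU is intersection volume over union volume. The sketch below handles only axis-aligned boxes and ignores yaw, which real AV evaluation must account for – it is a simplified illustration, not our production metric:

```python
def axis_aligned_iou_3d(box_a, box_b):
    """3D IoU for axis-aligned boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax).
    Simplified: oriented-box IoU must also handle yaw."""
    lo = [max(box_a[i], box_b[i]) for i in range(3)]       # overlap mins
    hi = [min(box_a[i + 3], box_b[i + 3]) for i in range(3)]  # overlap maxes
    inter = 1.0
    for l, h in zip(lo, hi):
        if h <= l:          # no overlap along this axis
            return 0.0
        inter *= h - l

    def vol(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    return inter / (vol(box_a) + vol(box_b) - inter)

# Two 2x2x2 boxes offset by 1 m along x: intersection 4, union 12
print(axis_aligned_iou_3d((0, 0, 0, 2, 2, 2), (1, 0, 0, 3, 2, 2)))  # ≈ 0.333
```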
Yes. For sensor fusion projects with synchronised camera images and LiDAR scans, we annotate both modalities simultaneously using your provided camera-LiDAR extrinsic calibration parameters. We maintain consistent object IDs and 3D dimensions across both sensor streams for every timestamp. This cross-modal consistency is what enables your model to learn the correct geometric relationship between the two sensor modalities during training.
All LiDAR annotation projects are covered by NDA before any data transfer. Files are transmitted via AES-256 encrypted SFTP or client-provisioned cloud storage. For sensor fusion data containing imagery of public spaces with identifiable individuals, we apply face blurring and licence plate masking on camera images before annotation begins on projects where this is requested. On-premises annotation and air-gapped delivery are available for projects with strict data residency requirements.
Disclaimer: HitechDigital Solutions LLP and HabileData will never ask for money or commission to offer jobs or projects. If you are contacted by anyone making a job offer in our companies' names, please reach out to us at info@habiledata.com.