{"modelVersions":[{"status":"UPLOAD_COMPLETE","memoryFootprint":"70.5","accuracyReached":80.0,"description":"","totalSizeInBytes":73969636,"batchSize":1,"totalFileCount":2,"gpuModel":"V100","versionId":"deployable_v2.0.2","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2022-08-22T18:55:02.371Z","customMetrics":[{"attributes":[{"key":"2. Model Architecture","value":"MaskRCNN"},{"key":"7. RUNTIME(S)","value":"Deepstream"},{"key":"8. Supported OS","value":"LINUX, LINUX4TEGRA"},{"key":"6. Trainable","value":"False"},{"key":"5. Output","value":"Category label (person), bounding-box coordinates and segmentation mask for each detected person in the input image."},{"key":"1. Description","value":"Segmentation of person instances in an image."},{"key":"3. Backbone","value":"ResNet-50"},{"key":"4. Input","value":"TYPE: RGB IMAGE; DIMENSION: 2D; RESOLUTION: 1920X1080 RESIZED TO 960X544X3"},{"key":"9. Supported Hardware","value":"All NVIDIA GPUS INCLUDING JETSON DEVICES"}],"name":"1. MODEL 
OVERVIEW"}],"numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","memoryFootprint":"70.5","accuracyReached":80.0,"description":"","totalSizeInBytes":73969636,"batchSize":1,"totalFileCount":2,"gpuModel":"V100","versionId":"deployable_v2.0.1","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-10-26T16:03:57.648Z","numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","accuracyReached":86.0,"description":"","totalSizeInBytes":361882215,"totalFileCount":1,"versionId":"trainable_v2.1","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-08-25T22:49:30.104Z","numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","memoryFootprint":"168.1","accuracyReached":86.0,"description":"","totalSizeInBytes":176228030,"batchSize":1,"totalFileCount":2,"gpuModel":"V100","versionId":"deployable_v1.0","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-08-24T22:54:59.247Z","numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","memoryFootprint":"168.1","accuracyReached":80.0,"description":"","totalSizeInBytes":176223473,"batchSize":1,"totalFileCount":2,"gpuModel":"V100","versionId":"deployable_v2.0","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-08-24T22:54:36.765Z","numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","memoryFootprint":"344.7","accuracyReached":86.0,"description":"","totalSizeInBytes":361424122,"batchSize":1,"totalFileCount":1,"gpuModel":"V100","versionId":"trainable_v1.0","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-08-24T22:53:53.213Z","numberOfEpochs":120},{"status":"UPLOAD_COMPLETE","memoryFootprint":"345.1","accuracyReached":80.0,"description":"","totalSizeInBytes":361882215,"batchSize":1,"totalFileCount":1,"gpuModel":"V100","versionId":"trainable_v2.0","createdByUser":"n90fe0en2gvll5957fel7u75sg","createdDate":"2021-08-24T22:53:15.914Z","numberOfEpochs":120}],"model":{"orgName":"nvidia","labels":["nvaie:model:nvaie_supported","other:model:tlt","technology:model:soln_ai","technology:model:soln_nvidia_ai"],"logo"
:"https://dz112fgwz7ogh.cloudfront.net/logos/Nvidia-Centric-TAO.png","shortDescription":"1-class instance segmentation network to detect and segment instances of people in an image.","isReadOnly":true,"publicDatasetUsed":{},"teamName":"tao","application":"Instance Segmentation","latestVersionSizeInBytes":73969636,"isPublic":true,"description":"## PeopleSegNet Model Card\n\n### Model Overview \n\nThe model described in this card detects one or more “person” objects within an image and returns a box around each object, as well as a segmentation mask for each object.\n\n### Model Architecture \n\nThis model is based on MaskRCNN with ResNet50 as its feature extractor. MaskRCNN is a widely adopted two-stage architecture, which uses a Region Proposal Network (RPN) to generate object proposals and various prediction heads to predict object categories, refine bounding boxes, and generate instance masks.\n\n### Training \n\nThe training algorithm optimizes the network to minimize the mask, localization, and confidence losses for the objects.\n\n#### Training Data \n\nThe PeopleSegNet v2.1 model was trained on a proprietary dataset with more than 5 million objects for the person class. The training dataset consists of a mix of camera heights, crowd densities, and fields of view (FOV). Approximately half of the training data consisted of images captured in an indoor office environment. For this case, the camera is typically set up at a height of approximately 10 feet, at a 45-degree angle, with a close field of view. 
This content was chosen to improve the accuracy of the models for the convenience-store retail analytics use case.\n\n| Environment | Images | Persons |\n| ---- | ---- | ---- |\n| 5ft Indoor | 108,692 | 1,060,960 |\n| 5ft Outdoor | 206,912 | 1,668,250 |\n| 10ft Indoor (Office close FOV) | 413,270 | 4,577,870 |\n| 10ft Outdoor | 18,321 | 178,817 |\n| 20ft Indoor | 104,972 | 1,079,550 |\n| 20ft Outdoor | 24,783 | 59,623 |\n| Total | 876,950 | 8,625,070 |\n\n##### Training Data Ground-truth Labeling Guidelines\n\nThe training dataset is created by human labelers annotating ground-truth bounding boxes and categories. The following guidelines were used while labeling the training data for the NVIDIA PeopleSegNet model. If you are looking to re-train with your own dataset, please follow these guidelines for the highest accuracy.\n\nPeopleSegNet project labeling guidelines:\n\n1. All objects that fall under one of the three classes (person, face, bag) in the image and are larger than the smallest bounding-box limit for the corresponding class (height >= 10px OR width >= 10px @1920x1080) are labeled with the appropriate class label.\n\n2. If a person is carrying an object, mark the bounding box to include the carried object as long as it doesn’t affect the silhouette of the person. For example, exclude a rolling bag if the person is pulling it behind them and it is distinctly visible as a separate object, but include a backpack, purse, etc. that does not alter the silhouette of the pedestrian significantly.\n\n3. Occlusion: Partially occluded objects that do not belong to the person class and are approximately 60% or more visible are marked with a bounding box around the visible part of the object and flagged as partially occluded. Objects under 60% visibility are not annotated.\n\n4. 
Occlusion for person class: If an occluded person’s head and shoulders are visible and the visible height is approximately 20% or more, then these objects are marked with a bounding box around the visible part of the person object. If the head and shoulders are not visible, please follow the Occlusion guidelines in item 3 above.\n\n5. Truncation: Objects other than persons that are at the edge of the frame and 60% or more visible are marked with the truncation flag for the object.\n\n6. Truncation for person class: If a truncated person’s head and shoulders are visible and the visible height is approximately 20% or more, mark a bounding box around the visible part of the person object. If the head and shoulders are not visible, please follow the Truncation guidelines in item 5 above.\n\n7. Each frame is not required to have an object.\n\n### Performance\n\n#### Evaluation Data \n\nThe inference performance of the PeopleSegNet v2.1 model was measured against 42,000 proprietary images across a variety of environments. The frames are high-resolution 1920x1080-pixel images, resized to 960x576 pixels before being passed to the PeopleSegNet model.\n\n##### Methodology and KPI\n\nTrue positives, false positives, and false negatives are calculated using an intersection-over-union (IOU) criterion greater than 0.5. The KPIs for the evaluation data are reported in the table below. The ResNet 50 model is evaluated based on precision, recall, and accuracy.\n\n| Content | Precision | Recall | Accuracy |\n| ---- | ---- | ---- | ---- |\n| 5ft | 93.69 | 90.36 | 85.45 |\n| 10ft | 96.13 | 76.22 | 73.95 |\n| 20ft | 97.58 | 91.88 | 90.52 |\n| Office use-case | 88.31 | 94.52 | 86.00 |\n\n#### Real-time Inference Performance \n\nThe inference is run on the provided pruned models at INT8 precision. On the Jetson Nano, FP16 precision is used. 
The inference performance is run using [`trtexec`](https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec) on Jetson Nano, AGX Xavier, Xavier NX and NVIDIA T4 GPUs. The Jetson devices are running at Max-N configuration for maximum GPU frequency. The performance shown here is the inference-only performance. The end-to-end performance with streaming video data might vary slightly depending on other bottlenecks in the hardware and software.\n\n| Platform | FPS |\n|------------------------------------|------|\n| Nano | 0.6 |\n| Xavier NX | 8.5 |\n| AGX Xavier | 12.2 |\n| T4 | 40 |\n\n### How to use this model \n\nThese models need to be used with NVIDIA hardware and software. For hardware, the models can run on any NVIDIA GPU, including NVIDIA Jetson devices. These models can only be used with the Train Adapt Optimize (TAO) Toolkit, DeepStream SDK, or TensorRT.\n\nThe primary use case intended for the model is detecting and segmenting people in a color (RGB) image. The model can be used to detect and segment people from photos and videos by using appropriate video or image decoding and pre-processing.\n\nThe model is intended to be re-trained with the user's own dataset using the Transfer Learning Toolkit, or used as-is. Re-training can provide high-fidelity models that are adapted to the use case. 
The Jupyter notebook available as part of the TLT container can be used to re-train.\n\nThe model is encrypted and will only operate with the following key:\n\n- Model load key: `nvidia_tlt`\n\nPlease make sure to use this as the key for all TAO commands that require a model load key.\n\n#### Input\n\nColor images of resolution 960 x 576 x 3\n\n#### Output\n\nCategory label (person), bounding-box coordinates and segmentation mask for each detected person in the input image.\n\n##### Input image\n\n\n\n##### Output image\n\n\n\n#### Instructions to use unpruned model with TAO\n\nTo use these models as pretrained weights for transfer learning, please use the snippet below as a template for the `maskrcnn_config` component of the experiment spec file to train a MaskRCNN model. For more information on the experiment spec file, please refer to the [TAO Toolkit User Guide](https://docs.nvidia.com/tao/tao-toolkit/index.html).\n\n```py\nmaskrcnn_config {\n nlayers: 50\n arch: \"resnet\"\n gt_mask_size: 112\n freeze_blocks: \"[0]\"\n freeze_bn: True\n # Region Proposal Network\n rpn_positive_overlap: 0.7\n rpn_negative_overlap: 0.3\n rpn_batch_size_per_im: 256\n rpn_fg_fraction: 0.5\n rpn_min_size: 0.\n\n # Proposal layer.\n batch_size_per_im: 512\n fg_fraction: 0.25\n fg_thresh: 0.5\n bg_thresh_hi: 0.5\n bg_thresh_lo: 0.\n\n # Faster-RCNN heads.\n fast_rcnn_mlp_head_dim: 1024\n bbox_reg_weights: \"(10., 10., 5., 5.)\"\n\n # Mask-RCNN heads.\n include_mask: True\n mrcnn_resolution: 28\n\n # training\n train_rpn_pre_nms_topn: 2000\n train_rpn_post_nms_topn: 1000\n train_rpn_nms_threshold: 0.7\n\n # evaluation\n test_detections_per_image: 100\n test_nms: 0.5\n test_rpn_pre_nms_topn: 1000\n test_rpn_post_nms_topn: 1000\n test_rpn_nms_thresh: 0.7\n\n # model architecture\n min_level: 2\n max_level: 6\n num_scales: 1\n aspect_ratios: \"[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]\"\n anchor_scale: 8\n\n # localization loss\n rpn_box_loss_weight: 1.0\n fast_rcnn_box_loss_weight: 
1.0\n mrcnn_weight_loss_mask: 1.0\n}\n```\n\n#### Instructions to deploy these models with DeepStream\n\nTo create the entire end-to-end video analytics application, deploy these models with the [DeepStream SDK](https://developer.nvidia.com/deepstream-sdk). The DeepStream SDK is a streaming analytics toolkit that accelerates building AI-based video analytics applications. DeepStream supports direct integration of these models into the DeepStream sample app.\n\nTo deploy these models with [DeepStream 6.1](https://developer.nvidia.com/deepstream-sdk), please follow the instructions below:\n\n[Download](https://developer.nvidia.com/deepstream-sdk) and install the DeepStream SDK. The installation instructions for DeepStream are provided in the [DeepStream development guide](https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html). The config files for the purpose-built models are located under the DeepStream installation directory; `/opt/nvidia/deepstream` is the default DeepStream installation directory, and this path will be different if you installed DeepStream in a different directory.\n\nYou will need 2 config files and 1 label file. 
These files are provided in [NVIDIA-AI-IOT](https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/configs/peopleSegNet_tao).\n\n```sh\ndeepstream_app_source1_peoplesegnet.txt - Main config file for DeepStream app\npgie_peopleSegNetv2_tao_config.txt - File to configure inference settings \npeopleSegNet_labels.txt - Label file with 1 class\n```\n\nKey parameters in `pgie_peopleSegNetv2_tao_config.txt`:\n\n```sh\ntlt-model-key=nvidia_tlt\ntlt-encoded-model=../../models/peopleSegNet/V2/peoplesegnet_resnet50.etlt\nmodel-engine-file=../../models/peopleSegNet/V2/peoplesegnet_resnet50.etlt_b1_gpu0_int8.engine\nnetwork-type=3 ## 3 is for instance segmentation network\nlabelfile-path=./peopleSegNet_labels.txt\nint8-calib-file=../../models/peopleSegNet/V2/peoplesegnet_resnet50_int8.txt\ninfer-dims=3;576;960\nnum-detected-classes=2\n```\n\nRun `deepstream-app`:\n\n```sh\ndeepstream-app -c deepstream_app_source1_peoplesegnet.txt\n```\n\nDocumentation to deploy with DeepStream is provided in the \"Deploying to DeepStream\" chapter of the [TAO User Guide](https://docs.nvidia.com/tao/tao-toolkit/index.html).\n\n### Limitations \n\n#### Very Small Objects\n\nThe NVIDIA PeopleSegNet models were trained to detect objects larger than 10x10 pixels. Therefore, they may not be able to detect objects that are smaller than 10x10 pixels.\n\n#### Occluded Objects\n\nWhen objects are occluded or truncated such that less than 20% of the object is visible, they may not be detected by the PeopleSegNet model. For person-class objects, the model will detect occluded people as long as the head and shoulders are visible. However, if the person’s head and/or shoulders are not visible, the object might not be detected unless more than 60% of the person is visible.\n\n#### Dark-lighting, Monochrome or Infrared Camera Images\n\nThe PeopleSegNet models were trained on RGB images in good lighting conditions. 
Therefore, images captured in dark lighting conditions, monochrome images, or IR camera images may not provide good detection results.\n\n#### Warped and Blurry Images\n\nThe PeopleSegNet models were not trained on fisheye-lens cameras or moving cameras. Therefore, the models may not perform well for warped images and images that have motion-induced or other blur.\n\n#### Face and Bag class\n\nAlthough the bag and face classes are included in the model, the accuracy of these classes will be much lower than that of the people class. Some re-training will be required on these classes to improve accuracy.\n\n### Model versions:\n\n- **trainable_v2.1** - ResNet50 based pre-trained model.\n- **deployable_v2.0.2** - ResNet50 based model deployable to DeepStream.\n\n### Reference\n\n#### Citations \n\n- K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In ICCV, 2017.\n\n#### Using TAO Pre-trained Models \n\n- Get the [TAO Container](https://ngc.nvidia.com/catalog/containers/nvidia:tao:tao-toolkit)\n- Get other purpose-built models from the NGC model registry:\n - [TrafficCamNet](https://ngc.nvidia.com/catalog/models/nvidia:tao:trafficcamnet)\n - [DashCamNet](https://ngc.nvidia.com/catalog/models/nvidia:tao:dashcamnet)\n - [FaceDetectIR](https://ngc.nvidia.com/catalog/models/nvidia:tao:facedetectir)\n - [VehicleMakeNet](https://ngc.nvidia.com/catalog/models/nvidia:tao:vehiclemakenet)\n - [VehicleTypeNet](https://ngc.nvidia.com/catalog/models/nvidia:tao:vehicletypenet)\n\n#### Technical blogs \n\n- [Train like a ‘pro’ without being an AI expert using TAO AutoML](https://developer.nvidia.com/blog/training-like-an-ai-pro-using-tao-automl/)\n- [Create Custom AI models using NVIDIA TAO Toolkit with Azure Machine Learning](https://developer.nvidia.com/blog/creating-custom-ai-models-using-nvidia-tao-toolkit-with-azure-machine-learning/)\n- [Developing and Deploying AI-powered Robots with NVIDIA Isaac Sim and NVIDIA 
TAO](https://developer.nvidia.com/blog/developing-and-deploying-ai-powered-robots-with-nvidia-isaac-sim-and-nvidia-tao/)\n- Learn endless ways to adapt and supercharge your AI workflows with TAO - [Whitepaper](https://developer.nvidia.com/tao-toolkit-usecases-whitepaper/1-introduction)\n- [Customize Action Recognition with TAO and deploy with DeepStream](https://developer.nvidia.com/blog/developing-and-deploying-your-custom-action-recognition-application-without-any-ai-expertise-using-tao-and-deepstream/)\n- Read the 2-part blog on training and optimizing a 2D body pose estimation model with TAO - [Part 1](https://developer.nvidia.com/blog/training-optimizing-2d-pose-estimation-model-with-tao-toolkit-part-1) | [Part 2](https://developer.nvidia.com/blog/training-optimizing-2d-pose-estimation-model-with-tao-toolkit-part-2)\n- Learn how to train a [real-time license plate detection and recognition app](https://developer.nvidia.com/blog/creating-a-real-time-license-plate-detection-and-recognition-app) with TAO and DeepStream.\n- Model accuracy is extremely important; learn how you can achieve [state-of-the-art accuracy for classification and object detection models](https://developer.nvidia.com/blog/preparing-state-of-the-art-models-for-classification-and-object-detection-with-tao-toolkit/) using TAO.\n\n#### Suggested reading \n\n- More information about the TAO Toolkit and pre-trained models can be found at the NVIDIA Developer Zone: https://developer.nvidia.com/tao-toolkit\n- Read the [TAO User Guide](https://docs.nvidia.com/tao/tao-toolkit/text/overview.html) and [release notes](https://docs.nvidia.com/tao/tao-toolkit/text/release_notes.html#transfer-learning-toolkit-v3-0l).\n- If you have any questions or feedback, please refer to the discussions on the [TAO Toolkit Developer Forums](https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/tao-toolkit/17)\n- Deploy your model on the edge using DeepStream. 
Learn more about the DeepStream SDK at https://developer.nvidia.com/deepstream-sdk\n\n### License \n\nThe license to use these models is covered by the Model EULA. By downloading the unpruned or pruned version of the model, you accept the terms and conditions of these [licenses](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/).\n\n### Ethical Considerations \n\nThe training and evaluation datasets mostly consist of North American content. An ideal training and evaluation dataset would additionally include content from other geographies.\n\nNVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.\n","latestVersionIdStr":"deployable_v2.0.2","canGuestDownload":true,"latestVersionGpuModel":"V100","precision":"FP32","framework":"Transfer Learning Toolkit","createdDate":"2021-08-16T15:03:42.177Z","name":"peoplesegnet","publisher":"","displayName":"PeopleSegNet","modelFormat":"TLT","updatedDate":"2023-07-24T21:40:19.580Z"},"requestStatus":{"statusCode":"SUCCESS","requestId":"7ecd56ba-a0b2-46db-aaea-d73bc4110f9a"},"paginationInfo":{"totalPages":1,"index":0,"totalResults":7,"size":100}}