diff --git a/facebookresearch_WSL-Images_resnext.md b/facebookresearch_WSL-Images_resnext.md
index 465e43e3..39b470c2 100644
--- a/facebookresearch_WSL-Images_resnext.md
+++ b/facebookresearch_WSL-Images_resnext.md
@@ -65,7 +65,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 print(torch.nn.functional.softmax(output[0], dim=0))
diff --git a/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md b/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
index 3fde7d68..e82de1cc 100644
--- a/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
+++ b/facebookresearch_semi-supervised-ImageNet1K-models_resnext.md
@@ -73,7 +73,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 print(torch.nn.functional.softmax(output[0], dim=0))
diff --git a/nvidia_deeplearningexamples_efficientnet.md b/nvidia_deeplearningexamples_efficientnet.md
index c306b343..0e75ff2e 100644
--- a/nvidia_deeplearningexamples_efficientnet.md
+++ b/nvidia_deeplearningexamples_efficientnet.md
@@ -55,7 +55,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp
 print(f'Using {device} for inference')
 ```
 
-Load the model pretrained on IMAGENET dataset.
+Load the model pretrained on the ImageNet dataset.
 
 You can choose among the following models:
 
diff --git a/nvidia_deeplearningexamples_gpunet.md b/nvidia_deeplearningexamples_gpunet.md
index 5de663f5..cc526df8 100644
--- a/nvidia_deeplearningexamples_gpunet.md
+++ b/nvidia_deeplearningexamples_gpunet.md
@@ -54,7 +54,7 @@ print(f'Using {device} for inference')
 ```
 
 ### Load Pretrained model
-Loads NVIDIA GPUNet-0 model by default pre-trained on IMAGENET dataset. You can switch the default pre-trained model loading from GPUNet-0 to one of the following models listed below.
+Loads the NVIDIA GPUNet-0 model by default, pre-trained on the ImageNet dataset. You can switch the default pre-trained model from GPUNet-0 to one of the models listed below.
 
 The model architecture is visible as output of the loaded model. For details architecture and latency info please refer to [architecture section](https://github.com/NVIDIA/DeepLearningExamples/tree/torchhub/PyTorch/Classification/GPUNet#model-architecture) in the original repo and Table#[3](https://arxiv.org/pdf/2205.00841.pdf) in the CVPR-2022 paper, respectively.
 
diff --git a/nvidia_deeplearningexamples_resnet50.md b/nvidia_deeplearningexamples_resnet50.md
index d613b59c..65856cce 100644
--- a/nvidia_deeplearningexamples_resnet50.md
+++ b/nvidia_deeplearningexamples_resnet50.md
@@ -59,7 +59,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp
 print(f'Using {device} for inference')
 ```
 
-Load the model pretrained on IMAGENET dataset.
+Load the model pretrained on the ImageNet dataset.
 ```python
 resnet50 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resnet50', pretrained=True)
 utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils')
diff --git a/nvidia_deeplearningexamples_resnext.md b/nvidia_deeplearningexamples_resnext.md
index e514e5e2..5f3c07d1 100644
--- a/nvidia_deeplearningexamples_resnext.md
+++ b/nvidia_deeplearningexamples_resnext.md
@@ -64,7 +64,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp
 print(f'Using {device} for inference')
 ```
 
-Load the model pretrained on IMAGENET dataset.
+Load the model pretrained on the ImageNet dataset.
 ```python
 resneXt = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_resneXt')
 utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils')
diff --git a/nvidia_deeplearningexamples_se-resnext.md b/nvidia_deeplearningexamples_se-resnext.md
index b0088a21..4737f88f 100644
--- a/nvidia_deeplearningexamples_se-resnext.md
+++ b/nvidia_deeplearningexamples_se-resnext.md
@@ -64,7 +64,7 @@ device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cp
 print(f'Using {device} for inference')
 ```
 
-Load the model pretrained on IMAGENET dataset.
+Load the model pretrained on the ImageNet dataset.
 ```python
 resneXt = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_se_resnext101_32x4d')
 utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_convnets_processing_utils')
diff --git a/pytorch_vision_alexnet.md b/pytorch_vision_alexnet.md
index 3fa38f53..2c36a74a 100644
--- a/pytorch_vision_alexnet.md
+++ b/pytorch_vision_alexnet.md
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -85,11 +85,11 @@ for i in range(top5_prob.size(0)):
 
 AlexNet competed in the ImageNet Large Scale Visual Recognition Challenge on September 30, 2012. The network achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner up. The original paper's primary result was that the depth of the model was essential for its high performance, which was computationally expensive, but made feasible due to the utilization of graphics processing units (GPUs) during training.
 
-The 1-crop error rates on the imagenet dataset with the pretrained model are listed below.
+The 1-crop error rates on the ImageNet dataset with the pretrained model are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
-| alexnet | 43.45 | 20.91 |
+| AlexNet | 43.45 | 20.91 |
 
 ### References
 
diff --git a/pytorch_vision_densenet.md b/pytorch_vision_densenet.md
index 29cab0a8..07832c01 100644
--- a/pytorch_vision_densenet.md
+++ b/pytorch_vision_densenet.md
@@ -63,7 +63,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -89,7 +89,7 @@ for i in range(top5_prob.size(0)):
 
 Dense Convolutional Network (DenseNet), connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
 
-The 1-crop error rates on the imagenet dataset with the pretrained model are listed below.
+The 1-crop error rates on the ImageNet dataset with the pretrained model are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
diff --git a/pytorch_vision_ghostnet.md b/pytorch_vision_ghostnet.md
index 693bd5b7..1808112f 100644
--- a/pytorch_vision_ghostnet.md
+++ b/pytorch_vision_ghostnet.md
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_googlenet.md b/pytorch_vision_googlenet.md
index 0e8f1d25..d1e7b66c 100644
--- a/pytorch_vision_googlenet.md
+++ b/pytorch_vision_googlenet.md
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_hardnet.md b/pytorch_vision_hardnet.md
index 9ee21888..b63f9c92 100644
--- a/pytorch_vision_hardnet.md
+++ b/pytorch_vision_hardnet.md
@@ -63,7 +63,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -95,7 +95,7 @@ were designed for comparing with MobileNet).
 
 Here we have the 4 versions of hardnet models, which contains 39, 68, 85 layers w/ or w/o Depthwise Separable Conv respectively.
 
-Their 1-crop error rates on imagenet dataset with pretrained models are listed below.
+Their 1-crop error rates on the ImageNet dataset with pretrained models are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
diff --git a/pytorch_vision_ibnnet.md b/pytorch_vision_ibnnet.md
index 55c076f7..8575c85e 100644
--- a/pytorch_vision_ibnnet.md
+++ b/pytorch_vision_ibnnet.md
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_inception_v3.md b/pytorch_vision_inception_v3.md
index a87e4d86..ae7f0da4 100644
--- a/pytorch_vision_inception_v3.md
+++ b/pytorch_vision_inception_v3.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: Inception_v3
-summary: Also called GoogleNetv3, a famous ConvNet trained on Imagenet from 2015
+summary: Also called GoogleNetv3, a famous ConvNet from 2015 trained on ImageNet
 category: researchers
 image: inception_v3.png
 author: Pytorch Team
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -85,7 +85,7 @@ for i in range(top5_prob.size(0)):
 
 Inception v3: Based on the exploration of ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.
 
-The 1-crop error rates on the imagenet dataset with the pretrained model are listed below.
+The 1-crop error rates on the ImageNet dataset with the pretrained model are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
diff --git a/pytorch_vision_meal_v2.md b/pytorch_vision_meal_v2.md
index a5b36ff7..146b62b7 100644
--- a/pytorch_vision_meal_v2.md
+++ b/pytorch_vision_meal_v2.md
@@ -67,7 +67,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_mobilenet_v2.md b/pytorch_vision_mobilenet_v2.md
index 6be178b5..0abb7e89 100644
--- a/pytorch_vision_mobilenet_v2.md
+++ b/pytorch_vision_mobilenet_v2.md
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_once_for_all.md b/pytorch_vision_once_for_all.md
index 7195975e..10bc7a64 100644
--- a/pytorch_vision_once_for_all.md
+++ b/pytorch_vision_once_for_all.md
@@ -105,7 +105,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_proxylessnas.md b/pytorch_vision_proxylessnas.md
index e72407e0..1cf115a3 100644
--- a/pytorch_vision_proxylessnas.md
+++ b/pytorch_vision_proxylessnas.md
@@ -61,7 +61,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_resnest.md b/pytorch_vision_resnest.md
index 94995d56..b7a4769e 100644
--- a/pytorch_vision_resnest.md
+++ b/pytorch_vision_resnest.md
@@ -62,7 +62,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_resnet.md b/pytorch_vision_resnet.md
index a64244a1..2ab97bdc 100644
--- a/pytorch_vision_resnet.md
+++ b/pytorch_vision_resnet.md
@@ -64,7 +64,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -91,7 +91,7 @@ for i in range(top5_prob.size(0)):
 
 Resnet models were proposed in "Deep Residual Learning for Image Recognition". Here we have the 5 versions of resnet models, which contains 18, 34, 50, 101, 152 layers respectively. Detailed model architectures can be found in Table 1.
 
-Their 1-crop error rates on imagenet dataset with pretrained models are listed below.
+Their 1-crop error rates on the ImageNet dataset with pretrained models are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
diff --git a/pytorch_vision_resnext.md b/pytorch_vision_resnext.md
index 055c1abc..4798647d 100644
--- a/pytorch_vision_resnext.md
+++ b/pytorch_vision_resnext.md
@@ -61,7 +61,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -90,7 +90,7 @@ for i in range(top5_prob.size(0)):
 
 Resnext models were proposed in [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431). Here we have the 2 versions of resnet models, which contains 50, 101 layers repspectively. A comparison in model archetechure between resnet50 and resnext50 can be found in Table 1.
 
-Their 1-crop error rates on imagenet dataset with pretrained models are listed below.
+Their 1-crop error rates on the ImageNet dataset with pretrained models are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | ----------------- | ----------- | ----------- |
diff --git a/pytorch_vision_shufflenet_v2.md b/pytorch_vision_shufflenet_v2.md
index 1fac4538..115e4549 100644
--- a/pytorch_vision_shufflenet_v2.md
+++ b/pytorch_vision_shufflenet_v2.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: ShuffleNet v2
-summary: An efficient ConvNet optimized for speed and memory, pre-trained on Imagenet
+summary: An efficient ConvNet optimized for speed and memory, pre-trained on ImageNet
 category: researchers
 image: shufflenet_v2_1.png
 author: Pytorch Team
@@ -59,7 +59,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_snnmlp.md b/pytorch_vision_snnmlp.md
index 2da87327..636addfe 100644
--- a/pytorch_vision_snnmlp.md
+++ b/pytorch_vision_snnmlp.md
@@ -62,7 +62,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 print(torch.nn.functional.softmax(output[0], dim=0))
diff --git a/pytorch_vision_squeezenet.md b/pytorch_vision_squeezenet.md
index bff65a8c..1d4374d7 100644
--- a/pytorch_vision_squeezenet.md
+++ b/pytorch_vision_squeezenet.md
@@ -61,7 +61,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -90,7 +90,7 @@ Model `squeezenet1_0` is from the [SqueezeNet: AlexNet-level accuracy with 50x f
 
 Model `squeezenet1_1` is from the [official squeezenet repo](https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1). It has 2.4x less computation and slightly fewer parameters than `squeezenet1_0`, without sacrificing accuracy.
 
-Their 1-crop error rates on imagenet dataset with pretrained models are listed below.
+Their 1-crop error rates on the ImageNet dataset with pretrained models are listed below.
 
 | Model structure | Top-1 error | Top-5 error |
 | --------------- | ----------- | ----------- |
diff --git a/pytorch_vision_vgg.md b/pytorch_vision_vgg.md
index ebf2f9f7..fbbacbd1 100644
--- a/pytorch_vision_vgg.md
+++ b/pytorch_vision_vgg.md
@@ -3,7 +3,7 @@ layout: hub_detail
 background-class: hub-background
 body-class: hub
 title: vgg-nets
-summary: Award winning ConvNets from 2014 Imagenet ILSVRC challenge
+summary: Award-winning ConvNets from the 2014 ImageNet ILSVRC challenge
 category: researchers
 image: vgg.png
 author: Pytorch Team
@@ -67,7 +67,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/pytorch_vision_wide_resnet.md b/pytorch_vision_wide_resnet.md
index 2419a807..06a8e84a 100644
--- a/pytorch_vision_wide_resnet.md
+++ b/pytorch_vision_wide_resnet.md
@@ -62,7 +62,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
diff --git a/simplenet.md b/simplenet.md
index b6b1531c..20fa4015 100644
--- a/simplenet.md
+++ b/simplenet.md
@@ -67,7 +67,7 @@ if torch.cuda.is_available():
 
 with torch.no_grad():
     output = model(input_batch)
-# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
+# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
 print(output[0])
 # The output has unnormalized scores. To get probabilities, you can run a softmax on it.
 probabilities = torch.nn.functional.softmax(output[0], dim=0)
@@ -94,7 +94,7 @@ for i in range(top5_prob.size(0)):
 
 SimpleNet models were proposed in "Lets Keep it simple, Using simple architectures to outperform deeper and more complex architectures". Here we have the 8 versions of simplenet models, which contains 1.5m, 3.2m, 5.7m and 9.5m parameters respectively. Detailed model architectures can be found in Table 1 and Table 2.
 
-Their 1-crop errors on imagenet dataset with pretrained models are listed below.
+Their 1-crop errors on the ImageNet dataset with pretrained models are listed below.
 
 The m2 variants
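
Reviewer note: every hunk above edits the same boilerplate inference recipe that these hub cards share (load a pretrained classifier, preprocess, run under `torch.no_grad()`, softmax the logits, report top-5). For context on the comments being reworded, here is a minimal, self-contained sketch of that recipe. It assumes the torchvision `resnet50` hub entry point and the sample `dog.jpg` / `imagenet_classes.txt` files the full cards reference; any specific card may differ in entry point and input size.

```python
# Minimal sketch of the shared hub-card inference recipe.
# Assumes the torchvision resnet50 entry point; other cards swap in their own model.
import urllib.request

import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet50', pretrained=True)
model.eval()

# Sample image and class labels used throughout the hub cards
urllib.request.urlretrieve(
    "https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(
    "https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt",
    "imagenet_classes.txt")

# Standard ImageNet preprocessing: resize, center-crop, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
input_image = Image.open("dog.jpg").convert('RGB')
input_batch = preprocess(input_image).unsqueeze(0)  # add a batch dimension

# Move the input and model to GPU for speed if available
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model.to('cuda')

with torch.no_grad():
    output = model(input_batch)

# output[0] holds raw, unnormalized scores (logits) over ImageNet's 1000 classes;
# softmax converts them to probabilities, and topk picks the 5 most likely classes.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
top5_prob, top5_catid = torch.topk(probabilities, 5)

with open("imagenet_classes.txt") as f:
    categories = [s.strip() for s in f.readlines()]
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
```

This is the context for the recurring comment edit: `output` is a tensor of logits, and the softmax line is what turns it into the per-class probabilities the cards report.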