Commit
updating links to old wiki - referencing now the doc site
wolfma61 committed Jun 7, 2017
1 parent fca6f27 commit 5ecc834
Showing 78 changed files with 153 additions and 154 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
You want to contribute to CNTK? We're really excited to work together!

Please, follow the steps from the Wiki Article at
Please follow the steps from the documentation:

https://github.com/Microsoft/CNTK/wiki/Contributing-to-CNTK
https://docs.microsoft.com/en-us/cognitive-toolkit/contributing-to-cntk

Your CNTK team.
4 changes: 2 additions & 2 deletions Dependencies/CNTKCustomMKL/README.md
@@ -6,8 +6,8 @@ for usage by CNTK ("CNTK custom MKL" for short).
By default, a CNTK binary with Intel® MKL support includes a prebuilt CNTK
custom MKL.
If you want to build CNTK with Intel® MKL support yourself, you can install a
prebuilt CNTK custom MKL, available for download from the [CNTK web site](https://www.cntk.ai/mkl).
See [CNTK's setup instructions](https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-your-machine)
prebuilt CNTK custom MKL, available for download [here](https://www.microsoft.com/en-us/cognitive-toolkit/download-math-kernel-library/).
See [CNTK's setup instructions](https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine)
for more details.

If you want to add new Intel® MKL functions to be used by CNTK you will have to
4 changes: 2 additions & 2 deletions Examples/Evaluation/CSEvalClient/Program.cs
@@ -26,7 +26,7 @@ namespace Microsoft.MSR.CNTK.Extensibility.Managed.CSEvalClient
/// There are four cases shown in this program related to model loading, network creation and evaluation.
///
/// To run this program from the CNTK binary drop, you must add the NuGet package for model evaluation first.
/// Refer to <see cref="https://github.com/Microsoft/CNTK/wiki/NuGet-Package"/> for information regarding the NuGet package for model evaluation.
/// Refer to <see cref="https://docs.microsoft.com/en-us/cognitive-toolkit/NuGet-Package"/> for information regarding the NuGet package for model evaluation.
///
/// EvaluateModelSingleLayer and EvaluateModelMultipleLayers
/// --------------------------------------------------------
@@ -367,7 +367,7 @@ private static void EvaluateMultipleModels()
{
Interlocked.Increment(ref count);

// The file format corresponds to the CNTK Text Format Reader format (https://github.com/Microsoft/CNTK/wiki/CNTKTextFormat-Reader)
// The file format corresponds to the CNTK Text Format Reader format (https://docs.microsoft.com/en-us/cognitive-toolkit/Brainscript-CNTKTextFormat-Reader)
var sets = line.Split('|');
var labels = sets[1].Trim().Split(' ').Skip(1);
var features = sets[2].Trim().Split(' ').Skip(1);
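As an aside (a hypothetical helper, not part of the repository), the line layout this snippet relies on, '|'-separated fields whose first token names the field, can be sketched in Python:

```python
# Sketch of parsing one CNTK Text Format (CTF) line, mirroring the C# code
# above: fields are separated by '|' and each field starts with its name.
def parse_ctf_line(line):
    sets = line.split('|')
    labels = [float(v) for v in sets[1].strip().split(' ')[1:]]
    features = [float(v) for v in sets[2].strip().split(' ')[1:]]
    return labels, features

labels, features = parse_ctf_line("0 |labels 0 1 0 |features 0.5 0.25 0.75")
print(labels)    # [0.0, 1.0, 0.0]
print(features)  # [0.5, 0.25, 0.75]
```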
20 changes: 11 additions & 9 deletions Examples/Evaluation/README.md
@@ -2,24 +2,26 @@

The folder contains some examples using CNTK to evaluate a trained model in your application. Please note that Visual Studio 2015 Update 3 is required, and only the 64-bit target is supported.

The [CNTK Eval Examples](https://github.com/Microsoft/CNTK/wiki/CNTK-Eval-Examples) page provides more details of these examples.

The [CNTK Eval Examples](https://docs.microsoft.com/en-us/cognitive-toolkit/CNTK-Eval-Examples) page provides more details of these examples.

# CNTK Library Eval C++/C# Examples

The CNTKLibraryEvalExamples.sln contains code samples demonstrating how to use the CNTK Library API in C++ and C#.
- CNTKLibraryCSEvalCPUOnlyExamples uses the CNTK Library CPU-Only Nuget package to evaluate models on CPU-only devices in C#.
- CNTKLibraryCSEvalGPUExamples uses the CNTK Library GPU Nuget package to evaluate models on GPU devices in C#.
- CNTKLibraryCPPEvalCPUOnlyExamples uses the CNTK Library C++ API to evaluate models on CPU-only devices. It uses the CNTK Library CPU-Only Nuget package.
- CNTKLibraryCPPEvalGPUExamples uses the CNTK Library C++ API to evaluate models on GPU devices. It uses the CNTK Library GPU Nuget package.

* CNTKLibraryCSEvalCPUOnlyExamples uses the CNTK Library CPU-Only Nuget package to evaluate models on CPU-only devices in C#.
* CNTKLibraryCSEvalGPUExamples uses the CNTK Library GPU Nuget package to evaluate models on GPU devices in C#.
* CNTKLibraryCPPEvalCPUOnlyExamples uses the CNTK Library C++ API to evaluate models on CPU-only devices. It uses the CNTK Library CPU-Only Nuget package.
* CNTKLibraryCPPEvalGPUExamples uses the CNTK Library C++ API to evaluate models on GPU devices. It uses the CNTK Library GPU Nuget package.

After a successful build, the executable is saved under the $(SolutionDir)..\..\$(Platform)\$(ProjectName).$(Configuration)\ folder, e.g. ..\..\X64\CNTKLibraryCSEvalCPUOnlyExamples.Release\CNTKLibraryCSEvalCPUOnlyExamples.exe.
On Linux, only C++ is supported. Please refer to the Makefile for building the samples. The target name CNTKLIBRARY_CPP_EVAL_EXAMPLES is used to build CNTKLibraryCPPEvalExamples.

# EvalDll Eval C++/C# Examples

The EvalClients.sln contains the following projects demonstrating how to use the EvalDll library in C++ and C#.
- CPPEvalClient: this sample uses the C++ EvalDll.
- CPPEvalExtendedClient: this sample uses the C++ extended Eval interface in EvalDll to evaluate a RNN model.
- CSEvalClient: this sample uses the C# EvalDll (only for Windows). It uses the CNTK EvalDll Nuget Package.

* CPPEvalClient: this sample uses the C++ EvalDll.
* CPPEvalExtendedClient: this sample uses the C++ extended Eval interface in EvalDll to evaluate a RNN model.
* CSEvalClient: this sample uses the C# EvalDll (only for Windows). It uses the CNTK EvalDll Nuget Package.

After a successful build, the executable is saved under the $(SolutionDir)..\..\$(Platform)\$(ProjectName).$(Configuration)\ folder, e.g. ..\..\X64\CPPEvalClient.Release\CppEvalClient.exe.
On Linux, please refer to the Makefile for building the samples. The target names EVAL_CLIENT and EVAL_EXTENDED_CLIENT are used to build these projects.
2 changes: 1 addition & 1 deletion Examples/Image/Classification/AlexNet/Python/README.md
@@ -8,6 +8,6 @@ Our AlexNet model is a slight variation of the Caffe implementation of AlexNet (

`python AlexNet_ImageNet_Distributed.py`

You may use this python script to train AlexNet on multiple GPUs or machines. For a reference on distributed training, please check [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:
You may use this python script to train AlexNet on multiple GPUs or machines. For a reference on distributed training, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:

`mpiexec -n <#workers> python AlexNet_ImageNet_Distributed.py`
@@ -25,7 +25,7 @@ The network achieves an error rate of around `18%` after 30 epochs. This is comp
### ConvNet_CIFAR10_DataAug.cntk

The third example uses the same CNN as the previous example, but it improves by adding data augmentation to training. For this purpose, we use the `ImageReader` instead of the `CNTKTextFormatReader` to load the data. The ImageReader currently supports crop, flip, scale, color jittering, and mean subtraction.
For a reference on image reader and transforms, please check [here](https://github.com/Microsoft/CNTK/wiki/Image-reader).
For a reference on image reader and transforms, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/BrainScript-Image-Reader).

Run the example from the current folder using:

2 changes: 1 addition & 1 deletion Examples/Image/Classification/ConvNet/Python/README.md
@@ -49,7 +49,7 @@ All settings are identical to the previous example. The accuracy of the network

### ConvNet_CIFAR10_DataAug_Distributed.py

The fifth example uses the same CNN as ConvNet_CIFAR10_DataAug.py, but it adds support for distributed training with simple aggregation. For a reference on distributed training, please check [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python).
The fifth example uses the same CNN as ConvNet_CIFAR10_DataAug.py, but it adds support for distributed training with simple aggregation. For a reference on distributed training, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines).
Note that this example will run with a CPU-only build.

`mpiexec -n <#workers> python ConvNet_CIFAR10_DataAug_Distributed.py`
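As a conceptual sketch only (hypothetical code, not the CNTK implementation), "simple aggregation" means each worker computes a gradient on its own shard of the minibatch, and the per-worker gradients are averaged before a single parameter update:

```python
# Toy model of data-parallel training with simple aggregation:
# average the per-worker gradients element-wise, then apply one SGD step.
def aggregate_gradients(worker_grads):
    n = len(worker_grads)
    return [sum(g) / n for g in zip(*worker_grads)]

def sgd_step(params, grad, lr=0.1):
    return [p - lr * g for p, g in zip(params, grad)]

# Two workers, each holding a gradient for the same two parameters:
grads = [[0.2, 0.4], [0.4, 0.8]]
avg = aggregate_gradients(grads)   # averages to [0.3, 0.6]
params = sgd_step([1.0, 1.0], avg)
```

In the real example, MPI (launched via `mpiexec`) provides the gradient exchange between workers; the in-process averaging above merely stands in for that communication step.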
@@ -24,7 +24,7 @@ For more parameter definitions, please use `-h` command to see the help text:

### BN_Inception_CIFAR10_Distributed.py

[This example](./BN_Inception_CIFAR10_Distributed.py) is similar to BN_Inception_CIFAR10.py, but it adds support for distributed training via [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). Details can be found [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python).
[This example](./BN_Inception_CIFAR10_Distributed.py) is similar to BN_Inception_CIFAR10.py, but it adds support for distributed training via [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). Details can be found [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines#42-running-parallel-training-with-python).
Note this example requires a multi-GPU machine or an MPI hosts file to distribute the training to multiple machines.

Simple aggregation, BN-Inception, with a 2-GPU machine:
@@ -49,9 +49,9 @@ For more parameter definitions, please use `-h` command to see the help text:

### BN_Inception_ImageNet_Distributed.py

[This example](./BN_Inception_ImageNet_Distributed.py) is similar to BN_Inception_ImageNet.py, but it adds distributed training support.
[This example](./BN_Inception_ImageNet_Distributed.py) is similar to BN_Inception_ImageNet.py, but it adds distributed training support.

To run it in a distributed manner, please check [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:
To run it in a distributed manner, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines#42-running-parallel-training-with-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:

`mpiexec -n <#workers> python BN_Inception_ImageNet_Distributed.py`

@@ -143,7 +143,7 @@ TrainNetwork = {
}

# Recompute the mean and variance of the batch normalization layers, with the other parameters frozen, after training
# more details: https://github.com/Microsoft/CNTK/wiki/Post-Batch-Normalization-Statistics#post-batch-normalization-statistics
# more details: https://docs.microsoft.com/en-us/cognitive-toolkit/Post-Batch-Normalization-Statistics
BNStatistics = {
action = "bnstat"
modelPath = "$modelPath$"
@@ -143,7 +143,7 @@ TrainNetwork = {
}

# Recompute the mean and variance of the batch normalization layers, with the other parameters frozen, after training
# more details: https://github.com/Microsoft/CNTK/wiki/Post-Batch-Normalization-Statistics#post-batch-normalization-statistics
# more details: https://docs.microsoft.com/en-us/cognitive-toolkit/Post-Batch-Normalization-Statistics
BNStatistics = {
action = "bnstat"
modelPath = "$modelPath$"
@@ -136,7 +136,7 @@ TrainNetwork = {
}

# Recompute the mean and variance of the batch normalization layers, with the other parameters frozen, after training
# more details: https://github.com/Microsoft/CNTK/wiki/Post-Batch-Normalization-Statistics#post-batch-normalization-statistics
# more details: https://docs.microsoft.com/en-us/cognitive-toolkit/Post-Batch-Normalization-Statistics
BNStatistics = {
action = "bnstat"
modelPath = "$modelPath$"
@@ -137,7 +137,7 @@ TrainNetwork = {
}

# Recompute the mean and variance of the batch normalization layers, with the other parameters frozen, after training
# more details: https://github.com/Microsoft/CNTK/wiki/Post-Batch-Normalization-Statistics#post-batch-normalization-statistics
# more details: https://docs.microsoft.com/en-us/cognitive-toolkit/Post-Batch-Normalization-Statistics
BNStatistics = {
action = "bnstat"
modelPath = "$modelPath$"
@@ -147,7 +147,7 @@ TrainNetwork = {
}

# Recompute the mean and variance of the batch normalization layers, with the other parameters frozen, after training
# more details: https://github.com/Microsoft/CNTK/wiki/Post-Batch-Normalization-Statistics#post-batch-normalization-statistics
# more details: https://docs.microsoft.com/en-us/cognitive-toolkit/Post-Batch-Normalization-Statistics
BNStatistics = {
action = "bnstat"
modelPath = "$modelPath$"
2 changes: 1 addition & 1 deletion Examples/Image/Classification/ResNet/Python/README.md
@@ -19,7 +19,7 @@ for ResNet20 and ResNet110, respectively. The ResNet20 network achieves an error

### TrainResNet_CIFAR10_Distributed.py

[This example](./TrainResNet_CIFAR10_Distributed.py) is similar to TrainResNet_CIFAR10.py, but it adds support for distributed training via [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). Details can be found [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines).
[This example](./TrainResNet_CIFAR10_Distributed.py) is similar to TrainResNet_CIFAR10.py, but it adds support for distributed training via [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface). Details can be found [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines).
Note this example requires a multi-GPU machine or an MPI hosts file to distribute the training to multiple machines.

Simple aggregation, ResNet20, with a 2-GPU machine:
4 changes: 2 additions & 2 deletions Examples/Image/Classification/VGG/Python/README.md
@@ -10,7 +10,7 @@ Run the example from the current folder using:

`python VGG16_ImageNet_Distributed.py`

To run it in a distributed manner, please check [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:
To run it in a distributed manner, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines#42-running-parallel-training-with-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:

`mpiexec -n <#workers> python VGG16_ImageNet_Distributed.py`

@@ -22,6 +22,6 @@ Run the example from the current folder using:

`python VGG19_ImageNet_Distributed.py`

To run it in a distributed manner, please check [here](https://github.com/Microsoft/CNTK/wiki/Multiple-GPUs-and-machines#32-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:
To run it in a distributed manner, please check [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Multiple-GPUs-and-machines#42-running-parallel-training-with-python). For example, the command for distributed training on the same machine (with multiple GPUs) with Windows is:

`mpiexec -n <#workers> python VGG19_ImageNet_Distributed.py`
2 changes: 1 addition & 1 deletion Examples/Image/DataSets/Animals/README.md
@@ -9,4 +9,4 @@ downloaded by cd to this directory, Examples/Image/DataSets/Animals and running

After running the script, you will see two folders with images (Train and Test).

The Animals dataset is used, for example, in the Transfer Learning example; see [here](https://github.com/Microsoft/CNTK/wiki/Build-your-own-image-classifier-using-Transfer-Learning).
The Animals dataset is used, for example, in the Transfer Learning example; see [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Build-your-own-image-classifier-using-Transfer-Learning).
2 changes: 1 addition & 1 deletion Examples/Image/DataSets/Flowers/README.md
@@ -9,4 +9,4 @@ downloaded by cd to this directory, Examples/Image/DataSets/Flowers and running

After running the script, you will see a 'jpg' folder that contains the images, and three map files that split the roughly 8000 images into three sets: one of 6000 images and two of 1000 images each.

The Flowers dataset is used, for example, in the Transfer Learning example; see [here](https://github.com/Microsoft/CNTK/wiki/Build-your-own-image-classifier-using-Transfer-Learning).
The Flowers dataset is used, for example, in the Transfer Learning example; see [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Build-your-own-image-classifier-using-Transfer-Learning).
2 changes: 1 addition & 1 deletion Examples/Image/DataSets/Grocery/README.md
@@ -9,4 +9,4 @@ downloaded by cd to this directory, Examples/Image/DataSets/Grocery and running

After running the script, you will see three folders with images (positive, negative and test) and several text files that contain mappings and annotations.

The grocery dataset is used, for example, in the Fast R-CNN object detection example; see [here](https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN).
The grocery dataset is used, for example, in the Fast R-CNN object detection example; see [here](https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN).
2 changes: 1 addition & 1 deletion Examples/Image/DataSets/Pascal/README.md
@@ -10,7 +10,7 @@ downloaded by running the following Python command:

`python install_pascalvoc.py`

This will download roughly 3.15GB of data and unpack it into the folder structure that is assumed in the [object recognition tutorial](https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN#run-pascal-voc)
This will download roughly 3.15GB of data and unpack it into the folder structure that is assumed in the [object recognition tutorial](https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN#run-pascal-voc)

## Alternative: download data manually

8 changes: 4 additions & 4 deletions Examples/Image/Detection/FastRCNN/CNTK_FastRCNN_Eval.ipynb
@@ -8,7 +8,7 @@
"\n",
"This notebook demonstrates how to evaluate a single image using a CNTK Fast-RCNN model.\n",
"\n",
"For a full description of the model and the algorithm, please see the following <a href=\"https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN\" target=\"_blank\">tutorial</a>.\n",
"For a full description of the model and the algorithm, please see the following <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN\" target=\"_blank\">tutorial</a>.\n",
"\n",
"Below, you will see sample code for:\n",
"1. Preparing the input data for the network (including image size adjustments)\n",
@@ -17,11 +17,11 @@
"\n",
"<b>Important</b>: Before running this notebook, please make sure that:\n",
"<ol>\n",
"<li>You have version >= 2.0 RC 1 of CNTK installed. Installation instructions are available <a href=\"https://github.com/Microsoft/CNTK/wiki/Setup-CNTK-on-your-machine\" target=\"_blank\">here</a>.\n",
"<li>You have version >= 2.0 RC 1 of CNTK installed. Installation instructions are available <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Setup-CNTK-on-your-machine\" target=\"_blank\">here</a>.\n",
"\n",
"<li>This notebook uses the CNTK python APIs and should be run from the CNTK python environment.</li>\n",
"\n",
"<li>OpenCV and the other required python packages for the Fast-RCNN scenario are installed. Please follow the instructions <a href=\"https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN#setup\" target=\"_blank\">here</a> to install the required packages.\n",
"<li>OpenCV and the other required python packages for the Fast-RCNN scenario are installed. Please follow the instructions <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Object-Detection-using-Fast-R-CNN#setup\" target=\"_blank\">here</a> to install the required packages.\n",
"</ol>"
]
},
@@ -101,7 +101,7 @@
"The trained model accepts 3 inputs: The image data, the bounding box (region of interest, or ROI) proposals and the ground truth labels of the ROIs. Since we are evaluating a new image - we probably don't have the ground truth labels for the image, hence - we need to adjust the network to accept only the image and the ROIs as input.\n",
"In order to do that we use the CNTK APIs to clone the network and change its input nodes.\n",
"\n",
"More information and examples regarding cloning nodes of a network are available in the <a href=\"https://github.com/Microsoft/CNTK/wiki/Build-your-own-image-classifier-using-Transfer-Learning\" target=\"_blank\">Transfer Learning</a> tutorial."
"More information and examples regarding cloning nodes of a network are available in the <a href=\"https://docs.microsoft.com/en-us/cognitive-toolkit/Build-your-own-image-classifier-using-Transfer-Learning\" target=\"_blank\">Transfer Learning</a> tutorial."
]
},
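To illustrate the cloning idea from this notebook cell in isolation (a toy sketch, unrelated to the actual CNTK API), cloning a network while changing its input nodes amounts to rebuilding an expression tree with selected leaves replaced:

```python
# Toy expression-tree clone: a network is a nested tuple whose leaves are
# input names; cloning rebuilds the tree, substituting selected leaves.
def clone(node, substitutions):
    if isinstance(node, str):                 # a leaf: an input name
        return substitutions.get(node, node)
    op, *args = node
    return (op, *[clone(a, substitutions) for a in args])

# A tiny "network" whose 'features' input is swapped for a new image input:
net = ("softmax", ("plus", ("times", "W", "features"), "b"))
cloned = clone(net, {"features": "image_input"})
```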
