Repository for supporting PyTorch. This repo is experimental and is being used to start work related to this proposal.
Run the following to deploy the operator to the namespace of your current context:
```bash
RBAC=true # set to false if you do not have an RBAC-enabled cluster
helm install pytorch-operator-chart -n pytorch-operator --set rbac.install=${RBAC} --wait --replace
```
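Before creating any jobs, it can help to sanity-check the deployment. A minimal sketch, assuming the chart installs the operator's pods into the `pytorch-operator` namespace and registers a CRD whose name contains "pytorch" (both names may vary with the chart version):

```bash
# Confirm the operator pod is up (namespace is an assumption from the install command above).
kubectl get pods -n pytorch-operator

# Confirm the PyTorchJob custom resource definition was registered
# (the exact CRD name is an assumption and may differ by operator version).
kubectl get crd | grep -i pytorch
```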
For this proof-of-concept example we will use a ConfigMap that contains our distributed training script:
```bash
kubectl create -f examples/mnist/configmap.yaml
```
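If you want to look at the embedded training script before starting a job, you can dump the live object back out; this reads whatever was created from the manifest above, so no extra names are assumed:

```bash
# Print the ConfigMap, including the embedded training script, as YAML.
kubectl get -f examples/mnist/configmap.yaml -o yaml
```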
Create a PyTorchJob resource to start training:
```bash
kubectl create -f examples/mnist/pytorchjob.yaml
```
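Once created, the job can be queried like any other Kubernetes object. A sketch, assuming the CRD exposes the plural resource name `pytorchjobs` and the job in the manifest is named `example-job` (the name used in the label selectors below); adjust if you edited the manifest:

```bash
# List PyTorchJob resources in the current namespace.
kubectl get pytorchjobs

# Inspect the job's spec, status, and recent events.
kubectl describe pytorchjob example-job
```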
You should now be able to see the job's pods running, matching the specified number of replicas:
```bash
kubectl get pods -a -l pytorch_job_name=example-job
```
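If the replicas are still starting up, you can watch the same label selector until every pod reaches the Running phase:

```bash
# Watch pod phase transitions for the job's replicas (Ctrl-C to stop).
kubectl get pods -a -l pytorch_job_name=example-job -w
```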
Training runs for 10 epochs and takes 5-10 minutes on a CPU cluster. Logs can be inspected while the job runs. (TODO(jose5918): Find a better example for distributed training)
Tail the logs for a pod to see its training progress or final status:
```bash
PODNAME=$(kubectl get pods -a -l pytorch_job_name=example-job,task_index=0 -o name)
kubectl logs -f ${PODNAME}
```
Example output:
```
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Processing...
Done!
Rank 0 , epoch 0 : 1.2753884393269066
Rank 0 , epoch 1 : 0.5752273188915842
Rank 0 , epoch 2 : 0.4370715184919616
Rank 0 , epoch 3 : 0.37090928852558136
Rank 0 , epoch 4 : 0.3224359404430715
Rank 0 , epoch 5 : 0.29541213348158385
Rank 0 , epoch 6 : 0.27593734307583967
Rank 0 , epoch 7 : 0.25898529327055536
Rank 0 , epoch 8 : 0.24815570648862864
Rank 0 , epoch 9 : 0.22647559368756534
```
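To compare progress across all replicas rather than tailing a single pod, a small loop over the same label selector works; a minimal sketch, assuming the job name `example-job` used above:

```bash
# Dump the logs of every replica of the job, labeled by pod name.
for pod in $(kubectl get pods -a -l pytorch_job_name=example-job -o name); do
  echo "=== ${pod} ==="
  kubectl logs ${pod}
done
```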