You can deploy your Valohai endpoints to GPU nodes. You will need:
- Existing Kubernetes cluster that has been added to Valohai as a deployment target.
- Node group with GPUs enabled.
  - Note that it is not possible to add GPUs to an existing node group in GKE.
- NVIDIA drivers installed on the nodes.
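Kubernetes typically learns about the GPUs through the NVIDIA device plugin DaemonSet. A minimal sketch of installing it (the version tag below is an assumption; check the NVIDIA k8s-device-plugin releases for a current one):

```shell
# Install the NVIDIA device plugin DaemonSet so Kubernetes can schedule
# nvidia.com/gpu resources (version tag is an example, not a recommendation)
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/nvidia-device-plugin.yml

# Verify the plugin pods are running on the GPU nodes
kubectl get pods -n kube-system -l name=nvidia-device-plugin-ds
```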
Once you have the node group and the DaemonSet set up in your cluster, you can configure your Valohai endpoints to use the GPUs either via `valohai.yaml` or in the UI.
It is important to note that Valohai endpoints currently support neither GPU autoscaling nor GPU time-sharing. Each endpoint will use the number of GPUs assigned to it under the `devices` setting (see instructions below). If there are no free GPUs on the node, the next endpoint requesting GPUs will fail, even if other resources (CPU, memory) are available. To overcome this, you can increase the number of GPUs on the node, or create new node groups and assign endpoints to them using the `node-selector` setting.
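Because scheduling fails as soon as no whole GPU is free, it can help to check a node's GPU capacity and current requests before deploying another endpoint (the node name below is hypothetical; substitute your own):

```shell
# How many GPUs the node advertises as allocatable
kubectl get node my-gpu-node -o jsonpath='{.status.allocatable.nvidia\.com/gpu}'

# How many GPUs (and CPU/memory) are already requested on the node
kubectl describe node my-gpu-node | grep -A 8 "Allocated resources"
```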
To deploy your endpoint on a node with a GPU, you will need to define the `devices` setting in `valohai.yaml`. In addition, if you have several node groups with GPUs, you can use the `node-selector` setting to choose which one to use. In the example below, the endpoint will be assigned 1 GPU in the node group that has the label `accelerator=tesla-v100`:
```yaml
- endpoint:
    name: gpu-example
    image: tiangolo/uvicorn-gunicorn-fastapi:python3.7
    server-command: uvicorn predict:app --host 0.0.0.0 --port 8000
    files:
      - name: model
        description: Model output file from TensorFlow
        path: model.h5
    node-selector: accelerator=tesla-v100
    resources:
      devices:
        nvidia.com/gpu: 1
```
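For the `node-selector` above to match anything, the `accelerator=tesla-v100` label must actually exist on the nodes. GKE node pools usually carry an automatic `cloud.google.com/gke-accelerator` label you can select on instead, but a custom label can also be attached by hand (the node name here is hypothetical):

```shell
# Attach the label the endpoint's node-selector expects
kubectl label nodes my-gpu-node accelerator=tesla-v100

# Confirm which nodes now match the selector
kubectl get nodes -l accelerator=tesla-v100
```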
In the UI
As in `valohai.yaml`, you can define the `node-selector` setting for your endpoint in the UI.