You can define a deployment endpoint using a raw start command, which works with any programming language or framework. An endpoint definition has the following properties:
- name: the name of the deployment endpoint; this will be the final part of the URL.
- image: the Docker image that will be used as the deployment environment.
- server-command: the command that runs an HTTP server.
- port: (optional) the port where Valohai should expect to find the web server; defaults to 8000. Note that the server must be bound to 0.0.0.0 (all interfaces), not only localhost.
- description: (optional) a more detailed, human-readable description of the endpoint.
- files: (optional) files that will be loaded into the image, for example the trained model. The files will be placed in the same directory as your code, modified by the path property.
- endpoint:
    name: server-endpoint
    image: python:3.6
    server-command: python run_server.py
    port: 1453
    files:
      - name: model
        description: Model output file from TensorFlow
        path: model.pb
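For reference, run_server.py could look like the minimal sketch below. The handler logic is a hypothetical placeholder; the only requirements implied by the configuration above are that the server binds to 0.0.0.0 (all interfaces) and listens on port 1453.

# run_server.py - minimal sketch of the server started by server-command above.
# The request handling is illustrative only; the key points are binding to
# 0.0.0.0 (all interfaces) and listening on the port declared in the endpoint.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body; a real endpoint would feed this to the model
        # loaded from model.pb, which is placed next to this file at deploy time.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        response = json.dumps({"received_bytes": len(body)}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)


if __name__ == "__main__":
    # Bind to 0.0.0.0 so the deployment can reach the server, not just localhost.
    HTTPServer(("0.0.0.0", 1453), PredictHandler).serve_forever()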
Installing additional packages to deployment images
Valohai automatically runs pip install --user -r requirements-deployment.txt if you have additional dependencies defined in a requirements-deployment.txt file in your project root.
If you don't have a requirements-deployment.txt file, Valohai will check whether there is a requirements.txt file in your project root and, if so, run pip install --user -r requirements.txt.
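A requirements-deployment.txt is a standard pip requirements file. The packages below are hypothetical examples only (chosen to match the TensorFlow model in the example endpoint), not anything Valohai requires:

# requirements-deployment.txt - extra packages needed at serving time (example packages only)
tensorflow==1.15.0
numpy>=1.16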
Commands from installed packages end up in ~/.local/bin, following the standard pip --user installation behavior.