- You can output any type of file from Valohai, for example trained models, CSV files, images, or anything else.
- Valohai outputs will be saved to the project's default data store (AWS S3, Azure Storage, GCP Cloud Storage, OpenStack Swift).
- By default, all outputs are saved to your data store at the end of an execution. You can use Live outputs to upload files mid-execution (see the sketch below).
- Each output will receive a unique datum:// link that you can use to download the file in another execution. If you're using your own data store, you'll also receive a link specific to that data store.
Write your files to the /valohai/outputs/ folder to save and upload them to your cloud storage. The saved files will appear under the Outputs tab of your execution and in the project's Data tab.
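If your pipeline needs a file before the execution finishes, one common live-output convention is sketched below: write the file into the outputs directory and then mark it read-only to signal that it is complete. Treat the read-only trigger as an assumption here and check the Live outputs documentation for your setup; save_checkpoint is a hypothetical helper standing in for your own code.
import os
import stat
# Write the file into the Valohai outputs directory as usual
out_dir = os.getenv('VH_OUTPUTS_DIR', '.outputs')
checkpoint_path = os.path.join(out_dir, 'checkpoint.ckpt')
save_checkpoint(checkpoint_path)  # hypothetical helper from your own training code
# Assumption: marking the finished file read-only tells Valohai to upload it
# mid-execution instead of waiting for the end of the execution
os.chmod(checkpoint_path, stat.S_IREAD | stat.S_IRGRP | stat.S_IROTH)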
Save output files with valohai-utils
import valohai
# Define a path in the Valohai outputs directory for "mydata.csv"
out_path = valohai.outputs().path('mydata.csv')
# Write an existing pandas DataFrame (df) to the output path
df.to_csv(out_path)
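The same pattern covers trained models. A minimal sketch, assuming a fitted scikit-learn style estimator called model and the joblib library (both are assumptions, not part of the example above):
import joblib
import valohai
# model is assumed to be a trained estimator from your own training code
model_path = valohai.outputs().path('model.pkl')
joblib.dump(model, model_path)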
Save output files in Python without valohai-utils
import os
# Get the location of the Valohai outputs directory
VH_OUTPUTS_DIR = os.getenv('VH_OUTPUTS_DIR', '.outputs')
# Define a filepath in Valohai outputs directory
# e.g. /valohai/outputs/filename.ext
out_path = os.path.join(VH_OUTPUTS_DIR, 'mydata.csv')
df.to_csv(out_path)
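You can also organize outputs into subdirectories. The sketch below assumes that files written to subdirectories of /valohai/outputs/ are uploaded with their relative path as part of the output name:
import os
VH_OUTPUTS_DIR = os.getenv('VH_OUTPUTS_DIR', '.outputs')
# Create a subdirectory inside the outputs directory
model_dir = os.path.join(VH_OUTPUTS_DIR, 'models')
os.makedirs(model_dir, exist_ok=True)
# df is assumed to be an existing pandas DataFrame
df.to_csv(os.path.join(model_dir, 'mydata.csv'))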
Save output files in R
# Get the location of Valohai outputs directory
vh_outputs_path <- Sys.getenv("VH_OUTPUTS_DIR", unset = ".outputs")
# Define a filepath in Valohai outputs directory
# e.g. /valohai/outputs/filename.ext
out_path <- file.path(vh_outputs_path, "mydata.csv")
# Write an existing data frame (output) to the output path
write.csv(output, file = out_path)
You can then use the copied datum:// link as an input in another execution. For example, use a link to a trained model file in a new Batch Inference execution.
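As a sketch of the receiving side, assuming the new step declares an input named model (a hypothetical name) whose value is the copied datum:// link:
import os
import valohai
# With valohai-utils: resolve the local path of the "model" input
model_path = valohai.inputs('model').path()
# Without valohai-utils: inputs are downloaded under /valohai/inputs/<input-name>/
inputs_dir = os.getenv('VH_INPUTS_DIR', '.inputs')
model_dir = os.path.join(inputs_dir, 'model')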