Model Persistence

The Save Model and Load Model processors allow you to save a trained model to files and load it later to make predictions.

Fire Insights allows you to save the ML models you create. Saved models can be loaded in the same or in other workflows for scoring. They can also be downloaded from the HDFS Browse page.

ML models can be saved to the following locations:

  • HDFS : when Fire Insights is connected to a Hadoop cluster
  • S3 : when the jobs are running on EMR or on a Databricks cluster on AWS. Even when Fire is running in local standalone mode on AWS, models can be saved to S3.

To save to S3, provide the model path as an S3 URI, for example s3://models/priceprediction

Spark ML Models

Spark ML models are saved as a directory containing multiple files. Fire Insights provides processors for saving and loading Spark ML models.

Save Model processor


ML Save Workflow


Load Model processor


ML Load Workflow


H2O Models

H2O models can be saved in binary format or in MOJO format. Fire Insights provides processors for both.

Save H2O Model processor


Load H2O Model processor


More details on saving and loading H2O models are available here:

http://docs.h2o.ai/h2o/latest-stable/h2o-docs/save-and-load-model.html

Save and Load H2O Workflow


Scikit-Learn Models

Scikit-learn models are persisted with Python's pickle module. Fire Insights provides processors for saving and loading the pickle files.

More details on scikit-learn model persistence are available here:

https://scikit-learn.org/stable/modules/model_persistence.html
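The pickle round trip behind these processors can be sketched as follows (a minimal illustration; the model.pkl filename is a placeholder for the path configured in the processor):

```python
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=500).fit(X, y)

# Save: serialize the fitted estimator to a single binary file
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Load: deserialize and score, e.g. in a different workflow
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)

predictions = loaded.predict(X)
```

Unlike Spark ML models, a pickled scikit-learn model is a single file rather than a directory, and it should be loaded with the same scikit-learn version that produced it.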