Commit
Merge pull request #342 from mlrun/marketplace-doc-gen-f7e4fd1
Marketplace update from marketplace-doc-gen-f7e4fd1
aviaIguazio authored Feb 13, 2024
2 parents 9cf6f7e + dc10f55 commit caff285
Showing 8 changed files with 69 additions and 41 deletions.
64 changes: 64 additions & 0 deletions README.md
@@ -1,3 +1,67 @@
### Change log [2024-02-13 16:05:23]
1. Item Updated: `load_dask` (from version: `1.1.0` to `1.1.0`)
2. Item Updated: `sklearn_classifier_dask` (from version: `1.1.1` to `1.1.1`)
3. Item Updated: `test_classifier` (from version: `1.1.0` to `1.1.0`)
4. Item Updated: `xgb_custom` (from version: `1.1.0` to `1.1.0`)
5. Item Updated: `concept_drift_streaming` (from version: `1.1.0` to `1.1.0`)
6. Item Updated: `auto_trainer` (from version: `1.7.0` to `1.7.0`)
7. Item Updated: `v2_model_server` (from version: `1.1.0` to `1.1.0`)
8. Item Updated: `concept_drift` (from version: `1.1.0` to `1.1.0`)
9. Item Updated: `pii_recognizer` (from version: `0.2.0` to `0.2.0`)
10. Item Updated: `pyannote_audio` (from version: `1.0.0` to `1.0.0`)
11. Item Updated: `structured_data_generator` (from version: `1.3.0` to `1.3.0`)
12. Item Updated: `describe_dask` (from version: `1.1.0` to `1.1.0`)
13. Item Updated: `pandas_profiling_report` (from version: `1.1.0` to `1.1.0`)
14. Item Updated: `open_archive` (from version: `1.1.0` to `1.1.0`)
15. Item Updated: `tf1_serving` (from version: `1.1.0` to `1.1.0`)
16. Item Updated: `rnn_serving` (from version: `1.1.0` to `1.1.0`)
17. Item Updated: `gen_class_data` (from version: `1.2.0` to `1.2.0`)
18. Item Updated: `send_email` (from version: `1.2.0` to `1.2.0`)
19. Item Updated: `tf2_serving_v2` (from version: `1.1.0` to `1.1.0`)
20. Item Updated: `hugging_face_classifier_trainer` (from version: `0.1.0` to `0.1.0`)
21. Item Updated: `model_monitoring_stream` (from version: `1.1.0` to `1.1.0`)
22. Item Updated: `sklearn_classifier` (from version: `1.1.1` to `1.1.1`)
23. Item Updated: `batch_inference` (from version: `1.7.0` to `1.7.0`)
24. Item Updated: `churn_server` (from version: `1.1.0` to `1.1.0`)
25. Item Updated: `get_offline_features` (from version: `1.2.0` to `1.2.0`)
26. Item Updated: `github_utils` (from version: `1.1.0` to `1.1.0`)
27. Item Updated: `translate` (from version: `0.0.2` to `0.0.2`)
28. Item Updated: `model_server` (from version: `1.1.0` to `1.1.0`)
29. Item Updated: `text_to_audio_generator` (from version: `1.1.0` to `1.1.0`)
30. Item Updated: `sql_to_file` (from version: `1.1.0` to `1.1.0`)
31. Item Updated: `snowflake_dask` (from version: `1.1.0` to `1.1.0`)
32. Item Updated: `silero_vad` (from version: `1.1.0` to `1.1.0`)
33. Item Updated: `virtual_drift` (from version: `1.1.0` to `1.1.0`)
34. Item Updated: `batch_inference_v2` (from version: `2.5.0` to `2.5.0`)
35. Item Updated: `feature_selection` (from version: `1.4.0` to `1.4.0`)
36. Item Updated: `feature_perms` (from version: `1.1.0` to `1.1.0`)
37. Item Updated: `azureml_serving` (from version: `1.1.0` to `1.1.0`)
38. Item Updated: `slack_notify` (from version: `1.1.0` to `1.1.0`)
39. Item Updated: `describe_spark` (from version: `1.1.0` to `1.1.0`)
40. Item Updated: `tf2_serving` (from version: `1.1.0` to `1.1.0`)
41. Item Updated: `coxph_trainer` (from version: `1.1.0` to `1.1.0`)
42. Item Updated: `validate_great_expectations` (from version: `1.1.0` to `1.1.0`)
43. Item Updated: `question_answering` (from version: `0.3.1` to `0.3.1`)
44. Item Updated: `stream_to_parquet` (from version: `1.1.0` to `1.1.0`)
45. Item Updated: `load_dataset` (from version: `1.1.0` to `1.1.0`)
46. Item Updated: `xgb_test` (from version: `1.1.1` to `1.1.1`)
47. Item Updated: `ingest` (from version: `1.1.0` to `1.1.0`)
48. Item Updated: `aggregate` (from version: `1.3.0` to `1.3.0`)
49. Item Updated: `transcribe` (from version: `1.0.0` to `1.0.0`)
50. Item Updated: `arc_to_parquet` (from version: `1.4.1` to `1.4.1`)
51. Item Updated: `bert_embeddings` (from version: `1.2.0` to `1.2.0`)
52. Item Updated: `model_monitoring_batch` (from version: `1.1.0` to `1.1.0`)
53. Item Updated: `describe` (from version: `1.2.0` to `1.2.0`)
54. Item Updated: `huggingface_auto_trainer` (from version: `1.0.0` to `1.0.0`)
55. Item Updated: `xgb_trainer` (from version: `1.1.1` to `1.1.1`)
56. Item Updated: `coxph_test` (from version: `1.1.0` to `1.1.0`)
57. Item Updated: `azureml_utils` (from version: `1.3.0` to `1.3.0`)
58. Item Updated: `xgb_serving` (from version: `1.1.2` to `1.1.2`)
59. Item Updated: `v2_model_tester` (from version: `1.1.0` to `1.1.0`)
60. Item Updated: `hugging_face_serving` (from version: `1.0.0` to `1.0.0`)
61. Item Updated: `model_server_tester` (from version: `1.1.0` to `1.1.0`)
62. Item Updated: `onnx_utils` (from version: `1.2.0` to `1.2.0`)
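Every entry in the generated changelog above follows one fixed pattern, and notably each one lists identical old and new versions, so this run re-published items without bumping their versions. A minimal sketch for extracting the fields from such a line (the `parse_entry` helper name is illustrative, not part of the doc-gen tooling):

```python
import re

# Pattern of the generated changelog entries, e.g.:
#   1. Item Updated: `load_dask` (from version: `1.1.0` to `1.1.0`)
ENTRY_RE = re.compile(
    r"^\d+\. Item Updated: `(?P<name>[^`]+)` "
    r"\(from version: `(?P<old>[^`]+)` to `(?P<new>[^`]+)`\)$"
)

def parse_entry(line: str):
    """Return (name, old_version, new_version), or None if the line doesn't match."""
    m = ENTRY_RE.match(line.strip())
    return (m.group("name"), m.group("old"), m.group("new")) if m else None
```

With this, checking which items actually changed version is a one-liner: filter for entries where `old != new`.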

### Change log [2024-02-11 12:52:07]
1. Item Updated: `batch_inference` (from version: `1.7.0` to `1.7.0`)
2. Item Updated: `structured_data_generator` (from version: `1.3.0` to `1.3.0`)
2 changes: 1 addition & 1 deletion catalog.json

Large diffs are not rendered by default.

@@ -478,10 +478,7 @@
 "# Import the `batch_inference_v2` function from the functions hub:\n",
 "batch_inference_function = mlrun.import_function('hub://batch_inference_v2')\n",
 "# you can import the function from the current directory as well: \n",
-"# batch_inference_function = mlrun.import_function(\"function.yaml\")\n",
-"\n",
-"# Set the desired artifact path:\n",
-"artifact_path = \"./\""
+"# batch_inference_function = mlrun.import_function(\"function.yaml\")\n"
]
},
{
@@ -1448,23 +1445,18 @@
 "# 1. Generate data:\n",
 "generate_data_run = demo_function.run(\n",
 " handler=\"generate_data\",\n",
-" artifact_path=artifact_path,\n",
 " returns=[\"training_set : dataset\", \"prediction_set : dataset\"],\n",
-" local=True,\n",
 ")\n",
 "\n",
 "# 2. Train a model:\n",
 "train_run = demo_function.run(\n",
 " handler=\"train\",\n",
-" artifact_path=artifact_path,\n",
 " inputs={\"training_set\": generate_data_run.outputs[\"training_set\"]},\n",
-" local=True,\n",
 ")\n",
 "\n",
 "# 3. Perform batch prediction:\n",
 "batch_inference_run = batch_inference_function.run(\n",
 " handler=\"infer\",\n",
-" artifact_path=artifact_path,\n",
 " inputs={\"dataset\": generate_data_run.outputs[\"prediction_set\"]},\n",
 " params={\n",
 " \"model_path\": train_run.outputs[\"model\"],\n",
Expand All @@ -1474,7 +1466,6 @@
 " \"model_endpoint_drift_threshold\": 0.2,\n",
 " \"model_endpoint_possible_drift_threshold\": 0.1,\n",
 " },\n",
-" local=True,\n",
 ")"
]
},
@@ -815,9 +815,6 @@ <h3>4.2. Run the Example with MLRun<a class="headerlink" href="#run-the-example-
 <span class="n">batch_inference_function</span> <span class="o">=</span> <span class="n">mlrun</span><span class="o">.</span><span class="n">import_function</span><span class="p">(</span><span class="s1">'hub://batch_inference_v2'</span><span class="p">)</span>
 <span class="c1"># you can import the function from the current directory as well: </span>
 <span class="c1"># batch_inference_function = mlrun.import_function("function.yaml")</span>
-
-<span class="c1"># Set the desired artifact path:</span>
-<span class="n">artifact_path</span> <span class="o">=</span> <span class="s2">"./"</span>
</pre></div>
</div>
</div>
Expand All @@ -833,23 +830,18 @@ <h3>4.2. Run the Example with MLRun<a class="headerlink" href="#run-the-example-
 <div class="highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># 1. Generate data:</span>
 <span class="n">generate_data_run</span> <span class="o">=</span> <span class="n">demo_function</span><span class="o">.</span><span class="n">run</span><span class="p">(</span>
 <span class="n">handler</span><span class="o">=</span><span class="s2">"generate_data"</span><span class="p">,</span>
-<span class="n">artifact_path</span><span class="o">=</span><span class="n">artifact_path</span><span class="p">,</span>
 <span class="n">returns</span><span class="o">=</span><span class="p">[</span><span class="s2">"training_set : dataset"</span><span class="p">,</span> <span class="s2">"prediction_set : dataset"</span><span class="p">],</span>
-<span class="n">local</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
 <span class="p">)</span>

 <span class="c1"># 2. Train a model:</span>
 <span class="n">train_run</span> <span class="o">=</span> <span class="n">demo_function</span><span class="o">.</span><span class="n">run</span><span class="p">(</span>
 <span class="n">handler</span><span class="o">=</span><span class="s2">"train"</span><span class="p">,</span>
-<span class="n">artifact_path</span><span class="o">=</span><span class="n">artifact_path</span><span class="p">,</span>
 <span class="n">inputs</span><span class="o">=</span><span class="p">{</span><span class="s2">"training_set"</span><span class="p">:</span> <span class="n">generate_data_run</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="s2">"training_set"</span><span class="p">]},</span>
-<span class="n">local</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
 <span class="p">)</span>

 <span class="c1"># 3. Perform batch prediction:</span>
 <span class="n">batch_inference_run</span> <span class="o">=</span> <span class="n">batch_inference_function</span><span class="o">.</span><span class="n">run</span><span class="p">(</span>
 <span class="n">handler</span><span class="o">=</span><span class="s2">"infer"</span><span class="p">,</span>
-<span class="n">artifact_path</span><span class="o">=</span><span class="n">artifact_path</span><span class="p">,</span>
 <span class="n">inputs</span><span class="o">=</span><span class="p">{</span><span class="s2">"dataset"</span><span class="p">:</span> <span class="n">generate_data_run</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="s2">"prediction_set"</span><span class="p">]},</span>
 <span class="n">params</span><span class="o">=</span><span class="p">{</span>
 <span class="s2">"model_path"</span><span class="p">:</span> <span class="n">train_run</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="s2">"model"</span><span class="p">],</span>
Expand All @@ -859,7 +851,6 @@ <h3>4.2. Run the Example with MLRun<a class="headerlink" href="#run-the-example-
 <span class="s2">"model_endpoint_drift_threshold"</span><span class="p">:</span> <span class="mf">0.2</span><span class="p">,</span>
 <span class="s2">"model_endpoint_possible_drift_threshold"</span><span class="p">:</span> <span class="mf">0.1</span><span class="p">,</span>
 <span class="p">},</span>
-<span class="n">local</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
 <span class="p">)</span>
</pre></div>
</div>
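The `@@` markers in the hunks above follow the unified-diff convention `@@ -old_start,old_len +new_start,new_len @@`, so each header encodes how many lines the hunk spans before and after the change. A small sketch for reading them (the helper names are illustrative):

```python
import re

# Unified-diff hunk headers, e.g. "@@ -833,23 +830,18 @@".
HUNK_RE = re.compile(r"^@@ -(\d+),(\d+) \+(\d+),(\d+) @@")

def hunk_counts(header: str):
    """Return (old_start, old_len, new_start, new_len) parsed from a hunk header."""
    m = HUNK_RE.match(header)
    if not m:
        raise ValueError(f"not a hunk header: {header!r}")
    return tuple(int(g) for g in m.groups())

def net_change(header: str) -> int:
    """Lines added minus lines removed across the hunk."""
    _, old_len, _, new_len = hunk_counts(header)
    return new_len - old_len
```

For example, `@@ -833,23 +830,18 @@` shrinks the file by five lines, which is exactly the three `artifact_path` lines and two `local=True` lines deleted in that hunk.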
2 changes: 1 addition & 1 deletion functions/development/catalog.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion functions/development/tags.json
@@ -1 +1 @@
-{"categories": ["deep-learning", "model-serving", "utils", "Data Generation", "data-analysis", "NLP", "data-preparation", "machine-learning", "Audio", "Deep Learning", "PyTorch", "data-validation", "feature-store", "GenAI", "Data Preparation", "model-testing", "etl", "Huggingface", "monitoring", "model-training"], "kind": ["serving", "dask", "job", "nuclio", "nuclio:serving"]}
+{"kind": ["nuclio", "nuclio:serving", "serving", "job", "dask"], "categories": ["GenAI", "deep-learning", "model-serving", "data-validation", "machine-learning", "Data Preparation", "PyTorch", "utils", "feature-store", "model-training", "etl", "data-analysis", "model-testing", "monitoring", "Audio", "Deep Learning", "NLP", "Data Generation", "Huggingface", "data-preparation"]}
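The tags.json change above reorders the top-level keys and the list elements without adding or removing any tag. A sketch of an order-insensitive comparison that confirms this (the `same_tags` helper and the abbreviated example payloads are illustrative, not part of the repository):

```python
import json

def same_tags(a: str, b: str) -> bool:
    """Compare two tags.json payloads, ignoring key order and list order."""
    da, db = json.loads(a), json.loads(b)
    return (da.keys() == db.keys()
            and all(sorted(da[k]) == sorted(db[k]) for k in da))

# Abbreviated payloads in the same shape as the diff above:
old = '{"categories": ["etl", "NLP"], "kind": ["job", "dask"]}'
new = '{"kind": ["dask", "job"], "categories": ["NLP", "etl"]}'
```

Applied to the full before/after lines in the diff, this comparison shows the update is a pure reordering, consistent with a regeneration that serializes the same tag sets in a different traversal order.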
