adding LocalAI v2.18.1 with default values #87
Merged
@@ -0,0 +1,52 @@
**LocalAI** is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that is compatible with the OpenAI (Elevenlabs, Anthropic, ...) API specifications for local AI inferencing. It allows you to run LLMs and generate images, audio, and more locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by [Ettore Di Giacinto](https://github.com/mudler).
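
Because the API mirrors the OpenAI specification, existing OpenAI-style clients only need to be pointed at a LocalAI endpoint. The sketch below is illustrative and not part of this pack; the endpoint address and model name are assumptions (a locally reachable instance with a `ggml-gpt4all-j.bin` model already loaded):

```python
# Minimal sketch: call LocalAI's OpenAI-compatible chat completions endpoint.
# The URL and model name are assumptions for illustration only.
import requests

LOCALAI_URL = "http://localhost:8080/v1/chat/completions"  # assumed LocalAI address

payload = {
    "model": "ggml-gpt4all-j.bin",  # assumed model name; use one your instance has loaded
    "messages": [{"role": "user", "content": "Summarize what LocalAI does in one sentence."}],
    "temperature": 0.7,
}

resp = requests.post(LOCALAI_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```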

## [💻 Get Started here!](https://localai.io/basics/getting_started/index.html)

## 🚀 [Features](https://localai.io/features/)

- 📖 [Text generation with GPTs](https://localai.io/features/text-generation/) (`llama.cpp`, `gpt4all.cpp`, ... [:book: and more](https://localai.io/model-compatibility/index.html#model-compatibility-table))
- 🗣 [Text to Audio](https://localai.io/features/text-to-audio/)
- 🔈 [Audio to Text](https://localai.io/features/audio-to-text/) (audio transcription with `whisper.cpp`)
- 🎨 [Image generation with stable diffusion](https://localai.io/features/image-generation)
- 🔥 [OpenAI-alike tools API](https://localai.io/features/openai-functions/)
- 🧠 [Embeddings generation for vector databases](https://localai.io/features/embeddings/)
- ✍️ [Constrained grammars](https://localai.io/features/constrained_grammars/)
- 🖼️ [Download models directly from Huggingface](https://localai.io/models/)
- 🥽 [Vision API](https://localai.io/features/gpt-vision/)
- 📈 [Reranker API](https://localai.io/features/reranker/)
- 🆕🖧 [P2P Inferencing](https://localai.io/features/distribute/)

### 🔗 Resources

- [LLM finetuning guide](https://localai.io/docs/advanced/fine-tuning/)
- [How to build locally](https://localai.io/basics/build/index.html)
- [How to install in Kubernetes](https://localai.io/basics/getting_started/index.html#run-localai-in-kubernetes)
- [Projects integrating LocalAI](https://localai.io/docs/integrations/)
- [How-tos section](https://io.midori-ai.xyz/howtos/) (curated by our community)

### 🔗 Community and integrations

Build and deploy custom containers:
- https://github.com/sozercan/aikit

WebUIs:
- https://github.com/Jirubizu/localai-admin
- https://github.com/go-skynet/LocalAI-frontend
- QA-Pilot (an interactive chat project that leverages LocalAI LLMs for rapid understanding and navigation of GitHub code repositories): https://github.com/reid41/QA-Pilot

Model galleries:
- https://github.com/go-skynet/model-gallery

Other:
- Helm chart: https://github.com/go-skynet/helm-charts
- VSCode extension: https://github.com/badgooooor/localai-vscode-plugin
- Terminal utility: https://github.com/djcopley/ShellOracle
- Local smart assistant: https://github.com/mudler/LocalAGI
- Home Assistant: https://github.com/sammcj/homeassistant-localai / https://github.com/drndos/hass-openai-custom-conversation / https://github.com/valentinfrlch/ha-gpt4vision
- Discord bot: https://github.com/mudler/LocalAGI/tree/main/examples/discord
- Slack bot: https://github.com/mudler/LocalAGI/tree/main/examples/slack
- Shell-Pilot (interact with LLMs using LocalAI models via pure shell scripts on your Linux or macOS system): https://github.com/reid41/shell-pilot
- Telegram bot: https://github.com/mudler/LocalAI/tree/master/examples/telegram-bot
- Examples: https://github.com/mudler/LocalAI/tree/master/examples/
Binary file not shown.
@@ -0,0 +1,18 @@
{
  "addonType": "system app",
  "annotations": {
    "commit_msg": "LocalAI - The free, Open Source OpenAI alternative",
    "source": "community",
    "contributor": "pedro@spectrocloud"
  },
  "charts": [
    "charts/local-ai-3.3.0.tgz"
  ],
  "cloudTypes": [
    "all"
  ],
  "displayName": "LocalAI",
  "layer": "addon",
  "name": "local-ai",
  "version": "2.18.1"
}
@@ -0,0 +1,161 @@
# Default values for local-ai
# This is a YAML-formatted file
pack:
  content:
    images:
      - image: quay.io/go-skynet/local-ai:v2.18.1

    charts:
      - repo: https://go-skynet.github.io/helm-charts/
        name: local-ai
        version: 3.3.0

  # The namespace (on the target cluster) to install this chart
  # When not found, a new namespace will be created
  namespace: "local-ui"

charts:
  local-ai:
    fullnameOverride: local-ai

    replicaCount: 1

    deployment:
      # For Nvidia GPUs uncomment one of the following (cuda11 or cuda12):
      # image: localai/localai:v2.18.1-cublas-cuda11
      # image: localai/localai:v2.18.1-cublas-cuda12
      # image: localai/localai:v2.18.1-cublas-cuda11-ffmpeg (Video Acceleration)
      # image: localai/localai:v2.18.1-cublas-cuda12-ffmpeg (Video Acceleration)
      # More info in Docs: https://localai.io/features/gpu-acceleration/#cudanvidia-acceleration
      image:
        repository: quay.io/go-skynet/local-ai # Example: "docker.io/myapp"
        tag: v2.18.1
      env:
        threads: 4
        context_size: 512

      # # Inject Secrets into Environment:
      # secretEnv:
      #   - name: HF_TOKEN
      #     valueFrom:
      #       secretKeyRef:
      #         name: some-secret
      #         key: hf-token

      modelsPath: "/models"
      download_model:
        # To use a cloud-provided (e.g. AWS) image, provide it like: 1234356789.dkr.ecr.us-REGION-X.amazonaws.com/busybox
        image: busybox
      prompt_templates:
        # To use a cloud-provided (e.g. AWS) image, provide it like: 1234356789.dkr.ecr.us-REGION-X.amazonaws.com/busybox
        image: busybox
      pullPolicy: IfNotPresent
      imagePullSecrets: []
      # - name: secret-names

      ## Needed for GPU Nodes
      # runtimeClassName: gpu

    resources:
      {}
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      # limits:
      #   cpu: 100m
      #   memory: 128Mi
      # requests:
      #   cpu: 100m
      #   memory: 128Mi

    # Prompt templates to include
    # Note: the keys of this map will be the names of the prompt template files
    promptTemplates:
      {}
      # ggml-gpt4all-j.tmpl: |
      #   The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
      #   ### Prompt:
      #   {{.Input}}
      #   ### Response:

    # Models to download at runtime
    models:
      # Whether to force download models even if they already exist
      forceDownload: false

      # The list of URLs to download models from
      # Note: the name of the file will be the name of the loaded model
      list:
        # - url: "https://gpt4all.io/models/ggml-gpt4all-j.bin"
        #   basicAuth: base64EncodedCredentials

    initContainers: []
    # Example:
    # - name: my-init-container
    #   image: my-init-image
    #   imagePullPolicy: IfNotPresent
    #   command: ["/bin/sh", "-c", "echo init"]
    #   volumeMounts:
    #     - name: my-volume
    #       mountPath: /path/to/mount

    sidecarContainers: []
    # Example:
    # - name: my-sidecar-container
    #   image: my-sidecar-image
    #   imagePullPolicy: IfNotPresent
    #   ports:
    #     - containerPort: 1234

    # Persistent storage for models and prompt templates.
    # PVC and HostPath are mutually exclusive. If both are enabled,
    # PVC configuration takes precedence. If neither is enabled, ephemeral
    # storage is used.
    persistence:
      models:
        enabled: true
        annotations: {}
        storageClass: hostPath
        accessModes: ReadWriteMany
        size: 10Gi
        globalMount: /models
      output:
        enabled: true
        annotations: {}
        storageClass: hostPath
        accessModes: ReadWriteMany
        size: 5Gi
        globalMount: /tmp/generated

    service:
      type: ClusterIP
      # If deferring to an internal only load balancer
      # externalTrafficPolicy: Local
      port: 80
      annotations: {}
      # If using an AWS load balancer, you'll need to override the default 60s load balancer idle timeout
      # service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1200"

    ingress:
      enabled: false
      className: ""
      annotations:
        {}
        # nginx.ingress.kubernetes.io/proxy-body-size: "25m" # This value determines the maximum uploadable file size
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      hosts:
        - host: chart-example.local
          paths:
            - path: /
              pathType: ImplementationSpecific
      tls: []
      # - secretName: chart-example-tls
      #   hosts:
      #     - chart-example.local

    nodeSelector: {}

    tolerations: []

    affinity: {}
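
With the defaults above, the chart exposes LocalAI behind a ClusterIP service on port 80 in the `local-ui` namespace (named `local-ai` via `fullnameOverride`). As a quick sanity check, not part of this pack, one could port-forward the service (e.g. `kubectl port-forward -n local-ui svc/local-ai 8080:80`) and list the loaded models through the OpenAI-style `/v1/models` endpoint; the local port below is an assumption:

```python
# Sketch: list the models a deployed LocalAI instance exposes via its
# OpenAI-compatible /v1/models endpoint. Assumes the service has been
# port-forwarded to localhost:8080 as described above.
import requests

resp = requests.get("http://localhost:8080/v1/models", timeout=30)
resp.raise_for_status()
for model in resp.json().get("data", []):
    print(model["id"])
```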
This README does not follow the README template standard. Please use the provided template (see here) as a starting point.