Merge branch 'nomic-ai:main' into main
abdulrahman305 authored Nov 14, 2023
2 parents a04306e + d4ce9f4 commit 0e20f61
Showing 130 changed files with 9,114 additions and 10,323 deletions.
208 changes: 199 additions & 9 deletions .circleci/continue_config.yml
@@ -27,7 +27,176 @@ jobs:
- image: circleci/python:3.7
steps:
- run: echo "CircleCI pipeline triggered"

build-offline-chat-installer-macos:
macos:
xcode: 14.0.0
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # restore the cached Qt installation
keys:
- macos-qt-cache_v2
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
/Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
fi
- save_cache: # cache the Qt installation for future runs
key: macos-qt-cache_v2
paths:
- ~/Qt
- run:
name: Build
command: |
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-S ../gpt4all-chat \
-B .
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
mkdir upload
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
build-offline-chat-installer-linux:
machine:
image: ubuntu-2204:2023.04.2
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # restore the cached Qt installation
keys:
- linux-qt-cache
- run:
name: Setup Linux and Dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk patchelf
- run:
name: Installing Qt
command: |
if [ ! -d ~/Qt ]; then
wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
chmod +x qt-unified-linux-x64-4.6.0-online.run
./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
fi
- save_cache: # cache the Qt installation for future runs
key: linux-qt-cache
paths:
- ~/Qt
- run:
name: Build linuxdeployqt
command: |
git clone https://github.com/nomic-ai/linuxdeployqt
cd linuxdeployqt && qmake && sudo make install
- run:
name: Build
command: |
set -eo pipefail
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.6/bin
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake -DGPT4ALL_OFFLINE_INSTALLER=ON -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
~/Qt/Tools/CMake/bin/cmake --build . --target all
~/Qt/Tools/CMake/bin/cmake --build . --target install
~/Qt/Tools/CMake/bin/cmake --build . --target package
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
build-offline-chat-installer-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
- checkout
- run:
name: Update Submodules
command: |
git submodule sync
git submodule update --init --recursive
- restore_cache: # restore the cached Qt installation
keys:
- windows-qt-cache
- run:
name: Installing Qt
command: |
if (-not (Test-Path C:\Qt)) {
Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
& .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.46 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
}
- save_cache: # cache the Qt installation for future runs
key: windows-qt-cache
paths:
- C:\Qt
- run:
name: Install VulkanSDK
command: |
Invoke-WebRequest -Uri https://sdk.lunarg.com/sdk/download/1.3.261.1/windows/VulkanSDK-1.3.261.1-Installer.exe -OutFile VulkanSDK-1.3.261.1-Installer.exe
.\VulkanSDK-1.3.261.1-Installer.exe --accept-licenses --default-answer --confirm-command install
- run:
name: Build
command: |
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
$Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.6\bin"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
$Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
"-DCMAKE_GENERATOR:STRING=Ninja" `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
"-DGPT4ALL_OFFLINE_INSTALLER=ON" `
"-S ..\gpt4all-chat" `
"-B ."
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
mkdir upload
copy gpt4all-installer-win64.exe upload
- store_artifacts:
path: build/upload
build-gpt4all-chat-linux:
machine:
image: ubuntu-2204:2023.04.2
@@ -163,6 +332,7 @@ jobs:
cd build
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
@@ -244,6 +414,8 @@ jobs:
command: |
cd gpt4all-bindings/python/
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -274,7 +446,9 @@ jobs:
name: Build wheel
command: |
cd gpt4all-bindings/python
python setup.py bdist_wheel --plat-name=macosx_10_9_universal2
python setup.py bdist_wheel --plat-name=macosx_10_15_universal2
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -288,9 +462,6 @@
- run:
name: Install MinGW64
command: choco install -y mingw --force --no-progress
- run:
name: Add MinGW64 to PATH
command: $env:Path += ";C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin"
- run:
name: Install VulkanSDK
command: |
@@ -311,8 +482,9 @@
cd gpt4all-backend
mkdir build
cd build
$env:Path += ";C:\ProgramData\mingw64\mingw64\bin"
$env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
cmake -G "MinGW Makefiles" .. -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
cmake -G "MinGW Makefiles" .. -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -DKOMPUTE_OPT_USE_BUILT_IN_VULKAN_HEADER=OFF
cmake --build . --parallel
- run:
name: Build wheel
@@ -323,9 +495,11 @@
cd gpt4all
mkdir llmodel_DO_NOT_MODIFY
mkdir llmodel_DO_NOT_MODIFY/build/
cp 'C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin\*dll' 'llmodel_DO_NOT_MODIFY/build/'
cp 'C:\ProgramData\mingw64\mingw64\bin\*dll' 'llmodel_DO_NOT_MODIFY/build/'
cd ..
python setup.py bdist_wheel --plat-name=win_amd64
- store_artifacts:
path: gpt4all-bindings/python/dist
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -442,7 +616,7 @@ jobs:
- run:
name: Build Libraries
command: |
$MinGWBin = "C:\ProgramData\chocolatey\lib\mingw\tools\install\mingw64\bin"
$MinGWBin = "C:\ProgramData\mingw64\mingw64\bin"
$Env:Path += ";$MinGwBin"
$Env:Path += ";C:\Program Files\CMake\bin"
$Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
@@ -682,6 +856,7 @@ jobs:
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
@@ -711,6 +886,7 @@ jobs:
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- run:
command: |
cd gpt4all-bindings/typescript
@@ -820,14 +996,28 @@
command: |
cd gpt4all-bindings/typescript
npm set //registry.npmjs.org/:_authToken=$NPM_TOKEN
npm publish --access public --tag alpha
npm publish
workflows:
version: 2
default:
when: << pipeline.parameters.run-default-workflow >>
jobs:
- default-job
build-chat-offline-installers:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
- hold:
type: approval
- build-offline-chat-installer-macos:
requires:
- hold
- build-offline-chat-installer-windows:
requires:
- hold
- build-offline-chat-installer-linux:
requires:
- hold
build-and-test-gpt4all-chat:
when: << pipeline.parameters.run-chat-workflow >>
jobs:
2 changes: 1 addition & 1 deletion .codespellrc
@@ -1,3 +1,3 @@
[codespell]
ignore-words-list = blong, belong, afterall, som
ignore-words-list = blong, afterall, som, assistent, crasher
skip = .git,*.pdf,*.svg,*.lock
17 changes: 1 addition & 16 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -27,21 +27,6 @@ body:
- label: "The official example notebooks/scripts"
- label: "My own modified scripts"

- type: checkboxes
id: related-components
attributes:
label: Related Components
description: "Select the components related to the issue (if applicable):"
options:
- label: "backend"
- label: "bindings"
- label: "python-bindings"
- label: "chat-ui"
- label: "models"
- label: "circleci"
- label: "docker"
- label: "api"

- type: textarea
id: reproduction
validations:
@@ -67,4 +52,4 @@ body:
required: true
attributes:
label: Expected behavior
description: "A clear and concise description of what you would expect to happen."
7 changes: 1 addition & 6 deletions .gitmodules
@@ -1,9 +1,4 @@
[submodule "llama.cpp-230519"]
path = gpt4all-backend/llama.cpp-230519
url = https://github.com/ggerganov/llama.cpp.git
[submodule "llama.cpp-230511"]
path = gpt4all-backend/llama.cpp-230511
url = https://github.com/nomic-ai/llama.cpp
[submodule "llama.cpp-mainline"]
path = gpt4all-backend/llama.cpp-mainline
url = https://github.com/nomic-ai/llama.cpp.git
branch = gguf
21 changes: 16 additions & 5 deletions README.md
@@ -1,9 +1,9 @@
<h1 align="center">GPT4All</h1>

<p align="center">Open-source assistant-style large language models that run locally on your CPU</p>
<p align="center">Open-source large language models that run locally on your CPU and nearly any GPU</p>

<p align="center">
<a href="https://gpt4all.io">GPT4All Website</a>
<a href="https://gpt4all.io">GPT4All Website and Models</a>
</p>

<p align="center">
@@ -30,13 +30,24 @@ Run on an M1 macOS Device (not sped up!)
</p>

## GPT4All: An ecosystem of open-source on-edge large language models.
GPT4All is an ecosystem to train and deploy **powerful** and **customized** large language models that run locally on consumer grade CPUs. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).

> [!IMPORTANT]
> GPT4All v2.5.0 and newer only supports models in GGUF format (.gguf). Models used with a previous version of GPT4All (.bin extension) will no longer work.
GPT4All is an ecosystem to run **powerful** and **customized** large language models that work locally on consumer grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).

Learn more in the [documentation](https://docs.gpt4all.io).

The goal is simple - be the best instruction tuned assistant-style language model that any person or enterprise can freely use, distribute and build on.
A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. **Nomic AI** supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
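
As a concrete illustration, loading one of these models takes only a few lines — a minimal sketch assuming the `gpt4all` Python bindings are installed (`pip install gpt4all`) and using an illustrative model filename from the gallery:

```python
# Minimal sketch: load a GGUF model via the gpt4all Python bindings.
# The filename below is illustrative; any GGUF model from the gallery
# works, and the file is downloaded automatically on first use.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf")
with model.chat_session():
    print(model.generate("Why run an LLM locally?", max_tokens=128))
```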
### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
- **October 19th, 2023**: GGUF Support Launches with Support for:
- Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
- [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4_0, Q6 quantizations in GGUF.
- Offline build support for running old versions of the GPT4All Local LLM Chat Client.
- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on AMD, Intel, Samsung, Qualcomm and NVIDIA GPUs.
- **August 15th, 2023**: GPT4All API launches allowing inference of local LLMs from docker containers.
- **July 2023**: Stable support for LocalDocs, a GPT4All Plugin that allows you to privately and locally chat with your data.


### Chat Client
2 changes: 1 addition & 1 deletion gpt4all-api/gpt4all_api/app/api_v1/routes/engines.py
@@ -26,7 +26,7 @@ class EngineResponse(BaseModel):
async def list_engines():
'''
List all available GPT4All models from
https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models.json
https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models2.json
'''
raise NotImplementedError()
return ListEnginesResponse(data=[])
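
A hypothetical sketch of what this stub could do once implemented — fetch models2.json and return its entries — assuming the `requests` package is available and that `ListEnginesResponse` accepts the raw model dicts; neither assumption is confirmed by this diff:

import requests

MODELS2_URL = "https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models2.json"

async def list_engines():
    # Fetch the published model catalog; a blocking call is used here
    # purely for brevity. Mapping fields onto EngineResponse is left
    # as an assumption about the schema.
    models = requests.get(MODELS2_URL, timeout=10).json()
    return ListEnginesResponse(data=models)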
Diffs for the remaining changed files are not shown.
