Open your `~/Downloads` directory. Or your Desktop. It's probably a mess...
> There are only two hard things in Computer Science: cache invalidation and naming things.
LlamaFS is a self-organizing file manager. It automatically renames and organizes your files based on their content and well-known conventions (e.g., time). It supports many kinds of files, including images (through Moondream) and audio (through Whisper).
LlamaFS runs in two "modes": as a batch job (batch mode) and as an interactive daemon (watch mode).
In batch mode, you can send a directory to LlamaFS, and it will return a suggested file structure and organize your files.
In watch mode, LlamaFS starts a daemon that watches your directory. It intercepts all filesystem operations and uses your most recent edits to proactively learn how you rename files. For example, if you create a folder for your 2023 tax documents and start moving 1-3 files into it, LlamaFS will automatically create and move the files for you! (Watch mode defaults to sending files through Groq if you have the `GROQ_API_KEY` environment variable set, and otherwise through Ollama.)
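If you're curious what a watcher like that could look like, here is a minimal sketch using the Python `watchdog` library. This is an illustrative example, not LlamaFS's actual implementation; the `RenameLearner` handler and the idea of buffering recent moves as model context are simplifications for the sketch:

```python
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class RenameLearner(FileSystemEventHandler):
    """Records recent renames so a model could learn the user's conventions."""

    def __init__(self):
        self.recent_moves = []  # (src, dest) pairs, usable as LLM context

    def on_moved(self, event):
        # Fires whenever a file or folder is renamed or moved.
        self.recent_moves.append((event.src_path, event.dest_path))
        print(f"observed: {event.src_path} -> {event.dest_path}")


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(RenameLearner(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```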
Uh... Sending all my personal files to an API provider?! No thank you!
BREAKING CHANGE: By default, llama-fs now uses "incognito mode" (if you have not set the `GROQ_API_KEY` environment variable), allowing you to route every request through Ollama instead of Groq. Since both use the same Llama 3 model, they perform identically. To use a different model, set the environment variable `MODEL` to a string that litellm can use as a model, like `ollama/llama3` or `groq/llama3-70b-8192`. Additionally, you can pick your image model by setting the `IMAGE_MODEL` environment variable to something like `ollama/moondream` or `gpt-4o` (defaults to moondream).
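Because the model is just a litellm model string, swapping providers is a one-line config change. As a rough sketch of how such a string gets consumed (assuming Ollama is running locally if you use an `ollama/...` string; the prompt here is made up for illustration):

```python
import os

from litellm import completion

# Any provider/model string that litellm accepts works here,
# e.g. "ollama/llama3" or "groq/llama3-70b-8192".
model = os.environ.get("MODEL", "ollama/llama3")

response = completion(
    model=model,
    messages=[{"role": "user", "content": "Suggest a folder name for a 2023 W-2 form."}],
)
print(response.choices[0].message.content)
```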
We built LlamaFS on a Python backend, leveraging the Llama 3 model through Groq for file content summarization and tree structuring. For local processing, we integrated Ollama running the same model to ensure privacy in incognito mode. The frontend is crafted with Electron, providing a sleek, user-friendly interface that allows users to interact with the suggested file structures before finalizing changes.
- It's extremely fast (by LLM standards)! Most file operations are processed in <500ms in watch mode (benchmarked by AgentOps). This is because of our smart caching, which selectively rewrites sections of the index based on the minimum necessary filesystem diff (sketched below, after this list). And, of course, Groq's super fast inference API. 😉
- It's immediately useful: it's very low friction to use and addresses a problem almost everyone has. We started using it ourselves on this project (very Meta).
- Find and remove old/unused files
- We have some really cool ideas for what's next - filesystem diffs are hard...
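On the caching point above: the core idea is to snapshot the directory tree, diff it against the previous snapshot, and re-index only what changed. The `snapshot`/`fs_diff` helpers below are a hypothetical illustration of that idea, not LlamaFS's actual cache:

```python
import os


def snapshot(root: str) -> dict[str, float]:
    """Map every file under root to its last-modified time."""
    snap = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            snap[path] = os.path.getmtime(path)
    return snap


def fs_diff(old: dict[str, float], new: dict[str, float]) -> set[str]:
    """Paths added, modified, or removed since the last snapshot.

    Only these paths need their index entries rewritten; everything
    else can be served from cache.
    """
    changed = {path for path, mtime in new.items() if old.get(path) != mtime}
    removed = set(old) - set(new)
    return changed | removed
```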
Before installing, ensure you have the following requirements:
- Python 3.9 or higher
- pip (Python package installer)
To install the project, follow these steps:
- Clone the repository:

  ```bash
  git clone https://github.com/iyaja/llama-fs.git
  ```
- Navigate to the project directory:

  ```bash
  cd llama-fs
  ```
- Install the requirements:

  ```bash
  pip install -r requirements.txt
  ```
- Install Ollama and pull the moondream model if you want image recognition:

  ```bash
  ollama pull moondream
  ```

  We highly recommend also pulling a model like llama3 (`ollama pull llama3`) for local AI inference on text files. You can control which Ollama model is used by setting the `MODEL` environment variable to a litellm-compatible model string.
- Set up the environment variables `MODEL`, `OLLAMA_API_BASE`, and whatever API keys you need.
To serve the application locally using FastAPI, run the following command:

```bash
fastapi dev server.py
```
This will run the server on port 8000 by default. The API can be queried with a curl command, passing the directory path in the request body. For example, to organize the Downloads folder:
```bash
curl -X POST http://127.0.0.1:8000/batch \
  -H "Content-Type: application/json" \
  -d '{"path": "/Users/<username>/Downloads/", "instruction": "string", "incognito": false}'
```
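If you'd rather call the endpoint from Python, the equivalent request with the `requests` library looks like this (same payload as the curl example; substitute your own username in the path):

```python
import requests

payload = {
    "path": "/Users/<username>/Downloads/",
    "instruction": "string",
    "incognito": False,
}

response = requests.post("http://127.0.0.1:8000/batch", json=payload)
response.raise_for_status()
print(response.json())
```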