
Docker fix #583

Open
wants to merge 15 commits into main

Conversation

ba2512005

Description

Slight modifications to the code to allow docker compose up to work properly.

Fixes # (issue)

  1. requirements
  2. Dockerfile
  3. app.py
  4. setup.py
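The diff itself is not shown inline here, but for docker compose up to find the Python dependencies at runtime, the image build typically has to install the requirements before the app starts. The excerpt below is only a hypothetical sketch of that shape; the base image, file names, and entry point are assumptions, not the actual contents of this PR:

```
# Hypothetical Dockerfile sketch, not the actual diff from this PR.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so Docker layer caching works;
# requirements.txt must list fastapi (and an ASGI server such as uvicorn).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

If requirements.txt was changed, the image has to be rebuilt (docker compose build) for the change to take effect; an already-built image will keep serving the old dependency set.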

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

Please put an x in the boxes that apply. You can also fill these out after creating the PR.

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • [Y] My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • [N] I have tested this code locally, and it is working as intended
  • [N] I have updated the documentation accordingly

Screenshots

If applicable, add screenshots to help explain your changes.
The Docker environment and image build work fine;
when running the container I get this error:

lollms-webui-1 | File "/app/app.py", line 15, in <module>
lollms-webui-1 |     from fastapi.middleware.cors import CORSMiddleware
lollms-webui-1 | ModuleNotFoundError: No module named 'fastapi'

This refers to line 15 of app.py, where it's trying to use a fastapi import without fastapi being available.
The weird thing is that I updated the code to remove that, yet it is still there.
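One quick way to confirm whether the running image actually contains the dependency (as opposed to the source tree merely having the right import line) is a stdlib-only check executed inside the container. This is a generic diagnostic sketch, not code from this PR:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# Run inside the container, e.g. with `docker compose exec <service> python -`.
# If this prints False, the image was built without fastapi in its requirements.
print(module_available("fastapi"))
```

If fastapi turns out to be present but the old app.py still runs, the stale code is likely coming from an installed package rather than the source checked out in the image.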

@ParisNeo, can you please test this and let me know if it works on your end, or whether we need to update the lollms_webui package from 5.0.2 to 5.0.3 so that the updated code lands in the package being pulled in by docker compose?
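Whether the container is running 5.0.2 or 5.0.3 can be checked from the installed distribution metadata rather than by reading the source. The distribution name "lollms_webui" below is an assumption; adjust it to whatever name the package is actually published under:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist: str):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

# "lollms_webui" is an assumed distribution name, not confirmed by this PR.
print(installed_version("lollms_webui"))
```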

@ba2512005
Author

@ParisNeo, let's have a discussion about the codebase: the challenges users are facing, how to address them, and how I can help improve it.

I need this solution in working order to accomplish my vision of how I want AI to interact.

Let's schedule an hour-long call to do a deep dive and work through this stuff so we can:

  1. Simplify setup: get a working codebase with docker compose working out of the box.
  2. Correct configurations: have working configs for the vector DB, internet search, and other dependent services baked into the docker compose out of the box.
  3. Fix personalities: get a working set of personalities that are actually useful; there's a lot of garbage in the personality zoo that needs to be cleaned up.
  4. Expand functionality: discuss the future of lollms and integrations that will change how we interact with computers and AI, making lollms the frontier in LLM management and design.

@ParisNeo
Owner

ParisNeo commented Jan 3, 2025

Dear @ba2512005,
Thank you for your interest in restructuring LoLLMS and for your patience awaiting my response.

I want to be transparent about both LoLLMS's positioning and my current personal situation:

  1. Project Scope
  • LoLLMS is primarily a research and experimentation platform
  • It serves as an innovation testbed for AI features and concepts
  • Many of its pioneering features (structured outputs, advanced prompting) have inspired commercial platforms
  2. Technical Considerations
  • The platform is designed for single-user, local deployment
  • Security limitations exist, particularly regarding authentication
  • Opening it to external access is not recommended due to potential vulnerabilities

Innovation History and Impact:
LoLLMS has been a consistent pioneer in AI features that are only now being celebrated as "revolutionary" in commercial platforms:

  1. Structured Code Generation
  • LoLLMS implemented the generateCode functionality over a year ago
  • This allows formatted outputs (JSON, YAML, Python, etc.) from any AI model
  • Recent implementations like Ollama's JSON output feature, while praised as groundbreaking, mirror functionality that has been standard in LoLLMS
  • My implementation works across local, remote, and paid AI models
  2. Visual and Interactive Features
  • The canvas-style interface, now celebrated in tools like Claude Canvas, was conceptualized and implemented in LoLLMS long before
  • The visual workspace and document handling capabilities preceded many current commercial implementations
  • The ability to seamlessly blend text, code, and visual elements has been a core LoLLMS feature for many months
  3. Other Pioneering Features
  • Advanced prompting techniques
  • Multi-model routing
  • Contextual awareness
  • Workspace management
  • Visual programming capabilities
  • Skills library

Alternative Recommendations:
For business/production implementation, I strongly recommend:

  • Ollama + OpenWebui combination
  • These projects have robust community support and accept regular contributions
  • They're better suited for commercial applications

Contribution Process:
If you still wish to contribute to LoLLMS:

  1. Create an issue describing your proposed changes
  2. Wait for alignment with the project roadmap
  3. Submit a pull request after approval
  4. Await code review

Personal Situation and Project Constraints:
I need to be candid about my current circumstances:

  • LoLLMS is an unfunded, personal project developed during my free time (vacations, late nights, weekends)
  • I'm currently facing significant family pressures that require my immediate attention
  • My family needs more of my presence and support during this period
  • This will inevitably result in reduced development time and slower update frequency
  • The project has already cost considerable personal time and resources

While I remain committed to LoLLMS, I must prioritize my family's needs in the coming months. This means I'll have significantly less time to dedicate to development and maintenance. I hope you understand that this is a necessary step for maintaining a healthy work-life balance.

You are still welcome to contribute to the project following the guidelines above. However, if you're looking to build something for business purposes, I strongly recommend exploring the alternatives mentioned earlier.

Thank you for your understanding and continued support of LoLLMS. Despite these temporary constraints, I remain passionate about the project and its innovative contributions to the AI community.

Best regards,
ParisNeo
