
After upgrading to the latest version, the original Workflow failed to work #53

Open
Hwenyi opened this issue Sep 9, 2024 · 6 comments

Comments

Hwenyi commented Sep 9, 2024

I use a proxy to expose numerous API providers, such as Gemini, through a unified OpenAI-compatible format for scheduling. In the old version, I set my own baseUrl with OpenAI as the provider and called the Gemini model (with vision enabled) in the OpenAI format. Everything ran smoothly.
However, after updating to the latest version, with all settings unchanged and OpenAI's vision mode enabled by default in the settings, the original workflow stopped working and threw this error:

Error calling LLM:
400 Get "": unsupported protocol scheme "" (request id: 20240909140519430065870dDWvROzm)
Cannoli OCR failed with the error:
Error creating LLM request: TypeError:
Failed to fetch

My .cno file is attached below.

It seems there might be an issue here. How should I go about troubleshooting it? Could you provide some insight?

To be able to upload, I had to rename the .canvas file to .md.

ocr.cno.md
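For context, here is a minimal sketch of the kind of OpenAI-format vision call the setup above relies on. The baseURL, API key, model name, and image URL are all placeholders, not values from this issue:

```ts
import OpenAI from "openai";

// Hypothetical sketch: calling a Gemini model through an OpenAI-compatible
// proxy, as described above. Every value below is a placeholder.
const client = new OpenAI({
  baseURL: "https://my-proxy.example.com/v1", // the custom baseUrl from the settings
  apiKey: process.env.PROXY_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "gemini-1.5-flash",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Transcribe the text in this image." },
        {
          type: "image_url",
          image_url: { url: "https://example.com/page.webp" },
        },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);
```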

Hwenyi (Author) commented Sep 9, 2024


Update: switching the provider to Gemini and using the native Gemini API works smoothly, except for an issue where the {{NOTE}} variable incorrectly passes files that share the same name.

Hwenyi (Author) commented Sep 9, 2024


Testing with the vision template from Cannoli College works well, even in the new version. Could a character in my prompt have caused a misinterpretation?

cephalization (Member) commented

@Hwenyi have you tried using OpenAI directly, instead of your proxy, to rule out the proxy as the issue? The error looks like a network issue.

If you give me some instructions on how to use your cannoli, I can try to reproduce it locally.

Hwenyi (Author) commented Sep 24, 2024


I tried upgrading to the new version again, and it seems the new version throws this error when there are too many images:
index.html:1 Access to fetch at 'https://img.hwenyi.live/202409221807296.webp' from origin 'app://obsidian.md' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

I attempted to adjust the Cross-Origin Resource Sharing (CORS) settings for Cloudflare R2, but it still didn't work.

cephalization (Member) commented

That error is unrelated to the number of images in a cannoli; it is about where they are hosted.

Your image host needs to set the Access-Control-Allow-Origin header to *, or to explicitly include Obsidian's origin.

CORS limits which origins can read a resource, like one hosted on R2. In your case, Obsidian makes requests from the origin app://obsidian.md, which R2 is not configured to allow via the Access-Control-Allow-Origin header.
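For reference, a permissive rule along these lines should work. This is a hypothetical sketch written as a TypeScript constant, mirroring the S3-style JSON that the R2 dashboard accepts for its CORS policy; the values are assumptions:

```ts
// Hypothetical sketch of an S3-style CORS rule for the image bucket.
// The equivalent JSON, added to the bucket's CORS policy, should let
// Obsidian's origin fetch images.
const corsRules = [
  {
    AllowedOrigins: ["app://obsidian.md"], // or ["*"] to allow any origin
    AllowedMethods: ["GET", "HEAD"],
    AllowedHeaders: ["*"],
    MaxAgeSeconds: 3600,
  },
];
```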

Hwenyi (Author) commented Sep 25, 2024


I have limited experience with coding. After extensive searching, including consulting AI, I've tried numerous ways to get cross-origin requests working. Apart from changing the image URL and manually forwarding the request through a Cloudflare Worker that rewrites the response headers (a sketch of such a worker is included below), none of the other methods resolved the issue.
I also tried other S3-compatible image hosts, such as tebi.io: under their default cross-origin settings, older versions of cannoli could successfully convert image references to base64 and send the request to the LLM, but the newer versions could not.
I tested many similar S3 and R2 image hosts set up by others as well, and under the new version they all run into this cross-origin issue, so vision requests to the LLM fail. I'm unsure whether this is due to an upgrade in the related fetch dependencies or in Obsidian itself. As an ordinary user, I have no way to resolve this for now.
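A minimal sketch of that Cloudflare Worker workaround, assuming a placeholder upstream image host:

```ts
// Hypothetical sketch of the Cloudflare Worker mentioned above: proxy each
// image request to the real host and relax the CORS header so any origin
// (including app://obsidian.md) can fetch it. The hostname is a placeholder.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Forward the path to the actual image host.
    const upstream = await fetch(`https://img.example.com${url.pathname}`);
    // Clone the response with mutable headers, then allow all origins.
    const response = new Response(upstream.body, upstream);
    response.headers.set("Access-Control-Allow-Origin", "*");
    return response;
  },
};
```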

If this problem is beyond your reach to solve, feel free to set it aside, as it is clearly not pressing. Thank you for the work you've done on cannoli; it is a very useful plugin.
