How to run NeuralProphet on GPU? #1652

frankelau opened this issue Oct 19, 2024 · 1 comment

quant-exchange commented Nov 21, 2024

Make sure you run this code before you create a NeuralProphet instance (assuming you're using a CUDA/NVIDIA GPU).

import torch

# Check if CUDA is available; if not, fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# If CUDA is available, print information about each available GPU
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"Device {i}: {torch.cuda.get_device_name(i)} (ID: {torch.cuda.get_device_properties(i)})")

    # Set the GPU device to use
    gpu_index = 0
    torch.cuda.set_device(gpu_index)
    print(f"Using device {gpu_index}: {torch.cuda.get_device_name(gpu_index)}")
else:
    # Fallback to CPU
    print("CUDA not available, using CPU.")
