Issues: bitsandbytes-foundation/bitsandbytes
Link to code for reproducing table found in Multi-backend support (non-CUDA backends) documentation?
#1456
opened Dec 17, 2024 by
epage480
LoRA + deepspeed zero3 finetuning using 8bit quantization of base weights results in increased loss
bug
Something isn't working
contributions-welcome
We welcome contributions to fix this issue!
#1451
opened Dec 12, 2024 by
winglian
CUDA Setup failed despite GPU being available.
Core:setup
A bug with respect to a specific setup
CUDA Setup
waiting for info
#1449
opened Dec 12, 2024 by
Du-Zhai
CUDA Setup failed despite GPU being available.
cross-platform
CUDA Setup
waiting for info
#1446
opened Dec 11, 2024 by
SrKium
No access to rocminfo in a production environment - ability to manually set GPU arch.
AMD integration
contributions-welcome
We welcome contributions to fix this issue!
cross-platform
#1444
opened Dec 11, 2024 by
isaranto
aarch64 wheel on PyPI
build
cicd
cross-platform
enhancement
New feature or request
#1437
opened Dec 8, 2024 by
drikster80
8-bit C-Optim optimizers
contributions-welcome
We welcome contributions to fix this issue!
feature-request
#1430
opened Nov 27, 2024 by
odusseys
FSDP2 integration: torch.chunks(Params4bit) not returning Params4bit subclass
bug
Something isn't working
FSDP
help wanted
Extra attention is needed
high priority
(first issues that will be worked on)
#1424
opened Nov 21, 2024 by
mreso
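Issue #1424 above concerns tensor-subclass preservation: bitsandbytes' `Params4bit` is an `nn.Parameter` subclass, and `torch.chunk` on a `Parameter` subclass returns plain `torch.Tensor` views, dropping the subclass (and with it the quantization state that FSDP2 sharding needs). A minimal sketch of the behavior, using a hypothetical stand-in class rather than the real `Params4bit`:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for bitsandbytes' Params4bit, which is an
# nn.Parameter subclass carrying quantization state.
class Params4bitLike(nn.Parameter):
    pass

p = Params4bitLike(torch.randn(4, 4))

# nn.Parameter disables __torch_function__ dispatch, so torch.chunk
# returns plain torch.Tensor views rather than the subclass.
chunks = torch.chunk(p, 2)
print(type(p).__name__)          # Params4bitLike
print(type(chunks[0]).__name__)  # Tensor
```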
[Question] How to manually quantize every linear of a given model?
#1415
opened Nov 14, 2024 by
AaronZLT
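For the manual-quantization question in #1415, a common pattern is to walk the module tree and swap each `nn.Linear` for a quantized replacement. A hedged sketch: the helper name is our own, and the factory shown in the docstring assumes bitsandbytes' `bnb.nn.Linear4bit` constructor (check the bnb docs for exact arguments). The demo below uses an identity factory so it runs without bitsandbytes installed.

```python
import torch.nn as nn

def replace_linears(model: nn.Module, make_quantized) -> None:
    """Recursively replace every nn.Linear with make_quantized(linear).

    With bitsandbytes installed, a factory might look like (assumption):
        import bitsandbytes as bnb
        factory = lambda m: bnb.nn.Linear4bit(
            m.in_features, m.out_features, bias=m.bias is not None)
    """
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, make_quantized(child))
        else:
            replace_linears(child, make_quantized)

# Demo: an identity "factory" that just records each Linear it visits.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                      nn.Sequential(nn.Linear(8, 4)))
seen = []
replace_linears(model, lambda m: seen.append(m) or m)
print(len(seen))  # finds both linears, including the nested one
```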
Support for quantization of convolutional layers
contributions-welcome
We welcome contributions to fix this issue!
feature-request
#1414
opened Nov 13, 2024 by
JohnnyRacer
It seems that the current version of bitsandbytes is not compatible with my CUDA 12.4 library.
#1409
opened Nov 8, 2024 by
Yukiiceeee
Rewrite `assert`s as exceptions
contributions-welcome
We welcome contributions to fix this issue!
enhancement
New feature or request
#1408
opened Nov 5, 2024 by
akx
bitsandbytes-0.44.1.dev0-py3-none-macosx_13_1_arm64.whl is not a supported wheel on this platform.
#1406
opened Nov 1, 2024 by
wuhongsheng