HITSZ - Shenzhen, China
Stars
This repository is dedicated to summarizing papers on large language models in the field of law.
Lizonghang / GeoMX
Forked from INET-RC/GeoMX. GeoMX: a fast and unified distributed system for training machine learning algorithms over geographically distributed data centers.
A resource bundle of machine learning algorithms, interview experiences from major tech companies, coding practice, and algorithm-competition materials to help you ride the wave of campus recruiting! Continuously updated; stars welcome 🌟
Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B for specific downstream tasks, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning.
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Awesome-LLM-Eval: a curated list of tools, datasets/benchmarks, demos, leaderboards, papers, docs, and models, mainly for evaluation of LLMs, aiming to probe the technical boundaries of generative AI.
The first real-world FL benchmark for legal NLP
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
Shepherd: A foundational framework enabling federated instruction tuning for large language models
Desktop application of new Bing's AI-powered chat (Windows, macOS and Linux)
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Instruct-tune LLaMA on consumer hardware
Code and documentation to train Stanford's Alpaca models, and generate the data.
Making large AI models cheaper, faster and more accessible
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Ongoing research training transformer models at scale
Offsite-Tuning: Transfer Learning without Full Model
A Chinese guide to prompting ChatGPT: usage guides for various scenarios, and how to get it to follow your instructions.
Chinese and English sensitive-word lists, language detection, Chinese/foreign phone number region and carrier lookup, gender inference from names, phone number extraction, ID card extraction, email extraction, Chinese and Japanese name corpora, Chinese abbreviation dictionary, character-decomposition dictionary, word sentiment scores, stopwords, subversive-word list, violence/terror word list, simplified/traditional Chinese conversion, English approximation of Chinese pronunciation, Wang Feng lyric generator, occupation name lexicon, synonym lexicon, antonym lexicon, negation-word lexicon, car brand lexicon, car parts lexicon, continuous English segmentation, assorted Chinese word vectors, company name corpus, classical poetry corpus, IT lexicon, finance lexicon, idiom lexicon, place name lexicon, …
Embedding, NMT, Text_Classification, Text_Generation, NER, etc.
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
An open-source academic paper management tool.