Stable Beluga 2 Review: a Llama 2 70B model fine-tuned on an Orca-style dataset
About Stable Beluga 2
Stable Beluga 2 is a new open-source LLM developed by Stability AI, based on Meta AI's Llama 2 model with 70 billion parameters. At the time of writing, it leads Hugging Face's Open LLM Leaderboard. Like most other LLMs, you'll need an interface installed to run Stable Beluga 2 on your own hardware, and the system requirements are fairly steep. Smaller 7B and 13B parameter versions of the model are also available on the Hugging Face Hub. The model can also be run directly on Hugging Face, but this requires access to PRO Spaces.
Stable Beluga 2 is a Llama 2 70B model fine-tuned on an Orca-style dataset.
What is Stable Beluga 2?
Stable Beluga 2 is an auto-regressive language model fine-tuned from Llama 2 70B. It is an English-language model served through the Hugging Face Transformers library, and its fine-tuned checkpoints are licensed under the STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT.
Usage
Start chatting with Stable Beluga 2 using the following code snippet:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and shard the 70B model across available GPUs
tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga2", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")

# Build a prompt in the expected "### System / ### User / ### Assistant" format
system_prompt = "### System:\nYou are Stable Beluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"
message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

# Generate a completion and decode it
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
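The full 70B checkpoint needs a substantial amount of GPU memory in float16. As a rough, unofficial sketch (not part of Stability AI's usage instructions, and assuming bitsandbytes is installed), the model can instead be loaded in 4-bit precision to reduce the memory footprint, at some cost in output quality:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: bitsandbytes is installed and the available GPUs support 4-bit inference.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 while weights stay 4-bit
)

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/StableBeluga2",
    quantization_config=quant_config,
    device_map="auto",
)
```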
Stable Beluga 2 should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Stable Beluga 2
```
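For illustration, here is a small helper that assembles a prompt in this format; the function is our own (not part of the official snippet), and the way multi-turn history is concatenated is an assumption, since the model card only shows a single turn:

```python
def build_prompt(system_message: str, turns: list[tuple[str, str]], next_user_message: str) -> str:
    """Assemble a Stable Beluga prompt from prior (user, assistant) turns plus a new user message.

    Illustrative helper, not part of the official model card.
    """
    prompt = f"### System:\n{system_message}\n\n"
    for user_msg, assistant_msg in turns:
        prompt += f"### User: {user_msg}\n\n### Assistant:\n{assistant_msg}\n\n"
    prompt += f"### User: {next_user_message}\n\n### Assistant:\n"
    return prompt

# Example: first turn of a conversation
prompt = build_prompt(
    "You are Stable Beluga, an AI that follows instructions extremely well.",
    turns=[],
    next_user_message="Write me a poem please",
)
```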
Model Details
- Developed by: Stability AI
- Model type: Stable Beluga 2 is an auto-regressive language model fine-tuned on Llama2 70B.
- Language(s): English
- Library: HuggingFace Transformers
- License: Fine-tuned checkpoints (Stable Beluga 2) are licensed under the STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT
- Contact: For questions and comments about the model, please email [email protected]
Training Dataset
Stable Beluga 2 is trained on Stability AI's internal Orca-style dataset.
Training Procedure
The models are trained via supervised fine-tuning on the aforementioned dataset, in mixed precision (BF16), and optimized with AdamW, using the following hyperparameters:
| Dataset | Batch Size | Learning Rate | Learning Rate Decay | Warm-up Steps | Weight Decay | Betas |
|---|---|---|---|---|---|---|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
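As a rough illustration only (Stability AI has not released its training code, so everything beyond the numbers in the table above, including the batch-size split, output path, and optimizer variant, is an assumption), these hyperparameters map onto Hugging Face TrainingArguments roughly like this:

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters onto TrainingArguments;
# only the numeric values come from the table above.
orca_pt1_args = TrainingArguments(
    output_dir="stable-beluga-2-orca-pt1",  # hypothetical output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,          # 4 x 8 x 8 GPUs = 256 global batch (illustrative split)
    learning_rate=3e-5,                     # peak learning rate
    lr_scheduler_type="cosine",             # note: the built-in cosine schedule decays to 0,
                                            # so a 3e-6 floor would need a custom scheduler
    warmup_steps=100,
    weight_decay=1e-6,
    adam_beta1=0.9,
    adam_beta2=0.95,
    bf16=True,                              # mixed-precision BF16 training
    optim="adamw_torch",
)
```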
Ethical Considerations and Limitations
Beluga is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
How to cite
@misc{StableBelugaModels,
url={https://huggingface.co/stabilityai/StableBeluga2},
title={Stable Beluga models},
author={Mahan, Dakota and Carlow, Ryan and Castricato, Louis and Cooper, Nathan and Laforte, Christian}
}
Citations
@misc{touvron2023llama,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
year={2023},
eprint={2307.09288},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
What is Stable Beluga, the fine-tuned large language model from Stability AI?
In the fast-moving world of artificial intelligence, Stability AI and its CarperAI lab have made a significant stride with the launch of Stable Beluga 1 and Stable Beluga 2. These two new open-access Large Language Models (LLMs) were unveiled in July 2023 and have since been making waves in the AI community.
Stable Beluga 1, the first of the duo, is built on the LLaMA 65B foundation model and fine-tuned on a synthetically generated dataset, an approach that helps set it apart from its peers. Stable Beluga 2, the second model, is based on the Llama 2 70B foundation model and delivers industry-leading performance.
Stable Beluga training
The training used to shape the Stable Beluga models did not emerge in a vacuum: it builds directly on the methodology proposed by Microsoft in its paper "Orca: Progressive Learning from Complex Explanation Traces of GPT-4," which guided the creation of the Stable Beluga models.
The data generation process closely follows Microsoft's approach, with a few differences. One divergence lies in the choice of data sources, which Stability AI selected to meet its quality standards and the specific requirements of its development process. The instructions were generated from the following datasets:
- COT Submix Original
- NIV2 Submix Original
- FLAN 2021 Submix Original
- T0 Submix Original
The actual training dataset consists of 600,000 data points, roughly 10% of the size of the dataset used in the original Orca research project, and every one of these data points was synthetically generated.
These high-quality instructions trace back to a set of datasets created by Enrico Shippole, which are widely regarded for their quality, robustness, and reliability, making them well suited as building blocks for the training set.
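As a rough illustration (the exact Hugging Face dataset identifier below is an assumption based on Enrico Shippole's public uploads, not something stated in the article), one of the instruction sources can be inspected with the datasets library:

```python
from datasets import load_dataset

# Assumed repository name for the COT submix published by Enrico Shippole
# (conceptofmind on the Hugging Face Hub); verify the exact identifier before use.
cot = load_dataset("conceptofmind/cot_submix_original", split="train")
print(cot[0])  # inspect a single instruction/response example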
It is this fusion of inspiration, datasets, and methodology that equipped the Stable Beluga models with the tools and knowledge they need to excel, a testament to the importance of collaboration and shared learning in advancing technology.
Despite the smaller sample size used for training, the Stable Beluga models have shown exceptional performance across various benchmarks. They were evaluated using EleutherAI’s lm-eval-harness, with AGIEval added, and have demonstrated proficiency in intricate reasoning, understanding linguistic subtleties, and answering complex questions.
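For reference, a minimal sketch of how such an evaluation might be reproduced with lm-evaluation-harness is shown below; the API call, task names, and batch size are assumptions about a recent version of the harness, not the configuration Stability AI or Hugging Face actually used:

```python
import lm_eval

# Assumes lm-evaluation-harness >= 0.4, which exposes simple_evaluate();
# tasks and batch size are illustrative only.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=stabilityai/StableBeluga2,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc2"],
    batch_size=4,
)
print(results["results"])
```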
The results of these evaluations were not only confirmed by Stability AI researchers but also independently reproduced by Hugging Face. As of July 27th, 2023, Stable Beluga 2 ranked #1 and Stable Beluga 1 ranked #4 on the Open LLM Leaderboard, a testament to their performance.
Other articles you may find interesting on Stability AI
- Stability AI Stable Chat model featured at DEFCON31
- Learn to code using StableCode
- Stability AI launches SDXL 1.0 text-to-image generation model
- Stability AI launches new StableCode AI coding assistant
- What is StableLM the open source language model from Stability AI
- Learn how to use Stable Diffusion SDXL 1.0
- How to install Stable Diffusion SDXL 1.0
- StableLM vs ChatGPT
Name change from FreeWilly
The Stable Beluga models are expected to significantly advance research, enhance natural language understanding, and enable complex tasks. Initially codenamed FreeWilly, the models were renamed to Stable Beluga to better reflect their optimized “harmlessness”.
“Why did we change the names? These models were renamed from their internal code-name FreeWilly (a homage to the movies that some of us remember fondly), referring to the Orca paper. There were multiple reasons for the name change, the most notable being that belugas are gentler animals, unlike the fierce Orca (commonly known as killer whales). Stable Beluga models are optimized for “harmlessness”; therefore, the new names fit better with the models.”
The weights for Stable Beluga 2 are released as-is, while those for Stable Beluga 1 are released as deltas over the original model. Both models are released under the Stable Beluga Research License, further emphasizing their role in advancing AI research.
The launch of Stable Beluga 1 and Stable Beluga 2 marks a significant milestone in the field of AI, promising to advance natural language understanding and enable complex tasks. For more information, jump over to the official Stability AI website.