
Stanford Researchers Release Alpaca 7B as Open-Source: A Cheaper Alternative to GPT-3


Stanford University’s Center for Research on Foundation Models (CRFM) has released an open-source language model called “Alpaca 7B”, which is based on Meta’s large-scale language model “LLaMA”. The team fine-tuned the 7B model on instruction-following data generated with OpenAI’s API.

The Alpaca 7B model can exhibit behaviours similar to OpenAI’s “text-davinci-003” (a GPT-3.5 model), but it can be run in a much lighter and cheaper environment.

LLaMA is known for its high performance despite a relatively small number of parameters, and it comes in four sizes ranging from 7 billion to 65 billion parameters. LLaMA was trained on text in 20 languages and can be applied to text generation and question-answering tasks. Meta has published LLaMA’s code and model cards to contribute to open science, making them readily available to other researchers.

The Stanford research team started with the open-source LLaMA 7B model, the smallest of the LLaMA models, pre-trained on 1 trillion tokens. After getting the LLaMA 7B model up and running, the researchers took 175 human-written instruction-output pairs and asked OpenAI’s text-davinci-003 to generate new pairs in the same style and format, 20 at a time. In this way they obtained approximately 52,000 sample instructions to use for post-training the LLaMA model. Creating this large amount of training data cost less than $500.
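To give a rough sense of what that generation loop looks like, here is a minimal, illustrative sketch using the legacy OpenAI Python completions client. The seed file name, prompt wording, and batch handling below are placeholder assumptions for illustration, not the Stanford team’s released generation code:

```python
import json
import random

import openai  # legacy (pre-1.0) completions interface

openai.api_key = "YOUR_OPENAI_API_KEY"

# seed_tasks.json: placeholder file holding the 175 human-written
# instruction-output pairs used as in-context examples.
with open("seed_tasks.json") as f:
    seed_tasks = json.load(f)

def build_prompt(examples, n_new=20):
    """Ask the model for n_new new instruction-output pairs
    in the same style as the in-context examples."""
    header = (
        f"Below are example tasks. Write {n_new} new, diverse "
        "instruction-output pairs in the same style and format.\n\n"
    )
    body = "\n\n".join(
        f"Instruction: {t['instruction']}\nOutput: {t['output']}"
        for t in examples
    )
    return header + body + "\n\nNew tasks:\n"

def generate_batch(n_examples=3, n_new=20):
    """Request one batch of 20 new instruction-output pairs."""
    prompt = build_prompt(random.sample(seed_tasks, n_examples), n_new)
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=2048,
        temperature=1.0,
    )
    return response["choices"][0]["text"]

# Repeating this loop, then parsing and de-duplicating the results,
# is what yields the roughly 52,000 samples described above.
```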

Using the generated data, the researchers fine-tuned the LLaMA model in about 3 hours on eight 80GB A100 GPUs, at a cost of less than $100. The Alpaca model was then compared against text-davinci-003 in tasks such as email writing, social media, and productivity tools, and Alpaca won 90 comparisons while text-davinci-003 won 89.
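As a rough illustration of that fine-tuning step (not the exact Stanford recipe), a supervised run over the generated data could look like the sketch below using Hugging Face Transformers. The model path, dataset file name, prompt template, and hyperparameters here are assumptions chosen for clarity:

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder paths: the LLaMA 7B weights must be obtained from Meta,
# and alpaca_data.json stands in for the 52K instruction dataset.
model = LlamaForCausalLM.from_pretrained("path/to/llama-7b")
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b")
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("json", data_files="alpaca_data.json")["train"]

def to_features(example):
    # Concatenate instruction, optional input, and output into one
    # training string (a simplified stand-in for the Alpaca prompt template).
    prompt = f"Instruction: {example['instruction']}\n"
    if example.get("input"):
        prompt += f"Input: {example['input']}\n"
    prompt += f"Response: {example['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-7b",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```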

The Stanford CRFM research team has published the 52,000 instruction-following examples used in the study, the code to generate them, and the code used to fine-tune the LLaMA model on GitHub. However, Alpaca still suffers from common language-model shortcomings such as hallucination, toxicity, and stereotypes.
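Each record in the released dataset is a JSON object with instruction, input, and output fields, so inspecting it takes only a few lines. The snippet below assumes the file has been downloaded locally under the name alpaca_data.json:

```python
import json

# alpaca_data.json: the released instruction-following examples.
with open("alpaca_data.json") as f:
    records = json.load(f)

print(len(records))       # roughly 52,000 entries
print(records[0].keys())  # dict_keys(['instruction', 'input', 'output'])
```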

The release of Alpaca 7B, and the demonstration that capable large-scale language models can be built with modest resources, means that a large number of uncontrolled language models may emerge in the future, which could be used for phishing scams, spam, and other crimes. It is therefore crucial to consider safety and ethical issues when developing and using language models.

Written by Vishak

