22 mar. 2023

Oh, no wonder my self-installed Alpaca doesn't handle context: the model was trained with zero consideration for chat history. Cue every tech publication announcing that OpenAI was doomed because Stanford had accomplished something just as good as ChatGPT without all the investment 🤦

Well, you can still run a GPT-ish model on your own computer, and that remains impressive. But without memory it's not a chatbot; it's just an LLM that answers one prompt at a time.
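To see the difference, here is a minimal sketch of what a chat wrapper adds on top of a single-prompt model: it re-sends the accumulated history with every new turn. The `generate()` stub is hypothetical, standing in for a real model call (it is not Alpaca's API), and the `User:`/`Assistant:` prompt format is just an illustrative assumption.

```python
def generate(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical, not Alpaca's API)."""
    return f"(reply to a {len(prompt)}-char prompt)"

def build_prompt(history: list[tuple[str, str]], user_msg: str) -> str:
    """Flatten all prior turns plus the new message into one prompt string.
    This is the 'memory': the model itself remembers nothing between calls."""
    lines = []
    for user, assistant in history:
        lines.append(f"User: {user}")
        lines.append(f"Assistant: {assistant}")
    lines.append(f"User: {user_msg}")
    lines.append("Assistant:")
    return "\n".join(lines)

def chat_turn(history: list[tuple[str, str]], user_msg: str) -> str:
    """One round trip: build the full prompt, call the model, record the turn."""
    prompt = build_prompt(history, user_msg)
    reply = generate(prompt)
    history.append((user_msg, reply))
    return reply
```

Drop the `history` list and the model only ever sees the latest message, which is exactly the single-prompt behavior described above.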

Stanford Alpaca 7B instruction fine-tuned LLaMA 7B: First Look Interactive Demo

In this video I explain Stanford Alpaca 7B at a high level. Alpaca 7B is a model fine-tuned from the LLaMA 7B model on 52K instruction-following demons…

Want to know when I post new content to my blog? It's as simple as registering for free with an RSS aggregator (Feedly, NewsBlur, Inoreader, …) and adding this blog to your feeds (or, if you prefer, subscribing to all my topics). We don't need newsletters, and we don't need Twitter; RSS still exists.

Legal information: This blog is hosted by OVH, 2 rue Kellermann, 59100 Roubaix, France.

Personal data about this blog's readers is neither used nor transmitted to third parties. Comment authors can request deletion of their data by e-mail.

All contents © the author or quoted under fair use.