Here’s a version of LLaMA with the Stanford Alpaca fine-tuning (i.e., tuned to emulate ChatGPT) that runs easily on Mac and PC.
Since AI is a magnet for bad actors, I’ve set it up in a VM thanks to VirtualBuddy, which was exceptionally painless.
It writes almost as fast on my virtualized M1 Pro as the paid version of ChatGPT. And let me tell you that talking to a terminal with all network interfaces disabled is more than a little unsettling 😳
GitHub - antimatter15/alpaca.cpp: Locally run an Instruction-Tuned Chat-Style LLM
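For the curious, the setup is roughly this — a sketch based on the alpaca.cpp README (repo URL and the weights filename come from there; the model file has to be obtained separately):

```shell
# Clone and build alpaca.cpp (no external dependencies beyond a C++ toolchain)
git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp
make chat

# Place the quantized weights (ggml-alpaca-7b-q4.bin per the README)
# in this directory, then start the interactive prompt:
./chat
```

In my case this all happens inside the VM, with its network interfaces switched off once the downloads are done.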
GitHub - insidegui/VirtualBuddy: Virtualize macOS 12 and later on Apple Silicon
Want to know when I post new content to my blog? It's as simple as registering for free with an RSS aggregator (Feedly, NewsBlur, Inoreader, …) and adding www.ff00aa.com to your feeds (or www.garoo.net if you want to subscribe to all my topics). We don't need newsletters, and we don't need Twitter; RSS still exists.