My friend just hooks his laptop up to his TV, connects to his VPN, and plays Popcorn Time (streaming torrents). He used to use streaming sites, but those have been getting taken down left and right.
Larger models are more compute-efficient to train (they reach a given loss with less total compute), for reasons that aren’t fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it’s not much worse than these very large models.
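For anyone curious, the teacher/student idea (knowledge distillation) basically trains the small model to match the big model's softened output distribution instead of just the hard labels. A minimal sketch of that loss in PyTorch, where the temperature value is just an illustrative assumption, not anything specific to Qwen:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    # Softening both distributions lets the student learn the teacher's
    # relative preferences over near-miss tokens, not just the argmax.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 positions over a 32k-token vocabulary.
student = torch.randn(4, 32000, requires_grad=True)
teacher = torch.randn(4, 32000)
loss = distillation_loss(student, teacher)
loss.backward()
```

In practice this term is usually mixed with the ordinary next-token cross-entropy loss, but the matching of soft distributions is the part that makes the big model useful as a teacher.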
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
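The 10.5GB figure is just the back-of-the-envelope math for 14 billion parameters at 6 bits each, ignoring quantization metadata and any extra overhead:

```python
params = 14e9            # Qwen 14B parameter count
bits_per_param = 6       # 6-bit integer quantization
size_gb = params * bits_per_param / 8 / 1e9
print(f"{size_gb:.1f} GB")  # -> 10.5 GB
```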