Comment Supposed to be Temporary (Score 1) 122

Income tax was originally proposed as a temporary measure to help finance the Civil War. In fact, the first version of the income tax was struck down by the Supreme Court as unconstitutional. Read: https://gigafact.org/fact-brie... Instead of giving corps or gov all our information, how about we keep what we earn, keep our private info private, and have no income tax at all? How is this a bad plan?

Submission + - Stability AI Launches StableLM, An Open Source ChatGPT Alternative (arstechnica.com)

An anonymous reader writes: On Wednesday, Stability AI released a new family of open source AI language models called StableLM. Stability hopes to repeat the catalyzing effects of its Stable Diffusion open source image synthesis model, launched in 2022. With refinement, StableLM could be used to build an open source alternative to ChatGPT. StableLM is currently available in alpha form on GitHub in 3 billion and 7 billion parameter model sizes, with 15 billion and 65 billion parameter models to follow, according to Stability. The company is releasing the models under the Creative Commons BY-SA-4.0 license, which requires that adaptations must credit the original creator and share the same license.

Stability AI Ltd. is a London-based firm that has positioned itself as an open source rival to OpenAI, which, despite its "open" name, rarely releases open source models and keeps its neural network weights—the mass of numbers that defines the core functionality of an AI model—proprietary. "Language models will form the backbone of our digital economy, and we want everyone to have a voice in their design," writes Stability in an introductory blog post. "Models like StableLM demonstrate our commitment to AI technology that is transparent, accessible, and supportive." Like GPT-4—the large language model (LLM) that powers the most powerful version of ChatGPT—StableLM generates text by predicting the next token (word fragment) in a sequence. That sequence starts with information provided by a human in the form of a "prompt." As a result, StableLM can compose human-like text and write programs.
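The next-token mechanism described above can be sketched with a toy bigram model, a drastic simplification of a transformer LLM for illustration only (the corpus and function names here are made up, not anything from StableLM):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each token, how often each possible next token follows it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, prompt, n=5):
    """Greedily extend the prompt, always picking the most likely next token."""
    out = prompt.split()
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation for this token
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

counts = train_bigram("the cat sat on the mat and the cat ran")
print(generate(counts, "the", n=3))
```

A real LLM does the same loop, but the "counts" are replaced by a neural network scoring every token in its vocabulary given the entire prompt so far, not just the last word.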

Like other recent "small" LLMs like Meta's LLaMA, Stanford Alpaca, Cerebras-GPT, and Dolly 2.0, StableLM purports to achieve similar performance to OpenAI's benchmark GPT-3 model while using far fewer parameters—7 billion for StableLM versus 175 billion for GPT-3. Parameters are variables that a language model uses to learn from training data. Having fewer parameters makes a language model smaller and more efficient, which can make it easier to run on local devices like smartphones and laptops. However, achieving high performance with fewer parameters requires careful engineering, which is a significant challenge in the field of AI. According to Stability AI, StableLM has been trained on "a new experimental data set" based on an open source data set called The Pile, but three times larger. Stability claims that the "richness" of this data set, the details of which it promises to release later, accounts for the "surprisingly high performance" of the model at smaller parameter sizes in conversational and coding tasks.
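The parameter-count comparison translates directly into memory footprint, which is why smaller models can run on local devices. A back-of-the-envelope sketch (assuming 2 bytes per parameter, i.e. fp16 weights, and ignoring activations and runtime overhead):

```python
def model_memory_gb(n_params, bytes_per_param=2):
    """Rough weight-storage estimate: parameters times bytes each, in GB."""
    return n_params * bytes_per_param / 1e9

print(model_memory_gb(7e9))    # StableLM at 7 billion parameters
print(model_memory_gb(175e9))  # GPT-3 at 175 billion parameters
```

On this rough estimate, a 7-billion-parameter model needs about 14 GB for its weights alone, versus roughly 350 GB for a GPT-3-sized model—the difference between a high-end consumer GPU and a rack of datacenter hardware.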

Comment How do you convince your fam? (Score 1) 100

The fam and I are on Win10 for gaming at the moment, but there's no way I'm installing anything newer than Win10 in my house. I already have to arduously disable all the data harvesting in Win10 every time I install it... and every time I hear about a "new AI feature" going into MS's next version of Windows and Office all I can do is groan. How do you convince your fam?

Comment In bed with Microsoft (Score 1) 115

I could have told you this was going to happen as soon as OpenAI got in bed with Microsoft. https://www.theverge.com/2023/... When people invest massive amounts of money like that, do you believe they're not interested in some kind of return? Any other conclusion is naive at best.
