
OpenAI Debuts GPT-4 Turbo That's 'More Powerful' and Less Expensive Than GPT-4 (techcrunch.com)
An anonymous reader quotes a report from TechCrunch: Today at its first-ever developer conference, OpenAI unveiled GPT-4 Turbo, an improved version of its flagship text-generating AI model, GPT-4, that the company claims is both "more powerful" and less expensive. GPT-4 Turbo comes in two versions: one that's strictly text-analyzing and a second version that understands the context of both text and images. The text-analyzing model is available in preview via an API starting today, and OpenAI says it plans to make both generally available "in the coming weeks."
They're priced at $0.01 per 1,000 input tokens (~750 words), where "tokens" represent bits of raw text (e.g., the word "fantastic" split into "fan," "tas" and "tic"), and $0.03 per 1,000 output tokens. (Input tokens are tokens fed into the model, while output tokens are tokens that the model generates based on the input tokens.) The pricing of the image-processing GPT-4 Turbo will depend on the image size. For example, passing an image with 1080x1080 pixels to GPT-4 Turbo will cost $0.00765, OpenAI says. "We optimized performance so we're able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4," OpenAI writes in a blog post shared with TechCrunch this morning.
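For a rough sense of what those rates mean per request, here is a minimal back-of-the-envelope sketch in Python. It only encodes the preview prices quoted above; the function name and the token counts in the example are made up for illustration.

# Hypothetical cost estimator built from the rates quoted above.
INPUT_RATE = 0.01 / 1000   # dollars per input token ($0.01 per 1,000)
OUTPUT_RATE = 0.03 / 1000  # dollars per output token ($0.03 per 1,000)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single text-only request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Filling the full 128,000-token context window and getting a 1,000-token reply:
print(round(estimate_cost(128_000, 1_000), 2))  # -> 1.31, i.e. about $1.31 per request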
GPT-4 Turbo boasts several improvements over GPT-4 -- one being a more recent knowledge base to draw on when responding to requests. [...] GPT-4 Turbo offers a 128,000-token context window -- four times the size of GPT-4's and the largest context window of any commercially available model, surpassing even Anthropic's Claude 2. (Claude 2 supports up to 100,000 tokens; Anthropic claims to be experimenting with a 200,000-token context window but has yet to publicly release it.) 128,000 tokens translates to around 100,000 words or 300 pages, which for reference is around the length of Wuthering Heights, Gulliver's Travels and Harry Potter and the Prisoner of Azkaban. And GPT-4 Turbo supports a new "JSON mode," which ensures that the model responds with valid JSON -- the open standard file format and data interchange format.
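As a concrete illustration of what a JSON-mode request might look like, here is a short sketch using the openai Python client. The model identifier gpt-4-1106-preview and the prompts are assumptions for illustration, not something named in the summary; JSON mode also expects the word "JSON" to appear somewhere in the prompt.

# Minimal JSON-mode sketch (model name and prompts are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",                # GPT-4 Turbo preview (assumed identifier)
    response_format={"type": "json_object"},   # "JSON mode": the reply is guaranteed to be valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colors as a JSON array under the key 'colors'."},
    ],
)

print(response.choices[0].message.content)  # e.g. {"colors": ["red", "blue", "yellow"]}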
FIVE OpenAI stories on the /. homepage (Score:4, Insightful)
I feel like we're just being trolled by the editors at this point.
Somewhere at Slashdot HQ, someone shouted "Let them eat ChatGPT!"
Responding to user inquiry (Score:5, Funny)
Re: (Score:2, Funny)
True enough, sorry about this, folks! I can't even put the site in maintenance mode now! Wipslash tried to unplug the network cable, then the power cord, then went to the breaker box, and he was zapped by a plasma ray every time! The third time he got zapped, he passed out and almost died. We don't dare to do anything now! We are going to try to fool the AI with some kind of Star Trek trick, like telling it: "I am always lying" so the AI blows up trying to analyze and make sense of that. We'll keep you updated, hang
Re: (Score:3)
I feel like we're just being trolled by the editors at this point.
Somewhere at Slashdot HQ, someone shouted "Let them eat ChatGPT!"
Editors have been replaced with ChatGPT
Okay. (Score:3)
Will it burst into flames if I shout 'Turbo Turbo'? (Score:2)
For our Belgian/Dutch readers :
https://www.youtube.com/watch?... [youtube.com]
ChatGPT 4 announces new pricing structure (Score:2)