OpenAI Plans ChatGPT 'Supersmart Personal Assistant for Work,' Setting Up Microsoft Rivalry
In the span of half a year, ChatGPT has become one of the world's best-known internet brands. Now its creator, OpenAI, has bigger plans for the chatbot: CEO Sam Altman privately told some developers OpenAI wants to turn it into a "supersmart personal assistant for work." From a report: With built-in knowledge about an individual and their workplace, such an assistant could carry out tasks such as drafting emails or documents in that person's style and with up-to-date information about their business. The assistant features could put OpenAI on a collision course with Microsoft, its primary business partner, investor and cloud provider, as well as with other OpenAI software customers such as Salesforce.
Those firms also want to use OpenAI's software to build AI "copilots" for people to use at work. But for OpenAI, building new ChatGPT capabilities will be the focus of its commercial efforts, according to Altman's comments and two other people with knowledge of the company's plans. Companies are still in the first innings of making money from the latest crop of AI services, and the race is on to figure out what products and business models will create the most value. Large-language models that allow ChatGPT and other software to understand conversational commands are relatively new, although Microsoft is already charging a 40% premium to Office 365 customers that want to use OpenAI's LLMs to automate tasks such as creating PowerPoint presentations based on text documents, summarizing meetings or drafting email responses.
Clippy anyone? (Score:4, Insightful)
Re:ClippyGPT (Score:1)
"It looks like your faces are too symmetrical and your hands have a boring 5 fingers. Would you like me to properly distort them?"
Re: Clippy anyone? (Score:1)
Re: (Score:2)
There is Precedent (Score:1)
If you want to know how this is going to go, just look at the relationship between Disney and Pixar.
Now compare Toy Story, The Incredibles and Wall-E to Pixar's latest film.
Think twice.
Good (Score:2)
You know how if you tell someone to fuck off via text, you can blame it on autocorrect: "I'm sorry, that was autocorrect .. I meant to ask if you wanted to go hunt ducks". Now you can blame the whole email, context and everything, on autoEmail.
Re: Good (Score:2)
Re: Good (Score:2)
Microsoft Manna (Score:3)
Re: (Score:2)
>i owned nothing, and i was happy
Moratorium call on AI stories (Score:2)
Barring something clearly revolutionary, I've grown weary of company announcements along the lines "Company X says it's going to use AI to cure cancer, make practical flying cars, make Greedo shoot first, and bring world peace".
And I'm Clark Kent.
Using ChatGPT correctly (Score:4, Interesting)
Here's a quick test for people who think ChatGPT is useful for returning information.
Ask ChatGPT for the X coordinate of the standard normal curve whose function value is 0.2 (roughly half the maximum value, which occurs at x == 0). If it complains about mean and SD, tell it the mean is 0 and SD is 1. (Answer: about +/- 1.18 [wikipedia.org], either sign being correct.)
Each time I tried this, ChatGPT confidently gave the wrong answer. It even shows its reasoning for *why* it gets that answer, even though it doesn't know the answer is wrong.
(Note: Asking a similar question with other functions, such as y = x^2, seems to work.)
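For the curious, the test above can be checked directly: setting the standard normal pdf equal to half its peak value exp(-x^2/2)/sqrt(2*pi) = 0.5/sqrt(2*pi) and solving gives x = sqrt(2 ln 2). A minimal sketch (variable names are my own):

```python
# Solve for the x where the standard normal pdf is half its peak.
# Peak is at x = 0 with value 1/sqrt(2*pi); halving it cancels the
# normalizing constant, leaving exp(-x**2/2) = 0.5, i.e. x = sqrt(2*ln 2).
import math

half_max_x = math.sqrt(2 * math.log(2))
print(round(half_max_x, 4))  # 1.1774
```

Plugging +/- 1.1774 back into the pdf returns about 0.1995, i.e. half of the peak value 0.3989.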
Another example: if you type a well-known riddle, ChatGPT will supply the answer: "The more we take, the more we leave behind" (answer: footprints). If you come up with an *original* riddle, however, it has no clue: "What is it that can leap from blade to blade, but will never be cut?" ChatGPT knows the answer to this now that I've told it - it would be interesting to see whether that knowledge carries over to other ChatGPT accounts. Anyone care to report on this?
Lots and lots of attempts at getting ChatGPT to return knowledge have failed - it simply doesn't know what is correct or incorrect, and in particular it doesn't know when it doesn't know something.
What ChatGPT is good for is expertise. If you already know the information, ChatGPT will supply the expertise of describing it in a style of your choosing. Asking it to "write a 1-paragraph progress report saying that we replaced the headlight fluid" produces, among other things, the following:
The new headlight fluid ensures optimal visibility and safety while driving at night or in low-light conditions. We have verified that the headlight system is functioning effectively post-replacement.
ChatGPT has to be used in an effective way. Integrating it into schools for the purpose of teaching, or giving it to people for the purpose of answering questions is most definitely *not* the right way to use it.
Re: (Score:2)
LLMs like ChatGPT actually can't do math at all. They understand words, that's it. They try to predict what answer is expected from them. But they can't do math to figure that out.
Re: (Score:2)
Re: (Score:2)
Lots and lots of attempts at getting ChatGPT to return knowledge have failed - it simply doesn't know what is correct or incorrect, and in particular it doesn't know when it doesn't know something.
I think this is the closest AI has come to imitating “natural intelligence”.
Yes, and a question. (Score:2)
Have you heard of Wolfram Alpha?
I use Wolfram Alpha frequently, and as it happens it doesn't know the answer either.
I don't know how this relates to ChatGPT, though. The article was about ChatGPT, and I wanted to add information *about* ChatGPT.
What does Wolfram Alpha have to do with ChatGPT?
It doesn't have knowledge. (Score:3)
What it has is the ability to mimic what it has ingested in a convincing manner. It doesn't "know" anything, which becomes apparent when you ask about something it has not ingested. Instead of a reply indicating a lack of knowledge, it merely generates yet another fabrication that has no basis in reality. I believe we have several stories here about hapless fools who believed this AI actually had knowledge.
To that end, the neural network component should instead be used as an interpreter for a system that dispenses verified information.
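A toy sketch of that architecture, where the model only *interprets* the question and the answer comes from a store of verified facts (the parse step and the fact table here are purely illustrative stand-ins, not any real API):

```python
# The language model's job is reduced to mapping free text onto a
# structured key; answers come only from curated, verified data.
FACTS = {
    ("boiling point", "water", "celsius"): 100,
    ("speed of light", "m/s"): 299_792_458,
}

def parse(question):
    # Stand-in for an LLM turning free text into a structured lookup key.
    if "boiling" in question and "water" in question:
        return ("boiling point", "water", "celsius")
    return None

def answer(question):
    key = parse(question)
    if key is None or key not in FACTS:
        return "I don't know."  # explicit, instead of a fabrication
    return FACTS[key]

print(answer("What is the boiling point of water?"))  # 100
print(answer("Who shot first, Han or Greedo?"))       # I don't know.
```

The point of the sketch is the last line: when the lookup fails, the system can say "I don't know" instead of inventing something plausible-sounding.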
Re: (Score:3)
The best description of LLM chatbots was that they don't know the right answer, they just know what the right answer should look like.
It's a very important distinction.
Re: (Score:2)
We've seen interfaces like that before, Lotus HAL comes immediately to mind, though actually using those felt a lot more like using an awkward programming language than a conversation.
Still, with the right traditional applications, it could make something like this actually reliable: a calculator, a tool for unit conversions, a dictionary and thesaurus, a curated database of facts, you get the idea. Feels like we're coming full-circle, doesn't it? Given how these kinds of models work, there isn't really m
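The tool-dispatch idea above can be sketched in a few lines. Everything here is hypothetical, a unit converter standing in for the "right traditional applications," with a dispatch table where a real system would have the model emit a (tool, arguments) pair:

```python
# Deterministic tools do the actual work; the conversational layer
# would only pick the tool and arguments. Names are illustrative.
def convert_units(value, frm, to):
    factors = {("mi", "km"): 1.609344, ("km", "mi"): 1 / 1.609344}
    return value * factors[(frm, to)]

TOOLS = {"convert": convert_units}

def dispatch(tool, *args):
    # In a real system the LLM would produce (tool, args); here we
    # call directly to show the reliable, checkable part of the loop.
    return TOOLS[tool](*args)

print(dispatch("convert", 5, "mi", "km"))  # 8.04672
```

Because the arithmetic happens in the tool rather than in the model's token predictions, the answer is exact and reproducible.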
I don't get it. (Score:3)
How can there be a "rivalry" with Microsoft if Microsoft is OpenAI's primary investor? Is the claim that the rivalry is because OpenAI will want to sell software to companies like Salesforce? Has it occurred to the author that Microsoft generally considers selling software a good thing? Has it occurred to the author that Salesforce already buys software from Microsoft?
Re: (Score:2)