OpenAI's Board Set Back the Promise of AI, Early Backer Vinod Khosla Says (theinformation.com)
Misplaced concern about existential risk is impeding the opportunity to expand human potential, writes venture capitalist Vinod Khosla. From his op-ed: I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like "Director of Strategy at Georgetown's Center for Security and Emerging Technology" can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of "effective altruism" and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near free tutors for every child on the planet. That's what's at stake with the promise of AI.
The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo -- founders like Sam Altman -- who face risk head on, and who are focused -- so totally -- on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones. [...] Large, world-changing vision is axiomatically risky. It can even be scary. But it is the sole lever by which the human condition has improved throughout history. And we could destroy that potential with academic talk of nonsensical existential risk in my view.
There is a lot of benefit on the upside, with a minuscule chance of existential risk. In that regard, it is more similar to what the steam engine and internal combustion engine did to human muscle power. Before the engines, we had passive devices -- levers and pulleys. We ate food for energy and expended it for function. Now we could feed these engines oil, steam and coal, reducing human exertion and increasing output to improve the human condition. AI is the intellectual analog of these engines. Its multiplicative power on expertise and knowledge means we can supersede the current confines of human brain capacity, bringing great upside for the human race.
I understand that AI is not without its risks. But humanity faces many small risks. They range from vanishingly small like sentient AI destroying the world or an asteroid hitting the earth, to medium risks like global biowarfare from our adversaries, to large and looming risks like a technologically superior China, cyberwars and persuasive AI manipulating users in a democracy, likely starting with the U.S.'s 2024 elections.
All due respect to Vinod.... (Score:5, Insightful)
Re: (Score:3)
All due respect? This is just a rich dude trying to make sure he gets a return on his investment. The amount of smoke he's blowing up our asses with this op-ed is triggering alarms at the EPA.
Re: (Score:3)
Let's qualify that they rarely care about the *long term* consequences of their investments. Short term, they absolutely care.
Re: (Score:3)
Let's look at the results as they stand now: apart from translation, there is little in the way of actual utility, but plenty of phishing, malware, and AI nudes, not to mention the flood of generated crap. The net result is negative even now, so why expect it to be better later?
The ad hominem fallacy is popular today (Score:2)
Reference [wikipedia.org].
So this guy has an obvious profit motive. Ok. That doesn't make him wrong. We should consider his arguments on their own merits.
Isn't it true that a breakthrough in AI could bring amazing technologies that could benefit humanity? Isn't it true that some of those benefits are sorely needed in the modern day? Isn't this something worth considering?
And the same goes for those spreading fear, uncertainty, and doubt about AI.
How realistic are their fears? Hollywood fictions are not relevant; armi
Re: The ad hominem fallacy is popular today (Score:2)
Well, no. Talking only about the tech and ignoring the ramifications for the real world is willful ignorance. You don't live alone, stop acting like you do. Other people have a stake too.
Re: (Score:2)
I didn't say we should ignore the ramifications in the real world. Far from it! I said it shouldn't matter if the people making arguments have a profit motive. We should consider their arguments (including ramifications in the real world), and not what the people making those arguments stand to gain or lose.
Re: The ad hominem fallacy is popular today (Score:2)
I think it should matter if people have a profit motive for promoting certain agendas.
Especially if it is coming from the Venture Capitalist community. These guys are absolute ghouls who have no qualms about giving toxic advice to startups if it means they can squeeze a bit more money out of them before the crash. To say nothing of how profit-focused approaches to technological development have led to monopolies, surveillance states, and mass concentrations of power in private hands.
The fallacy of the ad hominem fallacy (Score:2)
Aaaaand... (Score:1, Informative)
the waves of stinky bullshit continue.
Puh-lease!! (Score:5, Insightful)
Misplaced concern about existential risk is impeding the opportunity to expand human potential, writes venture capitalist Vinod Khosla
Riiiiiiiiiiight! "human potential" is what he's 'concerned' about.
Maybe that should be reworded the proper way, which is to say that it's really "impeding the opportunity to expand his investment returns".
Is Sam Altman a rich person? (Score:2)
Sam Altman's net worth is estimated [msn.com] at around $500 million.
Lots and lots and lots of people here on slashdot argue that the ultra rich, who don't do anything and simply rake in money from oppressed workers, should somehow be curtailed. Here, "curtailed" can mean shot (and/or eaten, literally), taxed to below the 1% level, put in jail, or socially hounded with pitchforks and torches.
The 1% level in the US sits at around $5 million. Altman is not only rich, he's ultra rich.
Isn't this an example of someone
Re: (Score:3)
Progressive taxation, progressive fines...
Take Finland as an example.
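The bracket arithmetic behind that idea is simple enough to sketch. A minimal Python example, with invented boundaries and rates (not Finland's actual schedule):

    # Progressive taxation: only the income falling inside each bracket
    # is taxed at that bracket's rate. Numbers are made up for illustration.
    BRACKETS = [(0, 0.00), (20_000, 0.20), (50_000, 0.35), (150_000, 0.50)]

    def tax_owed(income: float) -> float:
        owed = 0.0
        for (lo, rate), (hi, _) in zip(BRACKETS, BRACKETS[1:] + [(float("inf"), 0.0)]):
            if income <= lo:
                break
            owed += (min(income, hi) - lo) * rate
        return owed

    print(tax_owed(500_000))  # 216000.0, most of it from the top bracket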
Re: (Score:2)
He's ultra rich. This is just another ultra rich guy defending him from the oppressive Board of Directors who are all rich, too.
The right answer is obviously to kill and eat all of them. There is no distinguishing rule. No one ever got rich by doing anything but oppressing their workers.
I learned that right here on slashdot. How'd you not?
Re: (Score:2)
FYI, "text" as a communication medium removes most of the conversational ques of sarcasm. Many people in the geeky-personality spectrum have a hard enough time with sarcasm as it is, but when using a communication medium like text it can become nearly impossible to determine whether you are being sarcastic, or wildly idiotic.
It's easy to believe "only an idiot would think a statement like this is serious," but it is just as easy to believe "the world is full of idiots who would make a statement like that s
Re: (Score:2)
Tax brackets? What will the poor eat if not the rich? Tax brackets won't put food on the table.
Re: (Score:2)
It's even dumber than that. Whatever "existential risk" is posed by AI is not impeding anything or anybody. Those who are worried about existential risk are academics whining in publications that nobody but other academics actually reads.
lol (Score:5, Funny)
The best companies (Score:3)
"The best companies are those whose visions are led and executed by their founding entrepreneurs"
Tell that to SoftBank about Adam Neumann.
Some people are good with the vision and getting something off the ground but suck at the tedious day-to-day running. Others are good at the tedious stuff but couldn't have a vision short of taking a bag of illegal mushrooms. The sort of person who can do both is quite rare.
Re: (Score:2)
A hired CEO still needs vision. They don't just push paper. A company with no vision is eventually a dead company.
Re: The best companies (Score:2)
Most CEO vision these days consists of cost cutting or buying out smaller companies to bring in ideas and IP they couldn't think up themselves. E.g., Microsoft.
Re: (Score:2)
Yup and most of those companies plateau or die.
Re: (Score:2)
Not while they have something to sell that people want to buy, they don't.
Re: (Score:2)
Sure that's a plateau. And when some other company with vision figures out how to do it better, they die.
Re: (Score:2)
Yet there are oil companies 100 years old and doing just fine. Sometimes a product can't be improved to any significant extent, and even if it could, a few billion here and there from the incumbents makes sure your company never makes it.
But I do appreciate your economic naivety, so refreshing.
Re: (Score:2)
Oil itself is not the product oil companies are competing on.
They are competing on finding, extracting, shipping, and refining oil as efficiently as possible.
And these days they've all grown into energy companies where oil is an important but not only source of revenue.
I am always happy to educate my Dunning Kruger friends at slashdot. I'm glad you learned something new today and threw away your old incorrect world view.
Re: (Score:2)
"Oil itself is not the product oil companies are competing on."
Oil is the vast majority of what they sell. The rest is single digit percentages.
"They are competing on finding, extracting, shipping, and refining oil as efficiently as possible."
That's called operations.
"I am always happy to educate my Dunning Kruger friends at slashdot"
It's always nice to be part of a group, isn't it?
our brilliant technological overlord missed one (Score:2, Interesting)
Amazing how he didn't mention global warming as a large and looming risk.
- who face risk head on, and who are focused -- so totally -- on making the world a better place.
What a bunch of bullshit.
I also think it's hilarious that he thinks removing Altman is setting back progress. Yeah, because nobody else is working on LLMs or can make progress on them.
Techbros like Khosla and Altman are becoming completely intolerable. They are demonstrably making the world a worse place for a buck.
How I miss the good old
"was the first venture investor in OpenAI" (Score:5, Insightful)
If he still is an investor in OpenAI, he would be irritated at his loss.
But until we have details of *why* this happened, it's pointless choosing a side.
Re: (Score:2)
If he still is an investor in OpenAI, he would be irritated at his loss.
But until we have details of *why* this happened, it's pointless choosing a side.
Assuming he still is an investor in OpenAI Global (the actual for-profit company he would have invested in), he's probably had a lot of the inside details from the board and/or people in the company as to why this happened.
The not-so-subtle suggestion is that the board thought Altman was pushing things too quickly and got very worried about existential risk, and since that was a big part of OpenAI's mandate, they decided to slow things down.
The one open question to me is about them being "misled" or whatever
What are all these people smoking? (Score:4, Insightful)
The current form of AI is not smart, cannot do anything that requires the slightest bit of insight and, on top of that, frequently hallucinates and gives unmitigated bullshit as an answer. In addition, it is subject to model collapse and model poisoning, and making it better in one area makes it a lot worse in all others. Yet these people all claim this is a revolution. I fail to see anything like that happening. Yes, the NL interface got a bit better. But quality-wise it is not better than, say, IBM Watson, which is now 13 years old. You know, that Watson that was pulled from doing medical stuff because it occasionally killed a patient in what was probably a precursor of "hallucination".
So, yes, LLMs are a nice trick, but expecting them to do real work is just completely disconnected from reality. All they can do is make the search engine a bit better in most cases and massively worse in some. That is not revolutionary at all.
Re: (Score:2)
These people have to have a scam because their intent is not to _run_ a business but to elicit outside capital so they can make a profit in a (relatively) slow-motion pump-and-dump scheme.
AI is great for this purpose. The term has been talked up for years and brings in the stupid money.
And this was the harm wrought by expectations of getting a higher return than normal business activity would dictate. Most M&A activity should be illegal, and pump-and-dumps should carry personal liability for those invol
Re: (Score:2)
The "stupid money". I like that term. Very fitting.
Re: (Score:2)
Also like the idea that this is a "slow motion pump and dump". Makes perfect sense. For example, having seen over the years some demos of IBM Watson (now 13 years old) targeted at experts, so no BS claims of "intelligence" or the like, it could do all that stuff, probably with fewer hallucinations and better results. What it was missing was the pretty universal natural language interface catering to non-experts that the current AI hype has. That interface is the _only_ advantage I see. Regarding ac
Re: (Score:2)
Every time I compare the LLMs of today to juiced-up Elizas with big data and voice interfaces, I get flak. But it's not completely inapt.
It reminds me of the crypto flak. Say it was a scam for years, and you'd get a zillion trolls who thought they were going to get rich foaming at the mouth and looking to cut your junk off. Now that essentially no one got rich, it's safe to say it was a scam from the get-go. Give this LLM stuff a year or two.
Re:What are all these people smoking? (Score:5, Insightful)
The current form of AI is not smart, cannot do anything that requires the slightest bit of insight and, on top of that, frequently hallucinates and gives unmitigated bullshit as an answer. In addition, it is subject to model collapse and model poisoning, and making it better in one area makes it a lot worse in all others. Yet these people all claim this is a revolution. I fail to see anything like that happening.
Depends on the field.
Writing? More work to get it to say what you want than to say it yourself.
Editing writing? Surprisingly good.
Coding? Needs guidance, but a huge productivity multiplier when used properly.
Illustration? The one field where it's potentially putting a lot of people out of work.
But quality-wise it is not better than, say, IBM Watson, which is now 13 years old. You know, that Watson that was pulled from doing medical stuff because it occasionally killed a patient in what was probably a precursor of "hallucination".
I never heard about that. I think the fundamental issue is that Watson was a less capable model than current LLMs and it was going into a field where current practitioners are extremely well trained.
Basically weaker tech trying to do something extremely hard (outperform doctors).
So, yes, LLMs are a nice trick, but expecting them to do real work is just completely disconnected from reality. All they can do is make the search engine a bit better in most cases and massively worse in some. That is not revolutionary at all.
Well a lot of people are successfully using them to do real work, so I don't think that we're the ones disconnected from reality.
Re: (Score:2)
Incidentally, people who are weak at reading are impressed by anything that looks like words they've seen before, and people who are weak at visual communication are impressed by something that looks like an image they've seen before. Such people are very glad to use AI tech and think everyone should therefore use it too.
Re: What are all these people smoking? (Score:2)
That's not the bar.
The question is whether an illustrator can do more work with or without it. And they can do more with it.
Re: (Score:2)
But will it be the same quality? Will the illustrator get the same benefits as when doing it conventionally? Having now seen quite a bit of AI-generated illustration, the answer to the first is a resounding "no" and the answer to the second is at the very least "unknown". Maybe if the illustrator trains a model on their very own specific style this will get better, but while reaction time (latency) will probably improve, total effort may or may not.
Re: (Score:2)
But will it be the same quality?
It may even be higher quality.
Will the illustrator get the same benefits as when doing it conventionally?
You can use it as a basis, then redraw and/or inpaint the problematic parts. Complex art is already done in many layers and steps. Using these tools just reduces the number of steps to a complete image.
Having now seen quite a bit of AI-generated illustration, the answer to the first is a resounding "no" and the answer to the second is at the very least "unknown".
The software doesn't produce the finished image on its own! A human does some of the work in more conventional ways, although some of that will come down to Photoshop (etc.) rather than drawing it from scratch.
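To make the inpainting step concrete, here is a minimal sketch assuming the open-source diffusers library; the file names and prompt are invented for illustration:

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    # Hypothetical inputs: the artist's draft, plus a mask that is white
    # wherever the model is allowed to repaint.
    init_image = Image.open("draft.png").convert("RGB")
    mask = Image.open("mask.png").convert("RGB")

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    # Only the masked region is regenerated; the rest of the draft is kept.
    result = pipe(
        prompt="a detailed hand gripping a sword hilt",
        image=init_image,
        mask_image=mask,
    ).images[0]
    result.save("draft_fixed.png")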
Maybe if the illustrator trains a model on their very own specific style, this will get better, but while reaction time (latency) will probably get better, total effort may or may not.
Of course it will. Even in the hands of an amateur the software can g
Re: (Score:2)
Good luck with that. You seem to be very naive and not very fact-oriented on this question. Well, have fun while it lasts.
Re: What are all these people smoking? (Score:2)
You are covering your ears and shouting LA LA LA I CAN'T HEAR YOU and you think I am naive? Keep pretending right up until you find yourself unemployed and wishing you had listened.
Re: (Score:2)
Ah, the no true Scotsman approach: "no TRUE artist would benefit from AI tools".
I call bullshit, based on the fact that I know professional artists who are using AI tools to make their work better and faster, and they made great art both before and after they started using AI.
The AI tools have come a long way in the last year, and it's not just "type in a description and hope you get something close to what you want". Nowadays the artists start with a sketch, then use tools like ControlNet to guide the dif
Expertise and 2 channels (Score:4, Interesting)
Depends on the field.
Writing? More work to get it to say what you want than to say it yourself.
Our brains have two channels of information: incoming and outgoing ("afferent" and "efferent" are terms for the neurons involved).
These channels are distinct, and typically one side is underdeveloped: you can receive information and completely understand it, but not be able to send it out to others. This is why lots and lots of people have discovered and said "you only learn something by teaching it". Successful teaching requires you to sort out all of your unrecognized gaps in knowledge.
As an example of this, when you have a moment (such as commuting) pick any object - any object whatsoever - and try to talk continuously about that object for 60 seconds. Say anything you like, rambling on without repeat about that object. Talk about its color, history, position, size... anything you like.
See if you can keep that up for 60 seconds.
Most people can listen for hours on end and understand what they hear, but turning that around is usually difficult because their outgoing channels are not as well developed. Given some time and thought, you could easily write a 1-page script that would take a full minute to read, but doing this extemporaneously is initially quite hard until you get a bunch of practice.
So in the case of AI, lots of people "have an idea" of what they want to convey, but don't have the right words for it. Taking time to guide and modify the input prompt lets them turn this around: they can keep modifying the inputs until it "sounds right" and the input channel matches their internal concept.
Yes, it probably takes longer than a fluent speaker composing the words from scratch, but learning to be fluent takes an enormous amount of up-front time to begin with.
The same with graphics art: I can (for example) form concepts of cartoons and describe them in text, but don't have the skill to draw them out. By iterating over text descriptions, I can get the AI to zero in on the cartoon I want using the input channels instead of the output channels. Learning to draw well takes a lot of up-front time.
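A minimal sketch of that iterate-until-it-matches loop, assuming the OpenAI Python client (the model name and starting prompt are placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Keep refining the text description until the picture matches the idea.
    prompt = "a one-panel cartoon of a robot watering a plastic plant"
    for _ in range(3):
        img = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        print(img.data[0].url)  # look at the result
        tweak = input("What should change? (empty to stop) ")
        if not tweak:
            break
        prompt += ", " + tweak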
Expect to see a lot of really creative people using AI. The people will supply the ideas, but the AI supplies the expertise needed to complete the job.
Re: (Score:2)
I somewhat agree, although the thing is there is an "information keeping" element in the middle. Just listening to things goes into memory badly interconnected with other information and may not even be retained longer-term. Speaking/writing about something requires the information to actually be there in the first place and to be well cross-linked with other information. That is why I recommend that my students write a summary of the whole lecture as exam preparation, even if they cannot use that summary in
Re: (Score:2)
We know that you are going to keep repeating the same claim over and over and over again, even after AI takes over the world and fundamentally changes everything about society.
Why not save yourself some work? Just write up a detailed blog post about all of your claims, and then share the link every time you have something to say about this topic. Heck, I hear there's even a few online services that can write your blog post for you.
'Existential Risk' (Score:1)
This chap sounds like the sort who would build a deep sea submarine out of carbon fibre, and then moan about regulations that prevent such shenanigans.
Re: (Score:2)
lol, zing! Nice.
accusations (Score:3)
Not "She's crazy!", not "Perhaps she has a point and we should check into this?", but no mention whatsoever of her accusations? Just
Re: (Score:3)
https://www.themarysue.com/ann... [themarysue.com]
I'd never heard of her before 10 minutes ago.
She's a mess but in ways consistent with people who have suffered abuse. There's no way for us to ever know if she's telling the truth or she's broken for other reasons and this is what came out. She says herself that she had pieced together memories to create a narrative.
Make of it what you will.
Re: (Score:2)
Obviously, money has its advantages in terms of media control.
Re: (Score:2)
"A lot of people are saying..." [princeton.edu]
Innovations like Leaded gas and Asbestos? (Score:1)
Leaded gas and Asbestos were immensely useful. (Score:2)
The benefits from leaded gas and asbestos were enormous. The lives shortened were a minor loss by comparison. Asbestos was incredibly valuable in the age of steam and, aside from its risks when friable, is a durable, immensely useful material. For example, my house has asbestos siding from 1965; the stuff doesn't decay or corrode, and it is waterproof. I don't chew on it, so it simply isn't a hazard to me.
Leaded fuel enabled the higher-octane avgas key to winning the Second World War.
Every choice has a cost. Delay
Some thalidomide babies would like to have a word. (Score:2)
For that matter, the suppressed information about radiation harms by the USG post-Hiroshima and Nagasaki falls into this category as well.
Hell, what about the OPM hack, where the entire personnel database of the USG was exfiltrated, presumably by China?
Because all Luddite fears are grossly overblown.
Wow (Score:2)
That is an impressive pile of bullshit.
OpenAI was founded as a non-profit with the goal of making AI advancements generally available, specifically to head off profit-motivated companies keeping it all proprietary. OpenAI specifically limits the profit any investor in its for-profit subsidiary can make.
Sam Altman didn't "put it all on the line." He's a super-rich dude who doesn't have any equity in OpenAI, except indirectly through a small investment by Y Combinator.
Despite the Ubers and AirBnbs pa
Huh? (Score:3)
LLM "AI" is completely incapable of providing safe medical advice. It has no concept of what a disease is, or of what symptoms relate to what underlying cause, it merely knows what words are often used together. And conspiracy theorists are FAR more common on the Internet than actual medical practitioners. In consequence, very dangerous "advice" is statistically FAR more common than sensible advice.
LLM "AI" is equally incapable of being a tutor, for much the same reason. It knows nothing about any subject, it doesn't understand how anything works, it doesn't comprehend anything, it's merely a statistical calculator. And, again, conspiracy theorists are FAR more common than intellectual sources. In consequence, total nonsense and very dangerous perversions of knowledge are statistically FAR more common than rational understanding.
Of course, we're dealing with a Vulture Capitalist here, not an intelligent, rational, human being. In fact, I'm not entirely convinced VCs are any sort of human. They're statistical calculators, much like LLMs.
Actually, that would be a GREAT way to test the sincerity of this VC. Create an LLM that can process corporate plans and decide which ones to invest in. BS plans give themselves away through language that a human VC could easily be fooled by, but an AI isn't processing the actual words, merely the relationships between words, and as such can't be bamboozled. My suspicion is that an LLM would do FAR better at VC-style work than any human.
So build an AI VC, get a sponsor to give it some seed money to speculate with, and use its success rate to prove that it is a cheaper, more effective strategist than a human. My guess is that Vinod Khosla would find it highly objectionable and a potential threat. It's only good, in his eyes, when it earns him money by endangering other people's jobs. When it's his own job that's on the line, I doubt very much he'll take the same line.
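A sketch of what that screener might look like, assuming the OpenAI Python client; the model name and rubric are placeholders, not anyone's real product:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def score_pitch(plan_text: str) -> str:
        # The rubric here is invented purely for illustration.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are a skeptical investor. Rate this business "
                            "plan 1-10 on feasibility, market size, and "
                            "plausibility of claims. Ignore buzzwords."},
                {"role": "user", "content": plan_text},
            ],
        )
        return resp.choices[0].message.content

    print(score_pitch("We will disrupt the synergy economy with blockchain AI."))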
Ah that old chestnut - imagine free ... (Score:2)
Oh shit, now he's gone n done it! (Score:2)
There was no reason to be terribly concerned about any of these things until he had to go n say it! We're fucked in so many ways now....
"But humanity faces many small risks. They range from vanishingly small like sentient AI destroying the world or an asteroid hitting the earth, to medium risks like global biowarfare from our adversaries, to large and looming risks like a technologically superior China, cyberwars and persuasive AI manipulating users in a democracy, likely starting with the U.S.'s 2024 elec
venture capital and free (Score:2)
> Imagine free doctors for everyone and near free tutors for every child on the planet. That's what's at stake with the promise of AI.
So, Vinod Khosla 'invested' in a company but wants that company to be giving things away? Of course that is not why he 'invested'. He must think everyone is stupid and cannot see right through him.
And they greatly overestimate the utility and impact of their tech.
The only way to win is not to play (Score:2)
Anyone concerned about x-risk from superhuman AI should look toward OpenAI as a cautionary tale of the futility of arguments that AI could ever be controlled or contained.
Humans are routinely socially engineered by their intellectual peers, and organizations can't even protect themselves from the corrupting influences of power and money. OpenAI became the opposite of what it intended to be. The notion that they could control AI when they can't even control themselves is absurd.
No it didn't delay AI (Score:2)
Promise of AI? Drama of AI! (Score:1)
Goodness! It sets back the promise of AI? What promise is that? There is no promise. And especially no promise in LLMs, which rely on low-paid labour to work in the first place and use up a lot of energy, and for what? Mechanized hallucinations.
Can this AI stuff explain to me WHY it tells me something? Can it explain to me HOW it reached the conclusion it had to say that? NO and NO. So yes, it looks impressive, but in the end it's a load of bollocks. Smoke and mirrors. Hype.
My perspective as an effective altruist (Score:2)
I signed the GWWC pledge to give 10% of my income to charity for the rest of my life. I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare) so I decided I would no longer donate 10% tithing to missionary work, Books of Mormon, temples and the like. Now I give instead to things like cost-effective malaria nets [givewell.org] which save one child's life per $5,500 spent, encouraging clean energy R&D [ea.do], and with this whole AI thing heating up I woul
Re: (Score:2)
I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare)
Ok, how do you know?
Re: (Score:2)
Sure. I would refer you to this summary of my own story [lesswrong.com] (watch out for the part about the CES letter).
If you are LDS, you may prefer a less dry/reductionist (and more gradual/meandering/detailed) approach to this topic. In that case, please watch this [mormonstories.org], or possibly this [youtube.com].
BS (Score:2)
Even if true, OpenAI is just a single actor in a market full of players trying to advance AI, so even if OpenAI stumbles, the others can carry on the progress.
Also, does this guy really say we should get medical advice from internet bots?