OpenAI's Board Set Back the Promise of AI, Early Backer Vinod Khosla Says (theinformation.com)

Misplaced concern about existential risk is impeding the opportunity to expand human potential, writes venture capitalist Vinod Khosla. From his op-ed: I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like "Director of Strategy at Georgetown's Center for Security and Emerging Technology" can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of "effective altruism" and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near free tutors for every child on the planet. That's what's at stake with the promise of AI.

The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo -- founders like Sam Altman -- who face risk head on, and who are focused -- so totally -- on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones. [...] Large, world-changing vision is axiomatically risky. It can even be scary. But it is the sole lever by which the human condition has improved throughout history. And we could destroy that potential with academic talk of nonsensical existential risk, in my view.

There is a lot of benefit on the upside, with a minuscule chance of existential risk. In that regard, it is more similar to what the steam engine and internal combustion engine did to human muscle power. Before the engines, we had passive devices -- levers and pulleys. We ate food for energy and expended it for function. Now we could feed these engines oil, steam and coal, reducing human exertion and increasing output to improve the human condition. AI is the intellectual analog of these engines. Its multiplicative power on expertise and knowledge means we can supersede the current confines of human brain capacity, bringing great upside for the human race.

I understand that AI is not without its risks. But humanity faces many risks. They range from vanishingly small ones, like sentient AI destroying the world or an asteroid hitting the earth, to medium risks, like global biowarfare from our adversaries, to large and looming risks, like a technologically superior China, cyberwars and persuasive AI manipulating users in a democracy, likely starting with the U.S.'s 2024 elections.

  • by sethmeisterg ( 603174 ) on Monday November 20, 2023 @12:54PM (#64018759)
    ...but capitalists rarely give a crap about the consequences of their investments if that means more $$$ in their pockets.
    • All due respect? This is just a rich dude trying to make sure he gets a return on his investment. The amount of smoke he's blowing up our asses with this op-ed is triggering alarms at the EPA.

    • Let's qualify that they rarely care about the *long term* consequences of their investments. Short term, they absolutely care.

    • Let's look at the results as they stand now: apart from translation, there is little in the way of actual utility, but plenty of phishing, malware, and AI nudes, not to mention a flood of generated crap. The net result is negative even now, so why expect it to be better later?

    • Reference [wikipedia.org].

      So this guy has an obvious profit motive. Ok. That doesn't make him wrong. We should consider his arguments on their own merits.

      Isn't it true that a breakthrough in AI could bring amazing technologies that could benefit humanity? Isn't it true that some of those benefits are sorely needed in the modern day? Isn't this something worth considering?

      And the same goes for those spreading fear, uncertainty, and doubt about AI.

      How realistic are their fears? Hollywood fictions are not relevant; armi

      • Well, no. Talking only about the tech and ignoring the ramifications for the real world is willful ignorance. You don't live alone, stop acting like you do. Other people have a stake too.

        • I didn't say we should ignore the ramifications in the real world. Far from it! I said it shouldn't matter if the people making arguments have a profit motive. We should consider their arguments (including ramifications in the real world), and not what the people making those arguments stand to gain or lose.

          • I think it should matter if people have a profit motive for promoting certain agendas.

            Especially if it is coming from the Venture Capitalist community. These guys are absolute ghouls who have no qualms about giving toxic advice to startups if it meant they could squeeze a bit more money out of them before crashing. To say nothing of how profit-focused approaches to technological development have led to monopolies, surveillance states, and mass concentrations of power in private hands.

          • We don't have time to consider the opinions of every jackass with a keyboard or a microphone. We have to filter the noise and we do that by considering factors such as motivation, qualification, and reputation.
  • Aaaaand... (Score:1, Informative)

    by Anonymous Coward

    the waves of stinky bullshit continue.

  • Puh-lease!! (Score:5, Insightful)

    by Sebby ( 238625 ) on Monday November 20, 2023 @12:57PM (#64018773)

    Misplaced concern about existential risk is impeding the opportunity to expand human potential, writes venture capitalist Vinod Khosla

    Riiiiiiiiiiight! "Human potential" is what he's 'concerned' about.

    Maybe that should be reworded the proper way, which is to say that it's really "impeding the opportunity to expand his investment returns".

    • Sam Altman's net worth is estimated [msn.com] at around $500 million.

      Lots and lots and lots of people here on slashdot argue that the ultra rich, who don't do anything and simply rake in money from oppressed workers, should be somehow curtailed. In situ, "curtailed" can mean shot (and/or eaten, literally), taxed to below the 1% level, put in jail, or socially hounded with pitchforks and torches.

      The 1% level in the US sits at around $5 million. Altman is not only rich, he's ultra rich.

      Isn't this an example of someone

      • Progressive taxation, progressive fines...
        Take Finland as an example.

      • He's ultra rich. This is just another ultra rich guy defending him from the oppressive Board of Directors who are all rich, too.

        The right answer is obviously to kill and eat all of them. There is no distinguishing rule. No one ever got rich by doing anything but oppressing their workers.

        I learned that right here on slashdot. How'd you not?

          • FYI, "text" as a communication medium removes most of the conversational cues of sarcasm. Many people on the geeky-personality spectrum have a hard enough time with sarcasm as it is, but when using a communication medium like text it can become nearly impossible to determine whether you are being sarcastic or wildly idiotic.

          It's easy to believe "only an idiot would think a statement like this is serious," but it is just as easy to believe "the world is full of idiots who would make a statement like that s

    • It's even dumber than that. Whatever "existential risk" is posed by AI is not impeding anything or anybody. Those who are worried about existential risk are academics whining in publications that nobody but other academics actually reads.

  • lol (Score:5, Funny)

    by bhcompy ( 1877290 ) on Monday November 20, 2023 @12:59PM (#64018779)
    A venture capitalist is the last person qualified to chime in on this topic. I say this as a capitalist myself.
  • by Viol8 ( 599362 ) on Monday November 20, 2023 @01:06PM (#64018793) Homepage

    "The best companies are those whose visions are led and executed by their founding entrepreneurs"

    Tell that to SoftBank about Adam Neumann.

    Some people are good with the vision and getting something off the ground but suck at the tedious day-to-day running. Others are good at the tedious stuff but couldn't have a vision short of taking a bag of illegal mushrooms. The sort of person who can do both is quite rare.

    • A hired CEO still needs vision. They don't just push paper. A company with no vision is eventually a dead company.

      • Most CEO vision these days consists of cost cutting or buying out smaller companies to bring in ideas and IP they couldn't think up themselves. E.g. Microsoft.

        • Yup and most of those companies plateau or die.

          • by Viol8 ( 599362 )

            Not while they have something to sell that people want to buy, they don't.

            • Sure that's a plateau. And when some other company with vision figures out how to do it better, they die.

              • by Viol8 ( 599362 )

                Yet there are oil companies 100 years old and doing just fine. Sometimes a product can't be improved to any significant extent, and even if it could, a few billion here and there makes sure your company never makes it.

                But I do appreciate your economic naivety, so refreshing.

                • Oil itself is not the product oil companies are competing on.

                  They are competing on finding, extracting, shipping, and refining oil as efficiently as possible.

                  And these days they've all grown into energy companies where oil is an important but not only source of revenue.

                  I am always happy to educate my Dunning Kruger friends at slashdot. I'm glad you learned something new today and threw away your old incorrect world view.

                  • by Viol8 ( 599362 )

                    "Oil itself is not the product oil companies are competing on."

                    Oil is the vast majority of what they sell. The rest is single digit percentages.

                    "They are competing on finding, extracting, shipping, and refining oil as efficiently as possible."

                    That's called operations.

                    "I am always happy to educate my Dunning Kruger friends at slashdot"

                    It's always nice to be part of a group, isn't it?

  • Amazing how he didn't mention global warming as a large and looming risk.

    - who face risk head on, and who are focused -- so totally -- on making the world a better place.

    what a bunch of bullshit.

    I also think it's hilarious that he thinks removing Altman is setting back progress. Yeah, because nobody else is working on LLMs or can make progress on them.

    techbros like Khosla and Altman are becoming completely intolerable. They are demonstrably making the world a worse place for a buck.

    How I miss the good old

  • If he still is an investor in OpenAI, he would be irritated at his loss.

    But until we have details of *why* this happened, it's pointless choosing a side.

    • If he still is an investor in OpenAI, he would be irritated at his loss.

      But until we have details of *why* this happened, it's pointless choosing a side.

      Assuming he still is an investor in OpenAI Global (the actual for-profit company he would have invested in) he's probably had a lot of the inside details from the board and/or people in the company as to why this happened.

      The not-so-subtle suggestion is the board thought Altman was pushing things too quickly and got very worried about existential risk, and since that was a big part of the mandate of OpenAI they decided to slow things down.

      The one open question to me is about them being "misled" or whatever

      • Those are good questions, but they do not exhaust the possibilities. Consider that Altman was perhaps going behind the board's back, preparing to build his own shadowy company with key employees, replicating the technology through the old USB-stick-in-the-pocket method. I'm sure other slashdotters have seen these kinds of shenanigans first hand in their working lives.
        • by jythie ( 914043 )
          Given how quickly Microsoft picked him and his allies up, it kinda makes it look like he was going behind the board's back and doing something that was to his and Microsoft's benefit over OpenAI's.
    • We pretty much know why, as it was made public. Sam Altman was advancing too fast and the board wanted a conservative approach to assure the safety of AI. He wanted to make a phone and they did not. They were also unhappy that he was not focused on OpenAI, given his other projects like WorldCoin. Also, none of the principals of OpenAI were awarded stock; it was one of the weird ideas about maintaining the moral purity originally intended for OpenAI. Finally, it happened because of the colossal ineptitude of
    • by jythie ( 914043 )
      That is something I find unsettling about this whole drama. Employees and pundits have really rallied around the CEO, even though we don't know why the board fired him, which has a very 'cult of personality' feel to it. People are emotionally invested in the guy as a person, not necessarily what he has been doing as CEO.
  • by gweihir ( 88907 ) on Monday November 20, 2023 @01:10PM (#64018813)

    The current form of AI is not smart, cannot do anything that requires the slightest bit of insight and, on top of that, frequently hallucinates and gives unmitigated bullshit as answers. In addition, it is subject to model collapse and model poisoning, and making it better in one area makes it a lot worse in all others. Yet these people all claim this is a revolution. I fail to see anything like that happening. Yes, the NL interface got a bit better. But quality-wise it is not better than, say, IBM Watson, which is now 13 years old. You know, that Watson that was pulled from doing medical stuff because it occasionally killed a patient in what was probably a precursor of "hallucination".

    So, yes, LLMs are a nice trick, but expecting them to do real work is just completely disconnected from reality. All they can do is make the search engine a bit better in most cases and massively worse in some. That is not revolutionary at all.

    • by r0nc0 ( 566295 )
      Yeah, I'm with you. There's a LOT of willingness or wanting to believe.... something.
      • by HBI ( 10338492 )

        These people have to have a scam because their intent is not to _run_ a business but to elicit outside capital so they can make a profit in a (relatively) slow motion pump and dump scheme.

        AI is great for this purpose. The term has been talked up for years and brings in the stupid money.

        And this was the harm wrought by expectations of getting a higher return than normal business activity would dictate. Most M&A activity should be illegal and pump and dumps should have personal liability for those invol

        • by gweihir ( 88907 )

          The "stupid money". I like that term. Very fitting.

          • by gweihir ( 88907 )

            Also like the idea that this is a "slow-motion pump and dump". Makes perfect sense. For example, having seen some demos of IBM Watson (which is now 13 years old) targeted at experts over the years (so no BS claims of "intelligence" or the like), it could do all that stuff, probably with fewer hallucinations and better results. What it was missing was the pretty universal natural-language interface catering to non-experts that the current AI hype has. That interface is the _only_ advantage I see. Regarding ac

            • by HBI ( 10338492 )

              Every time I compare the LLMs of today to juiced-up Elizas with big data and voice interfaces, I get flak. But it's not completely inapt.

              It reminds me of the crypto flak. Say it was a scam for years, and you'd get a zillion trolls who thought they were going to get rich foaming at the mouth and looking to cut your junk off. Now that essentially no one got rich, it's safe to say it was a scam from the get-go. Give this LLM stuff a year or two.

      • by jythie ( 914043 )
        One of the consequences of periods of unprecedented growth is people expect them to continue forever. Over the last 30 years or so we have seen some massive cultural shifts and their associated economic effects, which made a small number of people very wealthy. Now we have investors who have come to expect this kind of constant explosive growth and they are floundering to try to find the next Amazon or Google that will give them bragging rights to other already obscenely wealthy people. I mean, what ar
    • by quantaman ( 517394 ) on Monday November 20, 2023 @01:46PM (#64018917)

      The current form of AI is not smart, cannot do anything that requires the slightest bit of insight and, on top of that, frequently hallucinates and gives unmitigated bullshit as answers. In addition, it is subject to model collapse and model poisoning, and making it better in one area makes it a lot worse in all others. Yet these people all claim this is a revolution. I fail to see anything like that happening.

      Depends on the field.

      Writing? More work to get it to say what you want than to say it yourself.

      Editing writing? Surprisingly good.

      Coding? Needs guidance, but a huge productivity multiplier when used properly.

      Illustration? The one field where it's potentially putting a lot of people out of work.

      But quality-wise it is not better than, say, IBM Watson, which is now 13 years old. You know, that Watson that was pulled from doing medical stuff because it occasionally killed a patient in what was probably a precursor of "hallucination".

      I never heard about that. I think the fundamental issue is that Watson was a less capable model than current LLMs and it was going into a field where current practitioners are extremely well trained.

      Basically weaker tech trying to do something extremely hard (outperform doctors).

      So, yes, LLMs are a nice trick, but expecting them to do real work is just completely disconnected from reality. All they can do is make the search engine a bit better in most cases and massively worse in some. That is not revolutionary at all.

      Well, a lot of people are successfully using them to do real work, so I don't think that we're the ones disconnected from reality.

      • Illustration is no different than writing. Good illustrators can get things done faster themselves than getting the AI to produce exactly what they want consistently.

        Incidentally, people who are weak at reading are impressed by anything that looks like words they've seen before, and people who are weak at visual communication are impressed by something that looks like an image they've seen before. Such people are very glad to use AI tech and think everyone should therefore use it too.

        • That's not the bar.

          The question is whether an illustrator can do more work with or without it. And they can do more with it.

          • by gweihir ( 88907 )

            But will it be the same quality? Will the illustrator get the same benefits as when doing it conventionally? Having now seen quite a bit of AI-generated illustration, the answer to the first is a resounding "no" and the answer to the second is at the very least "unknown". Maybe if the illustrator trains a model on their very own specific style, this will get better, but while reaction time (latency) will probably get better, total effort may or may not.

            • But will it be the same quality?

              It may even be higher quality.

              Will the illustrator get the same benefits as when doing it conventionally?

              You can use it as a basis, then redraw and/or inpaint the problematic parts. Complex art is already done in many layers and steps. Using these tools just reduces the number of steps to a complete image.

              Having now seen quite a bit of AI-generated illustration, the answer to the first is a resounding "no" and the answer to the second is at the very least "unknown".

              The software doesn't produce the finished image on its own! A human does some of the work in more conventional ways, although some of that will come down to Photoshop (etc.) rather than drawing it from scratch.

              Maybe if the illustrator trains a model on their very own specific style, this will get better, but while reaction time (latency) will probably get better, total effort may or may not.

              Of course it will. Even in the hands of an amateur the software can g

        • Ah, the no-true-Scotsman approach: "no TRUE artist would benefit from AI tools".
          I call bullshit, based on the fact that I know professional artists who are using AI tools to make their work better and faster, and they made great art both before and after they started using AI.
          The AI tools have come a long way in the last year, and it's not just "type in a description and hope you get something close to what you want". Nowadays the artists start with a sketch, then use tools like ControlNet to guide the dif
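
          For the curious, here is roughly what that sketch-guided step looks like in practice -- a minimal sketch assuming the open-source Hugging Face diffusers library and a scribble-conditioned ControlNet; the model names, prompt, and file paths are illustrative, not anything the poster specified:

          import torch
          from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
          from diffusers.utils import load_image

          # The artist's rough sketch constrains composition; the text prompt
          # steers style and content.
          controlnet = ControlNetModel.from_pretrained(
              "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
          )
          pipe = StableDiffusionControlNetPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",
              controlnet=controlnet,
              torch_dtype=torch.float16,
          ).to("cuda")

          sketch = load_image("rough_sketch.png")  # hypothetical input file
          result = pipe(
              "ink-and-watercolor illustration of a lighthouse at dusk",
              image=sketch,
              num_inference_steps=30,
          ).images[0]
          result.save("draft.png")  # then redraw/inpaint the weak parts by hand

          The output is a draft layer in a multi-step process, not the finished piece.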

      • by Okian Warrior ( 537106 ) on Monday November 20, 2023 @03:47PM (#64019247) Homepage Journal

        Depends on the field.

        Writing? More work to get it to say what you want than to say it yourself.

        Our brains have two channels of information: incoming and outgoing ("afferent" and "efferent" are terms for the neurons involved).

        These channels are distinct, and typically one side is underdeveloped: you can receive information and completely understand it, but not be able to send it out to others. This is why lots and lots of people have discovered and said "you only learn something by teaching it". Successful teaching requires you to sort out all of your unrecognized gaps in knowledge.

        As an example of this, when you have a moment (such as commuting) pick any object - any object whatsoever - and try to talk continuously about that object for 60 seconds. Say anything you like, rambling on without repeat about that object. Talk about its color, history, position, size... anything you like.

        See if you can keep that up for 60 seconds.

        Most people can listen for hours on end and understand what they hear, but turning that around is usually difficult because their outgoing channels are not as well developed. Given some time and thought, you could easily write a 1-page script that would take a full minute to read, but doing this extemporaneously is initially quite hard until you get a bunch of practice.

        So in the case of AI, lots of people "have an idea" of what they want to convey, but don't have the right words for it. Taking time to guide and modify the input prompt lets them turn this around: they can keep modifying the inputs until it "sounds right" and the input channel matches their internal concept.

        Yes, it probably takes longer than a fluent speaker composing the words from scratch, but learning to be fluent takes an enormous amount of up-front time to begin with.

        The same goes for graphic art: I can (for example) form concepts of cartoons and describe them in text, but don't have the skill to draw them out. By iterating over text descriptions, I can get the AI to zero in on the cartoon I want using the input channels instead of the output channels. Learning to draw well takes a lot of up-front time.

        Expect to see a lot of really creative people using AI. The people will supply the ideas, but the AI supplies the expertise needed to complete the job.

        • by gweihir ( 88907 )

          I somewhat agree, although the thing is there is an "information keeping" element in the middle. Just listening to things goes into memory badly interconnected with other information and may not even be retained long-term. Speaking/writing about something requires the information to actually be there in the first place and to be well cross-linked with other information. That is why I recommend to my students to write a summary of the whole lecture as exam preparation, even if they cannot use that summary in

    • We know that you are going to keep repeating the same claim over and over and over again, even after AI takes over the world and fundamentally changes everything about society.

      Why not save yourself some work? Just write up a detailed blog post about all of your claims, and then share the link every time you have something to say about this topic. Heck, I hear there's even a few online services that can write your blog post for you.

  • This chap sounds like the sort who would build a deep sea submarine out of carbon fibre, and then moan about regulations that prevent such shenanigans.

  • by kaatochacha ( 651922 ) on Monday November 20, 2023 @01:11PM (#64018823)
    Am I the only one who finds it weird that Altman's sister accused him of some pretty heinous things, and there's only crickets?
    Not "She's crazy!", not "Perhaps she has a point and we should check into this?", but no mention whatsoever of her accusations? Just ... nothing?
  • Just imagine where we would be today if we hadn't been able to use innovations like leaded gas and asbestos. If a little more time had been spent making the Internet safer and more secure, we might not be seeing multiple announcements every week about companies getting hacked. Imagine being the test engineer who discovers a critical and systemic security risk that will prevent an AI from going live and making billions if not trillions of dollars. What do you think will happen next?
    • The benefits from leaded gas and asbestos were enormous. The lives shortened were a minor loss by comparison. Asbestos was incredibly valuable in the age of steam, and aside from its risks when friable it is a durable, immensely useful material. For example, my house has asbestos siding from 1965, and the stuff doesn't decay or corrode and is waterproof. I don't chew on it, so it simply isn't a hazard to me.

      Leaded fuel enabled the higher octane avgas key to winning the Second World War.

      Every choice has a cost. Delay

      • This is exactly why we need to be careful about AGI. When humans have such disregard for other human life, computers will learn from that. They may learn that humans are expendable altogether, since that is a common theme on the internet.
      • For that matter, the information about radiation harms suppressed by the USG after Hiroshima and Nagasaki falls into this category as well.

        Hell, what about the OPM hack, where the entire personnel database of the USG was exfiltrated, presumably by China?

        Because all Luddite fears are grossly overblown.

  • by ceoyoyo ( 59147 )

    That is an impressive pile of bullshit.

    OpenAI was founded as a non-profit with the goal of making AI advancements generally available, specifically to head off profit-motivated companies keeping it all proprietary. OpenAI specifically has limits on the profit any investor in its for-profit subsidiary can make.

    Sam Altman didn't "put it all on the line." He's a super-rich dude who doesn't have any equity in OpenAI, except indirectly through a small investment by Y Combinator.

    Despite the Ubers and AirBnbs pa

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Monday November 20, 2023 @01:42PM (#64018899) Homepage Journal

    LLM "AI" is completely incapable of providing safe medical advice. It has no concept of what a disease is, or of what symptoms relate to what underlying cause, it merely knows what words are often used together. And conspiracy theorists are FAR more common on the Internet than actual medical practitioners. In consequence, very dangerous "advice" is statistically FAR more common than sensible advice.

    LLM "AI" is equally incapable of being a tutor, for much the same reason. It knows nothing about any subject, it doesn't understand how anything works, it doesn't comprehend anything, it's merely a statistical calculator. And, again, conspiracy theorists are FAR more common than intellectual sources. In consequence, total nonsense and very dangerous perversions of knowledge are statistically FAR more common than rational understanding.

    Of course, we're dealing with a Vulture Capitalist here, not an intelligent, rational human being. In fact, I'm not entirely convinced VCs are any sort of human. They're statistical calculators, much like LLMs.

    Actually, that would be a GREAT way to test the sincerity of this VC. Create an LLM that can process corporate plans and decide which ones to invest in. BS plans will show this through language that a human VC could easily be fooled by, but an AI isn't processing the actual words, merely the relationships between words, and as such can't be bamboozled. My suspicion is that an LLM would do FAR better at VC-style work than any human.

    So build an AI VC, get a sponsor to give it some seed money to speculate with, and use its success rate to prove that it is a cheaper, more effective, strategist than a human. My guess is that Vinod Khosla would find it highly objectionable and a potential threat. It's only good, in his eyes, when it earns him money through endangering other people's jobs. When it's his job that's on the line, I doubt very much he'll take the same line.
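
    If anyone actually wants to run that experiment, here is a toy sketch of the "LLM as VC" screener, assuming the OpenAI Python SDK; the model choice, rubric, and sample pitch below are made up for illustration:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def score_pitch(plan_text: str) -> str:
        """Ask the model for a 1-10 rating of a business plan plus a rationale."""
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model choice
            messages=[
                {"role": "system", "content": (
                    "You are a dispassionate venture analyst. Rate the following "
                    "business plan from 1 to 10 and give a one-sentence rationale."
                )},
                {"role": "user", "content": plan_text},
            ],
        )
        return response.choices[0].message.content

    # Hypothetical pitch; feed it real plans and track the hit rate over time.
    print(score_pitch("We will disrupt pet grooming with a blockchain loyalty token."))

    Seed it with some play money, log its picks, and compare the returns against the humans.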

  • capitalism doesn't do free.
    • He said "imagine free...". Can you imagine it? Do you see it in your mind's eye? What would it be like? Turn it around in your mind. Think of the possibilities! What will you do when you have it? What will your children do? Your wife is happy that you have it, no? Your life is better now. The future is yours today. You can't do without it. But it's not quite free. What can you pay for it now? This is how much I want, give me your money.
      • you ain't doing billionaire technocrat correctly -- it's everything you have and ever will have, with your descendants indentured for all time.* (*and anything else they can think of)
  • There was no reason to be terribly concerned about any of these things until he had to go and say it! We're fucked in so many ways now....

    "But humanity faces many small risks. They range from vanishingly small like sentient AI destroying the world or an asteroid hitting the earth, to medium risks like global biowarfare from our adversaries, to large and looming risks like a technologically superior China, cyberwars and persuasive AI manipulating users in a democracy, likely starting with the U.S.'s 2024 elec

  • > Imagine free doctors for everyone and near free tutors for every child on the planet. That's what's at stake with the promise of AI.

    So, Vinod Khosla 'invested' in a company but wants that company to be giving things away? Of course that is not why he 'invested'. He must think everyone is stupid and does not see right through him.

    And they greatly over-estimate the utility and impact of their tech.

  • Anyone concerned about x-risk from superhuman AI should look to OpenAI as a cautionary tale about the futility of arguing that AI could ever be controlled or contained.

    Humans are routinely socially engineered by their intellectual peers, and organizations can't even protect themselves from the corrupting influences of power and money. OpenAI became the opposite of what it intended to be. The notion that they could control AI when they can't even control themselves is absurd.

  • OpenAI isn't everything; the investors might just get paid later than they wanted. They should be less greedy.
  • Goodness! It sets back the promise of AI? What promise is that? There is no promise. And especially no promise in an LLM that relies on low-paid labour to work in the first place and uses up a lot of energy -- for what? Mechanized hallucinations.

    Can this AI stuff explain to me WHY it tells me something? Can it explain to me HOW it reached the conclusion it had to say that? NO and NO. So yes, it looks impressive, but in the end it's a load of bollocks. Smoke and mirrors. Hype.

  • I signed the GWWC pledge to give 10% of my income to charity for the rest of my life. I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare) so I decided I would no longer donate 10% tithing to missionary work, Books of Mormon, temples and the like. Now I give instead to things like cost-effective malaria nets [givewell.org] which save one child's life per $5,500 spent, encouraging clean energy R&D [ea.do], and with this whole AI thing heating up I woul

    • I used to have a religion, but I found out it was false (any LDS/Mormon folks reading this can ask me how I know, if they dare)

      Ok, how do you know?

      • by Qwertie ( 797303 )

        Sure. I would refer you to this summary of my own story [lesswrong.com] (watch out for the part about the CES letter).

        If you are LDS, you may prefer a less dry/reductionist (and more gradual/meandering/detailed) approach to this topic. In that case, please watch this [mormonstories.org], or possibly this [youtube.com].

  • by nicubunu ( 242346 )

    Even if true, OpenAI is just a single actor in a market full of players trying to advance AI, so even if OpenAI stumbles, the others can carry on the progress.

    Also, does this guy really say we should get medical advice from internet bots?
