Comment Re:failed a polygraph needed for classified access (Score 1) 13
It's a self-fulfilling prophecy. They met the requirements because they passed the silly test.
You forget that Microsoft built itself on its addictive developer network.
What was addictive about it?
I just use Proton for everything. I'm on the Visionary plan with 6TB of storage. I also bought my own domain, so even if they decide to kill off my account for whatever reason, the domain remains mine, along with any email addresses. Proton also includes an AI chatbot, and so far it works for the only thing I use other chatbots for: searching the internet, because Google has turned its core product into shit over the years. I don't know what plan you need to have access to it though.
There are other sites that host free models besides huggingface, e.g. civitai has a bunch of image models.
That depends on what exactly you intend to do. Producing tensor models is going to be out of reach of most people, simply because the training data you need is generally quite vast, and it's a huge undertaking to gather all of that. Then the hardware/time required to process it all is even more onerous. But once the tensor model is built (trained), gaming GPUs are generally fast enough to generate content locally at an acceptable speed. They're a bit slower than the dedicated ASIC (read: Tensor Processing Unit) and/or FPGA hardware that the big AI companies use, but far from impractical. And as GPUs get better, or with a dedicated TPU as in the case of Pixel phones, it will only get faster.
If you've ever done algebra and used a TI graphing calculator, you've probably seen e.g. the linear regression, exponential regression, sinusoidal regression, or other functions that spit out an algebraic function (i.e. a "model") that you can then use to extrapolate/interpolate additional data points. This is the same basic principle behind AI.
There are a lot of pre-trained models available on sites like huggingface that you can just download and use. Take this one for example:
https://huggingface.co/RomixER...
If you want it censored, you'll need to run it through another model trained to censor images. See https://huggingface.co/models and click on tasks. Toy around with these models enough and you'll get a better understanding of how they work and where the limitations lie. Chatgippity, and to an even greater extent the AI hypists, have more or less turned AI into a big enigma that it is not.
My understanding of OpenAI is that they were supposed to be a nonprofit that would research, develop, and publish models, generally available to the public for free, but they changed to a for-profit model and stopped publishing them. That's why they're being sued by Elon Musk, who was one of the original donors; if I were him, I'd probably want my money back too if it just went into somebody else's pocket in the form of private-equity shares. Either that, or force them to make the models public again, though then the more recent investors would probably sue the company for tearing down its only moat.
Or maybe, instead of going full-Big-Brother and doing away with privacy, personal control of our own hardware, and freedom of speech and expression for EVERYONE; you could do one of two things instead:
1) Vote your own trumpscum out of office and make sure their replacements are better.
2) Just wait a few years until the trumpscum are gone and the country is run again by people who've read the constitution and watch these local yokels get slapped down by the interstate commerce clause.
Like Mao and Stalin, AI-companies argue that not all truths need to be correct, just consistent.
I don't know what they said about it, but this isn't necessarily wrong. This is something I've seen Neil deGrasse Tyson mention in his lectures and YouTube videos:
https://ryanemorgan.substack.c...
And I would have to concur with his take.
Worth noting
I haven't personally witnessed this. In fact, if I get the impression that a candidate doesn't like the kind of work they're going to do and they just want the pay, then I'll give them the thumbs down, and all it takes is for just one of us to do that, and they don't get hired. I believe everybody on my team would do the same. The last thing we need is a new hire that quickly burns out.
Besides, if we're repeatedly doing something that we don't like doing, our first instinct is to automate it. Though I work for one of those companies who would rather build than buy.
What they're doing is dropping the amount of RAM in the budget models to crazy-low levels. My laptop died due to a motherboard problem (the RAM test was fine), so I just bought a new laptop of the same series, which has a better processor and GPU but only 12GB of RAM. So I'm going to try taking the 32GB of RAM out of my old laptop and putting it in the new one. It *should* be compatible.
I bet there's a good market right now for people buying up "broken - for parts" computers to strip the ram out of them.
ED: Forgot to post this when I wrote it. Installed the old RAM, and at least thus far (fingers crossed) it seems to be working well...
Seems kind of like asking "why learn arithmetic when you have a calculator?"
Much in the same way that math education shouldn't train people to be human calculators.
But this has been the status quo in "programming" for a long time now. If AI changes anything in the long term, it will only change how you solve the problem, with or without a calculator in your hand.
Whatever you think of their honesty, the phrase has a single meaning, which is "we're not going to do it" with the additional pretty obvious inference "because it is unethical".
Your post isn't merely pedantry, it's just willful denial of what ultimately is a very clear and unambiguous statement.
And the 77 million people who are 100% A-OK, most of them enthusiastically so, with masked thugs rampaging in the streets kidnapping and murdering, women reduced to handmaids, LGBT people denied civil rights and even health care, being a bitch for Putin, massive graft and corruption, and raping children...
I won't.
That's about 340 million people.
1. Force employees to use AI
2. Brag to press how AI is making co. lean & profitable
3. Investors fall for it, thinking co. is cutting edge
4. PROFIT!
Thai foodie: "The eels were delicious! 5 Stars!"
That's okay, Trump will invade Scotland and force them to speak English.
... and perhaps 30 percent never became pregnant
Correct. See my reply further downthread.
Doubt is a pain too lonely to know that faith is his twin brother. - Kahlil Gibran