Comment Re:Psilocybin? (Score 1) 4
-- Nancy Reagan
"They run as a rectangular banner at the bottom — part of a widget that also shows news, the weather and a calendar. "
False, they run in an optional screen saver, not when the display is being used for interaction. It's not a "widget", it's a screen saver, and only a transitional one.
"...and that overall pushback has been negligible."
Right, because the ads run in a screen saver, and no one sees the screen saver. They've walked away already.
"Bosworth thinks it's wrong to take away the new feature as a condition."
The screen saver is NOT a "new feature", and most users aren't aware it exists. I personally have not seen it.
"Wanting to keep the widget but not the ads..."
He wants to see an animation for 2 minutes before his screen goes dark? After he's done using the fridge? Please.
"He hasn't seen another since."
I've never seen one at all. That's because I use the fridge and then leave, just like everyone else.
"One 27-year-old plans to return his refrigerator after the entire display "lit up with a full-screen ad for Apple TV's sci-fi show Pluribus," according to the article. The all-caps ad beckoned him "with an oft-used refrain directed at protagonist Carol Sturka: 'We're Sorry We Upset You, Carol.'""
Doubt it. This article is really trying to lie to you.
funny how this
There's nothing wrong with using AI tools to review code and identify issues; real humans will review those issues and solutions after all. It's a far cry from what the AI industry claims AI tools will be useful for, namely writing all the code in the first place.
Writing good code requires creativity, hard work and accountability; reviewing code is all over the map: it doesn't require creativity and doesn't come with accountability. Sounds like something AI might be suited for.
The facts are _really_ clear: The current factorization record without trickery and deception is 21. Might as well predict that "magic" will break classical crypto in any meaningful time with about the same level of justification.
Well, my current estimate is +5 effective qubits every 50 years. That linear scaling may be massively overestimating things; chances are the real scaling is inverse exponential, but let's assume it is linear for the moment. RSA-130 needs around 450 effective qubits in a long calculation. We are currently able to factor 21, i.e. 5 bits. Hence we may see RSA-130 fall to a QC in something like 4500 years.
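Just to spell out that arithmetic, here is a throwaway Python sketch; the 450-qubit requirement for RSA-130 and the +5-qubits-per-50-years rate are simply the assumptions stated above, not established figures:

    # Hypothetical linear extrapolation, using only the assumptions stated above
    current_qubits = 5        # factoring 21 corresponds to about 5 effective qubits
    target_qubits = 450       # assumed requirement for RSA-130
    qubits_per_year = 5 / 50  # assumed gain of +5 effective qubits per 50 years
    years = (target_qubits - current_qubits) / qubits_per_year
    print(years)              # 4450.0, i.e. roughly 4500 years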
I have absolutely no problem with QCs as physics experiments and for advancing some areas of Math. But pushing them as future computing mechanisms is dishonest and should count as scientific misconduct.
And that is just for them to break even. Does anybody think they can push ad revenue 150 times higher?
Indeed. Also note that one flavor of "basically no progress" can still be a lot faster than another flavor of "basically no progress". At the glacial pace at which QCs are progressing, and with the laughably low performance they currently have (factoring 21 after 50 years of research, seriously???), relative speeds are strongly subject to meaningless artefacts.
We are not going to get AGI this century. The people who claim that are either lying (Altman) or delusional. AGI is not a question of throwing more computing power at the problem. Something fundamental is missing and we have no idea what. Also note that most humans may not actually have any meaningful amount of general intelligence. Only about 10-15% are independent thinkers who can fact-check, and that is basically what AGI would need to be able to do to qualify. Unless we find out a lot more, we cannot even make predictions on whether machines can have AGI.
Now, given that state of affairs and tech history, this indicates we are at the very least 100 years away. And that is if we get a credible and practical theory of how AGI works tomorrow. The one mechanism we have that is AGI (automated theorem proving) does not scale at all in practice due to exponential effort, and that is a hard limit. We do not have any other mechanisms. And some quasi-mysticism like "put in all human knowledge and AGI will result" is just bullshit and has no scientific value.
Yes. Not quite there, and it may take another 20 years or so, but I had an opportunity to see where they were 35 years ago, and they were already deep in the details back then. But the thing is, self-driving is a classical problem, and classical problems can be divided up, parallelized, and have special cases and maps put into databases, etc. Self-driving is conceptually _easy_. The practical aspects are not. None of that is true for Quantum Computations. Quantum Computations are all-or-nothing and you cannot break them down into smaller parts.
That said, AGI is still completely out of reach and may not even be in reach of machines in this universe. There is far too much unknown to even credibly speculate. Going to Mars might be possible at this time, but you go there to die. Colonization is at least 100 years away and makes no sense. "Colonizing" the deserts and oceans on Earth would be far, far easier, and I do not see anybody doing that...
QCs exist. With extreme effort and some trickery, they can even factorize 21 now (35 is still a fail at this time). That is 5 effective qubits in a somewhat complex computation. It makes for a nice physics experiment. But that is after about 50 years of research. And it looks very likely that QC effort scales exponentially in two dimensions of the size of the computation (qubits and steps in the computation). Hence, if we progress at this speed, we may be able to factor 10-bit numbers with a QC in, say, 50 years. The current recommendations for RSA keys are 2048 bits. That needs about 7000 effective qubits to factor. If we assume the current scalability (+5 effective qubits every 50 years) continues, a current RSA-2048 key will be within reach in about 70'000 years.
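The same kind of back-of-the-envelope extrapolation for RSA-2048, as a Python sketch; the ~7000-qubit requirement and the linear +5-per-50-years rate are the assumptions above, nothing more:

    # Hypothetical linear extrapolation for RSA-2048, per the assumptions above
    current_qubits = 5         # factoring 21 corresponds to about 5 effective qubits
    target_qubits = 7000       # assumed requirement for RSA-2048
    qubits_per_year = 5 / 50   # assumed gain of +5 effective qubits per 50 years
    years = (target_qubits - current_qubits) / qubits_per_year
    print(years)               # 69950.0, i.e. roughly 70'000 years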
The whole thing is nice for Physics, but completely meaningless for Computer Science.
The sad thing is that at this time, disabling the warning is probably the most secure thing to do. Of course, that comes with other problems.
AES-256 will remain quantum resistant forever. QCs only get you a halving of the bits for block ciphers. Hence AES-256 gets you a computational safety of 2^128, and that is unbreakable in this universe, even more so with dog-slow QCs that cannot do long computations and are about the most unsuitable mechanism for brute-forcing anything imaginable. The real threat to AES is conventional attacks getting within reach (reducing the effective key length to something like 80 bits), but AES is built on top of half a century of research and has survived very well for 25 years now.
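To put that 2^128 figure in perspective, a rough Python sketch; the guess rate below is an arbitrary, deliberately generous assumption for illustration, not a measured number:

    # Rough brute-force time for a 128-bit effective key space
    keyspace = 2 ** 128             # AES-256 after a Grover-style halving of the bits
    guesses_per_second = 10 ** 12   # assumed, deliberately optimistic guess rate
    years = keyspace / guesses_per_second / (3600 * 24 * 365)
    print(f"{years:.2e} years")     # ~1.1e19 years, vs. ~1.4e10 years since the Big Bang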
Also take into account that breaking, say, RSA-2048 needs a long and complex computation with about 7000 fully entangled effective qubits. The current factorization record for QCs is 21 (when you discount trickery and deception, and even that was not with the general algorithm you need for any real factorization). That is 5 effective qubits. After about 50 years of research. The whole thing is a total non-starter as a computing mechanism. It is interesting for other reasons, namely to check quantum theory at precisions never reached before (which should happen at around 60...100 effective qubits, and that may be within reach), but it will not be a useful computing mechanism, ever, unless we find some fundamental, and at this time completely unknown, loopholes in quantum reality and essentially break Quantum Theory. That is a possibility. Nobody knows whether it is a likely one or not.