Comment Re:4GB has been insufficient for many years now (Score 1) 22
I'm on Ubuntu 26.04, writing this using Firefox, and I am currently using a little under 2 GB of my RAM (16 GB in total). I'm not sure how much is used during boot.
I think a better term than "independent thinker" is "critical thinking", where the point is not doubting others but doubting yourself. Do I have enough information about this subject? Could I be wrong? Are my arguments flawed?
Do you avoid random number generators just because they are not actually random? No, you just take that into account when using them. There are methods that allow you to get a million answers from the AI in a sequence without a single mistake. ( https://www.youtube.com/watch?... )
"Thinking about God increases acceptance of artificial intelligence in decision-making"
https://pmc.ncbi.nlm.nih.gov/a...
Estimate that about 1.2 trillion humans will be born in the future, and note that the US government values a single life at about 10 million dollars (judging by what it is willing to spend to save one life). Multiplying these values gives a rough estimate of what saving humanity is worth: about $1.2 x 10^19, or 12 quintillion dollars. Considering that Musk's main goal is to populate Mars to preserve humanity (and he is pretty much the only one who actually tries to implement it), I think that is what Musk is worth.
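As a rough back-of-the-envelope check of that multiplication (using the two figures above, both of which are just the estimates stated in the comment):

```python
# Commenter's estimate of humans yet to be born
future_humans = 1.2e12

# US government "value of a statistical life" figure cited above, in USD
value_per_life_usd = 10e6

worth_of_saving_humanity = future_humans * value_per_life_usd
print(f"${worth_of_saving_humanity:.1e}")  # ~$1.2e19, i.e. 12 quintillion dollars
```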
I actually ran usability tests for Gimp because I assumed it would have horrible usability issues. To my surprise there were none: it was easy to use both for an experienced Photoshop user and for a user new to photo editing. The tests were obviously tailored to each use case, so the new user was not asked to do complex things, but the experienced users were.
Here is a remote job opening for a neurosurgeon: "... is seeking a BC/BE Neurosurgeon to provide telehealth consultations for patients with neurological and spine-related surgical conditions."
https://www.indeed.com/viewjob...
I only ask AI to write throwaway code which I wouldn't write at all without AI. With production code, I don't ask AI to write anything; instead I just ask it questions like: "I upgraded this library from version A to B and now things broke, what could be the reason?" Another use case is to paste an error message from the logs into the AI, and it can usually explain pretty well what is going on and how to fix it.
What about the cases where AI helps you solve something you could never solve without it? How do you qualify that?
https://www.bbc.com/news/scien...
FrontierMath shows that AI capability has risen from about 10% (Jan 2025) to about 40% (Jan 2026), and it currently stands at about 50%.
https://epoch.ai/frontiermath/...
I think it would be fair to wait a year and then try to figure out whether progress is slowing down or not. Between 2025 and 2026, progress was not slow. I agree with you that OpenAI won't be leading the scoreboard a year from now (because of scaling problems), but I am very interested to see where Gemini will be.
Google's strategy was to keep it in the lab, because they didn't need money and they were winning. The best strategy for them was to just keep moving forward while others were sleeping.
OpenAI's strategy, on the other hand, was to publish old technology as new, make bold claims about AI, and attract investors' money. OpenAI needs money because they don't have much skill of their own, and the only way they know to improve AI is to throw more hardware and data at it. For them, this strategy was optimal.
Because OpenAI went public, Google has to follow, just to show the world who the true leader is.
Because Google went public, OpenAI is now starting to look pretty bad in comparison. This might not be obvious yet to the general public, but to those who understand the scaling problem and look at the results, it should be visible already. As OpenAI hits the limits of hardware, they can no longer compete against Google. I expect they will crash within a few years due to running out of funds.
About the job loss, I am 100% certain that it will happen, but I think it is unlikely that all workers will lose their jobs. I think we will see something like 50% of doctors lose their jobs after something like 20 years, because it will take a very long time to get approval for new systems in the medical field, and most likely we will still need people who can evaluate these tools. But even if some jobs remain, we will lose so many that we will need some kind of reforms.
If prices are higher than production costs, they will come down at some point, because a price above cost means extra profit, and every manufacturer wants to capture as much of that as they can. So RAM prices will come down.
Here is how to fix the AI so it doesn't do that:
https://www.anthropic.com/rese...
Here is a Two Minute Papers video explaining it:
https://www.youtube.com/watch?...
No it isn't. Just look at how DeepMind does their work. They have had several clear minor goals along the way. For example, they first learned to play old games, then Go, and then StarCraft; then they turned their attention to real-world problems like protein folding. They worked on protein folding for two years and finally solved it.
On the other hand, if they buy from different companies, those companies get money and that money can be used to improve the product.
Is a computer language with goto's totally Wirth-less?