Comment Re:Hardly surprising (Score 1) 163
That's a nice way to oversimplify a complex issue.
Gut bacteria play a large and poorly understood role in weight gain. There are plenty of anecdotes about people whose weight-gain patterns changed dramatically after illness or medical treatments such as fecal transplants.
I'm glad I don't live in the same bizarro universe that you do.
Thank you for this breath of fresh air. I'm tired of hearing the simple refrain about "calories in vs. calories out".
Sellers need to band together and file lawsuits against Amazon for copyright infringement regarding the photos and product description text that Amazon is pilfering from their websites. There are probably several other potential legal challenges they could mount as well, related to deceptive marketing and sales.
His first objection: if AI can truly do everything, then everyone can have everything they need, making the question of who owns the robots somewhat moot.
What kind of brain-dead reasoning is this? The question of who owns the robots is directly correlated to who will profit by providing services to others. The more necessary the product, the larger the profit margins will become. Big oil, anyone?
This submission reads like a puff piece for big tech. It's easy to see how detrimental big tech products are to society at large. Ask yourself: if we magically rolled back technology to 1985, would the world be mentally worse off? Yeah, right.
We Americans shout and cheer for unrestricted, unfettered, unregulated, uncontrolled capitalism.
Actually, it's people who are either ignorant or greedy who do this. America just happens to have plenty of people who fit the description.
That may sound complicated for non-techies, but it's really not. A few Google searches - and some actual reading :-) - are all you really need for almost any Linux question.
You have no idea how technologically tone-deaf and entitled you sound. Many computer users can't tell the difference between RAM and hard drive space, or AMD vs. Intel vs. Snapdragon, or macOS vs. Windows, or Android vs. iOS. They have no patience or willingness to learn about something that is only meaningful to nerds.
A large percentage of users can't tell the difference between a web page and an application on their computer. I think you underestimate the mental friction that is required for people to figure things out for themselves.
You apparently spend a lot of time caring about things that are not meaningful.
When you die, are you going to care that you saved 2 microseconds because your OS had one copy of a third-party library, and not two copies?
If you are really this productive using AI tools for software development, you could probably make even more money than you do presently by making video courses on how to achieve the results that you have. Because for a large majority of software developers, AI models pose more problems than they solve. Browse this thread to look for examples, including: hallucinations of API methods that don't exist; conflation of two separate technologies that share a common terminology; lack of coherence when asking the AI model to construct anything more complicated than a simple CRUD operation; AI model preference for counterproductive solutions, such as deleting problematic code instead of fixing it; mixing of methodologies, design patterns, and mental models in a way that results in extremely inconsistent software; generating large amounts of low-value artifacts, such as a hundred unit tests that fail to achieve as much coverage as ten human-authored unit tests; etc. etc. etc.
The problem with this line of thinking is that you are ignorant of the fact that we CAN say what is not thinking, and we've narrowed down the problem quite a bit.
We've not narrowed it down nearly enough to determine which portions of LLM behavior are and are not thinking.
Are houses as easily produced and sold as video cards?
Will that make it stop checking whether the value I clearly defined is null on every freaking line?
What, was the LLM trained on Go?
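For anyone who missed the joke: idiomatic Go re-checks the error result after every single call, which looks a lot like an LLM defensively re-checking a value for null on every line. A minimal sketch of the pattern (the `lookup` helper and `cfg` map are made up for illustration):

```go
package main

import "fmt"

// lookup returns the value for key, or an error if the key is absent.
func lookup(m map[string]string, key string) (string, error) {
	v, ok := m[key]
	if !ok {
		return "", fmt.Errorf("missing key %q", key)
	}
	return v, nil
}

func main() {
	cfg := map[string]string{"host": "localhost", "port": "8080"}

	host, err := lookup(cfg, "host")
	if err != nil { // check #1
		fmt.Println("error:", err)
		return
	}
	port, err := lookup(cfg, "port")
	if err != nil { // check #2, identical in shape to check #1
		fmt.Println("error:", err)
		return
	}
	fmt.Println(host + ":" + port)
}
```

Train a model on enough of that and "check the thing after every line" is exactly the habit it picks up.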
You are always doing something marginal when the boss drops by your desk.