Comment Re:buying stuff with ChatGPT? (Score 5, Funny) 49
I bought a nice pair of gloves this year with ChatGPT, but when they arrived I found that they each had six fingers.
Investment is a tricky one.
I'd say that learning how to learn is probably the single-most valuable part of any degree, and anything that has any business calling itself a degree will make this a key aspect. And that, alone, makes a degree a good investment, as most people simply don't know how. They don't know where to look, how to look, how to tell what's useful, how to connect disparate research into something that could be used in a specific application, etc.
The actual specifics tend to be less important, as degree courses are well-behind the cutting edge and are necessarily grossly simplified because it's still really only crude foundational knowledge at this point. Students at undergraduate level simply don't know enough to know the truly interesting stuff.
And this is where it gets tricky. Because an undergraduate 4-year degree is aimed at producing thinkers. Those who want to do just the truly depressingly stupid stuff can get away with the 2 year courses. You do 4 years if you are actually serious about understanding. And, in all honesty, very few companies want entry-level who are competent at the craft, they want people who are fast and mindless. Nobody puts in four years of network theory or (Valhalla forbid) statistics for the purpose of being mindless. Not unless the stats destroyed their brain - which, to be honest, does happen.
Humanities does not make things easier. There would be a LOT of benefit in technical documentation to be written by folk who had some sort of command of the language they were using. Half the time, I'd accept stuff written by people who are merely passing acquaintances of the language. Vague awareness of there being a language would sometimes be an improvement. But that requires that people take a 2x4 to the usual cultural bias that you cannot be good at STEM and arts at the same time. (It's a particularly odd cultural bias, too, given how much Leonardo is held in high esteem and how neoclassical universities are either top or near-top in every country.)
So, yes, I'll agree a lot of degrees are useless for gaining employment and a lot of degrees are useless for actually doing the work, but the overlap between the two is fuzzy at times.
The degree isn't about "getting a high-paid job", it's about knowing what the hell you're doing once you get a job. Although, fair enough, it's quite plausible that not many degrees would meet that standard either.
There is a possibility of a short circuit causing an engine shutdown. Apparently, there is a known fault whereby a short can cause the FADEC to "fail-safe" to engine shutdown, and this is one of the competing theories, as the wiring apparently runs near a number of points in the aircraft where water is present (which is a really odd design choice).
Now, I'm not going to sit here and tell you that (a) the wiring actually runs there (the wiring block diagrams are easy to find, but block diagrams don't show actual wiring paths), (b) that there is anything to indicate that water could reach such wiring in a way that could cause a short, or (c) that it actually did so. I don't have that kind of information.
All I can tell you, at this point, is that aviation experts are saying that a short at such a location would cause an engine shutdown and that Boeing was aware of this risk.
I will leave it to the experts to debate why they're using electrical signalling (it's slower than fibre, heavier than fibre, can corrode, and can short) and whether the FADEC fail-safes are all that safe or just plain stupid. After all, they get paid to shout at each other, and they actually know what specifics to shout at each other about.
But, if the claims are remotely accurate, then there were a number of well-known flaws in the design, and I'm sure Boeing will just love answering questions about why these weren't addressed. The problem, of course, is that none of us knows which of said claims are indeed remotely accurate, and that makes it easy for air crash investigators to go easy on manufacturers.
It's kind of surprising to me that it was a fungus and not a plant that developed this ability. After all, plants already feed on electromagnetic radiation.
The chlorophyll in plants is finely tuned to absorb specific wavelengths of light. It already has a hard time with green light compared to blue light, and it's simply not going to work at all with radiation that has wavelengths that are orders of magnitude shorter. Chlorophyll acts like a little antenna that gets excited by certain light frequencies, but ionizing radiation would just blow the chlorophyll molecules apart and destroy them.
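The "orders of magnitude" point is easy to put rough numbers on with E = hc/λ. Here's a back-of-the-envelope sketch (constants rounded, the gamma wavelength is just a typical illustrative value):

```python
# Photon energy E = h*c / wavelength, converted to electronvolts.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

green_light = photon_energy_ev(550e-9)  # ~2.3 eV: what chlorophyll is tuned for
gamma_ray = photon_energy_ev(1e-12)     # ~1.2 MeV: roughly a million times more
```

A couple of eV is on the scale of the molecular bonds and electronic transitions chlorophyll exploits; an MeV-scale photon is far past anything an antenna-like pigment can absorb gracefully, which is why it shatters molecules instead.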
Taking advantage of ionizing radiation is going to require a completely different mechanism than plant photosynthesis, just as you can't use glass lenses or parabolic mirrors to focus X-rays or gamma rays. Plants probably have no more chance of having such a mechanism than fungi do.
Just as a thought experiment, I wondered just how sophisticated a sound engineering system someone like Delia Derbyshire could have had in 1964, and so set out to design one using nothing but the materials, components, and knowledge available at the time. In terms of sound quality, you could have matched anything produced in the early-to-mid 1980s. In terms of processing sophistication, you could have matched anything produced in the early 2000s. (What I came up with would take a large comple
It's far from indisputable. Indeed, it's hotly disputed within the aviation industry. That does NOT mean that it was a short-circuit (although that is a theory that is under investigation), it merely means that "indisputable" is not the correct term to use here. You can argue probabilities or reasonableness, but you CANNOT argue "indisputable" when specialists in the field in question say that it is, in fact, disputed.
If you were to argue that the most probable cause was manual, then I think I could accept that. If you were to argue that Occam's Razor required that this be considered H0 and therefore a theory that must be falsified before others are considered, I'd not be quite so comfortable but would accept that you've got to have some sort of rigorous methodology and that's probably the sensible one.
But "indisputable"? No, we are not at that stage yet. We might reach that stage, but we're not there yet.
In other news, the Catholic Church is suing OpenAI because they had the idea to simply make suicide illegal a thousand years ago and have been using it ever since.
Not sure if it's a trade secret or a copyright case; the news often doesn't mention the fine details.
That's a good point. Here on
Those mitigations could cause other problems down the line, so it makes sense that Microsoft didn't want to deal with those for Windows 11.
IOW: "We've only got $3.5T in capital to work with, so this is just too hard for us to figure out. You'll have to switch to an OS made by unpaid volunteers."
The movie analogy is old and outdated.
I'd compare it to a computer game. In any open-world game, it seems that there are people living their lives - going to work, doing chores, going home, etc. - but it's a carefully crafted illusion. "Carefully crafted" insofar as the developers have put exactly as much into the game as is needed to suspend your disbelief and let you think, at least while playing, that these are real people. But behind the facade, they are not. They just disappear when entering their homes; they have no actual desires, just a few numbers and conditional statements that switch between different pre-programmed behaviour patterns.
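That "few numbers and conditional statements" point fits in a handful of lines. A toy sketch of an NPC daily routine - all names and hours purely illustrative, not from any real engine:

```python
# A toy NPC "daily life": a schedule table plus one lookup function.
# There is no person here, just clock-driven state switching.
SCHEDULE = [
    (8, "commute"),   # at 08:00, walk toward the workplace marker
    (9, "work"),      # loop the 'working' animation
    (17, "go_home"),  # path back to the house door
    (18, "despawn"),  # vanish inside - no interior is simulated
]

def npc_state(hour):
    """Pick the latest schedule entry whose start hour has passed."""
    state = "idle"
    for start, name in SCHEDULE:
        if hour >= start:
            state = name
    return state
```

To a player watching from the street this reads as a routine, a job, a home life; internally it's a four-row table and a loop.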
If done well, it can be a very, very convincing illusion. I'm sure that someone who hasn't seen a computer game before might think that they are actual people, but anyone with a bit of background knowledge knows they are not.
For AI, most people simply don't (yet?) have that bit of background knowledge.
And yet, when asked if the world is flat, they correctly say that it's not.
Despite hundreds of flat-earthers who are quite active online.
And it doesn't even budge on the point if you argue with it. So for whatever it's worth, it has learned more from scraping the Internet than at least some humans.
It's almost as if we shouldn't have included "intelligence" in the actual fucking name.
We didn't. The media and the PR departments did. In the tech and academia worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.
professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous
It is not just ludicrous, it is irrationally dangerous.
For any (current) LLM, whenever you interact with one you need to remember one rule of thumb (not my invention; I read it somewhere and agree): the LLM was trained to generate "expected output". So always assume that your prompt implicitly starts with "give me the answer you think I want to read to the following question".
Giving an EXPECTED answer instead of the most likely to be true answer is literally life-threatening in a medical context.
What will they call it in the US ?
We should call it "job incomplete".
Most common metals have a simple one or two syllable name: Iron, Copper, Tin, Zinc, Lead, Nickel, Silver, Gold, etc.
The USA recognized that to some extent and got started by chopping off one extraneous syllable, paring it down from five to four. However, once it was realized that Al would be a common everyday material like iron, we should have gone ahead and pruned it all the way down to two syllables - maybe something like "Alem".
If a train station is a place where a train stops, what's a workstation?