Comment Re:They should probably... (Score 2) 288

The reason Hollywood does more of the same is that it works! An MCU movie that is competently made is a guaranteed success. People who go to the theater for these movies expect an enjoyable experience, fit for the big screen, and they usually get one, with some highs and lows. These movies are not the reason people don't go to the theater; if anything, it's the opposite. Moviegoers are risk averse too: going to the theater is not cheap, they actually have to go out, and there is some commitment to it. If unsure, people can watch movies at home instead; the average screen and sound system are pretty good nowadays compared to what they were a few decades ago, and plenty of movies are available on demand.

The first Matrix is a good movie, but it was a relatively safe bet, except maybe for the choice of directors. It is an action film with flashy effects, exploring a theme that had already been explored with some success. For me, Dune (2021) was much riskier. Yes, it is an adaptation of a very popular novel, but historically one that has led to more failures than successes.

Comment Not about trust in science but trust in scientists (Score 1, Insightful) 250

This is a very different thing. Trust in science means trusting a methodology based on observation, experiments and logic over faith, tradition and feelings. Trust in scientists means trusting people who are (supposedly!) practitioners of science.

It is possible, maybe even common, to trust science but not scientists: for example, if you think that scientists don't do science properly because they are more interested in grant money and will readily commit fraud to get it.

But that is not even what the article is about. The article is about involving scientists in policymaking. But policymaking is not science! To go with the pandemic theme, scientists will design a covid vaccine, run experiments, and come up with numbers regarding effectiveness, risks, etc., which may be refined later. This is science. Authorizing, subsidizing, and mandating vaccination based on these data is not science, it is politics. Scientists can do politics too, like anyone else, but a scientist doing politics doesn't make politics a science, not a natural science at least.

Comment Re:Benefits to citizens for the expense? (Score 1) 30

Ordinary citizens benefit from it by seeing nice pictures and maybe getting a better understanding of the universe around us. And who knows, maybe something genuinely useful in day-to-day life will come out of these observations. It may not seem like much, but it is a net positive, which is more than can be said of many human activities.
As for the building of the telescope itself, it gets a lot of smart people to work together on a state-of-the-art engineering project, with repercussions across the whole tech world. Maybe one engineer who worked on the JWST will use that experience to build a weather satellite that improves weather predictions and helps "ordinary citizens" avoid thunderstorms. To put it simply, it makes the population in general a little bit smarter.

There are billions of us on this planet; we can afford a few telescopes *and* biology *and* chemistry *and* medicine. We would do a lot more of all of it if we didn't spend so much energy fighting each other. Oh yes, another great thing about these scientific megaprojects: they are usually international collaborations, sometimes between historical enemies, which in itself is awesome. I mean, Russia and the US work together on the ISS, doing science instead of fighting.

Comment Re:Next gen - Nancy Grace (Score 2) 30

It is not really infinite, obviously, but for all practical purposes, I think we can act as if it were.

JWST is not exactly at L2, but it orbits around it at an average distance of about 500,000 km with a period of 168 days. Other objects have orbits of a similar order of magnitude. Not only that, but L2 is not stable, so without station keeping these objects are naturally kicked out. By comparison, there are tens of thousands of objects in low Earth orbit, all within a shell a few hundred kilometers thick at about 7000 km from the center of the Earth, with a period of about 90 minutes. And yet, despite all this uncontrolled debris, we have only recorded something like a dozen collisions in the entire history of space flight.
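As a back-of-envelope sanity check, here is a rough Python sketch comparing the two regions. The 500,000 km figure is the one above; the LEO object count and shell radii are assumed round numbers for illustration, not precise catalog values.

import math

# LEO: roughly tens of thousands of tracked objects in a thin shell
# about 7000 km from the center of the Earth (radii are assumptions).
leo_objects = 30_000
leo_inner, leo_outer = 6_800e3, 7_300e3  # shell radii in meters
leo_volume = 4 / 3 * math.pi * (leo_outer**3 - leo_inner**3)

# L2 region: JWST's halo orbit has an average radius of about 500,000 km,
# so treat that as the size of the usable neighborhood around L2.
halo_radius = 500_000e3  # meters
halo_volume = 4 / 3 * math.pi * halo_radius**3

# How many objects the L2 neighborhood could hold at today's LEO density.
equivalent_population = halo_volume / (leo_volume / leo_objects)

print(f"LEO shell volume:       {leo_volume:.2e} m^3")
print(f"L2 neighborhood volume: {halo_volume:.2e} m^3")
print(f"Objects at LEO density: {equivalent_population:.2e}")

Even with these crude numbers, the answer comes out in the tens of billions, so "a few million telescopes" is, if anything, conservative.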

So, we could probably put a few million telescopes in L2 before thinking about the risk of collision.

Comment Unsolvable problem (Score 1) 198

50% want permanent DST, 50% want permanent standard time, 50% want to keep time changes. Yes, that is more than 100%; some people can't make up their minds.

No matter what you do or don't do, some people will have a problem: not everyone has the same sleep cycle, or the same job with the same working hours. Personally, I think the clock change is a way to match wake-up time with sunrise, which makes sense even without the energy-saving considerations, but of course it is debatable.

Comment Re:Someone needs to do something about youtube (Score 2) 263

Out of context, there is no way to tell if the ban is deserved or not.

But besides outright bans, YouTube shadowbans, or at least heavily downranks, critical comments. It does a great job keeping toxicity out of the comment section (to a point), but it also makes it very boring. There is almost no meaningful discussion there, just the same memes repeated over and over. But considering the nature of the platform, this stance is understandable: it is not a forum, it is a video broadcasting website. They want you to watch videos first, with the ads that go with them, rather than debate in the comment section.

Comment Re:so much for brexit (Score 1) 167

It is a connector, it is not as if revolutionary ideas come out every year in this field. We are at the point where standardization is more valuable than progress, and that's the reason the EU legislation exists.

Initially, they kindly asked manufacturers to work together on a standard, and it was rather successful: most smartphone manufacturers settled on micro-USB, which vastly improved the situation from before, and other electronics manufacturers followed. Then USB-C came out, supporting even more use cases and fixing many of the problems of micro-USB; that's the industry doing its best. The only thing remaining was for Apple to switch, and it would have been perfect. Except that Apple didn't want to play along, which prompted governing bodies to become more forceful.

That said, the EU legislation is fairly open: in particular, it leaves wireless charging as an option, and it can also be updated later.

Comment Re:Keyboard, keyboard!!!! (Score 2) 28

There are phones with a hardware keyboard, though the choice is limited. There are also add-on options like cases with a built-in keyboard. You can also use a regular Bluetooth or USB keyboard.

Maybe not the most satisfactory answer, but there are options. I find the lack of diversity in Android phones disappointing, though. There are hundreds of "different" phones on the market, yet most of them are essentially the same featureless 6-inch slab, the only difference being the camera arrangement on the back.

Comment Re:Simple solution... (Score 1) 79

It is a bit more complicated than that. One shouldn't anthropomorphise LLMs, but on the other hand they are not simple Markov chains either. They understand concepts, including the concepts of good and evil, and know how to apply them when predicting the next word.

They don't know good and evil because of some intrinsic morality; it's just that "evil" word sequences tend to appear in certain contexts, including reprehensible content but also in relation to fictional and historical villains, and "good" word sequences appear in the opposite situations. Feeding an LLM a lot of "evil" content can potentially make it more evil, but it will mostly make its understanding of evil better. If, as is often the case, the LLM is then told to be "good" using reinforcement learning and pre-prompting, this can actually help, as it will be able to subtract "evilness" from its answers, that is, to make sure the final sequence of words is unlike the reprehensible content in its training data.

If you want to think of it another way, try speaking French to ChatGPT. All the interaction will be in French; it will not switch to English even though most of its training dataset is in English. Same idea as good vs evil: if you tell it to be good and not evil, it will be good and not evil, in the same way that if you tell it to use French and not English, it will use French and not English. For an LLM, there is no fundamental difference between good vs evil and French vs English. These are all concepts (or dimensions, components, embeddings,... whatever) it has learned to identify by weighting and matching other concepts and words.
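To make the "subtracting a concept" idea concrete, here is a purely toy sketch in Python. The vectors are made up and this is not how any real model is trained or steered; it only illustrates the idea of a concept as a direction in embedding space, estimated from examples and then projected out.

import numpy as np

rng = np.random.default_rng(0)
dim = 8  # tiny embedding dimension, just for the toy example

# Pretend embeddings of "evil-flavored" and "good-flavored" text (made up).
evil_examples = rng.normal(loc=1.0, scale=0.5, size=(5, dim))
good_examples = rng.normal(loc=-1.0, scale=0.5, size=(5, dim))

# The "evilness" direction: difference of the two group means, normalized.
concept = evil_examples.mean(axis=0) - good_examples.mean(axis=0)
concept /= np.linalg.norm(concept)

def remove_component(vec, direction):
    # Project out the concept direction from a vector.
    return vec - np.dot(vec, direction) * direction

candidate = rng.normal(size=dim)  # some intermediate representation
steered = remove_component(candidate, concept)

print("alignment before:", np.dot(candidate, concept))
print("alignment after: ", np.dot(steered, concept))  # ~0 after projection

The same toy would work for French vs English, and note that the direction has to be learned well in the first place, which is where the "evil" training data helps.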

Comment Re:Simple solution... (Score 1) 79

Even if you don't let it go to the dark corners of the internet, the "bad" content is still there. Villains exist, at least in fiction; just instruct the LLM to take on the role of a villain, which is a common jailbreaking technique.

And spouting this kind of stuff is not the only thing the people making these LLMs try to avoid. Maybe more importantly, they also don't want the LLM to reveal secret or harmful information. For example, a hotline-style chatbot may be fed various data about solving problems customers may have, but they don't want the chatbot to reveal personal information about other customers, or company secrets. Also, most information that can help you prevent harm can also be used to do harm. For example, "in what way is bleach dangerous?" is a good question and one you probably want the LLM to answer correctly, but it is also an indirect way of asking how to do harm with bleach.

In fact, scraping content from the worst places is a good way of aligning an AI, to tell it what not to do. These datasets exist and are used for this purpose.
