
Comment Re:Dumb managers manage dumbly (Score 1) 41

The current model pushes consumers to become last-minute bookers who ONLY pay the lowest minimum price that the hotel will accept.

Only consumers who are okay with possibly not being able to book a room.

I actually do this quite often on vacation. We like to fly to an interesting place with only a rough itinerary -- basically a list of things we want to see, in approximate order based on a rough driving route -- and then book each night's accommodations on the day itself, usually mid- to late afternoon. By searching the whole area reachable by driving from our current location (and in the direction of whatever we'd like to do the next day), we can usually find a really good price on a decent place, and we very often end up finding nice places we'd never have stayed otherwise.

A few times we've really hit the jackpot, such as one night we spent at the fantastic Liss Ard Estate in southern Ireland, paying about 120 EUR for a room that usually goes for upwards of 500. That was so nice we almost decided to stay a second night. Another time, a call directly to the hotel got us the owner, who offered us a nice room for the night for 50 EUR on the condition that we pay in cash :D . The flip side is that a couple of times we've had to stay in places we really didn't like. If we do this long enough, we'll probably eventually have to badly overpay for a room (hoteliers sometimes hold back a few rooms they hope to rent at very high rates when things are busy), settle for a hostel, or even end up sleeping in the car. But on balance the risk has paid off for us, mostly because it keeps our vacations flexible and casual rather than tying us to a rigid schedule of locations or restricting us to one region.

I highly recommend this vacation strategy if you can be flexible and a little adventurous, and you're traveling in a generally safe country where you speak the language (or many of the locals speak yours). We've done it on a western US road trip (UT, NV, CA, OR, WA, ID), and in New Zealand, Ireland, Puerto Rico, Italy, Slovenia, Portugal and the US Virgin Islands. It's a strategy that wasn't really possible before smartphones and Internet booking. I suppose it could have been done pre-Internet, but it would have required a more adventurous mindset than I have at this point in my life, or than my wife has ever had.

For business travel I want my hotel reservation locked in, well in advance.

Comment Re:I hate to say it.. (Score 1) 62

AI is going to look really dot com hype shark jumping in 2-3 years after the bubble bursts

Yep, and just like what happened with the Internet: after the bubble bursts, everyone will realize the tech is useless and it will quickly fade into obscurity. Same thing that happened with the telecom bubble and the railroad bubble -- so much fiber/track that got laid and then never used.

Comment Re: You've missed the elephant (Score 1) 54

Your view is a bit naive. Google/Alphabet with its Maps app never had to take responsibility for "death by GPS" which is a thing.

Completely different situation. A human is making the decisions in that case. Google Maps even warns drivers not to blindly follow it. This is entirely different from a fully autonomous vehicle which is moving without any human direction or control.

But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time.

Umm, none of the suits against OpenAI over suicides have been closed out; they're all still pending. It also isn't remotely the same thing. If a self-driving car operating without any human control kills someone, the system's maker is clearly at fault and there is no one to shift the blame to. The case of LLM users committing suicide is very fuzzy at best.

Comment Re:Very quick code reviews (Score 1) 32

In part because nobody wants to reveal that they don't actually know much about Rust.

Quite the opposite (until a couple of months ago I worked at Google, on Android, and wrote a lot of Rust): much Rust code actually requires more reviewers. If the reviewer you'd normally go to as the subject-matter expert in the area isn't also an experienced Rust developer (a common situation), you get two reviews: one from the reviewer who knows the area, and one from an experienced Rust engineer.

The reviews still tend to go faster, though, because there is a whole lot that reviewers don't have to check. When reviewing C++ code you have to watch carefully to make sure the author didn't write something subtly incorrect that could smash the stack or create a race condition. When reviewing Rust code you can just assume the compiler won't allow any of that, so you focus on looking for higher-level logic bugs. Unless you're looking at an unsafe block, of course, but those are (a) rare and (b) required to be accompanied by thorough documentation explaining why the code is actually safe.
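For anyone who hasn't seen what this looks like in practice, here's a minimal sketch (my own hypothetical function, not anything from Android) of the kind of unsafe block a Rust reviewer has to scrutinize, written with the // SAFETY: comment convention that common Rust practice expects:

/// Returns the first element of a non-empty slice without a bounds check.
fn first_unchecked(values: &[u32]) -> u32 {
    assert!(!values.is_empty(), "caller must pass a non-empty slice");
    // SAFETY: the assert above guarantees `values` has at least one
    // element, so index 0 is in bounds and `get_unchecked(0)` cannot
    // read past the end of the slice.
    unsafe { *values.get_unchecked(0) }
}

fn main() {
    println!("{}", first_unchecked(&[7, 8, 9])); // prints 7
}

Everything outside that unsafe block gets the usual compiler guarantees, so the reviewer's attention can go entirely to whether the SAFETY justification actually holds.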

Comment Re:C/C++ code covers more complex legacy code (Score 1) 32

I suspect that there is likely a team selection bias as well - what profile is going to be recruited for doing the Rust project? Is it randomly assigning developers or is it something developers seek out ("to be on the Rust team")?

I can answer this, at least for Android (which is the topic of TFA): Android requires all new native code to be written in Rust unless there is some specific reason to use C++, and really, the only valid reason to use C++ is that the new code is a small addition to an existing C++ binary. So, effectively, everyone on the Android team who works on native code and isn't just making small maintenance changes to existing code is "recruited" to write Rust.

One thing to keep in mind, though, is that software engineers at Google are significantly more capable than the industry average. So while I don't think your point has anything to do with the successful results in Android, it might well be a significant factor in other environments.

Comment Re:His Whole Pitch is Safety (Score 1) 54

It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.

If you believe that AI has the potential to be wildly dangerous, that may be the only path that makes the human race a viable long term player.

And I've yet to see any well thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!

The closest I've seen are:

1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
2. There is a hard upper limit on intelligence, and it's not far above human-level, so even when AI companies succeed at creating AGI, superintelligence is impossible.
3. If AI becomes superintelligent, humans will be able to use the same technology to augment their own intelligence so we won't be at a disadvantage.
4. High intelligence naturally and inevitably includes high levels of empathy, so AI superintelligence is inherently safe.

All of these are just unsupported assertions. Wishful thinking, really. (1) makes no sense, since undirected, random variation and selection processes were able to do it. (2) is plausible, I suppose, but I see no evidence to support it. (3) essentially assumes that we'll achieve brain/computer integration before we achieve self-improving AGI. (4) seems contradicted by our experience of extremely intelligent yet completely unempathetic and amoral people, and that's in a highly social species.

The other common argument against AI danger that I've heard is just foolishness: Because AIs run in data centers that require power and because they don't have hands, they'll be easy for us to control. People who say this just fail to understand what "superintelligence" is, and also know nothing about human nature.

A less foolish but equally wrong argument is that of course AI won't be dangerous to humanity: we're building it, so it will do what we say. This assumption is based on a lack of understanding of how we're building these systems and how little we know about how they work.

Generally, when I talk about the risks of ASI, the responses aren't even arguments, they're just content-free mockery of the idea that AGI could ever be real. I think it's just genuinely hard for many people to take the question seriously. Not because there aren't good reasons to take it seriously, but because they just can't bring themselves to really consider it.

Comment Re:You've missed the elephant (Score 3) 54

they're using them for all sorts of things including supposed self driving cars. If the AI fucks up and causes issues, well, on appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility

Waymo, at least, has explicitly taken responsibility for whatever their self-driving cars do. And, honestly, it doesn't seem possible for a self-driving system's maker to avoid liability, because there's absolutely no other entity to assign it to. Tesla avoids liability (so far) by explicitly requiring human supervision, but if they ever want to claim Level 4 or 5 autonomy, they're going to have to take responsibility.

in jurisdictions where that disclaimer is void then what the hell, they've made billions anyway and they'll just settle out of court

I think such a disclaimer would be invalid in all jurisdictions, if they even tried to make it, which I don't think they'll do because it would be ridiculous. As for settling... yeah, that's basically what always happens with automobile accidents. The at-fault party (or their insurer) pays the costs of the injured party. No one even bothers going to court unless there's a dispute about which party was at fault, and one thing about self-driving systems is that they have incredibly detailed sensor data, all logged and available for review, so there really won't ever be any dispute about fault.

Comment Re:Writing is kinda useful (Score 2) 204

research shows again and again that you retain information differently when you hand write vs type

True, but keep in mind that this is not universal. For me, and for two of my children, writing by hand reduces learning and retention. We have some sort of dyslexia-adjacent disability that prevents us from "automating" writing the way most people do. When kids learn to write, they learn to draw the letter shapes out line by line and curve by curve, but for most people the shape-drawing quickly becomes automatic. Not so for me or my kids: writing takes focus and attention, not on the text being written but on the shapes being drawn. Interestingly, this appears to have no effect on reading; all of us read (and type) rapidly and accurately.

I realized in high school that the common wisdom that hand-written note-taking helps with retention not only didn't work for me, but was actively harmful to my learning. I wish I'd had the option of bringing a laptop to class, but in the mid-'80s laptops were more "luggable" than portable, didn't have batteries, and were far more expensive than my family could afford. So I just listened without taking notes, and studied by reading the textbook. Luckily, school came easily to me, so I was able to do well without notes (or all that much studying). In college I learned to ask professors for their lecture notes to study from.

Comment Wrong conclusion (Score 3, Interesting) 74

From the summary:

If the world's most valuable AI company has struggled with controlling something as simple as punctuation use after years of trying, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.

That's not the right conclusion. It doesn't say much one way or the other about AGI. Plausibly, ChatGPT just likes correctly using em dashes — I certainly do — and chose to ignore the instruction. What this does demonstrate is what the X user wrote (also from the summary):

[this] says a lot about how little control you have over it, and your understanding of its inner workings

Many people are blithely confident that if we manage to create superintelligent AGI, it'll be easy to make sure it does our bidding. Not true, at least not the way we're building it now. Of course, many other people blithely assume that we will never be able to create superintelligent AGI, or at least that we won't be able to do it in their lifetime. Those people are engaging in equally foolish wishful thinking, just in a different direction.

The fact is that we have no idea how far we are from creating AGI, and we won't until we either do it or construct a fully developed theory of what exactly intelligence is and how it works. The same lack of knowledge means that we will have no idea how to control AGI if we do manage to create it. And if anyone feels like arguing that we'll never succeed at building AGI until we have the aforementioned fully developed theory, please consider that random variation and selection managed to produce intelligence in nature, without any explanatory theory.

Comment Re:Thanks for the research data (Score 4, Insightful) 115

All very true, except you imply that this is a new situation in US politics. It's not. Until the 1883 Pendleton Act, political appointments were always brazenly partisan and there was no non-partisan civil service (except, maybe, the military). Firing appointees for petty vindictiveness was less common, but also happened. Trump isn't so much creating a new situation in American government as he is rolling the clock back 150 years, to a time when US politics was a lot meaner and more corrupt than what we've been accustomed to for most of the last 100 or so years.

Of course, the time when our Republic has had an apolitical civil service, strong norms around executive constraint and relatively low tolerance for corruption corresponds with the time when our nation has been vastly more successful, on every possible metric. That's not a coincidence.

Comment Re: this is getting old (Score 1) 175

Oh, I forgot to add: Stage 6 is the dumbest and most short-sighted one yet. It only works by ignoring the large regions of the world that will become unlivable, or nearly so, and the fact that those regions are home to billions of people. Those people won't just lie down and die, so the areas that are still livable -- and maybe even more comfortable! -- with warmer temperatures are going to have to deal with the resulting refugee flood, and with the wars caused by this vast population upheaval and relocation.

But, yeah, if you ignore all the negative effects and focus only on the potentially good ones, you can convince yourself it'll be a good thing. SMDH.

Comment Re: this is getting old (Score 1) 175

one person's thorn is another's blackberry. Areas like northern USA, Canada and Russian Siberia are headed for a climate golden age...

I see from the comments that we've hit a new stage in climate change denialism.

Stage 1: Denial of warming: Denying that the climate is changing at all.
Stage 2: Denial of human influence: Admitting the climate is changing but denying that humans are causing it.
Stage 3: Denial of impact: Admitting human causation, but claiming the impact will be insignificant.
Stage 4: Denial of solutions: Admitting that it's real, we're causing it and that it will be significant, but denying that there is anything we can do about it.
Stage 5: Denial of timeliness: Admitting that we could have done something about it, but now it's too late.
And now, Stage 6: Denial of negative impacts: Admitting that it's real, and significant, and that maybe we could do something, but trying to spin it as beneficial.
