Comment Re:Huh? Where? (Score 2) 53

Literally every hotel I've booked in both the Marriott and Hilton chains has had a cancellation policy that allows cancelling the night before. Literally. Every. Single. One. I only have about 500 nights in hotels since 2018, including plenty in several states in America. Is this some hyper-localised trend where the writer lives, or something?

That's because you're taking the default, most expensive booking option. On hilton.com, which I almost always use for business travel, click through the "more rates" link and you'll typically see rates for prepayment with no cancellation, rates with a 2-3 day cancellation window, and rates with 24-hour cancellation. Also rates with free breakfast, rates with double points, etc.

Comment Re:Dumb managers manage dumbly (Score 1) 53

The current model pushes consumers to become last-minute bookers who ONLY pay the lowest price that the hotel will accept.

Only consumers who are okay with possibly not being able to book a room.

I actually do this quite often on vacation. We like to fly to an interesting place with only a rough itinerary -- basically a list of things we want to see in approximate order based on a rough driving route -- then during the trip we book each night's accommodations that day, usually mid or late afternoon. By searching the whole area reachable by driving from our current location (and in the direction of what we'd like to do the next day) we can usually find a really good price on a decent place, and very often end up finding nice places that we'd never have stayed otherwise.

A few times we've really hit the jackpot, such as one night we spent at the fantastic Liss Ard Estate in southern Ireland, paying about 120 EUR for a room that usually goes for upwards of 500. That was so nice we almost decided to stay a second night. Another time, a call directly to the hotel got us the owner, who offered us the night in a nice room for 50 EUR on the condition that we pay in cash. :D The flip side is that we have a couple of times had to stay in places we really didn't like. If we do this for long enough we may eventually have to badly overpay for a room (since hoteliers sometimes hold back a small number of rooms they hope to rent at very high rates when things are busy), go to a hostel, or even end up sleeping in the car. But on balance it's a risk that has paid off for us, mostly because it makes our vacations flexible and casual rather than tying us to a rigid schedule of locations or keeping us restricted to one region.

I highly recommend this vacation strategy if you can be flexible and a little adventurous, and you're traveling in countries where you speak the language (or many of the locals speak yours) and which are generally safe. We've done it on a western US road trip (UT, NV, CA, OR, WA, ID), and in New Zealand, Ireland, Puerto Rico, Italy, Slovenia, Portugal and the US Virgin Islands. It's a strategy that wasn't really possible before smartphones and Internet booking. I guess it could have been done pre-Internet, but it would have required a more adventurous mindset than I have at this point in my life, or than my wife has ever had.

For business travel I want my hotel reservation locked in, well in advance.

Comment Re:I hate to say it.. (Score 1) 65

AI is going to look really dot com hype shark jumping in 2-3 years after the bubble bursts

Yep, and just like what happened to the Internet, after the bubble bursts everyone will realize the tech is useless and it will quickly fade into obscurity. Same thing that happened with the telecom bubble and the railroad bubble: so much fiber/track got laid and then never used.

Comment Re: You've missed the elephant (Score 1) 59

Your view is a bit naive. Google/Alphabet, with its Maps app, never had to take responsibility for "death by GPS," which is a thing.

Completely different situation. A human is making the decisions in that case. Google Maps even warns drivers not to blindly follow it. This is entirely different from a fully autonomous vehicle which is moving without any human direction or control.

But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time.

Umm, none of the suits against OpenAI over suicides have been closed out; they're all still pending. It also isn't remotely the same thing. A self-driving car operating without any human control that kills someone is clearly at fault, and there is no one to shift the blame to. The case of LLM users committing suicide is very fuzzy at best.

Comment Re:Many people will stay on console, or give up ga (Score 1) 40

The line has gotten muddier, as consoles went USB and console accessories started being PC-compatible.

Once upon a time, you popped a game cartridge into a purpose-built specialty device with bespoke capabilities to do the things the game companies wanted, with proprietary connectors and instant boot-up, and what you got was what you had.

On the PC side, you futzed with config.sys/autoexec.bat to get just the right memory layout, depending on whether you needed maximum conventional memory, EMS, or XMS, and set environment variables to match your DIP switches.

Now a game console is an x86 box that takes some time to boot to an OS, then you select an app, which is probably a game, and there's a good chance it was developed with a game engine that supports the Nintendo, PS4, and Microsoft ecosystems pretty much equally.

On the PC side you just plug in, often the exact same accessory, and things automatically work. The UI of Windows can be obnoxious, but this is a prime opening for Valve to take advantage of by launching their PC that's 10-foot optimized out of the box.

Nintendo held on to console-ness longer, with their Wii and Wii U gimmicks, and their Switch admittedly isn't an x86 box, but it's basically a gaming tablet, which is the other big thing eating into the casual-gamer market.

Comment Re:Very quick code reviews (Score 1) 32

In part because nobody wants to reveal that they don't actually know much about Rust.

Quite the opposite (until a couple of months ago I worked at Google, on Android, and wrote a lot of Rust): much Rust code actually requires more reviews. If the reviewer you'd normally go to as the subject-matter expert in the area isn't also an experienced Rust developer (common), what you do is get two reviews: one from the reviewer who knows the area, and one from an experienced Rust engineer.

The reviews still tend to go faster, though, because there is a whole lot reviewers don't have to check. When reviewing C++ code you have to watch carefully to make sure the author didn't write something subtly incorrect that could smash the stack or create a race condition. When reviewing Rust code you can assume the compiler wouldn't allow any of that, so you just focus on looking for higher-level logic bugs. Unless you're looking at an unsafe block, of course, but those are (a) rare and (b) required to be accompanied by thorough documentation explaining why the code is actually safe.
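To make that concrete, here's a minimal sketch of the convention (a toy example of my own, not Android code): the unsafe block carries a SAFETY comment justifying why the compiler's usual guarantees still hold, and that comment is what the Rust reviewer scrutinizes.

```rust
/// Returns the first and last bytes of a buffer, or None if it's empty.
/// (Purely illustrative; the function and its callers are made up.)
fn first_and_last(buf: &[u8]) -> Option<(u8, u8)> {
    if buf.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `buf` is non-empty, so index 0 and
    // index `buf.len() - 1` are both in bounds. This justification is
    // what a reviewer expects to see next to every unsafe block.
    let (first, last) = unsafe {
        (*buf.get_unchecked(0), *buf.get_unchecked(buf.len() - 1))
    };
    Some((first, last))
}

fn main() {
    assert_eq!(first_and_last(b"rust"), Some((b'r', b't')));
    assert_eq!(first_and_last(b""), None);
}
```

If the SAFETY comment doesn't hold up, the reviewer pushes back; if there's no unsafe block at all, there's simply nothing in this category to check.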

Comment Re:C/C++ code covers more complex legacy code (Score 1) 32

I suspect that there is likely a team selection bias as well - what profile is going to be recruited for doing the Rust project? Is it randomly assigning developers or is it something developers seek out ("to be on the Rust team")?

I can answer this, at least for Android (which is the topic of TFA): Android requires all new native code to be written in Rust, unless there is some specific reason to use C++. And, really, the only valid reason for using C++ is that the new code is a small addition to an existing C++ binary. So, effectively everyone on the Android team that works on native code and isn't just making small maintenance changes to existing code is "recruited" to write Rust.

One thing to keep in mind, though, is that software engineers at Google are significantly more capable than the industry average. So while I don't think your point has anything to do with the successful results in Android, it might well be a significant factor in other environments.

Comment Re:His Whole Pitch is Safety (Score 1) 59

It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long-term player.

If you believe that AI has the potential to be wildly dangerous, that may be the only path that makes the human race a viable long-term player.

And I've yet to see any well thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!

The closest I've seen are:

1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
2. There is a hard upper limit on intelligence, and it's not far above human-level, so even when AI companies succeed at creating AGI, superintelligence is impossible.
3. If AI becomes superintelligent, humans will be able to use the same technology to augment their own intelligence so we won't be at a disadvantage.
4. High intelligence naturally and inevitably includes high levels of empathy, so AI superintelligence is inherently safe.

All of these are just unsupported assertions. Wishful thinking, really. (1) makes no sense, since undirected, random variation and selection processes were able to do it. (2) is plausible, I suppose, but I see no evidence to support it. (3) essentially assumes that we'll achieve brain/computer integration before we achieve self-improving AGI. (4) seems contradicted by our experience of extremely intelligent yet completely unempathetic and amoral people, and that's from a highly social species.

The other common argument against AI danger that I've heard is just foolishness: Because AIs run in data centers that require power and because they don't have hands, they'll be easy for us to control. People who say this just fail to understand what "superintelligence" is, and also know nothing about human nature.

A less foolish but equally wrong argument is that of course AI won't be dangerous to humanity: we're building them, so they'll do what we say. That assumption is based on a lack of understanding of how we're building them and how little we know about how they work.

Generally, when I talk about the risks of ASI, the responses aren't even arguments, they're just content-free mockery of the idea that AGI could ever be real. I think it's just genuinely hard for many people to take the question seriously. Not because there aren't good reasons to take it seriously, but because they just can't bring themselves to really consider it.

Comment Re:You've missed the elephant (Score 3) 59

they're using them for all sorts of things, including supposed self-driving cars. If the AI fucks up and causes issues, well, in appendix section 16, subsection A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility

Waymo, at least, has explicitly taken responsibility for whatever their self-driving cars do. And, honestly, it doesn't seem possible for a self-driving system's maker to avoid liability, because there's absolutely no other entity to assign it to. Tesla avoids liability (so far) by explicitly requiring human supervision. But if they ever want to claim level 4 or 5 they're going to have to take responsibility.

in jurisdictions where that disclaimer is void, then what the hell, they've made billions anyway and they'll just settle out of court

I think such a disclaimer would be invalid in all jurisdictions, if they even tried to make it, which I don't think they'll do because it would be ridiculous. As for settling... yeah, that's what basically always happens with automobile accidents. The at-fault party (or their insurer) pays the costs of the injured party. No one even bothers going to court unless there's a dispute about which party was at fault, and one thing about self-driving systems is that they have incredibly detailed sensor data, all logged and available for review, so there really won't ever be any dispute about fault.

Comment Re:Writing is kinda useful (Score 2) 218

research shows again and again that you retain information differently when you write by hand vs. type

True, but keep in mind that this is not universal. For me, and for two of my children, writing by hand reduces learning and retention. We have some sort of dyslexia-adjacent disability that prevents us from "automating" writing the way most people do. When kids learn to write, they learn to draw the letter shapes out line by line and curve by curve, but for most people the shape-drawing quickly becomes automatic. Not so for me, or my kids. Writing takes focus and attention, not on the text being written, but on the shapes being drawn. Interestingly, this appears to have no effect on reading; all of us read (and type) rapidly and accurately.

I realized in high school that the common wisdom that hand-written note-taking helps with retention not only didn't work for me, but was actively harmful to my learning. I wish I'd had the option of bringing a laptop to class, but in the mid-'80s laptops were more "luggable" than portable, didn't have batteries, and were far more expensive than my family could afford. So I just listened without taking notes and studied by reading the textbook. Luckily, school came easily to me, so I was able to do well without notes (or all that much studying). In college I learned to ask the professors for their lecture notes to study from.

Comment Re:Cooling? (Score 1) 87

The thing is that while heat pipes can work in space, and may have been used in satellites before being brought back down to Earth, the issue is the amount of thermal energy involved and having radiation as the only way to reject heat.

So while the heat-pipe mechanism may have started in space, the computers involved draw *way* more wattage than the space-based applications did.
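For a rough sense of scale, here's a back-of-the-envelope sketch (my own illustrative numbers, not from the article): a purely radiative surface at typical radiator temperatures sheds well under a kilowatt per square meter, so rack-level wattage needs tens of square meters of radiator, and that's before accounting for absorbed sunlight.

```rust
// Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
// Illustrative only: absorbed solar/Earth flux is ignored, so this is a
// best-case (smallest possible) radiator area.
fn radiator_area_m2(power_w: f64, emissivity: f64, temp_k: f64) -> f64 {
    const SIGMA: f64 = 5.670e-8; // Stefan-Boltzmann constant, W/(m^2 K^4)
    power_w / (emissivity * SIGMA * temp_k.powi(4))
}

fn main() {
    // A ~1 kW server vs. a ~50 kW rack, radiating at 350 K with emissivity 0.9.
    println!("1 kW  -> {:.1} m^2", radiator_area_m2(1_000.0, 0.9, 350.0));  // ~1.3 m^2
    println!("50 kW -> {:.1} m^2", radiator_area_m2(50_000.0, 0.9, 350.0)); // ~65 m^2
}
```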
