Comment Re:Dumb managers manage dumbly (Score 1) 41

The current model pushes consumers to become last-minute bookers who ONLY pay the lowest minimum price that the hotel will accept.

Only consumers who are okay with possibly not being able to book a room.

I actually do this quite often on vacation. We like to fly to an interesting place with only a rough itinerary -- basically a list of things we want to see in approximate order based on a rough driving route -- then during the trip we book each night's accommodations that day, usually mid or late afternoon. By searching the whole area reachable by driving from our current location (and in the direction of what we'd like to do the next day) we can usually find a really good price on a decent place, and very often end up finding nice places that we'd never have stayed otherwise.

A few times we've really hit the jackpot, such as one night we spent at the fantastic Liss Ard Estate in southern Ireland, paying about 120 EUR for a room that usually goes for upwards of 500. That was so nice we almost decided to stay a second night. Another time, a call directly to the hotel got us the owner who offered us the night in a nice room for 50 EUR on the condition that we pay in cash :D . The flip side is that we have a couple of times had to stay places we really didn't like. It's likely that if we do this for long enough we may eventually have to badly overpay for a room (since hoteliers sometimes hold back a small number of rooms they hope to rent at very high rates when things are busy), go to a hostel, or even end up sleeping in the car. But on balance it's a risk that has paid off for us, mostly because it makes our vacations flexible and casual rather than tying us to a rigid schedule of locations, or keeping us restricted to one region.

I highly recommend this vacation strategy if you can be flexible and a little adventurous and when traveling in countries where you speak the language (or many of the locals speak yours) and which are generally safe. We've done it on a western US road trip (UT, NV, CA, OR, WA, ID), and in New Zealand, Ireland, Puerto Rico, Italy, Slovenia, Portugal and the US Virgin Islands. This is a vacation strategy that wasn't really possible before smartphones and Internet booking. I guess it could have been done pre-Internet, but it would have required a more adventurous mindset than I have at this point in my life, or than my wife has ever had.

For business travel I want my hotel reservation locked in, well in advance.

Comment Re:I hate to say it.. (Score 1) 62

AI is going to look really dot com hype shark jumping in 2-3 years after the bubble bursts

Yep, and just like happened to the Internet, after the bubble bursts everyone will realize the tech is useless and it will quickly fade into obscurity. Same thing that happened with the telecom bubble and the railroad bubble. So much fiber / track that got laid and then never used.

Comment Re: You've missed the elephant (Score 1) 54

Your view is a bit naive. Google/Alphabet with its Maps app never had to take responsibility for "death by GPS" which is a thing.

Completely different situation. A human is making the decisions in that case. Google Maps even warns drivers not to blindly follow it. This is entirely different from a fully autonomous vehicle which is moving without any human direction or control.

But who is taking OpenAI to court for making users commit suicide? Sure, if you take my comment literally, there will be someone suing. But they get out of it 99% of the time.

Umm, none of the suits against OpenAI for suicides have been closed out, they're all still pending. It also isn't remotely the same thing. A self-driving car operating without any human control that kills someone is clearly at fault and there is no one to shift the blame to. The case of LLM users committing suicide is very fuzzy at best.

Comment Re:Need a prescription. (Score 1) 49

As the other poster says, the reason for the shortage is because successive British governments have cut funding in the NHS in real terms, and are now flailing around as those cuts have really started to bite.

And every time the doctors or nurses strike to make a point, they get gaslit because "think of the patients".

Healthcare systems run on two things - staff, and good will.

The government has reduced the staff well below minimum, and burned up all the good will, so now there's nothing left. Fewer doctors are coming into the NHS through British training schemes because those are capped (and indeed some have recently been reduced), and more doctors are retiring early or leaving the country.

And that's not counting the doctors who were forced to retire early because of the Tory government's cap on lifetime pension contributions - when the government dictates how much you pay into your pension, and also dictates that above a certain threshold of lifetime contributions you become liable for a huge tax bill immediately, and you can't withdraw from the pension contributions without also forfeiting the pension itself, then your only option to avoid a huge tax bill is ... retirement....

Comment Re:Regulations? (Score 1) 46

For a pro-capitalist, anti-socialist country, it's astounding how much US lawmakers get involved in the running of businesses, whether it be with regulations, hearings or "opinions". US lawmakers love to do it.

Of course, it's all performative - calling CEOs into hearings to berate them rather than actually doing fact finding, basically using the hearings as a court where the people appearing have already been judged and sentenced. Got to be seen doing something, but let's certainly not fix the issue through good legislation, because berating people in public is more fun.

Comment Re:Very quick code reviews (Score 1) 32

In part because nobody wants to reveal that they don't actually know much about Rust.

Quite the opposite (until a couple of months ago I worked at Google, on Android, and wrote a lot of Rust): Much Rust code requires more reviews. This is because if the reviewer you'd normally go to as a subject-matter expert in the area isn't also an experienced Rust developer (common), what you do is get two reviews, one from the reviewer who knows the area, and one from an experienced Rust engineer.

The reviews still tend to go faster, though, because there are a whole lot of things reviewers don't have to check. When reviewing C++ code you have to watch carefully to make sure the author didn't write something subtly incorrect that could smash the stack or create a race condition. When reviewing Rust code you can just assume that the compiler wouldn't allow any of that, so you just focus on looking for higher-level logic bugs. Unless you're looking at an unsafe block, of course, but those are (a) rare and (b) required to be accompanied with thorough documentation that explains why the code is actually safe.
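To make the unsafe-block point concrete, here's a minimal sketch (my own toy example, not Android code) of the convention: `unsafe` is confined to a small block, and it carries a comment justifying why the code is actually safe, which is what the Rust reviewer checks.

```rust
// Toy illustration of the "documented unsafe block" convention.
// Everything outside the unsafe block is checked by the compiler;
// the reviewer only has to scrutinize the SAFETY comment.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `bytes` is non-empty, so index 0 is
    // in bounds and `get_unchecked` cannot read past the slice.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

In real code you'd just index the slice and let the compiler insert the bounds check; the point is only that when `unsafe` does appear, the justification travels with it.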

Comment Re:C/C++ code covers more complex legacy code (Score 1) 32

I suspect that there is likely a team selection bias as well - what profile is going to be recruited for doing the Rust project? Is it randomly assigning developers or is it something developers seek out ("to be on the Rust team")?

I can answer this, at least for Android (which is the topic of TFA): Android requires all new native code to be written in Rust, unless there is some specific reason to use C++. And, really, the only valid reason for using C++ is that the new code is a small addition to an existing C++ binary. So, effectively everyone on the Android team that works on native code and isn't just making small maintenance changes to existing code is "recruited" to write Rust.

One thing to keep in mind, though, is that software engineers at Google are significantly more capable than the industry average. So while I don't think your point has anything to do with the successful results in Android, it might well be a significant factor in other environments.

Comment Re:His Whole Pitch is Safety (Score 1) 54

It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.

If you believe that AI has the potential to be wildly dangerous, that may be the only path that makes the human race a viable long term player.

And I've yet to see any well thought-out argument showing that AI doesn't have the potential to be wildly dangerous. If anyone has one, please post it!

The closest I've seen are:

1. Humans are incapable of creating AGI, so the AI companies are simply going to fail.
2. There is a hard upper limit on intelligence, and it's not far above human-level, so even when AI companies succeed at creating AGI, superintelligence is impossible.
3. If AI becomes superintelligent, humans will be able to use the same technology to augment their own intelligence so we won't be at a disadvantage.
4. High intelligence naturally and inevitably includes high levels of empathy, so AI superintelligence is inherently safe.

All of these are just unsupported assertions. Wishful thinking, really. (1) makes no sense, since undirected, random variation and selection processes were able to do it. (2) is plausible, I suppose, but I see no evidence to support it. (3) essentially assumes that we'll achieve brain/computer integration before we achieve self-improving AGI. (4) seems contradicted by our experience of extremely intelligent yet completely unempathetic and amoral people, and that's in a highly social species.

The other common argument against AI danger that I've heard is just foolishness: Because AIs run in data centers that require power and because they don't have hands, they'll be easy for us to control. People who say this just fail to understand what "superintelligence" is, and also know nothing about human nature.

A less foolish but equally wrong argument is that of course AI won't be dangerous to humanity: we're building them, so they'll do what we say. This assumption is based on a lack of understanding of how we're building them and of how little we know about how they work.

Generally, when I talk about the risks of ASI, the responses aren't even arguments, they're just content-free mockery of the idea that AGI could ever be real. I think it's just genuinely hard for many people to take the question seriously. Not because there aren't good reasons to take it seriously, but because they just can't bring themselves to really consider it.

Comment Re:You've missed the elephant (Score 3) 54

they're using them for all sorts of things including supposed self driving cars. If the AI fucks up and causes issues , well , on appendix section 16, sub section A, paragraph 21 there'll be a clause explicitly exempting the AI company from any responsibility

Waymo, at least, has explicitly taken responsibility for whatever their self-driving cars do. And, honestly, it doesn't seem possible for a self-driving system's maker to avoid liability, because there's absolutely no other entity to assign it to. Tesla avoids liability (so far) by explicitly requiring human supervision. But if they ever want to claim level 4 or 5 they're going to have to take responsibility.

in jurisdictions where that disclaimer is void then what the hell, they've made billions anyway and they'll just settle out of court

I think such a disclaimer would be invalid in all jurisdictions, if they even tried to make it, which I don't think they'll do because it would be ridiculous. As for settling... yeah, that's what basically always happens with automobile accidents. The at-fault party (or their insurer) pays the costs of the injured party. No one even bothers going to court unless there's a dispute about which party was at fault, and one thing about self-driving systems is that they have incredibly detailed sensor data, all logged and available for review, so there really won't ever be any dispute about fault.

Comment Re:Writing is kinda useful (Score 2) 204

research shows again and again that you retain information differently when you hand write vs type

True, but keep in mind that this is not universal. For me, and for two of my children, writing by hand reduces learning and retention. We have some sort of dyslexia-adjacent disability that prevents us from "automating" writing the way most people do. When kids learn to write, they learn to draw the letter shapes out line by line and curve by curve, but for most people the shape-drawing quickly becomes automatic. Not so for me, or my kids. Writing takes focus and attention, not on the text being written, but on the shapes being drawn. Interestingly, this appears to have no effect on reading; all of us read (and type) rapidly and accurately.

I realized in high school that the common wisdom that hand-written note-taking helped with retention not only didn't work for me, but was actively harmful to learning. I wish I'd had the option of bringing a laptop to class, but in the mid-'80s laptops were more "luggable" than portable, didn't have batteries, and were far more expensive than my family could afford. So I just listened without taking notes, and studied by reading the textbook. Luckily, school came easily to me, so I was able to do well without notes (or all that much studying). In college I learned to ask the professors for their lecture notes to study from.

Comment His Whole Pitch is Safety (Score 4, Interesting) 54

Anthropic's entire pitch has always been safety. Innovation like this tends to favor a very few companies, and it leaves behind a whole pile of losers that also had to spend ridiculous amounts of capital in the hopes of catching the next wave. If you bet on the winning company you make a pile of money, if you pick one of the losers then the capital you invested evaporates. Anthropic has positioned itself as OpenAI, except with safeguards, and that could very well be the formula that wins the jackpot. Historically, litigation and government sponsorship have been instrumental in picking winners.

However, as things currently stand, Anthropic is unlikely to win on technical merits over its competition. So Dario's entire job as a CEO is basically to get the government involved. If he can create enough doubt about the people that are currently making decisions in AI circles that the government gets involved, either directly through government investment, or indirectly through legislation, then his firm has a chance at grabbing the brass ring. That's not to say that he is wrong, he might even be sincere. It is just that it isn't surprising that his pitch is that AI has the potential to be wildly dangerous and we need to think about safety. That's essentially the only path that makes his firm a viable long term player.

Comment Re:Need a prescription. (Score 2) 49

My wife is one of those who left.

Yes, a lot of it is to do with being overworked and underpaid, but not all of it - a lot of it is also due to the unbelievable stress of the responsibility heaped on you as a doctor, coupled with the diminishing respect for doctors from pretty much everyone.

For example, GPs being told that they have to open in the evenings and weekends, despite not having enough staff to run a 9-5 Monday to Friday service already - and your budget is being taken by the pharmacists who are doing random pointless examinations or reviews on anyone who comes through the door (because the pharmacy makes money that way, but they can charge the GP for doing it). And if you refuse to, then a GMC complaint is raised.

How about being rung up by the police at 6pm and told to do a wellness check on a patient, despite it being the police’s responsibility and not yours - but because you have now been told, you have a duty of care if anything happens. Which means a GMC complaint being raised.

How about the physician associate refusing to take your guidance, and putting in complaints if you have any feedback at all which isn't glowingly positive, despite them working under your license and insurance. Which means a GMC complaint being raised.

How about having to spend £100,000 and two years of your life defending your license because someone thought you had too much to drink at the staff party and thus must be an alcoholic, with no evidence at all.

How about the government dictating how much you pay into your pension fund each month and how much you will get back, with the pension fund so massively in profit that the government takes £6 billion in rebates from it annually, and STILL requiring you to pay more in and take less out.

How about patients coming into your consulting room clutching the Daily Mail, complaining that you get paid too much because that's what the newspaper says and ranting for 20 minutes, and then still complaining that you are running behind.

How about the only way to get a specialist training position being an interview held on one specific day of the year, while your current training program absolutely refuses to let you go to it?

I can go on and on.

Comment Re:C/C++ code covers more complex legacy code (Score 1) 32

Rust [...] makes it harder for you to work around the compiler when it comes to memory.

... which, to be clear, is a good thing. Working around the compiler is dangerous and a code smell, so it shouldn't be easy to do. It usually indicates one of two things: either the compiler's capabilities aren't sufficient for your needs (in which case the better solution is a better compiler, or re-evaluating the wisdom of your approach), or you're doing something the wrong way and should find an approach that works with the compiler rather than around it, so that you get the benefit of the compiler's cooperation.
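As a toy illustration (my own sketch, not from any particular codebase) of working with the compiler rather than around it: instead of reaching for raw pointers to get shared mutable state past the borrow checker, you can use `Rc<RefCell<T>>`, which keeps the aliasing checks, just moved to runtime.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two owners share one mutable counter. The "work around the compiler"
// version would cast to a raw pointer and mutate through it; this
// version cooperates with the borrow checker, so any accidental
// overlapping mutable borrows panic immediately instead of silently
// corrupting memory.
fn main() {
    let counter = Rc::new(RefCell::new(0u32));
    let alias = Rc::clone(&counter); // second owner of the same counter

    *counter.borrow_mut() += 1;
    *alias.borrow_mut() += 1;

    assert_eq!(*counter.borrow(), 2);
}
```

The design point is that the memory-safety reasoning lives in the library type rather than in ad-hoc pointer arithmetic the reviewer has to re-verify.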
