
Comment Re:Are they really that stupid? (Score 1) 47

McKinsey (and high-end management consulting generally) works a bit differently than most companies. The top of the McKinsey pyramid is reserved for dyed-in-the-wool consultants who are mainly responsible for bringing in the big clients and for broad thought leadership. The lower-level manager and consultant roles doing the actual day-to-day work, such as it is, are intentionally high-turnover: I think something like 50% of people are gone after two years, and roughly 90% after five. The model is built on intensive work for high pay, with gobs of money incentivizing the ones who really take to client management.

They effectively want turnover anyway. Placing ex-McKinsey-ites in key positions across industry expands their future deal flow, as the alumni network funnels business back to them. And if they need more bodies, McKinsey has the cachet to harvest a fresh crop of Harvard / Yale / etc. undergraduates and Ph.D.s for the (admittedly well-paid) management consulting meat grinder whenever it likes.

In leaner times, they seem to be just helping the process along a bit.

Comment Re:Ironic (Score 4, Insightful) 134

Efforts to curb climate "crisis" ends up adding to it. Seems as myopic with the cure as the cause.

The fuel regulation was an effort to improve air quality, not fight climate change.

Further, I'm not really getting your angle here. Given your scare quotes around "crisis", I'd guess you either think climate change isn't real, isn't caused by humans, or doesn't matter either way.

If it's either of the first two, why would you reject the evidence for anthropogenic climate change yet accept the findings of this paper, which indicate that human emissions from tanker ships do, in fact, affect the climate?

If you think climate change doesn't matter, then I'd think you'd be equally unmoved by the fact that the pollution reduction increases warming.

Comment Re:Does this solve a problem? (Score 1) 18

This is an improvement for most people... just not for you because [reasons].

You are not the target market. Most people is the target market.

...which I acknowledged in my first sentence. I was specifically curious if there's a general security benefit for all users that I'm overlooking. I suppose not.

Comment Does this solve a problem? (Score 3, Interesting) 18

I understand that this will likely bring the general level of password management up for most people, but I struggle to see why an already conscientious user would want this.

I already use a password manager with strong passwords for most sites and applications. What happens if I die? What happens if the fingerprint reader dies at an inopportune moment? Right now I can just share the password with my family or put it in a safe deposit box. And sure, I can set up *their* fingerprints as well, but this all seems like a lot more steps to achieve about the same security I have now.

Comment Re:It's like Pidgin Chat (Score 1, Troll) 54

Why have all AIM, ICQ, and MSN installed when you can just use Pidgin?
https://www.pidgin.im/

Very revolutionary stuff going on here.

AIM? ICQ? Did you hop out of a portal from 20 years ago?

I suspect you are being facetious, though, in which case I will at least acknowledge I appreciate the joke! Trillian did that once upon a time as well.

Comment Re:"would likely disappear in the next five years" (Score 1) 56

These shallow no-outlet lakes do that. Shrink and shrink down to nothing, then grow back, rinse and repeat.

Ultimately, there's a different point to raising the alarm here. Static, dynamic, etc. - I don't really care. Let's assume it's entirely natural for the Great Salt Lake to dry up on long time scales. The consequences would still be pretty bad from an air-pollution and health perspective for the very *un*natural city on its shores, given the heavy-metal waste on the lakebed.

Worth saying again: it doesn't matter whether the lake is currently drying up because of climate change, water usage, natural cycles, or the will of God - it's headed toward zero, and if Utah doesn't want to deal with the consequences of a dry Great Salt Lake, it needs to do something about it.

Arguing about the "why" of it in order to dismiss the warnings (as the original poster does) is counterproductive.

Comment Re:"would likely disappear in the next five years" (Score 4, Informative) 56

Given that this is generally attributed less to climate change than to the rapid growth of Salt Lake City and the lack of a rational pricing structure or limits around water usage, that example is less relevant.

I'm not even really sure where your apparent skepticism comes from. Aerial photos of the lake over the past 40 years paint a pretty clear picture and trendline that doesn't really leave much room for debate. I suppose you could argue that humans will "find a way". However, part of finding a way through is general awareness of the consequences of continuing business as usual.

Comment Re:4 Longer days ... (Score 1) 199

You end up working the same hours, just spread over 4 longer days, rather than 5 shorter ones
and they tend to shift which day people are not available so they still have staff available all days

You say that like it's a bad thing. A lot of these companies do actually lessen the stated hours, but I'd take that trade regardless. I suspect a lot of people would.

Comment How is this compute getting paid for? (Score 1) 27

Lots of good points about the credibility of ChatGPT have been posted already, but another thing: who is going to pay for this?

Sam Altman, the CEO of OpenAI, has said that maintaining ChatGPT as a free service has an "eye-watering" compute cost. They clearly do it for the hype and an eventual equity payout. How is this going to make sense for Opera once OpenAI turns off the free ChatGPT spigot? Seems unwise.

Standard answer would be that the Opera user is the product here, but even then I can't see it making economic sense.

Comment Disappointed to see it end like this (Score 4, Interesting) 46

I've wanted a unified piece of hardware like this for a long time so I could muck around with the software side of a voice assistant. I haven't had time to futz with the hardware side, so having solid components pre-assembled into a good speaker would have helped.

Admittedly I'm part of the problem - even at the $350 price point, before the price was raised further, I felt too hesitant about long-term support to bite. It's absolutely true that the downward spiral in features kept me from investing in it.

Apparently these patent troubles were bad enough they wrote a children's book about it?! https://mycroft.ai/product/myc...

Comment Re:Google Home (Score 1) 56

Great summary, Derec01, it does put some perspective on how it works. However, the fact that it is somewhat simple does not diminish its coolness. I would guess our own minds work also in some simple way. How do we, ourselves, approach answering the question? We get the question, "What is life?", then turn it around, "The life is...", and then "just auto-complete" the rest of it. Think about it, when you answer this question, you really do not do any calculations, or logical proofs, or even literature searches, the "auto-completion" comes up on its own while you are writing it. It is based on your training. I suspect that something very similar is happening in a ChatGPT session. At this point I am not saying that this is the only "operational mode" our minds work in, but definitely it is one of the modes it works in. What do you think about it?

True, it's certainly cool what it can do, and I don't mean to downplay that. There are certainly some interesting things that Transformers are likely doing internally. As a couple of examples, the process of taking "in-context" training examples (e.g. the text it is autocompleting) may effectively recapitulate a training operation like gradient descent (https://arxiv.org/abs/2212.07677). Also, interestingly, we can learn how it operates by projecting out the underlying computations from inside the black box of the neural network (https://arxiv.org/abs/2211.01288). This may teach us new ways to encode information.

However, I don't think it will get all the way there in the near term. Consider the difference between the following two things:
1. a concise computer program that takes an input and produces the right output, respecting the constraints of the problem
2. the set of sequential machine instructions executed while (1) computes an output from an input.

Imagine that you only get to see the operations in (2), never the full program. The trace is far more raw data than the original program, yet usually carries less information. The best you can do is extrapolate from what those recorded executions did, and you will do worst precisely on the inputs for which the original program's behavior was never observed.
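A toy sketch of that program-versus-trace distinction (my own illustration, not from any real system):

```python
# Toy illustration of the gap between a concise program (1) and a trace of
# the instructions it happened to execute (2).

def clamp(x, lo=0, hi=10):
    """(1) The concise program: always respects lo <= result <= hi."""
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def traced_clamp(x, lo=0, hi=10):
    """(2) Record the primitive steps actually executed for ONE input."""
    trace = [("compare", x, "<", lo)]
    if x < lo:
        trace.append(("return", lo))
        return lo, trace
    trace.append(("compare", x, ">", hi))
    if x > hi:
        trace.append(("return", hi))
        return hi, trace
    trace.append(("return", x))
    return x, trace

# A single trace covers only the branch that ran; a learner shown nothing
# but traces of in-range inputs never observes the `x > hi` clamping path.
result, trace = traced_clamp(5)
```

Any one trace is longer than the program itself, but no pile of in-range traces recovers the out-of-range behavior.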

Similarly, I think most of the GPT-like models are training on the "data exhaust" of the more complex processes going on inside a human mind. I believe GPT-3 will often perform pretty well when asked to do tasks that are compositions of operations it has seen. However, it has fewer training points for things that are clearly untrue, and so less opportunity to abstract in a way that respects the distinction. The set of untruths is much greater than the set of truths, and no one takes the time, for instance, to specifically mention that "Calvin Coolidge did not ride an elephant to the War of the Roses" or that "24 is never equal to 5". I am skeptical that the current approaches will successfully introduce the kind of reusable abstractions that are really necessary to induce a higher-level program.

Comment Re: Google Home (Score 1) 56

The first is a pretty good natural language parse to understand what the user wants. That's impressive enough on its own. The other is an ability to generate readable text. That's super useful. The problem is, only a dumb human is using this thing. If it were a backend for something that could generate prompts (or parsed queries) with a ton of detail, it could be great.

I do not believe the first claim is true. I believe the GPT-3 architecture encodes the contextual text directly via Byte-Pair Encoding, and the rest of the computation to generate text is fully internal to the black box of the network weights. After all, the architecture is trained largely on predicting the next token from the previous tokens. There is no human-readable intermediate abstraction from which to extract intent. Maybe a decent intent-extraction system is implicit in the weights, but it isn't currently possible to separate that out and build upon it.
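For a sense of how mechanical that encoding step is, here is a toy greedy byte-pair merge. This is a simplified sketch: real BPE tokenizers learn a fixed merge table from a large corpus, and the function names here are my own.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent token pair, or None if too short."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(tokens, pair):
    """Replace every non-overlapping occurrence of `pair` with one token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("abababcab")          # start from raw characters
for _ in range(2):                  # two merge rounds
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
# frequent substrings have now been fused into single tokens
```

The point is that everything downstream of this encoding happens inside the opaque network weights; there is no intent-parse stage to inspect.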

If they had a fantastic intent extraction engine, then they would offer that as a service. They do not, as far as I know. They offer the outputs of autocompleting the text, and they offer vector representations of text for downstream algorithms.

More impressive, IMHO, are things like Cicero (https://about.fb.com/news/2022/11/cicero-ai-that-can-collaborate-and-negotiate-with-you/), which melds large language models with actual planning algorithms.
