> "1. A vertical bar chart that fails to convey the test score distribution that a histogram would have"
https://live.staticflickr.com/...
.. looks like a histogram to me. what am i missing ?
i checked with someone who knows,
and yeah, nautical miles are an uncommon unit in spaceflight.
> "They're going at least 5,000 nautical miles past the Moon"
that's a surprising choice of unit.
1. it's not an SI unit
2. it seems most relevant when talking about distances w/r/t earth's Lat/Long scheme (a nautical mile is one arcminute of latitude), which doesn't obviously apply out past the Moon.
presumably the person has a reason, i wonder what it is. (quick conversion for scale below.)
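for scale, here's the quoted figure converted; my arithmetic, not the article's, assuming the standard definition of the nautical mile:

# a nautical mile is defined as exactly 1,852 m
NM_TO_KM = 1.852
print(5_000 * NM_TO_KM)  # 9260.0 -> "at least 5,000 nautical miles" is roughly 9,260 km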
interesting, thanks.
i have more questions but i should read the details myself. and i guess polystyrene is different from other plastics, so my wondering was a little off.
love it.
i do wonder if there's a risk/boon of those grub-gut bacteria evolving their way out of the grubs' guts, becoming viable in the world at large, and chomping on all the awesome plastics we have around.
(i can no longer find/see the post i was replying to. i swear it was just here!)
> In fact, LLMs present no danger at all, it's only what an LLM can control that presents a danger.
ime, the presence of "in fact" in front of a claim doesn't make it a fact.
that aside,
you are crazy if you think AIs pose no danger. it's like saying drunks trying to get home from the bar pose no threat, it's just the cars they pilot which do.
AIs are going to play the role of wardens of life-impacting functionality and decisions. for example health-care decisions. financial capabilities. insurance claim investigation. hiring. firing. driving. flying. emergency vehicle dispatch. etc.
so vulnerabilities in AIs are definitely dangerous.
i think it's good to question whether reality is even understandable in the QM regime, but i think you weaken your point by appealing to survival value. there are loads of things that had no survival value back when we were hunter-gatherers but which we're pretty good at. driving cars, for example. understanding electromagnetics. playing piano.
with only 24% reporting high confidence that their interpretation is correct, i wouldn't describe them as "disagreeing wildly".
good luck with that.
pretty sure they're not going to constrain their profile of you to just information you type in on their website.
this is just clickbait.
everyone knows these models are not good at actual gameplay, nor is it news that they will confidently misstate stuff. it wasn't news the first time around and it's still not news, and it misses the point that there is a Ton of stuff that humans currently do which the models will do cheaper.
what is "rigs" ?
if you're asking for the source code then you haven't been paying attention.
read the first page or so of the paper.
it's clear that:
1. in each request they're providing the model with the history of the conversation so far. this is standard practice.
2. they're also using the API's "tools" feature (also known as "functions") to give the model channels of agency. for example the model would be told "use the 'cmd' tool to execute a bash command".
a rough sketch of what such a request looks like is below.
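just a minimal sketch under my own assumptions: an OpenAI-style chat completions API via the openai python client, made-up model name, prompts, and schema. the 'cmd' tool name comes from the paper's description; none of this is the paper's actual code.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) the full conversation history is resent with every request
history = [
    {"role": "system", "content": "You are an agent operating a Linux shell."},
    {"role": "user", "content": "List the files in the current directory."},
]

# 2) the "tools" / "functions" feature gives the model a channel of agency:
#    it can ask the harness to run a bash command on its behalf
tools = [
    {
        "type": "function",
        "function": {
            "name": "cmd",
            "description": "Execute a bash command and return its output.",
            "parameters": {
                "type": "object",
                "properties": {"command": {"type": "string"}},
                "required": ["command"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; not necessarily the model from the paper
    messages=history,
    tools=tools,
)

# the model answers with a tool call like {"command": "ls"}; the harness runs it,
# appends the output to the history, and sends the next request
print(response.choices[0].message.tool_calls)

that loop (model proposes a command, harness runs it, output goes back into the history) is basically the whole "agent".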
wait so it's just a general purpose LLM given some reading material and asked to make predictions ? failure there is not very interesting. i'd be way more interested in a custom-trained network for horse betting.
Top Ten Things Overheard At The ANSI C Draft Committee Meetings: (3) Ha, ha, I can't believe they're actually going to adopt this sucker.