Comment First, define ultra processed foods (Score 2) 213

The term "ultra-processed foods" is one of those "I'll know it when I see it" definitions so far. Until someone can actually define what the term means and use it to meaningfully identify foods with common characteristics that may be bad or good under particular circumstances, anything anyone says about how dangerous they are is full of shit, because there is no real category.

Comment Re:8K will of course be a thing, just not yet. (Score 1) 136

My point is simply that as TVs get bigger, there's room for more pixels.

But there's a limit, which is when the screen approximately fills your vision at your seating distance. And once there are enough pixels to meet or slightly exceed the resolution of your vision at that distance, there's no benefit to going higher.

I decided to do the math on this, looking up the comfortable viewing angle (theatre screen-width recommendations are 40 degrees, but you can push it to 60 degrees before it gets uncomfortable) and the angular resolution of the human eye (about 1/60th of a degree). That allows calculation of both screen size and pixel size for any given viewing distance. When you do the math, it turns out that the distance cancels out. If you have an optimally-sized screen for a given distance, and an optimal pixel density for that distance, what you find is that you always need a resolution of 3590x1860. The resolution of a 4k screen is 3840x2160.

If you place a 4k screen at the distance at which 20/20 vision exactly resolves pixel size, it fills 64 degrees of your horizontal vision and 36 degrees of your vertical vision, which is slightly more than the maximum comfortable range. If you put it just a little further away, so you can comfortably see it without moving your head, then the pixels are slightly smaller than you can resolve.

Anything above 4k either gives you pixels that are smaller than your eye can resolve or, if you sit close enough that you can resolve them, requires you to move your head to see the whole screen.

If you are willing to move your head, then the angular size of the screen can increase to about 130 degrees horizontal by about 60 degrees vertical. And... guess what? An 8k screen positioned at optimal resolving distance fills 128 degrees horizontal by 72 degrees vertical.
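If you want to check my arithmetic, here's the small-angle version (one pixel per arcminute of 20/20 acuity; this shortcut skips the tangents, so the numbers land near, not exactly on, the figures above):

    // One pixel per arcminute of 20/20 acuity: required pixels ~= degrees * 60.
    // Note that viewing distance cancels out; only the angular size matters.
    fn required_pixels(h_deg: f64, v_deg: f64) -> (u32, u32) {
        let px_per_deg = 60.0; // eye resolves ~1/60th of a degree
        ((h_deg * px_per_deg) as u32, (v_deg * px_per_deg) as u32)
    }

    fn main() {
        // 60-degree-wide 16:9 screen (no head movement):
        println!("{:?}", required_pixels(60.0, 33.75)); // (3600, 2025) -- just under 4k
        // 128 x 72 degrees (moving your head):
        println!("{:?}", required_pixels(128.0, 72.0)); // (7680, 4320) -- exactly 8k
    }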

If you also want to have the screen covering your peripheral vision then you may want to go even wider, up to about 200 degrees horizontal and 140 degrees vertical. But you'd need far less pixel density outside of the central region.

TL;DR: 4k is optimal if you don't want to move your head and 8k is optimal if you are willing to move your head. Anything higher is not useful unless you have significantly better-than-normal vision. So for watching video where you want to see the whole screen at once, moving only your eyes to focus on different regions, 4k is ideal. For a display screen it makes more sense to move your head from place to place as you focus on different parts of your work, so 8k resolution positioned to cover 120 degrees of horizontal vision is probably the upper limit.

It's almost like someone thought about this when picking those 4k and 8k resolution numbers...

Comment Re:Must be nice ... (Score 2) 176

Whether that is the true valuation or not, well, that's always the game of stocks, right?

Indeed, it is.

If enough people believe a price, that's the price.

Until they don't, and reality always sets in eventually. Companies can maintain sky-high valuations on dreams and promises for a few years, but eventually they have to actually start generating investment returns or their price crashes.

In theory, the current value of a company (and therefore its stock) is the net present value of its future dividend stream plus the current value of its net assets. If that phrase made your eyes glaze over, stop and understand it, because it's really not that hard.

What it means is that you add up how much profit the company will make this year, plus how much next year, and so on forever, reducing each year's profit by a compounding percentage called a "discount rate". Say, 8%: year 1's profit is multiplied by 1/(1.08)**1 = 0.93, year 2's is multiplied by 1/(1.08)**2 = 0.86, year 3's by 1/(1.08)**3 = 0.79, and so on. Because the discounting eventually drives the contribution of future years asymptotically to zero, you can sum this infinite series and get a finite number (as long as you don't assume that the profits grow faster than the discount rate every year). Then to that you add the value of whatever stuff the company owns (real estate, equipment, money in the bank, etc.) and subtract out whatever debts it has. This math assumes that the company actually pays out its profits in the form of dividends, but it doesn't really change anything if it doesn't. If it just piles all profits into cash or other assets rather than issuing dividends, then you're just calculating the net present value of that future stream of asset increases.
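A toy version of that sum in code (flat profits and made-up numbers; with constant profit the series is geometric, so it collapses to profit divided by the discount rate):

    // Dividend discount model with constant profits: the sum of P/(1+r)^n
    // for n = 1..infinity is a geometric series converging to P/r.
    fn company_value(annual_profit: f64, discount_rate: f64, net_assets: f64) -> f64 {
        net_assets + annual_profit / discount_rate
    }

    fn main() {
        // $1B/year forever, 8% discount rate, $2B in net assets:
        println!("${:.1}B", company_value(1.0, 0.08, 2.0)); // $14.5B
    }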

This simple version of net present value leaves out some details and there are more sophisticated versions, but while they affect the details of the computed result, they don't change the basic structure that a company is worth what it has plus what it's going to generate in the future, discounted.

In practice, of course, even applying the theory requires a lot of prognostication about the company's long-term profit stream and about the discount rate, which depends on the future economy as a whole. And there's a lot of emotion-driven trading that further pushes the price up and down. But the effect of the emotion-driven trading is (generally; there are exceptions) tamped down by hard-nosed institutional investors who do the research to make the best estimates they can, and then do the math.

I mention all of this just because I think there are a lot of people who think that stock prices are unmoored from any underlying reality, that they're just pure randomness and vibes. The people who really believe that lose their shirts when they play the market because they're wrong, handing their cash over to the hard-nosed investors who do the research and the math. This is different from cryptocurrency prices, which truly have no basis in anything real. Those are pure vibes.

Comment Re:Ketamine (Score 2) 176

When will the SEC step in? Oh right, never -- he bought them off.

I don't see any reason the SEC would be involved. When there is an IPO, then the SEC will care, but even then I don't see any particular reason for concern. Musk may be setting ridiculously high valuations, but that's a question for the market to decide, not regulators. Either investors will agree or they won't, and they'll either make money or lose money depending on the accuracy of their decision, but that's not something the SEC should care about; that's just the market at work.

The SEC's job is just to make sure that the information presented to investors is accurate. Not the projections of what will happen in the future; Musk can make whatever promises he wants and the market gets to decide how much of it they believe. No, the SEC just has to make sure that he isn't misrepresenting the present financial state of the company -- when it goes to IPO. Until then, barring some circumstances which don't seem to exist in this case, the SEC has no role at all.

There are some other federal agencies that might get involved. The FTC scrutinizes large mergers even of non-public companies, and $1.25T is definitely very large. However, it doesn't seem like there's anything here for them to object to. Because of what SpaceX does there may be other scrutiny based on national security implications but, again, that wouldn't be the SEC.

Comment Re:8K will of course be a thing, just not yet. (Score 1) 136

The future will bring us 8K, and heck, much better resolutions than that too.

Why? Even 8k is at or slightly beyond the limits of human vision from a TV-watching distance. There may be room for monitors at 8k, though I think it's more likely that we'll just shift to talking about monitors in terms of dpi, and once they get up to about 200 there's just no benefit in going any higher. Phones are held close enough to the eyes that there's some justification for 300 dpi, but it's weak.
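The arithmetic, for the curious (my own rough viewing distances): the smallest pixel worth having subtends about one arcminute, so the useful dpi ceiling is purely a function of how far away the screen is.

    // Max useful dpi: the density at which one pixel subtends ~1 arcminute.
    fn max_useful_dpi(viewing_distance_in: f64) -> f64 {
        let one_arcmin = (1.0_f64 / 60.0).to_radians();
        1.0 / (viewing_distance_in * one_arcmin.tan())
    }

    fn main() {
        println!("{:.0} dpi", max_useful_dpi(18.0)); // monitor at ~18": ~191 dpi
        println!("{:.0} dpi", max_useful_dpi(12.0)); // phone at ~12": ~286 dpi
    }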

There's just no benefit in going higher.

Comment Re:Fine, build them... (Score 1) 109

I don't know much about the exact requirements for lights though. Would they have lights at the top for planes?

I think that is the primary purpose. They aren't classified as ATON (Aids To Navigation) because the wind turbines are floating on a tether and move around a little, which makes them unreliable for navigation fixes. So the lights are there to make them visible to boats and aircraft. Maybe they have 70-foot-high yellow lights for boats and red lights on the generator nacelle at the top of the tower for aircraft, and those just aren't mentioned on the nautical charts? Dunno.

Now I really want to sail out there at night and see :-)

The turbines also have AIS beacons, and they're marked on the charts. Those are the real measures used to prevent marine collisions. All commercial vessels and even lots of recreational vessels have AIS receivers (my piddly little 25-foot sailboat does) and of course everyone should always be using up-to-date charts. In this era of electronic charts with GPS there's absolutely no reason not to have fresh charts all the time. Anyway, I'm not really sure what the lights are for. Maybe just regulatory compliance.

Either way, there's the scaling due to the distance, so even bright lights would presumably get pretty faint even if visible over the horizon.

Yes, it takes a very bright light to be reliably visible for 15 miles, even in clear weather. If you look at some of the ATONs that have such lights (like lighthouses), they're really bright. Of course they're going for guaranteed visibility (in reasonable conditions), whereas we're trying to assess a much lower visibility standard.

So, generally, I quite agree: regardless of a person's opinion on their aesthetics, since you mostly can't even really see them, they're not an eyesore.

Perhaps the strongest evidence for this is the fact that all of the rich and powerful people with clifftop houses on Martha's Vineyard didn't stop the project.

Comment Re:Fine, build them... (Score 1) 109

It's actually quite far offshore, though. I sailed around Martha's Vineyard in August -- we spent the night moored off of Edgartown, then in the morning decided to make our way by going down the eastern side, against the open ocean. The instructor (this was a sailing class, Advanced Coastal Cruising) told us about the wind farm so we looked for it, but couldn't see it. The farm is 15 miles offshore, so you can't see the wind turbines during the day at all, even on a clear day.

Right. At 15+ miles, depending on where you were viewing from, the very tip of a blade would still reach 760 feet above the horizon, with maybe the bottom 100 feet hidden. However, that would look about as big at that distance as a pinkie nail at arm's length.

You mean the height would be about like a pinkie nail at arm's length? I haven't done the math, but I can buy that. You make a good point, though. I was going off the height of the light (I wonder why it's so low -- is the chart notation wrong maybe?), but the turbines are much taller than that.

Add to that that it is a spindly thing made of stick-like objects and painted a non-reflective light gray.

Yes, this is the core point. The towers are about 30 feet in diameter. A 30-foot-wide object at 15 miles would look about as wide as a human hair does at arm's length. So, could you see something as wide as a hair, as tall as a pinkie nail at arm's length, and with low contrast? Against the broad sweep of ocean and sky, they would not only not be an eyesore, they'd be very hard to spot even when you're looking.
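For scale, the same math in code (statute miles; the full-moon comparison is mine):

    // Angular size in arcminutes (1 arcminute ~= the eye's resolution limit).
    fn arcminutes(size_ft: f64, distance_ft: f64) -> f64 {
        (size_ft / distance_ft).atan().to_degrees() * 60.0
    }

    fn main() {
        let d = 15.0 * 5280.0; // 15 statute miles, in feet
        println!("{:.1}'", arcminutes(30.0, d));  // tower width: ~1.3' -- barely above the eye's limit
        println!("{:.0}'", arcminutes(760.0, d)); // blade-tip height: ~33' -- about the full moon's diameter
    }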

Comment Re:Fine, build them... (Score 1) 109

I'm sure they'll welcome those views, just like they welcomed the illegal aliens flown in from Texas and Florida

So, in other words, you think they will welcome those views? The community on Martha's Vineyard did, after all, act to help those kidnap victims.

And they did, in fact, welcome the wind farm view, or at least not oppose it strongly, because the wind farm under construction is, in fact, offshore of Martha's Vineyard.

It's actually quite far offshore, though. I sailed around Martha's Vineyard in August -- we spent the night moored off of Edgartown, then in the morning decided to make our way by going down the eastern side, against the open ocean. The instructor (this was a sailing class, Advanced Coastal Cruising) told us about the wind farm so we looked for it, but couldn't see it. The farm is 15 miles offshore, so you can't see the wind turbines during the day at all, even on a clear day.

The instructor said he thought you could see the mast lights at night, but looking at the nautical chart I think he was wrong, at least from boat-level. The chart says each turbine has a yellow flashing light on it, at 69.9 feet. From boat height (about 6 feet above the water) and applying the "distance to horizon" formula, I get an observable distance of 12.6 nm. The closest we got to the nearest wind turbine was 11.4 nm, according to my chart (we were about 1 nm offshore), so assuming the light was bright enough we could have seen it. But the next-closest turbines (three of them) were right at the 12.6 nm distance, so their lights would have been right on the horizon, if visible at all. All the rest were 15+ nm away from where we were.
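For anyone who wants to plug in their own heights, the formula is geographic range (nm) = 1.17 * sqrt(height in feet), applied to the observer and the light separately and then summed:

    // Distance to horizon: d (nm) ~= 1.17 * sqrt(height in feet).
    // Geographic range = observer's horizon + light's horizon.
    fn horizon_nm(height_ft: f64) -> f64 {
        1.17 * height_ft.sqrt()
    }

    fn main() {
        // 69.9 ft light seen from ~6 ft above the water:
        println!("{:.1} nm", horizon_nm(69.9) + horizon_nm(6.0)); // ~12.6 nm
        // Observer height needed to see that light from 22 nm:
        let h = ((22.0 - horizon_nm(69.9)) / 1.17).powi(2);
        println!("{:.0} ft", h); // ~109 ft
    }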

People in tall houses on the higher points of the island are high enough to see a 70-foot object from ~22 nm, so they could theoretically see the lights for about a third of the turbines, BUT there's another issue: the charts don't list the luminous range of the turbine lights. That typically means they're not very bright, and not visible from more than 3-5 nm.

TL;DR: looking at the charts, I seriously doubt anyone on Martha's Vineyard can see the turbines at all, ever, day or night. You'd have to get a lot further out into the wild Atlantic to see them, I think, even at night.

Comment Re:Very good for novices, but reinforces bad habit (Score 1) 48

*Those* are the novices I am / we are concerned about never advancing beyond "novice" level.

Indeed. That's a very real concern. We can safely and effectively use LLMs because of our experience and deep understanding of all the layers. But, clearly, novices who come up through the ranks with LLM assistance will never have to build that understanding. They'll rely on the AI.

I suspect it's more of that than what some are claiming that "software is doomed" and "we're going to lose all experienced coders". Nah...I suspect we're just changing the type of coder that's going to be considered "experienced" and the domain we're going to consider them experienced in.

That's... plausible! And honestly the most hopeful thing I've heard in a while about what the future of the profession looks like. I like your analogy with compilers and other low-level tools that we used to have to know how to double-check.

But my point wasn't about any of that future stuff. My point was that I find Claude to be incredibly useful to me in getting my work done faster and better now.

Comment Re:Very good for novices, but reinforces bad habit (Score 1) 48

"I'm writing a new crypto library"

yeah ok so you can be put on ignore.

Sigh. That's why I clarified that I'm not writing algorithms.

Also, you should consider that I wrote the primary crypto library used on Android, some three billion devices. I'm neither a dilettante nor a clueless noob. I've been a professional crypto security engineer for over 20 years. The reason I'm writing a library with a new API is because I have broad and deep experience with all of the existing libraries and the footguns they provide, and I'm trying a novel approach that I think will reduce user error.

Comment Re:Very good for novices, but reinforces bad habit (Score 2) 48

AI is very good for novices, people who don't know something well.

There is plenty of evidence already that novices using AI will remain novices, rather than develop advanced skills. So yes, as a "novice", you can get to some result quicker by using AI, but the result will be that of a "fool with a tool", and your next work's result won't be better, because you didn't learn anything.

It depends...

So, I'm a very experienced software engineer. Going on 40 years in the business, done all kinds of stuff. But there are just too many tools and too many libraries to know, and you never use the same ones on consecutive projects, that's just reality. What I've found is that telling an LLM to do X using this tool I've never used before and then examining the output (including asking the LLM for explanations, and checking them against the docs) until I understand it is at least an order of magnitude faster than learning it myself. I have no doubt that an expert in that tool would end up questioning some of the choices, because I only end up exploring the parts of it that the LLM chose to use. But that doesn't matter as much as the fact that I have a working solution and I understand how and why it works and am capable of debugging it far quicker than I could learn it myself.

As an example, I'm writing a new crypto library -- not implementing the underlying algorithms, which will actually be executing in secure hardware, just putting a user-friendly API on top and pre-solving a lot of the subtle problems that come up so the users of the API won't have to. Anyway, my implementation is in Rust, for good reasons, but at least some of the clients want C++, so I need to bridge C++ and Rust. After looking up the options and discussing the pros and cons with the LLM, I made a choice (CXX), and told the LLM to write a CXX FFI so the C++ API I wrote can call the very similarly-structured (modulo some C++/Rust differences) Rust API I wrote. The LLM did this in about five minutes, including writing all of the Makefiles to build and link it, and some tests.
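For anyone who hasn't seen CXX, the bridge it generated is declared roughly like this (hypothetical names and placeholder bodies -- a sketch of the mechanism, not my library's actual API):

    // A #[cxx::bridge] module declares the shared interface; the cxx crate
    // generates the C++ header and the glue code on both sides from it.
    // (Hypothetical example types/functions, for illustration only.)
    #[cxx::bridge]
    mod ffi {
        extern "Rust" {
            type Signer; // opaque Rust type, visible to C++ behind a smart pointer
            fn new_signer(key_id: &str) -> Box<Signer>;
            fn sign(self: &Signer, message: &[u8]) -> Vec<u8>;
        }
    }

    pub struct Signer;

    fn new_signer(_key_id: &str) -> Box<Signer> {
        Box::new(Signer) // real version would open a secure-hardware key handle
    }

    impl Signer {
        fn sign(&self, message: &[u8]) -> Vec<u8> {
            message.to_vec() // placeholder; real version calls into the hardware
        }
    }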

It didn't work, of course. But it wouldn't have worked the first time if I'd written it either. So I reviewed the tests the LLM had written, directed it to improve them, then told it to debug the problem and make them pass. It did so, and explained the bugs and the fixes. While the LLM was working, I read the bridge code it had written and looked stuff up in the documentation, occasionally asking questions to another LLM instance. Within 20 minutes it was all working. So, I'm 30 minutes into this FFI task and I already have (a) code that works and (b) tests that prove it. I can also see a bunch of things about the bridge code that I don't like. Some of those things turn out to be fine; most of them are genuinely bad. Exploring the options (with the LLM's help), tweaking a bit and fiddling with the tests for another hour gets me to something that appears -- with my decades of programming experience but limited knowledge of this tool -- to be pretty good.

This is good, because I have some more new tools to learn/use. Today. 90 minutes got me a good-enough-for-now FFI solution (for a pretty large and complex API surface) that's probably not too far from actually being good.

Next up, I need a persistent key/value store with particular performance characteristics, high reliability, a solid track record, a no_std (no standard library) Rust API, and that can run on QNX 7.1. Turns out there is no such beast, but lmdb is pretty close. It has all except the no_std Rust API. But there are some Rust crates that offer thin wrappers around lmdb's C API. lmdb-master-sys, part of heed, looks like the best-maintained and most widely-used of these. So I asked the LLM to take a look at what changes might be necessary to make it work as no_std. The LLM identifies a tiny set of cases where the standard library is used, and they're all trivially-replaceable. So, I make the changes while I ask the LLM to write some unit tests. It works, first time. I send a PR to the maintainer of the library. Total time, about 20 minutes. It would have taken me at least three times that long to figure out how to use the lmdb API to write the tests.
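The changes themselves were the usual no_std shuffle -- roughly this shape (illustrative only, not the actual diff):

    // Typical std -> no_std substitutions in a -sys wrapper crate
    // (illustrative shape, not the real lmdb-master-sys change):
    #![no_std]
    extern crate alloc; // heap types remain available without std

    use core::ffi::c_int; // was: std::os::raw::c_int
    use alloc::vec::Vec;  // was: std::vec::Vec

    // Signatures like this stay identical after the switch:
    pub fn error_bytes(err: c_int) -> Vec<u8> {
        Vec::from(err.to_le_bytes()) // placeholder body for illustration
    }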

Next up... I'll stop here, but you get the idea. If you need to work with a lot of tools you don't know well -- and at least for me the speed at which I need to jump between tools pretty much guarantees that I'll never know any of them really well -- but you have enough experience and deep-enough expertise to quickly see what an implementation is doing and to understand why and how, LLMs will massively accelerate your work.

They also give you time to post on slashdot, while you wait for the LLM to do stuff. Er, I mean they give you time to catch up on email, do code reviews, watch company training videos, read documentation, etc.

Others' mileage will vary, of course, but I find that using an AI tool significantly increases my overall velocity (probably 1.5X overall) while simultaneously significantly increasing the quality of my output. The quality increase isn't because the LLM is better than me. It's definitely not. But it's way, way faster, especially at doing the grungy work that I tend not to do as thoroughly as I should. For example, writing really thorough commit messages. I totally delegate commit message writing to the LLM now. I review and sometimes tweak, but not often.

And its speed makes some things possible that otherwise weren't. For example, often I'll see some aspect of my code that could be made 10% better with a large refactor, and I have to weigh the benefit of the small improvement against the time sink of the large refactor. No longer. I tell the LLM to do it. Sometimes I tell three instances of the LLM to do three different things (in different checkouts of the code), then decide which, if any, I want to keep (after significant tightening and improvement, some manual, some by giving detailed directions to the LLM).

The result is that while I might do one in ten of those 10% improvements without an LLM, netting an overall 10% improvement, I'll probably do half of them with the LLM (the other half I'll realize weren't actually good ideas, for reasons that weren't obvious until making the attempt -- or seeing the LLM make the attempt). Five compounding 10% improvements is 1.1**5, for a net improvement of ~60%.

And as for debugging... wow. Claude is seriously good at debugging. It doesn't always get it right the first time, but between the speed at which it can examine the situation, form a hypothesis, test to invalidate the hypothesis and move on to the next hypothesis and the quality of its hypotheses, it may be two orders of magnitude faster than me. It's especially good if you give it a stack trace to parse. Repeatedly it's found the root cause of fairly deep, grungy bugs in less than five seconds, including the time it took to generate a detailed and precisely-correct explanation of the problem. It then takes me a few minutes to parse and understand the explanation, then validate it against the code and (if necessary) relevant documentation. Claude isn't always right in its analyses, of course. But it's very good.

Anyway, for me LLM assistance for development significantly improves both my productivity and the quality of my output. YMMV.

Comment Re: This slow release... (Score 1) 169

Incompetence isn’t baked in to authoritarianism - it’s left entirely up to chance.

Chance, yes, but there are two factors you're failing to consider.

The first is that competent people generally don't want to work for narcissistic authoritarians, both because it sucks and because they know their own value and want to be hired for that, rather than because they're good at being sycophantic.

The second is that competent people are rare. If you're choosing at random, the odds are extremely high that you'll get an incompetent one. This is exacerbated by the first factor, since competent people are likely to remove themselves from consideration.

So, yes, incompetence is baked into authoritarianism. At least, incompetence at anything other than brutality and corruption. The people who are really good at those things actually seek out authoritarians, because non-authoritarian regimes won't let them get away with doing what they like, and are good at. If you're a really nasty son of a bitch you can trade undying loyalty and nauseating obsequiousness for a pass to exercise your nastiness. For people like that, it's a good trade. Also for the authoritarian, because authoritarians need nasty people to inspire fear in their subjects.

Of course, even with all of these factors working against them authoritarians typically manage to find a few competent people and keep them around. But they're always a tiny minority. Incompetence is baked in.

Comment Re:Given the current microsoft state? (Score 1) 124

He was probably being told daily to shove copilot into systemd somehow. Not ruin linux on purpose or any sort of "make windows win" kind of deal, just get copilot inside linux, because SOMEONE has to use the thing.

Given what he's decided to do, it's more likely he wanted to implement his trust verification ideas in Windows and got shut down, so he decided to go do it with Linux.

As someone who spent much of the last decade thinking and working on topics related to system integrity and remote proof thereof, I'm interested to see what his ideas are and if they're actually novel, or at least have some innovative twist.
