Comment Re:Macs (Score 1) 284

That used to be true for a while, but it isn't any more, now that Apple are making their own SoCs. Try finding any kind of commodity (workstation) motherboard with anything remotely approaching a Studio Ultra's memory bandwidth. High-end GPUs have it, internally, but you're not running your applications on those: you're talking to them across a PCIe bus and mucking about with disconnected memory spaces. The unified memory of current Mac workstations is _much_ closer to the architectures you used to see on the DEC/Sun/SGI workstations of yore.
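For a sense of what "memory bandwidth" means in practice, here's a rough single-threaded STREAM-style triad in C. The array size and constants are just illustrative, and real sustained numbers depend heavily on compiler flags and thread count, so treat it as a sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1L << 26)              /* 64M doubles per array, ~512 MiB each */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];     /* triad: two reads + one write per element */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double bytes = 3.0 * N * sizeof(double);
        printf("~%.1f GB/s (check: a[0]=%g)\n", bytes / secs / 1e9, a[0]);
        return 0;
    }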

Comment Re:Macs (Score 1) 284

The comparison was "Unix workstations". Those were all "proprietary hardware", with proprietary processors and their own proprietary variants of Unix. They were all desperately trying to keep people locked into their walled gardens by adding their own special features and differences, not least in the system management interfaces.

Of course there are many things that are rubbish about Apple. There were many things that were rubbish about all of them, too. They're a one-size-fits-all corporate behemoth. They still make the best Unix workstations ever.

Comment Re:Macs (Score 4, Informative) 284

Exactly. Macs (ever since NeXT/OSX) have been easily the best Unix workstations you could buy. They have always been hands-down a better price/performance proposition than the wares of Sun/Oracle, DEC and SGI. Back in the early days Apple deliberately courted that cohort (even though one assumes it was comparatively small) by advertising that they had UNIX(TM) certification from the Open Group, something not really achievable by Linux or the open-source BSDs.

These days you can buy a 20-core 64-bit RISC-powered pizza box(ish) with absurdly good (by "Unix workstation" standards) graphics performance and memory bandwidth for "reasonable" money. And it'll still run all of your commercial Unix CAD tools and all of your old-as-dirt closed- and open-source Unix software. What's not to like?

Comment Re:WebAssembly for what? (Score 3, Informative) 67

More curious than enthusiastic about WASM myself; as an engineer, any opportunity for efficiency seems like a good idea. However, since there doesn't seem to be any way to AOT-compile existing JavaScript applications into corresponding WASM ones, it's hardly surprising that uptake is slow.

Having said that, none of the complaints in the parent post appear to be based in reality:

"maintain binary code on multiple architecutres": nope. WASM is a cross-platform abstract bytecode, not vastly dissimilar from JVM bytecode, but lacking the underlying memory management, object model and access to any APIs (see above about hard-to-use). So no, you aren't maintaining binaries for multiple architectures, any more than you do that for JavaScript itself.

"nasty security consequences": no more so than JavaScript, and arguably less, given the aforementioned lack of access to any APIs. So probably more secure than JavaScript. Of course, it isn't JavaScript, so any "protection" based on content-scraping is probably toast, but since cryptographic permutation and obfuscation is also a thing, it's toast anyway.

"poor support from browser vendors": there are only a couple of javascript engines in the world (V8, SpiderMonkey and JavaScriptCore, now that Edge also uses V8). WASM uses the javascript compiler to do the final-step code generation and optimization, and none of those three are slacking off, and JavaScript isn't going away. Ergo, WASM is well supported (if not necessarily enthusiastically) by browser vendors.

The "not able to find experts" is very probably real though. Not necessarily a ding against WASM itself, more an observation of the state of the learning curve.

Comment Re:Good reasons to switch? (Score 2) 66

My personal reasons for running it (note, this is different from an argument to switch from Linux):
1) I've been running BSD since the mid-80s and haven't felt the need or desire to change. Works for me.
2) Ports: everything on my system is built from source, and the ports framework makes that "just work". I like that I can debug and fix anything on my system.
3) Simplicity: I still feel as though I can understand everything that is happening. There aren't too many system processes.
4) ZFS
5) The community mailing lists are pretty helpful, as a general rule.

Security seems OK. There's a security officer, security announcements and security updates, as you would hope from a well-engineered system.
Performance seems OK. It goes as fast as the hardware allows, although the v13 announcement suggests that there might be some useful improvements even so.
Stability seems great. Holds up under load. Doesn't go down. Pretty much what it's supposed to do.

So: my reasons may not be reasons to shift from a Linux distro...

Comment Re:GPU (Score 5, Interesting) 82

There are some folk working on building a GPU out of an array of RISC-V cores...

Couldn't comment on how "viable" that is.

Personally, I like to think that the wheel of reincarnation might finally be turning back around to CPUs. I really don't _want_ a GPU. I want a chip with 100+ beefy cores, all with vector engines, and a dumb frame buffer (and heaps of memory bus bandwidth, of course), and let an open-source software rendering stack do all the work. I don't _want_ to shuffle shaders off to some opaque, badly documented secondary card. I want to be able to write and debug that code just like any other piece of code. I sometimes dream of making a workstation out of an Altra Max and a DMA pipe connected to an HDMI port...
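In that world the "driver" shrinks to something like the toy below. The names (fb, WIDTH, HEIGHT, shade_band) are made-up stand-ins for whatever the real scan-out hardware would expose; the point is that shading becomes ordinary, debuggable C:

    #include <stdint.h>

    #define WIDTH  1920
    #define HEIGHT 1080

    static uint32_t fb[WIDTH * HEIGHT];   /* pretend this gets scanned out over HDMI */

    /* Each core takes a band of scanlines; no driver, no command queue, and the
       inner loop is exactly the kind of thing an autovectoriser chews through. */
    void shade_band(int y0, int y1) {
        for (int y = y0; y < y1; y++) {
            for (int x = 0; x < WIDTH; x++) {
                uint32_t r = (255u * (uint32_t)x) / WIDTH;
                uint32_t g = (255u * (uint32_t)y) / HEIGHT;
                fb[y * WIDTH + x] = 0xFF000000u | (r << 16) | (g << 8);
            }
        }
    }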

Comment Re:That's odd. (Score 2) 113

The 600+ instruction scoreboard/scheduler in the M1 (much larger than the 250-ish instruction window in the latest AMD and Intel cores, and those seem fairly adequate anyway) means that compiler microarchitecture cost functions aren't as important as they were in the old days, or as they still are on in-order processors like the Cortex-A53 "little" cores, or on embedded processors. The processor is effectively dynamically recompiling and instruction-scheduling your code as it runs, anyway.

So: as long as the compiler does a decent job of autovectorising, so that the vector fmac instructions are in the instruction stream in the first place (and both clang and gcc are pretty good at this these days), the processor is going to eat any mismatch between the compiler's cost models and reality.
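The sort of loop I mean is nothing exotic; something like this, which both compilers will happily turn into vector fused multiply-adds at -O2/-O3 (exact codegen obviously varies by target, and the fma contraction depends on your -ffp-contract setting):

    /* Classic saxpy: the restrict qualifiers tell the compiler the arrays
       don't overlap, which is what lets it vectorise without a runtime check. */
    void saxpy(long n, float a, const float *restrict x, float *restrict y) {
        for (long i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];   /* one fused multiply-add per element */
    }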

Comment Works fine so-far. (Score 1) 101

I upgraded my 5K iMac (2019) this morning: left it chugging away and went for a bike ride. Brightly-coloured login screen waiting for me on my return. Seems to be less disruptive than Catalina, so far, although there's a good deal more visual redesign than in the last several releases. All of my usual apps re-opened. Network drives in the dock re-mounted. Nothing adverse to report, so far. Quite boring, really. Oh, there's one (?) extra slice on the root partition and root is now mounted "sealed". Good.

Comment Idea of standardised markup is the failure (Score 2) 161

Writing parsers is easy. Even with a standard markup or serialization language you need to pull in an enormous, buggy support library and _still_ write code that understands what's in the data file, range-checks it and sanitizes it. You're 90% of the way to a bespoke data representation already.
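For instance, a hand-rolled parser for some imaginary "key=value" config format (the struct, keys and limits here are all made up) is only a few lines, and the validation is most of it either way:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct config { long port; char host[64]; };

    /* Parse one "key=value" line into cfg; returns 0 on success, -1 on anything
       dodgy. These checks get written whether the bytes arrived as JSON, XML
       or something bespoke. */
    int parse_line(const char *line, struct config *cfg) {
        char key[32], val[64];
        if (sscanf(line, "%31[^=]=%63s", key, val) != 2)
            return -1;                                 /* malformed line */

        if (strcmp(key, "port") == 0) {
            char *end;
            long p = strtol(val, &end, 10);
            if (*end != '\0' || p < 1 || p > 65535)
                return -1;                             /* range check */
            cfg->port = p;
        } else if (strcmp(key, "host") == 0) {
            strncpy(cfg->host, val, sizeof cfg->host - 1);
            cfg->host[sizeof cfg->host - 1] = '\0';    /* always terminated */
        } else {
            return -1;                                 /* unknown key: reject */
        }
        return 0;
    }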

The whole idea that we need one common data representation so that we don't have to keep writing parsers and serializers is bunk.

And get off my lawn...

Comment The default search on the default browser... (Score 4, Insightful) 60

on the default PC is not Google. That means that it's popular (90%? who knows) despite the fact that every one of those PC users has to go out of their way to change the search engine setting (which they usually do by downloading a different browser, Google's Chrome).

DuckDuckGo don't even run a web crawler: they outsource to Bing for the actual search results.

If any of them provided a comparably good service, then perhaps they would be more competitive.

I remember life before Google search, AltaVista and all. You can prise Google from my cold, dead browser...

Comment Re:Apple's ARM vs Intel's x86 (Score 2) 34

Can't be far away now. Server-side has finally reached a point of comparable capability: Amazon's Graviton2, Marvell's (nee Cavium's) ThunderX3 and Ampere's Altra have all been released or announced, and all hold their own against contemporary cloudy Xeon kit. Impressively, Graviton2 (at least) seems to mostly keep up against AVX512 on a single-threaded basis, according to the article linked. Proper server-grade memory interfaces make an enormous difference to real performance, compared to mobile, of course.

And we already know that Apple's cores are stronger per-core than those.

Comment That saving apps thing is ubiquitous (Score 2) 113

Both Chrome and Firefox can do that, and both support the "offline app" thing for sites that support it: it depends on how the site is coded, not on the browser (beyond the necessary browser support, but that's standardized). And of course opening the saved "web app" opens the browser. That's how it works.

So far you're not selling it to me.
