Comment Re:Rust (Score 1) 407

* memory management is explicit [merriam-webster.com] -- what does this mean?

Quantifying the Performance of Garbage Collection vs. Explicit Memory Management
Automatic vs. Explicit Memory Management: Settling the debate

* deterministic [merriam-webster.com] -- what does this mean?

I thought it was self evident. Here is a discussion of the matter.

* endemic [merriam-webster.com] use of a garbage collector... -- what does this mean?

Pervasive would be a better word: languages that make garbage-collected allocations for most or all things. For example, in Java, aside from primitives, all allocations conceptually occur on a garbage-collected heap.

reference-counted heap objects

Reference counting: counting the number of references to an object.
Heap: an arena of memory maintained by a memory allocator. Also, CPUs typically have no knowledge of how software manages heaps. You may be thinking of virtual memory.
Objects: object in the generic sense of some amount of memory managed on a heap. These lecture notes show the same usage. The editors of this page also use the word 'object' in exactly the same manner when discussing pointers. It's not that hard to follow.

Putting it together we have objects on a heap for which reference counts are maintained; reference-counted heap objects.
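
To make it concrete, here is a minimal sketch using present-day Rust syntax, where std::rc::Rc is the standard library's reference-counted heap pointer (the sigil syntax discussed at the time has since been replaced); treat it as illustrative only:

use std::rc::Rc;

fn main() {
    // One heap allocation whose reference count starts at 1.
    let a = Rc::new(vec![1, 2, 3]);

    // Cloning the Rc bumps the count; the vector itself is not copied.
    let b = Rc::clone(&a);
    println!("refcount = {}", Rc::strong_count(&a)); // 2

    drop(b); // count falls back to 1
    println!("refcount = {}", Rc::strong_count(&a)); // 1
} // count reaches 0 here and the vector is freed deterministically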

"exchange" heap -- what does this mean?
* "local" heap -- what does this mean?

The link I provided to Patrick Walton's blog would get you there. Also, there is documentation. Sorry if discussing a new programming language involves terms you haven't heard; computing can be like that sometimes.

(note: there is only one "heap" on most CPU architectures, so now we have added abstraction)

Now you are definitely confusing heaps and virtual memory. There are usually many, possibly thousands, of heaps on a system at any given time, with many distinct implementations of which the CPU is entirely ignorant. Memory allocators and virtual memory are different things.

* via an "owned" pointer -- what does this mean?

Similar to a C++ auto_ptr or unique_ptr. Again, the link I provided would get you there.
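
As an illustration, a sketch of owned-pointer semantics in present-day Rust, where Box<T> plays the role described here (the 2013-era owned pointer used different syntax); single ownership and a move are enforced at compile time:

fn main() {
    // Box<T> is an owned heap pointer: exactly one owner at a time.
    let a = Box::new([0u8; 64]);

    // Ownership moves to `b`; using `a` after this line is a compile error.
    let b = a;

    println!("first byte = {}", b[0]);
} // `b` goes out of scope here and the allocation is freed, no GC involved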

* wild pointers -- what does this mean?

Dangling pointer and wild pointer are synonymous.

Use of the exchange heap is exceptional and explicit yet immediately available when necessary -- what does this mean?

I provided a link directly to a discussion of this.

Memory "management" is reduced to efficient stack pointer manipulation -- uhh, what? the language sits around modifying content at %esp and %ebp along with some offsets? sounds far from efficient)

Incrementing and decrementing the stack pointer register is very efficient. Offsets are computed at compile time, and the instructions typically require one CPU cycle and no memory access, given a naive model of a CPU. These techniques are ancient and ubiquitous. Sorry you weren't familiar with them.

or simple, deterministic destruction -- what does this mean?

Others seem to have no difficulty with these terms. In particular, they are not compelled to link merriam-webster at each use, for some reason.

(note: 2nd use of word "deterministic")

Reusing words is an important feature of language.

Compile time checks preclude bad pointers and simple leaks so common with traditional systems languages -- what does this mean and how does this work, considering that the value stored at a pointer (or what it points to) can be manipulated at run-time, so how would the language "deterministically know" (see what I did there?) what's "bad" vs. "good"?

"bad", "wild" or "dangling" pointers are memory safely faults or violations. It is an feature inherent to Rust that they can't exist. Feel free to learn about it.

* ... that is productive, concise

Holy shit! An opinion on Slashdot? Say it isn't so.

Comment Re:Article itself is a waste of memory (Score 4, Insightful) 407

The market decided long ago that fewer programmer hours was better than users waiting a few seconds everyday for their device to GC.

No, actually, that's not what happened. As the summary and the story itself (both of which apparently went unread) point out, one of the most successful systems to emerge in the market recently, iOS, is not a GC environment.

Over here you may learn about iOS memory management. Without getting too far into that wall of text one discovers the following:

If you plan on writing code for iOS, you must use explicit memory management (the subject of this guide).

OK, so your claim that GC is the only viable solution for contemporary application development is demonstrably false. Let's look at some other assertions:

programmers are inherently bad at memory management. Memory will leak [if programmers must manage it].

First, the vast number of iOS applications that don't leak shows that a non-GC system doesn't necessarily have to leak, at least not badly enough to compromise the viability of the platform, which is the only meaningful criterion I can think of when it comes to the market.

Second, why assume programmers are inherently bad at a thing when that thing has traditionally been exposed via terrible, error-prone, demonstrably awful mechanisms? It seems to me that among widely used tools we leaped from 'systems' languages with truly heinous MM primitives (C/C++) directly into pervasive GC systems. Aside from Objective-C with ARC, there just aren't enough good non-GC systems to make broad generalizations. Thus, you may be right about programmers, but you can't prove it, and I doubt it.

Finally, what proof is there that pervasive GC is better at not leaking than a good explicit MM system? Anyone with an Android system and a bunch of apps will quickly discover that pervasive GC does not eliminate leaks.

[some phone] comes with a whopping 2GB of RAM

Google Glass has 682 MB of RAM. There is always a new platform into which we must fit our software, and the new platform is usually resource constrained, so there will never be a day when questioning the cost of GCs is wrong. Maybe the wearable you eventually put on will have 8 GB of RAM. The computers you swallow or implant or sprinkle around the lawn probably won't. The fact that the next generation of phones can piss away RAM on greedy GCs just isn't particularly informative.

Comment Rust (Score 2) 407

Memory management is an issue that has me excited about Rust. Rust memory management is explicit, easy to use, deterministic, efficient and safe. The language designers understand that garbage collection is costly and that endemic use of a garbage collector limits applicability.

Although Rust does have reference counted heap objects on the so-called "exchange" heap, memory is normally allocated on the stack or on a "local" heap (via an "owned" pointer) that has "destructor-based memory management," much like C++ objects but without the leaks and wild pointers.

The result is that the vast majority of allocated memory is not managed by the garbage collector. Use of the exchange heap is exceptional and explicit, yet immediately available when necessary. Otherwise, memory "management" is reduced to efficient stack pointer manipulation or simple, deterministic destruction. Compile-time checks preclude the bad pointers and simple leaks so common with traditional systems languages.
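
Here is a hedged sketch of what that destructor-based, deterministic cleanup looks like, written in present-day Rust using the Drop trait; it illustrates the idea rather than the language exactly as it existed when this was written:

struct Resource(&'static str);

impl Drop for Resource {
    // The destructor runs at a point known at compile time: end of scope.
    fn drop(&mut self) {
        println!("releasing {}", self.0);
    }
}

fn main() {
    let _handle = Resource("file handle");       // plain stack value
    {
        let _buf = Box::new(Resource("buffer")); // owned heap allocation
        println!("working...");
    } // "releasing buffer" prints here, deterministically, with no GC pause

    println!("done");
} // "releasing file handle" prints here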

There is a series of detailed blog posts about Rust memory management here.

Rust was inspired by the need for an efficient, AOT compiled systems programming language that is productive, concise and at least as safe as contemporary "managed" languages. Its memory management scheme goes directly to the point of this story.

Comment Abusive admins (Score 5, Interesting) 161

Make a legitimate edit on a controversial article that fails to indulge the bias of an admin and you'll learn all about the ways admins have to ostracize non-admin contributors. Are you aware of this, and if so, what has been done recently, or what is planned, to moderate abuse by admins? How frequently are admin privileges revoked for abuse? I hope this is frequent, because I know for a fact the abuse is frequent.

Comment Re:HTML5 is now officially been Embraced and Exten (Score 1) 337

you will not be able to watch content without the vendor CDM.

Didn't claim you would be able to. I just pointed out that it isn't Microsoft-only. The point of the original post was that the HTML5 extensions were "locked into windows," which they are clearly not. They may be locked into all kinds of other crap, but they're not Windows-only.

And stop foaming at the mouth.

Comment Re:HTML5 is now officially been Embraced and Exten (Score 4, Informative) 337

Sounds like this is locked into windows via the Media Foundation APIs

There may be lock in, but it's not exclusive to Microsoft:

Media Source Extensions (MSE): This specification extends HTMLMediaElement to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.

Encrypted Media Extensions (EME): This proposal extends HTMLMediaElement, providing APIs to control playback of protected content.

Web Cryptography API (WebCrypto): This specification describes a JavaScript API for performing basic cryptographic operations in web applications, such as hashing, signature generation and verification, and encryption and decryption.

They're all W3C standards-track specifications. The first two have editors from the same three corporations: Google, Microsoft and Netflix. Google, in particular, can't tolerate being unable to play Netflix (10% of the US population subscribes to it) on its platforms (Android and Chrome OS). It already works on both, and you can take it for granted that Google expects to achieve parity with these specifications.

The last specification is not specific to streaming; it's a general purpose Javascript API to perform common cryptographic operations.

Comment Re:Sales Pitch (Score 1) 339

the TSX compatibility timeline will take roughly that long

From the SiSoftware link you provided:

Hardware Lock Elision (HLE) is a legacy compatible instruction set extension, i.e. transparent to CPUs that do not support TSX. The very same code can execute on TSX-capable CPUs - and benefit - but also work on legacy CPUs without performance penalty.

Thus, HLE at least can be adopted immediately by operating systems, compilers and runtimes. That actually started over a year ago for GCC, and Intel's compiler uses TSX as well. RTM requires a feature test for compatible use, but it can still be utilized, particularly in runtimes (JVM, CLR, V8, etc.).

So seven years is too pessimistic. Haswell users with recent compilers are already using TSX.

Great benchmark link, BTW.

Comment Re:Sales Pitch (Score 4, Informative) 339

I'd imagine nobody codes for this. [TSX]

That is going to be an important feature when programmers eventually leverage it. Hardware-assisted optimistic locking can make concurrency easier, safer and more efficient, as the CPU takes care of coherency problems usually left to the programmer and CAS instructions. Imagine being able to give each of thousands or millions of actors in a simulation their own independent execution context (instruction pointer, stack, etc.), all safely sharing state and interacting with each other using simple, bug-free logic, as opposed to explicit and error-prone locking and synchronization. This has been done with software transactional memory, but it frequently fails to scale due to lock contention. Hardware-based TM can prevent that contention by avoiding lock writes.
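
For contrast, here is a hedged sketch, using only Rust's standard atomics, of the explicit CAS retry loop programmers hand-roll today; it is not TSX itself, but it shows the kind of single-location optimistic update that hardware transactional memory would generalize across many reads and writes at once:

use std::sync::atomic::{AtomicU64, Ordering};

static COUNTER: AtomicU64 = AtomicU64::new(0);

fn optimistic_increment() -> u64 {
    loop {
        let seen = COUNTER.load(Ordering::Relaxed); // optimistic read
        let want = seen + 1;                        // compute on a private copy
        // Publish only if nobody raced us in the meantime; otherwise retry.
        match COUNTER.compare_exchange(seen, want, Ordering::AcqRel, Ordering::Relaxed) {
            Ok(_) => return want,
            Err(_) => continue,
        }
    }
}

fn main() {
    println!("counter = {}", optimistic_increment());
}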

It is extremely cool that Intel is implementing this on x86.

Comment Re:Welded containment vessel? (Score 1) 123

That Bloomberg story is about the reactor pressure vessel, or RPV, the part that contains the reactor core. These authors write poorly and got it wrong by calling it the "containment vessel."

This Slashdot story is about the AP-1000 containment vessel, not the RPV. The vessel is 36 meters wide and 65 meters tall. Nothing on Earth can make a single piece of forged steel that large.

The RPVs specified for the AP-1000 are unusual. RPVs are traditionally welded.

Reactor pressure vessels, which contain the nuclear fuel in nuclear power plants, are made of thick steel plates that are welded together.

RPVs for other common reactor designs such as CANDU or VVER are welded assemblies. Often forged steel rings are stacked and welded. Some RPVs use large forged plates and are axially welded.

Note that although the bottom of the AP-1000 RPV is a single piece, it still has a separate head; the top of the RPV is gasketed and bolted to the vessel like every other PWR or BWR. It has to be, in order to (re)fuel the reactor.

Comment Re:Why do we care about diff distro releases? (Score 4, Informative) 185

Sure Linux Kernels, but beyond that, who cares?

I do. I have been looking forward to Mint 15 for a while and so have a lot of others. I appreciate that it was posted on Slashdot and I hope others consider trying Mint as a result. Mint deserves the attention because Mint is an antidote to terrible Linux desktop environments.

Comment Re:That's what is so funny to me (Score 3, Interesting) 238

There's no magic ju ju in ARM designs.

The magic ju ju is the ARM business model. There is one trump card ARM holds that keeps Intel out of many portable devices: chip makers can build custom SoCs in-house with whatever special circuits they want on the same die. Intel doesn't do that and doesn't want to; it would mean licensing masks to other manufacturers the way ARM does. For example, the Apple A5, manufactured by Samsung, includes third-party circuits like the Audience EarSmart noise-cancellation processor, among others. It is presently not feasible to imagine Intel handing over masks so that Apple could contract with some foundry to manufacture custom x86 SoCs. This excludes Intel from many portable use cases.

That feature of the ARM business model might be very useful to large scale computing. One can imagine integrating a custom high-performance crossbar with an ARM core. Cores on separate dies could then communicate with the lowest possible latency. Using a general purpose ARM core to marshal data to and from high-performance SIMD circuits on the same die is another obvious possibility. A custom cryptography circuit might be hosted the same way.

Contemporary supercomputers are great aggregations of near-commodity components. However, supercomputing has a long history of custom circuit design, and if the need arises for a highly specialized circuit, a designer may decide that integrating with an ARM core to handle the less exotic legwork that is always necessary is a good choice.

Comment Re:One cause (Score 4, Insightful) 419

One cause for the lack of demand of electrical engineers is that the hardware design and manufacturing is located to cheaper countries.

Can't be. Those are the jobs we're keeping here in the US because we all have $75k degrees. The low skill jerbs go to Asia and we keep all the high paying jobs because the Chinese are magically incapable of EE.

Right?

Remember: Education. It's the future.
