Comment: Re:And still (Score 1) 182

by TheRaven64 (#49157943) Attached to: One Astronomer's Quest To Reinstate Pluto As a Planet
If Pluto is a planet, then so is Eris (which is larger), and Earth's moon (around 5 times larger than Pluto) is possibly a binary planet. Ganymede, the largest moon in the solar system, is under 3% the mass of Earth and is about ten times bigger than Pluto. There are quite a lot of moons bigger than Pluto, so would you want to classify them all as planets?

Comment: Re:White balance and contrast in camera. (Score 1) 400

by TheRaven64 (#49154111) Attached to: Is That Dress White and Gold Or Blue and Black?
Zoom right in on the bits that you think are white, so that they fill your entire monitor. They're obviously blue. For a lot of us, that's the colour that we see when we look at it in context as well. I can see how you'd interpret it as being white by overcompensating for the colour in the bottom right, but that doesn't stop you from being wrong. The gold bits are gold when you zoom in (mostly, some are black), but a shiny black often looks yellow-gold in overexposed photos.

Comment: Re: Hard to believe (Score 1) 166

by TheRaven64 (#49148121) Attached to: Microsoft's Goals For Their New Web Rendering Engine

Who says the OS should provide nothing useful and let app makers make their money on it?

If you set up a straw man, then it's very easy to kill it. The issue is not an OS providing something, it's that Microsoft, which had a near-monopoly in the desktop space, used the money from selling the OS to fund development in another market (browsers) and then bundled their version, undercutting the competition with cross subsidies. There was a thriving browser market before IE was introduced, but it's hard to compete when most of your customers are forced to pay to fund the development of your competitor.

Comment: Re:"Free" exercise (Score 1) 295

by TheRaven64 (#49146301) Attached to: I ride a bike ...
150 km a day on a bike? How long does that take? According to my phone GPS, which isn't spectacularly accurate, I do about 18 km/hour (though I'm far from the fastest cyclist), so even if you're twice as fast as me that sounds like a bit over 4 hours on a bike. That's a lot of time to spend commuting each day; it adds over 50% to the normal working day!

Comment: Re:Kinda stupid since (Score 1) 519

by TheRaven64 (#49145303) Attached to: Machine Intelligence and Religion

Generally Fundamental Evangelical Christians teach humility and service to others and subscribe to the view that others are more important than me. That's exactly opposite to what you claim "ALL" religion is.

Really? Because that's exactly the set of values that I'd choose to indoctrinate my serfs with.

"You know that those who are recognized as rulers of the Gentiles lord it over them; and their great men exercise authority over them. But it is not this way among you, but whoever wishes to become great among you shall be your servant; "For even the Son of Man did not come to be served, but to serve, and to give His life a ransom for many."

Or, to summarise: 'Hey oppressed people, don't think about following a leader from amongst yourselves; that kind of thing always ends badly'.

Comment: Re:file transfer (Score 1) 456

by TheRaven64 (#49145287) Attached to: Ask Slashdot: Old PC File Transfer Problem
Laplink also had a neat mode where it would install itself on the remote machine for you (useful for me, because it came on 3.5" floppies and one machine only had a 5.25" drive). The mechanism was quite interesting: you told the remote machine to use COM1 as its console device (something I hadn't been aware DOS could do). Then it would use the type command (similar to cat on UNIX systems) to write a stream of data from the standard input to a file, and finally run that file.

This obviously raises the question of why, when you have a serial console with working flow control, you need Laplink at all. If you have a null modem cable and a lot of patience, you can always extract files by writing them to standard output and reading them off with a serial program - just make sure that you've correctly configured the UART first. If you're a bit paranoid, running something like par2 first (I think there are DOS binaries, and they're pretty small, though they may take a while on a 386) will let you recover from small data errors.

Copying 160MB over a serial connection won't be fast, but I'm assuming that this isn't urgent if it's been sitting on a 160MB disk for years without backups...
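par2 adds parity blocks so errors can actually be repaired; the detection half of the idea can be sketched with per-block checksums. This is a minimal illustrative Python sketch (CRC32 over fixed-size blocks), not the par2 format:

```python
import zlib

BLOCK = 4096  # bytes per block; small enough to resend over a slow link

def make_manifest(data: bytes) -> list:
    """CRC32 of each fixed-size block, computed on the sending side."""
    return [zlib.crc32(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def find_bad_blocks(received: bytes, manifest: list) -> list:
    """Indices of blocks whose checksum no longer matches the manifest."""
    got = make_manifest(received)
    return [i for i, (a, b) in enumerate(zip(manifest, got)) if a != b]

# Simulated transfer: flip one byte mid-stream and locate the damage.
original = bytes(range(256)) * 100          # ~25KB of test data
manifest = make_manifest(original)
corrupted = bytearray(original)
corrupted[5000] ^= 0xFF                     # byte 5000 lands in block 1
print(find_bad_blocks(bytes(corrupted), manifest))  # [1]
```

Only the damaged blocks need resending over the slow link, rather than the whole 160MB.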

Comment: Re: Hard to believe (Score 2) 166

by TheRaven64 (#49145269) Attached to: Microsoft's Goals For Their New Web Rendering Engine

IE itself can EASILY be removed from a system. Delete the EXE, done. It's been that way ALWAYS. Even during the court battles.

While this is technically true, it's also misleading. You could delete iexplore.exe, but don't expect a working system afterwards. Lots of other parts of Windows (and Office) invoked iexplore.exe directly, rather than providing a web view with MSHTML.dll or invoking the default browser via the URL opening APIs.

Comment: Re: Hard to believe (Score 1) 166

by TheRaven64 (#49145259) Attached to: Microsoft's Goals For Their New Web Rendering Engine
What is this, 1998? IE was never part of the kernel. The complaints were:
  • MSHTML.dll (around which IE was a very thin wrapper) was installed by default and used by loads of things.
  • Lots of things in Windows that should have used MSHTML.dll to embed a web view, or just invoked the default browser, used IE so that you couldn't uninstall IE without breaking Windows.
  • MS bundled IE with Windows and used their near monopoly in the desktop OS market to gain a dominant position in the browser market and push Netscape (and a few other browser makers) out of business.

It was never part of the kernel and never ran with system-level privileges.

Comment: Re:I got a goal for you (Score 1) 166

by TheRaven64 (#49145243) Attached to: Microsoft's Goals For Their New Web Rendering Engine
I've not been paying much attention to Windows for a few years, but does IE still have the same poor security reputation? I was under the impression that it did the multiprocess thing and sandboxed each instance, putting it in the same ballpark as Chrome and Safari and ahead of Firefox (which is finally going to start adding sandboxing support now). Did they manage to screw up the sandboxing and make something that's still trivially exploitable, or are you just repeating ten-year-old information?

Comment: Re:But... (Score 1) 254

by TheRaven64 (#49125625) Attached to: The Case Against E-readers -- Why Digital Natives Prefer Reading On Paper

I saw a recent review of a smartphone that had two screens, one LCD and one eInk. Modern eInk displays can refresh fast enough for interactive use and don't drain the battery when idle. The screen that I'd love to see is eInk with a transparent OLED on top, so that text can be rendered on the eInk display and graphics / video overlaid on the OLED. The biggest problem with eInk is that the PPI is not yet high enough for colour. You get 1/3 (or 1/4 if you want a dedicated black subpixel) of the resolution when you go colour, so you'd need at least 600PPI to make it plausible.

The other problem is that LCDs have ramped up their resolution. My first eBook reader had a 166PPI eInk display. Now LCDs are over 300PPI but the Kindle Paperwhite is only 212PPI, so text looks crisper on the LCD than on the eInk display, meaning that you're trading one set of annoyances for another rather than eInk being obviously superior. With real paper you get at least 300DPI (typically a lot more) and no backlight.
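The subpixel arithmetic behind the 600PPI figure works out like this (the panel numbers are the ones from the comment, used illustratively):

```python
# Colour filters split each colour pixel into subpixels along one axis,
# so linear resolution divides by the subpixel count.
mono_ppi = 600                   # hypothetical high-resolution mono eInk panel
colour_ppi_rgb = mono_ppi / 3    # RGB stripe: 3 subpixels per colour pixel
colour_ppi_rgbk = mono_ppi / 4   # RGB plus a dedicated black subpixel

# A 600PPI mono panel yields roughly Paperwhite-class colour resolution.
print(colour_ppi_rgb, colour_ppi_rgbk)  # 200.0 150.0
```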

Comment: Re:amazing (Score 1) 279

by TheRaven64 (#49125595) Attached to: Intel Moving Forward With 10nm, Will Switch Away From Silicon For 7nm
The problem here is latency. You're adding (at least) one cycle of latency for each hop. For neural network simulation, you need all of the neurones to fire in one cycle and then consume the result in the next. If you have a small network of 100x100 fully connected neurones, then the worst case (assuming wide enough network paths) with a rectangular arrangement is 198 cycles to get from corner to corner. That means the neural network runs at around 1/200th the speed of the underlying substrate (i.e. your 200MHz FPGA can run a 1MHz neural network).

Your neurones also become very complex, as they all need to be network nodes with store and forward, and they are going to have to handle multiple inputs every cycle (consider a node in the middle: in the first cycle it can be signalled by 8 others, in the next by 12, and so on). The exact number depends on how you wire the network, but a flexible implementation needs to allow for this.
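The back-of-the-envelope arithmetic above can be written out explicitly (the extra two cycles for fire and consume are my rounding to get the total to ~200, not from the comment):

```python
# Worst-case hop count for a rectangular mesh: Manhattan distance
# between opposite corners, one cycle of latency per hop.
w, h = 100, 100
worst_hops = (w - 1) + (h - 1)            # 198 cycles corner to corner
cycles_per_step = worst_hops + 2          # + fire and consume cycles, ~200 total

fpga_mhz = 200
effective_mhz = fpga_mhz / cycles_per_step  # the network steps ~200x slower
print(worst_hops, effective_mhz)            # 198 1.0
```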

Comment: Re:Good grief... (Score 1) 673

by TheRaven64 (#49125575) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge

What's the justification for compilation unit boundary? It seems like you could expose the layout of the struct (and therefore any compiler shenanigans) through other means within a compilation unit. offsetof comes to mind. :-)

That's the granularity at which you can do escape analysis accurately. One thing my student explored was using different representations for the internal and public versions of the structure. Unless the pointer is marked volatile, or atomic operations occur that establish happens-before relationships affecting the pointer (and you have to assume that functions whose bodies you can't see contain such operations), C allows you to do a deep copy, work on the copy, and then copy the result back. He tried this to transform between column-major and row-major order for some image processing workloads. He got a speedup on the computation step, but the cost of the copying outweighed it (a programmable virtualised DMA controller might change this).
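The copy/transform/work/copy-back idea can be sketched like this; a pure-Python illustration of the shape of the transform, not the C-level optimisation itself (in Python the layouts are only metaphorical, but the structure of the rewrite is the same):

```python
def transpose(m):
    """Deep copy into the transposed (column-major) layout."""
    return [list(col) for col in zip(*m)]

def process_columns(image):
    # Work that walks columns is cache-hostile in row-major storage, so:
    cols = transpose(image)                             # copy to friendly layout
    processed = [[x * 2 for x in col] for col in cols]  # hot loop on the copy
    return transpose(processed)                         # copy the result back

image = [[1, 2, 3],
         [4, 5, 6]]
print(process_columns(image))  # [[2, 4, 6], [8, 10, 12]]
```

The result is identical to working in place; whether it's a win depends on whether the hot loop's saving exceeds the two copies, which is exactly the trade-off described above.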

I suppose you could do that in C++ with template specialization. In fact, doesn't that happen today in C++11 and later, with movable types vs. copyable types in certain containers? Otherwise you couldn't have vector<unique_ptr<T>>. Granted, that specialization is based on a very specific trait, and without it the particular combination wouldn't even work.

The problem with C++ is that these decisions are made early. The fields of a collection are all visible (so that you can allocate it on the stack) and the algorithms are as well (so that you can inline them). These have nice properties for micro optimisation, but they mean that you miss macro optimisation opportunities.

To give a simple example, libstdc++ and libc++ use very different representations for std::string. The implementation in libstdc++ uses reference counting and lazy copying for the data. This made a lot of sense when most code was single threaded and caches were very small but now is far from optimal. The libc++ implementation (and possibly the new libstdc++ one - they're breaking the ABI at the moment) uses the short-string optimisation, where small strings are embedded in the object (so fit in a single cache line) and doesn't bother with the CoW trick (which costs cache coherency bus traffic and doesn't buy much saving anymore, especially now people use std::move or std::shared_ptr for the places where the optimisation would matter).

In Objective-C (and other late-bound languages) this optimisation can be done at run time. For example, if you use NSRegularExpression with GNUstep, it is implemented with ICU. ICU has a UText object that implements an abstract text interface and has a callback to fill a buffer with a run of characters. We have a custom NSString subclass and a custom UText callback which do the bridging. The abstract NSString class has a method for getting a range of characters. The default implementation gets them one at a time, but most subclasses can get a whole run at once. The version that wraps UText does this by invoking the callback to fill the UText buffer and then copying. The version that wraps in the other direction just uses this method to fill the UText buffer. This ends up being a lot more efficient than if we had to copy between two entirely different implementations of a string.
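The abstract-class pattern described above - a per-character default with a bulk override where a subclass can supply a whole run cheaply - looks roughly like this in a Python sketch (class names are invented for illustration, not the NSString API):

```python
class AbstractString:
    """Late-bound string: subclasses override whatever is cheap for them."""
    def char_at(self, i):
        raise NotImplementedError

    def get_range(self, start, end):
        # Default implementation: one character at a time. Subclasses that
        # can produce a whole run cheaply override this instead.
        return ''.join(self.char_at(i) for i in range(start, end))

class BufferString(AbstractString):
    """Backed by a contiguous buffer, so a range is a single slice."""
    def __init__(self, data):
        self.data = data

    def char_at(self, i):
        return self.data[i]

    def get_range(self, start, end):
        return self.data[start:end]   # bulk override: no per-character calls

print(BufferString("hello world").get_range(0, 5))  # hello
```

Callers only ever use `get_range`, so each concrete representation pays only its own natural cost, which is what makes the UText bridging cheap.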

Similarly, objects in a typical JavaScript implementation have a number of different representations (something like a struct for properties that are on a lot of objects, something like an array for properties indexed by numbers, something like a linked list for rare properties) and will change between these representations dynamically over the lifetime of an object. This is something that, of course, you can do in C/C++, but the language doesn't provide any support for making it easy.
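A toy version of that split storage can be sketched as follows; real engines also migrate objects between representations (and use hidden classes for common shapes), which this deliberately omits:

```python
class HybridObject:
    """Toy JS-style object: dense integer-indexed properties live in a
    list, named/rare properties in a dict."""
    def __init__(self):
        self.elements = []     # array part: dense integer-indexed properties
        self.properties = {}   # dictionary part: named or sparse properties

    def set(self, key, value):
        # Keep integer keys dense; anything else falls back to the dict.
        if isinstance(key, int) and 0 <= key <= len(self.elements):
            if key == len(self.elements):
                self.elements.append(value)
            else:
                self.elements[key] = value
        else:
            self.properties[key] = value

    def get(self, key):
        if isinstance(key, int) and 0 <= key < len(self.elements):
            return self.elements[key]
        return self.properties[key]

o = HybridObject()
o.set(0, "a")
o.set(1, "b")
o.set("name", "demo")
print(o.get(1), o.get("name"))  # b demo
```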

Comment: Re:Good grief... (Score 1) 673

by TheRaven64 (#49125531) Attached to: Bill Nye Disses "Regular" Software Writers' Science Knowledge
Depends on whether they care about performance. To give a concrete example, look at AlphabetSoup, a project that started in Sun Labs (now Oracle Labs) to develop high-performance interpreters for late-bound dynamic languages on the JVM. A lot of the specialisation that it does has to do with efficiently using the branch predictor, but in their case it's more complicated because they also have to understand how the underlying JVM translates their constructs.

In general though, there are some constructs that it is easy for a JVM to map efficiently to modern hardware and some that are hard. For example, pointer chasing in data is inefficient in any language and there's little that the JVM can do about it (if you're lucky, it might be able to insert prefetching hints after a lot of profiling). Cache coherency can still cause false sharing, so you want to make sure that fields of your classes that are accessed in different threads are far apart and ones accessed together want to be close - a JVM will sometimes do this for you (I had a student work on this, but I don't know if any commercial JVM does it).
