
Comment Re:work to live (Score 1) 55

I live in Norway where we definitely work to live, not the opposite, but I did spend one year in the US back in 1991-92.

During that year I worked the fewest hours of any engineer in my department, averaging 45 hours/week, but I still got a couple of bonuses which they really didn't have to give me, since they knew I was going back to Oslo after 12 months.

Here in Norway we have 5 weeks of vacation time every year, and employers get in big trouble if they have any employees who don't take all of it. (You can carry over a maximum of two weeks from one year to the next.)

OTOH my wife would gladly confirm that I spend a lot of hours in front of my PC every day, but I consider that to be some of my hobbies, not work. :-)

I.e. stuff like NTP Hackers, Mill Computing, Lidar-based mapping work etc.


Comment What about the interconnect? (Score 1) 150

In pretty much every HPC cluster I've seen or been personally involved with (mostly oil/seismic processing or crash simulations), the type of CPU is only one of the cost drivers!

Typically you end up spending about as much on fast interconnects as you do on motherboards/CPUs/RAM etc. The main exception to this rule is when you have an embarrassingly parallel workload, with a small memory footprint and no need for cross-system communication, e.g. a Monte Carlo simulation or password cracking.

For oil we used the largest single-image NUMA/SMP machine we could get at the time: it did the initial gridding of the problem space, then a standard cluster of 1K dual-CPU motherboards (i.e. 2K CPUs) took over and did the main part of the actual processing.

There are exceptions though, like if you are doing Linear Programming type optimization which can be really hard to parallelize, or if you are using very expensive SW:

When you pay more for the SW than for the HW it is running on, then it makes sense to use bleeding-edge (gamer type) cpus.


Comment Rather a very poor job. :-( (Score 1) 100

I have made many, many panoramas, but none in the multi-gigapixel range, so I realize that they had a very tough stitching job, but even so: This was a pretty bad job!

All the central snow fields look like the result of randomly placed images: With a motorized pano head they should have been able to locate each image pretty accurately even before they started the SIFT runs to look for matching key points (which can be hard in a blue sky or on white snow).

More problematic is the fact that they must have done the actual stitching pretty much without proper blending from one image to the next:

Within the first minute of zooming around in the image I stumbled across a perfectly straight line with totally different exposure/lighting on each side, giving an almost black/white boundary that screams "This isn't natural!".

The proper way to blend such images is to use a multi-spectral approach: Low frequency information (like average light level) is blended across the entire overlap, while higher frequencies use narrower and narrower bands. Doing it this way means that even if one image had a clear blue sky and the next was taken when the sun was hidden by a cloud, the overlap is nearly perfect.


Comment Nash just got the Abel Prize! (Score 5, Informative) 176

Just 5 days ago, John F. Nash and Louis Nirenberg received the Abel Prize at a ceremony in Oslo:

With a diploma handed over by King Harald of Norway and a NOK 6M prize, this is the closest thing math has to a Nobel Prize.

Unlike the Fields Medal there is no age limit, so just like the Nobel prizes it tends to be awarded later in life, for work that has proven itself to be really outstanding.


Comment I mostly agree with you... (Score 1) 40

The first big problem with integers is that signed overflow is undefined behavior in C, so just like you I try to use unsigned as much as possible:

Any underflow turns into a big overflow, so it can be tested for at the same time as the overflow test, and the semantics of power-of-two-sized wraparound are guaranteed across all platforms and implementations.

OTOH I don't agree that proper overflow handling would mostly be a new source of bugs; on the new Mill CPU architecture we have a fully orthogonal set of all basic operations:

When adding two numbers (belt values) you can specify signed or unsigned, and over/underflow to be handled as saturating, wraparound or trapping, as well as automatically widening.

Look at ADDSW as an example of a Signed ADD that will widen if needed.

Since the Mill carries metadata alongside each belt slot it does not need separate byte/short/word/dword ADD instructions: The size of the operations is defined by the belt slot specified and not in the instruction encoding, so the machine code is polymorphic in data item size.

I.e. you can start with 8-bit values and an 8-bit accumulator, when the sum becomes too large then it is automatically widened to 16 bits or more. This works all the way to 128 bits for all scalar operations.


Comment Bad applications and programming languages! (Score 1) 486

What they actually compared wasn't the speed of the disks, but the speed of the language runtime and OS file IO buffering routines!

It wasn't really that surprising that concatenating Java or Python objects can be slower than letting the low-level runtime do the same task.

If they had wanted to test the disk IO speed then they would have had to add at least some fflush() (and really fsync()) calls.

It is trivial, in any language, to make your code faster than the actual disk transfer speed, but a lot harder to make it faster than a set of small block moves within (cached) RAM.


Comment Language obviously influences thinking! (Score 1) 274

I'm Norwegian, which meant that I had to learn the two main Norwegian languages (bokmål and nynorsk, which used to have about 30% overlap; it is larger now) and English. Those are the ones I'm currently fluent in. I also had four years of German and two years of French, plus a single year of Old Norse (i.e. Icelandic).

The interesting part here is that the list above was the absolute minimum I could get away with, since I knew very early that I wanted to get a technical degree (MSEE from NTNU in Trondheim).

Fluency in any language requires thinking in that language, this is so obvious that only mono-lingual people could possibly doubt it!

Thinking about stuff you have no way to express in language is extremely hard. :-)


Comment Re:Not GoDaddy. (Score 5, Interesting) 295

I've had a single .org domain registered with and hosted by DreamHost for 7-8 years now, absolutely no problems.

I also have 6-8 other domains here in Norway (.no) which are all registered locally but still hosted on the same DreamHost account.

Dirt cheap, very stable and OK performance wise.

I have a tiny search program written in Perl which allows you to search for any given string within the first billion digits of pi:

Even though the database + index needs about 5 GB (so it is obviously not all cached in memory), I tend to get replies within 0.1 seconds or so:

Find 19570725

Found at 45,109,789: 061632112341128 19570725 293694235201198

Total time = 0.099406 seconds (8 suffix lookups)

I.e. my birth date is located about 45 million digits into pi. :-)


Comment You need hydro-electric pump storage! (Score 4, Informative) 437

One of the reasons Denmark can run on wind (currently 39% of their total) and solar power (500 MW total from 90,000 private installations according to Wikipedia) is that we have installed multiple DC transmission lines between Denmark and Norway, and hydro-electric power is by far the most responsive to changing load.

In the mountains on the west coast we have storage dams where water can be pumped up during periods of surplus production and let down again when Denmark, Sweden or countries further south need some extra power.

Pumped hydro is _by far_ the largest form of grid energy storage, accounting for more than 99% of the total capacity worldwide.

The total efficiency (70%-87%) is quite good, which means that this is not just a good idea but can pay for itself anywhere the difference between peak and off-peak energy prices is larger than the ~20% that is lost to pumping.


Comment Use a hybrid setup like Norway (Score 1) 760

Smaller infractions (up to 20 km/h (13 mph) over the limit) carry fixed penalties, on a scale from about $100 to nearly $1000 depending upon the actual speed and the base speed limit.

More severe stuff, like a DUI just over the 0.02% blood alcohol limit, will result in income/net-worth scaled fines.

Another drink before that drive home and you're looking at compulsory jail time, plus loss of your driver's license for two years and the need to take the driving exam all over again afterwards.

This means that the police don't need to check tax returns for speeding or red-light camera tickets, only for the more serious offenses.

All the fines go to the central government of course, so there is no premium for the police on setting speed traps to generate revenue.


Comment I am one of the ntpd maintainers (Score 5, Interesting) 287

I've been on the "NTP Hackers" mailing list for ~15 years now. My last major effort was to develop a server-optimized multi-threaded version of the core ntpd SW: I was hoping for wire-speed packet processing on an embedded Linux platform, but had to settle for 300-500 Mbit/s since the target kernel version did not support multi-thread targeting of incoming packets. I.e. I needed a single receive thread which would fetch the incoming packets, timestamp them and queue them up; all the other threads/cores would then grab them from there.

Back to the "why are there bugs in such a trivial protocol?" question:

By far the biggest source of effort when trying to modify or optimize the NTPD distribution is the need to support a large number of OSes, and an even larger number of OS versions, some of them more than 20 years old, even if the main targets are Unix-like or Windows.

The second problem is the need to support 30+ reference clocks, with all sorts of OS/version specific interfaces needed in order to timestamp events as accurately as possible.

The third and final major stumbling block is all the crypto stuff, which got added in order to be able to authenticate both time packets and monitoring/configuration requests, and this is where the latest major bugs have been found.

PHK (who is working on Ntimed) has spent a lot of time on NTP, including his time as a core FreeBSD hacker when he made sure that FreeBSD had the best possible timekeeping kernel. This is the reason my personal pool server has always run FreeBSD.

If your only need is to get 0.1s level time sync on a number of client only machines, then it really doesn't matter how you implement the NTP protocol, except that you should really try to measure and adjust the local clock frequency so as to track the reference time!

The default Windows time code implements Simple NTP (SNTP) which uses the NTP packet format but doesn't try to implement the proper control loop to steer the local clock, instead it just yanks the OS clock resulting in a sawtooth-like pattern of clock offsets.


Comment Two easy options (Score 2) 466

I have a Vantec USB2 universal disk adapter with connectors, cables and power for both IDE and SATA, i.e. all the hard drives I've used since my last SCSI disk; this is the one I would use here. I picked mine up at Fry's many years ago, just as SATA disks had started to take over.

The alternative has also been mentioned: using a LapLink-style cable. These packages usually came with self-loading SW where you just had to enter a single MODE command on the console of the old machine, then the SW would copy over an ASCII-type bootstrap program which would load the rest.

I wrote a program to do this (the file transfer part) in the late eighties; in 1995 or so I also wrote a generic ASCII executable generator using only the 70+ characters which the MIME mail standard specifies as transparent across all mail gateways and national encoding standards.


Comment Re:Gurus like Carmack don't need agents (Score 1) 145

Thanks for remembering, that time was a lot of fun. :-)

I'm still doing low-level programming, I've been involved with the Mill for a little more than a year now, I'm working on scalar/vector FP emulation for the smallest models we intend to produce.

Take a look at the Mill if you want to widen your mind a bit: a CPU with a belt instead of registers!


Comment Gurus like Carmack don't need agents (Score 3) 145

I've met John C a number of times, he is indeed a guru.

My longtime friend Mike Abrash is also a guru, but according to him, not in the same league as Mr Carmack.

Personally I'm a very competent programmer who has just had some small episodes of greatness: I know I'm not as bright as John, or able to work for years at a single task like Mike can, but I've still had a lot of fun over the last 35-40 years! :-)

Today I declined an offer to become CTO of a 20-year-old international SW company; I'm having a pretty good time where I am now.


Comment Duplicate the TLB code entries! (Score 2) 215

To me it looks like this trick can be defeated by a similarly simple trick:

Assuming you can run some code in kernel (or even SMM) mode, you should be able to scan through all code segments that are marked execute-only and have a data segment aliasing them, i.e. same virtual address but different physical addresses.

When you find such blocks, you just create new read-only or read-write mappings which point to the same physical addresses as the decrypted/execute-only memory.

At that point you can dump/debug to your heart's content.