
Comment Re:How do people optimise their designs? (Score 1) 213

Well, when targeting a machine with plenty of cores, memory and power, sure - it's a better tradeoff to avoid too much optimisation and go for maintainability over raw performance. I know that; I do this all the time. As for writing assembler, my assembler days are long gone. The last I did was a bit of 56k maybe 10 years ago for an audio pipeline. The closest I get these days is looking at the output of the compiler ;-)

However, when the user base is continually worried about battery drain, how do you make a sensible tradeoff between memory use and CPU time (storing vs. re-calculating a given value), or know how to arrange data in cache lines to pull it efficiently through the memory bus, reducing runtime (and hence prolonging battery life)?
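
To make the cache-line point concrete, here's a minimal sketch (the structs and field names are invented for illustration): iterating over one hot field costs far less when that field is packed contiguously than when it's interleaved with cold data, because every cache line pulled across the bus is then full of useful bytes.

    #include <vector>

    // Array-of-structs: summing 'health' drags 60 bytes of cold data
    // through the cache for every 4 useful bytes.
    struct Entity {
        float health;
        char  cold_state[60]; // position, AI state, etc. - untouched here
    };

    float sum_aos(const std::vector<Entity>& es) {
        float total = 0.0f;
        for (const Entity& e : es) total += e.health;
        return total;
    }

    // Struct-of-arrays: the hot field is contiguous, so every cache line
    // fetched is 100% 'health' values - fewer lines, less bus traffic.
    struct Entities {
        std::vector<float> health;
        std::vector<char>  cold_state; // cold data lives elsewhere
    };

    float sum_soa(const Entities& es) {
        float total = 0.0f;
        for (float h : es.health) total += h;
        return total;
    }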

These devices are power constrained, and will be no matter what anyone says. Knowing the architecture, and the future direction of the architecture, would allow devs to produce solutions that will scale and be power efficient. Maybe you'll only get a 10% power saving, but for a device which is being heavily used, this could translate into an hour or two of extra use, which is going to be a big selling point for expensive handheld devices.

Comment How do people optimise their designs? (Score 1) 213

I'm struggling to understand how Apple get away with not announcing any info about the cores, the cache sizes, memory bandwidth, etc. Surely on a mobile device with limited power, optimisation of applications is a priority. How do people manage this without any idea of the physical architecture of the machine they are developing for?

Maybe I'm just old school, but knowing what hardware you are targeting is almost the first bit of info which informs an efficient use of the resources available.
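
When the vendor won't tell you, one old-school fallback is to measure it yourself. A rough sketch (parameters invented, and real timings will be noisy thanks to prefetchers and clock scaling): walk buffers of increasing size at a cache-line stride and watch for the jump in cost per access as the working set outgrows each cache level.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Time a strided walk over 'size' bytes and return nanoseconds per
    // access. A jump between adjacent sizes suggests a cache boundary.
    double ns_per_access(std::size_t size) {
        std::vector<char> buf(size, 1);
        const std::size_t stride = 64;   // assumed cache-line size
        volatile char sink = 0;          // keep the loads alive
        const int passes = 100;

        auto t0 = std::chrono::steady_clock::now();
        for (int p = 0; p < passes; ++p)
            for (std::size_t i = 0; i < size; i += stride)
                sink += buf[i];
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        return ns / (passes * (size / stride));
    }

    int main() {
        for (std::size_t kb = 4; kb <= 16 * 1024; kb *= 2)
            std::printf("%6zu KB: %.2f ns/access\n", kb, ns_per_access(kb * 1024));
    }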

Comment Re:Michael Lewis's Vanity Fair article (Score 2) 46

I'm not sure where you heard this, or which market you think this works in, but that sounds dubious at the very least. In an order-driven market, the realisation that a trade wasn't at a good price isn't obvious until further trading moves the price away from touch, against the position you have just taken. You can't place one order off touch; the market doesn't work like that.

If this happened on a major market (say, NASDAQ) there would be a serious number of broken-trade messages, or alternatively some mechanism to re-instate an executed order at the right place in the book (and there isn't). I can tell you there aren't a serious number of broken-trade messages.

Have a look at the ITCH spec -

You can probably download a historical day of NASDAQ data; their main store is restricted to data licensees.
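
For a flavour of what's in those files, here's a minimal sketch (filename hypothetical; it assumes the usual historical-file framing of a two-byte big-endian length prefix per message) that counts broken-trade messages in a TotalView-ITCH 5.0 capture:

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::FILE* f = std::fopen("itch50_sample.bin", "rb"); // hypothetical file
        if (!f) { std::perror("fopen"); return 1; }

        unsigned char hdr[2], msg[128];
        long total = 0, broken = 0;
        while (std::fread(hdr, 1, 2, f) == 2) {
            std::uint16_t len = std::uint16_t((hdr[0] << 8) | hdr[1]); // big-endian
            if (len == 0 || len > sizeof(msg)) break;  // framing sanity check
            if (std::fread(msg, 1, len, f) != len) break;
            ++total;
            if (msg[0] == 'B') ++broken; // 'B' = broken trade in ITCH 5.0
        }
        std::fclose(f);
        std::printf("%ld broken trades out of %ld messages\n", broken, total);
    }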

Comment Re:The ultimate ugly hack? (Score 2) 264

Float-to-int conversion used to be expensive on x86 processors, due to the default rounding modes required by C and a lack of suitable built-in rounding functions. Looking back, my code contained this handy function. Ugly hack or elegant performance improvement? I'd suggest that the difference comes down to comments, unit tests, and whether people die if it's got bugs ;-)

#ifdef WIN32
        #ifdef ASSUME_ROUNDING
                // This method relies on knowing the rounding mode of the float processor
                //
                // It relies on it being set to round to nearest and offsets the value to
                // calculate a truncation
                inline int convert(float f)
                {
                        int i;
                        static const float half = 0.5f;
                        __asm
                        {
                                fld f      // f = f
                                fsub half  // f = f - 0.5
                                fistp i    // i = round(f)
                        }
                        return i;
                }
        #else
                // Shift the bits in the double around so that the bits can be read
                // directly as an int
                inline int convert(double d)
                {
                        const double D2I = 1.5*(double)(1<<26)*(double)(1<<26);

                        double temp = d - 0.499999999 + D2I;

                        return *((int*) (&temp));
                }
        #endif
#else
        // On Mac we just use the standard C conversion since it doesn't suffer
        // the performance hit as on PC
        #define convert(x) int(x)
#endif
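
One caveat that a comment or unit test should capture: with round-to-nearest-even, an exact odd integer such as 5.0f becomes 4.5 after the subtraction and rounds down to 4 - which is presumably why the double version offsets by 0.499999999 rather than an exact 0.5.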

Comment Re:Good, Fast and Cheap... Pick Any Two (Score 5, Insightful) 101

My understanding is that there is no room for decode artifacts in this - you either do it right, or it's not a proper decoder. This is a proper decoder, so it will produce output identical to the Google standard one. I believe there are test streams with MD5s for the test frames, and this decoder passes the tests.
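
A conformance check of that sort boils down to hashing each decoded frame and comparing against the reference digest. A rough sketch (the frame buffer and expected digest are hypothetical; MD5 here via OpenSSL):

    #include <openssl/md5.h>
    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Compare one decoded frame against its expected reference digest.
    bool frame_matches(const unsigned char* pixels, std::size_t len,
                       const char* expected_hex) {
        unsigned char digest[MD5_DIGEST_LENGTH];
        MD5(pixels, len, digest); // hash the raw decoded pixels

        char hex[2 * MD5_DIGEST_LENGTH + 1];
        for (int i = 0; i < MD5_DIGEST_LENGTH; ++i)
            std::sprintf(hex + 2 * i, "%02x", digest[i]);
        return std::strcmp(hex, expected_hex) == 0;
    }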

So, it's free, and it's correct, and it's fast. I think you have preconceptions which are in this case wrong ;-)

From my perspective, faster is good for low power devices, so if this helps spread decent video codecs to more devices, that's a win.

Comment Possible reasons (Score 1) 509

Maybe he doesn't like concurrent code because he's been bitten by nasty bugs enough times to shy away from it. Maybe he doesn't like your source control system because he has lost heaps of work in the past trusting it to a dodgy system. Maybe he has found code reviews a waste of time, or had bad experiences with pitched battles in a meeting room. Why don't you try asking him rather than speculating? 'Hey Bob, it looks to me like you aren't keen on code reviews - why is that?' would be a good start.

Alternatively, he's a bit of a jerk, or bad at his job, and I'll leave that to you to figure out for yourself.

Comment Re:Profit & Lies (Score 1) 730

Thanks for taking the time to try to spread some info about what has happened. It's amazing how unreasonable posters are being about this - you've already said the system failed, you have corrected the mistake, and you are trying to stop it happening again. Obviously people here have never produced software or a process with an error in it, right? ;-)

Comment Re:Futile (Score 1) 160

Not my experience. I'm continually impressed with how fast Java and C# are, and how well systems written in these languages perform in realtime apps. Sure, you get outliers, but then you get outliers from the OS, core swaps, networking stacks, etc. - it's just one more area you have to watch and carefully consider. I'm not suggesting that code which hasn't been thought about performs well in this environment, just that it's possible to produce perfectly functional realtime systems with these languages.
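
The kind of watching I mean: measure per-iteration latency and look at the tail, not the average. A minimal sketch (the timed operation is a stand-in for the real work):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 100000;
        std::vector<double> us(n);
        volatile double sink = 0.0; // stand-in for the real work

        for (int i = 0; i < n; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            for (int j = 0; j < 1000; ++j) sink += j * 0.5; // hypothetical op
            auto t1 = std::chrono::steady_clock::now();
            us[i] = std::chrono::duration<double, std::micro>(t1 - t0).count();
        }

        // The median can look great while the 99.9th percentile ruins you.
        std::sort(us.begin(), us.end());
        std::printf("median %.2fus  p99 %.2fus  p99.9 %.2fus  max %.2fus\n",
                    us[n / 2], us[n * 99 / 100], us[n * 999 / 1000], us.back());
    }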

Comment I've seen how they do this at the cinema... (Score 1) 612

The clever but somewhat unorthodox hacker employed by the Iranians pulls out his Apple laptop and types furiously at a constant rate into a window with unrelated scrolling green text. A modal dialog box appears with a progress bar slowly ramping up to 100% and the text 'Sending virus to enemy drone'. He sits back looking smug with his hands behind his head. Once 100% is achieved, he again types furiously whilst explaining to the general standing behind him that he is going to send a surprise to the American scum operators. The drone sends out some sort of pulse of energy back up the channels being used to control it, and the equipment the American scum operators are using explodes in a shower of sparks and electrical discharges, frying the operators. The hacker then pushes a single button and the drone lands on a convenient long empty road. Everyone cheers. The hacker gets the girl.