Comment Re: Hard to believe (Score 4, Interesting) 166

IE 11 implements W3C standards better than any other browser. WebKit might rack up more checkmarks on html5test, but many of those features are not implemented the way the W3C specs describe.

CSS3 animations are a good example: Chrome does not do them right without hacks.

It is not IE 6 anymore, and even back then, Sun and IBM deliberately subverted and changed proposed standards that IE 6 had implemented during its development. It was not designed to break web pages. Believe it or not, Mozilla and Netscape were worse in 2001.

Comment Re: If you hate Change so much...... (Score 1) 516

Skeuomorphic design, with its gloss, shininess, and gradients, is outdated according to the art professors.

Look at iOS 8: the buttons that mimic real objects are gone. Mac OS X? Yosemite is flat, with bold color scales and the gradients gone. Android M? Same. Furniture? That too is now minimalist in color and design.

This is the new thing.

Even Chrome's icon no longer looks like the 3D plastic of circa 2011. It is flat and slightly frosted.

The old way is out of date now. You all whined that skeuomorphism sucks ("look at the leather in the Mac OS X Address Book!"). Well, the art professors heard you. You got it.

Comment Re:Operating at 20W gives zero improvement. (Score 1) 114

AMD was (before Haswell) not too bad if you had a multithreaded workload.

Thanks to the Xbox One and its 8 cores, games will run better on AMD, since so many of them are crappy Xbox ports and those ports will become more threaded.

For a cheap box to run VM images in VirtualBox/VMware Workstation, do video editing, or compile code, AMD offers great value, and the BIOS does not cripple the virtualization extensions the way it does on the cheap parts from Intel.
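
If you want to check this yourself on Linux, here is a minimal sketch (my own illustration, not anything from the thread) that reports whether the CPU advertises VT-x or AMD-V via /proc/cpuinfo. Note the CPUID flag only tells you the silicon supports it; firmware can still disable or lock it, which you would typically see in the kernel log when loading KVM rather than here.

```cpp
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    while (std::getline(cpuinfo, line)) {
        if (line.rfind("flags", 0) == 0) {                       // first "flags" line is enough
            bool vmx = line.find(" vmx") != std::string::npos;   // Intel VT-x
            bool svm = line.find(" svm") != std::string::npos;   // AMD-V
            std::cout << (vmx ? "VT-x advertised by the CPU\n"
                        : svm ? "AMD-V advertised by the CPU\n"
                              : "no hardware virtualization flag\n");
            return 0;
        }
    }
    std::cout << "could not read /proc/cpuinfo\n";
    return 1;
}
```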

FYI, I switched to an i7-4770K for my current system, so I am not an AMD fanboy. But I paid through the nose for Hyper-Threading and 4 cores, and I wanted good IPC for single-threaded work as well.

AMD dropped the ball twice: first by abandoning the superior five-year-old Phenom II, which is still about 25% faster per clock tick than their newest parts, and second by selling their fabrication plants last decade to prop up the share price. They are on 28 nm chips while Intel is busy moving to 10 nm. How can you compete against that? Worse, GlobalFoundries is more interested in ARM chips because AMD's demand is too low. Ouch.

Even if you love Intel, it is in your best interest for AMD to stick around, for the sake of competition and lower prices.

Comment Re:Operating at 20W gives zero improvement. (Score 1) 114

In SWTOR I got a doubling of FPS from moving from a Phenom II Black Edition to an i7-4770K.

I would be surprised if that were not the case. The i7-4770K came out 5 years after the Phenom II - a lot happened in that time, including the entire Phenom line being discontinued and succeeded by newer architectures. I'd be more interested in a comparison between the i7-4770K and its 30%-cheaper contemporary, the FX-9590 (naturally, expecting the i7-4770K still to win to some degree if we focus purely on single-thread performance, but is that worth it? Once SWTOR is no longer CPU-bound you wouldn't see any difference between the two at all).

Here is the kicker: the half-decade-old Phenom II is faster per clock cycle than the FX series based on the Bulldozer architecture. AMD really messed up; the design was optimized around its graphics side in the hope it would win that way. In other words, those mocking it call it Pentium 4 2.0.

Comment Re:But... (Score 1) 261

I saw a recent review of a smartphone that had two screens, one LCD and one eInk. The modern eInk display is able to get a high enough refresh rate for interactive use and doesn't drain the battery once it's done updating. The screen that I'd love to see is eInk with a transparent OLED on top, so that text can be rendered with the eInk display and graphics / video overlaid on the OLED. The biggest problem with eInk is that the PPI is not high enough to make them colour yet. You get 1/3 (or 1/4 if you want a dedicated black) of the resolution when you make them colour, so that means you're going to need at least 600PPI to make them plausible.

The other problem that they've had is that LCDs have ramped up the resolution. My first eBook reader had a 166PPI eInk display. Now LCDs are over 300PPI but the Kindle Paperwhite is only 212PPI, so text looks crisper on the LCD than the eInk display, meaning that you're trading different annoyances rather than having the eInk be obviously superior. With real paper you get (at least, typically a lot more than) 300DPI and no backlight.

Comment Re:amazing (Score 1) 279

The problem here is latency. You're adding (at least) one cycle of latency for each hop. For neural network simulation, you need to have all of the neurones fire in one cycle and then consume the result in the next cycle. If you have a small network of 100x100 fully connected neurones then the worst case (assuming wide enough network paths) with a rectangular arrangement is 198 cycles to get from corner to corner. That means that the neural network runs at around 1/200th the speed of the underlying substrate (i.e. your 200MHz FPGA can run a 1MHz neural network).

Your neurones also become very complex, as they all need to be network nodes with store-and-forward, and they are going to have to handle multiple inputs every cycle (consider a node in the middle: in the first cycle it can be signalled by 8 others, in the next it can be signalled by 12, and so on). The exact number depends on how you wire the network, but for a flexible implementation you need to allow for this.
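
To make the arithmetic concrete, here is a tiny sketch (my own, using the 100x100 grid and 200MHz clock from the comment above as assumptions) of the worst-case store-and-forward latency on a rectangular mesh:

```cpp
// On a W x H mesh with one cycle per hop, a corner-to-corner signal needs
// (W-1) + (H-1) hops, so the whole network can only "tick" once every
// that-many substrate cycles.
#include <iostream>

int main() {
    const int width = 100, height = 100;                        // 100x100 grid of neurones
    const int worst_case_hops = (width - 1) + (height - 1);     // 99 + 99 = 198
    const double fpga_mhz = 200.0;                              // example substrate clock
    std::cout << "worst-case hops: " << worst_case_hops << "\n"
              << "effective network rate: " << fpga_mhz / worst_case_hops
              << " MHz (roughly the 1 MHz figure above)\n";
    return 0;
}
```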

Comment Re:Good grief... (Score 1) 681

What's the justification for the compilation-unit boundary? It seems like you could expose the layout of the struct (and therefore any compiler shenanigans) through other means within a compilation unit. offsetof comes to mind. :-)

That's the granularity at which you can do escape analysis accurately. One thing that my student explored was using different representations for the internal and public versions of the structure. Unless the pointer is marked volatile or any atomic operations occur that establish happens-before relationships that affect the pointer (you have to assume functions that you can't see the body of contain operations), C allows you to do a deep copy, work on the copy, and then copy the result back. He tried this to transform between column-major and row-major order for some image processing workloads. He got a speedup for the computation step, but the cost of the copying outweighed it (a programmable virtualised DMA controller might change this).
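
A rough sketch of that copy / work / copy-back idea, purely illustrative (the function names and the trivial kernel are mine, not the student's code): copy a row-major image into a column-major buffer so that a column-oriented pass walks memory sequentially, then copy the result back.

```cpp
#include <cstddef>
#include <vector>

// Transpose-copy: src is row-major (width * height), dst becomes column-major.
static void to_column_major(const std::vector<float>& src, std::vector<float>& dst,
                            std::size_t width, std::size_t height) {
    dst.resize(src.size());
    for (std::size_t y = 0; y < height; ++y)
        for (std::size_t x = 0; x < width; ++x)
            dst[x * height + y] = src[y * width + x];
}

static void from_column_major(const std::vector<float>& src, std::vector<float>& dst,
                              std::size_t width, std::size_t height) {
    dst.resize(src.size());
    for (std::size_t y = 0; y < height; ++y)
        for (std::size_t x = 0; x < width; ++x)
            dst[y * width + x] = src[x * height + y];
}

void process_image(std::vector<float>& image, std::size_t width, std::size_t height) {
    std::vector<float> cols;
    to_column_major(image, cols, width, height);   // the "deep copy"
    for (std::size_t x = 0; x < width; ++x)        // column pass, now sequential in memory
        for (std::size_t y = 0; y < height; ++y)
            cols[x * height + y] *= 0.5f;          // stand-in for the real kernel
    from_column_major(cols, image, width, height); // copy the result back
}
```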

I suppose you could do that in C++ with template specialization. In fact, doesn't that happen today in C++11 and later, with movable types vs. copyable types in certain containers? Otherwise you couldn't have a vector of a move-only element type. Granted, that specialization is based on a very specific trait, and without it the particular combination wouldn't even work.
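
A quick illustration of that point (a generic C++11/14 sketch, not tied to any particular standard library's internals): a std::vector of a move-only type works because growing the vector and moving the vector only need move operations, while copying the vector would require copyable elements.

```cpp
#include <memory>
#include <utility>
#include <vector>

int main() {
    std::vector<std::unique_ptr<int>> v;              // move-only element type
    v.push_back(std::make_unique<int>(42));           // fine: growth uses moves
    std::vector<std::unique_ptr<int>> w = std::move(v);   // fine: moving the whole vector
    // std::vector<std::unique_ptr<int>> c = w;       // would not compile: copying needs copyable elements
    return (*w[0] == 42) ? 0 : 1;
}
```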

The problem with C++ is that these decisions are made early. The fields of a collection are all visible (so that you can allocate it on the stack) and the algorithms are as well (so that you can inline them). These have nice properties for micro optimisation, but they mean that you miss macro optimisation opportunities.

To give a simple example, libstdc++ and libc++ use very different representations for std::string. The implementation in libstdc++ uses reference counting and lazy copying for the data. This made a lot of sense when most code was single threaded and caches were very small but now is far from optimal. The libc++ implementation (and possibly the new libstdc++ one - they're breaking the ABI at the moment) uses the short-string optimisation, where small strings are embedded in the object (so fit in a single cache line) and doesn't bother with the CoW trick (which costs cache coherency bus traffic and doesn't buy much saving anymore, especially now people use std::move or std::shared_ptr for the places where the optimisation would matter).
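
As a rough sketch of what the short-string optimisation looks like (an illustration of the idea, not libc++'s actual layout): short strings live in a small buffer inside the object, so they stay on one cache line, and only long strings touch the heap.

```cpp
#include <cstddef>
#include <cstring>

class SsoString {
    static constexpr std::size_t kInline = 15;
    struct Heap { char* data; std::size_t capacity; };
    std::size_t size_;
    union {
        char inline_buf[kInline + 1];   // active when size_ <= kInline
        Heap heap;                      // active otherwise
    };
public:
    explicit SsoString(const char* s) : size_(std::strlen(s)) {
        if (size_ <= kInline) {
            std::memcpy(inline_buf, s, size_ + 1);   // stays inside the object
        } else {
            heap.capacity = size_ + 1;
            heap.data = new char[heap.capacity];     // long strings hit the heap
            std::memcpy(heap.data, s, size_ + 1);
        }
    }
    ~SsoString() { if (size_ > kInline) delete[] heap.data; }
    SsoString(const SsoString&) = delete;            // copying omitted in this sketch
    SsoString& operator=(const SsoString&) = delete;
    const char* c_str() const { return size_ <= kInline ? inline_buf : heap.data; }
    std::size_t size() const { return size_; }
};
```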

In Objective-C (and other late-bound languages) this optimisation can be done at run time. For example, if you use NSRegularExpression with GNUstep, it uses ICU to implement it. ICU has a UText object that implements an abstract text thing and has a callback to fill a buffer with a row of characters. We have a custom NSString subclass and a custom UText callback which do the bridging. The abstract NSString class has a method for getting a range of characters. The default implementation gets them one at a time, but most subclasses can get a run at once. The version that wraps UText does this by invoking the callback to fill the UText buffer and then copying. The version that wraps in the other direction just uses this method to fill the UText buffer. This ends up being a lot more efficient than if we'd had to copy between two entirely different implementations of a string.
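
The pattern is easier to see in code. Here is a C++ rendering of the same idea (the real thing is Objective-C with NSString and ICU's UText; the class and method names below are mine): an abstract string exposes a bulk "get a range of characters" method with a slow one-at-a-time default, and concrete subclasses override it to fill the caller's buffer in one go.

```cpp
#include <cstddef>
#include <string>
#include <utility>

// Abstract string: the only primitive subclasses must provide is per-character
// access, but the bulk range method is what callers should use.
class AbstractString {
public:
    virtual ~AbstractString() = default;
    virtual std::size_t length() const = 0;
    virtual char16_t characterAt(std::size_t i) const = 0;
    // Slow default: one character at a time. Subclasses with contiguous or
    // chunked storage override this and copy a whole run at once.
    virtual void getCharacters(char16_t* out, std::size_t start, std::size_t count) const {
        for (std::size_t i = 0; i < count; ++i)
            out[i] = characterAt(start + i);
    }
};

// Concrete subclass backed by contiguous UTF-16 storage: the override fills
// the caller's buffer in one bulk copy, which is the same trick used above
// to bridge between the two string representations without a third copy.
class Utf16String : public AbstractString {
    std::u16string data_;
public:
    explicit Utf16String(std::u16string s) : data_(std::move(s)) {}
    std::size_t length() const override { return data_.size(); }
    char16_t characterAt(std::size_t i) const override { return data_[i]; }
    void getCharacters(char16_t* out, std::size_t start, std::size_t count) const override {
        data_.copy(out, count, start);
    }
};
```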

Similarly, objects in a typical JavaScript implementation have a number of different representations (something like a struct for properties that are on a lot of objects, something like an array for properties indexed by numbers, something like a linked list for rare properties) and will change between these representations dynamically over the lifetime of an object. This is something that, of course, you can do in C/C++, but the language doesn't provide any support for making it easy.
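
Something like the hand-rolled property bag below (an illustration of the idea, not any particular engine's implementation) is what you end up writing in C++ if you want representation switching: it starts as a small vector of name/value pairs and silently promotes itself to a hash map once it grows.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

class PropertyBag {
    // Representation A: small vector of (name, value) pairs; compact and cache-friendly
    // while there are only a handful of properties.
    std::vector<std::pair<std::string, double>> small_;
    // Representation B: hash map, used once the object grows past a threshold.
    std::unordered_map<std::string, double> big_;
    bool use_map_ = false;
    static constexpr std::size_t kThreshold = 8;

    void maybe_promote() {
        if (!use_map_ && small_.size() > kThreshold) {
            big_.insert(small_.begin(), small_.end());
            small_.clear();
            use_map_ = true;             // representation change, invisible to callers
        }
    }
public:
    void set(const std::string& name, double value) {
        if (use_map_) { big_[name] = value; return; }
        for (auto& p : small_)
            if (p.first == name) { p.second = value; return; }
        small_.emplace_back(name, value);
        maybe_promote();
    }
    double get(const std::string& name) const {
        if (use_map_) {
            auto it = big_.find(name);
            return it == big_.end() ? 0.0 : it->second;
        }
        for (const auto& p : small_)
            if (p.first == name) return p.second;
        return 0.0;
    }
};
```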

Comment Re:Good grief... (Score 1) 681

Depends on whether they care about performance. To give a concrete example, look at AlphabetSoup, a project that started in Sun Labs (now Oracle Labs) to develop high-performance interpreters for late-bound dynamic languages on the JVM. A lot of the specialisation that it does has to do with efficiently using the branch predictor, but in their case it's more complicated because they also have to understand how the underlying JVM translates their constructs.

In general though, there are some constructs that it is easy for a JVM to map efficiently to modern hardware and some that are hard. For example, pointer chasing in data is inefficient in any language and there's little that the JVM can do about it (if you're lucky, it might be able to insert prefetching hints after a lot of profiling). Cache coherency can still cause false sharing, so you want to make sure that fields of your classes that are accessed in different threads are far apart and ones accessed together want to be close - a JVM will sometimes do this for you (I had a student work on this, but I don't know if any commercial JVM does it).
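
The same false-sharing concern applies in C++, where you control layout directly. A minimal illustration (the 64-byte cache-line size is a typical assumption, not universal): two counters hammered by different threads share a cache line in the naive layout, and aligning each to its own line avoids the bouncing.

```cpp
#include <atomic>
#include <cstdint>

// Naive layout: the two counters are adjacent, so threads incrementing them
// independently keep bouncing the same cache line between cores.
struct BadLayout {
    std::atomic<std::uint64_t> thread_a_counter{0};
    std::atomic<std::uint64_t> thread_b_counter{0};
};

// Better layout: each hot field gets its own (assumed 64-byte) cache line.
struct BetterLayout {
    alignas(64) std::atomic<std::uint64_t> thread_a_counter{0};
    alignas(64) std::atomic<std::uint64_t> thread_b_counter{0};
};
```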

Comment Re:Operating at 20W gives zero improvement. (Score 2, Informative) 114

Do you have a link for that? It's not that I disbelieve you; I strongly suspect Intel would do that crap. I'd just like to read more about it if you have a link handy, and then stash the link for the next time this benchmark comes up.

Personally, I like the Phoronix Linux benchmarks. They're more meaningful to me since I use Linux, and they're all based on GCC, which is trustworthy.

http://www.phoronix.com/scan.p...

The i7 4770 occasionally blows away the FX-8350 by a factor of 2, but in many benchmarks they're close, and Intel loses a fair fraction. The 4770 is the best overall performer, but not by all that much. It seems that the choice of CPU is fairly workload-dependent.

For servers, I still prefer the Supermicro 4-socket Opteron boxes: 64 cores, 512 GB RAM, 1U. Nice.

The i7-4770K is a fairly high-end chip from Intel. I own one, but I would not expect to find one in a sub-$700 box. It is not a Xeon, but it is just one notch down from the $900 Extreme Edition, so it is the second-highest consumer (non-server) chip.

Well, sites like tomshardware.com make it look like a Pentium or an i3 can blow the latest AMD Black Edition out of the water. Biased or not, my real-world experience tells a similar story, since many games are optimized for Intel and use Nvidia-specific DirectX extensions via Nvidia's studio software, which makes a lot of AMD users' blood boil, but it is the truth.

In SWTOR I got a doubling of FPS from moving from a Phenom II Black Edition to an i7-4770K. True, it has fewer cores, but apps are still optimized for single-threaded work, and I do have 4 real and 4 Hyper-Threaded cores for my VMs.

The reason is that games are crappy Xbox ports. The 360, I think, was single- or dual-core, so games were single-threaded, and therefore they kick ass on Intel. The only good news is that the Xbox One is changing this, with its 8 AMD cores, forcing game makers to optimize more for ATI.

I expect newer games to be more competitive as a result, as they become more threaded and ATI-optimized, in the benchmarks on tomshardware.com and other sites.

But the damage is done, and power efficiency is much better with Intel chips as they leave AMD further and further in the dust with smaller process nodes, since AMD sold its foundries. GlobalFoundries only cares about ARM chips, so sorry AMD, stay in 2010... Intel is going to 10 nm next year and will finally put the last nail in the coffin... I pray not!

Comment Re:Real Engineering (Score 1) 176

In some places it is illegal to call yourself an engineer if you aren't really one (unlike software "engineers").

All right, that sounds fair. Why can't a consortium or guild provide this certification? Routine coding work could go to programmers, who would earn less, so there would be no need for H-1Bs, while critical architecture and SOA work on critical projects could go to certified engineers.

We could have two grades, rather than averaging the two and ending up with too little talent for one kind of work while being too expensive for the other.

Comment Re:I have an H1-B employee (Score 1) 176

Where are you getting the impression that inexperienced kids out of college are regularly getting $70,000 a year to start?

It seems you are just making stuff up to argue against. The odd part is that you would do that to argue against a rising standard of living for some group of working people. Are you the kind of person who wants to tear others down when they are getting more than you? Because that's a more harmful attitude to have for society in general than some businesses trying to pay less for more.

Really? Go look at the people posting here and at some of the job ads if you do not believe me that kids start at $60k a year.

Not to sound assholish, but I thought only Indians would be doing IT, so I dropped out of CS to do business. Worst mistake ever! I would be happy just to make $50k with a few years of experience, so yes, I do not believe they deserve a raise, and HR and accounting need a way to control costs.

Only on Slashdot is it claimed that there are starving programmers all making $30k a year thanks to those horrible, greedy H-1B recruiters. I do not see it. Honestly, I see it as no different from CEOs whining about making less than a million.

What is a good salary for a starting or experienced programmer? $100k a year? Unless they are specialized or own their own companies, programmers should not make that much. Simple: the corporations are just trying to rebalance the market. American programmers make nearly twice as much as graduates of any other major, so I have little sympathy.
