Comment Re:It depends on the use (Score 1) 416

You may have a point about Haskell, but not about ML. Further, SML basically gets everything right that Haskell gets wrong.

SML isn't lazy. Humans don't think in terms of lazy evaluation. Even though Haskell is much more popular and has much better tools, MLton will usually generate faster code.
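A rough sketch of the difference in Python (generators standing in for lazy evaluation, lists for strict evaluation; the names here are purely illustrative, not SML or Haskell):

```python
def noisy_square(x):
    print(f"computing {x}")   # visible marker for when work actually happens
    return x * x

# Strict (SML-style): all the work happens right here, in order.
strict = [noisy_square(x) for x in range(3)]   # prints immediately: 0, 1, 2

# Lazy (Haskell-style): defining it does no work at all...
lazy = (noisy_square(x) for x in range(3))     # prints nothing yet

# ...the computation runs later, far from where it was written,
# which is exactly what makes evaluation order hard to reason about.
total = sum(lazy)
```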

SML allows side effects. Haskell talks up purity, but the presence of (and reliance on) unsafePerformIO shows that purity has limits. The practical answer is to write without side effects, then add them to designated parts of the codebase; that gets you most of the benefits of purity without all the overhead and headaches for the last 5% of your program.
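A minimal sketch of that "pure core, effectful edge" structure in Python (the function names are my own, purely illustrative):

```python
# Pure core: deterministic, no I/O, trivially testable.
def summarize(lines):
    return {"lines": len(lines),
            "words": sum(len(line.split()) for line in lines)}

# Effectful shell: the only part that touches the outside world.
def report(path):
    with open(path) as f:                      # side effect: file I/O
        stats = summarize(f.read().splitlines())
    print(stats)                               # side effect: output
```

Everything interesting lives in `summarize`, which can be tested without touching a filesystem; the messy last 5% stays quarantined in `report`.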

SML is immutable by default, but allows mutation when needed. Making everything immutable is great for some problems (e.g., concurrency), but is generally bad for performance (determining when in-place mutation can be used instead of a new allocation is a hard, branchy problem).
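SML's escape hatch is the `ref` cell: values are immutable unless you explicitly put them in a mutable box. A loose Python analogue (the `Ref` class is my own stand-in, not a real SML binding):

```python
from dataclasses import dataclass

@dataclass(frozen=True)          # immutable by default, like an SML record
class Point:
    x: int
    y: int

class Ref:
    """A mutable box, roughly SML's `ref` / `!` / `:=`."""
    def __init__(self, value):
        self.value = value

p = Point(1, 2)                  # p.x = 5 would raise FrozenInstanceError
counter = Ref(0)                 # mutation is opt-in and easy to grep for
counter.value = counter.value + 1
```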

The biggest question is why SML isn't ruling the world. Consider golang vs SML. SML is about as fast, has a similar concurrency model in the CML extensions, has a much better type system, and has much simpler (yet more powerful) syntax. Golang gets used because the tooling is very nice, but why did Google pour resources into golang (or even create it in the first place) when a better solution already existed? Because familiar beats everything.

In schools where SML is taught as a first language, students have no more trouble learning it than they would an imperative language. Many such people I've talked to even prefer the syntax. Most schools teach a language with a C-derived syntax and approach, so those devs learn to prefer that (and usually never even see ML-style syntax). Haskell has popularity issues because of complexity. SML has popularity issues because "popularity begets popularity".

Comment Re:Counts sharing, not use. Javascript always shar (Score 2) 125

Most serious JS is definitely NOT open to the public. Common libraries certainly are (and the JS community is very aggressive about pushing the programming envelope), but most significant projects are closed source. You could argue that you can see the source anyway, but between Babel transformations and minification, the shipped output is obfuscated (to argue otherwise would be like arguing that C projects are open source because you can disassemble the binaries).

Comment Re:Unless we know the number of non-dupes. (Score 3, Insightful) 488

Democrats said he was an outstanding, honest man when he dropped the case (while Republicans decried him as dishonest). When the case came back up, the Democrats and Republicans both completely flipped positions. I don't know if he's playing politics or not, but it seems obvious that everyone's hatred/love is tied to their party rather than the truth.

In any case, what could he have done differently? He announced the case closed going into election season. If he hadn't mentioned the new evidence at all, Congress would have had him for perjury sooner or later. If he had released it after the election and Hillary won, everyone would say he buried the investigation so Hillary could win. If he released it before and Trump won, he'd be accused of reviving the investigation to make Hillary lose.

Given that Hillary looks likely to win the election, he can claim that his release didn't adversely affect the result. That's about the best outcome he could hope for.

Comment Re:no nothing important is mising from my comment (Score 1) 54

Sounding in from Chattanooga and EPB here.

EPB offers fiber because last-mile fiber was part of the new smart-grid power system (and why not use all the extra bandwidth?). The actual company offering the service is EPB Fiber Optics, which leases the lines from EPB.

NOTHING keeps Comcast from leasing those same lines at the same rates (or even taking the matter to court if it thinks the cost is too high). They simply refuse and instead offer sub-par service with 300GB data caps (guaranteed to rack up huge overages if you're a cord-cutter). Your territory idea only works when greedy corporations with state-granted monopolies aren't in the mood to abuse the people locked into their service.
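Back-of-the-envelope numbers on that cap (the ~3 GB/hour HD streaming figure is a rough assumption):

```python
GB_PER_HOUR_HD = 3        # rough figure for HD video streaming
CAP_GB = 300
hours_per_day = 4         # a modest cord-cutting household

monthly_gb = GB_PER_HOUR_HD * hours_per_day * 30
overage_gb = max(0, monthly_gb - CAP_GB)
print(monthly_gb, overage_gb)   # 360 GB used, 60 GB over the cap
```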

For the record, EPB is good enough at the job that it already has agreements to do the same thing in northwest Georgia and is still in talks to do the same in northeast Alabama. People want and need good service from companies that aren't out to screw them over, and they'll go wherever necessary to make that happen.

Comment Re:Maria Schneider is a great jazz composer (Score 1) 246

Your control of what you make ends when you sell it to someone else. In the digital age, that means selling something opens you up to the buyer making an infinite number of copies. The only difference is that the government believes reselling what you bought is reason enough to strip you of the "god-given rights" it claims to protect, to take all that you have worked for, and to take away your freedom and inflict permanent harm.

What about the poor author then?

There are many ways to make money without harming others. The first is not to sell something unless you get the price you want (this is what software developers do, as do book authors when you consider that most books are out of print within 5 years). The second is patronage by someone interested in your continued creation of works (a very old and proven tradition). The third (particularly relevant to musicians) is to perform for a fee. There are alternatives, but using the government as your personal mafia is a much lazier solution for corrupt artists and businessmen.

Comment Re:Still a meaningless stunt (Score 2) 111

The team that made AlphaGo deserves credit, but their approach (at a high level) isn't so revolutionary. Go AI developers moved away from relying solely on brute-force tree pruning (like what Deep Blue used) a long time ago.

The first big change was to use pattern recognition (matching sub-sections of the game against already-known patterns) to prune faster. The second (and far more revolutionary) change was to apply an upper confidence bound to Monte Carlo simulations. This is where computers gained the ability to bypass those billions of moves with a margin for error. The third was the use of neural nets as a way to balance between brute force and pattern matching while managing the confidence levels of the Monte Carlo simulations.
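For the curious, the "upper confidence bound" piece is essentially a one-line formula. Here is a generic UCB1 sketch in Python (toy playout numbers of my own, not AlphaGo's actual tuning):

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """Score a candidate move: exploitation (win rate) plus an
    exploration bonus for moves we haven't simulated much yet."""
    if visits == 0:
        return float("inf")           # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# (wins, visits) from hypothetical playouts at one tree node.
stats = {"A": (60, 100), "B": (4, 5), "C": (0, 0)}
parent = sum(v for _, v in stats.values())
best = max(stats, key=lambda m: ucb1(*stats[m], parent))   # "C": never tried
```

The search repeatedly picks the highest-scoring move, runs a random playout from it, and updates the counts; confidence grows where simulations concentrate.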

The biggest difference with AlphaGo is corporate backing. I don't know how many people Google put on the job, but the paper lists 20 authors (so probably more than that). Buying and running supercomputers is extremely expensive as well. With the exception of Darkforest (Facebook's Go engine which, as expected, appears to use a similar design), most teams consist of a very few people on small budgets without someone willing to spend millions to buy and run supercomputers for them.

Comment Not the overclocking record (Score 5, Informative) 85

I believe the official Guinness record is 8.429GHz on a pre-release AMD Bulldozer chip in 2011. Another record was set at 8.723GHz on an AMD FX-8370 in 2014, but I don't recall it being "official".

Comment Re: Yep (Score 1) 125

Companies don't pay FICA taxes on contractors (the contractor pays all of them instead of splitting them 50/50 with an employer). Some states don't require companies to carry workers' comp on contractors either. Contractors also don't get overtime, travel compensation, 401k matching, insurance benefits, etc. (there's not even a minimum wage).

If a contractor works for anywhere close to the same hourly rate as an equivalent employee, that contractor is making a lot less.
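Rough numbers on the FICA piece alone (2016-era 7.65%/15.3% rates; this ignores the self-employment-tax deduction and the value of benefits, both of which only widen the gap):

```python
hourly = 40.0
hours = 2000                      # roughly a full working year

gross = hourly * hours            # same hourly rate for both workers

employee_fica = gross * 0.0765    # employer pays the other half
contractor_fica = gross * 0.153   # contractor pays both halves

extra_tax = contractor_fica - employee_fica
print(extra_tax)                  # 6120.0 -- before benefits, overtime, etc.
```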

Comment Re:I think Linus is a year too early with his gues (Score 1) 182

I forgot to mention Nvidia's Denver core. They dropped it in favor of the A57 and I don't think we'll be seeing it again for a while. The original reason for making it seems to have been x86 emulation (literally the next generation of Transmeta), but their lawsuit settlement with Intel sank that ship, leaving them to repurpose the architecture for ARM. I like the Transmeta idea, but like Bulldozer it has turned out a little underwhelming in practice so far. I think we'll see something similar return in a few years, but for now fixed-function reigns supreme.

Comment I think Linus is a year too early with his guess. (Score 3, Interesting) 182

Looking at the latest in the ARM landscape, we have Apple's A9, Qualcomm's Kryo, ARM's A57 and A72, and AMD's K12. We can probably expect a small jump in Apple's performance next year along with a second revision of Kryo, but nothing competitive with Intel. A57 is being dropped for the fixed-up A72 after Apple embarrassed ARM (tl;dr: Apple shipped a new architecture in 2 years while ARM took almost 4 for an inferior product. Everyone in the industry knows that taking a new architecture from design to shipping is a 4-5 year process, which means either ARM screwed over all its non-Apple partners (and itself) by giving Apple a head start, or Apple pushed ARM to adopt a new ISA when Apple already had a couple of years of work on it). Of all these architectures, I think only the A72, AMD's A57 implementation, and AMD's K12 are worth focusing on.

The A72 is supposedly close to the performance of Intel's Core M processors, but I'm willing to bet that the stock A72 can't actually compete with Skylake's wide dispatch, SMT, and vector units. The biggest question in this area isn't actually the CPU so much as all the "uncore" parts surrounding it. Even if it could have these things in theory, the companies controlling most of the patents in this area aren't using the A72 (AMD, Intel, IBM, Oracle, etc.).

AMD's first generation of ARM processor (launching next year) is an A57 server part, but it will probably be faster than most A72s in practice: it can be manufactured on a high-performance (rather than bulk) fab process and will have faster buses, faster memory, and much larger caches, and some parts of the core (like the branch predictor) may well be replaced with better designs while AMD reworks the architecture for the new fab. This chip will probably be competitive in the low-power server market, but most likely won't be aimed at anything mobile.

Not much is known about AMD's K12, but for the first time, an ARM vendor seems to be moving into the higher-performance mobile segment. AMD failed with Bulldozer (and has taken heat for beating that dead horse for the past few years), but it at least had the sense to hire Jim Keller to help design a couple of new, next-generation architectures. While AMD has money troubles, it sits in the intellectual-property sweet spot to put together a competitive chip. This is the chip I think Linus wants, but it's been pushed to 2017.

The complete unknown is Intel. It picked up StrongARM from DEC as part of their 1997 settlement, built XScale on top of it, then sold the whole line to Marvell in '06. I find it hard to believe that Intel isn't experimenting with ARM designs again. Even if it could make x86 compete at the low end (Atom has been a failure in that regard), convincing companies to switch will probably prove impossible, as the current situation with lots of competing CPU providers works to their fiscal advantage. Apple won't give up the freedom to make its own chips (nor will Samsung). That said, I don't think we'll see an Intel ARM chip before 2018-19.

tl;dr -- the current chips can't compete with Intel. The ones that can don't launch until 2017 or later.
