
Comment MS Can Force This Through Driver Signing (Score 2) 458

I thought to myself... how can Microsoft force this? All of their corporate customers have volume licenses with downgrade rights. Intel and AMD can still release drivers for Windows 7 if they wanted to. Then it occurred to me... driver signing.

Microsoft has seriously shaken up how driver signing works starting with Windows 10. The only way to sign a new driver in a way that Windows 10 will accept is to upload it to Microsoft over the web and have them cross-sign it alongside your original signature. It used to be that as long as you had a certificate chaining to a root CA that was cross-signed by Microsoft, you could sign the driver yourself and Windows would accept it as valid.

Now Windows 10 checks the time stamp on the driver's signature, and if the time stamp is earlier than July 29th, 2015 (the date Windows 10 was released) then Windows 10 will accept the old cross-signed root CA chain. If it's after that date, then only drivers that are directly signed by Microsoft are accepted as valid by the OS.

So how does this affect Windows 7? Well, believe it or not, Windows 7 will accept certificates with either SHA1 or SHA2 (aka SHA256) for USER MODE signature checks (i.e. .exe and .dll files). For kernel mode drivers, Windows 7 will only accept SHA1 certificates! So all it takes is for Microsoft to stop providing SHA1 signatures via their driver signing website, and you instantly lock any new kernel mode binary out of loading on both Windows 7 and Windows 10. That doesn't prevent someone who still has an old SHA1 code signing certificate from using it to sign Windows 7-only drivers. But most of those certificates are expiring in the next year or two, if they haven't expired already. Intel/AMD/etc. could probably release drivers for maybe 1 more silicon generation before their old certificates expire and they lose the ability to release Windows 7 drivers without submitting them to Microsoft for approval.
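The policy described above can be sketched as a simple decision function. To be clear, this is just an illustration of the rules as I've laid them out: the struct, field names, and date encoding are all hypothetical, not actual Windows internals.

```c
#include <stdbool.h>

/* Illustrative sketch of the signing policy described above.
 * All names and flags here are hypothetical -- the real checks
 * live inside the Windows kernel's code integrity machinery. */

#define WIN10_RTM 20150729 /* July 29th, 2015, encoded as YYYYMMDD */

typedef struct {
    int  timestamp;        /* signing time stamp, YYYYMMDD */
    bool ms_cross_signed;  /* cross-signed by Microsoft's portal */
    bool legacy_ca_chain;  /* chains to an old cross-signed root CA */
    bool sha1;             /* signature uses a SHA1 certificate */
} driver_sig;

/* Windows 10: legacy chains are grandfathered in only if the
 * time stamp predates RTM; anything newer needs Microsoft's
 * own cross-signature. */
bool win10_accepts(const driver_sig *s)
{
    if (s->ms_cross_signed)
        return true;
    return s->legacy_ca_chain && s->timestamp < WIN10_RTM;
}

/* Windows 7: kernel mode only understands SHA1 certificates. */
bool win7_accepts_kernel_driver(const driver_sig *s)
{
    return s->sha1 && (s->legacy_ca_chain || s->ms_cross_signed);
}
```

Under this toy model, once Microsoft's signing portal stops producing SHA1 signatures, no newly signed driver from a vendor with an expired legacy certificate can satisfy the Windows 7 kernel mode check.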

Basically Microsoft is using code signing to create planned obsolescence for Windows 7.

Comment Re:Wrong... (Score 1) 208

i'm designing Libre Hardware, right now. i've been on this task for the past five years, since the embarrassing time when i encouraged 20 software libre developers to join me in buying one of the very first ARM netbooks to come out (back in 2010) that turned out to be GPL-violating.

So you have a GPL ARM netbook somewhere? Can you please provide me with the URL to download the RTL for the ARM chip in that netbook? Also, please send me the URL to download the silicon layout files. Which foundry did you contract with to build that chip? TSMC?

All commercial contract silicon foundries with any semi-recent process node (32nm or lower) require you to sign an NDA before they provide you with the transistor models for their manufacturing process. If your ARM chip design is under a GPL license, how do you deal with the fact that it is impossible to distribute your layout file without also distributing the layout for your foundry's transistor design which is under NDA?

Even if your CPU design is fully synthesizable and you only distribute the RTL (which, by the way, will make your design a bit slower than if you had VLSI engineers custom design some of the critical paths in the CPU layout)... wouldn't running the synthesis tool be the same as running a compiler on software source code, so wouldn't the layout files that result from synthesis be considered a derived work which must also be GPL licensed? Also, last time I checked there aren't any open source synthesis tools, and both Synopsys and Cadence charge six figures to license their closed source synthesis tools. Are you addressing the lack of open source design tools? Do you have a cluster somewhere with some of that software available?

In other words... I'm 100% sure that your ARM silicon design is not GPL; in fact I'm 100% sure that it's not open source, because ARM Ltd. only provides ARM licenses under NDA. You bought that ARM chip from some company with a closed source silicon design. The only thing you are focused on is designing an open source PCB to put a closed source CPU on top of. You make the incorrect assumption that because you can send your PCB to any PCB manufacturer and get the same result back, the same thing applies to chips. PCBs are easy, chips are hard.

The OP is right. There is a fundamental difference when you are talking about manufacturing something that requires billions of dollars worth of capital expense to create the factory necessary to build the device. Nobody spends billions to create the capability to manufacture modern silicon and then gives away their factory's transistor design in today's world. Until an open source foundry exists and open source silicon designs exist... your obsession over firmware binary blobs is penny wise and pound foolish.

If you want to actually change something, you should be pitching an open source foundry... honestly I think it's a rather hard sell :) The much more feasible thing for you to do would be to start developing open source silicon design software. Just like how GCC was a prerequisite for an open source UNIX, open source silicon design tools are a prerequisite for open source hardware.

Comment Better Web Standards Needed (Score 5, Insightful) 225

I don't know about everyone else, but IMHO the web browser is THE WORST platform to code for in existence. It amazes and depresses me how little has changed about client side web programming since IE4. Instead we have created these huge frameworks to try to hide the suck under an enormous pile of middleware. But still we are doing this fundamentally broken thing of shoehorning a language intended to describe formatted text documents (HTML) to instead describe a GUI for an application. This reminds me of IE4 and its web page dialogs.

If we truly are serious about having the web be an application platform, then we need a new markup language intended to describe cross platform application GUIs, plus a standard bytecode for the web. Asm.js with Emscripten, or PNaCl, could each be our new standard bytecode; both have pros and cons that I won't rehash. Honestly I'm not a huge fan of either one. But no one is trying to address the fact that HTML's layout system is designed for documents... not for GUIs. We really need something like XUL or XAML made into a web standard. I don't care about the politics of what language/tool we choose as long as it's a good one that's open for all. I'm sick of the holy wars over tools and languages. That said, JavaScript is garbage just like HTML and CSS for actual development and needs to be replaced with a sane language.

Comment Re:I can see a glimpse Microsoft's vision (Score 1) 125

Honestly, the thing that would make Continuum really worthwhile would be if Microsoft got rid of the phone OS entirely, ported the telephony stack to the full Windows 10 OS, and just installed full Windows 10 on everything, including phones. Then when you dock your phone and enter desktop mode, you would be able to run Win32 apps in addition to the universal apps. With that, you would truly have a real, full computer in your pocket.

Of course, this does mean that Microsoft would have to limit their phone OS to x86 CPUs; otherwise the feature would not be worthwhile. Honestly... I don't think that is as big of a deal as it sounds. Intel's smartphone chips have changed a lot in the last 2 years. If you haven't taken a look at the Zenfone 2 yet, it's a great phone. You don't notice anything different between it and other high end Android phones, other than the fact that it has an Intel logo on the back and it has the same features as a $600 phone for $300. Since nobody builds phones with Windows installed anymore except Microsoft, restricting the phone OS to x86 only isn't going to affect some existing OEM customer base :)

Comment Re:We're almost at the end with current tech (Score 2) 117

10 years ago, Intel was hinting at a massively parallel future (80 core processor rumored in development at the time)

I think the 80 core processor Intel was developing at the time eventually turned into Knights Corner, aka the Xeon Phi chip. Originally Intel developed this tech for the Larrabee project, which was intended to be a discrete GPU built out of a huge number of x86 cores. The thought was that if you threw enough x86 cores at the problem, even software rendering on all those cores would be fast. As projects like llvmpipe and OpenSWR have shown, given a huge number of x86 cores this isn't as crazy an idea as it initially sounds... but still a little crazy :) Ultimately Intel cancelled that project and decided to use the tech for supercomputing instead of graphics. One result is that Intel retained the "Gen" design for their graphics core, which is a more traditional GPU design.

Comment Re:If it's not GPL (Score 4, Informative) 160

If it's not GPL'ed, it's not open source. And we all know what abhorrence MS harbors for GPL...

The Open Source Initiative has certified the MIT license as a valid open source license. Look, I'm not a huge MS fan either, but they are using a real OSS license here. Just because MIT isn't copyleft doesn't mean it's not OSS.

Comment Re:Portability (Score 0) 437

Speaking as someone who writes firmware for a living, Rust will never be a replacement for C in its current state. C has one property that sets it apart from every other language that is higher level than assembly. It is possible to write a C program that does not need *ANY* C run-time library support. Our firmware runs on the bare metal without any OS whatsoever, so running Rust in firmware requires building all the services that the Rust run-time requires (heap, threads, etc.) and porting the Rust run-time to it.

So unless we want to write the heap, threads, etc. in assembly... C is our only option for bootstrapping Rust. And since we are using C for that part of our code base... why not use it for all of it? The firmware I work on doesn't do a bunch of string manipulation or other things that make a higher level language like Rust nice. Why add all that additional complexity to support Rust? By the way, anyone writing an operating system kernel is going to run into the same thing with Rust.
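As a sketch of the kind of run-time service that has to be written in C (or assembly) first, here is a minimal bump allocator that needs no libc and no OS at all, roughly the sort of heap you would hand to a higher-level language's run-time. This is purely illustrative: the heap size, alignment policy, and names are all made up, and a real firmware allocator would be shaped by the project's own requirements.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal bump allocator: no libc, no OS, no C run-time needed.
 * This is the kind of service a higher-level language's run-time
 * expects the platform to provide. Illustrative sketch only. */

#define HEAP_SIZE 4096

static uint8_t heap[HEAP_SIZE];
static size_t  heap_top = 0;

void *bump_alloc(size_t nbytes)
{
    /* Round the request up to 8-byte alignment. */
    size_t aligned = (nbytes + 7u) & ~(size_t)7u;

    if (aligned > HEAP_SIZE - heap_top)
        return NULL;              /* out of memory */

    void *p = &heap[heap_top];
    heap_top += aligned;
    return p;
}

/* Bump allocators can't free individual blocks;
 * everything is released at once. */
void bump_reset(void)
{
    heap_top = 0;
}
```

Note that every line above compiles in a freestanding environment (`-ffreestanding -nostdlib`): it touches only a static buffer, which is exactly why this layer ends up in C.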

It would be possible to bootstrap Rust on top of a basic kernel written in C and maybe even use it for some of the kernel code (drivers for example, like how OS X uses C++ for I/O Kit drivers), but Rust will never replace C. It would be nice if someone designed a new **Systems Programming Language** that can run without a run-time and has some of the features that newer languages do... but it seems the only thing people think about when designing new languages these days is the web and/or cloud computing.

Despite what the Rust developers claim, Rust is not a real Systems Programming Language in its current state since it requires a run-time. It's OK to have an optional run-time and for some features to stop working when it's not there... but it must be optional, not required.

Submission + - Broadwell Desktop CPUs Not Actually Discontinued (

nateman1352 writes: Contrary to the report published by IT World and linked in a previous Slashdot story, AnandTech reports that Intel will continue selling desktop Broadwell CPUs:

IT World published an article earlier this afternoon stating that Intel was discontinuing their two desktop Broadwell socketed SKUs, the Core i7-5775C and the Core i5-5675C. The two SKUs are notable because they are to date the only socketed Broadwell processors on the desktop, and they are also the only socketed desktop Core processors available with a GT3e Iris Pro GPU configuration – that is, Intel’s more powerful GPU combined with 128MB of eDRAM.

The idea that these processors were discontinued came as quite a shock with us, and after asking Intel for more details, the company quickly responded. Intel has made it very clear to us that these processors have not been discontinued, and that the company continues to manufacture and sell the processors as part of their current Broadwell lineup.

Comment Re:It's no ARMv8 (Score 1) 54

Both the A8X and the Broadwell Core M have a TDP of ~4.5W, so they give us a good comparison between the latest and greatest ARM vs. x86 CPUs:

Lets compare against the nVidia Tegra K1 as well, which has a TDP of 5W vs. the Core M's 4.5W:

As you can see, Intel is actually competing well against the best ARM can offer in their own backyard. The A8X does ~5% better in multi-threaded workloads, but it has 3 cores vs. Core M's 2 cores. Single threaded, the A8X is ~26% slower than Core M. Despite having 4 cores, the Tegra K1 has ~16% worse multi-threaded performance than the 2-core Core M, and ~53% slower single threaded performance. When you consider that most GUI/web applications are single threaded (which is mostly what people use tablets for), Broadwell Core M is the best tablet chip on the market right now. It's only going to get better with Skylake.

At the same time, ARM hasn't been able to really touch Intel's home turf in the high performance market.

On the topic of instruction sets, honestly the most important difference between x86 and ARM is that having an x86 design gives you a distinct advantage in the market for computers that run Windows. Given that there is no disadvantage for x86 in any other market segment (Android, Chrome OS, Mac, etc.), why would Intel switch to ARM when there is only upside to x86 and no downside?

Comment Re:From the 2nd article (Score 1) 242

>Law of supply and demand affects salaries. Companies that have not learned this, can't find qualified candidates, because they're not paying enough.

Companies are completely aware of supply and demand. Not being able to find good candidates is the excuse they give. The real reason companies want H-1Bs is that they increase the supply of high tech labor. Increasing the supply of any good or service while keeping demand constant reduces its price; this is basic ECON 101. In this case, the service being sold is high tech labor. Increasing the supply of people seeking high tech employment reduces the average wage for high tech work.
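The ECON 101 point can be made concrete with a toy linear supply/demand model. Every coefficient below is invented purely for illustration; the only thing the sketch demonstrates is the direction of the effect.

```c
/* Toy linear labor market model illustrating the ECON 101 point
 * above. All coefficients are made up for illustration only.
 *
 *   Demand: w = a - b*Q           (employers pay less as hiring rises)
 *   Supply: w = c + d*(Q - extra) (supply curve shifted right by
 *                                  `extra` additional workers)
 *
 * Setting demand equal to supply and solving for Q gives the
 * equilibrium quantity, from which the equilibrium wage follows:
 *   Q = (a - c + d*extra) / (b + d),  w = a - b*Q                 */
double equilibrium_wage(double a, double b, double c, double d,
                        double extra)
{
    double q = (a - c + d * extra) / (b + d); /* equilibrium quantity */
    return a - b * q;                         /* equilibrium wage     */
}
```

With made-up numbers (a = 200, b = 1, c = 50, d = 1), the equilibrium wage is 125 with no extra workers and 115 after 20 more enter the market: a larger labor supply pushes the wage down, demand unchanged.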

Note that this is why the same people pushing for more H-1Bs are also donating to public schools to improve their high tech curriculum. They are actively seeking to increase the labor supply using whatever means possible. What's surprising is how willing they are to make such a long term investment in education "philanthropy." It will probably take ~10 years for the education investment to bear fruit. Hard to call it philanthropy when it's done for an entirely self-serving reason :).

Comment Driver Differences (Score 2, Interesting) 96

I think what this benchmark really tells us is two things:

1. nVidia has not optimized their driver stack for DX12 as much as AMD has
2. The performance difference between AMD and nVidia is likely a software issue, not a hardware issue (nVidia's driver has a more optimized DX11 implementation than AMD's). However, it is possible that nVidia's silicon architecture is designed to run DX11 workloads better than AMD's.

Bullet #1 makes sense: AMD has been developing Mantle for years now, so they likely have a more mature stack for these low level APIs. Bullet #2 also makes sense: AMD/ATI's driver has been a known weak point for a long time now.

Comment Consumer Hostile (Score 1) 82

Freemium is pretty disgusting, really. Instead of just buying the game, you have to keep paying for it constantly. You pay every time they add a new sword/gun/zombie-killing plant via "micro transactions." Honestly it's almost as bad as slot machines in a Vegas casino. There is a funny tongue-in-cheek game called DLC Quest about this... which you only have to pay for once :)

Comment Re:Intel is behind (Score 2) 84

28nm is still the cheapest node in per transistor terms.

That's not really true anymore. 14nm is cheaper for Intel to manufacture than 22nm (though Intel is the only company thus far with a mature, cost effective node at 14nm.) Remember that every other silicon fab will also experience all the problems Intel had ramping 14nm to high volume.

Really what this tells us is that Intel's past two nodes (22nm and 14nm) have each had about a 2 year, 6 month development cycle instead of the 2 year cycle we are used to. I think this is just Intel being open with their customers and dealing with the 2.5 year cadence by back-porting to Skylake some of the new features that were originally going to debut in Cannon Lake. We can probably consider Kaby Lake to be Skylake 2.0, and hopefully more of one than Devil's Canyon was (actual new micro-architectural and/or chipset changes, not just some new thermal compound and higher MHz.)

Honestly I think we will all be happier to have Kaby Lake next year than to face another 2 year wait like the one between Haswell and Skylake.
