
Comment Misleading Title (Score 4, Insightful) 154

Really should read "UK's Top Police Warn That Making Aim-Bots/Game Cheats May Turn Kids into Cyber Criminals"

I'm not an expert in sociology, but it seems plausible that unethical behavior in online video games can be a gateway to unethical online behavior in general. From a technical standpoint I know that the skills developed by hacking games are similar to the skills needed to hack financial software.

Comment Re:The iPhone 7/7+ still support CDMA (Score 1) 84

I don't think this made the Slashdot front page, but Intel bought VIA's CDMA modem design and license about a year ago. Intel's modems currently only support GSM & LTE, whereas VIA never updated their CDMA modems to be LTE-capable. It will likely take a couple of years for Intel to integrate VIA's CDMA implementation with their LTE design, but once it's done, Intel's modems will be just as capable as Qualcomm's.

CDMA isn't only important for the US Verizon/Sprint market; the much more important reason to implement CDMA is China Telecom. Either way, the iPhone 7S will likely mark the return of all iPhones being universally supported by all carriers... regardless of whether there is an Intel or Qualcomm modem inside.

Comment Pattern Recognition (Score 4, Insightful) 58

It's a little disingenuous to say that Watson "created" the trailer. The only thing Watson did was run a pattern-recognition algorithm to figure out which clips in the movie were tense, happy, scary, etc. Then a human editor sorted through all of the clips, picked the good ones, and put them in sequence to create a trailer that actually had a narrative instead of just being a hodgepodge of disjointed clips.

Pattern recognition is getting better, which is the first step toward creating an AI... but Watson, and AI in general, is still very far from being capable of original thought.

Comment Re:Could you gush a little more? (Score 1) 427

Java is now toxic thanks to its owner. For the sake of the entire tech industry, we should all consider it a legacy technology to be removed from everything as quickly as possible. Unfortunately that will take years... maybe even decades, but we must start the deprecation process now. Besides, in the 20 years since Java was created, better cross-platform languages have come along anyway.

Thankfully a lot of us have input into technical decisions here. We all need to take a stand and kill Java.

Comment Re:Linux. (Score 1) 405

I keep a Windows system around for minor software that needs it

AKA "games".

Other than games, the very important thing that keeps Windows on my personal system is TurboTax, as it does for pretty much any other US taxpayer whose situation is too complex for Form 1040-EZ and who doesn't want to pay ~$150 for H&R Block or ~$300 for a certified CPA. I hired a CPA once, and $50-per-year TurboTax did a better job!

Before anyone says Wine: it's a non-starter. TurboTax uses a bunch of .NET features, like WPF, that don't work 100% right under Wine. Unfortunately the Mac version is absolute garbage, so that route isn't viable either. It really sucks, but the easiest way to be a lawful US citizen is to have a Windows system.

Comment It's About Ecosystem Development! (Score 1) 81

The fact that Intel is offering to manufacture ARM cores for its custom-foundry customers is not new. In fact, some Altera FPGAs with embedded ARM cores are being manufactured by Intel already. The important thing about this deal is that ARM Limited will now provide Hard IP for Intel's process technology.

To understand the importance of this, you have to understand a little more about silicon design and manufacturing than the average Slashdotter. Suppose you are some random fabless chip designer that builds semi-customized ARM SoCs, a company like Rockchip or MediaTek for example. Generally, the way you put together your new SoC is: you buy a license for the ARM CPU design, then you buy a license for a GPU design from someone else, then you license a USB controller... so on and so forth, until you have all the building blocks necessary to make your new chip. Then you plug them all together, simulate, fab, validate, and ship.

Those blocks come in two different forms: Hard IP and Soft IP. Soft IP is basically a netlist: a big text file that lists every transistor in the design and the interconnections between them. Usually soft IP vendors will give you the RTL, written in a more human-readable language like Verilog, which you compile into a netlist. Hard IP, on the other hand, is more like a vector-graphics drawing or a stencil: it lists every transistor, its x/y coordinates on the silicon, and the exact shape and route of the copper wires. The problem with hard IP is that every silicon manufacturer uses different shapes and sizes for their transistors and connecting wires (this is the foundry's process design), so a given hard IP design can only be built by the foundry it was designed for.

There is a program called a synthesizer that takes the netlist from the soft IP and generates the layout for the hard IP, given a bunch of input parameters that describe the target foundry's process design. Rather incredible, really. The problem is that not every design is "fully synthesizable": for example, anything involving high-speed I/O or analog circuitry (aka the "PHY" layers for modern buses: PCIe, USB, eMMC, Ethernet, SATA, etc.). The pieces of the design that cannot be synthesized need to be drawn by hand (aka human hands) using CAD software. For things like CPUs, there are usually some critical pieces that are drawn by hand, because a good human engineer can design a better, more efficient layout than the synthesizer can, at much greater expense of course. So depending on what percentage of your design is not synthesized, switching from one foundry to another can turn out to be a lot of work! That is the important thing here: ARM is providing ready-to-go hard IP for Intel's foundry, just like they already do for TSMC, so the technical barrier for an ARM SoC designer to use Intel's foundry is now lower... potentially comparable to TSMC's.
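The soft-vs-hard distinction can be caricatured in a few lines of Python. This is a toy model, not real EDA tooling: the cell names, the "process kits," and the one-row placement are all invented for illustration; real synthesis and place-and-route are vastly more complex.

```python
# Soft IP: a technology-independent netlist, just cells and connections,
# with no geometry attached.
soft_ip = {
    "cells": ["inv0", "nand0", "nand1"],
    "nets": [("inv0", "nand0"), ("nand0", "nand1")],
}

# Hypothetical process design kits: each foundry uses different cell
# geometries, modeled here as a single "cell_pitch" number.
process_kits = {
    "TSMC_16nm":  {"cell_pitch": 4},
    "Intel_14nm": {"cell_pitch": 3},
}

def synthesize(soft, kit):
    """Turn soft IP into (toy) hard IP: assign each cell x/y coordinates
    on a grid sized by the target process's cell pitch."""
    pitch = kit["cell_pitch"]
    placement = {cell: (i * pitch, 0) for i, cell in enumerate(soft["cells"])}
    return {"placement": placement, "nets": soft["nets"]}

# The same soft IP yields different hard IP per foundry, which is why a
# hand-drawn (non-synthesizable) block must be redone for each process.
hard_tsmc = synthesize(soft_ip, process_kits["TSMC_16nm"])
hard_intel = synthesize(soft_ip, process_kits["Intel_14nm"])
print(hard_tsmc["placement"])   # {'inv0': (0, 0), 'nand0': (4, 0), 'nand1': (8, 0)}
print(hard_intel["placement"])  # {'inv0': (0, 0), 'nand0': (3, 0), 'nand1': (6, 0)}
```

The point of the sketch: the netlist never changes, but the geometry does, so ARM shipping pre-built "hard" results for Intel's process saves each customer that per-foundry rework.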

Depending on how many engineers you have and how sophisticated they are, you might design some of those blocks yourself. At the far end of that spectrum are companies like Apple and Qualcomm, where even the ARM CPU is a custom implementation that doesn't bear much resemblance to the reference design from ARM Limited.

For Intel, using Intel's foundry is a non-issue, since they have an army of engineers and for the most part design every IP block themselves anyway. For companies like Apple and Qualcomm that also have armies of engineers, switching to Intel's foundry is not a technical issue; it's a business decision. The big news is that smaller companies without the resources for custom design now have Intel's foundry as a viable option.

Comment Correlates With Stat Counter (Score 4, Interesting) 272

The data over at Stat Counter seems to agree:


Looks like MacOS and Linux share has remained roughly flat over the last year, Win8.1 use has declined 48.5%, and Win7 23.1%. Hence Win10's adoption has come at the expense of Win8.1 and, to a lesser extent, Win7. Overall it seems Microsoft's free upgrade has largely been successful at retaining existing Windows users, but it hasn't won any converts from Apple, and it hasn't slowed down Android at all. They stopped the bleeding, but it's not exactly the "threshold" that would return Windows to growth that Microsoft's upper management claimed it would be.

Comment Re: Stupid Software Design Decisions (Score 1) 212

I should have been more clear: every scan *job*, not every file. I am very aware of the way Windows works and its horrible overhead for creating a new process compared to UNIX.

This performance concern would need to be balanced against the added security of preventing a persistent malware infection of the scan engine process. Maybe replace the process every 1000 files scanned? This is what performance profiling is for.
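The recycling idea can be sketched in Python. This is a hedged toy, not a real product design: the "scan engine" is a stand-in one-liner that just labels every file clean, and the recycle interval (here shrunk for demonstration) is exactly the kind of number profiling should pick.

```python
import subprocess
import sys

RECYCLE_EVERY = 1000  # tune via profiling, not guesswork

# Stand-in "scan engine": a child process that reads file paths on stdin
# and emits a verdict per path. A real engine would do actual scanning.
WORKER_CMD = [
    sys.executable, "-c",
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print('clean ' + line.strip())",
]

def scan_files(paths, recycle_every=RECYCLE_EVERY):
    """Feed paths to a worker process, replacing the worker after every
    `recycle_every` files so a compromised engine cannot persist."""
    verdicts = []
    for start in range(0, len(paths), recycle_every):
        batch = paths[start:start + recycle_every]
        # A fresh process per batch bounds the lifetime of any exploit.
        proc = subprocess.run(
            WORKER_CMD,
            input="\n".join(batch) + "\n",
            capture_output=True, text=True, check=True,
        )
        verdicts.extend(proc.stdout.splitlines())
    return verdicts

print(scan_files(["a.exe", "b.dll", "c.sys"], recycle_every=2))
# ['clean a.exe', 'clean b.dll', 'clean c.sys']
```

Profiling would then tell you whether the batch size of 1000 is cheap enough on Windows, where process creation is far more expensive than on UNIX.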

From a software engineering standpoint, one should start out with the "best" design, ignoring performance for the most part, and then after the initial implementation do performance profiling to see what actually needs optimizing. If you optimize up front, you have no way of knowing whether the optimizations you baked into the initial design had any benefit.

Comment Re: Stupid Software Design Decisions (Score 2) 212

Since the whole point of it is security, it really makes sense to have two copies of your scan engine installed: one in Ring 0 for early-boot rootkit detection, which scans every driver as it loads, and only after the binary first passes Microsoft's driver-signing checks.

All scanning of code modules after the kernel is up should be forwarded to a sandboxed user-mode service, so that even if the scan engine is compromised, the malicious code can't go anywhere. It's not a bad idea to fire up a new process for every scan so that any exploit is short-lived.
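The process-per-scan idea looks roughly like this in Python. A hedged sketch: the "engine" is a toy substring check standing in for a real scanner, and the real OS-level sandboxing (restricted tokens, AppContainer, job objects on Windows) is deliberately omitted here.

```python
import subprocess
import sys

# Toy scan engine run as a separate child process per job. The child's
# "signature check" is just a substring match; a real engine would be the
# vendor's scanner binary launched under a restricted token.
ENGINE_CMD = [
    sys.executable, "-c",
    "import sys\n"
    "data = sys.stdin.buffer.read()\n"
    "print('infected' if b'EVIL' in data else 'clean')",
]

def scan_once(payload: bytes, timeout: float = 30.0) -> str:
    """Scan one payload in a fresh short-lived process. Even if parsing a
    malicious file exploits the engine, the compromised process exits as
    soon as the verdict is returned."""
    proc = subprocess.run(
        ENGINE_CMD, input=payload,
        capture_output=True, timeout=timeout,
    )
    return proc.stdout.decode().strip()

print(scan_once(b"hello world"))  # clean
print(scan_once(b"xxEVILxx"))     # infected
```

The timeout is part of the security story too: a hung (possibly hijacked) engine process gets killed rather than lingering.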

It's pretty clear that antivirus software isn't written this way. Vendors run everything at high privilege.

Comment Stupid Software Design Decisions (Score 2) 212

Seriously, why the hell does antivirus software need to run its scan engine with Administrators-group privileges, and why is half of the scan engine running in Ring 0 kernel drivers?

It's amazing: my work laptop BSODs about once a day just because of some crappy driver included in the antivirus software installed by IT.

Since it crashes that frequently in normal operation, it seems likely that there is at least one vulnerability in that driver which is exploitable from user mode.

Comment Re:The great thing about standards... (Score 1) 221

Would it have killed them to make it backwards compatible with the hardware that already exists?

Speaking from a purely technical standpoint, based on the way the eMMC and UFS standards are written, that would be extremely difficult to achieve. The UFS standard uses a MIPI M-PHY design for the actual electrical conveyance of data across the copper data lines. The protocol layer of UFS is built on a SCSI-derived command set, and the OS storage drivers interact with a UFS device much as they would a high-end SSD. By comparison, SD cards have their own proprietary bus format that is derived from MultiMediaCard, which in turn traces back to simple serial buses like SPI and I2C. This is a completely different hardware and software stack from UFS.

Really what it comes down to is that when the UFS specification was originally written, it was intended as an internal bus for giving smartphones faster internal flash. It was not intended to become an external card format that would compete with SD. If that had been a consideration from the start, I'm sure JEDEC would have baked a good backwards-compatibility story into the standard. Now that the standard already exists, UFS v2.0 needs to be backwards compatible with UFS v1.0, so it is too late to add SD bus compatibility: v1.0 devices already exist in the market and compatibility with them must be maintained.

Maybe they could try to bake in some SD compatibility after the fact without breaking the ability of new cards to work with old UFS hosts... but given how orthogonal the two designs are, that would likely add an unacceptable amount of complexity to the flash chip's controller (remember: more complexity == more transistors == a more expensive controller and more power consumed).
