Comment Re:Answer me this... apk (Score 5, Informative) 44 44

The answer is that it varies - GPUs are anywhere from mediocre to useless at "normal" crypto.

It depends on whether the particular encryption algorithm/mode in use is parallelizable or not. For example, CBC is not parallelizable - you have to encrypt each block of data serially, so GPUs are useless at CBC mode encryption. More modern modes like GCM and XTS are parallelizable to an extent, as you can encrypt multiple blocks at once, but there is still a serial dependency in the process (there is no real way of completely getting rid of all dependencies while keeping the algorithm usefully secure), so you still need to do some pre- or post-processing of the data in a serial fashion. And even then, you're limited by bandwidth in and out of the GPU.
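To make the serial dependency concrete, here's a toy Python sketch. The "block cipher" is just an XOR with a fixed key, chosen purely to show the data flow - nothing here is cryptographically meaningful. CBC chains each ciphertext block into the next encryption, while a counter mode keys each block off its index alone, so the blocks can be computed in any order (i.e. in parallel):

```python
KEY = 0xA5  # toy key; stands in for the real cipher's key schedule

def toy_cipher(block: int) -> int:
    # Stand-in for AES on one block; XOR only, NOT secure.
    return block ^ KEY

def cbc_encrypt(blocks, iv):
    out = []
    prev = iv
    for b in blocks:                  # inherently serial:
        c = toy_cipher(b ^ prev)      # block i needs ciphertext i-1
        out.append(c)
        prev = c
    return out

def ctr_encrypt(blocks, nonce):
    # Every block depends only on its own index -> trivially parallel.
    return [b ^ toy_cipher(nonce + i) for i, b in enumerate(blocks)]

blocks = [0x10, 0x10, 0x10]
print(cbc_encrypt(blocks, iv=0x3C))   # each output feeds the next step
print(ctr_encrypt(blocks, nonce=7))   # each output computable independently
```

In `ctr_encrypt`, nothing stops you from computing block 500 before block 0 - that independence is exactly what a GPU can exploit, and exactly what the CBC loop forbids.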

Public-key crypto (RSA, DSA, and ECDSA) isn't really parallelizable either, as it only deals with small data sizes. And typical hash algorithms like SHA-1 and SHA-256 are serial by construction: each compression step depends on the output of the previous one.

Thing is, CPUs these days have hardware AES encryption acceleration, making this mostly a moot point. GPUs are good at doing the same thing many times in parallel, which is what breaking encryption requires, but not regular usage.

Comment Re:Simplistic (Score 1) 385 385

It's called Moravec's paradox:

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.

Comment Re:Quantum Computing Required? (Score 2) 294 294

This paper gives an interesting summary of different assumptions about how detailed a brain simulation needs to be and what they mean for when simulating a brain would be feasible (assuming Moore's Law continues indefinitely, which is obviously not guaranteed). The classical estimates go as late as 2201 depending on what assumptions you accept. See the tables on pages 79-81 for the summary. The quantum estimate is just a question mark; they didn't even bother computing the cost of using classical computers to simulate an entire human brain as a quantum system.

Comment Pretorian Technologies - Joystick, Trackball (Score 2) 100 100

Pretorian Technologies of Lincolnshire, UK http://www.pretorianuk.com/ specializes in computer devices for disabled and semi-disabled users. They make a wide variety of trackballs, joysticks, mouse alternatives, big switches that can be activated by your elbow or knee, iPad switches, Bluetooth-linked switches, etc.

Their devices are aimed at those with "limited hand control, fine and gross motor skill difficulties, poor hand-eye coordination, limited manual dexterity, repetitive strain injury, involuntary muscle spasms, spastic and flaccid paralysis, cerebral movement disorder or central neuromuscular disability and inflammatory or degenerative change"

  From their website, http://www.pretorianuk.com/n-a...

The n-ABLER Trackball is the most adaptable Mouse Alternative on the market specifically designed to address the needs of computer users with limited hand control, motor skill difficulties, poor hand-eye co-ordination, lack of manual dexterity and involuntary muscle spasms.

In the USA, their products are available through InclusiveTLC.com. Not cheap (the anti-tremor joystick costs $440), but they look excellent for the application. A giant 3-inch-diameter bright red switch that talks Bluetooth (for the iPad, I think) runs about $150; see http://www.inclusivetlc.com/is...

Comment Re:Maybe not the power supply? (Score 1) 192 192

Usually, "epoxy" around the edges of a BGA chip is neither an anti-hacking attempt nor a light-proofing attempt. It's called underfill, and its chief purpose is to increase mechanical strength and make the bond more durable than tiny bare solder balls would be on their own.

Comment Early analog work from the 1960s (Score 5, Informative) 33 33

From 1964 through around 1975, planetary astronomers at Tucson's Lunar & Planetary Laboratory used physical models to project and remap the moon's surface. They took high-resolution photos through an earth-based telescope, and then projected the images onto a spherical, white plaster globe. By carefully controlling the geometry, and knowing distances, angles, and (yes) lunar libration, they created detailed maps of the moon's near side, taking into account geometric distortion around the limbs. In this way, they could rephotograph parts of the lunar far side.

The rectified lunar atlas can now be seen at https://www.lpl.arizona.edu/si...

This was all done using telescopes, photographs, and optical projection ... all analog, earth-based work. (The main telescope was the 61" reflector at Mt. Bigelow in Tucson; the films were Kodak 3-AJ 10×10-inch glass plates.)

It was my honor to work with several of these astronomers, including Ewen Whitaker, Gerard Kuiper, Bill Hartmann, and Bob Strom. Brilliant scientists who would be astounded and impressed to see those NASA/Goddard videos. What we take for granted today, once required several years of detailed work.

Comment Re:Still ARM11, still a crappy CPU (Score 1) 355 355

Yes they are. Most multimedia processing is parallelizable, and thus benefits greatly from SIMD instructions - for example, just about every CPU-based video codec ever. If you want an actual example, I wrote a high-performance edge detection algorithm for laser tracing, with its convolution cores written in optimized SSE2 assembly, and am hoping to write a NEON version. It'll never run reasonably on the original Raspberry Pi because it's too underpowered to do it without SIMD (I didn't even bother writing a plain C version of the cores, because honestly any platform without SSE2 or NEON is going to be too slow to use anyway).

Obviously you can use SIMD instructions for a lot more, but multimedia is the obvious example. And as I mentioned, the Pi makes up for it for standard codecs only with its GPU blob decoder, but that doesn't help you with anything that isn't video decoding (e.g. filtering).
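For the curious, here's a plain scalar sketch (Python, purely illustrative - the image, kernel, and sizes are made up) of the kind of 3×3 convolution core an edge detector runs. The SSE2/NEON versions do the same arithmetic but on many pixels per instruction:

```python
# Horizontal Sobel kernel: responds strongly to vertical edges.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve3x3(img, kernel):
    """Scalar 3x3 convolution; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):        # these two inner loops are what
                for kx in range(3):    # SIMD collapses into vector ops
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A vertical edge between columns of 0s and 10s:
img = [[0, 0, 10, 10] for _ in range(4)]
edges = convolve3x3(img, SOBEL_X)
print(edges[1])  # strong response where the brightness jumps
```

The inner multiply-accumulate loop is the hot spot: with 128-bit SSE2 registers you do eight 16-bit multiply-accumulates per instruction instead of one, which is where the order-of-magnitude speedup comes from.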

Comment Re: a billion operat per second enough for cat wat (Score 1) 355 355

ESP8266 only became a "thing" last year, so the community is still growing. But the manufacturer is cooperating and is releasing open SDKs, and the hobbyist community is enthusiastic about it. I personally intend to use a bunch of them to automate things around my apartment, so I guess I'll find out just how good/bad it is.

That's for developing on the ESP8266 core itself - if you just want to use the default firmware, plug it into your existing microcontroller platform (e.g. Arduino) and you get wireless connectivity and a TCP/IP stack (running on the module) with some trivial AT commands. Not as cheap since you're still using a separate core as the main app host, but still a really cheap way to add WiFi to something.
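As a sketch of what that AT dialogue looks like (the commands shown are from Espressif's stock AT firmware, though the exact command set varies by firmware version; `send()` and the captured `uart` list here are stand-ins for whatever UART write routine your host microcontroller provides):

```python
def send(uart, cmd: str) -> None:
    uart.append(cmd + "\r\n")   # AT commands are CR/LF terminated

uart = []  # capture the bytes instead of using a real serial port

send(uart, "AT+CWMODE=1")                            # station mode
send(uart, 'AT+CWJAP="my_ssid","my_password"')       # join access point
send(uart, 'AT+CIPSTART="TCP","192.168.1.10",8080')  # open TCP socket
send(uart, "AT+CIPSEND=5")                           # announce 5-byte payload
uart.append("hello")                                 # then the raw payload
```

The SSID, password, and address are placeholders. The module answers each command with OK/ERROR, so real code waits for the response between steps; the point is that the whole TCP/IP stack lives on the module and the host only ever shuffles strings over a serial port.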

Comment Re:Then buy a used PC (Score 1) 355 355

There's a difference between established industrial designs, where there is an argument for maintaining compatibility and an existing codebase, and hobbyists, who can quite happily move up the chain and are always looking for cool new stuff in other respects. Even in product development, some companies go out of their way to use ridiculously outdated, expensive chips. That usually only flies when it's for non-consumer applications where they can afford to throw more money at a chip vendor to keep making outdated chips at outdated prices (which sometimes even rise); for consumer products the competition will undercut you by using newer, cheaper chips if you don't. For hobbyists, it actually pays off to upgrade - you get better toolchains (no need to deal with all the ROM/RAM/pointer type shenanigans of AVRs on ARM), better debuggability, etc. Of course, it doesn't mean you should jump onto any random chip - the toolchains and ecosystems vary wildly in quality - but it's a shame that so many people just stick with the old instead of trying something new.

There's nothing wrong with the Tiny series - little 6- and 8-pin chips are still the market where AVR/PIC make perfect sense, and I'll be the first to admit that I've used a PIC12F629 as a dual frequency generator in a project. But as a flexible platform for hobbyists, I'd much rather have a Cortex-M3 than an ATmega.

Back when I was using PICs more often, my approach was to re-evaluate my personal selection of PICs every few years. I'd go through Microchip's (extensive) part database, look at the prices, and see if anything caught my eye, then order some samples. My 8-pin of choice used to be the 12F508, then the 12F629. For 18-pin I went from 16F84 to 16F88. 28-pin, 16F876 to 18F2520, and 18F2550 for USB. 40-pin, 16F877 to 18F4520, and 18F4550 for USB. I tried dsPIC at one point but didn't like it; by then ARM was picking up steam and it didn't make any sense. I haven't really looked at their line-up in a while, since I've mostly moved on to other chips for interesting stuff and stick to my old PICs for small quick/dirty hacks (I have a bunch in my drawers to get rid of), but you get the idea. It never made any sense to me to get stuck with one particular obsolete part or range.

Comment Re:Still ARM11, still a crappy CPU (Score 1) 355 355

Yup, all the other AliExpress pages I was looking at for the same phone said MTK6517, and I didn't notice that the one I chose was different (I was just going for the lowest price, though the difference was a few bucks). It turned out to be the more accurate listing, it seems, since it matches the actual device I have.

A7 is actually decent. It's low-end (as far as ARMv7 application processors go) but reasonably modern (late 2011, which isn't too bad). Nobody's asking for a bleeding-edge CPU in something like the Pi, but a 2002 vintage core wouldn't have made any sense.

Comment Re:Still ARM11, still a crappy CPU (Score 1) 355 355

TFA used to claim that it was still ARM11. They just edited it a few minutes ago. I stand transitively corrected.

I actually tried to look up any official announcements to corroborate the fact that it was still ARM11 before posting my first comment (because it just felt so dumb), but found none, no mentions of the new chip on Broadcom's site, nothing. I guess they trusted El Reg with the scoop and they screwed it up.

Comment Re:Still ARM11, still a crappy CPU (Score 3, Informative) 355 355

Whoops, you're right. Other pages claimed it was an MT6517, but I just checked /proc/cpuinfo. Still, the A7 is a modern core, nine years newer than the ARM11.

$ cat /proc/cpuinfo
Processor : ARMv7 Processor rev 3 (v7l)
processor : 0
BogoMIPS : 2589.52

processor : 1
BogoMIPS : 2589.52

Features : swp half thumb fastmult vfp edsp thumbee neon vfpv3 tls vfpv4 idiva idivt
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xc07
CPU revision : 3

Hardware : MT6572

(0xc07 means Cortex-A7)
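For anyone wanting to decode those fields themselves: "CPU implementer" 0x41 is ARM Ltd., and "CPU part" identifies the core. A quick Python sketch with a few common ARM part IDs (a small sample, not an exhaustive table):

```python
# A handful of well-known ARM Ltd. "CPU part" IDs as reported
# in /proc/cpuinfo (implementer 0x41 = ARM Ltd.).
ARM_PARTS = {
    0xb76: "ARM1176",    # the original Raspberry Pi's core
    0xc05: "Cortex-A5",
    0xc07: "Cortex-A7",
    0xc09: "Cortex-A9",
}

print(ARM_PARTS[0xc07])  # the part ID from the dump above
```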

Comment Re:a billion operat per second enough for cat wate (Score 1) 355 355

If you want to control a few motors and lights with network connectivity, get some ESP8266 modules - those are WiFi modules with a user-programmable 80MHz 32-bit CPU that you can buy for $5. Throw in a Cortex-M0 as a slave device to control your I/O (which can be as cheap as $1 in single quantities - yes, you can get a 32-bit CPU for $1 these days). That's what state-of-the-art 2015 silicon gets you for this task. A Raspberry Pi with a WiFi dongle is an order of magnitude more expensive and overpowered (and yet underpowered relative to what it claims to be, which is a Linux platform).
