Comment It pleases me that Perl isn't listed as vulnerable (Score 1) 156

Because Perl switched to a better hash function _and_ randomised it ages ago.

Having looked at many different fast hashing functions, I'm amazed at how many in the vulnerability report are still using the ancient multiply-by-small-constant and xor/add approach. That sort of thing tends to need a prime hash table size and a slow 'mod' operation. We have better hash functions that work with 2^n table sizes.
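
Just to illustrate the contrast, here's a rough Python sketch - emphatically not Perl's actual hash; the "better" mixer simply borrows splitmix64-style constants:

    import os

    # Old style: multiply by a small constant and xor in each byte. The low
    # bits mix poorly, so hash tables built on this traditionally use a prime
    # number of buckets and a slow modulo to compensate.
    def old_hash(key):
        h = 5381
        for b in key:
            h = ((h * 33) ^ b) & 0xFFFFFFFF
        return h

    # Better: a proper avalanche mix plus a random per-process seed, so every
    # output bit is usable and a power-of-two table can pick a bucket with a
    # single mask instead of a modulo.
    SEED = int.from_bytes(os.urandom(8), "little")
    MASK64 = 0xFFFFFFFFFFFFFFFF

    def better_hash(key):
        h = SEED
        for b in key:
            h = ((h ^ b) * 0x9E3779B97F4A7C15) & MASK64
        h ^= h >> 30
        h = (h * 0xBF58476D1CE4E5B9) & MASK64
        h ^= h >> 27
        h = (h * 0x94D049BB133111EB) & MASK64
        return h ^ (h >> 31)

    TABLE_BITS = 16                         # 2**16 buckets
    bucket = better_hash(b"example key") & ((1 << TABLE_BITS) - 1)

The random seed is what stops an attacker precomputing colliding keys; the stronger mix is what lets you get away with the cheap 2^n mask.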

Comment Re:E-ink like power consumption? (Score 5, Informative) 168

This page explains it near the end: http://www.mirasoldisplays.com/mobile-display-imod-technology
It's bistable, so it retains the image with little or no power, similar to e-ink.
But it switches much faster than e-ink, so it can do video, presumably consuming power only for the regions that change.

Comment Re:What a wonderful project! (Score 3, Interesting) 112

The bit about my own history was just to illustrate that young people (the target audience for RP apparently) do take an interest in that sort of thing, not to suggest a method! Of course nobody would use that approach any more! (The Elite reference was because David Braben co-authored Elite and is also involved in RP).

For analysing the blob statically, assuming you know the instruction set architecture, we have much better tools now: disassemblers, decompilers, type inference and much more. And we have the internet, so we can collaborate better.

16MB is a big blob, but it's highly unlikely that much of it is needed to make a useful open source subset of the functionality.

For perspective on speed: Recently I had to reverse engineer about half of a 1.5MB ARM driver blob in some detail, enough to fix bugs and improve performance deep within it. I'm not going to say what it was, only that it took me about 2 weeks with objdump and some scripts, not using more advanced tools. I didn't enjoy it because it was just to fix some bugs the manufacturer left in :-/ (The best bit was a one-bit change that tripled video playback performance and stopped it stuttering :roll-eyes:)
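
To give a flavour of the sort of script that helps (just a sketch: it assumes an ARM-capable objdump such as arm-none-eabi-objdump is installed, and the blob file name is made up):

    import re
    import subprocess

    # Disassemble a raw ARM blob and count branch-and-link (bl) targets, as a
    # crude first pass at spotting the most-called routines.
    def call_targets(blob, objdump="arm-none-eabi-objdump"):
        listing = subprocess.run(
            [objdump, "-D", "-b", "binary", "-m", "arm", blob],
            capture_output=True, text=True, check=True,
        ).stdout
        counts = {}
        for m in re.finditer(r"\bbl\s+(?:0x)?([0-9a-f]+)", listing):
            addr = int(m.group(1), 16)
            counts[addr] = counts.get(addr, 0) + 1
        return sorted(counts.items(), key=lambda kv: -kv[1])

    # "driver_blob.bin" is a placeholder file name.
    for addr, n in call_targets("driver_blob.bin")[:20]:
        print(f"{addr:#010x} called {n} times")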

But there may be a big fat license prohibiting anyone from openly using the results of that type of deep code analysis on the RP's blob.

Plus, there's the secret GPU/RISC architecture to get to grips with; that's not going to be obvious.

So it would probably have to be Nouveau-style: run the original, watch its interactions with the device (with tracing probes), replay things, change things randomly, try things, and gradually build up a picture through guessing as much as anything. That's a much bigger task than statically analysing a blob's code. (At least, to me it seems so.) I don't know whether it's practical on the RP, and I don't know whether it's too difficult. But it worked with Nouveau - and that now supports a lot of nVidia chips - so it's not to be dismissed as impossible.
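
Just to sketch the trace-tweak-diff idea in code (a toy example: the trace format and file names are invented; real traces would come from something like mmiotrace):

    # Compare two captured logs of (register, value) writes - one run with a
    # feature working, one without - and print the writes that differ, i.e.
    # the registers worth poking at next. Each line of the (made-up) trace
    # format is "address value" in hex.
    def load_trace(path):
        with open(path) as f:
            return [tuple(int(x, 16) for x in line.split()) for line in f]

    def diff_traces(a, b):
        # naive positional diff; real traces need aligning, but it's a start
        return [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]

    on = load_trace("feature_on.log")
    off = load_trace("feature_off.log")
    for i, w_on, w_off in diff_traces(on, off):
        print(f"write #{i}: {w_on[0]:#x}={w_on[1]:#x} vs {w_off[0]:#x}={w_off[1]:#x}")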

You never start all over after a chip rev. That's why they call them revs, not new architectures. You can diff code in blobs if need be; often the changes for a chip rev are very small.
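
For example (again only a sketch, with placeholder file names and the same ARM-capable objdump assumption as above):

    import difflib
    import subprocess

    # Diff the disassemblies of two blob revisions to see how much actually
    # changed between chip revs.
    def disassemble(blob, objdump="arm-none-eabi-objdump"):
        return subprocess.run(
            [objdump, "-D", "-b", "binary", "-m", "arm", blob],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()

    diff = difflib.unified_diff(disassemble("blob_rev1.bin"),
                                disassemble("blob_rev2.bin"), lineterm="")
    changed = [line for line in diff
               if line.startswith(("+", "-"))
               and not line.startswith(("+++", "---"))]
    print(f"{len(changed)} changed lines of disassembly")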

You may be right about needing a lot of 11-year-olds (or others). Luckily the RP is cheap and interesting enough that it might attract enough interest.

The suggestion isn't all that serious, but nor is it an impossible task, so I think it's worth floating the idea around to see how much interest there is in at least looking further at the practicalities and legalities.

Comment Re:What a wonderful project! (Score 3, Interesting) 112

all the software is "open" yet obfuscated

The entire Raspberry Pi depends on a gigantic proprietary blob from Broadcom.

So let's do a Nouveau-style reverse engineering project. How hard can it be?

Sounds like a perfect project for the target audience: curious and talented kids, with a bit of experienced help if they get stuck (though getting stuck seems unlikely to me, given sufficient time & motivation). Some kids love reverse engineering. I did when I was young, and I was far from the only one (but we didn't have an internet to meet each other back then).

(I did loads of reverse engineering from about age 11 onwards (that was 1983), starting with the BBC Micro and moving on to everything I could get access to: pulling apart games (starting from the binaries), changing behaviours, porting them from tape to floppy disk ;-), even porting them to new architectures, and, now I think about it, quite a lot of hacking on the video hardware of the time - both in hardware and in quirky programming to make it do useful things it wasn't designed to do. If Mr Braben is listening: I printed a whole disassembly of Elite, BBC disk version, on dot matrix - it took days to print (wow, just got a flashback) - and spent a long time learning from its algorithms, some of which I still use today. Thank you ;-) )

Comment Re:I want more than an arduino(s) (Score 1) 123

These days there's plenty of intersection between embedded control (with GPIOs, I2C etc.) and driving some kind of display.

At the moment, for those applications at low volumes (1000), the Raspberry Pi is the only thing I've seen at a competitive price. Everything else - including mini/nano-ITX PCs - is either way too expensive, or lacks good video by current standards, or (thinking of STB chips) the parts can't be had without 10-100k volumes, a high initial fee, a big fat NDA, and very buggy drivers/SDKs (been there...).

I too am sad that there's not a lot of chip data. I will be getting some Raspberry Pis to trial applications on, but I'll also be testing absolutely everything I need to use on them before ordering in quantity. Never trust a manufacturer's specifications - and never trust drivers you can't fix yourself - without *lots* of testing. Especially where video is concerned.

It's kinda weird that they can sell them for less than comparable components can be easily bought for, but kinda wonderful compared with everything else out there, if it works as well as they say. I wonder if the low price will really last. And I wonder how long before someone starts a Nouveau-style GPU reverse engineering project ;-)

Comment Re:Install (Score 1) 360

Fair enough.

I use aptitude, both from the command line and in system-building scripts, and prefer its command line options. Some of the options are unique and handy ("aptitude why"), but there are a few nasty things about it. It is extremely slow to do anything (like "aptitude unmarkauto foo"), even if you are queuing up a sequence of changes; even "aptitude search" is slow ("apt-cache search" gives more results and is instant). The aptitude man page is basically out of date and missing important information (it just tells you to read the manual, and you have to find out that's in /usr/share/doc/aptitude). And worst of all, on a system where people have inconsistently used a mixture of "apt-get" and "aptitude", something about the APT state regarding manually/automatically installed packages, combined with aptitude's notion of queued-up operations, can get quite muddled, and a subsequent dist-upgrade can sometimes do very strange, bad things.

Both use the underlying APT framework, but dress it up in slightly different ways that unfortunately go beyond just how things are invoked and presented.

It would be nice if they'd unify the state so it's the same for all APT-using programs, integrate the config options (some options have the same name in apt-get and aptitude; others are different, and of course neither is listed fully or accurately in its man page), improve "aptitude search", make it run faster (especially when just querying), and move the curses UI to a separate program, so that aptitude really could be an always-recommendable replacement for apt-get and apt-cache. I've admin'd systems where I have to be careful to use the right one of apt-get or aptitude for that system, as the other seems to behave weirdly (in both directions); that's not nice.

I'm surprised Debian recommends aptitude as the definitive thing to use while it still feels like a work in progress.

Sorry, you may sense I've butted heads with aptitude a few times :-)

Comment Re:Bandwidth fixes don't fix latency problems (Score 1) 341

Actually if you make the bandwidth 100x the amount actually being used, then variable latency and quality cease being problems. In some ways, keeping pipes with excess bandwidth is the simplest engineering solution to what are otherwise rather complicated problems (QoS, negotiation, timing, congestion, neutrality etc.).

Comment Re:Just wave that magic wand (Score 1) 341

Over-the-air HD video is up to 19 megabits per second, so the equivalent download would require a 4.6 gigabit/second link (at the end-user side; the server side would have to be many times that).

Peer to peer, like BitTorrent. No need for the bandwidth to concentrate linearly at the server.
There is no good reason why the upload bandwidth can't be high as well, even if it's not as high as the download speed.

It would also require some type of storage device that can handle 570 megabytes per second, which is an order of magnitude faster than current hard drives.

But not for long: hard drives are at roughly 100 megabytes/s now (multiply up for RAID), and some SSDs are already faster. Anyway, if you're only downloading 8GB, that'll fit comfortably in RAM by the time the links are rolled out.

Comment Re:Makes sense... (Score 1) 341

I make that closer to 50Tbit/s for the two video panels.
But why so old-skool?
120Hz is already out of date. Let's play with 300Hz. TVs claim more, but it's all the same order of magnitude.
Decent uncompressed holography, at 200nm pixels, is about 125kPPI. Let's stick with 32-bit; we only need one channel though.
Something you can point a telescope at and still see the details.
And you obviously want a holo-video wall in each bedroom for chatting, not a mere window.
Let's call it 8 feet by 12 feet, or 30,000 square inches per person.

I make that a cool 28 x 10^18 = 28 million Tbit/s = 28 Ebit/s, per person, for home use, if you don't compress.

More up-market houses will want a dedicated holo-conferencing / work-at-home room, and of course pictures of the sky on the ceilings as well as other decorative surfaces. So there's still a premium market for zettabit links.

That's nice for chatting, parties, pretty sky pictures etc., but anyone doing scientific or computational research at home will want a proper pipe for their off-site backups.

(Obviously we would compress all the above heavily, but that's harder to evaluate.)

Comment Re:magsafe fuckers (Score 1) 482

Why don't you just read it unplugged, and plug it in when you put it down and go to sleep? Is your laptop battery that far gone that it can't last however long you're reading in bed?

(a) My housemate's MacBook is permanently plugged in because the battery lasts about 10 minutes now.
I'm not sure how old it is, but as it's completely fine at web browsing including video, there is no reason to replace it.

(b) Some people read in bed for many hours.

Comment Re:Mod summary up! (Score 1) 482

A cell phone isn't going to source power to anything. My PDA isn't going to source power to anything[...]

You don't anticipate them supplying power to the USB peripherals (memory sticks) you plug into them?

[...] My computer isn't going to source power to anything (via the charging jack). [...]

It would be a nice feature if it could, so you could feed one device from another's battery (I do that a lot, charging my phone from my laptop when travelling), but I agree it almost certainly won't happen - at least not with the standard under discussion, which is complex and yet not sophisticated in the way USB is.

Yes, a full-blown USB connection needs to have a smart communication system so the devices can tell the host what they are, etc. No, a device built to charge through a USB connection doesn't need to communicate shit; all it needs to do is see 5V on the input and assume that it is connected to something that will limit the current it will provide if necessary. That means you can use any 5V supply to charge it, whether that is a laptop, a battery, or a wind turbine.

I often charge my cell phone from my laptop, as the phone charges over USB. The phone can also be a USB master. Due to a bad implementation, this phone can't power USB peripherals, so you have to use a stupid power+USB splicing cable, but that should change if/when they make a phone that gets it right. That'd be a device charging over USB that needs to negotiate power direction.
