Comment Re:Linux (Score 1) 83

Thanks! But too late. That machine died this time last year, after 6 years of excellent service. I moved on to new hardware.

Hopefully the xorg.conf is useful to someone else.

I've just looked up what people are saying about DebugWait, and I see they mention the font corruption - that's just one of the types of corruption I saw! But perhaps it was the only kind left by the time my laptop died.

Just a note to others: according to reports, DebugWait doesn't fix the font corruption for everyone. However, it's reported as fixed by the kernel shipped in Ubuntu 13.04, according to https://bugs.launchpad.net/ubu...

I stand by my view that Intel GPU support never quite reached "excellent" because of various long-term glitches, although I'd rate it "pretty good" and still recommend Intel GPUs (as long as you don't get the PowerVR-based ones - that was very annoying, and the surprise wrecked a job I was on). Judging by the immense number of kernel patches, consistently over the years, it has received a lot of support, and in most ways it worked well.

Getting slightly back on topic with nVidia: another laptop I've used has an nVidia GPU, and throughout its life it has been much, much worse under Ubuntu than the laptop with the Intel GPU. Some people say nVidia works well for them on Linux, but not on this laptop. I've tried all the available drivers: Nouveau, nVidia's, nVidia's newer versions, etc. Nothing works well. Unity3d always chugs along at about 2-3 frames per second when it animates anything, which is barely usable; the GPU gets very hot doing the slightest things; and visiting any WebGL page in Firefox instantly crashes X with a segmentation fault from a bug somewhere in OpenGL, requiring a power cycle to recover properly. So I'd still rate nVidia poorer than Intel in my personal experience of Linux on laptops :)

Comment Re:Linux (Score 1) 83

Now? Intel GPU support has been excellent under Linux even back when the crusty GMA chips were all we had.

Except for the bugs. I used Linux, including tracking the latest kernels, for over 6 years on my last laptop, which had an Intel 915GM.

Every kernel version during that time produced occasional display glitches of one sort or another, such as a line or a spray of random pixels every few weeks. Rare, but not bug-free.

And that's just using a terminal window. It couldn't even blit or render text with 100% reliability...

I investigated one of those glitches, and it turned out to be a genuine bug in the kernel's tracking of cache flushes and command queuing. In the process I found more bugs than I cared to count in the modesetting code.

Considering the number of people working on the Intel drivers and the time span (6 years), that was really surprising, but that's how it was.

Comment Re:Precisely (Score 1) 1098

In addition to what others said about the FSF discouraging the LGPL: statically linking LGPL code into non-(L)GPL closed code is also not allowed. You can only link dynamically, unless you provide full source.

Nonetheless, statically linking with LGPL libraries in the form of uClibc is _extremely_ common in commercial devices running uClinux. Without providing any way to relink. Forbidden, but ignored.

Comment Re:How are mobile phones legal then? (Score 1) 64

As the AC implies, that's not interference from bad or unshielded electronics in the mobile (or it shouldn't be).

An ideal mobile transmits only what it's supposed to - on the correct RF channels for communication, and nothing else.
Like all devices it will have other emissions, but let's assume it's very well made and effectively perfect.

The sound from the speakers happens because the speaker circuit is effectively an RF receiver, converting those high frequencies to audio. It actually demodulates the signal - unintentionally. See http://en.wikipedia.org/wiki/Electromagnetic_interference#RF_immunity_and_testing

If the speaker circuit is made well enough, it won't do this.

Comment It pleases me that Perl isn't listed as vulnerable (Score 1) 156

Because Perl switched to a better hash function _and_ randomised it ages ago.

Having looked at many different fast hashing functions, I'm amazed at how many in the vulnerability report are still using the ancient multiply-by-small-constant and xor/add approach. That sort of thing tends to need a prime hash table size and a slow 'mod' operation. We have better hash functions now that work with 2^n table sizes.
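
To make that concrete, here's a minimal sketch in C (not Perl's actual code - the constants and function names are just for illustration): the ancient style mixes poorly, so its tables want a prime size and a '%' per lookup, while a seeded, well-mixed hash lets the table be a power of two and the bucket index becomes a single mask.

    #include <stdint.h>
    #include <stddef.h>

    /* Ancient style: multiply by a small constant and add each byte.
     * The low bits mix poorly, so tables built on this usually need a
     * prime number of buckets and a comparatively slow '%' to compensate. */
    static uint32_t old_hash(const char *s)
    {
        uint32_t h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    /* Better: seed the hash per process (defeats collision attacks) and
     * finish with an avalanche step (here, MurmurHash3's finaliser) so
     * every output bit depends on all input bytes. */
    static uint32_t seeded_hash(const char *s, uint32_t seed)
    {
        uint32_t h = seed;
        while (*s)
            h = (h ^ (unsigned char)*s++) * 0x01000193;  /* FNV-1a step */
        h ^= h >> 16; h *= 0x85ebca6b;
        h ^= h >> 13; h *= 0xc2b2ae35;
        h ^= h >> 16;
        return h;
    }

    /* With well-mixed bits, a 2^n table needs only a mask, not '%': */
    static size_t bucket(uint32_t h, size_t pow2_buckets)
    {
        return h & (pow2_buckets - 1);
    }

The seed would come from something like /dev/urandom at startup, which is essentially what Perl's hash randomisation gives you.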

Comment Re:E-ink like power consumption? (Score 5, Informative) 168

This page explains near the end: http://www.mirasoldisplays.com/mobile-display-imod-technology
It's bistable, so it retains the image without power (or with only a little power), which is similar to e-ink.
But it switches much faster than e-ink, so it can do video, presumably consuming power for the regions which change.

Comment Re:What a wonderful project! (Score 3, Interesting) 112

The bit about my own history was just to illustrate that young people (apparently the target audience for the RP) do take an interest in that sort of thing, not to suggest a method! Of course nobody would use that approach any more! (The Elite reference was because David Braben co-authored Elite and is also involved in the RP.)

For analysing the blob statically, if you know the instruction set architecture, we have much better tools now: disassemblers, decompilers, type inference and much more. And the internet, so we can collaborate better.

16MB is a big blob, but it's highly unlikely that much of it is needed to make a useful open source subset of the functionality.

For perspective on speed: Recently I had to reverse engineer about half of a 1.5MB ARM driver blob in some detail, enough to fix bugs and improve performance deep within it. I'm not going to say what it was, only that it took me about 2 weeks with objdump and some scripts, not using more advanced tools. I didn't enjoy it because it was just to fix some bugs the manufacturer left in :-/ (The best bit was a one-bit change that tripled video playback performance and stopped it stuttering :roll-eyes:)

But there may be a big fat license prohibiting anyone from openly using the results of that type of deep code analysis on the RP's blob.

Plus, there's the secret GPU/RISC architecture to get to grips with; that's not going to be obvious.

So it would probably have to be Nouveau-style: run the original, watch its interactions with the device (with tracing probes), replay things, change things randomly, try things, and gradually build up a picture through guessing as much as anything. That's a much bigger task than statically analysing a blob's code (at least, it seems so to me). I don't know whether it's practical on the RP, or whether it's too difficult. But it worked for Nouveau - which now supports a lot of nVidia chips - so it's not to be dismissed as impossible.
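
To give a flavour of the "watch its interactions" step, here's a minimal sketch (my illustration, not how Nouveau was actually built - their tooling was far more elaborate): an LD_PRELOAD shim that logs every ioctl a userspace program makes, which is one common way to start eavesdropping on a blob's conversation with its device.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdarg.h>
    #include <dlfcn.h>

    /* Build: gcc -shared -fPIC shim.c -o shim.so -ldl
     * Use:   LD_PRELOAD=./shim.so some-program-that-talks-to-the-blob */

    static int (*real_ioctl)(int, unsigned long, ...);

    int ioctl(int fd, unsigned long request, ...)
    {
        va_list ap;
        void *argp;

        if (!real_ioctl)
            real_ioctl = (int (*)(int, unsigned long, ...))
                         dlsym(RTLD_NEXT, "ioctl");

        /* ioctl passes at most one extra argument; grab it as a pointer. */
        va_start(ap, request);
        argp = va_arg(ap, void *);
        va_end(ap);

        fprintf(stderr, "ioctl(fd=%d, req=0x%lx, arg=%p)\n", fd, request, argp);
        return real_ioctl(fd, request, argp);
    }

From a log like that you can start correlating requests with what appears on screen, then move on to dumping and replaying the argument structures.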

You never start all over after a chip rev. That's why they call them revs, not new architectures. You can diff code in blobs if need be; often the changes for a chip rev are very small.

You may be right about needing a lot of 11-year-olds (or others). Luckily the RP is cheap and interesting enough that it might attract that much interest.

The suggestion isn't all that serious, but nor is it an impossible task, so I think it's worth floating the idea around to see how much interest there is in at least looking further into the practicalities and legalities.

Comment Re:What a wonderful project! (Score 3, Interesting) 112

all the software is "open" yet obfuscated

The entire Raspberry Pi depends on a gigantic proprietary blob from Broadcom.

So let's do a Nouveau-style reverse engineering project. How hard can it be?

Sounds like a perfect project for the target audience: curious and talented kids, with a bit of experienced help if they get stuck (though that seems unlikely to me, given sufficient time & motivation). Some kids love reverse engineering. I did when I was young, and I was far from the only one (but we didn't have the internet to meet each other back then).

(I did loads of reverse engineering from about age 11 onwards (that was 1983), starting with the BBC Micro and moving on to everything I could get access to: pulling apart games (starting from the binaries), changing behaviours, porting them from tape to floppy disk ;-), even porting them to new architectures. And, now I think about it, quite a lot of hacking on the video hardware of the time, both in hardware and with quirky programming to make it do useful things it wasn't designed to do. If Mr Braben is listening: I printed a whole disassembly of Elite, BBC disk version, on dot matrix - it took days to print (wow, just got a flashback) - and I spent a long time learning from its algorithms, some of which I still use today. Thank you ;-) )

Comment Re:I want more than an arduino(s) (Score 1) 123

These days there's plenty of intersection between embedded control (with GPIOs, I2C etc.) and driving some kind of display.

At the moment, for those applications at low volumes (~1000 units), the Raspberry Pi is the only thing I've seen at a competitive price. Everything else - including mini/nano-ITX PCs - is either way too expensive, or lacks good video by current standards, or (thinking of STB chips) you can't get the parts without 10-100k volumes, a high initial fee, a big fat NDA, and very buggy drivers/SDKs (been there...).

I too am sad that there's not a lot of chip data. I will be getting some Raspberry Pis to trial applications on, but I'll also be testing absolutely everything I need to use on them before ordering in quantity. Never trust a manufacturer's specifications - and never trust drivers you can't fix yourself - without *lots* of testing. Especially where video is concerned.

It's kinda weird that they can sell them for less than comparable components can be easily bought for, but kinda wonderful compared with everything else out there, if it works as well as they say. I wonder if the low price will really last. And I wonder how long before someone starts a Nouveau-style GPU reverse engineering project ;-)

Comment Re:Install (Score 1) 360

Fair enough.

I use aptitude, both from the command line and in system-building scripts, and I prefer its command-line options. Some of them are unique and handy ("aptitude why"), but there are a few nasty things about it. It's extremely slow to do anything (like "aptitude unmarkauto foo"), even if you're just queuing up a sequence of changes; even "aptitude search" is slow ("apt-cache search" gives more results and is instant). The aptitude man page is basically out of date and missing important information - it just tells you to read the manual, and you have to find out for yourself that that lives in /usr/share/doc/aptitude. Worst of all, on a system where people have inconsistently used a mixture of "apt-get" and "aptitude", something about the APT state for manually/automatically installed packages, combined with aptitude's notion of queued-up operations, can get quite muddled, and a subsequent dist-upgrade can sometimes do very strange, bad things.

Both use the underlying APT framework, but dress it up in slightly different ways that unfortunately go beyond just how things are invoked and presented.

It would be nice if they'd integrate the state so it's the same for all APT-using programs; integrate the config options (some have the same name in apt-get and aptitude, others differ, and of course neither set is listed fully or accurately in the respective man pages); improve "aptitude search"; make it run faster (especially when just querying); and move the curses UI into a separate program, so that aptitude really could be an always-recommendable replacement for apt-get and apt-cache. I've admin'd systems where I had to be careful to use the right one of apt-get or aptitude for that system, as the other seemed to behave weirdly (both ways); that's not nice.

I'm surprised Debian recommends aptitude as the definitive thing to use while it still feels like a work in progress.

Sorry, you may sense I've butted heads with aptitude a few times :-)

Comment Re:Bandwidth fixes don't fix latency problems (Score 1) 341

Actually, if you make the bandwidth 100x what's actually being used, then variable latency and quality cease to be problems. In some ways, keeping the pipes over-provisioned is the simplest engineering solution to what are otherwise rather complicated problems (QoS, negotiation, timing, congestion, neutrality, etc.).
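
One way to see why (a standard M/M/1 queueing sketch I'm adding for illustration, not a claim about real traffic): the mean time a packet spends queued plus in service is

\[ T \;=\; \frac{1}{\mu - \lambda} \;=\; \frac{1}{\mu\,(1 - \rho)}, \qquad \rho = \frac{\lambda}{\mu} \]

At 100x over-provisioning (rho = 0.01) the total delay is only about 1% above the bare transmission time, while at rho = 0.9 it's 10x that time. Real traffic isn't Poisson, but the shape of that curve is why fat, mostly-idle pipes behave so predictably.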

Comment Re:Just wave that magic wand (Score 1) 341

Over-the-air HD video is up to 19 megabits per second, so the equivalent download would require a 4.6 gigabit/second link (at the end-user side; the server side would have to be many times that).

Peer to peer, like BitTorrent: there's no need for the bandwidth requirements to pile up linearly at the server. And there's no good reason the upload bandwidth can't be high as well, even if it's not as high as the download speed.

It would also require some type of storage device that can handle 570 megabytes per second, which is an order of magnitude faster than current hard drives.

But not for long: hard drives are at roughly 100 megabytes/s now (multiply up for RAID), and some SSDs are already faster. Anyway, if you're only downloading 8GB, that will fit comfortably in RAM by the time such links are rolled out.
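
Rough numbers behind that (my arithmetic, taking the quoted figures at face value):

\[ \frac{4.6\ \text{Gbit/s}}{8\ \text{bits/byte}} \approx 575\ \text{MB/s}, \qquad 6 \times 100\ \text{MB/s}\ (\text{RAID-0 stripe}) \approx 600\ \text{MB/s}, \qquad \frac{8\ \text{GB}}{575\ \text{MB/s}} \approx 14\ \text{s} \]

So even today a modest stripe of ordinary drives would keep up, and the whole download fits in the RAM of a well-specced machine.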
