That's good advice.
But there are plenty of things on most networks that aren't critical servers, or devices you have the luxury of controlling and planning for.
If you regard security patches as essential only for those things, you're doing defense in depth wrong.
Heartbleed affected clients too, and plenty of things that aren't internet-facing services.
Or, in the case of Microsoft, discontinue support for the still widely used Windows XP. Find a vulnerability in that? Too damn bad. It'll never get fixed.
Like when Ubuntu Server 13.04 didn't get a fix for Heartbleed, because support had been discontinued after just one year, despite the criticality of the bug and the considerable use those servers were seeing? All the official replies were "it's your own fault" and "change distro version immediately" - which you often can't do quickly. Nobody really expected 12.04, 12.10, 13.10 and 14.04 to get the fix while 13.04, in the middle, was left out - except people who read the really, really fine print and took it seriously. Shipping the security fix would have been trivial and would have saved a lot of people a lot of work; they just refused on principle.
It was probably the first time many users found out Canonical had changed the support duration (that's why 12.10 got the fix).
Thanks! But too late. That machine died this time last year, after 6 years of excellent service. I moved on to new hardware.
Hopefully the xorg.conf is useful to someone else.
I've just looked up what people are saying about DebugWait, and I see the font corruption - that's just one of the types of corruption I saw!
But perhaps that was the only kind left by the time my laptop died.
Just a note to others: according to reports, DebugWait doesn't fix the font corruption for everyone. But it's reported as fixed by the kernel shipped with Ubuntu 13.04, according to https://bugs.launchpad.net/ubu...
I stand by my view that Intel GPU support never quite reached "excellent" because of various long-term glitches, although I'd call it "pretty good" and still recommend Intel GPUs (as long as you avoid the PowerVR-based ones - that surprise wrecked a job I was on, which was very annoying). Judging by the immense number of kernel patches, consistently over the years, it has received a lot of support and in most ways has worked well.
Getting slightly back on topic with nVidia: another laptop I've used has an nVidia GPU, and it's been much, much worse under Ubuntu throughout its life than the laptop with the Intel GPU. Some people say nVidia works well for them on Linux, but not on this laptop. I've tried all the available drivers - Nouveau, nVidia's, nVidia's newer versions, etc. Nothing works well. Unity3D chugs along at about 2-3 frames per second whenever it animates anything, which is barely usable; the GPU gets very hot doing the slightest things; and visiting any WebGL page in Firefox instantly crashes X with a segmentation fault due to a bug somewhere in OpenGL, requiring a power cycle to recover properly. So I'd still rate nVidia poorer than Intel in my personal experience of Linux on laptops.
Now? Intel GPU support has been excellent under Linux even back when the crusty GMA chips were all we had.
Except for the bugs. I used Linux, including tracking the latest kernels, for over 6 years with my last laptop having an Intel 915GM.
Every version of the kernel during that time produced occasional display glitches of one sort or another, such as a line or a spray of random pixels every few weeks. Rare, but not bug-free.
And that's just using a terminal window. It couldn't even blit or render text with 100% reliability...
I investigated one of those bugs and it was a genuine bug in the kernel's tracking of cache flushes and command queuing.
In the process I found more bugs than I cared to count in the modesetting code.
Considering the number of people working on the Intel drivers and the time span (6 years) that was really surprising, but that's how it was.
In addition to what others said about the FSF discouraging the LGPL: you're also not allowed to statically link LGPL code into non-(L)GPL closed code. You can only link dynamically, unless you provide the source (or object files) needed to relink.
Nonetheless, statically linking with LGPL libraries in the form of uClibc is _extremely_ common in commercial devices running uClinux. Without providing any way to relink. Forbidden, but ignored.
As the AC implies, that's not interference from bad or unshielded electronics in the mobile (or it shouldn't be).
An ideal mobile transmits only what it's supposed to, on the correct RF channels to communicate, and nothing else.
Like all devices there will be other emissions, but let's assume it's very well made and effectively perfect.
The sound from the speakers happens because the speaker circuit is effectively an RF receiver, converting those high frequencies to audio. It actually demodulates the signal - unintentionally. See http://en.wikipedia.org/wiki/Electromagnetic_interference#RF_immunity_and_testing
If the speaker circuit is made well enough, it won't do this.
I had exactly the same thing happen (audio stopped playing until reboot, so phone ring was silent) on Nokia Symbian S60 phones years ago.
It's crappy but it's not exclusive to Windows phones.
We stagnate unless we choose not to. You don't *have* to become a stereotypical angry old conservative. That's up to you. I choose not to.
Because Perl switched to a better hash function _and_ randomised it ages ago.
Having looked at many different fast hashing functions, I'm amazed at how many implementations in the vulnerability report still use the ancient multiply-by-small-constant and xor/add approach. That sort of hash tends to require a prime table size and a slow 'mod' operation; we now have better hash functions that work with 2^n table sizes.
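To illustrate the two points above - seeding the hash so an attacker can't precompute collisions, and using a power-of-two table so bucket selection is a cheap AND rather than a 'mod prime' - here's a toy sketch. It uses a seeded FNV-1a variant purely for illustration; it is not Perl's actual randomised hash, and a production table would want a stronger mixer.

```python
# Toy seeded hash for a power-of-two table. Illustrative only:
# this is a seeded FNV-1a variant, NOT Perl's actual hash function.
FNV_PRIME = 0x01000193
FNV_OFFSET = 0x811C9DC5

def seeded_fnv1a(data: bytes, seed: int) -> int:
    # XORing the seed into the offset basis randomises the hash per-process,
    # so attackers can't precompute colliding keys.
    h = (FNV_OFFSET ^ seed) & 0xFFFFFFFF
    for b in data:
        h ^= b
        h = (h * FNV_PRIME) & 0xFFFFFFFF
    return h

TABLE_SIZE = 1 << 16  # 2^n buckets...

def bucket(key: bytes, seed: int) -> int:
    # ...so a cheap bitmask replaces the slow 'mod prime' of older schemes.
    return seeded_fnv1a(key, seed) & (TABLE_SIZE - 1)
```

With seed 0 this reduces to plain FNV-1a; a different seed gives different bucket assignments for the same keys, which is the whole point of the randomisation.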
This page explains near the end: http://www.mirasoldisplays.com/mobile-display-imod-technology
It's bistable, so it retains memory of the image without needing power (or only a little power), which is similar to e-ink.
But it switches much faster than e-ink, so it can do video, presumably consuming power for the regions which change.
If it's anything like the Chinese knockoff of the Nokia N900, it'll look identical (right down to the logo) but be completely different and relatively useless.
The bit about my own history was just to illustrate that young people (the target audience for RP apparently) do take an interest in that sort of thing, not to suggest a method! Of course nobody would use that approach any more! (The Elite reference was because David Braben co-authored Elite and is also involved in RP).
For analysing the blob statically, assuming you know the instruction set architecture, we have much better tools now: disassemblers, decompilers, type inference and much more. And the internet, so we can collaborate better.
16MB is a big blob, but it's highly unlikely that much of it is needed to make a useful open source subset of the functionality.
For perspective on speed: recently I had to reverse engineer about half of a 1.5MB ARM driver blob in some detail, enough to fix bugs and improve performance deep within it. I'm not going to say what it was, only that it took me about two weeks with objdump and some scripts, without using more advanced tools. I didn't enjoy it, because it was just to fix some bugs the manufacturer left in.
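The "some scripts" part of that workflow can be surprisingly simple. As a hedged toy example (not the actual scripts I used), one of the first things you do with a raw little-endian ARM32 blob is find likely function entry points by scanning for PUSH {..., lr} prologues:

```python
import struct

def find_arm_prologues(blob: bytes):
    """Scan a raw little-endian ARM32 blob for likely function entries:
    PUSH {..., lr}, encoded as STMDB sp! (top halfword 0xE92D) with
    bit 14 (lr) set in the register list."""
    offsets = []
    for off in range(0, len(blob) - 3, 4):
        (word,) = struct.unpack_from("<I", blob, off)
        if word & 0xFFFF4000 == 0xE92D4000:
            offsets.append(off)
    return offsets

# Tiny synthetic "blob" for demonstration (mov r0, r0 used as a nop):
demo = struct.pack("<5I",
    0xE1A00000,   # nop
    0xE92D4010,   # push {r4, lr}     <- function entry
    0xE92D0030,   # push {r4, r5}     (no lr, so not counted)
    0xE1A00000,   # nop
    0xE92D47F0)   # push {r4-r10, lr} <- function entry
print(find_arm_prologues(demo))  # [4, 16]
```

From a list like that you can feed individual functions to objdump for disassembly, cross-reference call targets, and gradually build a map of the blob - which is roughly the grind described above.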
But there may be a big fat license prohibiting anyone from openly using the results of that type of deep code analysis on the RP's blob.
Plus, there's the secret GPU/RISC architecture to get to grips with; that's not going to be obvious.
So it would probably have to be Nouveau-style: Run the original, watch its interactions with the device (with tracing probes), replay things, change things randomly, try things, gradually build up a picture through guessing as much as anything. That's a much bigger task than statically analysing a blob's code. (At least, to me it seems so.) I don't know whether it's practical on the RP, and I don't know whether it's too difficult. But it worked with Nouveau - and that now supports a lot of nVidia chips - so not to be dismissed as impossible.
You never start all over after a chip rev. That's why they call them revs, not new architectures. You can diff code in blobs if need be; often the changes for a chip rev are very small.
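As a rough sketch of that blob-diffing idea (a hypothetical first-pass check, not a real firmware-diffing tool - serious tools align at the function level), a byte-level similarity ratio already shows how small a chip-rev change typically is:

```python
import difflib

def blob_diff_ratio(old: bytes, new: bytes) -> float:
    """Rough byte-level similarity between two blob revisions
    (1.0 = identical). Just a quick first look; real diffing
    tools align code at the instruction/function level."""
    return difflib.SequenceMatcher(None, old, new, autojunk=False).ratio()

rev_a = bytes(range(256)) * 8           # stand-in for an old blob
rev_b = bytearray(rev_a)
rev_b[100:104] = b"\xde\xad\xbe\xef"    # a tiny chip-rev patch
print(blob_diff_ratio(rev_a, bytes(rev_b)))  # very close to 1.0
```

When the ratio comes back near 1.0, most of your earlier analysis carries over and you only need to look at the changed regions - which is why a chip rev rarely means starting over.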
You may be right about needing a lot of 11-year-olds (or others). Luckily the RP is cheap and interesting enough, that it might attract enough interest.
The suggestion isn't all that serious, but nor is it an impossible task, so I think it's worth floating the idea around to see how much interest there is in at least looking further at the practicalities and legalities.
all the software is "open" yet obfuscated
The entire Raspberry Pi depends on a gigantic proprietary blob from Broadcom.
So let's do a Nouveau-style reverse engineering project. How hard can it be?
Sounds like a perfect project for the target audience: curious and talented kids. With a bit of experienced help if they get stuck (seems unlikely to me though, with sufficient time & motivation). Some kids love reverse engineering. I did when I was young and I was far from the only one (but we didn't have an internet to meet each other back then).
(I did loads of reverse engineering from about age 11+ (that was 1983), starting with the BBC and moving on to everything I could get access to, pulling apart games (starting from the binaries), changing behaviours, porting them from tape to floppy disk.)
These days there's plenty of intersection between embedded control (with GPIOs, I2C etc.) and driving some kind of display.
At the moment, for those applications at low volumes (around 1000 units), the Raspberry Pi is the only thing I've seen at a competitive price. Everything else - including mini/nano-ITX PCs - is either way too expensive, lacks good video by current standards, or (thinking of STB chips) means you can't get the parts without 10-100k volumes, a high initial fee, a big fat NDA, and very buggy drivers/SDKs (been there...).
I too am sad that there's not a lot of chip data. I will be getting some Raspberry Pis to trial applications on, but also testing absolutely everything I need to use on it before ordering in quantity. Never trust a manufacturer's specifications - and never trust drivers you can't fix yourself without *lots* of testing. Especially where video is concerned.
It's kinda weird that they can sell them for less than comparable components can easily be bought for, but kinda wonderful compared with everything else out there, if it works as well as they say. I wonder if the low price will really last. And I wonder how long it will be before someone starts a Nouveau-style GPU reverse engineering project.