


Comment Re:The US cannot follow a pact (Score 4, Interesting) 38

So, why should anyone expect China to?

In fact, if I was a Chinese government official I'd be laughing at anything the US suggests. Maybe I'd sign the pact just for a joke though.

the thing is, what the U.S. politicians - and many people around the world - don't realise is that Chinese Intelligence is so secretive it doesn't even have a name. its members operate, in effect, as independent cells, through word-of-mouth contacts, with negligible two-way contact with the outside world... even inside china, and *including with the politicians*. remember, china's politicians, under the "one party state", don't actually have much in the way of power, and are not really that well-respected (or trusted).

so the hilarious thing is that the only way for the politicians to inform the Chinese Intelligence that there's a treaty that's supposed to be signed is, in fact, to announce it in the news and hope like hell that someone relevant, somewhere, in their lair / bunker / hideout, actually reads it. here's the problem, though: if those operatives happen *not to agree* with that treaty, as far as "China National Security and Interests" is concerned, then, well, they don't actually have to take a blind bit of notice.

the same goes for when all these attacks keep occurring. the *simplest* thing to say is "it was chinese hackers! they're nothing to do with us politicians! we have a policy of not attacking foreign assets! no really!" because for the politicians to even *admit* that it was Chinese Intelligence operatives - not that they could possibly find out who they were even if they wanted to - would probably result in them getting a knock on the door and them and their family deported to some remote area of China which hasn't changed in several centuries.

we in the West assume that, just because the politicians in Western countries make the laws, other countries must follow that exact same process. china's politicians - people don't realise this - are *not* at the top of the food chain as far as power is concerned. they're not even second from the top. on mature reflection, you might call that a good thing, as it means they can't really screw things up.

Comment thinkpenguin (Score 2) 237

i recommend contacting http://thinkpenguin.com/ for several reasons. firstly, yes, they install GNU/Linux by default (so they've done all the hard work, and the research, in advance - is that worth paying for? yes!). secondly, they actually go to the trouble of replacing the BIOS with Coreboot. is _that_ worth it, and worth paying for? yes!

and thirdly, they make sure that the hardware they've selected is FSF-Hardware-Endorseable, which needs some explanation as to why it's important - and it's not *actually* for some stupid or idealistic or neo-fascist or brain-washed or self-righteous or [insert whatever derogatory reason towards the FSF, Dr Stallman and their goals may be in your mind for completely ignoring anything and everything associated with the FSF - reasons which we're about to show are completely moot] reason.

no, the clear benefit of buying FSF-endorsed hardware such as printers, wifi and 3G dongles etc. is that they JUST WORK. peripherals these days usually have built-in firmware. because the firmware in FSF-endorseable products is pre-loaded onto NAND flash or EEPROM, they're pretty much guaranteed to be more expensive than devices which require the proprietary firmware to be uploaded from the main OS before the device can actually function.... BUT...

what that means in practice is that if you don't *have* that proprietary firmware, or if it happens not to be compatible with the OS, or if you lose it, or if the file system becomes corrupted, or if you perform an upgrade of the OS - and for many, many other reasons, all of which amount to a great deal of hassle - you cannot use that device, period.

the most ridiculous instance of this is that ethernet is becoming less common, CD/DVD drives are becoming less common, creating USB-sticks to boot-install systems has always been a pain, EFI-boot (only) is becoming more common.... how the hell is anyone supposed to install an OS when the only network access is WIFI, and the WIFI requires bloody proprietary firmware that has a license that prevents and prohibits that firmware from being installed on the bloody installation media?? how stupidly ridiculous a situation can you possibly get into! and don't get me started about usb-ethernet devices, which, due to them being USB, are often *excluded* from selection as a "main internet connection" during the install process, because, by nature of them being removable, the OS can't guarantee that the device will be there on the next boot.

avoiding all this hassle is what you pay for when you buy pre-vetted products from http://thinkpenguin.com/ and other companies that are listed on the FSF's page http://www.fsf.org/resources/h... . you can also go to http://h-node.org/ and take a look there to see if what you want is listed.

so when you buy a product from http://thinkpenguin.com/ you know that it's "just going to work". if you genuinely want to replace the OS, you can... and it will be a very straightforward job, unlike, i can guarantee, absolutely every other recommendation at the time of writing of this comment with a category "5" score here on slashdot.

ironically - though not surprisingly - thinkpenguin get fewer support calls (the hardware "just works"). their customers are happier... and so are more loyal. is that worth paying a bit extra for? yeah, i'd say so.

Comment simple answer (Score 1, Offtopic) 191

What's your approach to detecting and dealing with Android malware?

don't use android. this is not said in a sarcastic, troll-baiting, flame-fest-demanding or other meaninglessly fucking stupid way, nor in any other way which is to be misunderstood, either accidentally or deliberately: it is said in a simple, factual way. if you use a monoculture OS, supplied in binary form only and, for commercial (profit-prioritisation) reasons, not properly supported by the manufacturer, then i'm sorry to have to be the messenger here, but - just as when you run any other proprietary, binary-only monoculture OS - you get everything that you deserve: viruses, malware and more. (and no, google is NOT the manufacturer of the world's third-party android mobile phones: they are the supplier of REFERENCE platform source code, which third-party manufacturers then take and produce their own customisations and binaries from. because of the huge fuck-ups that have occurred when third-party manufacturers do that, google have been forced to do "flagship" products demonstrating how to do it correctly... but even so, they *still* haven't managed to get round the huge "monoculture" problem.)

now, if someone wants to go and vote the paragraph above down just because it's quote "not nice" unquote, i really don't give a monkey's. fact is, i don't use android, therefore i don't get android malware. no complications, no desire to risk my data or my time dealing with other people's crap proprietary "pseudo-open" software. got a problem with that? i genuinely don't care.

Comment Costa Rica (Score 1, Interesting) 381

costa rica is, geographically, a nexus for the undersea fibre cables. translation: the internet connectivity is *fast*. intel has a major centre there. the advantage of costa rica - apart from being absolutely beautiful and one of the most bio-diverse areas on the planet - is that they have NOT signed CAFTA. as a direct result of this they are still a sovereign nation. also, it's *really* hard to do mass-surveillance when most of the country is covered in dense greenery. you can get a tourist visa and then, every few months, pay the $30 fine for staying a little bit longer. some foreigners have paid that - gosh! - every few months for - shock, horror - 20 years!

Comment modular computing (Score 1) 345

interesting timing. i've been working on designing modular computer products for the past five years, and just wrote up a white paper yesterday on exactly this topic.

the fairphone 2 is designed as "modular" - well, it's not exactly modular: it's (very unusually for a smartphone) designed to be repairable. you have to have a screwdriver, but that's a lot better than a hermetically-sealed unit that needs a saw or scalpel followed by epoxy resin to undo the damage caused by getting into the device.

also... what happened to the "bloom laptop"? i know it was 5 years ago now, but the whole reason why they started the project was because the entire class of students and two professors were absolutely astounded that it took *three hours* to disassemble a standard laptop... into over 140 constituent parts.

Comment Re:Learning to program by Googling + Trial & E (Score 0) 616

This is why so much poor software exists in the world.

I can only imagine what nightmare code is being generated by such efforts.

Yes, anyone can code, just as anyone can build a house. Whether or not the house collapses immediately, whether it has any real value, or by any other measure still depends on the skill of the builder, just as in software.

Garbage in -> Garbage out,
applies to the code as well as the data.


honestly, i can say that if it gets the desired results, who cares? it's going to be maintainable by that person, because they were the one who wrote it. if this were a team effort, however, that would be a completely different matter. if there were a requirement to have a maintenance contract in place, for the long-term success of the code and the project, that would, again, be an entirely different matter.

i *do* actually use the technique that the author uses - i have been using it successfully for over 30 years. during that time, however, i have added "unit tests", source-code revision control, project management, documentation, proper comments, proper code structure, coding standards and many, many more things which make for a successfully *maintainable* project.

whilst such things are most likely entirely missing from the projects this individual is tackling, those projects are also likely to be ones that *don't need* such techniques.

in essence: none of that de-legitimises the *technique* of "programming by random research". it's a legitimate technique that, i can tell you right now, saves a vast amount of time. understanding comes *later* (if indeed it is needed at all), usually by a process of "knowledge inference". to be able to switch off "disbelief" and "judgement" is something that i strongly recommend that you learn to do. if you've been trained as a software engineer, adding "programming by random research" to your arsenal of techniques will make you much more effective.
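to make that concrete - purely as an illustrative sketch of mine, nothing from the article - here's roughly what "programming by random research" plus a unit test looks like in practice: the function body is the kind of thing you paste in from a search result, and the test (not your "understanding" of it) is what tells you whether to keep it:

# hypothetical example: a snippet "found by searching", kept honest by a test
import unittest

def chunk(seq, size):
    # candidate implementation pasted in, largely as-is, from a search result
    return [seq[i:i + size] for i in range(0, len(seq), size)]

class TestFoundSnippet(unittest.TestCase):
    def test_even_split(self):
        self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    def test_ragged_tail(self):
        self.assertEqual(chunk([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])

if __name__ == '__main__':
    unittest.main()

if the tests pass, keep it; if they don't, search again. that's the whole loop.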

Comment Re:You know there's a problem... (Score 0, Troll) 616

...when you need to google the hex representation of 'red'. *much* better to understand the encoding, and it certainly isn't hard or requires tricky math. it's literally RRGGBB

you are completely and utterly missing the point, by a long, long margin, and have made a severe judgement error. the assumption that you have made is to correlate "understanding" with "successful results".

believe it or not, the two are *not* causally linked. for a successful counter-example, you need only look at genetic algorithms and at evolution itself.

did you know that human DNA contains a representation of micro-code, as well as a factory which can execute assembly-level-like "instructions"? i'm not talking about CGAT, i'm talking about a level above that. to ask how on earth such a thing "evolved" is entirely missing the point. it did, it has, it works, and who cares? it's clearly working, otherwise we would not be here - on this site - able to say "what a complete load of tosh i am writing"!

what this person has done is to use their creative intelligence as well as something called "inference". they've *inferred* that if enough google queries of "what is hex HTML for red" come up with a particular number and it's always the same number in each result, then surprise-surprise it's pretty much 100% likely that that's the correct answer.

*later on* they might go "hmmm, that's interesting, when i search for "red" it comes up with FFnnnn, when i search for "green" it comes up with nnFFnn" and then they might actually gain the understanding that you INCORRECTLY believe is NECESSARY to achieve successful results.
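(for the curious, here's a tiny sketch of the encoding being inferred there - the helper names are mine and purely illustrative: each pair of hex digits is one channel, 00 to FF, in RR-GG-BB order.)

def rgb_to_hex(r, g, b):
    # pack three 0-255 channel values into the RRGGBB form
    return "{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    # unpack an RRGGBB string back into (r, g, b)
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 0, 0))   # FF0000 - "red"
print(rgb_to_hex(0, 255, 0))   # 00FF00 - "green"
print(hex_to_rgb("FF0000"))    # (255, 0, 0)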

but please for goodness sake don't make the mistake of assuming that understanding is *required* to achieve successful results: it most certainly is not.

Comment Re:Programming (Score -1, Troll) 616

Programming -- I don't think that word means what she think it means.

actually... i believe it's you who doesn't understand what programming is. programming is about "achieving results". whether the results are successful has absolutely NOTHING to do with the method by which they are achieved - and that's provable by either (a) unit tests or (b) a system test.

so if this person has found an unorthodox and successful way to do programming (which, by the way, is *exactly* how i do pretty much all of the programming i've ever done, including in programming languages that i've never learned before), then *so what*??

just because *you* memorise all the APIs, go through all the books, go through all the tutorials, go through all the reference material and then re-create pretty much everything that's ever been invented from scratch because otherwise you would not feel "confident" that it would "work", does NOT mean that there isn't an easier way.

there are actually two different types of intelligence:

(a) applied (logical) intelligence. this is usually linear and single-step.
(b) random (chaotic) intelligence. this is usually trial-and-error and is often parallelisable (evolution, bees, ants and other creatures)

an extreme variant of (b) is actually *programmable*. it's called "genetic algorithms".

personally i find that method (a) is incredibly laborious and slow, whereas with method (b), if you write good enough unit tests and spend a significant amount of time reducing the "testing" loop, you get results very, very quickly. genetics - darwinian selection - is a very good example: we don't "understand" each iteration, but we can clearly and obviously see that the "results" are quite blindingly-obviously successful.
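here's a deliberately tiny, purely illustrative genetic-algorithm sketch (my own toy example, nothing to do with the article): the fitness function plays the role of the "unit test", no individual mutation is "understood", and yet the population converges on the target anyway:

import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # the "unit test": how many characters are already in the right place
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # blind trial-and-error: randomly perturb a few characters
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:100]              # keep the fittest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(100)]

print(generation, repr(population[0]))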

by applying the technique that the original article mentions, i managed to teach myself actionscript in about 48 hours, and java took about the same amount of time. i knew *nothing* about the APIs or the full details of *either* language... yet i was able to successfully write the necessary code for a project based on red5 server and a real-time flash application. it was up and running within a couple of weeks.

in short: to say the method described in the article has "nothing to do with programming whatsoever" is complete rubbish. it's a proven technique that gets results, and, you know what? the most critical insight of the article is that it's *not* people who are "good at maths" who are good at achieving results with this technique: it's people who are creative and who understand language.

Comment newshell.exe (Score 3, Interesting) 354

actually... newshell.exe, as it was known, was written by the NT team, back when Windows NT 3.1 was new and NT 3.51 was in beta. the windows 95 team - who were universally, absolutely hated by the NT team - legitimately "stole" newshell.exe from the [internally and legitimately accessible] source repository of the NT team at the time, and released it as the default shell of windows 95 *before* the NT team were able to release it. it wasn't until NT 4 beta that the NT team was able to catch up.

unfortunately, the NT team were being pressurised to do some pretty stupid things, because windows 95 - being a PROGRAM-RUNNER, *NOT* repeat *NOT* repeat *NOT* an "Operating System" (windows 95 didn't even have proper virtual memory management, for god's sake: programs were either fully-swapped-out or fully-resident, absolutely nothing in between) - was *faster* than the flagship operating system (NT).

so they were forced to move the user-space GDI implementation and its associated API into kernel-space (which completely buggered up citrix and other screen-virtualisation technology: it had to be re-added many years later as "RDP"... which was actually another company's screen-virtualisation technology, bought and re-badged... but we're talking windows 2000 by then). moving GDI into kernel-space meant two things: firstly, lots more speed; and secondly, in the NT 4.0 betas, if you moved a window off-screen it caused a BSOD, because of course there was no range-checking any more and this was all kernel-space!

many people loved the fact that NT 3.51's user-space screen driver could actually crash, leaving you with no screen... but the mouse, keyboard and the rest of the OS were still working perfectly. many sysadmins didn't bother with a reboot when that happened because they could just use keyboard short-cuts, remote logins, or pure mouse-guesswork!

the NT team did at one point also try to move printer drivers (including 3rd party ones) into kernelspace (to again avoid a userspace-kernelspace context switch... or 100). for obvious reasons that initiative didn't last long....

yeahhhh we don't hear about the history of pain that windows 95 caused within microsoft. and now, many of the people who knew what was going on have retired as millionaires on the stock options from so far back...

Comment split keyboards are fun (Score 1) 240

i had one - it was arm-rest mounted. there was only one space bar. i touch-type, so it would be like "rattle rattle rattle THUMP arse!.... rattle rattle THUMP".

now, the weirdest thing i found was that because the keyboard was mounted on the arm rests, it was *outside* my peripheral vision. it took three weeks to get used to, and i realised that at the time i clearly wasn't genuinely a touch-typist... because i had been using my peripheral vision to locate the keys! within three weeks i was back up to speed and accuracy.

yeahhh i loved that keyboard. the look on people's faces when they would come into my cubicle and see me with my feet up on the desk, 15in monitor 6 feet away in linux "console" mode at 80x60 resolution, happily using vi for programming at over 170wpm....

Comment Re:real-time adaptive video playback (Score 1) 220

Do you know which video codec you're talking about? As far as I can recall, there are a couple of "Flash video" codecs, and none of them are particularly exotic at this point. There was Sorenson Spark, which I believe was essentially H263, and VP6. These days, H264 and VP8 (WebM) are very common, considered to be improvements over previous versions, and not tied to Flash.

it's not the CODECs themselves, it's the "real-time adaptation" that's important. i don't know if you were paying attention, but at higher bandwidths you simply wouldn't notice that there was anything important going on underneath, because there would be enough bandwidth to go straight to the fastest transfer speed, with the absolute best quality data being transferred.

when the bandwidth is drastically reduced (to 10k/sec), that's when any "imperfections" in the TCP connection create a much larger - and much more noticeable - effect.... but the point is *it doesn't matter* because of the "adaptation".

basically, when flash player notices that the bandwidth is absolutely terrible, the picture quality is reduced by a factor of 16 - pixels are treated as 4x4 "blocks" - which means that the video bandwidth is drastically reduced as well.... yet the picture remains a moving one.

when the bandwidth picks up again, the reduction factor is brought down to 9 (3x3 blocks); then, if that's ok, it's brought down further; and finally, if it's genuinely all ok, the quality is brought back up to the maximum requested.
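(to be clear, this is not adobe's actual algorithm - just a rough, hypothetical sketch of the block-averaging idea described above, so you can see why a 4x4 block factor cuts the picture information by 16 and a 3x3 factor by 9:)

import numpy as np

def reduce_quality(frame, n):
    # average each n x n block: factor 16 for n=4, factor 9 for n=3
    h, w = frame.shape
    h, w = h - h % n, w - w % n                  # trim so the blocks divide evenly
    blocks = frame[:h, :w].reshape(h // n, n, w // n, n)
    means = blocks.mean(axis=(1, 3))             # one value per n x n block
    # expand back out: same frame size, but only 1/n^2 of the detail to send
    return np.repeat(np.repeat(means, n, axis=0), n, axis=1)

frame = np.random.randint(0, 256, (240, 180)).astype(float)
terrible = reduce_quality(frame, 4)              # bandwidth terrible: factor 16
recovering = reduce_quality(frame, 3)            # bandwidth picking up: factor 9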

this simply *DOES NOT HAPPEN* within a CODEC such as VP8, H264 and so on. those are *FIXED* bandwidth, *FIXED* picture-size CODECs that, if they are used, assume PERFECT conditions. yes, sure, there are supposed to be "key frames" within the streams, so that if the bandwidth drops temporarily the picture may be "frozen" until the next key frame comes along... but what if the bandwidth drops by 50% and *never recovers*? you can't watch the video in real-time, can you?

and that's the point: adobe's playback *adapts to the conditions*. no open standard that i know of has that capability, even though i know that when i last looked there were "multi-stream" extensions to H.26X being worked on. these were based on the principle that a "coarse" video was encoded at very low resolution (and very low bandwidth), then "additions" were made at ever-increasing quality (and data rates) which you could additionally ask for at the receiving end, *if* you had the available real-time bandwidth to do so. ... but i don't see that being announced in a big splash on any techie news site as having been a successful open standard developed with libre-licensed reference source code.

Comment Re:Freedom does not mean no laws (Score 1) 264

A complete absence of laws for you necessarily means a loss of freedom for me because there is nothing restraining you (or me) from removing other people's freedom.

there is indeed something restraining you: your own moral and ethical judgement. and that's really what man-made laws are there for: to catch the people who have no understanding of either morals or ethics.

the problem we have right now is that the process by which the laws are made has itself been blatantly corrupted, and there are people in positions of power who feel that they can blatantly ignore the entire legal process.

at some point ordinary american citizens - probably pressurised by the rest of the world - are going to wake up and start to demand answers. my money's on that process being inspired by and traced back to people right here on slashdot, of course.

Comment real-time video (Score -1, Redundant) 220

i don't know if anyone's really noticed, but flash's real-time adaptive video CODECs are actually incredibly good. i created a video chat site a few years back [tried red5 as the back-end server, and finally got to actually put some reality behind why i detest java. up until then i'd only known *theoretically* why java is a piss-poor language compared to the alternatives...]

anyway, leaving the back-end alone as it's a red herring: i was deeply impressed at how little bandwidth each video window could be given yet still remain audible and actually convey useful video information. i restricted each user to a paltry 10 kbytes/sec (!) of bandwidth - that's for video *and* audio - limited the window size to 240x180, and was absolutely amazed to find that the video would easily recover from drop-outs.

basically, what would happen is that during a drop-out, audio would be prioritised and video would pause. recovery of the video stream (which could be done *precisely because* i had set the bandwidth so low) would literally "unfold" before my eyes, in exactly the same way as those 1980s pop-video and children's-programme "pixellation" effects.

basically they would transmit a crude video image, then send the improvements as a second round, then a third, and so on. now, here's the thing: i have looked for "adaptive video" algorithms in the past, and whilst there is an effort to create such a thing as a public standard, it's simply completely behind the times.
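purely as an illustrative sketch (my own toy example, not adobe's protocol), that "unfold" can be modelled as a coarse first pass followed by residual layers, where only the improvement travels on the wire each round:

import numpy as np

def coarse(frame, n):
    # block-averaging: one value per n x n block, expanded back to full size
    h, w = frame.shape
    h, w = h - h % n, w - w % n
    m = frame[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    return np.repeat(np.repeat(m, n, axis=0), n, axis=1)

def progressive_layers(frame, block_sizes=(12, 6, 3, 1)):
    # yield a crude first pass, then ever-finer residuals ("improvements")
    sent = np.zeros_like(frame, dtype=float)
    for n in block_sizes:
        approx = coarse(frame, n) if n > 1 else frame
        yield approx - sent          # only the difference needs transmitting
        sent = approx

frame = np.random.randint(0, 256, (240, 180)).astype(float)
received = np.zeros_like(frame)
for layer in progressive_layers(frame):
    received += layer                # the picture "unfolds" as each layer lands
print(np.allclose(received, frame))  # True once every layer has arrived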

adobe managed it *years* ago... yet no open standard exists in common usage which comes even remotely close to successfully replicating this.

i appreciate that, technically, it's incredibly challenging to get right. even the team behind skype - when they sold skype and created the real-time video streaming company "joost" - failed after a few years and gave up.... but what people forget is that *adobe already succeeded*. ... and what has been substituted in its place? well, sure, we can do real-time video browser-to-browser.... but the assumption is "perfect conditions". perfect bandwidth. perfect connections. no drop-outs. no brown-outs. zero latency.

adobe's solution isn't perfect: i know from experience that after a few hours the real-time adaptive video stream *can* get out of sync (by over a minute in some cases), and will "recover" in a flurry of fast-forward stop-motion frames - really quite hilarious to witness. but the only other alternative that i know of which comes even *remotely* close to replicating what adobe did is *another* proprietary video codec, the one behind "zoom.us", developed by a former developer of cisco's real-time video system - a system which uses flash in some places and java in others, is dreadful and unreliable, and often has latency of 1 to 5 seconds, unlike zoom.us, which works incredibly well and has very little latency.

so i'm going to call this article out as entirely missing the point: there *really* aren't any good alternatives to the core of what flash does really, really well. the problem is that adobe should have released the entire client and server as software libre, under the LGPL, a long, _long_ time ago - it just doesn't make them any money, and they just don't have the manpower to keep on fixing the security issues any more.

"I'm not afraid of dying, I just don't want to be there when it happens." -- Woody Allen