Comment Re:Learning to program by Googling + Trial & E (Score 0) 577

This is why so much poor software exists in the world.

I can only imagine what nightmare code is being generated by such efforts.

Yes, anyone can code, just as anyone can build a house. Whether or not the house collapses immediately, whether it has any real value, or by any other measure still depends on the skill of the builder, just as in software.

Garbage in -> Garbage out,
applies to the code as well as the data.

-AB

honestly, i can say that if it gets the desired results, who cares? it's going to be maintainable by that person, because they're the one who wrote it. if this were a team effort, however, that would be a completely different matter. if there were a requirement to have a maintenance contract in place, for the long-term success of the code and the project, that would again be an entirely different matter.

i *do* actually use the technique that the author uses - i have been using it successfully for over 30 years. however, during that time, i have added "unit tests", source code revision control, project management, documentation, proper comments, proper code structure, coding standards and many more things which make for a successfully *maintainable* project.

whilst such things are most likely entirely missing from the projects that this individual is tackling, those projects are also likely to be ones that *don't need* such techniques.

in essence: none of that de-legitimises the *technique* of "programming by random research". it's a legitimate technique that, i can tell you right now, saves a vast amount of time. understanding comes *later* (if indeed it is needed at all), usually by a process of "knowledge inference". to be able to switch off "disbelief" and "judgement" is something that i strongly recommend that you learn to do. if you've been trained as a software engineer, adding "programming by random research" to your arsenal of techniques will make you much more effective.

Comment Re:You know there's a problem... (Score 0, Troll) 577

...when you need to google the hex representation of 'red'. *much* better to understand the encoding, and it certainly isn't hard, nor does it require tricky math. it's literally RRGGBB

you are completely and utterly missing the point, by a long, long margin, and have made a severe judgement error. the assumption you have made is that "understanding" is required for "successful results".

believe it or not, the two are *not* causally linked. for a successful counter-example, you need only look at genetic algorithms and at evolution itself.

did you know that human DNA contains a representation of micro-code, as well as a factory which can execute assembly-level-like "instructions"? i'm not talking about CGAT, i'm talking about a level above that. to ask how on earth such a thing "evolved" is entirely missing the point. it did, it has, it works, and who cares? it's clearly working, otherwise we would not be here - on this site - to be able to say "what a complete load of tosh i am writing"!

what this person has done is to use their creative intelligence as well as something called "inference". they've *inferred* that if enough google queries of "what is hex HTML for red" come up with a particular number and it's always the same number in each result, then surprise-surprise it's pretty much 100% likely that that's the correct answer.

*later on* they might go "hmmm, that's interesting, when i search for "red" it comes up with FFnnnn, when i search for "green" it comes up with nnFFnn" and then they might actually gain the understanding that you INCORRECTLY believe is NECESSARY to achieve successful results.
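
(to make the RRGGBB point concrete, here's a minimal python sketch - purely illustrative, and obviously not how the person in the article worked - of what those search results actually encode:)

def hex_to_rgb(colour):
    # "#FF0000" -> (255, 0, 0): two hex digits each for red, green and blue
    colour = colour.lstrip("#")
    return tuple(int(colour[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#FF0000"))  # red   -> (255, 0, 0)
print(hex_to_rgb("#00FF00"))  # green -> (0, 255, 0)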

but please for goodness sake don't make the mistake of assuming that understanding is *required* to achieve successful results: it most certainly is not.

Comment Re:Programming (Score 0) 577

Programming -- I don't think that word means what she thinks it means.

actually... i believe it's you who doesn't understand what programming is. programming is about "achieving results". whether the results are successful has absolutely NOTHING to do with the method by which they were achieved, and that is verifiable by either (a) unit tests or (b) a system test.
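
(a trivial illustration of that point, in python - hypothetical names, just a sketch: the test below only checks the *result*; it has no idea whether the value was derived from first principles or looked up by googling.)

import unittest

def red_hex():
    # however this value was arrived at - textbook, google, trial-and-error -
    # the test neither knows nor cares
    return "#FF0000"

class TestResultsOnly(unittest.TestCase):
    def test_red_hex(self):
        self.assertEqual(red_hex(), "#FF0000")

if __name__ == "__main__":
    unittest.main()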

so if this person has found an unorthodox and successful way to do programming (which, by the way, is *exactly* how i do pretty much all of the programming i've ever done, including in programming languages that i've never learned before), then *so what*??

just because *you* memorise all the APIs, go through all the books, go through all the tutorials, go through all the reference material and then re-create pretty much everything that's ever been invented from scratch because otherwise you would not feel "confident" that it would "work", does NOT mean that there isn't an easier way.

there are actually two different types of intelligence:

(a) applied (logical) intelligence. this is usually linear and single-step.
(b) random (chaotic) intelligence. this is usually trial-and-error and is often parallelisable (evolution, bees, ants and other creatures)

an extreme variant of (b) is actually *programmable*. it's called "genetic algorithms".

personally i find that method (a) is incredibly laborious and slow, whereas with method (b), if you write good enough unit tests and spend a significant amount of time shortening the "testing" loop, you get results very, very quickly. genetics - darwinian selection - is a very good example: we don't "understand" each iteration, but we can clearly see that the "results" are blindingly obviously successful.
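
(for anyone who hasn't seen method (b) in code: a toy genetic-algorithm sketch in python, entirely illustrative. the fitness function plays the role of the "unit test", the mutation step is the random/chaotic part, and nobody needs to "understand" any individual mutation for the whole thing to converge.)

import random

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # the "unit test": how many characters already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # the random, chaotic step: change one character at random
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(parent) < len(TARGET):
    child = mutate(parent)
    # darwinian selection: keep whichever candidate passes more of the "test"
    if fitness(child) >= fitness(parent):
        parent = child
print(parent)  # eventually: "hello world"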

by applying the technique that the original article mentions, i've managed to teach myself actionscript in about 48 hours, and java was about the same amount of time. i knew *nothing* about the APIs nor the full details of *either* language... yet i was able to successfully write the necessary code for a project that was based on red5 server and a real-time flash application. it was up and running within a couple of weeks.

in short: to dismiss the method described in the article as having "nothing to do with programming whatsoever" is complete rubbish. it's a proven technique that gets results, and, you know what? the most critical insight of the article is that it's *not* people who are "good at maths" who are good at achieving results with this technique: it's people who are creative and who understand language.

Comment newshell.exe (Score 3, Interesting) 340

actually... newshell.exe, as it was known, was written by the NT team, when Windows NT 3.1 was new and NT 3.51 was in beta. the windows 95 team - who were universally hated by the NT team - legitimately "stole" newshell.exe from the [internally and legitimately accessible] source repository of the NT team at the time, and released it as the default shell of windows 95 *before* the NT team were able to release it. it wasn't until the NT 4 betas that the NT team was able to catch up.

unfortunately, the NT team were being pressurised to do some pretty stupid things, because windows 95 - a PROGRAM-RUNNER, *NOT* repeat *NOT* an "Operating System" (windows 95 didn't even have proper virtual memory management, for god's sake: programs were either fully swapped out or fully resident, absolutely nothing in between) - was *faster* than the flagship operating system (NT).

so they were forced to remove the user-space GDI implementation and its associated API, moving it into the kernel (which buggered up citrix and other screen virtualisation technology completely: it had to be re-added many years later as "RDP"... actually another company's screen virtualisation technology, bought and re-badged... but we're talking windows 2000 by then). removal of the user-space GDI implementation meant two things: firstly, lots more speed, and secondly, if you moved a window off-screen it caused a BSOD in the NT 4.0 betas, because of course there was no range-checking any more and this was all kernel-space!

many people loved the fact that NT 3.51's user-space screen driver could actually crash, leaving you with no screen... but with the mouse, keyboard and the rest of the OS still working perfectly. many sysadmins didn't bother with a reboot when that happened, because they could just use keyboard shortcuts, remote logins, or pure mouse-guesswork!

the NT team did at one point also try to move printer drivers (including 3rd party ones) into kernelspace (to again avoid a userspace-kernelspace context switch... or 100). for obvious reasons that initiative didn't last long....

yeahhhh we don't hear about the history of pain that windows 95 caused within microsoft. and now, many of the people who knew what was going on have retired as millionaires on the stock options from so far back...

Comment split keyboards are fun (Score 1) 240

i had one - it was arm-rest mounted. there was only one space bar. i touch-type, so it would be like "rattle rattle rattle THUMP arse!.... rattle rattle THUMP".

now, the weirdest thing i found was that because the keyboard was mounted on the arm rests, it was *outside* of my peripheral vision. it took three weeks to get used to, and i realised that, up to that point, i clearly hadn't genuinely been a touch-typist... because i had been using my peripheral vision to locate the keys! within three weeks i was back up to speed and accuracy.

yeahhh i loved that keyboard. the look on people's faces when they would come into my cubicle and see me with my feet up on the desk, 15in monitor 6 feet away in linux "console" mode at 80x60 resolution, happily using vi for programming at over 170wpm....

Comment Re:real-time adaptive video playback (Score 1) 220

Do you know which video codec you're talking about? As far as I can recall, there are a couple of "Flash video" codecs, and none of them are particularly exotic at this point. There was Sorenson Spark, which I believe was essentially H263, and VP6. These days, H264 and VP8 (WebM) are very common, considered to be improvements over previous versions, and not tied to Flash.

it's not the CODECs themselves, it's the "real-time adaptation" that's important. i don't know if you were paying attention, but at higher bandwidths you simply wouldn't notice that there was anything important going on underneath, because there would be enough bandwidth to go straight to the fastest transfer speed with the best-quality data being transferred.

when the bandwidth is drastically reduced (to 10 kbytes/sec), that's when any "imperfections" in the TCP connection create a much larger - and much more noticeable - effect... but the point is *it doesn't matter*, because of the "adaptation".

basically, when flash player notices that the bandwidth is absolutely terrible, the picture quality is reduced by a factor of 16 - pixels are treated as 4x4 "blocks" - which means the video bandwidth is drastically reduced as well... yet the picture remains a moving one.

when the bandwidth picks up again, the reduction factor is brought down to 9 (3x3 blocks), then, if that's ok, brought down further, and finally, if it's genuinely all ok, the quality is brought back up to the maximum requested.
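
(a sketch of that adaptation ladder in python - purely illustrative, with made-up behaviour; adobe's actual algorithm is proprietary and certainly more subtle:)

def adapt(block_size, bandwidth_ok):
    # drop straight to the coarsest setting (4x4 blocks, a 16x reduction)
    # when the link is bad; recover one rung at a time (4x4 -> 3x3 -> 2x2 ->
    # full quality) while the link stays ok
    if not bandwidth_ok:
        return 4
    return max(1, block_size - 1)

block = 1
for link_ok in (True, False, True, True, True, True):
    block = adapt(block, link_ok)
    print(block)  # 1, 4, 3, 2, 1, 1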

this simply *DOES NOT HAPPEN* within a CODEC such as VP8, H264 and so on. those are *FIXED* bandwidth, *FIXED* picture size CODECs that, as used, assume PERFECT conditions. yes, sure, there are "key frames" within the streams, so that if the bandwidth drops temporarily then the picture may be "frozen" until the next key frame comes along... but what if the bandwidth drops by 50% and *never recovers*? you can't watch the video in real-time, can you?

and that's the point: adobe's playback *adapts to the conditions*. no open standard that i know of has that capability, even though i know that when i last looked there were "multi-stream" extensions to H.26X being worked on. these were based on the principle that a "coarse" video was encoded at very low resolution (and very low bandwidth), then "additions" were made at ever-increasing quality (and data rates) which you could additionally ask for at the receiving end, *if* you had the available real-time bandwidth to do so. ... but i don't see that being announced in a big splash on any techie news site as having been a successful open standard developed with libre-licensed reference source code.
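
(that "multi-stream" idea is roughly what layered / scalable coding does. a toy python sketch, with completely made-up layer sizes - not any real standard's numbers: the receiver asks for the base layer plus as many enhancement layers as its measured bandwidth will cover.)

# each layer has a name and a (made-up) cost in kbytes/sec
LAYERS = [
    ("base, quarter resolution", 8),
    ("enhancement 1, half resolution", 16),
    ("enhancement 2, full resolution", 32),
    ("enhancement 3, full quality", 64),
]

def layers_to_request(available_kbytes_per_sec):
    chosen, budget = [], available_kbytes_per_sec
    for name, cost in LAYERS:
        if cost > budget:
            break
        chosen.append(name)
        budget -= cost
    return chosen

print(layers_to_request(10))   # terrible link: only the coarse base layer
print(layers_to_request(200))  # plenty of bandwidth: every layer, full quality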

Comment Re:Freedom does not mean no laws (Score 1) 264

A complete absence of laws for you necessarily means a loss of freedom for me because there is nothing restraining you (or me) from removing other people's freedom.

there is indeed something restraining you: your own moral and ethical judgement. and that's really what man-made laws are there for: to catch the people who have no understanding of either morals or ethics.

the problem we have right now is that the process by which the laws are made has itself been corrupted, and there are people in positions of power who feel that they can blatantly ignore the entire legal process.

at some point ordinary american citizens - probably pressurised by the rest of the world - are going to wake up and start to demand answers. my money's on that process being inspired by and traced back to people right here on slashdot, of course.

Comment real-time adaptive video playback (Score 5, Interesting) 220

i don't know if anyone's really noticed, but flash's real-time adaptive video CODECs are actually incredibly good. i created a video chat site a few years back [tried red5 as the back-end server, and finally got to actually put some reality behind why i detest java. up until then i'd only known *theoretically* why java is a piss-poor language compared to the alternatives...]

anyway, leaving the back-end alone as it's a red herring: i was deeply impressed at how little bandwidth each video window could be given yet still remain audible and actually convey useful video information. i restricted each user to a paltry 10 kbytes/sec (!) of bandwidth - that's for video *and* audio - limited the window size to 240x180, and was absolutely amazed to find that the video would easily recover from drop-outs.

basically, what would happen is that during a drop-out, audio would be prioritised and video would pause. recovery of the video stream (which was possible *precisely because* i had set the bandwidth so low) would literally "unfold" before my eyes, exactly like those 1980s pop-video and children's-programme "pixellation" effects.

basically they would transmit a crude video image, then send the improvements as a second round, then a third, and so on. now, here's the thing: i have looked for "adaptive video" algorithms in the past, and, whilst there is an effort to create such a thing as a public standard, it's simply completely behind the times.
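
(what that "unfolding" looks like in principle - a toy python sketch of sending a crude pass first and refining it in later rounds. one-dimensional "pixels" just to keep it short, and nothing like adobe's real wire format:)

def coarse_pass(pixels, block):
    # replace each block of pixels with its average: the crude first round
    out = []
    for i in range(0, len(pixels), block):
        chunk = pixels[i:i + block]
        out += [sum(chunk) // len(chunk)] * len(chunk)
    return out

image = [10, 12, 11, 13, 200, 210, 205, 220]  # a toy 8-"pixel" frame

# the sender transmits ever-finer approximations; the receiver's picture
# "unfolds" as each round arrives
for block in (4, 2, 1):
    print(coarse_pass(image, block))
# block=4: [11, 11, 11, 11, 208, 208, 208, 208]
# block=2: [11, 11, 12, 12, 205, 205, 212, 212]
# block=1: the original frame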

adobe managed it *years* ago... yet no open standard exists in common usage which comes even remotely close to successfully replicating this.

i appreciate that, technically, it's incredibly challenging to get right. even the team behind skype - after they sold up and created the real-time video streaming company "joost" - failed after a few years and gave up... but what people forget is that *adobe already succeeded*. what has been substituted in its place? well, sure, we can do real-time video browser-to-browser... but the assumption is "perfect conditions". perfect bandwidth. perfect connections. no drop-outs. no brown-outs. zero latency.

adobe's solution isn't perfect: i know from experience that after a few hours the real-time adaptive video stream *can* get out of sync (by over a minute in some cases), and will "recover" in a flurry of fast-forward stop-motion frames. really quite hilarious to witness. but the only other alternative i know of which comes even *remotely* close to replicating what adobe did is *another* proprietary video codec, behind "zoom.us", developed by a former developer of cisco's real-time video system - a system which uses flash in some places and java in others, is dreadful and unreliable, and often has latency of up to 1..5 seconds. unlike zoom.us, which works incredibly well and has very little latency.

so i'm going to call this article out as entirely missing the point: there *really* aren't any good alternatives to the core of what flash does really, really well. the problem is that adobe should have released the entire client and server as software libre under the LGPL a long, _long_ time ago, because it just doesn't make them any money, and they just don't have the manpower to keep on fixing the security issues any more.

Comment winter soldier, zola's algorithm (Score 5, Interesting) 264

whilst others may quote george orwell's 1984, philip k dick, V for Vendetta, minority report and so on, i'm reminded of the more recent film captain america: the winter soldier, in which a swiss nazi/hydra scientist, permitted to work in the US after the 2nd world war, creates an "algorithm" that reads people's online digital fingerprints and predicts whether they are likely to be a threat (to hydra's "new world order"); the results are used to murder them... *before* they can act.

the justifications for such action - delivered by the character played by robert redford - sound so completely sane and rational that it's genuinely hard, rationally, to come up with a counter-argument. questions are asked such as "what if we could stop terrorists before they act?", and to be absolutely honest the responses from the other characters were really not that convincing: they sounded lame in their "emotive" and "moral conscience" justifications.

and that's really illustrative of what we're seeing here. these films merely reflect to us what's *actually* going on. these films are pointing out to us that there are *genuinely* people out there who can, with no moral conscience whatsoever and with a blatant disregard for the spirit of the U.S. Constitution, use purely rational logic to justify the removal of freedom and even of life itself.

the problem is, i feel, that the founding fathers had just been through a war that tore what is now known as the U.S. apart: the lesson was burned into their minds, and it brought together people of good conscience to make sensible and far-sighted commitments, in the form of "The Constitution".

by contrast, i cannot honestly say that i can even guess at what truly drives the current power-hungry people who make decisions like the ones that they're making right now. we have people like bruce schneier "calling out" their "security theatrics", but that's just a symptom, not the underlying motivation. we see glimpses that something terribly strange is going on - https://www.youtube.com/watch?... - but it's sufficiently orwellian that even i have a hard time comprehending the implications.

so help me out here: someone please help me to understand why there are people in the world's leading nation - the one that all others look up to - who would blatantly disregard the principles on which the U.S. Constitution is founded.

Comment quad-bike designed a few years ago (Score 1) 80

this isn't a new concept: there was a quad-bike i saw a few years ago with an amazing 4-wheel double-wishbone suspension that could articulate at least 2 feet per wheel, independently. watching the videos of the rider tipping the handlebars side-to-side was particularly interesting, because when they did so all four wheels leaned side-to-side as well (because of the double wishbones). can anyone remember which company made that quad-bike? the demo videos they did of going at about 10mph over 1ft-high rocks were pretty damn impressive.

Comment efficiency... (Score 1) 248

didn't we just have an article posted on here where someone pointed out that the end-to-end efficiency of charging a mobile phone is something like *16* percent? ... so why is bill gates investing in the area of least efficiency? it makes me wonder, y'know - when people get a lot of money (like google throwing money at project ara to help create and entrench existing monopoly positions around the UniPro standard), they often don't think "how can this problem be solved in a way that *doesn't* need a lot of money?" not so as to be stingy, but so that creativity is applied instead of brute force, if you know what i mean. just because you *can* solve the *production* of energy doesn't mean that you shouldn't be looking at solving the reduction of energy *consumption*.

Comment $4.3 billion (Score 4, Insightful) 186

wow fuck. imagine how much advancement in software libre could be had for $4.3 billion if the contract had been awarded. hell, even 1% of that would make a big fucking difference. someone - such as the gnumed developers, to take even one random example - could, with help, have developed a medical records system for ohhh i dunno... the U.S. Dept of Defense, with that kind of money. just to take a random example, y'ken.

Submission + - Open Hardware Team successfully replicating Tesla inventions->

lkcl writes: A small team has successfully overcome the usual barrier to replicating one of Tesla's inventions (death threats and intimidation) by following Open Hardware development practices, encouraging other teams world-wide to replicate their work. Their FAQ and several other reports help explain that the key is Schumann resonance: "tuning" the device to the earth's own EM field and harvesting it as useful electricity. Whilst it looks like it's going mainstream, the real question is: why has it taken this long, and why has an Open Hardware approach succeeded where other efforts have not?
Link to Original Source
