
Comment: Maybe.... (Score 1) 86

by ledow (#48216021) Attached to: German Publishers Capitulate, Let Google Post News Snippets

So, maybe losing all your content visibility on Google was worse than them publishing a small article headline?

So, maybe, just maybe, Google's exposure was actually to your advantage?

So maybe you've been biting the hand that feeds you?

If the threat of Google doing EXACTLY what you asked for (taking your content off their site) is enough to make you back down, maybe your original intention was something other than what was stated?

Maybe you just wanted a free payment?

And maybe Google weren't being so evil in the first place?

Comment: Sigh. (Score 1) 157

by ledow (#48213921) Attached to: Austin Airport Tracks Cell Phones To Measure Security Line Wait

Erm... how do you think the traffic apps work on your satnav?

They ask you to "anonymously" contribute statistics and talk home over 3G to service centres, which spot traffic moving slowly (speed and position are easy to obtain on a satnav), mark those roads with appropriate average speeds and then transmit that back out to everyone with traffic services.

Sure, they use roadside monitors and other things as well but the "HD" traffic you might get from any large satnav provider uses exactly the same technology.
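For illustration, here's a toy sketch of that aggregation step in Python (the segment names and the 40 km/h "congested" threshold are made-up values): anonymous probes report a road segment and a speed, and the service publishes an average per segment.

```python
# Toy aggregation of anonymous probe reports into per-segment average speeds.
from collections import defaultdict
from statistics import mean

# (segment, speed in km/h) -- illustrative values only
reports = [
    ("A40-junction-3", 12.0),
    ("A40-junction-3", 18.5),
    ("M25-clockwise-14", 95.0),
]

by_segment = defaultdict(list)
for segment, speed in reports:
    by_segment[segment].append(speed)

for segment, samples in by_segment.items():
    avg = mean(samples)
    status = "congested" if avg < 40 else "flowing"
    print(f"{segment}: {avg:.1f} km/h ({status})")
```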

The question is not whether this is worrying data to collect, but exactly what portion of the collected data needs to be kept. If they hash the MACs immediately and discard the original MAC data, keeping only the hash and position data, then there's nothing to worry about.
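A minimal sketch of that "hash quickly, discard the original" idea, assuming a salted SHA-256 (the salt rotation and field names are illustrative, not a production anonymisation scheme):

```python
# Keep only a salted hash of the MAC plus sensor/time data, so a device can
# be matched between checkpoints without its real address ever being stored.
import hashlib
import os
import time

SALT = os.urandom(16)  # rotate regularly so hashes can't be linked long-term

def record_sighting(mac: str, sensor_id: str) -> dict:
    digest = hashlib.sha256(SALT + mac.encode()).hexdigest()
    # The raw MAC goes out of scope here and is never written anywhere.
    return {"mac_hash": digest, "sensor": sensor_id, "ts": time.time()}

print(record_sighting("aa:bb:cc:dd:ee:ff", "security-line-entry"))
```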

Or, you know, you could write an inflammatory article about a technology that every satnav, every shopping mall, and even festival organisers have been using for years.

Comment: Telnet (Score 2) 58

by ledow (#48212537) Attached to: Cisco Fixes Three-Year-Old Telnet Flaw In Security Appliances

Is it just me who wasn't even aware that telnet had an encrypted mode (let alone a horribly broken one)?

It's never been an issue for me, as I always switch telnet off unless the device is entirely in-house (and, there, someone sniffing the packets is a much bigger problem than the fact that they might pick up a device password by doing so).
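For illustration, a minimal sketch of why plaintext telnet is such a liability on a shared network, assuming the third-party scapy package and root privileges:

```python
# Everything a telnet session sends -- including passwords -- crosses the
# wire as raw bytes; a sniffer needs no decryption step at all.
from scapy.all import Raw, sniff

def show_payload(pkt):
    if pkt.haslayer(Raw):
        print(pkt[Raw].load)

# Capture traffic on the standard telnet port (TCP 23).
sniff(filter="tcp port 23", prn=show_payload, store=False)
```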

Honestly, we just need to kill this "protocol".

Comment: Re:I disagree. (Score 2) 139

by ledow (#48212473) Attached to: Machine Learning Expert Michael Jordan On the Delusions of Big Data

I'm not scared by the maths. It's working back from a series of 2D images to reconstruct a 3D model, with appropriate error. It's horribly complex, but it's nothing more than a time-saving calculation. It isn't a new realm of science (mathematical or otherwise).

And, again, even the example images in the introduction of the book belie the actual capabilities. The mathematics of 3D geometry are complex, yes, but well-known. Reversing them is difficult, yes, but again well-known - with appropriate error.

Taking enough photographs to be able to identify points (edge-detection, heuristics, manual placement...) in several of those photographs, and thus correlate the images well enough to build a volumetric object, is DAMN HARD. I have no doubt.
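For a flavour of that correspondence step, here's a minimal sketch assuming OpenCV (opencv-python) and two hypothetical overlapping photos, left.jpg and right.jpg. The matched keypoints are only the raw input; everything genuinely hard (triangulation, bundle adjustment, error handling) comes afterwards.

```python
# Find candidate point correspondences between two overlapping photos.
import cv2

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)        # keypoint detector + descriptor
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two views")
```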

But it cannot extrapolate the window frame hidden behind another object in a 2D painting, as that book's introductory images suggest. Computer vision is notorious in this area for making undeliverable promises. The point-clouds that result have to be cleansed and interpreted, and information not given to the computer cannot be inferred (of course it can't... why would it be? But that's where the credibility of the claims is at stake).

Taking one example from the book, where a 2D painting is converted to a 3D scene: sure, the window frame that's obscured by a foreground object probably DOES extend symmetrically and in the same colour, but you cannot know that - and so the error creeps back in, unaccounted for, when humans "fix" the things the computer can't.

Yes, it saves time if you want to get a 3D sculpture into your computer, or recreate a crime scene from evidence, but it requires tweaking and a lot of human work - it's back in the realm of the time-saving tool, rather than a whole new paradigm of (as the article is originally about) machine learning and automated extrapolation. The acid test is how admissible this stuff would be in court: though a lot of it would be provable, the error margins would need to be stated, and then it's not as clear-cut as first impressions might suggest.

CV is a horribly complex task that performs all kinds of useful functions. But it isn't, and can't yet be, anything beyond a tool that speeds up human calculations. I guarantee that even an average artist could recreate that scene in 3D more accurately than a computer could (I actually have a personal fondness for those "we've layered a 2D image over a sidewalk/car to make it look like there's a black hole, or that the car isn't there" images).

And, again, it's the usefulness that's limited in scope, and the automation that's only doing the legwork for a human-led interpretation.

CV is maths. That's the end of it (don't be insulted... similarly, quantum physics is "just maths"). Horribly complex maths, with associated error. It gives us useful answers when we apply it. But, as the article points out, we need to apply it - or design something that will apply it in a particular circumstance.

This is vastly different from the claims that the CV industry makes, and from the illustrations they choose to adorn their books. That's why CV comes up in the topic of machine learning. The machine isn't learning, it isn't thinking, it isn't extrapolating, it isn't guessing; it's doing lots of maths very fast that we could do ourselves if we had the time. So its usefulness extends only as far as a human is willing to work out how to apply it.

And, at the end of the day, when you want to scan in a 3D structure, chances are that some laser distance-based measurement is more accurate and less easily "misinterpreted" by the computer than anything it might get from someone running a camera around it. That's why most of those 3D reconstruction projects build the point-cloud with a laser measuring device first, rather than relying on the interpretation of 2D images to infer it.

Comment: Re:I disagree. (Score 5, Interesting) 139

by ledow (#48211423) Attached to: Machine Learning Expert Michael Jordan On the Delusions of Big Data

The problem with computer vision is not that it's not useful, but that it's sold as a complete solution comparable to a human.

In reality, it's only used where it doesn't really matter.

OCR - mistakes are corrected by spellcheckers or humans afterwards.

Mail systems - sure, there are postcode errors, but they result in a slight delay, not a catastrophic failure of the system.

Structure from motion - fair enough, but it's not "accurate" and most of that kind of work isn't to do with CV as much as actual laser measurements etc.

Photo stitching - I'd be hard pushed to see this as more than a toy. It's like a Photoshop filter: sure, it's useful, but we could live without it or do it manually. Its biggest use is probably in mapping, where it's a time-saver and not much else. It doesn't work miracles.

Number plate recognition - well-defined formats on tuned cameras aimed at the right point, and I guarantee there are still errors. The systems I've been sold in the past claim 95% accuracy at best. Like OCR, if the number plate is read slightly wrongly, there are fallbacks before you issue a fine to someone based on the image.
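To put that 95% in perspective, a back-of-envelope calculation (the daily volume is an assumed figure):

```python
# At 95% per-read accuracy, even a modest camera produces a steady
# stream of misreads -- hence the fallbacks before any fine goes out.
plates_per_day = 10_000   # illustrative assumption
accuracy = 0.95
print(f"~{plates_per_day * (1 - accuracy):.0f} misreads per day")  # ~500
```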

Face detection is a joke in terms of accuracy. If we're talking about biometric logon, it's still a joke. If we're talking about working out if there's a face in-shot, still a joke. And, again, not put to serious use.

QR scanners - that I'll give you. But it's more to do with barcode technology we had 20 years ago, and a very well-defined (and heavily error-corrected) format.

Pick-and-place rarely relies on vision alone. There are much better ways of making sure something is aligned that don't come down to CV (and, again, they usually involve actually measuring rather than just looking).

I'll give you medical imaging - things like MRI and microscopy are greatly enhanced by CV, and it's the only industry I know of where a friend with a CV doctorate has been hired. Counting luminescent genes/cells is a task easily done by CV because, again, perfect accuracy is not key. I can also refer you to my girlfriend, who works in this field (though not in CV) and can show you how often the most expensive CV-using machine in the hospital gets it catastrophically wrong - hence there's a human to double-check.

CV is, hence, a tool. Used properly, you can save a human time. That's the extent of it. Used improperly, or relied upon to do the work all by itself, it's actually not so good.

I'm sorry to attack your field of study; it's a difficult and complex area, as I know myself, being a mathematician who adores coding theory (i.e. I can tell you how and why a QR code works even if large portions of the image are broken, or how Voyager keeps communicating despite interference of an unbelievable magnitude).
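For a flavour of how that works: QR codes actually use Reed-Solomon codes, which recover whole damaged regions, but the Hamming(7,4) code below is the simplest single-error-correcting cousin of the same idea, small enough to sketch in full.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit can be located and repaired.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, or 0
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the damaged bit back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # simulate one damaged bit
assert hamming74_decode(word) == [1, 0, 1, 1]
print("single-bit damage corrected")
```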

The problem is that, like AI, practical applications run into tool territory: saving a human from a laborious, repetitive task and helping that task along, but unable to replace the human in the long run or operate entirely unsupervised. Meanwhile, the headlines tell us that we've invented yet another human brain - claims so vastly untrue as to be laughable.

What you have is an expertise in image manipulation. That's all CV is. You can manipulate the image so it's more easily read by a computer, which can then extract some of the information it's after. How the machine deals with that, or how your manipulations cope with different scenarios, requires either a constrained environment (QR codes, number plates) or constant human intervention.

Yet it's sold as something that "thinks" or "sees" (and thus interprets the image) like we do. It's not.

The CV expert I know has code in an ATM-like machine in one of the South American countries. It recognises dollar bills and things like that. Useful? Yes. Perfect? No. Intelligent? Far from it. From what I can tell, most of the system is things like edge detection (i.e. image manipulation via a matrix, not unlike every Photoshop-compatible filter going back 20 years), derived heuristics and error margins.
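That "matrix" really is just a small kernel slid across the pixels. A minimal sketch with numpy, using the classic Sobel horizontal-gradient kernel:

```python
# Edge detection as plain matrix arithmetic: convolve a 3x3 kernel
# over the image, and large responses mark intensity edges.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def convolve2d(image, kernel):
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Tiny test image: dark left half, bright right half -> one vertical edge.
img = np.hstack([np.zeros((5, 5)), np.full((5, 5), 255.0)])
print(np.abs(convolve2d(img, SOBEL_X)).astype(int))  # big values at the edge
```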

Hence, "computer vision" is really a misnomer, where "Photoshopping an image to make it easier to read" is probably closer.

Comment: Re:It would be interesting (Score 1) 121

by ledow (#48205073) Attached to: Xerox Alto Source Code Released To Public

Er... Windows 3.11 had the same minimum spec as Windows 3.1: 2MB of RAM and a 15MB hard disk. So the point still stands.

And I have personally contributed to a project that brought Linux networking and TONS of extra features that we'd have died for in the 3.11 era to a single, bootable, 1.44MB floppy disk.

Sure, Windows 95 upped the ante, but in terms of what you were given, was it really that much of an advance? If anything, that's where things started to go downhill... a networking stack, yes. Firewalling of any kind? No.

And Windows 95: "To achieve optimal performance, Microsoft recommends an Intel 80486 or compatible microprocessor with at least 8 MB of RAM.".

I think you're forgetting how much you could get done in 2MB of RAM. Hell, Windows 95 can't even boot if you have 512MB - it was never designed to have that much RAM, EVER. I'm just not sure there was ever a feature worth that amount of system resources. At this moment in time, my Bluetooth tray icon takes more RAM than Windows 3.1 needed to load everything. I can't see the justification for that at all.

CPU speed, yes: devices nowadays shove data through a LOT faster than they ever used to, so you need to be able to keep up. Disk space, possibly. But RAM usage? Why should a Bluetooth icon take more RAM than an entire former OS?

Comment: Re:It would be interesting (Score 1) 121

by ledow (#48202847) Attached to: Xerox Alto Source Code Released To Public

The chances of the code even compiling any more are slim - let alone of the required hardware and devices being present in a PC.

You're looking at a full emulation environment, which would kill the performance anyway. It'd still fly on a modern PC even so - but I can remember entire games fitting in 16KB of RAM, and Windows graphical interfaces for which you needed to upgrade to 2MB of RAM just to run them.

Of course they'd be fast on modern architecture. But they won't run directly, and by the time you got them to run, you could have written a basic GUI that did the same in a language of your choice.

The problem is not that we should be running ancient systems as they were back then. It's asking ourselves why Windows needs a gig of RAM just to boot properly, when the user "experience" it provides is a desktop background bitmap and a clicky button in the corner. Windows 3.1 could do that in 2MB of RAM.

Comment: Sigh (Score 2) 209

by ledow (#48193617) Attached to: More Eye Candy Coming To Windows 10

Because what I want in an enterprise-class operating system, what I desire more than anything else, what I cannot live without, what my users are crying out for, what I will pay good money just to have...

...is more shit jumping out at me on the screen for no good reason.

Gimme WinFS and we'll talk. Gimme complete application isolation and I'll think about it. Otherwise, honestly, you're just papering over the cracks.

Comment: Re:Simple (Score 3, Interesting) 56

by ledow (#48167787) Attached to: Making Best Use of Data Center Space: Density Vs. Isolation

I have just put in a Blade / VM configuration at a school (don't ask what they were running before, you don't want to know).

Our DR plan is that we have an identical rack at another location with blades / storage / VMs / etc. on hot standby.

Our DDR (double-disaster recovery!) plan is to restore the VMs we have to somewhere else, e.g. a cloud provider, if something prevents us operating on the first plan.

The worries I have are that storage is integrated into the blade server (a single point of failure - SPOF - on its own, but at least we have multiple blade servers mirroring that data), and that we are relying on a single network to join them.

The DDR plan is literally there for "we can't get on site" scenarios, and involves spinning up copies of instances on an entirely separate network, including external numbering. It's not a big deal for us - we are merely a tiny school - but if even we're thinking about that and seeing those SPOFs, you'd think someone writing an article for Slashdot would see it too.

All the hardware in the world is useless if the fibre going into the IT office breaks, or a "single" RAID card falls over (or the RAID merely degrades, affecting performance). It seems pretty obvious: two of everything, minimum. And thus two ways to get to everything, minimum.

If you can't count two or more of everything, then you can't (in theory) safely smash one of anything and continue. Whether that's a blade server, power cord, network switch, wall socket, building generator, or whatever, it's the same. And it's blindingly obvious why that is.
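The arithmetic behind "two of everything" is easy to sketch (the 99% per-unit availability is an illustrative assumption, and real failures are rarely fully independent):

```python
# With independent failures, redundant copies multiply their downtime away.
def combined_availability(per_unit: float, copies: int) -> float:
    return 1 - (1 - per_unit) ** copies

for n in (1, 2, 3):
    print(f"{n} unit(s) at 99% each -> {combined_availability(0.99, n):.4%}")
# 1 unit:  99.0000% (roughly 3.7 days of downtime a year)
# 2 units: 99.9900% (under an hour)
# 3 units: 99.9999%
```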

Comment: Simple (Score 4, Insightful) 56

by ledow (#48167239) Attached to: Making Best Use of Data Center Space: Density Vs. Isolation

Put all your eggs in one basket.
Then make sure you have copies of that basket.

If you're really worried, put half the eggs in one basket and half in another.

We need an article for this?

Hyper-V High Availability Cluster. It's right there in Windows Server. Other OSes have similar capabilities.

Virtualise everything (there are a lot more advantages than mere consolidation - you have to LOVE the boot-time on a VM server as it doesn't have to mess about in the BIOS or spin up the disks from their BIOS-compatible modes, etc.), then make sure you replicate that to your failover sites / hardware.

Comment: Re:7th grade? (Score 1) 323

by ledow (#48167125) Attached to: Court Rules Parents May Be Liable For What Their Kids Post On Facebook

In the UK:

The age of criminal responsibility can be as low as 10 - James Bulger's killers, for example, were held personally liable for their actions. This is the "old enough to know" law.

Contract-signing is 16, so a "contract" with Facebook is null and void, as Facebook never bothered to check the user was over 16. Facebook should terminate the account as soon as it is made aware, since it is providing service on a void contract.

Financial responsibility, parental responsibility to ensure they are in education, employment or training, and an awful lot of other responsibilities to a child last until they are 18.

In the US, it's a bit different. Hilariously, in the UK, you can legally be married, have sex, have children, drive a car, smoke, drink and sign a contract (hopefully not all at the same time) while still being under parental responsibility because you're not 18.

Comment: Re:Simple solution: bring cookies. (Score 1) 405

by ledow (#48157809) Attached to: Flight Attendants Want Stricter Gadget Rules Reinstated

I live in a country with guaranteed minimum wage.

If the guy sweeping the street doesn't get tips, I don't see why the waitress he sweeps past every morning should. They both receive a guaranteed minimum. If that's not enough, that's a reason to campaign for minimum-wage rises - not to tip, charity-like, out of sympathy.

If you're going to tip on the basis of hard work, or sympathy for their plight, tip nurses, tip doctors, tip policemen (you can't, but that's another matter), tip firefighters, tip sewer workers, tip the guy that sweeps the streets cleaner than any other. Don't tip because you feel sorry they are in such a bad job with an employer who doesn't appreciate them.

You say yourself: "tips are often used to allow employers to underpay people"

If you didn't tip, they'd have to pay a proper wage.
