
Comment Re:KDBus - another systemd brick on the wall (Score 4, Interesting) 232

Systemd is one of those things that people know will end in disaster. Sure, it works at the moment. But a personality will jump into it, or a bug will catch up with its design, or something else. And it will all come crashing down.

What bugs me about systemd is not the idea behind it. It's the implementation. Using cgroups and other kernel-provided features, it's able to provide functionality that we don't have elsewhere. But rather than breaking that functionality down into replaceable parts, using "old" methods for some things while they're gradually replaced with "new" ones, it does everything in one lump.

It's the all-or-nothing nature of systemd that I hate. There's no reason it can't be done in some other way. There's no reason that, even at a base level, you can't write scripts that do the same as it does - for all of its functions, but also for parts of them. As it stands, it's not modular, not changeable; it's just a lump of code that you either accept having complete control of your machine, or not. And I don't.

Honestly, I'm waiting for the crash-and-burn moment at which someone steps up, gives us the same features, using predictable, modular code or even scripts, and we can put in the bits we like and leave out the bits we don't like and replace any bit and NOBODY will know or care that we've done that.

Comment Re:Okay (Score 1) 74

The biggest edits I ever did on Wikipedia, many years ago now, were to the articles about ZX Spectrum games.

I spent hours loading up games in emulators, capturing screenshots, writing out information, etc. Most of the articles for those games existed already; I just did things like linking the developers and publishers, categorising the articles, and adding screenshots where they were missing.

By a year later, every screenshot I'd done had been removed. Not because of copyright - but because when I'd first done them, I'd tagged them as per the required copyright tags (e.g. fair use, etc.). I'd spent forever putting all the tags on after being told to for one article. The next month, my images were removed because a new tag had been introduced and I hadn't updated the images with it. So I updated the tags. Repeat ad infinitum for nearly a year. Every time: warnings about tags, copyright-tag bots spamming my talk page, new tags popping out of nowhere and serving no new purpose, but those same bots stripping any images that did not have them.

In the end, I gave up. I stopped editing. I stopped categorising. I stopped screenshotting. All my screenshots (despite being perfectly fine for a year while I was tagging them) disappeared within a month. Most of those articles never got even a title screenshot back, and are now either plain text or the entire article is gone.

And every "new" game article I added was removed for being "non-notable", when tiny little indie game articles stayed up for years, and the article were about huge, mainstream, industry-changing games.

Sorry, but my time and effort was wasted, not by fans of the games, readers of the articles, or even the article curators. Just by random paranoid spamming bots and people who - at first - I presumed were editors and moderators but actually were most likely just random people who wanted to criticize and break the articles for their own stats(?), I don't know.

All that happened is that the articles turned to dust and rotted over the years while the talk pages filled up with arguments.

Comment Google+ (Score 4, Insightful) 359

You wanted to compete with Facebook. Which you took to mean that I should be shoved onto it forcibly even though I already had a fully-functioning social network with all my details, photos and friends plugged in. You thought I should be badgered into submission until I moved all that content over, and have to go via roundabout routes to opt out of this stuff - on a GMail account I'd had since the first days of invite-only accounts.

And you didn't listen or care at the time. If you're that forceful about getting the information out of me, imagine how forceful you'll be when I try to get that information on me back.

Wouldn't touch it with a bargepole (despite being quite Google-centric in my services otherwise) just because of the "YOU MUST SIGN UP NOW" attitude.

If you'd just done what you did with Google Mail - quietly adding in features (e.g. Google Talk, Google Drive, Google Calendar, etc.) that I can choose to use as I see fit, stumble across as I need them, and use without being required to fill out EVERY DAMN BOX every time - then it would have taken off much more nicely. And if I don't want to use them... well, they're still there any time I do.

Fact is, my Google Account is still the same one and STILL does not have a Google+ profile. Not even an image. Because, sorry, it doesn't work that way. I choose to use the service, you don't choose who must use it. When you tried to force me to fill out and use that part of my Google profile, I did everything I could NOT to. And look who won.

Comment Okay (Score 3, Interesting) 74

Okay, why does my "bullshit detector" go off? Not at the article itself, but I thought I'd pop onto Wikipedia and find out when Oculus Rift was first started as a project.

There's no mention. They mention the huge buyout in 2014, but no mention of the start of it, even under the "History" section.

And only one of the citations is from before that - an article in 2012. Now, it's not a deep secret, I can google and find stuff from that kind of era discussing it, but why OMIT this information in the History section of your own product's page?

Maybe it's because, 3 years on from the Kickstarter, and millions and millions of dollars later, there's still no commercial product?

Comment Re: Do not (Score 5, Insightful) 133

Oh, fuck off.

They heated cinnabar ore. You get mercury when you do that. These people had metal, they mined, and they could build vast structures that weighed more than any skyscraper did for millennia after them.

You don't need a supernatural explanation that they found a liquid metal (a liquid mirror, in effect) fucking intriguing and so prized it as some kind of treasure to bury with their kings.

That people in these ancient eras had brains seems to be frowned upon, as if we're the only humans who could be allowed to have them. Ancient Greek, ancient Egyptian, etc. civilisations all had astounding knowledge and abilities. Just because they were never able to fully capitalise on them, and we then suffered a few thousand years of poxy ignorance, doesn't mean they weren't geniuses. (It just so happens that several of those millennia were dominated by religious shit, Crusades, etc.)

Antikythera (the extremes of "clockwork", gearing and mathematical technology), pyramids, battery technology, steam-powered engines, railways - they had a shit-ton of expertise. The problem was that the insights were few and far between, hard to achieve, and secondary to surviving for the most part, so unfortunately they were never able to be joined together in the way we can do now.

Fuck your aliens. Pay your respects to thousands of years of education, science, inquisitiveness, some of the greatest minds who ever lived, single individuals who knew all of established science for their time, amazing insights, and artisans capable of realising their off-the-wall ideas using some of the most difficult craftsmanship in existence.

Comment Re:Stop resting on your laurels (Score 1) 302

The problem is that they know all their own libraries still make them money. People are still making money from the White Album.

As that anchor - the copyright expiry date - is dragged forward, the artists and albums at the back stop making money for them. And then they realise that, as that anchor is inexorably drawn forward from that point on, they lose more money every year, because it's likely the new artists aren't making as much as the old back catalogues (maybe individual examples are, but not overall).

And then they realise that, in 50 years' time, when all they have to monetise is the junk they've been churning out recently, they'll be dead in the water and the industry will struggle to sustain itself. They're not saving themselves for today, but for their retirement, when they'll be basing their business on people buying Britney Spears' back catalogue, etc.

That said, any law that has to be revised the number of times that the copyright ones have should really be scrapped or made indefinite. If NOBODY in a certain industry (music industry, Disney, etc.) has ever seen their copyright expire, how on earth can we say that we need to legislate to extend that protection continually - and multiple times - without making the case that it should be indefinite or not?

I'm not saying that's a GOOD solution, but someone needs to review the time and money spent messing about extending laws to cover new timeframes - including overruling laws retroactively - and either fix a date in stone or make it indefinite. Pretending that works will eventually end up in the public domain, while never actually letting that happen legally, is just outright scumbaggery.

Comment Common sense (Score 3) 286

If the software is running on the user's computer, at their express request, to do something the user has explicitly asked for, then I can't see how you could rule any other way.

If we were talking about an online-only service that "proxies" the web for you and removes ads, then you may have more of a case, however.

And spyware that does it against or without the user's consent (replacing others' adverts with your own, eh, Lenovo?) - that's another matter entirely.

But it's like ruling that if the user WANTS to look at a plain-text version of a particular webpage then that's up to them. So long as the viewer is the one choosing to change the content, and knows that, why would you ever think differently?

The alternative just doesn't bear thinking about. Websites DEMANDING that nothing interferes in the process of displaying their page as they intended. Unskippable ads, etc., like on DVDs. DRM for the web, effectively. No thanks.

Comment Re:It's not surprising (Score 1) 129

Erm... is it just me that immediately thinks of DVB-T here? That's exactly what happened.

In the UK we were pushed to upgrade to "digital" (DVB-T). Within a few years, DVB-T2 - an incompatible standard that required hardware upgrades - was actually required to support HDTV channels, and even the "extra" channels that couldn't fit on standard DVB.

Just being a standard doesn't stop obsolescence. Wireless shows you that. Within days of one standard actually being ratified, the next wireless standard is in the works and people start pushing out pre-N or pre-AC products.

If anything, being "standard" is something that happens after the event, not before, and then provides a basis for obsolescence. "You mean you've only got an HTML 4 browser? Our website requires HTML 5. Why? Because."

This is the cost of change, evolution and rapid development. Things get left behind, even if they were good products/services. It's not even necessarily deliberate. Who the hell is going to risk bricking old devices by pushing a firmware update to a device they no longer sell, running on a chip that's no longer produced, with firmware that no longer has active development, to give you features that the old hardware can't use anyway (e.g. pushing HD or new codecs into the YouTube apps)? Nobody.

Comment Re:BUY MORE (Score 1) 129

I don't know if you've noticed but today's generation just ignores ads. I work in schools - the pupils do not see anywhere near as many ads as I did when I was a child. TV ads are dead - they are background noise. We've trained children to ignore all ads in games and online. Streaming services mean that ads have to be forced and - inevitably - the kids find a way to download without ads anyway.

I bet you could hum the tune to several hundred ads if you went through one of those websites that shows you old ads from your country. The kids today? Probably only the extreme ones.

The more you force ads, the more you force people to ignore them if they can't bypass them. It's counter-productive.

Honestly, I think it's more to do with legacy code. Who has the code to some 10-year-old early "smart" TV that ran on a custom chip that's now non-standard and unavailable, and who's going to do the development and testing to push newer formats, HD, etc. down to that TV's firmware?

In my house alone, I have YouTube apps on several phones, a tablet, a cable box, an older cable box, a DVD player, a Blu-ray player, a cheap DVB-S box, the original Wii, etc. To update all of those to newer formats, HD quality, etc. may not even be technically possible (which just generates more exceptions and differences in the codebase), plus any licensing, plus the risk of breaking the device, plus liaising with all the manufacturers (most of whom just won't care, as they're not selling that model any more), etc. It's just an enormous upheaval for zero gain.

And it's not just YouTube. BBC iPlayer suffers the same fate - all the above devices have BBC iPlayer apps on them too, and some of those no longer work because it would need some cheap Chinese manufacturer to bother to develop, test and push a new firmware for a device they no longer sell (or which, even if they do, represents a tiny portion of their sales in only one particular country). Just the risk of bricking something isn't worth the hassle of trying to update it.

We are certainly breeding a throw-away culture of technology because of this, yes, but that's not "enforced" so much as inevitable. A £20 DVD player with network connectivity and an iPlayer/YouTube app on it - if the app on that stops working? Who cares?

Just the development time alone to push out even the tiniest of working updates for devices like that is enormous. You might even find that the original development team, or even the company, doesn't exist any more. Will consumers notice? Not really. They have ten devices that can do the same, and they won't be turning on the DVD player to watch YouTube when they can Chromecast it from their phone or whatever nowadays.

It's obsolescence but not necessarily deliberate and malicious obsolescence. Just necessity.

Comment Re:Rugged Smartphone dock (Score 1) 96

The entire first generation of handheld devices had IrDA, which is basically this.

Palm Pilots etc. used it all the time.

It died for a reason - Bluetooth took over. Because there's nothing optical data can do that radio data can't, plus radio never requires line-of-sight (it may benefit from it, but that's another matter).

What makes you think that optical connectors in docking stations are in any way superior to Bluetooth (which has stupendous data rates, more than enough range, is incredibly low-powered, and whose transmitters/receivers can be picked up for literally pence and are small enough to put into ANYTHING)?

By comparison an IR LED and a light-sensor are slow (rise and fall times tend to kill fast transmissions), large, and inconvenient.

Comment Re:missing the words OPEN SOURCE in the title (Score 1) 73

I'm sorry... were you under the impression that people have ever claimed that open-source means you can't get any bugs in it?

Then you're an idiot.

But don't let us stop you spreading your misinformation based on a complete misinterpretation of other people's statements about open-source code.

(P.S. You can't stop bugs in any code. But if this were a closed-source library, probably the only people who would ever know about the bug, see the bug list, or be able to fix it would be the people who wrote it.)

Comment IPv6 (Score 1) 390

My external servers - all IPv6, publish AAAA records, all services available on IPv6.

My home - IPv6 compatible router, IPv6 compatible network, IPv6-compatible clients, even IPv6 VPN to my servers.

What I don't see - IPv6 compatible websites. Slashdot is not IPv6 reachable. Nor is The Register. If even the IT crowd can't manage it, what chance do other places have? But that's no big deal, so long as they're IPv4-reachable anyway.

What I don't have - an IPv6 compatible ISP.

I can't use any IPv6 transition mechanism except 6to4, but the local 6to4 relay is "not supported" by my ISP and not run by them. That puts me at the mercy of whatever routing is set up for that magic 6to4 address at any given point.

Sure, I could go with SixXS etc., but that requires all kinds of signup. It's actually easier to just VPN to my IPv6-ready external server over IPv4 and bypass worrying about the in-between link entirely.

It works. It's up. I receive email from third-party servers solely over IPv6 every day.

And then you find that Google's mail and DNS are on IPv6. The occasional website is on IPv6. The odd mail server is on IPv6. And nothing else. And they are all also on IPv4 too. All that hassle, hardware and configuration and I gain... nothing.

Until we literally say "IPv4 is going to be marked for obsolescence in 6 months, and routing for it will be switched off on the 1st of Jan 2016, worldwide", nothing is going to change. Absolutely nothing.

Slashdot - I'm invoking my rule again. You can post articles on the IPv6 deployment when you BOTHER to put a single AAAA record on your DNS.
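For anyone who wants to check a site themselves, here's a rough Python sketch - it just asks whatever resolver your system uses whether a hostname publishes any AAAA records (the hostnames below are only examples):

import socket

# Does this hostname publish any AAAA (IPv6) records, according to the
# system resolver? getaddrinfo raises gaierror if there are none.
def has_ipv6(hostname):
    try:
        results = socket.getaddrinfo(hostname, None, socket.AF_INET6)
    except socket.gaierror:
        return False
    return any(family == socket.AF_INET6 for family, *_ in results)

for host in ("slashdot.org", "www.theregister.co.uk", "www.google.com"):
    print(host, "publishes AAAA records" if has_ipv6(host) else "looks IPv4-only")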

Comment Re:4x strategy when? (Score 1) 58

You've just done what the programmers do: introduce higher-level heuristics into the rules by pushing everything into blocks of actions.

No different to "find enemy", "target enemy", "shoot enemy". The problem is not breaking down a problem given the goal (in your example, every path taken to get from "I want to build a farm" to "I have built a farm here" is equal-cost to the computer) - a simple optimisation removes them from the tree, yes.

But then you get them, say, building on the tiles that are most at risk from attack, because "it doesn't matter which tile". You know it matters because you infer it from other information; the computer doesn't. Either it has to specifically check EVERY time (game tree), or it picks a random / northernmost grassland to upgrade first (programmed heuristic).

Although the exact tree is prunable, the above is the way to get yourself into the same order of magnitude. And computers can, and do, and will struggle with trees of that magnitude for even simple actions unless they are following heuristics.

But the biggest part you've skipped over - knowing that you need a farm to do X to do Y to do Z in N moves' time - is the real struggle, the real key. Optimising a tree for the low-hanging fruit (pardon the pseudo-pun) is trivial, can be done automatically, and only saves you a handful of steps.

But what if the system is attacked halfway through the process? Do we abandon? Start again? Fight on valiantly until we get where we want no matter the cost? How do we decide we need a farm? That's where the VAST majority of the game tree decisions are made, that's where the decision matters, and THAT'S the difficult question - one a computer can't answer in real time given the number of possibilities and the impact of each. Otherwise you'll see it build farms while you quietly strip away its land, units, etc., and it won't "notice".

Think of pathfinding, because that's what you're doing (just through a "directed graph"). There is no difference, to the computer, between A* pathfinding through a terrain and working out the best way through a game tree.

Some routes are muddy and slow you down, some routes lead to loops where you come back to where you are, some routes take more "steps" but get you there quicker, some routes are only a single step but take forever to walk through, some steps are more risky, some steps are safer.

Evaluating those for a computer means enumerating them, and their children, and their grandchildren - and virtually all of them until there's a point that you know it's definitely worse than some other route. How deep you go down the tree increases the complexity, but also increases the chance you have a strategy that works in the long-run. Not traversing to a certain depth means you're only thinking in the short-term.

And every time you enumerate some risk, factor or cost, you are required to formulate it into a single calculation ("edge cost" in graph theory terms). That means giving it a weighting (heuristics!) or determining a weighting dynamically, performing calculations, maybe looking at the surrounding areas (this path is quicker but is nearer the enemy etc.).
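To make that concrete, here's a minimal A* sketch over a toy grid where every one of those judgements has been squashed into a single edge cost - the terrain, the mud weighting and the risk weighting are all made up for illustration, which is exactly the point:

import heapq

# Toy A*: "this path is muddy" and "this path is near the enemy" all get
# folded into one number per step. The weights are arbitrary.
SIZE = 6
MUDDY = {(1, 1), (1, 2), (2, 1)}   # slow tiles
ENEMY = (3, 3)                     # risky to walk next to

def step_cost(tile):
    cost = 1.0
    if tile in MUDDY:
        cost += 2.0                                        # arbitrary mud penalty
    if abs(tile[0] - ENEMY[0]) + abs(tile[1] - ENEMY[1]) <= 1:
        cost += 5.0                                        # arbitrary risk penalty
    return cost

def heuristic(a, b):
    # Manhattan distance: admissible here because every step costs at least 1.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(start, goal):
    frontier = [(heuristic(start, goal), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, g, tile, path = heapq.heappop(frontier)
        if tile == goal:
            return path, g
        x, y = tile
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
                continue
            ng = g + step_cost(nxt)
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt, goal), ng, nxt, path + [nxt]))
    return None, float("inf")

path, cost = astar((0, 0), (5, 5))
print(cost, path)

Change either weighting and the "best" route changes with it; that one number per edge is all the search ever sees of "risk".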

I studied graph theory for several years at uni. This stuff sounds really basic, boring, easy and predictable. We all know how minimax algorithms work on simple games like draughts/checkers. But as soon as you try to scale to anything even vaguely complex, you see the factors, costs and weightings that are required, and which greatly affect the performance of the search (and, hence, the AI).

If there are only 10-20 options like "explore" and they each take, say, 10 turns to complete, then the computer is in effect only making a decision every 10 turns. Which means it can't react. Sure, you could program an interrupt on certain events. But then it might ignore your attack for 5 turns, being "dumb" and giving you an advantage. Or, if you interrupt it every turn with SOMETHING, it's basically back to having to evaluate every single move.

Computer AI is just a series of programmed heuristics and shortcuts to make the real game-tree traversal possible and practical, precisely because the trees get too large otherwise. The programmed heuristics are basically programmed orders, programmed weaknesses, programmed ignorance. That's where the AI falls down. Someone has told it that "a knight is worth three pawns", in effect, and while that's a general rule that children are introduced to, even they know it's not a written-in-stone rule to be obeyed in every exchange. It's much more complex than that. Someone, somewhere has told the AI in Civilization, etc. that losing unit X is half as costly as losing unit Y, or building on tile Z.

Without those rules, the game tree is too huge to traverse in time. With those rules, the AI is crippled by hard-coded, predictable actions. And there's also the problem that NOBODY wants to play against an unbeatable AI, and the only places we can put in a limit are the game-tree depth or the heuristics.
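As a sketch of where those numbers live, here's a bare-bones depth-limited minimax over a made-up "capture" game. The game and the piece values are invented purely for illustration, but the two knobs are exactly the ones above: the hand-coded evaluation ("a knight is worth three pawns") and the depth cut-off:

# Toy game: on your turn you simply remove one enemy piece. Invented so the
# minimax has something concrete to search.
PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def evaluate(max_pieces, min_pieces):
    # Hand-coded heuristic: pure material count, nothing about position,
    # threats or long-term plans.
    return (sum(PIECE_VALUE[p] for p in max_pieces)
            - sum(PIECE_VALUE[p] for p in min_pieces))

def minimax(max_pieces, min_pieces, depth, max_to_move):
    # Depth cap: below this we stop searching and just trust the heuristic,
    # which is exactly where the short-sightedness comes from.
    if depth == 0 or not max_pieces or not min_pieces:
        return evaluate(max_pieces, min_pieces), None
    targets = min_pieces if max_to_move else max_pieces
    best_score, best_move = None, None
    for i, victim in enumerate(targets):
        remaining = targets[:i] + targets[i + 1:]
        if max_to_move:
            score, _ = minimax(max_pieces, remaining, depth - 1, False)
        else:
            score, _ = minimax(remaining, min_pieces, depth - 1, True)
        if (best_score is None
                or (max_to_move and score > best_score)
                or (not max_to_move and score < best_score)):
            best_score, best_move = score, "take the " + victim
    return best_score, best_move

# Within its horizon it unsurprisingly grabs the most valuable target.
print(minimax(("rook", "pawn", "pawn"), ("queen", "knight"), 3, True))

Increase the depth and it plays better but slower; change the values and it plays differently but just as predictably - which is the trade-off above.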

And proper, true, real AI (and human intelligence) is about forming those rules on your own just by playing enough, and knowing when to break those rules because the situation has changed too much. We do it by inference, which you can't program. AI can't yet do it except by things like neural nets etc., which - while useful - have major limitations.

Otherwise, literally, the Age of Empires modding community could have made a quick, unbeatable AI in the 15 years since its release, given that the modding community has been able to program its own AI. I used to tweak the QuakeC code for Quake bots back in the day. Things like OpenTTD (and TTDPatch, which is decades old) have allowed huge communities of clever people to create bots to play against, in a game which those people are ABSOLUTELY expert at. Yet, still, the bots don't challenge a seasoned player unless they cheat.

Game-tree depth is the killer. And as soon as you prune a branch, you've introduced a heuristic which is a predictable weakness in the AI's operation.
