
Comment It is extremely simple. (Score 1) 355

Picture this. The Federal Government buys up dark fibre, or lays new fibre, such that there is a "Tier Zero" multi-path network between significant population centres regardless of State boundaries. Tier 1 ISPs can hook into this, but the Government's network is transparent, so ISPs can't charge for traffic distance. Tier 1 ISPs aren't obliged to do so, though, and in either case this would not alter any peering agreement between Tier 1 providers. All it would do is provide extra stability, extra bandwidth and extra reach (Tier 1 ISPs could, via such a Tier Zero network, form peering agreements even when a hostile Tier 1 geographically isolates the two).

Extra stability would cut Tier 1 costs (fast maintenance costs money; this would buy time for the Tier 1 to make quality repairs rather than cheap, tatty ones). It would, however, require the Government to charge for "diverted traffic" in a way that made it cost-prohibitive to simply dump everything onto the Government network and ignore your own (that isn't getting support, that's becoming a parasite), yet cost-effective as a way of handling the bursts the Tier 1s can't cope with without expensive upgrades.

State Governments would then be allocated money specifically to connect schools, colleges, universities and accredited research centers (or to boost speeds where connections exist, or to offset costs if the speed is perfectly fine), and to build metropolitan mesh networks (at the wired and wireless levels) with support for Mobile IP (since users can be expected to move from node to node). This need not affect ISPs, as most of their customers are in the suburbs/sprawls anyway and the cost of laying down or replacing high-end hardware under city roads is going to be high enough that the profits there will be eaten into very quickly. What it does do is free up ISPs to reach customers further out where access has traditionally been poor. Not that they will, but they could. Customer-level ISPs wanting to remain in cities because they can add value would obviously be able to.

Virtually everything would remain private, except for those areas where individual companies don't have the means and/or motive to do the job right.

The level of security doesn't change (the ISPs are all pwned by the NSA anyway), you wouldn't get coast-to-coast disconnects because a single fibre was accidentally cut, and a city-wide mesh would speed up access to local information. Government would do what it does best and industry would do what it does best. So, naturally, everyone would complain.

Comment Re:TRS-80 to Retina Macbook (Score 1) 587

PET 3032 (32K RAM, surprise!) to a cheap, very-much-ex-gamer's box (4GB RAM)

On the other hand, the PET's disk drives (the 4040) had their own processor, on-board RAM and OS, as did the printer, and the bus they used allowed both hot-swapping of hardware and communication between any two devices. Ok, most of these now exist for external PC devices, but c'mon! The PET was 1970s technology! There is no current way that I know of to link my USB drive to my USB printer such that, once I'd sent the command, I could unplug the PC from both and have the operation continue.

(Some devices, like my negative scanner, have to be plugged into specific USB ports, which means I can't use multiple devices that require the same specific port. Some devices are just made like that and, typically, those are the devices I need. It would be great if I could build my own hardware - at least then it would be built right - but smarter devices would be at least tolerable.)

Comment Re:That depends on what kind of user base you want (Score 1) 215

I agree completely with your post. In essence, it boils down to Critical Path Analysis. The fastest a given page or data set can be delivered is bounded by the length of the critical path through the system. When you have heaps of parallelism, the critical path can span multiple threads and in some cases be impossible to determine.
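
(A minimal sketch of what I mean, in Python, with invented stage names and timings - not code from any real system. Model the request as a DAG of stages; the fastest possible response is the cost of the longest dependent chain:)

    from graphlib import TopologicalSorter

    # stage -> (cost in ms, stages it depends on); all values invented
    stages = {
        "parse_request": (1,  set()),
        "auth_lookup":   (5,  {"parse_request"}),
        "query_primary": (40, {"auth_lookup"}),
        "query_cache":   (3,  {"auth_lookup"}),
        "render_page":   (8,  {"query_primary", "query_cache"}),
    }

    def critical_path(stages):
        """Return (total cost, path) of the longest dependency chain."""
        order = TopologicalSorter({s: deps for s, (_, deps) in stages.items()})
        best = {}  # stage -> (cost of heaviest chain ending here, predecessor)
        for s in order.static_order():
            cost, deps = stages[s]
            prev_cost, prev = max(((best[d][0], d) for d in deps), default=(0, None))
            best[s] = (prev_cost + cost, prev)
        end = max(best, key=lambda s: best[s][0])
        path, node = [], end
        while node is not None:
            path.append(node)
            node = best[node][1]
        return best[end][0], list(reversed(path))

    total, path = critical_path(stages)
    print(total, "ms:", " -> ".join(path))
    # 54 ms: parse_request -> auth_lookup -> query_primary -> render_page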

My approach to layering is to simplify that critical path, making responses more deterministic and less vulnerable to accidental deadlocks when related pages are accessed simultaneously. That alone speeds things up.

Yes, having PostgreSQL in charge is good, with the lowest level operating as a powerful pre-fetch that can also act as a sanity check, ensuring reads never turn into writes.

Comment Re:That depends on what kind of user base you want (Score 1) 215

If the network fails before the system does, then your system cannot be the thing that fails in a distributed denial of service attack. And since any decent data centre multipaths, the failure of one network is of no significance.

Hey, wait! I am a 4 digit UID who has published code since the late 1970s! What the hell do I need to explain to some 6 digit street urchin? Face facts, you are nothing more than semi-evolved slime with the IQ of a desiccated dung beetle. You want to harass an old-timer? You think your pitiful excuse of a right-wing troll is worthy of consideration? Pffft!

As for your username, Wumpus is probably as intellectual as you can handle. Even a desiccated dung beetle has enough joints to handle the state space.

We can continue this on alt.flame, assuming you don't confuse that with a URL.

Comment Re:That depends on what kind of user base you want (Score 1) 215

I mentioned NoSQL databases (so no marks for observation). The reason for the two layers of RDBMS is that you get a lot of potential blocking by loading everything onto one layer. Worse, you get heavy resource drain whenever any serious crunching is done. One system I worked with took 5-6 minutes to complete a particular SQL query and often timed out. I refactored it down to 25 seconds, but that was still far longer than I wanted. The tables were huge, swamping the database's caching capacity. The stored procedures were vast, even after refactoring. Intermediate tables were generated by other stored procedures. Moving those intermediate tables and the related procedures to a different engine meant they could be kept fresher without killing the system, and since on the second tier these tables were exposed as views rather than raw tables, the security was better. Access times dropped to around 7 seconds.
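
(A hedged sketch of that layout, using two in-memory SQLite databases to stand in for the two engines; the table and column names are invented:)

    import sqlite3

    primary = sqlite3.connect(":memory:")  # heavy engine: raw tables, big crunching
    tier2 = sqlite3.connect(":memory:")    # lighter engine: holds refreshed intermediates

    # Raw data and the expensive aggregation live on the primary.
    primary.executescript("""
        CREATE TABLE readings (sensor TEXT, value REAL);
        INSERT INTO readings VALUES ('a', 1.0), ('a', 3.0), ('b', 5.0);
    """)

    def refresh_intermediates():
        """Re-crunch on the primary, then push the small result set to tier 2."""
        rows = primary.execute(
            "SELECT sensor, AVG(value), COUNT(*) FROM readings GROUP BY sensor"
        ).fetchall()
        tier2.executescript("""
            DROP VIEW IF EXISTS sensor_summary;
            DROP TABLE IF EXISTS _sensor_summary;
            CREATE TABLE _sensor_summary (sensor TEXT, avg_value REAL, n INTEGER);
            CREATE VIEW sensor_summary AS
                SELECT sensor, avg_value, n FROM _sensor_summary;
        """)
        tier2.executemany("INSERT INTO _sensor_summary VALUES (?, ?, ?)", rows)
        tier2.commit()

    refresh_intermediates()

    # Clients only ever touch the view on tier 2, never the raw tables upstream.
    print(tier2.execute("SELECT * FROM sensor_summary ORDER BY sensor").fetchall())
    # [('a', 2.0, 2), ('b', 5.0, 1)]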

So for extremely heavy loads, a three-tier layout using some permutation of NoSQL/memcache, PostgreSQL and MySQL/MariaDB is superior to trying to get one engine to digest everything.

For very light systems, obviously multi-tier systems are not going to be efficient. Each layer adds latency, and each layer that has excess capability adds latency. You want the lightest system that'll do what you want.

I started in the late 1980s on gigabyte databases, and they've only grown in size and complexity since then. Back then, I was asked to make the system handle real-time data streams from large gamma ray detector arrays. Which I did. I also managed to show that their network would suffer a meltdown before my software. My approach has been refined, over the years, to include understanding of different topologies (and how to thrash them into submission by writing custom high-performance network protocols), but my personal objective remains the same: there is no way in hell I will let my software suffer a meltdown before your hardware. My software WILL take the punishment you throw at it, give you the results AND complete The Times crossword before you've had time to register that you sent the query at all.

Because I know I can do this, I have little or no regard for programmers who cannot or will not. Anything I can do, they can do.

Comment Re:That depends on what kind of user base you want (Score 1) 215

Let us say that were true (it isn't, but let's pretend): MySQL is faster than PostgreSQL or Ingres, correct?

Then use PostgreSQL or Ingres for your primary storage DB, and use MySQL to store cached responses. (Key design, etc., then becomes a non-issue - you don't need a vast key to identify a cached, pre-generated page.) You then get the full power of a complex DB with the performance of a lightweight one.
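
(Roughly this, sketched in Python with SQLite standing in for the heavyweight primary and a plain dict standing in for the cache tier; all the names are invented:)

    import sqlite3

    primary = sqlite3.connect(":memory:")
    primary.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
    primary.execute("INSERT INTO articles VALUES (1, 'Hello', 'First post!')")

    page_cache = {}  # key: tiny URL-ish string, value: fully rendered page

    def render_article(article_id):
        """Expensive path: hit the primary DB and build the page."""
        title, body = primary.execute(
            "SELECT title, body FROM articles WHERE id = ?", (article_id,)
        ).fetchone()
        return "<h1>%s</h1><p>%s</p>" % (title, body)

    def get_page(article_id):
        """Cheap path first: the key is nothing more than the page's identity."""
        key = "/articles/%d" % article_id
        if key not in page_cache:
            page_cache[key] = render_article(article_id)
        return page_cache[key]

    print(get_page(1))  # first call hits the primary
    print(get_page(1))  # second call is served entirely from the cache tier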

Wait, isn't this what NoSQL databases are used for? Well, duh. Where do you think they got the idea? The rest of the world has been using multi-tier databases for a very long time now, and obviously if you want extremely high performance and only a simple key/value search for your highest-level DB, then why not use a system that is purely key/value?

The problem with NoSQL databases is that they can be a little TOO simple. You'll often want web pages where -some- of the content is universal, -all- of the content is cacheable, but where different content in some div block is used for different users (or different parameters or whatever). For something programmatic like that, you -could- use a language like Cold Fusion. Which, like its namesake from Utah, has no redeeming value whatsoever. It's much better to do something like this in a database engine rather than in an interpreter running in a servlet inside an interpreter, as procedures can be pre-compiled.
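
(The "cache the universal part, swap one div per user" pattern, sketched in plain Python rather than any particular engine's stored procedures; the page and user names are invented:)

    from string import Template

    _page_cache = {}

    def build_shared_page(page_id):
        """Expensive render of the universal content, done once per page."""
        html = ("<html><body><h1>Page %s</h1>"
                "<div id='user'>$user_box</div></body></html>" % page_id)
        return Template(html)

    def serve(page_id, user):
        if page_id not in _page_cache:
            _page_cache[page_id] = build_shared_page(page_id)
        # Only this tiny substitution differs per user; the rest came from cache.
        return _page_cache[page_id].substitute(user_box="Logged in as " + user)

    print(serve("front", "alice"))
    print(serve("front", "bob"))  # same cached template, different div content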

But if you want to do this, isn't MySQL still heavier than necessary? Oh, lots. What you really want is (NoSQL || (GDBM/QDBM + Network Access)) + Loadable modules. That's about as lightweight as you can get.
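
(For a feel of just how light that bottom layer can be: Python's stdlib dbm module - GDBM-backed where it's available - covers the key/value half in a few lines. The filename and keys are made up, and the network-access and loadable-module parts aren't shown:)

    import dbm

    with dbm.open("page_cache", "c") as db:     # "c" = create the file if missing
        db[b"/articles/1"] = b"<h1>Hello</h1>"  # store a pre-rendered page
        print(db[b"/articles/1"].decode())      # fetch it straight back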

In an "ideal" system, you'd actually have three layers, not two. The lowest level should also be lightweight, but not MySQL lightweight. It wants to load/save data and create views, but having stored procedures on there as well complicates load balancing and high availability. It also means more arcs through the code, and each arc you add is a potential source of bugs. The lowest level wants to be rock steady (though ska will also work), feeding to the servers that do the heavy lifting. That way, database bugs (inevitable, it's complex code) will have no significant impact on transactions, each component in the system is highly specialized (so makes fewer decisions, so is smaller, faster and more reliable), and the critical path of any given transaction is blocked by as few incidentals and overheads as possible.

Tight coupling of components is only a good idea when components run at roughly the same speed and aren't particularly blocking. The greater the speed disparity or the greater the thread blocking, the more you want loose coupling or complete decoupling. Lacking dynamic reconfiguration, you layer things so that each layer will mostly have just one type of behaviour and the adjoining layers also mostly have just one type of behaviour. There will be exceptions, nothing is optimized for all cases, but if you get most of the available performance under most of the conditions that arise, you're ahead of most of the game.
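
(The loose-coupling point in miniature: a bounded queue soaking up the speed disparity between a fast producer and a slow consumer, so the fast side doesn't block on the slow side. Components and timings are invented:)

    import queue
    import threading
    import time

    buf = queue.Queue(maxsize=100)  # the "layer" between the two components

    def fast_frontend():
        for i in range(10):
            buf.put("request-%d" % i)  # returns immediately unless the buffer is full
        buf.put(None)                  # sentinel: no more work

    def slow_backend():
        while True:
            item = buf.get()
            if item is None:
                break
            time.sleep(0.05)           # pretend this is the heavy lifting
            print("handled", item)

    t = threading.Thread(target=slow_backend)
    t.start()
    fast_frontend()  # finishes long before the backend drains the queue
    t.join()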

The other reason you want multi-tier is for security. Everyone makes mistakes in coding, so you can expect some component of your system to be vulnerable to attack. If it's a component that an attacker cannot reach (because it's effectively firewalled by the databases above it), it's not an issue. If it's a component that an attacker can do nothing with (because all that's being attacked is cached data that will be refreshed from further down after some time interval or when the data below changes), then only those who hit that specific load balancer in the few seconds of significance will see the defaced data. Moments later, the correct data will replace it.

Comment Re:Early Crimefighting Crowdsourcing in Salem (Score 1) 270

You try them in a civilian court, you don't use inadmissible evidence or coerced "confessions" (people would confess to being Santa Claus and the Tooth Fairy at the same time, if being waterboarded), you use what solid evidence you have. OR, you declare them Prisoners Of War, keep them confined under the terms of the Geneva Conventions until the US ceases meaningful combat operations, then release them.

The jail officials in Gitmo will never be tried or convicted. Neither will the CIA operatives named by Italy, or the staff at any of the black prisons operated in Europe. Those found innocent already (or between now and whenever Gitmo closes) will never be paid compensation for unlawful arrest or false imprisonment. Hell, there are still attempts to sue for unpaid wages for spies, defectors and other "plausibly deniable" individuals dating as far back as the American Civil War, and covering pretty well all the wars between then and now as well. If the US can drag its feet over its own people on its own turf for a century and a half, nobody else has much of a chance.

Comment Re:Early Crimefighting Crowdsourcing in Salem (Score 1) 270

Well, according to the lawyer for the remaining British citizen in Gitmo, America has been trying for six years to deport him to the Middle East (where his odds of survival are nil), despite the fact that - being British - he should be deported to Britain. There are a few theories as to why this hasn't happened (apparently said citizen witnessed an MI6 officer being present at an "enhanced interrogation"), but since British intelligence has never been seen as shiny-white and innocent, the stories don't seem credible. You can't lose a reputation you never had.

Regardless, American intelligence has classed him as innocent of all charges, he's been cleared for release, and it is for those who defend Gitmo to do the explaining, it isn't for those questioning Gitmo to explain anything.

Last, but by no means least, there's nothing to decide. Under the Constitution (which applies to Gitmo), it is for the Administration to prove (not for others to disprove) that they have the "Right to the Body" - habeas corpus - and under Common Law (which also applies to Gitmo), it is for the Administration to show that they have neither withheld nor denied the right to justice (not for others to prove justice has been denied). These are absolutes.

So what if the people were picked up under questionable circumstances? So what if the grounds for holding them initially were "walking whilst wearing Casio"? It seems reasonable to me that YOU would want your day in court if you'd been arrested for wearing a digital watch.

Their associates continue to kill people? Can you prove that? Or are you simply assuming that in a large enough group of people, at least one of them must be an associate of a terrorist? How many steps removed would count? Six? If so, tag. And how do you define "associate", anyway? From the same village? The Boston bombers came from Boston, but nobody is so stupid as to accuse the whole city of being terrorists. Also, with the Administration defining an "enemy combatant" as ANY male of potentially military age (plus anyone else within blast radius), I would be very wary of accusing their associates of anything more than having the wrong number of birthdays without proof.

Delicacy? Like "kidnapping people off the streets of Italy" delicacy? (Btw, he was later found innocent of all charges, which is more than can be said of the CIA agents for whom Italy holds international arrest warrants. They haven't been found guilty either, true, but fleeing the scene of the crime and refusing to answer the warrants would convince most people they're guilty.)

Comment Re:Shocking (Score 0) 270

First, America doesn't have one of those. All they have are Drill Sergeant/PsyOps-trained lawyers who are adept at getting the jury to believe all kinds of bullshit. Remember, jurors who manage to believe six impossible things before the first coffee break get a luncheon voucher for Milliways.

Second, due to this thing called "the right to arm bears" and the complete inability of conspiracy nuts and rightwingers to digest new information, those wrongly accused are at extremely high risk of getting killed by wannabe-vigilantes who reject the evidence against the two accused (and blogs aren't short of such nutters).

Third, even "established" news sources had trouble distinguishing Chechnya and, well, all other countries beginning with Ch. Apparently, inciting xenophobia is a spectator sport for journalists. Either that, or they're irredeemably stupid and bloody ignorant. 'Course, might be all of the above.

Comment Re:Ignore the Critics, Research is Necessary (Score 1) 190

Agreed on all points, though I'd have to agree with femtobyte as well that profiteers make horrible scientists. $100 million is peanuts, as the original article notes, but that is only a bad thing if it operates in complete isolation. If it cooperates with the Connectome Project and other neurological studies, this study could be quite useful. But that is only true if the division of labour is correct. You cannot break a scientific project into N sub-projects at random, even $100 million ones. If everyone got together and discussed who is best placed to do which part, the results could be extremely valuable.

Even more so when you consider that a 13T MRI scanner capable of handling humans should be coming online just about now. Since it has already been built, the cost of building it is effectively zero. The resolution achievable from such a scanner, however, should be nothing short of spectacular.

Can you even begin to imagine the advances achievable from a consortium of Connectome researchers, high-end (9.3T and 13T) MRI labs, and this new foundation?

Ok, now you've imagined it, stop. We're talking politicians, scientists under publish-or-perish rules, get-rich-quick corporations and corrupt "advocacy". There's no possible way any of those involved will be capable of doing what they should do.

Comment Re:What about pictures? (Score 1) 300

Excellent! At this rate, by the time the thread is frozen, we'll have beaten DPI and other newspaper publishing systems. (Ok, ok, I'll be honest, we've already beaten most newspaper publishing systems.)

Haven't messed with tikz, but I'll take a look.

The main problem I've had with TeX and its vector subsystems is that it's actually very difficult to snap to points, or to define relationships between vectors. Normally this is a non-issue - TeX's built-in maths has perfectly good precision for most purposes, so provided the functions are defined correctly, you don't get freaky rounding errors or endpoints in the wrong place. There are pathological cases, however, where certain shapes only scale correctly by certain amounts, and you need fiddly conditionals and other hacks. Since most engineering and maths software has had workarounds for almost as long as TeX has existed, and since this would be an addition to the syntax (so retaining backwards compatibility, just as LuaTeX is backwards compatible with TeX), there's no good reason for such hacks to still be necessary in TeX.
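
(For what it's worth, here's roughly the kind of point relationship tikz does let you express - relative coordinates, perpendicular points and named intersections. A sketch only; I'm not claiming it covers the pathological scaling cases:)

    \documentclass{standalone}
    \usepackage{tikz}
    \usetikzlibrary{calc,intersections}
    \begin{document}
    \begin{tikzpicture}
      \coordinate (A) at (0,0);
      \coordinate (B) at (3,2);
      % C is defined relative to A and B, not by absolute numbers:
      \coordinate (C) at ($(A)!0.5!(B)$);   % midpoint of A--B
      \draw (A) -- (B);
      \draw (C) -- (C |- A);                % drop a vertical onto A's height
      % Snap to wherever two named paths cross:
      \path[name path=horiz] (-1,1) -- (4,1);
      \path[name path=diag]  (A) -- (B);
      \draw[name intersections={of=horiz and diag}]
            (intersection-1) circle (2pt);
    \end{tikzpicture}
    \end{document}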

It may well be that tikz solves 99.9% of all the cases I'm concerned about. If so, great. If not, the system is built to be infinitely extensible. I'll get round to it. Maybe. Or wait for a new package on the TeX archive.

Comment Re:What about pictures? (Score 2) 300

Think it's graphicx. One of the packages, anyway, lets you include PNGs, JPGs, etc. No problem. I include graphics all the time with LaTeX, very few of which are EPS. True, graphics import isn't as clean as I'd like (it's a bugger to remember all the nuances of each graphics format you can use and which package you need to use it with).
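
(The usual incantation I have in mind, run through pdflatex; the filename is made up:)

    \documentclass{article}
    \usepackage{graphicx}
    \begin{document}
    \includegraphics[width=0.8\linewidth]{photo.png}
    \end{document}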

I also don't like the fact that vector images require you to master Asymptote, MetaPost and an armful of other systems. This can - and should - be massively cleaned up.

So, whilst I agree that TeX has crappy image handling, it's not nearly as bad as you depict.
