
Comment Are they forgetting that this is the UK? (Score 4, Interesting) 44

New Acts of Parliament supersede previous laws regardless of source, due to Parliamentary Supremacy, a fundamental pillar of English law.... Parliament is the supreme law-making body: its Acts are the highest source of English law.

Unlike in other countries such as the US, there is no such thing as an unconstitutional law, and no Act of Parliament is "illegal" if properly passed, because the UK has no codified constitution: an Act of Parliament duly passed is supreme.

Comment Re:Makes sense (Score 1) 152

Maybe, but I am sure that is not what this is about. Dell is trying to get more business from members of the Bitcoin community, who get enthusiastic and start buying when a vendor begins accepting their coins.

In other words.... it's not about Bitcoin users being technical or not; it's just a way to drum up additional business for Dell and to increase margins on some transactions, since banking fees will be much lower.

Comment Re:Totally bogus (Score 1) 608

The perspective is supposed to be that of the microcomputer revolution, which was to have ended the elitism of mainframes and minicomputers "once and forever".

You're just continuing the same idiocy as the article. The point is that programming is not hard because language designers are elitists.

Programming is hard because it solves a fundamentally hard problem: converting human language into an extremely detailed formal procedure that can be executed by a machine. You have to know how the machine works to do it effectively -- that is a fundamental knowledge barrier.

There is nothing discriminatory or exclusionary about that; it is fundamental. It's like saying calculus is hard to grasp, therefore the culture of mathematics unfairly excludes some groups.

Comment Re:rfc1925.11 proves true, yet again (Score 1) 83

You haven't worked with large scale virtualization much, have you?

In all fairness.. I am not at full-scale virtualization yet either. My experience is with pods of 15 production servers, each with 64 CPU cores, ~500 GB of RAM, and 4 10-gig ports per physical server, half of them for redundancy, with bandwidth utilization controlled to remain under 50%. I would consider the need for more 10-gig ports, or a move to 40-gig ports, if density increased by a factor of 3: which is probable in a few years, as servers will be shipping with 2 to 4 terabytes of RAM and running 200 large VMs per host before too long.
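To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The port count, redundancy split, and 50% utilization ceiling are the pod figures above; the per-VM traffic and current VM count are made-up illustrations, not measurements.

# Rough capacity check for a virtualization pod's network uplinks.
# Ports and redundancy come from the pod described above; per-VM
# traffic and VM counts are purely hypothetical illustrations.

PORTS_PER_HOST = 4          # 10-gig ports per physical server
PORT_GBPS = 10
REDUNDANT_FRACTION = 0.5    # half the ports are reserved for redundancy
MAX_UTILIZATION = 0.5       # keep links below 50% busy

def usable_gbps(ports=PORTS_PER_HOST, gbps=PORT_GBPS):
    """Bandwidth actually available for workloads on one host."""
    active = ports * (1 - REDUNDANT_FRACTION)
    return active * gbps * MAX_UTILIZATION

def host_needs_upgrade(per_vm_gbps, vms_per_host):
    """True if projected demand exceeds the usable uplink budget."""
    return per_vm_gbps * vms_per_host > usable_gbps()

# Today: ~65 VMs/host at ~0.1 Gbps each fits in the 10 Gbps budget.
print(host_needs_upgrade(0.1, 65))    # False
# 3x density (~200 VMs/host) blows past it -> time for 40-gig ports.
print(host_needs_upgrade(0.1, 200))   # True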

It is thus unreasonable to pretend that large-scale virtualization doesn't exist, or that organizations will be able, in the long run, to justify not having large-scale virtualization or not moving to a cloud solution that is ultimately hosted on large-scale virtualization.

The efficiencies that can be gained from an SDD strategy versus sparse deployment on physical servers are simply too large for management and shareholders to ignore.

However: the network must be capable of delivering 100%.

We are perfectly content to overallocate CPU, memory, storage, and even network port bandwidth at the server edge. However, the network at a fundamental layer has to be able to deliver 100% of what is there --- just as the SAN needs to be able to deliver, within an order of magnitude, the latency/IOPS and volume capacity that the vendor quoted for it. We will intentionally choose to assign more storage than we actually have, BUT that is an informed choice. The risks simply become unacceptable if the lower-level core resources can't make some absolute promises about what exists, and the controller architecture forces us to make an uninformed choice, OR to guess about what our own network will be able to handle under loads created by completely unrelated networks or VLANs outside our control, e.g. another tenant of the datacenter.
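To make the "informed overcommit" point concrete, a minimal sketch; the capacities and ratio below are hypothetical, not from the post.

# Informed overcommit: we knowingly promise more storage to VMs than
# physically exists, because the underlying array guarantees a known,
# fixed amount. All figures below are hypothetical illustrations.

PHYSICAL_TB = 100          # what the SAN actually provides, guaranteed
provisioned_tb = 180       # what we have promised to tenants/VMs

overcommit_ratio = provisioned_tb / PHYSICAL_TB
print(f"storage overcommit ratio: {overcommit_ratio:.2f}x")  # 1.80x

# The choice is "informed" only because PHYSICAL_TB is an absolute,
# guaranteed number. A network whose usable capacity depends on what
# other tenants happen to be doing gives us no such number, so the
# same overcommit reasoning cannot safely be applied to it.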

This is why a central control system for the network is suddenly problematic. The central controller removes a fundamental capability of the network: to be heavily subscribed, fault-isolated within a physical infrastructure (through Layer 2 separation), and, if designed appropriately, to tolerate and minimize the impact of failures.

Comment Re:rfc1925.11 proves true, yet again (Score 1) 83

I hate it when my problems get angry, it usually just exacerbates things.

I hear most problems can be kept reasonably happy by properly acknowledging their existence and discussing potential resolutions.

Problems tend to be more likely to get frustrated when you ignore them, and anger comes mostly when you attribute their accomplishments to other problems.

Comment Re:rfc1925.11 proves true, yet again (Score 2) 83

Your 300 x 10Gb ports on 50 Servers is ... not efficient. Additionally, you're not likely saturating your 60Gb off a single server,

It's not so hard to get 50 gigabits off a heavily consolidated server under normal conditions; throw some storage-intensive workloads at it, perhaps some MongoDB instances and a whole variety of heavily demanded odds and ends.....

If you ever saturate any of the links on the server, then that is itself a kind of error: in critical application network design, a core link being saturated for 15 seconds due to some internal demand burst that was not appropriately designed for is potentially a "you get fired or placed on the s***** list immediately after the post-mortem" kind of mistake. Leaf-and-spine fabrics that cannot be saturated anywhere except at the edge ports are definitely a great strategy for sizing core infrastructure --- from there, most internal bandwidth risk can be alleviated by shifting workloads around.

Latency becomes seriously unstable at ~60% or higher utilization, so for latency-sensitive applications especially, it would be a major mistake to provision only enough capacity to avoid saturation, when micro-bursts in bandwidth usage are the reality for real-world workloads.
An internal link with peak usage of 40% or higher should be considered in need of relief, and a link at 50% or higher utilization should be considered seriously congested.
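A minimal sketch of those rules of thumb as a monitoring check; the thresholds are the ones above, while the link names and utilization samples are made up.

# Classify internal fabric links by the rules of thumb above:
# >= 40% peak utilization -> needs relief, >= 50% -> seriously congested.
# The link names and utilization samples are hypothetical.

RELIEF_THRESHOLD = 0.40
CONGESTED_THRESHOLD = 0.50

def classify(peak_utilization: float) -> str:
    if peak_utilization >= CONGESTED_THRESHOLD:
        return "seriously congested"
    if peak_utilization >= RELIEF_THRESHOLD:
        return "needs relief"
    return "ok"

links = {"spine1-leaf3": 0.37, "spine2-leaf1": 0.44, "spine1-leaf7": 0.58}
for name, peak in links.items():
    print(f"{name}: {peak:.0%} peak -> {classify(peak)}")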

Comment rfc1925.11 proves true, yet again (Score 1, Interesting) 83

Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.

Case in point: ATM To the Desktop.

In a modern datacenter, "2.2 terabits" is not impressive. 300 10-gigabit ports (or about 50 servers) is 3 terabits. And there is no reason to believe you can just add more cores and continue to scale the bitrate linearly. Furthermore... how will Fastpass perform during attempted DoS attacks or other stormy conditions with lots of small packets, which are particularly stressful for any centralized controller?
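The arithmetic behind that comparison, as a minimal sketch; the port counts are the ones quoted above.

# Aggregate edge bandwidth of the 50-server example above versus the
# 2.2 Tbps figure quoted for the centralized scheduler.

SERVERS = 50
PORTS_PER_SERVER = 6        # 300 ports total across 50 servers
PORT_GBPS = 10

total_tbps = SERVERS * PORTS_PER_SERVER * PORT_GBPS / 1000
print(f"edge capacity: {total_tbps:.1f} Tbps")   # 3.0 Tbps

SCHEDULER_TBPS = 2.2
print(f"scheduler covers {SCHEDULER_TBPS / total_tbps:.0%} of it")  # ~73%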

Furthermore.... "zero queuing" does not solve any real problems facing datacenter networks. If limited bandwidth is a problem, the solution is to add more bandwidth -- shorter queues do not eliminate bandwidth bottlenecks in the network; you can't schedule your way into using more capacity than a link supports.

Comment Re:This is because.... (Score 1) 140

It's not so much that the companies are _former_ employers as that the companies are _future_ employers.

This is problematic. When you join a regulatory agency and participate in writing its regulations, there should be a mandatory period of at least 10 years after you leave during which you cannot be employed by anyone in the industry you regulated; and in particular, accepting any reward or promise of potential future employment should be illegal.

Comment Re: They aren't looking for public comments (Score 1) 140

The problem is that the FCC has limited regulatory power unless it reclassifies Internet access as a telecommunications service, which is considered the "nuclear option."

How about instead they reclassify the cable line or wireless data link itself as the telecommunications service, and tell the operators: provide competing IP providers equal access to the cable or wireless data link to customer facilities, or else all services over that link -- including Internet -- are telco services for you. The principle would be that a telecommunications service always exists for every end-user connection.

So an ISP is not a telecommunications service, BUT the Internet service carried over an exclusively owned link to the customer facility IS a telecommunications service, up to the protocol layer where the customer first has a choice of whom to direct packets to.

In other words: conditional classification. Not all internet services necessarily have to be classified the same. Let's start organizing and classifying IP service for regulation based on the characteristics of the service.

Comment Re:Just ran into this (Score 1) 753

However, small mom & pop shops stayed open, using a hand ledger and accepting cash. I was actually in one store buying supplies that was operating by candlelight.

Not surprising.... big box stores can afford to close, and it's likely cheaper for them to plan to do so.

Which is also one of the reasons local governments should make sure that big box stores can't get 100% of the business for essential goods.

There is much to be said for having $20,000 or so in emergency cash tucked away in a safe deposit box at a bank with 24x7 access, just in case the SHTF.

Comment Re:KeePass? (Score 1) 114

An attacker would need my LastPass password (which is not, itself, stored in my LastPass vault); my physical YubiKey; and the knowledge to use both in tandem, in order to gain access to my LastPass account.

Yes, because the LastPass website enforces this two-factor scheme.

On the other hand, once it's open on your computer: the entire database is available for RAM-scraping malware to take a peek.

Or to decrypt it using only the master password since, as I understand it, it's just the LastPass website that requires the second factor before allowing your software to download the DB.
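A minimal sketch of that distinction, under the assumption that the vault is encrypted with a key derived only from the master password; the key-derivation parameters (salt source, iteration count) are illustrative assumptions, not LastPass's actual scheme.

import hashlib

# The second factor only gates the *download* of the encrypted vault
# from the service; it plays no part in decryption. Parameters below
# are illustrative assumptions, not the actual LastPass scheme.

def vault_key(master_password: str, email_salt: str, iterations: int = 100_000) -> bytes:
    """Derive the vault encryption key from the master password alone."""
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode(),
        email_salt.encode(),
        iterations,
    )

# Anyone holding a copy of the encrypted blob (e.g. scraped from RAM or
# a local cache) only needs the master password to attempt decryption;
# no YubiKey or other second factor is involved at this stage.
key = vault_key("correct horse battery staple", "user@example.com")
print(key.hex())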

Comment Re:because drinking water is so pristine (Score 1) 242

How do you get that foul chlorine dioxide back out of your water?

You leave it in there all the way to the end user, so that the treated water can help disinfect the entire system.

If the user so desires, they can remove it through simple aeration. What the end user won't be able to easily remove (without filtering) is the actual chlorine you need to treat the water with or the fluoride that you add.
