Comment Shoulda just settled for some A7s and be done. (Score 1) 222

He coulda spent $50K and gotten 95% of the quality he was looking for.
It's that diminishing return at the end and not knowing when to stop that doomed him.
Imagine how much better their lives could've been if they'd spent that $1M on experiences and making memories.
(eccentric dads, listen up)

Comment It was just a matter of time (Score 5, Insightful) 24

I used to be a senior engineer at an MLS data processing clearinghouse, handling data from over 200 MLSs. It always boggled me how difficult it was to get the various MLS orgs to agree on schemas and metadata for their database entries, while at the same time having it _all_ be managed and distributed by one centralized org. I knew this would happen, and I'm kind of amazed it didn't happen earlier. You'd think that with the amount of money changing hands for licensing and access to these massive datasets, they'd figure out how to standardize on a schema and access methods that enable (and encourage) decentralized, duplicated storage and transactions. Too much is riding on this to not have some sort of redundancy in the system, and I mean redundancy for transactions and edits, not just replicating the data everywhere, like on ReMax, Redfin, Lyon, Zillow, and the other real estate services vendors' networks... that's the easy part.
MLS has been used for YEARS and nobody thought to decentralize its transactional features, creating a virtual monopoly on its most important functions, and that irked the hell out of me.

Comment What are your budget and reliability requirements? (Score 2) 219

If you have a small budget and moderate reliability requirements, I'd suggest building a couple of Backblaze-style storage pods for block storage (5x 180TB storage systems, approx. $9000 each), each exporting 145TB RAID5 volumes via iSCSI to a pair of front-end NAS boxes. The NAS boxes could be FreeBSD or Solaris systems offering ZFS filestores (combining sets of 5 volumes, one from each blockstore, into RAIDZ sets), which they then export via CIFS or NFS to the clients. Total cost for storage, front-ends, 10GbE NICs, and a pair of 10GbE switches: $60K, plus a few weeks to build, provision, and test.
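A quick back-of-the-envelope sketch of the capacity math behind that layout. The pod count, per-pod cost, and volume sizes come from the numbers above; the single-parity raidz assumption is mine:

```python
# Capacity/cost math for the Backblaze-style build described above.
PODS = 5
RAW_TB_PER_POD = 180        # raw drive capacity per storage pod
ISCSI_TB_PER_POD = 145      # RAID5 volume each pod exports via iSCSI
POD_COST = 9_000            # approximate cost per pod

# The front-end ZFS box combines one volume from each pod into a RAIDZ
# set, so one pod's worth of capacity goes to parity (single-parity raidz).
usable_tb = ISCSI_TB_PER_POD * (PODS - 1)
pod_total = PODS * POD_COST

print(f"raw capacity: {PODS * RAW_TB_PER_POD} TB")
print(f"iSCSI-exported: {PODS * ISCSI_TB_PER_POD} TB")
print(f"usable after raidz: {usable_tb} TB")
print(f"pod hardware alone: ${pod_total:,} of the ~$60K total")
```

Note that even after giving up a full pod's worth of capacity to parity, you're still well over half a petabyte usable.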

If you have a bigger budget, switch to FibreChannel SANs. I'd suggest a couple of HP StoreServ 7450s, connected via 8 or 16Gb FC across two fabrics to your front ends, which aggregate the block storage into ZFS-based NAS systems as above, implementing raidz for redundancy. This would limit storage volumes to 16TB each, but if they're all exposed to the front ends as one giant pool of volumes, ZFS can centrally manage how they're used. A 7450 filled with 96 4TB drives will provide 260TB of usable volume space (thin or thick provisioned) and cost around $200K-$250K each. Going this route would cost $500K-$550K (SANs, plus 8 or 16Gb FC switches, fibre interconnects, and HBAs) but give you extremely reliable and fast block storage.

A couple of advantages of using ZFS for the file storage are its ability to migrate data between backing stores when maintenance on the underlying storage is required, and its ability to compress its data. For mostly-textual datasets, you can see a 2x to 3x space reduction, at a slight cost in speed, depending on your front-ends' CPUs and memory speed. ZFS is also relatively easy to manage on the command line by someone with intermediate knowledge of SAN/NAS storage management.

Whatever you decide to use for block storage, you're going to want to ensure the front-end filers (managing the filestores and exporting them as network shares) are set up as an identical active/standby pair. There's plenty of free software on Linux and FreeBSD that accomplishes this. These front-ends would otherwise be your single point of failure, and losing one can render your data completely unusable, and possibly permanently lost, if you don't have redundancy in this department.

Comment Because money and the inherent problems with AC. (Score 2) 516

It costs money to upgrade and stabilize the power grid. It costs money to stay ahead of the failure curve.

The current infrastructure sucks mainly because it's unpredictable and takes too much effort to synchronize disconnected sections of the grid before connecting them. You can't just "route around" a dead transmission line if there are generator stations active on both sides of the break. You must wait for the two sides to synchronize in phase before connecting them, which can take several seconds to a minute. If you don't, you'll cause even more breakers to trip.
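To put rough numbers on that delay: the relative phase of two live grid islands drifts at their frequency difference, and you can only close the breaker near zero phase error. A sketch, where the 0.02 Hz slip is an invented figure for illustration:

```python
# Why re-synchronizing two live AC grid islands takes time: their relative
# phase drifts at the frequency difference ("slip"), and the tie breaker
# can only be closed when the phase error is near zero.

def seconds_until_aligned(f_a, f_b, phase_error_deg):
    """Time for the slip between two islands to close a given phase gap."""
    slip_hz = abs(f_a - f_b)                  # cycles/sec of relative drift
    return (phase_error_deg / 360.0) / slip_hz

# Two islands 0.02 Hz apart, currently 180 degrees out of phase:
t = seconds_until_aligned(60.00, 59.98, 180.0)
print(f"{t:.0f} s until the islands drift back into phase")  # 25 s
```

A worst-case half-cycle misalignment at a 0.02 Hz slip takes 25 seconds to close on its own, which is exactly the "several seconds to a minute" range.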

None of this would matter if we switched distribution to HVDC. We have the technology, but again, the cost to convert everything to employ DC-DC switching converters is prohibitive. The biggest upside to switching everything to DC (all the way to the end-user) is that you could add standby capacity by simply connecting batteries to your mains circuit between the main breaker and the load panel. The more people in a neighborhood using batteries to buffer their power source, the more aggregate protection the neighborhood has against blackouts.

Comment All of these concerns would be moot with DC. (Score 1) 579

(Note, this is more of a stream of consciousness than an actual comment, so I apologize in advance if this sounds ADD-ish)

Get rid of the bulky, loud transformers, phase-shifting coils, and cap banks. Run -12KVDC to -20KVDC over the residential feeder lines down to neighborhood-located equipment with switchmode buck converters that deliver -240VDC and -120VDC to homes via their usual 3 mains wires, plus a fourth wire for homes that wish to feed power back into the local grid via switchmode boost converters. The power transformer boxes on the corner of every block would contain high-frequency switching equipment and a few batteries (to keep the block lit during upstream switching events and outages) instead of 2000 pounds of copper and laminated steel.

The neighborhood substations would have their giant transformers, oil-filled breakers, and phase-compensating equipment replaced with IGBT-based switch stacks and intelligent converters that quickly, and completely silently, compensate for changing load and back-feed conditions. Managing connections between substations and the high-voltage grid would be an order of magnitude simpler and safer when all you have to worry about is matching the voltages within a few percent and measuring static currents after connections are made, rather than comparing frequencies, phase angles, and power factors. With today's "modern" AC grids, you're liable to blow fuses/breakers/transformers if you connect two independently-fed parts of the grid together without first matching phase and frequency.
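A toy sketch of that difference in interconnect checks. All of the thresholds here are illustrative assumptions, not actual utility practice:

```python
# Contrast the checks needed to tie two grid sections together.
# DC: match voltages within a few percent. AC: frequency AND phase
# must both be aligned first. Thresholds are invented for illustration.

def dc_tie_ok(v_a, v_b, tolerance=0.03):
    """DC case: voltages just need to match within a few percent."""
    return abs(v_a - v_b) / max(v_a, v_b) <= tolerance

def ac_tie_ok(freq_a, freq_b, phase_a_deg, phase_b_deg,
              max_df=0.05, max_dphase_deg=10.0):
    """AC case: frequency and phase angle must both be aligned."""
    return (abs(freq_a - freq_b) <= max_df
            and abs(phase_a_deg - phase_b_deg) <= max_dphase_deg)

print(dc_tie_ok(16_000, 15_700))            # within ~2%: safe to close
print(ac_tie_ok(60.00, 60.02, 3.0, 170.0))  # phases 167 degrees apart: wait
```

The DC check is a single static comparison you can make at any instant; the AC check depends on quantities that keep moving until the two sides happen to line up.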

I know it's just too late for the change from AC to DC in the home to be practical. The biggest, most power-hungry devices just don't have an "upgrade path" to DC: air conditioning and refrigeration compressors, fan/blower motors, and fluorescent lights would all need complete replacement with DC-compatible equivalents. It would have been better if appliance manufacturers had designed their devices to run off either type of mains from the start... Large, high-torque brushless DC motors are quite cheap now, and switchmode power supplies are now smaller and cheaper than 60Hz AC power transformers; many of them will actually work equally well when fed 120-240VDC.

Comment Transfer switches, batteries, and inverters, oh my (Score 1) 579

Automatic transfer switches eliminate the danger of locally generated power being fed back into the grid whenever connecting the two would be unsafe. The electric company would only have to tell homeowners to employ transfer switches in order to stay connected to the grid (with the only side effect being that they can't contribute excess power back to the grid).

My local utility company actually employs smart meters that can monitor both the grid-side and home-side circuits for dangerous conditions in homes with grid-tie inverters. The smart meter instantly disconnects the home from the grid if there's an excessive surge in current being fed back into the grid (by analyzing the voltages, transfer current, and phase angles of both sides). The same meters also communicate with the utility company over a combination of RF and powerline data links, eliminating the need to dispatch meter readers every month to read everyone's meters.
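I don't know the actual meter firmware, but the trip decision it makes might look something like this sketch; every name and threshold here is invented for illustration:

```python
# Hypothetical sketch of a smart meter's disconnect decision for a home
# with a grid-tie inverter. All thresholds are invented, not real firmware.

def should_disconnect(backfeed_amps, phase_delta_deg, voltage_delta,
                      max_backfeed_amps=40.0, max_phase_delta_deg=15.0,
                      max_voltage_delta=8.0):
    """Trip if back-fed current surges, the inverter drifts out of phase,
    or the home-side voltage diverges too far from the grid side."""
    return (backfeed_amps > max_backfeed_amps
            or abs(phase_delta_deg) > max_phase_delta_deg
            or abs(voltage_delta) > max_voltage_delta)

print(should_disconnect(12.0, 2.0, 0.3))   # normal operation: False
print(should_disconnect(55.0, 1.0, 3.4))   # current surge: True
```

The point is that all three quantities the comment mentions (voltage, transfer current, phase angle) feed into a single trip decision that can fire within a cycle or two.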

In other news, you can buy a good charge controller, a 50kWh bank of deep-cycle batteries, a 2kW inverter for lights and outlets, and a 12kW inverter for air conditioning, all for about $12K. This setup can run the A/C for 5 hours a day, and your only reliance on the grid would be to top off the batteries on dark days.
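Rough sanity check on that setup. The 50kWh bank and the 5 hours a day come from above; the average A/C draw (well under the 12kW inverter's peak) and the depth-of-discharge limit are assumptions of mine:

```python
# Does a 50 kWh bank actually cover 5 hours of A/C a day?
BANK_KWH = 50.0
MAX_DOD = 0.5               # deep-cycle banks last longer kept above ~50%
usable_kwh = BANK_KWH * MAX_DOD

AC_AVG_KW = 4.0             # assumed average draw; 12 kW inverter is peak
AC_HOURS = 5.0
ac_kwh_per_day = AC_AVG_KW * AC_HOURS

print(f"usable: {usable_kwh} kWh, A/C load: {ac_kwh_per_day} kWh/day")
print("fits" if ac_kwh_per_day <= usable_kwh else "does not fit")
```

Under those assumptions the daily A/C load fits within half the bank, leaving headroom for lights and outlets on the 2kW inverter.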

If you have the means to get off the grid, by all means, you should, because most electric companies don't care about anything but profits.

Comment Does DJB insist that the library ... (Score 3, Insightful) 140

Does DJB insist that his crypto library gets installed under /var/lib? He's always insisted that his qmail binaries be installed under /var/qmail, which had everyone I know in the unix admin/engineering field shaking their heads, knowing that putting executables and libraries on the /var filesystem is foolhardy and dangerous.

Comment Re:damn philanthropists (Score 4, Insightful) 406

Regarding your statement, "But this is typical of the Progressives, they don't mind when it is THEIR guy mucking up the politics."

It's typical of _everyone_ in politics, _everyone_ in the media, and _everyone_ with an agenda. Don't blame just one party when _everyone_ is doing it. It's human nature to deny your own guilt, and that of the people you associate with, when the goal is to discredit or disarm a group with opposing views.

Comment At what scope of time or size of output data? (Score 4, Insightful) 240

At what scope/scale of time, or range of values, does it really matter whether a PRNG is robust?
If a PRNG seeded by a computer's interrupt count, process activity, and sampled I/O traffic (audio input, fan-speed sensors, keyboard/mouse input, which I believe is a common seeding method) is deemed sufficiently robust when polled only once a second, or for only 8 bits of resolution, exactly how much less robust does it get if you poll it, say, 1 million times per second, or in a tight loop? Does it get more or less robust when asked for a larger or smaller bit field?

Unless I'm mistaken, the point is moot when the only cost of having a sufficiently robust PRNG is to wait for more entropy to be provided by its seeds or to use a larger modulus for its output, both rather trivial in the practical world of computing.
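A toy model of that trade-off: treat the seed sources as an entropy pool that refills at a fixed rate, and look at how much fresh entropy backs each poll. The 256 bits/sec rate is an invented figure for illustration:

```python
# Toy model: an entropy pool refills at a fixed rate from hardware events,
# and each poll draws from it. Poll faster than the pool refills and the
# fresh entropy per output collapses. The refill rate is invented.

ENTROPY_BITS_PER_SEC = 256.0   # assumed seed entropy from interrupts/IO

def fresh_bits_per_poll(polls_per_sec, bits_requested=8):
    """Fresh entropy backing each poll, capped at the bits requested."""
    available = ENTROPY_BITS_PER_SEC / polls_per_sec
    return min(available, bits_requested)

print(fresh_bits_per_poll(1))           # 1 poll/s: all 8 bits are fresh
print(fresh_bits_per_poll(1_000_000))   # 1M polls/s: ~0.000256 bits each
```

Which is the comment's point: the fix is simply to wait for the pool to refill (block until enough entropy accumulates) rather than to serve outputs with almost no fresh entropy behind them.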

Comment Re: Shoot first (Score 0) 871

Spoken like a true Libertarian. I'm surprised you didn't pull "authoritarian", "fascist", or "statist" out of your hat.

Society prospers when individuals work towards the prosperity of the societal unit, as well as their own being. When you stop caring about the greater good, what good are you to your country?

Would you rather just be an isolationist and give the rest of the world the finger?

Comment Re:How close to 100% is the Windows 7 percentage? (Score 4, Insightful) 246

As an IT manager overseeing the deployment and maintenance of about 60 desktops and laptops, some of which are shared among multiple employees, I've found that consistency in the OS presented to the end user is key. We upgrade one or two machines per month, and we started using Windows 7 three years ago, so about 15 systems still run XP. We're not touching 8.1 until there are no more XP systems on our network, AND people show interest in actually using 8.1, AND at least one service pack has been released to address the issues outstanding since its public release, AND we discover a way to disable the "Tiles" start screen. Supporting systems with two different desktop interfaces is a serious pain in the ass, especially for non-technical users. So far, only two people have shown interest in using Windows 8 (techie geek types), and the vast majority of our employees are averse to changing their OS at all.

I've had to customize Windows 7 a bit to make it "comfortable" for the lowest common denominator: Long-time XP/2000 users.

Data Storage

ZFS Hits an Important Milestone, Version 0.6.1 Released 99

sfcrazy writes "ZFS on Linux has reached what Brian Behlendorf calls an important milestone with the official 0.6.1 release. Version 0.6.1 not only brings the usual bug fixes but also introduces a new property called 'snapdev.' Brian explains, 'The snapdev property was introduced to control the visibility of zvol snapshot devices and may be set to either visible or hidden. When set to hidden, which is the default, zvol snapshot devices will not be created under /dev/. To gain access to these devices the property must be set to visible. This behavior is analogous to the existing snapdir property.'"
