
Comment It seems to be a web plugin (Score 1) 170

If you look at the demo video on their website - http://www.bitmanagement.com/en/demos/geo - it appears to be a browser-based plugin; the actual "demo" link tries to download a file called "BS_Contact_VRML-3DX.exe" from www.bitmanagement.com. The demo shows a user-controlled fly-over of an urban area, and seems highly applicable to a lot of military uses. (The software to *create* the model seems to be separate - the software named in the suit is the viewer for the 3D models.)

It's quite possible that the 500K number refers to the number of machines which downloaded and installed the plugin. It's also possible that the Navy "disabled tracking of installations" (as the FA states) by putting a copy on their own server, and that the vendor was tracking installations by looking at their web server logs.

That's all speculation. It's definitely a job for a lawyer at this point, and it also points out the risks of per-user licenses when you might not have control over the number of users.

Comment Re:Can only hope. He has hired smart people (Score 2) 382

All things considered, is that *really* such a bad plan? Is it any worse than what we have now, which is a government that mostly listens to big corporations?

Unfortunately he's going to open his mouth from time to time, and his advisors may not remain advisors for long if they don't back him up on what he says. One is reminded of the beginning of Game of Thrones...

Comment Re:A little perspective (Score 3, Funny) 435

What I get from that map is that, despite a clear majority of states voting for Trump, their model still for some reason "predicts" a Clinton win. Almost as if the election has been rigged. Somehow.

The astute reader will notice that 22 states on that map are colored red or pink. I'll leave you to draw your own conclusions.

Comment Re:Open Source vs. other software development (Score 1) 786

I have a great idea - let's allow developers to settle their disagreements by physical combat, just like in the Middle Ages. Set up a conference room properly, have a ref to enforce MMA rules, and if someone calls someone else out, set a time and they can have at it. If you complain that this would be unfair to women, short people, or fat people, I can trot out dozens of counter-examples - just google short / female / fat MMA fighter for a bunch of them.

The problems with this are the same as with the caustic social environment in some open source groups - (a) certain groups are more vulnerable than others; in other words, there are attacks which work against one group and don't work against another, and defenses available to some groups and not others, and (b) the ability to deliver and withstand this sort of abuse has NOTHING to do with engineering ability or how good your ideas are.

When you reach a certain level of seniority you start realizing that one of the roles you need to take at meetings is to defend the shy junior person with a good idea against the outspoken jerk who is opposing it reflexively. Unfortunately we live in a world where four times out of five that person is going to be female, and the jerk is a lot less likely to oppose the same idea proposed by a junior, shy, but male engineer.

Comment Open Source vs. other software development (Score 0) 786

From the original article: In contrast, software development is inherently fair. If you write it correctly, your program runs. Otherwise, it doesn’t. Your computer doesn’t get offended if you don’t state your message well. It doesn’t hold a grudge. It just waits until you write it correctly.

And *this* is the distinction between Open Source and commercial software development. To begin with, a job of almost any sort is inherently a social environment. Not only that, but any large engineering project lives or dies by communication between team members. How many of you have worked with a "lone wolf" who wanted to go and write his (in my experience, always 'his') own code, without talking to anyone else or figuring out whether it would work with everyone else's? Or worse yet, someone who arbitrarily decided that all their code needed to be re-written just when you needed to get a release out, because it failed their internal criteria for perfection? And unless that person was truly brilliant, how many of you would willingly work with that person again?

Open source projects often subscribe to the myth of the individual contributor at the expense of the team, a myth which is sustained by not only people's desire to believe it, but also by the rare examples of people who really *can* accomplish amazing things on their own. Part of this myth is the idea that social relationships, and by extension social niceties, are irrelevant in the project. This myth, in turn, can shape online interactions in ways which are uncomfortable for normally-socialized white males, and positively vicious for women and minorities.

Comment The physics of HDD density limits (Score 4, Interesting) 122

As another commenter pointed out, the 1.5Tbit/in^2 number in the posting (which is taken from the original article) is pretty bogus. Seagate's 2TB 7mm 2.5" drive has an areal density of 1.32Tbit/in^2, and it's probably a safe bet that they (and WD) can wring another 15% density improvement out of SMR technology in the next year or two.
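
Quick arithmetic on that bet (my own rounding, nothing from the article):

    1.32 Tbit/in^2 x 1.15 ~= 1.52 Tbit/in^2

which is already past the figure the article treats as a ceiling.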

For those commenters bemoaning the fact that the highest density drives today are SMR rather than "regular" drives, get over it - the odds of conventional non-HAMR, non-shingled drives getting much denser than the roughly 1TByte per 3.5" platter we see today are slim to none:

To get smaller bits, you need a smaller write head. That smaller write head has a weaker magnetic field. The weaker field means the media has to be more easily magnetizable (i.e. has lower coercivity). The lower coercivity media needs to have a bigger grain size (size of the individual magnetic domains), so that grains don't flip polarity too often due to thermal noise.

Since a bit can't be smaller than a grain, that means the smaller your write head is, the larger your minimum bit size is. Eventually those two lines cross on the graph, and it's game over.
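
For anyone who wants the back-of-the-envelope version (standard rule-of-thumb numbers, nothing vendor-specific): a grain holds its polarity for ~10 years only if its anisotropy energy beats thermal energy, and coercivity scales with anisotropy:

    K_u * V >~ 40-60 k_B * T    (thermal stability)
    H_c ~= 2 * K_u / M_s        (coercivity the head field has to overcome)

A weaker head field forces H_c, and therefore K_u, down, which forces the grain volume V up - exactly the squeeze described above.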

Two ways of getting out of this are SMR (shingled magnetic recording) and HAMR (heat-assisted magnetic recording):

SMR - stop making the write head smaller, but keep making the bits smaller. Overlap tracks like clapboards on the side of a house (where'd this "shingle" nonsense come from?), allowing small bits with large write heads. Of course this means that you can't re-write anything without wiping out adjacent tracks, which means you need something like a flash translation layer inside the drive (toy sketch at the end of this comment), and because of that, random writes might be *really* slow sometimes. (I've seen peak delays of 4 seconds when we're really trying to make them behave badly.)

HAMR - write your bits with a tiny, wimpy head onto media that is momentarily low-coercivity, and store them on high-coercivity media with tiny magnetic grains. How do you do this? By heating the high-coercivity media with a laser (say to 450C or so) to reduce its coercivity to reasonable levels, then letting it cool down afterwards. But you need a big laser (20mW?) on each head, which causes a whole bunch of problems - which is probably why they're delaying them.

Oh, and you can overlap tracks on HAMR drives, creating an SMR HAMR drive, with even higher density but the performance problems of both technologies. Which they'll probably do as soon as HAMR hits the market, because with today's SSDs the market for fast HDDs is dying a very quick death.
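
To make the "flash translation layer inside the drive" point concrete, here's a toy Python sketch of the general approach (my simplification, not Seagate's actual firmware): random writes land in a small persistent cache region and get merged back into shingled bands during idle time, a whole band at a time, because overlapping tracks can't be updated in place.

    class ToySMRDrive:
        def __init__(self, band_size=256):
            self.band_size = band_size      # blocks per shingled band
            self.bands = {}                 # band number -> list of blocks
            self.media_cache = {}           # LBA -> data, staging area for random writes

        def write(self, lba, data):
            self.media_cache[lba] = data    # fast path: just append to the cache

        def clean(self):
            # Background garbage collection: fold cached blocks back into their
            # bands; on a real drive each touched band is rewritten end to end.
            for lba, data in self.media_cache.items():
                band = self.bands.setdefault(lba // self.band_size, [None] * self.band_size)
                band[lba % self.band_size] = data
            self.media_cache.clear()

        def read(self, lba):
            if lba in self.media_cache:
                return self.media_cache[lba]
            band = self.bands.get(lba // self.band_size)
            return band[lba % self.band_size] if band else None

If the cache never gets idle time to drain, random writes eventually stall behind whole-band rewrites - which is where those multi-second peak delays come from.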

Comment Dumbass developers, too (Score 1) 121

I'm reminded of the old SNL "Bag O' Glass" skit - some products (or product features) are just plain dangerous, and saying "but we explain the risks on page 17 of the manual" isn't a good excuse.

How much effort would it take to set defaults that (a) disable anonymous FTP for addresses outside of the local subnet, and (b) inject a fake robots.txt that prevents search engine indexing? And then add an explanation of the risks if you try to disable those defaults?
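
For a sense of how little effort: here's a minimal Python sketch of those two defaults (the subnet, names, and file contents are purely illustrative - not any vendor's actual code or config):

    import ipaddress

    LOCAL_SUBNET = ipaddress.ip_network("192.168.1.0/24")   # assumption: the device's LAN

    def allow_anonymous(client_ip: str) -> bool:
        # Default policy (a): anonymous FTP only from the local subnet.
        return ipaddress.ip_address(client_ip) in LOCAL_SUBNET

    # Default policy (b): a robots.txt that tells crawlers to stay out entirely.
    ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

    print(allow_anonymous("192.168.1.42"))   # True  - LAN client
    print(allow_anonymous("203.0.113.7"))    # False - outside address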

Comment Re:Interesting idea, nasty downsides (Score 4, Insightful) 93

Drive performance is kind of like airplane legroom - people gripe about it, but in the end they ignore it and buy the cheap ticket.

Shingled drives aren't better - they're bigger, and that's what people pay for. WD's 10TB helium drive is shingled, and I would guess that every drive over 10TB will be shingled for the foreseeable future. By the time HAMR and BPM come out, SSDs will probably have killed off the high-performance drive market, so those technologies will probably be released as capacity-optimized shingled drives, too.

Submission + - New Seagate Shingled hard drive teardown

Peter Desnoyers writes: Shingled Magnetic Recording (SMR) drives are starting to hit the market, promising larger drives without heroic (and expensive) measures such as helium fill, but at a cost — data can no longer be over-written in place, requiring SSD-like algorithms to handle random writes.

At the USENIX File and Storage Technologies conference in February, researchers from Northeastern University (disclaimer — I'm one of them) dissected shingled drive performance both figuratively and literally, using both micro-benchmarks and a window cut in the drive to uncover the secrets of Seagate's first line of publicly-available SMR drives.

TLDR: It's a pretty good desktop drive — with write cache enabled (the default for non-server setups) and an intermittent workload it performs quite well, handling bursts of random writes (up to a few tens of GB total) far faster than a conventional drive — but only if it has long powered-on idle periods for garbage collection. Reads and large writes run at about the same speed as on a conventional drive, and at $280 it costs less than a pair of decent 4TB drives. For heavily-loaded server applications, though, you might want to wait for the next generation.

Videos (in 16x slow motion) showing the drive in action — sequential read after deliberately fragmenting the drive, and a few thousand random writes.

Comment Those travel time signs on the highway... (Score 1) 168

in Massachusetts (and probably other places) use Bluetooth phone tracking: http://www.mass.gov/governor/pressoffice/pressreleases/2014/0411-governor-patrick-announces-go-time-expansion.html

"The GO Time real time traffic system measures travel times between two points by anonymously tracking the Bluetooth enabled devices carried by motorists and their vehicles. The system complies with new federal legislation that requires real time traffic information to be provided to the public."

Comment Reasonable conclusions, bad methodology (Score 1) 256

The author of the study makes a lot of arguments based on factors that are easily changed, like the configuration of an SSD. However there are a few basic technological trends:

1. Disks and NAND flash are both getting denser at fairly comparable rates - disk has been getting cheap faster than flash lately, but may have a hiccup in the next few years. Where flash has conclusively replaced disk is in applications like iPods and mobile, where "enough" storage is cheaper than a single disk. (The iPod went flash when 2GB of flash reached $50, which was about the price of a micro-disk.) It's not going to replace disk for high-volume data storage anytime soon.

2. With today's disks and chips, a hard disk drive has a relatively fixed cost (the cost of the factory amortized over the number of drives produced) and similarly flash has a relatively fixed cost (cost of fabrication plant over the number of chips produced in its useful lifespan). The number of bits on each doesn't really matter - that's why packing them more tightly makes the bits cheaper.

3. Disk bandwidth for 7200 RPM drives isn't going to go over, say, 300MB/s anytime soon with today's perpendicular recording technology - if the disk is moving past the head at a constant speed, the only way to get more bits through per second is to pack them more closely on the platter (back-of-the-envelope below). And the best you can do by spinning faster is a factor of 2, at 15K RPM. (and those are very low capacity and very expensive)
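
Back-of-the-envelope for point 3 (my own approximation, assuming bits-per-inch and tracks-per-inch scale together):

    sequential bandwidth ~ (bits per inch along the track) x (linear velocity)
    bits per inch ~ sqrt(areal density)

so doubling areal density only buys about a 1.4x improvement in sequential throughput, and the velocity term is stuck at 7200 RPM for capacity drives.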

2 and 3 mean that flash can easily supply cheaper bandwidth than disk - it's the SSD maker's choice how widely they want to stripe data over the chips in the drive. (64 ways isn't unreasonable) There's a huge advantage today, and it will stay the same (see #2) if flash chips don't get faster, and get bigger if they do. (at some point getting that speed may require paying for more flash than you need, but at that point a single disk will be bigger than you need, too)

For years flash was getting slower and less reliable (requiring more complex error correcting codes) as it got denser - that's partly why it got cheap so much faster than e.g. RAM, where you can't cut those corners. The next generation of flash (3D NAND) may reverse that for a while; in addition SSDs are finally a noticeable fraction of the market so there's an incentive for vendors to make faster flash. (3 years ago SSDs were 3% of the flash market, and the rest went into iPods, phones, and removable drives and cards - SSD vendors had to make do with flash that was designed for systems where you don't care about performance)

Comment Bubble memory anyone? (Score 1) 256

Does anyone else remember when bubble memory was supposed to replace hard drives? There's a long road between the current state of post-NAND technologies (Phase Change Memory, spin-torque-transfer magnetic RAM, Resistive RAM, and a few others) and mass-market high-volume chips. If one of them becomes good enough for someone to risk a $5 billion fab on, and it gives more bits per dollar than flash, then it will probably replace flash almost instantly. If no one bets a cutting-edge fab on it, however, it doesn't matter how promising the technology is. (In particular, the "10x better" claim is based on assumptions that e.g. PCM can be built in sizes vastly smaller than today's flash - of course we don't know how to build the fabrication plants to do that yet. No one has a story for something 10x better at the same feature size.)

Comment not quite... (Score 1) 256

The paper is from Steve Swanson's group at UCSD, *not* Microsoft Research.
And the reasons for slowdown with more bits per cell: (a) writing is done in incremental steps, which have to be smaller for the more precise levels needed for 8 or 16 levels per cell, requiring more steps, and (b) the charge on a flash cell can't be measured directly; instead the chip can measure which cells in a page are above (or below) a threshold voltage, so sensing 16 levels requires 15 separate read operations.
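
A toy illustration of point (b) - the voltages and thresholds below are made up, but it shows why distinguishing 16 levels costs 15 page-wide reads:

    def resolve_levels(cell_voltages, thresholds):
        # One "read operation" per threshold: the chip only learns above/below
        # for the whole page at each reference voltage.
        levels = [0] * len(cell_voltages)
        for v_ref in sorted(thresholds):
            levels = [lvl + (1 if v > v_ref else 0) for lvl, v in zip(levels, cell_voltages)]
        return levels

    thresholds = [i + 0.5 for i in range(15)]    # 15 reference voltages -> 16 levels
    print(resolve_levels([0.2, 6.9, 15.3, 3.4], thresholds))   # [0, 7, 15, 3]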
