Tyson, is that you??
We do have interoperability standards: I used to work for the department that specifies them.
They just don't follow them. Case in point: the meta-standard (HL7 v3) that we used for our messaging had a mechanism for not just sending NULL values, but also sending a reason why they were null (e.g. the value wasn't measured). The vendor had no truck with that, and was using magic numbers instead (e.g. baby weights of 9999g, which is outside the realm of sanity for a newborn). I was tasked with revamping one of the messages, and I specified that the proper NULL flavours be used and the magic numbers ditched.
The vendor at this point rolled out the "full system test" clause in their contract, whereby they could charge £N * 10^6 to perform a full system test because we'd changed the behaviour of one field. They got their way and kept their magic numbers. Other systems expecting messages that meet the conventions of the overall meta-standard now carry an additional development cost to cope with those magic numbers.
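For illustration, here's a minimal Python sketch of the kind of translation those other systems now have to bolt on - mapping vendor magic numbers back to proper null flavours. The nullFlavor codes themselves (UNK, NAV) are real HL7 v3 vocabulary; which magic number maps to which flavour is purely an assumption here.

```python
import xml.etree.ElementTree as ET

# Hypothetical vendor magic numbers -> HL7 v3 null flavours.
# UNK = unknown, NAV = temporarily unavailable.  The mapping below is
# an assumption for illustration, not any vendor's actual convention.
MAGIC_NUMBERS = {
    "9999": "UNK",
    "8888": "NAV",
}

def fix_weight(elem):
    """Replace a magic-number weight with a proper nullFlavor attribute."""
    if elem.get("value") in MAGIC_NUMBERS:
        elem.set("nullFlavor", MAGIC_NUMBERS[elem.attrib.pop("value")])
        elem.attrib.pop("unit", None)  # a null value carries no unit
    return elem

msg = ET.fromstring('<value value="9999" unit="g"/>')
fix_weight(msg)
print(ET.tostring(msg).decode())
```

This is exactly the sort of "quirks" layer mentioned below: pure overhead that exists only because one vendor wouldn't follow the meta-standard.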
This is the reason for the focus on interoperability over just having standard data structures - it lets vendors continue to use their own proprietary data schemes, and raises a barrier to new participants in the market: not only do you have to implement all the standard interfaces to interoperate, you probably also have to design in a "quirks" layer to cope with each vendor's *special* variations.
Cost: "between NAND and DRAM."
Even if it were cheaper to fab right now than NAND, they wouldn't admit it, because they'd be less able to charge a premium price for it. I'm betting that since it has a higher density than NAND and a simpler construction, it will probably end up cheaper than NAND in the long run.
And DRAM is horribly expensive to fab. So "cheaper than DRAM" leaves a large window.
Right now they are pitching it at the enterprise storage market but that's only smart business - while they ramp up production capacity, get the highest price for it you can.
From the video, it's memristor tech, but everyone is reporting that they are carefully abstaining from letting on what the materials are. Which is fair enough - they want to sew up the market for this stuff as long as possible.
It's going to cost more than NAND flash.
But it would make a GREAT cache for spinning rust. None of the longevity problems of NAND, 1,000 times faster. Ka-chow.
They already have this: it now defaults to "on".
It's DNS-level filtering though, so it can be defeated by a simple change of settings. The younger generation are mostly techno-doofuses though, so it probably defeats them.
You see it a lot more if you are searching for other types of content through less than legal means.
If you want to torrent something, you'll get pop-ups of webcam girls, porn sites, etc, that you didn't ask for and weren't in the market for. I imagine for the youth crowd, that's probably the main way they get exposed to it - they want to torrent the latest Iron Man movie, and they get pop-ups for Iron Dick.
Option C: Get a subscription with a newsgroup service for a fraction of the money a porn site will cost you, download as much as you like over a securely encrypted connection, have plausible deniability as to what the content was.
Option D: Get one of those P2P thingamajiganibobs
There are so many ways to get porn on the internet other than the vanilla website-and-a-subscription method.
And porn has the ultimate "Long Tail". There already exists enough digital porn for virtually anyone with a normal-ish kink spectrum to whack off to something new twice a day for the rest of their life. Even if you destroy the porn industry (which this won't, because not every jurisdiction is stupid), people will still trade and use porn, with impunity.
In this country it used to be that under-18s could only get a Solo card, which a lot of places used as a basic age check.
But I read that our banks don't issue them any more: they weren't as widely accepted, because they didn't let you spend funds you didn't actually have.
This is why so many projects require contributors to sign over their copyright when they commit code. They want to retain that control over the licensing centrally.
Copyright assignment is a speedbump that deters contributions, though. Many people are uncomfortable with the notion that their freely contributed work can be taken into a closed project. Many more are simply put off by the paperwork involved.
You only have to look at what happened to OpenOffice when it forked. OpenOffice kept the copyright assignment clause. Now the project is dying in a garden shed in the Apache server farm. LibreOffice ditched copyright assignment and behold, it thrives.
I can confirm that's true. Digging out the driver disk when you reinstalled was a total PITA. A lot of the secret sauce on the card was software; without DRM controls, their earlier hardware could probably have been pushed to offer most of the features of their later models.
The Linux drivers just work though. I remember booting Linux on that hardware the first time and seeing a colourful SBLive banner in the bootup messages and thinking "Huh. It works!"
Not to shoot out of them. To produce images in them.
At least one piece of science fiction I've read has eyeball lasing cells in it. Now it seems less fictional.
You do need special considerations for XML files though - there are several solutions.
The weakest solution is to rely on the target user's ability to spot diffs and correctly merge XML files by hand - and to never use automatic merging, because the nature of XML files means that conflicting changes may not occur in adjacent lines.
The next (and inadequate) solution is to order the XML consistently - you can do this in your diff tool, or you can write your tools to produce a reliably ordered file in the first place.
Many tools that work on XML files exhibit what I call "juggling" - the elements and attributes change order when you change their values or their siblings', because the software manipulates the file directly through the DOM - creating new objects and removing the old ones from the collection. This is a real PITA for text-based diff tools, because not all the changes will even conflict with each other (element sequences are often spread across multiple lines, more so if you put attributes on their own lines to make merging easier).
So, you can either write your code to emit a consistent order - usually by serializing a fresh XML stream from a model when you write the file.
Or you can add a layer that re-orders the document when you diff it - many of the available diff tools will let you do this. For some files, I used to write an XSLT sheet (to re-order elements consistently). For attributes, I wrote an extra option for Tidy that sorts attributes - doing that plus laying them out on separate lines is sufficient for many files. I've gone as far as writing custom tools that unpack HTML written into an attribute (with all the escape sequences that entails) into a CDATA section for clarity, run it through Tidy, and then repack everything when you're done.
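To make the idea concrete, here's a minimal Python sketch of that normalisation - sorted attributes, one per line - assuming xml.etree is enough for your files. It deliberately skips namespaces and attribute-value escaping, so it's a diff-layer sketch, not a production serializer.

```python
import xml.etree.ElementTree as ET

def canonicalise(elem, indent=0):
    """Emit an element with sorted attributes, one attribute per line,
    so that text-based diff tools see stable, mergeable output.
    Sketch only: no namespace handling, no attribute-value escaping."""
    pad = "  " * indent
    lines = [f"{pad}<{elem.tag}"]
    for name in sorted(elem.attrib):
        lines.append(f'{pad}    {name}="{elem.attrib[name]}"')
    children = list(elem)
    if not children and not (elem.text and elem.text.strip()):
        lines[-1] += "/>"
        return "\n".join(lines)
    lines[-1] += ">"
    if elem.text and elem.text.strip():
        lines.append(pad + "  " + elem.text.strip())
    for child in children:
        lines.append(canonicalise(child, indent + 1))
    lines.append(f"{pad}</{elem.tag}>")
    return "\n".join(lines)

doc = ET.fromstring('<widget size="3" colour="red"><label text="hi"/></widget>')
print(canonicalise(doc))
```

Run both versions of a file through this before diffing and the "juggling" disappears: attribute order is always alphabetical, and each attribute change lands on its own line.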
Intermediate: I've thought of taking this a step further and converting the XML to a directory tree of text files designed to merge well, principally to make things clearer for end-users who currently have the kind of diff-tool-plus-converter described above but still occasionally make merge errors.
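A sketch of that step-further idea, assuming one directory per element and one small text file per attribute is an acceptable layout. Small files merge far better than one big XML document; this ignores text content and assumes sibling tags are unique, so it's the shape of the idea rather than a finished tool.

```python
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def explode(elem, dest: Path):
    """Unpack an XML element into a directory tree: one directory per
    element, one text file per attribute.  Sketch only: ignores text
    content and assumes sibling tags are unique."""
    dest.mkdir(parents=True, exist_ok=True)
    for name, value in elem.attrib.items():
        (dest / (name + ".txt")).write_text(value + "\n")
    for child in elem:
        explode(child, dest / child.tag)

doc = ET.fromstring('<config verbose="true"><db host="local"/></config>')
dest = Path(tempfile.mkdtemp()) / "config"
explode(doc, dest)
print(sorted(p.relative_to(dest).as_posix() for p in dest.rglob("*.txt")))
```

The inverse (folding the tree back into XML) is left out, but it's the same walk in reverse.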
The next step is to write tools that specifically diff your model. This is probably overkill for most developers, because we have the kind of brain that can abstract a text representation of the model and map it to the actual model that will be created. For end users, though, it may well be advisable.
Diff/merge tools are a field that needs more work - currently the main users are developers, who can cope with them being a bit immature. But we will increasingly see collaborative tools built on the kinds of version control that we take for granted, and normal users will need to be able to do this stuff too.
> no more free development for
Community supports the vast majority of useful features... and really, what's the problem with it costing money if you want more than 5 developers or have a $1M+ turnover company? You're still allowed Community if you're using it for classroom learning, academic research, or open-source development.
If you're working for a company that presumably makes money from writing software (in one way or another), is it really so bad to give some of that money to a company whose product helped you do it? If you hire a developer, their salary is far more than the $1,119 it will cost you for VS Pro with MSDN; do you really want to waste their time by making them write their code with a text editor and build it with just the
I usually prefer SharpDevelop for my
And the most depressing thing about this?
It basically excludes new players from the market. Only the big firms have the resources to get through all that compliance paperwork. Which means the only people left are the lying, cheating scum that caused the problem in the first place.
This has only fostered a risk-averse mentality that chokes every aspect of government and big business. No wonder it's the small firms that have the reputation for innovation - it's because they still have their innocence and aren't wasting 95% of their energy looking over their shoulder and checking up on things to cover their asses.