Uh.. secure communications for the client even if the adversary controls the client? Good luck with that.
Shortest exit path routing.
It's pretty much what everyone's expected to do, and it makes technical sense: you don't know the topology of the other guy's network, so if you have a packet for one of his customers, you should hand it over to that network at the closest possible point, rather than hauling it over your backbone and possibly having the other guy haul it back if your assumptions about costs are wrong.
But this means providers that are "push" heavy use far fewer backbone resources than the provider receiving the traffic. Historically, that has resulted in payments when peering connections have significantly asymmetric traffic flows.
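A rough sketch of the idea in Python, with made-up router names and internal costs (nothing here models real BGP/IGP machinery):

```python
# Hypothetical sketch of "shortest exit" (hot-potato) route selection:
# among the peering points that can reach the destination network, hand
# the packet off at the one closest to us (lowest internal cost), and let
# the other network do the long haul. Names and costs are made up.

# Internal (IGP) cost from our router to each peering point
igp_cost = {"peer-nyc": 10, "peer-chicago": 40, "peer-seattle": 75}

def pick_exit(peering_points_for_dest: list[str]) -> str:
    """Pick the exit closest to us, regardless of how far the destination
    is from that exit inside the other guy's network."""
    return min(peering_points_for_dest, key=lambda p: igp_cost[p])

# A packet for a customer reachable via any of the three peering points
# leaves at peer-nyc; the receiving network carries it the rest of the way.
print(pick_exit(["peer-nyc", "peer-chicago", "peer-seattle"]))  # -> peer-nyc
```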
Name one thing you know firsthand is connected to the Internet and could result in casualties if attacked. Sure, banks' computers could crash; sure, Amazon could go down; but ICBMs are not going to launch and the power grid won't go down. If anything that could actually cause casualties is connected to the Internet, then it shouldn't be.
SCADA (Supervisory Control and Data Acquisition) technology provides the means to monitor and control distributed systems from a central location. SCADA systems are widely used in the telecommunications, power distribution, oil & gas, and transportation industries, and are typically deployed with dedicated communication infrastructure and proprietary software and hardware.
iSCADA, on the other hand, is an Internet-based SCADA solution that uses the public Internet infrastructure as its data communication medium. It combines traditional SCADA technology with the open data communication protocols, services, and data formats of the public Internet to deliver cost-effective and easy-to-use SCADA solutions. With iSCADA, it is now feasible to monitor and control virtually anything from anywhere in the world.
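For a sense of how low the barrier is, here's a minimal sketch of what the monitoring half boils down to; the endpoint URL, JSON fields, and alarm threshold are all hypothetical:

```python
# Minimal sketch of Internet-based SCADA-style monitoring: poll a remote
# sensor over plain HTTP and alarm on a threshold. A real deployment would
# add authentication, TLS, retries, and a control channel.
import json
import time
import urllib.request

SENSOR_URL = "http://example.com/pump-station/7/telemetry"  # hypothetical

def poll_once() -> dict:
    with urllib.request.urlopen(SENSOR_URL, timeout=10) as resp:
        return json.load(resp)

while True:
    reading = poll_once()
    if reading.get("pressure_psi", 0) > 150:  # made-up alarm threshold
        print("ALARM: overpressure at pump station 7")
    time.sleep(60)  # poll every minute
```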
This kind of stuff is getting deployed more and more.
I think he was engaging in some hyperbole for effect, but..
Most of Wisconsin gets over 4 feet of snow per year, and portions of northern Wisconsin get over 13 feet.
Yes. For instance, under California law:
Any person who willfully threatens to commit a crime which will result in death or great bodily injury to another person, with the specific intent that the statement, made verbally, in writing, or by means of an electronic communication device, is to be taken as a threat, even if there is no intent of actually carrying it out, which, on its face and under the circumstances in which it is made, is so unequivocal, unconditional, immediate, and specific as to convey to the person threatened, a gravity of purpose and an immediate prospect of execution of the threat, and thereby causes that person reasonably to be in sustained fear for his or her own safety or for his or her immediate family's safety, shall be punished by imprisonment in the county jail not to exceed one year, or by imprisonment in the state prison.
or (c) Any person who maliciously informs any other person that a bomb or other explosive has been or will be placed or secreted in any public or private place, knowing that the information is false, is guilty of a crime punishable by imprisonment in the state prison, or imprisonment in the county jail not to exceed one year.
Or there are many other choices of statute, depending on the specific circumstances. Note that both of these require malice. If you were going to legally set off a bomb as part of a demonstration because you had a pyrotechnics license, neither of these would apply.
The short answer is: choose CA/CP/AP on a transaction-by-transaction basis, depending on application requirements. Also of note: network delay is effectively a special kind of "partition", requiring an engine that can keep massive workloads in flight and reconcile/order non-commutative changesets in a distributed fashion.
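To make that concrete, here's what a per-transaction choice might look like through an invented API; this is not Translattice's actual interface, just the shape of the idea:

```python
# Invented API to illustrate choosing consistency per transaction.
from enum import Enum

class Mode(Enum):
    CP = "quorum"      # refuse to answer during a partition rather than be wrong
    AP = "available"   # answer from any replica; reconcile changesets later

def transfer_funds(db, src, dst, amount):
    # Money movement is non-commutative: order matters, so demand CP.
    with db.transaction(mode=Mode.CP) as txn:
        txn.execute("UPDATE accounts SET bal = bal - %s WHERE id = %s", (amount, src))
        txn.execute("UPDATE accounts SET bal = bal + %s WHERE id = %s", (amount, dst))

def record_page_view(db, page_id):
    # Counter increments commute, so AP is fine; replicas merge later.
    with db.transaction(mode=Mode.AP) as txn:
        txn.execute("UPDATE pages SET views = views + 1 WHERE id = %s", (page_id,))
```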
And that's what Translattice does, actually: for the database part of the system, we transparently shard large tables behind the scenes and figure out how to place the shards on the available computing resources, taking into account historical usage patterns and administrators' policies on how data must be stored (for redundancy and compliance purposes). A different population of nodes is used to store each shard, and the redundancy is effectively loosely coupled, so when a failure or partition occurs, the work involved in re-establishing redundancy is shared fairly over all nodes. This provides linear scalability for many workloads and better redundancy properties, and as a side benefit can also position data closer to where it's consumed.
When it comes time to access the data, the query planner in our database figures out how to efficiently dispatch the query to the minimal necessary population of nodes, introducing map and reduce steps to provide for data reduction and efficient execution.
All of the table storage is directly attached to the nodes, eliminating much of the need for a storage area network and scaling beyond where shared-disk database clusters can go.
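As a toy illustration (emphatically not our actual code), the two mechanisms above look something like this: a distinct node population per shard, and a query that scatters a map step to only the relevant nodes before a central reduce:

```python
# Toy sketch: (1) assign each shard redundantly to a different population
# of nodes, so re-replication work after a failure spreads over the whole
# cluster; (2) dispatch a query only to nodes holding relevant shards,
# with a local "map" step and a central "reduce" step.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]
REPLICAS = 3

def nodes_for_shard(shard_id: int) -> list[str]:
    """Derive a distinct replica set per shard from a hash of its id."""
    h = int(hashlib.sha256(str(shard_id).encode()).hexdigest(), 16)
    start = h % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

def run_query(shard_ids, map_fn, reduce_fn):
    """Scatter the map step to one replica per relevant shard, then reduce."""
    partials = [map_fn(nodes_for_shard(s)[0], s) for s in shard_ids]
    return reduce_fn(partials)

# e.g. a COUNT(*) touching shards 3 and 7: each node counts its shard
# locally (map), the coordinator sums the partial counts (reduce).
total = run_query([3, 7],
                  map_fn=lambda node, s: 100 + s,  # stub for a local count
                  reduce_fn=sum)
print(total)  # 210
```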
I didn't expect we'd be on Slashdot just yet. I'm Michael Lyle, CTO and cofounder of Translattice.
With regard to the original submitter's question, we'd love to talk to him. How much we can help, of course, depends on the specific scenario he's hitting.
What we've built is an application platform constituted from identical nodes, each containing a geographically decentralized relational database, a distributed (J2EE compatible) application container, and distributed load balancing and management capabilities. Massive relational data is transparently sharded behind the scenes and assigned redundantly to the computing resources in the cluster, and a distributed consensus protocol keeps all of the transactions in flight coherent and provides ACID guarantees. In essence, we allow existing enterprise applications to scale out horizontally while keeping the benefits of the existing programming model for transactional applications, by letting computing resources from throughout an organization combine to run enterprise workloads.
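The consensus piece is the subtle part. Stripped down to the bare quorum intuition (real protocols like Paxos and Raft add leaders, terms, and recovery, and this is an illustration, not our actual protocol):

```python
# Bare quorum intuition behind consensus-based commit: a transaction only
# counts as committed once a majority of nodes has durably accepted it, so
# any two majorities overlap and conflicting orderings can't both win.

class Node:
    def __init__(self, up: bool = True):
        self.up = up
        self.log: list[str] = []

    def accept(self, txn_id: str) -> bool:
        if self.up:
            self.log.append(txn_id)  # durably record the commit decision
        return self.up

def commit(txn_id: str, nodes: list[Node]) -> bool:
    votes = sum(1 for n in nodes if n.accept(txn_id))
    return votes > len(nodes) // 2   # majority, or the transaction aborts

cluster = [Node(), Node(), Node(up=False), Node(), Node()]
print(commit("txn-42", cluster))     # True: 4 of 5 nodes accepted
```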
Current stacks are really complicated, multi-vendor, and require extensive integration and custom engineering for each application install. We're striving to create a world where high-performance infrastructure can be built from identical pieces.
OK, but to say it one more time, adjusted this time for your example...
Take a given JPEG2000 compressor: not all valid JPEG2000 files can be produced by that implementation, no matter what the input is. If the input is further constrained in some way -- whether it's constrained to be a particular input or, say, just something that came from a Bayer-filtered sensor -- the number of possible output files shrinks further.
A smart adversary may be able to prove that the beginning of the file (or other files you have around) was created by a given compressor, but that other portions could never have been created by that implementation, thus detecting the steganography.
If you're saying to leave the files unaltered, what you're really doing is using the contents of the files as the key, and carrying the enciphered message in the form of offset numbers. Which is really cryptography (and weak cryptography at that), and not steganography.
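That scheme looks like this in miniature; it's just a book cipher, with the carrier file as the key and the offsets as the ciphertext that actually travels:

```python
# Sketch of the scheme described above: the message is carried as byte
# offsets into an unmodified carrier file, which acts as the key. This is
# a book cipher -- weak cryptography, not steganography.
def encode(message: bytes, carrier: bytes) -> list[int]:
    # Naive: always using the first occurrence means repeated letters map
    # to repeated offsets, leaking statistics to any attentive attacker.
    return [carrier.index(b) for b in message]

def decode(offsets: list[int], carrier: bytes) -> bytes:
    return bytes(carrier[o] for o in offsets)

# Any file both sides hold works as the carrier; a pangram stands in here.
carrier = b"the quick brown fox jumps over the lazy dog"
offsets = encode(b"attack at dawn", carrier)
print(offsets)                    # the "ciphertext" that actually travels
print(decode(offsets, carrier))   # -> b'attack at dawn'
```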
If you're proposing editing the files on disk to hide the message in them, well.. that's potentially vulnerable to the types of attack I mentioned above. A small amount of modification is no more innocent than a lot of modification when this attack/detection practice applies.
Still, that 6MB high-entropy MP3 was not created by hand. You must hide the data in such a way that not only is the file functionally intact for decompression, but also that the file could have been created by an MP3 compressor (probably a known implementation, even). The problem may be constrained even further if the input is known.
If there are 100,000 combinations of MP3 compressors and reasonable settings for them, and playing the audio back returns a certain song such that it appears the song was ripped to MP3 or is otherwise tied to original source material (as almost all MP3s are), then for each song there are fewer than 100,000 likely files. If you can fingerprint the likely compressor and settings from the output (not too demanding a task, at least for the common implementations), you can then evaluate how that compressor would compress the entire song and compare for differences. You can also do things like fuzzy comparison against known circulating copies of songs, etc.
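A sketch of that detection loop, assuming the LAME CLI and a handful of made-up candidate settings; the real work is in fingerprinting encoders and doing fuzzier comparison than an exact byte match:

```python
# Enumerate plausible (encoder, settings) pairs, re-encode the known source
# material with each, and diff against the suspect file.
import subprocess

# Flags are from the LAME manual (-b bitrate, -V VBR quality), but treat
# the whole candidate list as illustrative.
CANDIDATES = [
    ["lame", "-b", "128"],
    ["lame", "-b", "192"],
    ["lame", "-V", "2"],
]

def reencode(settings: list[str], source_wav: str, out_path: str) -> bytes:
    subprocess.run(settings + [source_wav, out_path], check=True)
    with open(out_path, "rb") as f:
        return f.read()

def has_plausible_history(suspect: bytes, source_wav: str) -> bool:
    for settings in CANDIDATES:
        if reencode(settings, source_wav, "/tmp/candidate.mp3") == suspect:
            return True   # some known encoder/settings pair reproduces it
    return False          # nothing reproduces it: suspicious
```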
Keeping the file functionally intact is not hard. Giving the file a plausible, non-steganography-related history of how it was created, one that holds up against smart people carefully looking at it, is hard.