FBI Attempts To Prevent Disclosure of Stingray Use By Local Cops

Posted by Soulskill
from the annnnnd-we're-back dept.
Ever since the public became aware that law enforcement is making use of StingRay devices — hardware that imitates a cellular tower so that nearby mobile devices connect to it — transparency advocates have been filing Freedom of Information Act requests to see just how these devices are being used. But these advocates have now found that such requests relating to local police are being shunted to the FBI, which then acts to prevent disclosure.

ACLU lawyer Nathan Wessler says, "What is most egregious about this is that, in order for local police to use and purchase stingrays, they have to get approval from the FBI, then the FBI knows that dozens of police departments are using them around the country. And yet when members of the press or the public seek basic information about how people in local communities are being surveilled, the FBI invokes these very serious national security concerns to try to keep that information private."

Comment: Distributed notification (Score 1)

by enriquevagu (#48977743) Attached to: Site Launches To Track Warrant Canaries

So... I add a canary to my site, and when I remove it, you post an announcement on yours. Aren't we jointly building a distributed system that violates the compulsory silence associated with the order? I mean, a canary is used precisely because an explicit announcement is forbidden, so this system might constitute an explicit violation of the gag order, without the original user (the one who added the canary) even knowing. Is this correct? Are both parties liable?
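For what it's worth, the mechanics of such a tracking site are trivial: it only has to diff successive snapshots of a page against the canary text. A minimal sketch (the function name and sample strings are mine, not from the site):

```python
def canary_status(previous_page: str, current_page: str,
                  canary_text: str) -> str:
    """Classify a site's warrant-canary state between two page snapshots."""
    was_present = canary_text in previous_page
    is_present = canary_text in current_page
    if was_present and not is_present:
        return "canary removed"    # the event the tracker announces
    if is_present:
        return "canary present"
    return "canary never seen"

old = "<html>We have never received a National Security Letter.</html>"
new = "<html>Transparency report.</html>"
print(canary_status(old, new, "We have never received a National Security Letter"))
# prints "canary removed"
```

Which is exactly the point: the "announcement" is generated mechanically from the canary's disappearance, with no explicit statement by anyone.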

United Kingdom

UK Broadcaster Sky To Launch Mobile Service

Posted by samzenpus
from the all-in-one dept.
An anonymous reader sends word that British pay-TV firm Sky will launch a mobile phone service next year in partnership with O2's Spanish parent, Telefonica. Sky will use Telefonica UK's wireless network, enabling the satellite broadcaster to offer mobile voice and data services for the first time. This takes Sky into the battle for "quad play", adding mobile to its existing internet, landline and TV services. Offering all four is seen as the next big UK growth area for telecoms firms and broadcasters.

Why Didn't Sidecar's Flex Pricing Work?

Posted by samzenpus
from the you-get-what-you-pay-for dept.
Bennett Haselton writes: Sidecar is a little-known alternative to Lyft and Uber, deployed in only ten cities so far, which lets drivers set their own prices to undercut other ride-sharing services. Given that most amateur drivers would be willing to give someone a ride for far less than the rider would be willing to pay, why didn't the flex-pricing option take off? Keep reading to see what Bennett has to say.

Comment: Original sources (Score 2)

by enriquevagu (#47775417) Attached to: Research Shows RISC vs. CISC Doesn't Matter

It is really surprising that neither the linked ExtremeTech article nor the Slashdot summary cites the original source. This research was presented at HPCA'13 in a paper titled "Power Struggles: Revisiting the RISC vs. CISC Debate on Contemporary ARM and x86 Architectures" by Emily Blem et al., from the University of Wisconsin's Vertical Research Group, led by Dr. Karu Sankaralingam. You can find the original conference paper on their website.

The ExtremeTech article indicates that there are new results with some additional architectures (MIPS Loongson and AMD processors were not included in the original HPCA paper), so I assume they have published an extended journal version of this work, which is not yet listed on their website. Please add a comment if you have a link to the new work.

I have no affiliation with them, but I was familiar with the original HPCA work.

Comment: Problem and possible alternatives (Score 5, Informative)

This is a real pity for the TM community. This is not the first chip with hardware transactional memory support: the Sun Rock was announced to have hardware TM support, and the IBM Blue Gene/Q Compute chip also supports it. Unlike other proposals for unbounded transactional memory, all these systems employ Hybrid Transactional Memory (ref, ref, ref), in which restricted hardware transactions are designed to coexist correctly with unbounded software transactions, so a software transaction can be started when a hardware transaction fails for some unavoidable reason (such as insufficient cache capacity or associativity to hold the transaction's speculative data, rather than a conflict). Note that, in any case, very large transactions should arguably be very uncommon, since they would significantly reduce performance (much like very large critical sections protected by locks).

The problem with a hardware implementation of transactional memory is that it is not simply a new set of instructions independent of the rest of the processor. HTM touches multiple aspects of the design: multiversioned caching for speculative data; allowing committed speculative (transactional) instructions to be rolled back later (note that in any other speculative operation, such as instructions after a branch prediction, the speculation is always resolved before the instruction commits, because the branch commits earlier); tight integration with the coherence protocol (see LogTM-SE for an alternative on this last point, but still...); a mechanism to support atomic commits in the presence of coherence invalidations... From the point of view of processor verification, this is a complete nightmare, because these new "extensions" impact the entire processor pipeline and coherence protocol, and verifying that every single instruction and data structure behaves as expected in isolation does not guarantee that they will operate correctly in the presence of multiple transactions (and conflicting non-transactional code) on multiple cores. There are some formal studies such as this or this, and the IBM people discuss the verification of their Blue Gene TM system in this paper (paywalled).

As others have commented, the nature of the "bug" has not been disclosed. However, since it seems to be easy to reproduce systematically, I would expect it to be related to incorrect handling of speculative data within a single transaction (or something similar), rather than races between multiple transactions.

Regarding the alternatives, Intel cannot simply remove the opcodes of these instructions, because existing code would fail. I assume the patch will make all hardware transactions fail on startup with a specific error code (bit 1 of EAX indicates whether the transaction can succeed on a retry; setting this bit to 0 should trigger a software transaction). In that case, execution continues at the fallback routine indicated by the XBEGIN instruction, which should begin a software transaction. Effectively, this makes the system similar to a software TM (STM) with additional overheads (starting the hardware transaction and aborting it; detecting conflicts with nonexistent hardware transactions) that would make it slower than a pure STM implementation.
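To make the fallback path concrete, here is a minimal Python sketch of the dispatch logic I'm describing. The status constants are modeled on Intel's RTM interface (XBEGIN_STARTED on success, abort codes with a retry bit otherwise); the stubbed hw_xbegin() is my hypothetical model of a patched chip where every hardware transaction aborts with the retry bit clear — this is an assumption, not the actual microcode behavior:

```python
# Status values modeled on Intel's RTM interface: a successful begin
# returns XBEGIN_STARTED; otherwise an abort code whose bit 1
# (XABORT_RETRY) says whether retrying in hardware might succeed.
XBEGIN_STARTED = -1
XABORT_RETRY = 1 << 1

def hw_xbegin():
    # Stub for a patched/disabled TSX (hypothetical): every hardware
    # transaction aborts immediately with the retry bit clear.
    return 0

def transactional_update(stm_fallback, hw_attempts=3):
    """Try hardware transactions first; fall back to software TM."""
    for _ in range(hw_attempts):
        status = hw_xbegin()
        if status == XBEGIN_STARTED:
            # ... transactional body would run here, then xend() ...
            return "hw"
        if not (status & XABORT_RETRY):
            break  # hardware will never succeed; stop retrying
    return stm_fallback()  # software transaction, as after XBEGIN's fallback

print(transactional_update(lambda: "stm"))  # prints "stm"
```

With the retry bit forced to zero, every transaction takes the STM path after paying the cost of one failed hardware begin — which is exactly the overhead-versus-pure-STM trade-off described above.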

Comment: They are right - Uses of Unicode ambiguous letters (Score 1)

They are right to do so. There are letters in different alphabets that look very similar — or are in fact rendered exactly the same, depending on the font used.

This can be exploited for interesting uses. For example, "E" and "Ε" are respectively the Latin "E" and the Greek capital epsilon, but they are indistinguishable in caps, at least in the Arial font. The second one is Unicode code point U+0395. My name has an "E" in it, and for my email signature I spell my name using the traditional Latin letter from the keyboard when the email is important and should be archived. By contrast, when the email is mostly irrelevant for future use (such as meeting-arrangement emails, which are useless after the meeting takes place), I spell my name using the Greek epsilon (hint: type 395 followed by Alt+X in most Windows programs). There is no obvious difference for the receiver, but a search tool can be used to quickly find all sent emails that can be deleted safely.

While the previous is a somewhat "legit" use, in general any word that combines letters from different alphabets could be used to confuse and trick the receiver, for example by creating an email account that reads exactly the same as another person's. There is a nice image of the five letters a-b-c-d-e in different alphabets in the linked post. I agree with Google's decision to prevent such combinations in email accounts. It would be interesting to know the exact policy used to forbid account names, which is not detailed.
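Since the exact policy is not detailed, here is a rough guess at the simplest possible rule — reject any name mixing alphabets. This sketch derives a crude per-character script tag from Unicode character names; a real implementation would use the Unicode Script property and the UTS #39 confusables data, so treat this as illustrative only:

```python
import unicodedata

def scripts_used(name: str) -> set:
    """Crude script detection: first word of each letter's Unicode name
    (e.g. LATIN, GREEK, CYRILLIC). Not the real Script property."""
    scripts = set()
    for ch in name:
        if ch.isalpha():
            scripts.add(unicodedata.name(ch).split()[0])
    return scripts

def is_suspicious(name: str) -> bool:
    # Reject names that mix letters from more than one alphabet.
    return len(scripts_used(name)) > 1

print(is_suspicious("alice"))        # False: pure Latin
print(is_suspicious("\u0430lice"))   # True: Cyrillic 'а' + Latin letters
```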

** At the time of writing, these two letters look exactly the same. Classic Slashdot lacks Unicode support and does not render the Greek letter in my comment. I tried logging into Slashdot Beta (first time, I swear!) and it seems to render a different letter... Please try this on your own computer!

Comment: Re:Is there anything new here? (Score 3, Informative)

by enriquevagu (#47299371) Attached to: Researchers Unveil Experimental 36-Core Chip

Some background on multicore cache coherence here. You are completely right: Slashdot's summary does not introduce any novel idea. In fact, a cache-coherent mesh-based multicore system with one router associated with each core was brought to market years ago by Tilera, a startup out of MIT. Also, the article claims that today's cores are connected by a single shared bus — that is far outdated, since most processors today employ some form of switched communication (an arbitrated ring, a single crossbar, a mesh of routers, etc.).

What the actual ISCA paper presents is a novel mechanism to guarantee total ordering on a distributed network. Essentially, when your network is distributed (i.e., not a single shared bus — basically any current on-chip network), there are several problems with guaranteeing ordering: i) it is really hard to provide a global ordering of messages (as a bus does) without making all messages cross a single centralized point that becomes a bottleneck, and ii) if you employ adaptive routing, it is impossible to provide point-to-point ordering of messages.

Coherence messages are divided into different classes in order to prevent deadlock. Depending on the coherence protocol implementation, messages of certain classes need to be delivered in order between the same pair of endpoints, and for this some of the virtual networks can require static routing (e.g., Dimension-Ordered Routing in a mesh). Note that a "virtual network" is a subset of the network resources used by a given class of coherence messages to prevent deadlock. This remedies the second problem. However, a network that provided global ordering would allow for potentially huge simplifications of the coherence mechanism, since many races would disappear (the devil is in the details), and a snoopy mechanism would become possible — as they implement. Additionally, this might also impact the consistency model. In fact, their system implements sequential consistency, which is the most restrictive — yet the simplest to reason about — consistency model.
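To illustrate why static routing gives point-to-point ordering: under Dimension-Ordered (XY) Routing, every message between the same pair of nodes takes the identical path, so messages cannot overtake each other by taking different routes. A toy sketch (mine, not from the paper; nodes are (x, y) mesh coordinates):

```python
def dor_route(src, dst):
    """Dimension-Ordered (XY) Routing on a 2D mesh: travel fully along
    X first, then along Y. The path is a pure function of (src, dst),
    which is what makes point-to-point ordering possible."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(dor_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)] — every (0,0)->(2,1) message uses this path
```

An adaptive router, by contrast, could send two (0,0)->(2,1) messages along different paths of the same length, so the second could arrive first — hence problem ii) above.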

Disclaimer: I am not affiliated with their research group, and in fact, I have not read the paper in detail.
