


Submission + - Buh-bye, H-1B's 1

DogDude writes: From the Washington Post: Trump and Sessions plan to restrict highly skilled foreign workers. Hyderabad says to bring it on.
"Trump has described H-1Bs as a “cheap labor program” subject to “widespread, rampant” abuse. Sessions co-sponsored legislation last year with Sen. Ted Cruz (R-Tex.) to effectively gut the program; Issa, a congressman with Trump’s ear, released a statement Wednesday saying he was reintroducing similar legislation called the Protect and Grow American Jobs Act."

Comment Re:Fighting nebulous "hate speech" will kill them (Score 2) 373

If these companies even tried to end "hate speech" or whatever nebulous crime where a specific group of pigs are more equal than another group of pigs, we will see the end of these platforms and companies full sail.

Banning trolls will hurt their business, how? As an employer, I'm MORE likely to advertise on a platform that isn't full of screaming, stupid Trump people. Those aren't people I want to advertise to, anyway.

Comment Re:Gibberish (Score 2) 70

Not exactly... A neural net is just a function that takes an input and produces an output. At training time the weights are adjusted (via gradient descent) to minimize the error between the actual and desired output for examples in the training set. The weights are what define the function (via the way data is modified as it flows through the net), rather than being storage per se.

The goal when training a neural net is to learn the desired data transformation (function) and be able to generalize it to data outside of the training set. If you increase the size of the net (number of parameters) beyond what the training set supports, you'll just end up overfitting - learning the training set rather than learning to generalize, which is undesirable even if you don't care about the computing cost.
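To make the "weights define the function" point concrete, here's a minimal sketch (mine, not from any particular framework) of gradient descent fitting a one-weight "net" y = w * x to data drawn from y = 2x. Nothing from the training set is stored; the learned weight IS the function:

```python
# Toy gradient descent: adjust a single weight w to minimize squared error
# between predictions w * x and targets y. All names here are illustrative.
def train(samples, lr=0.05, steps=100):
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            pred = w * x
            grad = 2 * (pred - y) * x   # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad              # step against the gradient
    return w

# Training data sampled from y = 2x; w should converge to ~2.0
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

After training, `w * x` generalizes to any x, including values never seen in the training set — which is the whole point.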

The use of external memory in a model such as Google's DNC isn't an alternative to having a larger model; rather, it lets the model be trained to learn a function that utilizes external memory (e.g. as a scratchpad) instead of being purely feed-forward.
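A crude toy (mine, nothing like the actual DNC machinery) of the distinction: a pure feed-forward function sees only its current input, while a function with a scratchpad can write to and read from memory across steps:

```python
# Illustrative contrast only; real DNC reads/writes are learned and differentiable.
def feed_forward(x):
    return x * 2                 # output depends only on the current input

def with_memory(x, memory):
    memory.append(x)             # "write": stash the input for later steps
    return sum(memory)           # "read": output depends on everything stored

mem = []
outs = [with_memory(x, mem) for x in [1, 2, 3]]   # running sums across steps
```

The feed-forward version can never compute a running sum over a sequence of calls; the memory-augmented one can, which is the kind of capability external memory buys.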

Comment Re:Don't know what the "vector" is? (Score 1) 88

The summary is complete gibberish. For anyone interested, Google's own paper describing their NMT architecture is here:


and a Google Research blog entry describing its production rollout (initially for Chinese-English) is here:


The executive summary is that this is a "seq2seq" artificial neural net model using an 8-layer LSTM (a variety of recurrent neural network) to encode the source language into a representation vector, and another 8-layer LSTM to decode it into the target language. A lot of the performance improvement is in the details rather than in this now-standard seq2seq approach.

The "vector" being discussed doesn't represent words but rather the entire sentence/sequence being translated. This is the amazing thing about these seq2seq architectures - that a variable length sentence can be represented by a fixed length vector!
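A toy recurrent encoder (my sketch, with made-up fixed "weights" rather than Google's learned ones) shows the mechanism: one fixed-size hidden state is updated once per token, so sequences of any length collapse into a vector of the same dimensionality:

```python
import math

# Minimal recurrent encoder: the hidden state h has a fixed size regardless
# of how many tokens flow through it. The (t + i) * 0.1 term stands in for
# learned weights and is purely illustrative.
def encode(tokens, dim=4):
    h = [0.0] * dim                          # fixed-size hidden state
    for t in tokens:                         # one state update per token
        for i in range(dim):
            h[i] = math.tanh(h[i] + (t + i) * 0.1)   # tanh keeps h bounded
    return h                                 # same length for any input

short = encode([1, 2])
long_ = encode([1, 2, 3, 4, 5, 6, 7])        # both are 4-dimensional
```

The decoder then runs the process in reverse, unrolling that fixed vector back into a variable-length output sequence.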

The representation of words used to feed into this type of seq2seq model is often a word2vec/GloVe embedding (not WordNet), but per the Google paper they are using a sub-word encoding in this case.

Comment Re:Why do Slashdot users continually defend hacker (Score 1) 54

Most of us have come to accept that black hats will never be punished, because on the internet it's very easy to involve multiple unfriendly countries in a crime, and when you put American and Russian agents on the same case it's very hard to get them to stop playing "my country has the biggest dick therefore I'm in charge" and start cooperating to catch the black hat. There's a subtle difference.

Comment Re:hype from google (Score 1) 33

Yep, the summary is cringe-worthy. TensorFlow is just a framework that lets you easily build multi-step pipelines for processing multi-dimensional matrices (aka tensors). The matrices/tensors flow through the pipeline, hence the name. The main targeted application is deep neural nets, and there are layers of functionality built into TF for building deep neural nets. There are a number of other preexisting open source frameworks that provide similar functionality. TF appears well designed (very modular, good for research), but it's no game changer.
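The "tensors flow through a pipeline" idea can be sketched without TensorFlow at all — plain Python, with each op a function on a nested list standing in for a tensor (all names here are mine, not TF's API):

```python
# Framework-free sketch of a dataflow pipeline: a "tensor" (nested list)
# flows through composed ops, analogous to nodes in a TF graph.
def scale(t, k):
    return [[x * k for x in row] for row in t]

def relu(t):
    return [[max(0.0, x) for x in row] for row in t]

def pipeline(t):
    # the matrix flows through successive ops
    return relu(scale(t, 2.0))

out = pipeline([[-1.0, 2.0], [3.0, -4.0]])
```

What TF adds on top of this picture is automatic differentiation through the pipeline, GPU execution, and the prebuilt neural-net layers — the dataflow structure itself is nothing exotic.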

Comment Re:I don't (Score 2) 507

Because if you buy a TV for picture quality and non-smart features (4k, deep color, whatever), you'll probably end up with 'smart' just because it's the default now. 'Dumb' is getting hard to find in the middle market segment; it's either $10k audiophile-grade nonsense, or $199 Walmart specials that aren't 'smart' because they're still using a chipset from 2008.

Submission + - Badlock Vulnerability Falls Flat Against Hype (threatpost.com)

msm1267 writes: Weeks of anxiety and concern over the Badlock vulnerability ended today with an anticlimactic thud.

Badlock had been the security boogeyman since the appearance three weeks ago of a website and logo branding the bug as something serious in Samba, an open source implementation of the server message block (SMB) protocol that provides file and print services for Windows clients.

As it turns out, Badlock was hardly the remote code execution monster many anticipated. Instead, it’s a man-in-the-middle and denial-of-service bug, allowing an attacker to elevate privileges or crash a Windows machine running Samba services.

SerNet, a German consultancy behind the discovery of Badlock, fueled the hype at the outset with a number of since-deleted tweets that said any marketing boost as a result of its branding and private disclosure of the bug to Microsoft was a bonus for its business.

For its part, Microsoft refused to join the hype machine and today in MS16-047 issued a security update it rated “Important” for the Windows Security Account Manager (SAM) and Local Security Authority (Domain Policy) (LSAD). The bulletin patches one vulnerability (CVE-2016-0128), an elevation of privilege bug in both SAM and LSAD that could be exploited in a man-in-the-middle attack, forcing a downgrade of the authentication level of both channels, Microsoft said. An attacker could then impersonate an authenticated user.

Comment Re:Cool story. One question... (Score 1) 177

If only computing devices had some sort of a virtual pointer... One could use a dedicated peripheral to position this "pointer" over the green, underlined IFTTT in the article summary. One could then press a button on the controller for this "pointer" and have a document describing exactly what the hell "IFTTT" stands for and what the "If this, then that" service it refers to does delivered to them.

But alas, it is a futile dream.
