
Submission + - Samsung Pay hack allows fraudulent payments

jmcbain writes: The Verge reports that a security researcher at DefCon outlined a number of attacks targeting Samsung Pay, Samsung's digital payment system that runs on their smartphones. According to the article, the attack "focuses on intercepting or fabricating payment tokens — codes generated by the user's smartphone that stand in for their credit card information. These tokens are sent from the mobile device to the payment terminal during wireless purchases." In a response, Samsung said that "in certain scenarios an attacker could skim a user's payment token and make a fraudulent purchase with their card", but that "the attacker must be physically close to the target while they are making a legitimate purchase."

Comment Deep learning and gaming (Score 0) 134

The original article says this card was first announced at an AI meet-up:

At an artificial intelligence (AI) meet-up at Stanford University this evening, NVIDIA CEO Jen-Hsun Huang first announced, and then actually gave away a few brand-new, Pascal-based NVIDIA TITAN X GPUs.

In fact, the Titan X is currently the preferred GPU for deep learning thanks to its 12GB of memory, and I would argue this card can be great for both gaming and deep learning (unlike the Quadro, which is largely aimed at CAD-like applications).

Comment Earnings guidance (Score 1) 42

Samsung, like many companies, releases earnings guidance early for investors to chew on. They consistently release the guidance the first week of each quarter and then the full earnings report by the end of the first month of each quarter. They have done this every year for as long as I can remember (going back 3-4 years now). The full earnings report's numbers are usually well within 1 percent of everything that was reported in the guidance. See, for example, the April 2016 guidance and the April 2016 report.

Comment Email client (Outlook or Gmail) for searching (Score 1) 286

I used to take notes with paper and pencil, but you can't search through old notes unless you scan and OCR your content.

I instead have been using E-mail clients for the last several years (whether it's company Outlook or personal Gmail). This has several advantages:

1. You can search through your notes.
2. If on corporate Outlook, there is security thanks to the IT department.
3. You can have rich markup if you need it.
4. You can immediately email out meeting notes.

Comment Sales over first 3 years (Score 1) 314

The original article has a small blurb that compares sales over the first three years:

Microsoft’s follow-up console, the Xbox One, has not sold nearly as well as the 360. In 2008, less than three years after it was launched, the company said the 360 had sold over 19 million units worldwide. The Xbox One was released in 2013, and has sold about 10 million units in roughly the same amount of time as its predecessor.

Comment Comcast subscribers want the service, (Score 1) 112

not NBC Universal. The point of the OP's article is the comparison of subscriber counts between Netflix and Comcast. People who subscribe to Comcast want the service itself, whether it's cable or Internet. The fact that Comcast owns NBC is not very relevant here. No one says, "I want to subscribe to Comcast to get NBC."

Comment Sets a great precedent for AT&T, Comcast, etc. (Score 1) 339

Surprise, surprise. Being rude to a company results in bad service from that company. Hardly news except that it was [AT&T / Comcast / insert any company] that was the victim. Maybe the entitled customer has learned his lesson, but probably not.

Wrong message to send to corporations.

Comment How I came to work in ML (Score 2) 123

I was in the same position as OP about 5 years ago. I have a PhD in CS from many years back but in operating systems and programming languages. Around 2010 I wanted to get into machine learning and decided to enroll part-time in a university to take some classes. Currently, I am leading a small team of engineers that work on ML-related topics.

Here are some points that the OP needs to understand.

1. There are two different levels of expertise in working on machine learning: either as a library/tool user, or as an ML algorithm developer. It is EXACTLY analogous to how one approaches SQL: you can make a great living being a SQL user who knows how to write efficient queries and build indexes, or you can go deeper and build the SQL engine itself along with its query optimizer, storage layer, etc. If you want to use ML as a library/tool user, you can have a great career as long as you know what tools and algorithms to use. If you want to be an ML algorithm developer, that means you want to work on the innards, such as devising new SVM kernels or building new deep learning networks; for this role, you'll usually need a PhD-calibre background heavy in math. I personally started out as a library/tool user with Weka and Mallet, but as I used them more, I came to understand the math behind them.
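
The split between the two roles can be sketched in a few lines of Python. The `OneNN` class below is a made-up, minimal 1-nearest-neighbour classifier: writing a class like this (distance metric, search strategy) is the algorithm developer's job, while the library user only ever calls fit() and predict().

```python
import math

# A toy 1-nearest-neighbour classifier, written from scratch.
# The "algorithm developer" writes this class; the "library user"
# only calls fit() and predict() on it.
class OneNN:
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, points):
        labels = []
        for p in points:
            # the innards: a distance metric and a brute-force search
            dists = [math.dist(p, x) for x in self.X]
            labels.append(self.y[dists.index(min(dists))])
        return labels

# the library user's side: pick the tool, feed it data
X = [(0, 0), (0, 1), (5, 5), (6, 5)]
y = ["a", "a", "b", "b"]
clf = OneNN().fit(X, y)
print(clf.predict([(1, 0), (5, 6)]))  # -> ['a', 'b']
```

Both roles are legitimate careers; the difference is whether you live above or below the fit()/predict() line.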

2. ML is an abstract field, and it's best to approach it from an applications point of view. Pick a problem that needs ML, such as natural language processing or image recognition. It's important to pick a problem with abundant labelled data; some fields, such as voice recognition, make it terribly difficult to get real labelled data. For NLP (aka computational linguistics), you can start with basic problems such as document classification (e.g. is this document about sports, business, entertainment, etc.?) or sentiment analysis (e.g. is this tweet positive or negative?). There are lots of good datasets in the NLP field.
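
The document-classification starter problem can be sketched with a from-scratch multinomial Naive Bayes (stdlib only; the four training documents below are made up for illustration, and real work would use a library and a real corpus):

```python
import math
from collections import Counter, defaultdict

# Toy multinomial Naive Bayes document classifier.
def train(docs):  # docs: list of (text, label) pairs
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify(text, model):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, -math.inf
    for label in label_counts:
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [
    ("the team won the game", "sports"),
    ("great goal in the match", "sports"),
    ("stocks fell on weak earnings", "business"),
    ("the market rallied today", "business"),
]
model = train(docs)
print(classify("who won the match", model))        # -> sports
print(classify("earnings beat the market", model))  # -> business
```

Sentiment analysis is the same machinery with labels like "positive"/"negative" instead of topics.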

3. You can explore datasets from the Kaggle competitions and the University of California, Irvine (UCI) Machine Learning Repository.

4. Pick a tool and stick with it. I have used Weka, Mallet, and R. You can also use Python and Matlab.

5. When you read the literature, you will find two nearly synonymous, closely related terms: "machine learning" and "data mining". Machine learning historically comes from the AI community and generally focuses on building better ML algorithms and solving supervised problems. Data mining historically comes from the database community and generally focuses on applying tools and solving unsupervised problems (e.g. finding clusters of similar customers).
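
The unsupervised side, "finding clusters of similar customers", can be sketched with a toy k-means run (pure Python; the points and the fixed starting centroids are made up so the run is reproducible):

```python
import math

# Minimal k-means: no labels anywhere, just grouping similar points.
def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        # assignment step: attach each point to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
            clusters[i].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [
            tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters

# two obvious customer groups: low spenders and high spenders
points = [(1, 2), (1, 1), (2, 1), (8, 9), (9, 8), (9, 9)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
print(clusters)  # -> [[(1, 2), (1, 1), (2, 1)], [(8, 9), (9, 8), (9, 9)]]
```

Note there is no "correct answer" fed to the algorithm, which is exactly what distinguishes this from the supervised problems in the previous points.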

6. At the end of the day, a better solution does not come from the ML algorithms themselves. Rather, it comes from the amount of data and the features you are able to extract. As for the many supervised-learning algorithms, your main responsibility will be picking the one that best suits your application. It is just like picking a sorting algorithm: when do you use Quicksort, and when do you use Mergesort?

7. Here are some really good books that I have personally read:

Beginner level:
- Programming Collective Intelligence by T. Segaran.
- Introduction to Data Mining by P.-N. Tan and M. Steinbach.

Intermediate level:
- Data Mining: Practical Machine Learning Tools and Techniques by I. Witten and E. Frank. (companion book to the Weka tool)

Advanced level:
- Artificial Intelligence: A Modern Approach by S. Russell and P. Norvig. (touches on all aspects of AI, such as minimax for games like tic-tac-toe and first-order logic)
- Introduction to Machine Learning by E. Alpaydin

PROTIP: How to tell if you're reading an advanced machine learning book: if the index contains a reference to the Vapnik-Chervonenkis dimension or shattering, the book is hardcore.
