Ripley did it before South Park -- she found Newt in the alien nest because of the tracking device. That was what, 1986?
However, merchants are allowed to store limited CC data on the terminal. This includes the card number and expiration date as long as they are encrypted. CID and raw track data are forbidden from being stored. This means it is possible to reverse transactions without the card present.
Is it not possible to do this using transaction ID?
Unless the stored data can only be decrypted via the operator entering a key which is unique per transaction (and not stored in the machine), any encryption is rather pointless. Storing the key and ciphertext together is, for all practical purposes, storing the plaintext.
Now you see why despite being technically allowed, it truly is debatable whether or not it is a good idea. I agree with you on this: it is a bad idea. However, the people that make the rules (state/federal governments, and the payment card industry itself) disagree.
Interesting you should mention using an ID unique to the transaction: one of the major pushes right now is tokenization. Essentially, the PINpad provides the track data to the POS. The POS sends this to the payment processor, who returns a token, a unique character string. Any future action taken for that card and transaction (the initial authorization, a return weeks later) uses that token. The token is not a credit card number: it is useless to a thief, since it is only valid for transfers between that specific card and that specific merchant.
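A toy sketch of that tokenization flow. All names here (`MockProcessor`, `tokenize`, `charge`) are made up for illustration; real processors expose this through their own APIs, and the vault would live in their secured infrastructure, not in a dict:

```python
import secrets

class MockProcessor:
    """Toy payment processor: exchanges card data for an opaque token.

    The token only has meaning inside this processor's table, and only
    for the (merchant, card) pair it was issued for.
    """
    def __init__(self):
        self._vault = {}  # token -> (merchant_id, card_number)

    def tokenize(self, merchant_id, card_number):
        token = secrets.token_hex(16)          # random, not derived from the PAN
        self._vault[token] = (merchant_id, card_number)
        return token

    def charge(self, merchant_id, token, amount):
        stored = self._vault.get(token)
        # A stolen token is useless elsewhere: the merchant must match.
        if stored is None or stored[0] != merchant_id:
            raise ValueError("invalid token for this merchant")
        return f"charged {amount} to card ending {stored[1][-4:]}"

processor = MockProcessor()
tok = processor.tokenize("store-42", "4111111111111111")
print(processor.charge("store-42", tok, 19.99))   # succeeds
# processor.charge("other-store", tok, 19.99)     # raises ValueError
```

The merchant stores only `tok` for later returns; the card number never touches their disk.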
In this scheme the card data is stored at the payment processor, which offloads liability. The processors already have tons of sensitive data, but are better equipped to protect it. Instead of card data being stored on hard drives all over the country, it is physically secure and hopefully secure from electronic intrusion. But it is no less an issue than the banks themselves storing data.
Why are they storing CCs at all on the terminals? The terminals should be just that, data entry points that transmit data to and from a secure location.
Should be, yes. However, merchants are allowed to store limited CC data on the terminal. This includes the card number and expiration date as long as they are encrypted. CID and raw track data are forbidden from being stored. This means it is possible to reverse transactions without the card present. While most of the time you will need to swipe your card to process a return, this is not required by law or PCI. The only time it is required is for debit, since any debit transaction requires physical possession of the card and PIN entry (although this is changing). By swiping the card, the terminal reads the track data which proves physical possession since it is not allowed to be stored.
Anyway, there is a reason for systems working this way: whether it is a good idea or should be allowed according to any random person is a different issue entirely.
- don't the PIN pads have unique IDs?
- hasn't the terminal software been updated to sound an alarm when the stored PIN pad ID doesn't match the ID read from the PIN pad?
- doesn't the terminal alarm WHENEVER the PIN pad is disconnected?
I work in the payment card industry. PINpads do have unique IDs, but the IDs don't serve much purpose. Furthermore, the POS software and payment processor rarely validate the ID or state of the PINpad. The reason is that nothing requires or encourages it: no laws, banking regulations, PCI standards, etc.
Contrast with other countries such as Canada. Up there, the payment processor does check the ID. Each device has its own key as well, which is checked (similar to PKI but not quite). Tampering is easier to detect.
Aside from that, different devices work differently. The vast majority of PINpads you will encounter at big box stores are from VeriFone or Ingenico; there are a few smaller brands out there as well (e.g. Hypercom). VeriFone tends to take security very seriously and their devices are typically more difficult to hack. They can be touchy too: I dropped one at work and it refused to process any cards at all. The impact triggered a mechanism that destroys the internal volatile memory storing the keys, which makes it difficult to perform an offline attack against the device (i.e. power down, disassemble, hook the memory chips up to another device).
What is it with Americans' hatred of passenger rail? It works, it's safe, cost-effective, and requires less government subsidy than highways or airport travel. It's also a hell of a lot more pleasant than flying.
Here in the U.S., you get Amtrak. Subsidized, expensive, and slow. It doesn't own its own tracks, so it regularly stops to let cargo trains through. It can cost twice as much as flying and take twice as long to get there. Sometimes it is faster (rarely), but never cheaper, as far as I've heard.
The U.S. is more spread out than Europe. We have cities which are essentially islands of millions of people with hundreds of miles of cornfields between them. Travel is different here than in Europe. Different strokes for different folks.
No, time crystals -- no flux involved. Just like in Napoleon Dynamite, when Napoleon and Uncle Rico electrocuted their balls on the time machine.
For comparison-based sorts, sure. But the moment your keys have bounded size (e.g. strings, integers), a tuned radix sort can do a much better job, particularly at eliminating your pipeline-destroying branches.
Radix sort is one of those "special cases" that tends to come up. It is a specific case of bucket sort and has its place. Honestly though, I rarely need to use it. Data normally either lends itself to a traditional comparison-based sort, is small enough not to matter (10 or fewer elements? Who cares?), is sorted behind an implementation I never see (SQL's ORDER BY clause), or is already sorted (e.g. SQL sorted the data and returned it to your application).
There is always an argument for a better way to do something, but there are two important points to consider here:
Show me a Lego-building programmer who blindly uses libraries and can create a high performance multi-threaded async system.
For the 99% of the application development that is neither multi-threaded nor special-case, the Lego approach of plugging in existing modules works fine. Your typical web app or DB app (think Oracle Forms or Access) doesn't have to deal with threading on a level that concerns the (wannabe) developer.
Developing thread-safe software is really fucking hard. And I don't mean inserting critical sections in code so your data structure is thread-safe: that's easy. I mean, and I think you meant as well, designing systems with multiple actors that pass data back and forth. Using well-known structures such as blocking queues, semaphores, etc. is a good start but that's seeing the trees, not the forest.
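To make the "trees" part concrete, here is the classic producer/consumer shape with a blocking queue and a sentinel for shutdown (Python purely for brevity; the same pattern applies in any language). This is the easy, well-known building block; the hard part the comment is talking about is composing dozens of these actors without deadlock or starvation:

```python
import queue
import threading

def worker(tasks, results):
    """Consumer: pull work items until the sentinel arrives."""
    while True:
        item = tasks.get()       # blocks until an item is available
        if item is None:         # sentinel: shut down cleanly
            break
        results.put(item * item)

tasks, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for n in range(5):
    tasks.put(n)
tasks.put(None)                  # tell the worker to stop
t.join()

out = []
while not results.empty():
    out.append(results.get())
print(sorted(out))               # [0, 1, 4, 9, 16]
```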
I have developed multi-threaded applications, and applications that span multiple systems (let alone threads) and require synchronization. I would not want a "blue collar coder" designing such a system. This is a perfect case for having an experienced developer who is educated on these topics design the software, then work with code monkeys to make it work. This is the type of software that requires a more hands-on approach, with stricter code reviews, testing, etc.
It's a natural evolution really. Who uses bookmarks on their browsers anymore? I have thousands of them, and a nifty hierarchy to classify them. But it's not worth spending a long time finding what I had stored there several years back.
I don't know about other browsers, but I have tons of bookmarks in Firefox. When I start typing in the address bar, it searches through them by URL and by name. Sort of like the start menus in Windows 7 and KDE. So while I may not navigate the hierarchy of programs or bookmarks, it does serve a useful purpose as what is essentially a database.
Sure, this would be great if programs required no math, were short, single threaded, didn't require complex algorithms, and didn't require interfacing to other things... but that isn't how programming works in the real world. If your design can be done by someone with the education levels or mental faculties of a welder, it can be done by outsourced talent more cheaply anyways.
What we need are a small number of software architects with C.S. degrees, and a large number of code monkeys with trade school educations. The design and other high level work is done by the C.S. people, and the code monkeying is done by the trade schoolers. This is already what I do for a living -- I handle the hard stuff such as designing the software, and let other people fill in the blanks. I check their work, ensure we have automated testing in place, and deliver the software. The code monkeying is done by C.S. majors who are less capable and honestly wouldn't need a C.S. degree if we had code monkey trade schools.
This would help all around -- companies could pay less in salaries, the code monkeys wouldn't be in nearly as much debt from student loans, and people would be paid according to their ability.
We found it much more productive to take existing employees who understood the various tax procedures and workflows in the department and train them to program, versus hiring CS graduates and training them in tax policy and procedures.
I write software in the retail industry. Aside from having worked in retail in my younger years, I know how to write quality software. What I have learned is that to get the best software, I need to sit and talk to an expert in the problem domain. If I were writing tax software I would spend a day or two talking over how tax procedures work, not even talking about the software. Then look at the existing software (if any). All of that background ensures that when I design a system I understand what it is trying to achieve. I would rather "lose" the 12-16 hours of coding up front but spend far less time fixing bugs or redesigning features at the tail end of the project.
We do have non-developers writing code, typically our customers. While they understand their business and we have enough guardrails up in the code to prevent completely asinine code from working, it often ends poorly. We are still better off spending the time to discuss in-depth what they are trying to do. Even if it means getting on an airplane and paying for a hotel, it still ends up cheaper than the complete clusterfuck that occurs at the end of a project where you have either developers not understanding the problem, or non-developers not understanding software development.
into some enterprise software where bubble sort is fine
For fuck's sake, a business dataset is never going to be so small that "bubble sort is fine".
You see, people? Do you see? This is why we fucking need college.
The real issue here is talking about the sorting method to begin with. Coding is like playing with Legos. You plug in the "sorting brick" and forget about it. Whatever library you are using should have a properly optimized sorting algorithm with any necessary speed hacks or whatever else is required. Such a library should be well-tested and proven.
Unless you're a Ph.D. candidate looking into sorting highly specialized data sets such as Google's search index, you have no business implementing a sorting algorithm. You need to be using whatever standard implementation is defined by the language or framework libraries.
At this point in the evolution of Computer Science, sorting is a solved problem. People smarter than either of us have proven that no comparison sort can beat O(n log n), and we have multiple algorithms that hit that bound. Other smart people have plugged these algorithms into standard libraries and frameworks. People dumber than us decide to reinvent the wheel.
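In practice, "using the sorting brick" looks like the snippet below: hand the library a key function and let its tuned implementation do the work (Python's built-in sort is Timsort, a stable hybrid mergesort; the data here is hypothetical, just for illustration):

```python
orders = [
    {"id": 3, "total": 19.99},
    {"id": 1, "total": 5.00},
    {"id": 2, "total": 42.50},
]

# sorted() never mutates its input; key= picks what to compare,
# reverse=True gives descending order. No hand-rolled sort needed.
by_total = sorted(orders, key=lambda o: o["total"], reverse=True)
print([o["id"] for o in by_total])  # [2, 3, 1]
```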
They'll be even more unhappy once they realize that robots can do their jobs even cheaper than they can. You know it's bad when even mainstream media is picking this up. A few months ago I was watching one of those Nightline/Dateline/Whateverline evening news shows that was talking specifically about Foxconn. At the end they showed the up-and-coming robot that does the work of the Chinese workers in half the time and at half the cost of a human. The reporter asked something along the lines of "what is going to happen when businesses realize they can assemble the gadgets in the U.S. and not pay to ship them across the ocean?"
Still need a big data drive in most uses
as 120GB-256GB is small for some uses, the cloud is slower, and ISP data caps suck.
I'm going to pick one up for my desktop. I'm thinking of around 256GB. That'll work for my primary system. My data lives on a server behind my desk with approximately 3.5 TB of hard drives. No RAID, but the data I care about is backed up and stored on multiple spinning rust platters.
Seriously, if she has signal, she can stream talk radio from
... I don't know, RADIO?
Apparently not in the cement and steel canyon of downtown. Trust me, I've had to hear all about it and nod along, wishing some football or something would come along to rescue me.
Your wife sounds like a psycho. Anyone addicted to talk radio inevitably becomes a liberal asstard freak or a neocon asstard freak.
Not really a psycho, she doesn't listen to the political talk radio. She's addicted to Rover's Morning Glory. I find it boring and stupid, but she likes it. She finds video games boring and stupid, but I like them. So we each have our thing. Meh.
Or worse, runs your cell phone bill up.
This is why, if you read up a few posts, I mentioned that she might need her own cell phone plan. I'll keep my elderly mother on mine: one data plan for me, and voice only for my mom's dumb phone.
Some people claim that the UNIX learning curve is steep, but at least you only have to climb it once.