Comment: Re:Lets use correct terminology. (Score 1) 151

by tlhIngan (#49497111) Attached to: MakerBot Lays Off 20 Percent of Its Employees

Is it really common practice now to have laid off workers escorted out by security?

Depends on location. In the US, it's extremely common, potentially due to their more violent nature and the second amendment.

In a lot of other countries, layoffs take different forms - you may get notice of the layoff, as in you're going to get severance and all that, but it's two weeks' notice, and no, they're not going to buy you out; you're going to work those two weeks. Seems incredible, but a lot of companies do it because they want an orderly transition, and they trust their employees enough not to burn bridges. Some even go out of their way to help them find a new job (instead of just handing them the number of an employment agency), including counselling services. Heck, benefits (health insurance) often continue for a few months after the layoff.

I don't know what it is about the US - perhaps their proclivity towards violence leads to basically shoving people out the door after the meeting is over - if you need any personal belongings, they'll fetch them for you and pack up the rest of your stuff.

Comment: Re:And it's already fixed in 1.8.4 (Score 4, Insightful) 99

by tlhIngan (#49496043) Attached to: Exploit For Crashing Minecraft Servers Made Public

Yes, but it took two whole years before the fix came out. And the fix was made within a day of the exploit being released.

Yes, I can understand 90 days being a bit tight if you're talking fundamental software like operating systems (which require a lot of testing, staging, and you lose some to Patch Tuesday), especially since root causing and fixing can require a bit of time. But two years is a bit on the long side.

More like the guy got ignored and once he released the code, the "OH SH*T" came out.

This is one of those struggles between what's right and what's reasonable... 90 days is a bit quick for something big like an operating system where a change can break everything, but it's also on the long side for something that only breaks something really minor, like Minecraft.

Comment: Re:Does it report seller's location and ID? (Score 1) 139

by tlhIngan (#49491351) Attached to: Google Helps Homeless Street Vendors Get Paid By Cashless Consumers

I seriously doubt it. I don't see how location reporting for a payment transaction in which location data is irrelevant could possibly pass Google's privacy policy review process. Collection of data not relevant to the transaction is not generally allowed[*], and if the data in question is personally identifiable (mappable to some specific individual), then a really compelling reason for collection is required, as well as tight internal controls on how the data is managed and who has access. I don't see what could possibly justify it in this case, and I can see a lot of risk in collecting it.

There's a lot of information involved in payment systems that you'd think is irrelevant, but it's passed on in order to judge whether the payment is potentially fraudulent.

That includes things like location data - are you making the transaction where you normally make transactions, or in a new location (often the reason credit cards get blocked when you're on vacation)? Are transactions happening on impossible schedules - e.g., you used your credit card in Seattle, then in New York an hour later? Online sellers routinely pass your shipping information to the card processor for the same reason - and can even ask for enhanced scrutiny - so the processor can verify that the shipping address is known to it (either you put it on your credit card account or it's been used consistently).
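
Roughly, that "impossible schedule" check boils down to something like this Python sketch - the function names and the 900 km/h threshold are made up for illustration, not any processor's actual logic:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(txn_a, txn_b, max_kmh=900):
    # Flag two transactions whose implied travel speed beats max_kmh
    # (roughly airliner cruising speed). txn = (lat, lon, unix_time).
    dist_km = haversine_km(txn_a[0], txn_a[1], txn_b[0], txn_b[1])
    hours = abs(txn_b[2] - txn_a[2]) / 3600.0
    return dist_km > 0 if hours == 0 else dist_km / hours > max_kmh

# "you used your credit card in Seattle, then in New York an hour later"
seattle  = (47.61, -122.33, 0)
new_york = (40.71,  -74.01, 3600)
print(impossible_travel(seattle, new_york))   # True -> flag for review

Fed the Seattle/New York pair above, the implied speed is several thousand km/h, so the pair gets flagged for extra scrutiny.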

You may be confident that Google isn't collecting the data, but it's made available as part of the standard fraud checks. It's also why there's slightly more pushback against Google Wallet (where Google does get all the information involved in the transaction) than Apple Pay (where after setup, Apple is out of the picture and the information is shared between you, the retailer, the processor and your bank).

Comment: Re:all in the implementation (Score 2) 113

Oh for fuck's sake.. it's very simple: Avionics need to be on a physically separate network from everything else, preferably encrypted. If there was 'air gap hacking' going on or even possible, wouldn't we have seen it long before now? Wouldn't an intelligent, capable, well-organized, well-thought-out terrorist (yes, Virginia, they do exist) have found a way to sneak the equipment necessary aboard a flight and implemented his hack, taken control of the plane?

Exactly.

And yes, it's possible to "break through" the airgap - cellphones are known to cause EMI issues with certain equipment on certain planes (e.g., lose GPS lock, increase INS errors, cause drift in the heading indicators, etc).

If I really wanted to cause problems, I'd go with a broadband transmitter that induces EMI in the air-gapped control network, rather than trying to hack in through the in-flight WiFi.

Comment: Re:...Wikipedia has "atrophied" since 2007... (Score 4, Interesting) 182

by tlhIngan (#49485861) Attached to: How Many Hoaxes Are On Wikipedia? No One Knows

The big problem with Wikipedia is that in spite of what the publicity says, it is only a small number of people who contribute, and a surprisingly large number of those people have an agenda for what they edit.

imo, with Wikipedia, truth is not the goal. A certain point of view is the goal.

No, the big problem with Wikipedia is politics. Wikipedia is a reinvention of communism, and it's proceeding down the same path to failure as Animal Farm and actual Communist states.

Heck, it's already at the "Everyone is equal, but some are more equal than others" stage.

That's the main problem - you have editors and higher-ups who now patrol their corner of Wikipedia and aren't interested in truth, correctness, or anything else - just in their little power struggles. Heck, for a time massive chunks were being deleted for arbitrary reasons (usually along the lines of "this content is not suitable for Wikipedia", despite plenty of similar content surviving elsewhere). And these days, edit reversions by the same power-mad editors have basically rendered any reason to edit moot.

I mean, there's a small number of contributors because everyone else got driven away. Try to fix a mistake and you'll get into an edit war with an editor who thinks their interpretation is completely correct even when it's obviously wrong.

Yes, it's an encyclopedia anyone can edit. Except that if you do so, chances are someone will revert it in a few minutes because they don't agree with what you edited, even if all you did was fix an error. "Everyone has equal edit rights, but some people have more equal edit rights".

The study of Wikipedia itself is quite fascinating - it's not often you get to see a political ideology put into play and watch the results. Usually, that kind of experiment ends with people getting hurt or a humanitarian crisis.

Comment: Re:"Just annouced" eh? (Score 1) 72

by tlhIngan (#49480845) Attached to: Samsung SSD On a Tiny M.2 Stick Is Capable of Read Speeds Over 2GB/sec

OS X does not support NVME, so no, there are no NVME drives being supplied to Apple. And finally these NVME drives are just entering production and are not available in either the OEM or Retail channels yet.

The new MacBook supports NVMe as of OS X 10.10.3, so support is coming rapidly...

Comment: Re:HTTP.SYS? (Score 1) 118

by tlhIngan (#49478933) Attached to: Remote Code Execution Vulnerability Found In Windows HTTP Stack

Their reasons involve context switching and interprocess communications. Context switching has got to happen (unless they run IE in kernel space) so just get it over with. Interprocess communication has always been a weakness in Microsoft systems. Since day one. Multitasking OSs are here, folks. Get over DOS.

The bug here affects the HTTP server side, not IE.

And in HTTP servers there are a LOT of context switches - in basic static-file-serving mode, you read a file (syscall to read the file), then you write it to a socket (syscall to write to the socket). In effect, a webserver is just copying between two file handles, incurring a kernel-usermode transition twice every round.

Add in a moderately busy webserver and you could be spending significant amounts of time just switching between modes.

Using larger buffers helps, but if your site consists of lots of little files, it's still the bottleneck.

Linux has similar functionality - see sendfile(2) and splice(2), among other syscalls for manipulating in-kernel memory buffers.

In fact, doing it in the kernel has an added bonus - with zero-copy support, no copies are made at all, rather than having to copy the data to and from userspace (more overhead).

Of course, in the Linux model, all the processing happens in user mode and only the tedious file copying is accelerated, which improves security.
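
For the curious, here's a minimal Python sketch of that Linux approach - os.sendfile() is just a thin wrapper over sendfile(2), and the port number and file name are made up; a real server obviously needs parsing, error handling and concurrency:

import os
import socket

def serve_file(conn, path):
    # Send an HTTP response where the body goes out via sendfile(2):
    # the kernel copies the file to the socket, so the payload never
    # round-trips through userspace buffers.
    size = os.path.getsize(path)
    header = ("HTTP/1.0 200 OK\r\n"
              "Content-Length: %d\r\n"
              "Content-Type: application/octet-stream\r\n\r\n" % size)
    conn.sendall(header.encode())            # small header: ordinary write
    with open(path, "rb") as f:
        offset, remaining = 0, size
        while remaining > 0:                 # body: in-kernel copy
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, remaining)
            if sent == 0:
                break
            offset += sent
            remaining -= sent

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8080))
srv.listen(1)
conn, _ = srv.accept()
serve_file(conn, "index.html")               # assumes the file exists
conn.close()
srv.close()

The header still goes out through a normal write, but the body never touches the process's buffers - the kernel moves the file data straight to the socket.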

Comment: Re:I Closed the Frikkin' Page for a Reason! (Score 1) 197

by tlhIngan (#49478643) Attached to: Chrome 42 Launches With Push Notifications

You do realize google has a history of "boiling the frog", right?

You do realize there are other browsers out there, right?

And having it done in the browser is far better than the way notifications are done now - usually a little tick box that then sends updates to your email... At least browsers can present a unified window listing every site allowed to send push notifications, and offer things like "disable all" or "delete all".

Comment: Re:Koomey's law (Score 1) 101

by tlhIngan (#49478581) Attached to: Fifty Years of Moore's Law

Moore's law is sort of a mangled version of Koomey's law. Koomey's law states that the number of computations per joule of energy dissipated has been doubling every 1.6 years. It appears to have been operative since the late 1940s: longer than Moore's law. Moreover, Koomey's law has the appeal of being defined in terms of basic physics, rather than technological artefacts. Hence, I prefer Koomey's law, even though Moore's law is far more famous.

There is another interesting aspect to Koomey's law: it hints at an answer to the question "for how long can this continue?" The hinted answer is "until 2050", because by 2050 computations will require so little energy that they will face a fundamental thermodynamic constraint: Landauer's principle. The only way to avoid that constraint is with reversible computing.

Ah, but Moore's Law has a direct correlation to a fundamental piece of computing - memory. Doubling transistor density doubles storage capacity per unit area (and memory devices are area-bound devices - there's a tradeoff between making huge memory devices and the defect rate, which climbs dramatically as area increases). This isn't just RAM, but also non-volatile storage.

CPUs and other random-logic parts have pretty much ignored Moore's law for decades now, since their limiting factor is wiring, not transistors per unit area. In fact, most random-logic parts contain tons of transistors that aren't hooked up to anything - they're just there. The reason is revisions: fabbing in the extra transistors costs nothing, but if you hit a bug and can wire up those spares, that's fewer masks to recreate - and at $100K a pop, not having to redo the transistor-level masks easily saves half a million or more. (It's why steppings are usually thought of as two parts: the first silicon is A0, minor revisions that only change the metal layers increment the number - A1, A2, A3 - and major revisions that change everything including the transistor masks change the letter, e.g., A3 to B0.) With proper metal-layer allocation, fixing a broken logic block may only touch 1-2 metal layers rather than all of them, saving even more money when you consider that the most advanced ICs are already at 10 metal layers or more, requiring 20-30+ masks.
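
To put rough numbers on that, here's a quick back-of-the-envelope in Python - the $100K-per-mask figure is the one above; the base-mask count and masks-per-metal-layer are just illustrative assumptions:

MASK_COST = 100_000          # "$100K a pop" per mask, from above

FEOL_MASKS            = 10   # transistor-level (base) masks - assumed
METAL_LAYERS          = 10   # "10 metal layers or more"
MASKS_PER_METAL_LAYER = 2    # metal + via mask per layer - assumed

full_respin   = (FEOL_MASKS + METAL_LAYERS * MASKS_PER_METAL_LAYER) * MASK_COST
metal_respin  = METAL_LAYERS * MASKS_PER_METAL_LAYER * MASK_COST
two_layer_fix = 2 * MASKS_PER_METAL_LAYER * MASK_COST

print("Full respin  (A3 -> B0): $%s" % format(full_respin, ","))
print("Metal respin (A0 -> A1): $%s" % format(metal_respin, ","))
print("2-layer fix via spares : $%s" % format(two_layer_fix, ","))
print("FEOL masks skipped save: $%s" % format(FEOL_MASKS * MASK_COST, ","))

With those assumptions, a full respin runs about $3M, a metal-only respin about $2M, and a spare-gate fix touching two metal layers around $400K - skipping the transistor-level masks alone is where the "half a million or more" in savings comes from.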

As clock speeds go up, random logic uses fewer and fewer minimum-size transistors, switching to larger transistors to increase drive strength. But again, transistor density isn't the problem in random logic.

Comment: Re:For work I use really bad passwords (Score 4, Insightful) 136

by tlhIngan (#49476223) Attached to: Cracking Passwords With Statistics

They have this draconian douchebag policy that you can't ever reuse one for like 20 tries, you have to have a capital, number and punctuation.... so I just keep adding numbers to the end of it. Fark them if we get hacked.

Give me a reasonable password requirement with a reasonable expiry (NOT 30 days) and we'll talk.

Here are some...

2015January!
2015February@
2015March#
2015April$
2015May%
2015June^
2015July&
2015August*
2015September(
2015October)
2015November-
2015December=

If it's too long, shorten to 3-letter months.

And for next year, you'll have another set of "unique" passwords so it doesn't matter if they demand it doesn't match the last 100 passwords.

Numbers, capitals, punctuation - it's got it all.

With a few modifications, you can come up with similar passwords that will obey any other rules you need.
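
If anyone wants to see how mechanical the scheme is, here's a throwaway Python sketch - the trailing symbols are just the ones in the list above, and the function name is made up:

import calendar

SYMBOLS = "!@#$%^&*()-="          # one shifted character per month, as above

def passwords_for_year(year, short_months=False):
    # Yields the 2015January! ... 2015December= sequence for a given year.
    for i, name in enumerate(calendar.month_name[1:]):   # January..December
        month = name[:3] if short_months else name
        yield "%d%s%s" % (year, month, SYMBOLS[i])

print(list(passwords_for_year(2015)))
# ['2015January!', '2015February@', ..., '2015December=']

Change the year and you've got next year's "unique" set; pass short_months=True if the full names blow past a length limit.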

Comment: Re:Usability metrics, anyone? (Score 5, Insightful) 183

I've consciously avoided jobs where my code is responsible for life-and-death decisions. The problem, I guess, is that too many other good people have made the same decision, and there aren't enough good people available to do what needs to be done. I'm not sure what to do about this.

The problem is not just that, it's that those companies don't actually pay that well, either.

Writing safety-critical code is not a black art - there are plenty of guidelines on what you should and shouldn't do (e.g., dynamic memory allocation is verboten). It is a specialized skill, though, and the job should really be done by people who have the requisite training, knowledge, and often certifications (e.g., engineering certifications).

The problem is this is very specialized, and it costs a lot of money because those people know they are taking on professional risk (not unlike many other engineers - civil, mechanical, etc., who design stuff that could fail and take lives). Of course, the IT companies behind it all? They're not willing to pay for that enhanced risk - they're going to pay market rates.

Well geez, if I'm going to be paid market rates, I'm not going to put my name on anything to certify it, because that's a specialized skill that gets paid for. (Hence things like "approved drawings", which mean some engineer actually reviewed it all and put their stamp and certification on it.)

There's a reason NASA's software for the Space Shuttle cost 5+ times what a normal software project of similar size and scope would. It's not incompetence on NASA's part, and it's not just the extensive documentation and paperwork that goes along with it - it's that writing safety-critical software is hard and specialized, and every line of code probably generates a book's worth of documentation proving it fails safe, who wrote it, who changed it, who reviewed it, and so on.

Yeah. Most health IT companies don't even come close.

Comment: Re:Wouldn't a re-write be more fruitful? (Score 1) 207

by tlhIngan (#49470415) Attached to: Linux Getting Extensive x86 Assembly Code Refresh

The problem with total rewrites is they almost always involve a huge amount of effort, introduce new bugs and when they "work", users barely notice the difference. So the company soldiers on, applying patch upon patch to some bullshit codebase and suffering from it, but in an incremental way.
Worst of all is when they embark on a rewrite and give up half way through. I was involved in a project to port a C++/ActiveX based system to .NET forms. They ported most of the major views but left a lot of the minor stuff from the old codebase lying around and wrote bridges to host it in the new framework. So they doubled the code, half of it became bitrotten and hidden by the new code and bloated out the runtime. Great project.

And what you described is technical debt.

Rewriting code costs time, effort and money, which is how you repay the debt. After all, when faced with a fix, you can do it quick and dirty and borrow from the bank, incurring debt, or fix it properly, which takes longer but incurs no debt. The former gets it done quickest and puts the fire out, but the stuff you borrowed will haunt you later.

Your new system where you wrote bridges means you got rid of some debt, but then incurred a bunch more.

Comment: Re:Museums? (Score 1) 44

by tlhIngan (#49470329) Attached to: Turing Manuscript Sells For $1 Million

For stuff like manuscripts, museums are pretty much obsolete. What matters is what's on the paper, not the paper itself, so a hi-res picture is just as good, and a plain-text searchable copy is even better.

Except in a museum, there's a good chance it's available for public viewing. In a private collector's hands, unless they're philanthropic, it'll likely be locked away in a drawer, never to see the light of day again, and the public will never get a chance to see it.

Oh yeah, no collector will want to damage the value of their acquisition by making it easy to copy, either.

Comment: Re:Why is it even a discussion? (Score 1) 439

by tlhIngan (#49467999) Attached to: Republicans Introduce a Bill To Overturn Net Neutrality

The open internet is one of the most democratizing things we have in a modern society, why is this even up for debate? What benefit would society have in enabling "Fast lanes" or "premium" connections or other nonsense? What do we get protecting commercial interests?

Easy, people with money get to make more money.

You have to remember that in our capitalistic society (it's not a free market, but it is capital-based), money is power, and those in power want more of both.

Internet fast lanes allow one to do that.

Democratizing, or rather re-balancing, power (because it gives easy access to those who wouldn't otherwise have it) threatens those who already have power. Power is almost a zero-sum game - gain power and someone inevitably loses it. So letting the proles get access to power makes those in power feel threatened.

Comment: Re:C64 had a cassette drive (Score 1) 74

by tlhIngan (#49465419) Attached to: 1980's Soviet Bloc Computing: Printers, Mice, and Cassette Decks

I started using a Commodore 64 when I was 7. All of our family was amazed when we realized how to do 'Load"*",8,1' and Load"*",8,1

That always mystified me - what magic incantation made that command load the program off disk AND auto-start it? (I never did find out, so I still don't know today.)

Or how that even worked...
