Comment Re:where?! (Score 1) 537

It is very much backed by the real world: Debt Deflation

Waving your hand and saying it's "not a problem" when it obviously is one doesn't make it magically go away. Keep in mind that the deflationary spiral most economists talk about involves deflation on the order of a few percent per year. The deflation of Bitcoin is an order of magnitude greater than that!

The inventor of Bitcoin is a criminal genius, and should be jailed for inventing the biggest Ponzi scheme ever.

Comment False assumption (Score 4, Informative) 226

This assumption by the OP:

Mathematica generates the result based on the combination of software version, operating system, hypervisor, firmware and hardware that are running at that time.

... is entirely wrong. One of the defining features of Mathematica is symbolic expression rewriting and arbitrary-precision computation to avoid all of those specific issues. For example, the expression:

N[Sin[1], 50]

will always evaluate to exactly:

0.84147098480789650665250232163029899962256306079837

And, as expected, evaluating to 51 digits yields:

0.841470984807896506652502321630298999622563060798371

Notice how the last digit of the 50-digit result remains unchanged when one more digit is requested, exactly as expected.

This is explained at length in the documentation, and also in numerous Wolfram blog articles that go on about the details of the algorithms used to achieve this on a range of processors and operating systems. The (rare) exceptions are marked as such in the help and usually have (slower) arbitrary-precision or symbolic variants. For research purposes, Mathematica comes with an entire bag of tools that can be used to implement numerical algorithms to any precision reliably.

Conclusion: The author of the post didn't even bother to flip through the manual, despite having strict requirements spanning decades. He does, however, have the spare time to post on Slashdot and waste everybody else's time.

Comment Re:Libraries And Documentation (Score 1) 168

Interestingly, you claim your choice of programming language suits your requirements, but then you list a bunch of issues that are endemic to it and mitigated or absent in other languages.

For example, the need to sometimes, but not always, initialize objects, libraries, or whatever is typical of C/C++ code, but rare in Java or C#, where constructors or factory methods usually do that automatically for you on demand. The worst I've seen is some Microsoft C++ code where every object had both a C++ constructor and an init function, which wasn't consistently named and would lead to annoying run-time crashes if missed.
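To illustrate what I mean (a made-up minimal sketch, with hypothetical names, not code from any of those projects):

  // Hypothetical Java sketch: the constructor does all the setup,
  // so callers can never obtain a half-initialized object.
  public final class ZipReader {
      private final byte[] buffer;

      public ZipReader(int bufferSize) {
          this.buffer = new byte[bufferSize]; // fully initialized here
      }

      public int bufferSize() {
          return buffer.length;
      }
  }

  // The C++-style pattern being complained about, in pseudocode comments:
  //   ZipReader r;        // constructor does next to nothing
  //   r.Init(64 * 1024);  // forget this call and you crash at run time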

Similarly, the need to chase related code between two separate files is decidedly a C/C++-centric problem. A typical Java or C# class file is complete and self-contained, except for some special circumstances such as generated "partial" files in C#. Code discoverability is improved many-fold in Java and C# because intelligent refactoring IDEs can accurately chase references across millions of lines of code. That's just not possible with C/C++, where the same header code can be interpreted differently depending on the context in which it is included! Macros in general, particularly when combined with opaque 'void*' pointers, severely limit what an IDE can do.

I feel your pain. I've tried to hook in C libraries for basic tasks such as ZIP compression or PNG decoding in the past, only to discover that each and every one of them reinterprets what "int" means, how to "malloc", "free", read a stream, and report errors. This just never happens in Java or C#: integer sizes are fixed and typedef is flat out missing, memory is garbage collected and released automatically, both have a built-in stream abstraction (java.io streams, System.IO.Stream), and both use exceptions for safe and consistent error handling.
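As a trivial sketch of how this looks on the Java side (assuming a hypothetical file called data.gz, not code from any particular library):

  import java.io.IOException;
  import java.io.InputStream;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.util.zip.GZIPInputStream;

  public final class GzipExample {
      public static void main(String[] args) throws IOException {
          // int is always 32 bits, every library shares the same stream interface,
          // and failures arrive as exceptions rather than library-specific error codes.
          try (InputStream in = new GZIPInputStream(Files.newInputStream(Path.of("data.gz")))) {
              byte[] decompressed = in.readAllBytes(); // buffer is reclaimed by the GC
              System.out.println("Read " + decompressed.length + " bytes");
          } // the stream is closed automatically, even if an exception was thrown
      }
  }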

Sure, I'll believe you can remember to call "free", but which one of the dozens in the libraries you're using? Are they all thread-safe? Are you sure? Are your co-workers? All of them?

I'll even believe that you "need" C++ performance, except that in my experience I can develop the logic in a fifth of the time a C++ programmer needs, which leaves the remaining four-fifths for optimisation, usually by making the code highly multi-threaded or whatever. Given the same "budget" I can usually produce faster, better code, with less pain.

That was all actually slightly off-topic relative to your original gripe about insufficient documentation, which is also largely "solved" (as much as it can be, anyway) in Java/C# land: not only do you get vastly better tab-complete, but both ecosystems have standardized, embedded doc comments that IDEs index for searching!
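For example, a rough sketch of a Javadoc doc comment (class and method names are made up); the IDE indexes this and surfaces it in completion popups:

  import java.io.ByteArrayOutputStream;
  import java.util.zip.Deflater;

  public final class Compression {
      /**
       * Compresses the given bytes with DEFLATE.
       *
       * <p>IDEs index this doc comment and show it directly in
       * tab-complete popups and quick-documentation views.</p>
       *
       * @param input the raw bytes to compress
       * @return the compressed bytes
       */
      public static byte[] compress(byte[] input) {
          Deflater deflater = new Deflater();
          deflater.setInput(input);
          deflater.finish();
          ByteArrayOutputStream out = new ByteArrayOutputStream();
          byte[] chunk = new byte[4096];
          while (!deflater.finished()) {
              out.write(chunk, 0, deflater.deflate(chunk));
          }
          deflater.end();
          return out.toByteArray();
      }
  }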

Comment Is it HDMI 2.0 or 1.4?! (Score 4, Interesting) 212

Has anyone else noticed that despite the endless 4K resolution marketing being put out there by AMD, there is not a peep on the specific type of HDMI port the card has?

There is a HUGE difference between HDMI 2.0 and 1.4, but it's always specified as just "HDMI" with no version number. No review mentions the HDMI version, even though one would think that a real journalist would put in some effort to research this and find out.

I suppose it's easier to run the card through a bunch of automated benchmarks, cut & paste 15 pages of results to maximise ad impressions, and call it a "review".

Comment Re:Illusion of privacy (Score 2) 224

The weak point is not with the mathematics. It's like claiming nobody can break into your house because you have a solid steel door, but at the same time you have glass windows.

The weakness in SSL is the trust you have to place in the CA infrastructure, none of which is really that secure. Your browser will trust any valid certificate rooted in a trusted CA, so there's no need to crack the keys of the certificates Google actually issued. Keys have leaked, CAs have been hacked, intermediate authority certificates are often very weak (512-bit), and the NSA could simply issue a national-security order to a US corporation to hand over whatever key material it desires. The Stuxnet worm is a great real-world example of this kind of thing happening: its creators used private code-signing keys stolen from hardware vendors to sign their malicious device drivers with legitimate-looking certificates.

Not to mention that it wouldn't be difficult for an agency with the resources of the NSA or the CIA to simply infiltrate larger IT organisations such as Google and make copies of their private keys. That way they could man-in-the-middle connections without having to change the certificate fingerprint.

That's all academic anyway; the rumours are that the NSA doesn't have to bother decrypting anything because it has moles inside all the large organisations who provide the plain-text content directly whenever it's wanted. This wouldn't even require that many people. Just by having someone in each of the top five ISPs, Apple, Google, Microsoft, IBM, Oracle, and Amazon, you'd basically cover the core "cloud" services that most computers connect to on a daily basis.

Comment Re:Uh huh (Score 1) 570

I could reply in detail, but you missed my core point, so I'll pick out just a couple of the more relevant ones to reply to:

vi on the other hand has the advantage of being universal.

Which is what we were told in the lab too. But how sad is that? I shouldn't have to "make do" with a shitty text editor saddled with the lowest-common-denominator limitations of the 37-year-old systems it was originally developed for, back in 1976!!! This is the problem with both UNIX and Linux: they haven't really changed, at their core, for three or four decades. Sure, there are GUIs and whatever on top, but as soon as you want to build real systems, it's time to roll your sleeves up and get elbow-deep in decades-old crap, with all the limitations and inefficiencies that implies.

Sure, there might be some sort of masochistic pride in learning how to use a text editor that basically no first-time user can even exit without a cheat-sheet, but I have better uses for my time.

Type "top". When it comes up, type the letter "M" (for memory). Five keystrokes, and you get it updated continually.

Did you not read the bit where I said that the whole single-letter option thing is insane, because nobody can possibly guess what commands mean based on just one symbol?

The whole pattern of two- or three-letter commands with one-letter options isn't a good thing; it's a legacy from the ancient times when tab-complete didn't exist and "terminals" were typewriters. In that era, every character saved measurably improved administrator efficiency.

Here's a hint, in case you've been living under a rock for the last four decades: those times are over. I have a 1920x1200 LCD screen, and my phone gets 10 megabits. I don't need to shave a few hundred bits off my commands so they'll transfer faster over a 300-baud link, because those links exist only in museums.

So back to your "example": great, I can sort by "memory" by pressing "M". Awesome. Here are the columns I could sort by in Windows:


  > Get-Process | Get-Member *memory*64 | select -ExpandProperty Name

  NonpagedSystemMemorySize64
  PagedMemorySize64
  PagedSystemMemorySize64
  PeakPagedMemorySize64
  PeakVirtualMemorySize64
  PrivateMemorySize64
  VirtualMemorySize64

First of all, that command line made perfect sense to you, right? You can understand what it means, without having to look up any of the commands in help.

For Linux, I have no idea what memory statistics are available for a process, but Google came to the rescue with what looks like the likely set:

  vsize - The size of virtual memory of the process.
  nswap - The size of swap space of the process.
  cnswap - The size of swap space of the children of the process.
  size - The total program size of the process.
  resident - Resident set size; this includes the text, data and stack space.
  share - Total size of shared pages of the process.
  trs - Total text size of the process.
  drs - Total data/stack size of the process.
  lrs - Total library size of the process.
  dtp - Total size of dirty pages of the process (unused since kernel 2.6).

Answer me this: which one is "M"? What are the letters for the other ones? Do you know off the top of your head? If you see some random "top" command line, can you immediately identify every single option from memory? Can "top" sort by any of those columns? What about every other Linux command with single-character options? Have you memorised all of them too?

PS: right after I finished typing all of that up, I actually read through the Wikipedia "vi" page. I love this bit:

"Joy used a Lear Siegler ADM3A terminal. On this terminal, the Escape key was at the location now occupied by the Tab key on the widely used IBM PC keyboard (on the left side of the alphabetic part of the keyboard, one row above the middle row). This made it a convenient choice for switching vi modes. Also, the keys h,j,k,l served double duty as cursor movement keys and were inscribed with arrows, which is why vi uses them in that way. The ADM3A had no other cursor keys. Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think"

Haha... that's just gold!

Comment Re:Uh huh (Score 2, Interesting) 570

Ok, I'll bite.

I'm a Windows admin, but I just went to a training course to learn about a high-end enterprise product that runs on top of Linux. I've dabbled with Linux-based stuff before (proxies, VMware, ESX, etc...), so it's not exactly new territory, but I figured it's 2013, it'll be interesting to get a glimpse into the current state of the "Linux Enterprise" world.

My experience was this:

-- You still need to patch, or install 140+ dependencies to install one application. Same difference.
-- You still need to reboot. A lot. More than I thought. I suspect that it is possible to avoid most of them though by judiciously restarting services, but the effort is much higher and the outage level is practically the same, so what's the benefit, really?
-- Things that really ought to be automatic, aren't. I spent a good 50% of the lab doing really fiddly things like cut & pasting iptables rules to open firewall ports. The installer really should have just done that for me.
-- Binding services together and just generally getting things to start up and talk to each other required an awful lot of error-prone manual labour. The lab guide was liberally sprinkled with warnings and "do not forget this or else" sections. Lots of "go to this unrelated-seeming file and flip this setting... because... just do it or nothing will work."
-- I love the disclaimer in the training guide: "Linux configuration scripts do not tolerate typos, are case sensitive, and are not possible to validate before running the associated service." Fun stuff. I can't wait to diagnose random single-character problems in 10 kilobyte files when the only error is that one of a dozen services barfed when started.
-- Wow, the 70s called and wanted their limitations back: spaces in file names? You're risking random failures! Case-insensitive user names? Nope. Unicode text? Hah! IPv6? In theory, not in practice. GUI config wizards? Nope. Text-based config wizards? Not many of those either. Want to make a configuration change to a service without having to stop & start it? You're dreaming! An editor more user friendly than vi? Eat some cement and harden up princess!
-- I love the undecipherable command-line wizardry. I'm not an idiot, but how-the-fuck would I know what "-e" does on some random command? There is just no way without trawling through man pages using a command-line reader with no mouse support and keyboard shortcuts I don't know. Compare this to a sample PowerShell pipeline "Get-Process -Name 'n*' | sort -Descending PagedMemorySize". You'd have a hard time finding an IT engineer that can't figure out what that does.

I keep hearing about the supposed efficiency advantage of Linux, but I just don't see it. Given a hypervisor, PowerShell, and Group Policy, Windows administration is a piece of cake in comparison.

Comment Re:Can superconductors compute? (Score 2) 73

They're probably using Rapid Single Flux Quantum (RSFQ), which isn't really a "quantum" computer logic, but is very fast and very low power.

It's the latter property that is of interest for building supercomputers. One of the biggest performance limitations is latency caused by the speed-of-light delay between processors. Moving processors closer together reduces the delay, but increases the power density until there is just no practical way to cool the machine and it overheats.
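Some back-of-the-envelope numbers to make that concrete (my own rough figures, not from the article):

  public final class LightDelay {
      public static void main(String[] args) {
          final double C = 2.998e8; // speed of light in m/s; signals in real interconnects are slower still
          double[] separations = {3.0, 0.3, 0.03}; // metres between processing elements
          for (double d : separations) {
              double delayNs = d / C * 1e9;
              System.out.printf("%5.2f m -> %5.2f ns one-way (~%.1f cycles at 3 GHz)%n",
                      d, delayNs, delayNs * 3.0);
          }
      }
  }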

Superconducting logic families like RSFQ have very, VERY low power requirements, which means the processing elements can be packed very close together; they could probably even be stacked, along with memory. In theory, it would be possible to squeeze a petaflop supercomputer into the space inside an average-sized cryogenic Dewar!

In practice, manufacturing complex RSFQ chips has been a bit tricky. Simple ones, however, are used relatively often, for example as analog-to-digital converters in radio telescopes and some very high-end radar systems.

There have been suggestions to miniaturize this stuff using tiny cryocoolers based on stacked Peltier elements and good insulation, but I think the temperatures required are just too low.

At the end of the day, the NSA or their ilk funding research into this stuff might be a good thing! It sounds to me like this is a great technology that just needs a few billion dollars of research funding to become practical for commercial use.

Comment Re:Um excuse me ... (Score 1) 543

But, I think the crowd always saying use vi for this or that are the kinda people whose job is more about optimizing complex algorithms than it is about writing lots of business logic.

My point still stands: I was working on 3D games at the time! I would have killed for the concurrency visualiser in Visual Studio, because the engine was multi-threaded. Similarly, live edit-and-continue debugging has its uses when twiddling with something like a tree-rebalancing algorithm.

Comment Re:Um excuse me ... (Score 5, Insightful) 543

Well, I'd never hire you in the first place, because modern IDEs are the automation of the software development world and demonstrably improve productivity while lowering error rates.

You're basically saying that we should let the guys in the warehouse manhandle 500kg loads by hand because they "prefer" not to use the forklifts. We should just let them do whatever they please, because that's what makes for good management, right?

I've been in mixed work environments before where everyone just used whatever tools they wanted: Linux, Windows, Mac, Vi, Emacs, etc... I personally used IntelliJ IDEA on Windows because it had code analysis and safe refactoring. My productivity was at least 50x higher than other developers. I was told not to submit changes too fast because the code reviewers couldn't keep up. Note that I didn't say I was 50x better than anyone else -- there were smarter and more experienced developers there -- but I was running circles around them because of the tools that I was using. A woodcarver, no matter how skilled in his art, simply cannot keep up with a CNC milling machine. A blacksmith cannot possibly outproduce a ten ton press that can stamp out a part every five seconds.

Inefficiencies were everywhere: they took 30 seconds to check out a file from source control using a command-line tool, whereas I could just start typing with a barely noticeable pause on the first character as the IDE does it for me. They used "diff", I used a GUI 3-way merge tool that syntax highlighted and could ignore whitespaces in a semantically aware way. There was one particularly funny moment when some guy walked up to me to ask me to look into a bug in a specific method. He starts off with the usual "now go to the xyz folder, abc subfolder, now somewhere in here there's a..." meanwhile I'm staring at him patiently because I had opened the file before he'd even finished giving me the full method name at the start of his spiel. Global, indexed, semantic-aware, prefix search with jump to file is a feature of IntelliJ IDEA, not Emacs or Vi. He's never even heard of such a thing! Thought it was magic. Grep couldn't have found the method anywhere near as fast, not through 30 million lines of code anyway, and then it would have returned every reference to the method name as well, not just the method definition itself. Then I'd have to find the damned file in a haystack of thousands and open it manually anyway.

Minutes of work done in seconds, hours in minutes.

It's not about typing, or shortcuts, or block select, or the specific dialect of regular expressions in your favorite text editor. It's about indexing, refactoring, code analysis, live error highlights, popup-help, tab-complete, source control integration, boilerplate generation, integrated debuggers, and a thousand other things that most programming oriented text editors simply do not have. It's about letting the CPU in your computer do what it is there for, instead of just waiting patiently for the next keyboard interrupt so that it can use all 3 gigahertz of power to put a byte into a buffer and then go back to sleep.

It's not even a good idea to let developers pick their favorite IDE either, because there are productivity gains to be had from consistency. Training is cheaper, licenses can be purchased in bulk, plugins will work for everyone, custom extensions may be cost-effective to develop for one IDE but not many, etc...

Comment Re:so this...... (Score 1) 177

Nobody is racing a keyboard.

The problem is that when the system has many other components, all adding latency, the result is perceptible and can be huge in some games that require fast reaction times.

Add up all the latencies in a typical game, and you're well past human reaction times:
- keyboard: 50ms
- network round-trip to game server: 20-50ms
- game logic: 1-20ms
- graphics rendering + vsync: 17-50ms
- LCD buffering delay: 17-50ms

That adds up to over 200ms in the worst case, of which as much as a quarter is the keyboard! Many manufacturers already sell "gaming optimized" LCD monitors with 120Hz refresh rates and/or unbuffered control of the pixels for exactly this reason. So if that's a market, why not low-latency keyboards too? They would be way cheaper, and could have a bigger impact, since keyboard latency could be reduced to practically zero!
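To spell out the arithmetic (using the worst-case numbers from my list above):

  public final class LatencyBudget {
      public static void main(String[] args) {
          // Worst-case figures from the list above, in milliseconds
          int keyboard = 50, network = 50, gameLogic = 20, rendering = 50, lcd = 50;
          int total = keyboard + network + gameLogic + rendering + lcd;
          System.out.printf("Total: %d ms, keyboard share: %.0f%%%n",
                  total, 100.0 * keyboard / total);
          // Prints: Total: 220 ms, keyboard share: 23%
      }
  }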

Comment Re:Mongolian Horde (Score 1) 66

I wrote about this before in an unrelated post, but the point is the same: most "enterprise" vendors will sell you kit that can tolerate nuclear war, but as far as I know, there are very few solutions to protect from administrator error or malice.

Think about the harm someone could do to a typical business with nothing other than an Active Directory "Domain Admin" account! Given something like that, I can think of a whole bunch of ways to damage an environment so thoroughly that even backup tapes stored off-site wouldn't be enough to repair it.

Ill will isn't even required. I've personally witnessed a fat-fingered administrator nearly destroy a business in seconds! That organization just barely managed to remain solvent, despite full backups that were successfully restored.

There is an enormous amount of research waiting to be done to develop systems with "Byzantine Security", that is, systems that can tolerate not only external attacks or simple component failures, but also deliberate attacks by trusted parties.
