Comment Re:Is this a request for optimal code design... (Score 1) 373

It might be more prudent to ask for particularly nice before- and after-refactoring examples in general.

Seconded!

I once saw a Java class with pages of complex multi-threaded code to update the horizontal offset of the ticker in a GUI. Not counting the usual minimum content of a class file, I replaced it with a single line along the lines of "return velocity * getCurrentTime();". So satisfying! 8)
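
For the curious, a minimal sketch of what that reduction looks like (the class and member names are mine; the original code is long gone):

  // Hypothetical reconstruction: the ticker offset is a pure function of
  // elapsed time, so a page of thread management collapses to one method.
  public class Ticker {
      private final double velocity;  // pixels per millisecond
      private final long startTimeMillis = System.currentTimeMillis();

      public Ticker(double velocity) {
          this.velocity = velocity;
      }

      // The single line that replaced the multi-threaded bookkeeping.
      public double getOffset() {
          return velocity * (System.currentTimeMillis() - startTimeMillis);
      }
  }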

Comment Re:General goodness (Score 4, Informative) 114

I love hearing from the front-line what the users actually want, what they like and what they would like to see improved.

This.

It's surprising how little feedback there is in the real world.

One of the best experiences of my career (when I had a developer hat on), was sitting in the room where Level 1 and 2 support staff were on the phone, supporting a system that I had built and was doing Level 3 support on. Until then, it would not have occurred to me that a good 20% of their time was wasted on looking up contact details. No problem, I integrated a one-click contact-lookup function into the dashboard system. They loved it. I never would have thought that "fast search" (think milliseconds) was a "feature" until I saw how important it was for a helpdesk person to not have to wait for anything while talking to someone interactively.

Things of that nature resulted in a UI that -- while a bit quirky from a developer's perspective -- allowed them to get their jobs done efficiently! It was all really simple stuff to implement, but I wouldn't have ever gone down that path if I didn't have that direct feedback and on-site observation of user behavior.

Comment General goodness (Score 5, Informative) 114

Specific examples are hard to come by, but I've noticed some general trends that differentiate the "good" from the "barely usable":

* Scalability. For example, a good interface will pop up a "search" box for finding a security group in Active Directory. A bad one will let me choose security groups from a list or a drop-down. Both look equally good when the developer is working in a test environment. The latter will crash when used against a million-object directory. Similarly, check out the DNS management dialog box in Windows, or some Oracle tools. Both will show you "all" objects up to some limit (e.g.: 5000), but then provide a filter option to allow you to narrow down the "search" to prevent the GUI from melting if you look at a database with 500K tables. Yes. It happens. A lot. More than you think. Really.
* Annotations. It's 2014 for Christ's sake! There is absolutely no reason not to include a general "note" or at least a "description" field with every. Single. Thing. Seriously. All of them. I'm not kidding. Look at VMware's vSphere interface as an example of this done reasonably well but not perfectly. They at least allow custom columns so you can tag things systematically. Better yet, newer versions of Microsoft's Group Policy allow annotations on every single setting.
* Versioning. For example, Citrix NetScaler keeps the last 'n' versions of its configuration automatically (5 by default I think). Why the fuck Cisco can't do the same with their 1KB but omfg-they're-ultra-critical-to-the-whole-goddamned-enterprise config files I just don't understand. Maybe they're trying to save precious bytes...
* Policy. Good examples are Cisco UCS Blades and, of course, Active Directory Group Policy. Settings should trickle down through hierarchies (see the sketch after this list). I should never have to set the exact same setting five hundred times. Settings should set and unset themselves automatically based on the scenario, e.g.: replacing a blade should not involve having to reconfigure its BIOS settings by hand. A typical bad example is 99% of Linux, where every setting has to be either set manually or set via a script. A script is still manual, just faster. No! Smack yourself in the face! A script is NOT a replacement for a policy engine. Don't breathe in, ready to go on a rant about how great Linux is and how easy it is to manage, because it's really not. Scripts are a "write only" management tool that results in impossible-to-reverse-engineer solutions that can only be replaced wholesale years down the track.
* Help. I'm not really a storage engineer, I just... dabble. However, I've set up labs with IBM and EMC kit, no problem. The one time I got asked to create a simple logical volume on a Hitachi array, I walked away backwards and refused to touch the stupid thing. It seriously had 10 pages of settings along the lines of "L3 Mode: 5/7?" I mean... wat? So sure, I press F1 for help like a naive fool. It helpfully informed me that the setting configures L3 Mode to either mode 5 or mode 7. I can press "OK" to accept the mode setting, or "Cancel" otherwise. I was enlightened. Meanwhile, the same dialog box on the EMC array basically asks for where, what size, and what RAID level.
* Behind the Scenes. Some GUIs have 1:1 mappings with some sort of underlying command line or protocol: PowerShell-based consoles (most Microsoft and Citrix products) come to mind, as do many Linux/Unix GUIs and database admin tools. The better ones have a "tab" or a pop-up somewhere that shows the "script equivalent" of whatever you're doing in the GUI. This is very useful, particularly for beginners, and we're all beginners with every product at least once.
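
Here's a minimal Java sketch of that policy idea (all names are mine, purely illustrative): every setting resolves by walking up a hierarchy, so a value set once at the top applies to everything below it unless something overrides it.

  import java.util.HashMap;
  import java.util.Map;

  // A node in a policy hierarchy: datacenter -> chassis -> blade, say.
  class PolicyNode {
      private final PolicyNode parent;
      private final Map<String, String> overrides = new HashMap<>();

      PolicyNode(PolicyNode parent) { this.parent = parent; }

      void set(String key, String value) { overrides.put(key, value); }

      // Walk up the hierarchy until some ancestor defines the setting.
      String resolve(String key) {
          String value = overrides.get(key);
          if (value != null) return value;
          return parent == null ? null : parent.resolve(key);
      }
  }

Set "biosProfile" once on the datacenter node and every blade under it resolves to that value; swapping a blade means attaching a fresh node, not re-keying five hundred settings by hand.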

Really, GUI design is -- or should be -- a science, and not a trivial one! It integrates serious engineering constraints, business restrictions, and project management priorities, along with the fuzzy complexities of individual psychology and the dynamics of interacting groups of people. It's done woefully wrong even by the largest corporations. In practice, it boils down to the developers not having administrator experience or a realistic test environment. The best GUIs I've seen were usually those where the developers were working against production systems at scale. That's both rare and by no means a guarantee...

Comment Re:where?! (Score 1) 537

It is very much backed by the real world: Debt Deflation

Waving your hand and saying that it's "not a problem" when it obviously is doesn't make it magically go away. Keep in mind that the deflationary spiral most economists talk about involves deflation on the order of a few percent per year. The deflation of Bitcoin is an order of magnitude greater than that!

The inventor of Bitcoin is a criminal genius, and should be jailed for inventing the biggest Ponzi scheme ever.

Comment False assumption (Score 4, Informative) 226

This assumption by the OP:

Mathematica generates the result based on the combination of software version, operating system, hypervisor, firmware and hardware that are running at that time.

... is entirely wrong. One of the defining features of Mathematica is symbolic expression rewriting and arbitrary-precision computation to avoid all of those specific issues. For example, the expression:

N[Sin[1], 50]

will always evaluate to exactly:

0.84147098480789650665250232163029899962256306079837

And, as expected, evaluating to 51 digits yields:

0.841470984807896506652502321630298999622563060798371

Notice how the last digit of the 50-digit result remains unchanged: the extra precision simply appends a digit, exactly as expected.

This is explained at length in the documentation, and also in numerous Wolfram blog articles that go into the details of the algorithms used to achieve this on a range of processors and operating systems. The (rare) exceptions are marked as such in the help and usually have (slower) arbitrary-precision or symbolic variants. For research purposes, Mathematica comes with an entire bag of tools that can be used to implement numerical algorithms reliably to any precision.
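
The same determinism is easy to demonstrate outside Mathematica, too. Here's a minimal Java sketch of mine (a plain Taylor series with BigDecimal -- nothing to do with Wolfram's internals, and without Mathematica's guaranteed-digit semantics): fixed-precision decimal arithmetic yields the same digits on any JVM, OS, or CPU, because no floating-point hardware is involved.

  import java.math.BigDecimal;
  import java.math.MathContext;

  public class SinDigits {
      // Taylor series: sin(x) = x - x^3/3! + x^5/5! - ...
      static BigDecimal sin(BigDecimal x, MathContext mc) {
          BigDecimal term = x, sum = x;
          BigDecimal epsilon = BigDecimal.ONE.movePointLeft(mc.getPrecision() + 5);
          for (int n = 1; term.abs().compareTo(epsilon) > 0; n++) {
              int k = 2 * n;
              // Each term is -previous * x^2 / ((2n) * (2n + 1)).
              term = term.multiply(x, mc).multiply(x, mc)
                         .divide(BigDecimal.valueOf((long) k * (k + 1)), mc)
                         .negate();
              sum = sum.add(term, mc);
          }
          return sum;
      }

      public static void main(String[] args) {
          // Same output on every platform: no FPU, hypervisor, or firmware in the loop.
          System.out.println(sin(BigDecimal.ONE, new MathContext(60)));
      }
  }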

Conclusion: the author of the post didn't even bother to flip through the manual, despite having strict requirements spanning decades. He does, however, have the spare time to post on Slashdot and waste everybody else's time.

Comment Re:Libraries And Documentation (Score 1) 168

Interestingly, you claim your choice of programming language suits your requirements, but then you list a bunch of issues endemic to it that are mitigated or absent in other languages.

For example, the need to sometimes, but not always, initialize objects, libraries, or whatever is typical of C/C++ code, but rare in Java or C#, where constructors or factory methods usually do that automatically for you on demand. The worst I've seen is some Microsoft C++ code where every object had both a C++ constructor and an init function, which wasn't consistently named and would lead to annoying run-time crashes if missed.

Similarly, the need to chase related code between two unrelated files is a decidedly C/C++-centric problem. A typical Java or C# class file is complete and self-contained, except in some special circumstances such as the generated "partial" files used in C#. Code discoverability is improved many-fold in Java and C# by intelligent refactoring IDEs that can accurately chase references across millions of lines of code. That's just not possible with C/C++, where the same header code can be interpreted differently depending on the context in which it is included! Macros in general, particularly when combined with opaque 'void*' pointers, severely limit IDE capabilities.

I feel your pain. I've tried to hook in C libraries for basic tasks such as ZIP compression or PNG decoding in the past, only to discover that each and every one of them reinterprets what "int" means, how to "malloc", "free", read a stream, and return messages or error codes. Meanwhile, this just never happens in Java or C#. The size of integers is fixed and typedef is flat-out missing, memory is garbage collected and released automatically, both languages have a built-in stream abstraction (System.IO.Stream in C#, the java.io streams in Java), and both have exceptions for safe and consistent error handling.
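
To make that concrete, a minimal Java example of mine (standard library only, no third-party code): decompressing a gzip stream involves no library-specific allocator, integer size, or error-code convention.

  import java.io.ByteArrayInputStream;
  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import java.util.zip.GZIPInputStream;

  public class Decompress {
      // One fixed meaning of byte and int, one stream abstraction,
      // and exceptions instead of per-library error codes.
      static byte[] gunzip(byte[] compressed) throws IOException {
          try (GZIPInputStream in =
                   new GZIPInputStream(new ByteArrayInputStream(compressed))) {
              ByteArrayOutputStream out = new ByteArrayOutputStream();
              byte[] buffer = new byte[8192];
              for (int n; (n = in.read(buffer)) != -1; ) {
                  out.write(buffer, 0, n);  // garbage collected: no free() to forget
              }
              return out.toByteArray();
          }
      }
  }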

Sure, I'll believe you can remember to call "free", but which one of the dozens in the libraries you're using? Are they all thread-safe? Are you sure? Are your co-workers? All of them?

I'll even believe that you "need" C++ performance, except that in my experience I can spend 1/5th of the time a C++ programmer needs on developing the logic, which then leaves me 4/5ths of the time for optimisation, usually by making the code highly multi-threaded or whatever. Given the same "budget", I can usually produce faster, better code with less pain.

That was all actually slightly off-topic relative to your original gripe about insufficient documentation, which is also largely "solved" (as much as it can be, anyway) in Java/C# land: not only do you get vastly better tab-complete, but both languages have standardized embedded doc-comments that are indexed for searching in IDEs!
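
For instance, a plain Javadoc comment (my own trivial example) is enough for every major IDE to show parameter documentation in tab-complete tooltips and index it for search:

  public class TickerMath {
      /**
       * Computes the horizontal ticker offset.
       *
       * @param velocity      speed in pixels per millisecond
       * @param elapsedMillis time since the ticker started, in milliseconds
       * @return the offset in pixels
       */
      public static double tickerOffset(double velocity, long elapsedMillis) {
          return velocity * elapsedMillis;
      }
  }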

Comment Is it HDMI 2.0 or 1.4?! (Score 4, Interesting) 212

Has anyone else noticed that despite the endless 4K resolution marketing being put out there by AMD, there is not a peep on the specific type of HDMI port the card has?

There is a HUGE difference between HDMI 2.0 and 1.4, but it's always specified as just "HDMI" with no version number. No review mentions the HDMI version, even though one would think that a real journalist would put in some effort to research this and find out.

I suppose it's easier to run the card through a bunch of automated benchmarks, cut & paste 15 pages of results to maximise ad impressions, and call it a "review".

Comment Re:Illusion of privacy (Score 2) 224

The weak point is not with the mathematics. It's like claiming nobody can break into your house because you have a solid steel door, but at the same time you have glass windows.

The weakness in SSL is the trust you have to place in the CA infrastructure, none of which is really that secure. Your browser will trust any valid certificate chain rooted in a trusted CA, so there's no need to crack the keys of the certificates issued to Google. Keys have leaked, CAs have been hacked, intermediate authority certificates are often very weak (512 bits), and the NSA could simply issue a national-security order to a US corporation to provide them with whatever key material they desire. The Stuxnet worm is a great real-world example of this kind of thing happening: its creators used stolen private keys to give their device drivers valid code signatures.

Not to mention that it wouldn't be difficult for an agency with the resources of the NSA or the CIA to simply infiltrate larger IT organisations such as Google and make copies of their private keys. That way they could man-in-the-middle connections without having to change the certificate fingerprint.

That's all academic anyway; the rumour is that the NSA doesn't have to bother decrypting anything, because they have moles inside all the large organisations who provide them with the plain-text content directly whenever they want. This wouldn't even require that many people. Just by having someone in each of the top-5 ISPs, Apple, Google, Microsoft, IBM, Oracle, and Amazon, you'd basically ensure coverage of the core "cloud" services that most computers connect to on a daily basis.

Comment Re:Uh huh (Score 1) 570

I could reply in detail, but you missed my core point, so I'll pick out just a couple of the more relevant ones to reply to:

vi on the other hand has the advantage of being universal.

Which is what we were told in the lab too. But how sad is that? I shouldn't have to "make do" with a shitty text editor that's saddled with the lowest-common-denominator limitations of the 37-year-old systems it was originally developed for, in 1976!!! This is the problem with both UNIX and Linux: they haven't really changed, at their core, for three or four decades. Sure, there are GUIs and whatever on top, but as soon as you want to build real systems, it's time to roll the sleeves up and get elbow-deep in decades-old crap, with all the limitations and inefficiencies that implies.

Sure, there might be some sort of masochistic pride in learning how to use a text editor that basically no first-time user can even exit without a cheat-sheet, but I have better uses for my time.

Type "top". When it comes up, type the letter "M" (for memory). Five keystrokes, and you get it updated continually.

Did you not read the bit where I said that the whole single-letter option thing is insane, because nobody can possibly guess what commands mean based on just one symbol?

The whole scheme of two- or three-letter commands with one-letter options isn't a good thing; it's a legacy from the ancient times when tab-complete didn't exist and "terminals" were typewriters. In that era, every character saved improved administrator efficiency measurably.

Here's a hint, in case you've been living under a rock for the last four decades: those times are over. I have a 1920x1200 LCD screen, and my phone gets 10 megabits. I don't need to shave a few hundred bits off my commands so that they transfer faster over a 300 baud link, because those exist only in museums.

So back to your "example": great, I can sort by "memory" by pressing "M". Awesome. Here are the columns I could sort by in Windows:


  > Get-Process | Get-Member *memory*64 | select -ExpandProperty Name

  NonpagedSystemMemorySize64
  PagedMemorySize64
  PagedSystemMemorySize64
  PeakPagedMemorySize64
  PeakVirtualMemorySize64
  PrivateMemorySize64
  VirtualMemorySize64

First of all, that command line made perfect sense to you, right? You can understand what it means, without having to look up any of the commands in help.

For Linux, I had no idea what memory statistics are available for a process, but Google came to the rescue with what looks like the likely set:

  vsize - The size of virtual memory of the process.
  nswap - The size of swap space of the process.
  cnswap - The size of swap space of the children of the process.
  size - The total program size of the process.
  resident - Resident set size; this includes the text, data and stack space.
  share - Total size of shared pages of the process.
  trs - Total text size of the process.
  drs - Total data/stack size of the process.
  lrs - Total library size of the process.
  dtp - Total size of dirty pages of the process (unused since kernel 2.6).

Answer me this: which one is "M"? What are the letters for the other ones? Do you know off the top of your head? If you see some random "top" command line, would you be able to immediately identify every single option from memory? Can "top" sort by any of those columns? What about every other Linux command with single-character options? Have you memorised all of them too?

PS: right after I finished typing all of that up, I actually read through the Wikipedia "vi" page. I love this bit:

"Joy used a Lear Siegler ADM3A terminal. On this terminal, the Escape key was at the location now occupied by the Tab key on the widely used IBM PC keyboard (on the left side of the alphabetic part of the keyboard, one row above the middle row). This made it a convenient choice for switching vi modes. Also, the keys h,j,k,l served double duty as cursor movement keys and were inscribed with arrows, which is why vi uses them in that way. The ADM3A had no other cursor keys. Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think"

Haha... that's just gold!

Comment Re:Uh huh (Score 2, Interesting) 570

Ok, I'll bite.

I'm a Windows admin, but I just went to a training course to learn about a high-end enterprise product that runs on top of Linux. I've dabbled with Linux-based stuff before (proxies, VMware, ESX, etc...), so it's not exactly new territory, but I figured it's 2013, it'll be interesting to get a glimpse into the current state of the "Linux Enterprise" world.

My experience was this:

-- You still need to patch, or install 140+ dependencies to install one application. Same difference.
-- You still need to reboot. A lot. More than I thought. I suspect most reboots could be avoided by judiciously restarting services, but the effort is much higher and the outage level is practically the same, so what's the benefit, really?
-- Things that really ought to be automatic, aren't. I spent a good 50% of the lab doing really fiddly things like cut & pasting iptables rules to open firewall ports. The installer really should have just done that for me.
-- Binding services together and just generally getting things to start up and talk required an awful lot of error-prone manual labour. The lab guide was liberally sprinkled with warnings and "do not forget this or else" sections. Lots of "go to this unrelated-seeming file and flip this setting... because... just do it or nothing will work."
-- I love the disclaimer in the training guide: "Linux configuration scripts do not tolerate typos, are case sensitive, and are not possible to validate before running the associated service." Fun stuff. I can't wait to diagnose random single-character problems in 10 kilobyte files when the only error is that one of a dozen services barfed when started.
-- Wow, the 70s called and wanted their limitations back: spaces in file names? You're risking random failures! Case-insensitive user names? Nope. Unicode text? Hah! IPv6? In theory, not in practice. GUI config wizards? Nope. Text-based config wizards? Not many of those either. Want to make a configuration change to a service without having to stop & start it? You're dreaming! An editor more user friendly than vi? Eat some cement and harden up princess!
-- I love the undecipherable command-line wizardry. I'm not an idiot, but how-the-fuck would I know what "-e" does on some random command? There is just no way without trawling through man pages using a command-line reader with no mouse support and keyboard shortcuts I don't know. Compare this to a sample PowerShell pipeline "Get-Process -Name 'n*' | sort -Descending PagedMemorySize". You'd have a hard time finding an IT engineer that can't figure out what that does.

I keep hearing about the supposed efficiency advantage of Linux, but I just don't see it. Given a hypervisor, PowerShell, and Group Policy, Windows administration is a piece of cake in comparison.

Comment Re:Can superconductors compute? (Score 2) 73

They're probably using Rapid Single Flux Quantum (RSFQ) logic, which isn't really "quantum" computing, but is very fast and very low power.

It's the latter property that is of interest for making supercomputers. One of the biggest performance limitations is latency caused by the speed-of-light delay between processors: light travels only about 30 cm per nanosecond, so at 3 GHz a signal covers roughly 10 cm per clock cycle. Moving processors closer together reduces the delay, but increases the power density until there is just no practical way to cool the computer and it overheats.

Superconducting logic families like RSFQ have very, VERY low power requirements, which means that you can pack the processing elements very close together. They can likely even be stacked, along with memory. In theory, it would be possible to squeeze a petaflop supercomputer into the space inside an average-sized cryogenic Dewar!

In practice, manufacturing complex RSFQ chips has been a bit tricky. Simple ones, however, are used relatively often, for example as analog-to-digital converters in radio telescopes and some very high-end radar systems.

There have been suggestions to miniaturize this stuff using tiny cryocoolers based on stacked Peltier elements and good insulation, but I think the temperatures required are just too low.

At the end of the day, the NSA or their ilk funding research into this stuff might be a good thing! It sounds to me like this is a great technology that just needs a few billion dollars of research funding to become practical for commercial use.

Comment Re:Um excuse me ... (Score 1) 543

But, I think the crowd always saying use vi for this or that are the kinda people whose job is more about optimizing complex algorithms than it is about writing lots of business logic.

My point still stands: I was working on 3D games at the time! I would have killed for the concurrency visualiser in Visual Studio, because the engine was multi-threaded. Similarly, live edit-and-continue debugging has its uses when twiddling around with something like a tree-rebalancing algorithm.
