Comment Re:It'd better happen quick then (Score 1) 311

The primary reason most laptop vendors don't offer a hard drive in place of the optical drive is power consumption. Optical drives draw basically nothing while sitting idle, and since most optical drives today get used about as much as floppy drives were used in the mid-1990s (i.e., virtually never, only to install software, and even that is going away with Internet delivery of software), vendors can spec the rest of the laptop around a lower power draw. That lets them claim longer battery life, use a smaller fan to cool the laptop and a smaller brick to power it, and lets you actually set the laptop on your lap without scorching it and requiring a trip to the surgery suite for skin grafts. The large desktop-replacement laptops from Dell and HP (amongst others) actually do have space for multiple hard drives, but those are more portable desktops than laptops -- you aren't going to unplug one of those and walk across the lab with it to plug into a different network to sniff for a rogue machine that's brought your installation to a crawl. They sit on a desk and stay there, with short excursions to the car and home perhaps, because they are large, bulky, hot, will last maybe 20 minutes on battery power, and have power bricks the size of concrete blocks.

Given that, if you need a lot of space in a single 2.5" form-factor package to fit into, say, a current Apple MacBook Pro (where Apple very tightly optimizes power consumption, and where the OS really doesn't deal well with removing the optical drive and putting in a second hard drive using a third-party bracket; it won't go to sleep correctly much of the time), the hybrid approach makes a lot of sense. The primary issue with the prior generation of the Momentus XT in that application was that a) it topped out at 500GB, and b) it consumed more power than a 750GB 7200 RPM Western Digital Black drive. The current generation solves the capacity problem, but I'll need to take a close look at power consumption, because power budgets in many modern slim (i.e., actually portable) laptops are quite tight.

Comment Hybrid can actually be sometimes faster (Score 1) 311

The core problem with SSD's is write speed on workloads that have a large number of small updates. My testing on the older 500GB Momentus XT showed that in general it had better write speed on, e.g., a Fedora install than the 80GB Intel SSD that I benchmarked it against (same generation of product here, about a year ago), due to the large number of small updates that the non-SSD-aware EXT3/4 filesystems do during the course of installing oodles of RPM's. Because the Momentus only caches *read* requests in the SSD (write requests flow right through it, other than invalidating anything in its internal cache that is being overwritten), writes went through at full 7200 RPM 2.5" hard drive speed. In general, when I benchmarked writes on similar-generation 7200 RPM 2.5" hard drives and SSD's, the hard drives ended up faster for virtually all real-world workloads, so the end result of my benchmarking was that on real-world workloads the hybrid drive was faster at reads than a hard drive (primarily due to SSD-cached filesystem metadata) and faster at writes than an SSD.
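
If you want to reproduce that kind of small-update test yourself, here's a rough sketch of the workload in Python -- not the exact benchmark I ran back then, and the path and sizes are placeholders you'd point at the drive under test. It just hammers a scratch file with small synced writes, which is roughly what an RPM-heavy install looks like to the disk.

```python
#!/usr/bin/env python3
"""Rough small-write workload sketch: many small random writes + fsync.

This is only an illustration of the kind of workload described above
(lots of small, metadata-sized updates), NOT the exact test I ran against
the Momentus XT and the Intel SSD. TARGET, FILE_SIZE, and WRITE_COUNT are
placeholders; Unix-only (uses os.pwrite)."""
import os
import random
import time

TARGET = "/tmp/small-write-test.bin"   # put this on the drive under test
FILE_SIZE = 256 * 1024 * 1024          # 256 MB scratch file
WRITE_SIZE = 4096                      # 4 KB writes, metadata-sized
WRITE_COUNT = 2000

def main():
    # Pre-create the scratch file so we're overwriting, not extending.
    with open(TARGET, "wb") as f:
        f.truncate(FILE_SIZE)

    buf = os.urandom(WRITE_SIZE)
    fd = os.open(TARGET, os.O_WRONLY)
    try:
        start = time.monotonic()
        for _ in range(WRITE_COUNT):
            # Random, block-aligned offset somewhere in the file.
            offset = random.randrange(FILE_SIZE // WRITE_SIZE) * WRITE_SIZE
            os.pwrite(fd, buf, offset)
            os.fsync(fd)   # flush the OS page cache so the device sees it
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(TARGET)

    print(f"{WRITE_COUNT} x {WRITE_SIZE} B synced writes in {elapsed:.2f} s "
          f"({WRITE_COUNT / elapsed:.0f} writes/s)")

if __name__ == "__main__":
    main()
```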

Please note that I have *not* tested the current generation of SSD's and Momentus XT. It's just baffling to me that the Momentus XT never seemed to get any real traction in the marketplace, given the performance advantages of the approach for many real-world tasks.

Comment Re:teachers make the difference (Score 1) 292

Wow, great job reciting Fox News talking points. Too bad they have nothing to do with reality. I know what my local school districts' pay scale is. I know what I get paid as a top software engineer. I know what my benefits are, and I know what my local school districts' benefits are. Here are some facts:

* I get free health, dental, life, and disability insurance as a software engineer. Teachers in the local district pay 100% of the cost of their health insurance.
* The top pay scale for a teacher with 20 years' experience and an advanced degree in my district is less than half of my salary as a software engineer.
* Tenure rights for public school teachers are based on Constitutional due process, and as long as due process is followed, any teacher can be fired for any *valid* reason (i.e., not just because the principal doesn't like gays or Mormons). Any principal who says he has a problem getting rid of an incompetent teacher is either himself incompetent or is lying to you; there is due process to follow, but in every state of the nation an incompetent teacher can be fired regardless of tenure.
* Tenure rights don't have anything to do with layoffs. 40% of teachers in some of our local districts got layoff notices this year. A large percentage of those teachers had tenure.
* I will receive more money from Social Security when I retire (due to maxing out the contribution limit each year) than the teachers in the top pay scale at my local district will receive from CalPERS. And because of the double-dipping penalty in the Social Security formula, they'll never make more in combined pension and Social Security than I get from Social Security alone when I retire.

Really, with a disrespectful and ignorant attitude being the norm, why *would* I want to teach? So people like you could spit on me for doing a job that's ten times harder than software engineering? Been there, done that. No thanks.

Comment Re:teachers make the difference (Score 5, Informative) 292

I am wondering what in the world you are talking about. During the three years I was teaching, a) my highest salary was the munificent sum of $21,800 per year (roughly $40K/year in today's dollars), b) I paid 100% of my health insurance costs (NO district subsidy of the cost), and c) the retirement benefit was 40% of my ginormous salary if I managed to survive 30 years without stroking out, being knifed or shot by one of my students, or being thrown under the bus by a school administrator upset that I cared about whether my students learned or not (and note that I did NOT pay into Social Security, and if I had managed to earn Social Security via some other job, there's a "double dipper" penalty in the SS formula that would take most of that away from me). In the years since I switched to doing software engineering rather than teaching mathematics, I've sometimes worked 60+ hour weeks and multiple all-nighters, but I have never worked anywhere near as hard as I worked as a teacher, and I get paid more than three times as much money as a teacher does. If you paid me the same six-figure salary I make as a senior-level engineer, I still wouldn't go back, because the job is thankless, never-ending, and utterly exhausting both physically and intellectually if you're doing it right. My hat is off to those teachers who stay on the job and do it well, year after year, because the fools who criticize such teachers have not a clue.

BTW, once you get above 35 students in a classroom, it becomes simply impossible to manage the class in a way conducive to learning. Above 35 students, learning starts dropping off rapidly; past 40 it's just baby-sitting and make-work. Teachers know this the hard way. The fact that politicians and parents talk about 40+ student classrooms as if that were some reasonable solution to the cost of running public schools tells me that either a) they don't care about education and just want free babysitting to keep kids off the streets, or b) they're clueless cretins who need to be drummed about the head with a clue stick. That is all.

Comment Re:Want videogame studios? (Score 1) 292

Perhaps Media Molecule should think about hiring some of the 50%+ of UK Computer Science graduates who cannot find a job in the field? When I see statistics that say that 70% of Computer Science graduates are not working in the field five years later, I call balderdash on the notion of a shortage of software engineers in the UK. If Media Molecule truly believes that 50%+ of UK Computer Science graduates are unqualified to write software, it sounds to me as if their beef is with the universities that credential people not worthy of said credential, not with anything happening at the primary school level.

Comment Re:Superheros are trained young (Score 1) 292

I saw my first computer at age 17. I've been making a living writing software or doing other related things for over 20 years now, and while I'm no Linus Torvalds, I still don't have any problem finding a job when I need one and making significant contributions everywhere I go. What differentiates those who will be good at writing software from those who never will be has nothing to do with how young you are when you encounter computers, and everything to do with your ability to think in a logical and straightforward manner. I would much rather see our schools teach thinking skills than computer skills. Thinking skills are useful for other things (say, figuring out which politicians are lying to you so that you can vote for the one *not* lying to you), while skill at writing computer programs is useful only for a small set of problems. I don't write algorithms to go grocery shopping or change the cat box. Just sayin'.

Comment Re:It's not just the paren, it's the order (Score 2) 425

One thing I'll note is that MIT MACLisp had a fully functional compiler that compiled all the way down to machine code before most of Slashdot's readers were born. The original MIT Multics Emacs was written in MACLisp. I looked at the ALM (the Multics assembly language) that the compilation process produced for a few functions, just to see how well it compiled, and it was pretty close to the output I saw from the PL/I compiler, which had an excellent optimizer for that era. There's nothing inherent about Lisp that makes it impossible to compile and optimize; Bernie Greenberg and David A. Moon certainly proved that.

That said, I'm of the opinion that the age of extension languages embedded into applications is over. The extension-language paradigm explicitly limits the extension language to manipulating a single application, when what we have open on our modern desktop is dozens of applications, and we'd like to create something new from all of them. What is needed instead is a universal interface, exported by applications, for scripting them. Think Windows PowerShell rather than Emacs Lisp -- the shell lives outside the application and calls a defined application API to perform actions. SOAP/XML-RPC/REST are far more important to the future of scriptability than some archaic concept from the 1960s and 1970s; if I implement one of those interfaces to the core functionality of my application, I can write my extensions in *anything* -- Perl, Java, Python, you name it. The problem, of course, is that SOAP/XML-RPC/REST are also horrendously inefficient; surely we can do better than marshalling and demarshalling everything to/from XML or JSON. Still, I present this as a conceptual thought -- though I might point out that the Amiga *almost* managed something similar with ARexx (theoretically it was possible to use the "ARexx port" to control Amiga applications from other scripting languages, though it was rarely done).
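
To make that universal-interface idea concrete, here's a toy sketch in Python using nothing but the standard library's XML-RPC support. The "application" and its two exported functions (open_document, word_count) are invented for the example; the only point is that once the core functionality sits behind a language-neutral RPC interface, the "extension" can live in any language.

```python
#!/usr/bin/env python3
"""Toy sketch of an application exporting a scripting API over XML-RPC.

The function names (open_document, word_count) are invented for this
example; no real application ships this API. The point is only that once
the core functionality is exposed through a language-neutral RPC
interface, the 'extension language' can be anything with an XML-RPC
client."""
from xmlrpc.server import SimpleXMLRPCServer

DOCUMENTS = {}   # stand-in for the application's real internal state

def open_document(name, text):
    """Pretend to open a document inside the application."""
    DOCUMENTS[name] = text
    return True

def word_count(name):
    """Something an extension might want to ask the application."""
    return len(DOCUMENTS.get(name, "").split())

def main():
    server = SimpleXMLRPCServer(("127.0.0.1", 8765), allow_none=True)
    server.register_function(open_document)
    server.register_function(word_count)
    print("toy app scripting interface on http://127.0.0.1:8765/")
    server.serve_forever()

if __name__ == "__main__":
    main()
```

And the "extension", which could just as easily be written in Perl or Java:

```python
# The 'extension', talking to the toy app above; any language with an
# XML-RPC client would look much the same.
from xmlrpc.client import ServerProxy

app = ServerProxy("http://127.0.0.1:8765/")
app.open_document("notes", "the quick brown fox")
print(app.word_count("notes"))   # -> 4
```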

Comment Re:Do we need network transparency? (Score 1) 145

One thing I'll point out is that RDP (using the current Windows clients and servers) is extremely efficient compared to "network-transparent" X. When I use Wireshark to look at what's on the wire, opening a Firefox window on Windows and displaying it to my desktop uses roughly the same bandwidth as X's "network transparent" windowing, but it completes much quicker because X is latency-bound -- the X client issues multiple requests to the X server and then *waits for the response* before continuing on. Furthermore, RDP is transferring ONLY THE CHANGED BYTES, *not* the whole screen, so the notion that RDP transfers the entire screen every time the screen updates on Windows is just plain nonsense. Meanwhile, X is transferring more than just the changed bytes, and only doing so after a series of *synchronous* commands. The net result is that the 500ms total turnaround time between my house and my work ends up very painful with "X", while it is virtually unnoticeable with RDP.

From a theoretical point of view, it all makes sense. There is a theoretical minimum number of bytes which must be sent to update a window from its old state to its new state, and that minimum is the same whether the detection of which bytes need sending is done via screen scraping or via the application directly telling the graphics library "these are the bytes that have changed". Recent revisions of RDP have gotten *very* efficient at approaching this theoretical minimum. Standard "X" doesn't even try -- if an application draws a new graphic and tells "X" to display it, X sends the entire new graphic to the remote end, *even if only a few pixels have changed*. (And yes, I know there is such a thing as FreeNX etc., but those are add-ons that are not part of X proper, attempting to work around these performance limits of X.)

I guess what I'm saying is that if Wayland chooses to use an RDP-like screen scraping protocol for remote display rather than doing "network-transparent" windows like X, there's no theoretical reason why it cannot be efficient. Pixels are pixels, in the end, and the minimum number of pixels needed to refresh a window are identical whether the pixels are being derived via screen scraping, via screen scraping with hints based on GUI library calls (what RDP does), or via the application telling the display protocol "these are the pixels that have changed". The only difference is that the last requires every application to be written to efficiently tell the display protocol "these are the pixels that have changed" -- and we already know that this isn't so for many major "X" applications, which are far from efficient in that regard.
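
For what it's worth, here's a little sketch of that dirty-rectangle idea in Python -- my illustration of the "minimum bytes" argument, not RDP's or Wayland's actual wire format: given the old and new contents of a window, find the bounding box of what actually changed and ship only that region plus its coordinates.

```python
"""Sketch of the dirty-rectangle idea discussed above: given the old and
new frame of a window, compute the bounding box of the pixels that
actually changed, so only that region (plus its coordinates) needs to
cross the wire. Purely illustrative; no real protocol works exactly like
this."""

def dirty_rect(old, new):
    """old/new: equal-sized 2D lists of pixel values.
    Returns (x, y, w, h) of the changed region, or None if identical."""
    rows = len(old)
    cols = len(old[0]) if rows else 0
    min_x = min_y = None
    max_x = max_y = -1
    for y in range(rows):
        for x in range(cols):
            if old[y][x] != new[y][x]:
                min_x = x if min_x is None else min(min_x, x)
                min_y = y if min_y is None else min(min_y, y)
                max_x = max(max_x, x)
                max_y = max(max_y, y)
    if min_x is None:
        return None                      # nothing changed, send nothing
    return (min_x, min_y, max_x - min_x + 1, max_y - min_y + 1)

def update_payload(old, new):
    """The near-minimal update: coordinates plus only the changed pixels."""
    rect = dirty_rect(old, new)
    if rect is None:
        return None
    x, y, w, h = rect
    pixels = [row[x:x + w] for row in new[y:y + h]]
    return {"rect": rect, "pixels": pixels}

if __name__ == "__main__":
    old = [[0] * 8 for _ in range(8)]
    new = [row[:] for row in old]
    new[3][4] = new[3][5] = 255          # a tiny cursor blink, say
    print(update_payload(old, new))      # only a 2x1 region gets sent
```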

Comment Re:As a Mac user... (Score 1) 95

Do note that my development platform is Fedora 15 / Gnome 3 running KVM VM's of the various target distributions for the product, so I'm clearly familiar enough with it to use it every day. But I suppose the answer is yes and no, because I stopped my investigations into replacing ESXi with KVM or Xen when I ran into a couple of show-stopper issues. The first was KVM stability: there is a feature we needed that KVM supposedly supports, but the kernel regularly panics if we try to use it. I tried at least three different distributions -- Fedora 15, RHEL6, and the latest Ubuntu -- and ran into the same problem with each of them. The second was distribution Xen support. The only distribution that had a version of Xen new enough to support the functionality we needed was OpenSUSE 11.3, hardly what I would call a stable distribution, and then OpenSUSE 11.4 broke Xen. Any other distribution would have required that we build a new DOM0 kernel and update the Xen runtime utilities and QEMU in DOM0 (note that Xen uses QEMU for its HVM virtualization), *PLUS* write an entirely new virtualization management infrastructure, since libvirtd doesn't support the functionality of Xen that we needed.

Meanwhile ESXi 4.1 Just Works(tm), an attribute that I appreciate more and more as I get older (which is why this is being typed on a MacBook Pro, the only laptop that doesn't annoy me with a palm-surfing trackpad, clunky keyboard, or ridiculously short battery life). So, let's see, where do I want to put several man-months of my time -- into creating actual product that can be sold, or into futzing with Xen? In the end it's a no-brainer: putting the man-months into the product results in a more functional and faster product, whereas putting the man-months into Xen just to get away from ESXi yields a cheaper product, not a better one -- and price isn't our selling point, functionality and performance are.

Comment Re:As a Mac user... (Score 1) 95

A new iSCSI volume for each VM is certainly one approach, and actually that's sort of what we're doing with the ESXi VM's -- they have a boot disk on VMFS on an iSCSI volume, and each VM then mounts an iSCSI volume off the SAN for its actual data. And you are certainly correct that it is *technically* feasible to do it with KVM; I have in fact done it -- for a one-off prototype. The pain I endured doing that basically told me that there was no way in BLEEP that this was a supportable solution deployed in the field at customer sites that lack dedicated IT resources.

The ESX console, BTW, isn't necessary for a lot of things if you enable ssh on your ESXi hosts. vim-cmd is your friend. With an appropriate oem.tgz you can even set up your keys so you can script it from outside (see the sketch below). The only time I fire up the VSphere GUI in a VirtualBox Windows VM on my MacBook Pro is when I need to see the actual console (as vs. using rdesktop to a Windows VM or VNC to a Linux VM).
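
For example, here's roughly what scripting it from outside looks like once the keys are in place -- a sketch with a placeholder hostname, using the usual vmsvc/getallvms, vmsvc/power.getstate, and vmsvc/power.on subcommands (double-check those against your ESXi version):

```python
#!/usr/bin/env python3
"""Sketch of scripting ESXi 'from outside' over ssh with vim-cmd, as
described above. Assumes ssh has been enabled on the host and key-based
login set up (e.g. via an oem.tgz tweak). ESXI_HOST is a placeholder, and
the exact vim-cmd output format may vary between ESXi versions."""
import subprocess

ESXI_HOST = "root@esxi-host.example.com"   # placeholder

def vim_cmd(*args):
    """Run a vim-cmd subcommand on the ESXi host and return its stdout."""
    result = subprocess.run(
        ["ssh", ESXI_HOST, "vim-cmd", *args],
        capture_output=True, text=True, check=True)
    return result.stdout

def list_vms():
    # First line of vmsvc/getallvms is a header (Vmid Name File ...); skip it.
    lines = vim_cmd("vmsvc/getallvms").splitlines()[1:]
    return [line.split()[0] for line in lines if line.strip()]

def power_on_if_off(vmid):
    state = vim_cmd("vmsvc/power.getstate", vmid)
    if "Powered off" in state:
        vim_cmd("vmsvc/power.on", vmid)
        return True
    return False

if __name__ == "__main__":
    for vmid in list_vms():
        started = power_on_if_off(vmid)
        print(f"VM {vmid}: {'started' if started else 'already running'}")
```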

Comment Re:As a Mac user... (Score 1) 95

Thing is, neither Xen nor KVM/QEMU is as capable as VMware at the things VMware is good at. For example, it's typical in the VMware world to create a VMFS volume on an EMC block storage server and use it for virtual machine migration between physical hosts. This works because VMFS is a clustered file system with some unique attributes that make it good for hosting virtual disk files (it's extent-based, for example, so it keeps the extents of a virtual disk contiguous, meaning that the elevator algorithm for disk scheduling inside your VM actually works and you get close to native disk performance). Migrating a virtual machine is literally as easy as stopping it on host A and starting it on host B. I can't do anything of the sort with Xen or KVM/QEMU. Then there's VCenter, which provides a central GUI to manage an entire network's worth of virtual machines.

Don't get me wrong, I despise VSphere. I curse it every time I use it, and I've attempted a Xen or KVM/QEMU migration multiple times to get away from it. I had hoped that the release of Red Hat Enterprise Linux 5 with its fairly up-to-date KVM support, GFS cluster filesystem support, etc., would allow me to replace VSphere in our shop. Unfortunately, to get adequate disk performance I ended up having to create LVM volumes to use as VM raw disks, because qcow2 on top of any currently-existing Linux filesystem is a disaster. And once you're in the realm of LVM volumes, you're beyond any of the management tools available for managing KVM -- not to mention that the way Linux volume groups currently work, you can't share the same volume group between multiple hosts because Things Go Badly, meaning you can't do the trick that VMware uses for rapid migration: simply turning off the VM on system A and turning it back on, on system B. (There's some state storage required to do it seamlessly, but again, that's all handled by the shared storage.)
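
For the record, the "LVM volume as a raw disk" setup I mean looks roughly like the following libvirt-python sketch. The VM name and the /dev/vg_guests/... path are placeholders, the logical volume would already have been created (e.g. with lvcreate), and this is an illustration of the configuration rather than the exact tooling I used:

```python
#!/usr/bin/env python3
"""Sketch of attaching an LVM logical volume to a KVM guest as a raw
virtio disk via libvirt-python. VM_NAME and LV_PATH are placeholders;
the LV must already exist, and the domain must already be defined.
Illustrative only."""
import libvirt

VM_NAME = "vm1"                          # placeholder domain name
LV_PATH = "/dev/vg_guests/vm1-data"      # placeholder LVM logical volume

DISK_XML = f"""
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='{LV_PATH}'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

def main():
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(VM_NAME)
        # Attach to the stored definition so it persists across restarts.
        dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        print(f"attached {LV_PATH} to {VM_NAME} as vdb (raw, cache=none)")
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```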

So yes, I know Linux (I mean c'mon, I've been using Slashdot since before it had user ID's!) and how to use Xen and KVM/QEMU. But no, they're competitive with VSphere in the enterprise environment only in a limited set of circumstances, and stating that the only reason to use VSphere is "ignorance" is, itself, ignorance.

Comment Re:Hemos Says: "So Long, and Thanks For All The Fi (Score 1) 1521

Indeed, I suspect many of us old-timers will show up now :). Actually, the only reason my UID is so high is that when the Taco introduced the UID system, I was, like, "Puh-leeze. I don't feel like being sorted that way and it won't solve the troll problem anyhow, just give everybody mod points to mod the trolls out of existence." So I commented as "Anonymous Coward" for a while. But that became annoying so I finally broke down and signed up for an ID. Hard to believe that now ID 627 is considered a low ID :).

Comment Re:Stupid (Score 1) 413

Don't be so sure that the network bandwidth required will be higher than with remote X. Displaying a window refresh is displaying a window refresh, in the end -- the number of bytes required to do so has a fixed minimum (i.e., the difference between the old window contents and the new window contents), and getting close to that minimum doesn't require a "network transparent" protocol; it just requires that *something* capture the fact that a certain area has changed and send only that area's changes to the remote end. RDP does this quite well, at least as efficiently as "X" does, and there's no reason to suppose that Wayland could not do it equally well if Wayland's designers put the proper hooks in for a network transparency layer to intercept change events.

Comment Re:Stupid (Score 1) 413

I've been using "X" since before modern Windows or Mac OS X existed. My first experience with "X" was on a Sun 4 workstation, running olwm on SunOS. One thing has been true for every year since I started using "X": the end user experience is terrible. Awful. No application looks or works like any other application; simple things like, e.g., cut and paste simply don't work the way any sensible end user would expect; and the configuration of the system has been a Deep Dark Dreary Dismal Swamp for the entirety of "X"'s existence. Then when more complete desktop environments like Gnome arrived, they learned altogether the wrong lessons. Gnome 2 reminded me of what Windows 95 would have looked like if the Soviet Union had not fallen and had decided to clone Windows. You could almost hear the poorly-ground gears gnashing along and the clank of poorly fitting mechanical parts. The whole 'gconf' system, for example, was a dreary re-implementation of the Windows registry which did not even bring with it the redeeming value of replacing all those individual system configuration files -- the ones that make it impossible to replicate the equivalent of Windows "restore points" for backing out of failed configuration changes. All Gnome 2 managed to do was add yet another layer of mess to the muddled mess that is the Linux user experience. (I'm still reserving judgement on Gnome 3, which appears to be an attempt at cloning Mac OS X Lion's UI, but behind the scenes it still clanks.)

The only times Unix has *ever* managed to get significant end-user client/desktop share, it did so by completely abandoning "X". Look at Android. Do you see "X" anywhere there? Exactly. Look at Mac OS X. Yes, you can run "X" there, but only via a compatibility layer which is deliberately ugly compared to the native UI. Why did these people abandon "X" and go with something different? Once you can answer that question, then, and only then, are you qualified to say "X is the bestest windowing system, like, EVAH!" Until then, you're just blowing hot air.

Finally, regarding network transparency and "X": Uhm, have you used Microsoft's RDP recently? It makes any network transparency you get from "X" look sad and dated. Even the *sound* from the remote end comes out on your local desktop! And if you want to share files between your local desktop and the remote system you're logged into, just set up file sharing in your RDP session, and presto, it pops out on the other end as a network share automatically when you log in to the remote end. And your desktop resizes itself to your *LOCAL* screen size, instead of being hard-wired to the *REMOTE* screen size like with VNC. And the performance is a quantum leap over the sad pathetic laggy excuse for VNC that is Vino on Gnome. Anybody who claims network transparency as an advantage of "X" hasn't used a modern RDP client to a modern Windows system. What people want is to access their desktop whether they're local or remote. RDP is currently the best way of doing that on the planet, period. Any advantage "X" ever had in this area was in the 20th century. Given that, why bother with all the overhead and mess of "X"? Why not just take some popular GUI library and make that talk directly to hardware (or to a remote display like RDP) via a light abstraction layer? We could call it something novel, like, say, "Mac OS X". And I'm sure it'd sell like hotcakes.

Comment Re:Notability is a consequence of verifiability (Score 1) 533

Utter nonsense. I've had articles that I follow from time to time deleted for "non-notability" even though they were well-sourced and contained verifiable information. If the subject was incomprehensible to some cheeto-munching "editor" in his mommy's basement who decided that an article that had been up for literally years wasn't "notable" because nobody had edited it in years and only a few dozen people had even *looked* at it in that time: zap. End of article. A notability delete tag would get added, and because we're talking about an obscure topic in the history of mathematics of interest only to mathematicians, few of whom bother checking Wikipedia on a regular basis for things like delete tags, it's *gone*. Blam bam thank you ma'am. History deleted. Down the memory hole. Who needs history anyhow? We've got articles about recently dead crack-smoking singers to update, after all!

Wikipedia worked better when it was an actual anarchy as vs. this current hyper-moderated BS. Sure, popular pages got defaced regularly back in those days, but those of us who had info to contribute on actual scholarly topics didn't have to worry about "notability" back in those days, and the popular pages eventually got edited or reverted back into some semblance of order. Now... eh. I'm not going to bother.

Sayyyyy.... why doesn't someone start "RealPedia" which is Wikipedia as it was back before all the nonsense? Hmm...
