Ubuntu

Journal Journal: No, Canonical did not put Ubuntu into a smartphone 14

The Motorola Lapdock accessories run their own Linux-based OS. All Canonical did was swap that out for Ubuntu. In other words, you can't run Android apps while docked - all you can do is share files on the phone, the same as before ...

Kind of ironic that they can only run the Ubuntu touch interface on the non-touch portion of the combo, hmmm?

One more nail in Canonical's coffin, as OEMs won't touch this with a 10-foot pole because if they ever design a dock for their own phones (which they won't - the whole smartphone+dock+larger display combo was obsoleted by tablets), they'll just run Android on the secondary display anyway.

Ubuntu

Journal Journal: When I tried to switch people to Ubuntu ... 2

It seems like an eternity ago, but back when Canonical was sending out batches of promo CDs, I figured that maybe something as silly as a properly printed cardboard CD package and a professionally silk-screened CD might make a difference to the masses, because, people being people, they do tend to judge a book by its cover.

So I handed out my share.

While many people popped the CD into the tray and gave it a spin, ultimately only three made the switch from Windows. One switched to a Mac, the other two to openSUSE (and those two are now also looking for a new distro because openSUSE has become too flaky for them lately).

Simply put, most people would rather pay an extra $50 or so every few years for a computer with something they're familiar with that is mostly backward-compatible. Or they need a specific program that only runs under Windows. Or they are willing to pay the Apple premium to have a computer that runs twice as long as the average consumer box.

Nothing is going to change that. Canonical will never reach its goal of fixing Bug #1 ("Microsoft has a majority market share").

Similarly, Shuttleworth's other goal - Ubuntu having 200 million users by 2015 - is dead. XP will have a larger market share for years after it's EOLed in 2 years.

Given the continued lack of profitability, dwindling market share, and new products that are obsolete before they make a single sale (UbuntuTV, Ubuntu Webbook), the only question I have is how long before Ubuntu is "Kubuntu'd"?

Oh well, Ubuntu's loss is Mint's gain.

Open Source

Journal Journal: Is the GPL running out of steam? 3

The number of projects released under permissive licenses (Apache, MIT, BSD) continues to rise, and the popularity of the GPL continues to drop.

What's clear is that over the last few years, many of the highest profile open source projects have chosen the Apache license, including "cloud computing" platforms such as Hadoop, OpenStack, Cassandra, and CloudFoundry. Node.js, another of-the-moment cloud platform, uses the MIT License. And even the big-name mobile platforms have joined the crowd. Google's Android mobile OS used the Apache license, and just this week, HP announced its schedule for open sourcing Palm's webOS platform under the Apache license.

People are finally figuring out what the folks behind the *BSDs always knew - that code contributions will still make it back, because there's value in not having to support a custom fork all by yourself. Not to mention that it's also better when you get people to give back because they want to, and not because they have to.

Privacy

Journal Journal: Reversing the tables on Canadian CyberSnoop 6

Remember that clown who said that anyone opposed to Canada's proposed new law is siding with pornographers, and that people who have nothing to hide shouldn't worry?

Well, looks like someone is going through the affidavits from his messy divorce, and tweeting the details of this "family values" Member of Parliament.

The guy had an affair with his wife's sister's nanny, then another one with the babysitter, got her pregnant, divorced his wife and stopped paying support (they have 2 kids) ... but hey, he has nothing to hide so he shouldn't be embarrassed, right? After all, he's a fine, Christian Fundie.

http://twitter.com/#!/vikileaks30

Would you trust this guy with warrantless searches?

Facebook

Journal Journal: Unfriend someone, get a bullet to the head. 2

http://www.zdnet.com/blog/facebook/couple-unfriends-woman-on-facebook-father-murders-them/8930
[cue the "duelling banjos" music from Deliverance]

36-year-old Billy Payne Jr. and his girlfriend, 23-year-old Billie Jean Hayworth, recently unfriended 30-year-old Jenelle Potter on Facebook. Jenelle was upset, but not as much as her 60-year-old father, Marvin "Buddy" Potter. He was so angry when he learned about the unfriending that he and 38-year-old Jamie Lynn Curd, who reportedly had romantic feelings for Jenelle, went out and murdered Payne and Hayworth last week. The couple is survived by an eight-month-old baby boy, who was found unharmed in Hayworth's arms.

This is nuts!

GNU is Not Unix

Journal Journal: I see the natives are restless again ...

More noise about how "evil" it is to replace BusyBox with MIT/BSD-licensed code: http://laforge.gnumonks.org/weblog/2012/02/index.html

The fact is that what the SFC was doing - requiring the source for ALL software, not just the modified BusyBox - was illegal (thanks to Barnes & Noble vs. Microsoft for reminding us that you cannot do this sort of "tying").

Besides, we now have so many ways to work around the GPL legally that it's becoming moot. For example - load the original into memory, patch the memory image, then run. Copyright law (on which the GPL depends) only applies to works in a fixed medium - in-memory images don't count - so you don't have to distribute either the patches themselves or the code that applies them. Any part of the license that says you cannot modify the in-memory image is void, as it goes beyond the rights granted to the copyright holder by copyright law.

In other words, the GPL is "defective by design".

Happy Valentine's Day [tt]

Medicine

Journal Journal: Drug warning labels I'd like to see ...

olmesartan medoxomil (Olmetec, BeniCar). Warning: This drug may turn you into a zombie, cause you to sleep 2/3 of your life away (but you won't do anything because after it really kicks in it may also cause short-term memory loss and lowered affect), mood changes, depression, blurred vision, sensitivity to bright light, neck pain, cold in the extremities, etc.

Most people don't know that the government wants consumers to report adverse drug reactions.

US: Start here
Canada: Start here

Ubuntu

Journal Journal: Ubuntu Dead Pool Update 5

"How do you make a million-dollar linux company? Start with $20 million."

Let's see how it's going so far this year.

So far this year, Canonical has stated that it was out of the smartphone biz 2 years after saying that it would have product on shelves (as well as Ubuntu tablets) within a year. On January 8th I predicted that Canonical's latest offering, which was rumoured to be UbuntuTV, would be more of the same - a total bust.

Sure enough, at the same CES show where UbuntuTV was rolled out, Lenovo introduced a much better product - a TV running Android (the latest, ICS), with a multi-touch, motion-sensing remote, a second game remote, a video cam for facial recognition, and a mic for voice and speech recognition - all stuff that UbuntuTV lacks, and, unlike UbuntuTV, in production.

This week, Canonical dropped support for Kubuntu. Not all that earth-shattering, really - it was only one guy being paid to work on it - but the "we're concentrating our resources on other projects" line has more of an air of "we're circling the wagons" retrenchment than anything else. Having dropped regular GNOME users and KDE users, what's left except the badly-misnamed Unity?

Next up on the death watch - Canonical's Ubuntu Cloud and Ubuntu One. Neither service made much impact (or much sense), and with Google getting ready to extend its current cloud offerings to include "do-what-you-want" storage, well, companies with much larger user bases than Ubuntu's, such as Dropbox, are also worried.

What does it really mean?

We've seen this sort of "we're going to focus on our core product" talk (in this case, disUnity) from businesses that are failing, same as businesses abandoning failed products "to the community". As a venture capitalist, Shuttleworth is probably irked that he's wasted 7 years on a project that will never make a profit and is in decline, with a history of failed and dropped projects and no exit strategy. The latest moves are telling - and the story they tell is that the clock is ticking, and he's probably got a deadline after which he'll pull the plug if there's no change.

And to give a distro in decline a valuation of $1 million is probably overly generous. It's probably more like negative $1 million, because of commitments to leases, ongoing losses, termination fees, etc.

Ubuntu's goal was to fix "Bug #1 - Windows dominance in the marketplace." Ubuntu won't even outlive XP at this rate. Since Canonical's financial year is April-to-March, my guess is that the rest of Ubuntu will be thrown to the wolves (oops - "donated to the community") right before XP goes end-of-life on April 8th, 2014.

Operating Systems

Journal Journal: The Rule of Most Specific 9

Treat your users like children, and they'll keep acting like children

I forgot that most distros now install the CPU hog known as Tracker by default, so sure enough, it started indexing my files while I was doing other stuff. Of course, I killed it, then removed it, but still, this got me to thinking ... why do we need desktop search?

The answer, of course, is that we don't take two minutes to teach someone the easiest rule for organizing stuff, whether it's files on your computer or anything else.

The Rule of Most Specific

People tend to go with the defaults. So, when creating a project called Foo, they will stick their code in a Projects/Foo folder, but the documents for that project end up in a Documents/Foo folder, their backups in a Backups/Foo or Archives/Foo folder, their graphics resources in a Pictures/Foo folder, etc.

The Rule of Most Specific says that instead, since all these things are related to a specific project, they should ALL go in Projects/Foo. So, documents related to Foo go in Projects/Foo/Doc, backups in Projects/Foo/Bak, graphics in Projects/Foo/Img, and any files being served locally from the user's public_html directory should be in Projects/Foo/public_html, with a symlink to /home/$USER/public_html/Foo.
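A minimal Python sketch of that layout (assuming a Unix-ish system; the project and subfolder names are just the examples above, and a throwaway temp directory stands in for $HOME):

```python
import tempfile
from pathlib import Path

# A throwaway "home" directory for the demo (in real use, Path.home()).
home = Path(tempfile.mkdtemp())
project = home / "Projects" / "Foo"

# Everything related to project Foo lives under Projects/Foo ...
for sub in ("Doc", "Bak", "Img", "public_html"):
    (project / sub).mkdir(parents=True)

# ... with one symlink so locally served files still appear under
# ~/public_html/Foo, where the web server expects them.
link = home / "public_html" / "Foo"
link.parent.mkdir(parents=True)
link.symlink_to(project / "public_html")

print(sorted(p.name for p in project.iterdir()))
# → ['Bak', 'Doc', 'Img', 'public_html']
```

One place to look, one directory to back up, one symlink for the web server.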

Of course, since designers think users are total idiots who are incapable of learning a few simple rules, they think it's much better to index everything ("let the computer do it for the user") and then force the user to wade through a bunch of search results to find the one thing they're looking for.

The end result is that people aren't given an incentive to organize things a bit, and as a result, their backups are incomplete ("gee, who would have thought that people don't want to go through a dozen different directories all over the place to assemble a backup for a specific project"), and they waste time looking for stuff because their drives are a mess.

In other words, it's the same as in real life when you were a kid - you learned the hard way that it was quicker to put things where they belong the first time than to find them after entropy takes over and you can find everything except what you're looking for. Or your place looks like an episode of "Hoarders."

"Oh, but now I can search for stuff on my drive by tag!!!" So what - now you have the extra work of figuring out a tagging scheme. It's easier to remember "the graphic I used as one of the wallpapers in project Foo" and look in Projects/Foo/Img/Wallpapers, or, when you need to transfer a copy of all the files so you can work on another machine, to just tarball the whole Foo directory and know you got everything.

Use the Rule of Most Specific to put stuff in the most specific place the first time and you'll find things quicker with just a file browser than you could using any desktop search application. Or prove the "designers" right and keep treating your file system like a kid's messy room or the junk drawer in the kitchen.

Programming

Journal Journal: Comparing javascript compression methods. 2

(From the bad-Geico-commercial agency:) Wanna save 80% or more on your JavaScript file sizes?

When you have 30 or more JavaScript includes, and a dozen or more CSS includes, that's more than 40 separate hits to the server, each with its own file seek, etc. - and the browser has to stop whatever else it's doing to parse each one.

So I decided to exercise my perl-fu.

Step 1: Write a Perl script to read the list of includes from the main index file, and concatenate them into one big js.$VERSION.js and js.$VERSION.css

Step 2: Strip out all the comments and empty lines, and remove all extra whitespace.

These two steps alone will give some speedup to your site loading.
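In Python, steps 1 and 2 might look something like this (toy stand-ins for the include files; a real minifier also has to avoid stripping comment-like text inside string literals):

```python
import re

# Toy stand-ins for the files listed in the index page (hypothetical names).
includes = {
    "a.js": "// helper\nfunction add(a, b) {\n    return a + b;   \n}\n",
    "b.js": "/* main */\nvar total = add(2, 3);\n\n",
}

# Step 1: concatenate in include order.
merged = "".join(includes[name] for name in ("a.js", "b.js"))

# Step 2: strip /* */ and // comments, blank lines, and extra whitespace.
merged = re.sub(r"/\*.*?\*/", "", merged, flags=re.S)
merged = re.sub(r"//[^\n]*", "", merged)
merged = "\n".join(line.strip() for line in merged.splitlines() if line.strip())

print(merged)
```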

There are plenty of "minifiers" out there that will combine step 2 with changing any local variables from thisLongVariableName, anotherOne, andAnotherOne to a, b, c. At the end of the process, you have a file that is probably, depending on your use of whitespace and comments, between 40 and 70% smaller. This will definitely load faster, and the use of smaller variable names inside functions also gives an extra little (mostly unmeasurable but still there) boost, as well as reducing the memory footprint of your page or application.

Step 3: Global function substitution is a bit more complicated. Functions aren't that hard to identify in source code - you can make a list just by splitting the source on the keyword "function" and keeping the next word (just remember to throw away the "zeroth" one - it's not a function).

Then go through the source and replace every function name with a 2- or 3-letter identifier. I started at A1 (A0 is reserved for "main()"), and the sequence is pretty obvious: A0 to A9, AA to AZ, Aa to Az, B0 to B9, etc.
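A rough Python sketch of that sequence and the substitution (toy source; real code has to skip strings and comments):

```python
import re
import string
from itertools import product

# The identifier sequence described above: the second character cycles
# through 0-9, A-Z, a-z, and the first character advances A, B, C, ...
second = string.digits + string.ascii_uppercase + string.ascii_lowercase
names = ["".join(p) for p in product(string.ascii_uppercase, second)]
# names[0] is "A0" (reserved for main()), then "A1" ... "A9", "AA" ... "Az", "B0" ...

# Find function names by splitting on the keyword "function"
# (hypothetical toy source).
src = "function foo() { return bar(); } function bar() { return 1; }"
chunks = src.split("function")[1:]            # throw away the zeroth chunk
funcs = [c.split()[0].split("(")[0] for c in chunks]

# Replace each function name with the next short identifier, starting at A1.
for i, f in enumerate(funcs):
    src = re.sub(r"\b%s\b" % re.escape(f), names[i + 1], src)

print(src)
# → function A1() { return A2(); } function A2() { return 1; }
```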

Step 4: Global variable substitution is a bit more complex, because variables can be listed one after another: var a, b, c=10, d=45, e="thank you for the fish". So, you have to do a bit more work, a bit more head-scratching, but you can substitute the variables that weren't substituted locally. Pick up where you left off with the functions (for example, if the last function was Ca, continue with Cb).
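A sketch of pulling the declared names out of a statement like the one above (naive - commas inside string literals are part of the head-scratching and would need real tokenizing):

```python
import re

# A toy declaration like the one above.
stmt = 'var a, b, c=10, d=45, e="thank you for the fish";'

# Drop the leading "var" and trailing ";", split on commas, and keep
# whatever precedes an optional "=".
body = re.sub(r'^\s*var\s+|;\s*$', "", stmt)
names = [part.split("=")[0].strip() for part in body.split(",")]
print(names)
# → ['a', 'b', 'c', 'd', 'e']
```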

What is the net savings from global substitution? In my test case, here's the comparison:

1. Merged source file: 184.4k
2. After removing whitespace and comments: 141.2k
3. After local variable substitution: 114.2k
4. After global variable and function name substitution: 70.5k
5. After gzip: 21.3k

Total saving: 88.4%

With string de-duplication, it could probably surpass 90%.
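A quick check of the stage-by-stage savings, each measured against the merged source:

```python
# File sizes in kilobytes, from the test case above.
sizes = {
    "merged source":                 184.4,
    "whitespace/comments stripped":  141.2,
    "local variables substituted":   114.2,
    "globals/functions substituted":  70.5,
    "gzipped":                        21.3,
}
base = sizes["merged source"]
for stage, kb in sizes.items():
    # Saving relative to the merged (un-minified) file.
    print(f"{stage:31s} {kb:6.1f}k  saved {100 * (1 - kb / base):5.1f}%")
```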

Current bugs - the code for finding the variables sucks, and it can only remove whitespace from CSS files - no variable substitution (that needs an "exclude" list) - so CSS files don't shrink nearly as much.

It's funny.  Laugh.

Journal Journal: Why is politics like hooking up in a club? 6

Clicky: Ain't it the truth? (Stupid Slashdot "designers" - links no longer look like links. Thank $DEITY they don't "design" toilets, or we'd be deeper in s*** than we are now!)

Top 10 answers

1. Someone's going to get screwed.
2. You'll regret it - either the morning after, or some time in the next 4 years ...
3. Lies, lies, lies ...
4. The consequences of a bad decision will haunt you long after they're gone.
5. They always look better in the dark ...
6. "I was drunk" is the only rational explanation for either one.
7. No matter what they promise, if you fall for it, you'll end up paying for it
8. The odds of getting a real winner are pretty much zero
9. You keep wanting to believe that "this time is different", but deep down you know it's just wishful thinking.
10. In the end, you'd have probably been happier just staying at home and watching a movie.

The only thing worse is Soviet Russia, where club hooks up YOU!

"Welcome to the vote, comrade. Here's your secret ballot. Please place it in the voting box."
"Sure, do you have a pen so I can mark it?"
"NYET! Comrade! It is called a secret ballot for a reason!"

Hardware Hacking

Journal Journal: In the future, kernels will be smaller, simpler, & faster 2

Today you can put together a box with 32 gigs of RAM for under $1k. Today's hard drives come with 64 megs of cache built in. Contrast that with the situation 20 years ago, when a meg of RAM was $100, and hard disk caches were 8, 16, or a "whopping" 32k.

The design and implementation of a kernel based on what was available 20 years ago wouldn't be like one made from scratch today. As current trends continue, computers with 32 gigs, 64 gigs, etc., will get cheaper. Similarly, hard disk manufacturers will continue to both increase cache size on spinning platters as well as increase their market penetration for SSDs.

So, what do you throw out when making a future kernel?

First, if you have 32 or 64 (or more) gigs of ram, get rid of swap space - it's simply not going to be needed. It's not even really needed in many situations today. This old box has only 2 gigs of ram, no swap partition, and yet it runs database, ftp, mail and ssh servers, a gui at 3840x1200, and openoffice, gimp, eclipse - all within 1 gig of ram ... so the only virtual memory on a future system will be for mapping addresses, not "disk-based fake ram".

Second, get rid of huge portions of the I/O system that tried to make up for hard drives being dumb. The "elevator algorithm" (whereby you just sweep the head back and forth in large arcs on the disk, and only write data when the head is in the right place, rather than jumping back and forth all the time) is obsolete when you have a half-dozen 4tb drives, each with 128 to 512 megs of cache - and each implementing the same algorithm internally ... or a bunch of SSDs that don't have drive heads. In either case, the future kernel has less work to do, so it will take up fewer clock cycles ...
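For reference, a toy sketch of the elevator ordering (the LOOK variant) that the kernel used to do on the drive's behalf - sweep up through the pending track numbers, then sweep back down for the stragglers:

```python
def elevator_order(head, tracks):
    """One sweep of the LOOK variant of the elevator algorithm:
    service everything at or above the head on the way up,
    then everything below it on the way back down."""
    up = sorted(t for t in tracks if t >= head)
    down = sorted((t for t in tracks if t < head), reverse=True)
    return up + down

print(elevator_order(50, [95, 180, 34, 119, 11, 123, 62, 64]))
# → [62, 64, 95, 119, 123, 180, 34, 11]
```

The point of the passage above is exactly that this bookkeeping now lives in the drive firmware (or is meaningless on an SSD), so a future kernel can skip it.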

Third, the idea of using ALL free memory for buffers and caches will disappear - a relatively small ramdisk will be more efficient, same as it was 20 years ago, when if you over-sized your cache, you saw a performance drop because of time wasted searching the cache first and managing it.

Buffers? That's going to change too. We're finally making the move (January 2011 was the "official" date) to 4k real HD sector sizes (up from 512 bytes), but we need to implement variable-sized sectors on a per-track, per-partition, or per-drive basis. Having a few tracks with sector sizes of 1 to 4 megs each could come in pretty handy for those big videos. Eventually, it will be supported at the storage level directly (the new standard allows for HD sectors greater than 4k), so the drive will be able to handle a 4-meg read as a single chunk, with only one block pointer to update - not splitting it up into 1024 4k disk blocks and managing each one.
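Back-of-the-envelope numbers for how sector size changes the bookkeeping on a single 4-meg read:

```python
def blocks_needed(file_bytes, sector_bytes):
    # Ceiling division: number of sectors the kernel must track for one file.
    return -(-file_bytes // sector_bytes)

video = 4 * 1024 * 1024  # one 4-meg read
for sector in (512, 4096, 4 * 1024 * 1024):
    print(f"{sector:>9}-byte sectors -> {blocks_needed(video, sector):5d} blocks")
```

Going from 512-byte sectors to a 4-meg sector takes the same read from 8192 tracked blocks down to one.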

End result - less drive fragmentation, quicker reads and writes on large sequentially-accessed files (which also makes program loading quicker).

Having what used to be considered an "umpossible" amount of RAM and absurd on-disk caches is going to make future kernels simpler to design, simpler to write, more energy-efficient, and faster. In other words, all of today's kernels - whether Windows, Linux, FreeBSD, or whatever - will be obsolete within 10 years.

Red Hat Software

Journal Journal: Bye-bye OpenSUSE 12.1, Hello Fedora 16 35

Well, since a second upgrade attempt left openSUSE even weirder (dialogs with half the controls not working, Firefox going from crashing every second load to crashing every load, etc.), it was time to nuke it again, but this time replace it with something different.

The question was, what?

It turns out that FreeBSD does not like my video setup (which is too bad, because I had 6 consoles open, compiling a different part of the ports tree in each one, and there was no indication that it was under any sort of load, even though the load average was ~6).

Linux Mint? Tempting, very tempting ... but they're going off in 3 different directions right now.

Good old Slackware? I downloaded the DVD (using Knoppix, since the OS was hosed), then went looking for updates ... apparently, the package browser is now someone else's problem ... and that page says they're not doing it any more, and to come back when they've got their "new improved" whatever ... and slackware.com is down at the moment, so no linky for U!

So, what the heck - go grab Fedora 16 and install it ... then find out, after doing the install and a couple of gigs of new packages and updates, that it hangs on reboot ...

I finally figured out the problem - for some reason it doesn't see my USB keyboard (plugged into my screen's USB hub) and it's waiting for a keyboard to appear ... so I now have one of those rubber roll-up keyboards and a usb2ps2 adapter sitting on top of the box, out of harm's way ... probably one of those "1 in a million" things - like having to unplug the second screen to get the initial install to work, then plug it in, and when it reboots, dual-screen goodness with no fuss, no muss. Using xrandr to dynamically set up the screens and dumping xorg.conf looks like a real winner.

The funny part - I've always found GNOME to be kind of ugly, but that was the old 2.whatever GNOME; the way they've fixed it up is nice. I could get used to it ... (though I can't wait to see how LXDE looks).

the evil part

SELinux. I removed it, and the machine is MUCH faster. So when they say it "only used ~7%", I'm not buying it.

In other good news, my colour laser FINALLY WORKS!!!!!! It was recognized before, but there were no drivers - and this time the Samsung drivers installed with no hassles, so the only thing that still doesn't work is the scanner. I can always scan to USB (or maybe just make a patch cord and plug it right into the computer that way????)

One last speed-up ... no swap file, so there's a lot less time wasted managing fake RAM (and more real RAM available for running programs). It's not quite as fast as my dual-core lappy was, but for an 8-year-old RAM-deficient box (only 2 gigs), it's still got lots of life left in it.

If you looked at my previous post, I tried to load it down, opening eclipse, openoffice, the gimp, playing mp3s in amarok, firefox and opera both open, web server, ftp server, mail and news servers running in the background ... and it still used less than a gig of ram.

Just goes to show that the real bottlenecks for everyday use are mostly self-inflicted "best practices" that aren't so great any more. If you want to try your machine w/o swap, but keep the option of restoring it, just fdisk the drive and change the partition type from swap to anything else - no need to format it, since you won't be mounting it. If it runs okay, then you might want to reformat it and use it as a separate /tmp or whatever, and clean it on every reboot (note - do not do this until after you remove SELinux, or you will be very sorry).

User Journal

Journal Journal: Another "interesting" upgrade attempt 8

Regexes are a PITA to work with, but editing them is more so when you're having a hard time seeing properly - all those (*$^!.*/ tend to look more like comic-book swearing than ever.

So I figured it would just be quicker to write a program in C.

But realloc() kept throwing errors on the 3rd or 4th call, and only for one variable. Did I make a mistake? It happens ... but after wasting more time than I want to admit, I said to heck with it, it's got to be the compiler.

So, I went and upgraded the distro once again, and sure enough, it was the compiler. I have to admit I was taken by surprise when I didn't get that long assertion-failed message instead (and who is the genius who writes assertions so long and complicated that they're going to need an audit just to verify that they actually do what they claim to do???).

So ... all's well that ends well ... except ... lots of programs now just "sort of" work. Firefox crashes on start, then works okay on restart. Close it, restart - crash again immediately. Restart, runs fine. Close, restart, crash again ... Other programs are now missing chunks of functionality, dead areas that don't respond to the mouse, whatever. And this is after treating it like a sick Windows box instead of a Linux box (update, reboot, test, force update, reboot, test).

All this got me thinking - how come the code I wrote today, in C, to parse some files doesn't run faster than the C code I wrote a couple of decades ago on a machine that was 100x slower?

The answer is simple, and disappointing. Past a certain level of complexity of the software stack (OS, libs, compiler), you don't get improved performance. It all gets sucked up by the stack.

20 years from now, are we going to have machines with a terabyte of RAM and 256 cores, running, on average, only as fast as an old 386, because by then we'll have passed the peak and gone into negative-returns territory but can't go back because everything would break even worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another (personally, I think we're there already, but that's another story).
