
Comment Temporary Plan Upgrade (Score 3, Informative) 270

When I was visiting Canada from the US, I actually called Verizon from my car on the way up. The customer service rep was very informative, and after much questioning we agreed to basically upgrade my account to "international" for exactly the time that I was in Canada. It would be prorated to the higher fee for the 4 days, then revert to its original US plan. It was something like $15US/day extra for both voice and data - nothing horrible. I checked my bill afterwards and it went seamlessly. I recommend at least investigating this option.

Comment Re:KDE vs Gnome (Score 1) 175

Personally, I find KDE to be a much more polished, integrated, and comprehensive suite than GNOME.

I agree--and it's why every time I've tried KDE I've abandoned it and gone back to XFCE or Gnome after a few days.

"Ugh, kmail sucks, I'm gonna use Thunderbird... KOffice still blows, gotta set it to open files with (Open/Libre)Office instead. Konqueror? Fuck no, Firefox or Chromium or Opera, anything but that piece of crap. Amarok is so damn slow and bloated, need to find another player, not many QT options, guess I'll use a GTK solution..."

And so on, until I'm barely using any QT apps and almost no apps at all that integrate well with KDE, and all the while KDE seems to be mocking me for not using its integrated apps, most of which I hate.

If you like its default apps, fine. If not, all that work to make a tightly integrated DE and apps is just a bunch of useless bloat and features that only half-work if you don't do things exactly the way the devs want you to. I don't even like any of its competitors that much, and I really want to like KDE because it looks nice and has a few nice features that the others don't, but it's hard to justify using it if you don't run a single k* app.

My KDE experience usually involves a good number of GTK applications, too. For example, my core browser is Google Chrome or Firefox (both GTK), I use Thunderbird for e-mail, and I use LibreOffice exclusively. KDE is not an all-or-nothing decision ... you can (and should) pick applications based on how they work, not whether or not they were developed by the same working group.

Now, that said, much of KDE is under active development, and this is the real deal. It's worth retrying KDE applications every now and then to see how they are doing. For example:

  • rekonq, a Konqueror-like browser built on Webkit, is actually pretty damned usable. Not compatible enough to be an only browser, but adequate for most things. If browsers weren't so central, I'd probably use it a lot more.
  • kmail has made significant integration and feature-set advances in the last three KDE versions. The whole KDE PIM suite has, actually. That team deserves a pat on the back; if I actually used a PIM application (instead of GMail's web interface) I would definitely use it.
  • kopete, KDE's IM client, is great. It surpassed Pidgin in my book a while ago.
  • The koffice ... er, Calligra Suite team has been doing a tremendous job. It's one of the fastest-advancing open-source products that I know of, and each release brings it further into the mainstream. I eagerly await the day they gain the full LibreOffice feature set, as I feel their design choices, UI, and approach are all superior. They just aren't there yet, last I checked.

All of these apps are more or less interchangeable though. You can use them just fine on GNOME. The core KDE experience is (in my opinion) kwin, the KDE Plasma Desktop (and associated Plasma widgets), the Dolphin File Browser, Nepomuk, and the KDE System Settings Suite. These are the core KDE features that one would choose to use. One can use primarily GNOME applications on top of these technologies and still be subscribed to the KDE user experience.

Comment Re:KDE vs Gnome (Score 4, Insightful) 175

This may devolve into a vi/emacs debate, but I'll ask anyways. I'm running Ubuntu, and am quite happy with Gnome (having quickly borfed Unity). What could KDE offer that might convince me to try it out?

Well, we can argue better this and more refined that until we're blue in the face. Bottom line is that KDE offers a wholly-different perspective on what a Linux desktop user interface can do. Minimally, it's worth taking a look at, if only to broaden your horizons and solidify your preferences.

Personally, I find KDE to be a much more polished, integrated, and comprehensive suite than GNOME. It's snappy, sexy, and highly-configurable. In terms of appearance, KDE definitely has more of a stylistic Mac OSX-like approach and graphic set, though that's also highly-configurable. In fact, KDE's UI is so versatile that I could use KDE to recreate a default GNOME desktop without much effort. The applications tend to favor configurability over simplicity (which seems to be the opposite of much of GNOME's design choices). I can fine-tune most KDE applications to my personal, picky standards. Due to KDE4's kwin window manager rewrite, compositing (3D) effects are built into KDE's core, and are much more seamless than GNOME2's (although GNOME3 has followed suit).

Now, KDE has quite an advanced suite of applications that they bring to the table. However, keep in mind that almost every KDE application will run just fine under GNOME, and vice-versa. You can try almost any KDE application within GNOME should you find one you like (for example, I definitely prefer KDE's Konsole terminal over GNOME's gnome-terminal). The opposite is also true - any GNOME application will work just fine under KDE. You don't have to choose one over the other, though each is designed around and better-integrated with its native environment. Another winner is KDE's Amarok, which has long held my personal favor as the best available audio player anywhere.

That said, I highly recommend giving it a shot. If you're using Ubuntu, you can try it with no risk by just installing the kubuntu-desktop and kde-full packages and choosing KDE as your window manager at login. It's worth a few days' trial to find out what you truly like.

Comment Re:Standard modus operandi (Score 1) 254

PS - If not obvious, this is all my own armchair analysis of the situation and it's probably way off base.

Not at all; I totally agree, and the problem is systemic and probably not a bad thing. It's a simple consequence of choice and variety. Different people and companies will take different approaches to solve problems, while several developers may want to solve the same problem for all platforms. Everyone has to meet in the middle.

There are several approaches to the problem. Some involve comprehensive frameworks (QT, Java/Swing, Java/SWT, .NET, Mono, and tons of others), there are attempts to give one language the capabilities of another via bindings, and there are approaches that obsolete underlying systems with standards (like web browsers). There's no one-size-fits-all, and there really shouldn't be. What's important is that no one technology gets supported to the point where developers lose interest in the others (e.g., IE6).

Frameworks are nice because you can learn to program for the framework, and then your code is more or less consistent across platforms. The problem is that then you lock your experience into that framework, and so your general development capability is constrained by that of the framework. But I suppose that's true for all tools. Metaphorically, you take having a shovel for granted when you garden, but there's an entire history of generations of people focused on inventing, refining, and developing that shovel. There's nothing wrong with that - it's a simple distribution of focus and responsibility. We build on each other, and programming is no different; it just starts from near-scratch more often and progresses more rapidly.

Comment Re:You defeated your own argument (Score 1) 254

So your argument is that Microsoft intentionally periodically obsoletes languages in order to make money? Am I reading this correctly?

You do understand that:

Pretty much every commercial MS developer already has an MSDN license, which (minimally) gives them access to the latest development languages, SDKs, and tools.

You do understand that:

MSDN licenses cost a lot of money. Were it not for the constant churn, developers wouldn't need MSDN subscriptions, and could save a lot of money.

Companies pay for MSDN licenses for a lot more than the latest language. They provide the latest SDKs, documentation, tons of tools, exemplar operating systems, future betas and products (to test and build against), and, of course, a gigantic repository of forum knowledge and a means of engaging Microsoft. Obviously the MSDN / developer model that Microsoft has established is for profit and cash. No shit.

The point I was making is that they don't have to phase out .NET to keep that stream going. Sure, you may have the latest Visual Studio, but without the rest you're still a second-rate developer with poor access to resources. Almost every single Microsoft development company maintains some form of MSDN subscription; Microsoft choosing one language or another will not affect that. The OP alleged that they will abandon .NET solely to make money off licenses, while I believe the current ecosystem quite thoroughly demolishes that idea.

Comment Re:Standard modus operandi (Score 5, Insightful) 254

Microsoft makes a lot of money from selling its development tools, documentation, etc... to its developer base. Microsoft simply runs the whole show. They are in full control, and call all the shots. And they understand perfectly well that if they keep the same technology platform in place, over time, they lose a good chunk of their revenue stream. That's why they have to obsolete their technology platforms, time and time again. They need revenue. It makes perfect sense. If you are a Microsoft Windows developer, one of your primary job functions is to generate revenue to Microsoft. Perhaps not from you, directly; maybe from your company. Whoever pays the bills for Visual Studio, MSDN, and all the other development tools. Maybe it's not you, personally, but it's going to be someone, that's for sure.

So your argument is that Microsoft intentionally periodically obsoletes languages in order to make money? Am I reading this correctly?

You do understand that:

  • Pretty much every commercial MS developer already has an MSDN license, which (minimally) gives them access to the latest development languages, SDKs, and tools.
  • Developing a new language that is at least as compelling as a current one is an expensive and non-trivial feat.
  • Obsoleting a language costs Microsoft a ton of money in rewriting their own software to create new APIs and then use them.
  • Each API and system rewrite introduces new bugs, which cost Microsoft even more money to identify, patch, and be held accountable for.
  • One of the oldest MS-supported development languages, C++, has not been obsoleted.
  • One of the major issues with MS development is the legacy APIs that bias towards C++ functionality.

I think your theory has some holes. Now, Microsoft has definitely obsoleted languages - Visual Basic for one (and good riddance) - but they did that because the language had shortcomings. I'd detail them but we have a nice article that already does that. The .NET framework and language stack, C# in particular, is on the same general level as Java: it is a language that more or less suits the needs of every platform developer. Why the hell would they want to obsolete that?

No, languages aren't the issue with MS development, nor are they the theme of the article; frameworks are. A perfectly good language can be horrendous to use if it is unable to properly interact with its host environment to accomplish what it needs to accomplish. In this case (once again FTFA) C++ could interact worlds better with Windows than .NET could, and so .NET use suffered. This was an implementation failure on Microsoft's part. The article stipulates that Windows 8 intends to bring .NET back on-par with C++ as a development language, which (if true) means that it will be stronger than ever.

It's also worth mentioning that in terms of accumulated skills and experience, learning a new language is trivial compared to truly learning a new framework. How you interact with the system and cause it to give you the resources and services that you want in the manner in which you want them is the heart of all modern systems programming, regardless of language. If Microsoft emphasizes .NET in their APIs, then .NET will be a viable Windows development platform; if not, then who knows? None of that reflects on the language itself, but rather on its appeal over other languages.

Now, eventually every language will be obsoleted ... probably? I suppose we haven't been through that many generations of languages to know for sure, but that seems to be the case so far. There are various reasons languages die ... they suck, better ones come out, nobody likes them, no frameworks support them, or their target developer group gives up on them. .NET's main backer is currently and will likely always be Microsoft, and its most viable candidate platform will likely always be Windows. Supporting or not supporting .NET as a first-class Windows development language (through framework support) is a serious factor in its standing as a desirable developer language. However, the article makes the point that Microsoft's .NET framework capabilities are increasing substantially, not decreasing. This speaks positively for its future.

Disclaimer: I am a UNIX developer as well. That said, the article is well-written and I would bet on its speculation being correct. In that case, MS is making the right moves, and that includes their new frameworks and continued support for .NET.

Comment Re:Netwinder anyone ... 1999? (Score 2) 94

Anyone remember it?

Remember during the days of kernel 2.0 or 2.2, a decade ago, you could buy a Netwinder appliance that came with Redhat Linux? Corel even shipped WordPerfect for Unix on it, and I remember reading a commentator who used it on LinuxMagazine.

ARM has been supported in Linux for a very long time. This story is pure FUD.

I know this is Slashdot and reading the article is sacrilegious, but you could at least read the summary!

"Well, blogger Brian Proffitt explains just how messy the state of Linux support for ARM is right now, partially as a result of mutually conflicting kernel hacks from ARM manufacturers who just wanted to get their products out the door and weren't necessarily abiding by the GPL obligations to release code. Things are improving now, not least because Linus is taking a personal hand in things, but sorting the mess out will take time."

Nobody's challenging that ARM is supported by Linux. This article is about how Linux's ARM support is poorly-coded and internally inconsistent. The problem is that the ARM code is neither scalable nor maintainable. This is critical as both the Linux kernel and the number of ARM systems supported continue to grow, which they almost certainly will.

It is pretty foolish to blow a Linux kernel issue off as "FUD" when the maintainer of said kernel himself is taking action to address it.

Comment Re:This is why you use encryption programs... (Score 2) 128

I don't know where your delusion of 70 "typable" keys comes from. Maybe from being egocentric, and thinking ASCII-only. PROTIP: http://www.neo-layout.org/

Good luck scanning through way more Unicode key combinations

Also: what stops a hacker from trying out passwords on the keyfile instead of the encrypted file?

It's an example to demonstrate how much more limited the typable keyspace is than an unconstrained binary keyspace, nothing more. I think you're quite out of line throwing words like "egocentric" around because an arbitrary example used QWERTY/ASCII.

However, say you did have 1024 typable characters. A random 10-character password with such a keyboard layout would yield only about (at most) 1024^10 = 2^100, roughly 1.3 * 10^30 possible combinations, still well short of the example binary keyspace.

We're getting a bit off-topic, but if anyone's interested, more information on a method of deriving keys from passwords can be found here. Notice how part of the process cycles over and over again to increase the computing time required to exhaust the keyspace. If you cycle 10,000 times, the legitimate decryptor only has to perform those 10,000 operations once, but a brute-force attack has to perform 10,000 operations for every password attempt, greatly increasing the CPU time required for each attempt. This is an attempt to compensate for the difference in keyspace sizes by increasing CPU time outside of the cryptographic algorithm.
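For the curious, here's a minimal sketch of what that cycling looks like, in Python, using the standard library's PBKDF2 as a stand-in for whatever scheme the linked article actually describes; the password, salt size, and iteration count are purely illustrative:

```python
import hashlib
import os

# Illustrative values only - not from the article or any real archiver.
password = b"correct horse battery staple"
salt = os.urandom(16)        # random salt, stored alongside the ciphertext
iterations = 10_000          # the "cycling" discussed above

# Derive a 256-bit key. The legitimate decryptor pays for the 10,000 hash
# rounds once; a brute-forcer pays for them on every single password guess.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())
```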

Comment Re:This is why you use encryption programs... (Score 5, Informative) 128

I was under the impression that brute forcing did exactly that. They're not using a dictionary. They're taking advantage of the GPU processing power.

For this kind of encryption, the archive password is converted into a key. This is done because remembering a large key is hard, but remembering a password is not.

However, this kind of conversion is not remotely secure. With around 70 typable characters ("a-z", "A-Z", "0-9", a few symbols, etc.), the number of possible passwords of length l is around 70^l. If we use a secure crypto algorithm, say, AES-256, then we would encrypt the archive with a 256-bit key. Something that uses a password for encryption does so by permuting the password into a key, typically through some combination of hashing, concatenation, and salting. This process deterministically maps the relatively small ASCII password space to a 256-bit key space. So even though you're using a secure-sized 256-bit key, there are still only (at most) 70^l possible keys, since each key must be generated from a password.

Now, with AES-256, there are 2^256 possible keys. While brute-forcing the 256-bit keyspace is considered hard (that works out to about 1 * 10^77 possible keys), brute-forcing the possible plaintext passwords that could have generated the key is significantly easier (a 10-character password has only 2 * 10^18 possibilities).

So back to what the OP said, while the crypto and keysize of the underlying cryptography are secure (in this example, AES-256), the keyspace is inherently limited since it has to be derived from a much-smaller set of passwords. The OP is spot-on ... if you really want to encrypt something securely, you have to use a much larger keyspace, which, in this case, means generating a complete 256-bit key rather than deriving one from an ASCII password. This article shows that password-derived keys are not secure.
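As a rough sketch of that argument in Python, using the same back-of-the-envelope figures as above and a deliberately toy, unsalted password-to-key mapping that no real archiver should use:

```python
import hashlib

# Same rough numbers as in the comment above.
typable = 70
password_length = 10
password_keyspace = typable ** password_length   # ~2.8e18 possible passwords
key_keyspace = 2 ** 256                          # ~1.2e77 possible AES-256 keys
print(f"{password_keyspace:.1e} passwords vs {key_keyspace:.1e} raw keys")

# Toy derivation (single unsalted hash, for illustration only): every key the
# archive can possibly have is reachable from some password, so an attacker
# only needs to search the password space, never the full 256-bit key space.
def toy_derive_key(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

print(toy_derive_key("hunter2").hex())
```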

Comment Clear Path to the Public Domain (Score 3, Interesting) 65

I propose the following change to the current patent system:

  • When a patent is initially filed, the patent filer may optionally include an itemized list of costs incurred to directly develop the patent.
  • At any time after the patent is granted, the USPTO offers the following option: If members of the public can accumulate and pay 150% of the stated development cost of the patent to the patent holder, then the patent irrevocably enters the public domain.
  • Failure to disclose development cost will result in a default value (around $100,000, which can be raised by the USPTO as needed) being assigned to the patent.
  • Misleading or incorrect information on an itemized list disqualifies the list and results in default value being assigned to the patent.
  • Challenges and negotiations regarding the value of a patent can be brokered in a public setting through an institution established by the USPTO.

Under this system, inventors have a clear path to profit from the effort they invested to create a patent. No matter how much they invest, they will always make 50% of that investment back in profit. There is also a clear path to the public domain for the patent - anything so fundamentally critical can be purchased and contributed to the public domain with the USPTO as the intermediate broker. It is likely cheaper for any given company to chip in and place a patent in the public domain for all to enjoy than it is to license it individually from the patent holder.

The buyer of the patent can be a company, community pool of money, or even the US Government itself (think cancer cure) based on the criticality of that patent to any entity's set of interests.

So patents aren't gold mines anymore. You can't build a business model around exclusivity. Who cares? Innovation will continue, as it always has, and now everyone can participate. I dunno; I like the idea.

Comment Re:Next step, consulting (Score 1) 90

Watch him start a "consulting" business that counts among its clients some very high profile tech companies.

Let him, and more power to him. In the real world he will be entirely accountable for his actions and will have to stand on his competence. A foolish company that hires a bad security advisor will go down in flames with him, arm-in-arm. He'll actually be put to the test, instead of just appointed. Maybe he'll be great? Who knows ... at least now we'll have actual data to judge him by.

Comment Data Anonymization Exemption? (Score 1) 90

I like the disclosure aspects of the bill, but I hope it doesn't hinge on the personal aspect of the data. If data that has undergone anonymization can still be used without consent, then this bill might as well not exist.

One of the easiest things possible is to de-anonymize anonymous location data sets. I suspect that looking at:

  • Overall geographic affinity
  • Most frequently visited locations (e.g., home and work)
  • Consistently-visited outliers

... would provide more than enough data to positively identify almost any human in the world. Honestly probably just the (work, home) tuples could identify 95% of us.
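A minimal sketch of that fingerprinting idea, in Python; the record format, coordinate rounding, and time-of-day windows are all assumptions invented for illustration, not drawn from any real study:

```python
from collections import Counter

# Records are (hour_of_day, rounded_lat, rounded_lon) tuples taken from an
# "anonymized" location trace. Granularity and time windows are made up.
def location_fingerprint(records):
    night = Counter((lat, lon) for hour, lat, lon in records if hour < 6 or hour >= 22)
    day = Counter((lat, lon) for hour, lat, lon in records if 9 <= hour < 17)
    home = night.most_common(1)[0][0] if night else None   # most common overnight spot
    work = day.most_common(1)[0][0] if day else None        # most common weekday daytime spot
    return (home, work)   # for most people this pair alone is already unique

# Example: a few days of coarse samples still pin down home and work.
trace = [(23, 40.71, -74.01), (2, 40.71, -74.01), (10, 40.75, -73.99),
         (14, 40.75, -73.99), (22, 40.71, -74.01), (11, 40.75, -73.99)]
print(location_fingerprint(trace))
```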

Comment Re:This is an extremely important accomplishment. (Score 4, Informative) 77

Remember when it used to be first, by a huge margin? It's not dead by any means, and still a very active language, but it's not taught as much anymore. Within a generation, it'll be in the same class as FORTRAN - only used to support legacy apps.

... and kernels, and drivers, and embedded applications, and core libraries, and runtimes, too, unless those go away.

C is a fantastic language that very effectively performs a much-needed role in software development: to provide a lightweight, usable, and readable language while retaining (most of) the capabilities of machine code. C is intended to interface directly with the system, or closely with the operating system.

C is in decline because many modern programming challenges don't benefit from working on the level of machine code or operating system, nor should they. If I want to write a game, I want to focus on the game design and mechanics, not bit blitting pixels onto a buffer. Libraries, interfaces, and abstraction levels are all things higher-level languages leverage to constrain the perspective and duty of the developer to the most productive (and, oftentimes, interesting) areas.

Also, let's not forget that in the common non-kernel case, most of the reason C is even usable is because C, itself, leverages a massive host of support libraries and a not-so-lightweight runtime.

Comment Re:This is an extremely important accomplishment. (Score 2) 77

I don't think that the article goes into enough detail about just how important this accomplishment is. Frankly, this is our only hope going forward. With so much slow software written in languages like Ruby and JavaScript becoming popular, it will again fall back to the hardware guys to really make things fast again. This will probably be the way they'll do it!

While I agree with your statement that this is likely incredibly important, your concept of the state of software is absurd.

Non-specialized (e.g., consumer-grade) software platform choices - language, compiler, interpreter, execution environment, and operating system - are made largely based on the current hardware status quo of the typical software user. If hardware (CPU, GPU, network, etc.) continues to get faster, software will be written to complement that hardware. The second hardware becomes a limitation, software will back off, trim down, and optimize. While the factors behind it are too numerous to fully detail in a post, the key idea here is that waste and bloat can often be the consequences of a tradeoff for functionality, stability, speed, and development time, to the net benefit of the consumer.

A good example of this is the Android operating system. In the midst of gigantic browsers and bloated (not necessarily in a bad way) operating systems running on the latest quad-core beasts, an operating system derived from a high-performance kernel (Linux), set of libraries (libc, etc.), and high-level runtime (Dalvik VM) was created to specifically scale graphical and operational code to an open embedded platform. Everything running on Android could easily have been run on computers 10 years ago, yet it's currently a bleeding-edge development platform.

Another example is a modern high-profile web page/application. Consumer-grade Javascript-intense pages (Google Maps, etc.) often provide lightweight alternatives for smartphones and netbooks. The user consumes what they can handle and no more, and this results in a positive experience.

Just remember: software scales to meet the demands of its consumers and the capabilities of its hardware platform.
