
Hackers vs. Phishers 137

An anonymous reader writes "Some hackers out there don't like to do all the hard work of running a successful phishing campaign. Instead, they developed a simple online service to 'steal' account details from the hard-working phishers. Named AutoWhaler, the service allows anyone to scan a phishing server for log files that contain juicy information such as usernames and passwords."

Comment Re:openfiler (Score 1) 206

Openfiler's web gui is buggy as hell, its local LDAP server option is poorly documented and provides terrible diagnostic messages when improperly configured, and it has no official support for installing/booting from flash. Never trust a product that wants to charge money for the admin guide.

I only tried FreeNAS briefly and did end up using Openfiler, but I would love to see anything beat it.

Comment Re:No because they are different (Score 3, Interesting) 205

GCD is a mechanism to let one central authority dispatch threads across multiple cores, for all running applications (including the OS).

This is what most people talk about, and what is most obvious from the name, but it is not the interesting part of GCD.

The interesting part of GCD is blocks and tasks, and it is useful to the extent which it makes expressing parallelism more convenient to the programmer.

The "central management of OS threads" is marketing speak for an N:M scheduler with an OS-wide limit on the number of heavyweight threads. This is only useful because OS X has horrendous per-thread overhead. On Linux, for instance, the correct answer is usually to create as many threads as you have parallel tasks and let the OS scheduler sort it out. Other operating systems (Solaris, Windows) have caught up to Linux on this front, but apparently not OS X. If you can get the overhead of OS threads down to an acceptable level, it is always better to avoid multiple layers of scheduling.

Comment Re:Its been done for years already (Score 5, Informative) 711

So we've had a defined standard that was, arguably, not the easiest to understand. THEN harddrive manufacturers started their fraud. And THEN people started complaining. So what, and please think about this, would be the right decision here?

This is revisionist at best and really just wrong. Despite all "wisdom" to the contrary, there has never been universal acceptance of 1 MB = 2^20 bytes on computers. For instance, all of IBM's mainframe hard drives from the 60s and 70s were sold using base-10 prefixes. Early desktop hard drives from the 80s used both; I think the ST506 used base-2, but some other models used base-10. All networking and communications standards (Ethernet, modems, PCI, SATA...) use base-10 prefixes for MB/s and Mbit/s. 3.5" floppy disks used NASA-style units where 1 MB = 10^3*2^10 bytes. And even while RAM is still almost always measured in base-2 units (due to manufacturing issues making it much easier to produce in power-of-2 sizes -- something which is not true for hard drives), the speed of the memory bus on your CPU is still measured in base-10 units.

It is a *good* idea to have K and M mean the same thing everywhere. A system where a 1 GB/s link transfers 0.93 GB every second is stupid. This is especially important as computers are used in more and more environments. Should a 1 megapixel camera mean 2^20 pixels? What about CDs with a 44.1 kHz sampling rate?


Torpig Botnet Hijacked and Dissected 294

An anonymous reader writes "A team of researchers at UC Santa Barbara have hijacked the infamous Torpig botnet for 10 days. They have released a report (PDF) that describes how that was done and the data they collected. They observed more than 180K infected machines (this is the number of actual bots, not just IP addresses), collected 70GB of data stolen by the Torpig trojan, extracted almost 10K bank accounts and credit card numbers worth hundreds of thousands of dollars in the underground market, and examined the privacy threats that this trojan poses to its victims. Considering that Torpig has been around at least since 2006, isn't it time to finally get rid of it?"

Vatican To Build 100 Megawatt Solar Power Plant 447

Karim Y. writes "The Vatican is going solar in a big way. The tiny state recently announced that it intends to spend 660 million dollars to create what will effectively be Europe's largest solar power plant. This massive 100 megawatt photovoltaic installation will provide enough energy to make the Vatican the first solar powered nation state in the world! 'The 100 megawatts unleashed by the station will supply about 40,000 households. That will far outstrip demand by Pope Benedict XVI and the 900 inhabitants of the 0.2 square-mile country nestled across Rome's Tiber River. The plant will cover nine times the needs of Vatican Radio, whose transmission tower is strong enough to reach 35 countries including Asia.'"

Comment Re:/usr/bin/pride, /usr/bin/ego, /etc (Score 1) 425

[quote]Truth is, nothing irreplacable was provided by the GNU project. [/quote]

I can't see how you can possibly think that is relevant. Whether there were other options that *could* have been used doesn't change the fact that a circa 1992 "linux" system was largely a GNU system.

I certainly agree that a modern Linux based desktop is not a GNU OS, but I think it was a perfectly reasonable request in the early 90s. I still call it Linux, mostly because the name is shorter, and I am not about to call it GNU/X11/Gnome/Linux. And the reasons I choose to run Linux over (say) FreeBSD are mostly to do with the kernel and the kernel specific system tools, and not with the userland.

Comment Re:/usr/bin/pride, /usr/bin/ego, /etc (Score 4, Informative) 425

Compiler and toolchain, and all the 'standard' UNIX tools: the shell, the text utils like cat, grep, awk, etc.

Basically, back in the 80s, the FSF reimplemented what was at that time nearly the entirety of what was called UNIX, except the kernel (which was what the HURD project was/is). It was to be the GNU OS. While the kernel was in development, the userspace tools were developed and ported to other UNIX systems like SunOS as a replacement for the often deficient historical versions supported by the UNIX vendors.

So when Linus came along and wrote a UNIX-like kernel using gcc, he could load all those programs on top and have a mostly functioning UNIX environment. This was the reason RMS objected to calling it just Linux: at that time the majority of the code running on the system was GNU, so it was probably a legitimate point. And even if there were a different compiler, without a set of userspace tools that people could freely get and use it is unlikely Linux would have been able to take off.

Now, of course, a huge part of the user experience is provided by X11, the desktop environments, and various graphical applications. GNOME is part of the GNU project, but KDE and most of the applications are not. So it isn't really true that GNU software is still the majority of the OS. Of course, the kernel is even less important in terms of the user environment, and despite all the other software around it, the GNU utilities are what makes it (not) UNIX.

Comment Re:Pass by reference (Score 1) 612

That is a pretty bad example. C++ references hardly count as references since you can't reassign them. They are really just syntactic sugar to make operator overloading look nice and reduce the number of -> operators. They cannot be used alone to create complex data structures.

A better example would be references in Lisp, Perl, or Java. They solve many of the problems of C/C++ pointers: they can't be out of bounds, they must respect the type safety of the language, and thanks to garbage collection they can't point to an invalid or destroyed object. However, they all support a null reference.

Maybe there is a better way to do this where you don't ever need null references, but I know two things for certain: 1) SQL is not it, and 2) people will still make errors where data they expect to be there is not.


30th Anniversary of the (No Good) Spreadsheet 407

theodp writes "PC Magazine's John C. Dvorak offers his curmudgeonly take on the 30th anniversary of the spreadsheet, which Dvorak blames for elevating once-lowly bean counters to the executive suite and enabling them to make some truly horrible decisions. But even if you believe that VisiCalc was the root of all evil, as Dvorak claims, your geek side still has to admire it for the programming tour-de-force that it was: implemented in 32KB of memory using the look-Ma-no-multiply-or-divide instruction set of the 1MHz 8-bit 6502 processor that powered the Apple II." On the brighter side, one of my favorite things about VisiCalc is the widely repeated story that it was snuck into businesses on Apple machines bought under the guise of word processors, but covertly used for accounting instead.

Comment Re:not able to be used == not useful (Score 4, Insightful) 171

You think that quantum computers are not able to justify grants or PhDs? What world are you living in?

until these quantum computers exist and are cheap enough to fill datacentres

Yeah, because classical computers were never useful to anyone (or anyone important) until datacenters existed.

No, to be really useful, quantum computing has to be as easy to afford and deploy as current computing technology.

And until then, developments that bring us closer are irrelevant? Applications that could give us more reason to develop the technology are pointless?

What exactly is your point here?

Data Storage

Ext4 Advances As Interim Step To Btrfs 510

Heise's Kernel Log has a look at the ext4 filesystem, as Linus Torvalds has integrated a large collection of patches for it into the kernel main branch. "This signals that with the next kernel version 2.6.28, the successor to ext3 will finally leave behind its 'hot' development phase." The article notes that ext4 developer Theodore Ts'o (tytso) is in favor of ultimately moving Linux to a modern, "next-generation" file system. His preferred choice is btrfs, and Heise notes an email Ts'o sent to the Linux Kernel Mailing List a week back positioning ext4 as a bridge to btrfs.
The Military

NSA Takes On West Point In Security Exercise 140

Wired is running a story about a recent security exercise in which the NSA attacked networks set up by various US military academies. The Army's network scored the highest, put together using Linux and FreeBSD by cadets at West Point. Quoting: "Even with a solid network design and passable software choices, there was an element of intuitiveness required to defend against the NSA, especially once it became clear the agency was using minor, and perhaps somewhat obvious, attacks to screen for sneakier, more serious ones. 'One of the challenges was when they see a scan, deciding if this is it, or if it's a cover,' says [instructor Eric] Dean. Spotting 'cover' attacks meant thinking like the NSA -- something Dean says the cadets did quite well. 'I was surprised at their creativity.' Legal limitations were a surprising obstacle to a realistic exercise. Ideally, the teams would be allowed to attack other schools' networks while also defending their own. But only the NSA, with its arsenal of waivers, loopholes, special authorizations (and heaven knows what else) is allowed to take down a U.S. network."
