
Comment Re:The stupid! It hurts! (Score 1) 287


rpm --root
yum history rollback
yum history redo
yum downgrade
rpm -V (the deb equivalent may be debsums?)
yum versionlock
yum --enablerepo --disablerepo
rpm --docfiles --configfiles
rpm/yum reporting is nicer

Only two commands: rpm underneath and yum on top.

The GP tore into yum/rpm vs. apt/deb. As you point out, there really isn't much daylight between them. Except that (as a non-administrator having to do occasional admin tasks) I find rpm superior.

Comment Re:The stupid! It hurts! (Score 1) 287

First, yum is technically superior to Debian's packaging system. I'm not going to bother explaining, because I seriously doubt it would do any good.

Next, this idea of rebooting to apply updates (technically two boots: download updates, reboot into a small system, apply them, reboot into the complete environment) won't matter much for long.

Hard drives are already in 3TB territory. btrfs or ZFS will become necessary for reliability. When that happens, snapshots will be available to solve the problem properly.

Now, why does Red Hat recommend a re-install? Red Hat really only sells servers. Best practice is to re-install to verify that nothing has been forgotten during an upgrade. This applies whether it's AIX, Red Hat, Solaris or even Windows.

Fedora? It supports preupgrade. I just updated from F16 to F17 with it. Note that /bin is now only symlinks to /usr/bin; a major filesystem layout change was included.

It just worked (originally installed from a Fedora XFCE live CD spin).

Comment Strange sense of Technology (Score 1) 263

robots.txt is a hint file to automated software crawling websites.

Note that everything on a web site is published.

Possibly not indexed, but, for an individual, robots.txt is just as valid an index as index.html.
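The "hint file" point can be made concrete: Python's standard library will happily report which paths a site asks crawlers to skip, and nothing enforces it. A minimal sketch, with a made-up set of rules:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- purely advisory hints to crawlers.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A well-behaved crawler consults the hints before fetching.
print(rp.can_fetch("*", "/private/list.html"))  # False: the site asks crawlers to skip this
print(rp.can_fetch("*", "/index.html"))         # True: no rule applies
# Nothing here enforces anything -- the "disallowed" URL is still published and fetchable.
```

A human with a browser never even sees these hints; in that sense robots.txt is just another published file.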

So, the company published the information; the hacker group now has the information.

It wasn't theft -- the company still has the information.

The hacker group then told the company about this information. Actually, the company should already have known. Given that the company did not want to pay to suppress republication, we can assume they were aware.

The information accessed was a simple data list. Since this is pure information, it cannot be copyrighted.

So, republishing this information is not copyright infringement.

A simple offer was made -- please pay us not to republish the information. This is a normal legal offer. No law would be broken by republishing, and the information was not obtained illegally. It may have been worth something to republish, or (as the government has shown by paying farmers not to grow crops) it may have been worth something to not republish.

Given that the company should have been aware of the availability of the information, we must assume that they wouldn't mind the republishing.

The hacker group would wish to remain anonymous. I imagine that the people on the list may like to sue someone, and may try to sue the hacker group. Making this more difficult makes sense (especially if the hacker group is not US-resident).

This is not illegal access, extortion, copyright infringement or any other crime that I can think of. You may not like it. Heck, I don't like arbitrage.

It appears from your comment (focusing on the header) that you believe there is a difference between moral and legal here (Sophocles' tragedy Antigone comes to mind). As Plato suggests, you may want to work to bring your morality and law closer together.

Be careful. Steps in that direction may bring the downfall of the Web (certainly the concept of URLs).

The hacker group has it right. They simply demanded a fee for stupidity. I don't believe that you can legislate stupidity out of existence.

Comment On Keeping Up with OpenGL (Score 1) 497

"Let's see the open source keep up with the GL spec instead of holding the whole damn platform back in 2.0 land"

Normally, I wouldn't bother responding, because there is little chance that you will see this response. However, the above quote is important.

I will refer you to [Blythe2011] http://www.cs.cmu.edu/afs/cs/academic/class/15869-f11/www/lectures/blythe_compute.pdf for an interesting critique of current 3D work.

And remember, the "radeon" driver supports R100 on up.

Comment Re:The community failed on ATi (Score 5, Insightful) 497

The AMD community supports all (11 now?) chip types, over all (4 now?) generations of Radeon released (since 2000).

KMS (kernel mode setting) and other features of the Linux graphics stack are supported over all hardware, including TV out, and other features.

3D is a work in progress. Yes, it's been almost five years, but the features do work.

I would say that, objectively, the open source drivers have been a success. I would even say that the open source drivers are arguably superior to the closed ones. Work continues (especially in the 3D area). Does the proprietary driver support stuff like multi-seat?

Of course, you claim that it doesn't work at all, and that the effort has been for nought. Please clarify. Bug reports would probably be welcome (not sure, but check x.org, freedesktop.org).

At the least, please post your hardware information, so that other people will know to avoid it.

Comment Re:Every programming language is touted as "simple (Score 2) 138

You are very right.

May I recommend Paul Graham's "On Lisp"?

Use of functional programming, and macros to build DSLs (domain-specific languages), reduces the code you need to write and can simplify things.

You then need a good FFI (foreign function interface) to utilize external libraries.

My favorite system (currently) is Gambit-C Scheme. It supports define-macro as well as hygienic macros. It compiles to C, so the FFI is simply writing "in-line" C code if needed. Best of all, it has a 20-year history behind it.
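Gambit-C's in-line C is hard to beat, but the "good FFI" point holds across languages. As a rough analogy (in Python rather than Scheme), calling straight into libc with the standard-library ctypes module looks like this; the library lookup assumes a Unix-like system with a findable libc:

```python
import ctypes
import ctypes.util

# Locate and load the C library (assumes a Unix-like system).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign function's signature so calls are type-checked.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"Gambit"))  # 6
```

The less ceremony this takes, the more practical it is to lean on existing C libraries instead of rewriting them.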

Comment For Real? (Score 1) 663

Some Ramblings

I have an NVidia (something or other) in my company-assigned Dell laptop.

Works a treat with the open driver. No 3D, but so be it. I don't game anyway.

The AMD/ATI situation? Same thing. Just takes time and effort to get the driver to the point where it can be even REMOTELY considered ready for the kernel.

The "driver du jour" from the hardware vendors? Can't be trusted.

Sure, if you are building your Supreme Gaming Machine, go for it. For any real work? Not so much.

Now, it gets weird. Because I am about to backpedal on that statement.

In some very limited circumstances, the vendor drivers may be deployed. Specifically, for GPU calculations. I still wouldn't trust these drivers in the role of a DISPLAY driver yet.

(I consider the GPU calculation testing to be more comprehensive and useful, although I find the use of OpenCL to be... abhorrent.)

But the set of features implemented by these drivers for 3D rendering (OpenGL) tends to be oriented towards gaming, and not the kind of visualizations needed for "real work". For example, 3D depth-cued lines: a feature handily supported by SGI and SUN in the past, but missing from NVidia, ATI and Matrox the last time I looked -- 10 years ago, but I suspect still missing. Not that the feature was available in Mesa, either, but Mesa is the LOWEST level of support expected.

I would be happy if I were wrong, but, as far as I can see, Intel graphics is just as good (or bad) for my 3D visualization needs.

Now, I do have to give a tip of the hat to NVidia. They (at least) tried to support OpenGL with their implementation. But I really don't understand how NVidia managed to create an OpenGL implementation that was arguably inferior to the SGI and SUN implementations. Possibly (and I speculate) their attention was split by DirectDraw and the perceived need to micro-optimize. The second reason was the need to replace a good deal of the driver stack, which NVidia tried to do without the cooperation of the kernel developers.

Which brings us to the present day, and a question: "What to do now?" Is it too late to have NVidia assist in laying out the driver stack? Most likely. The only beneficiary of the current situation is Intel. Intel has participated in laying the groundwork for display on Linux, and will reap the rewards. Both NVidia and AMD will be relegated to providing GPU processing, but will be squeezed from the bottom. After all, Intel will control the GPU sharing protocols. OpenCL will probably continue to entrench itself, and NVidia is trying to keep their compiler presence (they own that space right now). Intel is likely to release more general compilers and infrastructure to squeeze them.

AMD? I am afraid for them. They deserve better than to become a footnote in this saga.

Comment Re:on the other side of the coin (Score 1) 490


I use XFCE, and with Linux 3.3.7 and 3.4, I have been having an issue with Intel 915 graphics where icons and the title bar go black after a while.

Yes, it's annoying. Yes, it's reported.

But -- I had another issue involving the "rts_pstor" card driver in the kernel staging drivers. I need that driver to support a new-ish card reader. The icons for inserted devices were not appearing on the desktop. Reported and fixed in 24 hours.

Mind you, that isn't why I chose an Open Source Operating Environment. The reason I did was simply that it better matched my needs.

As an added benefit, it is far more advanced and useful to me, as compared to the current common Closed Source Operating Environments. These would be Windows 7 and Mac OS X.

Defect reporting is centralized and automated. Driver support is more complete. Security is much better.

(abrt, rts_pstor as an upcoming piece, and tripwire/selinux/firewall as standard components, if you really want to know).

Tripwire on Windows? Sure, it's available. Not common, though. I imagine it's also available on OS X, but I've never seen it. SELinux (mandatory access control)? Microsoft has had that since Vista. Good on them. It must be embarrassing to have been "beaten to the punch" by Open Source OS's. (Fedora Core 2 had SELinux, disabled by default; Fedora Core 3 enabled it by default and was released in 2004. Vista was beta'd in 2005 and released in 2007.) THAT may have been an effect of an "Open Source" development model. The Fedora (subset of Linux) community has had a few additional years to adjust to MAC systems.

Now, these benefits have little if anything to do with being "Open Source". The benefit of "Open Source" is that I could go and find the graphics defect myself if the normal support channel doesn't resolve it.

What is interesting is that my ecosystem is as robust as it is. As I have mentioned in an earlier post, the Fedora community is probably 2 million (could be more, could be less). Hard to count, but small compared to either Microsoft or Apple.

And yet I use a World-Class Operating Environment. Of course the priority of the communities is different. The Fedora community is much more aligned to my interests. This may simply be because it is a much smaller community.

So, I may have a few more problems with "niggly" bits, but I have a community more aligned to my interests, and a top-shelf Operating Environment that is superior to the top two commercial products.

A tradeoff that I have made.

Note, though, that for other people the tradeoff may be different. For instance, at home my kids use Macs. You dread Linux (not clear why, but okay).

So, different tradeoffs.

Back to the HARM of closed source. Programs that stop working (examples from my collection include the Microsoft CD-ROM encyclopedia for MPC). Platforms that just vanish (Palm). Data that is no longer accessible (at reasonable cost). Use of "Open Source" gives a hedge against these problems. It may not completely eliminate them (for example, material on 8-inch floppies is pretty much no longer available), but if physical formats are brought forward, there is a good chance that the data and programs will still be usable.

Comment Re:Harsh (Score 1) 170

Read what I said. I claimed that it would be just as valid.

I don't think that SAMBA stole from Microsoft, and I don't think that Microsoft stole from SAMBA.

I just stated that the chronology made the original claim silly.

Comment Re:Harsh (Score 1) 170

Who are "they"?

SAMBA didn't bring a lawsuit against Microsoft. SAMBA purchased the protocol description from Microsoft for 10,000 Euros. There was also a round of legal discussion needed to keep SAMBA as GPL software.

The European Commission investigated Microsoft. This was triggered by a request from SUN to Microsoft asking for interoperability documentation for AD. Microsoft refused, SUN entered the complaint -- SAMBA didn't get involved until Microsoft tried to use SAMBA as an example of why protocol documentation wasn't needed.

"They" would then be SUN and the EC.

Why would SAMBA sue Microsoft? I don't think Tridge and Allison are "anti-Microsoft".

Comment Re:Harsh (Score 1) 170


The reason for SAMBA was simply that Windows (Windows for Workgroups 3.1) came with SMB file sharing.

SAMBA helped integrate these workstations with larger networks and servers.

Comment Re:Harsh (Score 1) 170


LAN Manager 2.0 for Windows couldn't have been released before Windows NT 3.1 (https://en.wikipedia.org/wiki/Windows_NT_3.1), which was released in 1993.

SAMBA was first released in 1992 (http://www.rxn.com/services/faq/smb/samba.history.txt).

LAN Manager for OS/2 was available, but I did say Windows. Also, OS/2 was seriously impeded by the x86 platform. Servers were Unix boxes (Linux was released in 1991, and wasn't yet ready for prime time).

Windows was "peer to peer" networking, or OS/2 server to Windows clients back in '92. SAMBA was the first product that allowed "real" servers to serve files to Windows clients.

Comment Re:internals? in python? (Score 2) 170

Huh... so a lot of people are wrong.

How many? Hard to count how many people, but we can certainly look at applications.

Let's examine my Fedora 17 laptop. In /usr/bin, 139 programs are written in Python, including my music player (quodlibet), repo management, some of abrt (defect reporting), time tracking, and desktop wiki.

Another 194 are written in Perl, including parts of fvwm, foomatic and callgrind.

388 more are POSIX shell script and 45 bash scripts.

There are 381 symbolic links, 31 hard links (which I'll now exclude), and 2346 binary executables.

There are 3522 total programs in /usr/bin.

That's 67% binaries, 22% scripts, and 11% links.

[Java applications, and LibreOffice are not counted, but I'd imagine you would probably classify Java apps as scripts too]. This is a freshly upgraded netbook, and (since this is /usr/bin) we have only examined "system" or "distribution" applications.
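The counts above came from inspecting each file. A rough sketch of how you might reproduce that classification by reading shebang lines (a simplification of what the `file` utility does; the sample directory here is made up for illustration):

```python
import os
import tempfile
from collections import Counter

def classify(path):
    """Crude classifier: symlink, script (named by shebang interpreter), or binary."""
    if os.path.islink(path):
        return "symlink"
    with open(path, "rb") as f:
        first = f.readline()
    if first.startswith(b"#!"):
        interp = first[2:].split()[0].decode()   # e.g. /usr/bin/python3
        return os.path.basename(interp)          # "python3", "sh", "bash", ...
    return "binary"

# Build a tiny fake bin directory to demonstrate (a real run would scan /usr/bin).
with tempfile.TemporaryDirectory() as d:
    samples = {"player": b"#!/usr/bin/python3\nprint('hi')\n",
               "wrapper": b"#!/bin/sh\necho hi\n",
               "tool": b"\x7fELF...pretend machine code"}
    for name, body in samples.items():
        with open(os.path.join(d, name), "wb") as f:
            f.write(body)
    os.symlink("tool", os.path.join(d, "tool-link"))

    counts = Counter(classify(os.path.join(d, n)) for n in os.listdir(d))
    print(dict(counts))  # e.g. {'python3': 1, 'sh': 1, 'binary': 1, 'symlink': 1}
```

Hard links can't be distinguished this way (you'd check `os.stat(...).st_nlink`), which is one reason the real tallying takes more care.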

I guess that 22% must be wrong. If we can extrapolate from applications to developers (hey, I know that is just wrong, but it's better than NO data at all), 1 in 5 developers is shipping scripts.

And that (proudly) includes me.

As to the "900MB"? The Python interpreter has a 10 KB front-end and a 1.7 MB library. Yes, there is some additional overhead.

Let's examine QuodLibet (a music player application): 67 MB resident, compared to 11 MB for Terminal (collecting this data) and 267 MB for Firefox (as I'm typing this comment).

QuodLibet offers albums, playlists, sophisticated queries, and runs just fine on a NETBOOK.

In Python.

Comment Re:Harsh (Score 3, Informative) 170

"it (open source) just seemed to want to steal someone else's work in this particular area."

What a badass comment. Completely wrong, of course, but badass.

SAMBA predates Windows SMB server.

It would be just as accurate to say Microsoft "just seemed to want to steal someone else's work in this particular area."
