Comment Re:IoA (Score 1) 124

1 IPv4 address hosting a subnet with NAT vs. an IPv6 /64 prefix are roughly equivalent

    2^32 = ~4.3 billion
    2^64 = ~18 billion billion

So they are only "roughly equivalent" if by that you mean "within 10 orders of magnitude of each other".

I think you meant "one IPv4 Internet (4.3 billion hosts) where each host NATs an entire IPv4 internet vs. one IPv6 /64 prefix (4.3 billion IPv4 Internets) are exactly equivalent".

In practice you can't assign anything smaller than a /64
-- snip --
It's still way more address space than we'll ever reasonably need, but not quite as ridiculous as it looks at first glance.

While true, that /64 is itself assigned out of a space of 2^64 possible /64 prefixes. In other words, there are 18 billion billion prefixes, each with 18 billion billion addresses, so it really is as ridiculous as it looks at first glance. Not complaining, though....
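For the skeptical, the arithmetic above is easy to check (a quick sketch in Python):

```python
# The address-space arithmetic from the thread above.
ipv4_total = 2 ** 32      # all IPv4 addresses: ~4.3 billion
ipv6_per_64 = 2 ** 64     # hosts in a single IPv6 /64
ipv6_prefixes = 2 ** 64   # number of /64 prefixes in the 128-bit space

print(f"IPv4 addresses:   {ipv4_total:,}")
print(f"Hosts per /64:    {ipv6_per_64:,}")
# One /64 holds 2^64 / 2^32 = 2^32 entire IPv4 Internets:
print(f"IPv4 Internets per /64: {ipv6_per_64 // ipv4_total:,}")
```

The ratio 2^32 is about 4.3 billion, i.e., roughly 9.6 orders of magnitude, which is where the "within 10 orders of magnitude" jab comes from.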

Comment Re:And this will change nobody's minds.. (Score 1) 378

This is why some 3rd world countries won't use it, not fear of GMO itself, but they don't want to be beholden to an American company for their seeds.

Hardly. Most developing countries want to use GM crops (read: farmers want to use it, but government forbids it), but the countries they export to, like Germany, are poised to instantly block all imports if they allow GM crops to be used. Seriously, it really is that crazy. Needing to buy seed from somewhere is the least of their concerns.

Comment Re:Brace for shill accusations in (Score 1) 378

Safety is a red herring.

When you are talking about GM the technology and whether it should be used, safety is the only consideration that matters, because that is the only unknown (that and effectiveness). Every other consideration is concerned with the industry of agriculture itself and has nothing to do with GM directly.

I have two objections to GM crops: biodiversity and lock-in

Biodiversity was a concern long before GM crops were on the scene. Any kind of controlled breeding and selection of popular varieties (driven by the free market) can create problems with biodiversity. Cavendish bananas are not genetically-modified, and yet they are by far the most widely used cultivar globally. Lock-in is a more valid concern, although the recent Supreme Court decision on the patentability of genes may make it less so. If seed companies can only patent the seeds, but not the actual genetic modifications, it would be the same situation as currently exists with patentable crop varieties where there is plenty of room for free market competition. Of course, the government could also refuse to recognize any patents on food crops. Either way, it is a regulatory problem, not a technology problem.

Comment Re:Stupid appers (Score 1) 127

It's not really the version of the library that's the problem, in the majority of cases. As a few have already mentioned, the interfaces often don't change between library versions, so older software can often compile fine against newer libraries. The problem is, most people want binary distributions. Source distributions are great, and very flexible, until you want to A) install something closed-source, or B) install something large and complex, like LibreOffice. Most people, myself included, don't care to sit around and wait for 2-3 hrs for something to compile just so they can get some work done. If you can just install it, you are usually much happier.

The problem is, to get binaries to share dependencies, it is not just the filenames and locations that must be the same. The symbol tables have to be exactly what the app is looking for (i.e., what it was linked against). That means the build environment has to be the same, or at least generic enough, to provide the required compatibility. If you change something like glibc, everything compiled against it has to be recompiled and relinked. That is the major source of frustration, and it is not an easy problem to solve.
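As a toy model of that constraint (made-up symbol names; a real ELF loader is far more involved, and glibc uses symbol versioning precisely so newer builds keep exporting the old symbols):

```python
# A binary loads cleanly only if every symbol it was linked against
# is still exported by the installed library.

def is_compatible(app_needs: set, lib_exports: set) -> bool:
    """True iff all required symbols are present in the library."""
    return app_needs <= lib_exports

old_glibc = {"malloc", "free", "memcpy@GLIBC_2.14"}
new_glibc = {"malloc", "free", "memcpy@GLIBC_2.14", "memcpy@GLIBC_2.35"}

app_built_on_old = {"malloc", "memcpy@GLIBC_2.14"}
app_built_on_new = {"malloc", "memcpy@GLIBC_2.35"}

print(is_compatible(app_built_on_old, new_glibc))  # True: old symbols kept
print(is_compatible(app_built_on_new, old_glibc))  # False: recompile/relink
```

Forward compatibility (old app, new library) usually works; the reverse direction is what forces the mass rebuilds.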

Comment Re:Why? (Score 1) 127

Not really. Deb and rpm can handle multiple versions just fine, as long as the underlying software supports multiple versions. Remember, a package is just a collection of files with some instructions for where to put them. So if you try to install two files with the same name in the same place, you are going to have problems. In other words, it's not the versioning, it's an inherent limitation of the filesystem itself. If a library renames itself between versions, it won't have problems, but very few libraries go to the trouble to do that.
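A minimal illustration of that filename constraint (hypothetical package and file names): a directory is just a map from name to file, so two versions can coexist only if their on-disk names differ.

```python
# Toy model of a package manager's file database.
installed = {}

def install_file(filename: str, package: str) -> bool:
    """Return False on a conflict: same filename owned by another package."""
    if filename in installed and installed[filename] != package:
        return False
    installed[filename] = package
    return True

print(install_file("libssl.so.1.1", "openssl1.1"))   # True
print(install_file("libssl.so.3", "openssl3"))       # True: different name
print(install_file("libssl.so.3", "openssl3-fips"))  # False: name collision
```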

Comment Re:Read about it before commenting, people! (Score 1) 127

Interesting. That's more information than I was able to find anywhere else. Thanks.

Here's what I'm most worried about, though. How dependent is it on non-lazy packagers? In other words, the easiest and most convenient way to package anything is to ship with all dependencies and the app uses those. The problem, though, is each application is then solely responsible for updating itself, including to patch bugs in any dependencies, so it quickly leads to running a million app updaters in the background, which is the current nightmare on Windows and OS X. Ideally, this system would be smart enough to use the base system by default and only use the supplied dependency if the base system can't provide it or if there is a conflict of some sort. But I doubt it will do that, which means it is on the packagers to check the base system first before installing their own dependencies. Somehow I doubt they are going to do that, though.
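The "smart" behaviour described above could look something like this sketch (entirely hypothetical policy and names, not anything Snap has documented):

```python
# Prefer the base system's copy of a dependency when it satisfies the
# requirement; fall back to the app's bundled copy only when it doesn't.

def resolve(dep, min_version, system, bundled):
    sys_ver = system.get(dep)
    if sys_ver is not None and sys_ver >= min_version:
        return "system"
    return "bundled"

system  = {"libpng": (1, 6, 37), "zlib": (1, 2, 11)}
bundled = {"libpng": (1, 6, 40), "zlib": (1, 3, 0), "libfoo": (2, 0, 0)}

print(resolve("libpng", (1, 6, 0), system, bundled))  # system: new enough
print(resolve("zlib", (1, 3, 0), system, bundled))    # bundled: system too old
print(resolve("libfoo", (1, 0, 0), system, bundled))  # bundled: not on base
```

The point is that the fallback only fires on a genuine version conflict or a missing library, so security updates to the base system still reach most apps automatically.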

Comment Re:Scant on details, high on assumptions (Score 1) 127

RPM dependencies are calculated from files and SONAMEs, but can also be specified manually by the packager, including version inequalities of other packages.

Debian has this too, and I think it is actually a good deal more flexible than rpm, at least from what I remember from my brief stint with Red Hat back in the day. There's a reason Debian was able to have apt long before Red Hat/Fedora had yum.
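For the curious, a versioned dependency like "libfoo (>= 1.2.3)" in a Debian control file boils down to a comparison like this (heavily simplified sketch: real dpkg version comparison also handles epochs, tildes, and non-numeric components):

```python
# dpkg-style version relations: << and >> are strictly less/greater,
# = is exact equality.
import operator

OPS = {">=": operator.ge, "<=": operator.le,
       ">>": operator.gt, "<<": operator.lt, "=": operator.eq}

def satisfies(installed: str, relation: str, required: str) -> bool:
    parse = lambda v: tuple(int(x) for x in v.split("."))
    return OPS[relation](parse(installed), parse(required))

print(satisfies("1.2.10", ">=", "1.2.3"))  # True: compared numerically
print(satisfies("1.2.10", "<<", "1.3"))    # True
```

Note the numeric tuple comparison: 1.2.10 correctly sorts after 1.2.3, which naive string comparison would get wrong.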

Well, then that's really a problem with the community not enforcing proper requirement standards that reflect reality on important packages.

This is the real problem. And I dare say it is 99% an Ubuntu problem, because they really like to break everything with each subsequent release. Debian has been a rolling-release distribution since forever and is renowned for the incredible robustness with which packages can be shared between stable, testing, and even unstable, as well as the ease of transitioning from one to the other (e.g., when testing becomes the new stable). They have once or twice had to do some massive renaming of library dependencies, but managed it without a hiccup, which is a testament to the quality of the deb system.

No, the problem is Ubuntu. Their versioning is a constant clusterfsck of broken, incompatible package naming. And they heavily abuse "virtual" packages for their own purposes which leads to the breakage in Samba like the GP described. It is horrible release management and is one of the many things wrong with Ubuntu. However, Ubuntu manages to stay more up-to-date, and has some pretty nifty userland tools, so I find myself using it much more than Debian. But I lament every time I have to upgrade, or if I want to move packages between versions.

Snap sounds like a system with some much-needed features, but what I would really like is for those features to be integrated into deb. Unfortunately, Ubuntu is following their usual pattern of aloofness. Both Debian and Ubuntu would benefit tremendously if they could work together to enhance deb. Transactional updates? Who doesn't want that? That is a great feature. But nope, that's not going to happen, apparently. We're going to end up with another Mir, or Upstart, or Compiz (shudder).

Comment Re:Scant on details, high on assumptions (Score 1) 127

The details on this new packaging system are scarce--and I've checked--but it looks like a reimplementation of Docker,

I guess we'll find out more in time, because I too couldn't find any details on how this is implemented. If it does use containers (a la Docker), that would be really cool. As soon as Docker started getting more fleshed out, this was the first application I thought of that would be perfect for it.

An application being able to use alternative libraries is definitely a need on modern linux. I can't count the number of times I've needed to do a massive upgrade of the system just to install a newer version of an app I was using. My only worry is that this will depend on the non-laziness of app developers to work well. Snap packages can use the underlying system, but only if app developers take the time to specify their dependencies, which is something they already don't want to do, apparently. So instead, they bundle their own libraries, even if they are already available on the system, and we get OS X-style bundles, which I'm not a big fan of. Ideally the snap system would default to using a system library if it meets a dependency and only fall back to a bundled library if that dependency is not met, but based on the scarce information available that doesn't seem to be the case.

It would also be nice if there was a quick way to determine library versions in all installed snaps, so that you can see which might be vulnerable to recent security errata, for example. Not sure if they have plans for tools like that, but it would sure be useful.

Comment Re:Why? (Score 1) 127

Why? That has been the standard way to do what you are trying to do on linux since forever. The way installers, like the Ubuntu installers, work under the hood is 1) they format the disk, 2) they set up a staging area, and 3) they install everything the system needs using the package manager. Afterward they do some initial config to get the system to boot. The package manager will only install files that belong to its packages. So it won't 1) delete or empty directories that have already been created, or 2) overwrite or delete files that don't belong to it (i.e., user-created files). Config files that have been modified from their defaults will be overwritten, but in many cases there is a *.d/ directory that allows you to put in custom config that won't get overwritten when you update or reinstall. That's why things like network interfaces are preserved when you update: the interface configs are written into a .d/ directory, allowing the package-owned config file to be upgraded without wiping away the interfaces.

So to do a safe reinstall, the instructions are accurate. Tell the installer not to format the disk, use the same partitions that were already in place, and that's it. It is actually a very well designed system. If I anticipated needing to do this frequently, the only thing I might do is keep /home on a separate partition (and maybe /usr/local depending on how much I use it) so that I would be able to format the root partition safely. But like I say, not necessary to safely reinstall without losing your files.
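The *.d/ pattern mentioned above amounts to a layered merge (a simplified sketch with made-up keys; real tools parse their own config formats, but the override order is the same idea):

```python
# The package owns the base config file (and may replace it on upgrade);
# user overrides live in drop-in files the package manager never touches.

def effective_config(base: dict, dropins: list) -> dict:
    """Later drop-ins override earlier ones, which override the base."""
    merged = dict(base)
    for d in dropins:
        merged.update(d)
    return merged

base = {"dhcp": True, "mtu": 1500}                  # shipped by the package
user = [{"dhcp": False, "address": "10.0.0.5/24"}]  # e.g. interfaces.d/ files

print(effective_config(base, user))
# {'dhcp': False, 'mtu': 1500, 'address': '10.0.0.5/24'}
```

Upgrading the package replaces only the base layer, so the user's drop-ins survive a reinstall untouched.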

Comment Re:The future of dosage? (Score 1) 113

For that matter, the machine would not be producing the drugs, it would just be packaging them

That was my reaction (no pun intended!) too at first, but no, this is actually chemical synthesis from starting materials. It is not quite as modular as they imply from the summary. You need to clean and restandardize the system to change product. But the idea is that it is capable of following a programmed synthesis and purification strategy. The purification is actually the coolest part to me. The synthesis uses an optimized flow chemistry design (think no solvents, short reaction times, high temperatures and pressures), but this is fairly standard process chemistry. The purification is the complicated part because the machine has to do liquid partitioning, column purification, and multiple recrystallization steps without human monitoring. And it has to meet USP standards for quality control at the end. That is really cool. There was some serious engineering that went into it, so even though it has somewhat limited applicability right now, it is an impressive feat.

That said, I'm not sure where this really fits. I can't think of many situations where you would benefit from on site synthesis. Remember, you would still need to preselect which drug you want to synthesize ahead of time and have all of the materials ready to go. And it would still take anywhere from 1-3 days start to finish. So in a hospital where you might work with hundreds of drug formulations, which ones are you going to maintain this system for? And is it really easier to synthesize on site as opposed to just managing shipments from manufacturing facilities? It might be able to help in the case of manufacturing shortages, but that seems like it would be a fairly rare occurrence....

Comment Re:The future of dosage? (Score 1) 113

It's really the cost of the pharmacy that's inhibiting this, not the drug manufacturing. Compounding pharmacies were essentially the genesis of the profession, but compounding takes expert skills and knowledge, and it is not a high-throughput process. So it doesn't fit in with the fast-food delivery system for drug dispensing that we use now. There are still some compounding pharmacies around for special niche cases, but outfits like CVS stick with very standardized methods and procedures so that they can hire a skeleton staff to just crank through prescriptions at a high rate per hour. The other side of a pharmacist's work (drug counseling and advice) is completely neglected, because if you don't meet prescription quotas you get in trouble. In-patient pharmacies aren't much better anymore because hospitals are doing the same thing: they go for patient volume and number of procedures over actual outcomes. So, crank through IVs and narcotics so that you can do more surgeries and other procedures faster. Anything custom, even if it's better for the patient, is avoided because it slows you down.

Comment Re: Dead serious answer (Score 1) 155

I'm surprised a standard user would have the required security permissions to alter the MBR.

That's Windows security for you. There are decades of established security practice where everyday users run unprivileged and only become root for administrative tasks, plus a very user-friendly implementation by Apple in OS X that nobody has complained about AFAIK, but nope, Microsoft had to come up with UAC instead. It is an improvement over XP, but it is still far too easy to inadvertently hose your system. The first thing I do when I install Windows is create an unprivileged user and set a password for the administrator account. This instantly gets rid of 99% of the problems. The remaining 1% is training users on when it is appropriate for an application to ask for admin rights (almost never), but telling them never to enter their password unless they are making a deliberate change to their system, or to ask if they are unsure, is usually sufficient. I've never had malware problems on the boxes I administer.

Comment Re:Dead serious answer (Score 3, Insightful) 155

Found another article,

After the payload file has been downloaded from a link, it will ask for elevation of privilege from the user. That file has a shield icon, so users expect the Windows User Account Control to be triggered. Unsurprisingly, they open it and give it permission, as they don’t suspect that this is a Trojan horse containing the payload for the Petya ransomware.

This is unbelievably stupid. I know, social engineering and all, but why the f$#%k would you click OK on a UAC warning just to read a CV?! Cryptolocker I could understand, because it just used the current user's credentials, but there is no excuse for getting infected by this.

Comment Re:Gnome 3 pushed me to OS X (Score 1) 193

All kinds, really. Sometimes fatal errors because it wants to use shm or gvfs, neither of which works over a network. Or other assumptions - here are a couple from Gnome 2 (I don't have a Gnome 3 system nearby, because it just doesn't work for me):

Works for me with Gnome 3. It's laggy as hell, but that's X forwarding for you. Haven't used Gnome 2 in so long I have no way to test it. Your errors look like some kind of library conflicts are going on. Is the desktop on the remote machine fully functional on its own?
