Comment Re:Sample Size (Score 1) 247

19 students, one codebase, 4500 lines.

Hell, my HOBBY project has something like 120,000 lines and that's not including any of the huge open-source libraries that I suck in to do the interesting stuff. And that's just the stuff I tinker with when I have a spare day or so.

I wrote more than 4500 lines (not including bracketing and whitespace) for my first week's Introduction To Programming coursework, in the first year of my degree, and that was a Java course designed for the people on the maths courses who'd never touched a computer.

Comment Re:So basically we're finally catching up to Novel (Score 1) 125

Such datacentre-level facilities often take decades to come down to consumer hardware and consumer OS.

Virtualisation is relatively new to x86 PCs. But we've been doing it on proper hardware for decades.

It's not that some things were so brilliant, it's that the features are rarely needed and take a long time to filter down to commodity OS and hardware.

Hell, I've never needed a cluster-based filesystem, and you don't see me complaining that Windows didn't get one until decades after they existed.

On-the-fly patching, like a lot of features, isn't something needed on commodity OS. Virtualised infrastructure and distributed systems and high-availability features have largely made such things pointless up until now.

But now that we're pushing for zero-downtime clouds and mobile devices that stay on for months at a time, it's good to revisit, re-purpose and use the established technology to do so. Before? Why did we need it when Linux would barely resume from suspend reliably?

Comment Re:new path for virus. (Score 1) 125

To live-patch, you'd need to run code as root.

If a malicious executable ever gets root, it can persist itself in any fashion it likes. Live-patching isn't a necessity, nor a hole in this sense.

Even with SecureBoot, there's nothing stopping such code surviving a reboot and exploiting the same hole again through any of the millions of ways a root-level executable can make something start at startup.

So long as this works in tandem with facilities like cryptographic module signatures, I don't see how it's any more of a risk than the alternative.

And, as always, you can turn it off.

Comment Re:What could possibly go wrong? (Score 1) 125

At all points you can modify the kernel, there's a potential for mischief, of course.

But what you're saying is that rebooting is somehow a magic cure-all that guarantees the system isn't infected, or that there's a user (or SecureBoot) there to "notice" something amiss.

If SecureBoot can be fooled into loading an older kernel that can then be upgraded on-the-fly, it can be fooled into doing that at boot too.

How often do you check your machine's boot-up process to ensure it's running the version that grub etc. says it's loading? Anyone could fake that and then replace it on-the-fly once the OS was loaded.

Once you're into a system at that level, persistence of the underlying system is not a defence mechanism and can be subverted. An attacker could boot up the old, insecure kernel via SecureBoot, show you boot options that claim to load the latest kernel, and then, once the live-patching facility is up and running, run malicious code that reports the latest version number. You'd never know.

Live-patching is not a security mechanism, but neither is it a lack of one.

Comment Re:We already had this with the modules... (Score 1) 125

Don't quote me on it, but from my understanding of the trampoline kernel patches, there's a point at which calls to the old system calls are blocked and redirected to the new replacement system calls.

There's a lot of logic involved in determining when the system is in a state to do that, so that you don't end up feeding new structures to old syscalls, or old structures to new syscalls, mid-way through (by checking that their dependent/source syscalls have all been upgraded by that point, etc.).

But, mostly, things tend to stay the same. You can do an awful lot to a running kernel just by loading kernel modules. I know I added DRBD devices to a kernel whose source I couldn't modify (running under a Xen hypervisor I didn't have control of) just by compiling and inserting a kernel module for it.
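
For anyone curious what that looks like in practice these days, here's a minimal sketch of a livepatch module modelled on the sample shipped in the kernel tree (samples/livepatch). The target function and the exact klp_* structure fields are from my reading of recent kernels, so treat the details as illustrative rather than gospel:

    /* Minimal livepatch module sketch, modelled on samples/livepatch/
     * livepatch-sample.c in the kernel tree. Details vary between
     * kernel versions; treat this as illustrative. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/seq_file.h>
    #include <linux/livepatch.h>

    /* Replacement for an existing kernel function (here the function
     * behind /proc/cmdline, as in the upstream sample). */
    static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
    {
            seq_printf(m, "%s\n", "this kernel has been live-patched");
            return 0;
    }

    static struct klp_func funcs[] = {
            {
                    .old_name = "cmdline_proc_show",  /* function being replaced */
                    .new_func = livepatch_cmdline_proc_show,
            }, { }
    };

    static struct klp_object objs[] = {
            {
                    /* NULL name means the old function lives in vmlinux
                     * itself rather than in a module. */
                    .funcs = funcs,
            }, { }
    };

    static struct klp_patch patch = {
            .mod  = THIS_MODULE,
            .objs = objs,
    };

    static int livepatch_init(void)
    {
            /* The livepatch core's consistency model decides, per task,
             * when it is safe to redirect callers to the new function. */
            return klp_enable_patch(&patch);
    }

    static void livepatch_exit(void)
    {
    }

    module_init(livepatch_init);
    module_exit(livepatch_exit);
    MODULE_LICENSE("GPL");
    MODULE_INFO(livepatch, "Y");

Loading that with insmod redirects callers of the old function to the new one via ftrace, which is essentially the trampoline mechanism described above.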

Comment Re:file magic - use the content to determine type (Score 1) 564

We're talking at cross-purposes.

My view is that we shouldn't be identifying files manually AT ALL. The type should be part of the metadata, as it already is whenever you download a file. Just because it ends in .docx doesn't mean it's sent to your browser as application/microsoftworddocument (or whatever it is). In fact, you can break stuff easily that way if you don't populate your webserver with proper mimetypes.

The OS shouldn't be encoding the type into the filename.
The OS shouldn't be encoding the type into the file itself.
The OS should be encoding the type as a file attribute.

How that file attribute is initially determined (a one-off process for all "legacy" files without such metadata) is inconsequential. How that file attribute is then handled and facilitated by the OS and browsers and other transport mechanisms - that's not as difficult as anyone makes out.

The transition to file types being metadata is quite simple, but no OS supports that. They ALL rely on string-parsing of the filename to determine attributes (dot-hidden files on Linux, filename extensions on Windows, etc.). That's not sensible, even if it is how it's "always been done".

Getting to the better situation means we probably WOULD have to trust the extension for the initial conversion, but once the mime-type has been determined from it, we can discard it. For unknown or extensionless files, we could do regexp matching etc. to set the additional mimetype attribute.

But from that point on, we don't NEED to ever identify a file again. If the OS has the facility to transfer that information as a file attribute to remote servers (e.g. web mime-types) already, and could just encode the mimetype as a file attribute for other kinds of transfers (just putting it in the filesystem structure should be more than enough) then we can properly keep it separated from non-related data forever more.

And if we then WANT to interpret a JAR as a ZIP for whatever purpose, we can, by changing what we interpret it as, leaving the file data intact and letting the user keep the filename separate from the program they wish to open that type with. For instance, take log files. They are text/plain, but some people might want to open them in a log viewer. It's trivial to imagine a system that generates logs with no extension (Linux's /var/log/messages, for example), logs with ".txt", or plaintext logs with other extensions. But if you could associate the file with a mime-type of text/plain, it wouldn't matter what the program NEEDS it to be called. You've separated the name from the contents, and it's easily customisable per-user, per-file or per-type.

As it is, we have a mess of having to rename or forcibly open such files where it's not necessary.

What we need is an OS that demands you provide a mime-type (even if it's just application/octet-stream for unknown/custom types at first) when you write a file. Then it doesn't matter what you call it, or which user opens it, or what kind of backwards-compatible filename you were trying to emulate: you can open it in an appropriate program.
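
As a rough sketch of how that could bolt onto an existing Linux filesystem today, extended attributes already give you somewhere to hang the type. The attribute name user.mime_type below is something I've made up for illustration; nothing standardises it:

    /* Sketch: storing a file's type as metadata via Linux extended
     * attributes, separate from both the name and the contents.
     * "user.mime_type" is an invented attribute name, not a standard. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/xattr.h>

    int main(void)
    {
            const char *path = "report";       /* an existing file, no extension */
            const char *type = "text/plain";

            /* Record the type once, as metadata. */
            if (setxattr(path, "user.mime_type", type, strlen(type), 0) != 0) {
                    perror("setxattr");
                    return 1;
            }

            /* Any program can later ask what the file is without parsing
             * the filename or sniffing the contents. */
            char buf[128] = {0};
            ssize_t len = getxattr(path, "user.mime_type", buf, sizeof(buf) - 1);
            if (len < 0) {
                    perror("getxattr");
                    return 1;
            }
            printf("%s is %s\n", path, buf);
            return 0;
    }

Rename the file to whatever you like and the type follows it; the remaining problem is exactly the one above, getting transports and other OSes to carry the attribute along.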

Half-arsing the type into the extension, or trying to guess it from the file content, isn't a long-term solution to this stuff. Fine for a one-off transition, but not beyond that.

The solution is an OS and applications that know what type of data they are handling and encode it as a separate attribute entirely.

Comment TLS (Score 1) 89

Sigh.

So, as I understand it, the current situation is:

- We can't allow use of RC4 because it's weakened significantly.
- If we disallow RC4, we open ourselves up to BEAST in practical terms.
- We need to move towards PFS and TLS 1.2 but the major libraries don't support it in major stable versions and/or we break an awful lot of the world's clients in doing so.
- A lot of the chain certificates out there still use only SHA-1, which makes them weak.
- And now we have to start worrying about clients that allow downgrade attacks on the connection.
- We can't use OpenSSL at the moment because all the interesting new features (TLS 1.2, etc.) are only in Beta.
- We can't use LibreSSL at the moment because it isn't available in many mainstream distros.

Seems to me like we really need a massive revamp of security here and ditching old clients entirely.

Almost every site on the Qualys SSL Labs test rates B at best now because of the current situation (from which they recognise there is no practical escape, even though it should probably rank them all lower): https://www.ssllabs.com/ssltes...

I think it's time we just ditched everything and provided a way for browser security to be pulled out of the browsers entirely and made independently upgradeable, so you can browse a modern TLS 1.2 site with a browser that's a few weeks old.
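
Until that happens, about all you can do server-side is pin the floor yourself. Here's a rough sketch using the classic OpenSSL API; the cipher string is just one example of "forward secrecy first, no RC4", not a vetted recommendation:

    /* Sketch: hand-hardening an OpenSSL server context. Assumes the
     * caller has already done the usual SSL_library_init() dance.
     * The cipher string is an example, not a recommendation. */
    #include <openssl/ssl.h>

    SSL_CTX *make_server_ctx(void)
    {
            SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
            if (!ctx)
                    return NULL;

            /* Refuse the broken protocol versions outright. */
            SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);

            /* Prefer ECDHE (forward secrecy), drop RC4 and anonymous
             * ciphers, and honour the server's order over the client's. */
            if (SSL_CTX_set_cipher_list(ctx,
                        "ECDHE+AESGCM:ECDHE+AES:!RC4:!aNULL:!MD5") != 1) {
                    SSL_CTX_free(ctx);
                    return NULL;
            }
            SSL_CTX_set_options(ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);

            return ctx;
    }

Even then, nothing in that snippet fixes SHA-1 chain certificates or downgrade-happy clients, which is rather the point: the library can only protect the connections both ends are still willing to make.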

Comment Re:file magic - use the content to determine type (Score 1) 564

And encoding the filetype into the file means that you have to examine (and potentially interpret) the file to work out what to open it in. That's fine for certain things (e.g. Windows executables all start with MZ) but not for others (e.g. JAR files are indistinguishable from ZIP until you interpret the ZIP file contents and act upon that interpretation).
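
To make that concrete, here's a naive magic-byte sniffer (a quick sketch, nothing more). The signatures are the real ones; the point is that a JAR carries the same ZIP signature, so content sniffing alone can't tell them apart without actually parsing the archive:

    /* Sketch: naive magic-byte sniffing. A JAR carries the same ZIP
     * signature (PK\x03\x04), so this cannot distinguish the two without
     * parsing the archive contents themselves. */
    #include <stdio.h>
    #include <string.h>

    static const char *sniff(const char *path)
    {
            unsigned char magic[4] = {0};
            FILE *f = fopen(path, "rb");
            if (!f)
                    return "unreadable";
            size_t n = fread(magic, 1, sizeof(magic), f);
            fclose(f);

            if (n >= 2 && magic[0] == 'M' && magic[1] == 'Z')
                    return "DOS/Windows executable";
            if (n >= 4 && memcmp(magic, "PK\x03\x04", 4) == 0)
                    return "ZIP container (ZIP? JAR? DOCX? ODT?)";
            return "unknown";
    }

    int main(int argc, char **argv)
    {
            for (int i = 1; i < argc; i++)
                    printf("%s: %s\n", argv[i], sniff(argv[i]));
            return 0;
    }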

As soon as the contents could be malicious, and you're running even a regexp of any complexity on it, it's a risk.

Encoding it into the filename itself is shoving metadata into other metadata. There's even a metadata separator involved here: the period in between! As such, they should be two separate and independently changeable pieces of information. Parsing the filename to work out how to interpret the data inside is nonsense when you could just store "filename" (without the extension) and "filetype" separately. This also allows .jpg and .jpeg to be seen as the same thing (which they are!) and not require two separate and confusing entries!

Adding any in-data identifiers to existing files also means modifying the file, potentially modifying hashes and security on them. Changing the way they are interpreted on one machine will affect every machine they are visible on and require write-access to the file.

The filetype thus needs to be a separate attribute from the data (exactly why mimetypes exist and are broadcast as separate attributes!), which can be separately modified and interpreted by a user to their own preference.

Of course, a random unknown file from a source that doesn't keep mimetype attributes means falling back on a) the filename extension and b) the internal file type, but that's only to "seed" the initial data. The type itself isn't reliant on either afterwards, whereas today merely renaming can break things (an innocent user can accidentally strip the extension while renaming, without realising, and end up with an "unopenable" file).

Also, you don't want to be reading any portion of a multi-gigabyte file just to see what it could be interpreted as, in the same way that you don't modify ANY file data to change the filename, filename extension, hidden attributes, ownership, etc.

Some things about a file are metadata. Some are data. But file type is metadata - it's data about the data, and how it should be accessed or interpreted. As such it does not belong in the filename (which is itself a piece of metadata) or the data. What we're doing at the moment is stuffing two bits of separate (and important) information into one and then wondering why we have to hide one of those from users half the time.

There's a reason the entire web and email run on things that force you to associate a filename extension with a separate piece of metadata - the mime type.

Comment Re:Strange (Score 1) 80

LACP would, indeed, fulfill the purpose but relies on you being able to obtain LACP support on upstream connections from your ISP. LACP must be enabled and known about on both ends for it to do anything.

It's not always true that you can get support on the upstream connection, but there are many different types of bonding that provide similar facilities.

However, in terms of being able to take disparate connections and conjoin them without specific support on the other end or high-end hardware, there are fewer - but non-zero - ways of doing that too.

Comment Strange (Score 5, Interesting) 80

Strange.

I was using routing patches for Linux nearly 7 years ago to do this (admittedly it wasn't in the stock kernel, but the patches weren't huge). You could specify multipath routes with multiple gateways; if one route went down, the others were prioritised and took over, and your upstream traffic was balanced properly and accounted for failing routes automatically, without any kind of daemon running.

I ran a school off multiple ADSL and even 3G connections with it - the only manual maintenance I ever had to do was to put the ADSL modems onto an SMS-controlled relay (the SMS came in on the same 3G stick!) because our ISP would often give us "dead" sessions if they'd had problems (where you'd get PPP and an IP and a remote gateway but couldn't do anything across them), and we were then able to reset manually if necessary. My bursar and I used the system for five years like that, only ever resetting it to enable VPN when all the upstream routes had got dead sessions, and that happened no more than once or twice a year.

And, no, we didn't have to do much. It was a stock Slackware install with one set of patches to a (2.6?) kernel to enable the multipath routing etc. It was pretty well advertised at the time - one plain page of simple patches (I remember porting them myself to a newer kernel version, just before the new diffs came out). I'll try and dig it up.

And "RAID-0 for upstream"? Bollocks. It "just worked" whatever interfaces were up (proven by it would even include the 3G PPP interface whenever it came up, and that only came up when we manually instructed it to connect as it cost money).

Not saying this isn't good software, but it's far from the problem the summary purports it to be, not a first by any means, and certainly not "new".

Comment Re:Yes, I agree (Score 2) 564

There speaks somebody who's not managed other systems, presumably.

"My Documents" is stupid when it's not even a document-storing account. Local Administrators having My Documents is stupid. Plus, then, they aren't My Anything. They are Company Documents.

That aside, I rename My Computer (or, nowadays, create a shortcut to the same) to This PC. It just makes more sense, whether you are at home or at work.

On top of that, the My Documents folder is full to the brim of "CompanyName" folders for every conceivable software manufacturer on any PC you've used for more than a day. Most of "My Documents" isn't close to "My" at all - I'd rather they weren't in there whatsoever, because everything thinks it has the right to throw junk into My Documents under a folder all of its own (because, at one point, My Saved Games etc. didn't exist).

On top of that, My Documents INCLUDES My Pictures. They're both types of documents. But, oh no, one defaults to one location and one to another. Stupid. Microsoft's fix is indexing and collation of all these places into one huge globular - but temporal - mess where you can have multiple copies of the same document/photo appear.

On top of THAT, if you ever browse a newly-setup server and go to the User areas (I separate Profiles and Documents, but some people don't), you'll see a thousand "My Documents". Because it's a fake name applied by desktop.ini and the like to any document folder. Want to get into a particular user? You have to turn off buckets of options, type their username in manually, or show another column - the REAL name of the folder.

So now we're breaking stuff out of Documents and putting it in Pictures - is that part of the user profile (and thus needs to be downloaded to every client they log onto), or is that a storage area that can be pushed off with Folder Redirection to a network share? Okay, what about My Data Sources? What about My Videos? What about My Saved Games? What about My Third-party Things That Some Program Created In The Profile Folder?

Can you redirect them all? Not easily. And why is Downloads outside of My Documents? Surely that's a bulk-storage area that you don't need to download to every client every time you logon?

It's a damn mess. Yet, in base AD, we have two options - Profile Path and Home Path (not even called My Documents!). Everything else is GPO and Folder Redirection.

So now when you backup your home laptop, you have to get not only My Documents but My Pictures, My Videos, My DVD-Rips created by some freeware, etc. too. Or you have to backup the entire User folder, which is a massive waste and includes - amongst other things - your registry which isn't necessarily portable.

To you it's "just a name". To a sysadmin it's a bunch of junk that's slowly getting out of hand and there's few sensible ways to organise it.

And yet all the user cares about is "magical, mystical special settings I should never play with" (Profile!), and "all my stuff" (which they can arrange how they want into subfolders of their own choosing) (Home!).

This PC, This Network, Profile and Home. Universal, not personal/business specific, not unbelievably twee and unnecessarily humanising, and been the basis of user accounts for decades.

But no, "My CDBP Projects" or whatever the ones that keep reappearing in my profile/document folders (at random it seems!) whenever I run some bit of freeware are the way to go...
