
Comment Re:Oh please (Score 1) 253

Sure, your PC can decrypt a file transfer at the speed of your WAN link, but a server has many more clients connected to it that it must handle at the same time. Take your 15% CPU for one transfer and multiply it by 10, 20, or even 1000 connections on the server. Now imagine a 1GB file sent to 10 clients: the server would need a lot of processing power to encrypt that 1GB separately for each client, or it could just send the file unencrypted and do no extra processing at all. For a really huge file there's no reason for the server to burn that much CPU on encryption when it isn't necessary at all.
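
A rough back-of-the-envelope sketch, just reusing the 15% figure and connection counts from above (illustrative numbers, not measurements):

# Hypothetical estimate: aggregate CPU cost of encrypting transfers server-side.
# The per-transfer figure is the 15% quoted above, not a benchmark.
cpu_per_transfer = 0.15          # fraction of one core per encrypted transfer
for clients in (10, 20, 1000):
    cores_needed = cpu_per_transfer * clients
    print(f"{clients:5d} clients -> roughly {cores_needed:.1f} cores busy just encrypting")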

Comment Re:Oh please (Score 1) 253

The problem isn't that you don't need the firewall (it is fairly mandatory to have one on any network connected to the internet), it's just that using the firewall to block services by port number is an outdated concept and easily bypassed by most software tunnelling over port 80. For example, you can block the ports commonly used by MSN Messenger at the firewall, but when MSN cannot connect over those ports it simply falls back to using port 80 which is almost never blocked by any firewall (since HTTP is fairly common). More advanced methods (proxies, IDS, stateful inspection) are required to block these kinds of services.
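
A minimal sketch of that fallback behaviour (the port list is illustrative; 1863 is the port MSN Messenger traditionally used, and real client logic is more elaborate):

import socket

def connect_with_fallback(host, ports=(1863, 443, 80), timeout=3):
    """Try the service's native port first, then fall back to ports a
    firewall almost never blocks. Illustrative only."""
    for port in ports:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue
    raise OSError("all ports blocked")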

Comment Re:Oh please (Score 1) 253

Um, as I recall Microsoft implemented WebDAV in Windows long before anyone else. It was included in Internet Explorer 5 as "Web Folders" under "My Computer" in Windows 98/Me (and possibly even under Windows 95). The WebDAV redirector has been included in every modern (NT-based) version of Windows from 2000 to Vista, and you can download and install KB907306 to get the original Web Folders client on newer Windows as well.

Over the years there have been a few security fixes to the Windows implementations that broke some features; it was never the easiest thing to use, and its popularity has declined. Most of the WebDAV functionality is being removed or replaced in current products (like Exchange Server, SharePoint and Hotmail), so as far as support from Microsoft goes it is dying out.

In my own experience WebDAV has worked flawlessly between even the oldest Windows clients (running software like DreamWeaver) and IIS. Other implementations on Apache were spottier (but that was some time ago).
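
For anyone who hasn't poked at it, WebDAV is basically HTTP with a few extra verbs; a directory listing is a single PROPFIND request. A sketch using the requests library (URL and credentials are placeholders):

import requests

# Depth: 1 asks for the collection plus its immediate children.
resp = requests.request(
    "PROPFIND",
    "https://example.com/webdav/folder/",
    headers={"Depth": "1"},
    auth=("user", "password"),
)
print(resp.status_code)   # 207 Multi-Status on success
print(resp.text[:500])    # XML multistatus body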

Comment Re:Oh please (Score 1) 253

IMHO no one really sends anything over FTP anymore that your ISP or anyone else would care about if they were sniffing your connection. People long since moved to BitTorrent and other P2P networks for sending illegal or questionable files to each other (which is mostly what your ISP and others will be trying to look at). Most companies just use Windows file sharing (SMB) or encrypted e-mail for sending any kind of private data. Clueless people are still sending files over Windows Live Messenger and regular e-mail.

If they're really trying to figure out what everyone is sending over FTP they're going to find the majority of transfers are simply mainstream files (like the Linux kernel or some Microsoft patches) which are from public download sites.

Comment Re:mod parent up (Score 1) 253

No. For servers where you don't care about security, FTP will transfer files. Given that the cost is sending your password en clair it's just not worth it.

Well yeah, the password is sent in plaintext, but that doesn't mean it's a gaping security hole in your servers; you just have to be intelligent about it. If someone sniffs your FTP password, they should only be able to use it to log in to your FTP account on the server (it shouldn't be the same password you use for other services like SSH). The FTP login should also only be able to modify a restricted set of directories that the user needs access to (like his home directory or something in /pub).

If you're allowing the user to use his system logon password to get into FTP, and if the user can write to the entire file system from FTP, then plain text passwords would be a big concern (if this is the case you should really be looking at other places where your security could be improved). Otherwise it should be no more of a security threat than plain text passwords sent over other services (like POP3) which also only allow access to a restricted set of objects (like the user's mail) and not the entire system.
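
As a sketch of that kind of locked-down account, here's roughly what it looks like with pyftpdlib, a third-party Python FTP server (the user name, password and path below are placeholders, and the password is deliberately FTP-only):

# Minimal sketch of a restricted FTP account: its own password, confined
# to a single directory tree.
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
authorizer.add_user("ftpuser", "ftp-only-password", "/srv/ftp/pub",
                    perm="elradfmw")   # read/write, but only inside that tree

handler = FTPHandler
handler.authorizer = authorizer
FTPServer(("0.0.0.0", 21), handler).serve_forever()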

Comment Re:I can't be the only one who thought of this... (Score 1) 296

I really hate that FF and Opera are promoted as so much more secure than IE, yet they don't take advantage of the new security models used in Chrome and IE8/9. It's like running your desktop machine on the internet without a firewall, because the OS vendor says that your listening services are secure. The low rights security model provides another layer of security which the exploit writers must deal with...and can therefore stop a larger range of attacks.
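
A crude illustration of that "extra layer" idea: handle untrusted content in a process that has already dropped the rights needed to write outside its sandbox. This is a POSIX-style sketch with placeholder UID/GID values; Windows browsers actually use integrity levels and restricted tokens, and real sandboxes are far more involved.

import os

def run_renderer_sandboxed(untrusted_work, uid=65534, gid=65534):
    """Fork a child, drop to an unprivileged user, then run the untrusted
    work there. A successful exploit in the child still can't write to
    files owned by the real user."""
    pid = os.fork()
    if pid == 0:                 # child
        os.setgid(gid)           # drop group first, then user
        os.setuid(uid)
        untrusted_work()
        os._exit(0)
    os.waitpid(pid, 0)           # parent just waits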

Today, while browsing with the IE9 beta, I suddenly saw the Java 2 Runtime Environment and Adobe Reader load, and I instantly knew a drive-by exploit was attempting to leverage them. Both products were the most current versions and were exploited successfully. The exploit failed to install its payload, however, because it was stuck in a low-privilege container it could not write outside of.

I think the main problem is that a low-rights model requires significant changes to the browser's design and code. Firefox and Opera also run on many different platforms, and it may be much harder to keep their codebases in sync if some platforms use the new model and others don't.

It also seems like a lot of browser development lately is focused on statistics and bragging rights for being the fastest or most compatible at something (like JavaScript or HTML5). Since there's no statistic that can be generated for "number of exploits prevented", they can all just say they're the most secure browser and no one can prove it either way.

Comment Re:10 Years On - The Dream Is Dead (Score 1) 473

Compared with XP, 7 just isnt really better. I cant think of a single new thing off hand (not that there arent any,

Windows 7 and Vista have vast improvements over XP. As a power Windows user I immediately noticed a ton of improvements in Vista and never wanted to go back to XP again...but this was not how most people perceived it. The problem is that they were mostly features that work "under the hood" and are only noticed by power users and developers. All regular users see is the eye candy, so they assume that's all there is to the new Windows version. For example, moving the audio drivers from the kernel to user mode prevents a lot of the blue screens that audio drivers caused in XP (especially Creative's!). Nothing about this jumps out at users except the fact that they need to install different audio drivers on Vista (which makes people think XP was better because they didn't need new drivers for their hardware).

Of course, they DID remove the ability to install a decent recovery console to the hard drive

Recovery Console was an absolute piece of garbage. It was very restricted and had a very limited set of commands, which often made it inefficient or even useless for fixing many Windows problems. The local install of Recovery Console sat alongside the Windows system on the hard disk, so it would often fail to boot along with the local Windows installation (e.g. a damaged CONFIG registry hive or a blue-screening disk driver). So it couldn't be used in many of the situations where you'd expect it to be useful. Most power Windows users would keep a PE disc which they would boot to perform any real repair work. Vista and 7 use WinRE, which lets you boot a full Windows system and run a broad subset of Windows tools and drivers (like the PE disc, but less complicated to get working). It can be installed locally, but that's more involved since it requires its own partition (unlike Recovery Console, which was just a directory on the local file system).

and completely ruined boot.ini in favor of the disaster that is bcd

Although boot.ini and NTLDR were easier to manage, they lacked many features needed to work with different platforms and newer operating systems. BCD can do a lot of awesome things that you couldn't do with boot.ini. I do agree with you that BCD is overly complex and difficult to work with (it's a typical Microsoft thing to overkill a design that way).
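
For what it's worth, the common BCD edits still boil down to a handful of bcdedit commands run from an elevated prompt (the description text here is just an example):

rem List the boot entries and their identifiers
bcdedit /enum
rem Clone the currently running entry under a new description
bcdedit /copy {current} /d "Windows 7 (copy)"
rem Make an entry the default and set the boot menu timeout (seconds)
bcdedit /default {current}
bcdedit /timeout 10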

and remove the ability to do a meaningful repair of a broken installation (it has to be initiated from a working installation-- no more boot from cd--> repair)

If Startup Repair cannot determine how to fix a boot problem it will pretty much perform a repair install as a last resort. I think a repair install leaves the system file versions in too unpredictable a state.

Oh, and the control panel is a disaster now, with Network Connections and adapters taking some 6 clicks to get to (or actually typing ncpa.cpl into start menu).

I also dislike that they buried Network Connections...it should at least be available from the right-click menu of the network system tray icon. Microsoft wants to make Windows easier for regular people to use, so a lot of things people relied on in previous Windows versions are now considered too advanced and have been buried away in the task panes of various control panels.

Just my 2 cents...

Comment Re:Shotwell instead of f-spot, almost Yay (Score 1) 473

Type-ahead selection has been a feature of GUI-based systems since at least Mac System 7.0. You start typing the file name and the selection jumps to the first object with a matching name. IIRC the Mac OS let you cycle the selection through the other matching names using Tab. In a way, type-ahead selection in the GUI works pretty similarly to command-line completion (you start typing the name and hit Tab to cycle through each matching entry).
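
A toy sketch of that behaviour (the file names are made up; real implementations also reset the prefix after a short pause):

# Typed characters accumulate into a prefix, the selection jumps to the
# first matching name, and Tab cycles through the other matches.
def type_ahead(names, prefix, tab_presses=0):
    matches = [n for n in sorted(names, key=str.lower)
               if n.lower().startswith(prefix.lower())]
    if not matches:
        return None
    return matches[tab_presses % len(matches)]

files = ["Makefile", "main.c", "main.h", "map.c", "README"]
print(type_ahead(files, "ma"))        # -> "main.c"
print(type_ahead(files, "ma", 1))     # -> "main.h"
print(type_ahead(files, "ma", 2))     # -> "Makefile"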

Comment Re:Symlinks in Windows Explorer? (Score 1) 473

No version of Windows Explorer lets you create symlinks. A .lnk does what most users want to do (create a link to a file or folder) without adding the extra complications involved in creating links at the file system level. Hard links/junctions/symlinks are handled at the file system driver level, which means higher-level drivers and applications see them as regular files or directories. They must explicitly check whether something is actually a file system link; if they don't, there are all kinds of problems with things like parent references and recursion. Some examples:

Under older versions of Windows, if you deleted a junction with Explorer it would recursively delete the contents of the folder the junction pointed to, because it appeared to Explorer like any other directory on the file system. Under Vista and 7 Explorer was updated to be aware of file system links and will only delete the symlink or junction reference itself. There are still issues in Vista, which you can see when you check the size of the "Windows" directory under its properties: the WinSxS directory contains a lot of hard links to files in system32, and Windows Explorer incorrectly adds the size of the linked files into its total. This means the space of a single file on the file system is counted multiple times, resulting in a massive reported directory size.
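
A sketch of what a link-aware size calculation has to do to avoid that double counting (this leans on Python exposing NTFS file IDs through st_ino/st_dev, which I believe newer versions do; treat it as illustrative):

import os

def du_dedup(root):
    """Sum file sizes under root, counting each (device, inode) pair once
    so hard links don't inflate the total; symlinked directories are not
    followed."""
    seen, total = set(), 0
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            key = (st.st_dev, st.st_ino)
            if key not in seen:
                seen.add(key)
                total += st.st_size
    return total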

NTFS file permissions that are inherited from parent objects will be applied differently when the target is opened through a file system link, because the path is completely different (and has a different set of parent objects to inherit from). This means a user can create a symlink to the C:\Windows\system32 folder in their Documents folder, and when it is opened it will inherit permissions from the user folder and not from the Windows folder (which requires administrative privileges to change).

So you can only create file system links through the command line using fsutil or mklink. Third-party tools can be used if you need a GUI. LinkShellExtension is freeware and integrates handling of file system links into Explorer so you can work with them properly and easily.
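
For reference, the built-in tools look like this from an elevated command prompt (the paths here are just examples):

rem Directory symbolic link, directory junction, and file hard link
mklink /D C:\ToolsLink C:\Tools
mklink /J C:\ProjectsJunction D:\Projects
mklink /H C:\notes-hardlink.txt C:\notes.txt
rem fsutil can create hard links as well
fsutil hardlink create C:\notes-hardlink2.txt C:\notes.txt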

Comment Re:Last time I looked (Score 1) 103

In a place like Dubai the airport is probably pretty busy and it's very hot on the ground. It gets hot enough in the cabin sometimes when you're stuck sitting on the runway for a while, and I'm sure the cargo compartment probably heats up pretty good too.

Of course when the plane is at cruising altitude the air around the plane is incredibly cold and the cargo compartment would be too if it's not heated...

Comment Re:Foo (Score 1) 345

You could always open and save 2003 documents in 2007; just use the drop-down box in the common dialog and change the file type to an older format (for Word, select *.doc instead of *.docx). Microsoft had to introduce the new file formats to clean them up and introduce new features. Microsoft has admitted that even they have trouble parsing .doc files, which have grown over many Word versions and increased in complexity. So they introduced the new, better-designed formats in 2007, and they got a bad rap while people transitioned to them. Office 2010 uses the 2007 formats with no conversion, and future Office versions will probably work the same way.

Unless OSS software has some magic model where document formats never need to be updated, it will eventually run into the same problem. You have to transition users at some point.

Comment Re:Let's take a stab, shall we? (Score 1) 199

5,579,517 and 5,758,352 Common name space for long and short filenames. Let's write a file system that contains long file names. But we need to let people use short ones too. I know, we'll put them in the same namespace! Obvious

Actually, if you read the patent it goes into excruciating detail about how (Windows 95) long file names are implemented on FAT. You see...they couldn't break file listings if the file system was used under older MS-DOS versions. So they had to fit the long file name data (which IIRC was also Unicode) into the existing 8.3 file name structures in the FAT, in a way that Windows 95 would notice and be able to read but MS-DOS would ignore. The method they developed is different from the other methods that were in use, and is sort of clever and retarded at the same time.
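
Roughly, each piece of the long name lives in an extra directory entry whose attribute byte is 0x0F (read-only + hidden + system + volume label), a combination old MS-DOS skips over, and each such entry carries a checksum of the real 8.3 entry so Windows 95 can stitch the pieces back together. A sketch of that checksum (the short name below is a made-up example):

LFN_ATTR = 0x0F   # READ_ONLY | HIDDEN | SYSTEM | VOLUME_ID

def short_name_checksum(short_name_11_bytes):
    """Checksum of the 11-byte 8.3 name stored in every LFN entry."""
    s = 0
    for b in short_name_11_bytes:
        s = (((s & 1) << 7) + (s >> 1) + b) & 0xFF
    return s

# "LONGFI~1TXT" is how "longfilename.txt" might be squeezed into the
# classic 8+3 field (example only).
print(hex(short_name_checksum(b"LONGFI~1TXT")))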

Everyone who makes FAT devices ends up needing to use this method of storing long file names. People will most likely use the device on Windows, which won't recognize LFN data stored in other ways (like OS/2's), so it's pretty much the standard.

It should be noted, however, that you don't violate the patent unless you store the long file name data to the file system in this manner. So if you let Windows do the writing of the LFN data, your FAT device isn't affected by the patent (for example, a camera that writes only 8.3 names when snapping pictures and leaves it to the user to rename the files to long names in Windows).

Comment Re:More like 30 years buying cutting-edge software (Score 1) 199

Windows NT is Microsoft's own design (with no parts of OS/2) and was inspired by VMS. Microsoft realized OS/2 was being driven in a bad direction by IBM and hired Dave Cutler (and others from the Digital VMS team) to design a new OS. Cutler didn't like any of the OS/2 design and didn't use it in NT.

IBM wanted OS/2 to work on 286 systems under 16-bit protected mode, which hampered its multitasking and its development. IBM didn't care about portability or supporting many different pieces of hardware (with a flexible driver model). They couldn't make the Windows API work correctly under OS/2, so they broke compatibility with Windows and then demanded Microsoft change their APIs to match.

NT used a very clean design which separates and abstracts everything nicely. Microsoft was able to make NT compatible with OS/2 by building things like the OS/2 subsystem (for running OS/2 applications) and HPFS IFS driver. There isn't much else in NT which borrows from OS/2 which is why OS/2 struggled and failed while NT continues to be used today.

NTFS is also quite different from HPFS; they really only share the same partition type ID. HPFS was designed to replace FAT and didn't have any of the extensibility and enterprise features that NTFS was designed around. HPFS doesn't support journaling, compression, file/folder permissions, or other features that were in the original NTFS, and NTFS 3.1 adds even larger differences that are completely beyond the capabilities (and design) of HPFS.

Comment Re:Obvious shit.. WHY??!?!@ WHY !?!?! (Score 1) 199

It might be pretty obvious right now, but it wasn't so obvious a decade ago. When BlackBerry shipped a full web browser on their phones it was unheard of (data rates were so expensive it would cost a fortune to use). Now every smartphone has a full web browser...and people forget that older devices never had them. Smartphones themselves are a pretty new concept when you look back, too.

Think back to the shitty phones we used a decade ago when these patents were filed. The internet on a phone was only accessed through WAP and was expensive as hell (you would never use it for trivial shit like calendar events or checking e-mail). E-mail was all done via SMS. You were lucky if you could sync anything else on the phone using a data cable (never mind over the GSM network). PDAs were still king and did the "smart phone" style stuff, but they used a cradle to sync, not cellular networks.

So some of the patents may be somewhat valid for the time... I'm not sure about the battery strength patent though (which is a really ancient concept for mobile phones).

Comment Re:To Earn Respect Accumulate Knowledge (Score 1) 53

Part of the efficiency of C is that it is closer to the "bare metal" than high-level languages like Java and C#. If you don't have a proper understanding of how C works it will totally bite you in the ass. Things like memory management are a nightmare when handled improperly in a C program, while they're taken care of automatically (by garbage collection) in higher-level languages.

Many higher-level language compilers can produce fairly efficient code and can be aware of intricate architecture details that the programmer may not know about. For example, gcc is aware of instructions that can stall the pipeline and will reorder them in its output to account for this. Someone writing assembly code by hand may not be aware of this, and even if they are, they're doing a lot of work that gcc does automatically. The high-level runtime environments used by Java and C# can also be improved over time, and thus improve the efficiency of all programs that use them.

It's 2010 and programming should be abstracted from the bare metal by high level languages and libraries. The last thing a guy writing a web app needs to do is start worrying about memory allocation, pointers and string buffers when he needs to focus on sanitizing and safely handling the input.

While C allows the program to work more efficiently, inefficient languages like C# and Java allow programmers to get stuff done much more efficiently. There's just a point where you have to accept inefficient things like XML and make the computer do the extra work instead of the programmer. C's "thousand times less servers" really doesn't mean much anyway these days with everything being virtualized.
