
Comment Re:I can't be the only one who thought of this... (Score 1) 296

I really hate that FF and Opera are promoted as so much more secure than IE, yet they don't take advantage of the new security models used in Chrome and IE8/9. It's like running your desktop machine on the internet without a firewall, because the OS vendor says that your listening services are secure. The low rights security model provides another layer of security which the exploit writers must deal with...and can therefore stop a larger range of attacks.

Today while browsing with the IE9 beta I suddenly saw the Java 2 Runtime Environment and Adobe Reader load, and I instantly knew that a drive-by exploit was attempting to leverage them. Both products were the most current versions and were exploited successfully. The exploit failed to install its payload, however, because it was stuck in a low privilege container which it could not write outside of.
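For the curious, here's a minimal sketch (untested, assumes a Vista-or-later Windows SDK) of how a process can ask which integrity level it was started at; IE's Protected Mode runs the content process at the low level, so even a successful exploit can't write outside its container:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        HANDLE token;
        DWORD len = 0;

        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token))
            return 1;

        /* First call just reports how big the TOKEN_MANDATORY_LABEL buffer must be. */
        GetTokenInformation(token, TokenIntegrityLevel, NULL, 0, &len);
        TOKEN_MANDATORY_LABEL *label = malloc(len);
        if (!label || !GetTokenInformation(token, TokenIntegrityLevel, label, len, &len))
            return 1;

        /* The last sub-authority of the label SID is the integrity RID:
         * 0x1000 = low (Protected Mode), 0x2000 = medium, 0x3000 = high. */
        DWORD rid = *GetSidSubAuthority(label->Label.Sid,
                                        *GetSidSubAuthorityCount(label->Label.Sid) - 1);

        printf(rid <= SECURITY_MANDATORY_LOW_RID
                   ? "running in a low-integrity (sandboxed) process\n"
                   : "running at medium or higher integrity\n");

        free(label);
        CloseHandle(token);
        return 0;
    }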

I think the main problem is that a low rights model requires significant changes to the browser's design and code. Firefox and Opera also run on many different platforms, and it may be much more difficult to keep their codebase in sync if some platforms use the new model and others don't.

It also seems like a lot of browser development lately is focused on statistics and bragging rights for being the fastest or most compatible at something (like JavaScript or HTML5). Since there's no statistic that can be generated for "number of exploits prevented", they can all just say they are the most secure browser and no one can prove it either way.

Comment Re:10 Years On - The Dream Is Dead (Score 1) 473

Compared with XP, 7 just isnt really better. I cant think of a single new thing off hand (not that there arent any,

Windows 7 and Vista have vast improvements over XP. As a power Windows user I immediately noticed a ton of improvements in Vista and never wanted to go back to XP again...but this was not how most people perceived it. The problem is that they were mostly features that work "under the hood" and are only noticed by power users and developers. All regular users see is the eye candy, so they assume that's all there is to the new Windows version. For example, moving the audio drivers from the kernel to user mode prevents a lot of blue screens that were caused by audio drivers in XP (especially from Creative!). Nothing about this jumps out at users except the fact that they need to install different audio drivers on Vista (which makes people think XP was better because they didn't need new drivers for their hardware).

Of course, they DID remove the ability to install a decent recovery console to the hard drive

Recovery Console was an absolute piece of garbage. It was very restricted and had such a limited set of commands that it was often inefficient or even useless for fixing many Windows problems. The local install of Recovery Console sat alongside the Windows system on the hard disk, so it would often fail to boot along with the local Windows installation (e.g. a damaged CONFIG registry hive or a blue-screening disk driver). So it couldn't be used in many of the situations where you'd expect it to be useful. Most power Windows users would keep a PE disc which they would boot to perform any real repair work. Vista and 7 use WinRE, which lets you boot a full Windows environment and run a wide range of Windows tools and drivers (like the PE disc, but less complicated to get things working). It can be installed locally, but that's more involved since it requires its own partition (unlike Recovery Console, which was just a directory on the local file system).

and completely ruined boot.ini in favor of the disaster that is bcd

Although boot.ini and NTLDR were easier to manage, they lacked many features needed to work with different platforms and newer operating systems. BCD can do a lot of awesome things that you couldn't do with boot.ini. I do agree with you that BCD is overly complex and difficult to work with (it's typical of Microsoft to over-engineer a design that way).

and remove the ability to do a meaningful repair of a broken installation (it has to be initiated from a working installation-- no more boot from cd--> repair)

If Startup Repair cannot determine how to fix a boot problem it will pretty much perform a repair install as a last resort. I think a repair install leaves the system file versions in too unpredictable a state anyway.

Oh, and the control panel is a disaster now, with Network Connections and adapters taking some 6 clicks to get to (or actually typing ncpa.cpl into start menu).

I also dislike that they buried Network Connections...it should at least have been available from the right-click menu of the system tray icon. Microsoft wants to make Windows easier for regular people to use, so a lot of things people used in previous Windows versions are now considered too advanced and have been buried away in the task panes of various control panels.

Just my 2 cents...

Comment Re:Shotwell instead of f-spot, almost Yay (Score 1) 473

Type-ahead selection has been a feature of GUI-based systems since at least Mac System 7.0. You start typing the file name and the selection jumps to the first object with a matching name. IIRC the MacOS allowed you to cycle the selection through the other matching object names using Tab. In a way, type-ahead selection in the GUI works pretty similarly to command-line completion (you start typing the name and hit Tab to cycle through each matching entry).
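For what it's worth, the matching logic is trivial. Here's a hypothetical little sketch (the names and behaviour are just an illustration, not any particular toolkit's implementation) of prefix matching with Tab cycling through the matches:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Type-ahead selection sketch: find the next item whose name starts with
     * the typed prefix, searching forward from the current selection and
     * wrapping around, the way Tab cycled between matches in the old Finder. */
    static int has_prefix(const char *name, const char *prefix)
    {
        while (*prefix)
            if (tolower((unsigned char)*name++) != tolower((unsigned char)*prefix++))
                return 0;
        return 1;
    }

    static int next_match(const char *names[], int count, const char *prefix, int current)
    {
        for (int i = 1; i <= count; i++) {
            int idx = (current + i) % count;
            if (has_prefix(names[idx], prefix))
                return idx;
        }
        return -1; /* nothing matches the prefix */
    }

    int main(void)
    {
        const char *files[] = { "Makefile", "readme.txt", "report.doc", "resume.pdf" };
        int sel = next_match(files, 4, "re", -1);   /* typing "re" selects readme.txt */
        sel = next_match(files, 4, "re", sel);      /* Tab moves on to report.doc */
        printf("selected: %s\n", files[sel]);
        return 0;
    }

Command-line completion does essentially the same thing, just against the files in the current directory instead of the icons in a window.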

Comment Re:Symlinks in Windows Explorer? (Score 1) 473

No version of Windows Explorer lets you create symlinks. A .lnk does what most users want (create a link to a file or folder) without the extra complications involved in adding links at the file system level. Hardlinks/junctions/symlinks are handled at the file system driver level, which means higher-level drivers and applications see them as regular files or directories. They must explicitly check whether something is actually a file system link; if they don't, there are all kinds of problems with things like parent references and recursion. Some examples:

Under older versions of Windows, if you deleted a junction with Explorer it would recursively delete the contents of the folder the junction pointed to, because it looked to Explorer like any other directory on the filesystem. Under Vista and 7, Explorer was updated to be aware of file system links and will only delete the symlink or junction reference itself. There are still issues in Vista, which you can see when you check the size of the "Windows" directory in its properties. The WinSxS directory contains a lot of hardlinks to files in system32, and Windows Explorer incorrectly adds the size of the linked files into its total calculation. This means the space of a single file on the file system is counted multiple times, resulting in a massive reported directory size.

NTFS file permissions inherited from parent objects are applied differently when an object is opened through a file system link, because the path is completely different (and has a different set of parent objects to inherit from). This means a user can create a symlink to the C:\Windows\system32 folder in their Documents folder, and when it is opened it will inherit permissions from the user folder and not from the Windows folder (which requires administrative privileges to change).

So you can only create file system links through the command line using fsutil or mklink. Third-party tools can be used if you need a GUI. LinkShellExtension is freeware and integrates handling of file system links into Explorer so you can work with them properly and easily.
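If you'd rather do it programmatically, here's a rough sketch using the Win32 calls that mklink wraps (the paths are made up for illustration, and creating symlinks requires the symlink privilege, so run it elevated on Vista/7):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Roughly the programmatic equivalent of
         * "mklink /D C:\Users\Me\Documents\sys32 C:\Windows\system32". */
        if (!CreateSymbolicLinkW(L"C:\\Users\\Me\\Documents\\sys32",
                                 L"C:\\Windows\\system32",
                                 SYMBOLIC_LINK_FLAG_DIRECTORY))
        {
            printf("CreateSymbolicLink failed: %lu\n", GetLastError());
            return 1;
        }

        /* An application walking the tree should check for the reparse point
         * attribute before recursing, so it treats the entry as a link rather
         * than as an ordinary directory full of files to delete or count. */
        DWORD attrs = GetFileAttributesW(L"C:\\Users\\Me\\Documents\\sys32");
        if (attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_REPARSE_POINT))
            printf("this entry is a symlink/junction, not a real directory\n");

        return 0;
    }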

Comment Re:Last time I looked (Score 1) 103

In a place like Dubai the airport is probably pretty busy and it's very hot on the ground. It gets hot enough in the cabin sometimes when you're stuck sitting on the runway for a while, and I'm sure the cargo compartment probably heats up pretty good too.

Of course when the plane is at cruising altitude the air around the plane is incredibly cold and the cargo compartment would be too if it's not heated...

Comment Re:Foo (Score 1) 345

You could always open and save 2003 documents in 2007; just use the drop-down box in the common dialog and change the file type to an older format (for Word, select *.doc instead of *.docx). Microsoft had to introduce the new file formats to clean them up and introduce new features. Microsoft has admitted that even they have trouble parsing .doc files, as the format has grown over many Word versions and increased in complexity. So they introduced the new, better-designed formats in 2007, and they got a bad rap while people transitioned to them. Office 2010 uses the 2007 formats with no conversion, and future Office versions will probably work the same way.

Unless OSS software has some magic model where document formats never need to be updated, it will eventually run into the same problem. You have to transition users at some point.

Comment Re:Let's take a stab, shall we? (Score 1) 199

5,579,517 and 5,758,352 Common name space for long and short filenames. Let's write a file system that contains long file names. But we need to let people use short ones too. I know, we'll put them in the same namespace! Obvious

Actually, if you read the patent it goes into excruciating detail about how Windows 95 long file names are implemented on FAT. You see...they couldn't break file listings if the file system was used in older MS-DOS versions. So they had to fit the long file name data (which IIRC was also Unicode) into the existing 8.3 file name structures in the FAT, in a way that Windows 95 would notice and be able to read but MS-DOS would ignore. The method they developed is different from the other methods that were in use, and is sort of clever and ridiculous at the same time.
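Roughly, as I understand the scheme (treat this as a sketch from memory, not the patent text), each long name is chopped into 13-character UTF-16 pieces and each piece is stored in a fake 8.3 directory entry whose attribute byte is 0x0F, a combination old MS-DOS versions skip over:

    #include <stdint.h>

    /* Sketch of a VFAT long-file-name directory entry (32 bytes, the same
     * size as a normal 8.3 entry).  MS-DOS ignores entries with the 0x0F
     * attribute combination, while Windows 95 reassembles the UTF-16 name
     * pieces from a chain of these entries. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  sequence;       /* order of this piece in the chain; bit 6 marks the last piece */
        uint16_t name1[5];       /* characters 1-5 of the long name (UTF-16) */
        uint8_t  attributes;     /* always 0x0F (read-only|hidden|system|volume-label) */
        uint8_t  type;           /* always 0 */
        uint8_t  checksum;       /* checksum of the matching 8.3 short name */
        uint16_t name2[6];       /* characters 6-11 */
        uint16_t first_cluster;  /* always 0, so old disk tools don't try to "fix" it */
        uint16_t name3[2];       /* characters 12-13 */
    } vfat_lfn_entry;
    #pragma pack(pop)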

Everyone who makes FAT devices needs to use this method of storing long file names: people will most likely use the device on Windows, which won't recognize LFN data stored in other ways (like OS/2's), so it's pretty much the standard.

It should be noted, however, that you don't violate the patent unless you store the long file name data to the file system in this manner. So if you let Windows do the writing of the LFN data, your FAT device is not affected by the patent (for example, a camera that writes 8.3 names when snapping pictures and leaves it to the user to rename the files to long names in Windows).

Comment Re:More like 30 years buying cutting-edge software (Score 1) 199

Windows NT is Microsoft's own design (with no parts of OS/2) and was inspired by VMS. Microsoft realized OS/2 was being driven in a bad direction by IBM and hired Dave Cutler (and others from the Digital VMS team) to design a new OS. Cutler didn't like any of the OS/2 design and didn't use it in NT.

IBM wanted OS/2 to work on 286 systems under 16-bit protected mode, which hampered its multitasking and development. IBM didn't care about portability or supporting many different pieces of hardware (with a flexible driver model). They couldn't make the Windows API work correctly under OS/2, so they broke compatibility with Windows and then demanded Microsoft change their APIs to match.

NT used a very clean design that separates and abstracts everything nicely. Microsoft was able to make NT compatible with OS/2 by building things like the OS/2 subsystem (for running OS/2 applications) and an HPFS IFS driver. There isn't much else in NT that borrows from OS/2, which is why OS/2 struggled and failed while NT continues to be used today.

NTFS is also quite different from HPFS; they only share the same partition type ID. HPFS was designed to replace FAT and didn't have the extensibility and enterprise features that NTFS was designed around. HPFS doesn't support journaling, compression, file/folder permissions and other features that were in the original NTFS, and NTFS 3.1 adds even larger differences which are completely beyond the capabilities (and design) of HPFS.

Comment Re:Obvious shit.. WHY??!?!@ WHY !?!?! (Score 1) 199

It might seem pretty obvious right now but it wasn't so obvious a decade ago. When BlackBerry shipped a full web browser on their phone it was unheard of (data rates were so expensive it would cost a fortune to use). Now every smartphone has a full web browser...and people forget that older devices never had them. Smartphones themselves are a pretty new concept when you look back, too.

Think back to the shitty phones we used a decade ago when these patents were filed. The internet on a phone was only accessed through WAP and was expensive as hell (you would never use it for trivial shit like calendar events or checking e-mail). E-mail was all done via SMS. You were lucky if you could sync anything else on the phone using a data cable (never mind over the GSM network). PDAs were still king and did the "smart phone" style stuff, but they used a cradle to sync, not cellular networks.

So some of the patents may be somewhat valid for the time... I'm not sure about the battery strength patent though (which is a really ancient concept for mobile phones).

Comment Re:To Earn Respect Accumulate Knowledge (Score 1) 53

Part of the efficiency of C is that it is closer to the "bare metal" than high-level languages like Java and C#. If you don't have a proper understanding of how C works then it will totally bite you in the ass. Things like memory management are a nightmare when done improperly in a C program, while garbage collection handles them automatically in higher-level languages.

Many higher-level language compilers can produce fairly efficient code and can be aware of intricate architecture details that the programmer may not know about. For example, gcc is aware of instructions that can stall the pipeline and will reorder them in its output to account for this. Someone writing assembly code by hand may not be aware of this, and even if they are, they're doing a lot of work that gcc does automatically. The runtime environments used by Java and C# can also be improved over time, which improves the efficiency of all programs that run on them.

It's 2010 and programming should be abstracted from the bare metal by high-level languages and libraries. The last thing a guy writing a web app needs to do is start worrying about memory allocation, pointers and string buffers when he needs to focus on sanitizing and safely handling the input.
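Even something as dumb as joining two strings shows the bookkeeping involved. Here's a tiny example (plain C, nothing exotic) of what that web-app guy gets to skip entirely in a garbage-collected language:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The bookkeeping a C programmer has to get right by hand, which a
     * garbage-collected language handles automatically: size the buffer
     * (including the terminator), check the allocation, and free it later.
     * Forget the +1 and you have an overflow; forget the free() and you leak. */
    int main(void)
    {
        const char *first  = "Hello, ";
        const char *second = "world";

        char *joined = malloc(strlen(first) + strlen(second) + 1);
        if (joined == NULL)
            return 1;

        strcpy(joined, first);
        strcat(joined, second);
        printf("%s\n", joined);

        free(joined);   /* no GC to clean up after us */
        return 0;
    }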

While C allows the program to run more efficiently, less efficient languages like C# and Java allow programmers to get stuff done much more efficiently. There's just a point where you have to accept inefficient things like XML and make the computer do the extra work instead of the programmer. C's "thousand times less servers" really doesn't mean much these days anyway, with everything being virtualized.

Comment Re:Perhap the kernel's size is becoming too unweil (Score 2, Informative) 274

Yeah, but none of those exploits is in the Windows 7 kernel itself (which is rarely ever patched). They'll all be related to other components distributed with the operating system, which could be many things including Windows Media Player and IIS. If you want to compare the number of Linux patches with Windows Updates, you would need to compare the Windows patches against the patches for a whole Linux distro, not just the Linux kernel itself.

Comment Re:Why is there anything 32 bit on a 64 bit server (Score 1) 274

Unless you need the big address space and MOST apps don't - 32 bit code runs faster.

The address space is needed because the total virtual address space has to be divided among things like the hardware, the kernel and user space. You already see this when you have 4GB of memory on a 32-bit Intel system and the hardware consumes roughly 1GB of that space for its addressing. Windows further splits the space into 2GB for the kernel/hardware and 2GB for user space. So no application can ever address more than 2GB of memory (paged and physical combined), and the kernel shares its 2GB with the roughly 1GB of hardware address ranges (leaving about 1GB for itself).

On 64-bit the addressable space is far larger and the hardware address ranges can be located anywhere within the massive (terabytes-sized) virtual address space. So the roughly 1GB of hardware address ranges doesn't have to fit inside the same 4GB that the physical memory uses, and you can access the full 4GB of RAM.
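A quick way to see the difference for yourself (a small Windows-specific sketch) is to print the user-mode address range the system reports. Built as a 32-bit process it tops out just under 2GB; built as 64-bit the upper bound is in the terabytes:

    #include <windows.h>
    #include <stdio.h>

    /* Print the user-mode virtual address range Windows gives this process. */
    int main(void)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("user address space: %p - %p\n",
               si.lpMinimumApplicationAddress,
               si.lpMaximumApplicationAddress);
        return 0;
    }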

amd64 has many more registers and a better instruction set than i386, so it should execute both 32- and 64-bit code as fast as, if not faster than, a 32-bit system. The performance gain might not be massive if software isn't geared for the newer opcodes or registers, but by all rights there is no reason that 64-bit code should run slower than 32-bit code.

It's also smaller - uses less disk, uses less memory.

You don't allocate memory or disk space in blocks of 32 or 64 bits, so the savings aren't really affected by the larger pointer size. If you allocate memory in Windows, the memory manager reserves address space in 64K-aligned chunks (the allocation granularity), even though pages themselves are only 4K. The extra 4 bytes that a 64-bit pointer takes over a 32-bit one are insignificant next to that.
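And only pointers (and a few pointer-sized types) actually grow when you rebuild for 64-bit; a trivial check (plain C) shows how small the per-value overhead is:

    #include <stdio.h>

    /* An int stays 4 bytes on both 32- and 64-bit Windows; only the pointer
     * doubles from 4 to 8 bytes, which is noise next to 64K allocations. */
    int main(void)
    {
        printf("sizeof(int)   = %lu\n", (unsigned long)sizeof(int));
        printf("sizeof(void*) = %lu\n", (unsigned long)sizeof(void *));
        return 0;
    }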

Comment Re:Patch (Score 1) 274

I might be wrong, but it's probably because the only character codes you can fit into a single bit of memory are NUL and SOH. ASCII characters are stored in 8 bits of memory per character, which is what char was originally designed for (storing a character).

Comment Re:Sure. More the merrier (Score 1) 206

As Windows continues to grow exponentially larger and slower (Win7 requires a 40 GB partition), people may eventually throw up their hands and install a more sensible alternative that does what an operating system is supposed to do: run other programs, and that's all.

Except that most users don't just want an OS to run programs; they want it to have things like a web browser and a media player right out of the box. Why do you think there's still a market for preloaded OEM software like Norton Antivirus? Because people don't want to go select and purchase an AV product, they want it to already be there when they buy the computer.

I don't use Windows Media Player but I like that it's installed with the OS in case I need it (and there are times when you do). In the old days I would set up Windows 95 and 98 systems for a school, and we had to manually install so many things on each system it was just crazy. I'm grateful that now I don't have to select and manually install a media player, movie editor or e-mail client on every Windows computer I set up for people.

The days of MS-DOS are long gone and we have to accept that, to most people, the OS is no longer just a kernel and a shell. There is more demand from average users for built-in functionality than there is for a lean, customized system.
