
Comment: Re:in root? Am I missing something? (Score 2) 215

by benjymouse (#47335137) Attached to: Exploiting Wildcards On Linux/Unix

Er.. most of the exploits are only possible if one is root and/or the directory is writable for some other user (e.g. leon in this case).

Since one is root, one can do anything anyway so why bother with all this misdirection? If someone leaves world writable directories lying around (especially without the sticky bit set), then they deserve everything they get. Or is this some kind of "trap the (completely) unwary sysadmin" wake up call? If I see some strange named file (especially if I know I didn't put it there) I would investigate very, very carefully what is going on. I can't be alone in this - surely?

The point is that this can be used to trick a root user into issuing what he believes is a safe command. The combination of a text-reinterpreting shell and specially crafted file names means that a seemingly innocent command can end up granting the attacker (the creator of the crafted files) root access on the system.

It doesn't help that some seemingly harmless (read-only) commands like find pack a number of dangerous options that can be used to execute shell scripts or commands, or to remove files.
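A minimal sketch of the trick, run entirely inside a throwaway mktemp directory (GNU coreutils assumed; the filenames are illustrative):

```shell
# Sketch of the wildcard trick in a scratch directory (illustrative only).
dir=$(mktemp -d)
cd "$dir"

mkdir important_data
touch important_data/file.txt
touch -- '-rf'        # attacker-created file literally named "-rf"

# The admin types a "safe" command, but the shell expands * BEFORE rm
# sees it, so rm receives the file name "-rf" as option flags rather
# than as an operand:
rm *

ls -A                 # the "-rf" decoy remains; important_data is gone
```

The same shell-side expansion is why find's -exec and -delete, or tar's --checkpoint-action, become dangerous when file names in the directory are attacker-controlled.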

Comment: Re:Even more work for spies! (Score 1) 99

And to think that just the other day Microsoft were complaining that the NSA fallout was getting worse. Are they hoping to swamp them with simply too much data on Microsoft's servers?

So, would you expect Microsoft to hold its breath while the lawmakers pull their collective behinds together and rein in the run-amok NSA? Should they stop doing business while they wait for the political system?

Comment: Re:Under the hood (Score 1) 187

by benjymouse (#47155313) Attached to: Windows 8.1 Finally Passes Windows 8 In Market Share

There's heaps of us who like Windows 8.x/2012, but Slashdot has its mind made up and every time there's a Windows 8 submission these idiots bring out their pitchforks while people like us just ignore it. So no, you're not the only one.

At this stage it looks like Microsoft could patch in a new Start Menu, throw in the option to use oh I don't know, KDE's menu or whatever your DE of choice is these days, put in a tool that converts fucking lead to gold, and donate 50% of their net profit to NASA, and people here would still hate it.

This.

Comment: Re:12.64 percent in only 17 months (Score 2) 187

by benjymouse (#47155161) Attached to: Windows 8.1 Finally Passes Windows 8 In Market Share

I seem to recall reading somewhere that the Windows kernel, UI, and default browser all share essential low-level processes, and therefore could never ever possibly be decoupled.

However, that is wrong.

The Windows kernel is an incredibly modular piece of work, much more so than Unix/Linux. In fact, the "Win32 subsystem" is just *one* possible subsystem mapped onto a very generic kernel. From the start, the core was designed with the Win32 subsystem as just one of a number of subsystems, and it originally also included a POSIX subsystem and an OS/2 subsystem. Note that these were NOT emulation layers, but full-blown "peers" of the Win32 subsystem. That design is still very much alive within the kernel.

The confusion with respect to the "browser in the kernel" is at least partly Microsoft's own fault. During the browser anti-trust trials they claimed that Internet Explorer could not be unbundled from the core OS - until someone actually did unbundle it and demonstrated as much during the trials.

As in virtually all OSes today, some of the core GUI administration components use HTML as the rendering mechanism for at least parts of the user interface. Hence an HTML renderer is part of the core OS (unless a GUI-less server SKU is used). However, an HTML renderer being distributed as part of the *core* OS does NOT mean that it executes in kernel space. This is such a mindbogglingly stupid assertion that whenever someone brings up that claim I get suspicious that they actually know better, but find pleasure in throwing it out there and watching the immediate condemnation and ridicule.

The HTML renderer is of course the same one as used in Internet Explorer (Trident, IIRC). That *still* does not mean that Internet Explorer is "part of the OS" - it merely means that Internet Explorer (the browser) uses the same rendering library as the core components, in the same way that an XML parser can be used by the browser as well as by the core OS without it running in kernel space.

Comment: Re:Bye-Bye Java (Score 2) 303

Name a platform that is end-to-end not proprietary in any way, shape or form.

Even if such a platform exists, how does that preclude Microsoft from suing? Remember, the thesis here is that Microsoft would disregard the licenses already granted for C#, the .NET Framework, the compilers etc., and just sue to exhaust your funds. Why couldn't they claim that you infringed an algorithm (or whatever) even if you were using Java or Python? After all, the premise is that they are *so* malicious that they will sue even when they have no legal standing.

The whole "Microsoft will sue!" is nothing but FUD.

In reality - because of the promissory estoppel created by the Community Promise - users of .NET and any other technology under the Community Promise are much better protected than users of the alternatives. This is because promissory estoppel can be used to have a lawsuit dismissed outright.

Comment: Re:Trolling? (Score 1) 270

by benjymouse (#46729341) Attached to: The New 'One Microsoft' Is Finally Poised For the Future

Microsoft SHOULD have taken MVC design to its next logical level, and built upon .net instead of throwing it all away in the blighted name of Metro... common model and controller code across all Windows platforms, with different views for desktop, tablet, and maybe mobile devices whose displays are too small to treat like a tablet. They could have compiled the code to CLR, then had the installer itself compile it to native code optimized for the local platform. But no... they just *had* to ruin a good thing, and try to ram touch down everybody's throats.

This does not make sense to me at all. While I agree that's the way they should have gone (IMHO using MVVM instead of MVC), it is almost exactly the way they did go. They didn't have all their ducks in a row in the first iteration, but it was the plan all along. They said so at the time.

You did not believe the FUD about Microsoft abandoning .NET, did you? .NET is very, very much in the game. At //Build/ Microsoft just announced Universal Apps.

MSDN has documentation

With universal apps you build one app for phones, tablets and laptops/desktops. The same app can share views and viewmodels (MVVM) across form factors, or use completely different views/viewmodels. A view/viewmodel can also "adapt" to the form factor - showing only primary, essential information on phones, more on tablets, and secondary/tertiary information as well on desktops.

When deployed, universal apps are deployed as IL/CLR code. When a device installs an app, the cloud service performs the compilation and serves a native app to the device, compiled for its architecture, memory constraints and core count. The delivery system only serves the resources used by the specific device; i.e. even if the universal app is distributed with extensive resources for desktop users, the package downloaded to a phone will be stripped of those resources.

Metro was never mutually exclusive with .NET. Microsoft made plenty of blunders, both with their messaging on Metro and with the initial Dr. Jekyll-and-Mr.-Hyde split personality of Windows 8. But they have been consistent in their messaging on .NET and apps.

Comment: I call BS (Score 3, Informative) 270

by benjymouse (#46729165) Attached to: The New 'One Microsoft' Is Finally Poised For the Future

The links have long disappeared due to DMCA takedowns...

No, they haven't. You just do not want Slashdot readers to read them, because they do not say what you claim.

http://www.internetnews.com/de...

Quote from that article:

One technology enthusiast at Web site kuro5shin noted many of the hacks (additions) to the code base included some colorful comments and creative use of adjectives in noting programming changes.

In this case, the reviewer concluded the code was generally "excellent." But he also noted the many additions to the Windows code to be almost universally compatible with previous Windows versions. And third-party software has "clearly come at a cost, both in developer-sweat and the elegance (and hence stability and maintainability) of the code."

GP is correct, those who took a look at it indeed came away with the impression that it was quite pristine.

You, OTOH, are just lying.

Comment: Re:ASLR anyone? hype? (Score 2) 303

by benjymouse (#46694517) Attached to: OpenSSL Bug Allows Attackers To Read Memory In 64k Chunks

I've actually wondered about this too. Read overruns will crash a program just as badly as write overruns; Read AV in Windows [NT], Segmentation Fault in *nix (General Protection Fault in legacy Windows), etc. reading memory will tell you enough about the layout of memory to cherry-pick addresses pretty well, and probably to determine the ASLR mask, but you're still going to have the issue of what, within the heap, is allocated. You could probably do OK by starting from the stack (which is in a predictable enough location) and working from there, I guess?

ASLR was invented as a mitigation against "return-oriented programming", which was itself a way to get around DEP/NX. As such, ASLR targets executable memory, making the addresses of candidate executable code fragments hard to guess. ASLR does not randomize data segments - there is no need, since the original intent was to make executable locations hard to guess. Non-executable locations were not the problem ASLR tried to solve.
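On a Linux box with ASLR enabled (the default on most distributions), this randomization is easy to observe by printing an executable (r-xp) mapping of a fresh process twice:

```shell
# Print the first executable (r-xp) mapping of a fresh process, twice.
# With ASLR on, the base addresses will typically differ between the runs,
# since each new process gets freshly randomized code locations.
sh -c 'grep -m1 "r-xp" /proc/self/maps'
sh -c 'grep -m1 "r-xp" /proc/self/maps'
```

(Linux-specific: /proc/[pid]/maps is a Linux interface; the exact line printed depends on the binary and loader.)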

And in this case it would not matter at all whether the location was randomized, since this bug is an unbounded offset from a known memory location. The attacker does not need to know the actual memory address; he just needs to specify a too-large offset to read adjacent memory. Yes, going too far could trigger a segfault, but by then the attacker will have dumped all the memory up to that point. So what? The attacker can just continue the attack once the service restarts.

The point is: The attacker does not need to know anything about the memory layout. The server already allows him to offset from a pointer to a known valid location.
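A toy model of the pattern (not OpenSSL itself): a "server" echoes back a client-claimed number of bytes starting at the payload, with the adjacent memory simulated by a file:

```shell
# Toy model of the Heartbleed pattern in a scratch directory (illustrative only).
dir=$(mktemp -d)
cd "$dir"

printf 'hello'  >  memory.bin   # the attacker's 5-byte payload...
printf 'SECRET' >> memory.bin   # ...followed by adjacent "heap" contents

claimed_len=11                  # attacker claims the payload is 11 bytes long
# The bug being modeled: the length field is trusted, so the echo over-reads
# past the real payload into the adjacent data:
dd if=memory.bin bs=1 count="$claimed_len" 2>/dev/null
# prints "helloSECRET" - the reply leaks the adjacent bytes
```

The fix, analogously, is to bound the read by the actual payload size rather than the claimed one.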

Comment: Re:The Slide-to-Unlock Claim, for reference (Score 1) 408

by benjymouse (#46694009) Attached to: Apple: Dumb As a Patent Trolling Fox On iPhone Prior Art?

As mentioned in a different reply, I see non-continuous movement: slider at the left side; slider in the middle; slider at the right side. Three images, replaced in succession, as I said.

The video clearly demonstrates the intent to create the appearance of continuous, animated movement. The technology at the time did not allow the same smoothness as today. But even today you could argue that the movement is *still* not continuous - it is just that Apple has "invented" smaller and more numerous steps.

Let it go: The video is clearly prior art for state change. It is presented as a general way to change state on an electronic device with a touchscreen.

What Apple has is:
      1) Apple "re-invented" the state change for a handheld device
      2) The Apple state change is "unlock" - a specific example of a state change

For 1: it is trivial to demonstrate that such a state change on a handheld device would follow automatically from the technological advances that shrink devices to the point where a touch screen can be handheld.

For 2: the claim is only interesting if the *specific* (unlock) state change is not covered by the broader state change mechanism demonstrated in the video.

Comment: Re:The Slide-to-Unlock Claim, for reference (Score 1) 408

by benjymouse (#46693825) Attached to: Apple: Dumb As a Patent Trolling Fox On iPhone Prior Art?

Compare (original)

A method of unlocking a hand-held electronic device, the device including a touch-sensitive display, the method comprising:
detecting a contact with the touch-sensitive display at a first predefined location corresponding to an unlock image;
continuously moving the unlock image on the touch-sensitive display in accordance with movement of the contact while continuous contact with the touch screen is maintained, wherein the unlock image is a graphical, interactive user-interface object with which a user interacts in order to unlock the device; and
unlocking the hand-held electronic device if the moving the unlock image on the touch-sensitive display results in movement of the unlock image from the first predefined location to a predefined unlock region on the touch-sensitive display.

with

A method of (changing state of) an () electronic device, the device including a touch-sensitive display, the method comprising:
detecting a contact with the touch-sensitive display at a first predefined location corresponding to a (state) image;
continuously moving the (state) image on the touch-sensitive display in accordance with movement of the contact while continuous contact with the touch screen is maintained, wherein the (state) image is a graphical, interactive user-interface object with which a user interacts in order to (change state of) the device; and
(changing the state of) the () electronic device if the moving the (state) image on the touch-sensitive display results in movement of the (state) image from the first predefined location to a predefined (state change) region on the touch-sensitive display.

The latter accurately describes what happens in the Microsoft video demonstration. All I did was substitute (state) for "unlock" and (changing state of) for "unlocking". I also removed "hand-held".

So what we have is Apple taking the general application of switches with a graphical representation, using it for a specific function (unlock) rather than the general one (changing state), and applying it to handheld devices.

Everyone can recognize unlocking as a specific example of a state change. Your "invention" does not become more original because you narrow the scope to which it is applied.

Same goes for "handheld". It was done on an electronic device with a touch screen. When technology advances and allows the electronic device to be carried around, that does not make the same idea new again.

Comment: Re:What about number-crunching performance? (Score 2) 217

by benjymouse (#46654823) Attached to: .NET Native Compilation Preview Released

I skimmed over the links, but I probably just missed it. So apps take 60% less time to start, and they use 15% less memory. What about run-time performance? How much faster are they when executing?

At runtime, a .NET Native app already runs compiled, which saves the JIT compilation step.

However, they also announced (in a later session at //Build/) that the new compilers (including the JITs) will take advantage of SIMD. For some application types this can allegedly lead to serious (as in 60%) performance gains. Games were mentioned.

Comment: Re:Only benefits smaller devices (Score 2) 217

by benjymouse (#46654779) Attached to: .NET Native Compilation Preview Released

The raw speed of the code might actually diminish since the .net runtime could have optimized it better for the specific environment (CPU model, available RAM, phase of the moon, etc).

MS announced that developers still need to pass hints to the compiler on what architecture, CPU core count, available memory etc. to compile for. You can (cross-)compile to multiple architectures.

This technology is already at work when deploying apps for Windows Phone 8: developers upload IL code to the store, native compilation is performed per device type in the cloud (CPU architecture, OS version, memory, ...), and the resulting binary is delivered to the device.
