Comment Re:Interesting proposal; just might work (Score 3, Insightful) 254

Actually, it's not a bad compromise for Google.

FTFY. Consumers will get their internet wirelessly, because they move around and want the internet wherever they are. Phones, iPads, laptops... these all push people toward wireless internet.

Businesses use wired internet, because they have a fixed location and don't need to roam.

So what Google is saying is "don't extort us, but do extort users". This is a perfect world for Google, because with their deep pockets they can bribe wireless carriers to muscle Bing and Apple and whoever else out of the market. But with guaranteed fair wired access, in the worst case they could start their own wireless service... they would only have to build out the wireless portion instead of potentially owning everything between their servers and the user. If their wireless network had to hand off to Verizon, for instance, then without wired neutrality Verizon could make that prohibitively expensive.

In the end, if any part of the network is not neutral, then to users none of it is. Which makes this initiative from Google a case of "do less evil", or worse.

Comment Re:Good luck with that. (Score 1) 764

[Microsoft] came into the [netbook] game late with an inferior product, but used their position to push the hardware manufacturers and retailers to sell XP netbooks instead of Linux netbooks.

Microsoft won on netbooks because they had the better product. Windows XP would have beaten Linux on netbooks even without the network effects of being able to run Win32 programs.

For example, when I put Linux on my netbook it idled at ~12 watts, whereas Windows XP idled at ~9 watts, giving XP roughly 1.3x the battery life. Not to mention that XP was better at Flash, games, and Firefox.

It was possible, with a ton of work, to get a particular distro down to the same idle wattage by lowering the core clock frequency (not the CPU frequency), but even then it was flaky and broke on distro upgrades.

Comment Re:Coal (Score 2, Interesting) 635

If I have to pay for the negative externalities of the process ... then my process is only competitive for gold prices above $1050 per gram. However, if I can get away with just dumping the toxic water somewhere for free, then at $50 per gram of gold my process is highly competitive

There is another angle to this. If you can improve your solar efficiency by 0.1% but it will cost you $10 million to modify the factory, then you need to recoup that $10 million from sales that would otherwise go to competitors or not be made at all. If you aren't selling much, you have less ability to improve the product.

So the reason we should be investing heavily in solar, in the form of subsidies, is to grow the market, which will improve the technology as a side effect. The difference between solar and a lot of other green energy sources is that there is still room for large efficiency improvements. Even if solar is not the cost-effective choice now, we should still invest in it so that it will become one.

Comment Re:Pass Phrases (Score 1) 563

Pass phrases are the wrong answer because they have the same weakness as passwords: once the adversary knows one, you are screwed. While you are sleeping, they are using the passphrase they captured the last time you entered it.

But whether your password is "cat" or "password" or "myvoiceismypassportverifyme", if you have to press a physical button to log in then the worst they can do is hijack that one login. And that's a much harder attack for them to carry out and a much easier one to defend against.

Cracking is not the problem. Software-only credentials are the problem.

Comment Re:Screenshot/Mockups (Score 1) 366

But will Firefox stay relevant? Chrome is coming up fast and Mozilla seems to be stagnating.

Not sure what you mean...

- FF 3.7 is actually snappy on Linux now, even without hardware acceleration turned on.

- FF will almost certainly get hardware acceleration before Chrome. From the Chromium blog: "the image data must be transferred to the main browser process before it can be drawn to the screen, which limits the possible approaches we can take". They have to re-architect a bunch of things to get hardware acceleration.

- FF is getting a new, cleanly written HTML5 renderer to replace Gecko.

- The old JavaScript VM put floats into separate heap-allocated objects, which was slow. The new interpreter will use 128-bit fat values instead (a rough sketch of the idea follows after this list). They also bet on tracing and it didn't pan out, but they are correcting that with a method JIT.
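For the curious, here is a minimal C sketch of what a tag-plus-payload "fat value" looks like in general. This is only an illustration of why doubles no longer need their own heap-allocated box; it is not SpiderMonkey's actual jsval layout, and the names are made up.

#include <stdint.h>

/* Illustrative only -- not SpiderMonkey's real representation.  The tag and
 * the payload live together in one value, so a double is stored inline
 * instead of behind a pointer to a heap-allocated box. */
typedef enum { VAL_DOUBLE, VAL_INT32, VAL_OBJECT, VAL_BOOLEAN } ValueTag;

typedef struct {
    ValueTag tag;              /* what kind of value the payload holds    */
    union {
        double   dbl;          /* inline double: no allocation, no GC box */
        int32_t  i32;
        void    *obj;          /* objects/strings are still GC pointers   */
        int      boolean;
    } payload;
} FatValue;

static FatValue make_double(double d)
{
    FatValue v = { .tag = VAL_DOUBLE, .payload.dbl = d };
    return v;
}

int main(void)
{
    FatValue v = make_double(3.14);   /* no heap box needed for 3.14 */
    return v.tag == VAL_DOUBLE ? 0 : 1;
}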

So there's a lot of improvement going on in Firefox. Also, Mozilla has a much better track record, having created and managed an entire browser from scratch (WebKit was already finished, and for V8 Google essentially bought a Self VM).

Comment Timer wheels (Score 1) 298

Continuing a discussion...

It seems to me that this bheap just reduces the number of pages likely touched from log(n) to log(n)/C, where C is the number of heap levels stored in the same page. And removing a node may still need to access log(n)/C random pages. So this is just a constant-factor improvement... it's just that the constant is measured in pages, so it has a large real-world cost.

I'd like to get people's thoughts on using timer wheels instead, like the Linux kernel uses for timers. Say you take 8 wheels covering increasing spans of time, where each wheel is just an unordered list:

insert: one of the 8 wheels' list-tail pages must be resident (it almost certainly will be).
delete: just zero out the list entry (probably 1 page-in).
remove min: 1 page-in to pop the list head at wheel 0, or a bunch of sequential accesses to cascade the expiring timers down a level.

Could some data structure expert please comment on the relative merits of a bheap vs timer wheels? It seems to me that a small fixed set of pages plus sequential access should be far better in terms of swapping. The OS should be able to recognize in-order access, especially if each list is an mmap'd file, and the blocked thread does not need to hold any locks (you can swap in a new empty list while the old one is being processed).
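To make the shape of that concrete, here is a minimal C sketch of the wheel layout I have in mind, with made-up constants (8 wheels of 256 slots, so wheel w covers buckets of 256^w ticks) and a plain singly linked list per slot. Tick advancement and the trigger for cascading are left out; it's only meant to show that insert touches one list head and that remove-min only walks wheel 0 or does a sequential cascade.

#include <stdint.h>

/* Illustrative sketch only: 8 wheels, each 256 slots wide, each slot an
 * unordered singly linked list.  Wheel 0 holds timers expiring within the
 * next 256 ticks, wheel 1 within the next 256*256, and so on. */
#define WHEELS 8
#define SLOTS  256

struct timer {
    uint64_t      expires;               /* absolute tick at which it fires */
    struct timer *next;                  /* unordered list within a slot    */
};

struct wheel_set {
    uint64_t      now;                   /* current tick                    */
    struct timer *slot[WHEELS][SLOTS];   /* the unordered lists             */
};

/* insert: pick a wheel by how far away the expiry is, then push onto that
 * slot's list -- only the page holding the list head needs to be resident. */
static void timer_insert(struct wheel_set *ws, struct timer *t)
{
    uint64_t delta = t->expires - ws->now;
    int w = 0;
    while (w < WHEELS - 1 && delta >= ((uint64_t)1 << (8 * (w + 1))))
        w++;
    int s = (int)((t->expires >> (8 * w)) & (SLOTS - 1));
    t->next = ws->slot[w][s];
    ws->slot[w][s] = t;
}

/* cascade: when wheel 0 runs dry, re-insert the timers from the next higher
 * wheel's current slot -- one sequential walk of one list. */
static void cascade(struct wheel_set *ws, int w)
{
    int s = (int)((ws->now >> (8 * w)) & (SLOTS - 1));
    struct timer *t = ws->slot[w][s];
    ws->slot[w][s] = NULL;
    while (t) {
        struct timer *next = t->next;
        timer_insert(ws, t);             /* lands on a lower wheel now */
        t = next;
    }
}

/* remove-min / tick: pop everything due in wheel 0's current slot. */
static struct timer *expire_current(struct wheel_set *ws)
{
    int s = (int)(ws->now & (SLOTS - 1));
    struct timer *due = ws->slot[0][s];
    ws->slot[0][s] = NULL;
    return due;                          /* caller walks this list and fires */
}

int main(void)
{
    static struct wheel_set ws;          /* zero-initialized */
    static struct timer a = { .expires = 3 }, b = { .expires = 700 };
    timer_insert(&ws, &a);               /* delta 3   -> wheel 0 */
    timer_insert(&ws, &b);               /* delta 700 -> wheel 1 */
    ws.now = 3;
    (void)cascade;                       /* cascade shown above, unused here */
    return expire_current(&ws) == &a ? 0 : 1;
}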

Comment Re:Maybe they've grown up a bit (Score 1) 546

Unless the compiler special-cases it, or is able to inline this kind of thing across object files, this does mean that the standard C qsort() is slower than standard C++ std::sort().

So you compile libc with -flto and it embeds the compiler's intermediate representation in the object file. Problem solved, and by including "some form of source code" like I said. Try even getting 'export' to work in C++, let alone this.

I don't know if -flto currently embeds enough information to actually generate a specialized version of qsort, but it certainly could.

There's one other thing. The way qsort and bsearch are defined, the comparator/predicate have to access elements indirectly via untyped void* pointers. Though a smart optimizer can still understand this and remove the indirection while inlining

A smart optimizer is a given for anything to do with fast code, whether in C++ or C or any other language. And C compilers are pretty much past masters at knowing the type a void* really points to.

I'm not sure what you are trying to say here. That C++ code is easier for the compiler to optimize?

Comment Re:Maybe they've grown up a bit (Score 1) 546

For example, compare the stl sort routine with qsort. The stl version is declared with a predicate method that can be made inline. The C version is passed a pointer to a predicate function that can't be inlined.

Not quite. The C++ version can only have the comparison function inlined because a separate, specialized copy of the sort routine is generated, which means some form of its source code must be available to the compiler. But the same thing is possible in C if the compiler can see the qsort code. If you #include "qsort.c" instead of linking against the copy in libc.a, the C compiler can see that the function pointer is never modified and emit a specialized qsort optimized just like the C++ one.

C++ forces the compiler to abandon the ".o is everything" model and, in effect, recompile parts of other modules over and over again. It's this forced complexity of compilation that gives C++ binaries better performance here, not any defect in C itself, because it gives the compiler access to more of the source code. A sketch of the #include "qsort.c" point is below.
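Here's a minimal, self-contained C sketch of what I mean; a toy insertion sort (my_qsort) stands in for the real libc qsort source so it compiles on its own, and the names are made up. Because the sort routine and the comparator sit in the same translation unit, the compiler can see that the function pointer never changes and is free to specialize the sort for int_cmp and inline the comparison, which is exactly what a std::sort instantiation gets for free.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef int (*cmp_fn)(const void *, const void *);

/* Stand-in for the body of qsort.c (a toy insertion sort).  Once this is
 * visible in the same translation unit as the call site, the compiler can
 * clone it for a specific comparator and inline the calls through `cmp`. */
static void my_qsort(void *base, size_t n, size_t size, cmp_fn cmp)
{
    char *a = base;
    char tmp[64];                        /* assumes size <= 64 for the demo */
    for (size_t i = 1; i < n; i++) {
        memcpy(tmp, a + i * size, size);
        size_t j = i;
        while (j > 0 && cmp(a + (j - 1) * size, tmp) > 0) {
            memcpy(a + j * size, a + (j - 1) * size, size);
            j--;
        }
        memcpy(a + j * size, tmp, size);
    }
}

static int int_cmp(const void *pa, const void *pb)
{
    int a = *(const int *)pa, b = *(const int *)pb;
    return (a > b) - (a < b);
}

int main(void)
{
    int v[] = { 5, 2, 9, 1, 7 };
    /* The comparator is a compile-time constant here, so nothing stops the
     * optimizer from emitting a version of my_qsort specialized for int_cmp. */
    my_qsort(v, 5, sizeof v[0], int_cmp);
    for (int i = 0; i < 5; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}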

Comment 24 (Score 1) 142

It should be shown with 24-style split-screen panels simultaneously showing what Paragon Shepard and Renegade Shepard do in each conversation. Also, sometimes the panels should show male Shepard and other times female Shepard.

Also, if you watch the DVD a second time it should skip some scenes, because you start out with the Paragon points left over from your last viewing.

Comment Re:Too Expensive (Score 1) 125

Right, but I think this is largely the case because Unix DAC and SELinux MAC are mixed in an unholy matrimony. This causes things to get complicated, and frankly not many people care, so there's not enough work done to do SELinux right. An experimental distro that ripped out Linux DAC would be an interesting project.

It has much more to do with SELinux exploding in complexity when it has to deal with shared resources.

Take shared libraries, for instance. All the policies label everything in /usr/lib with the same type, so any program can link against any library. Obviously that's bad, but the thought of labeling each library individually and then writing rules for every program so it can link only the ones it uses is madness. Nobody does it because, in practice, in the real world, it just can't be done.

And then how do you protect mydiary.txt versus, say, an email attachment the user picks at runtime? You would first have to add support for dynamic rules to SELinux (IIRC the only dynamic thing it has is for sockets, and that's a hack), and you would also have to individually label and write rules for a user's files.

If what users really care about is their data, then SELinux doesn't provide anything useful to them. And I contend that protecting a user's data also protects the system from being 'owned'. So in practice SELinux doesn't really provide anything useful at all.

Comment Re:Too Expensive (Score 2, Insightful) 125

witness the uproar when somebody suggests replacing Unix DAC with SELinux MAC

The uproar is because SELinux is a complete pain and a ton of work to set up correctly and completely. The SELinux policy for Fedora is ~10 MB compiled. It does work pretty well at preventing escalation, though.

But once you finally get the system locked down with SELinux, it still does nothing to prevent BadAddOn.js from reading mydiary.txt if the file is owned by the current user.

What's really needed is:

- A hardware device to authenticate the user. Put it on your keychain, shoe, watch, whatever.

- An OS that grants permissions for specific objects based on user input, rather than to processes. If the user selected mydiary.txt from a trusted file dialog then the browser can read it. Otherwise it can't, or it has to ask permission to do so (the OS puts up a dialog). A rough sketch of this flow is further below.

These two things could reliably cover the vast, vast majority of actual security needs, without hassle for the user and without leaving room for remote automated attacks. It still wouldn't be perfect, but it would be orders of magnitude better than what we have now. Unfortunately there's no mass market to drive a general-purpose hardware device like that, and software would have to be modified slightly.
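Here's a minimal C sketch of that second point, under a big assumption: trusted_open_dialog() is a hypothetical OS service that does not exist today (a stub reading a path from stdin stands in for it so the example compiles). The point is that the application never names user files itself; it only receives an already-open descriptor for whatever the user actually picked, so the grant is per-object and per-request.

/* Illustrative sketch only: grants are per-object and come from user
 * input, not from the process's ambient permissions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical OS service.  In a real system the dialog would run in a
 * trusted process and hand back a descriptor; this stub just reads a
 * path from stdin so the sketch is self-contained. */
static int trusted_open_dialog(const char *prompt)
{
    char path[4096];
    fprintf(stderr, "%s: ", prompt);
    if (!fgets(path, sizeof path, stdin))
        return -1;
    path[strcspn(path, "\n")] = '\0';
    return open(path, O_RDONLY);      /* the only file the app can touch */
}

int main(void)
{
    /* The "browser" cannot open mydiary.txt on its own; it can only ask
     * the user to pick something, and the grant covers just that file. */
    int fd = trusted_open_dialog("Choose a file to attach");
    if (fd < 0)
        return 1;                     /* user cancelled: no access at all */

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}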

Comment Netflix (Score 1) 310

If you have an unused Wii, you should check out Netflix on it.

I was skeptical that it would be any good, and only got the boot DVD because it was free plus a 10% discount. At 480p it's good enough... it's not distracting, and I can enjoy a movie or TV show on it. Even at 480i it was OK. With the point-and-click interface it's actually a pretty nice system to use for this.

Comment Re:Find a new job (Score 1) 555

So having a virus run amok doesn't really concern me as it would get stopped in its tracks by the entire clusterfuck that is Healthcare IT.

Also there is a very good solution to viruses in a hospital network.

First add a large server with several gigabit NICs, but no IP addresses. Put the interfaces into promiscuous mode. This will 'consume' packets from the network, but never create any itself. This causes a negative pressure on the network, causing virus packets to gradually 'flow' towards the server, preventing them from spreading.

However, to make sure enough flow is always present, another server should be added to the network that continuously sends out broadcast pings. It is important to use a UV light source on the network connections from this server, to protect the network from unsterilized ping packets. Also, this server should be located in the basement or an underground garage, so that virus packets 'float' through the network; otherwise they may settle on surfaces.

Comment Re:I don't get it... (Score 1) 232

Google is saying "hey, we have a motto and doing business with such a government is not in keeping with it"

- If China steals Gmail, then they have a complete 'home grown' email service.
- If China steals search and gives it to Baidu, then Baidu has a better search engine.
- If China steals Maps, then they have a complete 'home grown' mapping service.
- ...

Google is moving its money-making properties out of the innovative 'beta' stage, and is concerned that its long-term competitiveness now rests on competitors not stealing its already-complete products. It's as simple as that.

If China steals all of Google's code and copies their methods, then it's not just a question of Google losing that market, but of losing every market. They would be competing against their own services (same quality, same features), but their competitor would have a large domestic market to leverage, monopoly-style, to crush free markets in other countries.
