
Comment: Re:power consumption (Score 2) 151

by zealot (#38087750) Attached to: Intel's Plans For X86 Android, Smartphones, and Tablets

Despite what many other commenters will say, no, it isn't a power hog compared to ARM. Or at least it doesn't have to be. Intel/AMD/VIA don't yet offer processors with power consumption as low as ARM's (although some are quite power/performance efficient, depending on your workload), but they will within the next year for smartphones and tablets. On modern manufacturing processes the "x86 tax" becomes almost non-existent.

Comment: Re:Still requires creation of user "nx"? Noooooo! (Score 1) 257

by zealot (#28691211) Attached to: Google Releases Open Source NX Server

Yes, you could have stopped after the word "think".

We have and use VNC. It's supported and we depend on it. I did install and enable unsupported VNC clients like TightVNC to try to get some more speed. NX is the next step. You can argue whether or not users should be installing potentially insecure networking servers, and you can argue about productivity.

When working remotely everything is over VPN anyway.

Comment: Still requires creation of user "nx"? Noooooo! (Score 1) 257

by zealot (#28685445) Attached to: Google Releases Open Source NX Server

I work for a large corporation that uses VNC, and several years ago I tried to install NX at work, hoping to get a speed boost when working remotely. Unfortunately, creating a user "nx" was required. I'm not in the IT department, I don't have root access, and the IT department had no interest in deploying NX. So I gave up.

I saw this announcement and hoped that an "nx" user would no longer be required, but it appears this is still necessary. If I could get it installed and it actually worked better, I know the other engineers would jump on it, and eventually IT would be forced to support it.

Anyone have a workaround?

Comment: Re:It can't do HD. Fail. (Score 2, Informative) 97

by zealot (#26249795) Attached to: XBMC Running On an Atom-Based MID

But this ISN'T that old, craptastic, power-hungry chipset used by most Atom netbooks. It's a new chipset, code-named Poulsbo, designed specifically to go with Atom. Quoting an article:

"The Atom Z500 has a TDP that varies between 0.85 W (for the 800 MHz version without HyperThreading) and 2.64 W (for the 1.86 GHz model with HyperThreading enabled). The SCH consumes approximately 2.3 W in its most evolved version, which brings the SCH + CPU together to under 5 W. By comparison with existing solutions, that's obviously a big step forward: the Via Nano, for example, is announced at 25 W for the 1.8 GHz version and a Celeron-M ULV at 5 W at 900 MHz."
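The quoted figures are easy to sanity-check; a quick back-of-the-envelope sum (purely illustrative, using only the numbers in the quote above):

```python
# Back-of-the-envelope check of the quoted worst-case platform power (watts).
cpu_max = 2.64   # Atom Z500 at 1.86 GHz with HyperThreading, per the quote
sch_max = 2.3    # SCH (chipset) in its "most evolved version"
total = cpu_max + sch_max
print(f"CPU + SCH: {total:.2f} W")   # 4.94 W, just under the 5 W claimed
assert total < 5.0
```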

In addition, the Atom Z-series/Poulsbo combo supports the C6 idle power state, where the CPU saves its architectural state in a small SRAM that remains powered while the rest of the CPU shuts off entirely. Idle power for the processor is somewhere from 0.01 W to 0.1 W (this is from what I remember reading somewhere, but I can't find a link right now). Not sure what the chipset's power consumption is like when idle.

The biggest known downside to this chipset is that it supports 1 GB of RAM max.

Comment: Re:It can't do HD. Fail. (Score 5, Informative) 97

by zealot (#26247383) Attached to: XBMC Running On an Atom-Based MID

From what I've read elsewhere, the chipset involved does have video decode acceleration support. After googling, I found an article saying that the chipset can support 1080i and 720p decode, and another saying it can do hardware decode of the H.264, MPEG2, MPEG4, VC1, and WMV9 formats.

Comment: Linus on SSD Vendors and Filesystems (Score 5, Interesting) 255

by zealot (#26078325) Attached to: Which OS Performs Best With SSDs?

>I'm suspicious of the suggestion that a log-based
>filesystem will cure all the ills of the limited flash-
>controller based wear leveling.

Yeah. Total bull.

Anybody who thinks the filesystem can do really well has
bought into the crud from most existing vendors about how
you have to use those things differently. If you really
do believe that, you shouldn't touch an SSD with a ten-foot
pole.

If the flash vendor talks about "limits" in the wear
levelling, and how you have to write certain ways, just
start running away. Don't walk. Run away as fast as you
can.

>A question keeps coming up in my mind about what happens
>when you split an SSD into multiple partitions, and what
>*you want to happen*. I use separate partitions for root,
>boot, and var, because I tend to make root and boot [...]

Again, if your SSD vendor says "align to 64kB boundaries"
or anything like that, you really should tell them to go
away, and you should do what Val said - just get a real
disk instead. Let them peddle their crap to people who are
stupider than you, but don't buy their SSD.

So what you want to happen if you split an SSD into multiple
partitions is exactly nothing. It shouldn't matter
one whit. If it does, the SSD is not worth buying. If it is
so sensitive to access patterns that you can't reasonably
write your data where you want to, just say "No, thank you".

Anyway, I have a good SSD now, so I can actually
give some data:
- Most flash-based SSD's currently suck.

      I don't have these ones myself, but last week we had the
      yearly kernel summit here in Portland, and a flash
      company that shall remain nameless (but is one of the
      absolute biggest and most recognizable names in flash)
      was selling their snake-oil about how you need to write
      in certain patterns.

      So I called them on it, and called them idiots. Probably
      one reason why I didn't get one of the drives they were
      handing out, but one of the people who did get a drive
      was the Linux block system maintainer. So he ran some
      tests.

      Those things suck. You will never get any decent
      performance of anything but a very specialized filesystem
      out of them, unless you use them as essentially read-only
      devices.

      For a basic 4kB blocksize random write test, the SSD got
      around 10 IOps. That's ten, as in "How many fingers do
      you have?" or as in "That's really pathetic". It means
      that you cannot actually use it as a disk at all, and
      you need some special filesystem to make it worthwhile,
      and certainly means that wear levelling is probably not
      working right.

      (For the math-challenged, 10 IOps at a 4kB blocksize
      means 40kB/s throughput and 100ms+ latencies for those
      things. It also means that even if some operations are
      fast, you can never trust the drive)
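The arithmetic in that parenthetical can be sketched directly (purely illustrative; the IOPS figures are the ones quoted in this comment):

```python
# Convert an IOPS figure at a fixed block size into throughput, and into
# the average per-request latency when requests are issued one at a time.
def throughput_kb_per_s(iops, block_kb):
    return iops * block_kb

def avg_latency_ms(iops):
    return 1000.0 / iops

# The bad drive: ~10 IOPS at 4 kB blocks.
print(throughput_kb_per_s(10, 4))   # 40 kB/s
print(avg_latency_ms(10))           # 100.0 ms per request

# The Intel drive: 8000+ IOPS at 4 kB blocks.
print(throughput_kb_per_s(8000, 4) / 1024)  # 31.25 MB/s, near the quoted 34 MB/s
```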

- In contrast, the Intel SSD's are performing exactly as
advertised.

      I did get one of these, with warnings about how
      if I want to get low-power operation etc I need to make
      sure that disk-initiated power management is enabled etc.

      Whatever. The important thing is that the Intel SSD does
      not care one whit where you write stuff, or how you do
      it. With the same 4kB random write benchmark test, the
      Intel SSD gets 8,000+ IOps (34MB/s throughput) with
      absolutely zero tuning. With bigger blocks and multiple
      outstanding requests, I got the promised 70MB/s. And it
      didn't matter one whit whether it was random or linear,
      the difference between 34MB/s and 70MB/s was purely in
      block sizes (ie there is some per-command overhead, which
      should not surprise anybody).

      On the read side, throughput (if you can feed it enough
      requests) was actually limited by the 1.5Gbps link
      I had on my realistic test-system (yeah, I have other
      machines that have full 3Gbps SATA links, but in mobile,
      1.5Gbps is common). And once more, it made no real
      difference whether accesses were random or linear.

So I finally have an SSD that really lives up to the
promise. And I can tell you - it makes an absolutely
huge difference in how the system performs. Just
try running Firefox for the first time - that mobile
platform is now snappier than my main desktop machine with
a new Nehalem and two fast disks in it.

And the write performance is important to that snappy
feeling. I can untar trees, install packages, do any amount
of writes etc and you can't even really tell. The system
still feels snappy.

As to reliability - sure, it's new technology, but since
I've been averaging around one dead harddisk per year, I'm
not so convinced about the old technology being superior
as Val is. So if the vendor gets the wear levelling right,
it's likely to be at least as reliable as those (not very
reliable) spinning platters are.

And right now, I do have numbers. Just based on behaviour,
I can pretty much guarantee that the Intel SSD's do a fairly
good job at wear levelling. At least they don't care about
your write patterns, and that should make people feel a lot
better about them.

So I can absolutely unequivocally say: if you want an SSD
today, you really can get a better disk than a traditional
disk. But as far as I can tell, it has to be an Intel drive.
Everything else is utter crap.

And no, Intel doesn't pay me to say so. Yes, I get early
access to some of their technology. But I'm an opinionated
bastard, and if it was bad I'd tell you so. As people here
should know (Itanium, anyone?).

That thing flies. The moment I can buy one more, I'll
spend my money where my mouth is. Because the difference
really is so clear. Right now, that tiny Mac Mini
(obviously running Linux ;) is actually nicer to use than
my main machine in many scenarios. All thanks to the SSD.


PS. The reason I tested mainly 4kB block sizes is that that
is what I use in the normal filesystems. I actually did test
512-byte writes too, and they perform perfectly fine and
got higher IOps than the 4kB case (but lower throughput:
the IOps didn't improve that much ;). I just don't
care too much personally, since nobody uses 512-byte blocks
anyway. But the thing really does act as a 512-byte sector
disk, with no access restrictions I can find.
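The kind of 4 kB random-write test described above can be sketched roughly as follows. This is a simplified illustration only, not the actual benchmark that was run; the file name and sizes are made up, and without O_DIRECT this mostly measures the page cache rather than the device:

```python
import os
import random
import time

PATH = "testfile.bin"   # hypothetical scratch file name
FILE_MB = 64            # kept small so the sketch finishes quickly
BLOCK = 4096            # 4 kB blocks, as in the test described above
WRITES = 1000

# Pre-allocate the file so the random seeks land inside it.
with open(PATH, "wb") as f:
    f.truncate(FILE_MB * 1024 * 1024)

buf = os.urandom(BLOCK)
blocks = FILE_MB * 1024 * 1024 // BLOCK

# Issue WRITES single 4 kB writes at random block-aligned offsets.
fd = os.open(PATH, os.O_WRONLY)
start = time.time()
for _ in range(WRITES):
    os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
os.fsync(fd)            # force the dirty pages out toward the device
os.close(fd)
elapsed = time.time() - start

iops = WRITES / elapsed
print(f"{iops:.0f} IOPS, {iops * BLOCK / 1e6:.1f} MB/s")
os.remove(PATH)
```

Real benchmarks (fio, for example) open the file with O_DIRECT and keep multiple requests outstanding to bypass the cache; this sketch only shows the shape of the workload.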


+ - Valve's Steam API uncovered

Submitted by Anonymous User, who writes: "Programmers from the internet harassment group known as "myg0t" have recently released source code to the public exposing some sensitive "hidden material" in Steam.exe and steam_api.dll. This hidden content includes Steam's billing interface, utility interface, client interface, user interface, and many login exports from Steam.

In the news post it states:
"This is a 100% complete Steam API hooking base written by [myg0t]s0beit. It will allow you access to several well hidden interfaces inside the Steam application and some games as well. Here is a short run-down of the basic interfaces it will allow you to completely hook:

ISteamFriends — the steam friends and community class
ISteamUser — user information on the steam client
ISteamClient — client information
ISteamBilling — steam billing information
ISteamUtils — misc steam utilities

It's important to mention that while this is 100% fully functioning, it is outdated as of the release of TF2; they use several new interfaces inside the game and have moved other interfaces into the Steam application itself or vice-versa. That said, the current updated base will not be released anytime soon if ever, if you can understand this release well enough then this should be a non-issue for you.

We must insist that you use this proof of concept code only for non-harmful, peaceful, education purposes only and that it not be discussed anywhere outside of our news forum. myg0t does not and has never condoned illegal activity of any kind or activities with otherwise malicious intent. This is a learning tool so please use it responsibly, as we have for the last year."

No doubt this is something Valve must take seriously, hopefully they will fix this soon."


+ - Lawsuit in open-source tuning land

Submitted by David Blundell, who writes: "From May 2002 until the beginning of this year, I owned and operated the largest online site dedicated to tuning and open-source solutions for engine management: chipping and tuning engine computers, basically. Last year, I received a Cease and Desist notice (which was forwarded to the EFF, who were very helpful) over a posting on the forum that was removed within 48 hours of telephonic notification. The company involved initially pursued the matter rather aggressively, but I thought it had been dropped earlier this year after I sold the site, until I was surprised by a lawsuit last week.

If anyone is curious about the details of this mess and how it has been handled up to this point, go check out (don't worry — no registration required) — it's probably an hour read, but there is a timeline of events and all legal correspondence exchanged over this mess is available for your viewing pleasure.

I'm trying to spread awareness of this matter because I think it is important for forum operators everywhere to understand the risks involved with companies willing to aggressively protect their IP. Also, I think there are some rather novel (well, at least interesting?) issues here:

-The "software" in question here was a backdoor. An existing product's protocols were used in a manner that the original authors had not intended. A software license agreement forbidding reverse engineering may have been violated in the course of creating the "software." Who should be the target? Hosting provider or author? Limitations? At what point does a product that makes use of reverse-engineered protocols (something like Samba, for instance) become a violation of intellectual property?

-The company suing me is presumably laying claim to the code that the downloader can access as their intellectual property. This code was originally written by Honda, then reverse engineered and presumably modified by Hondata, who are suing me. Honda couldn't care less about the matter. Without any patents or copyrights, does Hondata have an intellectual property claim to code that they didn't exclusively write (merely modified), running on hardware they did not design, build, or sell?

-What are the limits on the duty of care of a forum hosting provider? Moderator? Mere domain owner?

-Is this a case of a large, established commercial provider using strong-arm legal tactics to manipulate and push around an open-source project (and/or take it over; see the demands in the link), or were there more legitimate claims?

I'm hoping to receive some answers to these questions from an IP attorney, and I'll be sure to share as things progress.

Thanks for listening."


+ - MPEG LA: "Vizio HDTV success from patent violation"

Submitted by schwit1 (797399), who writes: "A recent article in the Washington Post discussed Vizio's fast rise to the top of HDTV sales. Larry Horn, CEO of MPEG LA, claims "that unlike other manufacturers mentioned (Samsung, Philips, Sony and Sharp), Vizio reduces costs in part by failing to pay for a license under patents enabling the core digital compression technology used in all high-definition televisions, including its own.

What's more, it encouraged the unauthorized use of intellectual property, which in this case is readily available to all high-definition television suppliers, including Vizio, on fair, reasonable nondiscriminatory terms."

Is MPEG LA a patent troll? Is Larry upset because Vizio is using someone else's HD technology? If a violation is occurring, where's the lawsuit?"

+ - Device to audit and replay RDP, SSH, and Telnet traffic

Submitted by eldar40k, who writes: "I visited the Systems exhibition this week (Munich, Germany) and came across a device that can transparently control and audit RDP and SSH traffic, store and search the results, and even replay the sessions like a movie. You can even search the text displayed by the server or typed by the client, for both RDP and SSH. A trial VMware version is provided upon request."
Link to Original Source
