Comment Re:The webcam light... (Score 1) 330
Have you tried non-transparent tape?
I can't speak to what Amazon measured as "50%" nor what PhrostyMcByte measured at "6%". So I don't know what quantitative percentage difference that is, nor exactly what a percentage would measure. But as a threshold thing, it went from noticeably lacking to nothing to worry about. I assume that the improvements to other brands of e-ink readers have crossed a similar threshold by now, and my guess is that they have all reached "good enough." (There are other things I would wish were better: I have software freeze-ups on my Kindle DX that really suck; I wish the flicker on page turn were much less [and it could be, with a smarter algorithm for choosing which pixels to change]; the interface could be improved; etc.)
And some people still wonder why many phone owners want to root their phone or flash a custom ROM?
I can uninstall or install anything on my G2. Sprint is acting like Sony.
Bad, Sprint! Bad!
@TrentTheTheif: Do you mean that you can uninstall everything because you've already rooted? On my T-Mobile G2--while generally an excellent phone--I am stuck with several irritating and stupid unremovable apps that T-Mobile put on there. I probably should get around to rooting to get rid of them, but I certainly can't do so in the stock configuration.
Oops... my case is actually FAR stronger than I wrote. The actual number of distinguishable states of my terminal isn't 256*4320; it is 256**4320, which is to say around 4e10403. That is also VASTLY more than the number of particles in the universe, which is a measly 1e80 or so.
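For what it's worth, the corrected count is easy to check with a few lines of Python (a quick sketch; the digit count comes from logarithms, since the number itself is astronomically large):

```python
import math
import sys

# A 90x48 terminal with ~256 states per cell has 256**4320 distinguishable
# states.  Estimate the magnitude via logarithms:
# log10(256**4320) = 4320 * 8 * log10(2)
ndigits = math.floor(4320 * 8 * math.log10(2)) + 1
print(ndigits)   # 10404 decimal digits, i.e. roughly 4e10403

# Python's bignums can verify this exactly (newer Pythons cap int->str
# conversion length by default, so raise the limit first):
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(20000)
assert len(str(256 ** 4320)) == ndigits
```

Either way, 10404 digits against the ~80 digits of the particle count isn't even close.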
The reason terminals are so useful is because they have greater usable information density than really any other interface. The article desperately misses this in its lead about the 2.3 megapixels of display space, and the presumed potential information content. But a terminal has quite a lot of information in it! Far more information--from a human usability POV--than the oversized icons this TermKit uses to adorn every small bit of textual information.
For example, on my MacBook Pro, using a pretty large font size, I run a 90x48 terminal (with multiple tabs, but that's a different issue). This terminal occupies approximately half of my display (sometimes I put another one next to it, though there's slight overlap at my screen size, font size, frame elements, dock/menu bar, etc). Now, as we can see using the 'calc' utility I wrote for systems I work on:
505-Documents % calc 90x48 # result on stdout, canonical form of expression on stderr
90x48
4320
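For anyone curious, something with the same interface as my 'calc' utility is trivial to sketch (this is a hypothetical toy, not my actual tool; it just treats 'x' as multiplication, echoes the canonical expression on stderr, and prints the result on stdout):

```python
#!/usr/bin/env python3
# Hypothetical minimal sketch of a 'calc'-style utility: result on stdout,
# canonical form of the expression on stderr, with 'x' as a times sign.
import sys

def calc(expr):
    canonical = expr.strip()
    print(canonical, file=sys.stderr)              # canonical form -> stderr
    return eval(canonical.replace('x', '*'))       # evaluate, 'x' == '*'

if __name__ == "__main__" and len(sys.argv) > 1:
    print(calc(sys.argv[1]))                       # result -> stdout
```

So `calc 90x48` prints `90x48` on stderr and `4320` on stdout, matching the session above.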
Well, 4k-ish characters isn't that many positions (though it's not tiny either), but each of those characters might be any of approximately 256 values. Saying exactly how many it is is a little tricky though. Many of my tools (e.g. the bash prompt itself, ls, less with lesspipe, vim, etc) colorize output, making for multiple easily distinguishable ways the letter 'A' might appear. On the other hand, while high-bit characters are not generally usefully or frequently displayed, modern terminals *do* potentially display many thousands of Unicode characters. So as an approximation, we might say that there are approximately 1.1M easily distinguishable states of my terminal. I know, of course, that most of the time most of the characters displayed are along the left edge of my terminal, and the right side is largely blank. But nonetheless, there are at least a couple hundred thousand states that are both plausible and, importantly, EASILY distinguishable... not *instantly* distinguishable, of course. Obviously, my eyes need to flit back and forth a while to compare, say, the file sizes and permissions of a bunch of things that show up in an 'ls -l' display. But it is still at least an order of magnitude more information than I'd discern with equal ease using TermKit (or, say, a GUI file manager like Finder) to look at the same 'ls -l' directory.
Obviously, the theoretical information content of a high-res display is enormous. Even a 16-bit display running somewhere over 1.5 megapixels (my screen is apparently slightly lower res than Steve Wittens') has something like (2**16)**(1680*1050) possible states... vastly more than the number of particles in the universe. But in fact, as a human, I really can't meaningfully distinguish nearly any of those states. I can't even *see* individual pixels, nor distinguish very close colors very well. And even within my actual perceptual threshold, I cannot assign direct meaning to a slight color difference in some small part of the GUI screen, except in very broad categories that carry a few bits of information each. My recognition and discernment of the meaning of *characters* of my native language is far greater than for any other graphical abstraction.
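The same back-of-envelope arithmetic works for the raw display (again via logarithms, since the value itself is far too large to print):

```python
import math

# Theoretical state count of a 16-bit display at 1680x1050:
# (2**16)**(1680*1050) = 2**(16*1680*1050) = 2**28224000
bits = 16 * 1680 * 1050
ndigits = math.floor(bits * math.log10(2)) + 1
print(ndigits)   # about 8.5 million decimal digits -- dwarfing both the
                 # terminal's ~10,000-digit state count and the ~1e80
                 # particles in the observable universe
```

Which is exactly the point: almost none of those states mean anything to a human eye, while nearly every terminal state does.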
I wrote about my preferred dual-monitor setup a while back as a guest editorial at: http://onyourdesktop.blogspot.com/2007/07/david-mertz.html
That's still pretty much what I like. I wish the screenshot there was from the work machine I describe in the article, with dual 30" screens. Sadly, I haven't had such nice desktop real estate elsewhere (neither at home nor other workplaces). But what I do typically work with in my own office space nowadays (I'm a consultant, so it's really my own space) is still two screens, though not identical in dimension. I use a (company provided) laptop hooked up to an external 24" monitor (1920x1200--those widescreen ones with only 1080 vertical pixels feel like they rob me of important vertical space). On this main consulting setup I keep a pretty fixed set of apps open... sadly, the work laptop is Windows 7, and I can't choose otherwise. But my setup hides most negatives of that OS.
To the left, on the laptop screen, everything maximized:
* Email client (usually in front)
* Version control GUI
To the right, on the external screen:
* Code editor (jEdit) maximized. This allows two full panes of code, and one
slightly smaller one for file navigation, project manager, search results, etc
(my editor tabs between different functions in the utility pane). Each code
pane is about 85x65 in a reasonably large font. My editor also lets me hide
all the frame elements, title bar, etc, which removes the look of Windows and
gives me a couple extra rows of code.
* Web browser, full vertical, but only about half the total screen width. Lots of tabs.
* Two side-by-side SSH sessions, each one about 90x70 characters. These connect
to the real machines where I do work. Often I run vim in these sessions to edit
code on the remote machines, but also to run test commands, launch compilation,
etc.
* Sometimes a chat window or two that use the full monitor height.
* Sometimes a PDF viewer or two, usually maximized and two-page display
Obviously, I have to switch focus sometimes on the right (external) monitor. But most times I am just looking at the two large SSH windows. There's a bit of a split between what I edit in my local text editor and what I edit in my two "panes" of vim on the SSH terminals; but either way I can see two full screens of code to compare visually, which is really useful (e.g. in one pane I look at the code of the supporting library while in the other I write the code that calls into it; or I'm working on two related scripts, and seeing both next to each other helps synchronize changes).
I've been using Natty since the betas, and this complaint about Unity seems utterly silly to me, since it is SO easy to just use "Ubuntu Classic." I've played with Unity a little bit, and think it has a couple of good ideas, but I don't really like it (even on my netbook, where the space savings should be most helpful). So you know what I did? I selected the simple pick list on the GDM login screen to use Ubuntu Classic! And after that, my selection sticks as my default choice (and sticks per-user, as you'd expect).
Clicking on one pick list ONCE during a login really doesn't seem burdensome to me. And it's not like Unity is all that bad or all that different either (the betas were a little buggy, but 11.04 release seems stable).
It's plain easy to calculate the sixty-trillionth digit of Pi... as long as you don't care about the digits that come before it: http://www.sciencenews.org/sn_arc98/2_28_98/mathland.htm.
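The trick behind that article is the Bailey-Borwein-Plouffe (BBP) formula, which extracts the n-th hexadecimal (not decimal) digit of Pi without computing any of the digits before it. A short sketch of the standard algorithm (my own toy implementation, accurate only for modest n because of float precision in the fractional sums):

```python
# BBP digit extraction: pi = sum_k 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
# Multiplying by 16^n and keeping only fractional parts (via modular
# exponentiation) isolates the digit at position n directly.

def pi_hex_digit(n):
    """Return the n-th hex digit of pi after the point (1-indexed)."""
    def S(j, d):
        # fractional part of sum_k 16^(d-k) / (8k+j)
        s = 0.0
        for k in range(d + 1):
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, t = d + 1, 0.0
        while True:                       # convergent tail, k > d
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    d = n - 1
    x = (4 * S(1, d) - 2 * S(4, d) - S(5, d) - S(6, d)) % 1.0
    return "%x" % int(16 * x)

# pi = 3.243F6A8885A308D3... in hexadecimal
print("".join(pi_hex_digit(i) for i in range(1, 9)))  # -> 243f6a88
```

No comparable shortcut is known for decimal digits, which is why the record-setting computations work in binary/hex.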
Since I'm at 11.0.686.3, I'm even more blasé about this.
I think the frequency of jury duty in the USA is less than Space cowboy suggests. I just turned 46, have always been registered to vote (in various jurisdictions), and have served on a jury once. Obviously, I know other people who have done so, a couple of them more than once. But it's hardly like being called up every year. Of course, jurisdictions are bound to vary somewhat depending on how many cases come up, but it's a pretty minimal obligation of citizenship.
FWIW, you really cannot give up UK citizenship by getting US citizenship. I know a number of dual (or multiple) citizenship people between those jurisdictions. None of which, of course, means that there's any particular reason for the commenter to seek US citizenship if it doesn't have any particular benefit in his/her case.
Parent is basically correct. However, pedantically, Dalvik does not, in general, run programs written in the Java language. The language is defined not just by its syntax, but also by a certain set of standard libraries being present and implemented according to Sun/Oracle specification. Dalvik doesn't support all of those, and hence doesn't run Java.
However, Dalvik does run a very Java-like language. One that has all the syntax of Java, and *many* of the same libraries. Moreover (as everyone here knows, I'm sure), programs compiled by 'javac' to JVM bytecode are translated by the 'dx' tool into Dalvik's own bytecode format, so developers write Java-the-syntax even though the Dalvik VM never executes JVM bytecode.
It would be proper to prevent Google from claiming that Android "Runs Java"... but then, I'm pretty sure they never claimed that to start with. Indeed mostly--almost entirely--it's claims about patents that should never have been granted, or really just about lawsuits to try to mess up competition and technical progress just for the sake of disruption (I doubt Oracle actually cares that much about the outcome, it's mostly FUD).
I know a number of owners of Priuses. Not one of them makes >$200k, and the only one that perhaps makes more than $125k is my 80 yo landlord.
There may be some skew in the direction the article claims, but its actual claims are so wildly hyperbolic that they are best simply ignored.
Good graphic designers do good work, and should (and generally do) get paid well for doing so. The problem is that most clients have no real ability to tell good work from mediocre work. Something that looks bearably OK, isn't dramatically unattractive, and has nothing outright wrong with it may well be "good enough" for a client with no eye for design. But in the end, it won't be something memorable that sticks in the minds of consumers and helps differentiate the product or company it's attached to. The distinction between adequate and brilliant can be subtle, but that subtle difference can make a BIG difference in the long run.
Similarly, and maybe more familiar to slashdot readers, the very worst programmers can write some lines of code that "look" pretty much the same as what the best programmers can write. The failures and problems of bad code won't even necessarily be obvious on first impression. The code might well do the one thing it initially needs to, but just be fragile, difficult to maintain, break as soon as unexpected cases arise, etc.
Distinguishing good from bad often requires expertise. Exactly the sort of expertise you should be willing to pay for.
A couple of my science/geek tattoos:
Hemoglobin, because I work for some folks doing amazing stuff in molecular dynamics (and it's easy to spin some superficial symbolism about hemoglobin on top of my heart):
http://picasaweb.google.com/david.mertz/HemoglobinTattoo
A Julia set sleeve (just for fun):
http://picasaweb.google.com/david.mertz/FractalTattoo#
There's also perhaps something a little bit geeky about writing a tattoo in Proto-Indo-European (and the International Phonetic Alphabet):
HOLY MACRO!