Considering how few PPC apps are in use these days, it seems logical.
I've only been a Mac user for a few months, but I've never seen a PPC binary, except for the one 'hello world' universal binary I made just to see what would happen.
Never run Quicken for Mac, I guess (except perhaps for a pre-release of Quicken Life for Mac^W^W^W^WQuicken 2010 for Mac):
$ file /Applications/Quicken\ 2007/Quicken\ 2007/Contents/MacOS/Quicken\ 2007
/Applications/Quicken 2007/Quicken 2007/Contents/MacOS/Quicken 2007: header for PowerPC PEF executable
Not only is it not fat, it's not even Mach-O....
Replace your dog with a goat or a sheep. Now you have a pet that feeds itself in the yard, and you don't need to buy (or run) a lawn mower anymore.
Macro virus? Really? How long ago was that? I don't think I've seen a macro virus in a decade or so. Not doubting your statement, I'm just curious.
I have to disagree. I'm 24, and I remember growing up with The Simpsons. It is, or was, an amazingly layered show. As a kid, the focus was on Bart and Lisa, but growing up with the show, you start to notice the adult storylines, slowly understand them, and find humor in or empathize with the characters' points of view and stories.
To me, this has always been the mark of great art. Enjoyable by and appropriate for a child on one level, and then with deeper layers for more mature viewers that keep it interesting at all ages. Pixar are past masters at this. Take their latest film, Up. From a kid's point of view it's YAY BALLOONS YAY SOUTH AMERICA YAY TALKING DOGS, but it's also an incredibly poignant look at the life of an old man desperately trying to salvage the lost dreams of his youth.
They actually claimed that the 2000+ chip was equivalent to what a 2GHz Thunderbird (their older core) would have delivered. That it was also similar to a 2GHz P4 was left to the buyer's imagination.
Though the 2000+ Athlon XP and the 2GHz P4 were in fact quite similar.
And now neither company uses clock frequency in its advertising, since all clock frequency can tell you is whether a given chip is faster or slower than other chips in the same family, and model numbers can do that too (an Opteron 275 is faster than a 270, for example).
First off, let me admit that I have not yet read the book reviewed here, but from the review it sounds like it is targeted mainly at the "new to programming" crowd.
I started my Android development career by reading Mark Murphy's "Busy Coder's" books, and got a lot of detail out of his tutorial book.
I'm not affiliated with him, but I'd really like to recommend his books to any developer who has an existing background in Java and wants to quickly get productive in Android development.
As an additional bonus, all of Mark's books are available electronically or as self-published printed paperbacks.
He himself is also a great guy and very active on the Google Android developer forums.
Yes. You can find all my work on arXiv.org. When you submit there, you have to choose the license under which the arXiv may distribute your work, and two of the options are Creative Commons licenses. We submit our articles to the arXiv first, and then to journals. This means that by the time the journals receive a paper, they know the content is already out, and they're not going to get an exclusive distribution deal in any case. There used to be a "preprint" system in which major labs would physically mail recent articles around; the WWW started at CERN as an outgrowth of this idea.
Everyone should use the arXiv. There are sections for many scientific disciplines, though far from all of them (all it would take to start a new one is a request). In many other fields (medicine, computer science), the journals exercise draconian control over the scientists, and that needs to stop. They work for us, not the other way around.
But I'm talking about journal articles; I haven't written any books, and don't really intend to. For scientific journals, I agree copyright is a useless hindrance. It was a nice tool when distribution was expensive, but now that the marginal cost of distribution has gone to zero, publication is better done with a central government grant and open access. Now we're just missing a peer-review/referee system attached to the arXiv. When that happens, the journals will die for good.
You should try out a Kindle for a month. I think there's a no-questions-asked return policy within 30 days anyway.
I have always been a big physical-book person and was worried about buying the Kindle, but I have to say it was worth every cent.
The thing is that the eInk screen is completely different from any other screen you're used to. It has high contrast, roughly the color of a page of paper, and, most importantly, low glare and eye strain. I've taken my Kindle to the beach in full sun, and it was actually easier to read than your usual magazine. It was even nicer given that the wind was kicking up and I didn't have to screw around with trying to hold my pages down. You can read it for hours with no problem, the battery life is incredible, and you can download any Kindle book in the store on the spot (provided you have mobile reception). That was fantastic when I was about to board a flight recently and realized I didn't have anything to read: in the time it took them to call boarding and take my ticket (5 minutes), I had gone online, downloaded a book, and was set for the 4-hour flight, with a book selection that is obviously bigger than any airport bookstore's. Things like that are priceless.
Give it a try.
Firewalls are capable of providing all of the benefits of NAT (transient traffic-flow approval instead of mapping, for example, or blocking traffic not originated from the LAN) save obfuscating the source address. And obfuscating the source address isn't particularly relevant from an attack perspective, given that the entire LAN is still protected by the same firewall process, NAT or not.
For example, you could NAT your LAN in 192.168.10.x space behind IP 18.104.22.168.
Or, if you use IPv6 for your LAN, let's say you are allocated 1:2:3::/112. No need to NAT it, so you just firewall behind your gateway, say 1:2:3::4. You connect to shady.com port 80 with source [1:2:3::101]:2000. The firewall doesn't have to allocate a damned thing for you; it simply records the flow [1:2:3::101]:2000 <-> shady.com:80 as established from within the LAN and thus authorized. Shady sees all the traffic coming from [1:2:3::101]:2000, but that's not relevant, since all access to 1:2:3::101 is still mediated by the firewall at gateway 1:2:3::4. Shady.com can port scan 1:2:3::101 if it likes, but it won't see any open ports if you only allow LAN-established traffic, or else it sees your whitelisted ports for that IP only (instead of for your entire LAN). Just as in the IPv4/NAT scenario, keep your open ports secure.
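The flow-tracking logic above can be sketched as a toy table. This is an illustration only, not a real firewall; the addresses and names are taken from the scenario, and the flow record is simplified to a 4-tuple:

```python
# Toy sketch of stateful-firewall connection tracking: record flows initiated
# from the LAN, allow matching return traffic, drop everything else.
# Unlike NAT, nothing is rewritten and no address/port is allocated.

established = set()  # entries: (lan_addr, lan_port, remote_addr, remote_port)

def outbound(src, sport, dst, dport):
    """A LAN host opens a connection; the firewall just records the flow."""
    established.add((src, sport, dst, dport))
    return True  # forwarded with the source address intact

def inbound(src, sport, dst, dport):
    """Inbound traffic is allowed only if it matches a LAN-established flow."""
    return (dst, dport, src, sport) in established

outbound("1:2:3::101", 2000, "shady.com", 80)
print(inbound("shady.com", 80, "1:2:3::101", 2000))  # reply traffic: True
print(inbound("shady.com", 80, "1:2:3::101", 2001))  # port-scan probe: False
```

The port scan from shady.com falls through exactly as described: no matching flow, no open port.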
As you can see, source-IP obfuscation provides no meaningful advantage to the end user in this scenario. If anything, IPv6 users who feel like they want NAT-like behavior could have the firewall choose random source addresses, as well as random source ports, out of their allocated prefix.
Still, the major drawback to be avoided with NAT is that it breaks the globally unique address space and complicates inbound connection access, which will become a growing part of popular network policy over the next few decades. One thing BitTorrent teaches us is that "the server" will less and less frequently have resources comparable to the "client swarm," so crowdsourcing the heavy lifting (from distribution to content creation to editing to caching) becomes vital to any scaling strategy worth its salt. The hub-and-spoke communication model is slowly eroding in the presence of more sophisticated, decentralized many-to-many connection models.
NAT reduces a peer to a "consumer" that can fetch data but never re-offer it without convoluted port-forwarding messes. Entire LANs are limited to one named service per outbound IP, unless one wishes to mess with what port services are offered on, further complicating the job for other firewalls and participants in the content network.
You'll know what I mean if you've ever tried to configure mobile SIP access. Half the time you are behind a NAT, and you'll never know in advance if it's full cone, symmetric, or just somehow pathological. Sometimes you are nested within multiple NATs which each behave differently!
Some legacy UDP protocols I've worked with need to make connections to thousands of remote IP addresses over multiple, highly transient port mappings, which brings NAT mapping tables to their knees. In a firewall-only environment, it's easy to whitelist swaths of ports for clients; the gateway then need not maintain tables for related traffic, yet can continue to protect unrelated ports, unlike with a SOHO DMZ.
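To make the table-pressure point concrete, here is a toy comparison; every address, port number, and count below is invented for illustration. A symmetric-style NAT accumulates one mapping per (internal endpoint, remote endpoint) pair, while a port-range whitelist on a plain firewall is a single static rule with no per-flow state:

```python
# Illustrative contrast: per-flow NAT mappings vs. one static firewall rule.

nat_table = {}           # (client, cport, remote, rport) -> public port
next_public_port = 20000

def nat_map(client, cport, remote, rport):
    """Symmetric-style NAT: one table entry per remote endpoint contacted."""
    global next_public_port
    key = (client, cport, remote, rport)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return nat_table[key]

ALLOWED_UDP_PORTS = range(30000, 40000)  # hypothetical whitelisted swath

def firewall_allows(dport):
    """One static rule; the gateway keeps no per-flow table for this traffic."""
    return dport in ALLOWED_UDP_PORTS

# 5000 remote peers, each spoken to on two transient ports:
for i in range(5000):
    for rport in (5004, 5005):
        nat_map("192.168.10.5", 40000, f"peer-{i}", rport)

print(len(nat_table))          # 10000 NAT entries kept alive...
print(firewall_allows(30001))  # ...versus True from a single static rule
```

The NAT table grows with every remote endpoint, and every entry has to be timed out and garbage-collected; the whitelist costs the same no matter how many peers the protocol touches.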
To sum up, NAT is not only a band-aid; it's already pulling at our short hairs.
They probably wrote it using a combination of Emacs macros and LaTeX.
Ohh, so the same way my parents created me.
It can't hurt, and the price is certainly right.
The price is right only if your time is free. The price of the entire Adobe suite is less than a few days of billable work.
"When the going gets tough, the tough get empirical." -- Jon Carroll