That one-billion figure doesn't sound as impressive once you consider that it's fairly likely mostly obtained by counting every Android install that comes bundled with Chrome. I'd be shocked, just shocked, if Google does NOT count someone who used Chrome a few times before installing Firefox mobile. Like me, for example. I hardly ever use Chrome on my Googlephone, but I'm sure I'm counted in that billion-plus figure.
This is sad. Why not cite the original Brian Aldiss short story, instead of Spielberg's abomination that had very little to do with the original content? In the movie, the super-toy was just a minor sub-plot.
I've grown enough to figure out how to work-around Gmail's piss-poor filters. Too bad you can't claim the same.
And you should've known that, if your comprehension skills are at least at an 8th-grade level, since I mentioned that in my initial post.
Grow up. I have no competing mail filtering service to advertise through a Slashdot link.
The norecruitingspam guy himself spammed news.admin.net-abuse.email a few days ago with this. All he's offering is an email filtering service that blacklists the Jobdiva spambags.
He posted his screed in a Usenet thread that I started over five years ago, which is archived by Google and apparently ranks pretty high when someone searches for more information about all the spam they're getting from the Jobdiva spam factory. Over five years ago I happened to notice that every recruiter spam I received turned out to have come from the Jobdiva spam factory. Ever since then, once or twice a year, someone finds that thread in Google Groups and posts a "me too" to the Usenet group. Which I find pretty funny.
After I figured out where all my recruiter spam was coming from, it was a simple matter of adjusting a few settings on my mail server, and, poof!, it was all gone. Originally I never thought much of it; I only posted the first message in that thread as a means of sharing my findings, nothing more. But apparently someone else has now discovered effective email filtering and thinks it's the greatest thing since sliced bread. As Benny Hill would've said: biiiiiiiiiiiiiiiiig.... deal.
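For anyone wanting to do the same: the post doesn't say which mail server is involved, but on Postfix, for example, a header check along these lines would do it (the file path and the exact pattern here are illustrative, so adjust to taste):

```
# /etc/postfix/header_checks
# Enable in main.cf with:  header_checks = regexp:/etc/postfix/header_checks
/^(Received|From|Reply-To):.*jobdiva\.com/   REJECT recruiter spam
```

After editing, run postmap-free regexp tables just need a `postfix reload` to take effect.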
One good thing here: now that he's got a good link from Slashdot, and presuming his web site is still up (haven't checked), since that site now consists mostly of a big rant against the Jobdiva sleazebags, this will shine a brighter spotlight on those vermin. Sunlight is the best disinfectant.
That's strange. My WNDR3700v3 is rock stable. The only time it goes down is when I lose power, once or twice a year. The router is always busy: various members of my household are constantly streaming videos, and I've got laptops, i-devices, and Android devices pinging the intertubes constantly. Everything works. I don't use "device discovery", whatever that is, though.
Cut the guy some slack. He simply wanted to fly to D.C.; he had his own gyrocopter, and he really didn't feel like having his nuts groped by the TSA.
Can you really blame him?
Oh geeeee..... The only good thing about "Lolani" is that it's a perfect remake of a classically awful ST:TOS episode. It's a perfect homage to "Spock's Brain" and "The Savage Curtain".
So from that viewpoint, it's a great episode. I really enjoyed watching it, but only for the artistic value of a faithful recreation of a botched ST:TOS time filler. Really, I'll take "Fairest of Them All", or even "Pilgrim of Eternity", over "Lolani" any time. And I do appreciate seeing The Incredible Hulk himself in full-body green makeup, but that can't make up for the awfulness of the rest of the episode.
I'm genuinely curious. Can someone explain how Mrs. Clinton could use her personal email for official state business, and NOT break half a dozen laws and rules?
So, what's with the "possibly" stuff?
Please read up on what POSIX is first. It's what guarantees that your software will be portable, which is part of the foundation UNIX is built on.
Yes, POSIX is important. But as with any standard, it defines the least common denominator. Couple that with the fact that POSIX hasn't been updated in years, and you have to target the least common denominator from more than five years ago (I think even longer). That's an eternity in IT. A standard is fine, but it shouldn't stop you from playing to your strengths.
Systemd's authors argue that an init system is closely related to the kernel and should make all the fancy kernel features available to user space. There is plenty of precedent for this in commercial UNIX variants, by the way: many ship init systems tailored to the specific strengths of their kernels. I don't see that as a bad thing. So far I'm not aware of anybody in the BSD camp even wanting to port systemd. At least the FreeBSD developers said they wanted a modern init system too, but they're going for something that plays to the strengths of their own kernel. So why should systemd bother being portable to OSes that want to come up with their own solution?
That the BSDs require some compatibility layer is nothing new, either. There is support for Linux-style interfaces there already.
There are projects on the BSDs as well that are non-portable: LibreSSL and OpenSSH from OpenBSD spring to mind. Those use interfaces from the BSD kernels. There are separate porting projects that bring those code bases over to Linux, and LibreSSL actually prompted the introduction of a new kernel API into Linux.
I see nothing bad in targeting specific platforms whatsoever. Yes, I do think POSIX is important: if you can do something with POSIX, use that. If not, use something else. And when in doubt, target one platform and let the people who care about other platforms port the stuff.
Usually, "epoxy" around the edges of a BGA chip is neither an anti-hacking attempt nor a light-proofing attempt. It's called underfill, and its chief purpose is to increase mechanical strength and make the bond more durable than tiny bare solder balls would be on their own.
Yes, they are. Most multimedia processing is parallelizable and thus benefits greatly from SIMD instructions; just about every CPU-based video codec ever is proof. If you want an actual example, I wrote a high-performance edge detection algorithm for laser tracing, with its convolution cores written in optimized SSE2 assembly, and I'm hoping to write a NEON version. It'll never run reasonably on the original Raspberry Pi, because without SIMD it's too underpowered (I didn't even bother writing a plain C version of the cores; honestly, any platform without SSE2 or NEON is going to be too slow to use anyway).
Obviously you can use SIMD instructions for a lot more, but multimedia is the obvious example. And as I mentioned, the Pi makes up for it only for standard codecs, via its GPU blob decoder, but that doesn't help you with anything that isn't video decoding (e.g. filtering).
ESP8266 only became a "thing" last year, so the community is still growing. But the manufacturer is cooperating and is releasing open SDKs, and the hobbyist community is enthusiastic about it. I personally intend to use a bunch of them to automate things around my apartment, so I guess I'll find out just how good/bad it is.
That's for developing on the ESP8266 core itself - if you just want to use the default firmware, plug it into your existing microcontroller platform (e.g. Arduino) and you get wireless connectivity and a TCP/IP stack (running on the module) with some trivial AT commands. Not as cheap since you're still using a separate core as the main app host, but still a really cheap way to add WiFi to something.
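For a flavor of those AT commands, a typical session with the stock Espressif AT firmware looks roughly like this (from memory; the exact command set varies between firmware versions, and the SSID, password, and host here are placeholders):

```
AT+CWMODE=1                           station (client) mode
AT+CWJAP="myssid","mypassword"        join the access point
AT+CIPSTART="TCP","192.168.1.10",80   open a TCP connection
AT+CIPSEND=5                          announce 5 bytes, then send them
```

Each command gets an OK/ERROR reply over the same serial line, so even a tiny 8-bit host can drive it with a few string compares.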
There's a difference between established industrial designs, where there is an argument for maintaining compatibility and an existing codebase, and hobbyists, who can quite happily move up the chain and are always looking for cool new stuff in other respects. Even in product development, some companies go out of their way to use ridiculously outdated, expensive chips. That usually only flies in non-consumer applications, where they can afford to throw more money at a chip vendor to keep making outdated chips at outdated prices (which sometimes even rise); for consumer products, the competition will undercut you by using newer, cheaper chips if you don't. For hobbyists, it actually pays off to upgrade: you get better toolchains (no need to deal with all the ROM/RAM/pointer-type shenanigans of AVRs on ARM), better debuggability, etc. Of course, that doesn't mean you should jump onto any random chip - the toolchains and ecosystems vary wildly in quality - but it's a shame that so many people just stick with the old instead of trying something new.
There's nothing wrong with the Tiny series - little 6- and 8-pin chips are still the market where AVR/PIC make perfect sense, and I'll be the first to admit that I've used a PIC12F629 as a dual frequency generator in a project. But as a flexible platform for hobbyists, I'd much rather have a Cortex-M3 over an ATmega. Back when I was using PICs more often, my approach was to, every few years, re-evaluate my personal selection of PICs. I'd go through Microchip's (extensive) part database, look at the prices, and see if anything caught my eye, then order some samples. My 8-pin of choice used to be 12F508, then 12F629. For 18-pin I went from 16F84 to 16F88. 28-pin, 16F876 to 18F2520 and 18F2550 for USB. 40-pin, 16F877 to 18F4520 to 18F4550 for USB. I tried dsPIC at one point but didn't like it; by then ARM was picking up steam and it didn't make any sense. I haven't really looked at their line-up in a while, since I've mostly moved on to other chips for interesting stuff and stick to my old PICs for small quick/dirty hacks since I have a bunch in my drawers to get rid of, but you get the idea. It never made any sense to me to get stuck with one particular obsolete part or range.
Yup, all the other aliexpress pages I was looking at for the same phone said MTK6517, and I didn't notice that the one that I chose was different (I was just going for the lowest price, though the difference was a few bucks). Turned out to be the more accurate one it seems, since it matches the actual device that I have.
A7 is actually decent. It's low-end (as far as ARMv7 application processors go) but reasonably modern (late 2011, which isn't too bad). Nobody's asking for a bleeding-edge CPU in something like the Pi, but a 2002 vintage core wouldn't have made any sense.