Submission + - The iPhone As Camera... Where To Now? (utah.edu)

BWJones writes: "Many non-photographers, and even photographers (particularly working professionals), are accustomed to looking down their noses at cell phones as cameras, but if you look at the market, all of the innovation in photography over the last couple of years has been happening with smart phones. Sure, camera sensors have gotten better and less noisy, but convergent technologies are primarily appearing in the smart phone market, not the camera market. On top of that, statistics show that the most common cameras are now cell phone cameras, the iPhone in particular. Flickr reports that, as of this posting, the Apple iPhone 4S is the most popular camera in the Flickr Community. If you add in the iPhone 4, the large upswing in the newly available iPhone 5, and the now-waning iPhone 3GS, the iPhone platform has a huge lead in the number of cameras people are using to post to Flickr."

Comment Re:Wow (Score 5, Insightful) 471

I had presumed that Apple wanted to have tight control over the lightening connector - that is to say, they wanted to maximize their profit - but geesh!

Way to act like Veruca Salt!

In what way? Their terms for licensing the "lightening [sic] connector" are well known, and this project started before the iPhone 5 was even released. Somehow it has become a deal breaker for the project, despite the connector not being officially announced when the project began.

Now the project owner has thrown his toys out of the pram because apparently the built-in USB ports on the device will simply make it totally useless and non-viable, because Apple denied them a licence for a connector that didn't exist at the start of the project.

Apple didn't "kill a kickstarter project" - the originator of the kickstarter project killed a kickstarter project.

How biased do you have to be to post this? You keep saying over and over again that the connector was not announced when the project was announced. So what? The connector exists today. Apple denied them a license because they do not want their connector to coexist with another connector; in their special universe, only Apple products exist.

This level of arrogance is staggering. On top of that, you are not only supporting their arrogance but also trashing a bunch of guys who just wanted to make a simple combo connector. Dude, that is pathetic. Apple is a company that makes some great products but is also filled with hubris, and I don't know why you can't let those two thoughts coexist in your head.

Comment Re:Good old Slashdot (Score 1) 78

Man, you are clutching at straws,

How so?

The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs?

You do realise that nested replies are replies to parent posts, not the original story, right?

I claim that Intel do charge a hefty premium for ECC, which is why the comment is relevant. AMD do not, as can be witnessed by cheap mid-range desktop CPUs supporting ECC. In other words, you can use cheap AMD CPUs for server-grade tasks. Because AMD don't charge a premium for ECC and Intel do. Because for Intel, you need to fork out for a low-performing Xeon, which will be more expensive than an equivalent AMD desktop processor by a long way. And you can use the AMD desktop processors for servers. Because they support ECC, cheaply, unlike Intel ones, which don't. Got it yet?

Even if you put aside the fact that this is supposed to be server RAM, an extra 12 bucks sounds like a "hefty premium" to you?

I don't believe you. Why don't you paste a link? Oh look, now that you've pasted it, go back and read it really carefully. Go on, read it again, but carefully this time. You will see that, surprise, it is NOT Intel you're buying the RAM from; in fact, Intel don't even sell RAM.

You at least expect a certain standard when it comes to snarkiness

As requested, I've upped the level of snarkiness.

I can't make head or tail of what you are trying to say.

For the record, I'm not trying to be snarky *at* you or asking you to be - my comment was about the OP's comment being lame - which it was.

Yes, I agree with what you are saying about AMD, and definitely, AMD offers and has always offered better value for money than Intel. That is indeed their USP and how they compete. And it is a good thing for average customers like you and me.

My point was that this is a dedicated server CPU so ECC is to be expected. In fact, a snarky comment would have been appropriate if Intel had *not* supported ECC.

As far as the price goes, I simply searched Google for 8GB ECC RAM and 8GB RAM, and verified the Newegg price, which was the first search result.
If you really don't want to believe, here are the links:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139262
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231297

And yes, I know Intel doesn't make RAM, and neither did I claim that they did. So your comment above is quite puzzling, and I don't understand what I need to re-read *carefully*. It was in response to your previous comment, "It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay a lot for it."

I was trying to say that paying an extra 12 bucks for ECC RAM isn't much, so what's your point?

Comment Re:Good old Slashdot (Score 1) 78

Sole "editorial" contribution (and I use that word loosely): a silly and irrelevant snarky comment.

Actually, it's neither silly nor irrelevant.

It is quite significant that the Atom CPUs support ECC memory, and Intel do make you pay a lot for it. AMD supports ECC memory on mid-range desktop CPUs and above, whereas for Intel, you have to fork out for the Xeon brand and pay a very hefty premium.

Damn, but Slashdot is a sad place these days.

Then leave and demand your money back.

Man, you are clutching at straws, just like the OP did with his snarky comment about ECC. The title itself says that this chip is targeted towards high density servers and you compare this to AMD's desktop CPUs? By what stretch of the imagination is ECC not relevant to a server CPU? In fact, it would have been noteworthy if Intel had cut corners and just rebranded their mobile Atom CPU without even adding ECC support.

And Newegg sells 8GB ECC RAM for 52 bucks vs 40 bucks for non-ECC RAM. Even if you put aside the fact that this is supposed to be server RAM, an extra 12 bucks sounds like a "hefty premium" to you?

And yes, for the record, the comment was not just biased (which is okay since this is /.) but pathetically lame. You at least expect a certain standard when it comes to snarkiness. I mean, OP could have pointed out that this chip only supports up to 8GB of memory, which is actually a significant drawback considering this is a 64-bit chip.

Comment Re:I read the article (Score 1) 536

That's a great question. You start to need a concept of transactions and rollback in more places. Databases already have this. Journaling filesystems already do this to an extent. (Btrfs is actually copy-on-write, so you could theoretically roll back to an older version as well.)

I'm not saying you can do this everywhere, but I think it's a strategy that can find a home in many places.

Comment Re:There is no such thing as an error. (Score 2) 536

Speaking of iOS: Are you saying that if the battery is low, the phone should shut off without warning (saving all data), rather than give a few warnings as the battery gets low? The no-error-alert paradigm is just stupid.

My car warns me when it detects a failure, and I think it's no failure of software designers if they also warn me when things are amiss. I'd hate it if my car just tried to "handle" low fuel, low oil pressure, low tire pressure, or what-have-you, as about the only thing it could do for any of those is just stop. iOS devices are in a similar circumstance with low battery.

Are you still of the opinion that there should never be an error alert unless it's the programmer admitting some sort of failure? "I failed to program an infinite capacity battery."

Comment Re:I read the article (Score 1) 536

Make forking exceptionally cheap, and move to a checkpoint-and-commit paradigm. Fork just before the first open(), go acquire all your resources (open(), malloc(), etc.). Depending on whether all that succeeds or part of that fails, you know which thread to kill. Kill the thread that did the open, etc. if that path failed, otherwise kill the thread that's waiting at the last checkpoint.
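To make that concrete, here's a minimal sketch of the fork-at-a-checkpoint idea using ordinary POSIX calls (my illustration, not from the parent post; the file name, buffer size, and fallback are made up). The parent sits at the checkpoint while the forked copy speculatively acquires its resources, and whichever copy ends up on the losing path goes away:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();                 /* checkpoint, taken just before the first open() */
    if (child == 0) {
        /* Speculative path: acquire everything up front. */
        int fd = open("input.dat", O_RDONLY);
        void *buf = malloc(1 << 20);
        if (fd < 0 || buf == NULL)
            _exit(1);                     /* acquisition failed: this copy dies */
        /* ... real work with fd and buf ... */
        free(buf);
        close(fd);
        _exit(0);                         /* success: this copy "commits" */
    }
    int status = 0;
    waitpid(child, &status, 0);           /* the checkpointed copy waits */
    if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
        return 0;                         /* commit succeeded, so the waiting copy can go away */
    fprintf(stderr, "speculative path failed; back at the checkpoint\n");
    return 1;
}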

If that sounds at all familiar, it should. Most modern CPUs already do this in hardware. It's called speculative execution, and they do one of these forks at nearly every branch.

Comment Re:There is no such thing as an error. (Score 1) 536

Hmmm... so if I ask a program to read a file that doesn't exist, should it just create an empty document of that name? Possibly the right answer for a word processor, but quite probably the wrong answer when specifying an attachment to an email.

I think that statement needs to be clarified: An internal error alert pop-up that could happen without a hardware failure is an admission of failure on the part of the programmer, no doubt. But, if the user truly is in error, there's nothing to admit on the programmer's part when they tell the user they're wrong. The program still has to check the user, though.
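To put a sketch behind that distinction (mine, with made-up names): a missing attachment is the user's error and gets a plain message, while a broken internal invariant is the programmer's failure and trips an assert.

#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* User-facing check: a missing attachment is the user's mistake,
   so report it plainly and refuse to continue. */
static int attachment_exists(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        fprintf(stderr, "Attachment not found: %s\n", path);
        return 0;
    }
    close(fd);
    return 1;
}

static void send_message(const char *body, const char *attachment) {
    assert(body != NULL);                 /* internal invariant: a NULL body is the programmer's bug */
    if (!attachment_exists(attachment))
        return;                           /* the user was checked, and politely told no */
    /* ... actually send body with the attachment ... */
}

int main(void) {
    send_message("Hello", "report.pdf");  /* both arguments are illustrative */
    return 0;
}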

Comment Re:Simple... (Score 1) 536

...write code without error.

How do I find users who won't give it incorrect or inconsistent input, or hardware that won't fail unexpectedly? I don't program the users, and I've yet to find 100% reliable hardware that never wears out.

"Never test for an error condition you don't know how to handle." -- Steinbach's Guideline for Systems Programmers.

Ah, that's a bit different. It's advice about the level at which you should place your error checking. For example, if you do "fd = open( ... )", you probably should at least check the file descriptor, since you know you can't proceed without it. But you don't know how to fix it, so pass it up. If a subsequent call to "write( )" fails, on the other hand, most of the time you don't actually care. (In the event you do care, because it's critical that your write succeed, then check. But I argue that in most cases it's OK to let the write silently fail; folks will notice their disk got full in other ways.)
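As a rough illustration of that placement (my sketch; the log-file scenario is invented): check the open() you can't proceed without and pass the failure up, while deliberately shrugging off a failed write().

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 0 on success, -1 if the file could not be opened.
   We don't know how to fix a failed open() here, so we pass it up. */
static int append_line(const char *path, const char *msg) {
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) {
        fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
        return -1;                        /* can't proceed without the descriptor */
    }
    /* Deliberately unchecked: if the disk is full, folks will notice
       in plenty of other ways. */
    (void)write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}

int main(void) {
    return append_line("notes.txt", "hello\n") == 0 ? 0 : 1;
}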

Comment Re:People just doesn't get it (Score 1) 536

Yes, you really do care. Once you've started using exceptions for normal things, you quickly find your program will be throwing the buggers all the time. In many server applications you'll be getting 3 or 4 exceptions per request (I see this even in the Microsoft code that you have no control over).

Wow, that sounds like the programming equivalent of bumper bowling.

Comment Re:People just doesn't get it (Score 2) 536

I happen to agree that exceptions should be left to exceptional events—something entirely outside the scope of the algorithm—and not something that seems well within the purview of the task at hand. For example, when validating data sets, which is something it sounds like your code does, detecting and handling an invalid datum sounds like the code's raison d'etre. An exception here would be ridiculous.

Another example might be a parser of some sort (for example, in the compiler itself). If it detects an error in the input, that's not an exception. It's a well defined state transition in the parser. Exception handling is not error handling in the general sense. It's only for errors for which the right course of action isn't even knowable except perhaps a couple levels up, and is unlikely to happen much in practice.
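For instance (a sketch of the shape, not any particular parser), bad input comes back as an ordinary value and the caller treats it as routine control flow rather than catching anything:

#include <cctype>
#include <cstdio>
#include <string>

struct ParseResult {
    bool ok;
    long value;
    const char *error;   // set when ok is false
};

// Bad input is an expected outcome, so it is returned, not thrown.
static ParseResult parse_uint(const std::string &in) {
    if (in.empty())
        return {false, 0, "empty input"};
    long v = 0;
    for (char c : in) {
        if (!std::isdigit(static_cast<unsigned char>(c)))
            return {false, 0, "unexpected non-digit"};
        v = v * 10 + (c - '0');
    }
    return {true, v, nullptr};
}

int main() {
    ParseResult r = parse_uint("12a4");
    if (!r.ok)
        std::printf("parse error: %s\n", r.error);   // a normal state transition, no exception in sight
    return 0;
}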

For example, if you have data structures that automatically resize to fit whatever your program needs, and you hit an OOM situation, what then? There's likely to be no good way forward. You want to unwind far enough that you can leave things in as consistent a state as possible, and otherwise probably just crash with a hopefully-useful error message. Depending on the nature of the program, "crash" could mean widely different things here. If it's a command-line program or a restartable system service, "error message and exit" is probably the best thing. If it's a GUI environment, if you can close the document or whatever triggered the blow-up and free the resources, that's probably a better idea.
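In the command-line case, that "unwind far, print something useful, and exit" shape can be as small as this (a sketch; run_tool() and the allocation size are stand-ins of my own):

#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

// Stand-in for the program's real work; reserve() may throw std::bad_alloc.
static int run_tool() {
    std::vector<double> samples;
    samples.reserve(100000000);   // deliberately huge, to mark where OOM could bite
    /* ... real work ... */
    return 0;
}

int main() {
    try {
        return run_tool();
    } catch (const std::bad_alloc &) {
        // Nothing sensible to retry from up here: report and bail.
        std::fprintf(stderr, "fatal: out of memory\n");
        return EXIT_FAILURE;
    }
}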

Drifting topics here...

I'm fortunate that most of the programming I have to do is best served by Perl's "die", "croak" and "carp" functions. Usually, whatever error my scripts encounter is best handled outside the script, because the error is either bad input, or a bug in the code. Neither of those can really be handled within the script itself. So, we pop out an error message and a stack trace and say "here, you fix it."

Before you say "you're putting an undue burden on your users," most of this code is developed for ourselves to use. I work with a chip design team, and we manufacture and eat our own dog food. Just as the scaffolding around a skyscraper-under-construction doesn't need to be ADA compliant ("What, no wheelchair ramp up here?"), our scripts for internal use don't need quite as much polish as something we'd ship as a product. :-)

That said, I do also work on code that I intend other less-technical people to use. It takes considerably more work to make that code bulletproof and friendly. I'd say more work goes into polish than into the core algorithms.
