You get just under 3 kbytes in a QR code - so there would still be sharp limits on what could be stored there - but it could certainly contain a tiny URL *and* a bunch of other data. Also, there is an issue with very small items: a maximum-capacity QR code would have cells too small to print cheaply. But a QR code that only has to contain a URL could be smaller than the current bar code (because it's 2D).
What it would take is for government to step in and require it. That's how come we have food labeling at all. They could specify the rules for what has to be recorded and how - just like they do now.
All I'm proposing is that the argument that there isn't enough room on the label for any more information is kinda silly. You only need a pointer to the information to be printed onto the label - not the information itself.
I think I said that in my last paragraph.
There is plenty of room on the label for a tinyurl.
If you were to accept that you needed a smartphone in order to read food labels (a big "IF") - then the entire label could be replaced by a QRCode which links to a page with *ALL* of the information. The actual label could then be simplified to a really simple "UNHEALTHY/HEALTHY" number going from 1..10 as proposed previously to simplify things for the 95% of people who aren't going to read anything more detailed than that anyway.
For people like you - I'd imagine that using a phone to get vitally important data that would never fit on a label is less of an imposition. Furthermore, it would be easy to have software provided for you that would allow you to scan the product and get a personalized "OK TO EAT"/"DO NOT EAT!" indicator as set by your doctor.
Come to think of it - you wouldn't even need any extra printing at all...pretty much all labelled food already has a bar-code on it - it would be simple enough to prepend a standard URL onto that number to turn it into something that a special app could use to pull all of the necessary information. Legislation to make product vendors add this information would then be simple enough.
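As a sketch of the idea (the domain and URL scheme here are invented for illustration - legislation would have to designate a real, standardized authority):

```python
# Sketch: turning an existing EAN-13 barcode number into a lookup URL.
# "food-info.example.gov" is a hypothetical domain, not a real service.
def barcode_to_url(ean13: str) -> str:
    if len(ean13) != 13 or not ean13.isdigit():
        raise ValueError("expected a 13-digit EAN-13 code")
    return f"https://food-info.example.gov/product/{ean13}"

# A scanner app would fetch this URL to pull the full nutrition data:
print(barcode_to_url("5012345678900"))
# → https://food-info.example.gov/product/5012345678900
```

No extra printing required - the app just reads the bar code that's already there and prepends the standard prefix.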
To be fair, clippy is a damned good source of interruptions.
Well...it *might* be that your radio used an IF (intermediate frequency) to decode the AM or FM encoding...
This signal is sufficiently high in frequency that it actually 'leaks' outside the radio - and, I suppose, might be picked up by a radio in a nearby car. But the IF's frequency isn't close to where you're tuning...so I'm not sure this completely explains the story.
(In Britain, there is a television licence you're supposed to pay to operate a TV receiver - and at one time the government used "Television Detector Vans" that drove around to houses that didn't have a TV license and picked up the IF frequencies that televisions inadvertently send out...allegedly, they could tell which room the TV was in - AND which channel you were watching - so the IF frequency must be different for different TV channels.)
I dunno - this is one of those stories that sounds kinda OK in theory - but I really doubt it would work in practice.
I heard you could fix that issue by putting the stereo into the freezer for a while. Allegedly this takes the memory chip down below its minimum operating temperature and erases it so the stereo boots up with factory defaults. Never tried it myself, but it's a trick that car stereo thieves are known to use.
I was working on one of those gigantic 'motion theatre' fairground rides:
This was back in the era of 286 PC's - running DOS. The software was suffering timing issues and we really needed a hardware timer interrupt - but DOS already stole all but one of them - and we simply didn't have enough.
I needed a *roughly* 1kHz interrupt to monitor some ride function or other (I forget exactly what) - so I came up with the idea of putting a bent paperclip between the RxD and TxD lines of the RS232 port and using the serial port interrupt. I'd send a character out through the serial port - and at 9600 baud, with one stop bit and one start bit the character took ~1/960'th of a second to arrive back in the serial port chip...at which point it triggered an interrupt - and I could send another byte out to make it happen again.
We used paperclips on a couple of machines as an emergency hack - but later versions used a 'dongle' plug that went into the RS232 port with a wire soldered across those two pins...this plug was named the HPE..."Hardware Paperclip Emulator".
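The timing arithmetic behind that loopback trick works out like this (a back-of-the-envelope check, not the original ride code):

```python
# At 9600 baud each bit takes 1/9600 s, and one character frame is
# 1 start bit + 8 data bits + 1 stop bit = 10 bits on the wire.
BAUD = 9600
BITS_PER_FRAME = 1 + 8 + 1  # start + data + stop

frames_per_second = BAUD / BITS_PER_FRAME
frame_time_ms = 1000.0 / frames_per_second

print(frames_per_second)        # → 960.0 interrupts per second
print(round(frame_time_ms, 3))  # → 1.042 ms per character
```

So each echoed character fires the serial interrupt roughly every millisecond - close enough to the 1kHz the ride monitoring needed.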
When you expect to get most of your revenue from selling apps in the iStore - it's essential that people are unable to get apps for free via fancy web pages.
Is this because Apple can't support WebGL? Hell no! The browser actually DOES contain code for WebGL, but it's disabled...UNLESS your web site signs up to display Apple-provided advertising banners...in which case, WebGL works great!
Safari uses the exact same core rendering software ("WebKit") as Chrome - so it can trivially support everything that Chrome supports - it's really just a matter of Apple deciding to deliberately cripple the browser to prevent people from providing apps for free.
Whenever you divide by zero, the problem ISN'T the division - it's the previous code that either assumes that dividing by this number will produce a valid result, or is doing something wrong in turn.
Checking - and somehow kludging - a divide by zero does nobody any good. You have to ask WHY you're dividing by zero and what it should mean.
I *want* divide by zero errors because they inform me that I'm doing something wrong elsewhere.
(And even if you wanted to kludge it - returning a very large number would be a better choice than zero...but don't do that).
Bottom line - if you're doing lots of div0 tests then you're doing something wrong in many other places!
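A toy illustration of the point (hypothetical function names, using Python, where division by zero raises ZeroDivisionError):

```python
# "Kludged" version: hides the real bug (an empty list reached us)
# by inventing an answer, so the caller happily uses a meaningless value.
def average_kludged(values):
    if len(values) == 0:
        return 0  # silently wrong: 0 is not the average of nothing
    return sum(values) / len(values)

# "Honest" version: let the error surface so someone asks WHY an
# empty list got this far in the first place.
def average(values):
    return sum(values) / len(values)  # raises ZeroDivisionError on []

print(average_kludged([]))  # → 0, and the upstream bug is now invisible
try:
    average([])
except ZeroDivisionError:
    print("caught: something upstream produced an empty list")
```

The fix belongs upstream - wherever the empty list came from - not at the division.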
HMD's have been around since LONG before there were 3D graphics on the PC at all. They'd been used (for example) on military flight simulators back when you'd need a million dollars of mainframe hardware to generate a 3D image. I very much doubt that any of this tech is actually new. Probably someone like Evans & Sutherland were the first to do it - and they had 3D graphics back in the late 1970's. I doubt that much of the general concept is still patentable - so this argument is probably over some kind of small feature.
Consider this...suppose you are just over a mile from the SOUTH pole. You walk a mile south - and now you're maybe a hundred feet from the South pole. Then you turn west and start walking...around and around in a tiny 100 foot radius circle centered on the pole. When you've finally clocked up a mile - you turn and head North again...where do you end up?
Well, the answer depends on the exact circumference of the circle that you walked around. Generally, you'll end up someplace very different from your starting point...BUT if that circle is an EXACT sub-multiple of a mile - then you'll end up precisely where you started.
So...the North pole is clearly NOT a unique answer.
Furthermore - the north pole is only ONE answer. My approach reveals an infinite number of possible answers:
1) You could have started ANYWHERE that's at the exact right distance from the pole - so anywhere on that circle will do...an infinite number of starting points will work.
2) Note that ANY exact sub-multiple of a mile will do - so with mathematical precision, there are an infinite number of sub-multiples of a mile - and hence an infinite number of distances from the pole where you could have started.
Truly - the "North Pole" example exhibits very little lateral thinking... if that was your answer then you **FAILED** the Musk test...which (I'm pretty sure) is the whole point here.
The original version of the story is that a hunter walks a mile south, a mile west, shoots a bear, then walks a mile north to return to his starting point. What color was the bear?
Since there are no bears at the south pole - and only polar bears live anywhere near the north pole - the starting point must be the north pole and the correct answer is "WHITE!"....but Musk isn't asking *that* question...he's trying to trick people into jumping to a false conclusion without stopping to think about it.
-- Steve Baker
I don't know about you - but there are two parts to my job...thinking and doing.
The doing part is easy enough to segregate...If I'm sitting at my desk at work "doing"...typing in code, debugging, documenting, etc - then clearly that belongs to my employer and I have no right to be "doing" anything that I'm going to have control over outside of work.
But thinking is near impossible to segregate. I may well be thinking about solutions to my employer's problems as I commute, or as I'm fritzing around with something else at home...and it's impossible not to have an idea for an outside-work project pop into your head while you're trying to come up with a solution to something that's work-related.
In my opinion, the inability to segregate work-thinking from home-thinking means that I shouldn't try. In my mind, I'm paid for the 'doing' part during office hours - and whatever 'thinking' is required in order to get the 'doing' done. 'Doing' that gets done on my own time is mine - as is whatever thinking went into making it happen. When I think of something that relates to my job - it belongs to them, even if I come up with it at 4am in a flash of dream-inspired wakefulness. And if I come up with something that would make a great off-time project while I'm waiting for my code to compile at work - then that's my idea and it's nobody's business when and where I came up with it.
The only requirement to make that work is a clean separation between the kinds of things I'm paid to do and the kinds of things I do for myself - but since "a change is as good as a rest", there is a natural tendency for me to do very different things in my off-time anyway. If you find that you have a fuzzy grey area in there - then you'd better lawyer-up and make sure everyone has a crystal clear idea of where the "doing" boundaries lie and that the "thinking" boundaries don't exist.
The real question is how long the reboot time is relative to the glide duration from 30,000 feet?
There are a significant number of 'missing features' in the free version of Unity3d...for example, render-to-texture. That's a pretty serious omission for any kind of serious software development - so the $1500 (or $75/month with a 2 year commitment) is necessary if you are really serious about game development. In a typical game company, $1,500 is roughly the salary of one programmer for a week. So over the life of any reasonable commercial game, the cost of buying a full license for each worker is essentially negligible.
What the free versions do is to enable indie studios to grow to the point where they can afford to pay for a game engine - and to get amateur game developers to grow interest, loyalty and expertise in a particular free engine that will hopefully translate into sales of the professional version when they become paid game developers in the future. But there are enough annoying road blocks that even an amateur developer may be tempted into buying (or renting!) the full version after running into a few of them.
It's a good model, and I hope it grows and continues.