I was pretty damn impressed: for pretty much every everyday object I searched for, I got back pictures with that object in them, very accurately. Screw manually tagging pictures (all the rage a couple of years ago); a computer just goes through and classifies them. Everyone else didn't see it as a big deal -- "so what, they figured out a light brown shed" -- without really realizing the sheer amount of computing horsepower and sophistication that went into something like that.
Now, if you've been thinking about implementation details, you realize that the fundamental question is: "how do I know that I'm at half power instead of full, or that my phase has changed?" Well, there's basically a synchronization period -- you listen to the stream long enough to figure out where you are. Some streams also send synchronization patterns periodically. The next issue is "what happens when my signal fades, or my signal bounces and the phase gets screwy?" The answer is algorithms and multi-hypothesis guesses as to how the channel medium is acting. Lots of math there, but no matter how good you get, more tightly packed schemes are going to be more vulnerable to things like signal fades, and they also take more time to get back up to speed, because you need more symbols flying by to sync up to where you are. But you can push them at a higher rate, so you gain some of that back. You end up with a constellation that you synchronize to, and then, to make it more complex, Fourier tells us that the bigger the phase/amplitude change per bit period, the more bandwidth you occupy. So subdividing the phase/amplitude actually helps you occupy less bandwidth in general, and you can get trickier still with an adaptive constellation that minimizes the amplitude/phase change for each bit set transmitted, making you occupy even less bandwidth. But that's one more thing for the receiver/transmitter to keep in sync....
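To make the constellation idea concrete, here's a minimal sketch of 16-QAM bit mapping: each group of four bits picks one of 16 amplitude/phase points, which is exactly the "more points on the constellation = more bits per symbol" trade-off described above. The function name and Gray-coded level table are my own illustration, not anything from a real modem stack.

```python
# Minimal 16-QAM modulator sketch. Each symbol carries 4 bits as one
# of 16 complex amplitude/phase points; the receiver has to stay
# synchronized in phase and amplitude to tell the points apart.

# Gray-coded mapping from a pair of bits to an axis level, so that
# adjacent constellation points differ by only one bit (fewer bit
# errors when noise nudges a symbol into a neighboring point).
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_modulate(bits):
    """Map a bit list (length divisible by 4) to complex symbols."""
    symbols = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        # first two bits pick the in-phase (I) level,
        # last two pick the quadrature (Q) level
        symbols.append(complex(LEVELS[(b[0], b[1])], LEVELS[(b[2], b[3])]))
    return symbols

syms = qam16_modulate([0, 0, 0, 0, 1, 0, 1, 0])
# two symbols: (-3-3j) and (3+3j)
```

A fade or phase slip scales or rotates every received point, which is why the synchronization period described above matters: the receiver has to re-estimate the channel before these points are distinguishable again.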
As you can see, this gets incredibly complicated quickly. It's a very math-heavy field, with lots of very neat, clever tricks to make it all work seamlessly. These guys just figured out how to maintain coherency at higher frequencies, which is fairly notable, but this march is expected to carry on: as we get faster processors, higher-performance amplitude/phase modulators, and lower-noise devices, we can keep packing those bits tighter and putting more points on the constellation.
There's definitely more to it than that in a typical cell site (including other ways to add more users), but at some point you have to deal with physics. You have a certain amount of bandwidth at your frequency to use, and no matter how clever you get, there are things like noise, interferers, limitations on the sophistication of hardware you can put at cells or in phones, the laws of physics, etc. You hit hard limits pretty fast. One of the main reasons Verizon and some of the US networks went to CDMA was that, at the time, you could pack in more users per channel: you weren't limited by timeslices, you were limited by SNR (more users effectively raised the noise floor, since their codes wouldn't correlate), so you could get some pretty impressive numbers of users per cell, making deploying a network cheaper. Newer 4G and advanced 4G waveforms are kind of an interesting combination: an optimized waveform that's TDMA-based but shares some features with those other networks.
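The "their codes wouldn't correlate" point can be shown with a toy despreading example. This is just an illustration with length-4 orthogonal codes (real CDMA systems use much longer codes plus power control); the names are mine.

```python
# Toy CDMA despreading: two users transmit at the same time on the
# same channel using orthogonal spreading codes. Correlating the
# combined signal against one user's code recovers that user's bit,
# while the other user's contribution cancels out.

CODE_A = [1, 1, 1, 1]    # length-4 Walsh codes: mutually orthogonal
CODE_B = [1, -1, 1, -1]

def spread(bit, code):
    # bit is +1 or -1; each bit becomes len(code) chips
    return [bit * c for c in code]

def despread(chips, code):
    # correlate with the code and normalize; an orthogonal
    # interferer sums to zero and disappears
    return sum(x * c for x, c in zip(chips, code)) / len(code)

# Both users transmit simultaneously; the air just adds the chips.
combined = [a + b for a, b in zip(spread(+1, CODE_A), spread(-1, CODE_B))]
bit_a = despread(combined, CODE_A)   # recovers user A's +1
bit_b = despread(combined, CODE_B)   # recovers user B's -1
```

With imperfectly aligned or non-orthogonal codes the cancellation is only partial, so each extra user leaves a little residue behind. That residue is exactly the rising noise floor the comment mentions, and it's what ultimately caps users per cell.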
This high speed is relevant because you can use some of these techniques to divide up the bandwidth and effectively get more users per cell -- you can hand each user smaller timeslices if the amount of data you can transmit in each timeslice is massive. The maximum amount of data passable over a link is kind of an industry-standard metric for how much capacity a given channel can handle; it's easier to grok than formal channel capacity, etc.
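The formal version of "how much capacity a given channel can handle" is the Shannon-Hartley limit, which also explains why the hard limits arrive so fast: capacity grows linearly with bandwidth but only logarithmically with SNR. A small sketch, with the 20 MHz / 20 dB figures chosen by me purely as a plausible example:

```python
# Shannon-Hartley channel capacity: C = B * log2(1 + SNR).
# Doubling bandwidth doubles capacity, but doubling SNR only adds
# a fraction of a bit per symbol -- hence the "hard limits" above.
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)   # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 20 MHz channel at 20 dB SNR.
cap = shannon_capacity_bps(20e6, 20)   # roughly 133 Mbit/s upper bound
```

No modulation scheme, however many constellation points it has, can beat this bound; cleverer waveforms just get closer to it.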
TLDR: We're trying our damned best to multiplex as many users as possible into a cell site. The more you can get in one site, the cheaper it is to operate and deploy networks, so tens of millions of dollars are spent annually making it better, and the strides that have been made are pretty darn impressive. But we still have work to do!
If you run at single digit margins you have absolutely no ability to invest in development.
I agree with you in general, but to be fair, that 24% margin is *after* all of the R&D, internal investment, etc. So they could keep everything at the same level -- which is among the highest in the industry, if not the highest -- take a 20% drop in price, and still make 4%. This is true profit, after everything else has been paid for; if it were just an amortized profit per product without all the external costs wrapped in, then yeah, 24% wouldn't be very healthy to begin with, and 4% would put the company out of business.
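Working through the numbers on a hypothetical $100 product (my example, not from the comment): a 24% net margin means costs are 76% of today's price, so after a 20% price cut the remaining profit is 4 points of the original price, about 5% of the new price.

```python
# The parent comment's arithmetic on a hypothetical $100 product.
price = 100.0
cost = price * (1 - 0.24)      # 24% net margin -> cost is $76
new_price = price * 0.80       # 20% price cut -> $80

profit = new_price - cost                 # $4 per unit remains
margin_on_new_price = profit / new_price  # ~0.05, i.e. about 5%
```

So the "still make 4%" figure holds in points of the original price; expressed against the new, lower price it's slightly better, around 5%.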
Oddly enough, that same group of people doesn't take its vast meteorological knowledge to places like the National Weather Service, where it could be put to better use.
Quoted for truth.
But in the commercial space, every single person on an assembly line could benefit from this -- the F-35 program has projector and computer vision systems that overlay work instructions and rivet patterns and check whether the rivets are in right. You have to design the assembly line around not obscuring the projectors that are telling you what to do. Putting that on the operator's face, doing the same job, would be a massive boon. Police officers recording interactions. Medical professionals pulling up charts, etc. There are a couple of very viable commercial uses that they should use to survive and refine the product over a couple of generations, until the tech gets to the point where it can be packaged into a consumer-friendly form. Honestly, spin off a small, lean company to keep it alive in the commercial sphere for 5-10 years and then absorb it back into the mothership.
I hope that this effort of GM's succeeds at least well enough for them to continue R&D into EV's, but there are 2 significant problems I see that they'll need to overcome: First, they'll need a high-speed charging network that will allow for long-distance road trips...Second, the established dealer network has no interest in selling EV's.
I've always had this question, and I think I know the answer (that the Big 3 aren't serious about EVs yet). There are dealerships *everywhere*, and they're large and have service bays. If any of the Big 3 were interested in EVs, it wouldn't take much to turn dealerships into quasi-supercharger stations where service and maintenance can also be performed. Heck, battery swaps too, if the tech gets there. I think that once the Big 3 finally go full bore into EVs, the dealership network is going to have to be key to their strategy. Dealerships are independent, but with the right incentives to put solar panels on dealerships and install fast-charging stations, they could effectively cover all of the US in charging networks very, very quickly.
I'm all for data-driven stuff, although psychology is a tough one -- it's incredibly hard to effectively account for all the variables, and I think she may be reading into the data a bit much, as can happen in the field.
I use the Keurig when I have multiple people wanting something. I like my coffee incredibly strong, so most people won't drink my French press coffee. But with the K-cups, they can get coffee, tea, whatever, without me having to expend any extra effort. It makes hosting people easier than reworking my fancy coffee setup for 12-18 cups of coffee.