It would be cool if we could track the trackers and post their locations on maps in real time: showing where they troll for cars, where they park at night, what donut stores they frequent. After all, the license plate trackers are plainly visible; anybody could see them and remember where and when they saw them.
So, I checked slashdot on my phone today over lunch, and I saw the big "We hear you!" post discussing beta. Then, I got home tonight and was redirected to the new beta interface. So, clearly, slashdot the corporate group doesn't hear what slashdot the community is saying. If people are still being involuntarily redirected to something that has put the community on the edge of open rebellion, slashdot is clearly plunging in relevance even faster than a post-Gox bitcoin. It's been a good run. I had over a decade of fun here on slashdot. I had excellent karma. But, clearly, it's time for me to walk away. It's a shame.
I count that as wise. If they used a real IP address, it would likely get a lot of traffic.
Which is why I've always been confused by the fact that they use fictitious IPs, rather than a production company website with trailers for upcoming projects...
It is a lot of work to raise your arm and point at an exact location on the screen (and slow too). After a short time you will be feeling the fatigue building up in your arm, which starts feeling very heavy. Then you will hate your touch screen and go back to using a mouse, touchpad, or keyboard, none of which require you to make large arm movements, or hold up the weight of your arm in front of you.
Why is touch on the desktop always assumed to be something that would have to replace using other inputs? I mean, if touch added $5 to my monitor, and I used it once every few weeks, I'd consider that a win. And, if it were widely deployed, economies of scale would mean that it really would be very cheap to add. (Like audio on the motherboard.) Having things like pinch to zoom could be handy on the desktop.
Way easier to toss a condom than to clean a sex bot. Just sayin'.
Yes. I understand that they have built these arrays of so-called microantennas. I believe that they are props, fakes, shiny objects to distract from what is really happening.
Those antennae are tiny, too small to pick up the relatively long wavelengths of current transmissions. They are packed together so tightly that they would be shielding one another from the signals. Running analog signals from those antennae to tens of thousands of separate tuners? Come on, really?
Does anybody really think that there is actually one antenna per customer? And that that antenna is hooked up to a particular DVR? And that that antenna and DVR are connected to just one customer?
I just can't and don't believe it. The 'antenna array' is surely a prop, and the DVR has to be a rack of shared servers.
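For scale, the wavelength claim is easy to check with λ = c / f. A quick sketch (the frequencies below are illustrative broadcast TV values, not anything from the original post):

```python
# Wavelength of a broadcast TV carrier: lambda = c / f.
C = 299_792_458  # speed of light, m/s

def wavelength_m(freq_hz: float) -> float:
    """Return the free-space wavelength in meters for a given frequency."""
    return C / freq_hz

# Illustrative US broadcast frequencies: VHF around 175 MHz, UHF around 605 MHz.
for label, f in [("VHF ~175 MHz", 175e6), ("UHF ~605 MHz", 605e6)]:
    print(f"{label}: {wavelength_m(f):.2f} m")
```

Even at the short end, the wavelengths come out around half a meter, which is the basis of the skepticism about dime-sized antennas.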
I did visual effects for the first four Fast and Furious movies. We did a lot of the car photography on a green-screen stage, and comped in backgrounds shot driving down streets. We used arrays of film cameras, usually Arri 435s (on Fast 2 we also used VistaVision cameras).
These would be much simpler, cheaper, and more rugged.
There are similar cameras from Point Grey [ptgrey.com]. These have been out for quite some time. The Point Grey cameras are an order of magnitude more expensive than these vaporware cameras, though.
Instead, they ran rampant and now we have a bullshit system which, even on my machine, sometimes fails... Chrome doesn't play audio, Firefox does... no idea why. Although getting my HDMI TV to play sound on Fedora was interesting: the eventual solution was that I had to edit a file in /usr/share and add a :0 to the end of one of the parameters... I have no idea why. In Linux Mint it was fixed and I never had to do it... but weird shit like this seems to happen all the time.
Despite my best efforts, with Chrome on Ubuntu, some YouTube videos will play out of one sound card, and some will play out of another. I think it's Flash vs. HTML5 being used for different videos. Seriously, it's the most bewildering user experience to have to randomly switch between my USB headphones and my analog headphones. Getting Bluetooth audio working reliably is just a lost cause. Skype used to work; I apparently broke it in the course of trying to fix other things. Ten years of professional experience as a UNIX admin, and I can't figure out how to make YouTube work without wearing two different headphones. It's sort of fucked.
Slashdot's terrible at interviews. Hopefully somebody much more qualified will interview them, and then a month later Slashdot will post a link to it several times.
Well, if he has identified it as taking up a large amount of the available bandwidth, then it certainly makes sense to consider it a target for reductions. Perhaps more importantly, users tend not to care about updates like that. A user actively downloading a file from some source is probably more important than some automated process the user doesn't care about, and can be deferred until the user gets home without them noticing anything.
That said, I've been saying for a while that there needs to be some sort of bandwidth discovery protocol. My original thought process was driven by apps on mobile phones, but this case seems like it would benefit for the same reasons. Wireless operators are always concerned about using scarce bandwidth resources, so we get plans with low data caps and such. Imagine if there were a completely standardised way for an application (say, an email app on a phone) to "ping" bandwidthdiscovery://mail.foo.com with some sort of priority metric. If nothing responded, the app would act normally, so the system would be completely backwards compatible. If something along the route did respond (for example, the wireless ISP you are connected to, though it could theoretically be something local or distant, like the school's DD-WRT router in the OP's example), it could reject the session or encourage a delay. That way, an email app set to check every 5 minutes could occasionally get a polite rejection from the ISP asking it to hold off because circuits are overloaded. The phone would then wait a few minutes before trying again. Eventually the phone would download new email, but at high-traffic times it might wind up waiting 15 minutes instead of 5, saving the network some trouble. Software updates might defer a download for days or weeks if there is a continual rejection.
My Android phone lets me set software updates and podcast downloads to only happen over wifi, on the assumption that cellular data is expensive but wifi data is unlimited. But if I connect to a MiFi access point backed by a cellular connection, my phone currently has no way to discover that it is actually using (limited) cellular data. With a bandwidth discovery protocol, it would get the same rejections from the ISP that it would get if it were connected to the cellular network directly. And local admins could easily set up rejection rules like the ones the OP is interested in, while still allowing user overrides for cases where the school IT guy really does want to manually update the school's systems. Think of it as a sort of queryable QoS.
And because any intermediate system on the route can tell apps to reduce bandwidth usage, a server being slashdotted could have some queries rejected, rather than everything happening on the link-local side near the user. Obviously, none of this helps the admin in the immediate term. But it seems like that's how it ought to work.
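The client-side behaviour described above is basically exponential backoff driven by polite rejections. A minimal sketch, with the caveat that no such protocol exists: the `bandwidthdiscovery://` scheme is hypothetical, and `next_interval` just simulates how a mail app might react to the network's answer.

```python
# Hypothetical client-side logic for the bandwidth-discovery idea.
# A "defer" answer stands in for a polite rejection from somewhere on the
# route (the ISP, a DD-WRT router, etc.); no answer means act normally.

BASE_INTERVAL = 5 * 60   # normal mail-check interval: 5 minutes, in seconds
MAX_INTERVAL = 60 * 60   # never back off beyond an hour

def next_interval(current: int, network_says_defer: bool) -> int:
    """Return the delay (seconds) before the next check.

    No response, or an OK, means resume the normal interval; a polite
    rejection doubles the current wait, capped at MAX_INTERVAL.
    """
    if not network_says_defer:
        return BASE_INTERVAL
    return min(current * 2, MAX_INTERVAL)

# Simulate a congested period: three rejections, then the network clears.
interval = BASE_INTERVAL
for defer in [True, True, True, False]:
    interval = next_interval(interval, defer)
    print(interval // 60, "minutes")
```

The important property is the one the comment calls out: mail still arrives eventually, just 10 or 20 minutes late during congestion, and the system degrades to normal behaviour wherever nothing answers the probe.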
Curiously, in my youth in the '60s, we referred to Luna 9 as a "hard landing", and the first "soft landing" was Surveyor 1 three months later. Now it's clear that the Luna 9 lander really was a soft landing (similar to the landings of the Mars Pathfinder and Spirit/Opportunity rovers) and we were just ragging on the Soviets.
And then you need to duplicate the whole thing in another datacenter for geographical redundancy.
Useful for some workloads, sure. But if it is an internal service, rather than something like a website (gasp, not all servers are public-facing websites), then if my office gets taken out by a meteorite, none of the corpses in the building actually care whether some instance of the service exists in some other, safer geographic region.
The flip side is that, at a small scale, you get a certain amount 'for free.' If you need some infrastructure locally, then you already have a room with space for a new server, and you already have sufficient electricity. You already have a guy to replace a blown hard drive. The extra time he spends replacing it is technically nonzero, but it's a fairly rare event, so a single extra server tends to be "in the noise." The big cost hits as soon as you exhaust your existing capacity, i.e., the guy is already replacing drives full time, so adding one more server means hiring another full-time guy; or all the racks are full and you need to add more space. You can reach a point where the TCO of the last server was genuinely much less than outsourced infrastructure, but the TCO of the next server will effectively be $500,000 if you only add one more machine.