To answer TFP's question, which many seem to be avoiding, I have one word: yes.
The answer should have been 'yes' 15 years ago, when micro-scale video recording became commercially feasible. Nowadays, with almost countless SoC and autonomous micro-controller hardware married to multiple GB (possibly TB) of solid-state storage —all within the size of a deck of playing cards— I would have to say that video monitoring is not only feasible, it's an imperative.
When we get news of a flight disturbance, what do we get to see? That's right, just some blurry, hand-held camera-phone footage with muffled audio. Of course, any footage from the cabin is going to be at the discretion of passengers rather than the airline, which undermines the case for in-cabin monitoring. But perhaps we should think about it a different way.
How many gadgets are out there for our car windscreens, to monitor other drivers? A dozen? A hundred? These essentially represent the solution for any warranted video monitoring. They have long-term (i.e. per-trip) recording functionality, as well as constant-loop recording for capturing the unexpected. And these devices are typically the same size radar detectors were two decades ago!
Now, does this necessarily mean that we have to see inside the cockpit? No, it doesn't. Take that privacy argument and stow it.
Video monitoring could mean many things, such as the cockpit door exterior (IMHO, a much more compelling angle when considering hijackings). It could also mean hull-exterior views, which could be quite valuable for take-off/landing mishaps. Rather than rely on modeling to visualize the attitude, speed and point of impact, it could all be right there on a screen for you.
In an aviation scenario, we just start with the functionality of the classic black-box device and evolve it to include video, solid-state storage and an automated distress feature that attempts to upload the last two minutes of recorded data to a satellite. (for extra credit, make a monitoring algorithm that senses flight-path and altitude deviations for real-time alerts and warranted monitoring)
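Purely as a back-of-the-napkin sketch of that "extra credit" monitoring algorithm: buffer the last two minutes of flight samples and flag abrupt altitude or heading changes. Every threshold, class name and sample rate below is hypothetical; real values would come from avionics standards, not a forum post.

```python
from collections import deque

# Hypothetical thresholds -- real limits would come from avionics standards.
MAX_ALT_DELTA_FT = 2000   # altitude change per sample that triggers an alert
MAX_HDG_DELTA_DEG = 30    # heading change per sample that triggers an alert
WINDOW_SECONDS = 120      # keep the last 2 minutes of samples for upload

class DeviationMonitor:
    """Buffers recent flight samples and flags abrupt deviations."""

    def __init__(self, sample_rate_hz=1):
        self.buffer = deque(maxlen=WINDOW_SECONDS * sample_rate_hz)

    def add_sample(self, altitude_ft, heading_deg):
        alert = False
        if self.buffer:
            prev_alt, prev_hdg = self.buffer[-1]
            # Wrap the heading difference into [0, 180] degrees.
            hdg_delta = abs(heading_deg - prev_hdg) % 360
            hdg_delta = min(hdg_delta, 360 - hdg_delta)
            if (abs(altitude_ft - prev_alt) > MAX_ALT_DELTA_FT
                    or hdg_delta > MAX_HDG_DELTA_DEG):
                alert = True
        self.buffer.append((altitude_ft, heading_deg))
        return alert

    def distress_payload(self):
        """The last ~2 minutes of samples, ready for a satellite uplink attempt."""
        return list(self.buffer)
```

The deque's `maxlen` does the "constant-loop recording" part for free: old samples fall off the back, so the distress payload is always just the most recent window.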
True, there may be limits to the durability, but being able to put such systems in a compact physical space already increases the survivability of such a system. I bet it could even fit inside a contemporary black-box chassis without much effort. It's anybody's guess why there hasn't been any significant retrofit of the classic Flight Data Recorder design, now that technology is more compact and survivable than when the program began in 1967. The current debate over 'deployable' recorder systems just seems silly. With the profits that airlines are making lately, it's horrifying to consider that one's final message to the world would be from 40-year-old tech.
If this Germanwings incident reveals anything, it's that the safeguards for mishaps are still in human hands, including the reporting of essential data. There's no technological solution for suicidal pilots, because experienced pilots know every manual override. (and can wield a roll of duct tape) Let's at least take the next step (looking at you, FAA) and start mining these mishaps for the valuable lessons they could teach to future avionics, international regulation and corporate norms. Put some bloody cameras on that flight!
In the past, MS used http://iconfactory.com/ for this work.
They did not use internal staff.
But the managers who approved it should be the first to go.
At least the folks at Icon Factory know a thing or two about iconography, which is as much of an exact science as UI design ever was; part pixel art, part language. As other 'dotters here have happily provided links to the historical iconography of not only Microsoft but other platforms as well, you can see the evolution of aesthetic choices; the playful isometric simplicity of BeOS, the monochromatic elegance of NeXT, and the neo-realism of Gnome. Saying that the flat colors are a throwback to the primitive computer era (8/16 bit) is rather ignorant, simply because the color-palette choice wasn't a matter of preference so much as necessity. Back then, the engineers were put in charge of defining the color gamut based on just 16 or 256 'slots' to use. Naturally, the engineers approached this in an algorithmic fashion, rather than aesthetically. That's why it took us 30 years to come up with color rendering that could represent natural/earth/skin tones: there were all these mathematical gaps in the subtle spectra of blues, browns and greens. In that sense, I suppose the selection of flat saturated colors is indeed ironic in the age of hyper-realistic imagery. I applaud an aesthetic choice for elegant iconography; however, the execution can be equally delightful or disastrous.
While I agree in part with the dissent over the design choices, I don't agree that TFA is representative of any significant "majority". Let's be real here, the headline reads, "icons look like a bad joke." Do you really think that contributing readers would be unbiased? You might as well have a big sign out front, "MS-bashing Trolls Welcome!"
But here's the catch. It's hard to have a serious discussion about UI choices even in this forum, one that's so inclined to conflate the design with every poor PR move, questionable political stance and troubled chapter of the legacy platform's past, making it impossible to take a step back and appreciate the design choices for what they are. It's also important to add that UI choices aren't just about making it artful, but mostly, meaningful. These mini-pictures are purpose-made to fall into the background, rather than be their own eye-candy. (that's what custom icon sets are for)
So here, I'll take a stab at it. This icon gallery clearly perpetuates the traditional Windows brand "manila folder" trope as a foundation. With flat colors and angled lines, it attempts a three-dimensional appearance, which arguably still looks very 'flat', with or without comparison to its predecessors. While those do not make up 100% of the new icon set, the "folders" establish the overall paradigm and 'look' of the interface. I'm not convinced that the non-folder icons are even complete, since most of them still resemble Aero's photo-realistic set of devices. The icons that notably reflect the new art style are the "My Computer" and "Network" icons, which use a simple line-art treatment. This is not consistent with the folder paradigm, not only because they don't resemble folders, but because these images use boundary lines to define shapes, rather than flat colors. Overall, it's rather inelegant and poorly executed. The folders use subtle boundary lines, but inconsistently, and the line doesn't diminish on the smaller icons, making the left face of the folder look awkward, like a backwards "L" from a varsity jacket. Again, we see that the Redmond workshop has neglected the beauty of scale and centers their model only on an 'ideal' size, whatever size that may be, which also belies an underlying framework that is, yet again, bullishly ignorant of modern, precision rendering. As I'm running Win10-TP myself, I can also see that File Explorer attempts to express folder contents as foreground icons using the open-folder trope as a background. It renders another closed-folder icon atop the background if a sub-folder is present. A poor choice, since the flat colors run together and make the background folder look strange. We also see that, when the contents throw the object-recognition algorithm for a loop, the second foreground icon is a square outline; a tell-tale indication that MS still hasn't learned its lesson about meaningful representations.
If this was an in-house job, then I strongly recommend outsourcing it to an expert group once again, all the way down to the visual rendering engine. Aero was dumped, rather than evolved; another poor choice.
And there you have it; an attempt at a serious discussion about the merits and properties of the new icons that isn't just a splattered statement of subjectivity. I welcome anyone to contribute, and I hope (against hope) that the sincerity of this discussion may be preserved.
So far, it's the patent owners and warchest protectors that seem to be driving the definition of what can and cannot be patented in the digital realm. This should be reversed; there should be an international (or universal) standard definition that an applicant must fulfill before a filing can even be considered for legal protection.
Just spit-balling here, but maybe it should be a rule of threes; a project must demonstrate it leverages the three parts of digital technology: the hardware, the software and the network. Among each of those, there must be three distinct techniques being used to separate it from common operations and, in each technique, three uncommon modules that can be considered proprietary in nature and therefore be protected as "trade secrets". So, in total, we have a basis for patentability that covers the basic facets of digital products, requires them to define how they set themselves apart and lastly requires that the applicant specify what makes their work unique at the code and/or API level; requiring nine points of uniqueness in each digital facet. No 'black box' definitions either; all patents must encompass and explain the concept that makes the patent... well, patentable.
This not only provides a structure for burden-of-proof arguments, (currently non-existent, apart from the ruling described in OP) but also creates the need for distinguishing one's work to set it apart from what platform developers and shared-library contributors can claim as prior art or common practice. More importantly, it eschews the petty bickering of single-factor patents; things like "swipe to unlock" or "presentation as a square tile with 10% rounded corners" or "putting a virtual button in the corner of the screen to select a program" sort of nonsense.
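To make the rule-of-threes concrete, here is a toy validation sketch. The structure (three facets, each with at least three techniques, each citing at least three proprietary modules) comes from the proposal above; the function name and the dict-of-dicts representation are my own invention, purely for illustration.

```python
# Hypothetical encoding of the "rule of threes" sketched above:
# three facets (hardware, software, network), each backed by three
# distinct techniques, each technique citing three proprietary modules.
REQUIRED_FACETS = {"hardware", "software", "network"}

def meets_rule_of_threes(application: dict) -> bool:
    """application maps facet -> {technique_name: [module, ...], ...}.

    Returns True only when all three facets are present, each facet
    lists at least three techniques, and every technique names at
    least three modules -- nine points of uniqueness per facet.
    """
    if set(application) != REQUIRED_FACETS:
        return False
    for techniques in application.values():
        if len(techniques) < 3:
            return False
        if any(len(modules) < 3 for modules in techniques.values()):
            return False
    return True
```

An examiner's office could run something like this as a cheap first-pass gate before any human review, which is really the point: a structural burden of proof that exists before the legal arguments begin.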
If we're to fully plagiarize Neil Armstrong, let's do it right.
That's one small click for man, one giant drag-and-drop for mankind.
I don't know if that's how we're going to celebrate the first crack in the shell of software patents, but yes, it is a step in the right direction.
Indeed. The Xfinity Wifi service is not like a public hotspot, it's app-enabled and otherwise walled off.
The app asks you to log-in with Xfinity HSI credentials, connects to a geographic database and shows 'coverage' on a small map. When you want to connect to a hotspot, the app coordinates the security automatically, kinda like a pushbutton feature on a router.
If you don't have the app, these hotspots look like any other secured private WAPs.
Despite all that, it's an arrogant and draconian move to just switch on customer equipment to provide a service. I believe Zordak made the point that the gateway/router devices are leased to customers, but remain essentially Comcast property. To me, that means they can take control to provide enhanced services, like advanced port forwarding, traffic balancing and delivering QoS metrics back to their root network. All that makes sense, right?
What doesn't make sense is basically hijacking the device to provide a subscription-based service for other customers. If I have one of these routers, then I expect it to serve the purpose of fulfilling my service subscription, not someone else's. Providing such a service should be at the option of the subscriber, not the default stance with an opt-out procedure. Organizing the majority of subscribers to opt-out of this service clause will surely pressure Xfinity to re-think their strategy, but good luck getting the attention of all 50,000 households. (or even half of them)
A responsible, progressive and fair-minded company would provide incentives for becoming part of their service infrastructure. Monthly service discounts would be a good start, and might even improve Xfinity's reputation in the process. Let's say, the more isolated your WAP is on the Xfinity map (thereby filling in a wide gap in coverage) the more of a discount the homeowner gets.
In this day and age, it takes a credible competitor to enact change in the marketplace; so we're looking at you, FIOS, DSL and Google Fiber. Do it better!
Anyone think of the percentage of iPhone adopters that switch to Android? Those numbers are conspicuously absent. I doubt they did any follow-up for iPhone "consumer corrections" to see how many later dropped iPhone and went back.
And they say Microsoft "drinks the Kool-Aid" on their own products. Seems like both camps have a strange brew now. However, in this respect, Apple has some serious catching up to do.
If the rumors are true, then we'll get to see who can make the better "geez I feel like I'm going to break this thing it's so thin" device for 2015.
Still waiting for the bluetooth, bio-powered, wetware interface cartilage implant accessory. (stereo, please)
Buy Comcast stock. Get 51% in customer hands and vote in a NEW fucking board of directors. Capitalism, free market, and democracy.
Well... 2 outta 3 ain't bad.
Well, Comcast is raking in billions, so why can't they?
I think that's the point.
They can, but they don't. They'd rather pay higher dividends and pocket more profits than install meaningful upgrades to 30-year-old infrastructure.
[...] (1) regulation, (2) competition, and (3) public ownership of pipes [...]
- (1) What regulation? Lobbyists control legislators, and lobbyists are powered by corporations. Along with the recent chairman appointment, (y'know... a former lobbyist) the FCC is as good as sold.
- (2) What competition? The feeding frenzy of cable infrastructure -- 80's and 90's -- has already been divvied up. The alpha predators are just bloated giants looking to mate with -- or destroy -- the rest.
- (3) See #1... or do you really think there's a budget, or even a motion, for that purchase? It would be political suicide, because it would be interpreted as "big government getting bigger." You'd have better luck going parcel-by-parcel with Kickstarter or Indiegogo, but then the big boys would just play the same game they did with smaller competitors; milking you dry until you end up selling it back. (and at a discount) It has to be all or nothing, and the sticker-shock on that could just about kill you.
This isn't a simple game, if it's a game at all. In fact, it's more like a quail hunt, where the hunters are doing so well that they're getting bored and shooting their friends in the face. (see what I did there)
That's what this 'calendar' essentially says. Let's just call it what it is, a simple algorithm for a few celestial body movements. It's rail-minded development applied to the solar system, with only a nod to lunar-based timekeeping. (28-day months, or approximately one lunar cycle) Also, and let's be honest here, the whole "timemods" idea is just a gadget. It's not practical outside of the inner-workings model. I mean c'mon... calendars are supposed to work for everyone.
All that doesn't mean it's a bad idea.
On the contrary, it's a great start. But if it's to become a great system, worthy of usurping the Gregorian calendar, then it has to embrace the natural marks of celestial time frames... not just one solstice per year.
- First improvement would be to include both solstices in measurements. This already doubles the accuracy of the system.
- Take it one step further and include both equinoxes for additional reliability.
- The previous two suggestions annihilate the 13th "mini month" idea (which BTW is horrible), so tack those days on to the quarterly ends as 'meta months'. See? Quarters are now built-in!
- The whole point of a standard calendar is to be predictable, so making corrections four times every year means the next cycle is always more reliable than the last. (though it will never be perfect, because entropy)
- We also open up the possibility that sub-diurnal adjustments can now be quarterly, semi-annual or annual. Another leap-second in June, why not?
This system then retains the single greatest advantage of the Gregorian calendar: divisibility by the most factors. (the divisors of 12 are 1, 2, 3, 4, 6 and 12, versus just 1 and 13 for 13) And now it has more frequent course corrections. Consider this programmatically with the above suggestions, and the system is still computationally simpler than our legacy Gregorian system. So there it is, an accessible system that everyone can use.
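The divisibility argument is easy to check programmatically. A quick sketch (my own trivial helper, nothing from the proposal itself):

```python
def divisors(n: int) -> list[int]:
    """All positive integers that divide n evenly."""
    return [d for d in range(1, n + 1) if n % d == 0]

# 12 months split evenly into halves, thirds, quarters and sixths;
# a 13-month year only splits by 1 and 13.
print(divisors(12))  # [1, 2, 3, 4, 6, 12]
print(divisors(13))  # [1, 13]
```

That's the whole case against a 13-month year in two lines of output: 13 is prime, so there's no clean way to carve it into quarters, trimesters or half-years.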
Seems to me like there must be something to it. If the ISP's are threatening to sit on their asses (believe me, they'll do it, those crazy bastards) then there's got to be something proper and fair about that bill.
The ensuing pity party will undoubtedly be called the thumb-up-the-ass-mageddon.
Either that, or face the rise of a Chart-warner-cox-cast abomination, sure to be renamed the Cable Operators Commission Kabal. The acronym should make it obvious.
It didn't start that way. In fact, there's a distinct correlation to the increasing age of George Lucas and the increasing "hijinks" of his characters and/or the ephemeral nature of the characters he introduces. JarJar is just the cataclysmic conclusion of a string of bad decisions that had a truly promising start.
In order: ep. 4 has witty repartee between Threepio and Artoo, somewhat diluted by SE retcons. This is their performance par excellence. You'll see that ep. 5 is where Threepio starts "hamming it up", but in a self-aware manner; a caricature of uptight British absurdism that doesn't take itself too seriously, played well as the "straight man" opposite Artoo's escapades. Then in ep. 6, we get a par performance from Threepio with somewhat more heroic notions from Artoo. In a way, the Ewoks took over from the droid duo the burden of providing the comedy/tragedy aspects of the third film. For the droids, those performances worked well enough and didn't take away from the story.
It all goes to shit with the prequels. Artoo is immediately framed as a "tragic hero" in ep. 1 because of the apparent slavery/fodder undertones of Astromech Droids overall, thereby delivering a heavy-handed message of oppression and strife and "humanizing" this artificial-life character. Threepio, as the invention of young Anakin, is supposedly imbued with the values and morals of young Anakin, but the film doesn't explain how Threepio is unique from the protocol droids that have been mass-produced for millennia. It's like building a toaster out of Erector/Technix parts... what was the point, exactly? Oh, right... it's a conveniently close-knit origin story that way. This film does little more than get principal franchise characters (Anakin, Obi Wan, Artoo, Threepio and Yoda) together by the end of the story. JarJar is introduced. He's inserted into the story as both a "CGI triumph" and as the sad clown. (I. Hate. Clowns.) In a way, it was JarJar that pushed Threepio into the "uptight ninny" niche that ultimately doomed him as a character and prevented any kind of humorous moments from Artoo throughout the prequel films. JarJar took over that job... kind of appropriate, considering the outsourcing epidemic that was happening at the time.
Then we get to the travesty of ep. 2; the Republic Army of Clones. (they didn't "attack" anyone, really... and it's a clear evasion of the obvious "Clone Wars" title, which would have made tons of sense based on canon, but was "strategically reserved" for a later animated television series.) If you can somehow manage to keep your last meal down and endure the frigid romance of Padme and Anakin for at least an hour, you'll see that the relatively minimal screen-time with the droids has been "lubed up" with predictable, over-the-top and depressingly corny gags. These two characters are ruined for being made into even shallower caricatures of themselves. There are zero moments with Artoo and Threepio that had to be written that way for the sake of storyline. Zero. Their abominable performances in this film were entirely by choice, and it was a very, very poor choice indeed. JarJar didn't even make up for it, he's now just a piece of the background. There's nothing less satisfying than to see a pathetic comedy-relief character turn into bland scenery. There's no real dichotomy here; JarJar doesn't offset Threepio in any way. And at this point, neither does Artoo. It's all ruined.
Now there's that lingering aftertaste; ep. 3. It's almost embarrassing to think of it, but it's the latest SW franchise feature-length motion picture to date. (*shudder*) While it has a most heroic opening (Artoo... yes, again), this story later unfolds with almost no droids at all, and doesn't even really leverage them for comic relief. This was like putting ep. 5 after ep. 6 -- giving us a dark finish to a hopeful segue. The visual-gag moments we're given with Artoo and Threepio only reinforce the two-dimensional cutouts we laboriously endured in ep. 2. Nothing new to see here. JarJar is all but missing... but unfortunately, we see that he's still alive and doing rather well for himself; again, thoroughly unsatisfying.
If I were to visualize this progression, it would follow the chronological timeline of motion picture releases. (4 > 5 > 6, then 1 > 2 > 3) Under ep. 4, you would see the iconic image of Artoo and Threepio from the final "commencement" ceremony of ANH. (shiny and presentable) As you move to the end of the first trilogy, you would see them become more like drawn caricatures. (anyone remember that animated "Droids" series that assaulted us for a few Saturdays back in the '80s? Yeah, a bit like that) As we move into the prequel trilogy, we would see the stripped-down "naked" Threepio alongside a burnt-out Artoo, as if Anakin himself had drawn it. In the second film, we see Threepio with plating (finally) but also amateurishly hand-drawn... perhaps JarJar is trying his hand at it? (kinda like how Dub-yah tries his hand at painting) By the sixth film, the images of Threepio, Artoo and JarJar are just hand-drawn by a toddler, who is the only member of the audience impressed by their performances.
Thanks for the legacy, Mr. Lucas. May we please do it the right way now?
It's nice to see that some things never change.
I, for one, applaud the policy described in TFA. Calculating the median time to crack weak passwords, then requiring the password to be replaced within that time frame, is nothing short of brilliant. It's a practical approach to security; something they should have been doing all along. Can't wait until this elevates to law-of-the-land status.
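Here's a rough sketch of how such a policy could be mechanized: estimate the brute-force keyspace from the character classes a password uses, divide by an attacker's guess rate, and expire the password before the median crack time. The guess rate, function names and the naive keyspace model are all my assumptions, not anything from TFA.

```python
import string

# Hypothetical attacker guess rate -- a real policy would calibrate this
# against current offline-cracking hardware, not a forum guess.
GUESSES_PER_SECOND = 1e10

def search_space(password: str) -> float:
    """Size of the brute-force keyspace implied by the character classes used."""
    alphabet = 0
    if any(c in string.ascii_lowercase for c in password):
        alphabet += 26
    if any(c in string.ascii_uppercase for c in password):
        alphabet += 26
    if any(c in string.digits for c in password):
        alphabet += 10
    if any(c in string.punctuation for c in password):
        alphabet += len(string.punctuation)
    return float(alphabet ** len(password))

def max_password_age_days(password: str) -> float:
    """Expire before the median crack time (half the keyspace searched)."""
    median_seconds = search_space(password) / 2 / GUESSES_PER_SECOND
    return median_seconds / 86400
```

The pleasing side effect is the incentive structure: an all-lowercase eight-character password would have to be rotated almost immediately, while a long mixed-class passphrase could stand for years, so users effectively choose their own rotation burden.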
Until then, please, keep discussing whatever it was you felt was so important.