How do you acquire materials for your 3D printer? How do you acquire food? What about software? What about movies and music? There is so much more to the economy than the area 3D printers cover that you can't say money or some equivalent mechanism is 'obsolete'.
For one, most of the work still done by humans is still far beyond the realm of what AI and machinery can do. Anyone who thinks otherwise greatly overestimates the capabilities of the AI they've seen and underestimates how very alien simple day-to-day tasks are relative to the state of the art in AI.
For another, this progress is going to be curtailed for the same reason the desktop market is plateauing. Even if we *could* get there, we don't have the collective will to advance the technology. People talk about the relatively hard wall physics presents in various fields, but I suspect we won't even get that far: the 'good enough' situation will make it unprofitable to get close. It's increasingly difficult to justify the high-power designs that niche markets still need, now that they no longer get to come along for the ride with a mass market amortizing the cost.
Finally, I think that as a collective whole, we don't *want* it. We have spent millennia fostering cultures that largely have us value ourselves and each other in terms of the work we do. We don't know how to do anything else. We have no other proven way of deciding how to divvy up resources among ourselves.
I still use links (not lynx) in very select cases (basic navigation from a shell on a server), but mostly I accept that modern browsers have little downside.
I gave up pine because too much of the content I receive in professional correspondence cannot be rendered in it, and integrated calendaring became a must. However, for a lot of cases it is hard to beat the simplicity of pine or mutt.
Terminals have always been and will continue to be a fact of life *if* you are going to go into IT/programming as a profession. A critical focus area of MS has been making cli/scripting more competent, so even they recognize it.
As new technology comes along, you have to eye it critically. It may be empty hype, or it may carry value. Sometimes it is worse than your favorite approach, but the reality is clear that your favored strategy *will* lose; in that case it may be best to go ahead and try to improve the inevitable winner. Sometimes it's better not to assume that what everyone thinks is the winner is inevitable, and instead push hard for what you believe to be better and make the case. Sometimes, even if the new thing is objectively a little better than what you have in place (assuming you had neither implemented), the worse solution wins by being good enough, already done, and with its risks and faults known.
Basically, if you want to be in the industry, you have to be constantly monitoring change: assessing whether something will be good as-is, whether it is amenable to your guidance, or sometimes rejecting it even as everyone around you *thinks* otherwise. A career in the industry is a career of unending vigilance.
Of course, I strongly hope my own child will not want to get into this field. It can be rewarding, but it is frankly exhausting never knowing whether the next bend in the path will leave you hopelessly irrelevant. You have to know something about everything so that you can jump on the next opportunity should your current one dry up.
I think a software developer should proceed in order working with the components comprising a modern software stack.
Start with fiddling a bit with breadboards and basic circuits.
In relatively short order, move on to assembly programming for maybe half a year.
Then, get comfortable with C level development, understanding all the syntax shortcuts for what you formerly had to do, but by and large still being able to easily tell how it maps to your assembly effort. The bulk of the educational aspect should be spent in C.
Then they should touch perhaps Erlang and Lisp: at least a couple of languages with pretty interesting tendencies that display a line of thinking diverging pretty strongly from the typical, to give the developer a bit more context.
Near the end, move on to Python, C#, Java, and perhaps Powershell. Have some opportunity to explore the languages most likely to be directly relevant to their immediate career, understanding what sort of things they tend to take care of without you worrying, and what sorts of strategies end with those facilities not providing adequate help (e.g. scenarios that muck up things for reference counting).
I'm not even that old, but have taken a perverse interest in understanding the low level implications of everything I do. Certainly, most of the time I don't have to think too hard about it, but the awareness has been very useful more frequently than one might think.
I think that low-level understanding helps me adapt to higher-level language features as they come out. People who don't understand the low-level stuff seem to regard their craft as some sort of unfathomable magic: they learned how to apply a particular generation of technologies without truly understanding it. When a new language feature comes out, if you understand the low level well, you have a pretty solid basis for understanding what that feature is doing and how to exploit it.
Unless a curriculum touches all of this, low level and high level and the nitty-gritty of how it comes together, I don't think it is constructing durable skills.
netsh is not a powershell cmdlet. write-output is explicitly for converting objects into a screen-readable format, and is used on the end of a pipe.
My point is that programmers have to play by the rules and a lot of stuff commonly done in powershell is older stuff and done by less than thorough developers.
lack of convenience in Linux for implementing signing is pretty irrelevant... that is a Linux flaw, not a powershell problem.
It means MS didn't use any existing standard for signing and had a new ASN.1 formatted structure. It's an example of MS ignoring industry standards to do the exact same thing in an incompatible, proprietary way.
The SSD part uses an undocumented format for caching
Fun thing about non-volatile cache: there is nearly zero pressure to flush from cache to disk, and indeed one large point of it is to have a write-back cache with an arbitrarily long delay to commit.
I say 'nearly' zero since there is still a desire to have space amenable to be rewritten as soon as possible (in the same way modern OSes proactively write page-out candidates to swap so that they may be evicted without delay), but the urgency is not there like it is with memory backed cache (both because of volatility and the rather generous amount of extra space in SSD).
I suspect the 'disk' is still just a block manager (well, disk(s)) that doesn't have any awareness of the FS, just the driver half that makes it 'look' like one volume 'transparently'.
This is of course similar to 'fakeraid', with all the potential trappings thereof. If it otherwise presents as two disks (or something similar) in a standard way, and the driver is just there to deliver the product ahead of a standardized implementation being available, that might not be too bad.
*If* everyone picked exactly the same lib version, yes.
In practice, people aren't going to standardize on the same library version.
Bonus problem: now each app provider is responsible for addressing a hypothetical libcrypto vulnerability, rather than the distro patching it in one place.
All pipelines in powershell are objects, and something will only become text if it has become serialised somewhere along the way (e.g., read/write from disk).
See the output from 'netsh'. See the use of 'write-output'. Maybe you call an array of strings meaningfully distinct from "text", but it really isn't. You can't use select-object; you have to switch to select-string. You have to do an assignment and a get-member or GetType() to know what you are really in for.
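A minimal illustration of the point (the exact netsh output varies by system, and the 'Connected' pattern is just an example):

```powershell
# Output of a native command arrives as an array of plain strings
$out = netsh interface show interface
$out[0].GetType().Name               # String

# Select-Object has no properties to work with on a string...
$out | Select-Object -Property Name  # yields empty columns

# ...so you fall back to text matching with Select-String
$out | Select-String 'Connected'
```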
Returning multiple objects from a query is a feature, not a bug.
My issue is not the plurality. My issue is that a singular result will have a different set of behaviors, as indicated by GetType, than an array will. I would have liked it if either a single object were returned as a single-element array, or the language did a better job of implicit array creation when you treat such singular returns as an array.
And no, a cmd script can NOT turn code-signing off if you have UAC turned on
powershell set-executionpolicy bypass currentuser
If you have an enterprise CA, and an AD integrated subordinate, signing scripts is trivial. My workflow with writing powershell involves saving the file in notepad++, running "./add-signature foo.ps1" (add-signature is a 1-2 line script I've got which retrieves my code-signing cert and signs the argument with it) and then running the script. That's it.
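For reference, such an add-signature helper is plausibly just a thin wrapper around Set-AuthenticodeSignature; a hypothetical sketch, assuming the cert lives in the user's personal store:

```powershell
# add-signature.ps1 (hypothetical): sign the script given as the argument
param([string]$Path)

# Grab the first code-signing cert from the user's personal store
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1

# Embed the Authenticode signature block at the end of the .ps1
Set-AuthenticodeSignature -FilePath $Path -Certificate $cert
```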
First, those are hoops to jump through even assuming the infrastructure is set up. Second, there are no convenient Linux tools to generate the ASN.1 structures to sign with in non-Windows environments.
But, there are things you can do in Powershell in a few lines of code that you just can't do with a shell script without a massive time investment.
That's absolutely true of cmd and vbscript, less so in bash or perl or python on a linux system.
Much of the power of powershell lies in its ability to easily query and interact with WMI objects
WMI in my mind is less desirable than sysfs under Linux, which provides much the same data. WMI objects aren't very discoverable since they aren't unified under a single root. If you ask me about some data in WMI I have not previously worked with, it's off to the technet forums to search for the answer. In Linux, a little poking around sysfs and I can generally find it without so much as tabbing over to my browser.
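To illustrate the discoverability gap: in WMI you have to know the class name up front (or grep a flat list of classes for it), whereas sysfs is one browsable tree. Win32_LogicalDisk is a standard class, but what's available varies by machine:

```powershell
# If you already know the class, queries are easy enough:
Get-WmiObject Win32_LogicalDisk | Select-Object DeviceID, FreeSpace

# If you don't, discovery is a flat listing of hundreds of classes to guess at:
Get-WmiObject -List | Where-Object { $_.Name -like '*disk*' }

# Contrast sysfs, where one tree walk finds it:
#   $ cat /sys/block/sda/size
```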
not have to worry about people running malicious powershell scripts
Untrue, see above
If someone in the enterprise pushes a dodgy script (either accidentally, or maliciously), I can easily disable it from running, wherever the copies may exist by simply revoking the cert.
Ok, that's the one effective use case of the code signing as implemented today.
Want to actually search for something? The start screen makes more sense than the smallish non-resizable start menu window.
The size might make sense, but the behavior does not. On a Win 7 desktop, I can generally type what I want to find and it will show up in the list. In Win 8, despite having more real estate, they are more stingy with it. Search for 'update' and you get 'No apps match your search'. You have to figure out that you in fact need to click over to a different category to see 'settings' or 'file' results, depending on what you are searching for.
The problem is that powershell is in an awkward place between languages like perl/python and shell scripting.
The pipe might be simply the string you see, or it might be a stream of objects being formatted, and which it is generally isn't clear at a glance. In Bourne shell, it's pretty much always what you see is what you get (for better or for worse). Once understood, this *usually* gives the programmer more power when they test to determine what the stream will look like, but it's still inconsistent whether provided cmdlets support pipeline input or not, forcing very distinct calling conventions to mix and match cmdlets that require pipeline input with cmdlets that cannot comprehend it.
Many scenarios might present your code with either a single object or a list depending on circumstance. Select-Object, for example, will return a lone object (not in an array) if one item matches, but an array if multiple match. This means code that assumes one result will suddenly be faced with an array, and code that assumes it will see an array to iterate over will break when faced with only one value.
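A sketch of the trap and the usual defensive fix (the process queries are just examples):

```powershell
# One match: a bare object, with scalar rather than array semantics
$one = Get-Process -Id $PID | Select-Object Name
$one.GetType().Name        # PSCustomObject

# Several matches: suddenly an array
$many = Get-Process | Select-Object Name
$many.GetType().Name       # Object[]

# Defensive fix: @() forces an array regardless of how many came back
$safe = @(Get-Process -Id $PID | Select-Object Name)
$safe.Count                # always usable
```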
Because they want the user to be able to be fast and loose with quoting, some odd behaviors creep in there. A string suddenly becomes an array because it had a ',' in it, and the user who had been typing things free-form suddenly passes a vastly different argument into your cmdlet.
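For example (Show-Type is a made-up helper just to expose the type of what actually arrived):

```powershell
function Show-Type { param($x) $x.GetType().Name }

Show-Type hello          # String
Show-Type hello,world    # Object[] -- the bare comma built an array
Show-Type 'hello,world'  # String again once quoted
```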
It's interesting you should mention the code signing. It causes some headaches without a lot of meaningful protection (e.g. a user can simply flip the execution policy for their own scope).
Process directives in functions are unable to effectively stop execution of the function: 'return' just moves on to the next item in the pipeline, and 'break' will escape all the way up to a loop enclosing the function call, or exit the whole script if there is no such loop.
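A sketch of the behavior (the function name is invented):

```powershell
function Limit-Small {
    process {
        # 'return' here only ends processing of the *current* item;
        # the remaining pipeline items still flow through.
        if ($_ -gt 2) { return }
        $_
    }
}

1..5 | Limit-Small    # emits 1 and 2, but items 3..5 are still visited
# Swapping 'return' for 'break' would instead tear down the whole
# pipeline (or the script, absent an enclosing loop).
```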
There's all sorts of weirdness in powershell.
We already have perfect sound simulation
Actually, we can *reproduce* sound 'good enough', but generating convincing sound from nothing is still beyond anything I know of. Speech synthesis, for example, is always obviously unnatural. We are still at the point where we have to assemble sounds from recorded or carefully engineered samples rather than generate them spontaneously. Sure, we can do things like manipulate where the sound is perceived as coming from, but we still require scripted voices and sound samples.
Video is more complicated so we can't cheat as much.
The problems are not what is possible with a tablet interface, the problems center around the dual personality awkwardly slapped together.
If you have a classic application, it's awkward to use in touch interaction.
If you do anything in metro, it's awkward to use without touch.
If you are running a Metro app, you cannot arbitrarily have it sized and present alongside arbitrarily many other apps.
If running a desktop app, it doesn't enumerate the same in win-tab (though alt-tab behaves differently) and the navigation schemes are just pretty fundamentally different.
As a developer, you have to pick from either a Metro app with very low (as-yet) market penetration *or* the desktop api that would support both the new and old editions of Windows.
Alt-tab and Win-Tab act significantly different from one another as well.
I can run Netflix in the browser and it works fine, but the Windows 8 app fails to play on my laptop (an AMD driver issue around DRM, but it speaks to the stark inconsistencies).
In short, it's not about what Microsoft enabled; it's about how poorly they executed. If MS had added the better interface tricks to the same consistent API they offered before, perhaps this conversation would be different. Instead, they doubled down on Silverlight, renaming it as they went because Silverlight was a failure, creating a huge inconsistent mess between their 'Metro' strategy and the rest of the environment.
As the technology has matured, the inevitable is coming to pass: the laptop/desktop market is reaching a plateau. The tablet and phone markets will (or already have, by some accounts) hit the same point. Some even say that the 'tablet' market other than the iPad *started* in that fashion. Apple probably has the most durable strategy, inspiring their customer base to consider their devices a fashion statement as well as a tool (the same way some automakers can extract more volume and margin out of select models than others).
While there are of course examples of people who use tablets to displace their usage of computers, the vast majority have both, depending on circumstance. Maybe they bring their laptop out to lunch or to the couch less, but at work and at home they are still pounding on a traditional PC at least once a day.