Comment Re:A better question (Score 1) 181
Thank you Anonymous Coward for your brave act of naysaying and your willful lack of imagination. You are a role model to us all.
One potential gotcha to think about: I'm not sure how many USB monitors require USB3. DisplayLink makes most of the chipsets, and their origins are in USB2, but I'm not sure how well their newest USB3 products do when attached to a USB2 port.
There's the "is there enough data" question (though their DL2xxx chipsets did 1080p over USB2),
and there's the "is there enough power" question, since USB3 offers 80% more juice (0.9A vs 0.5A).
Also, I've never heard of anyone trying to compile DisplayLink's proprietary drivers on ARM, so that's more cross-your-fingers territory.
I suspect it *will* work, but those are some potential gotchas to think about. Maybe you have a different idea of a portable monitor than I do, dunno, but these USB-powered things are what I think of.
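To put rough numbers on both questions, a back-of-the-envelope sketch in Python using nominal spec figures, not measurements (the ~35 MB/s usable USB2 rate is my ballpark assumption):

    # Nominal per-port current limits from the USB 2.0 / 3.0 specs.
    USB2_CURRENT_A = 0.5
    USB3_CURRENT_A = 0.9
    VBUS_V = 5.0

    # The "enough power" question: watts available at the port.
    print(f"USB2 budget: {VBUS_V * USB2_CURRENT_A:.1f} W")         # 2.5 W
    print(f"USB3 budget: {VBUS_V * USB3_CURRENT_A:.1f} W")         # 4.5 W
    print(f"increase: {USB3_CURRENT_A / USB2_CURRENT_A - 1:.0%}")  # 80%

    # The "enough data" question: uncompressed 1080p60 at 24bpp vs. a
    # ballpark ~35 MB/s of real-world USB2 bulk throughput -- hence the
    # compression DisplayLink does.
    raw_mb_s = 1920 * 1080 * 3 * 60 / 1e6
    print(f"raw 1080p60: ~{raw_mb_s:.0f} MB/s vs ~35 MB/s on USB2")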
Letting Larry really get his claws into the state, after one of the most egregious fuck-ups Oracle could possibly manage? What the frell is wrong with these people? What the hell happened here, and why did Oregon- like a complete sucker- agree to let itself be completely swindled a second time, like a total n00b? Poor people of Oregon: you failed to get software built for yourselves in an inglorious fashion, and now you are being taken again.
NV open-sourced the CUDA compiler in 2011, but I don't believe there are any other implementations out there. The rest of the world continues adopting OpenCL, and now the whole Khronos supergroup is super hyper for Vulkan (NV even giving a solid thumbs up), with Apple and NV being the two rogue vendors pushing proprietary wares (Metal and CUDA). Even with NVidia doing really *really* well in the GPGPU market, even with a really great dev env, the extreme proprietary-ness of CUDA makes it really hard to sell to the alpha techies.
CUDA has a lot of traction in academic and applied fields, but much of the technical industry doesn't take it seriously and isn't comfortable saddling itself to a one-trick-pony offering from NVidia. This ridiculously powerful box, with its cool software and cool visibility into a neat problem, is really a pipeline play to get you into NVidia's world. For some, going all in on NVidia is OK, but I don't think it's unlike going all in as an MS developer or iOS developer: you put on the blinders, and all you'll be able to do is sprint towards a fixed, not-too-distant point.
Vivid Vervet ships with kernel 3.18.3 rather than a current 3.18 point release such as 3.18.12, which seems unconscionable.
In particular, there's a known regression where BTRFS fails to clear its logs and the system becomes unbootable. This gotcha seems to take around two weeks to manifest, at which point the kernel will lock. https://btrfs.wiki.kernel.org/...
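If you do get bitten, here's a minimal recovery sketch (Python, run as root from a rescue environment). It assumes the affected filesystem is on /dev/sda2 (purely illustrative) and that clearing the log tree with btrfs-progs' "btrfs rescue zero-log" is the right workaround for this particular regression:

    # Clear the btrfs log tree so the volume can mount again. The device
    # must NOT be mounted. Writes from the last seconds before the crash
    # are lost, but the filesystem becomes mountable.
    import subprocess

    DEVICE = "/dev/sda2"  # assumption: adjust to your btrfs root

    subprocess.run(["btrfs", "rescue", "zero-log", DEVICE], check=True)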
http://www.cnet.com/news/googl... seems similar. They claim 99.9% effective utilization through their per-server battery backup system, compared against 95% for a centralized lead-acid UPS-based system.
http://hackaday.com/2014/11/11... might also have some nuggets. A lead-acid battery is going to be heavily de-rated at the discharge rates required, and lead-acid likely won't have the same charging efficiency, either.
Holding the batteries at around 70% charge is no big loss for this use case, given that the alternative is shortening their life.
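To make the utilization gap concrete, a quick sketch (the 99.9% and 95% figures are from the article; the 10 MW facility load is my made-up example):

    # Conversion losses at facility scale for the two approaches.
    facility_kw = 10_000  # assumed IT load, illustration only

    for name, eff in [("per-server batteries", 0.999),
                      ("centralized lead-acid UPS", 0.95)]:
        lost_kw = facility_kw * (1 - eff)
        print(f"{name}: {lost_kw:.0f} kW lost")
    # per-server batteries: 10 kW lost
    # centralized lead-acid UPS: 500 kW lost -- a 50x difference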
Launchd is the young whippersnapper on the block. Solaris has had daemon administration for years.
The old guard is a huge fan of PID1 doing its thing and then going away: it's up to everyone else to manage the world after PID1 kicks everything off. The new world- the people who like systemd- are enthusiastic beyond belief to have a PID1 that serves as a master control point from which the system can continue to be managed. Every systemd subsystem has a D-Bus API we can program against; we can schedule, coordinate, and manage processes over systemd's core D-Bus endpoint. This speaks of a new dawn where we may no longer hack our shell scripts to do whatever, but we can write higher-level code to effectively manage the system's operation, which is something that royally sucked in the old guard's world.
Sure, some of this could sort of be dealt with by continuing to add more shell scripts. But the init-script world is a mess. Individual daemons have radically different ideas of what kind of responsibility they need to handle in their init scripts, and even though for the most part the skeleton is visible across all of them, it's a hack job that outsiders have to rack their brains to understand. Conversely, systemd gives us uniform control over the system: the master control program PID1 that is systemd will let us start/stop things, AND will tell us the status of things (over either shell interfaces or D-Bus).
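A minimal sketch of that control point in Python, using the dbus-python bindings and systemd's Manager API (sshd.service is just an example unit; the restart call needs the right privileges):

    # Query a unit's state and restart it through systemd's D-Bus API,
    # instead of scraping init-script output.
    import dbus

    bus = dbus.SystemBus()
    systemd = bus.get_object("org.freedesktop.systemd1",
                             "/org/freedesktop/systemd1")
    manager = dbus.Interface(systemd, "org.freedesktop.systemd1.Manager")

    # Status: resolve the unit's object path, then read ActiveState.
    unit_path = manager.LoadUnit("sshd.service")
    unit = bus.get_object("org.freedesktop.systemd1", unit_path)
    props = dbus.Interface(unit, "org.freedesktop.DBus.Properties")
    print(props.Get("org.freedesktop.systemd1.Unit", "ActiveState"))

    # Control: the same endpoint starts/stops/restarts units.
    manager.RestartUnit("sshd.service", "replace")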
I look at this more like the innovation of steering- which permitted four-wheeled vehicles- than like a particular engine configuration (different muscle, same end). Sure, you could get there with the old two-wheeled cart, but as it turns out you have a lot more flexibility when the platform has the consistent stability that permits being added to. Arguing over where the cam goes affirms the lie that systemd is just a really complex init script: it's not, it's a resident system control daemon.
Where is the source? The GitHub repo says "This repository contains the core OpenBCI hardware and software frameworks," but there are no schematics, no board layout, nothing.
The $500 price tag seems absolutely absurd for what is essentially an already-made $30 ADC and its breakout board.
Don't buy anything today. Wait until there are media boxes with quad-core Cortex-A15/A17 chips and buy one of them. They'll be out any week now. Rockchip's RK3288 is coming, should be affordable, and the company is spending a lot of effort making sure it's well supported in mainline.
Cortex-A9 hails from 2007. It's ancient. The GPUs are at best old Mali-400s. The compute-per-watt is not great.
If you want to go really low power- if battery life is your concern and you don't actually have serious CPU needs (you mention MSP430, so it sounds like you don't)- get a Cortex-A7 or Cortex-A5. There are dozens of dual-core Cortex-A7 Allwinner boards out there. The A5 is slimmer pickings, but will get you pleasantly below the one-watt range, and those boards come with more embedded-targeted peripherals that might not be included on media devices.
Yes, well, the terminal was a much more sensible, sane client that could take care of itself. We should _definitely_ go back to that on-the-line paradigm.
These are such tiny little warts. A) Don't use global variables, and perhaps 'use strict' if you want to be good. B) Most languages have arbitrary bit limits. Holding up the 52-bit floating-point limit and making a mockery of it, but not the 64-bit limit of integers? Those are weak-sauce accusations from sore fucking whiny babies. Oh, you want to insist on arbitrarily deep numerical precision? Have fun crossing off the huge section of people who need moderately performant math.
Languages are all basically the same shit, with slight flourishes that everyone gets zealous and overblown about. Get serious. Go find something real to fight about, like how vim is so much better than emacs.
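For what it's worth, that mantissa limit is trivial to demonstrate, and it's the same tradeoff every language using IEEE-754 doubles makes (Python here, purely as the demo language):

    # An IEEE-754 double stores a 52-bit mantissa (53 significant bits),
    # so exact integer arithmetic runs out at 2**53.
    big = 2.0 ** 53
    print(big == big + 1)          # True: 2**53 + 1 rounds back to 2**53
    print(2 ** 53 == 2 ** 53 + 1)  # False: Python ints are arbitrary precision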
Yes, but the proliferation of tools makes it harder to make sensible decisions about which ones are directly applicable. Copy-pasting random Stack Overflow answers and hoping they work is a regular practice, and it's the very embodiment of what's happened in the technical realms: information glut.
Worse: a lot of information, very little sense. Very few projects out there bother spending the time to trace their genetic roots, to find historical context where sense-making of information can even begin.
I don't want to agree or disagree about the web or web apps being kludgetastic, but I do want to point out: there were a lot fewer people doing programming, and they'd built themselves a lot less tooling. What had to be understood was far less, and what could be done was less still.
A diverse technical ecosystem springing up is, in my view, a healthy thing: a natural awakening and striving for new potentials. That the many technical societies and practices don't all form themselves towards the same careful deliberate ends, one free of subcultures and instead pushing towards one unified culture, is natural.
This claim of elegant, understandable tools of old more likely reflects the absence of other signals cluttering up the programming spectrum back then. Thrown into the mess of programming today, it's hard to discern the relevance of the many things one is being exposed to.
-LM
Spotify's extensibility is really good! Their API is great, very flexible, and extends the most common platform on the planet. Playing with Spotify made me a better programmer, for sure.
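A taste of that API, as a sketch against the public Web API's search endpoint (it assumes you've already obtained an OAuth access token, which the real API requires):

    # Search for an album and print its name and artist.
    import requests

    ACCESS_TOKEN = "..."  # assumption: token from Spotify's OAuth flow

    resp = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": "Kind of Blue", "type": "album", "limit": 1},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    album = resp.json()["albums"]["items"][0]
    print(album["name"], "-", album["artists"][0]["name"])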
The ability to use language (the command line) to express complex tasks is indeed its highlight, but there's absolutely nothing magical about this task-constructing that makes it uniquely UNIX-y in nature, IMHO. NoFlo, Node-RED, jBPM, IFTTT, &c demonstrate user-authored task composition at the GUI level. Android Activities are themselves a kind of pipe to "?", where the GUI asks the user to complete the connection.
The difference with UNIX is that the shell is a language, one with very wide expressibility and multiple levels of grammar: the shell itself has a grammar, and the programs each have their own argument grammar, and this multi-level flexibility has proven robust, durable, & capable of expressing a very wide range of things. Which I see not as uniquely UNIX or uniquely CLI, but as a characteristic- and not necessarily a good one- that explains its survival & persistence as an expressive tool.
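The pipe idiom itself ports out of the shell easily; here's the same composition expressed in Python (ps aux | grep ssh, chosen arbitrarily), each program still keeping its own argument grammar:

    # Compose two processes with a pipe, outside any shell.
    import subprocess

    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "ssh"], stdin=ps.stdout,
                            stdout=subprocess.PIPE, text=True)
    ps.stdout.close()  # let ps receive SIGPIPE if grep exits early
    out, _ = grep.communicate()
    print(out)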