Qbertino writes: One of the oldest science fiction TV serials, the famous German "Raumpatrouille Orion" (Space Patrol Orion), turned 50 today. Heise.de has a scoop on the anniversary in German. The production of Space Patrol Orion predates Star Trek by roughly a year, and the show was a huge TV hit in Germany, gaining the status of a "street sweeper" (Straßenfeger), referring to the effect its airing had on public life. Six episodes were produced. Watch episode one here and link subtitled versions below if you find one. Enjoy! "Fallback to Earth!"... In Germany that phrase is about as cult as "Engage warp drive!".
Qbertino writes: I want to boost my productivity and the performance of my Linux desktop setup without compromising any of the hardware integration that comes with modern zero-fuss distributions such as Ubuntu.
I'm currently using Ubuntu 16 LTS with the Xubuntu setup, but I'm not happy with the performance, despite it running with 18GB of RAM from an SSD on a ThinkPad W510 with a quad-core CPU and an Nvidia Quadro GFX card. Load times of Firefox are abysmal (truly), and the responsiveness of Xubuntu is OK-ish, but it could be better.
I would like a Linux desktop that boots into the desktop in roughly 10 seconds, allows me to use all the extra keys (multimedia, brightness, etc.), has zero-fuss wireless, Bluetooth, mouse and extra-mouse-button integration, and otherwise does not bog down the system.
I have no fear of some purist WM like i3wm and learning how to use and configure it. I do use the CLI, although my main IDEs currently are from the Java-driven JetBrains family and I plan on continuing their use. But if I'm using a lightweight Linux and WM, I would still like wireless and the extra key functions to work as intended. Nvidia's 3D drivers would be nice too, but I don't know if those are slowing down the system (any experience with this?).
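On the extra-keys point: a purist WM like i3 binds nothing by default, but the X keysyms for those keys (XF86Audio*, XF86MonBrightness*) exist independently of any desktop environment, so a few config lines cover it. A sketch for ~/.config/i3/config; the bound commands are common choices and assume alsa-utils and xbacklight are installed, not the only way to do it:

```
# ~/.config/i3/config -- multimedia and brightness keys (sketch)
bindsym XF86AudioRaiseVolume exec --no-startup-id amixer -q set Master 5%+
bindsym XF86AudioLowerVolume exec --no-startup-id amixer -q set Master 5%-
bindsym XF86AudioMute        exec --no-startup-id amixer -q set Master toggle
bindsym XF86MonBrightnessUp   exec --no-startup-id xbacklight -inc 10
bindsym XF86MonBrightnessDown exec --no-startup-id xbacklight -dec 10
```

Run `xev` and press a key to find its keysym if a vendor key isn't listed above.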
What should I do to ensure such a setup? Is there a modern, perhaps rolling distro that you can recommend that will give me the speed I'd expect on a modern-day supercomputer laptop? Perhaps one that automatically compiles a custom kernel after analysing the system or something? Also, what is the general problem here? Is this an Ubuntu problem? Is this 32-bit vs. 64-bit? Is it just me, or are modern Linux distros getting slower?
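For the boot-time half of the question, a quick sketch of where to look on a systemd-based distro (Ubuntu 16.04 is one): `systemd-analyze` breaks the boot into stages and `blame` lists per-unit startup times. The small helper function below is my own invention for ranking such output; it also works on logs captured from another machine.

```shell
# Where did the boot time go? On a live systemd system:
#   systemd-analyze                 # total: firmware + loader + kernel + userspace
#   systemd-analyze blame           # per-unit startup times
#   systemd-analyze critical-chain  # the chain actually gating the boot

# Tiny helper (made-up name) to rank "blame"-style lines, slowest first:
slowest_units() {
  # expects lines like "  5.123s NetworkManager.service"; prints the top N
  sort -rg | head -n "${1:-5}"
}

# Live use:  systemd-analyze blame | slowest_units 5
# Or on captured output:
printf '1.2s b.service\n9.8s a.service\n0.3s c.service\n' | slowest_units 2
```

Whatever tops the list (often network-wait or scan services) is the first candidate for `systemctl disable`.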
Your educated opinion and advice are needed. Thanks.
Qbertino writes: A lot of our everyday lives today hinges on our smartphones and on the apps/services/data on them being available and working.
What are your tactics/standard procedures/techniques/best practices for preparing for a lost/stolen/destroyed Android phone and/or iPhone? And have you actually needed to use them?
I'm talking concrete solutions for the worst-case scenario: apps, backup routines (like automating Google Takeout downloads or something), tracking and disabling routines, methods and perhaps services. If you're using some vendor-specific solution that came with your phone and have had positive experiences with it, feel free to advocate.
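To make "backup routine" concrete on the Android side, one unglamorous option is a date-stamped local dump over USB with adb. A sketch, assuming USB debugging is enabled and android-tools-adb is installed; `adb backup` covers app data and APKs, though some apps opt out of it, and the file naming convention here is my own:

```shell
# Sketch of a local Android backup routine via adb (USB debugging on).

backup_name() {
  # date-stamped archive name, e.g. phone-2016-02-17.ab
  echo "phone-$(date +%F).ab"
}

# With the phone attached:
#   adb backup -apk -shared -all -f "$(backup_name)"
# And onto a replacement device later:
#   adb restore phone-2016-02-17.ab

backup_name
```

Cron this on a machine the phone regularly docks to and you at least have an offline copy that no remote wipe or account lockout can touch.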
Please include the obvious, with some description of what you use, such as a solution already built into Android/iOS, and describe any experience you had with these solutions in some unpleasant scenario of your own. Also perhaps the procedures and pitfalls for restoring the previous state onto a replacement device.
Please note: I'm talking both Android and iOS. And thanks for your input — I can imagine that I'm not the only one interested in this.
Qbertino writes: I'm a tablet user. I bought the HTC Flyer when it was roughly 1.5 years old to fiddle with it and program for it. I was hooked pretty quickly and it became part of my EDC. The hardware has since become way outdated, but I still think it's one of the best tablets ever built in terms of quality and consistency. Roughly four years later I moved to a then-current 10" Yoga 2 with an Atom CPU, an LTE module and an SD slot with a 64GB card. I'm very happy with the device and it goes with me wherever I go. It gets 12 to 16 hours of battery time, depending on usage, basically serves as my virtual bookshelf/music/multimedia/mailing device, and keeps the strain on my eyes and fingers to a minimum. It has some power-button issues, but those are bearable considering all the other upsides.
I've got everything on this device and it has basically become my primary commodity computer. My laptops are almost exclusively in use when I need to code or do tasks where performance is key, such as 3D or non-trivial image editing.
In a nutshell, I'm a happy tablet user. I consider a good tablet more important than having the latest phone (my Moto G2 is serving me just fine) and I'm really wondering why there are no tablets that build on top of this. Memory is scarce on these devices (RAM and storage), as battery time often is.
Most tablets feel flimsy (the Yoga 2 and Yoga 3 being rare exceptions) and have laughable battery times (again, the Yoga models being rare exceptions). However, I've yet to find a tablet that does not give me storage or memory problems in some way or other, lasts a day or two on a charge, and doesn't feel chintzy, like it won't stand a month of regular everyday use and carrying around in an EDC bag.
Of course, we all know that RAM is an artificial scarcity on mobile devices so the manufacturers can charge obscene amounts of money for upgrades, but 1GB as a standard? That's very tight by today's standards. Not to speak of storage. Is it such a big deal adding 128GB or perhaps even 256GB of storage to these devices as a default? Why has none of the manufacturers broken ranks? Do you think there's a market for the type of tablet described in the title and we can expect some movement in that direction, or am I on my own here?
What are your thoughts and observations on the tablet market? Do you think tablets are the convergence devices we've all been waiting for, as Apple and the Aquaris & Ubuntu folks apparently think? (I'd agree to some extent, btw.)
Qbertino writes: At work I'm basically the sole developer at an agency of 30, building and maintaining a half-assed LAMP/WordPress stack pipeline for web and web-application development.
It's not completely chicken wire, spit and duct tape, but just a corner or two ahead of that. The job is fine, and although my colleagues have no clue whatsoever about anything IT and couldn't be bothered to think twice about the difference between a client and a server, they do respect my decisions and my calls as an experienced IT guy.
I'm currently in the process of gradually improving our environment step-by-step with each project, and while I know how I can automate stuff like WordPress instancing and automated staging host setup and how that might look for our LAMP stack, I am fully aware of the limitations of this platform. Especially in view of this cloud-development thing slowly catching on and going mainstream and self-maintained stacks and pipelines going the way of the dodo.
In view of all this I'm wondering if it is worthwhile to move custom development away from LAMP and onto something less mixed up. Something potentially scalable and perhaps even ready for zero-fuss migration to an entirely cloud-based platform. In my book NodeJS has the advantage that it puts the same PL on both client and server, and its overall architecture appears to be more cloud/scaling friendly. (Please cue the witty npm repo jokes and wisecracks in a separate thread. Thanks.)
My question: Have you moved from LAMP (PHP) to NodeJS for custom product development and if so, what's your advice? What downsides of JS on the server and in Node have a real-world effect? Is callback hell really a thing?
And what is the state of FOSS Node products? PHP is often, and perhaps rightfully, considered 15 years ahead of the game with stuff like Drupal, Joomla, eZ Publish or WP, but the underlying architecture of most of those systems is abysmal beyond words. Is there any trend inside the NodeJS camp toward building a platform and CMS product that competes with the PHP camp whilst maintaining a sane architecture, or is it all just a ball of hype with a huge mess of its own growing in the background, Rails-style?
Please do take note: this is not about tinkering (for that, Node actually *is* on my shortlist); this is about speedy production and delivery of pretty, working and halfway reliable products that make us money. At the same time it's about correctly building a pipeline that won't be completely outdated in 10 years, which I actually do see looming for LAMP, if I'm honest.
Qbertino writes: I like Ubuntu and find Unity generally bearable and perhaps even a good idea for convergence, but this weekend compiz pissed me off one time too many, lagging my W510 ThinkPad with 18GB of memory, an Nvidia Quadro GFX and a hefty Core i7 into totally unusable slowness. Unacceptable, and screw that neat OpenGL Exposé if it needs a quad-core supercomputer to barely get working. I've switched to XFCE/Xubuntu, bracing myself for the manual CLI config orgy to be expected. It's a breath of fresh air being back to normal speeds and ditching Ubuntu 14's laggy/buggy/flaky default setup, but I'm ready for more. I've been pondering awesome WM for about a year now and remember Fluxbox being the hip thing on Linux 10 years ago. I used WindowMaker for quite some time in the 2000s, but I would like to ask the Slashdot crowd what the newest and shiniest WMs on Linux are these days. Tiling would be nice, hence me eyeing awesome. I would also like to continue using the Nvidia drivers.
What are your current preferences and why? And do you see a bearable migration path on a well-maintained Ubuntu 14 or are there better distros you'd recommend for the machine mentioned above and WM installation nimbleness? (Please note: I know my way around Debian and can handle all the cli stuff including apt-get without crying like a baby. If my Ubuntu 14 LTS install is up for the task, I am too.)
Ideas, suggestions welcome. Thanks.
Qbertino writes: Dirk Brunner, a drone hacker/builder from Munich, has set the official world record for fastest drone ascent, reaching RC-rocket speeds. The website is in German, but contains a neat video of the difficulties (motors frying and such) and of the tests and attempts at the record, including the record flight itself (various on-board camera views). Looks impressive.
Qbertino writes: In a letter to customers, Apple CEO Tim Cook publicly responds to the FBI's recent demands in the San Bernardino terrorism case to build backdoor/cracking software for and into iOS. With admirable clarity, Apple opposes this and takes a clear stand for privacy and liberty. Tim Cook gives in-depth details on the matter, shows a clear intent to spark a public debate in the US, and closes with a clear statement: "While we believe the FBI's intentions are good, it would be wrong for the government to force us to build a backdoor into our products. And ultimately, we fear that this demand would undermine the very freedoms and liberty our government is meant to protect." ... I have to admit, I am impressed... Now let me spoof my data as a built-in feature and my next phone might just be an iPhone, $700 be damned.
Qbertino writes: German officials recently suggested making all cash transactions larger than 5,000 euros illegal. It's only a proposal, but it's definitely some back-room grey-suits' Machiavellian attempt to introduce the concept of ultimate transaction tracking in the long term. We all know how this goes. With all this and the ever-looming cyberpunk future in close proximity, I'm starting to wonder if it isn't time to get myself familiar with cryptocurrency as a means of trade. Bitcoin is all the hype, but the blockchain has flaws, in that it isn't as anonymous as one would hope for: you can track past transactions. Rumors of Bitcoin showing cracks are popping up, and there are quite a few alternatives out there. So I have some questions: Is getting into dealing with cryptocurrency worthwhile already? Is Bitcoin the way to go, or will it falter under wide use / become easily trackable once the NSA and the likes adapt their systems to doing exactly that? Which digital currency has the technical and mind-share potential to supersede Bitcoin? Are there feasible cryptocurrencies that have the upsides of Bitcoin (such as a mathematical limit on their amount) but are fully anonymous in transactions? What do the economists and digi-currency nerds here have to contribute on that? What are your experiences with handling and holding cryptocurrency? And does Bitcoin own the market, or is it still flexible enough for a technology upgrade? May the discussion begin...
Qbertino writes: Heise.de (German TFA) reports that the Dutch police are training birds of prey, bald eagles among them, to take down drones. There's a video (narrated and with interviews in Dutch) linked in TFA. It's a test phase, and it's not yet determined whether this will go live; concerns about the birds getting injured are among the arguments against this course of action. This is all conducted by a company called "Guard From Above", which designs systems to prevent smuggling via drones. TFA also mentions MTU's net-shooting quadcopter concept of a drone predator.
Qbertino writes: As Spiegel.de reports (German link), inventor Artur Fischer has died at the age of 96. Artur Fischer was a classic example of the innovator and businessman of post-war Germany: he invented the synchronous flash for photography, the famed Fischer fixing (aka screw anchor/rawlplug, or "Dübel" in German) and the fischertechnik construction sets with which many a nerd grew up, including the famous fischertechnik robotics kit for the C64 in the '80s. His heritage includes an impressive portfolio of over 1,100 patents, and he reportedly remained inventive and interested in solving technical problems till the very end... Rest in peace, and thanks for the hours of fun tinkering with fischertechnik... Now where did that old C64 robot go?
Qbertino writes: Hoi everybody. I've got this problem on my hands: I want to tie an entire team (approx. 30 people) into a simple versioning pipeline based on Git. This is an agency that sells "social media marketing", production of PowerPoint presentations (big market for this, no shit!) and other stuff along those lines.
Just about everybody can barely tell a client from a server and walks out of a talk that is "too complicated" simply because I've shown them a slide with 3 lines of simple HTML code thrown into a general-knowledge presentation. (Yeah, I know. Please curb the remarks. Unexpectedly, I absolutely love working here. Everybody on the team is great and the culture is amazing, despite just about the entire crew being abysmally ignorant of even the simplest IT basics. I'll take this team over any anti-social bunch of experts any time.)
The cluelessness about, and general fear of, even the slightest IT thing has my boss telling me that SourceTree (a neat, free-as-in-beer, cross-platform client) is "too complicated and confusing" for most of the team and "has too many buttons".
We run all our work off a single massive off-the-shelf NAS share, and a regular admin would get horrid nightmares from our asset workflow. Most of the team versions manually, with date numbers added to filenames... which, admittedly, does have the advantage of not requiring any sort of versioning software or client at all. We do have regular backups and some disaster recovery, but it's all manually maintained via web interfaces, some not-so-super-pro Linux admin work by me via SSH, and some sorta IT-savvy marketing guys on the online team. I do this on the side; my main job is dealing with web projects.
My idea is to have a central set of repos on a central server (we already have that) that everyone can push to and pull from, perhaps with all of them offered up as project-access-based shares for direct access for those who don't want to touch a versioning client. However, I would love to have a client of some sort that offers the simplest of GUIs and isn't ugly (this is important). "Commit & Push", "Pull", "Show Unversioned", "Show All", "Browse History" and "Tag" would be the main set of buttons required. The usual colored-icon highlighting of unversioned changes would be helpful too.
The accompanying views would need to show a prominent comment box upon commit and handle conflicts and errors without spartan dumps of Git output (these scare my mates and make them cry). No branches, merges hidden, and some sort of automatic [stash, pull, stash-apply & push] if the user was not in the LAN for a few workdays and is off track with his master branch. I would also want the history view to hide merges. Web releases would be done with conventioned tags; perhaps a button for something like that would be neat as well. I want the introduction of the team to the basics of automated distributed versioning, history browsing and their advantages to go as smoothly as possible, and the stuff listed above would be just fine for what we do in our everyday work.
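For what it's worth, the [stash, pull, stash-apply & push] step maps almost one-to-one onto plain git. A sketch of the "buttons" as aliases a minimal wrapper GUI could shell out to; the alias names are my own invention, and `--autostash` on pull needs a reasonably recent git:

```shell
# "Commit & Push", "Pull" and "Browse History" as git aliases.
# (Alias names invented here; git pull --autostash landed in git 2.9.)

# save: stage everything, commit with a message, push.
git config --global alias.save \
  '!f() { git add -A && git commit -m "${1:-work in progress}" && git push; }; f'

# sync: catch up a stale working copy -- stashes local edits,
# rebases onto the remote, reapplies the stash, all in one step.
git config --global alias.sync 'pull --rebase --autostash'

# hist: history without merge noise.
git config --global alias.hist 'log --no-merges --oneline'
```

A GUI that only ever runs `git save "message"` and `git sync` hides every scary concept except the commit comment, which is the one thing you actually want from the team.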
Are there any clients, and perhaps accompanying preconfigured custom workflows, you know of that offer this sort of thing? At this point I'd even be willing to abandon Git, although I think it's awesome and really wouldn't want to go back to SVN (*shudder*). I know of a neat-looking commercial Git-compatible toolkit called "Plastic", but that's for big non-trivial projects. GitKraken looks neat, but it's a kitchen-sink solution as well. Is there something like that on the other end of the spectrum? Clients would need to be scriptable in some hidden way and cross-platform (Win & Mac). Should I start getting my hands dirty in Xamarin/MonoDevelop and roll my own? Are there other systems out there that work and have GUI clients that don't look like a Xenomorph to regular users? Any other ideas? Suggestions welcome. Thanks.
Qbertino writes: Out of professional need I've started to dabble with VM setups last week, mostly KVM/qemu and Virt-Manager as a hypervisor GUI. It's all very "open-sourcy": a bit flaky, convoluted, with some CLI stuff thrown in. It worked, but needed caretaking and expert knowledge for the basics, and there are some features that I missed or couldn't get running. I was wondering how I could get a solid and disaster-safe VM setup up and running on Linux. Here are my requirements:
1.) Base-OS: Linux (Debian, Ubuntu, whatever)
2.) Hypervisor with a stable click-UI and the following features:
2.a.) One-click copy/backup of VMs, preferably ones that are still running, plus reasonable disaster-recovery behavior (the hypervisor and VM shouldn't wet their pants if not all virtual/real hardware features are present; it should be possible for a VM to run with a standard base set of features provided by the hypervisor, so that in a pinch I can launch a backed-up VM on a laptop to rescue data and such)
2.b.) Zero-fuss virtual-to-real NIC configuration and zero-fuss NIC/bridging configuration on the base OS/hypervisor, all with a click-UI, preferably with neat network diagrams (in a pinch the system should be operable by part-time student admins)
2.c.) Copy/paste/instancing of preconfigured VMs. Launching a fresh extra Linux or Windows installation shouldn't take more than 5 minutes or so and should be as idiot-proof as possible
2.d.) Zero-fuss dynamic storage management across all running VMs. (see below)
3.) Storage abstraction: I know this issue is separate from CPU virtualization, but it's nonetheless part of the same scenario: I'd like to be able to virtualize storage. That is, be able to allocate storage as I wish to any VM in any size I want, with dynamic storage-assignment options (max expansion parameters and such). This probably involves two stages: combining all storage from a storage rack into one monolithic storage block, with duplication across HDDs for safety, and then a hypervisor with the ability to dynamically assign virtual storage to each VM as configured. Is this roughly correct?
What experience do you guys have with storage virtualization? As I mentioned, I have no problem with the base OS doing the first stage on its own, without some killer NAS setup that costs as much as a Ferrari, but I do prefer a click-UI solution that provides zero-fuss storage management.
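The two-stage picture above is roughly what mdadm plus LVM thin provisioning give you on a plain Linux base OS: mirror the disks into one block device, turn it into a thin pool, and carve per-VM volumes out of it that can be over-allocated and grown online. A sketch of the command sequence; device names and sizes are made-up examples, and everything here needs root and real disks, so it is shown as commentary rather than a runnable script:

```
# Stage 1: duplication across HDDs -- mirror two disks into one device:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
#
# Stage 2: make it an LVM thin pool and hand out virtual disks from it:
#   pvcreate /dev/md0
#   vgcreate vmstore /dev/md0
#   lvcreate --type thin-pool -L 3T -n pool vmstore
#   lvcreate --type thin -V 500G --thinpool pool -n web-01-disk vmstore
#
# A thin volume only consumes real space as it's written to, and can
# be grown online later:
#   lvextend -L +200G /dev/vmstore/web-01-disk
```

Virt-Manager can then point a VM's virtual disk at /dev/vmstore/web-01-disk directly; the click-UI part of storage management is the weak spot of this stack, not the mechanics.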
4.) Nice to have: Dynamic CPU assignment based on time and/or usage. I’d like a render VM to get extra CPUs at night and would like to time that — for example, a VM gets extra CPUs between 1 and 7 o’clock for extra rendering power while the other VMs get to share CPUs.
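Point 2.c) above is roughly what virt-clone from the libvirt toolchain does on the CLI, and virt-manager exposes the same operation as a right-click "Clone" on a shut-down VM. A sketch behind a dry-run wrapper so it can be read and tried safely; the template and VM names are made-up:

```shell
# Cloning a preconfigured template VM with libvirt's virt-clone.
# DRY_RUN is on by default so this only prints the commands;
# run with DRY_RUN= (empty) to execute for real.

run() { echo "+ $*"; [ -n "${DRY_RUN-1}" ] || "$@"; }

# Template and clone names are made-up examples:
run virt-clone --original debian-template --name web-01 --auto-clone
run virsh start web-01
```

Keep one cleanly installed, generalized template per OS and a fresh instance really is a five-minute affair.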
I'm thinking two *big* failover Linux PC setups (a duplicated setup), 2-3 storage racks, and one or two professional applications that do the hypervisor/VM stuff and storage as mentioned above and may also cost a little (500 to 3,000 euros).
OK, so that's a broad overview. For perspective: the setup is for an agency with digital and print production pipelines, with the only web consultant/web dev as the single non-intern IT person. That would be me. I know my way around the Linux CLI and have been doing Linux since the '90s, but I do my deving and daily work on OS X and, as you can imagine, have no time for "scripting masturbation" or any setups that come to a grinding halt if I'm not around when a VM runs out of space or memory. We also have no time for downtime longer than 2-3 hours if disaster strikes.
What do you suggest? What are your experiences with FOSS setups and perhaps with proprietary pointy-clicky apps? Hoping for some educated input on this. Thanks.