I read that as "tinder is launching live video".
Oh god no. If you think unsolicited dick pics are bad...
You do not need a VPN.
Exposing a port is quite a reasonable option. Simply require HTTPS with username/password authentication.
If your server and the monitoring provider both support it, configure the server to require an X.509 client certificate and supply one to the provider. It's unfortunately unlikely that they will support this, though.
If your video server is a horrible insecure piece of garbage that doesn't do HTTPS, or that has a hardcoded secret key that's in 100,000 other servers around the world, put nginx or Apache or similar in front of it as a TLS-terminating proxy between it and the router, presenting a sensible SSL interface.
VPNs for each customer are an incredible pain. I'd refuse to consider it too. Most VPN endpoints are buggy horrible pieces of garbage. Clients are awful. Multiplexing them all is horrible, and means someone who successfully attacks the host handling all the VPNs probably gets much more access to your clients' networks than if you just used direct SSL connections.
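For the "proxy SSL in front of the insecure server" approach above, a minimal nginx sketch might look like the following. The hostname, ports, certificate paths, and htpasswd file are all placeholders I've assumed for illustration, not details from the original post:

```nginx
server {
    listen 443 ssl;
    server_name cam.example.com;             # placeholder hostname

    ssl_certificate     /etc/ssl/certs/cam.example.com.crt;
    ssl_certificate_key /etc/ssl/private/cam.example.com.key;

    # HTTP basic auth in front of the legacy backend
    auth_basic           "Camera access";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;    # the legacy, HTTP-only video server
        proxy_set_header Host $host;
    }
}
```

The backend only ever listens on loopback, so the outside world sees nothing but the TLS endpoint with authentication.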
Um... std::shared_ptr? (There's no std::smart_ptr.)
Present since TR1 as std::tr1::shared_ptr, well before C++0x shipped. It should not have taken that long, and Microsoft's painfully slow adoption of TR1 hasn't helped. It's there though.
Finally we're back where we were before Ubuntu upended everything with upstart. We have one init system across major distros. Except now it solves all sorts of issues that were problematic on sysvinit.
Great to see we can make progress and solve problems, even if it takes an astonishing amount of wailing and gnashing of teeth. (You'd think half Slashdot was born before 1960 and learned some traditional UNIX before there even was a Linux, the way they carry on whenever anything changes).
I think you're too focused on the risk of your backups being compromised. Do you store all your data only on encrypted hard drives with passphrases that you have to enter at boot time, on machines that auto-lock after a short timeout? If not, then you're probably at way more risk of data exposure through theft of "live" media and machines than backups, even stored offsite.
Not to mention the fact that a fire safe is a juicy target for Mr Burglar when you're away from home.
IMO the solution to the issue of cumbersome storage is to heavily encrypt, multiply duplicate and distribute your data.
I GPG-encrypt zip files, copy them to USB keys, and post the USB memory keys to my mother. Small USB keys are practically free now. She posts her backups to me too, along with outdated backup USB keys for re-use. This isn't all that frequent, so it serves for worst-case DR, but it has the handy effect of keeping old backups for quite a long time in case of accidental deletion not detected immediately etc.
I've also left an old laptop hard drive with photos with her, for example. Unencrypted because I don't much care if they get out, but it'd be easy enough to use an encrypted file system and store the keys elsewhere.
Key storage is the important part. Those keys must be in multiple places but stored fairly securely. If you lose the keys, your encrypted backups are worthless. A safe deposit box is an option, but it's kind of expensive and not necessarily as safe as you might like to think. I keep my GnuPG private key in paper and electronic form in a couple of places, passphrase protected. Only I know the passphrase. So if I have a head injury when my house burns down I'm in trouble, but really, what else is there to do?
I also use SpiderOak for cloud sync of much of my data. Anything particularly interesting like key financial and identity records, passwords, etc, gets gpg-encrypted first. The rest is in the clear on my laptop, and I'm at much more risk of my laptop being stolen than of SpiderOak being compromised. SpiderOak offers versioning and deleted file tracking, which is very important, since many people's backup routines fail to properly account for files that are accidentally deleted/damaged at some unknown point older than the last backup rotation.
(I really should get a fire safe and run a spare SSD in there for up-to-date protection from more moderate incidents; that way I could real-time sync it from my laptop via the media PC.)
Otherwise known as the "voting machine company was too stupid to implement SSL" attack?
Or, for email, the "what idiot thinks email is secure without local S/MIME or PGP signatures" attack. Seriously, on-wire tampering is the least of your worries if you're *emailing* ballots around.
You'll be welcomed in engineering roles for mission-critical / safety-critical machine control systems, etc.
I fix bugs and develop features for customers in open source (BSD-alike licensed) software as part of my job, through a company that's made a reputation for PostgreSQL support.
That said, I agree that the support model for open source software doesn't work as well for people who want to use something casually. It's great if it's a core and major part of what your company uses, because you can influence its direction, prioritise improving the things you care about, etc.
It doesn't work as well when it's just a minor part of your tooling/infrastructure where no one user/company can really justify paying a contractor to do the work. It becomes like a shared driveway: everyone wants it fixed, but everyone just waits for someone else to fix it. It's very hard to get people to go in together to fix it, and if you do they'll spend forever arguing about whether they want a quick bodgy patch-up or the full long-lasting strip and resurface treatment.
It's easy to sink a lot of time into it if you go the "just fix it yourself" route too, as even simple issues often have huge learning curves around the languages, tools, conventions, etc involved. You can find yourself feeling like you're chasing a supply of issues growing faster than you can fix them too.
That said, I've dealt with many closed source vendors who add bugs faster than I can report them, or make bug reporting impossible / ignore bugs entirely.... and still charge huge premiums. On the flip side there are OSS projects that're incredibly stable and solid, so you can use them without really being concerned about issues. A lot of it comes down to choice of vendor/project/tool.
My favourite trick when dealing with sales people is to construct questions so that they have to say the word "no", or say that their product does not do / does not support something.
It's hilarious watching the "oh, yeah, we can do that" people double-take as they're forced to drop out of their automatic and unthinking responses and actually construct a new lie or really think about and answer the question.
You only have backups if you test them regularly, otherwise all you have is a false sense of security.
The same is true of failover.
If it's anything like many such big companies, the customer service team couldn't tell the difference between a bug and user error to save their lives. They're probably also under pressure to admit no bugs - which is easy, because the software is bug-free until the instant a new release is made, anyway.
I wouldn't be surprised if the dev team used the mailing list to bypass the customer service people as well. That way they can actually find out about problems from customers without having to interpret reports filtered through four layers of idiot between the customer and the dev team.
I have the pleasure of working for a company where the dev team ARE the support escalations team and keep an eye on support contacts. Of course, we're small, and that doesn't scale, but we're also not afraid to say "I think that might be our bug, let me look into it."
After meeting your #1 requirement - finding the project interesting - I'd focus on the attitude and community:
* Is it friendly?
* Is the discourse on the mailing lists / forums / whatever generally positive in tone?
* Is it welcoming to new people?
* Is there a list of new developer / getting-started tasks, tutorials, documentation, etc?
* Is there any sort of mentorship program? Or at least a code-review / patch-review process?
* Is there a well defined process for people without direct commit access to get changes included?
and the codebase:
* Is the codebase sanely structured, documented, and commented with a fairly consistent coding style?
* Does their revision control show disciplined series of commits with good commit messages?
* Is there a way to report bugs?
* Is there regular development activity, not just occasional patches?
* Are commits usually followed by streams of fixups, or do they tend to be reasonable the first time around?
* Is the complexity level accessible for you - i.e. is it simple enough that you can follow, but complex enough to be challenging and interesting?
In other words, I'd be looking for something with a healthy community, code that isn't buried in technical debt, with good development practices and a positive attitude.
I've actually run business Linux desktops for years, and I had endless problems.
* Random GNOME profile corruption. Lots of it. XFCE was no better, just different corruption issues;
* OpenOffice bugs and crashes;
* OpenOffice crashing, starting "recovery" but failing to find the tempfile it's trying to recover, and endlessly trying to recover that file every time the user launches it from then on;
* Mail clients (Evolution or Thunderbird) crashing but leaving dead processes around that had to be manually killed before they'd relaunch;
* Painfully difficult and buggy central configuration and management of things like desktop profiles, mail setup, etc;
* Handling of archives in email attachments, those horrible broken outlook TNEF files, etc, sucked;
* Printing was painful and buggy despite my being quite careful to get only native PostScript printers. Various apps would generate broken PS in all sorts of exciting ways, or CUPS would set job options that printers would choke on, basic printer features were unusable, etc etc;
* Random app devs who decided to call umask(0077) and override the system umask before creating files, because OBVIOUSLY they know better than the user and sysadmin what the file/dir creation perms should be;
* Numerous apps that'd suppress the setgid bit when creating new subdirectories in shared working trees, leading to more permissions issues;
* I was nervous about even minor upgrades to fix bugs, because for every bug fixed there'd usually be three new exciting bugs;
* For the Windows desktops (for a few users who needed accounting packages etc) using Samba for roaming profiles, *tons* of profile corruption issues, endless printing problems, incredibly poor performance, and difficulties interoperating with the Linux desktops.
These were "basic users" who needed little more than word processing and occasional use of other simple document exchange, PDF viewing, printing (oh god, so much printing), email and simple attachment handling, and image viewing/sorting/saving. They weren't doing anything complicated.
Windows 7, Active Directory, and Group Policy were an incredible breath of fresh air when we bit the bullet and switched over after acquiring a Win2k8 R2 server for unrelated reasons. Sure, they have plenty of problems - but wow, did it work better overall. Things like volume shadow copy snapshots of server-side roaming profiles were a huge improvement over periodic bacula snapshots of bits of user homedir state.
The main problems we had with Windows were with roaming profiles - and were caused by obvious bugs in OpenOffice, Firefox (moved to Chrome which was better), etc, especially keeping piles of temp state in %APPDATA% not %LOCALAPPDATA% where it should be, modifying SQLite databases directly on remote storage, etc. These apps don't get tested on "business network" type setups, with roaming profiles and redirection, and they don't follow MS's recommendations on file layout etc. It shows.
The only serious issue I had with the Windows deployment was that %APPDATA% redirection for roaming profiles is horribly broken with caching enabled; the sync tool just throws a fit, gives up, and waits for the user to resolve conflicts. It's quite capable of creating conflicts even if there's never any connectivity problem, and the results are messy. Once I disabled offline access and caching for %APPDATA% (a significant performance hit, and it meant that if the server was down even briefly all clients would just freeze) the sync issues went away, but it wasn't a great compromise.
I wasted a huge amount of time babysitting the Linux desktops. I reported so many bugs, wrote so many patches - even though back then my C programming was pretty rudimentary.
I use Linux on my laptop for work, and I'd hate to use anything else. Though with the KDE4/GNOME3 thing I'm getting less fond of it. For basic end users, though? Nope. No way, never again.
... and yes, I know it's hard. I've spent *hours* figuring something out to write it up. Many, many times.
OTOH, I'm using someone else's often very good work for free. Perhaps it's not an efficient use of time, but it seems like a fair trade.
Agreed. You can produce good docs without the solid co-operation of the dev team, but it's much more difficult, more time consuming, and a lot more frustrating. They also tend to bit-rot a lot faster.
Why do we want intelligent terminals when there are so many stupid users?