I recommend looking into SpiderOak and their SpiderOakOne service. It allows greater privacy because their client encrypts your data client side before sending it to the servers, so the information just looks like random noise to them. The idea is that they can't look at it even if they want to. (Which also helps them from a legal liability standpoint, because they can't be expected to police content they can't read.)
I've used SpiderOak for a while and it's been great.
Strictly speaking, the client application is proprietary, so it would be very difficult to prove that they don't hold the encryption keys. However, they claim that a future version of the client will be fully open source so that it can be audited. Also, you can't use the web browser interface to see your files if you want the strong privacy, since that would mean sending your password to their web servers.
In any case, I think for strong privacy, SpiderOak is the best backup service available right now, aside from programming your own client-side encryption and running web servers yourself. (I've done some of that too; it's completely possible, but it's a lot of work to code.)
... Any reasons why KDE is so great, beyond its vast customizability?...
KDE is my favorite UI on any OS I've used, and that includes Windows, OSX, Android, etc.
I don't always use the additional KDE applications, and yes, I usually use the more mainstream ones like LibreOffice and Firefox. However, many of those KDE apps are actually pretty good, and they provide reasonable alternatives which are nice to have.
It doesn't really matter though, because I use KDE primarily for the actual desktop environment itself. In addition to a really excellent desktop UI (launcher, taskbar, etc.) it has so many minor utility applications that improve a Linux computer so much.
The fact that it offers lots of configuration options is really important for me as well.
I should mention that generally speaking, I install most of the other desktop environments as well, since many of them have useful utilities etc. It's also nice to have other ones in case you need to run with low resource usage, or in the event that you break KDE somehow. (KDE doesn't easily break, but I hack around so much that I've often broken it by playing with experimental settings etc.)
So yeah, I love KDE. Thanks so much KDE developers!
Yes, security-only updates are a good thing, but Microsoft is in this case mixing essential security updates with other things, such as new features, which ultimately makes the systems less deterministic and stable long term.
Critical bug and security fixes should be separated from feature updates, and from any other updates that may have usability, stability, or privacy related consequences.
Debian stable is a great example. Lots of security updates delivered very quickly when needed, but the system remains the same apart from those necessary changes.
Optional feature updates are great to have available, but they need to be optional.
Although much of what you say is true, I think it somewhat misses the point.
This is essentially a system hacking scenario. The idea being to find whatever methods get to the higher level support sooner, through their system for lower level support.
>> "but every single other customer think's their issue requires a T3 tech"
Yes, however, every other customer isn't going to apply strong research and reasoning skills towards finding the easiest way to get high level support. Also, generally speaking, the customers who will apply such problem solving skills to get higher level support are the same customers who would have already tried the simplistic solutions T1 tech support would provide.
The vast majority of people will never apply any significant research or reasoning towards getting better support. However, for some people, like the question submitter, it is worthwhile to find a solution.
I think about similar questions quite a bit when faced with various life problems, and the system hacking strategy generally works quite well. In general, these solutions just take advantage of the wider population's intellectual laziness. If everyone started "hacking" for escalation, then yes, it would be a problem, but most people won't.
Solutions will get shared however, and whenever a solution becomes widely known, the system creators will come up with protections against it, at which point further hacking is required.
In a world where most things are designed for very intellectually lazy people, anyone who wants to avoid such stupidity needs to find ways to beat the system, because the system wasn't designed with them in mind.
Considering how badly set up these systems are, I don't have any ethical problem with hacking them. Systems that are set up so badly needlessly inconvenience people, and they deserve to get hacked. After enough hack/fix/hack cycles, the system itself usually gets a lot better, and starts to provide methods for accommodating people differently.
The alt text on the xkcd comic mentioned in this thread demonstrates the absurdity of the existing systems quite well:
"I recently had someone ask me to go get a computer and turn it on so I could restart it. He refused to move further in the script until I said I had done that."
Being XKCD/806-compliant sounds like a good strategy until that method starts getting overused.
I strongly suggest video game related material: in particular, Unity3D, Unreal 4, or, for really simple intros, Scratch. All of these can be used to teach programming in a very interesting way that is fun for students and gives immediate feedback and results.
Unreal 4 is pretty amazing because the "blueprint" system is a visual block/node based programming language that can function as a complete programming language without much concern for code or syntax.
Unity is better for direct coding. Boo is the easiest of the supported languages to teach, and very much like Python, which is the 3D industry's standard scripting language, so I often start with that, and then some students move up to C#. It's really about the same but with slightly different syntax, and of course C# is less forgiving.
Another great method, although it isn't quite a full blown game engine, is Python programming in Blender. There's an interactive command line for working with the 3D scene. The great thing about programming for 3D software and game engines is that everything can be extremely immediate and visual, so concepts can be understood quickly. For games, often you can see what's happening in your "world" by pausing the game and interactively exploring the state of things. Blender actually has a built in game engine, although it's pretty basic and limited compared to Unity or Unreal 4.
If they are young, then you needn't focus on job skills just yet. What's more important is getting them interested so they start teaching themselves and getting into the habit of independent learning. You also don't necessarily *need* to do anything with hardware; focusing on software can work just fine as an intro for students.
In conclusion, I suggest you should be successful using anything that gives very immediate visual feedback on the state of the world (without debugging or printing/logging) and which has the "oh wow, this is fun" factor: something that grabs children's attention and triggers their imaginations.
I can say with confidence that when teaching children, grabbing their attention and making it "fun" is a huge priority. I've been teaching this stuff for almost 20 years, and the games / VFX industry is full of my students. I've taught many adults, but also many children as well. If I can help at all, or if you would ever like to talk, feel free to contact me more directly. If you like, you can email me using: questions in the domain teaching3d.com (To avoid spam I didn't directly put the exact info there, but you can piece it together I'm sure!)
Overall I like the sentiment of the post, but it falls apart at the point where it incorrectly defines sharing:
>> "Sharing: Willingly giving a portion of your possessions to another, denying you use or benefit thereof."
You have just redefined sharing for your own purposes. Your argument makes the same mistake it seeks to oppose: loading words for its own purposes.
Sharing is not so limited in definition. I can "share" my knowledge with my students, and not be deprived of anything myself. In addition, I can share things that don't belong to me with others; although that might be illegal, it's still pretty clearly sharing. In particular, transferring information is definitely "sharing" and is not always illegal. I could be sharing information I created myself, perhaps my own artwork.
Even if your definition is copied straight from a dictionary, one particular dictionary definition does not suffice to fully define a word. Dictionaries are extremely simplified definitions written for quick reference; the etymology and semantics of words are much more complex. For example, just by using other dictionaries I can find that a common definition is "to use or enjoy something jointly".
Specific types of copying can (and do) run afoul of particular laws, so "copyright infringement" meets your definition, but the same act simultaneously meets a definition of sharing that is more reasonable and widespread than the one you use. Copying itself, in general, is not wrong. Whether a particular copyright infringement is ethical or not depends on a lot of factors too complex to really get into here (e.g. the legitimacy of the laws in effect, the proper functioning of democracy, the consent of the governed, etc.).
It's painful to see all these incredibly complex things, but not see the addition of basic undo support for native text editing widgets in Android.
Please, Google, make all native GUI elements/widgets support undo. Pretty much every other platform/toolkit already does! (See Qt as one example, or perhaps iOS.) Get the basics right first, then go for the complicated stuff.
The issue tracker has this mentioned several times and it's just not getting the priority and attention it should be getting.
>> Python and C++, because numpy/scipy can't do everything
Yes, definitely true, and it's actually pretty easy to use them together.
If you don't want to write C++ however, there are a couple other options:
Cython - basically lets you generate C/C++ by writing Python-like code, and it's very easy to make it interoperate with Python. It keeps the Cython parts of your code super fast, like straight-up C.
PyPy - a super fast implementation of Python. If you write the Python code yourself, and don't use off-the-shelf Python libraries, PyPy is crazy fast. (About C speed in my own tests of doing C-like things.) PyPy gets slower if you use a lot of other Python code that wasn't written with PyPy in mind, but even then it's still normally much faster than regular Python. Using PyPy, you might just be able to write all the code in it and not have to bother with anything else.
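As a rough illustration (a made-up micro-benchmark, not from any official PyPy material), this is the kind of CPU-bound pure-Python loop that PyPy's JIT typically speeds up dramatically; the same unmodified script runs under CPython or PyPy:

```python
def dot(a, b):
    """Dot product of two equal-length lists, in plain Python.

    A tight numeric loop like this is usually JIT-compiled by PyPy,
    while CPython interprets it step by step.
    """
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

xs = [float(i) for i in range(1000)]
ys = [2.0] * 1000
print(dot(xs, ys))  # run with `python script.py` or `pypy script.py`
```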
Both of these are easy enough that you can be up and running, writing/using new code, same day as downloading.
Finally, even if you are calling other code from C/C++, there are some new tools to make that easier. CFFI is a good example. It makes calling C/C++ pretty easy. I'm not sure how ready it is for a lot of real world use though.
I'm Canadian, and upon reading that part I burst into laughter loud enough that people are now asking me what was so funny.
In response to the above, I can confirm that Unity is very much used because of the development environment, ease of use for 3D artists, and an incredibly simple tool chain that lets you target many platforms with one codebase. Art assets can be shared between platforms as well, or specified per platform.
For these reasons, Unity is used a lot at small studios, particularly where gameplay is the main focus and the technology doesn't have to be cutting edge. Systems like Unreal and CryEngine are more powerful from a technology and graphics standpoint, but are not nearly as easy to use for small teams of developers.
In particular, Unity's documentation, specifically its scripting documentation, is outstanding. The documentation for other systems is extremely rough by comparison.
I have no affiliation with Unity3D, other than the fact that I've used the software in the past and like it. I know the facts I mention above because I've done consulting and training for many local game studios, many of which have used or are using Unity3D. Also, hundreds of my students currently work in the game industry (many in Vancouver BC) so I often hear about what's going on in local studios.
I agree that Python is a great choice.
Python is perfect for someone in your situation because it is very easy to get into, and you have room to grow with it, since it can be made to work as fast as you need it to.
Keep in mind that there is a good chance that you will find you never need to code anything in C or C++ for speed reasons. Python could turn out to be "fast enough" for everything you want to do. You'll probably use libraries to do the heavy lifting, and they are probably already C or C++ based.
However, when Python isn't fast enough, it's pretty easy to write 95% of the code in Python, profile it, find the slow parts, and then write the really CPU-heavy stuff in C or C++. Getting C and C++ code working with Python is pretty automated these days. In fact, there's even Cython available, which is essentially C coding with a more Python-like syntax. (It compiles to C.)
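A minimal sketch of that profile-first workflow, using the standard library's cProfile (the function names here are invented for illustration):

```python
import cProfile
import io
import pstats

def slow_part(n):
    # Deliberately naive hot spot: the profiler should flag this function
    # as where the time goes, making it the candidate for C/Cython rewriting.
    return sum(i * i for i in range(n))

def main():
    return slow_part(200_000)

profiler = cProfile.Profile()
profiler.enable()
result = main()
profiler.disable()

# Print the top 5 entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Only once the report confirms where the time actually goes is it worth reaching for C, C++, or Cython for that one function.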
For an IDE, you can use Eclipse and PyDev. Both are entirely free and excellent. There are plenty of other free tools as well.
For GUI development, you have easy access to the best GUI toolkit on Earth, Qt. The PySide project provides official Python bindings for Qt, and the bindings are excellent. Qt is used in incredibly complicated software such as Autodesk Maya, so it's not just for small stuff. At the same time, it takes about 5 minutes to write a fairly simple but useful application using Qt and PySide. (As an example, a GUI wrapping the functionality of a command line program.)
Another great thing about Python as a language is that you pretty much never run up against a wall. "No, you can't do that" is something you almost never hear when people ask questions about Python. It's more often, "no, you *shouldn't* do that... but you can if you want".
You'll save so much time writing apps in Python that you'll have hours and hours of free time to spare optimizing the slow parts or adding new features. As a personal example, I'm comfortable in other languages as well, but I can make working apps about 5 times faster in Python than in C# or Java, just because there's less code to write. Assertions and test driven development can make the code just as robust as other languages with compile time type checking.
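To illustrate that last point, here's a toy example (the `normalize` function is invented for illustration) of pinning down behaviour with assertions and tests written up front, which gives you much of the safety net that compile-time type checking provides elsewhere:

```python
def normalize(values):
    """Scale a list of numbers so they sum to 1.0."""
    total = sum(values)
    # Runtime contract: catch bad inputs immediately, like a compiler would
    # catch a type error in a statically typed language.
    assert total != 0, "cannot normalize a list that sums to zero"
    return [v / total for v in values]

# Tests written first, TDD style: they fix the expected behaviour
# before (or while) the implementation is written.
result = normalize([1, 1, 2])
assert result == [0.25, 0.25, 0.5]
assert abs(sum(result) - 1.0) < 1e-9

try:
    normalize([0, 0])
except AssertionError:
    pass  # expected: the input contract is enforced at runtime
else:
    raise RuntimeError("expected an AssertionError for an all-zero list")
```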
I second that. The Wii works great for FPS games. Even if it's not as good as a mouse and keyboard, it's better than analog sticks (since you can instantly point to where you want without overshooting), and it's *way* more fun. There's something about holding and pointing the Wiimote, as if it were a gun, that makes the experience far more gratifying. It made GoldenEye fantastic. Also, I've replayed Quake and Doom on the Wii Homebrew Channel, and I've had much more fun than I did playing them the first time on PC.
Obviously it's very subjective and personal, but if you haven't tried it, it's definitely worth a shot.
Simply pass the request on to the OS's media layer. That way, any format the OS knows how to play, the browser can play.
A lot of browsers have been able to do this for years, and if every browser and OS had a free open standard that content browsers could bank on being present, then it wouldn't be a problem. The problem now though is that a content developer can't be sure that the codec used is installed on the end-user's system.
By requiring Flash, the developer gets around the problem, since the developer can safely assume that Flash will have the same codec support everywhere. Of course, Flash is bad for the internet since it isn't a free and open standard, which is why we are dealing with all this WebM stuff now. We need a format for video that is equal to PNG/JPEG in terms of freedom and openness.
*Imagine if there were no image standards for the net*, if images were just left to the OS. We'd constantly be downloading new image codecs, or we'd run across pages where we couldn't view the images. I think everyone can agree that would be awful. Video deserves a free and open standard just as much as images do.
Hopefully this is useful to someone. A lot of posts I read seem to come from people unaware or misinformed of these basic issues. (I probably should have included this in my last post but I hadn't thought of it yet!)
Free and Open Source software is fundamentally incompatible with standards that are "Free as in Speech" but not "Free as in Beer". (At least from a distribution perspective.)
When people say we need a standard that is free and open they mean both "Free as in Speech" and "Free as in Beer". Anything else puts Free Software at a disadvantage.
Giving up on the argument that standards should be both open and free of charge means giving up on Free software as a whole, which I for one am unwilling to do.
Backed up the system lately?