This is not the car imperatively telling you where you can or cannot go. This is the car sharing dynamic information with you about where you could go before you're stuck in the middle of nowhere, just like you'd get stuck with a non-intelligent car, only without the empty-tank warning 60 miles from a gas station. It won't prevent you from doing the stupid thing itself; it will let you know how stupid it is to do it before you even have the chance of starting.
Don't be a glass-half-empty type of person: the original topic was "people being anxious about having enough juice to go somewhere", not "ways I can rage about how an intelligent car performs tasks I liked to predict mentally with a non-intelligent one, how hipster that was, and how lazy people are becoming" ^_^.
Todo apps like Todoist might also work.
Hundreds of todo apps turned up with the smartphone wave, but I believe that's the one that best integrates across platforms (Web, mobile, some OS-specific apps, plugins for MS Office and open-source office suites, and even a Gmail plugin, I think). The main benefit of Todoist, though, is that, like Trello, it is very easy to get into but can evolve if you need the added complexity.
See it like this: you can simplify a code-centric issue tracker like JIRA or Redmine down to non-code tasks much like you can evolve Todoist or Trello into a coding tracker (e.g. with KANBAN). But I think Trello eventually leans toward being a code tool, while Todoist seems like a Swiss Army knife for task-oriented needs, i.e. more generic.
You don't want issue tracking - you want task scheduling and task-completion methodologies. Non-engineers have schedules to fulfill that are usually associated with a task, not a deliverable. If there's no deliverable, there's no bug, no feature, i.e. no ISSUE, so tracking issues loses the focus. Issues aren't always tasks, and that's why issue trackers are so tied to code: they mold issues to whatever a release date or agile software development needs.
Unlike issues, tasks always translate to effective actions to undertake someplace, sometime, with someone, for whatever reason.
Post-its are still used nowadays because they do their job representing tasks, and their physical form, their order, or the fact that one is in the trash can imply its relevance, priority, time frame, and status. Tell her to keep using tools she's comfortable with, but customize a variation of KANBAN for her team's specific needs. Then maybe decide whether a web platform or a physical board makes more sense in her context, and whether the learning curve is acceptable. Post-its plus a board, or Trello, are a good place to start.
Ignore those people saying your lack of programming "freshness" is a barrier. You could be the best, most productive programmer around and still have no clue where to start digging for useful, relevant exploits to abuse in a system you're supposedly an expert in.
With that said, what you want to do is get yourself involved in the latest articles about zero-day exploits, trojan horses, patch fixes, Heartbleed, and so on and so forth. You can get started right here on Slashdot: a search for any one of those keywords will point you to news about a known issue, which will probably link to the specifics of that issue. Eventually those lead to the techniques used, be they SQL/packet injections, memory exploits, or privilege escalation. With this you get the basics of the WHY and the HOW things are happening. When you start reaching outside of the headlines and into the technical write-ups, you're ready for the next step.
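To make the injection family concrete, here is a toy sketch (hypothetical table, names, and payload of my own invention, using Python's built-in sqlite3) showing how string-built SQL lets attacker input rewrite a query, while a parameterized query treats the same input strictly as data:

```python
import sqlite3

# Hypothetical in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def login_vulnerable(name):
    # String concatenation: attacker-controlled input becomes part of the SQL itself.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def login_safe(name):
    # Parameterized query: the driver keeps the input out of the SQL grammar.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(login_vulnerable(payload))  # leaks every secret in the table
print(login_safe(payload))        # returns nothing
```

The same "feature" (building queries from strings) is what gets abused, which is exactly the mindset shift described below.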
Now what you have to do is pick a system you want to test. Familiarize yourself with its architectural patterns, its integration with internal and external components, the environment it resides in (including hardware and software), and specifically its use of memory, its use of the OS APIs, etc. Do this until you get a feel for something fragile: the smell of weakness is usually an exploit waiting to happen. Then you will probably hit a lot of walls.
Also, remember that most exploits come in the form of an actual feature. Change your mindset to something like "if this can be used for good, it can be used for something not so good". That also works in reverse when you want to have your way with a specific technology.
When it's not a feature that reeks of bad engineering, the only thing left is bugs. But you can't look for bugs in the closed-source, black-box packaging most technologies you'd want to test come in. So find integration bugs: IPC, external interfaces, and dependencies can usually be abused with heavy load, injections, and the like to induce unexpected behavior.
Ask yourself a question, with a cognitive and morally correct mindset instead of that straight-edge, law-abiding-citizen mask you usually wear for society's approval: is it constitutional to block TPB itself?
NO! IT'S THE FREAKIN INTERNET, AND THE FACT YOUR GOVERNMENT IS SANCTIONING IT DOESN'T MAKE IT RIGHT.
With that said, why even bother finding logic in this proxy-list blocking? Linking to a site that links to illegal content is illegal? Linkception nonsense, you say?
The nonsense started way back. Fight the root of the problem, not the ever-branching ramifications of unconstitutional decisions that keep bending the law.
You're missing two points:
First, PRISM is one program. There are many others out in the wild (per the Snowden leaks) that don't rely on bulk data collection. The dragnet you talk about is meant for exploratory investigation, but intelligence methods also apply to targeted data collection. Discriminating factors in this data (e.g. the fact that a user chose to opt in) make it all the more interesting for targeted collection, although some might disagree and argue the contrary also holds (people not encrypting their data so as not to raise suspicion).
Second, encryption by default burdens the actual relevance of the data. In the statement I made - call it conspiracy theory, an XKCD comic, whatever you will - I am also implying PRISM becomes more effective, since collecting data that is decryptable in due time renders it usable. Add the fact that opting in happens post-flashing/initial setup, so the phone is far more likely to have an internet connection during the opt-in/encryption process. The run-time-generated key is thus more likely to be passed around the cloud like a peace pipe that agencies drag from middlemen (Google) whenever they feel like getting a proverbial high.
This has all the nuances of the Android file system's own warrant canary: it was there, by default, until it wasn't.
It makes it easy for the NSA to distinguish those who feel the need to encrypt their data from those who don't. I'm betting this flag is passed to Google's servers for some business-logic reason (the reason being "unspecified" due to non-disclosure of law-enforcement requests).